#the interfacing learning cap is higher but he still had to put points in it to have more than 1
bluastro-yellow · 7 months
Kurvitz stresses that Kim doesn't actually have a character sheet hidden in Disco Elysium's code. Imagining that Lieutenant Kitsuragi has only one natural attribute point in Motorics helps the ZA/UM team to understand the depth of his character beyond what's referenced in the game's dialogue. "We just came up with this stuff for coherency," says Kurvitz. "And because we're nerds."
"I like to think Kim has a Thought Cabinet project called Revolutionary Aerostatic Brigades that he's worked on since he was a teenager," Kurvitz says. "This raises the learning caps for his Reaction Speed and Interfacing."
Kim's high Volition skill makes him impervious to prying, Kurvitz says, as the detective finds out on the occasions he runs into Kim's brick-wall resolve. Kim often chastises these whims of the detective's, but will occasionally play along. The Lieutenant finds his new partner funny, says Kurvitz.
Kim is naturally shit at Motorics and thinks Harry is funny (source)
viktorbezic · 5 years
Double Dribbble: Losing Out To Homogenous Design
I'm not deliberately trying to single out Dribbble here. Dribbble is typically used as the poster child and culprit for homogenous design. Not gonna lie, I wanted to use the title Double Dribbble, while also highlighting what happens when we all use the same sources of online inspiration, i.e. the sea-of-sameness effect. After the demise of FFFFound, where I'd uncover all sorts of new visuals, there were no platforms that provided serendipitous discovery until Are.na came along. But getting offline, going through old books, traveling, making trips to galleries, and looking at things from different disciplines helps. If your only sources of inspiration are the most popular Dribbble shots or the top UI Pinterest boards, algorithmically served of course, it's very likely you'll come up with something that looks just like the sum total of what you've consumed. Ideas come from different recombinations of existing ideas and knowledge. This is true of all creative disciplines. The more unique the combination, the higher the chance of a significant breakthrough (1).
There are many culprits, from an over-reliance on data, to a focus on utilitarianism, to the re-use of the same frameworks, to designing templates without a feel for the content. Why am I writing about this? Hasn't this piece been done before? Sure it has. But every now and again I get bored to tears by digital and UI design and go on the hunt for folks who are pushing the boundary. It's also a good exercise to re-ignite my own interest in a design discipline that I've been involved in, in some way, shape or form, over the past decade. I share a similar sentiment to Marc Kremers: "Too many sites are just exercises in good, generically appealing taste. Anyone can do that. It's super boring." (2)
I've reflected on how we got here, not only to push myself but also to highlight some of the pieces I go back to and re-read. Disclaimer: I've been guilty of taking the templated approach on side projects I've launched. I also use a bought template on Tumblr. I'm in the process of rolling my own. So I'm part of the problem as well.
How we got here.
Clean, workable, well-designed interfaces are the baseline. When you get to this point, you're not finished. There's still more to be done. I've re-read Yaron Schoen's piece "In Defense of Homogeneous Design" and do agree with some of Schoen's points. There are some learned behaviors and existing digital design paradigms. Examples include pull to refresh, swipe to dismiss, underlined hyperlinks, etc. The argument as Schoen presents it goes something like this: "This is great. We're teaching people how to use digital products, and if your digital product looks similar, all good. Jackets look the same; we know where the pockets are." It makes sense, but my gut feeling is this line of thinking can stop us from really evolving a design (3). I know some restraint needs to be exercised, and care needs to be taken to select the right opportunities to level up on. There needs to be some expressive element and a human touch. If all we did were the basics we wouldn't need designers. All we would need are UI frameworks to paint by numbers with.
We can’t always stick to existing paradigms.
There are instances when we can't stick to existing paradigms. One example was highlighted in a Designer News thread by Renee P. that goes something like this: a decade ago people hadn't heard of "pull to refresh" or "swipe to dismiss." These new paradigms had to be invented (4). With every new platform, paradigms will need to be created or advanced as the technology evolves. When I was working at a large holding-company-owned digital agency, I remember a VR commerce project we made for a high-end retailer. We'd demo it to our clients and teach them an entirely new paradigm for VR navigation: staring at a diamond for a certain amount of time to advance the experience (5). Once folks learned it, it became second nature. I'm not big on VR but thought it was a good example from first-hand experience. There's a host of new technologies we are still defining interfaces for. Some have no UI at all, as with conversational interfaces; others add extra dimensions, as with AR and mixed reality. In the context of new platforms, screen-based interfaces may need to be recontextualized as people learn new behaviors and paradigms.
The other reason to break an existing paradigm is to infuse some sort of character into the product and put a unique spin on it. The question, back to Schoen's jacket principle, goes something like this: what if I don't want to wear the same jacket as someone else, or design one like someone else's? If I had to design one, I might put extra pockets someplace else. Maybe there's an act of discovery there. It's about a point of view and bringing something unique to the table. As Alan Kay stated, "Point of view is worth 80 IQ points." Which leads me to my next point.
An over-reliance on utilitarianism creates forgettable products.
Brett Bergeron, Creative Director at the digital product design studio This Also, argues in his piece "Good Enough Design" that we no longer have major constraints that bind us to focusing purely on utilitarianism, and that by not injecting personality into a digital product you fail to keep people's attention. "More than ever, we are at a place in technology where interfaces can be utilitarian and emotionally expressive at the same time." Bergeron uses This Also's Google Dots case study as one example: injecting the search giant's digital ecosystem with personality through playful animations. Through an expressive logomark, people understood when Google was doing something magical for them. He goes on to note that by not going beyond functional design, a lot of products fall into the usable-but-forgettable category. A reason needs to be given to keep a product installed and opened again. As Bergeron notes, it's worthwhile infusing the product with character so it's differentiated; as a result, it protects the initial investment in building it (6).
Solely relying on web metrics discounts the team’s intuition and experience.
It's been shown time and again that a lot of digital metrics are bullshit, especially with the increasing levels of bot traffic and fraudulent media buys. If you don't have a core community of people using your product and you are relying heavily on media buys, you may have a harder time trusting your metrics. Numerous studies are out with numbers showing billions of ad dollars lost to bots and click farms. The former CEO of Reddit confirmed how bad the problem has become. Even Facebook isn't able to identify genuine numbers.
You can't do without metrics. We need them, but they're a single data point that needs to be paired with feedback from your community, user tests, research, and any other information that helps set the appropriate context. Unfortunately, too many times I've seen clients afraid to make calls or provide their own point of view, instead falling back on metrics as a cover-your-ass tactic: "The numbers said so." Also, let's hypothetically say web metrics were accurate and bots didn't exist. Using them exclusively to design anything results in sterile outcomes.
When folks do A/B or multivariate tests, it's typically done as a best-effort approach. We question the design more than we question the science behind the testing and the reliability of the tests. What works is the combination of data (from multiple sources) and intuition together. Not drawing on the design team's years of intuition and experience when coming up with a solution is a miss. Lastly, metrics are used to optimize for the local maximum: to make incremental improvements to an existing design. They can't tell you whether you need a whole new design altogether.
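To make the point about test reliability concrete, here is a minimal sketch in Python of the sample size a simple two-variant conversion test needs before its result means much. The conversion rates, confidence level, and power are illustrative numbers I've chosen, not figures from any real test.

```python
import math

def sample_size_per_variant(p_base, p_variant, z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per variant for a two-proportion z-test
    (defaults: two-sided 95% confidence, 80% power)."""
    p_bar = (p_base + p_variant) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar)) +
                 z_beta * math.sqrt(p_base * (1 - p_base) +
                                    p_variant * (1 - p_variant))) ** 2
    return math.ceil(numerator / (p_base - p_variant) ** 2)

# Detecting a lift from a 3.0% to a 3.3% conversion rate needs roughly
# 50,000+ visitors per variant; many "best effort" tests stop far earlier.
print(sample_size_per_variant(0.030, 0.033))
```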
Doing what you know will work.
There are many reasons why we fall into this trap. The independent digital designer and author Paul Jarvis attributes it to being victims of our own success: "Sometimes successful work can lead to less innovation, and then the real making stops. You become more like a factory production line than a meaningful creator." We fall back on what works because it's comfortable. Not only that, but we'd like to re-use as much work from last time as possible to get to the finish line faster. Do that enough and you stunt your skills and growth, and start to lose touch. Boredom slowly sets in. This doesn't mean we can't leverage design patterns. It's about determining how best to apply them (7).
Lack of diversity in thinking and experiences.
The digital artist and game designer Morgane Santos points out not only that we are making more or less the same thing, but also that there's a cultural element at play: the "Designer Daves" of the world. Male, black wardrobe, 5-panel cap. The lack of diversity among people in the digital design world produces a single point of view. Diversity matters in producing work that's different; numerous studies show this. As Santos notes, "So, this whole designing with empathy thing? It literally cannot happen if all designers have the same background, the same look, the same style." (8)
Marc Kremers echoed a similar sentiment and also related it back to the culture of being a designer, “I think designers naturally just want to fit in, have a nice, cute life, do nice, cute things. Work hard, be nice to people. Read Kinfolk. Raw denim. Beards. Flat Whites. Nice fonts, nice illustrations, nice design. Go with the flow. Just good, tasteful things, experiences, and activities. And before you know it, your life is an Instagram feed, literally indistinguishable to any other designer’s nice Instagram feed. You melted into the digital soup. I don’t know if this rant makes any sense, but I guess my awareness or fear of this singularity is just naturally percolating in my work. I’m a nice guy though.” (9)
This is an excellent reminder to collaborate with folks who aren’t exactly like you or at least to reach out for feedback and perspectives from more diverse groups. It takes effort. It’s way too easy to talk to folks that are just like you.
Disregarding Content
Another issue is not taking a holistic view of what we are designing for, especially when it comes to content. I once heard an Executive Creative Director tell a team, "We're great at making beautiful boxes without anything to put in them." Very organized and thoughtful experience design, without any regard for a greater concept, a narrative, or developing a paradigm that best suits the content. Travis Gertz, in "Design Machines: How to Survive the Digital Apocalypse", highlights the perils of not taking content and its unique needs into consideration.
Gertz compares digital design to editorial design and highlights the emotive qualities editorial design has historically had. The divergence between the disciplines is in the following areas: how systems are designed, how content is treated, and how we collaborate. In digital design, we nerd out on our CSS responsive grid frameworks and design systems, and obsess over style guides and pattern libraries. The goal is typically to design for maximum template efficiency and component reuse. Unfortunately, this is where things end. In editorial design, the philosophy is slightly different: editorial design systems are made for variation, not prescription.
Gertz also elaborates on how content is treated in the process. It's not about content first or content last; it's about the content's connection to the design. In editorial design, it's standard practice for content creators and editors to work hand in hand while designing the system. Content development doesn't come at a later stage, to be plugged in once the design is done. It's just as important as design and engineering. When digital products are built in an assembly-line fashion and the boxes are built before the content exists, we've missed an opportunity.
With the added complexity of digital design over editorial design, other design disciplines were created. User experience design was needed so that the flow of a site, application, or product worked. As Gertz alludes to, this is another step in the assembly line where content isn't carefully considered, because there's not enough collaboration between the creative disciplines. In digital design, a heavy emphasis is placed on dividing up roles by the stages of a project to gain efficiencies from each design discipline. Unfortunately, this created more silos. Compare that to editorial design, where editorial designers knew the design stack at the system level and how it laddered up to the expression of each piece of content.
Gertz boils down the problem to poor collaboration and a disregard for content. I know this type of tighter collaboration would be harder to scale, but a more editorial lens on things would help guide teams on what should be produced. There’s no need to create a component library of 30 components as a “just in case measure” if the current content only needs 5 of them (10).
Have a concept.
This is critical in other design disciplines. In digital design sometimes we can get away with not having a concept because if the thing works, no matter how basic or boring, you can check the box and tell everyone it works. Or you can fool yourself and everyone else with the cop-out that it’s an MVP (Minimum Viable Product). I remember being part of a large scale multi-brand platform redesign team. We had our very intelligent, engineering-minded UX team present the designs by nerding out on how flexible the components were and how great they looked across breakpoints. The clients were bored but did wind up asking some great questions, “How will our X product look in this thing?” and “What about X visuals that were unique to the brand? Will we have those?” Shame on me, shame on us. Making a site responsive or adding parallax scrolling isn’t a concept. A concept should give the team a guidepost when it comes to selecting grids, type, illustration and interaction paradigms. Developing a concept requires research and mining for an insight to ensure you’re in fact solving the right problem.
References
1. Batey, Mark. "How to Have Breakthrough Ideas." Psychology Today, Sussex Publishers, 20 July 2017, www.psychologytoday.com/us/blog/working-creativity/201707/how-have-breakthrough-ideas.
2. "Exploring Digital Design: Marc Kremers, London." Represent UK, digitaldesign.represent.uk.com/index.php/marc-kremers.
3. Schoen, Yaron. "In Defense of Homogeneous Design." Medium, 16 Mar. 2016, medium.com/@yarcom/in-defense-of-homogeneous-design-b27f79f4bb87.
4. Schiff, Eli. "In Defense of Homogeneous Design." Designer News, 21 Mar. 2016, www.designernews.co/stories/65889-in-defense-of-homogeneous-design.
5. Publicis Sapient North America. "The Apartment - Virtual Reality Retail Experience." AdForum, www.adforum.com/agency/6644039/creative-work/34520718/the-apartment-virtual-reality-retail-experience/sapientnitro-retail-and-e-commerce.
6. Bergeron, Brett. "Good Enough Design." Medium, 20 Sept. 2016, medium.com/this-also/good-enough-design-29ab5132f3a3.
7. Jarvis, Paul. Everything I Know. 2013, p. 102.
8. Santos, Morgane. "The Unbearable Homogeneity of Design." Medium, 10 Mar. 2016, medium.com/@morgane/the-unbearable-homogeneity-of-design-fe1a44d48f3d.
9. "Exploring Digital Design: Marc Kremers, London." Represent UK, digitaldesign.represent.uk.com/index.php/marc-kremers.
10. Gertz, Travis. "Design Machines." Louder Than Ten, 18 Sept. 2018, louderthanten.com/coax/design-machines.
scifigeneration · 7 years
Mind-controlled device helps stroke patients retrain brains to move paralyzed hands
Stroke patients who learned to use their minds to open and close a device fitted over their paralyzed hands gained some control over their hands, according to a new study from Washington University School of Medicine in St. Louis.
By mentally controlling the device with the help of a brain-computer interface, participants trained the uninjured parts of their brains to take over functions previously performed by injured areas of the brain, the researchers said.
"We have shown that a brain-computer interface using the uninjured hemisphere can achieve meaningful recovery in chronic stroke patients," said Eric Leuthardt, MD, a professor of neurosurgery, of neuroscience, of biomedical engineering, and of mechanical engineering & applied science, and the study's co-senior author.
The study is published May 26 in the journal Stroke.
Stroke is the leading cause of acquired disability among adults. About 700,000 people in the United States experience a stroke every year, and 7 million are living with the aftermath.
In the first weeks after a stroke, people rapidly recover some abilities, but their progress typically plateaus after about three months.
"We chose to evaluate the device in patients who had their first stroke six months or more in the past because not a lot of gains are happening by that point," said co-senior author Thy Huskey, MD, an associate professor of neurology at the School of Medicine and program director of the Stroke Rehabilitation Center of Excellence at The Rehabilitation Institute of St. Louis. "Some lose motivation. But we need to continue working on finding technology to help this neglected patient population."
David Bundy, PhD, the study's first author and a former graduate student in Leuthardt's lab, worked to take advantage of a quirk in how the brain controls movement of the limbs. In general, areas of the brain that control movement are on the opposite side of the body from the limbs they control. But about a decade ago, Leuthardt and Bundy, who is now a postdoctoral researcher at University of Kansas Medical Center, discovered that a small area of the brain played a role in planning movement on the same side of the body.
To move the left hand, they realized, specific electrical signals indicating movement planning first appear in a motor area on the left side of the brain. Within milliseconds, the right-sided motor areas become active, and the movement intention is translated into actual contraction of muscles in the hand.
A person whose left hand and arm are paralyzed has sustained damage to the motor areas on the right side of the brain. But the left side of the person's brain is frequently intact, meaning many stroke patients can still generate the electrical signal that indicates an intention to move. The signal, however, goes nowhere since the area that executes the movement plan is out of commission.
"The idea is that if you can couple those motor signals that are associated with moving the same-sided limb with the actual movements of the hand, new connections will be made in your brain that allow the uninjured areas of your brain to take over control of the paralyzed hand," Leuthardt said.
That's where the Ipsihand, a device developed by Washington University scientists, comes in. The Ipsihand comprises a cap that contains electrodes to detect electrical signals in the brain, a computer that amplifies the signals, and a movable brace that fits over the paralyzed hand. The device detects the wearer's intention to open or close the paralyzed hand, and moves the hand in a pincer-like grip, with the second and third fingers bending to meet the thumb.
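The article doesn't spell out the decoding step, so purely as an illustration of the general pattern - estimate a movement-related feature from the brain signal and map it to a brace command - here is a minimal, hypothetical Python sketch. The sampling rate, frequency band, threshold, and the use of mu-band desynchronization are assumptions for the sake of the example, not details of the Ipsihand itself.

```python
import numpy as np

FS = 500                # sampling rate in Hz (assumed)
MU_BAND = (8.0, 12.0)   # sensorimotor mu rhythm (assumed feature)

def band_power(window, fs, band):
    """Average spectral power of one EEG channel within a frequency band."""
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    power = np.abs(np.fft.rfft(window)) ** 2
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return power[in_band].mean()

def decode_intent(window, resting_power, threshold=0.6):
    """Imagined movement suppresses mu power (desynchronization), so a drop
    below a fraction of the resting baseline is read as 'close the hand'."""
    if band_power(window, FS, MU_BAND) < threshold * resting_power:
        return "close"
    return "open"

# Calibrate against a resting baseline, then decode streaming 1-second windows.
resting = np.random.randn(FS)         # stand-in for real resting EEG
live = np.random.randn(FS)            # stand-in for a live window
baseline = band_power(resting, FS, MU_BAND)
print(decode_intent(live, baseline))  # command sent to the hand brace
```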
"Of course, there's a lot more to using your arms and hands than this, but being able to grasp and use your opposable thumb is very valuable," Huskey said. "Just because your arm isn't moving exactly as it was before, it's not worthless. We can still interact with the world with the weakened arm."
Leuthardt played a key role in elucidating the basic science, and he worked with Daniel Moran, PhD, a professor of biomedical engineering at Washington University School of Engineering & Applied Science, to develop the technology behind the Ipsihand. He and Moran co-founded the company Neurolutions Inc. to continue developing the Ipsihand, and Leuthardt serves on the company's board of directors. Neurolutions funded this study.
To test the Ipsihand, Huskey recruited moderately to severely impaired stroke patients and trained them to use the device at home. The participants were encouraged to use the device at least five days a week, for 10 minutes to two hours a day. Thirteen patients began therapy, but three dropped out due to unrelated health issues, poor fit of the device or inability to comply with the time commitment. Ten patients completed the study.
Participants underwent a standard motor skills evaluation at the start of the study and every two weeks throughout. The test measured their ability to grasp, grip and pinch with their hands, and to make large motions with their arms. Among other things, participants were asked to pick up a block and place it atop a tower, fit a tube around a smaller tube, and move their hands to their mouths. Higher scores indicated better function.
After 12 weeks of using the device, the patients' scores increased an average of 6.2 points on a 57-point scale.
"An increase of six points represents a meaningful improvement in quality of life," Leuthardt said. "For some people, this represents the difference between being unable to put on their pants by themselves and being able to do so."
Each participant also rated his or her ability to use the affected arm and his or her satisfaction with the skills. Self-reported abilities and satisfaction significantly improved over the course of the study.
How much each patient improved varied, and the degree of improvement did not correlate with time spent using the device. Rather, it correlated with how well the device read brain signals and converted them into hand movements.
"As the technology to pick up brain signals gets better, I'm sure the device will be even more effective at helping stroke patients recover some function," Huskey said.
davidrsmithlove · 6 years
Voice Marketing Tactics: There's Only 100k Searches a Month Up For Grabs Anyway
I was recently preparing a presentation and came across one I gave to a small meetup in London in 2013. While there were only 100 or so people in the live audience that day, the presentation has now probably been seen by a hundred thousand people - between Slideshare, a video of a webinar version, and the blog post I wrote about it at the time. When I stumbled back across it, I found it interesting to look back on because it made a bunch of predictions about the next 10 years, and now, in 2018, we are halfway through those 10 years.
I was struck by how time has flown and I thought it would be interesting to do a midway-point review of what I was thinking in 2013. I also thought I could use some of the information it gives us about the pace of technology change and user behaviour change to attempt to understand current trends better - particularly around voice interfaces and voice search.
Preview of the punchline: voice isn’t as disruptive as many seem to think
I'm going to run through my predictions and how I think they're coming along, but I also wanted to give you a preview of where my argument is going. Ultimately, while I think that voice recognition technology has become incredibly good at recognising words and sentences, there are a variety of things that will prevent it from quickly cannibalising the rest of search in the short term. This is true of voice interaction generally, in my opinion, but it is especially true in search, where I believe voice is mainly incremental (and isn't even responsible for anything like all of the incremental query growth).
The bulk of the general argument is made very well by Ben Evans in his article "Voice and the uncanny valley of AI" (though these response and rebuttal articles are worth a read too).
I particularly loved the simple way of describing availability and appropriateness as two big issues for voice, which I came across in the Intercom article "What voice UI is good for (and what it isn't)". It's credited to Bill Buxton, who talks about what he calls "placeonas":
I’m not sure about the “placeona” language (a placeona being an adaptation of a persona that focuses on location changing your preferences or behaviour). For reasons that will not surprise regular readers, I distilled it into a couple of 2x2s:
How do we want to consume information?
How do we want to enter information?
In my view, the constraints that voice isn't always a convenient input and that speech isn't always a great output place a natural ceiling on the usefulness of voice search - even beyond the issues Evans identified - and they are heightened for what I'm calling real searches. My view is that the majority (if not the vast majority) of what are currently being called "voice searches" in the stats aren't much like what search marketers think of as searches. When Sundar Pichai said in 2016 that 20% of mobile searches in the Google app and on Android were voice searches, my bet is that 75%+ of those were incremental and not "real" searches. They were things you couldn't do via "search" before and that are naturally done by voice - such as "OK Google, set a timer for 20 minutes". The interesting thing about these "searches", and the reason I'm classifying them differently, is that they are utterly uncommercial. Not only are you never going to "rank" for them, there is literally no intent to discover any kind of information or learn anything at all. They're only really called searches because you're doing them with / through Google.
The pace of change: revisiting some old predictions
Before I finish making those arguments, let’s look back at the presentation I opened with. I started by putting my 10-year predictions into context by looking back 10 years (to 2003 - this was 2013, remember).
The 10 years before 2013
I reminded myself and the audience that in 2003 we were on the cusp of:
Doing email on our phones (one of the most iconic BlackBerry handsets was released in 2003)
Having broadband (13% of Americans, growing +50% YoY)
Video everywhere (we had Skype, but no YouTube yet)
Scoring my 10-year predictions from 2013 halfway through
Now, I put this initial presentation together for a relatively small meet-up, so I didn’t turn them into completely quantitative and falsifiable projections - though if anyone thinks I’m substantially wrong, I’m still up for hashing out more quantifiable versions of them for the next five years. In that context, here are my main predictions for 2023. We’re now halfway there. How do you think I’m doing?
I said that in 2023 we will:
Still be doing email on our phone
Still be using keyboards
Still be reading text
I’m feeling pretty good about those three. Despite the growth of new input technologies, the growth of video, and the convenience of hardware like airpods making it easier and easier to listen to bits of audio in more places, it doesn’t seem likely to me that any of these are going anywhere.
Pay for more [digital] things
I mean. This was kinda cheating. Hard to imagine it going the other way. But the growth of everything from Netflix to the New York Times has continued apace.
I’m not 100% sure what the end-game looks like for media subscriptions. I feel that there has to be some bundling on the horizon somewhere, as I would definitely pay something for a subscription to my second, third, and fourth preference news sources, but there is no good way to do this right now where it’s a primary subscription or nothing.
Dumb pipes continue acting dumb
I think that the whole net neutrality issue (interesting take) is pretty good evidence of the continuing ambitions (and, so far, failures) of the “pipes” of the internet to be much more. Having said that, I didn’t get into anything nearly granular enough to count as a falsifiable prediction.
Last mile no longer the issue - getting fibre to the exchange is the challenge
I think this is probably the biggest miss. Although there are some core network issues, home and mobile connection speeds have generally continued to improve, and where they haven’t, the problem actually does still lie in the last mile. I suspect that as we move through the next five years to 2023, we will see a continuing divide with speeds continuing to increase (and not being a blocker to advanced new services like 4K streaming) in urban / wealthy / dense enough areas, while rural and poorer areas will continue to lag. In the UK, the smaller size and higher density means that we are already seeing 4G mobile technology cover some areas that don’t have great wired broadband. This trend will no doubt continue, but the huge size and scale of the US means that there will continue to be some unique challenges there.
Watch practically no scheduled TV apart from some news, sports and actual live events
This was a bold one. I may have forgotten my own lesson about how fast (read: slowly) consumer habits change. In the accompanying blog post, I wrote:
“I am much less excited by an internet-connected fridge (a supposed benefit of the internet since the late ‘90s) than I am by instant-on, wireless display streaming (see for example AppleTV AirPlay mirroring) making it as easy to stick something on the TV via the web as via terrestrial / cable TV channels.”
This prediction was part of a broader hypothesis I developed and refined in 2013-2014 around the future of TV advertising. The key prediction of that was that $14-25bn /yr of TV ad spend will move out of TV in the US in the next 5 years. We’re about to see what the 2018 upfronts look like, but we’ve already seen a ~$6bn drop. It’ll be interesting to see what 2019 holds and then come back to this in 2023.
Have converged capabilities between mobiles and laptops - what I called “everything, everywhere”
That last one was possibly the most granular of the predictions - I envisaged specific enhancements to our mobile devices:
Faster than 2013 laptops
In fact, the latest iPhones might be faster than 2018 laptops
Easier to purchase on than laptops by being more personal
This is certainly an area where we have seen huge innovation with more to come
And specific ways that laptop-like devices would become more like 2013 mobiles, with:
Touch screens
App stores
The ability to turn on instantly
The majority of my predictions were directional and not that controversial, but the point I was seeking to make with the first few was that technology and usage generally change a little more slowly than we anticipate. I think this is particularly true in voice, especially when it comes to search, and spectacularly true when it comes to commercially-interesting searches (including true informational searches).
What does all this tell us about voice “search”
At a high level, the same arguments I made in 2013 about the suitability of the different kinds of input and output apply to put some kind of cap or ceiling on the ultimate percentage of queries that will eventually shift to voice. Along with that, the experience of what changed and what stayed the same in 2003-2013 and again in 2013-2018 reminds us that certain kinds of behaviour always change more slowly than we might imagine they will.
All of that combines to remind us that even in the bullish predictions for voice search growth, most will be incremental, and so little of it is to the detriment of existing search marketing channels (I wrote more about this in my piece "The next trillion searches").
So how many voice searches might there be? And how many are actually real searches (rather than voice controls)? Of those, how many are in any way competitive or commercial? And of those, how many give a significantly different result to the closest-equivalent text search, and hence need any kind of different marketing approach?
A bit of Fermi estimation
Google talked about 20% of mobile searches being voice in 2016. Let's assume that's up 50% since then. Then there's another fraction on top of that: voice searches on other devices (smart speakers, watches?, laptops).
To make it concrete, let’s assume we have a trillion desktop searches / year and a trillion mobile non-voice searches / year to put very rough numbers against the argument. Then I believe (see next trillion searches) that the new searches will mainly not cannibalise these (and to the extent they do, there will be natural growth in the underlying search volume). So then, taking the conservative assumption of no other growth, we get to something like the following annual search volume:
1 trillion desktop
1 trillion mobile non-voice
300 billion mobile voice
300 billion voice non-mobile
Still to come: 400 billion (the rest of the "next trillion"): unfulfilled search demand - queries you can't do yet. Image searches. New devices. New kinds of searches. Some fraction of these will be voice too.
So - 600 billion voice "searches".
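To make the arithmetic explicit, here is the same back-of-the-envelope estimate as a minimal Python sketch. The volumes are the rough assumptions stated above, not measured data.

```python
# Back-of-the-envelope annual search volumes, restating the assumptions above.
TRILLION = 1_000_000_000_000
BILLION = 1_000_000_000

desktop = 1 * TRILLION             # assumed annual desktop searches
mobile_non_voice = 1 * TRILLION    # assumed annual mobile typed searches
mobile_voice = 300 * BILLION       # rough figure for mobile voice queries
non_mobile_voice = 300 * BILLION   # smart speakers, watches, laptops
still_to_come = 400 * BILLION      # the rest of the "next trillion"

total_voice = mobile_voice + non_mobile_voice
print(f"voice 'searches' / year: {total_voice // BILLION}bn")        # 600bn
print(f"total searches / year: "
      f"{(desktop + mobile_non_voice + total_voice + still_to_come) // TRILLION}tn")  # 3tn
```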
After reading a range of sources, and building some estimation models, I think that total voice "search" volume breaks down roughly as follows (a quick tally in code follows the list):
50% (300 billion / year): control actions
[set a timer]
[remind me]
[play <song>]
[add <product> to shopping list]
20% (120 billion / year): informational repeated queries with no new discovery (i.e. you want it to do the same thing it did yesterday)
[today’s weather]
[traffic on my commute]
5-10% (30-60 billion / year): personal searches of your own library / curated list
[listen to <podcast>]
[news headlines] (from previously set up list of sources)
20-25% (120-150 billion / year): "real" searches - breaking down as
1-2% unanswerable
10% text snippets
5% other answers (local business name, list of facts, etc)
5% (where screen present) regular search results equivalent to similar typed search
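Here is the quick tally of those shares in code. The splits are the rough estimates from the list above, and the final line reflects my reading that the 12-15bn "marketing-relevant" figure below is roughly 10% of the "real" bucket - treat all of it as illustrative assumption rather than measurement.

```python
BILLION = 1_000_000_000
total_voice = 600 * BILLION  # from the estimate above

shares = {
    "control actions": 0.50,
    "repeated informational": 0.20,
    "personal / curated": 0.075,   # midpoint of 5-10%
    "'real' searches": 0.225,      # midpoint of 20-25%
}
for bucket, share in shares.items():
    print(f"{bucket}: ~{share * total_voice / BILLION:.0f}bn / year")

# Only the answer-style and regular-results queries (roughly 10% of the
# "real" bucket on this reading) look like a conventional marketing target.
real_searches = shares["'real' searches"] * total_voice
print(f"marketing-relevant slice: ~{0.10 * real_searches / BILLION:.0f}bn / year")
```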
Unfortunately, there is little "keyword" data for voice to validate this estimation. We simply don't know how often people perform which different kinds of queries and controls. Most of the research (example 1, example 2) has focused on questions such as "which of the following activities do you use voice search / control for?" or "what tasks do you perform on your smart speaker?" (neither of which captures frequency). While there is some clever estimation you can do with regular keyword research tools, there is little in the way of benchmarking.
The closest I have found is comScore research that talks about “top use cases”:
[Image: comScore chart of the top voice use cases]
If we interpret this as capturing frequency (which isn’t clear from the presentation) we can categorise it the same way I did above:
[Image: the same comScore use cases grouped into control / informational / personal / search]
And then sum it to get ratios that fall roughly within my ranges:
Control: 51%
Informational: 23%
Personal: 6%
Search: 19%
It's the last two of those sub-categories - other answers and (where a screen is present) regular results - that provide a marketing opportunity equivalent to most typed searches (and of course, just like on desktop, many of those are uncompetitive for various reasons - because they are branded, navigational, or have only one obvious "right" answer). But even with those included, we're looking at global search volume of the order of 12-15 billion queries / year.
Still interesting, you might think. But then strip out the queries it’s impossible to compete for, and look at the remaining set: what % of those return either the top organic result, a regular search result page (where a screen is present), or a version of the same featured snippet that appears for a typed search? 80%? 90%? I’m betting that the true voice search opportunity that needs a different activity, tactic or strategy to compete for, defined as:
Discovery searches you haven’t performed before (i.e. not [weather] and similar)
That return good results
That are not essentially the same as the result for the typed query
Is less than 1 million searches / year globally at this point.
What market share of ~100k searches / month across all industries do you think your organisation might be able to capture? How much effort is it worth putting into that?
Where I know I’m wrong
My analysis above is quite general and averaged across all industries. There are a few places where there might be specific actions that make sense to make the most of the improvement in and growth of voice control. For example:
News / media - might find that there is an opportunity in a growing demand for news summaries and headlines delivered as a result of a voice interaction rather than as e.g. morning TV news (see for example, this stat that the NYTimes news podcast The Daily has more listeners than they ever had print subscribers)
Data providers - if you offer proprietary (and defensible) data that has high value for answering certain kinds of queries (e.g. sports league statistics), there could be API integration opportunities with attached commercial opportunities
Customer success / retention / happiness for consumer companies - there are a bunch of areas where skills / integrations can make sense as a way to keep your customers or users engaged with you / your service / your app. These might perform like searches that no-one else has access to once your users are using your skill. An example of this is grocery shopping.
At the same time, I would be tempted to argue that most of that is not truly search in any particularly meaningful sense.
Of course, it’s completely possible that I’m just wrong on the scale of the opportunity - Andrew Ng of Baidu (formerly of Google Brain) believes that 50% of all (not just mobile) searches will be voice by 2020 (or at least he did in 2016!). I haven’t seen an updated stat from him and while I am inclined to think that’s too high, you might disagree and I would understand if you thought Ng’s credentials and access to deeper data were stronger than mine here! (Note you’ll also see this prediction bandied around a lot attributed to comScore but as far as I can tell, they just repeated Ng’s assertion).
Disagree? Want to argue with me?
Please do - I’d love to hear other opinions - either in the comments below or on twitter where I’m @willcritchlow.