What’ll happen next? What’ll be important? Fivefold food for thought, fresh from the world’s most important technology conference.
The very first South by Southwest® (SXSW) commenced in Austin, Texas on March 12, 1987. Ronald Reagan was then the U.S. president, CDs were the hottest new thing, and the internet was just a handful of interconnected university computers. On those four days in March, 177 bands performed in 15 clubs – and a ticket cost just $10. Today, Southby, as it’s known locally, has become a million-dollar enterprise.
And it’s no longer just about music, as the movie, education, and tech industries have also joined the show. With around 70,000 accredited participants and hundreds of presentations, interviews, and panel discussions, SXSW spans ten days. And regardless of whether it’s about psychedelics, space-junk recycling, or the latest AI applications, the overarching question at SXSW in Austin is: How will things develop? What’ll happen next? And how will we handle all that? We’ve selected five key insights from the thousands of forecasts, thoughts, observations, and analyses, and explained what they mean for organizations.
We will no longer search the internet. The internet will search us.
Amy Webb, a quantitative futurist who’s been a fixture of the SXSW lineup for years, opened this year’s conference with the idea that the internet as we know it is finished. Up to now, the internet has been about companies mapping ever-larger quantities of online content and making it searchable – some well, some less so – but now the tables are turning. The more data that’s available on users, the easier it is for AI systems to analyze it and automatically deliver content their calculations suggest is precisely right for the situation at hand. Amy Webb emphasized that our concept of data will expand beyond text and images to include smells, or details like whether someone is cooking in the background during a video call.
For organizations, this means not only preparing their own strategies and business models for this paradigm shift but also defining their own data ethics. Which signals should and can be used, and which are taboo? How is consent obtained, and, of course, what happens to the data collected? Even though Amy Webb can impressively demonstrate what is technically possible and may become reality in the near future, what will ultimately matter is which questions are asked of the data. Are there chemical odor molecules that can be used to identify diseases like dementia or Alzheimer’s at an early stage? Can productive and happy teams be assembled on the basis of information previously not used for this purpose? What does it mean in terms of identity if, in virtual (work) environments, we can shed almost all our external characteristics at will – from gender or height to clothing or voice – and choose new ones? Drawing up such questions will remain a task for people in organizations; technology will take care of the answers.
AI is always what machines can’t quite do yet.
Mike Bechtel, chief futurist at the consulting firm Deloitte, drew on the so-called AI Effect. Originally postulated by Larry Tesler, the inventor of the copy-and-paste mechanism, it holds that AI is always whatever machines can’t do yet. When IBM’s Deep Blue chess computer faced off against Grandmaster Garry Kasparov in 1996 and again in 1997, many observers believed it was impossible for a machine to beat a human. After Deep Blue won the 1997 rematch, people suddenly said that chess rules were very rigid after all, so the computer wasn’t all that smart. When an IBM system known as Watson won a game of “Jeopardy!” against human champions in 2011, the same pattern emerged: skepticism beforehand, and afterward the claim that the machine was just quickly looking things up, so it wasn’t all that impressive either. “What seemed impossible yesterday is banal today,” Mike Bechtel said in his presentation. The same applies to current advances in generative AI. Instead of mainly handling mathematical tasks, AI systems are taking on increasingly creative and complex ones. Yet in many cases the reaction is merely to criticize – for instance, to point out that a detailed, elaborate painting generated by a computer in just seconds depicts a person with six fingers.
Mike Bechtel compared the emerging digital assistance systems to the staff in the series “Downton Abbey,” where Lord Grantham didn’t need to explain in detail to every single servant, from the chauffeur to the butler, what they needed to do, but simply said, “I’m going to town this afternoon!” Everyone immediately knew their respective tasks. Just as a well-coordinated and loyal household staff in early-twentieth-century England could anticipate when the master of the house might go to town, the AI systems of the near future will become better at predicting their users’ wishes and needs – and at proactively fulfilling them, or at least offering to do so. What implications does this have for organizations?
At SXSW, you can see how the technological possibilities of AI are evolving from year to year. For most organizations, it is essential not to dismiss these rapid advancements as irrelevant to their field or as the eccentric ideas of highly specialized development teams, but to approach these new possibilities in a spirit of curiosity and willingness to learn. A good starting point can be the freely available Deloitte report from Mike Bechtel’s team or Amy Webb’s extensive dossier, the “Emerging Tech Trend Report,” which she made available after her SXSW keynote.
The culture of hyperconnectivity is the culture of disconnection.
How could it be that we have 1,000 online friends but don’t know who to ask to feed our cat? In her presentation on the effects of hyper-digitalization on our lives and work, the psychologist and successful podcaster Esther Perel highlighted how digital communication is changing our expectations of others. The constant availability of apps, chats, and bots can create unrealistic ideas about what friends, partners, colleagues, or contractors can or want to do.
For companies, it is becoming increasingly important to establish clear communication rules.
Which channels are best suited for what purpose? Who should be available when, and for what? How do you signal that you are only contactable in an emergency? And how can you ensure that the unexpected, the unplanned, the “unknowable” doesn’t get lost amidst all the Slack conversations, perfectly coordinated calendars, and seamless digital technology? According to Esther Perel, an automated, risk-free life eliminates all possible errors and weakens our social muscles, which are crucial for successful relationships. In organizational theory terms, you could say that digital communication methods deeply shape the formal and informal expectations within an organization through the forms these methods take and the possibilities they open up. This is happening whether we like it or not. Although we basically know this, it’s still good to be reminded of it. What shapes fundamental relationships also shapes organizational ones. For example, an email might suddenly seem less collegial than a Slack message, and it may be opportune to agree on documents internally before storing them on a shared server.
Up to now, organizations have enjoyed the key advantage of being able to determine their structures themselves. Now, however, many of the interaction techniques they deploy are determined not by the organization itself but by communication technologies from Microsoft, Slack, Zoom, or Google. Yet this can be changed – by actively addressing the actual implications rather than trusting that the question of how communication tools are used will somehow sort itself out.
Productivity is for robots, not for humans.
Kevin Kelly, a tech visionary, addressed the heavily discussed topic of generative AI in his presentation. Human beings tend to overestimate themselves when it comes to determining which tasks are too complex for machines. In the era of railroad construction, driving railroad spikes was considered something a machine could never do. Later, similar doubts were expressed about recognizing faces or creating artwork. Generative AI, which involves creating texts, images, or videos using tools like ChatGPT or Midjourney, is not going to be a job killer, it is said, but rather a useful intern or assistant – a “universal personal intern” that provides good (though often generic) foundations for humans to refine and expand upon. Jobs will undoubtedly change, but as with previous advances in automation, it will be the simple, repetitive tasks that are easiest to delegate to AI. The freed-up capacities can be used for true creativity, because that is what humans should spend their time on, Kevin Kelly said. After all, productivity is something for robots, he added.
Organizations need to engage intensively, and at an early stage, with the possibilities generative AI opens up. As the old programmers’ motto “Garbage in, garbage out” suggests, it’s the quality of the prompts that initiate the work of AI models like ChatGPT that makes a significant difference to the results. Writing good prompts that lead to the desired outcomes – whether texts, melodies, or graphics – will become a crucial skill. It may be a media-sociological truism, but knowledge leads to more knowledge: only those who know enough can ask smart questions. And companies that allow their employees to gain such experience early on will clearly enjoy an advantage.
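To make the point about prompt quality concrete, here is a minimal sketch in Python. It uses no real AI service – the function name and structure are our own illustrative assumptions – and merely shows how a structured prompt (role, context, desired output format) differs from a bare request, which in practice tends to produce more usable model output.

```python
def build_prompt(task: str, role: str = "", context: str = "",
                 output_format: str = "") -> str:
    """Assemble a prompt from optional structured parts.

    This is an illustrative helper, not part of any real AI library:
    it simply concatenates the pieces that practitioners commonly
    include – a role, background context, the task itself, and the
    expected output format.
    """
    parts = []
    if role:
        parts.append(f"You are {role}.")
    if context:
        parts.append(f"Context: {context}")
    parts.append(f"Task: {task}")
    if output_format:
        parts.append(f"Answer as {output_format}.")
    return "\n".join(parts)


# A bare prompt versus a structured one for the same task:
vague = build_prompt("Write about our product.")
specific = build_prompt(
    "Write about our product.",
    role="a technical copywriter for a B2B software company",
    context="the product is a scheduling tool for field-service teams",
    output_format="three bullet points of at most 20 words each",
)

print(vague)
print("---")
print(specific)
```

The vague version gives a model almost nothing to work with; the specific version constrains audience, subject, and format. That difference in input quality, not any cleverness in the model, is what the “garbage in, garbage out” motto captures.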
Young people don’t talk about the ‘Metaverse’ – that’s a term only Boomers use.
While the metaverse was heavily discussed at the last SXSW conference, it was relatively quiet on that front this year. But this doesn’t mean the topic is completely off the table. The metaverse will come, most observers agree – but not tomorrow. And as was the case with the so-called information superhighway in the 1990s, the original name probably won’t stick in the end. The virtual, ever-available, limitless, and immersive online world will probably look a bit different from what Mark Zuckerberg envisions today.
For companies, this means they should keep an eye on the technology even when it’s no longer in the spotlight. After all, it’s often in the valleys of hype cycles that the most significant developments occur. According to Google Trends, for example, autonomous driving was talked about – and searched for – much more from 2017 to 2019 than it is right now. But in the last few years, significant progress has been made. In selected areas of San Francisco, Phoenix, and Austin, the first robo-taxis from General Motors subsidiary Cruise are operating, and Mercedes-Benz has become the world’s first manufacturer to receive Level 3 approval. This means drivers are now allowed to divert their attention from the road for the first time, for example to watch a movie or join a video conference – though only during daylight hours and at speeds up to 60 km/h (about 37 mph).
For years, the Silicon Valley startup motto “Move Fast and Break Things” seemed to be the guiding principle for modern companies: be an agile first mover, whatever the cost. Do things simply because technology makes them possible – without asking whether they’re necessary, sensible, or what people actually want. Those who brake lose. That’s how it sounded from incubators and accelerators, pitch decks and venture capital rounds.
Yet the zeitgeist seems to have shifted. More and more often, we hear variations like “Move Far and Build Things.” While no one wants to – or can – do without innovation, it should now happen intelligently, sustainably, and in a balanced way. Companies should focus on resilience and safety, follow rules and laws, and plan, test, and adapt their products and services with users in mind. Progress with responsibility and prudence, so to speak.
Two events occurred while SXSW was taking place in Austin in March 2023: Silicon Valley Bank, home to many startups and venture capital firms, went bankrupt, and GPT-4 was launched. The latest version of the engine that powers software like ChatGPT scored 75% on the bar exam that licenses U.S. lawyers – significantly more than the average 62% achieved by human candidates, and good enough to place among the top 10% of test takers. But then again, passing a lawyers’ exam isn’t that difficult, is it?