I'm getting déjà vu.
There's an exciting new technology that's captured the world's attention, but only a small portion of people actually understand what it does, how it works, and how to make it useful. Still, the promise surrounding it is incredible and the hype is out of control. You, the leader of a small and new company in need of attention to attract investors, or a big and old company in need of attention to lure customers from your competitors, give the order to your staff:
- Product Designers: Add this new stuff to our existing products.
- Developers: Figure out how to integrate this technology.
- Marketing: Tell the world about it. Now.
Only a small portion of these will actually ship. Some fall through because the company couldn't fulfill the promise, but many because there was no practical application for the technology in their business to begin with.
Remember how blockchain was going to change the world? Yeah, about that…
This cycle repeats over and over and over. Mobile apps, cloud computing, internet of things, blockchain, NFTs, virtual and augmented reality, machine learning, and now… generative artificial intelligence. They were all "the next big thing", but were way overblown at the start as everybody rushed to join the fray and put out press releases declaring they were "all in on XYZ technology."
That's not to say that any of those are bad technologies — they're not! Well, except for NFTs, those were dumb. There are use cases for everything. Obviously, mobile apps and cloud computing and machine learning have become big deals, but when they first arrived the technology was immature. That didn't stop companies big and small from jumping on the bandwagon, even if they had no logical reason to do so.
In part, that's smart marketing. The new hype train will leave the station soon enough, and companies that feel they're in need of attention can jump on board with an announcement that they're adding that new tech to their product. The feature doesn't even have to be user-facing in any way; simply putting out a press release saying "Company ABC to integrate XYZ technology into LMNOP platform" will grab at least some attention.
How many absolutely useless mobile apps were released in the early days of smartphones? Sure, they existed, but they weren't any better than the company's website — or in many cases they were just wrappers for the website.
Heck, the thing doesn't even actually have to ship — getting attention is often the primary goal. How many companies said "we're adding blockchain to our tech stack" without any clear explanation of what they intended to use blockchain for? And then never followed up saying "our blockchain is now active and doing the thing"? Too many to count.
We're facing that trend again in 2023, except now with generative AI. The proliferation of AI tools, including the popular and impressive ChatGPT, has captured the world's attention. These tools can be really good if used in the right way — or dangerous. On one hand, you have automated AI summarizations of user reviews from retailers like Newegg and Amazon (see to the right). On the other hand, AI is already creating false images for political operations.
But for every one of those "look at what AI does!" moments, there are ten companies like Mercedes putting ChatGPT into things that don't need it, with no use case that customers find appealing.
Now I don't want to make it sound like generative AI is going to be a massive hype flop like blockchain was. I have every expectation that generative AI will be a big deal and will change a lot of things. We're standing at the beach watching the water recede ahead of the tsunami that will be AI over the next several years. As these tools get more powerful and the use cases become clearer, it'll change not everything, but a lot of things.
The thing that makes AI different is that it does things that humans already do, but faster and in some cases better. All forms of generative AI are effectively just advanced prediction engines, whether for text or images or video. They take the prompt provided and use it to generate a response. ChatGPT is effectively a highly advanced and context-aware version of repeatedly hitting the middle button of the "next word predictions" above your smartphone's keyboard.
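To make that "prediction engine" framing concrete, here's a toy sketch of next-word prediction. This is not how ChatGPT actually works — real models use transformer neural networks trained on enormous datasets — but a simple bigram count model captures the basic idea of "given this word, what usually comes next?":

```python
from collections import Counter, defaultdict

# A tiny stand-in for training data (real models train on vastly more text).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # prints "cat" — it follows "the" most often
```

Calling `predict_next` repeatedly, feeding each output back in as the next input, generates a run of text the same way a language model does — just with counts instead of a neural network's learned probabilities.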
That makes it different, and easier to visualize in a "we could use it this way" thought process. Instead of going to a writer and saying "I need you to write up a press release on how we're adding ChatGPT to our platform", you could literally just ask ChatGPT to do it.
Just because you can put generative AI into something doesn't mean you should.
That doesn't mean that all uses of generative AI make sense. And sometimes something needs to be tried first to decide whether it's actually useful (looking at you, Mercedes). I'm sure every one of us has bought something we saw and thought "I need this! I can do so much with one of these!" only for it to end up sitting in a drawer and getting used a few times a year. Often the hype doesn't align with the reality.
Not everything needs or will benefit from the infusion of AI, especially not right away. We're still in the earliest days of AI and there's a lot to work out (like making sure generative AI doesn't just make things up). But a lot of products and services will benefit, eventually. There are many things that still and will always benefit from the human touch — those Amazon review summaries needed all of the reviews from human customers in the first place.
While I don't like everything Apple does, I am often a fan of how Apple embraces and implements new technologies: when it makes sense and in generally thoughtful ways. The common wisdom is that "Apple is rarely the first, but they learn from everybody else's mistakes and do it better", and that often is true. From personal computers to smartphones to tablets to earbuds to smartwatches, Apple's version often arrives and completely changes the way the already-established industry works.
You might think that Apple is playing the same "wait and see" approach with AI, but they're not. Apple is investing heavily in AI and machine learning, they're just not shouting it from the rooftops and cramming it into every code repository of iOS. It's more subtle, and Apple loathes using common industry terms like "AI". Where everybody else might say "we've added predictive AI to our phone keyboard", Apple says that the iOS 17 autocorrect "leverages a transformer language model". That's exactly the same kind of tech as ChatGPT — the "T" stands for "Transformer". I've been running the iOS 17 public beta for a while now and the autocorrect and predictive text are much better than the awful implementation on iOS 16.
I'm never a fan of new technology for the sake of new. And while I totally understand and encourage the exploration of what new tools like generative AI can do for a business, they should be implemented with thoughtfulness and intention — not just whipped into a press release to garner 15 minutes of attention.