The introduction of the cotton gin wasn’t accompanied by an entire genre of Hollywood movies dedicated to the gin “singularity”. Nor did we usher in the Golden Age of telecommunications with blockbuster killer “phone web” stories.
Artificial Intelligence is different. Like other disruptive technologies, it is having far-ranging effects, good and bad. But uniquely, quality AI information is clouded by the AI apocalypse narrative. If you google the field, you’ll be challenged to separate medical imaging wheat from AGI chaff. (Don’t tell anyone, friends, there’s no magic here, it’s just math.) AI alone is no more likely to take over the world than is your calculator. Well, unless it’s used as a deniability smokescreen: “It’s not my fault the killer robot smashed your house, it was the AI that did it”.
Honestly, what worries me most about AGI is the distraction it creates from the real ways that AI can make a massive positive difference in our lives. And the AI summer/winter cycle is a huge dampener.
The AI winter system and its status
AI hype is nonlinear: a bit of it starts a flywheel effect, often pollinated by well-meaning journalists. AI hype is also particularly prone to mutation, which means that those of us trying to do some good in this world have faced a series of summer/winter cycles in which Dutch-tulip-style exuberance has led, inevitably, to a burst bubble.
This hit me in the face personally: after riding the 1980s AI wave, by 1995 just saying “artificial intelligence” out loud pigeonholed me as a fuddy-duddy, so I rebranded myself as an “analytics” and data expert for a decade or two.
Felix Hovsepian wrote a good set of Cliff notes on the AI Winter story today, including pointers to hype-buster Roger Schank. And Mark Saroufim’s viral insider’s critique casts a stark eye on academic AI incrementalism and the underlying risk/economic dynamics that have broken our social contract with basic research. We’re stuck in a strange attractor, and we’ll get out either abruptly, against our will, or by removing the attractor altogether: by getting real.
Top five ways to stop the summer/winter oscillation
- Ground everything in reality. For early research, if you can’t at least name the decision or use case that your data and model could or should support, then you haven’t done your homework, and you shouldn’t be published. For more advanced research, you must provide rigorous results, tested at scale, on a nontrivial problem (and yes, showing results on non-training data; believe it or not, I still have to say this).
- Shift from being solution-based and algorithm-focused to being problem-based and decision-focused. As @thingskatedid puts it, “computers are magnificent, incredible achievements. unfortunately we run software on them.” Which breaks, a lot, increasingly in ML, without an engineering discipline (including design, planning, construction, and QA) that teaches us how to stop that, and which starts with ensuring that systems are “fit for purpose”.
- Shift resources from new algorithms to implementation and integration. I use the simple and powerful H2O framework (with R for orchestration) for most of my applied AI work, and it’s plenty, having given my projects breakthrough results dozens of times over the years (see the sketch after this list). It feels like most of my clients are just trying to drive to the corner store, yet most data scientists I’ve met are trained as Formula 1 (ahem, TensorFlow) mechanics, trying to win the next Kaggle competition. The diminishing-returns curve from this sort of work, compared to serious productization, AI orchestration, MLOps, and ML-specific software engineering strategies, was crossed long ago.

  Along these lines, support new incentives to reward and applaud applied (or at least use-inspired) research. This work is harder, and more desperately needs good practitioners, than foundational AI. It has been a dirty name, reserved for the “B” students, until now, and this needs to change.
- Insist on transparency around Technology Readiness Levels (TRLs) when you read, write, cover, or review ML stories. Has this algorithm or system been proven in the lab or in the field? On a toy problem or at scale? By one team or by thousands? By academics alone, or is anybody making money off of it? As I’ve spoken about a lot, we will waste resources and go astray if we mistake a TRL 1 prototype for, say, a TRL 7 MVP. My ML customers have fallen into this rabbit hole more times than you’d like to know, and I have to gently tell them, “No, I’m sorry, but that reinforcement learning / genetic algorithm / <fill in favorite sexy AI-related tech> is not mature enough for you to profit from it within the next five years.”
  So yeah, sex sells. Even in the nerdy halls of backpropagationdom.
  Alexander Lavin and Gregory Renard have a great ML-specific TRL model, which I suggest be adopted by all ML peer-review publications, along with a TRL disclosure rule (a toy sketch of what a disclosure could look like follows this list).
- Embrace and support the emerging field of decision intelligence (DI), which democratizes and systematizes the way we connect the AI stack to human stakeholders. It’s unnecessarily gnarly today to stand up a new AI project, or even to figure out where AI fits into a situation, so we tend to fall back on starting with the data (the solution) instead of with a potential end user’s problem. DI fixes that.
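To make the corner-store point concrete, here’s the kind of workflow I mean: a few lines of H2O AutoML, reporting results on held-out data per the first item above. This is a minimal sketch in Python (rather than my usual R orchestration), and the file name and column name are placeholders, not a real dataset:

```python
import h2o
from h2o.automl import H2OAutoML

h2o.init()

# Placeholder dataset and target column; swap in your own decision problem.
frame = h2o.import_file("customer_churn.csv")

# Hold out 20% of the data that the model never sees during training.
train, test = frame.split_frame(ratios=[0.8], seed=42)

# A modest AutoML run: plenty of horsepower for driving to the corner store.
aml = H2OAutoML(max_runtime_secs=300, seed=42)
aml.train(y="churned", training_frame=train)

# Report performance on the held-out data (AUC assumes a binary target).
perf = aml.leader.model_performance(test_data=test)
print(perf.auc())
```

The model-building step is the easy part; the valuable work is everything around it: framing the decision, integration, and keeping the thing running in production.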
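And here is a toy sketch of what a TRL disclosure record could look like if publications or reviewers adopted the rule above. The fields and the TRL 7 threshold are my own guesses for illustration, not Lavin and Renard’s actual rubric:

```python
from dataclasses import dataclass

@dataclass
class TRLDisclosure:
    """Hypothetical disclosure attached to an ML paper, story, or pitch."""
    system: str
    trl: int                       # 1 (basic principles) through 9 (proven in operation)
    proven_in: str                 # "lab" or "field"
    problem_scale: str             # "toy" or "production"
    independent_replications: int  # teams beyond the original authors
    revenue_generating: bool       # is anybody making money off of it?

claim = TRLDisclosure(
    system="RL-based pricing prototype",
    trl=3,
    proven_in="lab",
    problem_scale="toy",
    independent_replications=0,
    revenue_generating=False,
)

# The gentle conversation I keep having with customers, encoded as one check.
if claim.trl < 7:
    print(f"{claim.system} (TRL {claim.trl}): not mature enough to profit from yet.")
```

Even something this crude forces the questions above to be answered up front.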
If you’re new to AI/ML, or a journalist, please know that AI is a “fake news” siren. Friends don’t share AI hype with friends. Take a few minutes to learn about the AI Winter. Don’t share unvetted, clickbaity dreck. And go high, not low: earn your clicks with substance, not fluff, please.
Or, if you’re a senior technologist, please use your influence to nudge this ship away from the rocks ahead. Without some courageous reprioritization, we’re headed for another winter.
Can I help your own AI/ML project get real? Book me for a free consultation or send me an email.