Innovation: When Less is Better

In the latest (May–June) issue of the Harvard Business Review, City University of London professor Paolo Aversa discusses why “Sometimes, Less Innovation Is Better”.  In an extensive study of over 300 Formula 1 race cars spanning 30+ years, he and his colleagues cross-checked the cars’ innovations against the F1 race results.  Surprise, surprise: “In certain situations, more innovation led to poorer performance”.  When they plotted the relationship, they got an ‘inverted U’: an increase in innovation initially helps performance, but beyond a certain point it begins to hurt it.

Why does this happen?  Prof Aversa and his colleagues attribute it to the environment: the chances of an innovation failing are higher in a dynamic, uncertain environment.  To understand the when and the how, they have come up with a “Turbulence Framework” (my label for it).  The framework evaluates the environment along three factors (a toy sketch of how such a scoring might look follows the list):
1. Magnitude of Change: Asks questions around radical shifts in competitive space, industry, regulation, demand and prices.
2. Frequency of Change: Asks questions around the rate of change in the industry and competitive space.
3. Predictability of Change: Asks questions around the ability to predict industry forces and competition.
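
To make the three factors a bit more concrete, here is a toy Python sketch of how an environment might be scored against them.  The class, the equal weights and the threshold are entirely my own invention for illustration, not part of Prof Aversa’s framework.

```python
# Toy illustration (not from the paper): scoring an environment on the three turbulence factors.
from dataclasses import dataclass

@dataclass
class Environment:
    magnitude: float         # 0-1: how radical are shifts in competition, regulation, demand, prices?
    frequency: float         # 0-1: how often does the industry / competitive space change?
    unpredictability: float  # 0-1: how hard are industry forces and competitors to forecast?

    def turbulence(self) -> float:
        # Equal weights for simplicity; the framework itself does not prescribe weights.
        return (self.magnitude + self.frequency + self.unpredictability) / 3

def innovation_posture(env: Environment, threshold: float = 0.6) -> str:
    """Crude decision rule: in a highly turbulent environment, hold out; otherwise keep innovating."""
    return "hold out and ride the turbulence" if env.turbulence() > threshold else "keep innovating"

if __name__ == "__main__":
    f1_season = Environment(magnitude=0.8, frequency=0.7, unpredictability=0.9)
    print(innovation_posture(f1_season))   # -> hold out and ride the turbulence
```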

Based on this framework, an enterprise must decide whether it is worth innovating further or whether to hold out and ride the turbulence.  If technology is changing too much, too fast, it may actually be wiser to slow down.  In this video (about five minutes long), Prof Aversa explains the work:

But then, aren’t environments generally uncertain?  Can the framework actually measure and provide a reliable prediction?  How do you find out if you have hit the peak of the ‘inverted U’ relationship between innovation and performance?  Is the curve steep?  Are there multiple curves, depending on how you define performance and innovation?  How does it apply to different types of innovation, like sustaining and disruptive?  Lots of questions to dive into.

The idea of over-innovation has been around for a while, and folks have written about it.

In their Harvard Business Review article “Innovation Versus Complexity: What Is Too Much of a Good Thing?”, Mark Gottfredson and Keith Aspinall introduce the concept of the innovation fulcrum.  This is how they explain it:

“Innovation fulcrum is the point at which the number of products strikes the right balance between customer satisfaction and operating complexity.
…
The fact is, companies have strong incentives to be overly innovative in new-product development.  Introducing distinctive offerings is often the easiest way to compete for shelf space, protect market share, or repel a rival’s attack.  Moreover, the press abounds with dramatic stories of bold innovators that revive brands or product categories.  Those tales grab managerial and investor attention, encouraging companies to focus even more insistently on product development.  But the pursuit of innovation can be taken too far.  As a company increases the pace of innovation, its profitability often begins to stagnate or even erode.  The reason can be summed up in one word: complexity.  The continual launch of new products and line extensions adds complexity throughout a company’s operations, and, as the costs of managing that complexity multiply, margins shrink.

…
Traditional financial systems are simply unable to take into account the link between product proliferation and complexity costs because the costs end up embedded in the very way companies do business.  Systems introduced to help manufacturing and other functions cope with the added complexity are usually categorized as fixed costs and thus don’t show up on variable margin analyses.  That’s why so many companies try to solve what really are product problems by tweaking their operations—and end up baffled by the lack of results.”

The issue of innovating too much via over-engineering lies with startups too.  CB Insights analyzed 101 startup failure post-mortems and found that 42% of the time, the reason for failure was: “No Market Need”.  When teams are in a tearing hurry to shove something out of the door and see if it sticks, feature bloat and rejections are bound to happen.  Which puts the concept of the Minimum Viable Product (MVP) under a rather stern lens.  What should a startup do?  Part of the answer may be in the nature of the beast: in the spirit of experimentation, one has to try something and see if it works, play with the boundaries, and otherwise move on.  Part of the answer also comes from ‘innovation accounting‘, courtesy of Eric Ries, which focuses on actionable metrics rather than vanity metrics (a toy sketch of the difference appears after the illustration below).  Another part of the answer comes from this blog post by Yevgeniy (Jim) Brikman, the founder of Atomic Squirrel.  He shares this useful illustration that MVP is a process:

MVP is a process
Source: Yevgeniy (Jim) Brikman
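
To make the vanity-versus-actionable distinction concrete, here is a toy Python sketch of my own (the numbers are made up, and this is not from Ries’s book): cumulative signups keep climbing no matter what, while per-cohort activation shows whether each release actually changed behaviour.

```python
# Toy illustration (mine, not Ries's): a vanity metric vs. an actionable, per-cohort metric.
weekly_new_signups = [1000, 1200, 1500, 1800]   # new users acquired each week (one cohort per week)
weekly_activated   = [150, 175, 210, 235]       # of those, users who completed a key action

cumulative_signups = sum(weekly_new_signups)    # vanity: always goes up -> 5500
activation_rate_by_cohort = [a / n for a, n in zip(weekly_activated, weekly_new_signups)]

print("Vanity metric (total signups):", cumulative_signups)
print("Actionable metric (activation per cohort):",
      [round(r, 3) for r in activation_rate_by_cohort])   # roughly flat ~0.13-0.15 -> the MVP isn't improving
```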

As an example, the Tiko Printer, a crowd-funded 3D printer startup, failed, and many attribute its failure to over-innovation.

In the trade-off between execution and innovation, incumbents flounder because they over-emphasize execution, since their performance metrics are based on financials rather than innovation.  But it is possible to err on the other side and over-innovate.  There are plenty of examples and stories; let me pick just two to illustrate the point.

Lexus over-engineered its RX luxury crossover in 2016.  Tom Mutchler, an auto engineer with Consumer Reports, begins his review with the words, “Messing with success is a dangerous, dangerous thing.  Especially when it comes to Lexus RX”.  Joe Lorio, at Car and Driver, summed it up: “Don’t be too eager to ditch what you are really good at.”

Some time back, Lego, the toy maker, went through financial turmoil as it lost control of innovation and tried to do too many things too fast.  Click here for the story.

This is just the start of this fascinating story.  There’s more to come.  We need to return to our questions.  Hang in there …

AI can predict suicide with better accuracy than humans

Olivia Goldhill, a writer who focuses on philosophy and psychology, has written an interesting post on a paper by Colin Walsh, a data scientist at Vanderbilt University Medical Center, co-authored with Jessica D. Ribeiro and Joseph C. Franklin.

Olivia writes:

Walsh and his colleagues have created machine-learning algorithms that predict, with unnerving accuracy, the likelihood that a patient will attempt suicide. In trials, results have been 80-90% accurate when predicting whether someone will attempt suicide within the next two years, and 92% accurate in predicting whether someone will attempt suicide within the next week.

The prediction is based on data that’s widely available from all hospital admissions, including age, gender, zip codes, medications, and prior diagnoses. Walsh and his team gathered data on 5,167 patients from Vanderbilt University Medical Center that had been admitted with signs of self-harm or suicidal ideation. They read each of these cases to identify the 3,250 instances of suicide attempts.

Please do read Olivia’s piece further; it raises important questions about the role of computers in such sensitive matters and about the complexity of such algorithms.

The paper was published in the SAGE journal Clinical Psychological Science in April 2017 (Vol. 5, Issue 3) and can be accessed from here.  The authors write:

We developed machine learning algorithms that accurately predicted future suicide attempts (AUC = 0.84, precision = 0.79, recall = 0.95, Brier score = 0.14). Moreover, accuracy improved from 720 days to 7 days before the suicide attempt, and predictor importance shifted across time. These findings represent a step toward accurate and scalable risk detection and provide insight into how suicide attempt risk shifts over time.

I am guessing that ensemble methods were used to build the models and that deep learning was not.  I have reached out to the authors and am awaiting their response.
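
For readers who want to see what those reported numbers mean in practice, here is a minimal scikit-learn sketch on synthetic data.  It is not the authors’ code, data or model; the random forest simply reflects my guess above, and the synthetic features stand in for the structured admission data the quote describes.

```python
# Minimal sketch (not the authors' code or data): fit an ensemble model on synthetic data
# and report the same metrics quoted from the paper (AUC, precision, recall, Brier score).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score, precision_score, recall_score, brier_score_loss
from sklearn.model_selection import train_test_split

# Synthetic stand-in for structured records (age, gender, zip code, medications, prior diagnoses).
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

model = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_train, y_train)
proba = model.predict_proba(X_test)[:, 1]   # predicted risk of the positive class
pred = (proba >= 0.5).astype(int)           # hard labels at a 0.5 threshold

print("AUC        :", roc_auc_score(y_test, proba))
print("Precision  :", precision_score(y_test, pred))
print("Recall     :", recall_score(y_test, pred))
print("Brier score:", brier_score_loss(y_test, proba))
```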

2017 Manufacturing Summit

The 2017 Manufacturing Summit is being held today and tomorrow in Washington DC. You can view the livestream of the event here at 11:30am EST.

The Monday Economic Report – June 19, 2017, was not so positive. It said:

… Manufacturing production fell for the second time in the past three months, down 0.4 percent in May. Motor vehicles and parts production led the decline in May, down 2.0 percent for the month and off 1.5 percent year to date, as automotive demand has continued to be weaker than desired so far in 2017. Despite the easing in this latest release and some lingering challenges, the underlying data remain consistent with a manufacturing sector that has turned a corner and has moved in the right direction, especially relative to where it stood at this point last year. Manufacturing production has risen 1.4 percent over the past 12 months, expanding for the seventh consecutive month.

Global Manufacturing

The overall scenario for global manufacturing is still positive. You can read the details here. Summary:

  • Eurozone manufacturers reported their best growth rates since April 2011
  • Canada did well in May, in spite of a pullback from a six-year high in April
  • Mexico saw a slight rebound in May but below expectations
  • Chinese manufacturing contracted slightly for the first time in 11 months
  • Japanese manufacturing showed a modest expansion in May
  • Emerging markets expanded, but at the slowest pace in May since September

Let’s see what the Summit will reveal, although nothing earth-shaking is expected.

US Manufacturing output jumps in April 2017

US manufacturing production grew last month at its fastest clip in years, surprising earlier predictions.  The Federal Reserve said that US industrial production at factories, mines and utilities jumped 1 percent in April from March, its biggest gain in three years.

Source: Federal Reserve

 

Why Is It Called Deep Learning?

Adapted from: Deep Learning, the book co-authored by Ian Goodfellow, Yoshua Bengio and Aaron Courville and “Representation Learning: A Review and New Perspectives” by Yoshua Bengio, Aaron Courville and Pascal Vincent (click here for a copy of this paper).

The power of software has been its ability to codify tasks that can be clearly defined and listed.  AI, in its early days, was fed problems that were intellectually hard for humans but relatively easy for computers: tasks that could still be formally described via mathematical rules.  The real challenge was the set of problems that humans solve intuitively (automatically) but that are hard for computers to “get”, such as recognizing images or spoken words with context and continuity.

Instead of having all this intuitive knowledge formally specified, a computer must learn from experience, fed to it as data.  It must build its own representation of these experiences as a hierarchy of concepts, with complicated concepts built on simpler ones, so the degree of abstraction rises as you move up the hierarchy.  If this hierarchy of concepts is drawn as a graph, the graph is deep, with many layers, and hence the approach is called deep learning.
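
To make “deep” concrete, here is a minimal NumPy sketch of my own (not from the book): the model is just a composition of learned functions, with each layer re-describing the previous layer’s output at a higher level of abstraction.  The layer sizes and the “edges/contours/parts” labels are purely illustrative.

```python
# Minimal sketch: "depth" as composition of learned functions (weights here are random, not trained).
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    """One learned function: an affine map followed by a ReLU nonlinearity."""
    return np.maximum(0.0, x @ w + b)

x = rng.normal(size=(1, 784))                                  # e.g. raw pixels of an image
w1, b1 = rng.normal(size=(784, 128)) * 0.01, np.zeros(128)     # layer 1: simple features (edges)
w2, b2 = rng.normal(size=(128, 64)) * 0.01, np.zeros(64)       # layer 2: built from layer 1 (contours)
w3, b3 = rng.normal(size=(64, 10)) * 0.01, np.zeros(10)        # layer 3: parts -> class scores

h1 = layer(x, w1, b1)        # simple concepts
h2 = layer(h1, w2, b2)       # more abstract concepts, built on simpler ones
logits = h2 @ w3 + b3        # the deep model is the composition layer(layer(layer(x)))
print(logits.shape)          # (1, 10)
```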

 

A Venn diagram showing how deep learning is a kind of representation learning, which is in turn a kind of machine learning, which is used for many but not all approaches to AI. Each section of the Venn diagram includes an example of an AI technology. Source: Deep Learning book


Thus, deep learning models “either involve a greater amount of composition of learned functions or learned concepts than traditional machine learning does”.  These graphs (and the concepts they encode) depend heavily on the choice of data representation to which they are applied.  That is why data representation, or feature engineering, is so important.

“Such feature engineering is important but labor-intensive and highlights the weakness of current learning algorithms: their inability to extract and organize the discriminative information from the data.  Feature engineering is a way to take advantage of human ingenuity and prior knowledge to compensate for that weakness.  In order to expand the scope and ease of applicability of machine learning, it would be highly desirable to make learning algorithms less dependent on feature engineering, so that novel applications could be constructed faster, and more importantly, to make progress towards Artificial Intelligence (AI). An AI must fundamentally understand the world around us, and we argue that this can only be achieved if it can learn to identify and disentangle the underlying explanatory factors hidden in the observed milieu of low-level sensory data.”
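
As a small illustration of what “feature engineering” means in code (my own example, not from the paper): a human decides which summaries of the raw data might matter, and the learning algorithm only ever sees those hand-picked numbers.  Representation learning aims to discover such features from the raw data instead.

```python
# Illustration (mine): hand-engineered features encode human prior knowledge about what matters.
import numpy as np

def hand_engineered_features(image: np.ndarray) -> np.ndarray:
    """A person, not an algorithm, chose these three summaries of the raw pixels."""
    return np.array([
        image.mean(),                           # overall brightness
        image.std(),                            # contrast
        np.abs(np.diff(image, axis=1)).mean(),  # a crude horizontal-edge measure
    ])

raw = np.random.default_rng(0).random((28, 28))   # stand-in for a raw image
print(hand_engineered_features(raw))              # 3 features designed by hand, not learned
```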

Stay tuned for more to come …

From Waste to Bioeconomy (Agriculture Production + Manufacturing)

The peer-reviewed journal of the Royal Society of Chemistry has published a paper by Dr. Joshua Yuan of Texas A&M AgriLife Research.  The essence is that lignin, a class of complex organic polymers found in vascular plants and algae, can now be converted into high-quality carbon fiber, which has a wide variety of applications.  Credit: phys.org

Credit:Texas A&M AgriLife Research

Here are the details: the word lignin is derived from the Latin lignum, which means wood.  It is believed that its original function was to provide structure to plants, but in most vascular plants (which include ferns and flowering plants) it also plays a key role in transporting water.

Each year the US paper and pulp industry outputs about 50 million tons of lignin as waste, while biorefineries that use plants to make ethanol could potentially yield another 100 to 200 million tons of lignin.  Today, only about 2 percent of this lignin is recycled into new products.

Sir Joseph Wilson Swan, a British physicist, chemist and inventor, is known as an independent developer of the incandescent light bulb and is credited with supplying the electric lights used in the world’s first homes and public buildings (the Savoy Theatre, in 1881).  He invented carbon fiber in 1860.  Source: Wikipedia

Demand for carbon fiber composites has been growing by about 7% a year and reached US $10.8 billion in 2015.  With the ability to turn lignin into high-quality carbon fiber, the applications will explode.  Source: Wikipedia

The most important point is that this entire bioeconomy, from growing, harvesting and transporting the lignin to the advanced manufacturing of it into carbon fibers, can all happen in the US.  Last year, Phys.org published this story about carbon fiber from wood being used to make cars and batteries.  According to that story, lignin batteries are indistinguishable from lithium batteries, so this is not too far out in the future.

Image Reconstruction from Human Brain Activity using fMRI

This is fascinating stuff.  

In a paper outlined here, Changde Du of the Research Center for Brain-Inspired Intelligence in Beijing, together with co-authors from the Chinese Academy of Sciences (CAS), describes decoding human brain activity from functional magnetic resonance imaging (fMRI) scans.

The setup is this: the subject’s eyes see a set of relatively simple images and send that information to the brain, which processes them; the fMRI scan then captures the resulting brain activity as voxels.  The authors used simple geometric shapes and alphabet letters as stimuli and fed the voxel data into nonlinear observation models parameterized by deep neural networks (DNNs).  They call their model the Deep Generative Multiview Model (DGMM).
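
As a vastly simplified stand-in for the idea (nothing like the authors’ DGMM), here is a toy ridge-regression decoder that maps simulated voxel patterns back to the pixels of the stimulus that produced them.  All sizes and the simulated “brain responses” are made up for illustration.

```python
# Vastly simplified stand-in (not the authors' DGMM): a linear decoder from fMRI voxels to stimulus pixels.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trials, n_voxels, n_pixels = 300, 500, 10 * 10                 # made-up sizes

stimuli = rng.integers(0, 2, size=(n_trials, n_pixels)).astype(float)     # simple binary shapes
mixing = rng.normal(size=(n_pixels, n_voxels))                            # pretend pixel-to-voxel response
voxels = stimuli @ mixing + 0.5 * rng.normal(size=(n_trials, n_voxels))   # noisy "brain activity"

X_train, X_test, y_train, y_test = train_test_split(voxels, stimuli, random_state=0)
decoder = Ridge(alpha=10.0).fit(X_train, y_train)   # learn voxels -> pixels
reconstruction = decoder.predict(X_test)            # reconstruct unseen stimuli from brain activity

print("mean reconstruction error:", np.mean((reconstruction - y_test) ** 2))
```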

Stay tuned as we go through the paper in detail and share our thoughts.

Hello World!

Welcome to CollaMeta! 

The perspectograph, an invention of Leonardo da Vinci’s, put mechanical skill to artistic use.  It was adopted by Albrecht Dürer, a painter, printmaker and theorist of the German Renaissance and one of the first European landscape artists.  With one eye covered, the artist could trace the outlines of a model onto a glass pane and thereby respect linear perspective.

We thus stand on the shoulders of the pioneers before us and ponder the vistas we survey.  Every long journey begins with the प्रथम (prathama, Sanskrit for “first”) step!

Welcome!