From Evidence-Driven Childhood to AI Leadership: A Data Journey


I grew up in an “evidence-driven” household. My earliest memories are of my father and me walking around looking for “evidence” related to whatever we were focusing on at the time. Most of the time it was mathematics, particularly geometry. I later formally “learned” about concepts such as Fibonacci numbers and spirals and perfect squares; we also looked for evidence that space “is” three-dimensional (though we did not consider time). As we went about, we collected “data” and used it to derive our “next” steps. My father owned a factory, which was also my playground of sorts. We looked at operational efficiencies, how to improve them, and how to remove bottlenecks, among other things. In a way, these early experiences fostered in me a curiosity to understand the world around me in an objective manner.

When I was 14, my parents bought me my first computer, a Commodore 64. By 16, I had written my first commercial software: a pharmacy inventory system, which was my first exposure to data management. One of my first “breakthroughs” was trying to predict when a particular drug was going to run out and alert the pharmacy staff. Given the limitations of the Commodore 64 and my lack of mathematical knowledge, I did not implement the predictive module, but I was already thinking about Predictive Analytics. The year was 1983.
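The run-out prediction I had in mind was, at its core, a linear extrapolation of the dispensing rate. A minimal sketch of that idea in modern Python (the quantities are illustrative, and this is a reconstruction of the concept, not the original code):

```python
from datetime import date, timedelta

def predict_stockout(on_hand, dispensed_per_day):
    """Estimate the date a drug runs out by linear extrapolation of
    the observed dispensing rate; return None if there is no usage."""
    if dispensed_per_day <= 0:
        return None  # no usage observed, so no prediction to make
    days_left = int(on_hand / dispensed_per_day)
    return date.today() + timedelta(days=days_left)

# Hypothetical stock position: 120 tablets on hand, 8 dispensed per day.
alert_date = predict_stockout(120, 8)
```

An alerting routine would then compare `alert_date` against the supplier's delivery lead time and flag the drug for reordering in time.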

Fast forward to my Ph.D. I needed a lot of training data. While this is not necessarily a data-driven example, I did need to produce a fair amount of synthetic data for algorithm development and testing. Model training involved having 500 students write individual digits from 0 to 9 on a gridded digital pad, which converted each digit to a vector representation.

My time at NASA was marked by collecting terabytes of telemetry data from satellites and using that data to locate “lost” satellites.

While there were earlier instances where I used data to drive decisions—such as using Nagios and Cacti to monitor system performance, detect system defects, and build budget cases—my proper data-driven mindset began to form around 2002. At that time, there was an open-source product called Pentaho, an early Business Intelligence tool. I asked my team to install and configure it and started using it in addition to our system monitoring tools. However, I was more interested in monitoring marketing campaigns and their results to design future campaigns. In other words, I wanted to use results from previous campaigns to inform how to target each new campaign. Additionally, I aimed to understand how creatives and messages affected campaign performance. I also instrumented the application to understand consumer behavior and provide insights to the product team to optimize how the application worked. The most interesting part was discovering how “wrong” the product team’s assumptions were. Needless to say, once we implemented the changes suggested by the data, revenue grew at a faster pace.

The rest of my career has followed a similar mindset: using data and analyzed results to decide how to proceed, whether it was with systems configuration, defect detection, marketing intelligence, or product optimization. Over time, I used a variety of tools: Pentaho, Google Analytics, homegrown solutions, Mixpanel, and others. I worked with different teams to understand and interpret their needs and deliver the data and insights they required. I also helped educate those teams on how to think about data and use it to complement their know-how and creative instincts.

It is fair to say that I once held the mindset that “past behaviors are a representation of future behaviors.” We now know that this is not always the case.

In 2013, I joined Ninja Metrics as their Chief Operating Officer and de facto Chief Technology Officer. The actual Chief Technology Officer was an academic from the University of Minnesota in Minneapolis. The product, Katana, featured a suite of Predictive Analytics tools, including Social Value. This experience fundamentally changed how I looked at data and decided on actions related to products, marketing campaigns, systems configurations, and more. I also learned about Experimental Data Science. Although I had conducted experiments in previous roles, the inclusion of models in these experiments added several dimensions and ultimately brought me closer to understanding consumer behavior.

Consider what a Predictive Churn model provides: a ranked list of consumers who are likely to leave in the near future. While we do not know exactly why they will leave, focused experiments can help us get close to an answer and find ways to retain them. Retention campaigns targeting different segments of this ranked list will reveal reasons for churn and lead to increased revenues once those reasons are addressed. The same applies to Predictive Conversion or Predictive LTV. The former helps understand why consumers do not buy, and the latter aids in deciding how much to spend on user acquisition campaigns, for example. Experiments help answer “why” and “how to optimize.” Some models, such as Predictive Churn, Conversion, and LTV, along with product and marketing optimization and multi-armed bandit algorithms, have become common staples for me, especially in direct-to-consumer environments. I use them as tools to inform, predict, and decide how to deploy resources.
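The multi-armed bandit piece of this toolkit can be illustrated with a minimal epsilon-greedy sketch. Everything here is illustrative (the three “creatives,” their conversion rates, and the traffic loop are made up), not any production system:

```python
import random

def epsilon_greedy(rewards, pulls, epsilon=0.1):
    """Choose a campaign variant: explore a random arm with probability
    epsilon, otherwise exploit the best observed conversion rate."""
    if random.random() < epsilon:
        return random.randrange(len(rewards))
    rates = [r / p if p else 0.0 for r, p in zip(rewards, pulls)]
    return max(range(len(rates)), key=rates.__getitem__)

# Simulated traffic split across three hypothetical creatives whose
# true conversion rates are unknown to the algorithm.
true_rates = [0.02, 0.05, 0.03]
rewards, pulls = [0, 0, 0], [0, 0, 0]
for _ in range(10_000):
    arm = epsilon_greedy(rewards, pulls)
    pulls[arm] += 1
    if random.random() < true_rates[arm]:
        rewards[arm] += 1
```

Over enough impressions, traffic drifts toward the best-performing creative while a small exploration budget keeps testing the others, which is exactly the “inform, predict, and decide how to deploy resources” loop in miniature.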

In January 2016, I started as CEO and Co-founder of Stockmatic, an automated supply chain and reordering management system with an IoT component. Ultimately, it was a data play. We collected data on product needs and usage, and predictive analytics let us determine how those products would perform in the market. Unfortunately, we did not secure the funding needed to scale and had to shut down, though not before launching four successful paid pilots that demonstrated exactly how the system would work.
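At the heart of any automated reordering system sits some form of reorder-point logic. A minimal sketch of the standard formula, with illustrative numbers (this is textbook inventory math, not Stockmatic's actual implementation):

```python
def reorder_point(daily_usage, lead_time_days, safety_stock):
    """Stock level at which to place a replenishment order: expected
    demand during the supplier lead time plus a safety buffer."""
    return daily_usage * lead_time_days + safety_stock

def days_until_stockout(on_hand, daily_usage):
    """Naive run-out forecast at the current usage rate."""
    return on_hand / daily_usage if daily_usage else float("inf")

# Illustrative numbers: 12 units/day usage, 5-day supplier lead time,
# 20 units of safety stock, 200 units currently on hand.
rp = reorder_point(12, 5, 20)  # 80 units
print(rp, round(days_until_stockout(200, 12), 1))
```

With IoT sensors reporting usage continuously, `daily_usage` becomes a live measurement rather than a guess, which is what makes the reorder trigger automatic.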

My time at NBCUniversal was very exciting. First, the Comcast family of companies is great, and, from my experience, they are employee-focused and place high importance on diversity—supported by ample data. Most importantly, there were numerous data-driven opportunities, and it felt like being a kid in a candy store. We analyzed data from games, TV Everywhere, DreamWorks, content deployed on Netflix, consumer products, Walmart campaigns, and more. The models we deployed to understand consumers’ journeys were groundbreaking. We explored data-driven greenlighting processes and how churn affects networks of people. We also began investigating how to automatically create content for games—and as an afterthought, outside of games—using AI. Although we did not advance far in this last effort, the results were very promising.

Finally, with FESSEX Data Advisory, I was able to apply all my past experiences across a range of companies. We worked on some very “cool” projects, spanning epilepsy and autism research, direct-to-consumer strategies, operational improvements, supply-demand optimization, medical referrals through multidimensional look-alike algorithms, and more. We also integrated face recognition from AWS to boost consumer engagement.

It is very exciting to see how some of the early AI tools I pioneered are gaining widespread acceptance and becoming more commonplace in businesses. As always, the challenge is deploying them in a manner that is safe for the business, successful, aligned with the right roadmap, and cost-effective. The question I often get asked is: “Do we need an LLM?” My answer is always the same: “I don’t know, but the more important question is to understand what your business goals are for using AI.” The follow-up question is: “Do you have the infrastructure to get there?”

The answers to the question, “What are your business goals for using AI?” are varied. In many cases, the need is driven by media frenzy and a lack of understanding of AI. Sure, foundational tools are always necessary because they help you serve your customer base better. But beyond that, the use and business cases are not always clear. My job is to provide clarity and develop a strategy that makes sense. The answers to the infrastructure question vary widely. Some companies have the infrastructure, while others lag significantly behind.

We live in exciting times where data and AI are taking center stage!
