Recap of Activate 2018

This article is a bit late, but better late than never. Activate was a fantastic conference, and Lucidworks and the Montreal Sheraton were gracious hosts. These are a few rambling thoughts on what I learned on my Canadian vacation.

ML ML ML

In previous years, the conference was titled “Lucene Revolution”; it was the biggest open source search conference, and the vibe has historically been very Solr, but oriented at text search in general. As consultants who work with both Solr and Elasticsearch, we are asked from time to time to compare the two, and the quick answer is that Elasticsearch is “Logs, Logs, Logs.” The Elasticsearch search engine itself is general-purpose, but Elastic the company has put considerable effort towards supporting time-series and analytics data.

The renaming from “Lucene Revolution” to “Activate” signals a substantial change in the tone of the conference. Lucidworks has gathered a number of tools around machine learning into Fusion, and there is considerable energy in the community around augmenting the search experience. The conference collected a pretty thorough representation of the current state of the art in machine learning for text-based search.

AI or Machine Learning means different things to different people. As Beena Ammanath said in day 2’s keynote, we are very much still in a state of ‘narrow’ intelligence. The algorithms and models available can solve individual tasks as well as or better than a human expert, but are limited to a single focused task. Right now, those tasks are only at the periphery of the search experience: augmenting the data at ingestion, augmenting user input at query time, and balancing how those signals come together. That being said, the processes and tools for incorporating machine learning at those offline touchpoints have matured significantly in recent years. Activate is Lucidworks’ big conference, and while much of the discussion of tooling focused on Fusion, even out-of-the-box Solr has NLP and LTR support.
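
To make the query-time piece concrete, here is a minimal sketch of what an LTR re-rank query against Solr looks like from Python. The collection name, model name, fields, and query are made up for illustration; it assumes the LTR module is enabled and a model has already been uploaded to the model store.

```python
import requests

# Hypothetical collection and endpoint; adjust to your own Solr install.
SOLR_SELECT = "http://localhost:8983/solr/products/select"

params = {
    "q": "wireless headphones",      # the user's query
    "fl": "id,name,score",           # fields to return
    "rows": 10,
    # Re-rank the top 100 BM25 results with a previously uploaded LTR model.
    # 'efi.*' passes external feature information (here, the raw user query)
    # into the feature definitions at query time.
    "rq": '{!ltr model=myRankModel reRankDocs=100 efi.user_query="wireless headphones"}',
}

response = requests.get(SOLR_SELECT, params=params)
response.raise_for_status()

for doc in response.json()["response"]["docs"]:
    print(doc["id"], doc["score"])
```

The heavy lifting, of course, lives in the feature and model definitions that the `rq` parameter points at; the query itself is the easy part.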

What was missing?

The talks at Activate were excellent. Without downplaying that quality, what wasn’t discussed was almost as interesting as what was.

Machine Learning in a production environment.

With the exception of Malvina Josephidou and Diego Ceccarelli’s excellent talk on the adventures of getting their large-scale LTR deployment to perform comparably to their pre-LTR setup, there was very little discussion around the challenges of putting together an ML ecosystem.

  • Which teams validated their training data? Most, in some way or another, but how?
  • Choosing the correct objective function for training is critical to the success of the deployment, but there was very little discussion around how to choose an objective function, or why teams selected the implementations they did (a rough sketch of one common choice follows this list).
  • How, as data scientists or relevance engineers, do we maintain these trained systems in production? What is their life expectancy? There are many, many small Solr installations out there that get to ‘good enough’ and stall; when is ML an option for those teams, and when isn’t it?
  • How do we version control our models? Or does that even make sense? A model is tied to the data and the analysis configurations at a point in time.
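
To give the objective-function question above some shape, here is a rough, generic sketch of NDCG@k over graded relevance judgments, one of the more common ranking objectives teams weigh against alternatives like MAP or MRR. Nothing here is tied to any particular talk, and the judgments are invented.

```python
import math

def dcg_at_k(gains, k):
    """Discounted cumulative gain for a ranked list of relevance grades."""
    return sum(g / math.log2(i + 2) for i, g in enumerate(gains[:k]))

def ndcg_at_k(gains, k):
    """NDCG@k: DCG of the actual ranking divided by the DCG of an ideal ordering."""
    ideal = dcg_at_k(sorted(gains, reverse=True), k)
    return dcg_at_k(gains, k) / ideal if ideal > 0 else 0.0

# Hypothetical judgments (0-3 scale) for the top results a model returned.
ranked_grades = [3, 2, 3, 0, 1, 2]
print(f"NDCG@5 = {ndcg_at_k(ranked_grades, 5):.3f}")
```

Even a toy like this surfaces real decisions: the grade scale, the cutoff k, and whether to use raw grades or exponential gain all change what the trained model optimizes for.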

How do we move forward from this narrow intelligence?

During Wednesday’s panel discussion, Grant Ingersoll polled the panel: “AI means never having to __ again.” Only Daniel Tunkelang answered “feature engineering,” and this feels like the elephant in the room. The current techniques add tools to generate new features, but in many ways, they aren’t really learning so much as mathematically encoding usage patterns. Text learning, and Learning to Rank in particular, still relies on human product owners and search engineers to identify the key signals, regardless of the implementing algorithm.
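
To see what that human feature engineering looks like in practice, here is a small sketch of pushing a few hand-defined features to Solr’s LTR feature store from Python. The feature names, fields, and collection are illustrative, not pulled from any talk; each entry is a human judgment about which signal might matter.

```python
import json
import requests

# Hypothetical collection; the feature store is part of Solr's LTR module.
FEATURE_STORE = "http://localhost:8983/solr/products/schema/feature-store"

features = [
    {   # How well does the title match the raw user query passed in via efi?
        "name": "titleMatch",
        "class": "org.apache.solr.ltr.feature.SolrFeature",
        "params": {"q": "{!field f=title}${user_query}"},
    },
    {   # Keep the original retrieval score as a feature for the model.
        "name": "originalScore",
        "class": "org.apache.solr.ltr.feature.OriginalScoreFeature",
        "params": {},
    },
    {   # Recency boost computed from a hypothetical last_modified field.
        "name": "documentRecency",
        "class": "org.apache.solr.ltr.feature.SolrFeature",
        "params": {"q": "{!func}recip(ms(NOW,last_modified),3.16e-11,1,1)"},
    },
]

resp = requests.put(
    FEATURE_STORE,
    data=json.dumps(features),
    headers={"Content-Type": "application/json"},
)
resp.raise_for_status()
```

The model learns the weights, but a person still decided that title matches and recency were worth measuring in the first place.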

Jake Mannix of Lucidworks gave a talk on using deep neural nets in TensorFlow to start learning features, and CNNs may yet be the answer for text as well as images. With enough depth, enough training data, and an explainable model, he was able to pull out text features one character at a time. Early stages were able to identify stopwords and suffixes; deeper layers were able to aggregate those into linguistic roots and even simple phrases. I’ve been on the fence about deep learning for search, but this might’ve sold it. With enough data and enough horsepower, moving those building blocks into a next-generation linguistic model is starting to sound like more than a research problem.
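
For anyone curious what a character-level convolutional model even looks like, here is a toy Keras sketch of the general shape: embed characters, stack convolutions so later layers see wider spans, pool, and classify. It is emphatically not Mannix’s model, just the smallest possible illustration of the idea, with made-up sizes throughout.

```python
import tensorflow as tf

VOCAB_SIZE = 128   # e.g., raw ASCII codes as the character vocabulary
MAX_CHARS = 256    # pad or truncate documents to a fixed character length

# A toy character-level CNN: early convolutions see short character n-grams,
# later ones see wider spans (affixes, short words), then pool and classify.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(MAX_CHARS,)),
    tf.keras.layers.Embedding(VOCAB_SIZE, 16),
    tf.keras.layers.Conv1D(64, kernel_size=3, activation="relu"),
    tf.keras.layers.Conv1D(64, kernel_size=5, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # e.g., a binary relevance label
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```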

See you in 2019!

Search relevancy tuning is fundamentally a hard optimization task, and it will not be replaced by machine learning overnight. As a community, we are making progress rolling forward with the new technology, but there is a long way to go yet. Looking forward to seeing everyone next year, and to the progress another year will bring.