We are halfway through the evolution

Not long ago, task and process automation was the domain of engineers and process experts. But innovative companies have significantly lowered the barriers to adopting automation technology, and today there are excellent tools for a wide variety of tasks.

These tools have had a tremendous impact over the past few years. However, a regular question remains: What's next? We think a certain class of problem is already well covered by existing solutions, but that there is still room for improvement when pushing automation further into operations-heavy businesses.

We predict that three "conditions to play" will mark the next wave of automation tools:

  1. The ability to process unstructured data.
  2. Open systems to allow best-of-breed tool selection.
  3. A shift of focus from the developer to the end-user experience.

Condition 1: Processing unstructured data

If your data is structured (tables, key-value pairs) and process rules are known, there is no shortage of tools. From solopreneurs to multi-billion-dollar companies, there is a host of excellent solutions that will execute processes as instructed, using information obtained from a range of internal databases.

Unfortunately, processes nowadays do not rely on simple rules and structured data alone. The notion of "repetitive, high-volume, low-deviation tasks" may have held for the low-hanging fruit of early automation waves, but it has since become a running gag among automation experts. Processes are simply not that simple anymore.

In line with that, there has been much hype around terms like Intelligent Process Automation and even hyperautomation. However, these serve slightly different purposes, and neither is designed to work on unstructured data in the first place.

None of this should imply that rule-based automation tools are no longer in use. The opposite is true: automating processes that involve unstructured data should be the last resort. But there are situations with simply no alternative, e.g. when stakeholders won't provide input in a structured manner.

In reality, 80% of all company data comes in unstructured form, and a large part of processes require human judgment for exactly that reason. That is why many automation projects fall short of expectations: variation is much higher than anticipated.

Large companies, especially in the tech sector, have moved ahead and invested in Data Science departments to put technologies like Deep Learning to productive use. But such teams are out of reach even for mid-sized companies.

As a consequence, we are seeing some companies build AI applications capable of addressing problems around unstructured data "out of the box", and we expect many more to follow in the years to come. This will give businesses access to technologies that were so far reserved for big tech companies.
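To make the structured/unstructured distinction concrete, here is a deliberately simple Python sketch. The structured case is handled by a fixed rule over known fields; the unstructured case has no fields at all, so even this toy version needs judgment-like logic. All names are hypothetical, and the keyword check is a stand-in for what would be a trained model in practice:

```python
# Structured data: a fixed rule over known fields is enough.
def route_invoice(record: dict) -> str:
    """Route a structured record using a known business rule."""
    return "approve" if record["amount"] < 1000 else "review"

# Unstructured data: free-form text has no fields to test against.
# A real system would use a trained model; keywords are a toy stand-in.
URGENT_HINTS = ("asap", "urgent", "immediately")

def route_email(text: str) -> str:
    """Classify free-form text with a crude keyword heuristic."""
    lowered = text.lower()
    return "priority" if any(hint in lowered for hint in URGENT_HINTS) else "standard"

print(route_invoice({"amount": 250}))                       # approve
print(route_email("Please send the report ASAP, thanks!"))  # priority
```

The gap between the two functions is exactly the gap the "out of the box" AI tools mentioned above aim to close: replacing the brittle keyword heuristic with a model, without requiring an in-house data science team.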

Condition 2: Open systems

Open access and interfaces used to be optional, but they must and will become the de facto standard for companies that want to play. For one, software has largely moved to the cloud and no longer needs to work in isolation. For another, the needs and preferences of buyers are too diverse for a single vendor to cover them all. Yet some companies are still trying to force users into their ecosystems.

The problem? People generally don't like being forced into a single ecosystem, and for good reason: just as you would not buy all your clothes from one brand, most software users want to be in charge of the tools they interact with every day.

Therefore, the driving theme will be simple: Do one thing and do it extremely well – and with open access points for other tools to seamlessly interact with. This will give decision-makers the necessary flexibility to pick the absolute best for what they intend to do as opposed to some "we have that too" version.

We are convinced that more vendors will open up their tools via public APIs and direct integrations. Buyers, not vendors, should decide which tool is best for them.
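In practice, "open access points" often just means tools exchanging plain JSON over HTTP or webhooks. A minimal sketch, assuming two hypothetical tools that merely agree on a shared payload schema (none of the names below refer to any real product's API):

```python
import json

# Hypothetical output of "tool A" (say, a document classifier).
classifier_result = {"document_id": "doc-42", "label": "invoice", "confidence": 0.93}

# Serialize to a wire format any other tool can consume.
payload = json.dumps(classifier_result)

# "Tool B" (say, a workflow engine) parses the same schema and decides
# what happens next -- no shared vendor required.
event = json.loads(payload)
action = "start_invoice_workflow" if event["label"] == "invoice" else "archive"
print(action)  # start_invoice_workflow
```

The point is not the code but the contract: as long as both sides agree on the schema, each tool can be the best-of-breed choice for its one job.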

Condition 3: Focus on the end-user experience

There used to be a split between operators (those who execute the processes) and automators (those who are supposed to automate them). The consequences were opaque projects, diffuse accountability, and lots of tribal knowledge concentrated in a few "chosen ones" – on both sides.

This line has vanished as companies have recognized that automation's true impact comes from those who know the process best. Attention has thus shifted to the end-user: someone who knows how to use a program, but preferably without having to read the documentation first.

The investment firm Andreessen Horowitz recently published an article about the importance of good design:

A decade ago, [...] you needed a workflow manual just to follow the user interface! But now — in the decade of design — the interface no longer reflects the code; rather, the code reflects the design. We expect better, we deserve better, we demand better… it’s no longer optional to have good design.

Good design is not to be confused with "making something pretty" – good design is about solving a problem. A good starting point is thus a relentless focus on the user experience, because users will ultimately decide whether a tool gets widely adopted or replaced by a better alternative – of which there are plenty.

Companies like Slack, Atlassian (makers of Trello and JIRA), or Airtable have shown how applications can be adopted by large organizations without even going through a proof of concept (POC), extensive training programs, or dedicated rollout teams – thereby setting the bar (and user expectations!) for what good design looks like today.


We expect that companies that meet these three conditions will benefit far more than those that don't, as measured in customer satisfaction and, consequently, user adoption. A tool that disregards any of the three may still be a fine automation tool today but will fall behind these fundamental changes within 3-5 years.

We are in the lucky position of just getting started and can choose our focus rather freely. If we saw things developing differently, we could simply pursue that instead. But we see clear evidence of these trends manifesting and are therefore placing our bets on them.

Coming back to the question in the introduction – "What's next?" – we think we should rather be asking "What's on top?". The true revolution will only come when AI can credibly mimic what humans do to a much larger extent. Until then, we hope that the number of new terms invented for old technology remains moderate, and that companies instead focus on what still needs some care.

Now that you're here

Levity is a tool that allows you to train AI models on images, documents, and text data. You can rebuild manual workflows and connect everything to your existing systems without writing a single line of code. If you liked this blog post, you'll love Levity.

Sign up

