‘Hardcore’ Machine Learning PaaS secures $1.8M to help scale business infrastructures
Finnish startup Valohai has raised $1.8 million for its ‘hardcore’ Machine Learning (ML) Platform-as-a-Service (PaaS), which helps companies scale model deployment while automating their ML infrastructure.
Valohai focuses on empowering data science professionals by maintaining tool-agnostic machine learning infrastructure, making experiments reproducible, encouraging sharing, and reducing manual labor, all while keeping costs down.
Its platform automates Machine Learning training and deployment infrastructure for companies that are looking to increase their business efficiency by leveraging ML.
This scalable solution allows companies to run multiple variations of machine learning ideas in parallel, leaving developers free to focus on pushing the boundaries of their research with minimal idle time.
“Currently, every company starting with large-scale Machine Learning needs to build a lot of overhead infrastructure before they can apply deep learning to solve the actual problem,” said Eero Laaksonen, CEO and co-founder of Valohai.
The latest round of funding was led by Helsinki-based seed stage investment company Superhero Capital, with participation from Reaktor Ventures and Business Finland, the Finnish Funding Agency for Innovation.
“Hardcore machine learning and data science is a team sport, and Valohai acts as the coach,” said Juha Ruohonen, Founding Partner at Superhero Capital.
“Uniform workflows and collaboration tools will play a pivotal role in guiding machine learning solutions to the next level. Valohai’s collaborative tool means data professionals can work together towards a common goal and create the next big thing for machine learning and AI,” he added.
The Valohai platform is designed for peer review and open-source collaboration. Its system tracks changes, builds reproducible algorithms, and ensures that changes in team composition do not hinder ongoing experiments.
“By providing a standardized infrastructure and workflow, we help companies focus on the actual business-driving machine learning models instead of the infrastructure,” added Laaksonen.
Runtime environments use GPU-enabled Docker images, so virtually any language or machine learning library can run on the Platform-as-a-Service.
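As an illustration of this Docker-based approach, a GPU-enabled runtime image might look like the sketch below. This is not an official Valohai configuration; the base image, packages, and the `train.py` script are hypothetical stand-ins.

```dockerfile
# Hypothetical example of a GPU-enabled runtime image, not an official Valohai config.
# Start from NVIDIA's public CUDA runtime base image so the container can use GPUs.
FROM nvidia/cuda:11.8.0-runtime-ubuntu22.04

# Install Python and a deep learning library of choice; because the runtime is
# just a Docker image, any language or framework could be installed instead.
RUN apt-get update && apt-get install -y python3 python3-pip && \
    pip3 install torch

# The training script the platform would execute inside this environment.
COPY train.py /app/train.py
CMD ["python3", "/app/train.py"]
```

Packaging the environment this way is what makes the platform tool-agnostic: the infrastructure only needs to run containers, not understand the framework inside them.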
It supports the major cloud platforms, including Amazon Web Services, Microsoft Azure, and Google Cloud Platform.