Pipeshift raises $2.5M to offer enterprises flexibility and control over their AI stack using open-source models
San Francisco, California – January 23, 2025 – Over 80% of enterprises are turning to open-source AI models. The challenge, however, isn’t accessing powerful models; it’s deploying them efficiently and securely. Companies currently have to stitch together more than 10 different components just to begin deployment, with each optimization requiring thousands of engineering hours. Today, AI infrastructure startup Pipeshift announced its $2.5M seed round to solve this challenge, launching a new-age Platform-as-a-Service (PaaS) that enables engineering teams to orchestrate AI workloads across any infrastructure – cloud or on-premises – with unprecedented speed and control.
This round was led by Y Combinator and SenseAI Ventures, with additional participation from Arka Venture Labs, Good News Ventures, Nivesha Ventures, Astir VC, GradCapital, and MyAsiaVC. Seasoned Silicon Valley angels such as Kulveer Taggar (CEO of Zeus), Umur Cubukcu (CEO of Ubicloud and former Head of PostgreSQL at Azure), and Krishna Mehra (former Head of Engineering at Meta and co-founder of Capillary Technologies) also joined the round.
Unlike existing players that act as GPU brokers offering one-size-fits-all solutions, Pipeshift understands the enterprise need for control and flexibility over infrastructure and offers an end-to-end MLOps stack for enterprises to train, deploy, and scale open-source GenAI models — LLMs, vision models, audio models, and image models — across any cloud or on-prem GPUs. As a result, enterprises can deploy their AI workloads in production faster and more reliably. And as new model and hardware architectures enter the market, Pipeshift future-proofs infrastructure investments: its modular MLOps stack lets enterprises bring down their GPU infrastructure costs without any additional engineering effort.
“2025 marks the year when GenAI transitions into production, and engineering teams are witnessing the benefits of using open-source models in-house. This offers high levels of privacy and control alongside enhanced performance and lower costs. However, this is a complex and expensive process involving multiple components being stitched together,” said Arko Chattopadhyay, Co-Founder and CEO of Pipeshift. He added, “Pipeshift’s enterprise-grade orchestration platform eradicates the need for such extensive engineering investments by not only simplifying deployment but also maximizing production throughput.”
Pipeshift’s founding team has been working on this problem for over a year. Co-founders Arko Chattopadhyay, Enrique Ferrao, and Pranav Reddy met during their undergraduate studies at Manipal Institute of Technology, where they led a defense robotics non-profit supported by NVIDIA, Dassault Systèmes, and SICK Sensor Intelligence. The team focused on deploying machine learning models on the cloud and edge to process real-time sensor data and run task-specific vision models. In 2023, they scaled a Llama2-powered enterprise search app, completely on-prem, within an organization of over 1,000 employees. Building it showed them firsthand the challenges of running and scaling private AI workloads in production, which pushed them to tackle those optimization problems at Pipeshift.
The timing for Pipeshift is significant. As AI reshapes markets and redefines competition, companies know the rewards for seizing the moment are immense. However, security and data privacy risks loom large, demanding protection for proprietary IP. These challenges compound in a rapidly evolving technology landscape where missteps lead to expensive delays and lost opportunities. Adding to this complexity is the uniqueness of every business problem: no two AI strategies are the same, and every deployment must align with the distinct needs of the organization. Pipeshift solves this by combining the flexibility and precision of open-source AI models with the scalability of its enterprise MLOps platform, letting businesses overcome these challenges while managing resource demands and ensuring compliance — all without losing sight of their broader goals.
“Enterprises prefer open-source GenAI for the benefits of privacy, model ownership, and lower costs. However, transitioning GenAI to production remains a complex and expensive process requiring multiple components to be stitched together,” said Rahul Agarwalla, Managing Partner of SenseAI Ventures. He added, “Pipeshift’s enterprise-grade orchestration platform eliminates the need for such extensive engineering investments by not only simplifying deployment but also maximizing production throughput.”
Anu Mangaly, Director of Software Engineering at NetApp, said, “Pipeshift’s ability to orchestrate existing GPUs to deliver >500 tokens/second for models like Llama 3.1 8B without any compression or quantization of the LLM is extremely impressive, allowing businesses to reduce their compute footprint and costs in production, while delivering enhanced user experiences that are also private and secure.” She also shared, “At NetApp, we understood the enterprise need for a single data fabric across cloud, on-prem, and hybrid setups. Pipeshift’s orchestration allows enterprises to unlock the same potential from the new generation of AI models, all within their own infrastructure.”
Yash Hemaraj, Founding Partner at Arka Venture Labs and General Partner at BGV added: “We invested in Pipeshift because their innovative platform addresses a critical need in enterprise AI adoption, enabling seamless deployment of open-source language models. The founding team’s deep technical expertise and track record in scaling AI solutions impressed us immensely. Pipeshift’s vision aligns perfectly with our focus on transformative Enterprise AI companies, particularly those bridging the US-India tech corridor, making them an ideal fit for our portfolio.”
Having already worked with over 30 companies including NetApp, Pipeshift aims to become the trusted partner for organizations looking to unlock AI’s potential while maintaining control of their infrastructure and data.
About Pipeshift
Pipeshift offers an end-to-end MLOps stack for enterprises to train, deploy, and scale open-source GenAI models – LLMs, vision models, audio models, and image models – across any cloud or on-prem GPUs. Enterprises can deploy their AI workloads in production faster and more reliably. And as new model and hardware architectures enter the market, Pipeshift future-proofs infrastructure investments: its modular MLOps stack lets enterprises bring down their GPU infrastructure costs without any additional engineering effort on their end.
For more information, please visit https://pipeshift.com/