In the rush to build, test, and deploy AI systems, companies often lack the resources and time to fully validate their systems and ensure they are bug-free. In a 2018 report, Gartner predicted that 85% of AI projects will produce incorrect results due to bias in data, algorithms, or the teams responsible for managing them. Even big tech companies aren't immune to the pitfalls: for one customer, IBM ultimately failed to deliver an AI-powered cancer diagnosis system that cost $62 million over four years.
Inspired by “bug bounty” programs, Jeong-Suh Choi and Soohyun Bae founded Bobidi, a platform focused on helping companies validate their AI systems by exposing them to the global data science community. With Bobidi, Bae and Choi wanted to build a product that would let customers connect AI systems to the bug-hunting community in a “secure” way, via an API.
The idea is to have developers probe AI systems for biases and edge cases, the places where the systems underperform, to reduce the time it takes for validation, Choi explained in an email interview. Bae was previously a senior engineer at Google and led augmented reality mapping at Niantic, while Choi was a senior manager at eBay and led the people engineering team at Facebook. The two met about 10 years ago while working in the tech industry.
“By the time biases or flaws from the model are revealed, the damage is already irreversible,” Choi said. “For example, natural language processing algorithms [like OpenAI’s GPT-3] often make problematic comments, or respond incorrectly to those comments, related to hate speech, discrimination and insults. Bobidi allows the community to ‘pre-test’ the algorithm and find those loopholes, which is actually very powerful because you can test the algorithm with a lot of people under certain conditions that represent social and political contexts that are constantly changing.”
To test models, the Bobidi developer community builds a validation dataset for a given system. As developers try to find loopholes in the system, customers get an analysis showing patterns of false negatives and false positives, along with the associated metadata (for example, the number of edge cases).
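The source doesn't describe Bobidi's internals, but the kind of analysis described above can be sketched in a few lines: bucket each community attempt into a confusion-matrix cell and count edge cases per metadata tag. Everything here (the `Attempt` record, the `tag` field, the `summarize` helper) is a hypothetical illustration, not Bobidi's actual API.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Attempt:
    """One community attempt: a metadata tag (hypothetical, e.g. a lighting
    condition for a vision model), the model's prediction, and the
    community-verified expected label."""
    tag: str
    predicted: bool
    expected: bool

def summarize(attempts):
    """Bucket attempts into confusion-matrix cells and count edge cases
    (false positives and false negatives) per metadata tag."""
    cells = Counter()
    edge_cases_by_tag = Counter()
    for a in attempts:
        if a.predicted and not a.expected:
            cells["false_positive"] += 1
            edge_cases_by_tag[a.tag] += 1
        elif not a.predicted and a.expected:
            cells["false_negative"] += 1
            edge_cases_by_tag[a.tag] += 1
        elif a.predicted:
            cells["true_positive"] += 1
        else:
            cells["true_negative"] += 1
    return cells, edge_cases_by_tag

# Toy run: two edge cases found under a "low_light" tag, one correct hit.
attempts = [
    Attempt("low_light", predicted=True, expected=False),
    Attempt("low_light", predicted=False, expected=True),
    Attempt("daylight", predicted=True, expected=True),
]
cells, by_tag = summarize(attempts)
```

A real pipeline would also weed out illegitimate attempts before counting, since the article notes customers pay only for “legitimate” ones.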
Exposing sensitive systems and models to the outside world might make some companies hesitant, but Choi claims that Bobidi models “automatically expire” after a certain number of days, so they can’t be reverse engineered. Customers pay for the service based on the number of “legitimate” attempts by the community, at a rate of $0.99 per 10 attempts.
Choi notes that the amount of money developers can earn through Bobidi — $10 to $20 an hour — is significantly above minimum wage in many regions around the world. Assuming Choi’s estimates hold up, Bobidi bucks the trend in the data science industry, which tends to pay data validators and labelers poorly. The annotators of the widely used ImageNet computer vision dataset earned a median wage of $2 per hour, one study found, and only 4% earned more than $7.25 per hour.
Aside from the pay structure, crowd-powered validation is not a new idea. In 2017, the University of Maryland’s Computational Linguistics and Information Processing Laboratory launched a platform called Break It, Build It that lets researchers submit models to users who try to come up with examples that beat them. Elsewhere, Meta maintains a platform called Dynabench that has users try to “fool” models designed to analyze sentiment, answer questions, detect hate speech, and more.
But Bae and Choi believe the “gamified” approach will help Bobidi stand out from the crowd. While still in its infancy, the company claims clients among augmented reality and computer vision startups, including Seerslab, Deepixel, and Gunsens.
The traction was enough to convince several investors to pledge money to the venture. Today, Bobidi closed a $5.5 million seed round with participation from Y Combinator, We Ventures, Hyundai Motor Group, Scrum Ventures, New Product Experimentation (NPE) at Meta, Lotte Ventures, Atlas Pac Capital and several unnamed investors.
Note that Bobidi is one of the first investments for NPE, which last year shifted gears from building consumer-facing apps to making seed-stage investments in AI-focused startups. When approached for comment, Sunita Parasuraman, head of NPE investments, said via email: “We are excited to support the talented founders of Bobidi, who help companies better validate AI models with an innovative solution powered by people all over the world.”
“Bobidi is a mashup between community and AI, a unique combination of expertise that we share,” added Choi. “We believe that the era of big data is coming to an end and we are about to enter the new era of quality data. It means moving from the era where the focus was on building the best model given the datasets, to the new era, where people are tasked with finding the best dataset given the model — the fully opposite approach.”
Choi said the proceeds from the seed round will be spent on hiring staff — Bobidi currently has 12 employees — and building “customer insights experiences” and several “core machine learning technologies.” The company hopes to triple the size of its team by the end of the year, despite the economic headwinds.