Blackbird.AI grabs $10M to help brands counter disinformation

September 21, 2021 by David Barret

New York-based Blackbird.AI has closed a $10 million Series A as it prepares to launch the next version of its disinformation intelligence platform this fall.

The Series A is led by Dorilton Ventures, along with new investors including Generation Ventures, Trousdale Ventures, StartFast Ventures and Richard Clarke, former chief counter-terrorism advisor for the National Security Council. Existing investor NetX also participated.

Blackbird says the funding will be used to scale up to meet demand in new and existing markets, including by expanding its team and spending more on product development.

The 2017-founded startup sells software as a service targeted at brands and enterprises managing risks related to malicious and manipulative information — touting the notion of defending the “authenticity” of corporate marketing.

It’s applying a range of AI technologies to tackle the challenge of filtering and interpreting emergent narratives from across the Internet to identify disinformation risks targeting its customers. (And, for the record, this Blackbird is no relation to an earlier NLP startup, called Blackbird, which was acquired by Etsy back in 2016.)

Blackbird AI is focused on applying automation technologies to detect malicious/manipulative narratives — so the service aims to surface emerging disinformation threats for its clients, rather than delving into the tricky task of attribution. On that front it’s only looking at what it calls “cohorts” (or “tribes”) of online users — who may be manipulating information collectively, for a shared interest or common goal (talking in terms of groups like antivaxxers or “bitcoin bros”). 
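To give a general sense of what cohort detection can look like — a generic sketch, not Blackbird’s actual (and unpublished) method — one common approach is community detection over a graph of accounts that repeatedly amplify the same content. The accounts, edge weights and library choice below are all assumptions for illustration:

```python
# Hypothetical sketch: grouping users into "cohorts" via community detection
# on a shared-amplification graph. Data and account names are invented;
# Blackbird's real pipeline is proprietary and not described in the article.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Edge (u, v, w): accounts u and v amplified the same narratives w times.
interactions = [
    ("user_a", "user_b", 12),
    ("user_a", "user_c", 9),
    ("user_b", "user_c", 7),
    ("user_d", "user_e", 15),
    ("user_e", "user_f", 11),
]

G = nx.Graph()
G.add_weighted_edges_from(interactions)

# Each community is a candidate "cohort": accounts acting in concert,
# regardless of who (if anyone) is directing them.
cohorts = greedy_modularity_communities(G, weight="weight")
for i, cohort in enumerate(cohorts, start=1):
    print(f"cohort {i}: {sorted(cohort)}")
```

A detected community is only a candidate cohort, of course; flagging it as manipulative would still depend on the kinds of manipulation and deception signals discussed later in the piece.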

Blackbird CEO and co-founder Wasim Khaled says the team has chalked up five years of R&D and “granular model development” to get the product to where it is now. 

“In terms of technology the way we think about the company today is an AI-driven disinformation and narrative intelligence platform,” he tells TechCrunch. “This is essentially the efforts of five years of very in-depth, ears to the ground research and development that has really spanned people everywhere from the comms industry to national security to enterprise and Fortune 500,  psychologists, journalists.

“We’ve just been non-stop talking to the stakeholders, the people in the trenches — to understand where their problem sets really are. And, from a scientific empirical method, how do you break those down into its discrete parts? Automate pieces of it, empower and enable the individuals that are trying to make decisions out of all of the information disorder that we see happening.”

The first version of Blackbird’s SaaS was released in November 2020 but the startup isn’t disclosing customer numbers as yet. v2 of the platform will be launched this November, per Khaled. 

Also today it’s announcing a partnership with PR firm Weber Shandwick to support customers in responding to specific malicious messaging that could impact their businesses and which its platform has flagged as an emerging risk.

Disinformation has of course become a much labelled and discussed feature of online life in recent years, although it’s hardly a new (human) phenomenon. (See, for example, the orchestrated airborne leaflet propaganda drops used during wartime to spread unease among enemy combatants and populations.) However it’s fair to say that the Internet has supercharged the ability of intentionally bad/bogus content to spread and cause reputational and other types of harm.

Studies show that ‘fake news’ (as this stuff is sometimes also called) travels online far faster than truthful information. The ad-funded business models of mainstream social media platforms are implicated here, since their commercial content-sorting algorithms are incentivized to amplify whatever is most engaging, which isn’t usually the grey and nuanced truth.

Stock and crypto trading is another growing incentive for spreading disinformation — just look at the recent example of Walmart targeted with a fake press release suggesting the retailer was about to accept litecoin.

All of which makes countering disinformation look like a growing business opportunity.

Earlier this summer, for example, another stealthy startup in this area, ActiveFence, uncloaked to announce a $100M funding round. Others in the space include Primer and Yonder (previously New Knowledge), to name a few.

Some earlier players in the space have been acquired by tech giants wrestling with how to clean up their own disinformation-ridden platforms — such as UK-based Fabula AI, which was bought by Twitter in 2019.

Another — Bloomsbury AI — was acquired by Facebook. And the tech giant now routinely tries to put its own spin on its disinformation problem by publishing reports that contain a snapshot of what it dubs “coordinated inauthentic behavior” that it’s found happening on its platforms (although Facebook’s selective transparency often raises more questions than it answers.)

The problems created by bogus online narratives ripple far beyond key host and spreader platforms like Facebook — with the potential to impact scores of companies and organizations, as well as democratic processes.

But while disinformation is a problem that can now scale everywhere online and affect almost anything and anyone, Blackbird is concentrating on selling its counter tech to brands and enterprises — targeting entities with the resources to pay to shrink reputational risks posed by targeted disinformation.

Per Khaled, Blackbird’s product — which consists of an enterprise dashboard and an underlying data processing engine — is not just doing data aggregation, either; the startup is in the business of intelligently structuring the threat data its engine gathers, he says, arguing too that it goes further than some rival offerings that are doing NLP (natural language processing) plus maybe some “light sentiment analysis”, as he puts it.

NLP is also a key area of focus for Blackbird, though, along with network analysis — doing things like looking at the structure of botnets.
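For a flavor of the kind of network-analysis heuristic this can involve — a generic sketch, not Blackbird’s implementation — one simple signal of bot-like coordination is many distinct accounts posting near-identical text within a tight time window. Every account, post and threshold below is invented for illustration:

```python
# Hypothetical sketch of a coordination heuristic: flag near-duplicate posts
# published by several accounts within a short window, a common sign of
# bot-like amplification. Data and thresholds are illustrative only.
from collections import defaultdict
from datetime import datetime, timedelta

posts = [
    {"account": "acct1", "text": "Company X is hiding the truth!", "ts": datetime(2021, 9, 20, 10, 0)},
    {"account": "acct2", "text": "Company X is hiding the truth!", "ts": datetime(2021, 9, 20, 10, 1)},
    {"account": "acct3", "text": "Company X is hiding the truth!", "ts": datetime(2021, 9, 20, 10, 2)},
    {"account": "acct4", "text": "Just bought some coffee.", "ts": datetime(2021, 9, 20, 10, 3)},
]

WINDOW = timedelta(minutes=10)   # how tightly clustered the posts must be
MIN_ACCOUNTS = 3                 # how many distinct accounts trigger a flag

by_text = defaultdict(list)
for post in posts:
    by_text[post["text"]].append(post)

for text, group in by_text.items():
    accounts = {p["account"] for p in group}
    spread = max(p["ts"] for p in group) - min(p["ts"] for p in group)
    if len(accounts) >= MIN_ACCOUNTS and spread <= WINDOW:
        print(f"possible coordinated push ({len(accounts)} accounts in {spread}): {text!r}")
```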

But the suggestion is that Blackbird goes further than the competition by virtue of considering a wider range of factors to help identify threats to the “integrity” of corporate messaging. (Or, at least, that’s its marketing pitch.)

Khaled says the platform focuses on five “signals” to help it deconstruct the flow of online chatter related to a particular client and their interests — which he breaks down as: narratives, networks, cohorts, manipulation and deception. And for each area of focus Blackbird is applying a cluster of AI technologies, according to Khaled.
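For a sense of how five per-signal scores could roll up into the kind of client-weighted “composite risk index” Khaled describes below, here’s a minimal sketch. The scores, weights and scoring function are assumptions for illustration only; Blackbird hasn’t published its actual model:

```python
# Hypothetical sketch of a client-weighted composite risk index over the five
# signals named in the article. All numbers here are made up for illustration.
SIGNALS = ("narratives", "networks", "cohorts", "manipulation", "deception")

def composite_risk(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of per-signal scores, each assumed to lie in [0, 1]."""
    total_weight = sum(weights[s] for s in SIGNALS)
    return sum(scores[s] * weights[s] for s in SIGNALS) / total_weight

# A client most worried about coordinated manipulation might weight it higher.
scores = {"narratives": 0.7, "networks": 0.4, "cohorts": 0.6, "manipulation": 0.9, "deception": 0.5}
weights = {"narratives": 1.0, "networks": 1.0, "cohorts": 1.0, "manipulation": 2.0, "deception": 1.0}

print(f"composite risk: {composite_risk(scores, weights):.2f}")
```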

But while the aim is to leverage the power of automation to tackle the scale of the disinformation challenge that businesses now face, Blackbird isn’t able to do this purely with AI alone; expert human analysis remains a component of the service — and Khaled notes that, for example, it can offer customers (human) disinformation analysts to help them drill further into their disinformation threat landscape.

“What really differentiates our platform is we process all five of these signals in tandem and in near real-time to generate what you can think of almost as a composite risk index that our clients can weigh, based on what might be most important to them, to rank the most important action-oriented information for their organization,” he says.

“Really it’s this tandem processing — quantifying the attack on human perception that we see happening; what we think of as a cyber attack on human perception — how do you understand when someone is trying to shift the public’s perception? About a topic, a person, an idea. Or when we look at corporate risk, more and more, we see when is a group or an organization or a set of accounts trying to drive public scrutiny against a company for a particular topic.

“Sometimes those topics are already in the news but the property that we want our customers or anybody to understand is when is something being driven in a manipulative manner? Because that means there’s an incentive, a motive, or an unnatural set of forces… acting upon the narrative being spread far and fast.”

“We’ve been working on this, and only this, and early on decided to do a purpose-built system to look at this problem. And that’s one of the things that really set us apart,” he also suggests, adding: “There are a handful of companies that are in what is shaping up to be a new space — but often some of them were in some other line of work, like marketing or social and they’ve tried to build some models on top of it.

“For bots — and for all of the signals we talked about — I think the biggest challenge for many organizations if they haven’t completely purpose built from scratch like we have… you end up against certain problems down the road that prevent you from being scalable. Speed becomes one of the biggest issues.

“Some of the largest organizations we’ve talked to could in theory produce the signals — some of the signals that I talked about before — but the lift might take them ten to 12 days. Which makes it really unsuited for anything but the most forensic reporting, after things have kinda gone south… What you really need it in is two minutes or two seconds. And that’s where — from day one — we’ve been looking to get.”

As well as brands and enterprises with reputational concerns — such as those whose activity intersects with the ESG space; aka ‘environmental, social and governance’ — Khaled claims investors are also interested in using the tool for decision support, adding: “They want to get the full picture and make sure they’re not being manipulated.”

At present, Blackbird’s analysis focuses on emergent disinformation threats — aka “nowcasting” — but the goal is also to push into predictive disinformation threat detection, to help prepare clients for information-related manipulation problems before they occur. Albeit there’s no timeframe for launching that component yet.

“In terms of counter measurement/mitigation, today we are by and large a detection platform, starting to bridge into predictive detection as well,” says Khaled, adding: “We don’t take the word predictive lightly. We don’t just throw it around so we’re slowly launching the pieces that really are going to be helpful as predictive.

“Our AI engine trying to tell [customers] where things are headed, rather than just telling them the moment it happens… based on — at least from our platform’s perspective — having ingested billions of posts and events and instances to then pattern match to something similar to that that might happen in the future.”

“A lot of people just plot a path based on timestamps — based on how quickly something is picking up. That’s not prediction for Blackbird,” he also argues. “We’ve seen other organizations call that predictive; we’re not going to call that predictive.”
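For contrast, the naive “plot a path based on timestamps” approach Khaled is dismissing might look something like the sketch below — a straight-line extrapolation of post volume over time, with entirely invented numbers:

```python
# Hypothetical sketch of naive timestamp-based trend extrapolation: fit a line
# to hourly post volume and project it forward. Numbers are illustrative only.
import numpy as np

hours = np.arange(6)                        # hours since a narrative appeared
volume = np.array([5, 9, 20, 34, 61, 95])   # observed posts per hour

slope, intercept = np.polyfit(hours, volume, 1)   # linear trend
forecast = slope * np.arange(6, 9) + intercept    # next three hours

print("projected posts/hour:", np.round(forecast).astype(int))
```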

In the nearer term, Blackbird has some “interesting” countermeasure tech in its pipeline to assist teams, coming in Q1 and Q2 of 2022, Khaled adds.
