" />
Technology

Partnership on AI vs OpenAI: Consolidation of Power vs Open Source


The Partnership on AI consolidates control for a handful of corporations and their stakeholders, pitting them against Elon Musk’s OpenAI, an open source non-profit that is available to everyone.

Both the Partnership on AI and Elon Musk’s OpenAI were established to advance humanity through Artificial Intelligence. While Musk’s OpenAI was devised to be open and available to the public, the Partnership on AI is more of a consolidation of power that benefits its stakeholders, and its findings will later be used to shape public policy.

Google/DeepMind, Microsoft, Facebook, IBM, and Amazon have collaborated to form the Partnership on AI to “formulate best practices on the challenges and opportunities within the field.” While third-party groups such as academics, non-profits, and policy specialists have been invited to be on the new Board of Directors, the Partnership on AI ultimately serves the private stakeholders, and whatever the stakeholders dictate will be relayed to the public.

Partnership on AI claims not to lobby policy makers, yet invites policy specialists to its Board of Directors

Although the Partnership on AI “does not intend to lobby government or other policymaking bodies,” its internal research and conclusions will no doubt weigh heavily on those same “policymaking bodies.” What would be the point of consolidating all this money and power into one partnership, and of inviting policy specialists onto the Board of Directors, if no call to action on policy followed?

If the Partnership wants to establish itself as the main authority on the subject of AI best practices, how can policy makers not take its “authoritative” findings into account when crafting policy?

Maybe that’s the point. The Partnership on AI wouldn’t have to lobby governments or policymaking bodies, because those very same institutions would look to the Partnership as an authority they can rely on when enacting their policies.

Karl Marx didn’t have to lobby governments to have his ideas enacted; he was already dead long before the Communist revolutions in Russia, China, Cuba, and elsewhere even took place, yet the Communist Manifesto he wrote with Friedrich Engels was the framework those governments used to enact their policies.

“Open” discussion between “closed” groups

One of the key goals outlined on the Partnership on AI’s website is “to provide a regular, structured platform for AI researchers and key stakeholders to communicate directly and openly with each other about relevant issues.”

Under the guise of creating an “open platform for discussion and engagement,” the platform is really about “open” discussion between insiders and stakeholders, not the public. The public won’t be privy to what is being discussed, only to the Partnership’s conclusions and, later, to the policies enacted after decisions are made behind closed doors.

Following up on the goal of so-called open discussion, the Partnership will then relay its findings by acting as a self-proclaimed “trusted and expert point of contact” while it creates its own educational material for the public.

In other words, the Partnership on AI is a consolidation of power that conducts its own research, gets its funding from special interests in finance, healthcare, and education, and then presents its findings to the public as the ultimate moral authority on the subject of AI.

Elon Musk’s alternative open source platform

The antithesis to the Partnership on AI is Elon Musk’s OpenAI. By naming one of his most successful business ventures Tesla Motors, Musk signaled his affinity for the ideals of the late genius Nikola Tesla, who wanted to provide free, alternative energy to the public.

Tesla was blocked in his efforts by J.P. Morgan and Thomas Edison, who wanted to consolidate and control the energy sector by placing a restrictive, for-profit monopoly on electricity.

On the surface, both the Partnership on AI and OpenAI seem to serve the same goal; however, the means by which both organizations approach the subject of Artificial Intelligence ethics differ, and that has to do with whom they serve.


“As a non-profit, our aim is to build value for everyone rather than shareholders,” states OpenAI’s website. This is a stark contrast to the Partnership on AI’s stakeholder-oriented research.

Because OpenAI is “free from financial obligations,” Musk’s organization is more geared towards free collaboration across institutions to focus on “positive human impact” with the idea that AI “should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible.”

Just as HBO and Sky Atlantic get set to debut the futuristic drama “Westworld,” about a theme park staffed by robotic AI and the ethical complications that arise, OpenAI has already begun to contemplate the “unknown” and potentially dangerous evolution of Artificial Intelligence, as outlined on its website:

“It’s hard to fathom how much human-level AI could benefit society, and it’s equally hard to imagine how much it could damage society if built or used incorrectly.”


While the Partnership on AI sets out to make itself the leading moral authority on the “best practices” of AI, OpenAI seeks “to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”

Comments

  1. Steven J Weinberg

    October 5, 2016 at 9:47 PM

    I’m leaving the comment, first, that I can’t believe that there are no other comments 6 days after publication. Come on folks. If you are reading this in the first place you are involved enough in the conundrum of “how to ensure AI is limited to constructive uses” to have your own thoughts on the subject. We need a vibrant discussion on this issue – the most important issue facing our species for the foreseeable future.

    Second, I thank The Sociable and Tim Hinchliffe for stating what I was thinking when I first heard about the launch of the Partnership on AI. I had/have my doubts about OpenAI’s actual openness, but I’m willing to be convinced by Tim and others. The Partnership on AI seems much scarier, based on things I’ve heard Jaron Lanier and others say. Bill Joy! We’ve finally started catching up to what you wrote in WIRED in April 2000, a full 16 years ago. Haven’t heard from you lately.

    • Tim Hinchliffe

      October 6, 2016 at 9:28 AM

      Hi Steven. Thanks for commenting. I guess we’ll have to wait to see what comes about from OpenAI. It seems, like with any new technology or policy surrounding technology, there are those who want to control everything, and those who are earnest about openness. And still there are others who claim to be open while secretly trying to consolidate power.

      What irks me is that the Partnership on AI is trying to make itself seem like a band of moral crusaders. They say that their findings won’t be used to implement policy, yet they invite policy makers to their board of directors. Yikes! Most likely, all the new tech that comes out will have their patents, and what they’re going for is a consolidated monopoly. With that much wealth, power, and ‘expertise’ they can greatly influence policy making to their benefit.

@TimHinchliffe

Tim Hinchliffe is a veteran journalist whose passions include writing about how technology impacts society and Artificial Intelligence. He prefers writing in-depth, interesting features that people actually want to read. Previously, he worked as a reporter for the Ghanaian Chronicle in West Africa, and Colombia Reports in South America. tim@sociable.co
