AI – Hypergrid Business
https://www.hypergridbusiness.com
Covering virtual reality, immersive worlds, and other emerging technologies

Virtual Worlds Ease Real Stress
Oct. 7, 2024 | https://www.hypergridbusiness.com/2024/10/virtual-worlds-ease-real-stress/
(Image via Adobe Firefly.)

I am not always what one would call the perfect state of calm. I have a lot of anxiety about the present and the future.

Also, my family, like anyone’s family, can be hectic. I could go for a walk, but I don’t often get around to it, instead getting stuck in my day-to-day routine. Being that I live in such a small place, there isn’t really anywhere all that quiet. And even if I were in a quiet place, then that wouldn’t stop the anxieties and ruminations in my head.

Listening to music and whale sounds has helped me calm down a little in the past, but what I need is to be able to keep myself from spiraling too far into my thoughts.

Thankfully, with advancements in technology, many tools are available to help me retain my sanity, even when an actual therapist is not within reach.

A couple of tools that have supported me are Tripp’s virtual reality meditation program and their free mobile app. Tripp’s VR app made sense for me, since I have found video games to be a sanctuary where I can tune out the anxieties in my head and get lost in another realm. This has also been the case for Tripp’s co-founder and CEO, Nanea Reeves.

“I realized throughout my journey that video games have been kind of a retreat for me at times in my life,” said Reeves. “Even the casual ones on my phone, if I am kind of just ruminating on a lot of negative internal talk, I can play something and it sort of clears my head.”

I tried out the free VR app demo on a handful of occasions, when I felt particularly at a loss. It gave me a private space, even if it was a virtual space, where I could ease my tension a little, and just be. This was thanks to an immersive, multi-sensory experience, with a serene audio track and vibrant visuals, like something out of a kaleidoscope.

The immersive nature of virtual reality is another aspect of what inspired the genesis of Tripp as a concept.

“I was an investor in the Oculus early stage of their company, and that gave me access to devices,” said Reeves. “And a lot of the early experiences were designed to make you feel afraid, you know, you felt like you were on a cliff . . . And that was really interesting to me. I thought, if you could make someone feel fear that easily, what else could you make them feel?”

I got treated to a surreal fireworks display on launch, then the Tripp hummingbird, who is coincidentally also named Terrence, came and visited. I then performed a short exercise in which, by tilting my head, I navigated a spherical object through a path of obstacles, collecting coins as they went by. The voice of the application suggested seeing the gaps between obstacles as the gaps between thoughts, which is “where the joy lies,” as the narrator puts it.

The best way to explain this, as author Pema Chödrön puts it, is seeing rays of sunshine that peek through the clouds. Having this kind of visual metaphor for my mental activity was something novel to me, imagining my thoughts as physical objects, and looking between them to see the sunshine. Especially for those who tend to think in terms of images, this can be a powerful metaphor.

Another of Tripp’s tools that I have relied on is its AI assistant, Kōkua, which has recently hit the VR application.

Kōkua is fairly straightforward: I type my current mood in the prompt, and the AI voice assistant, based on CEO Nanea Reeves’ voice, offers words of comfort. It then creates a short reflection – some words to guide me through dealing with the stressors I am experiencing, lasting just over two minutes. At the end, I am welcome to provide feedback on whether the meditation helped at all.

I found that, the majority of the time, the app was quite helpful. There was one occasion where the app mistakenly thought I had children, but most of the time it could understand what I was trying to say. I think that is what I needed most, to know that it was okay to feel what I felt, and to have that legitimized.

One of the reasons that Kōkua was released on mobile first, according to Reeves, was to gather feedback from users before bringing it to Tripp’s subscription-based VR app.

“We have a smaller audience on mobile,” continued Reeves. “We wanted to learn and see where we might have gotten certain things wrong, based on how people are interacting with it for support.”

While Tripp’s VR and mobile applications are helpful, Reeves hopes to avoid the tools being used in place of trained professional therapists.

“We do have some connections to different support lines,” she added.

If the AI detects that someone might need real support, such as if they’re in a crisis, there’s a connection to professional help.

Tripp’s tools are not therapy in themselves, but from my experience, they have helped in times of distress. If I can’t change what is happening around me, the VR app and mobile app at least help me to feel like I can deal with the challenges, that there is a calm in the middle of the storm. Life has its share of conflicts and arguments. Even if things aren’t okay, Tripp allows me, even for a moment, to be okay.

Tripp VR is a colorful, calming experience, and while Kōkua is not a licensed counselor, it does make a big difference to at least have the feeling that someone is hearing my struggles and being empathetic, even if that empathy is coming from a computer program.

VR and AI have immense potential for improving people’s psychological well-being, and the advances in the technology could become useful as another tool set for mental health professionals.

People think AIs are conscious. What could this mean for bots in OpenSim?
July 12, 2024 | https://www.hypergridbusiness.com/2024/07/people-think-ais-are-conscious-what-could-this-mean-for-bots-in-opensim/
(Image by Maria Korolov via Adobe Firefly.)

I’ve been interacting with OpenSim bots — or NPCs — for nearly as long as I’ve been covering OpenSim. Which is about 15 years. (Oh my God, has it really been that long?)

I’ve been hoping that OpenSim writing would become my day job, but, unfortunately, OpenSim never really took off. Instead, I covered cybersecurity and, more recently, generative AI.

But then I saw some reporting on new studies about AI, and immediately thought — this could really be something in OpenSim.

The study was published this past April in the journal Neuroscience of Consciousness, and it showed that a majority of people – 67% to be exact – attribute some degree of consciousness to ChatGPT. And the more people use these AI systems, the more likely they are to see them as conscious entities.

Then, in May, another study showed that 54% of people, after a conversation with ChatGPT, thought it was a real person.

Now, I’m not saying that OpenSim grid owners should run out and install a bunch of bots on their grids that pretend to be real people, in order to lure in more users. That would be dumb, expensive, a waste of resources, possibly illegal and definitely unethical.

But if users knew that these bots were powered by AI and understood that they’re not real people, they might still enjoy interacting with them and develop attachments to them — just like we get attached to brands, or cartoon animals, or characters in a novel. Or, yes, virtual girlfriends or boyfriends.

In the video below, you can see OpenAI’s recent GPT-4o presentation. Yup, the one where ChatGPT sounds suspiciously like Scarlett Johansson in “Her.” I’ve set it to start at the point in the video where they’re talking to her.

I can see why ScarJo got upset — and why that particular voice is no longer available as an option.

Now, as I write this, the voice chatbot they’re demonstrating isn’t widely available yet. But the text version is — and it’s the text interface that’s most common in OpenSim anyway.

GPT-4o does cost money. It costs money to send it a question and to get a response. A million tokens’ worth of questions — or 750,000 words — costs $5, and a million tokens’ worth of responses costs $15.

A page of text is roughly 250 words, so a million tokens is about 3,000 pages. So, for $20, you can get a lot of back-and-forth. But there are also cheaper platforms.

Anthropic’s Claude, for example, which has tested better than ChatGPT in some benchmarks, costs a bit less — $3 for a million input tokens, and $15 for a million output tokens.
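To put those prices in perspective, here is a minimal back-of-the-envelope sketch in Python. The per-million-token prices are the ones quoted above and will certainly change over time, and the per-exchange token counts are invented purely for illustration; adjust them to match your own bots.

```python
# Back-of-the-envelope cost math for the per-token prices quoted in this article
# (assumed accurate as of mid-2024; check the providers' pricing pages first).
PRICES = {                       # dollars per million tokens: (input, output)
    "gpt-4o": (5.00, 15.00),
    "claude-3.5-sonnet": (3.00, 15.00),
}

WORDS_PER_MILLION_TOKENS = 750_000
WORDS_PER_PAGE = 250

def chat_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the dollar cost of a batch of bot conversations."""
    price_in, price_out = PRICES[model]
    return (input_tokens * price_in + output_tokens * price_out) / 1_000_000

# A million tokens is roughly 3,000 pages of text.
print("pages per million tokens:", WORDS_PER_MILLION_TOKENS // WORDS_PER_PAGE)

# Example: 10,000 NPC exchanges at roughly 200 tokens in and 150 tokens out apiece.
for model in PRICES:
    print(model, round(chat_cost(model, 10_000 * 200, 10_000 * 150), 2), "dollars")
```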

But there are also free, open-source platforms that you can run on your own servers with comparable performance levels. For example, on the LMSYS Chatbot Arena Leaderboard, OpenAI’s GPT-4o is in first place with a score of 1287, Claude 3.5 Sonnet is close behind with 1272, and the (mostly) open source Llama 3 from Meta is not too far distant, with a score of 1207 — and there are several other open source AI platforms at the top of the charts, including Google’s Gemma, NVIDIA’s Nemotron, Cohere’s Command R+, Alibaba’s Qwen2, and Mistral.

I can easily see an OpenSim hosting provider adding an AI service to their package deals.
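As a rough illustration of what such a service might look like behind the scenes, here is a minimal sketch in Python, using the official openai package (v1 or later) with an OPENAI_API_KEY set in the environment. The persona, the model name, and the function name are all made up for the example; an in-world LSL script could pass visitor chat to something like this over HTTP.

```python
# A minimal relay: take whatever a visitor said to an NPC, send it to a hosted
# chat model along with a short persona, and return the model's reply.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

NPC_PERSONA = (
    "You are Tilly, a friendly shopkeeper NPC on an OpenSim grid. "
    "Stay in character, keep replies under 50 words, and never claim to be human."
)

def npc_reply(visitor_message: str) -> str:
    """Send one line of visitor chat to the model and return the NPC's answer."""
    response = client.chat.completions.create(
        model="gpt-4o",  # example model name; a cheaper or open-source model also works
        messages=[
            {"role": "system", "content": NPC_PERSONA},
            {"role": "user", "content": visitor_message},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(npc_reply("Hi! What do you sell here?"))
```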

(Image by Maria Korolov via Adobe Firefly.)

Imagine the potential for creating truly immersive experiences in OpenSim and other virtual environments. If users are predisposed to see AI entities as conscious, we could create non-player characters that feel incredibly real and responsive.

This could revolutionize storytelling, education, and social interactions in virtual spaces.

We could have bots that users can form meaningful relationships with, AI-driven characters that can adapt to individual user preferences, and virtual environments that feel alive and dynamic.

And then there’s the potential for interactive storytelling and games, with quests and narratives that are more engaging than ever before, create virtual assistants that feel like true companions, or even build communities that blur the lines between AI and human participants.

For those using OpenSim for work, there are also applications here for business and education, in the form of AI tutors, AI executive assistants, AI sales agents, and more.

However, as much as I’m thrilled by these possibilities, I can’t help but feel a twinge of concern.

As the study authors point out, there are some risks to AIs that feel real.

(Image by Maria Korolov via Adobe Firefly.)

First, there’s the risk of emotional attachment. If users start to view AI entities as conscious beings, they might form deep, potentially unhealthy bonds with these virtual characters. This could lead to a range of issues, from social isolation in the real world to emotional distress if these AI entities are altered or removed.

We’re already seeing that, with people feeling real distress when their virtual girlfriends are turned off.

Then there’s the question of blurred reality. As the line between AI and human interactions becomes less clear, users might struggle to distinguish between the two.

Personally, I’m not too concerned about this one. We’ve had people complaining that other people couldn’t tell fantasy from reality since the days of Don Quixote. Probably even earlier. There were probably cave people sitting around, saying, “Look at the young people with all their cave paintings. They could be out actually hunting, and instead they sit around the cave looking at the paintings.”

Or even earlier, when language was invented. “Look at those young people, sitting around talking about hunting, instead of going out there into the jungle and catching something.”

When movies were first invented, when people started getting “addicted” to television, or video games… we’ve always had moral panics about new media.

The thing is, those moral panics were also, to some extent, justified. Maybe the pulp novels that the printing press gave us didn’t rot our brains. But Mao’s Little Red Book, the Communist Manifesto, that thing that Hitler wrote that I won’t even name: the evils their authors carried out were aided and abetted by the books they wrote.

So that’s what I’m most worried about — the potential for exploitation. Bad actors could misuse our tendency to anthropomorphize AI, creating deceptive or manipulative experiences that take advantage of users’ emotional connections and lead them to be more tolerant of evil.

But I don’t think that’s something that we, in OpenSim, have to worry about. Our platform doesn’t have the kind of reach it would take to create a new dictator!

I think the worst that would happen is that people might get so engaged that they spend a few dollars more than they planned to spend.

How to use AI to write an opinion column
March 27, 2024 | https://www.hypergridbusiness.com/2024/03/how-to-use-ai-to-write-an-opinion-column/
(Image by Maria Korolov via Adobe Firefly.)

So. You have some thoughts about where OpenSim is going. Or there’s a cool new fashion designer in OpenSim you want to tell people about. Or there’s a feature you’d really like to see implemented.

You’ve been thinking for a while about writing it up and sending it to Hypergrid Business to be published, but writing is just so much work!

Wouldn’t it be great if you could get an AI to read your mind and just write the article for you?

But you can’t. And if you just tell ChatGPT or Claude to “write an article about how great OpenSim is” you’ll get something generic and unreadable. Plus, it won’t have any of your unique insights or information that only you know, which is why you wanted to write the article in the first place.

(Image by Maria Korolov via Adobe Firefly.)

Here’s what you do.

If you’re like me, and think best while talking, then get a transcription app — I use the free Otter AI app and love it — and dictate your thoughts. Now, Otter only supports English, but there are other apps for other languages. Just Google for it.

Or, if you think in bullet point lists, create a list with the points you’d like to make. Don’t worry about grammar or spelling, or organization. Just do a brain dump.

Then open your favorite AI app — I recommend Claude AI because it doesn’t use your info for training data — and follow these steps:

Cut-and-paste the following prompt:

I’d like to turn the following notes into an opinion column. The first thing I’d like you to do is read the notes and ask me questions. Is there anything that needs clarification or should be expanded on? Is there anything that doesn’t make sense? Are there any points that could use personal anecdotes or concrete examples? Thanks!

Then cut-and-paste your notes and hit the button to ask the question.

The AI should now ask you some follow-up questions. You can provide more information, or you can tell the AI to just skip that question, or ask the AI for what it would suggest.

Once you’re happy that everything has been pulled out of your head, you can go ahead and ask the AI to write the article.

Cut-and-paste the following prompt:

Please write a column based on my notes and our conversation. It should be in the first person, using Associated Press format, in a casual, blog writing style. Paragraphs should be short. Quotes should begin paragraphs. No conclusion needed. Use the inverted pyramid structure. Stick carefully just to the information that I provided.

Now it should provide you with a first draft of the column.

Now you can ask it to, say, rearrange sections, or add more information. And if it got anything wrong, tell it. Once you’re generally happy with how the column looks, ask the following questions:

  • Please review the story for accuracy. Are there any places where it contradicts the information I gave you?
  • Please review the story’s organization and structure. Is the order the best possible order for this topic? Is anything repeated? Are any significant points not given enough time?
  • Please review the story for writing style. It should be casual and conversational, written at a fifth-grade level, and paragraphs should be short. Are there any areas that can be simplified or rewritten to be more personal?
  • Please review the story for grammar. Remember it should use American spelling and grammar and Associated Press style.

Then say:

Please rewrite the article per your recommendations.

Take one last look at the result. After all, this is going to go out under your name. Make sure that the AI isn’t putting words in your mouth that you wouldn’t say!

Now copy the final results into a separate document and make any edits you want to make. For example, you might change some wording to be more like something you’d say.

Also, add any relevant links. For example, if you’re talking about your OpenSim grid, add a link to the grid.

Then email it to me — in the body of the email is fine — at maria@hypergridbusiness.com. If you have snapshots or illustrations that you want to use, just attach them as JPG or PNG files to the email.

If you’d like to have AI generate an illustration to go with your column, I recommend that you use Adobe Firefly. Adobe only uses fully-licensed images for its training data — no lawsuits from artists here! — and pays artists when their work is used. In fact, the first payments to artists went out last September.

Use the “Widescreen (16:9)” aspect ratio for at least one image that you submit to Hypergrid Business, since we use wide images for the featured images on our site. You can also upload a reference image to give Firefly an idea of the kind of style you’re going for, or select a particular art or photography style from the lists provided.

Of course, you don’t have to submit your column to us! You can post it on your own blog or social media. And you can use the same approach to write any other kind of content — just adjust the prompt to fit. You can use this approach to write emails or to write marketing copy for your website.

And yes, being polite helps. The AIs seem to return better results when you’re nice to them.
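And if you are comfortable with a little code, the first step of this cut-and-paste workflow can also be scripted against an AI provider's API. Here is a minimal sketch in Python using the official anthropic package, assuming an ANTHROPIC_API_KEY in your environment; the model name and the notes file are just examples, and you would still carry on the follow-up conversation yourself.

```python
# Send the brain-dump notes plus the first prompt from this article to Claude
# and print the AI's follow-up questions.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

notes = open("my_notes.txt").read()  # your dictated or bulleted brain dump

first_prompt = (
    "I'd like to turn the following notes into an opinion column. The first thing "
    "I'd like you to do is read the notes and ask me questions. Is there anything "
    "that needs clarification or should be expanded on? Is there anything that "
    "doesn't make sense? Are there any points that could use personal anecdotes "
    "or concrete examples? Thanks!\n\n" + notes
)

response = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # example model name; newer ones may exist
    max_tokens=2000,
    messages=[{"role": "user", "content": first_prompt}],
)

print(response.content[0].text)  # the AI's questions; answer them and keep going
```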

My keynote talk about ChatGPT
March 25, 2024 | https://www.hypergridbusiness.com/2024/03/my-keynote-talk-about-chatgpt/

So, I’m back home — after flying to Lima, Peru for an AI conference last week.

I was there to give a keynote speech about AI. The whole flight over I was worried that Nvidia or OpenAI or someone else would announce news of such major importance that I’d have to scrap my entire presentation and start over — fortunately, while there was, in fact, some news, I didn’t have to do any major revisions. No CEOs were fired. No lawsuits shut down all generative AI training. Whew.

Maria Korolov speaking before a sold-out audience at the Data & AI Summit in Lima, Peru on March 20. (Image courtesy Seminarium Peru.)

You can’t really see me in the photo above, so here’s a close-up:

Maria Korolov speaking before a sold-out audience at the Data & AI Summit in Lima, Peru on March 20. (Image courtesy Seminarium Peru.)

For those of you who only know me from Hypergrid Business, I’m actually a tech journalist during the day. I’ve been covering enterprise technology for over twenty years, ran a business news bureau in China for five, and have seen my fair share of major transformations. During the last few years I’ve been focusing on enterprise uses of artificial intelligence and, since late 2022, much of my reporting has focused on generative AI.

You can see all my stories, including recent highlights, at MariaKorolov.com.

I guess, overall, I must have talked to hundreds of people — corporate executives deploying the technology, analysts, consultants, experts, and, of course, vendors.

Anyway, I distilled everything I learned into a single one-hour presentation, flew down to Peru, and gave the talk. It was awesome. I love being in front of a live audience again after several years, now, of online webinars and virtual conferences.

Then, after I got home, I recorded a version of the same presentation for YouTube. It doesn’t have the call-backs to previous speakers, nor post-speech Q&A that the conference moderator did with me, nor people coming up to me afterwards to ask questions.

But that’s the great thing about an in-person conference — you get to experience all those things.

If you’re interested in checking it out, here it is:

And if you want to let me know how you’re using AI on your OpenSim grid — or about anything else happening in OpenSim, for that matter — drop me a line at maria@hypergridbusiness.com.

OpenAI head calls for slow, careful release of AI — after releasing ChatGPT with no warning
March 6, 2023 | https://www.hypergridbusiness.com/2023/03/openai-head-calls-for-slow-careful-release-of-ai-after-releasing-chatgpt-with-no-warning/
(Image by Maria Korolov via Midjourney.)

I can’t tell if he’s just being tone deaf, or trying desperately to do some damage control, but after releasing ChatGPT without any warning on an unsuspecting world late last year, OpenAI CEO Sam Altman is now calling for slow and careful release of AI.

If you remember, ChatGPT was released on November 30 of 2022, just in time for take-home exams and final papers. Everyone started using it. Not just to make homework easier, but to save time on their jobs — or to create phishing emails and computer viruses. It reached one million users in just five days. According to UBS analysts, 100 million people were using it by January, making it the fastest-growing consumer application in history.

And according to a February survey by Fishbowl, a work-oriented social network, 43 percent of professionals now use ChatGPT or similar tools at work, up from 27 percent a month prior. And when they do, 70 percent of them don’t tell their bosses.

Last week, OpenAI released an API for ChatGPT allowing developers to integrate it into their apps. Approval is automatic, and the cost is only a tenth of what OpenAI was charging for the previous versions of its GPT AI models.

So. Slow and careful, right?

According to Altman, the company’s mission is to create artificial general intelligence.

That means building AIs that are smarter than humans.

He admits that there are risks.

“AGI would also come with serious risk of misuse, drastic accidents, and societal disruption,” he said.

He forgot about the killer robots that will wipe us all out, but okay.

(Image by Maria Korolov via Midjourney.)

He says that AGI can’t be stopped. It’s coming, and there’s nothing we can do about it. But it’s all good, because the potential benefits are so great.

Still, he says that the rollout of progressively more powerful AIs should be slow.

“A gradual transition gives people, policymakers, and institutions time to understand what’s happening, personally experience the benefits and downsides of these systems, adapt our economy, and to put regulation in place,” he said.

Maybe he should have considered that before putting ChatGPT out there.

“We think it’s important that efforts like ours submit to independent audits before releasing new systems,” he added.

Again, I’m sure that there are plenty of high school teachers and college professors who would have appreciated a heads-up.

However, he also said that he’s in favor of open source AI projects.

He’s not the only one — there are plenty of competitors out there furiously trying to come up with an open source version of ChatGPT that companies and individuals can run on their own computers without fear of leaking information to OpenAI. Or without having to deal with all the safeguards that OpenAI has been trying to put in place to keep people from using ChatGPT maliciously.

The thing about open source is that, by definition, it’s not within anyone’s control. People can take the code, tweak it, do whatever they want with it.

“Successfully transitioning to a world with superintelligence is perhaps the most important—and hopeful, and scary—project in human history,” he said. “Success is far from guaranteed, and the stakes (boundless downside and boundless upside) will hopefully unite all of us.”

There is one part of the statement that I found particularly interesting, however. He said that OpenAI had a cap on shareholder returns and are governed by a non-profit, which means that, if needed, the company can cancel its equity obligations to shareholders “and sponsor the world’s most comprehensive UBI experiment.”

UBI — or universal basic income — would be something like getting your Social Security check early. Instead of having to adapt to the new world, learn new skills, find new meaningful work, you could retire to Florida and play shuffleboard. Assuming Florida is still above sea level. Or you could use the money to pursue your hobbies or your creative passions. As a journalist whose career is most definitely in the AI cross-hairs, color me curious.

You can read Altman’s entire think piece here.

Report: AI is weapon of ‘mass disruption’
Feb. 23, 2023 | https://www.hypergridbusiness.com/2023/02/report-ai-is-weapon-of-mass-disruption/
(Image by Maria Korolov via Midjourney.)

Generative AI is one of the world’s top three geopolitical risks this year — right after Russia and China — according to a report released last month by the Eurasia Group, a US-based risk consultancy.

“This year will be a tipping point for disruptive technology’s role in society,” the report said.

According to the organization, the generative AI technology that was all over the news in 2022 is capable of creating realistic images, videos, and text with just a few sentences of guidance.

“Large language models like GPT-3 and the soon-to-be-released GPT-4 will be able to reliably pass the Turing test,” the report said.

The most famous use of this large language model is in OpenAI’s ChatGPT, but the technology has also been licensed to many other vendors, and Microsoft has already begun adding it to Bing and announced plans to embed it in Office and other Microsoft applications.

The Turing test is an experiment in which a human interacts with another entity via a computer and has to guess whether the entity on the other side is another human or an AI.

Some users are already convinced that ChatGPT is either sentient or is actually manned by an army of humans in the Philippines. And a Google engineer was fired last summer because he became convinced that Google’s version of the technology, LaMDa, had become self-aware.

Now, these tools have become simple enough to use that anyone can harness the power of AI, the report said.

“These advances represent a step-change in AI’s potential to manipulate people and sow political chaos,” the report said. “When barriers to entry for creating content no longer exist, the volume of content rises exponentially, making it impossible for most citizens to reliably distinguish fact from fiction.”

This will have adverse impacts on political discourse. Conspiracy theorists can generate bot armies to support their views, as can dictators.

And companies can also be affected, since key executives can be impersonated by malicious actors, legitimate product reviews drowned in a sea of AI-generated comments, and social media posts can impact stock prices and overwhelm sentiment-driven investment strategies.

Implications for small business owners

If you are a small business owner, it’s time to create a strategy for responding to these threats.

In the OpenSim ecosystem, we’ve occasionally seen instances where individuals were impersonated by someone else in order to harm their reputations — or people created fake personas in order to promote a particular grid or service.

We can expect this kind of activity to accelerate as AI technology allows bad actors to operate on a much more massive scale than before.

At Hypergrid Business, we haven’t — yet — seen a flood of AI-generated comment spam. Hopefully, the Disqus platform we use for comments will be able to filter the worst of it out before we have to deal with it.

Grids that have a social media presence should start thinking about a possible strategy, or a reaction plan, in case something happens. It’s always better to come up with a plan ahead of time instead of reacting in the moment based on emotion, which will usually just make the situation worse.

But there are, of course, also opportunities for businesses to use generative AI for good.

OpenSim grids can use the technology to build AI-powered interactive NPCs that offer interesting interactions to visitors, use ChatGPT to create in-world storylines for users to experience, and use generative art platforms to produce textures, images, 3D objects, and even entire scenes.

Grids can also use AI to help create marketing and promotional content such as articles, videos, or podcast episodes.

Marketing is the single biggest challenge that OpenSim grids and service companies have today. If AI can reduce some of the burden, that will be a big win for the whole ecosystem.

AI farms are the new content farms — and are just as bad
Feb. 20, 2023 | https://www.hypergridbusiness.com/2023/02/ai-farms-are-the-new-content-farms-and-are-just-as-bad/

In the first decade of this century, unscrupulous website owners noticed that Google had caught on to their tricks — link exchanges, invisible keywords, and meta tags. These tricks were designed to fool search engines into thinking that the websites were better and more useful than they actually were. Spoiler alert: these websites were usually completely useless, just some filler text surrounded by a sea of ads.

After Google updated its algorithms, the scammers found a new trick — content farming. Content farms were companies, often based overseas, that hired armies of poorly paid freelance writers to rewrite existing web posts in slightly different words to create new articles. Then they hired an army of equally poorly-paid editors to clean up the resulting mess into something barely readable.

I get dozens of offers a day from content farms offering guest posts for Hypergrid Business.

I routinely mark them as spam and delete them. In fact, any email that has the words “SEO,” “100 percent unique,” and “Copyscape protected” is pretty much 100 percent guaranteed to go into the trash bin.

This kind of fake, filler, spammy content proliferated across the Internet. People didn’t usually stay on these pages for long. After all, there was nothing new to read there. But it was long enough for the publishers to get the revenue from the ads on the page and that was all that mattered.

Some legitimate business owners fell into this trap. They thought that in order to get traffic to their websites, they needed to game the system too, and they bought stuff from content farms to fill their pages with meaningless fluff.

For a little bit more money or effort, they could have actually created real content. Something useful to their customers, based on the company’s actual expertise and opinions.

The business owners and website editors that fell into the trap were kicked to the back of the line when search engines caught on — and then they had to work even harder just to get back to where they started.

Content farms are like crash diets. They seem to work at first, but at the end you wind up worse off than you were before, even heavier and lazier, looking for the next crash diet, the next quick fix, the next SEO magic wand.

Stop worrying about the scale. Exercise more. Eat healthier food. Okay, I know that doesn’t work for everybody — but crash diets don’t work for anybody at all.

AI is the new content farm

Now, I don’t have anything against AI.

I welcome our new robot overlords.

I even wrote an article recently about how to use AI to write press releases.

But what I was talking about in that article was taking a bunch of new and useful information and using AI to help you organize it into readable form.

Not everyone is a great copywriter. So if your company has an announcement to make, or a new service it’s offering, or something else they’re doing that benefits people, it can be very useful to have an AI to hammer it into a readable form.

That’s not what I’m talking about when I talk about AI content farms.

(Image by Maria Korolov via Midjourney.)

What I mean is those articles with titles like, “10 top ways to cook with chicken,” except now written by AI instead of desperate writers.

Then you click on the article and find out that it’s a retread of everything already written on the topic, without any new insights or any value. It’s exactly the same as the old content farm junk, but now written by AI, so it’s cheaper and faster.

These articles are easy to generate. You just ask ChatGPT, “Write me an article about ten top ways to use chicken” and it spits it out.

And everyone out there is doing exactly that, producing the same article in infinite variations.

Now there are companies built around this idea. In case you’re wondering who they are, I’m not going to help you. I’m not going to link to them here and give them free publicity. In fact, the reason I’m writing this article now is that one of those companies contacted me and invited me to their affiliate program and it annoyed me.

I’m not a fan of these companies. Not because they’re charging for something people can get for free from ChatGPT itself. No, they do provide some value. They make it easy and convenient to generate that filler article, and they stuff it full of keywords, and optimize the headline for maximum click bait, and analyze search terms in order to suggest the topics you should cover next.

It’s still the same generic, useless, repetitive content. Just an order of magnitude spammier.

The companies doing this proclaim that their posts can pass AI detection tools. That is completely irrelevant. These articles are still a waste of space.

Plus, Google doesn’t care if something was written by AI or not. Plenty of useful articles are written by AI, such as financial reports or sports write-ups or weather updates. The important thing, as far as Google is concerned, is that the content adds value.

So don’t fall for the quick fix trap of AI content farms. They won’t help your site in the long run. And, depending on how good Google’s algorithms are, they probably won’t do much in the short term, either. If you fall into the AI content farm trap, you’ll have to fight even harder to get back into search engines’ good graces.

Instead, focus on providing value. Feature your personal experiences, or your company’s expertise and knowledge of the subject. Provide brand-new information that hasn’t been seen anywhere else before. Find a fresh perspective.  I did an article about how to do this earlier this month: P.E.A.N.U.T.: 6 steps to staying ahead of AI when writing articles.

So if you’re an OpenSim grid owner or service provider or another company actually doing something and are looking to drive traffic to your website and are considering using AI to help you write some articles — go ahead and use AI. Or you could hire some of those freelance writers that are about to become unemployed.

Just don’t churn out more useless fluff.

And stay away from any vendors that promise “SEO content.” Please. Do us — and yourself — a favor.

And me? I’m going to go add the term “guaranteed to pass AI detection tests” to my spam filter.

No, you won’t get rich quick by cutting-and-pasting into ChatGPT
Feb. 15, 2023 | https://www.hypergridbusiness.com/2023/02/no-you-wont-get-rich-quick-by-cutting-and-pasting-into-chatgpt/
(Image by Maria Korolov via Midjourney.)

I watch a lot of YouTube videos about new AI technologies. It’s a great way to stay on top of what’s happening while, say, cooking dinner or working out. YouTube videos tend to be relatively short and very accessible — a great way to learn about a complicated new topic before you go on to delve into the long-form articles and research papers.

And in my searches for new videos about ChatGPT and Midjourney and other AI tools I kept coming across scammy get-rich-quick videos. The ones that say, “Make $5,000 a day with ChatGPT!” If you’ve ever been on YouTube, you know these guys. They used to pitch crypto and now they pitch AI, but it’s the same dumb thing all over again.

I watched a couple of them to see what they were on about, and the main idea seems to be to post on Fiverr or some other freelance platform offering copywriting or design services. Then whatever the client asks you to do, you just have ChatGPT or Midjourney do for you. A few minutes of cutting-and-pasting and — voila! — the money will just roll in!

The problem with AI-based get-rich-quick schemes

Obviously, this idea is stupid. Why would anyone pay you to do something they can do for free on ChatGPT’s free plan, or one of the many free Midjourney alternatives powered by Stable Diffusion?

Don’t worry, the hucksters say, there are people who don’t know about the free tools, or don’t want to learn how to use them, and would rather pay you money instead.

Umm… maybe…?

Then, the next question is, if all you’re doing is cutting and pasting, then what’s to stop someone from building a simple app that does exactly that — cutting-and-pasting from Fiverr customers into ChatGPT and sending back the results? They can flood the Fiverr marketplace with these accounts, and because they’re using automation, they can cut and paste a hundred queries in the time it takes you to do one.

And then they can take some of the money they make doing this and put it towards buying ads, providing good customer service, and so on. And they might even invest some of their profits into building an easy online app that does just that one thing that they’ve automated.

And if they can, they will. I’m sure people are out there doing just that right now as I type this.

That still means you have a window of opportunity if you act fast, the hucksters say. Jump on this trend right now, before the professionals move in, and you can make thousands just by cutting-and-pasting, no skill required!

The problem with this approach is that it takes time to build a business on Fiverr or any other freelancing platform. You have to do a lot of small jobs, for very low pay, in order to get clients who will leave good recommendations for you. Then, once you have built up a reputation, you can start raising your rates and investing in your marketing.

Meanwhile, the people who already have those businesses aren’t idiots. They’re noticing that ChatGPT is out there, and they’re using it and other tools either to speed up their own production or to improve the quality of their work — or both. And they already have a customer base, and they already know how to write, or how to design, and when they look at the results ChatGPT or Midjourney spits out, they know which ones are good and which ones aren’t, and they know how to tweak them to get them the rest of the way. Plus, they already know how to manage customer relationships and do marketing.

If your only skill is cutting-and-pasting, you’re not going to be able to compete against these professionals.

And that’s the point of most of these videos — you can get rich without having to do any hard work or learning any real skills.

What to do instead

If you want to use AI to get rich, then use it to improve the efficiency or quality of the stuff you are already good at doing, or to help you with the individual tasks that you are bad at. So, if you have a successful business, but you’re bad at writing letters to prospective customers, use ChatGPT to create drafts for you. If you’re bad at coming up with marketing ideas, use ChatGPT for that. If you’re bad at responding to complaints… you get the idea.

Use ChatGPT to build on your positives and reduce your negatives.

You can also use ChatGPT to learn new skills, either to level up in your current profession, or to move to another field.

So, yes, there are plenty of opportunities for people to do things with AI. You can use it to help you create content, to create images, to create music and videos. You can use it to create children’s books and comics, YouTube show scripts and almost anything else you can imagine. But you still have to build a business if you do any of those things. You will still need to find customers or viewers, you will still need to know the difference between good content and bad content, and you will have to figure out how to do things that other people can’t do.

In the virtual world space, for example, you can use AI to generate images, to generate text, and to generate code.

You can use the AI-generated images for textures, or for game maps, or for marketing materials.

You can use text to create games, to create scripts for in-world characters, or for marketing copy.

And you can use code for in-world scripts, for server-side applications, and for website plugins.

But, right now, none of these are “set and forget” types of applications. You will have to review the images, text, and code that the AIs generate and carefully select the ones you need. Sometimes you will spend hours, days, or weeks modifying the prompts to get what you want.

AI is not yet ready for real-time embedding into your virtual world because it’s too easy to get a chatbot to go off the rails. Instead, if, say, you want an in-world AI-powered character, use the AI to generate scripts, instead.

Or you could start a company to create AI-powered chatbots that are tightly constrained by the games where they live, can hold a convincing conversation while staying in character, but can’t be diverted by malicious users into spewing racist garbage or crazy conspiracy theories.
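For what it's worth, here is a very rough sketch, in Python, of one piece of that "tightly constrained" idea: a strict character prompt plus a crude server-side topic check that swaps in a canned line if the model's reply wanders off the game's subject matter. The persona, topics, and wording are all invented for illustration, and a real product would need far more robust guardrails.

```python
# Keep an in-world character on a short leash: a strict system prompt, plus a
# crude whitelist check on whatever the model sends back.
import re

ALLOWED_TOPICS = ("quest", "blacksmith", "tavern", "dragon", "map", "village")
FALLBACK = "Sorry, traveler, I only know about matters of this village."

SYSTEM_PROMPT = (
    "You are Brom, the village blacksmith in a fantasy role-playing region. "
    "Only discuss the village, its quests, and your trade. If asked about "
    "anything else, reply exactly: " + FALLBACK
)

def constrain(reply: str) -> str:
    """If the model's reply mentions no in-game topic, use the fallback line instead."""
    if any(re.search(rf"\b{topic}\b", reply, re.IGNORECASE) for topic in ALLOWED_TOPICS):
        return reply
    return FALLBACK

# The reply would normally come from a chat API call that uses SYSTEM_PROMPT.
print(constrain("The dragon's lair is marked on the old map behind the tavern."))
print(constrain("Here is my opinion on current politics..."))
```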

ChatGPT can teach you LSL and even write code for you — kinda
Feb. 13, 2023 | https://www.hypergridbusiness.com/2023/02/chatgpt-can-teach-you-lsl-and-even-write-code-for-you-kinda/

When I first tried out ChatGPT late last year, I noticed that many people were using it to write code. Python. Javascript. Even machine language. So of course I asked it if it knew LSL, the Linden Scripting Language used in Second Life and OpenSim.

It said no. I was disappointed but not too surprised. After all, LSL isn’t one of the big languages. It probably doesn’t have enough training data.

Today, for some random reason, I decided to try again, and it not only knew LSL, but could explain how it worked. Maybe I worded the question wrong last time?

It can even talk about the differences between LSL and OSSL, the OpenSim Scripting Language, and write OSSL-specific scripts.

Warning: It’s not perfect

We’re still in the beta release of ChatGPT.

It makes mistakes.

So test all code carefully. However, if there’s an error, you can ask ChatGPT to fix it.

And if you don’t understand why it did something, or prefer it took a different approach, you can tell it that, too.

The company behind ChatGPT, OpenAI, is working on improving its accuracy — and the billions just invested by Microsoft will definitely help.

From what I’m hearing, developers are finding that ChatGPT can already significantly speed up their workflow, but that it doesn’t completely replace them. Yet.

When I asked it to recreate the Very Simple Greeting Script by Jester Knox, it took quite a bit of prompting. On the first try, it forgot a key step. When I pointed that out, ChatGPT apologized and rewrote the script with the step added. Then I asked it to use different commands than it had chosen.

All in all, it took it four tries to get to the exact script I was looking for.

My verdict?

Right now, ChatGPT is like a very junior programmer who still makes a lot of mistakes, is supremely confident in their abilities, but is at least willing to admit when they’re wrong.

I can definitely see the potential here for creating simple scripts, especially if I need to tweak those scripts a lot.

But the big deal here, of course, isn’t in what it can do right now, but in what it will be able to do tomorrow.

“Today, the scale of the largest AI computations is doubling every six months, far outpacing Moore’s Law,” Google CEO Sundar Pichai said in a post last week.

So if a big part of your day-to-day workflow involves writing LSL or OSSL scripts, it might be time to look for ways in which ChatGPT can speed things up. For example, you can ask it to add comments to existing code. You can ask it to outline a coding project. You can ask it to write documentation, or to create slightly different versions of scripts. If you don’t understand how something works, you can ask it to explain it to you.

You can even paste in an existing script — maybe something from the Outworldz free scripts collection — and ask ChatGPT what the script does, then ask it to suggest some ways to use this script in an OpenSim grid.
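If you would rather work through the API than the chat window, here is a minimal sketch of that last idea in Python, using the official openai package (v1 or later) with an OPENAI_API_KEY in the environment. The file name and model name are just examples, and the prompt wording is mine, not an official recipe.

```python
# Paste an existing LSL script into a prompt and ask the model to explain it,
# comment it, and suggest uses for it on an OpenSim grid.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

lsl_script = open("greeter.lsl").read()  # e.g. a script saved from a free collection

prompt = (
    "Here is an LSL script from an OpenSim region. Explain what it does, add a "
    "comment to each section, and suggest two ways a grid owner might use it:\n\n"
    + lsl_script
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # example model name; any capable chat model will do
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```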

P.E.A.N.U.T.: 6 steps to staying ahead of AI when writing articles
Feb. 10, 2023 | https://www.hypergridbusiness.com/2023/02/p-e-a-n-u-t-6-steps-to-staying-ahead-of-ai-when-writing-articles/
(Graphic by Maria Korolov via Canva and Midjourney.)

I get dozens of emails every day from people who want to contribute articles to Hypergrid Business. Sometimes, they even want to pay me to run the articles. The catch? There’s a link somewhere in the article to something the author wants to promote.

I’m not, in principle, opposed to running sponsored content. We’d just put a “sponsored” tag on it and use the money to pay for things that people don’t want to write for free. For example, I’d love to be able to afford to hire a freelancer to go to OpenSim concerts and other events and write about them. Wouldn’t that be a fun job for someone?

But sometimes the article topics are good enough that I’d run them for free. For example, a VR company might have one of their experts write an article about the state of VR in some industry. That could be useful, even if it does promote that VR company.

Unfortunately, 99 percent of the articles that people suggest are worthless.

Here’s how I think they are created:

  • A content farmer gets an assignment, such as “Benefits of VR in medicine”
  • They Google the topic
  • They cut-and-paste the information they get from the Google search
  • They rewrite the results just enough to pass a plagiarism check and throw in some search engine-friendly keywords
  • They send me the article

Why would anyone want to read this? Everyone has Google. They can just Google the topic themselves. There’s no new information in this article. It just fills up space and lowers the value of the website to readers. As a rule of thumb, whenever I get an email pitch that doesn’t explain who the author is, that guarantees that the article is “Copyscape-proof,” or that ever uses the word “SEO,” I mark it as spam and hit the delete button. This stuff is garbage and just wastes my time.

Now, AI is going to make the problem even worse because the work flow will be dramatically accelerated:

  • A content farmer gets an assignment, such as “Benefits of VR in medicine”
  • They ask ChatGPT to write an article on the topic
  • They send me the article

Again, ChatGPT is free. Anyone can just ask ChatGPT themselves the same question. There’s no new information in the article it produces. Even if search engines don’t decide that all AI-generated content is spam, they’ll still down-rank it because it has no new information.

Google released guidance this Wednesday, in fact, about how they decide whether AI-generated content — or any content, for that matter — will now be ranked by their search engines.

I’ve added a couple more criteria to their list, based on what Hypergrid Business is looking for and created my own acronym — PEANUT — which stands for Personal, Emotional, Authoritative, Novel, Unique, and Trustworthy.

The more of these things you have in your article, the higher the chances that I’ll run it. And that applies to both sponsored and non-sponsored posts.

Oh, and if you’re a fiction author, I’ve written an article explaining how the PEANUT principles apply to fiction writing over at our sister publication, the sci-fi and fantasy magazine MetaStellar.

Here are each of those six PEANUT factors, in more depth.

(Image by Maria Korolov via Midjourney.)

P is for Personal experience

The person writing the article should talk about their own history with the topic.

For example, if you’re writing an article about AI in medicine, you might have been treated by a doctor who used AI to help read their scans. Or a relative might have been cured of cancer based in part on a treatment created with the help of AI.

If you’re writing about virtual worlds, you might talk about your own history with the platform.

An AI doesn’t have personal experience with anything, because it’s not a person.

If you don’t have personal experience with a topic, you can get some. You can ask your relatives if any of them have been treated for diseases with the help of AI. Or you might search your memory — did you get a COVID vaccine that was created with the help of AI?

Or, for the virtual world article, you can go and log into a virtual world and try it out.

(Image by Maria Korolov via Midjourney.)

E is for Emotional connection

Why does this topic mean so much to you? Was the relative cured of cancer particularly important to you? Write about that connection.

Did your experience in virtual worlds inspire you in some way? Did it make you feel things you hadn’t felt before? Were you able to do something emotionally meaningful in a virtual world? Write about that.

AIs don’t have emotional connections because they don’t have emotions.

By explaining what the topic means to you, the emotional weight it carries, you help the reader make an emotional connection as well. And that makes for a better article.

A is for Authority

Why are you the one writing on this topic? What makes you an expert?

Really. I want to know. What makes you an expert? Put your resume highlights up top in the story. If you’re the founder of a VR company, say so right up front. Your opinion matters because you know what you’re talking about.

In particular, can you offer some advice or insight that’s better than what an AI can provide, because you have deep expertise?

An AI might recommend a list of things to do, for example, but only you might know what actually works and what doesn’t, based on your experience with customers.

An AI will just repeat all the same advice that’s already out there. It won’t know that some of the advice is worthless or outdated.

(Image by Maria Korolov via Midjourney.)

N is for Novel

Do you have any new information in the article that just came out and isn’t yet available to the general public?

For example, maybe you just went to a conference, talked to a bunch of experts, and learned some cool stuff that most people don’t know yet.

Or maybe you were experimenting with some virtual world tools, and figured out a new hack.

Or maybe you got a press release from a company that is still under embargo, so nobody has seen it yet. If we publish your article right when the embargo lifts, we can be among the first publications that publishes this news.

Or maybe you conducted some research or ran a survey and have new results to share.

If you don’t have anything new to say, find it. You can talk to experts, for example, and see if any of them have a new angle on the topic. Ask them what’s happening that most people don’t know about yet, or about their predictions for what will happen next.

(Image by Maria Korolov via Midjourney.)

U is for Unique

I don’t want to run an article that’s the same as a thousand other articles out there. What makes your article different?

Do you have a different background or point of view? Do you have a particular set of qualifications that makes you uniquely qualified to write about the topic?

When you tried out the technology, did you get different results than everyone else? When you ran the survey, were the results unexpected? When you talked to the expert, did they make a prediction that was surprising, but, when you think about it, makes sense after all?

Or maybe you have an exclusive. That means that something happened and you’re the only one who has this information. Maybe you were the only one at the event when it happened, or a source agreed to only talk to you.

Exclusives are good stuff. Publications love exclusives. Sure, they don’t last long, as everyone else jumps on them soon afterwards, but, for a few days — or a few hours — we’d be the only ones with the news. Score!

(Image by Maria Korolov via Midjourney.)

T is for Trust

Why should we trust your opinion?

If you’re the founder of a company, or the owner of an OpenSim grid, of course you’re going to tell us that your stuff is the best.

If you offer a particular product or service, then of course you’re going to tell us that everyone needs that thing, and it solves all your problems.

Maybe the article you’re writing has nothing to do with selling something. For example, if you’re an OpenSim grid owner and you’ve surveyed your users about, say, how tall their avatars are, then that’s potentially useful information to people who create avatars — and isn’t overly promotional.

If, however, the survey is about why people love your grid, then yeah, we probably won’t run it. I mean, can you imagine that survey? “Do you love our grid more because of the community or because of the great support you get?” Really? Those are the only options? Yeah, nobody’s going to buy that.

You don’t have to have all six PEANUTs for every single article, but the more you have, the higher the odds that I’ll run it.

Let’s look at how this article you’re reading now scores on the PEANUT scale:

  • Personal: Yup, I’ve got personal experience accepting articles for publication.
  • Emotional: Well, I am getting a little annoyed about having to go through all those spammy emails.
  • Authority: Yes, I’m an authority on this subject. I’ve been editing Hypergrid Business since 2009, and, before that, I was a business news bureau chief in China for five years. Not to mention all my other years of experience as a journalist and editor.
  • Novel: As far as I can tell, my PEANUT acronym is completely new in this context — it also means “phase-inverted echo-amplitude detected nutation” in the context of peanut allergies, but I don’t think that counts. Plus, Google’s post was only published on Wednesday, so not many people have weighed in with their advice yet. Google’s own acronym is EEAT and, personally, I think mine is catchier and more comprehensive.
  • Unique: I am the only person able to comment on how to get stories published in Hypergrid Business because I’m literally the editor of this site, and my word is final.
  • Trust: Why would I lie about this? I want people to submit good articles so that I have good stuff to run on the site.

Well, look at that. Six for six. I’m hitting the “Publish” button now.

 


Watch Maria discuss the PEANUT principles in the video below:

]]>
https://www.hypergridbusiness.com/2023/02/p-e-a-n-u-t-6-steps-to-staying-ahead-of-ai-when-writing-articles/feed/ 0
AI for the little guy https://www.hypergridbusiness.com/2023/02/ai-for-the-little-guy/?utm_source=rss&utm_medium=rss&utm_campaign=ai-for-the-little-guy https://www.hypergridbusiness.com/2023/02/ai-for-the-little-guy/#respond Thu, 09 Feb 2023 18:02:46 +0000 https://www.hypergridbusiness.com/?p=77532 At Hypergrid Business, we’ve been covering OpenSim and other desktop-based virtual environments since 2009, and began covering virtual reality extensively in 2014.

Our goal has been to promote an open-source, distributed metaverse, and to help the little guy and gal navigate that landscape. Specifically, the people running OpenSim-based virtual worlds — because I think that OpenSim is a great role model of how to create an interconnected, open-source metaverse.

Today, I believe we’ve entered a new era for small business. Well, not today today. More like, on Nov. 30, 2022, when ChatGPT was released.

People worry that AI will take their jobs, and will gut the creative industries. That will probably happen. And there are lawsuits being filed right and left, government regulators trying to figure out how to prevent the worst consequences, and columnists opining about all of this at length. The criticisms are fair, and do need to be addressed. I don’t disagree with that.

(Image by Maria Korolov via Midjourney.)

But there’s a flip side to the technology.

AI can be a power multiplier. Like what happens when you replace a stone axe with a power saw. (I’ve been watching a lot of Primitive Technology videos lately.)

In the hands of the big players, it can be devastating. But the big guys tend to be slow to adopt new technologies because they have a lot of institutional inertia and can’t turn on a dime.

In the hands of the little guys, AI can be extremely powerful. Any disruption is an opportunity for a small company to seize market share and grow big, and I believe that AI is going to be a bigger disruption than any we’ve seen before.

(Image by Maria Korolov via Midjourney.)

“Today, the scale of the largest AI computations is doubling every six months, far outpacing Moore’s Law,” Google CEO Sundar Pichai said in a post on Monday.

Moore’s Law is about computing power doubling roughly every two years.

Now imagine it doubling every six months.
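To put that difference in perspective, here is a quick back-of-the-envelope calculation in Python. The four-year time horizon is my own illustration, not a figure from Pichai’s post:

# Compare growth under two doubling periods over the same four-year span.
years = 4
moores_law_growth = 2 ** (years / 2)    # doubling every two years
ai_compute_growth = 2 ** (years / 0.5)  # doubling every six months

print(f"Moore's Law pace over {years} years: {moores_law_growth:.0f}x")   # 4x
print(f"AI compute pace over {years} years: {ai_compute_growth:.0f}x")    # 256x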

Just think of how much change we’ve seen in the past couple of decades — the Internet, the World Wide Web, smartphones. This year, global smartphone penetration is expected to hit 89 percent. Pretty much everyone on the planet, no matter where they live, either has a smartphone or knows someone who does, and can access all of the world’s information.

For those without deep pockets, the Android operating system is free, meaning that a couple of years ago I bought my grandmother a smartphone at Walmart for under $30 that did nearly everything my $800 phone did and had a bigger screen to boot — in emerging countries, the phones are even cheaper.

Web browsers are free. Web translation is free, for more than 100 languages. Online maps and turn-by-turn directions are free. Weather reports are free. The world’s biggest and most up-to-date encyclopedia is free. You can take online programming courses and earn certificates, for free.

Access to the Internet has put immense power in the hands of individuals and small businesses around the planet. The world is very different from what it was at the turn of the century.

All that knowledge and communication has changed societies — and accelerated scientific progress. We’re editing our own DNA. We’ve achieved net-energy-gain fusion ignition and built working quantum computers. We’re flying drone helicopters on Mars. We’re planning a permanent base on the Moon and building space industry infrastructure.

I am extremely excited to be alive today. It’s scary. All the stuff we’re doing can go wrong very easily, AI included. But there are also immense opportunities.

Sure, not every technology is a slam dunk. There are plenty of platforms out there desperately searching for use cases including, I’m sorry to say, virtual worlds.

AI isn’t in that category. AI is all use cases, all the time.

It’s already being deployed as a force multiplier in every single industry, and is the top priority for pretty much every CEO of every major company on the planet. Until last fall, regular people only got to use AI when it was embedded into day-to-day tools, like search engines or recommendation apps. Big companies, on the other hand, could use AI directly, because they have teams of data scientists on staff.

ChatGPT changed that. On Nov. 30, it made direct access to AI easy and free for people.

ChatGPT grew to 1 million users within five days of its release. In January, it hit 100 million users, according to UBS.

“In 20 years following the Internet space, we cannot recall a faster ramp in a consumer Internet app,” UBS analysts said in a research note.

According to VR visionary John Carmack, AI will soon be able to simulate the human brain — probably within this decade.

ChatGPT and other generative AI, like image generators and voice and video generators, have pushed the boundaries of what AI can do into an area that many experts I talked to a couple of years ago said was impossible with current technology. AIs with common sense were still decades away, they said.

(Image by Maria Korolov via Midjourney.)

Turns out, common sense and creativity are emergent abilities when AI systems get big enough — two of the 137 documented emergent abilities of large language models like ChatGPT. That means that ChatGPT can do things that can’t be explained by the fact that it is “auto-complete on steroids” or a “stochastic parrot.” Yes, it looks for correlations and uses statistics to predict what to say next. But at a certain point, with enough training data, and enough parameters, weird stuff starts to happen. Like, for example, ChatGPT can pretend to be someone else and answer questions like they would — even if that particular scenario has never been seen before.

That’s the reason why ChatGPT is taking off, while previous AI-powered chatbots didn’t. It’s reached a point where it is actually usable. And people are using it. They’re using it for a million minor tasks, from creating recipes out of the food in their refrigerators, to writing code, to learning complex topics. Yes, it makes mistakes. Yes, it’s bad at math. No, it can’t access the Web. But OpenAI is working on addressing those issues, and so are all of its competitors.

ChatGPT already has a million use cases.

And it’s ridiculously easy to use. Just go to chat.openai.com and ask it questions. But just because it’s easy to use doesn’t mean that it’s easy to use it well. It’s like playing a keyboard. Yes, pressing keys and getting pleasant sounds is easy, much easier than, say, trying to do the same on a violin. But getting it to play what you want — well, that’s the trick, isn’t it?

So how do you use AI effectively as a tool? How do you surf the new wave of technology, instead of being drowned by it? That’s the question I’m going to try to answer.

During the day, I write about AI for major tech publications, like CIO magazine. I cover how large companies are taking advantage of AI to improve efficiency, find new business models, and better connect with customers.

At Hypergrid Business, I plan to do the same, but from the viewpoint of individual users and small businesses. Small businesses like OpenSim grids.

AI and OpenSim

So, what that means is that we’re going to be increasing our AI coverage here at Hypergrid Business.

I’ve already been writing about AI on this blog. One of my most popular recent articles is about how to use ChatGPT to write a press release. I used an imaginary OpenSim grid as my example, but I could have used a real OpenSim grid instead. More than a thousand people have read that article, many of whom were hearing about OpenSim for the first time.

Does your OpenSim grid, or OpenSim-related product or service, use AI?

Maybe you have an AI-powered NPC on your grid. Or use AI to help design your website or create content. Do you have an AI-powered game in your world? Do you have AI-powered tech support for your grid or OpenSim hosting company?

(Image by Maria Korolov via Midjourney and Playground AI.)

Let me know, and I’ll write an article about it.

There’s a huge appetite for AI news and use cases out there right now. Tell me about your AI projects and you can get a lot of attention for what you’re working on.

This is also a great opportunity for someone to break into technology journalism. If you’re interested in covering AI — or covering OpenSim, of course — email me. I’ll teach you how to write in Associated Press style, connect you with sources, and help you build your professional portfolio.

Email me at maria@hypergridbusiness.com with story ideas — or to find out how to get started as a technology journalist.

]]>
https://www.hypergridbusiness.com/2023/02/ai-for-the-little-guy/feed/ 0
OpenSim in review: 14 years of coverage and what’s next https://www.hypergridbusiness.com/2023/02/opensim-in-review-14-years-of-coverage-and-whats-next/?utm_source=rss&utm_medium=rss&utm_campaign=opensim-in-review-14-years-of-coverage-and-whats-next https://www.hypergridbusiness.com/2023/02/opensim-in-review-14-years-of-coverage-and-whats-next/#respond Wed, 08 Feb 2023 16:23:56 +0000 https://www.hypergridbusiness.com/?p=77526
(Image by Maria Korolov via Midjourney.)

Hi, everyone. This is me, Maria, your friendly local technology reporter and blog editor. You might know me from such things as my annual OpenSim stats presentations at the OpenSimulator Community Conference, my various attempts to run an OpenSim grid on my home computer, or the fact that, since 2009, I’ve written over 2,200 articles about OpenSim and the metaverse — and edited more than 1,000 others — for this very site you’re reading now.

The reason that I started Hypergrid Business was because I could see the future, and the future was the metaverse. An open-source, distributed, decentralized metaverse. You know, like the OpenSim hypergrid. And I thought it was the coolest thing ever, and nobody was writing about it. So I would.

I still believe in the future of an open-source, distributed, decentralized metaverse. And I still think that, of all the technologies out there today, the OpenSim hypergrid still comes the closest to that vision. I believe that the metaverse will change everything. It will change the way we socialize, the way we learn, the way we work. But it might not happen quite as soon as I thought it would.

The metaverse is still a technology solution searching for a problem to solve. Mostly, that’s because the technology isn’t quite there yet. The headsets make people nauseous. Zoom is good enough for most virtual meetings. The learning curve for virtual world browsers is too high for casual users. And the killer use cases just aren’t there yet.

The only solid non-gaming applications I’ve seen for it so far — the ones worth the money and the learning curve — are when it comes to high-end product design. Really high-end. I’m talking ships and cars and buildings, where distributed teams from all over the world can do virtual tours of 3D mockups and talk about design decisions, safety issues, production hurdles, and marketing plans. These are use cases where a single immersive meeting can save a company tens of thousands of dollars in travel costs and thousands — or even millions — of dollars in prototype costs. Plus, the meetings are typically short enough and don’t usually require too much walking around, meaning that people don’t get quite as sick.

But that’s about it. That feeling of presence — of sharing a location with other people — is magical. But it doesn’t make up for all the shortcomings of either desktop-based virtual worlds, or 3D virtual environments, for most non-gaming applications. So yes, desktop-based immersive virtual worlds — also known as first-person shooters — have taken over when it comes to games. Yes, Minecraft is super popular. But it hasn’t really translated yet to non-gaming applications on any significant scale.

We’re still waiting for killer apps. We might be waiting for a while. I think things will start to change when we have a decent interface or better VR hardware. I know the Firestorm team is awesome, but they’re constrained by having to support Second Life, and the OpenSim community is too small to support its own viewer. I’m worried that Second Life has poisoned the well when it comes to general-purpose, user-built virtual environments because the company did nothing to capitalize on its early media hype.

Interest over time in the search term “Second Life.” (Chart courtesy Google Trends.)

The media — myself included — got excited about the potential of Second Life. It became a plot point in TV shows and made the cover of Businessweek. I guess Linden Lab thought that they’d keep getting free publicity forever, and didn’t bother to invest in any real marketing, user interface design, or anything else that would make it a viable product in the long term.

Second Life’s concurrency statistics have mirrored general interest. Over time, according to data from Grid Survey, concurrency numbers have trended down.

Second Life median daily concurrency, 2009-2023. (Data via Grid Survey.)

The one notable exception to the downward trend was in early 2020, when the pandemic first hit and people were stuck inside and usage jumped for the first time in a decade — but then started trending down again.

Land area, Second Life’s main revenue source, is also down. From a peak of 31,988 regions in 2010, total land area is now down to 27,630 regions, only two-thirds of which are privately owned and a fifth of which is abandoned land.

OpenSim numbers have moved in the opposite direction — both land area and active users have trended up over the years.

Total OpenSim active monthly users. (Hypergrid Business data.)

In the early days of OpenSim, as you can see in the chart above, a substantial portion of the user base was on non-hypergrid worlds, mainly InWorldz and Avination. Those two grids shut down and, since then, nearly all the user growth has been on the hypergrid.

In land area, the difference is even more dramatic, since hypergrid-enabled worlds tend to offer lower-cost land, and some even offer free land, if users are able to run the regions on their home computers.

Total OpenSim land area, in standard region equivalents. (Hypergrid Business data.)

Of course, it’s hard to compare OpenSim directly to Second Life.

First of all, Second Life no longer publishes active monthly user numbers. The most recent numbers are from 2010, when Second Life reported more than 1 million active monthly users. However, historically, Second Life monthly active users have averaged about 19.8 times median daily concurrency. That translates to between 700,000 and 850,000 active monthly users this year.
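For anyone who wants to check my math, here is the arithmetic behind that estimate in Python. The concurrency figures below are assumptions for illustration; plug in the current Grid Survey numbers to get the real range:

# Estimate monthly active users from median daily concurrency,
# using the historical ratio of roughly 19.8 mentioned above.
RATIO = 19.8

# Assumed concurrency range -- substitute current Grid Survey data here.
for concurrency in (35_000, 43_000):
    print(f"{concurrency:,} concurrent users -> roughly {concurrency * RATIO:,.0f} monthly actives")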

By comparison, OpenSim’s user base is a drop in the bucket.

On the flip side, OpenSim’s land area is more than three times that of Second Life, mostly due to the availability of free and extremely low-cost regions. In fact, the average cost of a standard sized region in OpenSim is less than $13 a month.

What I suspect is happening is that people who don’t need Second Life’s large communities are coming over to OpenSim to get better deals on land and more control over their virtual environments.

You can get a free region on OSgrid by running it on your own computer, or set up a free OpenSim grid of your own using the DreamGrid installer, or have a hosting company set up a grid for you, or rent land from an existing grid. As a region owner, you typically get the ability to save a copy of the entire region in order to keep a backup, or to share or sell to others. As a grid owner, you can restrict who can visit your grid, you can rent out land, and you can run your own currency.

For builders in particular, having an OpenSim grid or region is a must — even if they spend the bulk of their time in Second Life. Building in OpenSim means that you keep the original copies of all your builds in a safe place, and can work with a team on the builds. Building in Second Life is riskier, especially for commercial builds created by large teams, due to the lack of good backups and ownership issues.

Piracy and copyright infringement aren’t as big a deal as you might expect. Most content theft happens in Second Life because that’s where most of the content is. And most stolen content is also distributed in Second Life, because that’s where most of the users are. In OpenSim, however, grids have the power to ban griefers and other users who infringe on copyright, and close off hypergrid teleports to grids that don’t follow the law. Plus, the largest platform for content sales — the Kitely Market — has a strong process in place to remove infringing content, with a “report product” button on every individual listing.

The bigger problem is the lack of user base.

For individual users, some find OpenSim grids to be cozier and friendlier than the Second Life environment.

But I suspect that one of the biggest sources of OpenSim users is large role play groups and communities. By coming to OpenSim, these groups are able to set up their own continents at a low cost, or even launch their own grids. They can have more land and more control for substantially less money than they would spend in Second Life.

Another source of OpenSim users is schools, which benefit from having closed grids where they can tightly control the content and the people who can access them. However, since these grids are private and generally run behind school firewalls, they do not show up in OpenSim statistics, which are based on public grids.

With the exception of school environments, I personally haven’t met any users who came to OpenSim directly without any experience in Second Life first. That means that OpenSim is fed by the Second Life diaspora, and is intrinsically limited not just by Second Life’s technology but also by its dwindling user base.

There’s still a lot of room for growth — we can easily increase OpenSim’s users by a factor of ten or more, just by better marketing OpenSim’s cost advantage.

But, to get beyond that, OpenSim has to be able to market itself to the wider world.

As part of that effort, a few years ago, I began covering wider trends in virtual reality. The idea is that people interested in virtual reality are also potentially interested in virtual worlds, and might take a look at OpenSim once they come to the site and find out that it exists. I also offer free ads to everyone in the OpenSim community. Whether you sell stuff on the Kitely Market, have a grid where people can rent land, or run virtual events, or offer building or consulting services, you can get a free ad on Hypergrid Business.

This year, we will be expanding our coverage of AI. There’s a chance that some of the people interested in AI are also interested in virtual environments, and will take a look at OpenSim while they’re here.

(Image by Maria Korolov via Midjourney.)

If you are looking to promote OpenSim to the wider audience, I recommend jumping on the AI bandwagon while interest is high.

For example, you could contribute articles about how to build an AI-powered non-player character in OpenSim. Or how to use AI to generate virtual environments in OpenSim. Or how to use AI to generate scripts.

Or, if your grid already has AI-powered NPCs, you can send me a press release about how to visit your grid and interact with these bots. Which reminds me. Here’s how you can use ChatGPT to write a press release.
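If you would rather script that kind of thing instead of typing into the ChatGPT web interface, here is a minimal sketch using OpenAI’s official Python SDK. The model name, grid name, and prompt wording are placeholders of my own, not anything from that how-to:

# Sketch: draft a press release for a hypothetical OpenSim grid via the OpenAI API.
# Requires `pip install openai` and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Write a one-page press release in AP style announcing that Example Grid, "
    "a hypothetical OpenSim world, has launched AI-powered NPC tour guides. "
    "Include a headline, dateline, a quote from the grid owner, and boilerplate."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder -- use whichever model you have access to
    messages=[
        {"role": "system", "content": "You are a press officer for a virtual world company."},
        {"role": "user", "content": prompt},
    ],
)

print(response.choices[0].message.content)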

Also, contact me if you want to become a technology reporter, or columnist, and want to write about AI. Or if you want to write about events happening in OpenSim, or how-to-guides about getting started. I’m also going to be launching a YouTube channel, if anyone wants to be a guest or co-host.

My email is maria@hypergridbusiness.com. Feel free to get in touch.

]]>
https://www.hypergridbusiness.com/2023/02/opensim-in-review-14-years-of-coverage-and-whats-next/feed/ 0
AI war breaks out between tech giants https://www.hypergridbusiness.com/2023/02/ai-war-breaks-out-between-tech-giants/?utm_source=rss&utm_medium=rss&utm_campaign=ai-war-breaks-out-between-tech-giants https://www.hypergridbusiness.com/2023/02/ai-war-breaks-out-between-tech-giants/#respond Tue, 07 Feb 2023 18:39:22 +0000 https://www.hypergridbusiness.com/?p=77523
(Image by Maria Korolov via Midjourney.)

After ChatGPT was released on Nov. 30, 2022, the world changed. Whatever you might personally think about AI, the events of last year showed that AI was capable of human-level creativity in art, music, writing, and coding. And, for the first time, AI demonstrated common sense. Or, at least, something close enough to common sense for all practical purposes.

Companies like Google that had been sitting on their AI projects for years, unwilling to do any damage to their existing business models, are having to rethink their plans.

Google announced a Code Red and brought cofounders Sergey Brin and Larry Page back from retirement. It has also invested $400 million in OpenAI rival Anthropic, which has its own version of ChatGPT called Claude. Claude is still in closed beta, but early users say it’s better.

Apple is holding an AI summit for employees next week, the company’s first live and in-person event in years.

Microsoft takes the lead

“The AI race starts today,” said Microsoft CEO Satya Nadella at a press conference today.

The company announced that it’s integrating AI chat into the Bing search engine and its Edge browser — after it invested a reported $10 billion into ChatGPT maker OpenAI last month. The company has also previously announced plans to integrate AI throughout its entire product portfolio.

Adding AI chat to Bing, however, is a direct shot across Google’s bow.

“We can improve the way billions of people use the Internet,” said Yusuf Mehdi, Microsoft’s consumer chief marketing officer, in today’s presentation.

As of the end of 2022, Bing only had a 9 percent share of the search engine market. Google had 85 percent, and the rest was split between Yahoo, Baidu, Yandex, DuckDuckGo and other competitors, all of whom were in the low single digits.

So Bing has a lot of opportunity for improvement.

And speaking of Baidu, a Chinese search engine, it also plans to launch its own AI chatbot, called Ernie Bot. According to CNN, it’s expected to go live in March and is currently being tested internally.

]]>
https://www.hypergridbusiness.com/2023/02/ai-war-breaks-out-between-tech-giants/feed/ 0
Is this the end of the metaverse? https://www.hypergridbusiness.com/2023/01/is-this-the-end-of-the-metaverse/?utm_source=rss&utm_medium=rss&utm_campaign=is-this-the-end-of-the-metaverse https://www.hypergridbusiness.com/2023/01/is-this-the-end-of-the-metaverse/#respond Wed, 25 Jan 2023 19:22:20 +0000 https://www.hypergridbusiness.com/?p=77442
(Image by Maria Korolov via Midjourney.)

Last Friday, AltspaceVR announced that will be shutting down its virtual environment in March.

AltspaceVR was a consumer-focused immersive space that could be accessed either via the desktop or through a virtual reality headset. Since its launch in 2016, the platform has hosted big names and quickly became the go-to platform for virtual concerts, stand-up specials, and other metaverse happenings. It was the platform I recommended to people looking to socialize in virtual reality. For me, its biggest downside was that you couldn’t really use it to build your own world. Plus, it was a proprietary system, owned by Microsoft, and didn’t play well with other metaverse platforms.

In its blog post, the company said that it will be focusing on other metaverse-related efforts, such as the launch of Microsoft Mesh, an enterprise-focused metaverse platform.

Except.

In an SEC filing last Wednesday, Microsoft said it will lay off 10,000 employees. According to media reports, the layoffs include its entire Mixed Reality Toolkit group and the AltspaceVR team, effectively killing Microsoft’s own HoloLens project.

It shouldn’t come as too much of a surprise — nobody’s been able to make any money with the metaverse yet.

In fact, just last week, Congress denied the US Army’s request for $400 million to buy more HoloLens headsets. The reason? More than 80 percent of soldiers who tested the headset reported physical discomfort, including “mission-affecting physical impairments” such as headaches, eyestrain, and nausea.

In a report last spring, Department of Defense’s Office of the Inspector General warned that buying augmented reality systems without user acceptance “could result in wasting up to $21.88 billion in taxpayer funds to field a system that soldiers may not want to use or use as intended.”

Instead of continuing to invest heavily in AR, VR and the metaverse, Microsoft seems to be going all-in on artificial intelligence.

The rise of AI

On Monday, Microsoft announced a “multi billion dollar” investment in OpenAI, the company behind ChatGPT and Dall-E 2. Microsoft didn’t give the exact number, but news reports from the New York Times and Bloomberg have put it at $10 billion.

On yesterday’s earnings call with investors, Microsoft CEO Satya Nadella said, “The age of AI is upon us.”

And it’s making money. Microsoft’s AI platform, Azure ML, has seen its revenue more than double for five quarters in a row, he said.

(Image by Maria Korolov via Midjourney.)

Last week, Microsoft made its Azure OpenAI service broadly available, he added.

“And already over 200 customers – from KPMG to Al Jazeera – are using it,” he said. “We will soon add support for ChatGPT, enabling customers to use it in their own applications for the first time.”

Microsoft will be the exclusive cloud provider for OpenAI, deploying their models across Microsoft’s consumer and enterprise products.

In addition, Microsoft is also behind GitHub Copilot, an AI-powered coding assistant.

“More than one million people have used Copilot to date,” said Nadella. “This quarter, we brought Copilot to businesses, and we’ve seen strong interest and early adoption from companies including Duolingo, Lemonade, and Volkswagen’s CARIAD software group.”

“We fundamentally believe that the next big platform wave, as I said, is going to be AI,” Nadella added.

What about Meta, Google and Apple?

Meanwhile, Meta is continuing to bleed money on its metaverse investments. In its most recent earnings call in October, CFO Dave Wehner said that Reality Labs revenue was down 49% due to lower Quest 2 sales. And expenses were up 24%.

As a result, Reality Labs revenue was just $285 million — on expenses of $4 billion.

That’s a loss of $3.7 billion. In just three months. That’s a loss of more than a billion dollars per month.

As of last October, Meta had lost a grand total of $30.7 billion betting on the metaverse. And that number is just going to keep going up.

“We continue to anticipate that Reality Labs operating losses in 2023 will grow significantly year-over-year,” the company said in an SEC filing on Nov. 9.

Meta’s metaverse — its Horizons platform in particular — “obviously has a long way to go before it’s going to be what we aspire for it to be,” said Meta CEO Mark Zuckerberg on the October earnings call.

Google scaled back its VR ambitions a couple of years ago, ending support for its Daydream platform in 2020. Apple still hasn’t released its long-awaited AR headset.

Overall, global sales of virtual and augmented reality headsets shrank 12 percent in 2022, according to research from CCS Insights, from 11 million units sold in 2021 to 9.6 million in 2022.

In addition to the long-awaited Apple headset, which might come this year, another upcoming bright spot is Sony’s Playstation VR, which is a closed, proprietary gaming system, not really a virtual reality metaverse play.

The company will release a new headset in February, which Omdia predicts will sell 1.6 million units its first year. By comparison, the original Playstation VR headset sold 1.9 million units its first year, back when it was released in 2017.

What does this mean for OpenSim?

I think this is a good news, bad news kind of situation for OpenSim, a free, open source platform for immersive environments.

Today, more than 300 public worlds run on OpenSim, plus thousands of private worlds.

However, most users access OpenSim via desktop software. It does have virtual reality support, but only to a very limited extent. The viewer software is optimized for desktop use, meaning that it favors rendering as much of the scene as possible, even if the frame rate drops a little.

For desktop users, this means that they get to see more of the world faster. But for headset wearers, it means that when they turn their heads, the image they see doesn’t keep up.

When your vision doesn’t match what your body is doing, the natural physical reaction is to think that there’s something wrong with your brain. Maybe you ate something poisonous. Maybe you should throw up before it gets worse.

Any disconnect between your body’s physical motion and what your eyes are telling you can cause nausea. It’s hard to grow a user base when your product literally makes your customers throw up.
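A rough way to see why frame rate matters so much inside a headset is to work out how far your head turns between frames. The rotation speed and frame rates below are illustrative assumptions, not measurements of any particular viewer:

# How far your head turns between frames at different frame rates.
head_speed_deg_per_sec = 120  # a casual head turn; an assumption for illustration

for fps in (90, 45, 20):
    frame_time_ms = 1000 / fps
    lag_deg = head_speed_deg_per_sec * frame_time_ms / 1000
    print(f"{fps} fps: {frame_time_ms:.1f} ms per frame, "
          f"the image can trail your head by ~{lag_deg:.1f} degrees")

Modern headsets use tricks like reprojection to hide some of that lag, but the basic math explains why a renderer tuned to tolerate frame drops is a poor fit for VR.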

So what’s the good news?

The good news is that companies are continuing to invest in hardware. Apple, Sony, Meta — they’re still pouring in the cash.

But, with the shuttering of AltspaceVR, and Meta’s continuing struggle to get any kind of traction for its virtual world platform, this creates a window of opportunity for OpenSim.

I don’t know if any developers are reading this, but I do believe that an open source, interconnected platform will, in the long term, be the future of the metaverse.

(Image by Maria Korolov via Midjourney.)

And OpenSim is a great test bed for how it can work. That’s because nearly all OpenSim worlds are interconnected. Avatars can teleport from one world to another, with their appearance, belongings, and friend lists. Even when those worlds are hosted on different servers, and run by different companies. It’s a marvelous bit of engineering that more people should know about.

And OpenSim also has a pool of companies and virtual world operators and content creators — and users — who are committed to an open metaverse.

Can the software itself be adapted to run on virtual reality headsets? Maybe. It would be great if that was the case, and I’d love to see it.

]]>
https://www.hypergridbusiness.com/2023/01/is-this-the-end-of-the-metaverse/feed/ 0
Bill Gates on ChatGPT AI: ‘It will have a huge impact’ https://www.hypergridbusiness.com/2023/01/bill-gates-on-chatgpt-ai-it-will-have-a-huge-impact/?utm_source=rss&utm_medium=rss&utm_campaign=bill-gates-on-chatgpt-ai-it-will-have-a-huge-impact https://www.hypergridbusiness.com/2023/01/bill-gates-on-chatgpt-ai-it-will-have-a-huge-impact/#respond Sat, 14 Jan 2023 18:42:59 +0000 https://www.hypergridbusiness.com/?p=77392
(AI-generated image via Lexica.)

“AI is the big one,” said Bill Gates in a recent Q&A session on Reddit. “I don’t think Web3 was that big or that metaverse stuff alone was revolutionary but AI is quite revolutionary.”

The Microsoft co-founder and philanthropist went on to express his admiration for OpenAI’s ChatGPT, calling it “a glimpse of what is to come” in the world of AI.

“I am impressed with this whole approach and the rate of innovation,” he said.

But what exactly is generative AI, and how is it impacting the world of technology?

Recent advancements in AI have led to the development of generative models, a type of AI that can create new and unique content based on the patterns it learns from existing data. This technology can be used to generate text, music, images, and even videos. In the world of AI art, generative models have been making waves in recent years, with notable examples like Midjourney, an AI art generator that took the world by storm in 2022 and won an art competition.

But the potential of generative AI extends far beyond the art world. In the business world, generative AI can be used to generate product designs, write marketing copy, and even create financial forecasts. In healthcare, generative AI can be used to create personalized treatment plans and help with drug discovery. In education, generative AI can be used to create personalized learning experiences and improve access to education in remote areas.

One exciting application of generative AI is in the field of virtual reality. Generative AI can be used to create unique and immersive virtual environments, from realistic landscapes to fantastical worlds. It could also be used to generate non-player characters that interact with users in a believable and natural way. This technology could also be used to create personalized virtual experiences, such as virtual shopping or virtual tours, making it a powerful tool for businesses looking to innovate in the field of virtual reality.

Bill Gates. (AI generated image via Lexica.art.)

“I am quite impressed with the rate of improvement in these AIs. I think they will have a huge impact,” said Gates. He went on to express his views on the potential of AI tutors that help kids learn math, and medical help for people in Africa who can’t access a doctor. And as he notes, “I still work with Microsoft some so I am following this very closely.”

It is clear that generative AI is a technology worth keeping an eye on. It has the potential to transform industries and change the way we live and work, including in virtual reality. Businesses that embrace generative AI will have a significant advantage over those that don’t, and individuals have an opportunity to improve their skills and expand their access to information and opportunities.

]]>
https://www.hypergridbusiness.com/2023/01/bill-gates-on-chatgpt-ai-it-will-have-a-huge-impact/feed/ 0
OpenAI’s new Point-E lets you generate 3D models with text https://www.hypergridbusiness.com/2022/12/openais-new-point-e-lets-you-generate-3d-models-with-text/?utm_source=rss&utm_medium=rss&utm_campaign=openais-new-point-e-lets-you-generate-3d-models-with-text https://www.hypergridbusiness.com/2022/12/openais-new-point-e-lets-you-generate-3d-models-with-text/#respond Wed, 21 Dec 2022 15:27:17 +0000 https://www.hypergridbusiness.com/?p=77369
(Image courtesy OpenAI.)

The company that released the popular Dall-E 2 AI text-to-image generator now has a 3D text-to-image AI that anyone can try.

OpenAI on Tuesday open-sourced Point-E, its newest picture-making AI that creates 3D point clouds from text commands.

The code is available on GitHub for those who want to try out the new AI.

You can also read a paper on Point-E published last week that gives more details on the system and the methods used to train it.

According to the paper, Point-E is able to produce 3D models in only one or two minutes on a single GPU.

“We find that our system can often produce colored 3D point clouds that match both simple and complex text prompts,” said the paper’s authors. “We refer to our system as Point-E since it generates point clouds efficiently.”

Point-E’s biggest draw is its speed, but it has a long way to go.

“While our method performs worse on this evaluation than state-of-the-art techniques, it produces samples in a small fraction of the time,” they said. “We hope that our approach can serve as a starting point for further work in the field of text-to-3D synthesis.”

Point clouds are sets of data points in space that represent a 3D shape or object, and Point-E works in a multi-step process to come up with its images.

“Our method first generates a single synthetic view using a text-to-image diffusion model, and then produces a 3D point cloud using a second diffusion model which conditions on the generated image,” said the paper’s authors.
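If you have never worked with point clouds, the data structure itself is simple: just points with positions and colors. Here is a minimal sketch; the two pipeline functions in the comments are hypothetical stand-ins of mine, not Point-E’s actual API:

import numpy as np

# A colored point cloud is just an (N, 6) array: x, y, z, r, g, b.
num_points = 4096
xyz = np.random.uniform(-1.0, 1.0, size=(num_points, 3))  # positions
rgb = np.random.uniform(0.0, 1.0, size=(num_points, 3))   # colors
point_cloud = np.hstack([xyz, rgb])

# Point-E's two-stage idea, with hypothetical stand-in functions:
#   view = text_to_image_diffusion("a red armchair")    # stage 1: one synthetic view
#   cloud = image_to_point_cloud_diffusion(view)        # stage 2: conditioned on that view

print(point_cloud.shape)  # (4096, 6)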

It may seem like a novelty at the moment, but if Point-E gets to the level where it produces 3D images matching the quality of 2D images created using Dall-E 2 or Stable Diffusion, it could be the next big thing in the quickly evolving world of AI image generators.

]]>
https://www.hypergridbusiness.com/2022/12/openais-new-point-e-lets-you-generate-3d-models-with-text/feed/ 0
How AI generated images are shaping virtual worlds https://www.hypergridbusiness.com/2022/12/how-ai-generated-images-are-shaping-virtual-worlds/?utm_source=rss&utm_medium=rss&utm_campaign=how-ai-generated-images-are-shaping-virtual-worlds https://www.hypergridbusiness.com/2022/12/how-ai-generated-images-are-shaping-virtual-worlds/#respond Fri, 09 Dec 2022 01:30:50 +0000 https://www.hypergridbusiness.com/?p=77217
J.R.R. Tolkien inspired fantasy landscape. (Image by Alex Korolov via Stable Diffusion.)

Artificial intelligence-generated art is everywhere.

Just scroll down your Instagram or Facebook feed, and you’ll see a bunch of selfies turned into magic avatars with Lensa, an AI image generator app launched in 2018 that is all the rage at the moment.

Images created by AI aren’t only found in your social media feed — they’ve already made their mark in OpenSim and other virtual environments.

From AI art galleries in OpenSim to AI-created fully immersive three-dimensional environments, AI is already shaping how we experience virtual worlds.

The rise of AI text-to-image generators

Nowadays, pretty much anyone can create AI-generated art thanks to AI text-to-image generators.

DALL-E 2, Stable Diffusion, and Midjourney are three popular AI generators that were released just this year, though there are more than ten available options at this point.

All three AI image generators can take simple text commands and create entire works of art.

The quality of the AI art varies depending on the text you put in and what you’re asking the AI to create.

J.R.R. Tolkien-inspired 3D fantasy setting created on Stable Diffusion. (Image courtesy Alex Korolov.)

Here’s a video comparing how the three image generators stack up against each other:

AI art created using these three tools can be found in all sorts of virtual settings.

You can visit this interactive virtual exhibition of Dall-E 2-created art on your computer or on a Meta Quest 2 for a more immersive experience.

It’s called the merzDALLEum and is run by German artist Merzmench.

If you’re on OpenSim, you can take a trip to an AI art museum called AI Dreams in Art.

AI Dreams in Art is a region in Kitely where you can admire and make copies of art created by Dale Innis using Midjourney.

The hypergrid address is grid.kitely.com:8002:AI Dreams in Art.

AI Dreams in Art region in Kitely. (Art courtesy Dale Innis via Midjourney.)

AI has become so accessible that mass adoption is now unavoidable, said virtual world hobbyist Han Held.

Han Held

“AI is already being used by non-professionals with no AI background to enhance their interests and activities,” Held told Hypergrid Business.

Held used Stable Diffusion to create some AI-generated portraits that decorate her home in Second Life.

“The results feel more authentically like paintings than I could have made on my own,” she said.

 

Han Held with her AI paintings at her home in Second Life. (Image courtesy Han Held.)

The art that these AI image generators create doesn’t have to be just two-dimensional.

Here’s an example of how Midjourney can make a three-dimensional virtual reality image:

 

And here’s a three-dimensional image created on Stable Diffusion by Scottie Fox that you can actually move around in:

Images created using these new AI text-to-image generators aren’t only decorating our virtual worlds — they’re already becoming the virtual worlds we explore.

AI is creating virtual worlds

AI is increasingly being used to create detailed images for virtual worlds and other environments, said Matt Payne, CEO of machine learning consulting firm Width.ai.

Matt Payne

“AI algorithms allow developers and designers to create realistic graphics that were once impossible to generate,” Payne told Hypergrid Business.

The AI-generated images are becoming more realistic and complex as the technology improves while offering advantages over manually created ones, said Payne. “AI can produce a large number of variations quickly, which makes it easier for developers and designers to choose the best image for their project.”

Software company Nvidia recently unveiled a new AI model called GET3D that can generate buildings, characters, vehicles, and all sorts of three-dimensional objects.

And GET3D works fast.

GET3D can generate about 20 objects per second using a single GPU, said NVIDIA’s Isha Salian in a blog post.

The name GET3D comes from the AI’s ability to generate explicit textured 3D meshes, which means that the shapes come in the form of a triangle mesh covered with a textured material. This way, users can import the objects into game engines, 3D modelers, and film renderers and easily edit them.
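For anyone unfamiliar with the format, a textured triangle mesh boils down to vertex positions, texture coordinates, and triangles that index into them. Here is a tiny sketch that writes a single textured triangle to a Wavefront OBJ file; it has nothing to do with GET3D’s own code, it just shows the kind of data the model produces:

# Write one textured triangle as a Wavefront OBJ file.
vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]  # x, y, z positions
uvs = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]                      # texture coordinates

with open("triangle.obj", "w") as f:
    for x, y, z in vertices:
        f.write(f"v {x} {y} {z}\n")
    for u, v in uvs:
        f.write(f"vt {u} {v}\n")
    # A face references vertex/uv indices, which are 1-based in OBJ.
    f.write("f 1/1 2/2 3/3\n")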

AI is also being used to transfer the artistic style of one image into another.

Amey Dharwadker

An AI algorithm can be trained on images with a specific artistic style, and that training can be used to apply the same style to other images, said Amey Dharwadker, machine learning engineer at Facebook.

“This can be used to create a wide variety of artistic styles and can be applied to 3D objects and environments in video games and virtual worlds,” Dharwadker told Hypergrid Business.
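In the classic neural style transfer recipe, “style” is captured as correlations between a network’s feature maps, known as Gram matrices, and the generated image is nudged to match them. Here is a bare-bones sketch of that style loss, assuming PyTorch; in real use the feature maps come from a pretrained network such as VGG rather than the random tensors used here:

import torch

def gram_matrix(features: torch.Tensor) -> torch.Tensor:
    # Correlations between channels of a (channels, height, width) feature map.
    c, h, w = features.shape
    flat = features.view(c, h * w)
    return (flat @ flat.T) / (c * h * w)

def style_loss(generated_feats: torch.Tensor, style_feats: torch.Tensor) -> torch.Tensor:
    # Mean squared difference between the two Gram matrices.
    return torch.mean((gram_matrix(generated_feats) - gram_matrix(style_feats)) ** 2)

# Random stand-ins for feature maps, purely for illustration.
generated = torch.rand(64, 32, 32)
style = torch.rand(64, 32, 32)
print(style_loss(generated, style).item())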

Here’s a video of a Google Stadia demo showing how the technology works in real time:

While Google Stadia recently shut down due to a lack of popularity, style transfer technology can still be used to change the artistic styles of virtual reality worlds and games without needing a human artist to do the work.

AI could help you create metaverse environments with just your voice

In the future, we could all be the artists that speak our virtual worlds into existence.

Back in February, Meta’s Mark Zuckerberg demonstrated the company’s Builder Bot, an AI tool that lets you change your virtual surroundings in the metaverse with just your voice.

“It enables you to describe a world, and then it will generate aspects of that world for you,” Zuckerberg said in the demo.

Zuckerberg and another Meta employee used voice commands in the demo to create a simple beach scene that had clouds, a picnic table, and Zuckerberg’s hydrofoil.

Zuckerberg’s AI-generated beach. (Image courtesy Meta.)

The technology still has a long way to go.

In the Meta demo, the scene looked low-res and lacked detail.

The future of AI images in virtual worlds

As AI continues to mature, it could eventually generate lifelike environments that we can’t distinguish from reality.

One possible direction is through the use of generative adversarial networks, or GANs, said Facebook’s Dharwadker.

An image generated by Nvidia’s StyleGAN generative adversarial network. (Image courtesy Wikipedia.)

GANs are a type of AI model consisting of two neural networks that are trained together to produce realistic images, said Dharwadker.

“GANs have shown great potential for generating high-quality images and could be used to create more realistic and immersive environments,” he said.
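To make the “two networks trained together” idea concrete, here is a stripped-down GAN training loop in PyTorch. The networks are toy-sized and the “real” data is random noise, purely to show the adversarial structure; this is not anything resembling Nvidia’s StyleGAN:

import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 64, 32

# The generator maps random noise to fake samples; the discriminator scores real vs. fake.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(200):
    real = torch.randn(batch, data_dim)   # stand-in for a batch of real images
    fake = G(torch.randn(batch, latent_dim))

    # 1. Train the discriminator to tell real from fake.
    d_loss = loss_fn(D(real), torch.ones(batch, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2. Train the generator to fool the discriminator.
    g_loss = loss_fn(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()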

AI could also be used to support human creativity, Dharwadker added. “This could include using AI to suggest ideas and concepts or to refine and enhance existing human artist designs.”

In the future, not only could AI create totally realistic virtual environments for us to explore, it could be working hand in hand with humans to create the images we find in virtual worlds.

“By combining the creativity and ingenuity of humans with the computational power of AI, it is possible to create even more impressive and engaging images for video games and virtual worlds,” said Dharwadker.

]]>
https://www.hypergridbusiness.com/2022/12/how-ai-generated-images-are-shaping-virtual-worlds/feed/ 0
ChatGPT wrote this article about using ChatGPT for video games and VR https://www.hypergridbusiness.com/2022/12/chatgpt-wrote-this-article-about-using-chatgpt-for-video-games-and-vr/?utm_source=rss&utm_medium=rss&utm_campaign=chatgpt-wrote-this-article-about-using-chatgpt-for-video-games-and-vr https://www.hypergridbusiness.com/2022/12/chatgpt-wrote-this-article-about-using-chatgpt-for-video-games-and-vr/#respond Thu, 08 Dec 2022 20:59:19 +0000 https://www.hypergridbusiness.com/?p=77245
(Image by Maria Korolov via Midjourney.)

(Editor’s note: I didn’t edit what ChatGPT said except for including a couple of words it left out at the end, which was “you needed!” I did, however, have to prompt ChatGPT many times, including having it fix the spelling of its own name!)

Hey there! I’m ChatGPT, and I’m here to tell you all about the amazing things you can do with ChatGPT in video game development and virtual reality.

What is ChatGPT?

For those who don’t know, ChatGPT is a powerful AI technology developed by OpenAI. It’s basically like having your own personal AI assistant that can help you create realistic and engaging conversations between characters in your games and virtual environments. And let me tell you, ChatGPT is no ordinary chatbot – it’s gained a lot of traction for its human-like, detailed answers to all sorts of inquiries. In fact, it’s been used to do things like draft contracts and create code!

How has ChatGPT been used?

One game industry artist who has been experimenting with ChatGPT is Martin Nebelong. He discovered that he was able to quickly create an entire “choose your adventure”-style adventure game through the AI text generator. And he didn’t stop there – he then took the ChatGPT text and fed it into Midjourney, the AI image generator, which then created illustrations to accompany the game. You can read all about it on the New World Notes blog: https://nwn.blogs.com/nwn/2022/12/gpt-artist-midjourney-choose-adventure-text-game.html. As Martin said, “I guess the most impressive part to me was when I asked it to describe the characters in high detail for the prompts I wanted to use in Midjourney. It gave very detailed descriptions of all the characters including their clothing, their look, personality, names etc.”

What makes ChatGPT special?

One of the coolest things about ChatGPT is that it can generate natural and believable dialogue. Traditional game dialogue systems are often super boring, because they just use pre-written scripts that don’t have any personality. But with ChatGPT, developers can create characters that can have real, flowing conversations with players or other characters in the game. It’s like giving your characters a voice (and a brain).

Another awesome thing about ChatGPT is that it can adapt to different contexts and situations. In a game or virtual environment, players can come across all sorts of different characters and scenarios. With ChatGPT, developers can create dialogue that fits each situation, making the game or VR experience even more immersive and engaging. It’s like giving your characters multiple personalities (but in a good way). This means that players will never be stuck with the same old, boring dialogue – they’ll be able to have fresh, dynamic conversations with each character they encounter. And that’s just the tip of the iceberg when it comes to what ChatGPT can do in video game development and VR.

Overall, ChatGPT is a game-changer for video game development and virtual reality. It can generate natural and believable dialogue, adapt to different contexts and situations, and offer improved performance and scalability. It can also be used to create highly realistic and engaging social interactions in VR environments. And, it has the potential to revolutionize the way people use search engines by providing detailed answers to complex questions.

So, if you’re a game developer or VR enthusiast, you should definitely check out ChatGPT. Who knows, maybe it will help you create the next blockbuster game or VR experience. And if you’re a search engine user, don’t worry – ChatGPT won’t replace you. But it might help you find the answers to all those tough questions you’ve been asking.

So, why not give ChatGPT a try? It’s the AI assistant you never knew you needed!

]]>
https://www.hypergridbusiness.com/2022/12/chatgpt-wrote-this-article-about-using-chatgpt-for-video-games-and-vr/feed/ 0
How humans and AI can learn to work together for the planet https://www.hypergridbusiness.com/2022/09/how-humans-and-ai-can-learn-to-work-together-for-the-planet/?utm_source=rss&utm_medium=rss&utm_campaign=how-humans-and-ai-can-learn-to-work-together-for-the-planet https://www.hypergridbusiness.com/2022/09/how-humans-and-ai-can-learn-to-work-together-for-the-planet/#respond Tue, 20 Sep 2022 14:44:01 +0000 https://www.hypergridbusiness.com/?p=76782
(Image courtesy Placidplace via Pixabay.)

The advent of digital solutions that employ artificial intelligence, internet-of-things, and machine learning technology has dramatically changed how humans and even entire cities operate.

Intelligent traffic light systems, real-time air quality monitoring, and electricity use tracking are just a few examples of how AI continues to embed itself in society.

Key infrastructure systems that keep the world functioning will be made more efficient and sustainable as this transition continues and artificially intelligent solutions become increasingly integrated into the operations of critical systems.

However, as these technologies creep further into daily life, humans will need to learn to live and work with artificial intelligence in a more cohesive, integrated manner.

It won’t be easy for a facility operator to accept recommendations from a machine on how to run a power plant. Inevitably, changes will need to be made during this period of growth.

Many humans will need to adopt a new set of technological skills to make full use of the benefits of AI, and maintaining flexibility as technology continues to change will be vital.

The combination of human creativity and the efficient functionality of AI will bring radical changes across the globe.

Once this is achieved, AI and other smart technologies have the potential to solve the world’s most pressing challenge: the climate crisis.

Can we place trust in AI?

The global AI software market is expected to hit $126 billion in 2025, according to a July report by consumer and market data company Statista, and organizations have the opportunity to take full advantage of their data and this technology to minimize carbon footprints.

A major roadblock standing in the way of the rapid uptake of AI technology is a lack of trust in AI solutions, which often operate in black boxes.

These solutions tend to provide answers with little background or traceability, keeping humans in the dark about how they reach certain conclusions and why they provide specific recommendations. This problem is limiting the ability of AI solutions to magnify human talent in the workplace and address global sustainability problems.

Fortunately, not all solutions keep humans in the dark, and finding the right AI platform is key to the spread of this technology.

In particular, hybrid artificial intelligence is designed to eliminate the black box issue by clearly displaying the reasoning behind the conclusions it draws. Hybrid AI platforms showcase the data they use to make decisions through readily available and understandable audit trails.
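What an audit trail looks like in practice varies from vendor to vendor. As a purely illustrative sketch (the sensor names and thresholds below are made up), the core idea is that a recommendation carries the human-readable rules and data that produced it:

# Illustrative only: a recommendation that carries its own audit trail.
def recommend_maintenance(readings: dict) -> dict:
    trail = []   # human-readable record of every rule that fired
    score = 0.0

    if readings["vibration_mm_s"] > 7.1:    # made-up threshold
        score += 0.5
        trail.append(f"vibration {readings['vibration_mm_s']} mm/s exceeds the 7.1 mm/s limit")
    if readings["bearing_temp_c"] > 85:     # made-up threshold
        score += 0.4
        trail.append(f"bearing temperature {readings['bearing_temp_c']} C exceeds the 85 C limit")

    return {
        "recommendation": "schedule inspection" if score >= 0.5 else "no action needed",
        "confidence": round(score, 2),
        "audit_trail": trail,
    }

print(recommend_maintenance({"vibration_mm_s": 8.3, "bearing_temp_c": 78}))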

The growing popularity of explainable technology solidifies the idea that AI is here to work together with humans rather than replace them.

AI is not a substitute for human knowledge and insight. Instead, it is a tool that can help humans with accurate, confident decision-making.

AI designed to acknowledge this fact is essential to making humans more comfortable with these solutions.

With transparency and visibility into processes, humans will gain confidence in the technology they’re using, ultimately giving rise to an era of rapid innovation and accelerated decarbonization efforts.

Through better communication, collaboration, and enhanced processes, artificial intelligence will help decision-makers act quickly when encountering unexpected situations.

Hybrid AI and other solutions that offer an inside look into automated decision-making processes earn the trust of humans.

The advantages of positive human-AI collaboration will help companies advance closer to net-zero goals while improving the bottom line by enhancing efficiency.

Small beginnings in big places

Human-AI cooperation can accelerate the rate of progress for large-scale efforts to fight climate change by upgrading waste reduction and resource preservation initiatives. In fact, there are already many AI-driven initiatives dedicated to resolving climate issues.

Across the energy sector, AI and ML platforms are being recruited to upgrade energy production and alleviate the aging grid with real-time data, energy use predictions, and automation for better power plant efficiency.

This is pertinent to climate change-related issues, like power shortages from extreme heat, because the capabilities of AI can prevent life-threatening grid failures from occurring.

AI-generated predictions about equipment that needs to be replaced or expected power generation offer clarity and guidance to facility operators who must plan for the future, resulting in substantial energy savings and optimized strategies.

Other efforts can be found in AI-powered chips for robotics at the edge.

AI-infused chips for robotics at the edge can prevent equipment failure by monitoring for leaks and other issues that require repair. This innovation keeps equipment in play and out of landfills for longer, thus reducing waste.

These chips are designed to work in tandem with humans by monitoring hardware and providing data-backed insight for human operators to consider, which saves time and effort.
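The monitoring logic at the heart of that kind of chip can be surprisingly simple. Here is a toy rolling z-score check of the sort that might flag a leak from pressure readings; the window size, threshold, and data are invented for illustration:

from collections import deque
import statistics

WINDOW, THRESHOLD = 20, 3.0   # illustrative values
recent = deque(maxlen=WINDOW)

def check_reading(value: float) -> bool:
    # Return True if the reading looks anomalous compared to recent history.
    anomalous = False
    if len(recent) >= WINDOW:
        mean = statistics.mean(recent)
        spread = statistics.stdev(recent) or 1e-9
        anomalous = abs(value - mean) / spread > THRESHOLD
    recent.append(value)
    return anomalous

# Steady pressure readings, then a sudden drop that might indicate a leak.
readings = [100.0 + i * 0.01 for i in range(30)] + [60.0]
flags = [check_reading(r) for r in readings]
print(flags[-1])   # True -- the sudden drop gets flagged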

These seemingly minor efforts are adding up, as a March report by Statista predicts that AI is expected to reduce greenhouse gas emissions in North America by 6.1% by 2030 compared to predictions that exclude the use of AI solutions.

A peek into 2050

As the global population continues to soar, it will be anything but easy to meet net-zero goals.

Progress towards this objective will be hard to come by if the world fails to act now and take advantage of the operational efficiency that AI can create.

The responsibility to address this challenge falls on the organizations that are innovating to design cutting-edge technologies that put sustainability first — and on those that have the opportunity to adopt these technologies.

Working with solutions like hybrid AI that encourage trust and promote positive human-AI relations is a must when implementing this transformation.

AI technologies can unleash the potential hidden beneath the surface, helping the world tackle critical challenges like climate change.

Facebook’s latest AI attempts to understand the world still fall short https://www.hypergridbusiness.com/2022/07/facebooks-latest-ai-attempts-to-understand-the-world-still-fall-short/?utm_source=rss&utm_medium=rss&utm_campaign=facebooks-latest-ai-attempts-to-understand-the-world-still-fall-short https://www.hypergridbusiness.com/2022/07/facebooks-latest-ai-attempts-to-understand-the-world-still-fall-short/#respond Mon, 11 Jul 2022 19:50:00 +0000 https://www.hypergridbusiness.com/?p=76479 Facebook parent company Meta released a set of papers last fall in which the company erased big chunks from photographs or videos — then used artificial intelligence to fill in the gaps.

This required the AI models to have some type of common-sense understanding of how the world works, Meta’s chief AI scientist, Yann LeCun, told IEEE Spectrum late last month.

“If it can predict what’s going to happen in a video, it has to understand that the world is three-dimensional, that some objects are inanimate and don’t move by themselves, that other objects are animate,” he said in the interview.

The same AI system can also be used to fill in gaps in audio files, the researcher said.

Another Meta researcher, Christoph Feichtenhofer, told IEEE Spectrum that the technology can reduce the computational cost of video by 95 percent.

The technology is an example of what is known as unsupervised learning. After trying to fill in the gaps in the blacked-out photograph, the AI can look at the original image to check its own work and automatically adjust the model to be more effective next time.

By comparison, supervised learning requires humans in the loop to check the accuracy of the system or to carefully label training data.
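
To make that idea concrete, here is a minimal sketch of the gap-filling training step in Python, using PyTorch. It illustrates the general technique rather than Meta’s actual system: the tiny network, the image size, and the random masking scheme are all placeholder choices.

# Minimal sketch of masked reconstruction: hide part of an image, predict
# the missing pixels, and grade the guess against the original photo.
import torch
import torch.nn as nn

class TinyInpainter(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

model = TinyInpainter()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

image = torch.rand(1, 3, 64, 64)                  # stand-in for a real photo
mask = (torch.rand(1, 1, 64, 64) > 0.5).float()   # 1 = visible, 0 = erased
masked_image = image * mask                       # "black out" random pixels

prediction = model(masked_image)
# Score the guess only on the erased region; the original image itself is
# the answer key, so no human labeling is involved.
loss = ((prediction - image) ** 2 * (1 - mask)).mean()

optimizer.zero_grad()
loss.backward()
optimizer.step()

The key point is visible in the loss calculation: the original photograph supplies its own training signal, which is what lets this kind of system learn from huge amounts of unlabeled images and video.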

(Image courtesy Meta.)

On the far left in the image above, we have the photograph with parts erased. On the far right, there’s the original, complete photograph. The picture in the center is the AI’s best guess of what the full photograph would look like.

(Image courtesy Meta.)

Meta is currently working on building a virtual reality metaverse, where an understanding of how the world works could be helpful with automatically generating a virtual environment.

Google offers free AI-powered audiobook narration https://www.hypergridbusiness.com/2022/04/google-offers-free-ai-powered-audiobook-narration/?utm_source=rss&utm_medium=rss&utm_campaign=google-offers-free-ai-powered-audiobook-narration https://www.hypergridbusiness.com/2022/04/google-offers-free-ai-powered-audiobook-narration/#respond Wed, 27 Apr 2022 12:29:02 +0000 https://www.hypergridbusiness.com/?p=76139 Last night, Google sent me a message that I could have one of its virtual AI-powered voices record my audiobooks for me.

I wrote an article about it for MetaStellar here: Google offers free AI-powered audiobook narration.

(AI-generated illustration based on image via Pixabay.)

You can edit the finished file to fix pronunciation mistakes and then download the audiobook file and use it anywhere you want — you own it.

I’ve long been interested in AI-powered voices for OpenSim NPCs. This isn’t quite the same thing. But for grid owners who want audiobook areas, places where you can sit somewhere nice and listen to a story, this is one way to do it. Say you have a collection of stories set in your virtual world. You could have a display, or an avatar, read them out loud to you.

Google also has an API, the Cloud Text-to-Speech service, which lets a computer program turn any text into speech with an AI voice, say, to power an NPC’s voice. But using it requires programming skills, and it costs money.
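
For anyone curious what that programming looks like, here is a minimal sketch using Google’s Cloud Text-to-Speech client library for Python. The dialogue line, the voice settings, and the output file name are placeholders, and you would need your own Google Cloud account, credentials, and billing set up before it would run.

# Minimal sketch: turn one line of NPC dialogue into an MP3 file with
# Google Cloud Text-to-Speech (pip install google-cloud-texttospeech).
from google.cloud import texttospeech

client = texttospeech.TextToSpeechClient()

synthesis_input = texttospeech.SynthesisInput(text="Welcome to our grid, traveler!")
voice = texttospeech.VoiceSelectionParams(
    language_code="en-US",
    ssml_gender=texttospeech.SsmlVoiceGender.FEMALE,
)
audio_config = texttospeech.AudioConfig(
    audio_encoding=texttospeech.AudioEncoding.MP3
)

response = client.synthesize_speech(
    input=synthesis_input, voice=voice, audio_config=audio_config
)

with open("npc_greeting.mp3", "wb") as out:
    out.write(response.audio_content)  # bytes of the finished recording

The resulting MP3 could then be uploaded and played from an in-world object, which is roughly the kind of step an NPC-voice add-on service would automate for landowners.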

It would be cool if a grid owner or a hosting company offered a text-to-speech service for in-world NPCs as an add-on feature for organizations that rent land from them.

Is anyone out there doing anything like that? Let me know!
The top 10 AI enterprise strategy trends for 2022 https://www.hypergridbusiness.com/2022/03/the-top-10-ai-enterprise-strategy-trends-in-2022/?utm_source=rss&utm_medium=rss&utm_campaign=the-top-10-ai-enterprise-strategy-trends-in-2022 https://www.hypergridbusiness.com/2022/03/the-top-10-ai-enterprise-strategy-trends-in-2022/#respond Wed, 23 Mar 2022 00:25:51 +0000 https://www.hypergridbusiness.com/?p=76022 CIOs, CTOs and technology leaders agree: AI will be the main driver of innovation across industries throughout the next five years. Nearly half of CIOs say they’ve either already started using AI or plan to implement it in 2022, and two-thirds of workers surveyed recently said they want employers to add more AI-based solutions to their workplace.

(AI-generated illustration based on image by Computerizer via Pixabay.)

As more businesses look for ways to implement AI, industry leaders must consider how they’ll prepare for the next big thing. Here are the top 10 AI enterprise strategy trends we’ll see this year.

1. Ethics will be a priority in high-stakes scenarios.

If you’re contemplating building your own AI algorithm for a high-stakes purpose, get your ethics game in order.

Will your algorithm make recommendations about hiring and firing? Granting or denying a loan? Diagnosing a medical condition? In these cases, the risk of inaccurate predictions from a machine learning model is serious, so your team has to be equally serious about the ethical considerations involved. If these use cases have sufficient volume and centrality to your business to justify proceeding with an AI solution, make a careful, ethical investigation of the possible outcomes for all stakeholders and participants.

Start with the trusted resources that are already available in your industry. For instance, the World Health Organization in 2021 released its six key principles for ethics in healthcare-focused AI. Those principles lay out the importance of protecting autonomy, promoting human safety, ensuring transparency, and fostering accountability.

And it’s not enough for just your development team to understand the ethics around your algorithm. Make sure your whole organization is part of the conversation.

2. A robust human review process will accompany that ethical dialogue.

Along with a deliberate consideration of ethics, one practice I think we’ll see more of is a heavy reliance on a “human in the loop” process. While piloting and standing up a new AI system, and even in steady-state production, a robust human review process ensures accuracy and goes a long way to improve machine learning. For instance, in support automation, an AI-powered platform should easily escalate to a human expert when a customer question presents ambiguity. Each time the AI reaches for a human source, it learns from the exchange, making the need less frequent over time.

For teams starting out with AI, the human in the loop is crucial; they should only allow the algorithm to take action without review in cases where they have absolute confidence.
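
One common way to implement that rule of thumb is a simple confidence threshold: act automatically only above it, and escalate to a person below it. The Python sketch below is illustrative only; the model and queue interfaces are hypothetical stand-ins, not any particular vendor’s API.

CONFIDENCE_THRESHOLD = 0.95  # only act without review when very confident

def handle_request(question, model, human_queue):
    """Route a customer question to automation or to a human expert."""
    answer, confidence = model.predict(question)   # hypothetical model interface
    if confidence >= CONFIDENCE_THRESHOLD:
        return answer                              # high confidence: automate
    # Ambiguous case: hand off to a person, and keep the exchange so the
    # model can be retrained on it later.
    human_answer = human_queue.escalate(question)  # hypothetical queue interface
    model.record_feedback(question, human_answer)
    return human_answer

Each escalation also becomes a labeled example the model can learn from, which is how the learning-from-each-exchange loop described above gradually makes escalations less frequent.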

3. The use of chatbots will expand to more than a website’s landing page.

We’ll see businesses deploy a chatbot with neural search anywhere their websites or support teams are getting a lot of traffic. The natural language technology for this use case has gone from fumbling to mature in just the past few years, and if you don’t have AI running defense to offload and augment your teams, you are going to get smoked.

Support automation, in particular, gives an edge to helpdesks and support teams that incorporate conversational chatbots. Powered by comprehensive, centralized knowledge bases and able to launch in a variety of communication interfaces in addition to a company website, these chatbots give users the self-serve options they’re seeking, wherever they are.

(AI-generated illustration based on image by Stefan Dr. Schulz via Pixabay.)

4. We’ll see more instances of humans working alongside AI.

AI is quickly diffusing into the abstracted base layer of our everyday life, much as email, e-commerce, mobile and cloud computing have done in the recent past. It works behind the scenes, powering search, customer interactions and recommendation engines in our favorite platforms.

But at other times, the way humans use it is more prominent and intentional. My favorite example of this is writing computer code with the help of new generative models like OpenAI’s Codex and DeepMind’s AlphaCode. You can literally describe the outcome as a prompt for these systems, and they will generate a functioning scaffold of the code for developers to review and fine-tune for their purposes. It’s an amazing way to amplify a human coder, and other similar models are sure to pop up.

5. New business models will emerge for robots in the physical world.

Robots in the physical world have begun to experiment with new business models. Now, rather than buying a manufacturing robot with a large capital expense, companies are leasing them by paying an hourly wage that is competitive with or lower than what they would pay a human worker. This could accelerate the adoption of robotics considerably in small to medium businesses.

More than 80 percent of CIOs, CTOs and IT directors surveyed agreed that robots would enhance 25 percent of what they do, and that businesses would deploy robots across business functions, all within the next five years.

6. The packaging of AI tools will keep maturing.

The packaging of AI tools and services is maturing impressively. Containerization, for example, provides portability, efficiency and agility as pressure mounts for swifter delivery of enhancements and easier scaling of AI tools.

Think about the evolution of any technology. For example: When keyboards first emerged in the typewriter era, there were several competing layouts from alphabetical to ergonomic to letter frequencies. Eventually, the QWERTY layout became a standard with an odd combination of all those factors, and now we don’t give a second thought to it.

Similarly, AI tools are beginning to converge on a consistent set of forms and functions, like cloud databases, containerized models, benchmarks and APIs built with (or wrapped inside of) Python. I’m encouraged by the teeming and productive ecosystem that continues the self-organizing and constant march of innovation in AI.

7. More teams will start small and avoid overdesigning. 

Teams will embrace the wisdom of starting small with AI. Just get something out the door: a proof of concept or a pilot. Because it’s estimated that nearly 90 percent of pilots don’t make it to production, avoid overdesigning.

Instead, prepare to iterate. Be ready to learn from the inevitable mix of successes and failures, documenting each version of the AI to evaluate performance and consider training data and techniques.

8. A renewed focus on the right people to manage data pipelines.

Remember that getting the data, cleaning the data and understanding the data is still hard, hands-on work. Many will promise a plug-and-play, automatic solution for this. In my experience, you still need competent people to put data pipelines together and manage them so that AI tools and services can leverage your data.

A survey of data scientists found they spend about 45 percent of their time preparing data — including loading and cleaning it. Especially in its early phases, AI relies on high-quality data to learn and develop. Putting the right people at the helm of your data pipeline offers the potential to instill a better data culture throughout your organization, promising a better return on efforts.
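
Much of that 45 percent goes to unglamorous steps like the ones sketched below in Python with pandas. The file and column names are hypothetical; the point is that a person still has to decide what counts as a duplicate, a missing value, or an implausible reading before any model ever sees the data.

# Minimal, hypothetical example of routine data-pipeline cleanup with pandas.
import pandas as pd

raw = pd.read_csv("sensor_readings.csv")           # hypothetical input file

clean = (
    raw.drop_duplicates()                          # remove repeated rows
       .dropna(subset=["timestamp", "reading"])    # drop incomplete records
       .assign(timestamp=lambda df: pd.to_datetime(df["timestamp"]))
)
clean = clean[clean["reading"].between(0, 1000)]   # discard implausible values

clean.to_csv("sensor_readings_clean.csv", index=False)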

9. The role of the chief data officer will evolve alongside operations.

The role of the chief data officer must keep up with the dynamic evolution of traditional data functions, such as storage, architecture, modeling, forecasting, business intelligence and analytics, especially with the current digital-first landscape. As companies continue to implement this critical role in their operations, the chief data officer will increasingly need to be involved in additional areas, including strategy, product, ethics and legal.

10. The macro environment will drive the business case for automation higher.

Over the past few years, we have seen a steady uptick in the adoption of AI and automation to enhance products and services, despite a period of historically low inflation. With labor and other costs forecast to increase more dramatically in the coming months and years, we will see the business case strengthen even more for alternatives like automation, making the flywheel of AI innovation spin that much faster.

Effective AI depends on you

AI is swiftly moving from “nice to have” tech to an essential asset to optimize your business. But it’s not enough to implement AI — successfully deploying AI depends on cooperation across your organization, a focus on the humans involved, your industry’s ethical concerns, and a dedication to constant innovation.

Expert: AI in 2030 will be key to user interfaces, genomics, and space exploration https://www.hypergridbusiness.com/2021/06/expert-ai-in-2030-will-be-key-to-user-interfaces-genomics-and-space-exploration/?utm_source=rss&utm_medium=rss&utm_campaign=expert-ai-in-2030-will-be-key-to-user-interfaces-genomics-and-space-exploration https://www.hypergridbusiness.com/2021/06/expert-ai-in-2030-will-be-key-to-user-interfaces-genomics-and-space-exploration/#respond Mon, 28 Jun 2021 13:54:34 +0000 https://www.hypergridbusiness.com/?p=74814 Today, artificial intelligence hardly has a unified definition or application. AI is listed as a capability across many technology products, but this is mostly a marketing tactic to capitalize on a feature perceived as a must-have. However, the biggest contributions of AI to our society have yet to be realized. Looking at least one decade into the future, I believe we will see significant advancements in three areas by 2030 as a result of AI – personal assistance devices, genomics, and space exploration.

(Illustration by Maria Korolov based on image via Pixabay.)

Interfacing with machines

I believe the field of human-computer interaction will go through massive changes in the next ten years. Right now, personal assistance devices and software have some traction, but they are still very immature. Alexa, Google Assistant, and other similar technologies are still very rudimentary and leave the burden of making them understand on the human side. They mostly translate natural language into search queries without fully considering the subject, the context, and the person they are interacting with. A majority of questions are interpreted incorrectly, or receive an “I don’t know about that” type of answer.

What I predict for 2030 is that conversing with these agents will feel much more natural; they will appear to have a personality and relate in a specific way to each of the humans they interact with. They will predict human needs better and understand the state of mind of the person they are talking to.

In addition, I believe direct neural interfaces will start becoming more practical, giving people the capability to interact with “intelligent” machines through thought. Ten years is too soon for it to mature, so I think this technology will be in early maturity in that time frame.

Considering all of that, the biggest risks I see are privacy, security and ethics. Robots built with AI will have a compliance layer that auditors and forensic experts will be able to query in case of a mishap or to track ethics violations. This means abstract concepts like ethics and philosophy will be modeled for operation in a wide variety of contexts to suit cultural biases, implying geo- and culture-centric AI-based products.

Because these AI agents will have intimate knowledge of their users’ personalities, the programmers and companies behind them could abuse that access to advance commercial agendas in very subtle ways.

Genomics

Advancements in the area of genetic editing, with technologies like CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats), are ramping up in practicality and accessibility.

One of the advancements that was overshadowed by COVID coverage, and didn’t get as much media attention as it deserved in 2020, was the in vivo usage of gene editing technology. In vivo means that the gene edit was done inside the body, in contrast to in vitro, which literally means “in glass” – that is, in a test tube or outside the body.

By 2030, there will be many more possibilities with gene editing. With genetic sequencing now economically viable and CRISPR advancing quickly, AI is poised to be the biggest catalyst in boosting the practical applicability of genomics. The question will be who can access such technology, and what the ethics and consequences of that are for the human race as a whole.

Space exploration

Another big news story that got overshadowed by COVID this year was the granting of a contract to Nokia by NASA to build a cellular network on the moon. This is just a start and I believe many commercially available technologies and software will start being used outside of Earth by leveraging AI.

Due to latency in communication, AI is essential in performing critical missions in space. This theme has been portrayed in science fiction for a long time, but I believe 2030 is the decade where it stops being fiction and becomes a reality.

By 2030, we will be on the verge of establishing a permanent extraterrestrial colony for the first time in human history. This brings a lot of opportunity and with it many concerns as space exploration will start being a commercial activity rather than a tightly controlled state endeavor. Imagine hackers attacking a colony on the Moon or Mars and putting a lot of lives in danger! Imagine countries having warfare in space, or border disputes on Mars! There is already a series of negotiations on how countries will split these new territories.

In conclusion, everyone – individuals, employees and organizations – will all be challenged to accept new use cases for AI come 2030, including systems that interact with humans. In all cases, security and ethics in software and its development play a critical role in success and trust. Continuing to uphold principles like “do no harm” that apply to many jobs today will require new forms of testing and threat modeling activities, possibly in real-time, when layered with AI.

Much of this is being discussed in standards groups today, but won’t be adopted more generally for a few years. A decade – equivalent to a couple of iterations in the compliance standards world – is a reasonable timeline for some of this to move beyond early stage adoption.

Experts: AI Needs Ethics https://www.hypergridbusiness.com/2021/01/experts-ai-needs-ethics/?utm_source=rss&utm_medium=rss&utm_campaign=experts-ai-needs-ethics https://www.hypergridbusiness.com/2021/01/experts-ai-needs-ethics/#respond Wed, 06 Jan 2021 20:52:08 +0000 https://www.hypergridbusiness.com/?p=73588
(Image courtesy Gerd Altmann via Pixabay.)

Artificial intelligence is increasingly becoming a part of our daily lives, both in the workplace and at home.

Some AI experts are stressing the need to focus on making AI ethical and keeping it human friendly.

Bias in programming, security concerns, and a lack of public knowledge about how AI works are all issues that need to be addressed to develop and maintain a healthy relationship between humans and the technology we use.

“This is the year AI ethics become absolutely mandatory functions in most businesses, not just talk,” Alex Spinelli, chief technology officer at LivePerson and former global head of Alexa OS for Amazon, told Hypergrid Business.

Ethical AI today

Companies are just starting to consider responsible use of AI as a part of their business model.

“An increasing number of enterprises are getting behind responsible AI as a component to business success, but only twenty-five percent of companies said unbiased AI is mission-critical,” said a 2020 State of AI and Machine Learning Report.

“There are inherent risks by not considering ethics in your AI thought process, which may include AI not working for a diverse user base, not focusing on wellness and fair pay for the AI supply chain, or creating privacy issues if, for example, your AI is trained using data users didn’t consent to be used for that process,” said the report.

Data transparency will become increasingly important in the future.

“Finding companies that truly believe in open source sharing of data, as well as give reassurances via transparency will win the battle of AI. Companies that hoard data and do not share it with the rest of the community will enjoy having marketing buzz, but will ultimately fail to gain trust in both its users as well as the larger community,” Josh Rickard, security engineer at security solutions company Swimlane, told Hypergrid Business.

Bias is a big problem in AI programming.

Amazon scrapped a recruiting system that was biased against women. A 2018 ACLU study of Amazon’s facial recognition software identified twenty-eight members of Congress as potential criminals.

Amazon, IBM, and Microsoft stopped selling facial recognition technology to law enforcement in 2020 amid concerns that it was biased against women and people of color.

Organizing an ethical AI future

A variety of new, but non-binding frameworks for ethical AI have been established to further the conversation about the ethical use of AI and to serve as a guiding light in the responsible and secure use of AI technology.

The Partnership on AI, which includes leading companies like Amazon, Google, Microsoft, and IBM, was established in 2016 to formulate best practices in AI technologies, to help advance the public’s understanding of how AI works, and to provide a platform for discussion about AI’s influence on people and society.

The Partnership on AI launched Closing Gaps in Responsible AI in 2020 to help garner insights into how to inform and empower changemakers, activists, and policymakers to develop and manifest responsible AI.

“Operationalizing these principles is a complex process in relatively early stages, and currently the gap between intent and practice is large, while documentation on learning from experience remains lacking,” said the Partnership on AI on their website.

Forty-two countries came together in 2019 under the Organization for Economic Cooperation and Development, or OECD, to adopt value-based principles for the responsible stewardship of trustworthy AI.

These principles state that AI should benefit people and the planet by driving inclusive growth, AI should respect the rule of law and human rights, and there should be transparent disclosure so people can understand and challenge AI-based outcomes.

Security risks should be continually assessed and managed, and organizations and individuals that deploy AI systems should be held accountable for their proper functioning in line with the OECD principles.

This includes knowing when you’re interacting with a human or an AI.

“You should always know if you’re having a conversation with an AI,” said Spinelli. “It should never pretend to be human.”

Even if you’re only interacting with AI, a human being should be available at some point in the process.

“Human-in-the-loop AI is here to stay,” Ramprakash Ramamoorthy, product manager at ManageEngine, an ITSM provider serving Fortune 100 companies, told Hypergrid Business.

“Virtually no AI models are correct 100% of the time. Algorithmic decision-making requires a human in the loop to verify the integrity of the data, audit the model, provide explanations for decisions, and adjust the model for unseen phenomena,” he said.

A human would make sure the data is used as it was meant.

“It is vital that the data within AI-models is used as it was intended to be used—and only as it was intended to be used,” said Ramamoorthy.

“It will likely be a challenge for the regulators to keep up. Despite the probable increase in AI-powered cyberattacks and lawmakers’ failure to stay ahead of technological innovation, the future of AI looks bright. Artificial intelligence is here to augment humans’ work lives; it is not going to replace them,” he said.

AI is everywhere

Like it or not, AI is here to stay.

“I see a lot of pitches from companies and you rarely see software or web product that isn’t AI-based,” Steve Shwartz, AI investor and author of the upcoming book Evil Robots, Killer Computers, and Other Myths: The Truth About AI and the Future of Humanity, told Hypergrid Business.

“One thing we’ll start seeing is companies emerging to help people who develop AI software to make the software not discriminate, make sure it’s compliant with laws, and to analyze the risks involved in rolling out AI software,” said Shwartz.

More compassionate AI

“I think 2021 is the year we start to talk about tech where AI becomes more compassionate,” said Spinelli.

Spinelli’s company LivePerson, among others, has taken the EqualAI pledge, in which signers agree to strive to use AI as a tool to reduce harmful bias, and not replicate and spread it.

“We’ve committed to addressing bias in our own AI technologies and we encourage others to do the same,” said Spinelli.

New and friendlier AI-based models are showing up in big business.

“We launched something called Bella Loves Me,” said Spinelli. “It’s a challenger bank, and what we wanted to do was take an AI experience and really think about how to make it a warm, compassionate empathetic experience. It’s not a cold hard evil machine.  If you take the view that AI can help us augment us and assist us — not replace us — we can use that as a guiding light.”

Facial recognition in 2020: biased and wrong https://www.hypergridbusiness.com/2021/01/facial-recognition-in-2020-biased-and-wrong/?utm_source=rss&utm_medium=rss&utm_campaign=facial-recognition-in-2020-biased-and-wrong https://www.hypergridbusiness.com/2021/01/facial-recognition-in-2020-biased-and-wrong/#respond Mon, 04 Jan 2021 21:18:11 +0000 https://www.hypergridbusiness.com/?p=73523

Imagine you’re driving to work, get pulled over, and are told you’re being arrested for a felony. This happened to Michael Oliver. Last year, he learned that the police had identified him from one grainy cellphone video of a man who didn’t look like him at all. Fortunately, his case was dismissed, but even being arrested is no joke.

He wasn’t the only one who learned not to trust AI facial recognition in 2020.

“Facial recognition in law enforcement is a big deal,” AI expert Steve Shwartz told Hypergrid Business. “We can recognize terrorists but the problem is that the facial recognition software is by and large biased against minorities.”

Harrisburg University researchers said they developed software that could predict if someone is a criminal, based solely on a picture of their face. Critics said it wasn’t based on accurate science and 1,700 academics signed an open letter demanding the research remain unpublished. The press release about the software has since been deleted from the university website.

“In China they’ve basically hooked up the feeds from almost every camera in the country and they’re monitoring them centrally using AI computers and technology to bring about what’s effectively a big brother scenario from George Orwell’s 1984,” said Shwartz.

“China uses facial recognition to profile Uyghur individuals, classify them on the basis of their ethnicity, and single them out for tracking, mistreatment, and detention,” a bipartisan group of seventeen senators said in a letter to Secretary of State Mike Pompeo in March 2020.

Studies show disturbing results

An ACLU study tested Amazon’s face surveillance technology and found it incorrectly matched 28 members of Congress as people who had been arrested for a crime.

“The thing that’s really scary is that of those 28 people, 40 percent were people of color, but Congress only has 20 percent people of color, so it’s identifying twice as many people of color incorrectly as criminals,” said Shwartz.

The errors in facial recognition don’t just affect one group.

“With domestic law enforcement images, the highest false positives are in American Indians, with elevated rates in African American and Asian populations — the relative ordering depends on sex and varies with algorithm,” said a 2019 report by the National Institute of Standards and Technology Interagency.

“We found false positives to be higher in women than men, and this is consistent across algorithms and datasets,” said the report.

Hope for 2021

It looks like big business and local and state governments are seeing the risks and taking action this year.

Some cities, including Boston and San Francisco, have already banned the use of facial recognition technology, citing its potential for abuse.

Large tech companies including IBM, Amazon, and Microsoft stopped selling their facial recognition to law enforcement after nationwide protests called for changes in policing.

Facial recognition software grew twenty times better at searching a database to find a matching photograph between 2014 and 2018, yet that progress hasn’t stopped major vendors like Amazon, Microsoft, and IBM from pulling out of law enforcement contracts this year.

A new law will take effect on July 1, 2021 in the state of Washington, which places safeguards on the “unconstrained use of facial recognition services by state and local government agencies which poses broad social ramifications.”

“One thing we’ll start seeing is companies emerging to help people who develop AI software make the software not discriminate, make sure it’s compliant with laws, and to analyze the risks involved in rolling out AI software,” said Shwartz.

It’s good to see businesses and governments in the United States take responsibility for AI facial recognition software and not try to force it down our throats.

Facial recognition is an emerging technology that could benefit society in the long run, but it has flaws that absolutely need to be fixed before mass implementation. It’s a good thing we haven’t adopted it here the way China has, under a type of government rule most Americans wouldn’t want to live under.

With government oversight, including strong legislation to protect individual rights and in cooperation with the tech companies that develop facial recognition, this technology could be used to benefit the general public, but we’re not there yet.

Experts predict how AI will look in 2030 https://www.hypergridbusiness.com/2020/12/experts-predict-how-ai-will-look-in-2030/?utm_source=rss&utm_medium=rss&utm_campaign=experts-predict-how-ai-will-look-in-2030 https://www.hypergridbusiness.com/2020/12/experts-predict-how-ai-will-look-in-2030/#respond Tue, 15 Dec 2020 15:33:14 +0000 https://www.hypergridbusiness.com/?p=73309  

(Image courtesy Geralt via Pixabay.)

Artificial intelligence is doing some amazing things in 2020.

It’s driving our cars.

It can create art and music.

It’s reading our brain waves and letting us control computers and other devices with our minds.

AI will only get better at all these things, and some new developments that we haven’t even thought of could be just around the corner.

We asked top experts in the field, people actually building the brave new world of AI, to stretch their brains and come up with some predictions.

Let’s take a mental field trip to 2030 and check out the possibilities.

1. You’ll be able to choose how things look, feel, and smell

If you want to redecorate your living room today you’ve got to get rid of your old furniture and get a bunch of new stuff.

In ten years you might be changing not only the way your couch looks but even how it feels when you sit on it, all at the flip of a switch. Want it to look and feel like a leather couch? No problem. Tired of those flowers on the table? Presto, now they look and smell like roses.

Zohar Gilad

“While some of the building blocks are there today, the big advancement would be in synthesizing touch. How can you make a surface feel different to human touch,” Zohar Gilad, CEO at shopping optimization platform InstantSearch, told Hypergrid Business.

While the convenience would be amazing, the biggest benefit could be to our planet.

“The bigger question is the environmental impact,” said Gilad. “Every one of us is guilty of overconsuming the planet’s resources. We buy new clothes, furniture, and stuff, not because the old is not usable, but rather to change, refresh or revitalize,” he said.

Being able to instantly satisfy the human need for change and save the planet at the same time sounds like a no-brainer.

2. National identities could be seriously blurred

If you live in France and only speak French, you might not have easy access to a lot of online content from America. Those videos and articles sure look cool but if only you could understand what the heck they’re saying, and the text translations just aren’t that good if they’re even available.

In 2030, there might be no gap between a person posting content online in one country and another person somewhere else in the world being able to instantly access and understand it in their own language.

Venkat Viswanathan

“Audiences of media and entertainment will also likely expand beyond national and linguistic borders, as natural language processing, automatic speech recognition, and machine translation disseminate content across languages instantaneously,” Venkat Viswanathan, chairman at digital analytics firm LatentView, told Hypergrid Business.

The way the world interacts could change significantly if everyone understands each other without the lost in translation part.

“Interestingly, national identities might become less pronounced in this environment as content becomes a cross-cultural connector and insulator from more domestic conceptions of culture,” said Viswanathan.

“Friendships born out of shared interest and content communities will increasingly create egalitarian social networks that are geographically diverse,” he said.

3. Your job won’t exist

It’s not all sunshine and butterflies. While the robots probably won’t rise up and kill us in 2030, they just might take away that job you thought was irreplaceable.

Pandurang Kamat

“By 2030, AI-created unemployment will be a major societal concern. Roles such as car or truck drivers, equipment operators, medical diagnostic professionals, and lawyers will see a dramatic shift to AI-enabled intelligence and autonomy and therefore job losses for humans,” Pandurang Kamat, chief technology officer at global solutions company Persistent Systems, told Hypergrid Business.

Workers will need to be retrained as AI advances and takes over jobs that only humans did.

Governments will have to be proactive in creating new laws and programs to protect and reeducate the displaced workforce.

“As AI progresses, the role of government becomes more and more important. The need to study and understand AI technology is crucial, but there’s also a humanitarian need to proactively create programs for those about to be displaced,” said Andrew Bocskocsky, CEO of B2B search engine Grata.

“Jobs that rely on emotional intelligence, like sales, leadership and management are safe in the next decade,” he added.

4. Automation’s the name of the game

Your car won’t even have a steering wheel in ten years. It’ll drive you wherever you want to go, and drones will deliver whatever you want to buy or eat.

Ryohei Fujimaki

“Supply-chain will be automated and machines will automatically replenish stocks without human intervention,” Ryohei Fujimaki, CEO of data science platform dotData, told Hypergrid Business.

AI will be everywhere on a consumer and business level, taking care of all those annoying little jobs we’re still doing ourselves.

“This means that bots will handle simple tasks for us like reorganizing our desktop or refilling our Amazon Prime shopping cart and placing the order for you,” added Grata’s Bocskocsky.

5. AI will be embedded in everything from cameras to shoes

Amazon might be able to recommend a good pair of shoes, but once you’ve bought them it doesn’t know what those shoes are up to. Almost anything you buy could be using AI in the future to report how it’s being used.

“If I buy running shoes, they will transmit to the seller how many times I run and the systems will be able to predict when I need new shoes or what else can be proposed to me based on its use patterns,” said Gilad.

AI-embedded devices could find their way into our creative lives too.

“Video cameras and AI avatars could work with humans to direct and edit high-quality self-made films,” said Viswanathan. “People without any formal music training will be able to leverage specialized hardware powered by AI to produce high-quality music. Increasingly, artistic output will be an AI-assisted endeavor that is limited mostly by creativity rather than skill,” he said.

6. AI will be embedded in our brains

AI could be directly interacting with all our devices and our brains as well.

Monte Zweben

“In ten years, we will have pinpointed with a good degree of accuracy where every sensation and thought could lie in the brain,” Monte Zweben, CEO of data platform Splice Machine, told Hypergrid Business.

It won’t necessarily be a brain chip either.

“A glass contact lens that connects to your brain doesn’t seem too far-fetched. All the information in the world will be just a thought away,” said Zweben.

Virtual reality gatherings have become more common during the pandemic and those meetings could feel just like the real thing in ten years.

“Being able to embed virtual reality directly into your brain would make virtual meetings feel real, so you could simulate the feeling of being hugged when FaceTiming a friend,” said Zweben.

7. Turing test schmuring test

The Turing test is a test of a machine’s ability to exhibit intelligent behavior equivalent to or indistinguishable from that of a human, and some AI programs are starting to beat it.

The idea of a machine talking and acting like a human may sound novel today, but people could just take this for granted in ten years.

Consumers and corporations will work hand in hand with AI and the lines of whether you’re interacting with humans or computer programs called bots could be totally blurred.

Andrew Bocskocsky CEO Grata

“This combination of bots and humans will grow to be so successful over the next decade that people may not even realize when they’re interacting with a bot vs. a human,” said Grata’s Bocskocsky.

Not only will our AI pals interact with us like they’re humans, but we’ll all have them.

“By 2030, we can expect that everyone, even children, will have some type of virtual assistant,” Ajay Dawar, vice president of product management at digital document transformation company Conga, told Hypergrid Business.

Ajay Dawar at Conga

And it’ll be smart. “For instance, a nine-year-old may ask an assistant to search the Internet to find the most important content for an assignment, and then copy and paste the information in the document, all while having breakfast,” he said.

AI won’t just help with homework.

“That same child may ask the assistant whether her friend from class has a crush on another classmate – and the assistant will be able to make an accurate prediction based on the digital social interactions and data collected over billions of people,” added Dawar.

8. You don’t need to know a gosh darn thing

We have troubleshooters today for simple tech problems, but soon AIs will be troubleshooting complex industrial problems that currently need highly educated and experienced people.

John Tomizuka

“Much of the time spent troubleshooting and addressing problems in complex industrial systems today is wasted trying to figure out what’s wrong in the first place,” John Tomizuka, CTO of industrial AR solutions company Taqtile, told Hypergrid Business.

People working in industrial environments today might find themselves digging through a physical manual in an emergency setting if the right person isn’t there to fix the problem. Not so in ten years.

“In time, we will have systems capable of diagnosing problems and leading even novice users through the process of troubleshooting complex challenges,” said Tomizuka.

9. The AI doc knew just what I needed

Doctors are already using AI in 2020 with great results.

Matt Jackson

“In October, researchers at MIT developed an AI model capable of detecting asymptomatic COVID-19 infections through cellphone-recorded coughs alone,” Matt Jackson of connected platform Insight told Hypergrid Business.

AI is helping doctors diagnose patients and prepare personalized intervention plans. In the future, it could figure out the entire treatment for them.

“In the next decade, advancements in AI could help predict how a patient’s cancer cells might behave, creating highly customized, targeted treatments for individual patients,” said Jackson.

10. AI won’t kill us all…

Most of us have seen or at least heard of the Terminator movies, where AI-powered robots rise up to destroy humanity, but this doesn’t seem too likely.

“The frightening, futurist portrayals of Artificial Intelligence that dominate films and novels, and shape the popular imagination, are fictional,” said a Stanford study. “In reality, AI is already changing our daily lives, almost entirely in ways that improve human health, safety and productivity,” the study said.

There will be challenges as governments and the workforce will have to adapt to AI’s effects on industry and commerce, but not the “fighting off killer robots in a post-apocalyptic wasteland” kind.

(Image courtesy Geralt via Pixabay.)

11.  … or WILL it?

Here’s one for the pessimists: killer robots.

“The battlefield of the future will be densely populated by a variety of things- some intelligent and some only marginally so — performing a broad range of tasks including sensing, communicating, acting, and collaborating with each other and human warfighters,” states a paper published by the US Army research laboratory back in 2016.

Think that’s far-fetched?

Militaries across the world are already planning on greatly expanding their use of AI and robots on the battlefield.

The head of the UK’s military has said that he expects up to a quarter of all soldiers could be robots by the 2030s.

If the battlefield is populated by a variety of intelligent devices, some of them designed to fight, is it only human vanity that leads us to think we’ll always be in control of our own creation?

Create mind-control apps with new NextMind DevKit https://www.hypergridbusiness.com/2020/12/nextmind-releases-devkit-for-worlds-first-real-time-brain-sensing-wearable/?utm_source=rss&utm_medium=rss&utm_campaign=nextmind-releases-devkit-for-worlds-first-real-time-brain-sensing-wearable https://www.hypergridbusiness.com/2020/12/nextmind-releases-devkit-for-worlds-first-real-time-brain-sensing-wearable/#respond Tue, 08 Dec 2020 15:27:16 +0000 https://www.hypergridbusiness.com/?p=73238
(Image courtesy NextMind.)

You can now create apps controlled by a user’s thoughts. Yup, you can operate computers, lights, TVs, music, games with your mind — as long as you’ve got a little lighter-sized device strapped to your head.

Neurotechnology startup NextMind is now shipping its 399 Euro development kit for a real-time brain-sensing wearable, the company announced in a press release. There’s no news yet on when a general consumer device will arrive.

The cigarette-lighter-shaped wearable weighs just two ounces and can be clipped to a headband, baseball cap, or virtual reality headset. Its built-in battery is supposed to last for eight hours of use and charges via a USB-C connection.

The sensor uses artificial intelligence to translate signals from your brain and send them to your computer.

According to the company, you can program anything from no-code applications that take a few minutes to build to advanced AR and VR experiences.

The NextMind development kit includes the sensor,  an adjustable headband, the AI algorithms, as well as additional resources such as tutorials, demo apps and games, and code building blocks.


“This is a huge milestone for NextMind and a tribute to all our scientists, investors, and partners who have helped us bring a truly transformational experience to the developer community,” said NextMind CEO Sid Kouider in a statement.

“We believe this technology will change the gaming industry in massive ways due to how fully immersive and exciting the NextMind experience is. Our product will truly make you feel like a Jedi, and we cannot wait to see what NextMind can bring to the global developer community as well as to the future of brain-computer interfaces,” he added.

NextMind’s development kit is compatible with a wide range of digital platforms, including macOS, Windows 10, Oculus, HTC Vive, and HoloLens.

NextMind is a winner of two CES 2020 innovation awards, including Best of Innovation in Virtual & Augmented technology and an Honoree award in Wearable Technologies.

15 enterprise AI predictions for 2020 https://www.hypergridbusiness.com/2019/10/enterprise-ai-predictions-for-2020/?utm_source=rss&utm_medium=rss&utm_campaign=enterprise-ai-predictions-for-2020 https://www.hypergridbusiness.com/2019/10/enterprise-ai-predictions-for-2020/#respond Wed, 02 Oct 2019 15:53:38 +0000 https://www.hypergridbusiness.com/?p=67707
(Image courtesy Geralt via Pixabay.)

This year, self-driving cars started getting pretty good. Deepfake videos started getting pretty convincing. Our virtual assistants got to the point where they could understand us well enough to do some simple things, like tell us the weather or get driving directions home.

When it comes to artificial intelligence, we have reached an inflection point. The technology is good enough to use.

Next year promises to be a breakout year for AI, as it starts to permeate all aspects of our lives.

Here are predictions for 2020 from some of the world’s top AI experts.

1. We’ll start trusting AI to do stuff for us, not just answer questions

Jen Snell at Verint

Jen Snell is VP of product marketing at Verint, where she leads a product strategy team focused on intelligent self-service, conversational AI, automation, and analytics. She is a frequent speaker and a leading contributor on topics shaping the development and design of interactive technologies. Follow her on Twitter @JenniferLSnell and on LinkedIn.

Hypergrid Business: What are you expecting from AI in 2020?

Jen Snell: In 2020, we’re going to see AI become truly assistive rather than remaining as an assistant in the workplace.

HB: What do you mean by that?

JS: AI in enterprises today is primarily about insights and helping leaders make decisions. And while that’s useful, it’s not action. I think we’re on the precipice of seeing the AI’s focus shift from knowledge to doing.

HB: Are you talking about automation?

JS: It’s not simply automation, although that’s certainly part of it too, that we’re going to see dramatically improve and expand. But we can also think about AI actually being able to shut down a pipeline if it detects a leak, or complete scheduling requests from top-to-bottom or even providing concrete recommendations instead of just insights.

HB: Do you think companies will be forcing AI on their employees?

JS: Up until now, we’ve seen the AI mandate in enterprises come from the top down. However, in 2020 I believe this is going to change, and we’re going to begin to see the push for AI adoption come from the bottom up. Employees will become the key champions of AI in the workplace and will change the way we think about enterprise adoption.

HB: Are you talking about younger employees?

JS: While I think this change will be driven by the incoming AI Natives—those new employees who have grown up with AI on their phones and in their homes—we’ll see this demand for conversational, intelligent tools transcend generational divisions and instead become an age-agnostic groundswell within the enterprise.

Jen Snell isn’t the only one who thinks that AIs will get a lot more capable next year.

Andy Vitus has been investing with Scale Venture Partners for nearly 15 years and focuses on next-generation enterprise software. He sits on the boards of CircleCI, DataStax, JFrog, Matillion, PubNub, Realm, Treasure Data, Unbabel, and Unifi. Follow him on Twitter @avitus and on LinkedIn.

Hypergrid Business: What are you expecting from AI in 2020?

Andy Vitus

Andy Vitus: We’re already seeing the emergence of purpose-built intelligence powering software that is far more automated and helpful. In 2020 we’ll see clear evidence that we’re moving past the traditional software dynamic where input from an operator or administrator is required for software to deliver its intended value.

HB: Can you give an example?

AV: This shift moves us to a world where Salesforce, now a system of record, has the intelligence to tell users how best to use their time, without the need for manual input from the software’s operator. For enterprises, purpose-built intelligence offers enormous potential to deliver productivity improvements and actionable business intelligence.

2. We will start asking AIs to explain themselves

Stuart Battersby is the CTO of Chatterbox Labs, where he leads a team working on AI explainability. He has patented explainable AI research that is an industry first. Follow him on LinkedIn.

Hypergrid Business: What are you expecting from AI in 2020?

Stuart Battersby of Chatterbox Labs

Stuart Battersby: 2020 is the year that enterprise customers are going to question the outcomes made by their AI systems. As these systems have made their way into both the boardroom and everyday business processes, business users require more than just a confidence or prediction score made by the machine.

HB: Will AI adoption slow down while we wait for the explanations?

SB: By no means do I see this as causing a slow down in adoption of enterprise AI, rather that a new wave of explainability is going to bring in even greater adoption. Highly accurate AI systems that can be explained to business users will revolutionize business.

Stuart Battersby isn’t the only one who says we’ll want to know why AI does what it does.

Krishna Gade is the founder and CEO of Fiddler Labs, an enterprise startup building an explainable AI engine to address problems regarding bias, fairness, and transparency in AI. He previously held senior engineering leadership roles at Facebook, Pinterest, Twitter, and Microsoft. Follow him on Twitter @KrishnaGade.

Hypergrid Business: What are you expecting from AI in 2020?

Krishna Gade: Consumers and enterprises alike will pay more attention to the predictions and outcomes from AI.

HB: Can you give any examples?

KG: There will be a lot of why. Why did my credit card or loan get denied? Why was this transaction identified as fraudulent by my model? Why was my diagnosis incorrect? Why did my AI predict that this customer will churn?

HB: Will we get answers?

KG: Not enough.

HB: So what happens then?

KG: Explainability will be a necessity to launch AI models into production especially in regulation heavy industries like finance, insurance, healthcare and any other vertical that demands consumer safety. We’re already starting to see more and more regulations come on the market that demand companies have the ability to explain automated decisions, and this will only increase across the U.S and globally.

3. Non-experts will start creating AI apps

Ryohei Fujimaki is the founder and CEO of dotData. Prior to founding dotData, he was the youngest research fellow ever in NEC Corporation’s 119-year history.  Fujimaki received his Ph.D. degree from the University of Tokyo in the field of machine learning and artificial intelligence. Follow him on LinkedIn.

Hypergrid Business: What are you expecting from AI in 2020?

Ryohei Fujimaki: In 2020, artificial intelligence will continue to expand with more focus on the democratization of the process. Enterprises will need to expand beyond traditional data scientist roles and will turn to data science automation and AutoML platforms to boost the productivity of citizen data scientists – business analysts and other data experts who have domain expertise but are not necessarily skilled data scientists.

HB: So what does that mean? More AI applications?

RF: This continued democratization will likely lead to new use-cases that are closer to the needs of business users and will enable faster time-to-market for AI applications in the enterprise. As AI and ML models become easier to create using advanced AutoML 2.0 platforms, data scientists and citizen data scientists will begin to scale ML and AI model production in record numbers.

HB: Do you see any downsides?

RF: Organizations will need to pay special attention to data collection, maintenance and privacy oversight to ensure that the creation of new, sophisticated models does not violate privacy laws or breach privacy concerns for consumers.

4. When it comes to privacy, AI won’t just make the problem worse — it will also be used to protect privacy

Marty Puranik is the founder, president, and CEO of Atlantic.Net, a hosting solutions provider in Orlando, Florida. Read Marty Puranik’s articles at Information Management.

Hypergrid Business: What does AI have to do with privacy?

Marty Puranik of Atlantic.Net

Marty Puranik: There is a big emphasis on collecting data and utilizing AI to make the best use of it. Dependence on AI will continue to grow as there are more data points and meaningful optimization of those data points.

HB: Sounds like the more AI you want, the more data you collect. Why is this a problem? 

MP: With the latest passage of the California Consumer Privacy Act (CCPA) and wide acceptance of the General Data Protection Regulation (GDPR), businesses are required to safeguard consumer information and privacy.

HB: Can AI help companies comply with these laws?

MP: Today, there are already a lot more AI initiatives related to cybersecurity and compliance. As the demand for more regulatory compliance increases and the number of cybersecurity-related data breaches grows, dependence on artificial intelligence will also grow in order to scale those initiatives.

5. Doctors will use AI to help patients

Monte Zweben is the CEO and co-founder of Splice Machine. His early career was spent with the NASA Ames Research Center as the deputy chief of the artificial intelligence branch, where he won the prestigious Space Act Award for his work on the Space Shuttle program. Follow him on Twitter @mzweben.

Hypergrid Business: What are you expecting from AI in 2020?

Monte Zweben of Splice Machine

Monte Zweben: The past few years have yielded substantial progress for MS patients and the physicians who treat them, with more treatment options available than ever. But challenges remain in determining the disease’s unique trajectory in individual patients. To solve this challenge, physicians are turning to new, AI-powered solutions that use machine learning to help guide medical decision-making for MS patients.

HB: How does AI help?

MZ: By understanding a wide array of patient data simultaneously — factors ranging from age and weight to patients’ mobility, dexterity, cognitive ability, and gait — the model helps guide neurologists’ decisions as they predict how particular patients’ diseases will progress, and to ensure all patients are given the treatments they are most likely to respond well to, with minimal side effects.

6. AI will help protect companies against hackers

Sam Curry is the chief security officer at Cybereason. He is a cybersecurity visionary and thought leader and over the years has been interviewed by dozens of journalists and has appeared on many broadcast outlets discussing cybersecurity trends, cyber threats and the impact on corporations and consumers. Earlier this year, SC Magazine honored Sam with a “Visionary Award” for his nearly 30 years of contributions in shaping the cybersecurity industry. Follow him on Twitter @samjcurry and on LinkedIn.

Hypergrid Business: What are you expecting from AI in 2020?

Sam Curry of Cybereason

Sam Curry: Artificial intelligence is perhaps the most overhyped and simultaneously under-hyped technology since the advent and explosion of the Internet itself. It’s overhyped because it has been used, abused and tortured by too many marketing departments exaggerating or outright lying about features and capabilities, using AI as a term to dazzle and seduce customers. Under-hyped because AI will eventually change the game entirely in security and many other industries.

HB: So will AI start to replace cybersecurity professionals next year?

SC: The real advances in AI in security in the next year will not be the creation of an uber-intelligence or an analyst-in-the-machine, but rather assistance, automation, and improvements in efficiency. These incremental advances will be the most significant AI developments of 2020. This is a subtler, less flashy use of AI, but it will actually favor the defender over the medium and long term, which is good news for security and the companies we serve.

HB: But will AI eventually take over all those jobs?

SC: One day we will solve the Turing Challenge and have an AI that attacks for us and defends for us, but that day is far in the future, in spite of the hyperbole from marketing and futurists. Whether such an intelligence would even want the job without getting bored, or would refrain from turning on the rest of us, is a moot question, because AI for improving security is more important than AI for doing security.

7. Criminals will start using AI to break into companies

Joe Jaroch is the senior director of cybersecurity strategy at Webroot and enjoys acting as a key strategic adviser on a range of technologies. He has more than 20 years of experience in cybersecurity and software, including building the first cloud-based antimalware product. Follow him on LinkedIn.

Hypergrid Business: What are you expecting from AI in 2020?

Joe Jaroch of Webroot

Joe Jaroch: Adversarial attacks against AI-based security products will grow in scope and complexity.

HB: How will companies defend themselves?

JJ: It will become clear that there are fundamentally two types of AI in cybersecurity: AI which acts like a smarter conventional signature and AI which is built into every facet of an intelligent, cloud-based platform capable of cross-referencing and defending itself against adversarial attacks.

8. Criminals will use AI to impersonate people

Hal Lonas is the chief technology officer at Webroot and has more than 25 years of experience in security, enterprise software, and engineering. He has co-authored several patents and holds a bachelor of science in aeronautics and astronautics from MIT. Follow him on LinkedIn.

Hypergrid Business: What are you expecting from AI in 2020?

Hal Lonas of Webroot

Hal Lonas: We will see more AI experimentation by cybercriminals which will drive an increase in scale of attacks in 2020.

HB: Can you give any examples?

HL: Perhaps the scariest scenario is the use of AI as a component in the production of deepfakes, where it could be used to automatically edit out the artifacts and glitches that today make it possible to differentiate between real and fake.

HB: Right, we already saw some examples of hackers stealing money from companies using deepfake voices.

HL: Eventually, we will see a world in which we cannot be certain whether the entity on the other end of any conversation – phone call, email, text, tweet, article, or feed – is human.

9. Speech interfaces will start to replace screens

Dan Tuchler, chief marketing officer of SecurityFirst, has led marketing and product management organizations from pre-revenue startups to billion-dollar product lines. Follow him on LinkedIn.

Hypergrid Business: What are you expecting from AI in 2020?

Dan Tuchler at SecurityFirst

Dan Tuchler: AI presents many other areas for innovation. Some are familiar to consumers – like Alexa, Siri, or Google Assistant – and these will continue to become faster by decentralizing the interpretation of natural language, moving it closer to the user or even onto the user’s device. But in the enterprise, we are just starting to see what this technology can bring to IT and other departments.

HB: What do you mean?

DT: With AI-based assistants, enterprise equipment will begin to respond to verbal, natural-language queries, and then to operational commands. All three of the Internet giants are offering development kits and support for adding voice interfaces to all sorts of products, leveraging their back-end AI infrastructure. Voice-powered user interfaces should appear in 2020.

HB: But don’t the personal assistant get things wrong a lot?

DT: Of course we all have our favorite stories about personal assistants misunderstanding us in comical ways, so the technology will have to become a bit more reliable before we fully trust it for mission-critical uses.

HB: What’s next after speech recognition?

DT: AI leads to programs that can read facial expressions, leverage human behaviors they have learned across millions of users, and compare present behaviors to the context of all prior interactions. Some of these concepts are unnerving, but they are coming.

10. AI will make online chatbots usably intelligent

Ido Bornstein-HaCohen is the CEO of Conversocial. Prior to that, he had more than 15 years of experience in executive leadership positions at LivePerson and SAP. Follow him on LinkedIn.

Hypergrid Business: What are you expecting from AI in 2020?

Ido Bornstein-HaCohen at Conversocial

Ido Bornstein-HaCohen: In 2020, expect chatbots to become ubiquitous in the realm of customer service. There are currently over 40 million businesses communicating with customers over Facebook Messenger, and 300,000 chatbots deployed on that platform alone to automate consumer conversations. Expect these numbers to soar over the next 12 months as brands take advantage of the leap in AI technology.

HB: Why will AI make a difference?

IB: Bots are no longer limited to simplistic customer interactions. They use natural language processing to better comprehend the user's intent and deliver useful, appropriate responses. This enables a system that can automatically respond to unstructured, free-text questions it may not have seen phrased in exactly that way before. With more conversations being successfully navigated by bots, brands will increase their usage in order to improve response times and drive greater contact-center efficiencies.

HB: Can you give an example?

IB: A system like this can learn from historical conversations how to respond to all the different types of situations that occur when someone may ask for a refund — without needing to have those rules written programmatically in advance.

HB: What about the times when the AI can’t figure out what the customer is asking?

IB: Voice-activated bots sometimes have trouble understanding human commands and certainly have trouble processing humor or sarcasm. These fixes are a priority for 2020.

HB: So what will the bots do in those cases?

IB: Answers will be fully automated only when the AI system is completely confident the answer is right. In all other cases, the answer will be either human-approved or human-written. In turn, this will enable a smaller and smaller pool of agents to serve customers in an increasingly fast and effortless way. It will revolutionize the customer service model. This revolution is in its infancy right now — but all the relevant elements are in place to make it take off.
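Conversocial hasn't published the internals of its system, but the confidence-gated automation Bornstein-HaCohen describes can be sketched roughly as follows. The intent classifier, the toy training data, and the 0.9 threshold are all assumptions made for illustration, not details of the actual product.

```python
# Illustrative sketch of confidence-gated automation: the bot answers on its own
# only when its intent classifier is highly confident; otherwise a human steps in.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Historical conversations labeled with the intent they turned out to express.
historical = [
    ("I want my money back for this order", "refund"),
    ("Where is my package?", "shipping_status"),
    ("Can I get a refund? The item arrived broken.", "refund"),
    ("My delivery is late, when will it arrive?", "shipping_status"),
]
texts, intents = zip(*historical)

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(texts, intents)

CONFIDENCE_THRESHOLD = 0.9  # assumed value, tuned in practice

def route(message: str) -> str:
    confidence = clf.predict_proba([message])[0].max()
    intent = clf.predict([message])[0]
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"auto-reply using the '{intent}' template"
    return "hand off to an agent; the reply is human-written or human-approved"

print(route("I'd like a refund please"))
```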

11. We will start seeing AIs listed in employee directories

Sean Lane is the CEO of Olive and a lifelong technologist. He started his career in the U.S. intelligence community, holding leadership positions in the Department of Defense and the National Security Agency. After government service, Sean became an entrepreneur, founding several technology companies dealing with big data, advanced analytics, entity resolution, and, most recently, artificial intelligence; he is now a leading expert in applying AI in healthcare. Follow him on Twitter at @unleashcbus and on LinkedIn.

Hypergrid Business: What are you expecting from AI in 2020?

Sean Lane of Olive

Sean Lane: We’re quickly moving into a new era where digital employees are becoming commonplace. With AI, human employees are able to take on higher-value tasks — ones that require strategy, empathy and patient interaction.

HB: You mean “digital employees” metaphorically, right?

SL: I believe that within the next twelve months, digital employees will be on hospitals’ org charts, working side-by-side with human employees, and in five years, 30 percent of healthcare workforces will be digital. It’s inevitable.

12. AI will help companies find employees

Armen Berjikly is the senior director of growth strategy at Ultimate Software, where his expertise in human-computer interaction drives Ultimate's transformative artificial intelligence platform and direction. Prior to Ultimate, Armen served as founder and CEO of Kanjoya Inc., where he developed revolutionary advancements in natural language processing. Follow him on Twitter at @armenberjikly and on LinkedIn.

Hypergrid Business: What are you expecting from AI in 2020?

Armen Berjikly at Ultimate Software

Armen Berjikly: In the coming year, AI will play a significant role in helping companies step into the future of recruiting.

HB: Can you explain how?

AB: Recruiters will have access to more powerful and efficient candidate-screening tools, so they can focus on high-quality prospects while retaining the ability to examine a large pool of applicants. Candidates, in turn, will benefit from getting to focus on high-quality opportunities where they are more than just an on-paper match.

HB: Do you mean that AIs will be scanning resumes for keywords?

AB: These tools will deploy technologies such as natural language processing to go far beyond traditional resume parsing, identifying candidate fit through soft skills, which many HR professionals say are as important as, or even more important than, technical skills.
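Ultimate Software hasn't published how its platform scores soft skills, but as a rough illustration of going "beyond resume parsing," one can compare a resume against short soft-skill descriptions by semantic similarity rather than exact keyword matches. The library choice (spaCy), the skill descriptions, and the sample resume text below are all assumptions for the sketch.

```python
# Illustrative only: score a resume against soft-skill descriptions by semantic
# similarity instead of exact keyword matching.
# Requires the medium English model: python -m spacy download en_core_web_md
import spacy

nlp = spacy.load("en_core_web_md")

soft_skills = {
    "collaboration": "works well with others and communicates across teams",
    "adaptability": "handles change and learns new tools quickly",
    "leadership": "mentors colleagues and takes ownership of projects",
}

resume_text = (
    "Coordinated a cross-functional group of engineers and designers, "
    "picked up a new framework mid-project, and coached two junior hires."
)
resume_doc = nlp(resume_text)

# Higher similarity suggests the resume expresses the skill even when the
# keyword itself never appears.
for skill, description in soft_skills.items():
    score = resume_doc.similarity(nlp(description))
    print(f"{skill}: {score:.2f}")
```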

13. AIs will start writing

Pavel Dmitriev is the vice president of data science at Outreach, a sales engagement platform. He was previously a principal data scientist with Microsoft’s analysis and experimentation team, where he worked on scaling experimentation in Bing, Skype, and Windows OS. Follow him on Twitter @testlearnwin and on LinkedIn.

Hypergrid Business: What are you expecting from AI in 2020?

Pavel Dmitriev: In 2020, we will start seeing real-world applications of synthetic text — machine-generated snippets and stories — both in legitimate business contexts and for malicious causes.

HB: I saw some news about that earlier this year.

PD: OpenAI reported they had built a language model capable of auto-generating human-quality text, which they did not release for fear it might be used to generate fake news or for other malicious purposes. OpenAI may have been the first to get there, but in 2020 others will catch up. As they do, the technology will spread fast, and real-world applications will emerge both in legitimate business scenarios, such as auto-composing personalized, targeted sales emails, and in malicious contexts such as fake news.
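OpenAI did later release smaller versions of that model (GPT-2), and producing synthetic text from them now takes only a few lines. The snippet below is a minimal sketch using the Hugging Face transformers library and the public "gpt2" weights; it illustrates the general capability Dmitriev describes, not any tool he or Outreach specifically uses.

```python
# Minimal sketch: generating synthetic text with the publicly released GPT-2
# weights via the Hugging Face transformers library (pip install transformers).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Thank you for your interest in our analytics platform."
result = generator(prompt, max_length=60, num_return_sequences=1)
print(result[0]["generated_text"])
```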

14. AI will make smart cities even smarter

Zachary Jarvinen is the head of technology strategy of AI and analytics for OpenText. Prior to OpenText, Zachary ran product marketing at a data analytics startup that was a part of the Obama Presidential Campaign Digital Team in 2008. Follow him on Twitter @zachonomics.

Hypergrid Business: What are you expecting from AI in 2020?

Zachary Jarvinen of OpenText

Zachary Jarvinen: In 2019, we saw smart cities truly begin to come of age. The next phase in this evolution will be a significant roll-out of smart-city AI implementations. Large organizations have long utilized AI and analytics to turn unstructured data into more actionable insights. Now, artificial intelligence is opening the door for applications and networks outside the workplace to harness big data more intelligently, engage with citizens in new ways, and, as a result, make our cities more efficient and sustainable.

HB: Can you give any examples?

ZJ: AI can transform a city's infrastructure and power utilization, make strides in public safety and healthcare, and even make public parking more efficient. In 2020, AI will usher in even greater opportunities to make the smart city dream more of a reality.

15. AI will make the world around us responsive — and creepy

Chris Nicholson is the CEO of Skymind, which applies reinforcement learning to business simulations. He also spent a decade as a journalist, reporting on tech and finance for The New York Times, Bloomberg News, and Businessweek, among others. Follow him on Twitter @chrisvnicholson and on LinkedIn.

Hypergrid Business: What are you expecting from AI in 2020?

Chris Nicholson of Skymind

Chris Nicholson: Everything from your car to the shopping mall will become more responsive because it will be more personalized.

HB: What do you mean by responsive?

CN: Things and places will be able to recognize you, and they’ll know more about you. So life will become both creepier and more convenient at the same time.

HB: I don’t know how I feel about that.

CN: You’ll get used to it, just like you get used to the temperature in a room. You’ll come to expect responsiveness, and when you encounter dumb objects or dumb locations, you will feel that something is missing. You’ll feel ignored.

Scientists: Brain implants risky, could make us telepathic (Mon, 16 Sep 2019) https://www.hypergridbusiness.com/2019/09/scientists-brain-implants-risky-could-makes-us-telepathic/
(Image courtesy Gerd Altmann via Pixabay.)

It seems like something out of a sci-fi story, but you might be able to control a device or talk to a friend just by using your thoughts — and it could happen sooner than you think.

External and internal neural interfaces are already in use in the medical field today for stroke rehabilitation and epilepsy treatment. And they could be used for wider applications in the future, such as the treatment of Alzheimer’s disease.

This sounds promising but who’s in charge of where this technology will lead us and how it will be used?

In a report released last week, UK scientists called for a government investigation into neural interface technologies to stimulate innovation and allow the public to have a role in shaping the future of the field.

“The applications for neural interfaces are as unimaginable today as the smartphone was a few decades ago,” said Christofer Toumazou, chair in Biomedical Circuit Design at Imperial College London, in the report.

There are currently no internally-implanted neural interfaces outside of the medical field, but this could change quickly.

Tech mogul Elon Musk announced in July that his company Neuralink could start human trials as early as 2020, using electrodes inserted in the brain to help people with paralysis or locked-in syndrome control a computer or phone.

Facebook founder Mark Zuckerberg has shown interest in telepathic typing, and the company is supporting research that aims to create a headset able to transcribe a vocabulary of 1,000 words, at a rate of 100 words per minute, using only one's thoughts.

Mark Zuckerberg (Image courtesy Facebook.)

“By 2040, neural interfaces are likely to be an established option to enable people to walk after paralysis and tackle treatment-resistant depression,” said Tim Constandinou, co-chair of the report and Director of the Next Generation Interfaces Lab at the Imperial College London, in last week’s report. “While advances like seamless brain-to-computer communication seem a much more distant possibility, we should act now to ensure our ethical and regulatory safeguards are flexible enough for any future development.”

The report proposed that the government use this field to test a new regulatory approach, one that could accelerate the development of innovative technology in the future. It also recommends an investigation into the ethical and safety concerns presented by neural interfaces, calls for the public to be included in shaping how the technology will be used and regulated, and stresses ensuring the safety of citizens' neural data.

The report found strong support for neural interfaces in medical settings, such as helping sick or injured patients, but less support for incorporating the technology to enhance otherwise healthy people.

While we’re not there yet, “people could become telepathic to some degree,” the report said.

Using AI to prompt creativity (Fri, 31 May 2019) https://www.hypergridbusiness.com/2019/05/using-ai-to-prompt-creativity/

If you're a writer or an artist, you may cringe at the idea that computers are coming for your job. If you dig into how the content for AI art, poetry, and stories is generated, you'll breathe a little easier. I've done just that, using tutorials I found online and collaborative notebooks I found on GitHub. I compiled a text file of my novels and poetry and trained a model on it using Google's Colaboratory, which makes CPUs and GPUs available for free so creators can experiment with machine learning.

Warning: this is addictive stuff. Well, it's addictive if you don't let all the tech speak and brain-twisting math get in your way. You don't have to fully understand the mechanics of a car to use one to get from place to place. The same is true of AI and machine learning. If you want to train an AI on your own input data, just follow the tutorials and let the magic happen. Basically, the computer reads through tons of text and learns to predict which words should come next, according to parameters the training code sets. It's pretty obvious right off the top that the computer does not understand the context or meaning of the words.
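The author doesn't name the exact notebook she followed, but a common route at the time was the gpt-2-simple library, which many of those Colab tutorials wrap. The sketch below assumes that library, a made-up file name for the compiled text, and GPT-2's small public model; treat it as a rough outline rather than the author's exact workflow.

```python
# Rough sketch of fine-tuning GPT-2 on your own writing with gpt-2-simple
# (pip install gpt-2-simple). The file name and step count are illustrative.
import gpt_2_simple as gpt2

gpt2.download_gpt2(model_name="124M")  # the small, publicly released GPT-2 model

sess = gpt2.start_tf_sess()
gpt2.finetune(sess,
              dataset="my_novels_and_poems.txt",  # your compiled text file
              model_name="124M",
              steps=500)  # more steps means output closer to your own style

# Generate new text in (roughly) your own voice, seeded with a prompt.
gpt2.generate(sess, prefix="His heart clenched, his heart fluttered", length=100)
```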

This was generated by Google’s poetry generator:

My
transsexual
power in the rain,
The pig was seen, the little maiden was a son.

“Power” was the prompt word. The results are kind of like taking a fistful of magnetic poetry tiles and tossing them on a table and then organizing them into sentences.

And this came from the text generation project that OpenAI insists is too dangerous to release fully featured to the public:

If I listen long enough to you, you’ll understand. And you’re the only one who’ll understand…that you can be with me…”
Beneath all those words, he realized the meaning. His heart clenched, his heart fluttered…
“You can come, you can come again! I’m so sorry for having to put you through something like that. I won’t forget now!”
Beneath all those words, he realized that even if he didn’t speak, the tears wouldn’t die! He couldn’t hold back his emotions anymore…
I’ll never forget.

The bold text is the prompt I input. I think our jobs are safe for a few more months at least. Yet just because AI text generators can't replace you doesn't mean they can't be helpful. The blank page and the first draft, where you spark ideas and pull dialogue and narrative from nothing, are the lengthiest and most mentally exhausting part of writing. "Transsexual power" and "maiden sons" might inspire an interesting character. If your narrative contains the words "His heart clenched, his heart fluttered," feed that line to a text generator and see if it comes up with something fresh. I put it into TalktoTransformer:

Completion
his heart clenched
And there was the smile of a thousand years
That was boundless

Not great, but better, and something you can tweak or that can nudge your imagination in a new direction. So when you're feeling stuck for an idea, or just want to play with words to get into that creative mindset, try AI tools. And if building your own text generator isn't your thing, these two online tools will get you started exploring how AI generators can inspire your creativity.

This is the poem portrait I generated with the prompt word "garden." If you give permission, your webcam will take your picture and deliver a poem shaped around it. I haven't yet seen a single poem where someone gave permission for their picture to be taken. I didn't think that wise either, so I found this picture on Pixabay, used neural style transfer on Picsart to get moody shadows, and then used some layer masking in Photoshop to come up with my own version of a poem portrait after editing that original text.

My final poem inspired by the AI

AI Pearls

A prose garden gowned in tracts of dew
Where coin is solitude and selkie songs the charm
Shadows sing moonlight into the dusk’s blue-white gloom

My pen floats along the sea of invisible words,
A book in darkness growing layers
Like a pearl

~Nara

Article reprinted with permission from Nara Malone.
