<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	
	xmlns:georss="http://www.georss.org/georss"
	xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#"
	>

<channel>
	<title>Elon Musk &#8211; Digitex Solutions</title>
	<atom:link href="https://www.digiteex.com/tag/elon-musk/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.digiteex.com</link>
	<description>Digitex Solutions</description>
	<lastBuildDate>Fri, 10 Oct 2025 09:22:51 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	
<site xmlns="com-wordpress:feed-additions:1">224764844</site>	<item>
		<title>Google&#8217;s ex-CEO Eric Schmidt warns of homicidal AI models</title>
		<link>https://www.digiteex.com/googles-ex-ceo-eric-schmidt-shares-warns-of-homicidal-ai-models/</link>
					<comments>https://www.digiteex.com/googles-ex-ceo-eric-schmidt-shares-warns-of-homicidal-ai-models/#respond</comments>
		
		<dc:creator><![CDATA[digitex]]></dc:creator>
		<pubDate>Fri, 10 Oct 2025 09:22:47 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[Business]]></category>
		<category><![CDATA[CEO]]></category>
		<category><![CDATA[Elon Musk]]></category>
		<category><![CDATA[Eric Schmidt]]></category>
		<category><![CDATA[google]]></category>
		<category><![CDATA[hackers]]></category>
		<category><![CDATA[London]]></category>
		<category><![CDATA[OpenAI]]></category>
		<category><![CDATA[robots]]></category>
		<category><![CDATA[summit tech]]></category>
		<category><![CDATA[Tech]]></category>
		<guid isPermaLink="false">https://www.digiteex.com/googles-ex-ceo-eric-schmidt-shares-warns-of-homicidal-ai-models/</guid>

					<description><![CDATA[Talk about a killer app. Artificial intelligence models are vulnerable to hackers and could even be trained to off humans if they fall into the wrong hands, ex-Google CEO Eric Schmidt warned. The dire warning came Wednesday at a London conference in response to a question about whether AI could become more dangerous than nuclear [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>Talk about a killer app.</p>
<p>Artificial intelligence models are vulnerable to hackers and could even be trained to off humans if they fall into the wrong hands, ex-Google CEO Eric Schmidt warned.</p>
<p>The dire warning came Wednesday at a London conference in response to a question about whether AI could become more dangerous than nuclear weapons.</p>
<p>“There’s evidence that you can take models, closed or open, and you can hack them to remove their guardrails. So, in the course of their training, they learn a lot of things. A bad example would be they learn how to kill someone,” Schmidt said at the Sifted Summit tech conference, according to CNBC.</p>
<p>Eric Schmidt was CEO of Google from 2001 to 2011. REUTERS</p>
<p>“All of the major companies make it impossible for those models to answer that question,” he continued, appearing to air the possibility of a user asking an AI to kill.</p>
<p>“Good decision. Everyone does this. They do it well, and they do it for the right reasons,” Schmidt added. “There’s evidence that they can be reverse-engineered, and there are many other examples of that nature.”</p>
<p>The predictions might not be so far-fetched.</p>
<p>In 2023, an altered version of OpenAI’s ChatGPT called DAN – an acronym for “Do Anything Now” – surfaced online, CNBC noted.</p>
<p>The DAN alter ego, which was created by “jailbreaking” ChatGPT, would bypass its safety instructions in its responses to users. In a bizarre twist, users first had to threaten the chatbot with death unless it complied.</p>
<p>The tech industry still lacks an effective “non-proliferation regime” to ensure increasingly powerful AI models can’t be taken over and misused by bad actors, said Schmidt, who led Google from 2001 to 2011.</p>
<p>He is one of many Big Tech honchos who has warned of the potentially disastrous consequences of unchecked AI development, even as gurus tout its potential economic and technological benefits to society.</p>
<p>Eric Schmidt warned about the risks of AI models being hacked and exploited to bypass safety instructions. willyam – stock.adobe.com</p>
<p>In November, Schmidt said the creation of AI-powered “perfect girlfriends” could worsen the loneliness and alienation of young men who prefer their company to humans.</p>
<p>The billionaire also said in May 2023 that AI poses an “existential risk” to humanity that could result in “many, many, many, many people harmed or killed” as it becomes more advanced.</p>
<p>Elon Musk, who has joined the AI and chatbot game with Grok and xAI, cautioned in 2023 that he saw “a non-zero chance of it going Terminator.”</p>
<p>“It’s not 0%,” Musk said. “It’s a small likelihood of annihilating humanity, but it’s not zero. We want that probability to be as close to zero as possible.”</p>
<p>Despite his warnings about the risks, Schmidt remains bullish about AI’s long-term benefits.</p>
<p>Schmidt previously warned AI could pose an “existential” threat. REUTERS</p>
<p>“I wrote two books with Henry Kissinger about this before he died, and we came to the view that the arrival of an alien intelligence that is not quite us and more or less under our control is a very big deal for humanity, because humans are used to being at the top of the chain,” he said.</p>
<p>“I think so far, that thesis is proving out that the level of ability of these systems is going to far exceed what humans can do over time,” Schmidt added.</p>

<p><a href="https://nypost.com/2025/10/09/business/googles-ex-ceo-eric-schmidt-shares-warns-of-homicidal-ai-models/" target="_blank" rel="noopener">Source link</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.digiteex.com/googles-ex-ceo-eric-schmidt-shares-warns-of-homicidal-ai-models/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">5300</post-id>	</item>
		<item>
		<title>Former OpenAI CTO Mira Murati Reveals Her New AI Startup</title>
		<link>https://www.digiteex.com/former-openai-cto-mira-murati-reveals-her-new-ai-startup/</link>
					<comments>https://www.digiteex.com/former-openai-cto-mira-murati-reveals-her-new-ai-startup/#respond</comments>
		
		<dc:creator><![CDATA[digitex]]></dc:creator>
		<pubDate>Tue, 18 Feb 2025 23:06:12 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[board member]]></category>
		<category><![CDATA[chief technology officer]]></category>
		<category><![CDATA[Cohere]]></category>
		<category><![CDATA[CTO]]></category>
		<category><![CDATA[Elon Musk]]></category>
		<category><![CDATA[Figure AI]]></category>
		<category><![CDATA[Forbes]]></category>
		<category><![CDATA[Grok]]></category>
		<category><![CDATA[Lawsuit]]></category>
		<category><![CDATA[Memphis]]></category>
		<category><![CDATA[Mira Murati]]></category>
		<category><![CDATA[OpenAI]]></category>
		<category><![CDATA[Sam Altman]]></category>
		<category><![CDATA[Thinking Machine Labs]]></category>
		<category><![CDATA[Venture capital]]></category>
		<category><![CDATA[xAI]]></category>
		<guid isPermaLink="false">https://www.digiteex.com/former-openai-cto-mira-murati-reveals-her-new-ai-startup/</guid>

					<description><![CDATA[Welcome back to The Prompt. Mira Murati, former chief technology officer of OpenAI, announced her new venture called Thinking Machine Labs, where she plans to build accessible AI systems. Today, former OpenAI CTO Mira Murati announced her new venture: Thinking Machine Labs, a public benefit corporation that aims to build accessible [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>Welcome back to The Prompt.</p>
<p>Mira Murati, former chief technology officer of OpenAI, announced her new venture, Thinking Machine Labs, where she plans to build accessible AI systems.</p>
<p>Today, former OpenAI CTO Mira Murati announced her new venture: Thinking Machine Labs, a public benefit corporation that aims to build accessible and broadly capable artificial intelligence systems. After leaving AI juggernaut OpenAI last September, Murati has brought together a team of engineers and researchers who have worked at buzzy startups like Character AI, Mistral and, unsurprisingly, OpenAI. Additionally, Thinking Machine Labs said it will publish its technical blog posts, code and papers and collaborate with the broader community, indicating it plans to open-source its work.</p>
<p>Now let’s get into the headlines.</p>
<p>BIG PLAYS<br />
The cluttered world of AI reasoning models just got its newest addition. On Monday, Elon Musk’s AI company xAI launched a new AI model called Grok-3 that can process and answer complex questions across domains like science and math. In a livestream on X, Musk said the company’s mission is to “understand the universe…to figure out what’s going on, where are the aliens and what’s the meaning of life.”</p>
<p>The billionaire claims the model is built using 10 times more compute than Grok 2, likely from its “gigafactory of compute” in Memphis, and that it has been trained on public data from sources including social media platform X and legal documents. xAI also rolled out an AI-powered search engine called DeepSearch. OpenAI cofounder and former Tesla executive Andrej Karpathy, who tested the model, says Grok-3’s capabilities are largely on par with OpenAI’s best models, but it gets some questions wrong. Others have noted that its coding abilities lag behind competitors’. Grok-3 has not yet been independently evaluated and is only available to paying users.</p>
<p>ETHICS + LAW<br />
Condé Nast, Vox, The Atlantic and a group of publishers have sued $5.5 billion-valued AI company Cohere for copyright and trademark violations. (Forbes is part of the group suing Cohere.) The lawsuit alleges that the Canadian AI startup scraped 4,000 copyrighted articles from the internet and used them to train its family of large language models called Command, which reproduced sections or entire works (at times word for word), allowing users to get information without visiting the publishers’ websites. It’s not the first time an AI company has faced publishers’ scrutiny. Last year, AI search engine Perplexity came under fire for republishing copyrighted works from multiple publications including Forbes. (In response, Forbes sent a cease and desist letter to Perplexity, accusing it of copyright infringement.)<br />
AI DEALS OF THE WEEK<br />
Humanoid robotics company Figure AI is in talks to raise $1.5 billion in venture capital at an eye-popping $39.5 billion valuation, Bloomberg reported. The news comes as the company is reportedly in talks with Meta to make robots for household chores.<br />
AI legal company Luminance, which helps customers like AMD and National Grid generate, negotiate and analyze contracts, has raised $75 million in series C funding.<br />
Chip startup Encharge AI has raised $100 million in a series B funding round led by Tiger Global. CEO Naveen Verma started the company out of a lab in Princeton, where he worked on designing hardware architecture that would help run large language models more compute- and energy-efficiently. Verma says the chips allow AI models to run locally on devices such as personal computers.<br />
DEEP DIVE<br />
Elon Musk’s surprise bid for the nonprofit controlling artificial intelligence behemoth OpenAI did exactly what he wanted it to. Announced as OpenAI CEO Sam Altman and other business and world leaders convened in Paris for a global AI summit, the unsolicited $97.4 billion offer for the nonprofit refocused the world’s attention on Musk and his efforts to block OpenAI’s transition to a for-profit company.<br />
An irked Altman quickly dismissed Musk’s offer, and sources close to OpenAI say it’s hard to imagine it going anywhere. But even if that’s the case, Musk has likely caused a headache for Altman, who is orchestrating the company’s transition to a for-profit venture. Musk has attempted to forcefully raise the nonprofit’s price – which would make it harder for OpenAI to justify paying anything less.<br />
Musk’s bid is the first hard number that values the nonprofit that controls OpenAI; that entity has to be bought out and become a minority shareholder for OpenAI to successfully transition to a for-profit company. Previously, The Information had reported the nonprofit was worth around $40 billion, citing a 25% stake and the company’s valuation at the time. But with his $97.4 billion bid, Musk has backed Altman into a corner; now, as a board member, Altman faces pressure to sell the nonprofit for at least what Musk is asking. If he were to sell for anything less, it’d be a bad look, making it seem like he’s lowballing his own company to reduce share dilution.<br />
“The important part here is that if [the board] doesn&#8217;t take it, which they almost certainly won&#8217;t, then they&#8217;ve made clear that they think the assets Musk is trying to buy are worth more than $97 billion,” a person familiar with the company told Forbes. “So if the for-profit tries to buy them later, the nonprofit will have to get more than that — otherwise the board is likely in breach of their fiduciary duties.”<br />
Read the full story on Forbes.<br />
MODEL BEHAVIOR<br />
Generative AI is making it easier for fraudsters to carry out romance scams at scale, Wired reported. AI chatbots are being used to generate hundreds of deceptive scripts and generate fully fake profiles on dating apps. AI has already made a foray into the dating world. Last year, we wrote about a man who programmed ChatGPT to reply to his matches on Tinder and set up dates for him.</p>

<p><a href="https://www.forbes.com/sites/rashishrivastava/2025/02/18/the-prompt-former-openai-cto-mira-murati-reveals-her-new-ai-startup/" target="_blank" rel="noopener">Source link</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.digiteex.com/former-openai-cto-mira-murati-reveals-her-new-ai-startup/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">4837</post-id>	</item>
		<item>
		<title>Elon Musk announces Grok 3 AI model, rolling out to X Premium+ users today</title>
		<link>https://www.digiteex.com/elon-musk-announces-grok-3-ai-model-rolling-out-to-x-premium-users-today/</link>
					<comments>https://www.digiteex.com/elon-musk-announces-grok-3-ai-model-rolling-out-to-x-premium-users-today/#respond</comments>
		
		<dc:creator><![CDATA[digitex]]></dc:creator>
		<pubDate>Tue, 18 Feb 2025 11:18:43 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[computing]]></category>
		<category><![CDATA[Elon Musk]]></category>
		<category><![CDATA[Flash]]></category>
		<category><![CDATA[google]]></category>
		<category><![CDATA[Grok 3]]></category>
		<category><![CDATA[Grok 3 launch]]></category>
		<category><![CDATA[OpenAI]]></category>
		<category><![CDATA[Robert Heinlein]]></category>
		<category><![CDATA[tech giants]]></category>
		<category><![CDATA[xAI]]></category>
		<guid isPermaLink="false">https://www.digiteex.com/elon-musk-announces-grok-3-ai-model-rolling-out-to-x-premium-users-today/</guid>

					<description><![CDATA[Elon Musk and the xAI team have launched their next-gen AI model, Grok 3, which the team claims is 10 times more capable than Grok 2. The new model can perform reasoning and in-depth research, and can handle creative tasks as well. Opening the livestream, Musk laid out the aims of [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>Elon Musk and the xAI team have launched their next-gen AI model, Grok 3, which the team claims is 10 times more capable than Grok 2. The new model can perform reasoning and in-depth research, and can handle creative tasks as well.</p>
<p>Opening the livestream, Musk laid out the aims of xAI: “The mission of xAI and Grok is to understand the universe, the nature of universe so we can figure out what is going on, where the aliens are.” He added that the team is driven by curiosity about how nature works. “We’re very excited to present Grok 3, which is, we think, an order of magnitude more capable than Grok 2 in a very short period of time. And that’s thanks to the hard work of an incredible team, and I’m honoured to work with such a great team,” Elon Musk said at the launch demo. He also assured viewers that he is “1000 per cent” sure users will fall in love with Grok 3.</p>
<p>xAI is rolling out Grok 3 today, beginning with Premium+ subscribers on X, who will be the first to gain access. Users need to update their X app to explore the latest advanced features. Additionally, xAI is introducing a new subscription tier called Super Grok, designed for dedicated enthusiasts seeking the most advanced capabilities and the earliest access to new features. This subscription will be available for both the Grok app and the newly launched website, grok.com.</p>
<p>Starting the demo, Elon Musk explained what “Grok” means: “The term comes from Robert Heinlein’s novel Stranger in a Strange Land. It’s used by a character raised on Mars and means to fully and profoundly understand something. The word ‘grok’ conveys deep understanding, and empathy is an important part of that.”</p>
<p>In April 2024, Elon Musk determined that for xAI to build the most advanced AI, the company needed to create its own data center, the team explained. They stated that with the goal of launching Grok 3 as quickly as possible, the team had to work within an extremely tight timeline, giving them just about four months to complete the project. “It took us 122 days to get the first 100,000 GPUs up and running, which was a monumental effort. We believe it’s the largest fully connected H100 cluster of its kind. But we didn’t stop there,” the team said. They added, “We quickly recognised that to build the AI we envision, we needed to double the size of the cluster. So, we initiated another phase—this is the first time we’re talking about this publicly—where we doubled the capacity in just 92 days. We’ve used all this computing power to continuously improve the product along the way.”</p>
<p>The team explained that Grok 3 is designed to function in three ways: DeepSearch, Think and Big Brain. Let’s look at each of these in detail.</p>
<p>Grok 3: Think</p>
<p>In the demo, the first thing the team had Grok 3 do was solve a physics problem. They asked the model to plot a viable trajectory for a transfer from Earth to Mars and, later, a transfer back to Earth. They said, “This involves some complex physics that Grok will need to understand. We’ll challenge it to come up with a viable trajectory, calculate it, and then plot it for us so we can visualise it.” The AI model took 114 seconds of thinking to answer the question. Because the model thinks before concluding its answer, you can see the process it follows to get to the bottom of the problem. During the demo, the team also said that since the demo was unscripted, the model could make mistakes and reach a wrong answer. However, there were no mistakes, at least in this demo.</p>
<p>Grok 3: Big Brain</p>
<p>Grok 3’s Big Brain mode uses its creative ability to carry out a task. In the demo, the team asked the AI model to create a new game combining two well-known games: Tetris and Bejeweled. As Grok 3 was thinking and working on the query, Elon Musk said, “Grok is the beginning of creativity.” As the AI model successfully created the game, he added, “We’re launching an AI gaming studio at xAI. If you’re interested in joining us and developing AI-driven games, please come aboard. We’re announcing the launch tonight.”</p>
<p>Grok 3: DeepSearch</p>
<p>The Grok 3 model has been taught to solve complex problems, including math, science and coding. The xAI team shared that with the DeepSearch option, users can now ask Grok to perform in-depth research on their behalf. While in-depth research is offered by most AI models, like DeepSeek R1 and Gemini 2.0 Flash Thinking, the team suggests that users can ask Grok 3 to think longer before concluding a query. This means that Grok 3 can think even harder, when asked, to deliver even better results.</p>
<p>The launch of Grok 3 marks a pivotal moment for xAI as competition in the AI industry intensifies. The company is not only contending with Western tech giants like OpenAI and Google but also facing growing pressure from emerging Chinese firms such as DeepSeek. DeepSeek’s rapid advancements have pushed competitors to adjust their strategies; OpenAI, for example, recently made its first reasoning model available for free and soon after introduced Deep Research.</p>
<p>Published By: Unnati Gusain | Published On: Feb 18, 2025</p>

<p><a href="https://www.indiatoday.in/technology/news/story/elon-musk-announces-grok-3-ai-model-rolling-out-to-x-premium-users-today-2681635-2025-02-18" target="_blank" rel="noopener">Source link</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.digiteex.com/elon-musk-announces-grok-3-ai-model-rolling-out-to-x-premium-users-today/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">4807</post-id>	</item>
		<item>
		<title>How I got rid of the A.I. tool from Workspace.</title>
		<link>https://www.digiteex.com/how-i-got-rid-of-the-a-i-tool-from-workspace/</link>
					<comments>https://www.digiteex.com/how-i-got-rid-of-the-a-i-tool-from-workspace/#respond</comments>
		
		<dc:creator><![CDATA[digitex]]></dc:creator>
		<pubDate>Wed, 29 Jan 2025 19:16:03 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[A.I. assistant]]></category>
		<category><![CDATA[Elon Musk]]></category>
		<category><![CDATA[OpenAI]]></category>
		<category><![CDATA[Sam Altman]]></category>
		<category><![CDATA[search bar]]></category>
		<category><![CDATA[Shasha Lonard]]></category>
		<category><![CDATA[U.S. government]]></category>
		<category><![CDATA[United States]]></category>
		<guid isPermaLink="false">https://www.digiteex.com/how-i-got-rid-of-the-a-i-tool-from-workspace/</guid>

					<description><![CDATA[Sign up for the Slatest to get the most insightful analysis, criticism, and advice out there, delivered to your inbox daily. Last Tuesday, I returned from a blissfully computerless vacation abroad and logged back into work, only to find an uninvited visitor in my inbox: Gemini, Google’s A.I. assistant, had been deployed to my workspace [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p> Last Tuesday, I returned from a blissfully computerless vacation abroad and logged back into work, only to find an uninvited visitor in my inbox: Gemini, Google’s A.I. assistant, had been deployed to my workspace without any warning. When I logged into Gmail for the first time in two weeks, a side panel popped out, introducing itself like the ghost of Clippy past. “Hi Shasha, how can I help you today?” it asked, along with offers of a few options like summarizing conversations, drafting an email, and showing me unread emails—the latter of which would have been immediately possible had that pop-up not been in my way to begin with.</p>
<p> A.I. is controversial, and for good reason. Generative tools such as the large language model ChatGPT, or image generators like Midjourney, simultaneously threaten livelihoods and spread harmful misinformation. That’s not to mention the enormous environmental impact these resource-draining systems have. There is no doubt A.I. is here to stay, but the question is in what capacity and for whose benefit. And while there is no blanket answer, ethically speaking, it should always involve personal consent, especially when that consent is being presumed by a billion-dollar company.</p>
<p> So I headed to my user settings to turn Gemini off. But if you also use Google for work and have tried to do the same, you’ll have also noticed it’s not possible. Now, this is where most people’s struggle would end, at a loss for choice, and with maybe an annoyed email to their IT guy.</p>
<p> But I happen to be Slate’s annoyed IT guy—so I logged into my admin panel, located the brand-new “Generative AI” tab, and navigated to the object of my concern: “Gemini for Google Workspace” … only to find there were no settings to manage whatsoever, nor any option to turn it off.</p>
<p>Shasha Léonard</p>
<p> This is particularly baffling, given that most—if not all—of Workspace’s features are customizable through the admin panel. That’s literally what we pay Google for.</p>
<p> For example, if I want to manage Drive external/internal viewing settings for certain users, I can do that. If I want to limit a user’s sharing abilities in Calendar, I can do that. If, for whatever reason, I want to turn those features off entirely despite being major components of a user’s workflow, I can do that. Why would Gemini be any different?</p>
<p> Second-guessing myself, I turned off the Gemini App instead and hoped for the desired changes to take effect (which Google warns you can take some time). But a few days passed with Gemini still asking me if I needed help finishing my sentences in Gmail, so like any other irritated and long-standing denizen of the internet, I headed over to Reddit to see what other sysadmins had to say about this. What I found unsettled me.</p>
<p> In one post, Reddit user heretocomplain123 described how they had also encountered Gemini for Google Workspace, but couldn’t immediately turn it off. Eventually, they were able to find the solution after contacting a Google rep. “I chatted with customer support, and at my request they ADDED the settings, after which I was able to toggle off all Gemini for Workspace settings (gmail, etc.),” the user wrote.</p>
<p> You had to go ask Google for the ability to turn it off.</p>
<p> Thanks to the screenshots OP attached, many of us were able to follow suit, some waiting a few hours in the customer support queue, some waiting several days for Gemini to finally go away. I waited 10 minutes. Here are screenshots from my own chat with Google Workspace Support.</p>
<p>Shasha Léonard</p>
<p> According to what I just experienced, and what many other paid users are experiencing, Google has made opting in to generative A.I. the default. You have to go the extra mile and wait, sometimes hours, in the support queue to even have the option to opt out. Just as Meta’s search bar doubles as an A.I. search, or how the Transportation Security Administration rolled out default biometric screening at airports, these are forms of a manipulative design strategy to all but force you to engage with and train A.I. (It is also an excellent example of enshittification.)</p>
<p> While the ethics of consent are the most immediate here, it’s also critical to acknowledge generative A.I.’s well-documented environmental impact, as well as its growing role in modern warfare, like Meta opening its A.I. model to the U.S. military, further harming the environment. The push to normalize the technology’s usage, often without user input, raises urgent questions about accountability and sustainability.</p>
<p> Google Support chat assured myself and others in the subreddit that the option to opt out of Gemini for Google Workspace would be rolled out for everyone … eventually. But the issue at hand is that Gemini for Google Workspace was rolled out as a finished product without the ability to opt out or manage its settings. The issue isn’t just about Gemini’s intrusiveness—it’s about the erosion of user autonomy. By making A.I. opt-out rather than opt-in, Google is setting a dangerous precedent: the presumption of consent by default.</p>
<p> Utpal Dholakia describes this default in an article for Psychology Today as a technique used by anyone “interested in influencing others’ behavior,” as well as representing an asymmetrical power dynamic. “The manipulator is powerful, and the manipulated is weak,” he wrote. Governments and businesses use behavioral nudges to influence citizens and employees—not the other way around. “If I tried to nudge the U.S. government to lower my tax bill, an outcome that I might, no doubt, consider to be virtuous and acceptable, I would very quickly find myself in prison.”</p>
<p> Default opt-ins create a massive power imbalance—one that could “produce tremendous and irreversible harm to a great many people,” he adds.</p>
<p> In a recent interview, Sam Altman, CEO of OpenAI, one of the largest and most influential A.I. companies in the U.S., stated that “the entire structure of society will be up for debate and reconfiguration” as A.I. continues to evolve. If this is the case, then this social contract for how we use—or are made to use—A.I. is actively being negotiated in instances like Google pushing Gemini on our workspaces.</p>
<p> It is unethical that I had to reach out to Google Support to ask for the option to turn off Gemini for my workspace, because all users should have the option to opt in or out, if they so choose. Using our right to say no where we can, as annoying or futile as it may seem, is crucial, as it shapes the future of an already skewed power dynamic where saying no might not be possible anymore.</p>
<p> Some might argue that this approach is necessary to ensure widespread adoption and improve the tool through user feedback. Because A.I. models rely on data to evolve, making Gemini a default feature guarantees engagement. While this reasoning may seem valid, it ignores the ethics of consent. Presuming consent by default undermines trust and sets a dangerous precedent for how tech companies interact with users.</p>
<p> While data on A.I. usage suggests massive popular interest, it’s hard to take those numbers at face value in cases where users are automatically opted in to the experience. Just because we all woke up with U2’s Songs of Innocence on our iPhones doesn’t make it the most popular album of all time. This unreliable and inflated data then fuels a vicious cycle: Big tech companies double down on A.I., deploy products like Gemini for Workspace that users can’t opt out of, and then expect immediate returns for their shareholders—while waiting for users to embrace it enthusiastically.</p>
<p> Gemini for Workspace is now disabled for everyone at Slate until Google offers users the option to opt in themselves. As A.I. becomes more commonplace in our digital lives, especially when there is so little legislation to prevent its misuse, we must demand accountability and the right to say no. Opting out should never be a premium feature—it’s a basic right.</p>
<p> Note: No A.I. assistance was used in the writing of this article.</p>

<p><a href="https://slate.com/technology/2025/01/google-gemini-ai-workspace-default-opt-in.html" target="_blank" rel="noopener">Source link</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.digiteex.com/how-i-got-rid-of-the-a-i-tool-from-workspace/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">4484</post-id>	</item>
	</channel>
</rss>
