Discover posts

RedState Feed
29 w

Caitlin Clark Bends the Knee to the Race Grifters, and It'll Never Be Enough
redstate.com

RedState Feed
29 w

NEW: Just Two Days After Trump's Win, Biden Unfreezes $10 Billion in Assets for Islamist Iran
redstate.com

Trending Tech
29 w

Wallace & Gromit studio Aardman is working on a Pokémon project
www.theverge.com

Image: TPCi / Aardman

Aardman Animations, the studio behind the Wallace & Gromit, Chicken Run, and Shaun the Sheep franchises, is working with The Pokémon Company International (TPCi) on a mysterious new project. In a surprising turn of events, TPCi and Aardman announced today that they’re teaming up for “a special project” that’s set to be released some time in 2027. In a press release about the collaboration, TPCi marketing and media VP Taito Okiura described it as “a dream partnership for Pokémon.” Aardman’s managing director Sean Clarke added that it was both a huge honor and privilege to be tasked with presenting the Pokémon world in a new way.

“Pokémon × @aardman. Coming in 2027!” (@Pokemon, December 11, 2024, pic.twitter.com/DQPbtekKXo)

“Bringing together Pokémon, the world’s biggest entertainment brand, together with our love of craft, character and comedic storytelling feels incredibly exciting,” Clarke said. “Aardman and TPCi share an emphasis on heritage and attention to detail as well as putting our fans and audiences at the heart of what we do, which we know will steer us right as we together create charming, original and new stories for audiences around the world.”

Aside from the projected release year, there aren’t all that many details about what the collaboration is or how we’ll be able to consume it. Clearly, stop-motion claymation will be involved, but what’s less obvious is whether this will end up being a movie, a series, or perhaps a game — which wouldn’t be a first for Aardman. This sort of team-up makes a lot of sense for TPCi after the success of Detective Pikachu and surprise delights like Pokémon Concierge. And the closer we get to 2027, the more it feels like there’s a very strong chance this will be something that has people buzzing.

Trending Tech
29 w

It sure sounds like Google is planning to actually launch some smart glasses
www.theverge.com

Here’s what Google’s latest smart glasses prototype looks like. | Image: Google

Google is working on a lot of AI stuff — like, a lot of AI stuff — but if you want to really understand the company’s vision for virtual assistants, take a look at Project Astra. Google first showed a demo of its all-encompassing, multimodal virtual assistant at Google I/O this spring and clearly imagines Astra as an always-on helper in your life. In reality, the tech is somewhere between “neat concept video” and “early prototype,” but it represents the most ambitious version of Google’s AI work. And there’s one thing that keeps popping up in Astra demos: glasses.

Google has been working on smart facewear of one kind or another for years, from Glass to Cardboard to the Project Iris translator glasses it showed off two years ago. Earlier this year, all Google spokesperson Jane Park would tell us was that the glasses were “a functional research prototype.” Now, they appear to be something at least a little more than that.

During a press briefing ahead of the launch of Gemini 2.0, Bibo Xu, a product manager on the Google DeepMind team, said that “a small group will be testing Project Astra on prototype glasses, which we believe is one of the most powerful and intuitive form factors to experience this kind of AI.” That group will be part of Google’s Trusted Tester program, which often gets access to these early prototypes, many of which never ship publicly. Some testers will use Astra on an Android phone; others, through the glasses. Later in the briefing, in response to a question about the glasses, Xu said that “for the glasses product itself, we’ll have more news coming shortly.”

Is that definitive proof that Google smart glasses are coming to a store near you sometime soon? Of course not! But it certainly indicates that Google has some hardware plans for Project Astra.

Smart glasses make perfect sense for what Google is trying to do with Astra. There’s simply no better way to combine audio, video, and a display than on a device on your face — especially if you’re hoping for something like an always-on experience. In a new video showing Astra’s capabilities with Gemini 2.0, a tester uses Astra to remember security codes at an apartment building, check the weather, and much more. At one point, he sees a bus flying past and asks Astra whether “that bus will take me anywhere near Chinatown.” It’s all the sort of thing you can do with a phone, but nearly all of it feels far more natural through a wearable.

Right now, smart glasses like these — and like Meta’s Orion — are mostly vaporware. When they’ll ship, whether they’ll ship, and whether they’ll be any good all remain up in the air. But Google is dead serious about making smart glasses work, and it seems to be just as serious about building the glasses itself.
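
For a sense of what that "ask about what the camera sees" interaction looks like at the API level, here is a minimal sketch that sends a single frame and a question to a Gemini model through the google-generativeai Python SDK. It is an illustration, not Astra's actual code path; the model identifier, API key placeholder, and image file are assumptions.

# Minimal sketch: ask a multimodal Gemini model a question about one camera
# frame, roughly the kind of query Astra answers on the glasses.
# Assumptions: the google-generativeai SDK, a placeholder API key, an assumed
# experimental model name, and a saved frame standing in for a live camera feed.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")                 # placeholder key
model = genai.GenerativeModel("gemini-2.0-flash-exp")   # assumed model name

frame = Image.open("bus_frame.jpg")                     # stand-in for a live frame
response = model.generate_content(
    [frame, "Will the bus in this frame take me anywhere near Chinatown?"]
)
print(response.text)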

Trending Tech
29 w

Google’s new Jules AI agent will help developers fix buggy code
www.theverge.com

Illustration: The Verge

Google has announced an experimental AI-powered code agent called “Jules” that can automatically fix coding errors for developers. Jules was introduced today alongside Gemini 2.0 and uses the updated Google AI model to create multi-step plans to address issues, modify multiple files, and prepare pull requests for Python and JavaScript coding tasks in GitHub workflows.

Microsoft introduced a similar experience for GitHub Copilot last year that can recognize and explain code, alongside recommending changes and fixing bugs. Jules will compete against Microsoft’s offering, and also against tools like Cursor and even Claude’s and ChatGPT’s coding abilities. Google’s launch of a coding-focused AI assistant is no surprise — CEO Sundar Pichai said in October that more than a quarter of all new code at the company is now generated by AI.

“Jules handles bug fixes and other time-consuming tasks while you focus on what you actually want to build,” Google says in its blog post. “This effort is part of our long-term goal of building AI agents that are helpful in all domains, including coding.”

Developers have full control to review and adjust the plans created by Jules before choosing to merge the code it generates into their projects. The announcement doesn’t say that Jules will spot bugs for you, so presumably it needs to be pointed at a list of issues that have already been identified. Google also says that Jules is in early development and “may make mistakes,” but internal testing has shown it’s been beneficial for boosting developer productivity and providing real-time updates to help track and manage tasks.

Jules is launching today for a “select group of trusted testers,” according to Google, and will be released to other developers in early 2025. Updates about availability and how development is progressing will be available via the Google Labs website.
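
Google hasn't published Jules' interface, but the plan-then-review loop described above can be sketched in general terms: hand a model an already-identified issue plus the relevant source, ask for a multi-step plan and a diff, and keep a human in the loop before anything is merged. Everything below (the propose_fix_plan helper, the prompt wording, the model name, the file) is a hypothetical illustration, not Jules' actual API.

# Hypothetical sketch of a Jules-style flow: the agent is pointed at a known
# issue, drafts a plan plus a patch, and a human reviews before anything merges.
# Not Google's API; the helper, prompt format, and model name are assumptions.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")                 # placeholder key
model = genai.GenerativeModel("gemini-2.0-flash-exp")   # assumed model name

def propose_fix_plan(issue_text: str, file_contents: dict[str, str]) -> str:
    """Ask the model for a numbered fix plan and a unified diff (for review only)."""
    files_blob = "\n\n".join(f"### {path}\n{src}" for path, src in file_contents.items())
    prompt = (
        "You are a coding agent. Given the bug report and source files below, "
        "produce (1) a numbered multi-step fix plan and (2) a unified diff.\n\n"
        f"Bug report:\n{issue_text}\n\nFiles:\n{files_blob}"
    )
    return model.generate_content(prompt).text

plan = propose_fix_plan(
    issue_text="TypeError: 'NoneType' object is not iterable in utils.parse_rows",
    file_contents={"utils.py": open("utils.py").read()},   # hypothetical file
)
print(plan)   # a developer reviews the plan and diff before opening a pull request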

Trending Tech
29 w

Google is testing Gemini AI agents that help you in video games
www.theverge.com

Clash of Clans. | Image: Supercell

Google just announced Gemini 2.0, and as part of its suite of news today, the company is revealing that it’s been exploring how AI agents built with Gemini 2.0 can understand the rules of video games to help you out. The agents can “reason about the game based solely on the action on the screen, and offer up suggestions for what to do next in real time conversation,” Google DeepMind CEO Demis Hassabis and CTO Koray Kavukcuoglu write in a blog post. Hassabis and Kavukcuoglu also say that the agents can “tap into Google Search to connect you with the wealth of gaming knowledge on the web.”

Google is testing the agents’ “ability to interpret rules and challenges” in games like Clash of Clans and Hay Day from Supercell, according to Hassabis and Kavukcuoglu. I’m not surprised Google is chasing these ideas: in theory, an AI agent coaching you through a strategy or puzzle could be useful. It sounds like this work is very early, though, and I have many questions about whether these agents actually give sound advice.

Google is also investing in video games and AI in another way: creating playable virtual worlds on the fly from a prompt image using a “foundation world model” called Genie 2, which it showed off last week. That work seems early, too: Genie 2 can only generate consistent worlds for “up to a minute,” Google says.
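
The "reason about the game based solely on the action on the screen" idea can be approximated with off-the-shelf pieces: periodically grab the screen and ask a multimodal model what to do next. The sketch below is only an illustration of that loop, not Google's gaming agent; the model name, polling interval, and prompt are assumptions.

# Rough sketch of the "look at the screen, suggest the next move" loop.
# Illustration only, not Google's agent: capture interval, model name, and
# prompt are assumptions; a real agent would stream video rather than poll.
import time
import google.generativeai as genai
from PIL import ImageGrab   # screen capture (works on Windows and macOS)

genai.configure(api_key="YOUR_API_KEY")                 # placeholder key
model = genai.GenerativeModel("gemini-2.0-flash-exp")   # assumed model name

while True:
    frame = ImageGrab.grab()   # grab the current screen as a PIL image
    response = model.generate_content(
        [frame, "This is my current Clash of Clans screen. "
                "Based only on what you see, what should I do next?"]
    )
    print(response.text)
    time.sleep(10)             # coarse polling between suggestions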

Trending Tech
29 w

Google launched Gemini 2.0, its new AI model for practically everything
www.theverge.com

Illustration: The Verge

Google’s latest AI model has a lot of work to do. Like every other company in the AI race, Google is frantically building AI into practically every product it owns, trying to build products other developers want to use, and racing to set up all the infrastructure to make those things possible without being so expensive it runs the company out of business. Meanwhile, Amazon, Microsoft, Anthropic, and OpenAI are pouring their own billions into pretty much the exact same set of problems.

That may explain why Demis Hassabis, the CEO of Google DeepMind and the head of all the company’s AI efforts, is so excited about how all-encompassing the new Gemini 2.0 model is. Google is releasing Gemini 2.0 on Wednesday, about 10 months after the company first launched Gemini 1.5. It’s still in what Google calls an “experimental preview,” and only one version of the model — the smaller, lower-end 2.0 Flash — is being released. But Hassabis says it’s still a big day.

“Effectively,” Hassabis says, “it’s as good as the current Pro model is. So you can think of it as one whole tier better, for the same cost efficiency and performance efficiency and speed. We’re really happy with that.” And not only is it better at doing the old things Gemini could do, but it can also do new things. Gemini 2.0 can now natively generate audio and images, and it brings new multimodal capabilities that Hassabis says lay the groundwork for the next big thing in AI: agents.

Agentic AI, as everyone calls it, refers to AI bots that can actually go off and accomplish things on your behalf. Google has been demoing one, Project Astra, since this spring — it’s a visual system that can identify objects, help you navigate the world, and tell you where you left your glasses. Gemini 2.0 represents a huge improvement for Astra, Hassabis says. Google is also launching Project Mariner, an experimental new Chrome extension that can quite literally use your web browser for you. There’s also Jules, an agent specifically for helping developers find and fix bad code, and a new Gemini 2.0-based agent that can look at your screen and help you better play video games. Hassabis calls the game agent “an Easter egg” but also points to it as the sort of thing a truly multimodal, built-in model can do for you.

“We really see 2025 as the true start of the agent-based era,” Hassabis says, “and Gemini 2.0 is the foundation of that.” He’s careful to note that performance isn’t the only upgrade here; as talk of an industrywide slowdown in model improvements continues, he says Google is still seeing gains as it trains new models, but he’s just as excited about the efficiency and speed improvements.

This won’t shock you, but Google’s plan for Gemini 2.0 is to use it absolutely everywhere. It will power AI Overviews in Google Search, which Google says now reach 1 billion people and which the company says will now be more nuanced and complex thanks to Gemini 2.0. It’ll be in the Gemini bot and app, of course, and will eventually power the AI features in Workspace and elsewhere at Google. Google has worked to bring as many features as possible into the model itself, rather than run a bunch of individual and siloed products, in order to be able to do more with Gemini in more places. The multimodality, the different kinds of outputs, the features — the goal is to get all of it into the foundational Gemini model. “We’re trying to build the most general model possible,” Hassabis says.

As the agentic era of AI begins, Hassabis says there are both new and old problems to solve. The old ones are eternal, about performance and efficiency and inference cost. The new ones are in many ways unknown. Just to name one: what safety risks will these agents pose out in the world, operating of their own accord? Google is taking some precautions with Mariner and Astra, but Hassabis says there’s more research to be done. “We’re going to need new safety solutions,” he says, “like testing in hardened sandboxes. I think that’s going to be quite important for testing agents, rather than out in the wild… they’ll be more useful, but there will also be more risks.”

Gemini 2.0 may be in an experimental stage for now, but you can already use it by choosing the new model in the Gemini web app. (No word yet on when you’ll get to try the non-Flash models.) And early next year, Hassabis says, it’s coming to other Gemini platforms, everything else Google makes, and the whole internet.
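
For developers, "choosing the new model" mostly amounts to swapping a model identifier. Here is a minimal sketch of a multi-turn call against the 2.0 Flash preview through the google-generativeai Python SDK; the experimental model ID is an assumption based on the naming Google used at launch, so check Google AI Studio for the current name.

# Minimal sketch of trying the Gemini 2.0 Flash preview from the Python SDK.
# The model identifier and API key are assumptions, not confirmed by the article.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")                 # placeholder key
model = genai.GenerativeModel("gemini-2.0-flash-exp")   # assumed preview model ID

chat = model.start_chat()                               # multi-turn session
first = chat.send_message("Summarize what an 'agentic' AI model is in two sentences.")
print(first.text)

follow_up = chat.send_message("Now give one concrete example of an agent task.")
print(follow_up.text)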

Trending Tech
29 w

Google built an AI tool that can do research for you
www.theverge.com

Illustration: The Verge

Google has just revealed a new AI tool called Deep Research that lets you call upon its Gemini bot to scour the web for you and write a detailed report based on its findings. Deep Research is currently only available in English to Gemini Advanced subscribers.

If you have access, you can ask Gemini to research a particular topic on your behalf, and the chatbot will create a “multi-step research plan” that you can either edit or approve. Google says Gemini will start its research by “finding interesting pieces of information” on the web and then performing related searches — a process it repeats several times.

GIF: Google

When it’s finished, Gemini will spit out a report of its “key findings” with links to the websites where it found its information. You can ask Gemini to expand on certain areas or tweak its report, as well as export the AI-generated research to Google Docs. This all sounds a bit similar to the Pages feature offered by the AI search engine Perplexity, which generates a custom webpage based on your prompt.

Google took the wraps off Deep Research as part of a broader announcement for Gemini 2.0, its new model for an era of “agentic” AI, or AI systems that can perform tasks for you. Deep Research is just one example of Google’s agentic push, and it’s something other AI companies are seriously exploring as well. Along with Deep Research, Google announced that it’s making Gemini 2.0 Flash — a speedier version of the next-gen chatbot — available to developers.

Deep Research is currently only available to Gemini Advanced subscribers on the web. You can try it by heading to Gemini and then changing the model dropdown to “Gemini 1.5 Pro with Deep Research.”
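
The plan-search-repeat-report loop Google describes is straightforward to sketch in the abstract. The version below is a hypothetical outline, not Google's implementation: the web_search helper is a stand-in for whatever search API you have, and the model name and prompts are assumptions.

# Abstract sketch of a Deep Research-style loop: draft a plan, search, fold
# findings into follow-up searches, then write a report with sources.
# Hypothetical illustration only; model name, prompts, and the search helper
# are assumptions, not Google's implementation.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")                 # placeholder key
model = genai.GenerativeModel("gemini-2.0-flash-exp")   # assumed model name

def web_search(query: str) -> list[str]:
    """Hypothetical stand-in for a real search API; returns text snippets."""
    raise NotImplementedError

def deep_research(topic: str, rounds: int = 3) -> str:
    plan = model.generate_content(
        f"Draft a short multi-step research plan for: {topic}"
    ).text
    findings: list[str] = []
    query = topic
    for _ in range(rounds):                              # repeated search-and-refine
        findings.extend(web_search(query))
        query = model.generate_content(
            "Given these findings, suggest the single most useful follow-up "
            f"search query:\n{findings[-5:]}"
        ).text
    return model.generate_content(
        f"Write a report on '{topic}' with key findings and source links, "
        f"following this plan:\n{plan}\n\nFindings:\n{findings}"
    ).text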

Trending Tech
29 w

Google’s AI enters its ‘agentic era’
www.theverge.com

Image: Google

I stepped into a room lined with bookshelves, stacked with ordinary programming and architecture texts. One shelf stood slightly askew, and behind it was a hidden room that had three TVs displaying famous artworks: Edvard Munch’s The Scream, Georges Seurat’s Sunday Afternoon, and Hokusai’s The Great Wave off Kanagawa.

“There’s some interesting pieces of art here,” said Bibo Xu, Google DeepMind’s lead product manager for Project Astra. “Is there one in particular that you would want to talk about?”

Project Astra, Google’s prototype AI “universal agent,” responded smoothly. “The Sunday Afternoon artwork was discussed previously,” it replied. “Was there a particular detail about it you wish to discuss, or were you interested in discussing The Scream?”

I was at Google’s sprawling Mountain View campus, seeing the latest projects from its AI lab DeepMind. One was Project Astra, a virtual assistant first demoed at Google I/O earlier this year. Currently contained in an app, it can process text, images, video, and audio in real time and respond to questions about them. It’s like a Siri or Alexa that’s slightly more natural to talk to, can see the world around you, and can “remember”... Read the full story at The Verge.

NEWSMAX Feed
29 w

Trump: 'Priority' to Resolve Russia-Ukraine War
www.newsmax.com

Resolving the war between Ukraine and Russia is a "priority," while Syria will have to "sort out its own problems," according to President-elect Donald Trump.