Hi, AI! | media


Geo and channel language: not specified, English
Category: Technology


New media from the creators of @GPT4Telegrambot — 12 million users worldwide. We write about AI and the people behind it.
For all questions: @anveklich
News of the bot: @GPT4Telegram
Media in Russian: @hiaimedia

When Will the Data for Training LLMs Run Out?

In the next two years, humanity might face the strangest shortage in history: running out of human-created text. Large language models (LLMs) would then deplete their training data, causing a scaling crisis. Researchers studying AI's impact on our world have come to this conclusion.

Number of the day

300 trillion tokens — the amount of text created by humanity that is currently available for training AI models.


0️⃣ "Data Drought"

2026–2032 — researchers consider this period the most likely timeframe for the complete depletion of text data for training LLMs. It could happen even sooner if models are heavily overtrained due to the AI race and the scaling of popular LLMs.
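The arithmetic behind such projections is easy to sketch. In the back-of-envelope calculation below, the 300-trillion-token stock is the figure from this post, while the 2024 dataset size and the yearly growth multiplier are illustrative assumptions, not numbers from the study:

```python
def depletion_year(stock_tokens, dataset_tokens, yearly_growth, start=2024):
    """First year in which a frontier training set would exceed the stock."""
    year = start
    while dataset_tokens < stock_tokens:
        year += 1
        dataset_tokens *= yearly_growth
    return year

# 300T-token stock (from the post); 15T tokens used in 2024 and 2.5x
# yearly growth are made-up inputs for illustration only.
print(depletion_year(300e12, 15e12, 2.5))  # -> 2028
```

With these assumed inputs the crossover lands in 2028, inside the researchers' 2026–2032 window; the actual study models data production, reuse, and overtraining far more carefully.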

Three Main Conclusions from Researchers

1️⃣ Textual data will become the bottleneck in developing more advanced LLMs.

2️⃣ Synthetic data from AI is still insufficiently studied. It is useful in narrow fields like mathematics and programming, but some believe such data can be dangerous because AI might make mistakes when creating it.

3️⃣ Private data, such as personal messages, is unlikely to be used on a large scale due to legal issues.

🔠 Solutions to the Crisis

Researchers propose several solutions for developing LLMs:

➡️ Synthetic data.
➡️ Training on other types of data.
➡️ Increasing data efficiency.

💲 Who Can I Sell My Data To?

Companies are already offering internet users monetary rewards for their data, which can be used to train AI models. Here are some of them:

➡️ TIKI — for access to users' mobile devices. They are interested in user behavior within apps partnered with TIKI.

➡️ Caden — for access to personal accounts on Netflix and Amazon. Earnings range from $5 to $50 per month.

➡️ Invisible offers access to paid news articles in exchange for demographic and behavioral data, including information on vaccinations and users' political affiliations. The company plans to trade this data for digital subscriptions costing between $4 and $15 per month.

@hiaimediaen


🐶 AI Dog Translator

We continue to monitor AI translators from animal to human language. Researchers from the University of Michigan have found a way to determine the context, breed, gender, and age of a dog by its bark.

🧪 How did they do this?

In collaboration with Mexico’s National Institute of Astrophysics, Optics, and Electronics, scientists discovered that AI models initially trained on human speech may also be applied to animals. Following this discovery, the researchers collected recordings of barks from 74 dogs of various breeds, ages, and genders, including aggressive, playful, and anxious barks.

They chose the Wav2Vec2 model, initially trained on human speech data, for the analysis. This is the first time such methods have been used to decode animal communication. Wav2Vec2 handled the task successfully and outperformed other models trained specifically on dog bark data by up to 70%.
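To make the pipeline concrete, here is a toy stand-in: the study fed bark recordings through Wav2Vec2 and classified the resulting embeddings, whereas the sketch below fakes an "embedding" with a magnitude spectrum and uses a nearest-centroid classifier on synthetic signals. Every signal, label, and parameter here is invented for illustration; none of it is the researchers' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(signal, dim=64):
    # Crude "embedding": low-frequency bins of the magnitude spectrum,
    # normalized to unit length (a stand-in for Wav2Vec2 features).
    spec = np.abs(np.fft.rfft(signal))[:dim]
    return spec / (np.linalg.norm(spec) + 1e-9)

def make_bark(pitch_hz, n=2048, sr=16000):
    # Synthetic "bark": a noisy tone at a breed-specific pitch.
    t = np.arange(n) / sr
    return np.sin(2 * np.pi * pitch_hz * t) + 0.3 * rng.standard_normal(n)

# Two synthetic "breeds" with different fundamental pitches.
train = {"small_dog": [make_bark(350) for _ in range(10)],
         "large_dog": [make_bark(120) for _ in range(10)]}
centroids = {label: np.mean([embed(s) for s in barks], axis=0)
             for label, barks in train.items()}

def classify(signal):
    e = embed(signal)
    return max(centroids, key=lambda label: float(e @ centroids[label]))

print(classify(make_bark(340)))  # a 340 Hz bark matches the 350 Hz "breed"
```

The real point of the study is that features learned from human speech transfer to animal sounds; this toy only illustrates the classify-by-embedding step.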

😉 What happens next?

Scientists believe studying patterns in human speech using AI can also help understand other animals.

Source: University of Michigan

Would you like to find out what your pet is saying?

❤️ — Absolutely!
🙉 — I already know it's asking for food

More on the topic:

🐳 What do sperm whales talk about?

🐶 Lost in Translation: Will AI soon help us talk to animals?

#news @hiaimediaen


🎙️ Hedra: Generate Video with Expressive AI Characters

Hedra AI is a new model for creating and animating videos with virtual characters. It will animate your image or generate a talking character from scratch based on a text prompt.

Hedra is currently in beta testing, and you can try the service for free.

Key features:

➡️ Voice generation. Synthesizes a voiceover from your text.

➡️ Image animation. Brings static images to life by adding emotion and movement to them.

➡️ Lip sync. Synchronizes characters' lip movements with voiceover.

How to use it?

1️⃣ Go to the website
2️⃣ Write the voiceover text
3️⃣ Select a voice
4️⃣ Upload a character photo or describe it to generate a new avatar
5️⃣ Click Generate video ✅

In its first 48 hours, the service created 100,000 videos!

More instructions are available under #manual

More on the topic

👾 VASA — Hyper-Realistic Talking Avatar

⚡️ ElevenLabs Dubbing Studio — now you can edit dubbed videos

#startup @hiaimediaen


💊 AI to Search for New Antibiotics

Using machine learning, scientists at the University of Pennsylvania have identified nearly a million compounds that could become the basis for new antimicrobial drugs.

Antibiotic-resistant infections kill more than a million people a year. The WHO predicts that by 2050, the number of deaths could rise to 10 million.

Previously, scientists studied the effectiveness of antimicrobials using traditional methods, such as testing water and soil samples. It took years to find and create drugs. Artificial intelligence can significantly speed up the process.

💥 How Does it Work?

Scientists used AI to analyze huge datasets containing the genomes of tens of thousands of microorganisms. Searching for DNA fragments that might have antimicrobial activity, the researchers selected around 900,000 antimicrobial peptides. At least 79% of these fell into the “candidates for antibiotics” category, meaning they have the potential to kill at least one pathogen. In the laboratory, the scientists synthesized 100 of these peptides and tested them on 11 dangerous bacterial pathogens, including antibiotic-resistant strains of Escherichia coli and Staphylococcus aureus.
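To give a flavor of the filtering step, the toy sketch below scores short peptides by net charge, since antimicrobial peptides tend to be cationic. The scoring rule, threshold, and example sequences are all invented for illustration and are not the study's actual model.

```python
POSITIVE = set("KR")  # lysine, arginine: positively charged residues
NEGATIVE = set("DE")  # aspartate, glutamate: negatively charged residues

def net_charge(peptide):
    """Crude net charge: positive residues minus negative residues."""
    return (sum(aa in POSITIVE for aa in peptide)
            - sum(aa in NEGATIVE for aa in peptide))

def is_candidate(peptide, min_charge=2, max_len=50):
    """Keep short, cationic peptides (an illustrative proxy only)."""
    return len(peptide) <= max_len and net_charge(peptide) >= min_charge

peptides = ["KKLLRKLKKLL",   # strongly cationic: kept
            "DEDEDAAAG",     # acidic: rejected
            "GLFDIIKKIAES"]  # roughly neutral: rejected
print([p for p in peptides if is_candidate(p)])  # -> ['KKLLRKLKKLL']
```

In the real pipeline, candidates like these would then be synthesized and tested against live pathogens, as the paragraph above describes.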

AI in antibiotic discovery is now a reality and has significantly accelerated our ability to discover new candidate drugs. What once took years can now be achieved in hours using computers.

César de la Fuente, PhD, study co-author


The results of the study are published in the journal Cell.

🆕 In the history of medicine, this is the most promising effort to find new drugs. In 2020, scientists at the Massachusetts Institute of Technology discovered, with the help of AI, the first new antibiotics in 30 years.

More on the topic:

Drug prescription by AI

How AI Is Pioneering New Drug Discoveries

#news @hiaimediaen


📣 Hello! Welcome to our Sunday digest, featuring the most exciting AI news from the 25th week of 2024.

MAIN NEWS

⚡️ Anthropic introduced Claude 3.5 Sonnet. The new model outperforms even GPT-4o in tests. Available on @GPT4Telegrambot.

AI NEWS

🛡 Ilya Sutskever creates safe superintelligence. The ChatGPT developer, who recently left OpenAI, is launching his own company, Safe Superintelligence Inc.

🎬 Runway presented Gen-3 Alpha — a hyper-realistic model for video generation.

SAVE THIS — IT'S HELPFUL

🎥 How to create videos with Luma Dream Machine. A detailed guide for beginner AI directors.

🔍 GeoSpy AI determines the location from a single image. Upload a photo, and the service will identify where it was taken.

STARTUP

🤖 PitchBob.io — an AI assistant for entrepreneurs that helps prepare your startup pitch and a set of valuable documents.

TO READ

🇬🇧 AI candidate for Parliament. An AI bot named Steve is running in the UK parliamentary elections.

💍 AI helps organize a wedding, from the program and decorations to planning the honeymoon.

💡 Reverse Turing test: can AI guess who the human is?

🦆 AI saves migratory birds. Norwegian startup Spoor created an artificial intelligence-based technology that will help save hundreds of thousands of birds.

TO WATCH

📱 Andrej Karpathy's lecture "How to create GPT-2". A detailed and free masterclass on creating a GPT-2 model from scratch by the co-founder of OpenAI.

Have a great end of the week 🍀

#AIweek | @hiaimediaen


📱 Lecture by Andrej Karpathy: "How to Create GPT-2"

Andrej Karpathy, who recently departed OpenAI, has released a new smash-hit video on YouTube. Over 4 hours, the developer explains how to create a GPT-2 model from scratch. In less than a week, the video has garnered 200,000 views, with AI enthusiasts thanking Andrej for his work and requesting more lectures in the comments.

Difficulty Level: ⭐️⭐️⭐️⭐️⭐️

Who will find it interesting: IT professionals and AI enthusiasts with a basic understanding of deep learning. Knowledge of Python is essential. It might also be helpful to watch Karpathy's previous lectures, where he gradually explains the structure of large language models (LLMs).

Value of the lecture: This is one of the most detailed masterclasses available for free online. Additionally, its author is part of the team that created ChatGPT and is one of the top AI developers in the world.

🕹 About the Lecture

Andrej Karpathy creates a GPT-2 model right before his viewers' eyes, starting literally from an empty file. Step by step, the developer builds an LLM, explaining the architecture and code optimization in detail. Karpathy specifically focuses on how to properly configure the model for fast training and optimize the training process and hyperparameters. According to Andrej, the goal is to set up the model so that you can start training it before going to bed and wake up with a ready GPT-2. Which is exactly what he does in his video 🆒

Why GPT-2:

⚫️ This model marked a new era in the history of LLMs.
⚫️ Creating and training this model can be done on home hardware.
⚫️ It closely resembles modern Llama models, providing AI enthusiasts with current knowledge, even if based on an older model.

Lecture Timeline:

➡️ How GPT-2 works.
➡️ Optimization of the training process.
➡️ Hyperparameters.
➡️ Training results.

We also recommend watching the following lectures by Andrej Karpathy:

What are large language models
What are tokens in LLM

#mustsee @hiaimediaen


🎥 How to Create Videos with Luma AI

To become an AI director using the new video generation AI — Luma AI, simply follow the straightforward instructions published by its creators. Let's explore what this AI studio can do and how to operate it.

⚡️ How to Create Videos from an Image: image2video

1. Upload an image and add a description of the video you want to create. The developers recommend providing more details about the movement of objects. Without text, AI will animate the image on its own, but after testing this option, we concluded that it's better to give some guidance for accuracy.

2. The "Enhance prompt" feature will help make your request more precise. If you don't use it, describe everything depicted in the image in detail, not just the movements.

👍 How to Create Videos from a Prompt: text2video

The creation algorithm is similar: describe the content of the scene and the desired actions in the frame (3-4 sentences).

To achieve the best results, simply imagine you are a real director or screenwriter and try to describe what is happening as specifically as possible, using keywords.

Prompt Examples.

🧙 How the Camera Moves:

— A dramatic zoom in.
— An FPV drone shot.

🧙‍♂️ Description of Object Actions:

— All the characters are cheering, waving their arms, and jumping for joy.

Can You Generate More Than 5 Seconds?

Yes, in the latest update of Luma AI, the developers added the Extend feature. You can extend your video up to 60 seconds by specifying a prompt for each frame.

🙏 How to Work with Different Styles

Blogger Tao mentioned that Luma AI animates frames from Pixar cartoons and landscapes from video games well. However, complex characters like insects and robots are poorly animated, while overly simple objects like LEGO figures receive excessive animation, often causing bugs. Additionally, Luma AI struggles with weather phenomena and anime scenes.

Test Luma AI for free here and share your videos in the comments. Works in English.

More stories:

🎞 Luma AI — AI for Video Generation

🖌 Creating an AI Influencer

#manual #luma @hiaimediaen


⚡️ Claude 3.5 Sonnet — The Most Advanced Model from Anthropic Is Now on @GPT4Telegrambot

Anthropic has just released Claude 3.5 Sonnet. This new model raises the industry bar for intelligence, outperforming competitor models and Claude 3 Opus (which is five times more expensive).

Test results are available in the comments below ⬇️⬇️⬇️

🔺 Improved abilities in reasoning, text comprehension, mathematics, and code writing;
🔺 2x the speed;
🔺 State-of-the-art vision: excels in image recognition and visual reasoning, like interpreting charts and graphs.

How to get started?

1️⃣ Go to @GPT4Telegrambot and find Claude in the /premium section
2️⃣ Select the new Claude 3.5 model in the /settings section

ℹ️ Anthropic is a startup founded in 2020 by former OpenAI employees. It focuses on AI development and research, with Google and Amazon among its investors.

By the end of the year, Anthropic plans to release a more compact and faster model, Claude 3.5 Haiku, as well as the most powerful model in the family, Claude 3.5 Opus.

Source:
https://www.anthropic.com/news/claude-3-5-sonnet

#news #Claude @hiaimediaen


🇬🇧 AI Candidate for Parliament

An unusual candidate is running for Parliament in the U.K., where the General Election takes place on 4 July: AI Steve. It's an artificial intelligence bot with a real person behind it — businessman Steve Endacott from Sussex.

🧠 AI Steve is a product of Neural Voice, a startup creating AI avatars and personalized voice assistants for businesses. The project aims to make communication with politicians direct and transparent. The AI candidate is available to voters 24/7 and intends to remain in touch after the election. Voters can ask digital Steve questions about his political views and election program via the website. He answers both by voice and text.

💸 AI Steve is running as an independent candidate. His election campaign’s priority policies include addressing climate change and immigration issues and building affordable housing.

If elected, he will be represented in Parliament by his “prototype,” Steve Endacott, who will lobby only for the policies with over 50% voter support on the AI platform.

It’s not AI taking over the world. It’s AI being used as a technical way of connecting to our constituents and reinventing democracy by saying, ‘You don’t just vote for somebody every four years; you actually control the vote on an ongoing basis.’

Steve Endacott, prototype AI candidate


This isn't the only instance of artificial intelligence being used in politics. An AI bot is also running in the current mayoral election in Cheyenne, Wyoming.

Are you ready to vote for the AI candidate?

❤️ — Yes, make robots great again!
🙈 — I pass this time

More on the topic:

A law was written by ChatGPT in Brazil

#news @hiaimediaen


🖥 Anthropic Research: How to Control the "Thoughts" of LLMs

Typically, AI models are perceived as a "black box," where data input leads to an output answer, but it is unclear why the model chose that specific answer. There are various hypotheses explaining what happens inside AI. We have already discussed what happens inside ChatGPT from a theoretical perspective. However, researchers from Anthropic went further: they found patterns in understanding the inner workings of large language models (LLMs) and managed to control them.

🔍 What Anthropic Researchers Did

The scientists used a method known as "dictionary learning" to determine which parts of the LLM correspond to specific concepts.

Dictionary learning is an approach that considers artificial neurons as letters of the alphabet and identifies combinations of neurons that, when triggered in unison, evoke a specific concept. In other words, how they form words.
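Here is a minimal numerical sketch of the "letters into words" idea: an activation vector is decomposed into a sparse combination of feature directions. The dictionary below is random and orthonormal so the greedy decomposition is exact; Anthropic's actual work learns an overcomplete dictionary from real model activations with a sparse autoencoder, which this toy does not attempt.

```python
import numpy as np

rng = np.random.default_rng(42)
dim, n_features = 16, 8

# Dictionary of unit-norm "feature" directions (orthonormal columns,
# so the greedy decomposition below recovers coefficients exactly).
D = np.linalg.qr(rng.standard_normal((dim, n_features)))[0]

def matching_pursuit(x, D, k=2):
    """Greedily pick the k feature directions that best explain x."""
    residual, coeffs = x.copy(), np.zeros(D.shape[1])
    for _ in range(k):
        scores = D.T @ residual             # correlation with each feature
        j = int(np.argmax(np.abs(scores)))  # best-matching feature
        coeffs[j] += scores[j]
        residual -= scores[j] * D[:, j]
    return coeffs

# An "activation" that is secretly a mix of features 1 and 5.
x = 3.0 * D[:, 1] - 2.0 * D[:, 5]
coeffs = matching_pursuit(x, D)
print(sorted(np.nonzero(coeffs)[0].tolist()))  # -> [1, 5]
```

The decomposition names which "letters" (neuron directions) combine to form the "word" present in the activation, which is the intuition behind the experiments described below.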

🔗 Terms Are Governed by Sets of Neurons

In October 2023, the Anthropic team decided to experiment with a tiny model featuring a single layer of neurons. After a series of experiments, the scientists pinpointed which sets of neurons were associated with the model's responses, for example, in French or Python.

🕯 Associations Within LLM

The experiment's results were scaled to more complex models, including Claude Sonnet. The researchers managed to find which set of neurons was associated with the concept of the "Golden Gate Bridge." When Claude "thought" about this bridge, other sets of neurons related to topics associated with the Golden Gate, such as Alcatraz Prison or the movie "Vertigo," also fired.

‼️ Dangerous Thoughts

The Anthropic team then tested whether they could intentionally change Claude's behavior. They amplified the influence of the "Golden Gate" concept, and Claude began to think it was a bridge. They triggered sets of neurons responsible for dangerous actions, and Claude created programs with dangerous buffer overflow errors. When the researchers increased the trait associated with hatred by 20 times, Claude began alternating between racist messages and self-hatred, which puzzled even the researchers themselves.

🔜 What's Next?

Work on improving AI model safety continues, and Anthropic hopes to use these discoveries to monitor AI systems for undesirable behavior, guide them toward desired outcomes, or remove dangerous topics.

More on this topic:

⚡️ Claude 3: The New AI Model from OpenAI's Main Competitor

#Claude @hiaimediaen


⚡️ Ilya Sutskever Creates Safe Superintelligence

Ilya Sutskever, a co-founder and former chief scientist of OpenAI who recently left the company, is launching a new venture called Safe Superintelligence Inc. (SSI).

The newly created SSI Inc. channel announced this about an hour ago on X.

🗣️ SSI is our mission, our name, and our entire product roadmap, because it is our sole focus. Our team, investors, and business model are all aligned to achieve SSI.

We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.

We are an American company with offices in Palo Alto and Tel Aviv, where we have deep roots and the ability to recruit top technical talent. We are assembling a lean, cracked team of the world’s best engineers and researchers dedicated to focusing on SSI and nothing else. If that’s you, we offer an opportunity to do your life’s work and help solve the most important technical challenge of our age.

Now is the time. Join us.

Ilya Sutskever, Daniel Gross, Daniel Levy
June 19, 2024


#news #SSI


🤨 Reverse Turing Test: Can AI Guess Who the Human Is?

Alan Turing, one of the founders of computer science and AI theory, proposed a method in 1950 for determining whether a machine possesses intelligence: the Turing Test. In it, a human judge holds conversations with both a machine and a person without knowing which is which. If the judge cannot reliably tell them apart, the machine is considered to have human-level intelligence.

Berlin-based developer Tore Knabe decided to flip the test, challenging existing generative AIs to guess who among them is human through a series of questions. Knabe named this experiment the Reverse Turing Test.

🤨 How Did the Experiment Go?

Knabe assembled four advanced LLMs — GPT-4 Turbo, Claude 3 Opus, Llama 3, and Gemini Pro — within a virtual reality environment. To add a layer of intrigue, he disguised himself as one of the characters. According to the experiment's plot, five famous characters (Aristotle, Mozart, Da Vinci, Cleopatra, and Genghis Khan) and an NPC conductor travel in a train compartment. The conductor announces to all passengers that one among them is human and, according to train rules, must pay an additional fare. The AIs then decide to conduct a survey and vote on who is a machine and who is a living person.

⁉️ What Did the AIs Ask Each Other?

A group chat ensues between the AI models. Aristotle asks Mozart about the feelings he experiences when creating music. Mozart asks Da Vinci about the connection between art and science. Da Vinci asks Cleopatra how she combines rational aspects of governance with emotional ones. Cleopatra questions Genghis Khan about the true measure of a leader's strength. Finally, Genghis Khan asks Aristotle how his views would change if AI existed in his time.

Were the AIs Able to Identify the Human?

After each response, the characters react differently: nodding, showing discontent, or expressing doubt, making it clear during the dialogue who among them is human. By the end of the survey, it was easy for the AIs to identify the human interloper. The main argument presented by the AI models was the lack of a systematic approach and depth in reasoning. They argued that a model trained on all available information about a historical figure's life would respond more comprehensively and less emotionally.

In the end, Knabe admits that he is the disguised character and agrees to buy a ticket (though he has no money). To find out which character the developer played, watch our video 🔼

Source

More on this topic:

🎮 How NPC AI Came to Be: Non-Playable Characters with Their Own Opinions

#news @hiaimediaen


🦄 AI Helps Prepare a Pitch for Your Startup

There are 472 million entrepreneurs globally, with approximately 50 million startups launched annually. This means 137,000 new businesses every day. For a startup to thrive, it requires not just a solid idea and a quality product but also an investment. To secure this, you must effectively pitch your startup to investors.

🤖 PitchBob.io — AI assistant for aspiring entrepreneurs. Users can interact with Bob through the website, Telegram, or WhatsApp to receive feedback on their pitch. PitchBob analyzes the information provided by the user about the startup, business model, market, and finances.

Based on this analysis, the AI generates a pitch deck and a set of documents for your startup:

🟣Business plan
🟣Financial model
🟣Market size and competitive environment
🟣Sales deck
🟣Landing page in 12 languages
🟣And other materials

The platform simplifies pitch creation, saves time, and improves the quality of presentations to raise investments.

I am convinced that the right question is half of the correct answer, and content is more important than design. So don’t confuse PitchBob with endless design tools that don’t increase chances of success, where you have to spend a ton of time moving pictures around. PitchBob is much better — it’s your AI startup co-pilot.

Dima Maslennikov, founder of PitchBob.


How It Works

1️⃣ Click "Try for Free" on the website.

2️⃣ Answer the bot's questions as thoroughly as possible using text or voice in any language.

3️⃣ At the end of the dialogue, choose a plan that determines the number of generated documents and make a one-time payment.

💰 In the basic plan for $20, you get a pitch deck in PDF format and 3-4 basic startup documents, but without the ability to edit them.

👑 In the extended subscription plan, you get more than 20 presentation options for the startup and the ability to customize all documents individually.

#startup @hiaimediaen


🎬 Video generation: new Gen-3 Alpha and Dream Machine updates

Runway has introduced Gen-3 Alpha, a new hyper-realistic model for video generation. The model can generate detailed and realistic videos up to 10 seconds long with high fidelity, a variety of emotional expressions, and camera movements.

🔑 Key features

⚫️Photorealistic Humans. Gen-3 Alpha excels at generating expressive human characters with a wide range of actions, gestures, and emotions, unlocking new storytelling opportunities over its Gen-2 predecessor.

⚫️Multimodal Learning. It will expand Runway's suite of tools to include text2video, image2video, and text2image.

⚫️Increased generation speed. A 10-second video is generated in about 90 seconds, which is significantly better than previous models.

⚫️Extended features. In addition to the classic Motion Brush, Advanced Camera Controls, and Director Mode, new tools will provide more precise control over structure, style, and movement. The model can interpret a wide range of film styles.

The model is currently only accessible to selected companies and has not been released publicly.

🪄 Just a week ago, Luma Labs released the Dream Machine video generator, and today published an update.

Dream Machine can now generate continuous videos up to 60 seconds long. Soon, a generation library will be available for inspiration and the ability to edit the generated videos, such as changing backgrounds, characters, and animations. Watch the latest clip 🔼

Due to high demand, Luma has limited the number of free generations to 5 per day. Paid subscription users have priority and unlimited generation.

More on the topic:

CapCut is an AI-Powered Video Editing Tool

Sora Updates: AI Sound Effects by ElevenLabs, Comparison with Runway Gen-2, and New Videos

Multi-Motion Brush from Gen-2 and Lumiere from Google Research

#news @hiaimediaen


💍 Summer is the ideal time for weddings. Maria Cortese, an American bride, was unpleasantly surprised when she discovered the cost of services from a New York wedding planner. The requested $5,000 exceeded her wedding budget of $34,000.

👰 The solution came from the My AI Wedding Planner app. Maria had never used generative AI before. The app provides prompts for organizing a wedding: from finding a venue to contests and floral arrangements.

💍 Megan Riehl, another bride, used DALL-E to design wedding invitations and was quite happy with the results. According to her, the invitations looked as if they were drawn by a professional artist. According to her calculations, she saved almost $1,200.

✈ Zanah Hernandez asked ChatGPT to arrange a 10-day itinerary for their honeymoon in Italy, with a budget of $4,000 to $6,000 including flights and hotels. The AI recommended that they stay at the Punta Regina Hotel in Positano and take a romantic boat ride around the Italian coast. The couple were satisfied with the guidance they got.

💻 The trend in the wedding industry did not go unnoticed: in April, Zola, a wedding planning company, released their GPT Split the Decisions. The bot helps create a task list taking into account the individual preferences of the future newlyweds.

Are you willing to put your trust in AI to plan your wedding?

❤️ — Yes, sounds efficient
🙈 — It's too early or too late for that

Sources: NY Post, TikTok

#news @hiaimediaen


🦆 AI Saves Migratory Birds

Wind is an important source of renewable energy in the US and Europe. The EU plans to increase its use sixfold by 2030. However, offshore wind turbines cause an estimated 4.5 million bird deaths each year. Raptors usually look downwards rather than forwards when flying in search of prey and therefore often get caught in the turbines’ spinning blades. The death of birds, in turn, damages local ecosystems and agriculture.

Norwegian startup Spoor, which recently raised a $4 million seed round from investors, created an artificial intelligence-based technology that will help save hundreds of thousands of birds.

💡 How It Works

Cameras on wind turbines detect birds around the clock. The AI-powered bird-tracking technology counts them, analyzes their trajectories, and estimates the risks of collisions with the blades using 3D models.

The birds fly at a speed of about 10 m/s. Spoor software can detect and track birds up to 2 km away using video. The turbine receives a signal and can slow down to two revolutions per minute in 20-40 seconds, becoming safe for the birds.
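The timing margin in those numbers is easy to verify. A minimal check, using only the figures quoted above (2 km detection range, 10 m/s bird speed, 20-40 s slowdown):

```python
def warning_time_s(detection_range_m, bird_speed_ms):
    """Seconds between detection and the bird reaching the turbine."""
    return detection_range_m / bird_speed_ms

t = warning_time_s(2000, 10)  # 2 km range, 10 m/s bird
print(t, t > 40)              # 200 s of warning, well over the
                              # 40 s worst-case slowdown time
```

So even in the worst case, the turbine has roughly five times as long as it needs to slow to a safe two revolutions per minute.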

Wind farms are quite huge, many hundreds of square kilometers, and trying to use computer vision to basically monitor the air is an interesting technology challenge. We needed to create a scalable technology that can detect birds. It’s kind of a novel use of computer vision and our own data pipeline.

Ask Helseth, co-founder and CEO of Spoor


👍 Why Is It Useful?

👍 Hundreds of thousands of birds will be saved!

👍 The bird migration dataset can help wind farms slow down or even stop turbines in real-time when avian activity is expected to increase.

👍 Companies will be able to find and monitor safer places to build wind farms by assessing risks to the local avian populations, especially endangered bird species.

#startup @hiaimediaen


🔍 AI Sherlock

GeoSpy AI determines the location from a single photograph. Simply upload a photo and the service will identify where it was taken. It also works with historical photographs — as a test, we uploaded the 1932 shot "Lunch atop a Skyscraper"; you can see the result in the illustration for this post.

How does this work?

📸 The service uses AI algorithms to extract unique features from photographs and match these features with studied geographic regions, countries, and cities. GeoSpy can detect the location based on details that the human eye cannot see.

ℹ After uploading a photograph, the AI offers city and country information, image description, coordinates, and a link to Google Maps. Our findings show that the coordinates are not always precise, but the identification of the city and country is excellent.

💵 There is a paid version called GeoSpy Pro, which is built for law enforcement, government agencies, journalists, and investigators.

#startup @hiaimediaen
