Artificial Intelligence


Channel geo and language: not specified, English
Category: Technology


AI will not replace you, but a person using AI will 🚀
I make Artificial Intelligence easy for everyone so you can start with zero effort.
🚀Artificial Intelligence
🚀Machine Learning
🚀Deep Learning
🚀Data Science
🚀Python + R
🚀AR and VR
Dm @Aiindian



In my previous team at IBM, we hired over 450 AI Engineers worldwide. They are working on Generative AI pilots for our IBM customers across various industries.

Thousands applied, and we developed a clear rubric to identify the best candidates.

Here are 8 concise tips to help you ace a technical AI engineering interview:

𝟭. 𝗘𝘅𝗽𝗹𝗮𝗶𝗻 𝗟𝗟𝗠 𝗳𝘂𝗻𝗱𝗮𝗺𝗲𝗻𝘁𝗮𝗹𝘀 - Cover the high-level workings of models like GPT-3, including transformers, pre-training, fine-tuning, etc.

𝟮. 𝗗𝗶𝘀𝗰𝘂𝘀𝘀 𝗽𝗿𝗼𝗺𝗽𝘁 𝗲𝗻𝗴𝗶𝗻𝗲𝗲𝗿𝗶𝗻𝗴 - Talk through techniques like demonstrations, examples, and plain language prompts to optimize model performance.

𝟯. 𝗦𝗵𝗮𝗿𝗲 𝗟𝗟𝗠 𝗽𝗿𝗼𝗷𝗲𝗰𝘁 𝗲𝘅𝗮𝗺𝗽𝗹𝗲𝘀 - Walk through hands-on experiences leveraging models like GPT-4, Langchain, or Vector Databases.

𝟰. 𝗦𝘁𝗮𝘆 𝘂𝗽𝗱𝗮𝘁𝗲𝗱 𝗼𝗻 𝗿𝗲𝘀𝗲𝗮𝗿𝗰𝗵 - Mention latest papers and innovations in few-shot learning, prompt tuning, chain of thought prompting, etc.

𝟱. 𝗗𝗶𝘃𝗲 𝗶𝗻𝘁𝗼 𝗺𝗼𝗱𝗲𝗹 𝗮𝗿𝗰𝗵𝗶𝘁𝗲𝗰𝘁𝘂𝗿𝗲𝘀 - Compare transformer networks like GPT-3 vs Codex. Explain self-attention, encodings, model depth, etc.

𝟲. 𝗗𝗶𝘀𝗰𝘂𝘀𝘀 𝗳𝗶𝗻𝗲-𝘁𝘂𝗻𝗶𝗻𝗴 𝘁𝗲𝗰𝗵𝗻𝗶𝗾𝘂𝗲𝘀 - Explain supervised fine-tuning, parameter efficient fine tuning, few-shot learning, and other methods to specialize pre-trained models for specific tasks.

𝟳. 𝗗𝗲𝗺𝗼𝗻𝘀𝘁𝗿𝗮𝘁𝗲 𝗽𝗿𝗼𝗱𝘂𝗰𝘁𝗶𝗼𝗻 𝗲𝗻𝗴𝗶𝗻𝗲𝗲𝗿𝗶𝗻𝗴 𝗲𝘅𝗽𝗲𝗿𝘁𝗶𝘀𝗲 - From tokenization to embeddings to deployment, showcase your ability to operationalize models at scale.

𝟴. 𝗔𝘀𝗸 𝘁𝗵𝗼𝘂𝗴𝗵𝘁𝗳𝘂𝗹 𝗾𝘂𝗲𝘀𝘁𝗶𝗼𝗻𝘀 - Inquire about model safety, bias, transparency, generalization, etc. to show strategic thinking.
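Tip 2 above can be made concrete with a short sketch. This is a minimal pure-Python illustration of how a few-shot prompt is assembled from demonstrations; the task and example reviews are made up for illustration, and the resulting string would be sent to whatever LLM API you use:

```python
# A few-shot prompt is just labeled demonstrations concatenated ahead of
# the real query, so the model can infer the task from the pattern.

def build_few_shot_prompt(demos, query):
    """Assemble a few-shot prompt; each demo is an (input, output) pair."""
    lines = ["Classify the sentiment as positive or negative.", ""]
    for text, label in demos:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")  # the model completes from here
    return "\n".join(lines)

demos = [
    ("Loved every minute of it.", "positive"),
    ("A complete waste of time.", "negative"),
]
prompt = build_few_shot_prompt(demos, "Surprisingly good!")
print(prompt)
```

In an interview, being able to explain *why* the demonstrations help (the completion is conditioned on the pattern they establish) matters more than the code itself.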


OpenAI's New Model: o1 🚀

Don't let snake-oil salesmen fool you. This new model released by OpenAI today doesn't "think." It generates an extensive "chain of thought": a discussion of the model with itself that reads like a person talking to themselves.

Previous models were trained to be used in a single shot: you ask a question, you get an answer. Because of randomness in next-token generation, if you were unlucky, your answer might be wrong. This model was fine-tuned to generate a long discussion (hidden from the user by the UI) on how to better solve the problem, what facts are known, what assumptions need to be made, and what constraints should be respected.

If you explicitly asked previous models to generate this discussion before answering your question, you would get a better quality result, because the final answer would be conditioned on the information contained in this discussion.

They seemingly optimized their model to generate good quality discussions (without the user asking for it) by using reinforcement learning on various problems that have a verifiable solution, so that a reward for finding the right answer could be automatically assigned. For example:

Question: 1+1 = ?
Discussion: we have 1 and we have 1 more. And we have a plus sign, so it's an addition. What happens if we add 1 and 1? It means 1 is incremented by 1. When we increment 1 by 1, what do we get? Let's count: 1, 2, 3, 4,... Ok, 2 comes after 1, so 1 + 1 must be 2.
Answer: 2
Reward: 1

Question: 1+1 = ?
Discussion: It's easy. 1+1=11
Answer: 11
Reward: 0

Once the model is trained, what the user sees:

Question: 1+1 = ?
(Discussion happens behind the scenes.)
Answer: 2
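The training signal described above can be sketched in a few lines. For problems with a verifiable solution, a reward is assigned automatically by checking only the final answer, regardless of how the hidden discussion went; the sampled pairs below mirror the two examples above, and the checking function is an illustrative stand-in, not OpenAI's actual implementation:

```python
# Reward assignment for verifiable problems: 1 if the final answer is
# right, 0 otherwise. The hidden discussion is not scored directly.

def reward(model_answer, correct_answer):
    """Binary reward based solely on the final answer."""
    return 1 if model_answer.strip() == correct_answer.strip() else 0

# Two sampled (discussion, answer) pairs for the question "1+1 = ?":
samples = [
    ("we have 1 and 1 more... 2 comes after 1, so 1 + 1 must be 2.", "2"),
    ("It's easy. 1+1=11", "11"),
]
rewards = [reward(answer, "2") for _, answer in samples]
print(rewards)  # [1, 0]
```

Reinforcement learning then pushes the model toward whatever style of discussion tends to precede reward-1 answers.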


Don't get swept away by the hype around AI; stay grounded and approach it thoughtfully. 💯


This year's NVIDIA AI Summit is happening in India 🔥

Today is the last day to save 50% on your NVIDIA #AISummit India conference pass. Register by Sep. 11 to access early-bird pricing for this must-attend #AI event.

NVIDIA AI Summit India, where you can explore the latest breakthroughs in generative AI, industrial digitalisation, robotics, LLMs, edge computing, future factories, data centers, health care, retail, public services, training, certification, and more. Special round tables for startups, VCs, and NVIDIA teams are arranged as well.

Register today and save 50% off your conference pass

When: Oct 23-25, 2024
Where: Jio World Convention Centre, Mumbai
Register here: https://nvda.ws/4fucsD1


The power of AI hype and LinkedIn 😀


🚨 Major Announcement: Mukesh Ambani to transform Rel'AI'ince into a deeptech company

He is focused on driving AI adoption across Reliance Industries Limited's operations through several initiatives:

➡️ Developing cost-effective generative AI models and partnering with tech companies to optimize AI inferencing

➡️ Introducing Jio Brain, a comprehensive suite of AI tools designed to enhance decision-making, predictions, and customer insights across Reliance’s ecosystem

➡️ Building a large-scale, AI-ready data center in Jamnagar, Gujarat, equipped with advanced AI inference facilities

➡️ Launching JioAI Cloud with a special Diwali offer of up to 100 GB of free cloud storage

➡️ Collaborating with Jio Institute to create AI programs for upskilling

➡️ Introducing "Hello Jio," a generative AI voice assistant integrated with JioTV OS to help users find content on Jio set-top boxes

➡️ Launching "JioPhoneCall AI," a feature that uses generative AI to transcribe, summarize, and translate phone calls.


Reposted from: AI Jobs
We are looking to hire for the below positions at AI India Innovations.

𝗔𝗜 𝗔𝗿𝗰𝗵𝗶𝘁𝗲𝗰𝘁: 12+ years of deep AI experience, with real-world GenAI architecture, development, and delivery experience.

𝗔𝗜 𝗟𝗲𝗮𝗱: 8+ years of AI experience, with GenAI development and delivery experience.

We are seeking a highly skilled AI Engineer to join our dynamic team.

Immediate joiners or candidates with a 30-day notice period are preferred.

Interested candidates can send their resumes to hr@aiindia.ai


90% of Machine Learning solutions in the real world are for tabular data.

Not LLMs, not transformers, not agents, not fancy stuff.

Learning to do feature engineering and build tree-based models will open a ton of opportunities.
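Feature engineering on tabular data is mostly about deriving informative columns: ratios, date parts, flags. A minimal pure-Python sketch (the column names are made up for illustration; a real pipeline would feed these features into a tree-based model like gradient boosting):

```python
# Derive a few features from one tabular record. Ratios and date parts
# are the kind of hand-crafted signals tree-based models exploit well.
from datetime import date

def engineer_features(row):
    """Return a copy of the record with derived features added."""
    out = dict(row)
    out["debt_to_income"] = row["debt"] / row["income"] if row["income"] else 0.0
    out["signup_month"] = row["signup_date"].month
    out["is_weekend_signup"] = row["signup_date"].weekday() >= 5
    return out

row = {"debt": 5000, "income": 40000, "signup_date": date(2024, 9, 14)}
features = engineer_features(row)
print(features["debt_to_income"], features["signup_month"], features["is_weekend_signup"])
```

None of this needs an LLM; it needs domain knowledge about which derived columns matter.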

Join our WhatsApp Channel 🔥


Best books for Machine Learning


A bit lengthy, but very helpful for beginners!


To start with Machine Learning:

1. Learn Python
2. Start practicing using Jupyter

There are two main deep learning frameworks that everyone uses:

• TensorFlow
• PyTorch

Don't overthink this. Pick one of them and start practicing with it. I promise you'll end up learning both at some point.

You'll find many tutorials online, but I usually struggle to put a good plan together, so I prefer courses that hold my hand from start to end.

Here are two of those programs:

• Introduction to Machine Learning with TensorFlow.
bit.ly/4fFu0wk

• Introduction to Machine Learning with PyTorch.
bit.ly/46JHQd0

These are the same program but one uses TensorFlow and the other uses PyTorch. Choose the one you prefer.

After you are done with this, you'll have accomplished something very important:

1. You'll have a solid background in classical machine learning
2. You'll have a bunch of solved problems under your belt

Now, it's time to go much deeper. Here are some of the most advanced classes you can take:

• Udacity's Deep Learning Topics with Computer Vision and NLP
• MIT 6.S191 Introduction to Deep Learning
• DS-GA 1008 Deep Learning
• Udacity's Computing With Natural Language
• UC Berkeley Full Stack Deep Learning
• UC Berkeley CS 182 Deep Learning
• Cornell Tech CS 5787 Applied Machine Learning

I also love books! Look at the attached image. Those are some of my favorite machine learning books that I think you should consider.

Finally, keep these three ideas in mind:

1. Start by working on solved problems so you can find help whenever you get stuck.

2. Use AI to summarize complex concepts and generate questions you can use to practice.

3. Find a community and share your work. Ask questions and help others.

During this time, you'll deal with a lot. Sometimes, you will feel it's impossible to keep up with everything happening, and you'll be right.

Here's the good news: ☺️

Most people understand a tiny fraction of the world of Machine Learning. You don't need more to build a fantastic career in the space.

Focus on finding your path, and Write. More. Code. ❤️

That's how you win. ✌️


"Artificial Intelligence Bootcamp" 🚀

90+ hours of sessions.
26 weeks.
19 tools & technologies.
9 case studies.
9 projects.
9 skills.
7 domains.
13 homework assignments.

Duration: 6 months

📌 Live Remote & weekend sessions.
📌 Starting from basics.
📌 Get Certificate

For queries: +918275367267 (WhatsApp)

Join group https://chat.whatsapp.com/J7yYgRz11S9GxSlHstD6Ow

For registrations:
https://aiindia.ai/ai-bootcamp/

Starting from 23rd August


You can download this Data Science and Machine Learning book for free.

There are 4 sections for those who are starting:

• Python Primer
• Linear Algebra and Functional Analysis
• Probability and Statistics
• Multivariate Differentiation and Optimization

The rest of the chapters:

• Importing, Summarizing, and Visualizing Data
• Statistical Learning
• Monte Carlo Methods
• Unsupervised Learning
• Regression
• Regularization and Kernel Methods
• Classification
• Decision Trees and Ensemble Methods
• Deep Learning

Download the book here: https://people.smp.uq.edu.au/DirkKroese/DSML/DSML.pdf


Reposted from: AI Jobs
International students are having a tough time securing a full-time job in tech.

Reasons:

1. The market has limited opportunities for entry-level junior data/software engineers.

2. People with 0-3 years of work experience are competing against job seekers with 8+ years of seasoned experience.

3. Many companies run much of their operations offshore, and as they grow, offshore headcount far outpaces onshore.

Solution:

Build relevant skill sets and latch on to the newer technologies that will bring you on par with experienced candidates.

Build apps, AI models, data pipelines, and dashboards that track live data.

People who have already secured a job have everything to lose; their routine is restricted, and they have limited time to dedicate to learning. You, on the other hand, have nothing to lose and everything to gain.

Request for that coffee chat.

Take that certification exam.

Attend that quarterly event.

Success is sweeter when it's delayed.




Huge announcement from Meta. Welcome Llama 3.1!

This is all you need to know about it:

The new models:
- The Meta Llama 3.1 family of multilingual large language models (LLMs) is a collection of pre-trained and instruction-tuned generative models in 8B, 70B, and 405B sizes (text in/text out).

- All models support long context length (128k) and are optimized for inference with support for grouped query attention (GQA).

- The instruction-tuned models are optimized for multilingual dialogue use cases and outperform many available open-source chat models on common industry benchmarks.

- Llama 3.1 is an auto-regressive language model with an optimized transformer architecture, using SFT and RLHF for alignment. Its core LLM architecture is the same dense structure as Llama 3 for text input and output.

- Tool use: the Llama 3.1 Instruct models (text) are fine-tuned for tool use, enabling them to generate tool calls for search, image generation, code execution, and mathematical reasoning; they also support zero-shot tool use.
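The grouped query attention (GQA) mentioned above shrinks the KV cache by letting several query heads share one key/value head. A toy head-mapping sketch in pure Python; the head counts here are illustrative, not Llama 3.1's actual configuration:

```python
# GQA: query heads are partitioned into groups, and each group shares a
# single KV head, cutting KV-cache memory by n_query_heads / n_kv_heads.

def kv_head_for(query_head, n_query_heads, n_kv_heads):
    """Map a query head index to the KV head its group shares."""
    group_size = n_query_heads // n_kv_heads
    return query_head // group_size

# 8 query heads sharing 2 KV heads -> two groups of 4:
mapping = [kv_head_for(q, 8, 2) for q in range(8)]
print(mapping)  # [0, 0, 0, 0, 1, 1, 1, 1]
```

With a 128k context, this cache reduction is a big part of what makes inference practical.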


Upstage Global Online AI Hackathon! 🚀

Submit your innovative ideas around LLMs and win a trip to South Korea. ✈️

Check out: https://go.upstage.ai/4f4JMk1

💰 Prizes worth $14K.
💳 $200 in Upstage credits.
🤖 Workshops on LLM fine-tuning.
📚 Join a FREE workshop on Deep Learning and LLM training, register: https://go.upstage.ai/4cLOoKl

Share with AI enthusiasts! 🚀✨


Microsoft created an AI speech tool, VALL-E 2, so realistic that it decided not to release it.

Microsoft has created VALL-E 2, a text-to-speech AI tool that is so realistic that they have decided not to release it to the public, fearing misuse of the ability to impersonate other people’s voices.
https://www.microsoft.com/en-us/research/project/vall-e-x/vall-e-2/


AI engineers can be quite successful in this role without ever training a model themselves.

This is how:

1/ Leveraging pre-trained LLMs: Select and tune existing LLMs for specific tasks. Don't start from scratch

2/ Prompt engineering: Craft effective prompts to optimize LLM performance without model modifications

3/ Implement Modern AI Solution Architectures: Design systems like RAG to enhance LLMs with external knowledge

Developers: The barrier to entry is lower than ever.

Focus on the solution's VALUE and connect AI components as if you were assembling Lego! (Credits: Unknown)


This is a class from Harvard University:

"Introduction to Data Science with Python."

It's free. You should be familiar with Python to take this course.

The course is for beginners. It's for those who want to build a fundamental understanding of machine learning and artificial intelligence.

It covers some of these topics:

• Generalization and overfitting
• Model building, regularization, and evaluation
• Linear and logistic regression models
• k-Nearest Neighbor
• Scikit-Learn, NumPy, Pandas, and Matplotlib

Link: https://pll.harvard.edu/course/introduction-data-science-python
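k-Nearest Neighbor, one of the topics listed above, is simple enough to write from scratch: predict the majority label among the k closest training points. A pure-Python sketch with made-up 2D points (the course itself uses Scikit-Learn's implementation):

```python
# k-NN classification: no training phase, just distance comparisons
# against the stored training set at prediction time.
from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of ((x, y), label) pairs; query: an (x, y) point."""
    dist = lambda p: (p[0][0] - query[0]) ** 2 + (p[0][1] - query[1]) ** 2
    nearest = sorted(train, key=dist)[:k]  # k closest by squared distance
    return Counter(label for _, label in nearest).most_common(1)[0][0]

train = [((0, 0), "A"), ((0, 1), "A"), ((1, 0), "A"), ((5, 5), "B"), ((5, 6), "B")]
print(knn_predict(train, (0.5, 0.5)))  # "A"
```

Implementing it once by hand makes the library version much easier to reason about.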


All Developers are Now AI Developers

Worldwide, there are 30 million developers, 300,000 ML engineers, and only 30,000 ML researchers.

For those innovating at the very forefront of ML, some estimates suggest there may be only ~100 researchers in the world who know how to build a GPT-4 or Claude 3-level system.

The good news is that in the face of these talent shortages, tasks that used to require years of fundamental research and sophisticated ML expertise can now be accomplished in days or weeks by mainstream developers building on top of powerful pre-trained language models.

The future belongs to those who can effectively apply existing AI, not just pioneer new AI. Credits - Armand R
