Future Time | News, AI, blockchain, CBD, E-gaming


Channel's geo and language: not specified, English


Media about tech trends
shaping the future
Web: https://futuretime.ai/ Questions: @KirillFuturetime



The startup behind the Hempfy tonic (“Hempfy”) is somewhat reminiscent of the story of the legendary Coke. The difference is that cannabis leaves, rather than coca leaves, serve as the basis for all of its products. Read more on: https://futuretime.ai/2020/04/02/what-way-are-cannabis-tonic-and-coke-alike/


Forward from: Futuretime.ai
​​🤖Can AI Detect Your Emotion Just By How You Walk?👣

Robots are becoming part of our lives, and to interact with us better, they must understand our feelings and emotions.

Robots need to understand human emotions, feelings, intent, social boundaries, and expectations. Whereas most current robotic applications focus primarily on accomplishing tasks efficiently and quickly, socially intelligent robotics adds the component of human emotional and social interaction. This addition helps humans feel safer and more at ease in their close-quarters interactions with robots.

One of the most interesting research applications of socially-intelligent robots is their ability to read body language. The research into determining emotion from facial expressions is already fairly established.

GAMMA lab is working on AI systems that detect emotion from gait. The lab combines facial expressions with body motion to improve prediction of humans' emotional states. According to the researchers, walking style, unlike a facial expression, is not something that can be easily manipulated. Identifying people by their gait with AI systems has already been widely publicized and shown to reach a certain degree of accuracy.


Forward from: Futuretime.ai
​​🤖Autonomous Delivery Vehicles🚗

The coronavirus epidemic and related restrictions have raised hopes for faster adoption of autonomous vehicles, that is, vehicles that operate without drivers.

Neolix has announced mass production of its autonomous delivery vehicles, declaring itself the first company in the world to do so.


🔥AI plays Hide-and-Seek💧

AI can play hide-and-seek. It’s the latest example of how, with current machine learning techniques, a very simple setup can produce shockingly sophisticated results.

The AI agents play a very simple version of the game, where the “seekers” get points whenever the “hiders” are in their field of view. The “hiders” get a little time at the start to set up a hiding place and get points when they’ve successfully hidden themselves; both sides can move objects around the playing field (like blocks, walls, and ramps) for an advantage.

The results from this simple setup were quite impressive. Over the course of 481 million games of hide-and-seek, the AI seemed to develop strategies and counterstrategies, and the agents moved from running around at random to coordinating with their allies to make complicated strategies work. Along the way, they also showed off their ability to break the game physics in unexpected ways. Learn more: https://www.youtube.com/watch?v=Lu56xVlZ40M
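The scoring rule described above can be sketched as a per-timestep reward function. This is a hypothetical simplification for illustration; the actual environment is far more elaborate:

```python
# Toy sketch of the hide-and-seek reward rule: seekers are rewarded when
# any hider is in their field of view, hiders are rewarded when none are.
def step_rewards(visible_hiders):
    seeker_reward = 1 if visible_hiders else -1
    return seeker_reward, -seeker_reward  # zero-sum: hiders get the opposite

print(step_rewards(["hider_1"]))  # a hider is spotted -> (1, -1)
print(step_rewards([]))           # all hiders hidden  -> (-1, 1)
```

Because each side's reward is the negative of the other's, every improvement by one team pressures the other to adapt, which is what drives the escalating strategies and counterstrategies.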


Forward from: Futuretime.ai
​​🤖Robots in our Lives🌍

​​🤖AI in Museum🏛 Pepper is a humanoid robot that stands 4 feet tall and can perceive and interact with its surroundings. Using the recognition pattern of AI, Pepper senses when a visitor is close by and then engages and interacts with them through the conversational pattern. After some research and experimentation, six Smithsonian venues deployed Pepper robots in a trial program aimed at testing how robot technology could enhance visitor experiences and educational offerings.

In the beginning, Kristi Delich and her team didn't know much about Pepper's capabilities, but it quickly became apparent that Pepper was a great fit for what the museum is trying to accomplish. In just the short time that Pepper has been in the museum, it has made a big impact on overall guest experiences. The Smithsonian quickly found that robots able to engage the general public can handle many of the hospitality and customer service aspects of a visitor's time in the museum. This lets guests get their needs met and gives museum workers more time to focus on more complex or interesting tasks.

Pepper is able to provide customized visitor engagement with artwork and artifacts and give docents and museum educators new tools to engage with many visitors. Pepper is able to answer commonly asked questions or tell stories, and also has an interactive touch screen. Additional perks include the fact that Pepper also dances, plays games and poses for selfies. This offers a playful and non-threatening experience for guests, and as a result, often attracts a crowd.


Forward from: Futuretime.ai
​​🗿History of VR🎮

1935
In 1935 American science fiction writer Stanley Weinbaum presented a fictional model for VR in his short story Pygmalion's Spectacles. In the story, the main character meets a professor who invented a pair of goggles which enabled "a movie that gives one sight and sound ... taste, smell, and touch."

1956
Cinematographer Morton Heilig created Sensorama, the first VR machine (patented in 1962). It was a large booth that could fit up to four people at a time. There was a combined full colour 3D video, audio, vibrations, smell and atmospheric effects, such as wind.

1968
Ivan Sutherland, with his student Bob Sproull, created the first virtual reality HMD, named The Sword of Damocles. This head-mounted display was connected to a computer rather than a camera and was quite primitive, as it could only show simple virtual wire-frame shapes.

1985
Jaron Lanier and Thomas Zimmerman founded VPL Research, Inc. This company is known as the first company to sell VR goggles and gloves.

1989
Scott Foster founded Crystal River Engineering Inc after receiving a contract from NASA to develop the audio element of the Virtual Environment Workstation Project (VIEW) - a VR training simulator for astronauts.

1991
Antonio Medina, a NASA scientist, designed a VR system to drive the Mars robot rovers from Earth in supposed real-time despite signal delays between the planets.

2012
Palmer Luckey launched a Kickstarter campaign for the Oculus Rift which raised $2.4 million.

2016
HTC released its HTC VIVE SteamVR headset. This was the first commercial release of a headset with sensor-based tracking which allowed users to move freely in a space.


Forward from: Futuretime.ai
​​🕹VR in Gaming🎮

For the past few years, the virtual reality gaming industry has captured a significant market share and continues to grow rapidly. In the beginning, the idea of virtual reality was fascinating and a little fantastic. Now that VR has become real, we can all agree it has the potential to become the next “big thing”. At the very least, virtual reality in gaming surely does.

The release of the first prototypes of the Oculus VR and Samsung Gear VR started the new age of virtual reality. In 2015, HTC launched the Vive headset, equipped with hand controllers and tracking technology. By the end of that year, global revenues of virtual reality in the gaming industry had reached $4.3 billion.

Most computer games can be successfully transformed into VR format, with new and better interaction. As of 2018, thanks to the variety of VR headsets, new games and new content emerge regularly. Both high-end and mobile games are pushing the boundaries of VR even further.

First-person shooters (FPS) are still the most popular genre of VR games. Players experience presence on the battlefield with matching audio and visual accompaniment (flying bullets, explosions, etc.). Shooters, however, still have one big problem to solve: freedom of movement, which is key for these action games.

Not long ago, Valve published its new game Half-Life: Alyx on the Steam platform. The game is built on VR technology, and everyone can try it right now.


🖥VR vs. REAL🏝

A virtual reality developer recreated his apartment in VR using Unity and his Oculus Quest, and the results are pretty cool.


Forward from: Futuretime.ai
​​🤖AI chatbot helps people to know more about coronavirus🦠

Two telemedicine startups are working together on a program that allows worried patients to text their concerns about the novel coronavirus, or COVID-19, to a chatbot that can link them to remote doctors. The goal is to help them avoid waiting in crowded clinics that may increase the risk of infection for them and for healthcare providers.

San Francisco-based Memora Health and New York-based Ro are offering the new service, which uses an artificial-intelligence-powered chatbot to answer basic questions about COVID-19 and then connects at-risk patients to a doctor. Patients can text “What are the symptoms?” or “Should I wear a mask?”, and the bot, using automated software programmed with recommendations from the Centers for Disease Control and Prevention (CDC) and the World Health Organization (WHO), texts back answers.


Forward from: Coronavirus
​​🤖What's the Difference Between Machine Learning and Deep Learning?🎭

Deep learning is a specialized form of machine learning. A machine learning workflow starts with relevant features being manually extracted from images. The features are then used to create a model that categorizes the objects in the image. With a deep learning workflow, relevant features are automatically extracted from images. In addition, deep learning performs “end-to-end learning” – where a network is given raw data and a task to perform, such as classification, and it learns how to do this automatically.

Another key difference is that deep learning algorithms scale with data, whereas shallow learning converges. Shallow learning refers to machine learning methods that plateau at a certain level of performance as you add more examples and training data to the network.

In machine learning, you manually choose features and a classifier to sort images. With deep learning, feature extraction and modeling steps are automatic.
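To make the contrast concrete, here is a minimal sketch of the classic machine learning side: hypothetical hand-picked features (mean brightness and edge strength) feed a trivial nearest-centroid classifier. In a deep learning workflow, these features would instead be learned automatically from the raw pixels.

```python
import numpy as np

# Classic machine learning: features are hand-crafted before classification.
def manual_features(img):
    # Hand-picked features: mean brightness plus mean gradient magnitude.
    gx = np.abs(np.diff(img, axis=1)).mean()
    gy = np.abs(np.diff(img, axis=0)).mean()
    return np.array([img.mean(), gx + gy])

# Toy "images": smooth gray squares vs. noisy squares.
rng = np.random.default_rng(0)
smooth = [np.full((8, 8), 0.5) for _ in range(10)]
noisy = [rng.random((8, 8)) for _ in range(10)]

# A trivial classifier (nearest class centroid) over the manual features.
X = np.array([manual_features(i) for i in smooth + noisy])
y = np.array([0] * 10 + [1] * 10)
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def classify(img):
    f = manual_features(img)
    return int(np.argmin(np.linalg.norm(centroids - f, axis=1)))

print(classify(np.full((8, 8), 0.5)))  # smooth image -> class 0
print(classify(rng.random((8, 8))))    # noisy image  -> class 1
```

The point of the sketch is where the human effort sits: here someone had to decide that brightness and edge strength matter; a deep network would discover its own features from labeled examples.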


Forward from: Coronavirus
​​🦠Coronavirus impact on space colonization🌖

The coronavirus has dealt a blow to NASA's plan to return Americans to the moon by 2024, as the space agency's chief on Thursday ordered the temporary closure of two rocket production facilities after an employee tested positive for the illness.
NASA Administrator Jim Bridenstine said in a statement he was shutting down the Michoud Assembly Facility in New Orleans and the Stennis Space Center in nearby Hancock County, Mississippi, due to a rise in coronavirus cases in the region.


Forward from: Coronavirus
​​🌿Cannabis Products Could Become Illegal in Department Stores Next Year👮‍♂️

According to a statement by the Food Standards Agency (FSA), products containing CBD must gain approval for sale and be registered before March 2021, or they will all be removed from department stores. Sales of CBD products, however, are not the problem: sales have risen even though not a single product has yet received approval across the UK, and this has been raising safety concerns.

Though cannabidiol is a derivative of cannabis, it does not have any psychoactive effect on the people who use it. It is sold in certain pharmacies and food shops in the supplement section and is used to deal with conditions such as pain and insomnia.

Beyond everything else, the regulatory reluctance to approve the products could stem from trials that found unlisted and possibly hazardous ingredients. Some products have also been found to contain illegal amounts of THC (tetrahydrocannabinol), the psychoactive component of cannabis.


Forward from: Futuretime.ai
​​🤖How Deep Learning Works🕸

Most deep learning methods use neural network architectures, which is why deep learning models are often referred to as deep neural networks.
The term “deep” usually refers to the number of hidden layers in the neural network. Traditional neural networks only contain 2-3 hidden layers, while deep networks can have as many as 150. Deep learning models are trained by using large sets of labeled data and neural network architectures that learn features directly from the data without the need for manual feature extraction.
One of the most popular types of deep neural networks is known as convolutional neural networks (CNN or ConvNet). A CNN convolves learned features with input data, and uses 2D convolutional layers, making this architecture well suited to processing 2D data, such as images.
CNNs eliminate the need for manual feature extraction, so you do not need to identify features used to classify images. The CNN works by extracting features directly from images. The relevant features are not pretrained; they are learned while the network trains on a collection of images. This automated feature extraction makes deep learning models highly accurate for computer vision tasks such as object classification.

CNNs learn to detect different features of an image using tens or hundreds of hidden layers. Every hidden layer increases the complexity of the learned image features. For example, the first hidden layer could learn how to detect edges, and the last learns how to detect more complex shapes specifically catered to the shape of the object we are trying to recognize.
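The convolution step described above can be sketched in a few lines. The 3x3 kernel here is hand-set to a vertical-edge detector purely for illustration; in a real CNN these weights are learned during training.

```python
import numpy as np

# One 2D convolution pass: slide a 3x3 kernel over the image and take
# the sum of elementwise products at each position.
def conv2d(img, kernel):
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Vertical-edge kernel (stand-in for weights a trained network would learn).
kernel = np.array([[1., 0., -1.],
                   [1., 0., -1.],
                   [1., 0., -1.]])

# A 6x6 image with a vertical edge down the middle: left half bright.
img = np.zeros((6, 6))
img[:, :3] = 1.0

feature_map = np.maximum(conv2d(img, kernel), 0)  # ReLU nonlinearity
print(feature_map)  # responds strongly only where the edge is
```

Stacking many such layers, each convolving the previous layer's feature maps, is what lets later layers respond to progressively more complex shapes.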


Forward from: Futuretime.ai
​​🤖What Is Deep Learning?🕸

Deep learning is a machine learning technique that teaches computers to do what comes naturally to humans: learn by example. Deep learning is a key technology behind driverless cars, enabling them to recognize a stop sign, or to distinguish a pedestrian from a lamppost. It is the key to voice control in consumer devices like phones, tablets, TVs, and hands-free speakers. Deep learning is getting lots of attention lately and for good reason. It’s achieving results that were not possible before.
In deep learning, a computer model learns to perform classification tasks directly from images, text, or sound. Deep learning models can achieve state-of-the-art accuracy, sometimes exceeding human-level performance. Models are trained by using a large set of labeled data and neural network architectures that contain many layers.

✅How does deep learning attain such impressive results?💯
In a word, accuracy. Deep learning achieves recognition accuracy at higher levels than ever before. This helps consumer electronics meet user expectations, and it is crucial for safety-critical applications like driverless cars. Recent advances in deep learning have improved to the point where deep learning outperforms humans in some tasks like classifying objects in images.


Which blockchain is the most advanced one? People argue about it all the time. The bear market got rid of all those small coins that claimed to be better than Bitcoin for no reason. Now it’s totally clear that Bitcoin is going to hold its crown as the main cryptocurrency, and the number of transactions per second isn’t the most important factor for adoption. Read more on: https://futuretime.ai/2020/03/18/which-consensus-algorithm-is-the-most-advanced-one/


Forward from: Futuretime.ai
​​🦠How is Coronavirus Impacting the Cannabis Landscape?🌿

The marijuana industry is possibly facing a fallout in the wake of the emerging coronavirus outbreak and is expected to join other affected global industries soon. This is partly because much of the cheap marijuana growing and processing was done in China, but also partly because of the economic slowdown the coronavirus is inflicting on industries.
The virus has already reached the U.S. and Europe and is soon expected to arrive in other Asian countries, as well as in the public markets.


Prior to its transmission or storage, a multi-media signal (whether audio, image or video) often needs to be converted from one format, or code, to another. This process (or ‘encoding’) is repeated in reverse (or ‘decoded’) for playback or editing. The technology that makes this happen is thus referred to as a “codec“. Read more on: https://futuretime.ai/2020/03/18/the-inefficiency-of-current-video-codecs-part-2/
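The encode/decode round trip a codec performs can be illustrated with a toy run-length scheme. This is purely illustrative; real audio and video codecs are vastly more sophisticated.

```python
# Toy "codec": run-length encoding compresses runs of repeated samples,
# and decoding reverses it exactly (a lossless round trip).
def encode(signal):
    runs = []
    for s in signal:
        if runs and runs[-1][0] == s:
            runs[-1][1] += 1      # extend the current run
        else:
            runs.append([s, 1])   # start a new run
    return runs

def decode(runs):
    return [s for s, n in runs for _ in range(n)]

raw = [7, 7, 7, 2, 2, 9]
packed = encode(raw)
print(packed)                 # [[7, 3], [2, 2], [9, 1]]
print(decode(packed) == raw)  # True: playback recovers the original
```

The same shape holds for real codecs: encode for transmission or storage, decode for playback or editing, with far more elaborate transforms in between.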


Forward from: Futuretime.ai
​​👥Deepfakes in lives of people👫

Increasingly, new uses are being found for deepfakes. Good uses. Whether recreating long-dead artists in museums or editing video without the need for a reshoot, deepfake technology will allow us to experience things that no longer exist, or that have never existed. And aside from having numerous applications in entertainment and education, it’s being increasingly used in medicine and other areas.
▪️How does it work?⚙️ In short, deepfakes work via deep generative modelling. Basically, neural networks learn how to create realistic-looking images and videos of real (or fictitious) people after processing a database of example images. Having been trained on images of a real person, they can then synthesise realistic videos of that person. Ultimately, the same technology can also be used to synthesise that person's voice, which has led to fears that we're not far from fake yet entirely believable videos of politicians and celebrities doing or saying outrageous things.
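As a structural sketch of the generative-modelling idea: a GAN-style setup is assumed here, where a generator maps random noise to a fake sample and a discriminator scores how "real" a sample looks. The weights below are random toys; real systems train the two networks against each other on large image databases.

```python
import numpy as np

rng = np.random.default_rng(0)
G_w = rng.normal(size=(4, 8))   # toy generator weights (noise -> sample)
D_w = rng.normal(size=(8, 1))   # toy discriminator weights (sample -> score)

def generator(noise):
    # Maps a noise vector to a fake "sample" of 8 features.
    return np.tanh(noise @ G_w)

def discriminator(sample):
    # Sigmoid score: estimated probability that the sample is real.
    return 1 / (1 + np.exp(-(sample @ D_w)))

noise = rng.normal(size=(1, 4))
fake = generator(noise)
score = discriminator(fake)
print(fake.shape, score.item())  # (1, 8) and a score in (0, 1)
```

Training alternates updates: the discriminator learns to tell real samples from fakes, while the generator learns to fool it, which is what pushes the outputs toward realism.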

▪️Usage
But this is the worst-case scenario. Much more realistically, deepfake technology will play an increasingly constructive role in recreating the past and in envisioning future possibilities. This is already being borne out by an expanding range of examples.
▪️Deepfake in news video reports🎬
Most recently, Reuters collaborated with AI startup Synthesia to create the world's first synthesised, presenter-led news reports, using the same basic deepfakes technology to create new video reports out of pre-recorded clips of a news presenter. What was most novel about this is that, by using deepfake technology, you can automatically generate video reports personalised for each individual news viewer.


Forward from: Futuretime.ai
​​🦠COVID-19’s Impact On Industries👥

Computer Economics, in collaboration with its parent company Avasant, published the Coronavirus Impact Index by Industry, which looks at how COVID-19 is affecting 11 major industry sectors in four dimensions: personnel, operations, supply chain, and revenue. Please see the Coronavirus Impact Index by Industry by Tom Dunlap, Dave Wagner, and Frank Scavo of Computer Economics for additional information and analysis. The resulting index is an overall rating of the impact of the pandemic on each industry and is shown below.


Forward from: Futuretime.ai
​​🤖AI vs. Coronavirus🦠

⚙️Here are ways artificial intelligence, data science, and technology are being used to manage and fight COVID-19.
💊AI to identify, track and forecast outbreaks🔦
The better we can track the virus, the better we can fight it. By analyzing news reports, social media platforms, and government documents, AI can learn to detect an outbreak. Tracking infectious disease risk with AI is exactly the service Canadian startup BlueDot provides. In fact, BlueDot's AI warned of the threat several days before the Centers for Disease Control and Prevention or the World Health Organization issued their public warnings.

AI to help diagnose the virus🌡
Artificial intelligence company Infervision launched a coronavirus AI solution that helps front-line healthcare workers detect and monitor the disease efficiently. Imaging departments in healthcare facilities are being taxed with the increased workload created by the virus. This solution improves CT diagnosis speed. Chinese e-commerce giant Alibaba also built an AI-powered diagnosis system they claim is 96% accurate at diagnosing the virus in seconds.

Process healthcare claims👩‍⚕️
It’s not only the clinical operations of healthcare systems that are being taxed but also the business and administrative divisions as they deal with the surge of patients. A blockchain platform offered by Ant Financial helps speed up claims processing and reduces the amount of face-to-face interaction between patients and hospital staff.

Drones deliver medical supplies🚁
One of the safest and fastest ways to get medical supplies where they need to go during a disease outbreak is with drone delivery. Terra Drone is using its unmanned aerial vehicles to transport medical samples and quarantine material with minimal risk between Xinchang County’s disease control centre and the People’s Hospital. Drones also are used to patrol public spaces, track non-compliance to quarantine mandates, and for thermal imaging.

Robots sterilize, deliver food and supplies and perform other tasks🍎
Robots aren’t susceptible to the virus, so they are being deployed to complete many tasks such as cleaning and sterilizing and delivering food and medicine to reduce the amount of human-to-human contact. UVD robots from Blue Ocean Robotics use ultraviolet light to autonomously kill bacteria and viruses. In China, Pudu Technology deployed its robots that are typically used in the catering industry to more than 40 hospitals around the country.


58 subscribers