
Technology

New Microsoft language learning app uses a phone's camera and computer vision to teach vocabulary


Eight Microsoft interns have developed a language learning tool that people can use on their smartphones to improve their English literacy. The app helps users learn new words for the things around them. Called Read My World, it lets you take a photo with your phone and learn from a library of more than 1,500 words. The photo can be of a real-world object or of a document.

The application is meant to supplement formal classroom training rather than replace it, and it offers a way to keep learning for those who do not have the spare cash for courses. Users are encouraged to photograph the things they see in their day-to-day lives.

Nicole Joyal, a software developer intern who worked on the project, said:

“Originally, we were planning more of a lesson plan-style approach, but through our research and discovery, we realized a Swiss army knife might be more useful.”

She further added:

“We wound up building a tool that can help you throughout your day-to-day rather than something that teaches.”

Read My World uses Microsoft Cognitive Services and its Computer Vision APIs to identify what is in a picture. The app then shows the user the spelling of the matched word along with its phonetic pronunciation, and words can be saved to a personal dictionary for later review.
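Read My World's actual implementation is not public, but as a rough sketch of how a photo-to-vocabulary lookup against the Computer Vision API could work, the snippet below tags a photo and filters the tags against a word list. The endpoint, key, and vocabulary here are placeholders, not the app's real values:

```python
import requests

# Placeholder values -- a real app would use its own Azure resource and key.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
KEY = "<subscription-key>"
VOCABULARY = {"dog", "chair", "window"}  # stand-in for the ~1,500-word library

def tags_for_photo(image_bytes):
    """Ask the Computer Vision 'analyze' endpoint to tag objects in a photo."""
    resp = requests.post(
        f"{ENDPOINT}/vision/v3.2/analyze",
        params={"visualFeatures": "Tags"},
        headers={
            "Ocp-Apim-Subscription-Key": KEY,
            "Content-Type": "application/octet-stream",
        },
        data=image_bytes,
    )
    resp.raise_for_status()
    return [t["name"] for t in resp.json()["tags"]]

def words_to_teach(image_bytes):
    """Keep only the tags that appear in the teaching vocabulary."""
    return [t for t in tags_for_photo(image_bytes) if t in VOCABULARY]

if __name__ == "__main__":
    with open("photo.jpg", "rb") as f:
        print(words_to_teach(f.read()))
```

A real app would then look up the spelling and phonetic pronunciation for each matched word before showing it to the user.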

The app also encourages users to practice recently learned words through a built-in vocabulary game. A 1,500-word library may sound small, but it is close to the number of words learners typically manage to pick up through traditional study.

A report from the BBC notes that many people struggle with learning new languages, picking up only 2,000 to 3,000 words even after a year of study. One study conducted in Taiwan found that after nine years of studying a foreign language, many learners had still not mastered the 1,000 most commonly used words needed for a general conversation.

Source

Technology

Neural network detects whether a photo has been Photoshopped


Photoshop is one of the most widely used image editing programs among professionals, and manipulating pictures and retouching faces is common practice. Researchers from UC Berkeley and Adobe have built a new tool that can tell whether an image has been Photoshopped, identify how it was altered, and even suggest how to undo the edits.

It is important to note that the tool works only on images edited in Photoshop, not in other editing applications. A general-purpose tool for detecting arbitrary changes is still a long way off, but within its scope this one is impressively smart.

One of the researchers, Alexei Efros, who recently took part in an AI and robotics event, reasons that since so many images are manipulated in Photoshop, the most productive place to start is detecting the manipulations that this one application makes possible.

In one test, a set of portrait images was given slight manipulations, such as moving the eyes or emphasizing a smile. Both the original and edited versions were then fed to the system so that it could learn to distinguish between the two.
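The article does not include the researchers' training code; purely to illustrate the original-versus-edited setup it describes, a minimal binary classifier sketch in PyTorch might look like the following. The architecture and every name here are assumptions for illustration, not the actual research pipeline:

```python
import torch
import torch.nn as nn
from torchvision import models

# Sketch only: a small off-the-shelf network trained to separate
# untouched portraits (label 0) from their Photoshopped versions (label 1).
model = models.resnet18(num_classes=2)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(originals, edited):
    """One update on a batch of paired portraits.

    originals, edited: float tensors of shape (N, 3, H, W).
    """
    images = torch.cat([originals, edited], dim=0)
    labels = torch.cat([
        torch.zeros(len(originals), dtype=torch.long),  # untouched
        torch.ones(len(edited), dtype=torch.long),      # manipulated
    ])
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The actual tool goes further than this sketch, also localizing the edits and estimating how to reverse them.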

“We live in a world where it’s becoming harder to trust the digital information we consume,” said Adobe’s Richard Zhang, who worked on the project, “and I look forward to further exploring this area of research.”

A paper on the project is available for anyone who wants to dig deeper.


Technology

Project xCloud bested Google Stadia in latency tests


At E3 2019, Microsoft gave attendees a chance to try out Project xCloud, its game streaming service and a serious competitor to Google Stadia. Even though the server was located almost 400 miles away, games such as Halo 5: Guardians and Gears of War 4 ran at 60 frames per second, streamed to an Android phone with a controller attached.

According to a report from Ars Technica, xCloud showed an input delay of only 67 milliseconds, while Stadia measured around 166 ms. For comparison, playing Halo 5 on a console gives an input lag of about 63 ms.

From the Ars Technica report:

“In our video tests, the time between tapping the A button and seeing a response on the smartphone screen took sixteen frames of a 240 FPS video, or 67 ms, across three subsequent tests. That’s almost imperceptibly slower than the 63 ms input latency Digital Foundry measured on the Xbox One version of Halo 5 in 2017… Testing latency of a wired… Stadia demonstration at March’s Game Developers Conference, Digital Foundry found total latency of 166 ms, compared to a low of 100 ms on a 60 FPS PC.”
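The frame-counting method converts directly to milliseconds: each frame of a 240 FPS recording lasts 1000/240 ≈ 4.17 ms, so sixteen frames works out to the reported 67 ms. A quick check:

```python
def frames_to_ms(frames, fps):
    """Convert a frame count in a high-speed recording to milliseconds."""
    return frames * 1000 / fps

print(frames_to_ms(16, 240))  # ~66.7 ms, reported as 67 ms
```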

It is worth noting that these were less-than-ideal testing conditions. Even so, the input lag is already lower than in previous Stadia testing, and xCloud can still improve on this baseline.

The same goes for Stadia: as revealed back in March, Google has been working to keep input delays low. For now, we are simply waiting for the streaming services to launch. Stadia will go live first for Founder's Edition subscribers, while xCloud will be available as a public trial.

Source


Technology

NASA's ‘Snoopy’ lunar module found 50 years after being abandoned in space


NASA's July 1969 trip to the lunar surface was preceded by several other missions, including Apollo 10, a dress rehearsal that did everything except actually land. Astronauts Thomas Stafford and Eugene Cernan flew Apollo 10's lunar module, nicknamed “Snoopy,” down toward the lunar surface. After they completed their task, the module was jettisoned.

There was no way for ‘Snoopy’ to return to Earth, so it was sent into an orbit around the sun, out beyond the moon. Once the astronauts had returned to the command module, NASA stopped tracking the spacecraft's trajectory. The search for it only began in 2011, in a project led by amateur astronomer Nick Howes.

According to the latest news, the team believes it has finally tracked down the vehicle and will soon confirm its location. There have even been suggestions that Elon Musk's SpaceX could one day retrieve the module as a key cultural artifact.

Apollo 10 was the fourth crewed mission of NASA's Apollo program. Its job was to fly the lunar module to within about 8.4 nautical miles of the moon's surface. Keeping with the Peanuts theme, the command module was called “Charlie Brown.”

The module's fuel tanks were deliberately left short of what a full landing and return would have required. That intentional limitation was imposed so the astronauts on the test run could not jump the queue and land before Apollo 11's Neil Armstrong and Buzz Aldrin, who were meant to be the ones to make history.

Source
