Mapping methods and machine learning technologies
In recent years, machine learning methods based on deep neural networks have become widespread. They have made it possible to achieve breakthrough results in many areas of human knowledge.
Today, programs that use neural networks can make medical diagnoses better than doctors, read lips, recognize speech better than professionals, search for new molecules, and generate in real time the kind of video sequences whose creation until recently required the efforts of many computer graphics specialists and significant time and money.
However, such networks are currently most widely used in image recognition. They make it possible to build models that assign an image to one of thousands of classes (see the sketch below), and today such models achieve greater accuracy than a human can without specialized equipment.
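As an illustration of this kind of classification, here is a minimal Python sketch that runs a publicly available network pre-trained on the 1,000 ImageNet classes. It assumes a recent version of torchvision; the file name is hypothetical, and the model is only a generic example of such classifiers, not part of our platform:

```python
import torch
from torchvision import models, transforms
from PIL import Image

# A network pre-trained on the 1,000 ImageNet classes; used here purely as
# an example of an image classifier, not as the platform's actual model.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

# Standard ImageNet preprocessing.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("photo.jpg").convert("RGB")   # hypothetical photograph
batch = preprocess(image).unsqueeze(0)           # add a batch dimension

with torch.no_grad():
    logits = model(batch)
class_index = logits.argmax(dim=1).item()        # index of the predicted class
print(class_index)
```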
Success in all of these areas has become possible as a result of a number of factors:
Emergence of new neural network architectures and training methods
Increased performance of computing hardware
Emergence of a large number of labeled data sets (training sets)
A neural network must be trained before it can work. The dataset supplied at the input is accompanied by information about its contents: the result that the network is expected to produce once it processes the data independently (a minimal code sketch of this setup follows the examples below).
For instance:
Photographs, together with information about their content, are uploaded to train a neural network in image recognition.
Patient health data is uploaded to the neural network along with the corresponding established diagnoses, and the network is trained to make diagnoses independently.
The famous AlphaGo program used records of moves by leading Go players, collected from the games they played among themselves; the moves of the winning players were treated as correct.
The program demonstrated extraordinary results in 2015, the very first year after its creation; its subsequent versions, having played millions of games against themselves, reached a level that left humans no chance. In 2017, a mysterious player nicknamed Master appeared on Go servers and won every game against every decorated champion. It later came to light that this player was a new version of AlphaGo, and that no one was left who could defeat it.
What seemed unattainable only yesterday has become reality today.
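To make the idea of a labeled training set concrete, here is a minimal Python sketch of supervised training: each input is paired with the answer the network is expected to reproduce, and training gradually adjusts the weights until the network's own output matches those answers. The data, model, and class count are purely hypothetical.

```python
import torch
from torch import nn

# Stand-in for a labeled training set: every input image comes with the
# answer (its class) that the network is expected to reproduce on its own.
images = torch.randn(64, 3, 32, 32)       # hypothetical batch of photographs
labels = torch.randint(0, 10, (64,))      # hypothetical human-provided labels

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    optimizer.zero_grad()
    predictions = model(images)           # the network's own answer
    loss = loss_fn(predictions, labels)   # gap between answer and label
    loss.backward()
    optimizer.step()                      # adjust weights to close the gap
```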
To solve the problem of precise user device positioning, we at Spheroid Universe use neural networks and machine learning methods.
From a programming point of view, the Spheroid Universe technology combines the following methods:
For each Space, the scene captured in the video recordings is classified with the help of a pre-trained neural network: water, land, buildings, trees, animate objects, vehicles, etc.
From the cloud of points classified as static objects (those permanently present on site), a 3D model and textures are built.
A set of features is extracted (unique image elements together with their positions relative to one another), against which the platform later searches to determine device position and orientation (see the sketch after this list).
The resulting 3D model is linked to the surrounding space and to the map, which gives precise geographic coordinates for every point of the model.
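The feature-extraction step of the mapping stage can be illustrated with the following hedged Python sketch. The ORB detector, the file names, and the in-memory index are illustrative assumptions rather than the platform's actual components; the per-frame scene classification is only indicated in a comment.

```python
import cv2
import numpy as np

def extract_frame_features(frame_path):
    """Extract distinctive image elements and their mutual layout from a frame.

    ORB keypoints and descriptors are used purely as a stand-in for the
    platform's feature detector.
    """
    image = cv2.imread(frame_path, cv2.IMREAD_GRAYSCALE)
    if image is None:
        raise FileNotFoundError(frame_path)
    orb = cv2.ORB_create(nfeatures=2000)
    keypoints, descriptors = orb.detectAndCompute(image, None)
    # Keep pixel coordinates so the relative layout of the features is preserved.
    points = np.float32([kp.pt for kp in keypoints])
    return points, descriptors

# Mapping stage (illustrative): walk the survey video frames, classify each
# frame with a pre-trained scene model (omitted here), and store the feature
# sets that will later be searched to localize a device.
feature_index = {}
for frame_id, path in enumerate(["frame_000.jpg", "frame_001.jpg"]):  # hypothetical frames
    feature_index[frame_id] = extract_frame_features(path)
```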
In app use mode, we upload the image features and location model to the user's device based on the approximate user location determined from the device's GPS data. Then, with the help of a neural network, we identify the set of features in the image from the device camera and search for a matching set in the platform data; this tells us which part of the model the camera is aimed at and from which side. Next, by comparing the image with the 3D model, we determine the precise position of the device relative to the model. This information makes it possible to ensure seamless interaction of virtual objects with reality on the device's display. The matching and pose-estimation idea is sketched below.
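The sketch below, in Python with OpenCV, is a simplified illustration of this runtime step: matching the camera frame's features against the stored set narrows down which part of the model is in view, and the resulting 2D-3D correspondences yield the device's pose relative to the model. The function, its arguments, and the choice of ORB features with a PnP solver are assumptions made for illustration, not our production pipeline.

```python
import cv2
import numpy as np

def localize_device(camera_frame, map_descriptors, map_points_3d, camera_matrix):
    """Estimate device position and orientation relative to the 3D model.

    camera_frame    : grayscale image from the device camera (numpy array)
    map_descriptors : feature descriptors stored for this part of the model
    map_points_3d   : 3D model coordinates corresponding to those descriptors
    camera_matrix   : 3x3 intrinsic matrix of the device camera
    All names and the choice of ORB/PnP are illustrative assumptions.
    """
    orb = cv2.ORB_create(nfeatures=2000)
    keypoints, descriptors = orb.detectAndCompute(camera_frame, None)

    # Find which stored features the current camera image corresponds to.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(descriptors, map_descriptors),
                     key=lambda m: m.distance)[:200]

    image_points = np.float32([keypoints[m.queryIdx].pt for m in matches])
    object_points = np.float32([map_points_3d[m.trainIdx] for m in matches])

    # Recover where the camera is and where it points, relative to the model.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        object_points, image_points, camera_matrix, distCoeffs=None)
    if not ok:
        return None
    return rvec, tvec  # model-to-camera rotation and translation; the device pose follows from these
```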