Semantic segmentation of panoramic images is an important branch of computer vision in artificial intelligence. It combines image classification, object detection, and image segmentation, performing pixel-level classification of image data.
Images processed with panoramic semantic segmentation are widely used in machine learning for autonomous driving, unmanned aerial vehicles, and other scenarios, making this one of the most common data labeling types in image processing.
It is worth noting that Figure 1 is only an intuitive visualization of an image processed by semantic segmentation and cannot be used directly for machine learning. Only the JSON and Mask format data exported from this image can be recognized and applied by the machine.
JSON format data
JSON, short for JavaScript Object Notation, is a lightweight data interchange format that stores and presents data as text, completely independent of any programming language.
The JSON format was first proposed by Douglas Crockford in 2001 as a replacement for the cumbersome XML format. JSON has two distinct advantages over XML: it is easy to write and easy to understand at a glance, and it conforms to JavaScript's native syntax, so it can be handled directly by the interpretation engine without additional code.
These advantages led to JSON's rapid adoption; it has become the standard format for exchanging data across major websites and was incorporated into the ECMAScript 5 standard.
In the field of artificial intelligence, data in JSON format can be read and written quickly and exchanged between different platforms. Its syntax is also familiar to programmers of C-family languages (including C, C++, C#, Java, JavaScript, Perl, Python, etc.), making it an ideal export format for labeled datasets.
Compared with raw image data, JSON data describes the information in the image as text, including the update time, project name, dataset name, color information, point coordinates, completion time, label names, and so on. It turns the image into structured information, effectively extracting the key content of the image data to meet the needs of machine learning.
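As a sketch of what such an export might look like (the field names below are hypothetical illustrations, not the SEED platform's actual schema), a labeled polygon in an exported JSON file could be read like this:

```python
import json

# Hypothetical annotation export; real field names depend on the platform.
exported = """
{
  "project": "street-scenes",
  "dataset": "batch-01",
  "updated": "2021-06-01T12:00:00Z",
  "labels": [
    {
      "name": "car",
      "color": "#FF0000",
      "points": [[10, 12], [48, 12], [48, 40], [10, 40]]
    }
  ]
}
"""

annotation = json.loads(exported)
for label in annotation["labels"]:
    # Each label carries its name, display color, and polygon vertices.
    print(label["name"], label["color"], len(label["points"]), "points")
```

Because the export is plain text, the same file can be consumed by a training pipeline in any language with a JSON parser.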
Mask
Mask originated in the semiconductor manufacturing industry.
In semiconductor manufacturing, many chip process steps use photolithography, and the graphical 'negative' used in these steps is also called a mask: an opaque graphical template that covers selected areas on the silicon wafer so that subsequent etching or diffusion affects only the areas left exposed.
In the field of image processing, a mask controls the region or process of an operation by occluding the processed image, wholly or in part, with a selected image, graphic, or object.
In the field of data labeling, mask images allow structures in an image that resemble the mask to be detected and extracted, using similarity measures or image-matching methods, for machine recognition and learning.
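To make the idea concrete, here is a minimal, dependency-free sketch (not the SEED platform's implementation) that rasterizes a labeled polygon into a binary mask, where 1 marks pixels inside the labeled region:

```python
def point_in_polygon(x, y, polygon):
    """Even-odd rule: count edge crossings of a ray cast to the right."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses the horizontal line at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def polygon_to_mask(polygon, width, height):
    """Rasterize one polygon into a height x width binary mask."""
    return [
        [1 if point_in_polygon(x + 0.5, y + 0.5, polygon) else 0
         for x in range(width)]
        for y in range(height)
    ]

# A small square label inside a 6x6 image.
mask = polygon_to_mask([(1, 1), (4, 1), (4, 4), (1, 4)], 6, 6)
for row in mask:
    print("".join(map(str, row)))
```

Production pipelines typically export the mask as an image file (e.g. a single-channel PNG) rather than a nested list, but the pixel-level content is the same.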
SEED Platform Data Export Format
In a complete data life cycle, labeling accounts for only part of the overall process; labeled data can be used for algorithm recognition and learning only after it is exported.
At present, the SEED data service platform fully supports exporting all common data formats, including .json, .xml, .csv, .xls, and Mask, to meet the requirements of different types of algorithm models.
At the same time, the SEED data service platform also supports dual JSON+Mask export for panoramic semantic segmentation: the same data need only be labeled once to output structured data in two different formats.
With this dual export function, JSON and Mask data require no conversion and no repeated export passes. It not only meets algorithm requirements in different scenarios, but also avoids duplicate work and improves production efficiency.
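A dual export of this kind can be sketched as follows (a simplified illustration with hypothetical field names, not the platform's actual pipeline): a single in-memory annotation is written out once as JSON and once as a pixel-level mask.

```python
import json

# One hypothetical annotation (field names are illustrative only).
annotation = {
    "label": "road",
    "bbox": [2, 1, 5, 3],  # x0, y0, x1, y1 inclusive: a rectangular region
}
width, height = 8, 5

# Export 1: JSON, the character-based description of the label.
json_text = json.dumps(annotation)

# Export 2: Mask, a binary image of the same labeled region.
x0, y0, x1, y1 = annotation["bbox"]
mask = [
    [1 if x0 <= x <= x1 and y0 <= y <= y1 else 0 for x in range(width)]
    for y in range(height)
]

print(json_text)
for row in mask:
    print("".join(map(str, row)))
```

Producing both formats from one labeling pass is what removes the need for conversion or a second export: an algorithm that consumes coordinates reads the JSON, while one that consumes per-pixel labels reads the mask.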