
Achievements and Case Studies (Annotation Site)

Case Studies

We support AI annotation projects for many companies, including GAFAM.

We participate in AI development projects across a wide range of fields, including the medical, automotive, and IT industries, with thorough security systems and high-precision annotation.

Translation, Documentation, and Annotation Achievements


CASE 01
AI Development Project for Advanced Medical Devices
Medical Device Manufacturing Company

Required Tasks
  • AI development for advanced medical devices designed to support surgery and diagnosis.
  • Perform instance segmentation of several types of labels using actual data such as endoscopic images and X-ray images.
Customer's Challenges
  • Because the medical data was highly confidential, the customer wanted to avoid remote work and overseas outsourcing.
  • They were concerned about the compatibility between the source data and the tools used by subcontractors.
Our Solutions
  • We built a dedicated secure room for this project within our office and operated it on-site. By restricting entry to project members only, we ensured both data security and the confidentiality of the project itself.
  • By properly managing tool versions, we unified the environments of the customer and our annotators, ensuring data integrity.
Number of Tasks
10,000 items
Work Period
2 months
Main Takeaways
  • Human Science, which has obtained ISMS certification, has secure rooms for on-site operation.
  • We achieve thorough data management with comprehensive security management and worker education.
  • We provide flexible support for introducing new annotation tools and updating versions.

CASE 02
Autonomous Driving AI Accuracy Improvement Project
AI Technology Development Manufacturer

Required Tasks
  • Annotation for improving autonomous driving technology: workers tagged specified objects and areas in dashcam footage.
Customer's Challenges
  • The customer planned for long-term operation, but the work was monotonous and annotator retention was low; even well-trained, excellent annotators left quickly.
Our Solutions
  • We selected and organized a team of qualified personnel for this project from our contracted annotators.
  • Regular meetings were held within the team. We reinforced quality and productivity by creating a system that does not leave annotators' questions unanswered.
  • Workers were regularly assigned new responsibilities and moved between teams. By introducing variety into the environment while ensuring quality, we maintained the annotators' high motivation and sense of accomplishment.
Number of Tasks
Over 6,000 items
Work Period
Over 6 months
Main Takeaways
  • Human Science can organize a team suited to the project's tasks from annotators contracted directly with us.
  • We create variety and a sense of achievement even after a project starts, keeping annotator motivation high. Long-term retention stabilizes work quality.

CASE 03
AI Assistant User Request Understanding Improvement Project
Global IT Company

Required Tasks
  • Ensure that the AI assistant correctly understands the users' voice requests and can perform the desired actions.
  • Workers evaluate the AI's understanding by tagging each action taken by the AI.
Customer's Challenges
  • The customer wanted to build a team of 40 members within 2 months.
  • Working in a secure room was essential, due to the highly confidential nature of the project.
  • The task was highly difficult and relied on workers' insight and judgment, so they wanted to proceed only with skilled annotators. Training for annotators was essential before starting the actual work.
  • Securing the necessary resources in-house was too costly for the customer.
Our Solutions
  • First, we started the project in our existing secure room, and within 1.5 months, we moved the project to a newly established secure room where 40 people could work.
  • We established a team structure and training program based on the proficiency of the annotators. By actively sharing and updating our knowledge, we were able to improve and stabilize the quality.
Number of Tasks
About 450,000 items
Work Period
6 months
Main Takeaways
  • Human Science can provide secure rooms that meet your standards. We also respond promptly to the need for expansion.
  • We provide thorough security education to annotators. We manage projects in a way that meets high security standards in both resources and environment.
  • Through close communication and information sharing among members, we accelerate annotator proficiency, enabling cost reduction through shorter training periods and greater productivity.

CASE 04
Project to Improve OCR Text Recognition Accuracy
Global IT Company

Required Tasks
  • Convert text areas found in images such as maps and restaurant menus into data that AI can understand, to improve the recognition accuracy of OCR.
  • The operator manually selects the text areas and adds the correct information to each one.
Customer's Challenges
  • The customer wanted to maximize operating time within the deadline, but their own resources alone were not enough.
  • Due to the difficulty of the task, many newly hired workers quit during training, making progress on the project challenging.
Our Solutions
  • We designed and implemented a new specialized recruitment test for the project. By forming teams with only the successful candidates, we reduced resignations and improved operational efficiency.
  • We analyzed the tendencies of annotators who performed well in training and actively hired candidates with similar profiles.
  • We organized a team of workers who could understand English guidelines and materials directly. By eliminating the document-translation step, we reduced costs.
Number of Tasks
22,000 items
Work Period
1,600 hours/month
Main Takeaways
  • Human Science has cultivated documentation skills and built up multilingual-capable personnel, both of which were utilized in the test creation and team organization for this project.
  • As a result, we achieved a high operating efficiency that exceeded the initial expected standards.

CASE 05
AI Automated Contract Content Confirmation Project
Global IT Company

Required Tasks
  • Automate the process of reviewing the contents of contracts by analyzing text.
  • The worker reads the contract documents, extracts and categorizes specific phrases and expressions, and performs labeling. The ability to understand technical terms and define complex labeling is required.
Customer's Challenges
  • Internal resources were insufficient, and the establishment of a system to mass-produce training data was not progressing.
  • The client did not know where to start to execute a PoC (Proof of Concept).
  • They wanted to consult with experienced individuals for the establishment of work rules, standardization of knowledge, and the creation of management mechanisms.
Our Solutions
  • We dispatched one experienced annotator from our company's resources to work at the client's office.
  • We listened to their challenges, and together, we created instructions for the work process and decision-making criteria.
  • We identified the management challenges for future expansion of the annotation work and developed mechanisms to ensure its continuity.
Number of Tasks
About 200 items
Work Period
3 months
Main Takeaways
  • By dispatching an experienced project manager, Human Science can visualize current and future challenges.
  • By being stationed in the customer's office, we can achieve both detailed support and data confidentiality. We contributed to the establishment of a system for expanding the annotation structure.

CASE 06
AI PoC Project for Automatic Determination of Internal Tissue Areas
Medical Device Manufacturer

Required Tasks
  • Instance Segmentation of CT Slice Images
Customer's Challenges
  • The customer attempted to bring data annotation in-house using available engineers, but could not keep the annotation specifications maintained, resulting in quality variation and production issues; the project could not progress as planned.
Our Solutions
  • We executed a rapid project launch utilizing our contracted data annotators (experienced in medical data annotation).
  • We created an annotation specification document from the already annotated sample data provided by the client.
Number of Tasks
About 2,000 items
Work Period
2 weeks
Main Takeaways
  • Human Science can create annotation specification documents through customer-provided annotated data and Q&A.
  • In addition to the above specifications, we utilized annotated sample data as a training reference for difficult-to-explain scenarios, shortening the training period for data annotators.
  • Thanks to our carefully selected contract annotators and shortened training period, we ensured high productivity from the early stages of the project launch. Despite the difficulty of the task, we were able to meet the client's deadline and received high praise from them for both quality and delivery time.

CASE 07
Conversation Emotion Detection AI Project
Content Creation IT Company

Required Tasks
  • Label conversational text with eight emotional categories.
Customer's Challenges
  • Because the annotation work was handled by a single in-house engineer, training data creation was not progressing, so the client considered outsourcing. However, given the ambiguous and difficult nature of annotation work, they were concerned about individual differences in labeling and whether consistent, high-quality training data could be created.
  • They had no experience or know-how in creating documented standards to suppress labeling variation and stabilize quality when working with multiple people or outsourcing.
Our Solutions
  • Before entering into an outsourcing contract with the client, we conducted a trial and had the client evaluate the quality.
  • We created data annotation guidelines at our company.
  • We adopted a triple-pass method: three people annotated the same data, and the final label was determined by majority vote.
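As an illustration only (not the actual tooling used on this project), the triple-pass majority vote can be sketched in a few lines of Python:

```python
from collections import Counter

def triple_pass_label(labels):
    """Pick the final label from three annotators' labels by majority vote.

    Returns the majority label, or None when all three disagree
    (such items would be escalated to a reviewer).
    """
    assert len(labels) == 3
    label, count = Counter(labels).most_common(1)[0]
    return label if count >= 2 else None

# Two of three annotators agree, so that label wins.
print(triple_pass_label(["joy", "joy", "anger"]))   # -> joy
print(triple_pass_label(["joy", "fear", "anger"]))  # -> None (escalate)
```

The escalation rule for three-way disagreement is a hypothetical detail added for completeness; in practice such items are typically resolved by a reviewer or the PM.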
Number of Tasks
20,000 items
Work Period
About 2 months
Main Takeaways
  • During the trial, Human Science created an annotation specification that met the client's requirements, despite the high level of ambiguity, through Q&A, close communication, and client feedback. The specification also proved useful for the client's own ongoing additional training.
  • By making frequent partial deliveries, we can respond to feedback and requests from our customers in a timely manner, alleviating any concerns they may have about the quality of our services.
  • In addition to the triple-pass method, by conducting PM checks, providing timely feedback to workers, and holding regular meetings, we ensured stability and consistency in quality while suppressing the variation and bias in worker judgments that are common in ambiguous language annotation, and received high praise from the client.

CASE 08
Machine Operation Analysis AI Project
Machine Tool Manufacturer

Required Tasks
  • Keypoint annotation of machine operators
Customer's Challenges
  • Without the know-how to produce data annotations in-house, the client had difficulty establishing a system to ensure stable quality and productivity.
  • Due to the high ambiguity of the annotation positions, there were significant individual differences in keypoint placement; even when in-house staff were gathered to annotate, quality varied widely and a high volume of rework was needed.
  • The client had trouble grasping the key points and guidelines for creating a manual to reduce variations in annotation.
  • Because the data was highly confidential, they requested that work be handled domestically with a client-provided tool.
Our Solutions
  • We quickly launched a project team that consisted of our registered domestic data annotators, who have received security education.
  • As work progressed, we accumulated responses and evaluation standards for dealing with edge cases, and we provided feedback on the manual provided by the customer.
Number of Tasks
3,000 files
Work Period
3 weeks
Main Takeaways
  • At the start of the project, the project manager (PM) performed the annotation personally and clarified questions about the work specifications together with the client. This allowed us to understand finer details that could not be captured in the manual.
  • By accumulating and documenting knowledge such as detailed working points and evaluation standards for edge cases, and then using them in worker training, we shortened teaching time while launching the team smoothly and stabilizing quality.
  • By sharing the accumulated information with the client, we helped them create a work manual and acquire the knowledge needed to outsource data annotation.

CASE 09
GPS Human Flow Data Automatic Analysis AI Project
Research Institution

Required Tasks
  • Labeling of transportation method and stay type (total of 7 types) for human movement GPS data
Customer's Challenges
  • In a previous data annotation project handled by another company, overseas workers' limited understanding of domestic geography and transportation led to significant quality variation and a high number of revisions.
  • Due to the high ambiguity of the annotation criteria, there were significant individual differences in judgments; even when in-house staff annotated, quality varied widely and much rework was required, increasing the workload and delaying the schedule. This time, the client wanted to achieve high-quality annotation at a reasonable price by using domestic workers.
Our Solutions
  • We assigned domestic contract workers familiar with local geography and transportation, and quickly launched the project.
  • By monitoring the workers' proficiency and understanding, the PM adjusted the frequency and depth of checks as needed, establishing an efficient checking system with the same level of quality as a full check.
Number of Tasks
3,000 items (3,000 days worth of travel and stay data)
Work Period
About 2 months
Main Takeaways
  • This annotation was a difficult and somewhat specialized task: it required considering not only the movement logs of the day being annotated but also past movement histories, so quality could vary by worker.
  • In addition to the work procedure manual, we documented and accumulated a large number of "data annotation examples" and "paths to the correct answer" as decision-making references, and shared them with the workers as knowledge.
  • We also established team-wide information sharing and a Q&A system at an early stage, greatly reducing variation in individual judgments. The resulting data showed fewer quality fluctuations, revisions, and feedback cycles, earning high praise from the client.

CASE 10
Conversation-Specific Expression Automatic Detection AI Project
Research Institution

Required Tasks
  • Labeling specific expressions from conversation videos
Customer's Challenges
  • The client tried to do data annotation with in-house engineers, but it took too much time and effort.
  • Although data annotation is inherently difficult, ambiguous, and prone to variation, the variation in results was larger than expected due to a lack of clear criteria and alignment among workers. The client therefore wanted to outsource to a vendor with expertise in the language domain.
  • Because the data was highly confidential, they requested that work be handled domestically with a client-provided tool.
Our Solutions
  • We adopted a triple-pass method (three people annotated the same data; the label was determined by majority vote) and implemented efficient quality management based on the agreement rate.
  • In addition to creating data annotation standards in-house, we extended the training period before production work began and strengthened our system.
  • To ensure understanding and align judgment criteria, we increased the frequency of meetings and individual feedback with workers, and suppressed quality variation by documenting how to handle edge cases.
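As a rough sketch of what agreement-rate monitoring can look like (the label names and threshold workflow here are hypothetical, not the project's actual setup), mean pairwise agreement across annotators can be computed like this:

```python
from itertools import combinations

def agreement_rate(annotations):
    """Mean pairwise agreement across annotators.

    `annotations` maps annotator name -> list of labels in the same item
    order. A rate that drops below a chosen threshold would signal drift
    in judgment criteria and prompt a review meeting.
    """
    pairs = list(combinations(annotations.values(), 2))
    per_pair = [
        sum(x == y for x, y in zip(a, b)) / len(a)
        for a, b in pairs
    ]
    return sum(per_pair) / len(per_pair)

# Hypothetical labels from three workers on four items.
labels = {
    "worker_a": ["filler", "hedge", "filler", "none"],
    "worker_b": ["filler", "hedge", "none",   "none"],
    "worker_c": ["filler", "hedge", "filler", "none"],
}
print(round(agreement_rate(labels), 2))  # prints 0.83
```

Raw agreement is the simplest such measure; chance-corrected statistics such as Fleiss' kappa are a common refinement when label distributions are skewed.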
Number of Tasks
1,300 minutes of conversation video
Work Period
20 business days
Main Takeaways
  • Using sample data provided by the customer, the PM conducted a trial operation in advance and found many edge cases specific to linguistic understanding and conversation data annotation, resulting in large variation in judgments. The work system and process were designed based on this finding.
  • We assigned dedicated in-house workers who are strong in natural language text annotation.
  • We extended the training period before production, strengthened the system, and increased the frequency of worker meetings and individual feedback to minimize variation among the three workers from the start of work. As a result, we reduced rework after checking, errors reaching the customer, and the customer's acceptance-check burden, and received high praise for achieving higher quality than the client had achieved in-house.

Other Case Studies

  • Natural Language Processing
    Data Generation for AI Assistant
    Project for improving the accuracy of an AI assistant. We assigned native speakers to generate a large amount of natural text that is likely to be spoken by general users as requests to the AI assistant.
  • Map Information
    Improved Map App Route Proposal Feature
    Project for improving user satisfaction with a map app. By evaluating whether the app's perceived location information and suggested routes were appropriate, we produced a massive quantity of high-quality training data with more accurate information.
  • OCR Text
    Improved Optical Text Recognition Accuracy
    Text area extraction from images. Request from an overseas company. We organized a team of annotators within 3 business days, consisting of people who can understand and apply English work manuals and feedback as is. We completed the project within the deadline and without spending time on translation or interpretation.
  • Speech Recognition
    Creation of Training Data for Voice Reading
    Project for creating training data using multilingual speech synthesis. The project team was composed of native speakers of each language. Voice data in Japanese, English, Chinese, and Korean was created. This is a case where the resources cultivated in our translation business were helpful.
  • Machine Translation Evaluation
    Creation of Machine Translation Training Data
    Project for evaluating the output of machine translation and improving the quality of training data. This work contributes to improving translation accuracy by integrating with natural language processing. This is a case where both our translation business experience and knowledge of natural language processing with AI/annotation were utilized.
  • Intent Extraction
    Search Engine Accuracy Evaluation
    Project for improving a search engine's query understanding. Workers evaluated whether the browser displayed appropriate results for each user search input.

