Achievements and Case Studies (Annotation Site)

Case Studies

We support AI annotation projects for many companies, including the GAFAM tech giants.

We take part in AI development projects across a wide range of fields, including the medical, automotive, and IT industries, combining rigorous security management with high-precision annotation.


CASE 01
AI Development Project for Advanced Medical Devices
Medical Device Manufacturing Company

Required Tasks
  • AI development for advanced medical devices designed to support surgery and diagnosis.
  • Perform instance segmentation with several label types on real data such as endoscopic and X-ray images.
Customer's Challenges
  • Because the medical data was highly confidential, the customer wanted to avoid remote work and overseas outsourcing.
  • They were concerned about compatibility between the source data and the tools used by subcontractors.
Our Solutions
  • We built a dedicated secure room for this project within our office and operated entirely on-site. By admitting only project members, we protected both the data and the confidentiality of the project itself.
  • By managing tool versions consistently, we unified the environments of the customer and our annotators, ensuring data integrity.
Number of Tasks
10,000 items
Work Period
2 months
Main Takeaways
  • Human Science, which has obtained ISMS certification, has secure rooms for on-site operation.
  • We achieve thorough data management with comprehensive security management and worker education.
  • We provide flexible support for introducing new annotation tools and updating versions.

CASE 02
Autonomous Driving AI Accuracy Improvement Project
AI Technology Development Manufacturer

Required Tasks
  • Annotation to improve autonomous driving technology: tagging objects and regions in dashcam footage.
Customer's Challenges
  • The customer planned for long-term operation, but the work is monotonous and annotator retention was low; even well-trained annotators left quickly.
Our Solutions
  • We selected and organized a team of qualified personnel for this project from our contracted annotators.
  • Regular meetings were held within the team, creating a system in which no annotator's question goes unanswered and reinforcing both quality and productivity.
  • Workers were regularly given new responsibilities and rotated between teams. By varying the environment while safeguarding quality, we maintained the annotators' motivation and sense of accomplishment.
Number of Tasks
Over 6,000 items
Work Period
Over 6 months
Main Takeaways
  • Human Science can assemble a team suited to each project's tasks from its directly contracted annotators.
  • Even after a project starts, we introduce variety and opportunities for achievement to keep annotators motivated, and we stabilize work quality by retaining them long term.

CASE 03
AI Assistant User Request Understanding Improvement Project
Global IT Company

Required Tasks
  • Ensure that the AI assistant correctly understands users' voice requests and can perform the desired actions.
  • Workers evaluate the AI's understanding by tagging each action taken by the AI.
Customer's Challenges
  • The customer wanted to build a 40-member team within 2 months.
  • Working in a secure room was essential, given the highly confidential nature of the project.
  • The task was difficult and relied on workers' insight and judgment, so the customer wanted only skilled annotators, with training completed before the actual work began.
  • Securing these resources in-house would have been prohibitively costly.
Our Solutions
  • We started the project in our existing secure room and, within 1.5 months, moved it to a newly established secure room with capacity for 40 people.
  • We established a team structure and training program based on the proficiency of the annotators. By actively sharing and updating our knowledge, we were able to improve and stabilize the quality.
Number of Tasks
About 450,000 items
Work Period
6 months
Main Takeaways
  • Human Science can provide secure rooms that meet your standards. We also respond promptly to the need for expansion.
  • We provide thorough security education to annotators. We manage projects in a way that meets high security standards in both resources and environment.
  • Through close communication and information sharing among members, we accelerate annotator proficiency, cutting costs through shorter training periods and higher productivity.

CASE 04
Project to Improve OCR Text Recognition Accuracy
Global IT Company

Required Tasks
  • Convert text areas found in images such as maps and restaurant menus into data that AI can understand, to improve the recognition accuracy of OCR.
  • The operator manually selects the text areas and adds the correct information to each one.
Customer's Challenges
  • The customer wanted to maximize operational hours within the deadline, but their own resources alone were not enough.
  • Due to the difficulty of the task, many newly hired workers quit during training, making progress on the project challenging.
Our Solutions
  • We designed and implemented a recruitment test specialized for the project. By forming teams only from successful candidates, we reduced resignations and improved operational efficiency.
  • We analyzed the traits of annotators who performed well in training and actively recruited candidates with similar profiles.
  • We organized a team of workers who could use the English guidelines and materials as-is. Eliminating document translation reduced cost.
Number of Tasks
22,000 items
Work Volume
1,600 hours/month
Main Takeaways
  • Human Science's cultivated documentation skills and multilingual staff were put to use in creating the recruitment test and organizing the team for this project.
  • As a result, we achieved operating efficiency that exceeded the initially expected standards.

CASE 05
AI Automated Contract Content Confirmation Project
Global IT Company

Required Tasks
  • Automate the review of contract contents through text analysis.
  • The worker reads contract documents, extracts and categorizes specific phrases and expressions, and applies labels. The ability to understand technical terms and handle complex labeling definitions is required.
Customer's Challenges
  • Internal resources were insufficient, and a system for mass-producing training data was not taking shape.
  • The client did not know where to start in executing a PoC (Proof of Concept).
  • They wanted to consult experienced practitioners on establishing work rules, standardizing knowledge, and creating management mechanisms.
Our Solutions
  • We dispatched one experienced annotator from our staff to work at the client's office.
  • We listened to their challenges and, together, created work instructions and decision-making criteria.
  • We identified concrete management challenges for the future expansion of the annotation work and built mechanisms to sustain it.
Number of Tasks
About 200 items
Work Period
3 months
Main Takeaways
  • By dispatching an experienced project manager, Human Science can visualize current and future challenges.
  • By stationing staff in the customer's office, we achieve both detailed support and data confidentiality, and we contributed to building a system for scaling up the annotation structure.

CASE 06
Automated Tissue Region Detection AI PoC Project
Medical Device Manufacturer

Required Tasks
  • Instance Segmentation of CT Slice Images
Customer's Challenges
  • The customer attempted to bring data annotation in-house using available engineers but could not keep the annotation specifications maintained, resulting in quality variation and production issues; the project could not progress as planned.
Our Solutions
  • We executed a rapid project launch using our contracted data annotators, who are experienced in medical data annotation.
  • We created an annotation specification document from the already-annotated sample data provided by the client.
Number of Tasks
About 2,000 items
Work Period
2 weeks
Main Takeaways
  • Human Science can create annotation specification documents through customer-provided annotated data and Q&A.
  • In addition to the above specifications, we utilized annotated sample data as a training reference for difficult-to-explain scenarios, shortening the training period for data annotators.
  • Thanks to our carefully selected contract annotators and the shortened training period, we ensured high productivity from the early stages of the launch. Despite the task's difficulty, we met the client's deadline and received high praise for both quality and delivery time.

CASE 07
Conversation Emotion Analysis AI Project
Content Production IT Company

Required Tasks
  • Label conversational text with eight emotional categories.
Customer's Challenges
  • Because the annotation was handled by a single in-house engineer, the creation of training data was not progressing, so the client was considering outsourcing. However, given the ambiguous and difficult nature of the work, they worried about individual differences in labeling and whether consistent, high-quality training data could be produced.
  • They had no experience or know-how in creating documented standards to suppress labeling variation and stabilize quality when working with multiple people or outsourcing.
Our Solutions
  • Before entering an outsourcing contract, we conducted a trial and had the client evaluate the quality.
  • We created the data annotation guidelines in-house.
  • We adopted a triple-pass method: three annotators labeled the same data, and the final label was determined by majority vote (see the sketch below).
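
As a minimal sketch of how such a triple-pass vote can be resolved (the function and label names are illustrative, not the project's actual tooling):

```python
from collections import Counter

def resolve_triple_pass(labels: list[str]) -> str | None:
    """Resolve one item's label from three independent annotations.

    Returns the majority label, or None when all three annotators
    disagree; such items are escalated for review.
    """
    assert len(labels) == 3, "triple-pass expects exactly three annotations"
    label, count = Counter(labels).most_common(1)[0]
    return label if count >= 2 else None

# Two of three annotators chose "joy", so "joy" wins.
print(resolve_triple_pass(["joy", "joy", "surprise"]))    # -> joy
print(resolve_triple_pass(["joy", "anger", "surprise"]))  # -> None (escalate)
```

Items with no majority would typically be routed to a reviewer or the project manager for adjudication.
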
Number of Tasks
20,000 items
Work Period
About 2 months
Main Takeaways
  • During the trial, Human Science created an annotation specification that met the client's requirements despite the high ambiguity, drawing on Q&A, communication, and feedback from the client. The specification also proved useful for regular additional training within the client's organization.
  • By making frequent partial deliveries, we respond to customer feedback and requests in a timely manner, easing any concerns about quality.
  • Alongside the triple-pass method, PM checks, timely feedback to workers, and regular meetings suppressed the variation and bias in worker judgments that are common in ambiguous language annotation; the client praised the stability and consistency of the quality.

CASE 08
Machine Operation Behavior Analysis AI Project
Machine Tool Manufacturer

Required Tasks
  • Keypoint data annotation for machine operators
Customer's Challenges
  • Lacking the know-how to produce data annotations in-house, the client had difficulty establishing a system that ensured stable quality and productivity.
  • Because the correct annotation positions were highly ambiguous, point placements differed greatly between individuals; even when the client gathered in-house staff to annotate, quality varied widely and a large volume of rework was needed.
  • The client struggled to grasp the key points and guidelines for writing a manual that would reduce annotation variation.
  • Because the data was highly confidential, they requested that the work be handled domestically using a client-provided tool.
Our Solutions
  • We quickly launched a project team of our registered domestic data annotators, all of whom have received security education.
  • As work progressed, we accumulated responses and evaluation standards for edge cases and fed them back into the customer-provided manual.
Number of Tasks
3,000 files
Work Period
3 weeks
Main Takeaways
  • At the start of a project, the project manager (PM) performs the annotation personally and works through Q&A on the work specifications with the client, capturing the finer details that a manual cannot.
  • By accumulating and documenting knowledge such as detailed work points and evaluation standards for edge cases, and then using it in worker training, we shortened teaching time while launching the team smoothly and stabilizing quality.
  • By sharing the accumulated information with the client, we helped them create a work manual and build the knowledge needed to outsource data annotation.

CASE 09
GPS Crowd Flow Data Automatic Analysis AI Project
Research Institution

Required Tasks
  • Labeling of transportation modes and stay types (seven types in total) for human-movement GPS data
Customer's Challenges
  • A previous annotation effort by another company used overseas workers unfamiliar with domestic geography and traffic conditions, resulting in significant quality variation and many revisions.
  • Because the correct point locations were highly ambiguous, placements differed greatly between individuals; even when the client gathered in-house staff to annotate, quality varied widely and much rework was needed. As a result, man-hours grew beyond expectations and the schedule slipped. This time, the client wanted high-quality annotation by domestic workers while keeping costs down.
Our Solutions
  • We assigned domestic contractors well-versed in domestic geography and transportation conditions to launch the project quickly.
  • With the project manager monitoring each operator's proficiency and understanding, we tuned the frequency and weighting of sampling checks to build an efficient checking system that delivered quality equivalent to full inspection (see the sketch below).
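
As an illustration of that kind of adaptive sampling logic (the thresholds and rates below are hypothetical assumptions, not the parameters actually used):

```python
def sampling_rate(items_done: int, recent_error_rate: float) -> float:
    """Choose what fraction of a worker's output to inspect.

    New or error-prone workers are checked heavily (up to 100%);
    proven workers are spot-checked. All thresholds here are
    illustrative assumptions.
    """
    if items_done < 100:             # still ramping up: inspect everything
        return 1.0
    if recent_error_rate > 0.05:     # quality slipping: inspect half
        return 0.5
    if recent_error_rate > 0.01:     # acceptable, but watch closely
        return 0.2
    return 0.05                      # stable, accurate worker: 5% spot check

print(sampling_rate(50, 0.00))    # 1.0  -> full inspection during ramp-up
print(sampling_rate(500, 0.003))  # 0.05 -> light spot checks
```

The point is simply that inspection effort concentrates where error risk is highest, keeping overall quality near full-inspection levels at a fraction of the cost.
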
Number of Tasks
3,000 cases (3,000 days of movement and stay data)
Work Period
About 2 months
Main Takeaways
  • It is a somewhat specialized task that is difficult and has many factors that cause quality to vary by person, as it is necessary to consider not only the movement log of the day of annotation but also the past movement history.
  • Therefore, not only the work procedures document but also a wealth of materials such as "examples of annotations" and "ways of thinking that lead to the correct answer" are compiled and accumulated, and shared as knowledge with the workers.
  • In addition, we established a system for information sharing and Q&A across the entire team early on, which helped to minimize variations in individual judgments. As a result, we received high praise from our clients for delivering data with significantly less variation in quality and fewer revisions and feedback than expected.

CASE 10
Automatic Determination AI Project for Specific Conversational Expressions
Research Institution

Required Tasks
  • Labeling specific expressions extracted from conversation videos
Customer's Challenges
  • The client tried annotating with in-house engineers, but it took too much effort.
  • The annotation was difficult, highly ambiguous, and prone to variability. With no clear standards document and insufficient alignment among workers, the in-house attempt showed greater quality variation than expected, so the client wanted to outsource to a vendor with language-domain expertise and extensive experience.
  • Because the data was highly confidential, they requested that the work be handled domestically using a client-provided tool.
Our Solutions
  • We adopted a triple pass (the same data annotated by three individuals, with the label determined by majority vote) and implemented efficient quality control based on the agreement rate (see the sketch below).
  • In addition to creating the annotation standards in-house, we extended the training period and strengthened the team structure before entering the actual work phase.
  • We increased the frequency of worker meetings and individual feedback to align understanding of comprehension checks, standard criteria, and edge-case handling, reducing quality variation.
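
A minimal sketch of the agreement-rate signal that can steer such quality control (the function name and toy labels are illustrative):

```python
def agreement_rate(annotations: list[tuple[str, str, str]]) -> float:
    """Fraction of items on which all three annotators agree.

    A falling rate flags batches, or sections of the guidelines,
    that need extra feedback or clearer standards before work continues.
    """
    if not annotations:
        return 0.0
    unanimous = sum(1 for a, b, c in annotations if a == b == c)
    return unanimous / len(annotations)

batch = [("yes", "yes", "yes"), ("yes", "no", "yes"), ("no", "no", "no")]
print(f"{agreement_rate(batch):.2f}")  # -> 0.67 on this toy batch
```
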
Number of Tasks
Conversation video: 1,300 min
Work Period
20 business days
Main Takeaways
  • The sample data provided by the client in advance was used by the PM to conduct trial work. It was found that there were many edge cases specific to linguistic understanding and conversation annotation, leading to significant variability in judgment during annotation. Based on this, a work structure and process were established.
  • Assign our dedicated personnel who are strong in natural language text annotation within our company.
  • We successfully minimized the variation among the three operators from the very start of the work by expanding the training period before the actual work, strengthening the system, and increasing the frequency of operator meetings and individual feedback.
    As a result, we received positive feedback for achieving higher quality than what was obtained internally by the client, while reducing the need for corrections after checks, preventing errors from reaching the client, and alleviating the burden of acceptance checks at the client’s site.

Other Case Studies

  • Natural Language Processing
    Data Generation for AI Assistant
    Project for improving the accuracy of an AI assistant. We assigned native speakers to generate a large amount of natural text that is likely to be spoken by general users as requests to the AI assistant.
  • Map Information
    Improved Map App Route Proposal Feature
    Project for improving user satisfaction with a map app. By evaluating whether the app's location information and suggested routes were appropriate, we produced a large volume of high-quality training data with more accurate information.
  • OCR Text
    Improved Optical Text Recognition Accuracy
    Text area extraction from images, requested by an overseas company. Within 3 business days we organized a team of annotators who could understand and apply the English work manuals and feedback as-is, completing the project on deadline without spending time on translation or interpretation.
  • Speech Recognition
    Creation of Training Data for Voice Reading
    Project for creating training data using multilingual speech synthesis. The project team was composed of native speakers of each language. Voice data in Japanese, English, Chinese, and Korean was created. This is a case where the resources cultivated in our translation business were helpful.
  • Machine Translation Evaluation
    Creation of Machine Translation Training Data
    Project for evaluating the output of machine translation and improving the quality of training data. This work contributes to improving translation accuracy by integrating with natural language processing. This is a case where both our translation business experience and knowledge of natural language processing with AI/annotation were utilized.
  • Intent Extraction
    Search Engine Accuracy Evaluation
    Project for improving a search engine's understanding. Workers evaluated whether the engine displayed appropriate results for each user search input.
