
Achievements and Case Studies (Annotation Site)

Case Studies

We support AI annotation projects for many companies, including GAFAM.

We contribute to AI development projects with a rigorous security framework and high-precision annotation across a wide range of fields, including the medical, automotive, and IT industries.

Translation, Documentation, and Annotation Achievements


CASE 01
AI Development Project for Advanced Medical Devices
Medical Device Manufacturing Company

Required Tasks
  • AI development for advanced medical devices that support surgery and diagnosis.
  • Perform instance segmentation of several types of labels using actual data such as endoscopic images and X-ray images.
Customer's Challenges
  • For the handling of highly confidential medical data, the customer wanted to avoid remote work and outsourcing overseas.
  • They were concerned about the compatibility between the source data and the tools used by subcontractors.
Our Solutions
  • We built a dedicated secure room for this project within our office. It was operated on-site. By allowing only project members to enter the room, we ensured the security of the data and the confidentiality of the project itself.
  • By properly managing the version of the tools used, we unified the environments of our customer and annotators. This ensured data integrity.
Number of Tasks
10,000 items
Work Period
2 months
Main Takeaways
  • Human Science, which has obtained ISMS certification, has secure rooms for on-site operation.
  • We achieve thorough data management with comprehensive security management and worker education.
  • We provide flexible support for introducing new annotation tools and updating versions.

CASE 02
Autonomous Driving AI Accuracy Improvement Project
AI Technology Development Manufacturer

Required Tasks
  • Annotation to improve autonomous driving technology: tagging objects and areas identified in dashcam footage.
Customer's Challenges
  • The customer was planning for long-term operation, but the work was monotonous and annotator retention was low. Even annotators who became highly skilled tended to leave quickly.
Our Solutions
  • We selected and organized a team of qualified personnel for this project from our contracted annotators.
  • Regular meetings were held within the team. We reinforced quality and productivity by creating a system that does not leave annotators' questions unanswered.
  • Workers were regularly assigned new responsibilities, and moved between teams. By making changes to the environment while ensuring quality, we were able to maintain high motivation and a sense of accomplishment from the annotators.
Number of Tasks
Over 6,000 items
Work Period
Over 6 months
Main Takeaways
  • Human Science can organize a team suited to the project's tasks from its directly contracted pool of annotators.
  • We provide variety and a sense of achievement even after the project starts, keeping annotator motivation high, and we stabilize work quality by securing long-term retention.

CASE 03
AI Assistant User Request Understanding Improvement Project
Global IT Company

Required Tasks
  • Ensure that the AI assistant correctly understands the users' voice requests and can perform the desired actions.
  • Workers evaluate the AI's understanding by tagging each action taken by the AI.
Customer's Challenges
  • The customer wanted to build a team of 40 members within 2 months.
  • Working in a secure room was essential, due to the highly confidential nature of the project.
  • The task was highly difficult and relied on workers' insight and judgment, so they wanted to proceed only with skilled annotators. Training for annotators was essential before starting the actual work.
  • Securing the necessary resources in-house would have been prohibitively costly for the customer.
Our Solutions
  • First, we started the project in our existing secure room, and within 1.5 months, we moved the project to a newly established secure room where 40 people could work.
  • We established a team structure and training program based on the proficiency of the annotators. By actively sharing and updating our knowledge, we were able to improve and stabilize the quality.
Number of Tasks
About 450,000 items
Work Period
6 months
Main Takeaways
  • Human Science can provide secure rooms that meet your standards. We also respond promptly to the need for expansion.
  • We provide thorough security education to annotators. We manage projects in a way that meets high security standards in both resources and environment.
  • By sharing information and communicating closely among members, we support the proficiency of annotators. This enables cost reduction through shorter training periods and greater productivity.

CASE 04
Project to Improve OCR Text Recognition Accuracy
Global IT Company

Required Tasks
  • Convert text areas found in images such as maps and restaurant menus into data that AI can understand, to improve the recognition accuracy of OCR.
  • The operator manually selects the text areas and adds the correct information to each one.
Customer's Challenges
  • The customer wanted to maximize working hours within the deadline, but their own resources alone were not enough.
  • Due to the difficulty of the task, many newly hired workers quit during training, making progress on the project challenging.
Our Solutions
  • We designed and implemented a new specialized recruitment test for the project. By forming teams with only the successful candidates, we reduced resignations and improved operational efficiency.
  • We analyzed the traits of annotators who performed well in training and actively recruited candidates with similar profiles.
  • We organized a team of members who could work directly from the English guidelines and materials. By eliminating the document translation step, we reduced costs.
Number of Tasks
22,000 items
Work Volume
1,600 hours/month
Main Takeaways
  • Human Science has cultivated documentation skills and built a pool of multilingual personnel, both of which were applied to the test creation and team organization for this project.
  • As a result, we achieved a high operating efficiency that exceeded the initial expected standards.

CASE 05
AI Automated Contract Content Confirmation Project
Global IT Company

Required Tasks
  • Automate the process of reviewing the contents of contracts by analyzing text.
  • The worker reads the contract documents, extracts and categorizes specific phrases and expressions, and performs labeling. The ability to understand technical terms and complex labeling definitions is required.
Customer's Challenges
  • Internal resources were insufficient, and the establishment of a system to mass-produce training data was not progressing.
  • The client did not know where to start to execute a PoC (Proof of Concept).
  • They wanted to consult with experienced individuals for the establishment of work rules, standardization of knowledge, and the creation of management mechanisms.
Our Solutions
  • We dispatched one experienced annotator from our company's resources to work at the client's office.
  • We listened to their challenges, and together, we created instructions for the work process and decision-making criteria.
  • We identified the management challenges for future expansion of the annotation work and developed mechanisms to ensure its continuity.
Number of Tasks
About 200 items
Work Period
3 months
Main Takeaways
  • By dispatching an experienced project manager, Human Science can visualize current and future challenges.
  • By being stationed in the customer's office, we can achieve both detailed support and data confidentiality. We contributed to the establishment of a system for expanding the annotation structure.

CASE 06
Automated Tissue Region Detection AI PoC Project
Medical Device Manufacturer

Required Tasks
  • Instance Segmentation of CT Slice Images
Customer's Challenges
  • The customer attempted to bring data annotation in-house using available engineers, but could not keep the annotation specifications up to date. The resulting quality variation and production issues prevented the project from progressing as planned.
Our Solutions
  • We executed a rapid project launch utilizing our contracted data annotators (experienced in medical data annotation).
  • We created an annotation specification document from the already annotated sample data provided by the client.
Number of Tasks
About 2,000 items
Work Period
2 weeks
Main Takeaways
  • Human Science can create annotation specification documents through customer-provided annotated data and Q&A.
  • In addition to the above specifications, we utilized annotated sample data as a training reference for difficult-to-explain scenarios, shortening the training period for data annotators.
  • Thanks to our carefully selected contract annotators and the shortened training period, we achieved high productivity from the early stages of the project. Despite the difficulty of the task, we met the client's deadline and received high praise for both quality and delivery time.

CASE 07
Conversation Emotion Analysis AI Project
Content Production IT Company

Required Tasks
  • Label conversational text with eight emotional categories.
Customer's Challenges
  • Annotation had been handled by a single in-house engineer, so the creation of training data was not progressing. The customer was considering outsourcing, but because the annotation task is ambiguous and correct answers are hard to determine, labeling varied significantly between individuals, and they were concerned about whether consistently high-quality training data could be produced.
  • They had no experience or know-how in creating documented standards to suppress labeling variation and stabilize quality when working with multiple annotators or outsourcing.
Our Solutions
  • Before entering into an outsourcing contract with the client, we conducted a trial and had the client evaluate the quality.
  • We created data annotation guidelines at our company.
  • We adopted a triple-pass method: three annotators labeled the same data independently, and the final label was determined by majority vote.
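The majority-vote step of the triple-pass method can be sketched as a small aggregation function. This is an illustrative sketch only; the emotion labels and the `majority_label` helper are assumptions for demonstration, not the actual project tooling.

```python
from collections import Counter

def majority_label(labels):
    """Return the majority label among annotator passes.

    Returns None when no label holds a strict majority (e.g., three
    annotators each chose a different label), signaling that the item
    needs adjudication rather than silently picking one answer.
    """
    counts = Counter(labels)
    label, count = counts.most_common(1)[0]
    return label if count > len(labels) / 2 else None

# Three annotators label the same utterance (hypothetical data):
passes = ["joy", "joy", "surprise"]
print(majority_label(passes))  # -> joy

# No majority: the item is flagged for review.
print(majority_label(["joy", "anger", "surprise"]))  # -> None
```

Returning `None` for ties mirrors the practice described above: ambiguous items are escalated (e.g., to a PM check) instead of being resolved arbitrarily.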
Number of Tasks
20,000 items
Work Period
About 2 months
Main Takeaways
  • During the trial, Human Science created an annotation specification that met the client's requirements despite the high ambiguity, refining it through Q&A, communication, and feedback from the client. The specification also proved useful for the client's own ongoing additional training.
  • By making frequent partial deliveries, we can respond to feedback and requests from our customers in a timely manner, alleviating any concerns they may have about the quality of our services.
  • In addition to the triple-pass method, we conducted PM checks, provided timely feedback to workers, and held regular meetings. The client praised the resulting stability and consistency of quality, which suppressed the variation and bias in worker judgments that are common in ambiguous language annotation.

CASE 08
Machine Operation Behavior Analysis AI Project
Machine Tool Manufacturer

Required Tasks
  • Key point data annotation for machine operators
Customer's Challenges
  • Without the know-how to produce data annotations in-house, the client had difficulty establishing a system to ensure stable quality and productivity.
  • Because the correct annotation positions were highly ambiguous, point placements varied significantly between individuals; even when staff were gathered internally to annotate, quality varied widely and a large volume of rework was needed.
  • The client struggled to identify the key points and guidelines needed to create a manual that would reduce annotation variation.
  • Because the data was highly confidential, they requested that work be handled domestically with a client-provided tool.
Our Solutions
  • We quickly launched a project team that consisted of our registered domestic data annotators, who have received security education.
  • As work progressed, we accumulated responses and evaluation standards for dealing with edge cases, and we provided feedback on the manual provided by the customer.
Number of Tasks
3,000 files
Work Period
3 weeks
Main Takeaways
  • At the start of the project, the project manager (PM) personally performed the annotation and worked with the client to clarify questions and answers (Q&A) about the work specifications, capturing the finer details that a manual cannot convey.
  • By accumulating and documenting knowledge such as detailed work points and evaluation standards for edge cases, and then using them in worker training, we shortened training time while launching the team smoothly and stabilizing quality.
  • By sharing the accumulated information with the client, we helped them create a work manual and build know-how for outsourcing data annotation.

CASE 11
"To be honest, there is nothing but satisfaction with your company."
AI Anomaly Detection Project - Business IT Company

Required Tasks
  • Annotation
    (Bounding Box / Semantic Segmentation) Work
  • Selection of images to be processed (data cleansing)
  • Create images that are difficult to collect using generative AI
Customer's Challenges
  • There was a shortage of in-house resources to perform annotation tasks.
  • It was difficult to establish and refine annotation standards, resulting in quality inconsistencies and, ultimately, no improvement in AI accuracy.
Our Solutions
  • We collected the Q&A and feedback generated during the work and shared them with the customer, clarifying decision criteria while obtaining agreement.
  • We share our expertise on annotation tools and tasks to not only improve work efficiency and quality but also enhance our clients' understanding of annotation.
Number of Tasks
5 Types of Tasks
Approximately 7,000 Files
Work Period
About 4 months
Main Takeaways
  • The client's challenge was the bottleneck created by running AI development and annotation in parallel, with little flexibility in the development schedule. By outsourcing the annotation work to us and dividing the tasks, the client was able to focus fully on AI model development and accuracy improvement, their core priorities, and completed the AI development in a short period, which they evaluated highly.
  • Through close communication with the client, we conducted Q&A, gathered feedback, consolidated it, and shared it back to clarify standards. As a result, the client rated our output as higher quality than their in-house production, and it contributed significantly to improving the accuracy of the AI under development.

● Customer interviews are available here.

CASE 12
"We received candid feedback even on minor discrepancies and differences in definition recognition."
Comprehensive Machinery Manufacturer

Required Tasks
  • Point Annotation for People
Customer's Challenges
  • Annotation work was not progressing as expected due to a lack of resources.
  • The annotations contained many ambiguities, and the customer lacked the know-how to establish review systems and training programs to ensure quality.
  • Because the data was highly confidential, they wanted the work performed domestically.
Our Solutions
  • We shared ambiguous points and edge cases with the client and fed the decisions back into the specification document to stabilize quality.
  • The PM personally handled the initial sample data, acquired an understanding of the specifications and work tips, and created a manual, enabling effective training for the workers.
Number of Tasks
Approximately 3,000 files
Work Period
About 1 month
Main Takeaways
  • To improve annotation quality, we identified data that did not match the specification documents and items prone to variability, and proposed them to the client. We received feedback that pointing these out led to further quality improvements.
  • At the project kickoff, we received positive feedback that by aligning not only on detailed requirements but also on communication plans and methods, the project was able to proceed smoothly from start to finish.

● Customer interviews are available here.

Other Case Studies

  • Natural Language Processing
    Data Generation for AI Assistant
    Project for improving the accuracy of an AI assistant. We assigned native speakers to generate a large amount of natural text that is likely to be spoken by general users as requests to the AI assistant.
  • Map Information
    Improved Map App Route Proposal Feature
    Project for improving user satisfaction with a map app. By evaluating whether the location information the app detected and the routes it suggested were appropriate, we produced a large quantity of high-quality, accurate training data.
  • OCR Text
    Improved Optical Text Recognition Accuracy
    Text area extraction from images. Request from an overseas company. We organized a team of annotators within 3 business days, consisting of people who can understand and apply English work manuals and feedback as is. We completed the project within the deadline and without spending time on translation or interpretation.
  • Speech Recognition
    Creation of Training Data for Voice Reading
    Project for creating training data using multilingual speech synthesis. The project team was composed of native speakers of each language. Voice data in Japanese, English, Chinese, and Korean was created. This is a case where the resources cultivated in our translation business were helpful.
  • Machine Translation Evaluation
    Creation of Machine Translation Training Data
    Project for evaluating the output of machine translation and improving the quality of training data. This work contributes to improving translation accuracy by integrating with natural language processing. This is a case where both our translation business experience and knowledge of natural language processing with AI/annotation were utilized.
  • Intent Extraction
    Search Engine Accuracy Evaluation
    Project for improving a search engine's understanding. Workers evaluated whether the search engine displayed appropriate results for each user query.

Useful Materials / Downloads

Contact Us / Free Trial
