
Spinoff Blog Project
Annotation Supporting AI in the DX Era: The Reality of the Analog Field
Human Resource Management to Ensure Annotation Quality and Productivity
Until now, our company has published a number of blog posts on annotation and AI, focusing on general knowledge and know-how. Annotation may sound simple when described in words, but because of its inherent ambiguity, and because human involvement cannot be avoided, it is work that inevitably entails a great deal of human interaction. In practice it is often messy and cannot be resolved with the tidy theories commonly found elsewhere; ensuring quality and productivity actually requires a wide range of experience and know-how.
Understanding the specific problems that arise in the actual annotation field, and how to address them, can therefore provide useful hints for making annotation projects succeed.
In this spin-off blog project, unlike our regular blogs and columns, we want to convey what actually happens on-site: the concrete responses and measures we take, along with the characteristics and commitments that are unique to our work environment.
Table of Contents
1. Human Science's Commitment to Direct Contracts
2. Management of Annotators
3. Summary
1. Human Science's Commitment to Direct Contracts
Our company directly contracts every annotator who performs annotation work; we do not outsource annotation to crowd workers. We understand, of course, that crowd workers have their advantages, but we choose direct contracts because we believe that building long-term relationships with annotators is what secures the productivity and quality of annotation work. Our company started out in monozukuri (manufacturing) centered on manual production, and at the heart of monozukuri is people. Whether work goes well or not is, we believe, shaped by the relationship between the people doing the work and the people managing it.
2. Management of Annotators
This is true not only of annotation work: to ensure quality and productivity, it is essential to gather good workers and to keep managing them so that education raises their performance further. Here we explain our approach from the five perspectives of human resource management and personnel measures: recruitment, placement, compensation, development, and evaluation.
Recruitment
When hiring new annotators, our company conducts a trial test to assess their suitability as annotators. Our trial test primarily checks whether candidates can accurately understand specifications and work instructions to perform their tasks. A common issue is that candidates may start working without fully understanding the specifications or overlook additional instructions or changes that arise during the process. The ability and attitude to correctly understand instructions and specifications are fundamental qualities not only for the tasks at hand but also for skill development and proficiency, which is why we place particular importance on this.
If a candidate's trial test score clears the benchmark, we proceed to the next step: an interview. Many people are nervous in interviews, so we try to create a relaxed atmosphere where open and candid conversation is possible. Speaking openly lets us learn more about the person's character, and it also lets us see their communication style, which matters for the Q&A and smooth teamwork that annotation work requires.
Placement
Annotation work covers many kinds of data, from images to text. We cannot force an annotator who says, "I'm sorry, I'm not good with insects," to annotate images of them by replying, "It's okay! You'll get used to it!" Beyond such cases, every annotator has their own strengths and weaknesses. At our company, we evaluate annotators after each project and assign the most suitable annotator to each new project based on those evaluation results and their aptitude.
Among the annotators with high quality and productivity, there are also individuals who excel at understanding specifications and at communication, actively giving accurate answers to questions from other annotators. For such annotators, we prepare roles such as QA staff, reviewer, or even project manager support, so that each person is placed according to their skills.
Compensation
When estimating a project, we calculate the expected work hours and the annotators' compensation rate from the specifications we receive from the client and from trial work on sample data. In some cases, however, the actual work hours turn out to exceed our initial expectations.
In one case, it became clear after the work started that the workload was far heavier than anticipated. Annotators told us, "Even if I work hard for a full day, I don't get through even half a day's worth of the planned work..." When the per-item unit price was converted into an hourly wage, it fell well below the minimum wage, which naturally hurts motivation. Above all, providing fair compensation for the work done is our responsibility as a company. In cases like this, where the actual workload significantly exceeds the estimate, we may revise the compensation rate mid-project to match the actual work. (As a result, this can sometimes squeeze the final profit...)
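The check described above is simple arithmetic. The following is a minimal sketch with entirely hypothetical numbers (unit price, throughput, and minimum wage are placeholders, not figures from any actual project) showing how a per-item unit price and the observed throughput translate into an effective hourly wage, and what unit price would be needed to stay above the minimum wage.

```python
# Minimal sketch with hypothetical numbers: checking whether a piece-rate
# unit price still yields a fair effective hourly wage once the real
# (slower-than-estimated) throughput is known.

UNIT_PRICE_YEN = 30              # hypothetical pay per annotated item
ESTIMATED_ITEMS_PER_HOUR = 60    # throughput assumed at estimation time
ACTUAL_ITEMS_PER_HOUR = 22       # throughput observed once work started
MINIMUM_HOURLY_WAGE_YEN = 1100   # placeholder local minimum wage


def effective_hourly_wage(unit_price: float, items_per_hour: float) -> float:
    """Convert a per-item unit price into an hourly wage."""
    return unit_price * items_per_hour


estimated = effective_hourly_wage(UNIT_PRICE_YEN, ESTIMATED_ITEMS_PER_HOUR)
actual = effective_hourly_wage(UNIT_PRICE_YEN, ACTUAL_ITEMS_PER_HOUR)

print(f"Estimated hourly wage: {estimated:.0f} yen")
print(f"Actual hourly wage:    {actual:.0f} yen")

if actual < MINIMUM_HOURLY_WAGE_YEN:
    # In a case like the one above, the unit price would be reviewed mid-project.
    required_unit_price = MINIMUM_HOURLY_WAGE_YEN / ACTUAL_ITEMS_PER_HOUR
    print(f"Below minimum wage; the unit price would need to be at least "
          f"{required_unit_price:.1f} yen per item.")
```

With these placeholder figures, the estimated throughput implies 1,800 yen per hour, while the observed throughput yields only 660 yen per hour, which is exactly the kind of gap that triggers a mid-project rate review.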
You might think we could accurately determine in advance the work hours on which the unit price is based, but the AI being developed varies widely. Even when we refer to similar past projects, or estimate work hours by actually annotating sample data provided by the client, the result often turns out to be completely different once the real work begins.
Neither the customer nor our company can realistically review all of the data in advance, nor would it be practical to do so. As the work proceeds, edge cases therefore often appear that do not fit the customer's specifications or assumptions, so the work cannot progress as specified, or the number of annotations per image or file exceeds expectations and the labor involved grows. I would like to take up this topic separately on another occasion.
Development (Skill Development)
Even when we assign projects based on aptitude, some annotators find the tasks unexpectedly difficult once they start working. As a result, they may become overly cautious and their productivity stalls, or they may annotate without fully understanding the specifications and produce many errors without noticing them. In such cases, to deepen understanding of the specifications and bring productivity and quality up to the required level, our company emphasizes education and skill development through one-on-one meetings and close communication over chat tools.
While individual support is important, sharing best practices among all members through initiatives such as information-sharing meetings is also effective. Several annotators had voiced concerns about their productivity, saying things like, "I get stuck whenever I have to make a judgment, and my productivity drops." To address this, we had our top annotator, whose productivity is outstanding, share their screen while working and explain their process. Their efficiency was impressive, but what I remember most is how clear their reasoning was, with remarks like, "You can judge this as the same label as the ones you've seen before," which conveyed a sense of speed and confidence in decision-making. The annotators who had been struggling came away encouraged, saying, "Thanks to this, I don't have to worry so much anymore," and from the next day their productivity clearly exceeded its previous level.
Evaluation
We evaluate annotators after each project is completed. In the past, our company recorded only a free-form comment and an overall rating, but to enable more objective assessment we have since broken the evaluation down into detailed categories such as quality, productivity, and communication. This helps us when assigning annotators to the next project.
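As a rough illustration of what such a per-project record could look like, here is a minimal sketch in Python. The field names, the 1-to-5 scale, and the averaging helper are assumptions for the sake of the example, not our actual evaluation sheet; it simply shows how category scores broken down per project can be kept alongside a free-form comment and referenced when considering an annotator for the next assignment.

```python
# Minimal sketch (hypothetical field names and 1-5 scale) of a per-project
# evaluation record broken down into categories, plus a helper that averages
# past scores when considering an annotator for a new assignment.
from dataclasses import dataclass
from statistics import mean


@dataclass
class ProjectEvaluation:
    annotator: str
    project: str
    quality: int        # 1 (low) to 5 (high)
    productivity: int
    communication: int
    comment: str = ""   # free-form comment kept alongside the scores


def average_scores(evaluations: list[ProjectEvaluation]) -> dict[str, float]:
    """Average each category across an annotator's past projects."""
    return {
        "quality": mean(e.quality for e in evaluations),
        "productivity": mean(e.productivity for e in evaluations),
        "communication": mean(e.communication for e in evaluations),
    }


history = [
    ProjectEvaluation("annotator_a", "image_project_1", 5, 3, 4,
                      "Accurate, but cautious on edge cases."),
    ProjectEvaluation("annotator_a", "text_project_2", 4, 4, 5),
]
print(average_scores(history))  # {'quality': 4.5, 'productivity': 3.5, 'communication': 4.5}
```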
That said, there are cases where the same worker was assigned to similar projects, and although the previous project went very well, productivity on the current one did not rise as much as expected. When I hear annotators say things like, "I could make judgments quickly last time, but this project's content was so interesting that I couldn't help wanting to dig deeper into edge cases when labeling," I often think to myself that evaluation is genuinely difficult.
3. Summary
In AI development, annotation can be seen as relatively non-specialized work that can be done with only a PC and a network connection, without special knowledge or skills such as programming. Each project has a fixed period, and continuous employment is difficult except in special cases, so the work is inevitably temporary. In that respect, we are well aware of the advantages of crowd workers: they can be gathered quickly in large numbers and a project can proceed on a low budget. Nevertheless, as a company that has handled numerous annotation projects, we believe that, despite the effort it takes to gather and train people, building close relationships with annotators secured through direct contracts and managing them appropriately yields better results in terms of quality and productivity.
Annotation involves human intervention at some point. Even if the performance of automated annotation tools improves in the future, human work will still be necessary for the final touches and verification. Our company aims to deliver high-quality and highly productive annotations to our clients by emphasizing "connections between people" and focusing on "people" in our talent management.
Author:
Manabu Kitada
Annotation Group Project Manager
Since the establishment of the Annotation Group, he has been broadly responsible for team building and project management on large-scale projects centered on natural language processing, as well as for formulating annotation specifications for PoC projects and consulting aimed at scaling up.
Currently, in addition to serving as project manager for image, video, and natural language annotation projects, he is also engaged in promotional activities such as teaching annotation seminars and writing blog posts.