The HR track meets large models: the imagined splash never came
Original source: Titanium Media
Some time after large models began landing in the enterprise market, they have not made the splash people imagined.
**“It’s all public-network data, and enterprises dare not use it. There are also very few practical application scenarios,”** a pre-sales expert at an enterprise HR software vendor told Titanium Media.
At present, the company has mainly implemented three core scenarios: talent discovery, resume evaluation, and interview question generation. Titanium Media understands that the market can already feel enterprises laying off staff and tightening recruitment, but this has not affected the flow of high-quality talent. Large and medium-sized enterprises in particular still need to improve their ability to identify talent and raise workforce efficiency.
In reality, however, the company has few customer cases. The large models it trains for customers are described as a “continuously iterative process,” fees are planned on a subscription model, and the pilot customers currently accepting this arrangement come from the global Fortune 500 and leading domestic enterprises.
The customer side of the market is also divided over generative AI products and large-model technology. Titanium Media understands that while many large enterprise customers publicly advertise that they are actively trying large models, eager to solve every pain point in their current business, others are dismissive, believing that the experience and talent they have accumulated will not simply be replaced by a large model.
“The development and training of large models is still very complex. Can it be made simpler?” asked a customer from the traditional energy industry at a closed-door meeting held by a leading Chinese ICT manufacturer.
The customer was blunt: **“Although we have made some achievements in promoting intelligence, there is still a very large gap with what we imagined. Beyond the complex natural conditions of our industry’s own production and operations, the shallow understanding of intelligence among our various subordinate vendors has meant heavy investment while everyone still fights their own battles.”**
On the other hand, large models are not “cheap” at this stage. Companies that can afford them either have a certain production scale or a certain budget in hand, and believe that the generative technology large models bring is worth trying and deploying.
The head of a listed AI company, speaking recently about large-model entrepreneurship, pointed out: “The large models on the market have some basic capabilities, but so far they are only toys or tools. A large model is not just these capabilities; what matters more is achieving commercialization.”
In fact, whether in text-to-image or text-to-text generation, generative AI driven by large-model technology has shown enormous application potential. According to incomplete statistics from Titanium Media, no fewer than 50 giant companies and AI startups are now participating, and general-purpose large models are also being connected to multiple cloud platforms for enterprises to call, or to customize into private models.
It is worth mentioning that at its first developer conference, held on November 6, OpenAI laid out its next directions, including: longer text input, output that conforms to predefined formats, more and higher-quality training data, more multimodality, domain knowledge and business logic, and more favorable pricing.
This is good news for other AI developers and users, but for entrepreneurs building on large-model technology, it is a sword of Damocles.
At the same time, compliance review and privacy security of generated content are necessary preconditions for large models and generative AI to reach the broader market.
In China, on October 11, the National Information Security Standardization Technical Committee published the “Basic Requirements for the Security of Generative AI Services” (draft for comments) on its official website to solicit public opinions. This is China’s first regulatory draft specifically for generative AI security, and it supports the “Interim Measures for the Management of Generative AI Services” launched in July by seven departments including the Cyberspace Administration of China.
Compared with the frenzy that greeted the arrival of large models, at the stage of real deployment, the exploration of how to apply them deeply in industry has only just begun.
Is every product worth redoing with a large model?
“Every product deserves to be redone with a large model”: this is a common narrative among enterprises new to the application layer of large models.
But before redoing any product, there is a more fundamental question: where can a large model actually be used?
In the B2B market, this question may be harder to answer, or may not have a single answer.
From the open-sourcing of models such as Meta’s LLaMA series, Stability AI’s StableLM series, and MPT, to the internal testing of enterprise-level APIs for domestic general-purpose large models, more and more startups have been spurred to follow and experiment.
If a recruitment process is broken down into resume collection, screening, evaluation, interviewing, offer, and onboarding, then across the whole chain, the natural-language interaction that large models enable shows up most directly in improving HR recruitment efficiency and user experience.
As an intelligent recruitment management software company, MOKA advocates transforming core human resources systems with AI. At the end of June this year, MOKA announced Eva, a product solution based on large models that includes resume screening, customized interview questions, AI-written reviews, conversational BI, and an employee chatbot. The employee chatbot, for example, lets employees complete recruitment- and HR-related tasks and retrieve relevant information, like an AI assistant providing consultation. Conversational BI offers a natural-language interface that gives employees quick access to databases, lets them query key metrics, and supports business decision-making and management.
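A conversational BI layer of the kind described above can be sketched minimally as a question routed to a vetted query and executed. The metric catalog, table names, and keyword routing below are illustrative assumptions; a production system would have a large model interpret the question rather than match keywords.

```python
import sqlite3

# Hypothetical metric catalog mapping question keywords to vetted SQL.
# A real conversational BI layer would use a large model to interpret
# the question; keyword routing keeps this sketch self-contained.
METRIC_QUERIES = {
    "headcount": "SELECT COUNT(*) FROM employees",
    "open positions": "SELECT COUNT(*) FROM positions WHERE status = 'open'",
}

def answer_metric_question(conn, question):
    """Route a natural-language question to a whitelisted query and run it."""
    q = question.lower()
    for keyword, sql in METRIC_QUERIES.items():
        if keyword in q:
            return conn.execute(sql).fetchone()[0]
    return None  # unrecognized: escalate to a human or a generative model

# Demo with an in-memory database standing in for the HR system.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id INTEGER)")
conn.executemany("INSERT INTO employees VALUES (?)", [(1,), (2,), (3,)])
conn.execute("CREATE TABLE positions (id INTEGER, status TEXT)")
conn.executemany("INSERT INTO positions VALUES (?, ?)", [(1, "open"), (2, "closed")])
```

Restricting execution to a whitelist of queries is one way such a feature can avoid handing raw database access to a generative model.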
Similarly, Beisen first announced internal-test features built on Baidu’s Wenxin Yiyan large model: recruitment JD writing and recruitment poster production. At its spring conference in May, Beisen also launched a new AI product, Mr. Sen, a personal leadership coach for new-leader transitions, team management, personal development, performance feedback, and more. According to Beisen CEO Ji Weiguo, AI interviews, employee service bots, and sparring bots are all scenarios that can be combined with large models.
Yonyou and Kingdee chose to integrate the capabilities of general-purpose large models with their own private-domain data, training enterprise domain models for process software scenarios such as human resources, finance, and supply chain.
Bao Fei, senior application architect at Yonyou, pointed out in a conversation with Titanium Media: “At this stage, large models are still far from industry, and what customers care about most is what value they can bring to it.”
At present, the list of domain model applications supported by Yonyou YonGPT includes: enterprise business insight, intelligent order generation, supplier risk control, dynamic inventory optimization, intelligent talent discovery, intelligent recruitment, intelligent budget analysis, intelligent business travel expense control, code generation, etc.
Kingdee’s Sky GPT is currently applied mainly in finance, human resources, and software R&D scenarios. Kingdee’s first financial model, for example, resembles a financial AI assistant, providing services such as financial Q&A, expense reimbursement, contract approval, report generation, and analysis and forecasting.
Titanium Media previously reported that Zhao Yanxi, executive vice president of Kingdee China and general manager of its R&D platform, found in customer surveys that enterprises do have a strong interest in large models, but the two most common reactions are “AI anxiety” (fear of falling behind by not using AI) and “AI confusion” (not knowing how to use it in the business). Kingdee’s strategy is to find a group of prototype customers and run pilot verification of business scenarios, staying close to enterprises’ actual business.
In the view of Wang Jingfei, CTO of iHR, **“what matters most to customers is not which large model is used, but what the large model does for the business.”** Bosses pay attention to business indicators such as per-capita output, input analysis, and cost optimization, while HR pays attention to employee experience and work efficiency, such as smoother personnel processes (onboarding, transfer, overtime, and so on).
Focusing on core HR plus workforce efficiency, iHR is also cooperating with DingTalk to launch three types of intelligent solutions (operational efficiency, risk management, and business experts) aimed at what business owners care about most: executing enterprise strategy as efficiently as possible, with the most scientific methods and the lowest risk.
On operational efficiency, large-model capabilities can provide employees with 24/7 support for HR consulting, processes, and more. The personnel risk management expert handles risks that may arise in human resources, for example flagging potential risks in labor contract clauses and proposing corrections and mitigation plans. The performance expert, one of the business experts, can coach employees through decomposing, aligning, and reviewing OKRs.
Finding pilot scenarios and prototype customers is also the strategy of most large-model application vendors.
The black box of technology, and the dilemma of real investment
When it comes to end users, however, the sheer variety of enterprise business logic and organizational forms leaves a degree of confusion about whether a large model, from “testing” to “full deployment,” is ChatGPT or merely “ChatPPT.”
“We never said we have done it, but when customers happen to have a strong demand, we can serve as a technical reference, because sometimes the technology is too cutting-edge and customer management can’t keep up, which only leaves customers salivating,” the leader of an HR SaaS team, A, told Titanium Media. The company currently plans to incorporate third-party large-model capabilities into its SaaS edition; asked whether this had been implemented, the person was noncommittal.
Judging from the internal-test product descriptions of the vendors above, the large model has not brought subversive change to their respective fields; rather, it has added one more choice of interaction method at various links in the chain, and the yardstick is simply “whether it understands human language well enough and can produce relevant intelligent actions.” In other words, the level of intelligence is determined not by the software but by the large model behind it.
Objectively, a large model is a probabilistic model: its output varies from run to run, and the pursuit of more accurate results is constrained by data, scenarios, talent, cost, and other issues, all tied to real-world investment. In addition, application vendors tend to develop across multiple models, to reduce the risks posed by any one model vendor’s operations and policies.
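Developing across multiple models, as described above, usually means putting an abstraction between the application and the providers. A minimal fallback sketch, with hypothetical provider callables standing in for real vendor SDKs:

```python
# Minimal multi-provider fallback: try each registered model in order,
# moving on when one fails (outage, quota, policy block). Provider
# functions here are stubs; real ones would wrap different vendors' APIs.
class ModelRouter:
    def __init__(self):
        self.providers = []  # (name, callable) pairs, in priority order

    def register(self, name, fn):
        self.providers.append((name, fn))

    def complete(self, prompt):
        last_err = None
        for name, fn in self.providers:
            try:
                return fn(prompt)
            except Exception as err:
                last_err = err  # remember the failure, try the next provider
        raise RuntimeError("all providers failed") from last_err

def primary_model(prompt):
    raise TimeoutError("simulated provider outage")

def backup_model(prompt):
    return "[backup] " + prompt

router = ModelRouter()
router.register("primary", primary_model)
router.register("backup", backup_model)
```

The application code calls `router.complete(...)` and never names a vendor, which is what makes swapping or dropping a provider a configuration change rather than a rewrite.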
This is a question that any large model aimed at the market must weigh.
According to the product owner of Oracle Cloud HCM, about 80% of Oracle’s product iterations are driven by customer requirements. Oracle has identified more than 100 high-value scenarios for generative AI, such as assisted authoring, recommendations, and summarization, and is only getting started.
Oracle Cloud HCM is built on the OCI cloud platform with generative AI capabilities embedded. Customers can optimize the model with their own data while the platform protects their sensitive and proprietary information; at the same time, built-in prompts generate content better matched to user needs while reducing factual errors and bias.
To push large models into enterprise scenarios as far as possible, Bao Fei said, Yonyou’s current approach is to rely partly on expert experience while also implementing business-oriented process control, including adding a control layer above the large model to govern the reliability and compliance of generated results.
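A control layer of this kind is essentially a post-check sitting between the model and the user. A minimal sketch, in which the single compliance rule (blocking wording that promises guaranteed results) is an illustrative assumption and `generate` stands in for any model call:

```python
import re

# Sketch of a control layer above a generative model: the draft output is
# checked against business rules before it reaches the user. The one rule
# here, banning "guarantee(d/s)" wording, is purely illustrative.
FORBIDDEN = re.compile(r"\bguarantee[ds]?\b", re.IGNORECASE)

def controlled_generate(generate, prompt):
    draft = generate(prompt)
    if FORBIDDEN.search(draft):
        # Block non-compliant output instead of letting it through.
        return {"ok": False, "reason": "prohibited wording", "text": None}
    return {"ok": True, "reason": None, "text": draft}
```

Real control layers would chain many such checks (factuality, PII, tone) and could route blocked drafts to regeneration or human review.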
After large-model regulations were introduced in August, iHR decided to give priority to domestic large models and open-source large models. For general information, such as the public laws and regulations in its “Legal Assistant,” domestic large models are used for training and fine-tuning, while for information involving enterprises and employees, open-source models are trained in-house to protect personal information.
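The routing policy described above reduces to a data-sensitivity gate. The keyword classifier and the two destinations below are illustrative assumptions; a real deployment would use a proper sensitivity classifier rather than keywords:

```python
# Sketch of routing by data sensitivity: public legal/regulatory queries go
# to a hosted domestic model, while anything touching employee or company
# data stays on a self-hosted open-source model. The keyword list is a
# stand-in for a real sensitivity classifier.
SENSITIVE_TERMS = ("salary", "employee", "contract", "id number")

def choose_backend(query):
    q = query.lower()
    if any(term in q for term in SENSITIVE_TERMS):
        return "self-hosted"   # data never leaves the company's servers
    return "hosted"            # public information; an external model is fine
```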
Wang Jingfei pointed out that model training inevitably requires a company’s own accumulated “private-domain” data. To keep enterprise training of large models compliant, the underlying foundation-model providers now offer access mechanisms that let private databases connect to large models and be used alongside them, and most domestic large models also provide support for other aspects of data security.
At the same time, application vendors’ path to large models has not been smooth.
The head of an HR SaaS product told Titanium Media: “HR vendors are different from AI vendors. In the past, the two sides watched each other expectantly, but cooperation never went deep. The biggest problem for AI vendors is that they have excellent algorithms but lack data for vertical businesses, while the business side has a large amount of data but relatively weak AI capabilities. The data of AI companies and of business companies like these will slowly be combined, and I think this is definitely a trend.”
For example, even before large models emerged, this company was already experimenting with AI interviews; although many companies are trying this new thing, its adoption is not widespread. The company believes that whether to choose AI interviews as an application scenario depends on the position being recruited and the credibility of AI interviewing, among other factors: “In real scenarios, employers doubt the ability of AI interviews to evaluate candidates. The criteria for systematically evaluating candidates differ, their objectivity and credibility still need verification, and acceptance will take time.”
The head of the above-mentioned startup A also pointed out that at the product level it will either cooperate with peers or develop independently, and some internet companies will integrate and directly package its technical solutions.
“The application of AI in HR SaaS is still relatively limited,” he said. The AI products this company is involved with mainly include resume parsing, automatic identification of ID cards and bank cards, face recognition, and electronic signatures, applied mainly around the job-matching and recruitment modules.
In payroll, the salary systems of many large and medium-sized enterprises remain complex, including the calculation of commissions and bonuses. Under flexible-employment dispatch arrangements, some pay is calculated by time and some by piece; in practice the data also comes from many sources, that is, many systems, and where interfaces (tax, social security, banks, government) cannot be built, other technologies are needed to synchronize the data.
**“The cost of large models is still relatively high, so the cost-performance gap between high-frequency, low-value scenarios and low-frequency, high-value scenarios is large,”** Wang Jingfei told Titanium Media. The personnel legal assistant, labor contract risk analysis, and performance coaching expert that iHR has launched are low-frequency, high-value scenarios, and the company is also actively exploring high-frequency, high-value user scenarios.
To this end, iHR chose the performance link first. Cost is one consideration, but more important is iHR’s positioning around the digital role of “human resource management expert.” Performance management, and OKRs in particular, takes a great deal of methodology and practice to do well. iHR will provide a large number of experts and, by empowering those experts with AI, maximize their influence.
In Wang Jingfei’s view, an application vendor that wants to use foundation large models safely and conveniently must start from the following levels:
(1) Use a vector database with the large model. A self-built vector database stores customer data, which resides only on iHR’s servers and is never opened to external models.
(2) Adopt self-built large models. For vertical domains such as the “HR Legal Assistant,” which requires a large accumulation of documents and data, a self-built model plus fine-tuning is more appropriate; iHR fine-tunes its self-built model on the characteristics of legal texts.
(3) Desensitize the data. Some data must be analyzed by an external large model; that data can be desensitized before transmission. After masking, the large model sees only a string of characters and a random ID identifying the data owner; when the analysis is complete, the model returns results against the corresponding ID to the iHR system, which completes the internal mapping and returns them to the customer.
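The masking flow in (3) amounts to a reversible token mapping that only the application holds. A minimal sketch, with `external_analyze` standing in for the external model call and the names being invented examples:

```python
import uuid

# Sketch of the desensitization flow: personal values are replaced by
# random tokens before the text reaches an external model; the mapping
# stays inside the application, which restores real values afterwards.
class Desensitizer:
    def __init__(self):
        self.mapping = {}  # token -> original value; never leaves the app

    def mask(self, text, values):
        for value in values:
            token = uuid.uuid4().hex[:12]
            self.mapping[token] = value
            text = text.replace(value, token)
        return text

    def unmask(self, text):
        for token, value in self.mapping.items():
            text = text.replace(token, value)
        return text

def external_analyze(masked_text):
    # Stand-in for the external large model: it sees only tokens.
    return "Analysis of: " + masked_text

d = Desensitizer()
masked = d.mask("Zhang San's contract expires in June.", ["Zhang San"])
result = d.unmask(external_analyze(masked))
```

The external model never learns who “Zhang San” is; it works on an opaque token, and the application restores the name only after the response comes back.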
Reflections
In fact, it is not only the HR track; other industries face the same problem of landing large models.
In April, DingTalk was the first to demonstrate its intelligence work: by connecting to Alibaba’s Tongyi model, it realized the AI “magic wand” capability in four high-frequency scenarios: group chat, documents, video conferencing, and application development. Judging from recent results, after internal testing by more than 500,000 enterprises, DingTalk’s “AI magic wand” has officially launched, with 17 products and more than 60 scenarios (DingTalk chat, documents, knowledge base, mind maps, flash, Teambition, and more) fully open for testing.
Discussing the process of connecting to the large model, Ye Jun made two points in an earlier exchange with Titanium Media: one is the interaction layer, connecting each DingTalk product’s application interface to the Qianwen large model’s API; the other is the connection between models, since the Qianwen large model needs general text from knowledge-base documents.
In Ye Jun’s view, the hard part is not connecting to the large model but connecting the different business systems on the DingTalk platform to it. “A lot of the data on DingTalk now carries business implications, and connecting it to the model is more complicated. In theory this is not a large model in the traditional sense, but a medium-sized model with certain industry characteristics, or with a certain application-system structure. So I think the integration period will be longer, which is also a big difference between enterprise applications and traditional general search.”
On commercialization, Ye Jun said DingTalk will consider two models. One is subscription charging for relatively high-value commercial services, bundled into the base of DingTalk’s Professional, Exclusive, and Special editions.
The other runs through the platform: since DingTalk already connects a large ecosystem of applications, letting SaaS startups call the platform’s model capabilities directly through DingTalk to transform their SaaS products is something DingTalk can carry as a platform. The cooperation between DingTalk and iHR mentioned above may become one typical path.
“For enterprise applications, large models need training, and they must be both efficient and accurate, which is completely different from writing an essay or holding a simple human-machine conversation,” Wu Chengyang, vice president of Oracle and managing director of Oracle China, told Titanium Media when discussing Oracle’s practice in HCM. Oracle’s distinction, he believes, is that it already works closely with large-model companies such as Cohere; at the data level it combines vector search with customer business data stored in Oracle databases, plus generative AI techniques such as the RAG architecture: “Only when these technologies are combined will there be a disruptive change.”
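The combination Wu Chengyang describes, vector search over business data feeding a RAG pipeline, can be illustrated with a toy retriever. Real systems use learned embeddings and a vector database; bag-of-words cosine similarity keeps this sketch dependency-free, and the documents are invented examples:

```python
import math
from collections import Counter

def vectorize(text):
    """Bag-of-words vector; real RAG systems use learned embeddings."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(query, docs):
    """Return the stored document most similar to the query."""
    qv = vectorize(query)
    return max(docs, key=lambda d: cosine(qv, vectorize(d)))

def build_prompt(query, docs):
    """RAG step: ground the model's answer in the retrieved context."""
    context = retrieve(query, docs)
    return "Answer using only this context:\n" + context + "\n\nQuestion: " + query

docs = [
    "Annual leave policy: employees accrue 15 days of paid leave per year.",
    "Expense reimbursement requires manager approval within 30 days.",
]
```

The point of the pattern is the last step: the model is asked to answer from retrieved business data rather than from its general training, which is where the factual grounding comes from.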
All software deserves to be rewritten with a large model: such a conclusion is idealistic and optimistic, but in enterprises’ actual scenarios and businesses there is also the practical question of “whether it is worth it.”
Return to the combination of HR SaaS and large models. Resume information, interview records, competency evaluations, and similar data are highly tied to “people.” If a large model can be trained with more accurate proprietary data, the HR SaaS system becomes the enterprise’s most efficient management terminal. But has this highly personal data been authorized by individuals for training? The risks and benefits to be borne are things vendors must weigh, rather than blindly pursuing advanced technology.
At present, many recruitment platforms are actively connecting to or cooperating on large-model products and services, but relatively mature application scenarios are few. In scenarios such as recruitment, interview sparring, and talent assessment, the data sits between public and proprietary, and the clever use of data at an appropriate level is what lets these scenarios benefit enterprises and interviewees in both directions.
It is worth mentioning that although enterprises can build on open-source platforms to train large models, they are bound to face stricter compliance and security requirements. The industry is already alert to the risk of open-source technology being cut off; connecting to an open-source platform, or even privately using GPT for internal testing, carries considerable risk.
For now, it seems HR SaaS vendors will have to trial-and-error their way through the thousands of scenarios across “selection, deployment, development, retention, and departure,” and that cost may not differ much from rewriting a product from scratch.
Beyond that, whether it can be commercialized is the ultimate proposition of generative AI; at least in the B2B market, the essence of technology is to serve business.
Whether enterprises and vendors choose open-source large-model distillation or third-party API calls, the application of large AI models ultimately comes down to providing employees with scenario-level applications. AI is widely expected to free frontline staff from tedious, dangerous, and repetitive tasks, but at this stage generative AI is mainly adding new functional experiences in each field. More AI technology is coming; how much economic value it brings to the enterprise, and whether it is enough to win the recognition and payment of both customers and frontline employees… the exploration of AI applications has only just begun.