Data quality degradation and cost burden in composite resin recipe development

Data quality is compromised by duplicate entries and inconsistencies across production, material, and property data. The large number of recipe variations increases experimental time and cost, while the limited availability of labeled data hinders the training of robust models. When new raw materials are evaluated, their deviation from existing data patterns reduces model reliability, so continuous feedback from the production floor is needed to maintain performance.
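
As a concrete illustration of the duplicate and inconsistency problem, the sketch below flags repeated rows within one data source and disagreements between sources after joining on a shared key. It is a minimal example assuming pandas DataFrames; the column names (recipe_id, filler_pct, viscosity) and tolerance are hypothetical, as the actual production, material, and property schemas are not described here.

```python
import pandas as pd

# Toy records standing in for material and property exports.
# Column names and values are illustrative only.
materials = pd.DataFrame({
    "recipe_id": ["R001", "R001", "R002"],
    "filler_pct": [62.0, 62.0, 55.0],      # duplicate row for R001
})
properties = pd.DataFrame({
    "recipe_id": ["R001", "R002"],
    "filler_pct": [62.0, 54.0],            # R002 disagrees with materials
    "viscosity": [1.8e4, 2.3e4],
})

# 1) Flag exact duplicate entries within a single source.
dupes = materials[materials.duplicated(keep=False)]

# 2) Flag cross-source inconsistencies on shared fields after a key join.
merged = materials.drop_duplicates().merge(
    properties, on="recipe_id", suffixes=("_mat", "_prop")
)
mismatch = merged[
    (merged["filler_pct_mat"] - merged["filler_pct_prop"]).abs() > 0.5
]

print(dupes)
print(mismatch[["recipe_id", "filler_pct_mat", "filler_pct_prop"]])
```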


Ensuring data integrity and advancing AI-based property prediction models

Checklist codes were introduced to manage and validate material property data, ensuring data integrity. Recipe variables were consolidated to reduce dimensionality and minimize the number of required experiments and labels. Context-specific test scenarios were designed to assess accuracy under varying conditions, and a real-time feedback dashboard was developed to collect insights from the factory floor, enabling continuous model refinement.
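
The consolidation of recipe variables could, for example, be implemented as a variance-preserving projection applied before model training. The sketch below is a minimal illustration using PCA from scikit-learn on a toy recipe matrix; the source does not state which consolidation method was actually used, and the variable layout is an assumption.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Toy recipe matrix: 40 recipes x 8 formulation variables (illustrative).
# In practice these would be resin/filler/additive fractions, cure settings, etc.
X = rng.normal(size=(40, 8))
X[:, 4] = 0.9 * X[:, 0] + 0.1 * rng.normal(size=40)   # a redundant, correlated variable

# Standardize, then keep enough components to explain 95% of the variance,
# so downstream property models see fewer, less redundant inputs.
consolidate = make_pipeline(StandardScaler(), PCA(n_components=0.95))
X_reduced = consolidate.fit_transform(X)

print(X.shape, "->", X_reduced.shape)
```

Reducing redundant inputs in this way also shrinks the design space that must be covered by experiments and labels, which is the stated goal of the consolidation step.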


Improving prediction accuracy and reducing development time and cost

Enhanced data quality improved the predictive accuracy of the AI model. Label efficiency optimization further reduced development time and testing costs. The model’s contextual reliability and application scope were clearly defined, and a feedback-driven improvement loop was established to support process optimization and ongoing productivity gains.
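
The source does not specify how label efficiency was optimized; one common pattern consistent with the feedback loop described here is uncertainty-guided selection of the next recipes to test. The sketch below uses the spread of per-tree random-forest predictions as a rough uncertainty proxy; all data, names, and batch sizes are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Toy data: 200 candidate recipes described by 5 consolidated variables,
# of which 30 already have measured property labels.
X_candidates = rng.uniform(size=(200, 5))
X_labeled = rng.uniform(size=(30, 5))
y_labeled = X_labeled @ np.array([2.0, -1.0, 0.5, 0.0, 1.5]) + rng.normal(0.0, 0.1, 30)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_labeled, y_labeled)

# Spread of per-tree predictions serves as a rough uncertainty proxy; the
# most uncertain candidates are queued for the next round of lab tests.
per_tree = np.stack([tree.predict(X_candidates) for tree in model.estimators_])
uncertainty = per_tree.std(axis=0)
next_batch = np.argsort(uncertainty)[-5:]

print("Candidate indices to test next:", next_batch)
```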