Invited Review Article

Artificial Intelligence in Meat Processing: A Comprehensive Review of Data-Driven Applications and Future Directions

Authors: Jeong, K., Jo, G., Lee, J., Kim, B., Choi, J., Oh, H., Jeong, J., and Lee, E.

Abstract

Traditional meat processing technologies and methods are predominantly manual and labor intensive, but they can be significantly optimized through automation and advanced data-driven systems. Artificial intelligence (AI) and internet of things technologies enable noninvasive, automated, and real-time solutions that enhance efficiency, safety, and consistency, while also reducing labor demands. These capabilities mark an inflection point in meat processing, with AI-driven solutions spanning every stage of the livestock sector, from meat production to quality assessment and market analysis. This review comprehensively explores existing research throughout the meat processing cycle, with a specific focus on data-driven AI applications that perform classification, regression, and image analysis tasks. The analysis emphasizes the types of data collected, the preprocessing strategies employed, and the AI models adopted. It also identifies key challenges, emerging trends, and potential pathways for future development, specifically highlighting opportunities to improve efficiency, safety, and sustainability. The insights presented herein offer valuable guidance for researchers and industry professionals seeking to advance meat processing technologies through AI-driven innovation.

Keywords: artificial intelligence, machine learning, meat processing, meat production, meat quality analysis, market analysis

How to Cite: Jeong, K., Jo, G., Lee, J., Kim, B., Choi, J., Oh, H., Jeong, J., & Lee, E. (2025) “Artificial Intelligence in Meat Processing: A Comprehensive Review of Data-Driven Applications and Future Directions”, Meat and Muscle Biology, 9(1). doi: https://doi.org/10.22175/mmb.20157

Introduction

Fresh meat and processed meat products are fundamental to the human diet, as beef, pork, lamb, and other raw meats are major sources of high-quality protein, vitamins, and minerals (Baltic and Boskovic, 2015). Meat processing typically involves slaughtering, cutting, packaging, and storage, with the primary goal of extending shelf life while providing high-quality products. Each stage of the process can significantly impact both microbial safety and key meat quality attributes (Wang and Li, 2024). In particular, the heavy reliance on manual techniques during slaughter (such as deboning and carcass fabrication and cutting) and during postharvest processing can lead to product damage or loss, inconsistencies in quality, increased waste, and a higher risk of cross-contamination (Daniel et al., 2020). Addressing these limitations has become a focal point of recent research, particularly through the introduction of noninvasive measurement equipment (e.g., computer vision systems) and automated evaluation methods that improve efficiency in terms of both time and labor (Shi et al., 2021).

Concurrently, researchers are increasingly integrating artificial intelligence (AI) with internet of things (IoT) technologies to reduce human labor and facilitate robust, data-driven decision-making (Lee et al., 2021, 2022c). Broadly, AI is defined as a technology enabling machines to simulate human-like intelligence and behavior, encompassing capabilities such as learning, problem-solving, planning, and creativity (Russell and Norvig, 1995). Recent technological advances have evolved from these foundational concepts to encompass a diverse range of applications. Generative AI represents a particularly transformative development, illustrated by instruction-tuned models such as ChatGPT, which have achieved widespread adoption across multiple sectors for generating sophisticated content including text, code, and visual media (Ouyang et al., 2022; Dwivedi et al., 2023). These diverse AI technologies are being successfully integrated across multiple domains with analytical applications demonstrating considerable utility in agriculture (Eli-Chukwu, 2019; Jeong et al., 2024c) and proving especially prominent in livestock farming operations (Tian et al., 2019; Jeong et al., 2024a). Within the livestock sector specifically, research on production prediction and quality assessment has advanced through computer vision and automated evaluation methods. These advancements have transformed traditional meat processing and evaluation workflows, significantly improving both accuracy and efficiency (Wang and Li, 2024; Alvarez-García et al., 2024).

As meat processing facilities increasingly adopt these AI systems, comprehensive reviews of noninvasive sensing technologies have outlined specific directions for further enhancing meat quality analysis and production efficiency throughout the processing pipeline (Shi et al., 2021; Wu et al., 2022a). The effectiveness of AI applications in meat processing is significantly influenced by data diversity and preprocessing techniques (García et al., 2016). For image data, preprocessing steps such as normalization, augmentation, and segmentation are crucial for highlighting relevant features while minimizing background noise (Shorten and Khoshgoftaar, 2019). Similarly, for tabular data, effective preprocessing through outlier removal, feature scaling, and dimensionality reduction substantially enhances model performance (Borisov et al., 2022). These data preparation strategies not only reduce noise and biases but also enable AI systems to identify meaningful patterns that might otherwise remain undetected, ultimately leading to more accurate predictions and assessments in meat processing applications (Bow, 2002).

In response to these emerging insights, this review provides a comprehensive synthesis of the current state of AI integration within the meat processing industry. The timeliness of this endeavor is underscored by the recent and significant surge in research activity within this domain, as illustrated in Figure 1. Although physical robotics constitutes another rapidly evolving field, particularly through IoT-enabled automation systems deployed for sophisticated operations such as carcass deboning in meat processing facilities (Lyu et al., 2025; Kim et al., 2023a), the nascent state of AI methodologies in physical automation systems limits our examination to the seminal work of Manko et al. (2022). Consequently, this review focuses on data-driven AI applications in meat processing that encompass classification, regression, and image analysis, which currently represent the most mature and comprehensively documented aspects of the field. The literature is systematically synthesized by identifying, for each research objective, the data-acquisition equipment and data types used, the preprocessing workflows applied, the AI modeling strategies adopted, and the resulting performance metrics. By systematically identifying prevailing challenges, explicating emergent research patterns, and delineating prospective research avenues, this comprehensive analysis delivers essential insights for researchers and industry stakeholders committed to advancing operational efficiency, product safety, and environmental sustainability throughout the meat production value chain. The main contributions of this review are summarized as follows:

Figure 1.

Annual publication trend for artificial intelligence applications in the meat processing industry. The analysis is based on a keyword search for (“Meat” OR “Meat processing”) AND “AI” across the Web of Science, IEEE, and ACM databases from 2016 to 2024. ACM, Association for Computing Machinery; AI, artificial intelligence; IEEE, Institute of Electrical and Electronics Engineers.

Overview of Artificial Intelligence Technologies

This section provides a comprehensive overview of machine learning (ML) methodologies as a core component of the broader field of AI. In addition, the review defines the abbreviations and evaluation metrics employed throughout the study. AI represents a multidisciplinary field focused on developing computational systems that emulate human cognitive capabilities, particularly in areas of perception, decision-making, and complex problem-solving.

Machine learning

ML, a cornerstone of modern AI, enables computers to extract patterns directly from data rather than following explicitly programmed rules. This data-driven approach fundamentally differs from traditional programming paradigms where humans manually encode solution logic (Jordan and Mitchell, 2015). The primary objective of ML systems is to achieve robust generalization, the ability to make accurate predictions on previously unseen data through iterative refinement of internal model parameters during training. This automated approach to decision-making helps reduce human biases while promoting more objective analytical outcomes (Mitchell, 1997).

As illustrated in Figure 2, ML comprises 3 principal categories: supervised learning (Kotsiantis et al., 2007), unsupervised learning (Ghahramani, 2003), and reinforcement learning (Kaelbling et al., 1996). Supervised learning utilizes labeled training data that establish direct mappings between input features and desired outputs. Common applications include regression for predicting continuous values (such as market prices or consumer preferences) and classification for categorical assignments (such as product quality grades or defect detection). Unsupervised learning, conversely, operates without labeled data to uncover inherent data structures through techniques like clustering, dimensionality reduction, and anomaly detection. Reinforcement learning employs a distinct approach where an agent learns optimal behavior through environmental interaction, receiving rewards for beneficial actions and penalties for suboptimal choices.
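To ground these paradigms, the following minimal Python sketch (assuming NumPy and scikit-learn are available; the toy features and labels are hypothetical) contrasts supervised regression, supervised classification, and unsupervised clustering. Reinforcement learning is omitted because it additionally requires an interactive environment loop.

```python
# Minimal sketch of the three ML paradigms on toy data (hypothetical values).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))  # e.g., 4 hypothetical carcass measurements

# Supervised regression: learn a mapping to a continuous target (e.g., a score).
y_reg = X @ np.array([0.5, -0.2, 0.1, 0.3]) + rng.normal(scale=0.1, size=100)
regressor = LinearRegression().fit(X, y_reg)

# Supervised classification: learn a mapping to categorical labels (e.g., grades).
y_cls = (y_reg > np.median(y_reg)).astype(int)
classifier = RandomForestClassifier(random_state=0).fit(X, y_cls)

# Unsupervised clustering: discover structure without any labels.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
```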

Figure 2.

Overview of machine learning methodologies illustrating the 3 fundamental paradigms: supervised learning, unsupervised learning, and reinforcement learning. DBSCAN, density-based spatial clustering of applications with noise; t-SNE, T-distributed stochastic neighbor embedding.

Although these 3 categories share foundational principles, they differ in how models are structured and tuned. Traditional ML methods, such as gradient boosting (GB) and random forest (RF), rely on hyperparameters like maximum tree depth and the number of nodes, which must be manually adjusted to optimize performance (Ren et al., 2016). Deep learning, on the other hand, employs multilayered neural networks that automatically learn hierarchical representations from large datasets. Rather than depending on fixed parameters set by human experts, deep learning emphasizes the design of layer architectures and node structures that capture increasingly abstract features as data propagate through successive layers. This architecture often yields exceptional performance in tasks such as image recognition and natural language processing (Lathuilière et al., 2019).
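As a concrete contrast, the sketch below (assuming scikit-learn; the grid values are illustrative assumptions, not recommendations) shows the kind of manual hyperparameter search that traditional models such as RF require, whereas deep learning shifts the design effort toward layer architectures.

```python
# Hypothetical grid search over random forest hyperparameters (illustrative values).
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

param_grid = {
    "max_depth": [4, 8, 16],       # maximum tree depth
    "n_estimators": [100, 300],    # number of trees in the ensemble
    "min_samples_leaf": [1, 5],    # minimum samples allowed in a leaf node
}
search = GridSearchCV(
    RandomForestRegressor(random_state=0),
    param_grid,
    cv=5,
    scoring="neg_root_mean_squared_error",
)
# search.fit(X_train, y_train)  # placeholders for a real feature matrix/target
# print(search.best_params_)
```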

Figure 3 presents a comprehensive workflow of typical preprocessing steps and tasks performed on tabular and image data (Zelaya, 2019). Tabular data, which are typically organized into rows and columns, often contain missing values that must be imputed. Each feature may also require normalization to ensure consistent scaling. In addition, dimensionality reduction methods such as principal component analysis (PCA) can be applied to extract the most relevant features for subsequent classification or regression tasks (Karamizadeh et al., 2013). In contrast, image data typically need to be resized to meet the input requirements of convolutional neural networks (Li et al., 2021b), for instance 640 × 640 pixels. Depending on the analytical objective, images may be cropped or further processed to isolate regions of interest (RoI) or to perform object segmentation. When the dataset is limited, data augmentation techniques such as rotation, flipping, and adding noise can help increase variability. The preprocessed images can then be used for tasks that include object classification, detection, and segmentation, thereby leveraging the spatial properties inherent in visual data.
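The two pipelines in Figure 3 can be sketched as follows (a minimal illustration assuming scikit-learn, Pillow, and NumPy; the number of PCA components and the 640 × 640 size are examples, not prescriptions from the reviewed studies).

```python
# Minimal sketches of the tabular and image preprocessing pipelines of Figure 3.
import numpy as np
from PIL import Image
from sklearn.decomposition import PCA
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Tabular pipeline: impute missing values, scale features, reduce dimensions.
tabular_pipeline = Pipeline([
    ("impute", SimpleImputer(strategy="mean")),
    ("scale", StandardScaler()),
    ("pca", PCA(n_components=10)),
])
# reduced = tabular_pipeline.fit_transform(X)  # X is a placeholder data matrix

# Image pipeline: resize to the network input size, scale, and augment.
def preprocess_image(path, size=(640, 640)):
    img = Image.open(path).convert("RGB").resize(size)
    arr = np.asarray(img, dtype=np.float32) / 255.0   # scale to [0, 1]
    return [arr, np.fliplr(arr), np.rot90(arr)]       # simple augmentations
```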

Figure 3.

Comprehensive and conceptual workflow of artificial intelligence methodologies in meat processing applications showing preprocessing techniques and common tasks for both tabular and image data. CNN, convolutional neural network; PCA, principal component analysis; RGB, red-green-blue; SVM, support vector machine.

Abbreviations and evaluation metrics

To facilitate understanding of the literature analysis in this review, recurring abbreviations and their definitions are summarized in Table 1, covering the general terms used throughout as well as the equipment, data, preprocessing methods, models, and metrics discussed in subsequent sections.

Table 1.

Abbreviations and definitions in meat processing research.

Category Abbreviation Definition
General terms AI Artificial intelligence
ML Machine learning
DL Deep learning
IoT Internet of things
CV Computer vision
Equipment NIR Near-infrared
E-nose Electronic nose
E-tongue Electronic tongue
Vis-NIR Visible and near-infrared
HSI Hyperspectral imaging
MSI Multispectral imaging
GC-MS Gas chromatography with mass spectrometry
REIMS Rapid evaporative ionization mass spectroscopy
FTIR Fourier-transform infrared
Data target IMF Intramuscular fat
SFT Subcutaneous fat thickness
TBARS Thiobarbituric acid reactive substances
LC-SFA Long-chain saturated fatty acid
Preprocessing PCA Principal component analysis
SLIC Simple linear iterative clustering
RoI Regions of interest
MSF Mean shift filtering
FD First-order derivative
SD Second-order derivative
COW Correlated optimized warping
PLS Partial least-squares
SNV Standard normal variate
SG Savitzky-Golay smoothing
MSC Multiplicative scatter correction
GLCM Gray-level co-occurrence matrix
OSC Orthogonal signal correction
GI-AAE Generative interference adversarial autoencoder
CFS Correlation-based feature selection
FIR Finite impulse response
UVE Uninformative variable elimination
ACO Ant colony optimization
IWO Improved whale optimization
DFA Discriminant function analysis
FWT Fast wavelet transform
RBF Radial basis function
Model CNN Convolutional neural network
YOLO You only look once
VGG Visual geometry group
GB Gradient boosting
DT Decision trees
KNN K-nearest neighbors
RF Random forest
SR Stepwise regression
PLSR Partial least-squares regression
DNN Deep neural network
SVM Support vector machine
LS-SVM Least-squares support vector machine
LSM Least-squares mean
LDA Linear discriminant analysis
MLR Multinomial logistic regression
DBN Deep belief network
AFINN Adaptive fuzzy inference neural network
AFLS Adaptive fuzzy logic system
DCRNet Detect cells rapidly network
RNN Recurrent neural network
BERT Bidirectional encoder representations from transformer
ARIMA Autoregressive integrated moving average
LSTM Long short-term memory
Metrics MAE Mean absolute error
RMSE Root mean squared error
MAPE Mean average percentage error
IoU Intersection over union
AP Average precision
R2 Coefficient of determination

In addition, Table 2 explains the evaluation metrics for both regression and classification tasks. For regression problems, establishing reliable “ground truth” values represents a fundamental methodological challenge. Reference standards may be derived from direct empirical measurements or defined through acceptable tolerance ranges within predetermined categories, depending on specific research objectives and experimental constraints. To mitigate these reference standard limitations and enhance the reliability of AI-based investigations, standardized quantitative evaluation metrics are systematically employed. These include mean absolute error (MAE), root mean squared error (RMSE), and mean average percentage error (MAPE), which quantify the discrepancy between predicted and reference values. Lower values for these metrics indicate greater proximity to the established ground truth (Naidu et al., 2023). In contrast, higher values of R2 and the correlation coefficient indicate better model performance and stronger agreement with reference standards. For classification tasks, higher evaluation metric values correspond to greater accuracy in predictions, but similar challenges exist in establishing reliable categorical boundaries and validation methodologies.

Table 2.

Metrics for evaluating performance in meat processing research.

Category Metric Equation Brief Explanation
Regression MAE $\frac{1}{n}\sum_{i=1}^{n}\lvert y_i - \hat{y}_i \rvert$ Average of absolute differences between predictions and true values.
RMSE $\sqrt{\frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2}$ Square root of the average of squared differences.
MAPE $\frac{100\%}{n}\sum_{i=1}^{n}\left\lvert \frac{y_i - \hat{y}_i}{y_i} \right\rvert$ Average of absolute percentage errors.
R2 $1 - \frac{\sum_{i=1}^{n}(y_i - \hat{y}_i)^2}{\sum_{i=1}^{n}(y_i - \bar{y})^2}$ Proportion of variance explained by the model.
Corr $\frac{\sum_{i=1}^{n}(y_i - \bar{y})(\hat{y}_i - \bar{\hat{y}})}{\sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^2}\sqrt{\sum_{i=1}^{n}(\hat{y}_i - \bar{\hat{y}})^2}}$ Measures the linear relationship between predictions and actual values.
Classification Accuracy $\frac{TP + TN}{TP + TN + FP + FN}$ Ratio of correct predictions to total predictions.
F1-score $\frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$ Harmonic mean of precision and recall.
Precision $\frac{TP}{TP + FP}$ Ratio of true positives to all predicted positives.
Recall $\frac{TP}{TP + FN}$ Ratio of true positives to all actual positives.
IoU $\frac{\mathrm{Area\ of\ Overlap}}{\mathrm{Area\ of\ Union}}$ Measures the overlap between predicted and ground truth regions.
mAP $\frac{1}{C}\sum_{c=1}^{C}\mathrm{AP}(c)$ Mean of average precision over all classes.
AP $\sum_{n=1}^{N}(r_n - r_{n-1})\, p_n$ Area under the precision-recall curve, computed as the weighted sum of precisions at different recall levels.
  • AP, average precision; C, number of classes; Corr, Pearson correlation coefficient; FN, false negatives; FP, false positives; IoU, intersection over union; MAE, mean absolute error; mAP, mean average precision; MAPE, mean average percentage error; n, total number of samples; $p_n$ and $r_n$, precision and recall at the nth threshold, respectively; RMSE, root mean squared error; TN, true negatives; TP, true positives; $\bar{y}$, mean of actual values; $y_i$, actual value; $\hat{y}_i$, predicted value; $\bar{\hat{y}}$, mean of predicted values.
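For reference, the regression metrics of Table 2 can be computed directly with NumPy, as in the minimal sketch below (the sample values are hypothetical, and the MAPE line assumes no zero reference values).

```python
# Minimal sketch computing the Table 2 regression metrics with NumPy.
import numpy as np

def regression_metrics(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_true - y_pred
    mae = np.mean(np.abs(err))                       # mean absolute error
    rmse = np.sqrt(np.mean(err ** 2))                # root mean squared error
    mape = 100.0 * np.mean(np.abs(err / y_true))     # assumes y_true != 0
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                       # coefficient of determination
    corr = np.corrcoef(y_true, y_pred)[0, 1]         # Pearson correlation
    return {"MAE": mae, "RMSE": rmse, "MAPE": mape, "R2": r2, "Corr": corr}

print(regression_metrics([5.1, 6.0, 7.2], [5.0, 6.3, 7.0]))  # toy values
```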

Application of Artificial Intelligence Technology in Meat Processing

This section provides a systematic review of data-driven AI applications in meat processing, organized into 3 primary domains: (1) meat production analysis, (2) meat quality analysis, and (3) market analysis and consumer preferences. To enhance narrative cohesion and facilitate insightful comparisons, the analysis within each domain is thematic, structured around data types and target variables rather than purely chronological progression. While detailed specifications of each study (including data types, methodologies, and performance metrics) are summarized in their respective tables, the main text of this review prioritizes the practical applications and outcomes of AI rather than a technical exegesis of the underlying models.

Meat production analysis

Meat production analysis research categorizes yield evaluation into 2 main areas: overall carcass yield assessment, which involves entire carcass segmentation along with rib-eye regions and bones, and specific cut yield analysis, which focuses on predicting intramuscular fat (IMF) content and back-fat thickness. Table 3 summarizes studies that have applied AI techniques to meat production.

Table 3.

Summary of literature on meat production, categorized by purpose (overall carcass yield, specific cut yield), focusing on collected data and prediction methods.

Category Year Animal Data Method
Equipment Type Features Target NR Preprocessing Model Performance
Overall carcass Gonçalves et al. (2021) Beef RGB camera (smartphone) Image 3 264 × 2 448 Cla: 2 class (seg: carcasses) 226 SLIC VGG-16 Acc: 96.0%; IoU: 92.2%
Matthews et al. (2022) Beef Structured data, RGB camera (smartphone) Tabular, image 9 features Reg: carcass yields 123 844 - GB, CNN RMSE: 2.92; RMSE: 2.82
Lee et al. (2022b) Beef RGB camera (smartphone) Image 224 × 224 Reg: marbling score 10 246 Segment (RoI), resize MSENet MAE: 0.55; Corr: 0.95
Zhang et al. (2023) Pork RGB camera (smartphone) Image 2 448 × 3 264 Cla: 3 class (seg: marbling) 173 Resize, MSF Marbling-Net IoU: 76.8%; F1-score: 86.9%
de Melo et al. (2022) Beef Ultrasound scanning Image 309 × 213 Cla: 2 class (seg: rib-eye) 67 Resize U-Net++ Acc: 97.3%
Xu et al. (2023) Pork X-ray Image - Cla: 2 class (seg: bone) 27 837 Resize Encoder-decoder MAE: 0.12
Manko et al. (2022) Pork RGB camera (smartphone) Image 1 280 × 720 Cla: 2 class (key point detection) 25 Crop, resize, augment U-Net mAP: 0.98
Specific cut Kvam and Kongsro (2017) Pork Ultrasound scanning Image - Reg: IMF 3 037 Crop, normalize, augment CNN RMSE: 1.8; R2: 0.74
Liu et al. (2018) Pork RGB camera (smartphone) Image - Cla: 3 class 85 Segment (RoI), 18 color features SVM Acc: 75.0%
Shahinfar et al. (2019) Lamb Ultrasound scanning Image - Reg: IMF 3 500 - RF MAE: 0.74
Kucha et al. (2022) Pork NIR Tabular 900-1 700 nm Reg: IMF 144 SD, COW SVM RMSE: 0.37; R2: 0.89
Chen et al. (2022b) Pork RGB camera (smartphone) Image - Reg: IMF 1 481 Segment (RoI) GB Corr: 0.81
Masferrer et al. (2018) Pork AutoFomIII Tabular 11 features Cla: 4 class 4 000 - SVM Acc: 73.0%
Masferrer et al. (2019) Pork AutoFomIII Tabular 11 features Cla: 4 class 400 - SVM Acc: 75.3%
Lee et al. (2022a) Pork RGB camera (smartphone) Image - Reg: back-fat thickness 3 782 Crop BTENet MAPE: 6.4
  • Acc, accuracy; BTENet, back-fat thickness estimation network; Cla, classification; CNN, convolutional neural network; Corr, Pearson correlation coefficient; COW, correlated optimized warping; GB, gradient boosting; IMF, intramuscular fat; IoU, intersection over union; MAE, mean absolute error; mAP, mean average precision; MAPE, mean average percentage error; MSENet, mean and standard deviation-based ensemble network; MSF, mean shift filtering; NIR, near-infrared; NR, number of recordings; Reg, regression; RF, random forest; RGB, red-green-blue; RMSE, root mean squared error; RoI, regions of interest; SD, second-order derivative; Seg, segment; SLIC, simple linear iterative clustering; SVM, support vector machine.

Overall carcass analysis

This approach assesses yield, marbling distribution, and the segmentation of rib-eye and bone. Gonçalves et al. (2021) employed a convolutional neural network (CNN)-based segmentation approach to identify carcass regions in smartphone-captured images by first partitioning each image into superpixels and then classifying each segment as either carcass or background, achieving 96.0% accuracy.
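A minimal sketch of this superpixel-then-classify strategy is shown below (assuming scikit-image; the image path and the classifier call are hypothetical placeholders, not the trained VGG-16 of Gonçalves et al.).

```python
# Superpixel partitioning followed by per-segment classification (sketch only).
import numpy as np
from skimage.io import imread
from skimage.segmentation import slic

image = imread("carcass.jpg")  # hypothetical smartphone image of a carcass
segments = slic(image, n_segments=200, compactness=10)  # superpixel label map

for label in np.unique(segments):
    mask = segments == label
    patch = image.copy()
    patch[~mask] = 0  # black out everything except the current superpixel
    # is_carcass = cnn.predict(patch)  # placeholder for a trained CNN classifier
```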

In the context of regression-based prediction, AI models have proven highly effective at estimating key carcass traits directly from image data. For example, Matthews et al. (2022) demonstrated the effectiveness of a 2-dimensional CNN model for accurately estimating the weight of specific cuts from a video analysis system. In a similar application for quality assessment, Lee et al. (2022b) developed a specialized CNN, mean and standard deviation-based ensemble network (or MSENet), to successfully predict marbling scores from red-green-blue (RGB) images taken at the intersection of the 12th and 13th ribs.

Research focusing on segmenting specific RoI within entire carcass structures has led to the development of specialized architectures such as Marbling-Net, which have demonstrated superior performance compared to standard baseline models like U-Net in precisely segmenting marbling regions (Zhang et al., 2023). Complementing these quality-focused applications, the versatility of segmentation techniques has been effectively demonstrated in anatomical analysis across multiple imaging platforms. For instance, CNN-based architectures have achieved successful detection of loin eye areas in ultrasound imaging (de Melo et al., 2022), while efficient encoder-decoder algorithms have been developed for real-time bone segmentation from X-ray images (Xu et al., 2023).

Beyond segmentation tasks, keypoint detection has emerged as another critical computer vision technique for meat processing applications, particularly from the perspective of meat automation systems. Manko et al. (2022) advanced robotic manipulation capabilities in meat processing by developing keypoint detection algorithms to identify critical anatomical landmarks on pig carcasses, enabling accurate prediction of limb orientation and identification of optimal gripping points for automated handling. Their methodology involved collecting RGB images from 6 different viewpoints to ensure comprehensive spatial coverage and robust keypoint localization across varying perspectives.

Specific cut analysis

Specific cut analysis frequently focuses on IMF content and back-fat thickness prediction. Kvam and Kongsro (2017) employed ultrasound imaging to successfully predict IMF in pigs using a custom CNN model trained on preprocessed images that were normalized to unit standard deviation and augmented through flipping to enhance model robustness. Similarly, Liu et al. (2018) developed an approach to predict IMF categories in pork using a support vector machine (SVM) model applied to RGB images. To segment the RoI from the images, they employed a gray-level histogram combined with the Otsu method for background removal and binarization. Eighteen color features were subsequently extracted and applied to the SVM model, achieving a classification accuracy of 75.0% for IMF levels (grades 1–3), demonstrating the feasibility of this approach for practical implementation.
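The Otsu-based RoI step can be sketched as follows (assuming OpenCV and scikit-learn; the per-channel statistics below merely stand in for the 18 color features of Liu et al. (2018), which are not fully specified here).

```python
# Otsu-threshold RoI segmentation plus color features feeding an SVM (sketch).
import cv2
import numpy as np
from sklearn.svm import SVC

def color_features(bgr_image):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    # Otsu's method picks the binarization threshold automatically.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    roi = bgr_image[mask > 0]                  # pixels inside the foreground RoI
    feats = []
    for ch in range(3):                        # B, G, R channels
        values = roi[:, ch].astype(np.float32)
        feats += [values.mean(), values.std(), np.median(values)]
    return np.array(feats)

# features = np.stack([color_features(cv2.imread(p)) for p in image_paths])
# svm = SVC(kernel="rbf").fit(features, imf_grades)  # hypothetical grade labels
```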

For IMF prediction, several studies have employed diverse data sources and AI methodologies with notable success. Shahinfar et al. (2019) utilized a comprehensive dataset including animal weights recorded from birth through slaughter (130–700 d) and 3 500 ultrasound images collected concurrently, with the RF model achieving superior predictive performance among the evaluated AI approaches. Kucha et al. (2022) employed near-infrared (NIR) spectra from vacuum-packed pork samples, preprocessing the spectral data using SD transformation and the correlated optimized warping (or COW) method to eliminate polyethylene film interference, with the SVM model subsequently demonstrating robust performance in IMF content estimation. Chen et al. (2022b) integrated computer vision scores derived from RGB images with conventional pork quality traits such as meat color, marbling score, pH value, and drip loss. The datasets underwent background removal, grayscale conversion, and RoI segmentation to derive the computer vision score, with the GB model achieving a Pearson correlation coefficient of 0.81, highlighting the potential of combining computer vision features with conventional quality metrics.

For back-fat thickness classification and prediction, several studies have developed AI-driven approaches with varying methodologies. Masferrer et al. (2018) systematically varied both the number and type of predictor variables to improve SVM model performance for categorizing ham into 4 thickness groups: thin (0–10 mm), standard (11–15 mm), semi-fat (16–20 mm), and fat (>20 mm), with the SVM model successfully classifying ham categories using variables including lean meat percentage, sex, and breed. Masferrer et al. (2019) employed the AutoFomIII system to collect characteristic data from 400 pig carcasses and successfully classified them into 4 categories (HC1: <9 mm, HC2: 9–12 mm, HC3: 13–19 mm, and HC4: >19 mm) based on subcutaneous fat thickness using an SVM model. Lee et al. (2022a) proposed a CNN-based architecture called back-fat thickness estimation network (or BTENet) for predicting back-fat thickness in slaughtered pigs. The model successfully performed both segmentation and thickness estimation simultaneously by cropping the back-fat region from original images.

Figure 4 summarizes the research trends in meat production studies. Overall carcass analysis has primarily focused on predicting carcass yield and segmenting loin eye areas and bone structures, while specific cut analysis has concentrated on IMF content and back-fat thickness prediction through regression and classification tasks. Most studies employ image-based approaches, utilizing segmented RoI from original images followed by various preprocessing techniques such as resizing and simple linear iterative clustering (SLIC) before applying CNN-based models for enhanced accuracy in specific anatomical characteristic analysis. Nevertheless, approaches that rely on superpixel-based partitioning for carcass detection can be computationally costly and less accurate. End-to-end segmentation-based methodologies should therefore be prioritized, as they offer advantages in both processing efficiency and classification precision.

Figure 4.

Overview of the artificial-intelligence-driven meat production analysis framework: a red-green-blue camera-based workflow for evaluating carcass characteristics and specific cut parameters using advanced image segmentation techniques. IMF, intramuscular fat; RGB, red-green-blue; SLIC, simple linear iterative clustering.

Meat quality analysis

The meat quality analysis section examines research in 3 primary areas: meat quality assessment, freshness evaluation, and meat authentication. Meat quality assessment aims to predict intrinsic meat quality characteristics, such as pH, water activity, meat color, IMF, flavor, lipid/protein oxidation, and texture profile analysis, to evaluate whether the meat is commercially acceptable and to assess its performance under refrigerated or frozen storage conditions (Table 4). Freshness evaluation focuses on determining the current state of meat, often through sensory analysis and real-time monitoring of spoilage indicators (Table 5). Lastly, in the domain of meat authentication, the primary objective is to quantify the degree of adulteration among various meat types and to classify cuts based on distinctive muscle characteristics (Table 6). Detailed discussions of these topics are provided in the subsequent subsections.

Table 4.

Summary of literature on meat quality assessment, categorized by purpose (attribute analysis, meat quality, storage condition), focusing on collected data and prediction methods.

Category Year Animal Data Method
Equipment Type Features Target NR Preprocessing Model Performance
Attribute analysis Ndob and Lebert (2018) Pork pH meter, aw meter Tabular 21 features Reg: pH, Water activity 143 Normalize DNN MAE: 0.2; MAE: 0.01
Dixit et al. (2021) Beef, lamb, venison NIR Image, tabular 900–1 700 nm, 235 spectrals Reg: pH, IMF 2 196 Segment (RoI), SNV CNN R2: 0.89; R2: 0.89
Wang et al. (2021) Lamb NIR Tabular 900–1 700 nm, 256 spectrals Reg: stearic acid 151 MSC, SNV, OSC, PCA LS-SVM RMSE: 0.18; R2: 0.76
El Karam et al. (2023) Pork UV spectrometry Tabular 288–560 nm Cla: 5 class 72 Normalize SVM Acc: 97.6%
Cheng et al. (2023b) Pork Vis-NIR Image, tabular 400–1 002 nm, 240 spectrals Reg: TBARS, carbonyl 240 SNV, SG, PLSR Multitask CNN R2: 0.97; R2: 0.96
Cheng et al. (2023a) Pork Vis-NIR Image, tabular 400–1 000 nm, 411 spectrals Reg: TBARS 240 SNV, SG 3D-CNN RMSE: 0.03; R2: 0.92
Cui et al. (2024) Red meat (lamb, pork, beef) Vis-NIR Image, tabular 400–1 000 nm, 128 spectrals Reg: LC-SFA - Segment (RoI), GI-AAE CNN RMSE: 0.55; R2: 0.72
Sun et al. (2018) Pork RGB camera (smartphone) Image - Cla: 6 class 1 400 Segment (RoI), 18 color features SVM Acc: 92.5%
Tang et al. (2023) Pork Vis-NIR Image, tabular 400–1 000 nm, 204 spectrals Reg: fat, lean, etc. 1 091 Segment (RoI) DNN R2: 0.72; R2: 0.73
Wang et al. (2019) Beef E-tongue Tabular 12 features Cla: 5 class 60 PCA SVM Acc: 90.0%
Cui et al. (2022) Beef GC × GC − MS Tabular - Cla: 10 class 8 - DNN Acc: 90.0%
Zhang et al. (2022b) Lamb NIR Tabular 900–1 700 nm, 256 spectrals Reg: hardness, gumminess, chewiness 84 OSC, mean centering SVM R2: 0.98; R2: 0.98; R2: 0.98
Meat quality Wold et al. (2017) Chicken NIR Tabular 760–1 040 nm, 15 spectrals Cla: 2 class 197 PLSR LDA Acc: 99.5%
Geronimo et al. (2019) Chicken NIR, RGB camera (smartphone) Tabular, image 1 150–2 150 nm, 1 600 × 1 200 Cla: 2 class 80 Segment (RoI), CLAHE SVM Acc: 97.5%; Acc: 91.8%
Ahn et al. (2020) Red meat X-ray Image - Cla: 2 class 29 149 CLAHE, hough, resize, augment CNN Acc: 99.8%
Barbon et al. (2018) Chicken NIR Tabular 1 050 spectrals Cla: 4 class 158 CFS REPTree F1-score: 73.5%
Biglia et al. (2022) Pork Fat-O-meater Tabular 11 features Cla: 2 class 134 One-hot encoding DNN Acc: 86.8%
Penning et al. (2020) Beef REIMS Tabular - Cla: 5 class 1 800 PCA, feature selection SVM Acc: 99.0%
Alaiz-Rodríguez and Parnell (2020) Lamb FTIR Tabular 1 687 spectrals Cla: 2 class 134 MSC, PCA DNN Acc: 93.8%
García-Infante et al. (2024) Lamb Colorimeter, thermocouple, SPME + GC-MS Tabular 144 features Cla: 3 class 78 Data cleaning DNN Acc: 88.0%
Storage condition Huang et al. (2016a) Pork NIR Tabular 760–2 500 nm, 870 spectrals Cla: 4 class 180 PCA DNN Acc: 93.3%
Xu and Sun (2017) Fish HSI Image, tabular - Cla: 2 class 400 SNV, normalize, segment (RoI), PCA TreeBagger Acc: 97.8%
Górska-Horczyczak et al. (2017) Pork E-nose Tabular 90 features Cla: 7 class 1 008 Normalize DNN Acc: 85.4%
Abie et al. (2021) Pork Electrical impedance Tabular 10–500 kHz, 36 features Cla: 2 class 180 - LSTM Acc: 95.0%
Swanson and Gowen (2022) Poultry Vis-NIR Image, tabular 443–726 nm, 204 spectrals Cla: 2 class 10 SNV, SG SVM Acc: 88.0%
Park et al. (2023) Beef Vis-NIR Image, tabular 400–1 000 nm, 300 spectrals Cla: 3 class 4 950 MSC, SNV, SG, normalize SVM Acc: 97.6%
  • 3D-CNN, 3-dimensional convolutional neural network; Acc, accuracy; aw, water activity; CFS, correlation-based feature selection; Cla, classification; CLAHE, contrast-limited adaptive histogram equalization; CNN, convolutional neural network; Corr, Pearson correlation coefficient; DNN, deep neural network; E-nose, electronic nose; E-tongue; electronic tongue; FTIR, Fourier-transform infrared; GC, gas chromatography; GI-AAE, generative interference adversarial autoencoder; HSI, hyperspectral imaging; IMF, intramuscular fat; LC-SFA, long-chain saturated fatty acid; LDA, linear discriminant analysis; LS-SVM, least-squares support vector machine; LSTM, long short-term memory; MAE, mean absolute error; MS, mass spectrometry; MSC, multiplicative scatter correction; NIR, near-infrared; NR, number of recordings; OSC, orthogonal signal correction; PCA, principal component analysis; PLSR, partial least-squares regression; Reg, regression; REIMS, rapid evaporative ionization mass spectroscopy; RGB, red-green-blue; RMSE, root mean squared error; RoI, regions of interest; SG, Savitzky-Golay smoothing; SNV, standard normal variate; SPME, solid phase microextraction; SVM, support vector machine; TBARS, thiobarbituric acid reactive substances; UV, ultraviolet; Vis-NIR, visible and near-infrared.

Table 5.

Summary of literature on meat freshness evaluation, categorized by purpose (freshness status analysis, microbe analysis), focusing on collected data and prediction methods.

Category Year Animal Data Method
Equipment Type Features Target NR Preprocessing Model Performance
Freshness status analysis Hasan et al. (2012) Beef, fish E-nose Tabular 8 features Cla: 4 class 1 372 FIR KNN Acc: 96.2%
Haddi et al. (2015) Red meat (beef, goat, sheep) E-nose, E-tongue Tabular 6 features, 21 features Cla: 5 class 75 PCA SVM Acc: 81.3%; Acc: 100%
Vajdi et al. (2019) Fish E-nose Tabular 35 features Cla: 3 class 64 PCA DNN Acc: 96.8%
Huang et al. (2016b) Fish NIR Tabular 10 000–4 000 cm−1, 1 557 spectrals Cla: 4 class 180 PCA, MSC DNN Acc: 93.3%
Yang et al. (2017) Beef HSI Image, tabular 400–1 000 nm, 774 spectrals Cla: 3 class 105 Random frog LS-SVM Acc: 97.1%
Ropodi et al. (2017) Beef, horse MSI Image, tabular 405–970 nm, 18 spectrals Cla: 4 class 350 Segment (RoI), PCA SVM Acc: 95.3%
Alshejari and Kodogiannis (2017) Beef Vis-NIR Image, tabular 405–970 nm, 18 spectrals Cla: 8 class 112 PCA AFINN Acc: 92.8%
Zhang et al. (2022a) Lamb Vis-NIR Image, tabular 400–1 000 nm, 125 spectrals Cla: 3 class 300 PCA RF Acc: 91.0%
Huang et al. (2023) Beef MOF@SnS2 Tabular - Cla: 2 class 1 532 - DNN Acc: 78.5%
Arsalane et al. (2018) Beef RGB camera (smartphone) Image 2 592 × 1 944 Cla: 3 class 81 Segment (RoI), color space (HSV), PCA SVM Acc: 100%
Guo et al. (2020) Chicken, fish, beef RGB camera (smartphone) Image - Cla: 3 class 4 161 White balance ResNet-101 Acc: 98.5%
Amani and Sarkodie (2022) Red meat RGB camera (smartphone) Image 1 280 × 720 Cla: 2 class 1 896 - CNN Acc: 100%
Hewawasam et al. (2023) Chicken RGB camera (smartphone) Image - Cla: 2 class 2 487 Resize, augment MobileNet Acc: 100%
Kim et al. (2023b) Pork RGB camera (smartphone), air sensor Image - Cla: 3 class 18 Normalize, augment 1D-CNN Acc: 99.4%
Kim et al. (2023c) Pork RGB camera (smartphone) Image - Cla: 3 class - Color space (HSV) 1D-CNN Acc: 98.7%
Elangovan et al. (2024) Red meat (lamb, pork, beef) RGB camera (smartphone) Image 1 280 × 720 Cla: 3 class 1 896, 2 266 Resize, augment ConvNet Acc: 99.4%; Acc: 96.6%
Arsalane et al. (2024) Beef RGB camera (smartphone) Image 2 592 × 1 944 Cla: 2 class 81 Resize, crop, FWT KNN Acc: 92.5%
Zheng et al. (2024) Pork Microscope Image 3 840 × 2 880 Cla: 2 class (object detection) 54 445 Crop, augment DCRNet AP: 81.2%
Microbe analysis Papadopoulou et al. (2013) Beef E-nose Tabular 8 features Cla: 3 class, reg: TVC 177 PCA, DFA SVM Acc: 90.4%; Corr: 0.86
Kodogiannis et al. (2014) Beef FTIR Tabular 1 800–1 000 cm−1 Cla: 3 class, reg: TVC 74 SG, PCA AFLS Acc: 95.9%; RMSE: 0.37
Gu et al. (2017) Pork E-nose Tabular 10 features Cla: 3 class, reg: acid value 25 - DNN, SVM Acc: 100%; R2: 0.98
Kaswati et al. (2020) Chicken Vis-NIR Image, tabular 400–1 000 nm Cla: 2 class, reg: pH 91 Segment (RoI) RF, PLSR Acc: 85.5%; R2: 0.84
Huang et al. (2014) Pork E-nose, RGB camera (smartphone), NIR Image, tabular 11 features, 640 × 480, 10 000–4 000 cm−1 Reg: TVB-N 90 SNV, segment (RoI), PCA DNN RMSE: 2.73; R2: 0.95
Huang et al. (2015) Pork NIR Tabular 3 spectrals Reg: TVB-N 77 GLCM, PCA BP-AdaBoost RMSE: 6.94; Corr: 0.83
Khulal et al. (2016) Chicken HSI Image, tabular 1 628 × 618, 618 spectrals Reg: TVB-N 75 SNV, PCA, ACO DNN RMSE: 6.38; Corr: 0.75
Dai et al. (2016) Prawn HSI Image, tabular 400–1 000 nm, 206 spectrals Reg: TVB-N 240 UVE, feature selection LS-SVM RMSE: 0.72; R2: 0.95
Liang et al. (2019) Pork Terahertz spectroscopy Tabular 0.2–2.0 THz, 250 spectrals Reg: K value 80 FD, SG, PCA BP-AdaBoost RMSE: 9.89; R2: 0.84
Fengou et al. (2020) Pork FTIR and MSI Image, tabular 1 800–900 cm−1, 405–970 nm Reg: TVC 903 PCA, PLS, relief, AutoEncoder SVM RMSE: 0.88; Corr: 0.83
Dourou et al. (2021) Chicken FTIR Tabular 1 800–900 cm−1 Reg: TVC 878 SNV SVM RMSE: 0.72; R2: 0.76
Kolosov et al. (2023) Pork MSI Image, tabular 1 200 × 1 200, 18 spectrals Reg: TVC 847 Resize, normalize ResNet-34 MAE: 0.05
Lakehal and Lakehal (2023) Beef RGB camera (smartphone) Image - Reg: time 120 Color space (CIELAB) DNN RMSE: 0.06; R2: 0.97
  • 1D-CNN, 1-dimensional convolutional neural network; Acc, accuracy; ACO, ant colony optimization; AFINN, adaptive fuzzy inference neural network; AFLS, adaptive fuzzy logic system; CIELAB, Commission Internationale de l’Eclairage color space; Cla, classification; CNN, convolutional neural network; Corr, Pearson correlation coefficient; DCRNet, detect cells rapidly network; DFA, discriminant function analysis; DNN, deep neural network; E-nose, electronic nose; E-tongue, electronic tongue; FD, first-order derivative; FIR, finite impulse response; FTIR, Fourier-transform infrared; FWT, fast wavelet transform; GLCM, gray-level co-occurrence matrix; HSI, hyperspectral imaging; HSV, hue-saturation-value; KNN, K-nearest neighbors; LS-SVM, least-squares support vector machine; MAE, mean absolute error; MSC, multiplicative scatter correction; MSI, multispectral imaging; NIR, near-infrared; NR, number of recordings; PCA, principal component analysis; PLS, partial least-squares; PLSR, partial least-squares regression; Reg, regression; RF, random forest; RGB, red-green-blue; RoI, regions of interest; RMSE, root mean squared error; SG, Savitzky-Golay smoothing; SNV, standard normal variate; SVM, support vector machine; TVB-N, total volatile basic nitrogen; TVC, total viable count; UVE, uninformative variable elimination; Vis-NIR, visible and near-infrared.

Table 6.

Summary of literature on meat authentication, categorized by purpose (meat type, meat cut), focusing on collected data and prediction methods.

Category Year Animal Data Method
Equipment Type Features Target NR Preprocessing Model Performance
Meat type Tian et al. (2013) Lamb, pork E-nose Tabular 10 features Reg: pork ratio in lamb 800 PCA DNN RMSE: 5.26; R2: 0.97
Güney and Atasoy (2015) Fish E-nose Tabular 4 800 features Cla: 3 class 129 Subsampling DT Acc: 96.1%
Alfar et al. (2016) Beef, chicken, lard NIR Tabular 900–1 500 nm, 10 spectrals Cla: 3 class 120 SNV, normalize SVM Acc: 98.3%
Acquarelli et al. (2017) Chicken, pork, turkey FTIR Image, tabular 1 000–1 800 cm−1, 448 spectrals Cla: 3 class 120 - CNN Acc: 100%
Al-Sarayreh et al. (2018) Lamb, beef, pork HSI Image, tabular 672–957 nm, 25 spectrals Cla: 4 class 86 535 Superpixel SLIC, normalize 3D-CNN Acc: 96.1%
Zhao et al. (2019) Beef Vis-NIR Image, tabular 400–1 000 nm, 250 spectrals Reg: adulteration level 76 Segment (RoI), feature selection (IWO) LS-SVM RMSE: 5.67; Corr: 0.97
Al-Sarayreh et al. (2020) Red meat (lamb, beef, pork) Line-scanning Vis-NIR Image 548–1 701 nm, 235 spectrals Cla: 4 class 559 032 Superpixel segmentation 3D-CNN Acc: 98.6%
Ayaz et al. (2020) Beef, chicken, lamb Vis-NIR Image, tabular 400–1 000 nm; 224 spectrals Cla: 3 class 60 Segment (RoI), SG, PCA SVM Acc: 88.8%
Zhang et al. (2022c) Pork, lamb Vis-NIR Image, tabular 373–1 033 nm, 616 spectrals Cla: 5 class 120 MSC, SG CNN Acc: 99.9%
Robert et al. (2021) Beef, lamb, venison Raman spectroscopy Tabular 313–1 895 cm−1 Cla: 3 class 90 SNV, SG SVM Acc: 93.0%
Sun et al. (2022) Beef, pork, lamb Raman spectroscopy Tabular 0–2 000 cm−1 Cla: 3 class 2 400 Wavelet denoising, normalize, feature selection DNN F1-score: 99.8%
Meat cut Sanz et al. (2016) Lamb HSI Image, tabular 380–1 028 nm, 1 040 spectrals Cla: 4 class 120 PCA LSM Acc: 96.67%
Li et al. (2021a) Beef Vis-NIR Image, tabular 500–800 nm, 6 spectrals Cla: 3 class 555 GLCM, color space (CIELAB) LDA Acc: 90.9%
Prakash et al. (2021) Beef RGB camera (smartphone) Image - Cla: 5 class 7 987 Segment (RoI), augment Ensemble (CNN, MLR, DT) Acc: 99.1%
Huang et al. (2022) Pork RGB camera (smartphone) Image - Cla: 4 class 1 992 Resize, crop, augment ResNet-50 Acc: 94.4%

  • 3D-CNN, 3-dimensional convolutional neural network; Acc, accuracy; CIELAB, Commission Internationale de l’Eclairage color space; Cla, classification; CNN, convolutional neural network; Corr, Pearson correlation coefficient; DNN, deep neural network; DT, decision trees; E-nose, electronic nose; FTIR, Fourier-transform infrared; GLCM, gray-level co-occurrence matrix; HSI, hyperspectral imaging; IWO, improved whale optimization; LDA, linear discriminant analysis; LSM, least-squares mean; LS-SVM, least-squares support vector machine; MLR, multinomial logistic regression; MSC, multiplicative scatter correction; NIR, near-infrared; NR, number of recordings; PCA, principal component analysis; Reg, regression; RGB, red-green-blue; RMSE, root mean squared error; RoI, regions of interest; SG, Savitzky-Golay smoothing; SLIC, simple linear iterative clustering; SNV, standard normal variate; SVM, support vector machine; Vis-NIR, visible and near-infrared.

Meat quality assessment

Various meat quality parameters and other relevant chemical traits have been determined using AI-based ML approaches. Ndob and Lebert (2018) successfully predicted pH and water activity in pork by integrating data from a pH meter and an aw meter with a deep neural network (DNN) model. Building on this foundation, Dixit et al. (2021) utilized NIR technology to predict pH and IMF levels in red meat using hyperspectral imaging (HSI) datasets from beef, lamb, and venison, with a custom CNN model achieving an R2 of 0.89 for both parameters. In a related study focusing on specific fatty acid analysis, Wang et al. (2021) utilized NIR spectroscopy to quantify stearic acid content in lamb meat from 3 different cuts, with the least-squares (LS)-SVM achieving an RMSE of 0.18 and R2 of 0.76.

El Karam et al. (2023) investigated NaCl concentrations ranging from 1.1% to 1.9% in pork muscle using ultraviolet spectrometry, with an SVM model achieving 97.6% classification accuracy. Advancing beyond single-parameter analysis, Cheng et al. (2023b) developed a multitask CNN model to predict lipid and protein oxidation in frozen-thawed pork, achieving R2 values of 0.97 for thiobarbituric acid reactive substances (TBARS) and 0.96 for carbonyl content. Complementing this work, Cheng et al. (2023a) introduced a lightweight 3-dimensional (3D) CNN model specifically for TBARS prediction, achieving an RMSE of 0.03 and R2 of 0.92. Furthermore, Cui et al. (2024) explored long-chain saturated fatty acid content prediction in lamb, beef, and pork using visible (Vis) NIR spectral data, with the CNN model achieving improved performance on generative interference adversarial autoencoder-enhanced datasets (RMSE of 0.55 and R2 of 0.72) compared to raw datasets.

Moving to visual and textural quality assessments, Sun et al. (2018) employed a computer vision system combined with AI models to classify color and marbling in pork, with the SVM model achieving 92.5% accuracy in color classification and 75.0% in marbling classification. Expanding on this multi-attribute approach, Tang et al. (2023) predicted 14 meat quality traits using Vis-NIR imaging combined with a DNN model, demonstrating high predictive accuracy for fat (R2 = 0.72) and chemical lean (R2 = 0.73), with moderate performance for other quality attributes including moisture, collagen, and color parameters.

In the realm of sensory evaluation, Wang et al. (2019) utilized an electronic (E) tongue sensor array to detect ions in beef samples, with the SVM model achieving 90.0% accuracy in flavor scoring. Similarly, Cui et al. (2022) employed comprehensive two-dimensional gas chromatography-mass spectrometry (GC × GC – MS) and a DNN to predict beef flavor based on Maillard reaction products, demonstrating accuracy exceeding 90.0% in predicting flavor scores. For texture analysis, Zhang et al. (2022b) applied NIR spectroscopy to predict texture properties of lamb loins, with the SVM model achieving exceptionally high predictive accuracy for hardness (R2 = 0.98), gumminess (R2 = 0.98), and chewiness (R2 = 0.98).

Quality defect detection has shown considerable progress through AI-driven approaches. Wold et al. (2017) investigated wooden breast syndrome in chicken breast fillets using NIR spectroscopy, with a linear discriminant analysis (LDA) model achieving 99.5% accuracy in defect detection. Building on this success, Geronimo et al. (2019) combined computer vision with NIR spectroscopy to classify woody breast abnormalities in chicken, with an SVM model achieving 97.5% accuracy using NIR spectral data. Beyond biological defects, Ahn et al. (2020) detected needles in meat using X-ray images, with their CNN model reaching 99.8% classification accuracy. Additionally, breed and production method classification has been explored by Alaiz-Rodríguez and Parnell (2020), who employed Fourier-transform infrared (FTIR) spectroscopy to differentiate suckling lamb carcasses raised on milk replacer from those raised on ewe milk, with the DNN-PCA approach achieving 93.8% accuracy. García-Infante et al. (2024) further demonstrated the potential of multidata integration by classifying lamb carcasses from 3 native Spanish breeds using organoleptic, sensorial, and nutritional data, with DNN achieving 88.0% accuracy when combining both datasets.

Storage condition assessment has become increasingly important, with studies focusing on distinguishing between frozen and thawed meat. Huang et al. (2016a) classified pork samples into 4 categories based on freezing history using NIR spectroscopy, with a DNN model achieving 93.3% accuracy. Similarly, Xu and Sun (2017) employed HSI to identify freezer burn in frozen salmon fillets, with a TreeBagger ensemble classifier, achieving 97.8% accuracy. E-nose technology has also proven effective in this area, as demonstrated by Górska-Horczyczak et al. (2017), who introduced an E-nose to differentiate fresh pork from frozen, thawed, and spoiled counterparts, with a DNN model achieving 85.4% average precision across 7 categories.

Complementing spectroscopic approaches, Abie et al. (2021) leveraged bioimpedance to assess different thawing methods on meat quality, with a long short-term memory (LSTM) model achieving classification accuracies of 91.6% for fast thawing and 95.0% for slow thawing conditions. Recent portable technology applications have shown promising results, with Swanson and Gowen (2022) examining poultry samples using portable Vis-NIR with multivariate AI methods, achieving 88.0% accuracy for detecting thawed poultry with an SVM. Park et al. (2023) combined HSI with AI to classify beef samples into 3 storage states, with an SVM model achieving 97.6% classification accuracy, demonstrating the potential for rapid, noninvasive assessments of meat storage conditions.

Figure 5 provides a concise overview of spectroscopic techniques for meat quality assessment and distinguishes workflows by data format. Tabular spectral measurements are smoothed and feature-selected before predictive models such as DNNs or SVMs are trained. Conversely, HSI data undergo RoI extraction followed by analysis with CNNs. Both pipelines support regression of physicochemical attributes (pH, moisture, acidity) and classification of product quality and storage conditions (normal vs. defective, frozen vs. thawed), underscoring the versatility of AI-enabled spectroscopy.
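A minimal sketch of the recurring SNV-plus-Savitzky-Golay pretreatment is given below (assuming NumPy and SciPy; the window length, polynomial order, and derivative order are illustrative choices, not values from any cited study).

```python
# SNV scatter correction followed by Savitzky-Golay derivative smoothing (sketch).
import numpy as np
from scipy.signal import savgol_filter

def snv(spectra):
    """Standard normal variate: center and scale each spectrum individually."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

spectra = np.random.rand(10, 256)          # toy set: 10 samples x 256 bands
pretreated = savgol_filter(snv(spectra), window_length=11, polyorder=2,
                           deriv=1, axis=1)  # first-derivative smoothing
```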

Figure 5.

Overview of spectroscopic approaches for meat quality assessment: analysis of spectral data for predicting attributes, quality classification, and storage condition. CFS, correlation-based feature selection; PCA, principal component analysis; RoI, regions of interest; SG, Savitzky-Golay smoothing; SNV, standard normal variate.

Meat freshness evaluation

Freshness evaluation can be broadly categorized into freshness status analysis and microbe analysis. In freshness status analysis, the primary objective is to classify meat into discrete categories such as fresh, semi-fresh, and spoiled, with classification criteria varying considerably among studies. In contrast, microbe analysis focuses on predicting microbial counts generated during spoilage, enabling more precise evaluation of overall freshness.

A variety of sensor- and imaging-based approaches have been explored to categorize meat freshness. Hasan et al. (2012) employed an E-nose system to classify meat and fish samples into 4 categories, with SVM and K-nearest neighbors (KNN) models achieving accuracies of 94.5% and 96.2%, respectively. Building on this foundation, Haddi et al. (2015) combined an E-nose and E-tongue with AI models to classify beef, goat, and sheep freshness over 5 storage d, achieving 81.3% accuracy for E-nose data and 100% for E-tongue data. Vajdi et al. (2019) monitored fish headspace over a 15-d period using an E-nose device to classify samples into freshness categories, with a DNN model achieving 96.8% classification accuracy.

Expanding beyond electronic sensing, several studies have employed HSI techniques for freshness analysis. Huang et al. (2016b) compared NIR spectroscopy with computer vision systems to assess fish freshness, with a DNN model achieving 90.0% accuracy for image-based data and 93.3% for spectral data. Yang et al. (2017) utilized HSI data to classify cooked beef samples as fresh, medium fresh, or spoiled, achieving 97.1% accuracy with an LS-SVM. Ropodi et al. (2017) employed multispectral imaging (MSI) to distinguish beef from horse meat and detect adulteration, achieving 95.3% accuracy using an SVM. Alshejari and Kodogiannis (2017) investigated the effects of packaging conditions and storage temperatures on beef using multispectral images, with a DNN achieving 90.1% accuracy. Zhang et al. (2022a) advanced lamb freshness prediction using Vis-NIR imaging with chemical measurements, with an RF model achieving 91.0% accuracy for classifying lamb into premium, subfresh, and spoiled categories. Huang et al. (2023) integrated MOF@SnS2 sensors with AI to determine beef storage duration, with a DNN model achieving 78.5% accuracy.

The accessibility of smartphone technology has led to extensive research utilizing RGB cameras for freshness analysis. Arsalane et al. (2018) evaluated beef freshness using RGB images, extracting color space features with an SVM model achieving 100% classification accuracy. Guo et al. (2020) employed a ResNet-101 model on smartphone images of chicken, fish, and beef to classify samples as fresh, less fresh, or spoiled, achieving 98.5% accuracy. Similarly, Amani and Sarkodie (2022) utilized a CNN model on meat images to differentiate fresh from spoiled meat, achieving 100% classification accuracy. Hewawasam et al. (2023) developed an integrated poultry management system using RGB cameras, with a MobileNet model achieving 100% accuracy in identifying chicken slaughter timing. Kim et al. (2023b) fused RGB images and air quality measurements using MobileNet and 1-dimensional (1D) CNN, achieving 99.4% classification accuracy for categorizing pork freshness. Kim et al. (2023c) employed a color sensor to measure hue-saturation-value (or HSV) values for pork classification, achieving 98.7% accuracy with a 1D-CNN.

Recent advances in computer vision have further enhanced freshness assessment capabilities. Elangovan et al. (2024) classified red meat freshness into 2 or 3 categories using RGB cameras, with ConvNet-18 achieving 99.4% accuracy for 2-category classification and ConvNet-24 achieving 96.6% for 3-category classification. Arsalane et al. (2024) predicted beef freshness by analyzing camera-acquired images using fast wavelet transform feature extraction, with KNN achieving 92.5% accuracy and SVM achieving 90.1%. Zheng et al. (2024) developed detect cells rapidly network (or DCRNet) for detecting and counting stained cells to assess meat quality, achieving an average precision score of 81.2% while reducing manual cell counting workload to less than 0.5%.

Some research efforts combine freshness classification with microbial activity estimation. Papadopoulou et al. (2013) employed a portable E-nose to assess beef fillet samples, with an SVM model achieving 90.4% classification accuracy and a correlation coefficient of 0.86 for predicting microbial counts. Kodogiannis et al. (2014) integrated FTIR spectroscopy with neuro-fuzzy modeling to classify beef samples while simultaneously estimating microbial counts, with their adaptive fuzzy logic system achieving 95.9% classification accuracy and a regression RMSE of 0.37. Gu et al. (2017) focused on lipid oxidation in Chinese style sausages by classifying 3 quality levels and performing regression analyses for acid value using an E-nose, with both DNN and SVM models achieving 100% classification accuracy and the SVM model attaining an R2 of 0.98 for regression analysis. Kaswati et al. (2020) used Vis-NIR imaging to classify chicken samples as fresh or spoiled while predicting pH levels, with a RF model achieving 85.5% classification accuracy and partial least-squares regression model yielding R2 values between 0.80 and 0.84 for pH prediction.

A substantial group of investigations has focused on estimating total volatile basic nitrogen (TVB-N), which is closely linked to microbial spoilage. Huang et al. (2014) employed NIR spectroscopy, computer vision, and E-nose techniques to predict TVB-N content in pork samples, with a DNN model combining all 3 data sources achieving an RMSE of 2.73 and R2 of 0.95. Huang et al. (2015) proposed a NIR spectroscopy system with a BP-AdaBoost algorithm, achieving an RMSE of 6.94 and correlation coefficient of 0.83. Khulal et al. (2016) quantified TVB-N content in chicken using HSI, with a DNN model incorporating the ant colony optimization method achieving an RMSE of 6.38 and correlation coefficient of 0.75. Dai et al. (2016) predicted TVB-N content in prawns during cold storage using HSI data, with an LS-SVM model achieving an RMSE of 0.72 and correlation coefficient of 0.95.

Beyond TVB-N analysis, some approaches target additional spoilage indicators. Liang et al. (2019) employed terahertz spectroscopy to predict the K value in pork samples, with a BP-AdaBoost model achieving an RMSE of 9.89 and R2 of 0.84. For total viable count (TVC) prediction, Fengou et al. (2020) combined FTIR spectroscopy and MSI data to evaluate TVC in meat samples, with integrated MSI and FTIR data yielding an RMSE of 0.88 and correlation coefficient of 0.83. Dourou et al. (2021) employed FTIR spectroscopy on chicken liver samples under various temperature conditions, with an SVM model achieving an RMSE of 0.72 and R2 of 0.76 for noninoculated samples. Kolosov et al. (2023) integrated MSI data with a deep CNN model to measure TVC in pork samples, with ResNet-34 achieving MAE values of 0.05 for aerobically stored samples and 0.06 for modified atmosphere packaged samples. Lakehal and Lakehal (2023) collected beef samples across 6 freezing periods to predict storage time using Commission Internationale de l’Eclairage color space (or CIELAB) transformation, with a DNN model achieving an RMSE of 0.06 and R2 of 0.97.

Figure 6 provides an overview of meat freshness evaluation using various equipment. In summary, meat freshness evaluation relies on 2 main types of data: tabular data obtained from E-nose and spectroscopic sensors, and image data captured through RGB imaging and HSI. Tabular data are typically preprocessed with smoothing techniques (including standard normal variate [SNV], Savitzky-Golay smoothing [SG], and normalization) and dimension reduction using PCA. Image data, which capture the visual appearance of the meat, undergo further preprocessing through region-of-interest segmentation, augmentation (rotations and flips), and resizing. These preprocessed data enable AI models to perform dual analytical tasks: classification of meat status (fresh, semi-fresh, and spoiled) and regression analysis of microbial content (TVC and TVB-N levels). A minimal code sketch of the tabular pipeline is provided after Figure 6.

Figure 6.

Overview of diverse sensing approaches for meat freshness evaluation: data-specific preprocessing of electronic nose measurements, spectroscopic analysis, and red-green-blue imaging techniques for freshness status classification and microbial content prediction. PCA, principal component analysis; RGB, red-green-blue; RoI, regions of interest; SG, Savitzky-Golay smoothing; SNV, standard normal variate; TVB-N, total volatile basic nitrogen; TVC, total viable count.
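To make the tabular branch of this pipeline concrete, the following minimal Python sketch applies SNV, SG smoothing, and PCA to a synthetic spectra matrix; the array shape, filter window, and number of retained components are illustrative assumptions rather than settings reported in the cited studies.

```python
# Minimal sketch of the tabular preprocessing pipeline summarized above.
# The spectra matrix is synthetic; real inputs would come from E-nose or
# spectroscopic measurements of shape (n_samples, n_wavelengths).
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA

def snv(spectra: np.ndarray) -> np.ndarray:
    """Standard normal variate: center and scale each spectrum individually."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

rng = np.random.default_rng(0)
spectra = rng.random((200, 256))                    # placeholder spectra

smoothed = savgol_filter(snv(spectra), window_length=11, polyorder=2, axis=1)
features = PCA(n_components=10).fit_transform(smoothed)  # dimension reduction
print(features.shape)   # (200, 10), ready for a classifier or regressor
```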

Meat authentication

Meat authentication analysis typically involves predicting the extent of adulteration across different meat types or identifying specific muscle cuts. For meat type classification, the objective is to differentiate among pork, lamb, and beef based on their composition or to determine the level of admixture. For meat cut classification, the goal is to assign samples to specific cuts according to their distinctive visual or spectral features. Table 6 provides a summary of studies that apply AI to meat evaluation, with a specific focus on meat authentication.

Early research in this field utilized electronic sensing approaches. Tian et al. (2013) employed an E-nose to predict pork percentage in minced lamb and pork mixtures, with a DNN model achieving an R2 of 0.97 and RMSE of 5.26, outperforming multiple linear regression. Güney and Atasoy (2015) identified fish species using an E-nose with metal oxide gas sensors, with a 2-level binary decision tree (DT) model achieving 96.1% classification accuracy, surpassing naive Bayes (84.7%) and KNN (80.0%).

Expanding beyond E-nose approaches, several studies have employed spectral techniques with remarkable success. Alfar et al. (2016) authenticated and classified fats derived from beef, chicken, and lard using micro-NIR spectrometry, with an SVM model achieving 98.3% 3-class classification accuracy. Acquarelli et al. (2017) discriminated chicken, pork, and turkey products using FTIR spectroscopy, with a CNN model achieving 100% accuracy. Al-Sarayreh et al. (2018) classified lamb, beef, pork, and fat using HSI data, with a 3D-CNN achieving 96.1% accuracy and operating 4.7 times faster than SVM.

Adulteration detection has become increasingly important in meat authentication. Zhao et al. (2019) employed Vis-NIR techniques to detect adulteration in spoiled beef, with an LS-SVM model achieving an RMSE of 5.67 and correlation coefficient of 0.97. Al-Sarayreh et al. (2020) investigated snapshot HSI coupled with deep learning to classify red meat, with a 3D-CNN model achieving classification accuracies of 98.6%, 96.9%, and 97.1% on line-scanning, NIR, and visible snapshot datasets, respectively. Ayaz et al. (2020) used HSI to classify minced beef, chicken, and lamb, with an SVM model achieving 88.8% accuracy. Zhang et al. (2022c) introduced a CNN to predict pork content in adulterated lamb at various levels, achieving classification accuracies of 100%, 100%, and 99.9% for fresh, frozen-thawed, and mixed datasets, respectively.

Advanced spectroscopic techniques have further enhanced meat species identification capabilities. Robert et al. (2021) employed Raman spectroscopy to distinguish among beef, venison, and lamb, with an SVM model achieving 93.0% accuracy with a radial basis function kernel. Sun et al. (2022) integrated laser-induced breakdown spectroscopy and Raman spectroscopy to classify meat species (pork, beef, and lamb), with a DNN model achieving F1-scores of 99.8% for beef, 99.4% for lamb, and 99.1% for pork.

Research focusing on meat cut classification has demonstrated the potential for precise muscle-specific identification. Sanz et al. (2016) classified 4 types of lamb muscles using HSI data, with a least-squares mean classifier achieving 96.6% accuracy. Li et al. (2021a) developed a classification model for 3 beef cuts (sirloin, shank, and flank) by integrating Vis-NIR imaging with AI, with an LDA model reaching 90.9% accuracy. Recent developments in computer vision have opened new avenues for meat authentication. Prakash et al. (2021) employed an ensemble AI approach that merged CNN, multinomial logistic regression, and DT to categorize 5 muscle types of meat using RGB images, with the ensemble model achieving 95.0% accuracy using grayscale images and 99.1% with original colored images. Huang et al. (2022) investigated the classification of 4 pork primal cuts (ham, loin, belly, and neck) using mobile phone images, with a ResNet-50 model enhanced with a convolutional block attention module achieving 94.4% classification accuracy.

Figure 7 synthesizes spectroscopic meat-authentication studies that use tabular signals from E-nose and other sensors, along with HSI and RGB imagery. Tabular spectra are enhanced by smoothing (SNV, SG, normalization) and feature-selection methods such as PCA and improved whale optimization, while HSI/RGB images undergo superpixel- and SLIC-based segmentation to capture spatial context. These pipelines enable AI models to distinguish meat species (beef, pork, lamb) and cuts (neck, loin, ham) with high accuracy, underscoring the strength of AI-enabled spectroscopy for comprehensive meat authentication. A brief sketch of SLIC-based segmentation is provided after Figure 7.

Figure 7.

Overview of spectroscopic analysis for meat authentication: data-specific preprocessing of spectral measurements and hyperspectral imaging data for meat type and cut identification, followed by artificial-intelligence-based classification. IWO, improved whale optimization; PCA, principal component analysis; SG, Savitzky-Golay smoothing; SLIC, simple linear iterative clustering; SNV, standard normal variate.
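As a rough illustration of the image branch, the sketch below applies SLIC superpixel segmentation to an RGB image and derives simple region-level color features; the image path, segment count, and compactness are hypothetical choices rather than parameters from the studies above.

```python
# Hedged sketch of SLIC superpixel segmentation for RGB meat images.
import numpy as np
from skimage import io, segmentation

image = io.imread("meat_sample.jpg")      # hypothetical RGB meat image
segments = segmentation.slic(image, n_segments=150, compactness=10, start_label=1)

# Average pixel values within each superpixel to obtain region-level color features
region_means = np.array([image[segments == s].mean(axis=0)
                         for s in np.unique(segments)])
print(region_means.shape)                 # (n_superpixels, 3)
```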

Market analysis and consumer preferences

Market analysis for the meat industry often centers on forecasting prices and understanding consumer preferences, thereby facilitating data-driven decision-making and delivering economic benefits. Price forecasting studies generally aim to predict the trends of meat prices based on daily, monthly, or annual averages, while preference prediction research focuses on forecasting consumer preferences based on individual carcass characteristics. Table 7 provides a summary of studies that apply AI-based methods to market analysis.

Table 7.

Summary of literature on market analysis, categorized by purpose (price prediction, preference prediction), focusing on collected data and prediction methods.

| Category | Study | Animal | Equipment | Type | Features | Target | NR | Preprocessing | Model | Performance |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Price | Chen et al. (2022a) | Pork | Online | Tabular | - | Reg: monthly | 192 | - | Bi-RNN-LSTM | RMSE: 0.69; MAPE: 3.36 |
| Price | Chuluunsaikhan et al. (2020) | Pork | Online | Tabular | News | Reg: daily | 2,466 | Topic modeling, TF-IDF | LSTM | RMSE: 1,155; MAPE: 5.1 |
| Price | Chen et al. (2021) | Pork | Online | Tabular | News | Reg: daily | 1,031 | BERT | Multi-BERT-LSTM | RMSE: 0.96; MAPE: 0.17 |
| Price | Ma et al. (2019) | Pork | Online | Tabular | 22 factors | Reg: yearly | 16 | - | DBN | RMSE: 1.20; MAPE: 7.13 |
| Price | Yang et al. (2021) | Pork | Online | Tabular | 12 factors | Reg: monthly | 65 | - | ARIMA-LSTM | RMSE: 2.03 |
| Price | Suaza-Medina et al. (2023) | Pork | Online | Tabular | 8 factors | Reg: weekly | 322 | - | ARIMA-RNN | R2: 0.98; R2: 0.97 |
| Price | Rahmani et al. (2024) | Beef | Online | Tabular | 4 factors | Reg: monthly | 225 | Normalize | AdaBoost | RMSE: 0.16; MAPE: 38.29 |
| Preference | Ko et al. (2023) | Pork | Ultrasound (AutoFomIII) | Tabular | 40 features | Reg: flavor and appearance preference scores | 6,917 | Feature selection, label encoding, normalize | Stacking ensemble | Flavor RMSE: 0.35; MAPE: 11.17; appearance RMSE: 0.14; MAPE: 5.02 |
| Preference | Jeong et al. (2024b) | Pork | Ultrasound (AutoFomIII) | Tabular | 17 features | Reg: flavor preference score | 2,321 | Feature selection, label encoding, normalize | Stacking ensemble | RMSE: 0.16; MAPE: 5.34 |
  • Acc, accuracy; ARIMA-LSTM, autoregressive integrated moving average long short-term memory; ARIMA-RNN, autoregressive integrated moving average recurrent neural network; BERT, bidirectional encoder representations from transformer; Bi-RNN-LSTM, bi-recurrent neural network long short-term memory; Cla, classification; Corr, Pearson correlation coefficient; DBN, deep belief network; LSTM, long short-term memory; MAPE, mean absolute percentage error; Multi-BERT-LSTM, multi-bidirectional encoder representations from transformer long short-term memory; NR, number of recordings; Reg, regression; RMSE, root mean squared error; RNN, recurrent neural network; TF-IDF, term frequency-inverse document frequency.

Price prediction

Chen et al. (2022a) forecasted monthly pork prices using a bi-recurrent neural network (RNN) LSTM model on historical price records, achieving an RMSE of 0.69 and MAPE of 3.36, outperforming RNN and standard LSTM models. Recognizing the importance of external factors, some researchers have incorporated additional variables to enhance predictive accuracy. Chuluunsaikhan et al. (2020) integrated pig-related news with historical price data in an LSTM model, reducing the RMSE from 1,928 to 1,155 and the MAPE from 8.4 to 5.1 compared to autoregressive integrated moving average with exogenous variables (ARIMAX). Chen et al. (2021) embedded online news articles using a bidirectional encoder representations from transformer (BERT) model within a multi-BERT-LSTM framework, achieving an RMSE of 0.96 and MAPE of 0.17, significantly outperforming standard LSTM.

Building on this multifactor approach, Ma et al. (2019) integrated external features, including soybean meal prices and macroeconomic indicators, into AI algorithms, with a deep belief network model achieving an RMSE of 1.20 and MAPE of 7.13, outperforming autoregressive integrated moving average (ARIMA). Yang et al. (2021) compared various models using price data and macroeconomic indicators, with a hybrid ARIMA-LSTM model achieving the best performance (RMSE of 2.038). Suaza-Medina et al. (2023) focused on the Lleida market by incorporating regional market data, with ARIMA achieving an R2 of 0.98, outperforming RNN and LSTM. Rahmani et al. (2024) expanded the dataset to include multiple economic indicators, with RF and AdaBoost each achieving an RMSE of 0.16, lower than that of ARIMA.
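For orientation, the sketch below shows a bare-bones univariate LSTM forecaster of the kind these hybrid models build on, implemented in PyTorch on a synthetic price series; the window size, network width, and training schedule are illustrative assumptions, not the cited authors' configurations.

```python
# Minimal univariate LSTM price forecaster trained on a synthetic series.
import torch
import torch.nn as nn

class PriceLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # predict the next price point

series = torch.cumsum(torch.randn(500), dim=0)   # synthetic random-walk prices
window = 30
X = torch.stack([series[i:i + window]
                 for i in range(len(series) - window)]).unsqueeze(-1)
y = series[window:].unsqueeze(-1)

model, loss_fn = PriceLSTM(), nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):                         # a few epochs for illustration only
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
```

A hybrid ARIMA-LSTM would typically fit the ARIMA component first and train such a network on its residuals.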

Consumer preferences

Ko et al. (2023) proposed a deep learning-based framework to predict flavor and appearance preferences in pork. The study collected 1,767 flavor preference data points and 5,150 appearance preference data points, as well as pork characteristics measured using ultrasound equipment (i.e., AutoFomIII) and additional sex and breed information. The collected data were preprocessed by applying label encoding to the sex and breed features and normalizing each feature before being used in a stacking ensemble of linear models, RF, and a DNN. The final system achieved a MAPE of 11.17 for flavor and 5.02 for appearance on a scale of 1 to 5. Jeong et al. (2024b) further expanded this concept by integrating an AutoFomIII device to collect pork characteristics and conducting a taste preference survey, yielding 2,321 preference scores. Their deep learning-based stacking model reached a MAPE of 5.34, outperforming baseline DNN (7.53) and extreme GB (7.93) models. Additionally, SHapley Additive exPlanations values revealed that consumer age, gender, and pork fat properties significantly influence taste preferences.

Overall, these studies underscore the value of combining historical data, external market indicators, consumer demographics, and advanced AI techniques to improve meat price forecasting and optimize product offerings according to consumer preferences. Figure 8 presents an overview of the methodology for predicting consumer preferences using ultrasound measurement equipment. These studies utilized the AutoFomIII device to collect comprehensive carcass measurements. The resulting tabular data underwent several preprocessing steps, including feature selection to identify relevant measurements, label encoding for categorical variables (such as sex and breed), and normalization to scale the features. The processed data were then analyzed using an ensemble approach that stacked multiple models and combined them through a meta-regressor, which successfully predicted consumer preference scores for both flavor and appearance attributes, demonstrating the effectiveness of integrated measurement and AI approaches in understanding consumer preferences for meat products.
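A minimal scikit-learn sketch of this stacking workflow is shown below; the feature layout (40 numeric carcass measurements followed by sex and breed codes) and the base learners are illustrative assumptions, not the authors' exact implementation.

```python
# Hedged sketch of a stacking ensemble for preference-score regression.
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OrdinalEncoder, StandardScaler

# Hypothetical layout: columns 0-39 numeric features, columns 40-41 sex and breed
preprocess = ColumnTransformer([
    ("normalize", StandardScaler(), list(range(40))),
    ("label_encode", OrdinalEncoder(), [40, 41]),
])

stack = StackingRegressor(
    estimators=[("linear", Ridge()),
                ("forest", RandomForestRegressor(n_estimators=200)),
                ("dnn", MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500))],
    final_estimator=Ridge(),               # meta-regressor over base predictions
)
model = make_pipeline(preprocess, stack)
# model.fit(X_train, y_train); model.predict(X_test) yields preference scores
```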

Figure 8.

Overview of preference prediction using artificial intelligence technologies in market analysis, illustrating the workflow from data collection through model implementation.

Challenges and Future Directions

This section analyzes existing research on integrating AI technologies into meat processing, focusing on the challenges and future directions from the perspectives of data collection and methodological approaches. Figure 9 presents 8 key challenges along with their corresponding future directions across the domains of data acquisition, methodology, and application. Detailed discussions are provided in the subsequent subsections.

Figure 9.

Overview of current challenges and future directions in meat processing research using artificial intelligence methods. AI, artificial intelligence; GAN, generative adversarial network.

Data acquisition

A first challenge in data acquisition lies in ensuring sufficient diversity of the available data, which is often constrained by narrow experimental or environmental conditions. In many studies, data are acquired under narrowly defined conditions, and the proposed predictive models are evaluated on held-out data that nonetheless conform to those same conditions. As a result, these models may not generalize effectively to data collected under different conditions, and reported results may overstate the models' true generalization performance. To overcome this, recent studies have demonstrated that verifying model performance with openly available datasets is critical for enhancing reliability and generalizability. For instance, Xu et al. (2023) employed an open X-ray dataset to validate their proposed model's accuracy across multiple datasets. In addition, Elangovan et al. (2024) utilized open meat freshness datasets from Kaggle to evaluate accuracy under various data conditions.

To ensure that data sharing efforts yield maximum scientific value, the research community increasingly advocates for managing datasets in accordance with the FAIR (findable, accessible, interoperable, and reusable) principles (Wilkinson et al., 2016). A critical component of achieving FAIR data compliance, particularly for ensuring interoperability, is the establishment of comprehensive data standards. This standardization principle extends beyond the definition of anatomical features, such as standardized keypoints for carcass joints (Manko et al., 2022), analogous to facial landmarks in computer vision (Kazemi and Sullivan, 2014). Rather, it encompasses the broader framework of data types, acquisition protocols, and metadata schemas that define the purpose and contextual information of collected datasets. The establishment of such comprehensive standards is essential for creating datasets that achieve both technical compatibility and semantic interoperability. Ultimately, this ecosystem of open and standardized data serves as the crucial prerequisite for establishing fair benchmarks to objectively evaluate and advance different AI methodologies.

A second significant challenge in data acquisition involves the limited quantity and representativeness of available data, as some studies rely on fewer than 100 samples to train deep learning models. In particular, studies that employ spectroscopy equipment, such as NIR and Vis-NIR (Ayaz et al., 2020; Zhang et al., 2022c), typically collect fewer data than those that use RGB cameras (Sun et al., 2018; Prakash et al., 2021). In general, increasing the dataset size allows models to capture a broader range of patterns and improves deep learning performance. To address this issue, 2 principal approaches have been proposed. First, transfer learning leverages models previously trained on large-scale datasets in other domains, thereby reducing the need for extensive, domain-specific data. For example, Huang et al. (2022) applied transfer learning for meat cut classification, which not only improved model accuracy but also reduced training time. By utilizing preexisting feature representations, transfer learning proves particularly valuable when acquiring large amounts of new data is impractical. Second, generative adversarial networks can be applied to generate synthetic data that resemble real-world observations. By integrating artificially generated data into training sets, researchers can increase both the quantity and diversity of samples, thereby enabling models to learn more robust representations in data-scarce environments. For instance, Cui et al. (2024) employed a generative inference adversarial autoencoder model to generate synthetic data for training, effectively enhancing their dataset and model performance.
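As a brief illustration of the transfer-learning strategy (not the cited authors' exact setup), the following PyTorch sketch reuses an ImageNet-pretrained ResNet-50 and retrains only a new classification head for a hypothetical 4-class cut-recognition task.

```python
# Hedged transfer-learning sketch: freeze a pretrained backbone, retrain the head.
import torch.nn as nn
from torchvision import models

backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
for param in backbone.parameters():        # freeze the pretrained feature extractor
    param.requires_grad = False
backbone.fc = nn.Linear(backbone.fc.in_features, 4)  # new 4-class head (assumed task)
# Train as usual; only backbone.fc receives gradient updates.
```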

A third critical challenge in data acquisition arises from the cost, in both time and labor, associated with labeling. For instance, labeling individual data points and manually drawing bounding boxes in object detection tasks demands substantial human resources. Although most current research projects involve datasets ranging from a few hundred to a few thousand samples, labeling costs can escalate exponentially when the scale of data reaches tens or even hundreds of millions. One promising approach to mitigate this issue is the use of active learning (Monarch, 2021; Wu et al., 2022b), which focuses labeling efforts on samples where the model's predictions exhibit high uncertainty, thereby making more efficient use of limited labeling resources. This strategy has gained considerable attention as a means to minimize the decline in model performance while reducing labeling burdens in various applications that require large-scale data, and its practical implementations are steadily expanding.
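The idea can be sketched in a few lines: train on a small labeled seed set, score the unlabeled pool by predictive uncertainty, and route only the most uncertain samples to annotators. The dataset and query batch size below are synthetic assumptions.

```python
# Hedged sketch of uncertainty-based active learning (entropy sampling).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_labeled, y_labeled = rng.random((50, 8)), rng.integers(0, 2, 50)  # seed set
X_pool = rng.random((5000, 8))             # large unlabeled pool

model = RandomForestClassifier().fit(X_labeled, y_labeled)
proba = model.predict_proba(X_pool)
entropy = -(proba * np.log(proba + 1e-12)).sum(axis=1)  # predictive uncertainty
query_idx = np.argsort(entropy)[-20:]      # 20 most uncertain samples to label next
```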

Methodology

In terms of methodology, the first challenge is ensuring predictive reliability, given the black-box nature of AI models. Although state-of-the-art deep learning techniques have achieved notable success in meat processing tasks such as defect detection and quality assessment, it remains difficult to interpret the predictive outcomes generated by these complex systems. This lack of transparency can compromise stakeholder confidence in critical domains such as food safety and quality control, particularly in contexts where regulatory compliance and liability considerations are paramount. Explainable AI offers a potential remedy by illuminating the model's decision-making process and identifying the specific features or data regions that contribute most significantly to its predictions. For instance, Jeong et al. (2024b) demonstrated that their prediction model was most strongly influenced by consumer age, gender, and pork fat properties when forecasting taste preferences. Such transparency not only strengthens stakeholder confidence in the model's predictions but also highlights practical avenues for improving its performance.
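For tree-based models, such attributions can be computed with the shap library, as in the hedged sketch below; the model and data are synthetic stand-ins for a fitted preference model and its feature matrix.

```python
# Hedged sketch of post hoc explanation with SHAP for a tree ensemble.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.random((200, 10))                  # hypothetical preference features
y = 2 * X[:, 0] - X[:, 3] + rng.normal(0, 0.1, 200)
model = RandomForestRegressor().fit(X, y)

explainer = shap.TreeExplainer(model)      # efficient SHAP values for trees
shap_values = explainer.shap_values(X)     # per-sample, per-feature contributions
shap.summary_plot(shap_values, X)          # global ranking of feature influence
```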

Beyond interpretability concerns, establishing robust validation methodologies constitutes another fundamental challenge, as the determination of appropriate reference standards and ground truth values varies substantially across research investigations. For instance, in IMF prediction, researchers may choose to predict continuous measured values or to classify samples into predefined ranges, with each approach requiring a different validation framework; this variability makes it challenging to determine the most appropriate methodological approach.

A second methodological issue arises from the reliance on a single data type. Many studies utilize only one form of data acquired from devices such as NIR spectrometers or RGB cameras. However, in the work of Huang et al. (2014), data collected from an E-nose, an RGB camera, and NIR equipment were combined to yield more precise predictions of microbial activity in meat. Multimodal approaches that involve early, middle, or late fusion of different data types can significantly enhance predictive accuracy (Pawłowski et al., 2023). By analyzing multiple data modalities simultaneously, a broader range of features is captured, leading to improved reliability. Consequently, integrating multimodal data not only boosts prediction performance but also enhances the robustness and generalizability of meat quality evaluation models.
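A minimal sketch of early (feature-level) fusion under synthetic data is given below; the feature dimensions and choice of regressor are illustrative assumptions.

```python
# Hedged sketch of early fusion: concatenate per-modality features, fit one model.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
nir_features = rng.random((300, 50))       # e.g., PCA scores from NIR spectra
image_features = rng.random((300, 64))     # e.g., CNN embeddings of RGB images
y = rng.random(300)                        # e.g., TVB-N content

X_fused = np.concatenate([nir_features, image_features], axis=1)  # early fusion
model = GradientBoostingRegressor().fit(X_fused, y)
# Late fusion would instead average the predictions of per-modality models.
```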

A third methodological challenge arises from data drift, a phenomenon in which the statistical properties of input data shift over time (Agrahari and Singh, 2022). For instance, images initially captured under bright lighting conditions may subsequently be taken in darker settings, and environmental factors such as inflation rates or pandemics can cause prices to rise or fall beyond typical baselines, leading to significant drifts in economic data. To address this issue, incremental learning (Polikar et al., 2001) and continual learning (Wang et al., 2024) techniques enable models to adapt to evolving data distributions by updating their internal parameters or acquiring new tasks while preserving previously learned knowledge. Consequently, by proactively managing data drift in IoT environments established within livestock facilities, stakeholders can safeguard model reliability, sustain long-term performance, and ensure that AI-driven insights remain both accurate and actionable.
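The incremental-learning idea can be illustrated with scikit-learn's partial_fit interface, which updates model parameters batch by batch instead of retraining from scratch; the drifting stream below is simulated.

```python
# Hedged sketch of incremental learning under (simulated) data drift.
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(0)
true_w = np.array([1.0, -1.0, 0.5, 0.0, 2.0])
model = SGDRegressor()
for t in range(100):                       # stream of arriving batches
    X = rng.random((32, 5))
    y = X @ true_w + 0.01 * t              # slow drift in the input-output relation
    model.partial_fit(X, y)                # update weights without full retraining
```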

Application

In the application domain, the first challenge is ensuring that AI models achieve high predictive accuracy in modern slaughterhouses and factory environments while keeping implementation costs manageable. As a result, lightweight deep learning strategies (Wang et al., 2024) and model compression techniques (Dantas et al., 2024), such as pruning, quantization, and knowledge distillation, have attracted considerable attention. These approaches reduce computational overhead without substantially compromising accuracy, thereby minimizing operational costs and inference times and facilitating the transition from research prototypes to large-scale industrial deployments. Ultimately, these methods enable rapid, accurate, and resource-efficient decision-making in real-world meat processing workflows.
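The sketch below illustrates two of these techniques on a toy PyTorch network: magnitude pruning of a layer followed by post-training dynamic quantization. The layer sizes and pruning ratio are illustrative; production deployments would tune both and validate accuracy afterward.

```python
# Hedged sketch of model compression: pruning plus dynamic quantization.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 4))

# Pruning: zero out the 50% smallest-magnitude weights of the first layer
prune.l1_unstructured(model[0], name="weight", amount=0.5)
prune.remove(model[0], "weight")           # make the pruning permanent

# Dynamic quantization: store Linear weights as int8 for faster CPU inference
quantized = torch.ao.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
```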

However, beyond computational efficiency, successful industrial implementation faces additional practical challenges that are often underestimated in research settings. Equipment integration complexities arise from the need to seamlessly incorporate AI systems with existing processing infrastructure, while real-time processing requirements demand consistent performance under varying operational conditions. These multifaceted implementation barriers highlight the necessity for comprehensive deployment strategies that address not only algorithmic performance but also the broader operational requirements of modern meat processing facilities; only when these practical barriers are adequately addressed can lightweight and compressed models deliver their full value in production.

In the application domain, a second challenge arises from the fact that most AI-driven research in meat processing has predominantly focused on quality analysis. However, significant opportunities remain to extend AI applications across the entire meat processing cycle. For instance, although studies applying AI to cultured meat are still relatively few, recent work on optimizing culture media conditions (Cosenza et al., 2021; Nikkhah et al., 2023) and on automated calculation of the fusion index during muscle formation (Weisrock et al., 2024; Jeong et al., 2025) illustrate the potential for deeper exploration in this area.

In addition, further research into robotic automation is essential for improving efficiency, particularly in overcoming the challenge posed by the anatomical diversity of animal carcasses. Building on initial work in detecting anatomical joint points designed for automated cutting (Manko et al., 2022), future investigations should focus on developing systems for real-time identification and guidance of optimal cutting directions. Furthermore, a significant opportunity lies in leveraging generative AI and large language models such as ChatGPT, which could be developed into advanced systems for a wide range of applications. These applications can be broadly categorized into 2 main domains: operational support and knowledge transfer. In the operational domain, they can serve as advanced tools for real-time process control, dynamic scheduling, and interactive decision support. In the knowledge transfer domain, they show promise as interactive training programs for novice operators and as specialized chatbots providing expert guidance in livestock management (Dwivedi et al., 2023). Future investigations into these emerging applications hold promise for significantly enhancing automation capabilities, optimizing resource utilization, and elevating food safety standards throughout the entire meat supply chain.

Conclusion

This comprehensive review has examined the integration of AI technologies across the meat processing industry, encompassing meat production, meat quality, and market analysis. Our analysis reveals significant progress in applying AI methods to enhance productivity, ensure product quality and safety, and optimize management throughout the meat supply chain. These advancements can be systematically categorized into 3 primary domains.

In the domain of meat production, AI technologies have demonstrated considerable success in predicting carcass composition and estimating specific cut characteristics. The implementation of computer vision systems and deep learning algorithms has enabled more accurate and efficient assessment of key quality parameters such as IMF content, color, water-holding capacity, and tenderness. In meat quality evaluation, AI-based methods have shown promising results in areas including freshness detection, defect identification, and storage condition monitoring. The integration of multiple sensor technologies with advanced AI algorithms has improved the accuracy and reliability of quality assessments by enabling noninvasive evaluation techniques. In market analysis, AI applications have effectively enhanced price forecasting capabilities and consumer preference prediction. These advancements support data-driven decision-making processes and contribute to greater economic efficiency within the industry.

While these achievements across all 3 domains demonstrate the transformative potential of AI in meat processing, several critical challenges remain to be addressed, including data acquisition limitations, methodological constraints, and implementation barriers within industrial settings. Future research endeavors should prioritize the development of more robust and comprehensive datasets, enhancement of model interpretability and explainability, and expansion of AI applications to emerging sectors such as cultured meat production and fully automated handling systems. As AI technologies continue to advance rapidly, their strategic integration into meat processing operations presents unprecedented opportunities for enhancing industry capabilities while addressing contemporary challenges in food production efficiency, waste reduction, and safety assurance. The continued development and systematic refinement of these technologies will be instrumental in establishing more efficient, sustainable, and reliable meat processing systems that meet evolving industry demands.

To fully realize these opportunities, future research may need to further explore the underlying sensing technologies, including detailed examinations of their detection principles, sensor configurations, and operational characteristics. Such comprehensive analyses will provide the foundation necessary for informed technology selection and optimization in commercial meat processing applications. Furthermore, our future research will expand into emerging applications, including automated animal inspection and monitoring, animal welfare monitoring, and disease surveillance, recognizing the growing importance of these areas in modern livestock management and processing facilities.

Conflict of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

This work was supported by Korea Institute of Planning and Evaluation for Technology in Food, Agriculture and Forestry through Agriculture and Food Convergence Technologies Program for Research Manpower development funded by Ministry of Agriculture, Food and Rural Affairs (grant number: RS-2024-00398561). In part, this research was supported by Brain Pool program funded by the Ministry of Science and Information and Communication Technology through the National Research Foundation of Korea (grant number: 2022H1D3A2A01096260).

Author Contribution

Kyungchang Jeong: writing—original draft; Gyuchan Jo: writing—original draft; Jong Ho Lee: validation; Yuan H. Brad Kim: conceptualization; Jungseok Choi: investigation; Hongseok Oh: methodology; Ji-Hoon Jeong: methodology; and Euijong Lee: writing—review and editing.

Literature Cited

Abie, S. M., Ø. G. Martinsen, B. Egelandsdal, J. Hou, F. Bjerke, A. Mason, and D. Münch. 2021. Feasibility of using electrical impedance spectroscopy for assessing biological cell damage during freezing and thawing. Sensors. 21:4129. doi: https://doi.org/10.3390/s21124129.

Acquarelli, J., T. van Laarhoven, J. Gerretzen, T. N. Tran, L. M. C. Buydens, and E. Marchiori. 2017. Convolutional neural networks for vibrational spectroscopic data analysis. Anal. Chim. Acta. 954:22–31. doi: https://doi.org/10.1016/j.aca.2016.12.010.

Agrahari, S., and A. K. Singh. 2022. Concept drift detection in data stream mining: a literature review. Journal of King Saud University-Computer Information Sciences. 34:9523–9540. doi: https://doi.org/10.1016/j.jksuci.2021.11.006.

Ahn, J.-H., W.-J. Jang, W.-H. Lee, and J.-D. Kim. 2020. Detection of needles in meat using x-ray images and convolution neural networks. Journal of Sensor Science and Technology. 29:427–432. doi: https://doi.org/10.46670/jsst.2020.29.6.427.

Al-Sarayreh, M., M. M. Reis, W. Q. Yan, and R. Klette. 2018. Deep spectral-spatial features of snapshot hyperspectral images for red-meat classification. 2018 International Conference on Image and Vision Computing New Zealand, Auckland, New Zealand 19–21 November. p. 1–6. doi: https://doi.org/10.1109/ivcnz.2018.8634783.

Al-Sarayreh, M., M. M. Reis, W. Q. Yan, and R. Klette. 2020. Potential of deep learning and snapshot hyperspectral imaging for classification of species in meat. Food Control. 117:107332. doi: https://doi.org/10.1016/j.foodcont.2020.107332.

Alaiz-Rodríguez, R., and A. C. Parnell. 2020. A machine learning approach for lamb meat quality assessment using FTIR spectra. IEEE Access. 8:52385–52394. doi: https://doi.org/10.1109/access.2020.2974623.

Alfar, I. J., A. Khorshidtalab, R. Akmeliawati, S. Ahmad, and I. Jaswir. 2016. Towards authentication of beef, chicken and lard using micro near-infrared spectrometer based on support vector machine classification. ARPN J. Eng. Appl. Sci. 11:4130–4136.

Alshejari, A., and V. S. Kodogiannis. 2017. An intelligent decision support system for the detection of meat spoilage using multispectral images. Neural Comput. Appl. 28:3903–3920. doi: https://doi.org/10.1007/s00521-016-2296-6.

Alvarez-García, W. Y., L. Mendoza, Y. Muñoz-Vílchez, D. C. Nuñez-Melgar, and C. Quilcate. 2024. Implementing artificial intelligence to measure meat quality parameters in local market traceability processes. Int. J. Food Sci. Tech. 59:8058–8068. doi: https://doi.org/10.1111/ijfs.17546/v1/review1.

Amani, M. A., and S. A. Sarkodie. 2022. Mitigating spread of contamination in meat supply chain management using deep learning. Sci. Rep. 12:5037. doi: https://doi.org/10.1038/s41598-022-08993-5.

Arsalane, A., N. El Barbri, A. Tabyaoui, A. Klilou, K. Rhofir, and A. Halimi. 2018. An embedded system based on DSP platform and PCA-SVM algorithms for rapid beef meat freshness prediction and identification. Comput. Electron. Agr. 152:385–392. doi: https://doi.org/10.1016/j.compag.2018.07.031.

Arsalane, A., A. Klilou, and N. El Barbri. 2024. Performance evaluation of machine learning algorithms for meat freshness assessment. International Journal of Electrical and Computer Engineering. 14:5858–5865. doi: https://doi.org/10.11591/ijece.v14i5.pp5858-5865.

Ayaz, H., M. Ahmad, A. Sohaib, M. N. Yasir, M. A. Zaidan, M. Ali, M. H. Khan, and Z. Saleem. 2020. Myoglobin-based classification of minced meat using hyperspectral imaging. Applied Sciences. 10:6862. doi: https://doi.org/10.3390/app10196862.

Baltic, M. Z., and M. Boskovic. 2015. When man met meat: meat in human nutrition from ancient times till today. Proc. Food Sci. 5:6–9. doi: https://doi.org/10.1016/j.profoo.2015.09.002.

Barbon, S., A. P. A. da Costa Barbon, R. G. Mantovani, and D. F. Barbin. 2018. Machine learning applied to near-infrared spectra for chicken meat classification. J. Spectrosc. 2018:8949741. doi: https://doi.org/10.1155/2018/8949741.

Biglia, A., P. Barge, C. Tortia, L. Comba, D. R. Aimonino, and P. Gay. 2022. Artificial intelligence to boost traceability systems for fraud prevention in the meat industry. J. Agr. Eng. 53. doi: https://doi.org/10.4081/jae.2022.1328.

Borisov, V., T. Leemann, K. Seßler, J. Haug, M. Pawelczyk, and G. Kasneci. 2022. Deep neural networks and tabular data: a survey. IEEE T. Neur. Net. Lear. 35: 7499–7519. doi: https://doi.org/10.1109/tnnls.2022.3229161.

Bow, S. T. 2002. Pattern recognition and image preprocessing. CRC Press, Boca Raton, FL. doi: https://doi.org/10.1201/9780203903896.

Chen, T., Z. Chen, and Z. Zhou. 2021. Computational research and implementation of prediction of pork price based on deep learning. 2020 2nd International Conference on Computer, Communications and Mechatronics Engineering, Xiamen, China. p. 012032. doi: https://doi.org/10.1088/1742-6596/1815/1/012032.

Chen, J., L. Lin, and X. Li. 2022a. Pork price prediction using Bi-RNN-LSTM artificial neural network. 2022 5th International Conference on Artificial Intelligence and Big Data, IEEE, Chengdu, China. p. 168–172. doi: https://doi.org/10.1109/icaibd55127.2022.9820121.

Chen, D., P. Wu, K. Wang, S. Wang, X. Ji, Q. Shen, Y. Yu, X. Qiu, X. Xu, Y. Liu, and G. Tang. 2022b. Combining computer vision score and conventional meat quality traits to estimate the intramuscular fat content using machine learning in pigs. Meat Sci. 185:108727. doi: https://doi.org/10.1016/j.meatsci.2021.108727.

Cheng, J., J. Sun, K. Yao, and C. Dai. 2023a. Generalized and hetero two-dimensional correlation analysis of hyperspectral imaging combined with three-dimensional convolutional neural network for evaluating lipid oxidation in pork. Food Control. 153: 109940. doi: https://doi.org/10.1016/j.foodcont.2023.109940.

Cheng, J., J. Sun, K. Yao, M. Xu, and C. Dai. 2023b. Multi-task convolutional neural network for simultaneous monitoring of lipid and protein oxidative damage in frozen-thawed pork using hyperspectral imaging. Meat Sci. 201:109196. doi: https://doi.org/10.1016/j.meatsci.2023.109196.

Chuluunsaikhan, T., G.-A. Ryu, K.-H. Yoo, H. Rah, and A. Nasridinov. 2020. Incorporating deep learning and news topic modeling for forecasting pork prices: the case of South Korea. Agriculture. 10:513. doi: https://doi.org/10.3390/agriculture10110513.

Cosenza, Z., D. E. Block, and K. Baar. 2021. Optimization of muscle cell culture media using nonlinear design of experiments. Biotechnol. J. 16:2100228. doi: https://doi.org/10.1002/biot.202100228.

Cui, J., Y. Lv, S. Liu, S. Pan, K. Li, S. Gao, R. Luo, H. Wu, Z. Zhang, and S. Wang. 2024. Synergizing meat science and AI: enhancing long-chain saturated fatty acids prediction. Comput. Electron. Agr. 221:108931. doi: https://doi.org/10.1016/j.compag.2024.108931.

Cui, J., Y. Wang, Q. Wang, L. Yang, Y. Zhang, E. Karrar, H. Zhang, Q. Jin, G. Wu, and X. Wang. 2022. Prediction of flavor of Maillard reaction product of beef tallow residue based on artificial neural network. Food Chem.: X. 15:100447. doi: https://doi.org/10.1016/j.fochx.2022.100447.

Dai, Q., J.-H. Cheng, D.-W. Sun, Z. Zhu, and H. Pu. 2016. Prediction of total volatile basic nitrogen contents using wavelet features from visible/near-infrared hyperspectral images of prawn (Metapenaeus ensis). Food Chem. 197:257–265. doi: https://doi.org/10.1016/j.foodchem.2015.10.073.

Daniel, H., G. V. González, M. V. García, A. J. L. Rivero, and J. F. De Paz. 2020. Non-invasive automatic beef carcass classification based on sensor network and image analysis. Future Generation Computer Systems. 113:318–328. doi: https://doi.org/10.1016/j.future.2020.06.055.

Dantas, P. V., W. Sabino da Silva, L. C. Cordeiro, and C. B. Carvalho. 2024. A comprehensive review of model compression techniques in machine learning. Appl. Intell. 54:11804–11844. doi: https://doi.org/10.1007/s10489-024-05747-w.

de Melo, M. J., D. N. Gonçalves, M. d. N. B. Gomes, G. Faria, J. de Andrade Silva, A. P. M. Ramos, L. P. Osco, M. T. G. Furuya, J. M. Junior, and W. N. Gonçalves. 2022. Automatic segmentation of cattle rib-eye area in ultrasound images using the UNet++ deep neural network. Comput. Electron. Agr. 195:106818. doi: https://doi.org/10.1016/j.compag.2022.106818.

Dixit, Y., M. Al-Sarayreh, C. R. Craigie, and M. M. Reis. 2021. A global calibration model for prediction of intramuscular fat and pH in red meat using hyperspectral imaging. Meat Sci. 181:108405. doi: https://doi.org/10.1016/j.meatsci.2020.108405.

Dourou, D., A. Grounta, A. A. Argyri, G. Froutis, P. Tsakanikas, G.-J. E. Nychas, A. I. Doulgeraki, N. G. Chorianopoulos, and C. C. Tassou. 2021. Rapid microbial quality assessment of chicken liver inoculated or not with Salmonella using FTIR spectroscopy and machine learning. Front. Microbiol. 11:623788. doi: https://doi.org/10.3389/fmicb.2020.623788.

Dwivedi, Y. K., N. Kshetri, L. Hughes, E. L. Slade, A. Jeyaraj, A. K. Kar, A. M. Baabdullah, A. Koohang, V. Raghavan, M. Ahuja, H. Albanna, M. A. Albashrawi, A. S. Al-Busaidi, J. Balakrishnan, Y. Barlette, S. Basu, I. Bose, L. Brooks, D. Buhalis, L. Carter, S. Chowdhury, T. Crick, S. W. Cunningham, R. M. Davison, R. Dé, D. Dennehy, Y. Duan, R. Dubey, R. Dwivedi, J. S. Edwards, C. Flavián, R. Gauld, V. Grover, M.-C. Hu, M. Janssen, P. Jones, I. Junglas, S. Khorana, S. Kraus, K. R. Larsen, P. Latreille, S. Laumer, F. T. Malik, A. Mardani, M. Mariani, S. Mithas, E. Mogaji, J. H. Nord, S. O’Connor, F. Okumus, M. Pagani, N. Pandey, S. Papagiannidis, I. O. Pappas, N. Pathak, J. Pries-Heje, R. Raman, N. P. Rana, S.-V. Rehm, S. Ribeiro-Navarrete, A. Richter, F. Rowe, S. Sarker, B. C. Stahl, M. J. Tiwari, W. van der Aalst, V. Venkatesh, G. Viglia, M. Wade, P. Walton, J. Wirtz, and R. Wright. 2023. Opinion paper: “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. Int. J. Inform. Manage. 71:102642. doi: https://doi.org/10.1016/j.ijinfomgt.2023.102642.

El Karam, S. A., M. Ferrand, T. Astruc, and A. Germond. 2023. Evaluation and prediction of salt effects on pig muscle by deep UV and machine learning. Meat Sci. 199:109136. doi: https://doi.org/10.1016/j.meatsci.2023.109136.

Elangovan, P., V. Dhurairajan, M. K. Nath, P. Yogarajah, and J. Condell. 2024. A novel approach for meat quality assessment using an ensemble of compact convolutional neural networks. Applied Sciences. 14:5979. doi: https://doi.org/10.3390/app14145979.

Eli-Chukwu, N. C. 2019. Applications of artificial intelligence in agriculture: a review. Engineering, Technology & Applied Science Research. 9:4377–4383. doi: https://doi.org/10.48084/etasr.2756.

Fengou, L.-C., I. Mporas, E. Spyrelli, A. Lianou, and G.-J. Nychas. 2020. Estimation of the microbiological quality of meat using rapid and non-invasive spectroscopic sensors. IEEE Access. 8:106614–106628. doi: https://doi.org/10.1109/access.2020.3000690.

García, S., S. Ramírez-Gallego, J. Luengo, J. M. Benítez, and F. Herrera. 2016. Big data preprocessing: methods and prospects. Big Data Analytics. 1:1–22. doi: https://doi.org/10.1186/s41044-016-0014-0.

García-Infante, M., P. Castro-Valdecantos, M. Delgado-Pertíñez, A. Teixeira, J. L. Guzmán, and A. Horcada. 2024. Effectiveness of machine learning algorithms as a tool to meat traceability system. A case study to classify Spanish Mediterranean lamb carcasses. Food Control. 164:110604. doi: https://doi.org/10.1016/j.foodcont.2024.110604.

Geronimo, B. C., S. M. Mastelini, R. H. Carvalho, S. B. Júnior, D. F. Barbin, M. Shimokomaki, and E. I. Ida. 2019. Computer vision system and near-infrared spectroscopy for identification and classification of chicken with wooden breast, and physicochemical and technological characterization. Infrared Phys. Techn. 96:303–310. doi: https://doi.org/10.1016/j.infrared.2018.11.036.

Ghahramani, Z. 2003. Unsupervised learning. In: Advanced lectures on machine learning. Springer, Berlin, Heidelberg. p. 72–112. doi: https://doi.org/10.1007/978-3-540-28650-9_5.

Gonçalves, D. N., V. A. de Moraes Weber, J. G. B. Pistori, R. da Costa Gomes, A. V. de Araujo, M. F. Pereira, W. N. Gonçalves, and H. Pistori. 2021. Carcass image segmentation using CNN-based methods. Information Processing in Agriculture. 8:560–572. doi: https://doi.org/10.1016/j.inpa.2020.11.004.

Górska-Horczyczak, E., M. Horczyczak, D. Guzek, I. Wojtasik-Kalinowska, and A. Wierzbicka. 2017. Chromatographic fingerprints supported by artificial neural network for differentiation of fresh and frozen pork. Food Control. 73:237–244. doi: https://doi.org/10.1016/j.foodcont.2016.08.010.

Gu, X., Y. Sun, K. Tu, and L. Pan. 2017. Evaluation of lipid oxidation of Chinese-style sausage during processing and storage based on electronic nose. Meat Sci. 133:1–9. doi: https://doi.org/10.1016/j.meatsci.2017.05.017.

Güney, S., and A. Atasoy. 2015. Study of fish species discrimination via electronic nose. Comput. Electron. Agr. 119:83–91. doi: https://doi.org/10.1016/j.compag.2015.10.005.

Guo, L., T. Wang, Z. Wu, J. Wang, M. Wang, Z. Cui, S. Ji, J. Cai, C. Xu, and X. Chen. 2020. Portable food-freshness prediction platform based on colorimetric barcode combinatorics and deep convolutional neural networks. Adv. Mater. 32:2004805. doi: https://doi.org/10.1002/adma.202004805.

Haddi, Z., N. El Barbri, K. Tahri, M. Bougrini, N. El Bari, E. Llobet, and B. Bouchikhi. 2015. Instrumental assessment of red meat origins and their storage time using electronic sensing systems. Anal. Methods. 7:5193–5203. doi: https://doi.org/10.1039/c5ay00572h.

Hasan, N. U., N. Ejaz, W. Ejaz, and H. S. Kim. 2012. Meat and fish freshness inspection system based on odor sensing. Sensors. 12:15542–15557. doi: https://doi.org/10.3390/s121115542.

Hewawasam, T., J. Gunasekara, N. Jayarathna, Y. Indrawansha, S. Rathnayake, and P. Panduwawala. 2023. Integrated AI-based system for comprehensive poultry management. 2023 5th International Conference on Advancements in Computing, IEEE, 7–8 December, Colombo, Sri Lanka. p. 382–387. doi: https://doi.org/10.1109/icac60630.2023.10417233.

Huang, Q., Q. Chen, H. Li, G. Huang, Q. Ouyang, and J. Zhao. 2015. Non-destructively sensing pork’s freshness indicator using near infrared multispectral imaging technique. J. Food Eng. 154:69–75. doi: https://doi.org/10.1016/j.jfoodeng.2015.01.006.

Huang, F., Y. Li, J. Wu, J. Dong, and Y. Wang. 2016a. Identification of repeatedly frozen meat based on near-infrared spectroscopy combined with self-organizing competitive neural networks. Int. J. Food Prop. 19:1007–1015. doi: https://doi.org/10.1080/10942912.2014.968789.

Huang, X., H. Xu, L. Wu, H. Dai, L. Yao, and F. Han. 2016b. A data fusion detection method for fish freshness based on computer vision and near-infrared spectroscopy. Anal Methods. 8:2929–2935. doi: https://doi.org/10.1039/c5ay03005f.

Huang, H., W. Zhan, Z. Du, S. Hong, T. Dong, J. She, and C. Min. 2022. Pork primal cuts recognition method via computer vision. Meat Sci. 192:108898. doi: https://doi.org/10.1016/j.meatsci.2022.108898.

Huang, Y., X. Zhang, S. Liu, R. Wang, J. Guo, Y. Chen, and X. Ma. 2023. Wireless food-freshness monitoring and storage-time prediction based on ammonia-sensitive MOF@SnS2 PN heterostructure and machine learning. Chem. Eng. J. 458:141364. doi: https://doi.org/10.1016/j.cej.2023.141364.

Huang, L., J. Zhao, Q. Chen, and Y. Zhang. 2014. Nondestructive measurement of total volatile basic nitrogen (TVB-N) in pork meat by integrating near infrared spectroscopy, computer vision and electronic nose techniques. Food Chem. 145:228–236. doi: https://doi.org/10.1016/j.foodchem.2013.06.073.

Jeong, K., D.-R. Kim, J.-H. Ryu, H.-W. Kim, J. Cho, E. Lee, and J.-H. Jeong. 2024a. A monitoring system for cattle behavior detection using YOLO-v8 in IoT environments. 2024 IEEE International Conference on Consumer Electronics, 6–8 January, Las Vegas, NV. p. 1–4. doi: https://doi.org/10.1109/icce59016.2024.10444145.

Jeong, K., E. Ko, H. Oh, G. Cho, H. Seo, J. Choi, J.-H. Jeong, and E. Lee. 2024b. Method to predict and explain taste preference using pork characteristics and consumer information. 2024 IEEE International Conference on Consumer Electronics, 6–8 January, Las Vegas, NV. p. 1–6. doi: https://doi.org/10.1109/icce59016.2024.10444266.

Jeong, K., H. Oh, Y. Lee, H. Seo, G. Jo, J. Jeong, G. Park, J. Choi, Y.-D. Seo, J.-H. Jeong, and E. Lee. 2024c. IoT and AI systems for enhancing bee colony strength in precision beekeeping: a survey and future research directions. IEEE Internet Things. 12:362–389. doi: https://doi.org/10.1109/jiot.2024.3461775.

Jeong, K., S. Park, G. Jo, H. Seo, N. Choi, S. Jang, G. Park, Y.-D. Seo, Y. H. B. Kim, J.-H. Jeong, S.-H. Hyun, J. Choi, and E. Lee. 2025. SEPO-FI: deep-learning based software to calculate fusion index of muscle cells. Comput. Biol. Med. 186:109706. doi: https://doi.org/10.1016/j.compbiomed.2025.109706.

Jordan, M. I., and T. M. Mitchell. 2015. Machine learning: trends, perspectives, and prospects. Science. 349:255–260. doi: https://doi.org/10.1126/science.aaa8415.

Kaelbling, L. P., M. L. Littman, and A. W. Moore. 1996. Reinforcement learning: a survey. J. Artif. Intell. Res. 4:237–285. doi: https://doi.org/10.1613/jair.301.

Karamizadeh, S., S. M. Abdullah, A. A. Manaf, M. Zamani, and A. Hooman. 2013. An overview of principal component analysis. Journal of Signal and Information Processing. 4:173–175. doi: https://doi.org/10.4236/jsip.2013.43B031.

Kaswati, E. L. N., A. H. Saputro, and C. Imawan. 2020. Examination system of chicken meat quality based on hyperspectral imaging. J. Phys. Conf. Ser. 1528:012045. doi: https://doi.org/10.1088/1742-6596/1528/1/012045.

Kazemi, V., and J. Sullivan. 2014. One millisecond face alignment with an ensemble of regression trees. 2014 IEEE Conference on Computer Vision and Pattern Recognition, 23–28 June, Columbus, OH. p. 1867–1874. doi: https://doi.org/10.1109/CVPR.2014.241.

Khulal, U., J. Zhao, W. Hu, and Q. Chen. 2016. Nondestructive quantifying total volatile basic nitrogen (TVB-N) content in chicken using hyperspectral imaging (HSI) technique combined with different data dimension reduction algorithms. Food Chem. 197:1191–1199. doi: https://doi.org/10.1016/j.foodchem.2015.11.084.

Kim, J., Y.-K. Kwon, H.-W. Kim, K.-H. Seol, and B.-K. Cho. 2023a. Robot technology for pork and beef meat slaughtering process: a review. Animals. 13:651. doi: https://doi.org/10.3390/ani13040651.

Kim, D.-E., N.-D. Mai, and W.-Y. Chung. 2023b. AIoT-based meat quality monitoring using camera and gas sensor with wireless charging. IEEE Sens. J. 24:7317–7324. doi: https://doi.org/10.1109/jsen.2023.3328915.

Kim, D.-E., Y. A. Nando, and W.-Y. Chung. 2023c. Battery-free pork freshness estimation based on colorimetric sensors and machine learning. Applied Sciences. 13:4896. doi: https://doi.org/10.3390/app13084896.

Ko, E., K. Jeong, H. Oh, Y. Park, J. Choi, and E. Lee. 2023. A deep learning-based framework for predicting pork preference. Current Research in Food Science. 6:100495. doi: https://doi.org/10.1016/j.crfs.2023.100495.

Kodogiannis, V. S., T. Pachidis, and E. Kontogianni. 2014. An intelligent based decision support system for the detection of meat spoilage. Eng. Appl. Artif. Intel. 34:23–36. doi: https://doi.org/10.1016/j.engappai.2014.05.001.

Kolosov, D., L.-C. Fengou, J. M. Carstensen, N. Schultz, G.-J. Nychas, and I. Mporas. 2023. Microbiological quality estimation of meat using deep CNNs on embedded hardware systems. Sensors. 23:4233. doi: https://doi.org/10.3390/s23094233.

Kotsiantis, S. B., I. D. Zaharakis, and P. E. Pintelas. 2007. Machine learning: a review of classification and combining techniques. Artif. Intell. Rev. 26:159–190. doi: https://doi.org/10.1007/s10462-007-9052-3.

Kucha, C. T., L. Liu, M. Ngadi, and C. Gariépy. 2022. Prediction and visualization of fat content in polythene-packed meat using near-infrared hyperspectral imaging and chemometrics. J. Food Compo. Anal. 111:104633. doi: https://doi.org/10.1016/j.jfca.2022.104633.

Kvam, J., and J. Kongsro. 2017. In vivo prediction of intramuscular fat using ultrasound and deep learning. Comput. Electron. Agr. 142:521–523. doi: https://doi.org/10.1016/j.compag.2017.11.020.

Lakehal, S., and B. Lakehal. 2023. Storage time prediction of frozen meat using artificial neural network modeling with color values. Rev. Cient.-Fac Cien. V. 33. doi: https://doi.org/10.52973/rcfcv-e33268.

Lathuilière, S., P. Mesejo, X. Alameda-Pineda, and R. Horaud. 2019. A comprehensive analysis of deep regression. IEEE T. Pattern Anal. 42:2065–2081. doi: https://doi.org/10.1109/TPAMI.2019.2910523.

Lee, H.-J., J.-H. Baek, Y.-K. Kim, J. H. Lee, M. Lee, W. Park, S. H. Lee, and Y. J. Koh. 2022a. BTENet: back-fat thickness estimation network for automated grading of the Korean commercial pig. Electronics. 11:1296. doi: https://doi.org/10.3390/electronics11091296.

Lee, H.-J., Y. J. Koh, Y.-K. Kim, S. H. Lee, J. H. Lee, and D. W. Seo. 2022b. MSENet: marbling score estimation network for automated assessment of Korean beef. Meat Sci. 188:108784. doi: https://doi.org/10.1016/j.meatsci.2022.108784.

Lee, E., Y.-D. Seo, and Y.-G. Kim. 2022c. Self-adaptive framework with master–slave architecture for internet of things. IEEE Internet Things. 9:16472–16493. doi: https://doi.org/10.1109/jiot.2022.3150598.

Lee, E., Y.-D. Seo, S.-R. Oh, and Y.-G. Kim. 2021. A survey on standards for interoperability and security in the internet of things. IEEE Commun. Surv. Tut. 23:1020–1047. doi: https://doi.org/10.1109/comst.2021.3067354.

Li, A., C. Li, M. Gao, S. Yang, R. Liu, W. Chen, and K. Xu. 2021a. Beef cut classification using multispectral imaging and machine learning method. Frontiers in Nutrition. 8:755007. doi: https://doi.org/10.3389/fnut.2021.755007.

Li, Z., F. Liu, W. Yang, S. Peng, and J. Zhou. 2021b. A survey of convolutional neural networks: analysis, applications, and prospects. IEEE T. Neur. Net. Lear. 33:6999–7019. doi: https://doi.org/10.1109/tnnls.2021.3084827.

Liang, Q., Z. Maocheng, Z. Jie, and T. Yuweiyi. 2019. Preliminary investigation of Terahertz spectroscopy to predict pork freshness non-destructively. Food Sci. Tech-Brazil. 39:563–570. doi: https://doi.org/10.1590/fst.25718.

Liu, J.-H., X. Sun, J. M. Young, L. A. Bachmeier, and D. J. Newman. 2018. Predicting pork loin intramuscular fat using computer vision system. Meat Sci. 143:18–23. doi: https://doi.org/10.1016/j.meatsci.2018.03.020.

Lyu, Y., F. Wu, Q. Wang, G. Liu, Y. Zhang, H. Jiang, and M. Zhou. 2025. A review of robotic and automated systems in meat processing. Frontiers in Robotics and AI. 12:1578318. doi: https://doi.org/10.3389/frobt.2025.1578318.

Ma, Z., Z. Chen, T. Chen, and M. Du. 2019. Application of machine learning methods in pork price forecast. Proceedings of the 2019 11th International Conference on Machine Learning and Computing, 22–24 February, Zhuhai, China. p. 133–136. doi: https://doi.org/10.1145/3318299.3318364.

Manko, M., O. Smolkin, I. de Medeiros Esper, A. Popov, and A. Mason. 2022. Estimation of the pig's limb orientation and gripping points based on the pose estimation deep neural networks. 2022 IEEE 10th Jubilee International Conference on Computational Cybernetics and Cyber-Medical Systems, 6–9 July, Reykjavík, Iceland. p. 000245–000250. doi: https://doi.org/10.1109/iccc202255925.2022.9922893.

Masferrer, G., R. Carreras, M. Font-i Furnols, M. Gispert, M. Serra, and P. Marti-Puig. 2019. Automatic ham classification method based on support vector machine model increases accuracy and benefits compared to manual classification. Meat Sci. 155:1–7. doi: https://doi.org/10.1016/j.meatsci.2019.04.018.

Masferrer, G., R. Carreras, M. Font-i Furnols, M. Gispert, P. Marti-Puig, and M. Serra. 2018. On-line Ham Grading using pattern recognition models based on available data in commercial pig slaughterhouses. Meat Sci. 143:39–45. doi: https://doi.org/10.1016/j.meatsci.2018.04.011.

Matthews, D., T. Pabiou, R. D. Evans, C. Beder, and A. Daly. 2022. Predicting carcass cut yields in cattle from digital images using artificial intelligence. Meat Sci. 184:108671. doi: https://doi.org/10.1016/j.meatsci.2021.108671.

Mitchell, T. M. 1997. Machine learning. Vol 1. McGraw-Hill, New York, NY.

Monarch, R. M. 2021. Human-in-the-loop machine learning: active learning and annotation for human-centered AI. Manning, Shelter Island, NY.

Naidu, G., T. Zuva, and E. M. Sibanda. 2023. A review of evaluation metrics in machine learning algorithms. In: R. Silhavy, P. Silhavy, editors, Artificial intelligence application in networks and systems: proceedings of 12th computer science on-line conference. Vol 3. Springer, Cham, Switzerland. p. 15–25. doi: https://doi.org/10.1007/978-3-031-35314-7_2.

Ndob, A. M., and A. Lebert. 2018. Prediction of pH and aw of pork meat by a thermodynamic model: new developments. Meat Sci. 138:59–67. doi: https://doi.org/10.1016/j.meatsci.2017.11.017.

Nikkhah, A., A. Rohani, M. Zarei, A. Kulkarni, F. A. Batarseh, N. T. Blackstone, and R. Ovissipour. 2023. Toward sustainable culture media: using artificial intelligence to optimize reduced-serum formulations for cultivated meat. Sci. Total Environ. 894: 164988. doi: https://doi.org/10.1016/j.scitotenv.2023.164988.

Ouyang, L., J. Wu, X. Jiang, D. Almeida, C. L. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray, J. Schulman, J. Hilton, F. Kelton, L. Miller, M. Simens, A. Askell, P. Welinder, P. Christiano, J. Leike, and R. Lowe. 2022. Training language models to follow instructions with human feedback. In: Advances in neural information processing systems, Curran & Associates, Inc., Red Hook, NY. p. 27730–27744.

Papadopoulou, O. S., E. Z. Panagou, F. R. Mohareb, and G.-J. E. Nychas. 2013. Sensory and microbiological quality assessment of beef fillets using a portable electronic nose in tandem with support vector machine analysis. Food Res. Int. 50:241–249. doi: https://doi.org/10.1016/j.foodres.2012.10.020.

Park, S., S.-J. Hong, S. Kim, J. Ryu, S. Roh, and G. Kim. 2023. Classification of fresh and frozen-thawed beef using a hyperspectral imaging sensor and machine learning. Agriculture. 13:918. doi: https://doi.org/10.3390/agriculture13040918.

Pawłowski, M., A. Wróblewska, and S. Sysko-Romańczuk. 2023. Effective techniques for multimodal data fusion: a comparative analysis. Sensors. 23:2381. doi: https://doi.org/10.3390/s23052381.

Penning, B. W., W. M. Snelling, and M. J. Woodward-Greene. 2020. Machine learning in the assessment of meat quality. IT Prof. 22:39–41. doi: https://doi.org/10.1109/mitp.2020.2986123.

Polikar, R., L. Upda, S. S. Upda, and V. Honavar. 2001. Learn++: an incremental learning algorithm for supervised neural networks. IEEE T. Syst. Man. Cy. C. 31:497–508. doi: https://doi.org/10.1109/5326.983933.

Prakash, S., D. P. Berry, M. Roantree, O. Onibonoje, L. Gualano, M. Scriney, and A. McCarren. 2021. Using artificial intelligence to automate meat cut identification from the semimembranosus muscle on beef boning lines. J. Anim. Sci. 99:skab319. doi: https://doi.org/10.1093/jas/skab319.

Rahmani, E., M. Khatami, and E. Stephens. 2024. Using probabilistic machine learning methods to improve beef cattle price modeling and promote beef production efficiency and sustainability in Canada. Sustainability. 16:1789. doi: https://doi.org/10.3390/su16051789.

Ren, Y., L. Zhang, and P. N. Suganthan. 2016. Ensemble classification and regression-recent developments, applications and future directions. IEEE Comput. Intell. M. 11:41–53. doi: https://doi.org/10.1109/mci.2015.2471235.

Robert, C., S. J. Fraser-Miller, W. T. Jessep, W. E. Bain, T. M. Hicks, J. F. Ward, C. R. Craigie, M. Loeffen, and K. C. Gordon. 2021. Rapid discrimination of intact beef, venison and lamb meat using Raman spectroscopy. Food Chem. 343:128441. doi: https://doi.org/10.1016/j.foodchem.2020.128441.

Ropodi, A. I., E. Z. Panagou, and G.-J. E. Nychas. 2017. Multispectral imaging (MSI): a promising method for the detection of minced beef adulteration with horsemeat. Food Control. 73:57–63. doi: https://doi.org/10.1016/j.foodcont.2016.05.048.

Russell, S., and P. Norvig. 1995. Artificial intelligence: a modern approach. Prentice-Hall, Englewood Cliffs, NJ.

Sanz, J. A., A. M. Fernandes, E. Barrenechea, S. Silva, V. Santos, N. Gonçalves, D. Paternain, A. Jurio, and P. Melo-Pinto. 2016. Lamb muscle discrimination using hyperspectral imaging: comparison of various machine learning algorithms. J. Food Eng. 174:92–100. doi: https://doi.org/10.1016/j.jfoodeng.2015.11.024.

Shahinfar, S., K. Kelman, and L. Kahn. 2019. Prediction of sheep carcass traits from early-life records using machine learning. Comput. Electron. Agr. 156:159–177. doi: https://doi.org/10.1016/j.compag.2018.11.021.

Shi, Y., X. Wang, M. S. Borhan, J. Young, D. Newman, E. Berg, and X. Sun. 2021. A review on meat quality evaluation methods based on non-destructive computer vision and artificial intelligence technologies. Food Sci. Anim. Resour. 41:563–588. doi: https://doi.org/10.5851/kosfa.2021.e25.

Shorten, C., and T. M. Khoshgoftaar. 2019. A survey on image data augmentation for deep learning. J. Big Data. 6:1–48. doi: https://doi.org/10.1186/s40537-019-0197-0.

Suaza-Medina, M. E., F. J. Zarazaga-Soria, J. Pinilla-Lopez, F. J. Lopez-Pellicer, and J. Lacasta. 2023. Effects of data time lag in a decision-making system using machine learning for pork price prediction. Neural Comput. Appl. 35:19221–19233. doi: https://doi.org/10.1007/s00521-023-08730-7.

Sun, H., C. Song, X. Lin, and X. Gao. 2022. Identification of meat species by combined laser-induced breakdown and Raman spectroscopies. Spectrochim. Acta B. 194:106456. doi: https://doi.org/10.1016/j.sab.2022.106456.

Sun, X., J. Young, J.-H. Liu, and D. Newman. 2018. Prediction of pork loin quality using online computer vision system and artificial intelligence model. Meat Sci. 140:72–77. doi: https://doi.org/10.1016/j.meatsci.2018.03.005.

Swanson, A., and A. Gowen. 2022. Detection of previously frozen poultry through plastic lidding film using portable visible spectral imaging (443–726 nm). Poultry Sci. 101:101578. doi: https://doi.org/10.1016/j.psj.2021.101578.

Tang, X., L. Rao, L. Xie, M. Yan, Z. Chen, S. Liu, L. Chen, S. Xiao, N. Ding, Z. Zhang, and L. Huang. 2023. Quantification and visualization of meat quality traits in pork using hyperspectral imaging. Meat Sci. 196:109052. doi: https://doi.org/10.1016/j.meatsci.2022.109052.

Tian, M., H. Guo, H. Chen, Q. Wang, C. Long, and Y. Ma. 2019. Automated pig counting using deep learning. Comput. Electron. Agr. 163:104840. doi: https://doi.org/10.1016/j.compag.2019.05.049.

Tian, X., J. Wang, and S. Cui. 2013. Analysis of pork adulteration in minced mutton using electronic nose of metal oxide sensors. J. Food Eng. 119:744–749. doi: https://doi.org/10.1016/j.jfoodeng.2013.07.004.

Vajdi, M., M. J. Varidi, M. Varidi, and M. Mohebbi. 2019. Using electronic nose to recognize fish spoilage with an optimum classifier. J. Food Meas. Charact. 13:1205–1217. doi: https://doi.org/10.1007/s11694-019-00036-4.

Wang, C.-H., K.-Y. Huang, Y. Yao, J.-C. Chen, H.-H. Shuai, and W.-H. Cheng. 2024. Lightweight deep learning: an overview. IEEE Consum. Electron. Mag. 13:51–64. doi: https://doi.org/10.1109/mce.2022.3181759.

Wang, H., X. D. Wang, D. Liu, Y. Wang, X. Li, and J. Duan. 2019. Evaluation of beef flavor attribute based on sensor array in tandem with support vector machines. J. Food Meas. Charact. 13:2663–2671. doi: https://doi.org/10.1007/s11694-019-00187-4.

Wang, L., X. Zhang, H. Su, and J. Zhu. 2024. A comprehensive survey of continual learning: theory, method and application. IEEE T. Pattern Anal. 46:5362–5383. doi: https://doi.org/10.1109/tpami.2024.3367329.

Wang, M., and X. Li. 2024. Application of artificial intelligence techniques in meat processing: a review. J. Food Process Eng. 47:e14590. doi: https://doi.org/10.1111/jfpe.14590.

Wang, Y., C. Wang, F. Dong, and S. Wang. 2021. Integrated spectral and textural features of hyperspectral imaging for prediction and visualization of stearic acid content in lamb meat. Anal. Methods. 13:4157–4168. doi: https://doi.org/10.1039/d1ay00757b.

Weisrock, A., R. Wüst, M. Olenic, P. Lecomte-Grosbras, and L. Thorrez. 2024. MyoFInDer: an AI-based tool for myotube fusion index determination. Tissue Eng. Pt. A. 30:19–20. doi: https://doi.org/10.1089/ten.tea.2024.0049.

Wilkinson, M. D., M. Dumontier, I. J. Aalbersberg, G. Appleton, M. Axton, A. Baak, N. Blomberg, J.-W. Boiten, L. B. da Silva Santos, P. E. Bourne, J. Bouwman, A. J. Brookes, T. Clark, M. Crosas, I. Dillo, O. Dumon, S. Edmunds, C. T. Evelo, R. Finkers, A. Gonzalez-Beltran, A. J. G. Gray, P. Groth, C. Goble, J. S. Grethe, J. Heringa, P. A. C. ’t Hoen, R. Hooft, T. Kuhn, R. Kok, J. Kok, S. J. Lusher, M. E. Martone, A. Mons, A. L. Packer, B. Persson, P. Rocca-Serra, M. Roos, R. van Schaik, S.-A. Sansone, E. Schultes, T. Sengstag, T. Slater, G. Strawn, M. A. Swertz, M. Thompson, J. van der Lei, E. van Mulligen, J. Velterop, A. Waagmeester, P. Wittenburg, K. Wolstencroft, J. Zhao, and B. Mons. 2016. The FAIR guiding principles for scientific data management and stewardship. Sci. Data. 3:160018. doi: https://doi.org/10.1038/sdata.2016.18.

Wold, J. P., E. Veiseth-Kent, V. Høst, and A. Løvland. 2017. Rapid on-line detection and grading of wooden breast myopathy in chicken fillets by near-infrared spectroscopy. PLoS One. 12:e0173384. doi: https://doi.org/10.1371/journal.pone.0173384.

Wu, X., X. Liang, Y. Wang, B. Wu, and J. Sun. 2022a. Non-destructive techniques for the analysis and evaluation of meat quality and safety: a review. Foods. 11:3713. doi: https://doi.org/10.3390/foods11223713.

Wu, X., L. Xiao, Y. Sun, J. Zhang, T. Ma, and L. He. 2022b. A survey of human-in-the-loop for machine learning. Future Gener. Comp. Sy. 135:364–381. doi: https://doi.org/10.1016/j.future.2022.05.014.

Xu, J.-L., and D.-W. Sun. 2017. Identification of freezer burn on frozen salmon surface using hyperspectral imaging and computer vision combined with machine learning algorithm. Int. J. Refrig. 74:151–164. doi: https://doi.org/10.1016/j.ijrefrig.2016.10.014.

Xu, T., W. Zhao, L. Cai, X. Shi, and X. Wang. 2023. Lightweight saliency detection method for real-time localization of livestock meat bones. Sci. Rep. 13:4510. doi: https://doi.org/10.1038/s41598-023-31551-6.

Yang, D., D. He, A. Lu, D. Ren, and J. Wang. 2017. Detection of the freshness state of cooked beef during storage using hyperspectral imaging. Appl. Spectrosc. 71:2286–2301. doi: https://doi.org/10.1177/0003702817718807.

Yang, F., S. Lin, and J. Zhang. 2021. Pork price forecast based on the comparison of KPCA-ARIMA-LSTM and DBN multi-model. 2021 2nd International Conference on Computer Science and Management Technology, 12–14 November, Shanghai, China. p. 124–130. doi: https://doi.org/10.1109/iccsmt54525.2021.00033.

Zelaya, C. V. G. 2019. Towards explaining the effects of data preprocessing on machine learning. 2019 IEEE 35th International Conference on Data Engineering, 8–11 April, Macao, China. p. 2086–2090. doi: https://doi.org/10.1109/ICDE.2019.00245.

Zhang, J., G. Liu, Y. Li, M. Guo, F. Pu, and H. Wang. 2022a. Rapid identification of lamb freshness grades using visible and near-infrared spectroscopy (Vis-NIR). J. Food Compos. Anal. 111:104590. doi: https://doi.org/10.1016/j.jfca.2022.104590.

Zhang, J., Y. Ma, G. Liu, N. Fan, Y. Li, and Y. Sun. 2022b. Rapid evaluation of texture parameters of Tan mutton using hyperspectral imaging with optimization algorithms. Food Control. 135:108815. doi: https://doi.org/10.1016/j.foodcont.2022.108815.

Zhang, S., Y. Chen, W. Liu, B. Liu, and X. Zhou. 2023. Marbling-net: a novel intelligent framework for pork marbling segmentation using images from smartphones. Sensors. 23:5135. doi: https://doi.org/10.3390/s23115135.

Zhang, Y., M. Zheng, R. Zhu, and R. Ma. 2022c. Adulteration discrimination and analysis of fresh and frozen-thawed minced adulterated mutton using hyperspectral images combined with recurrence plot and convolutional neural network. Meat Sci. 192:108900. doi: https://doi.org/10.1016/j.meatsci.2022.108900.

Zhao, H.-T., Y.-Z. Feng, W. Chen, and G.-F. Jia. 2019. Application of invasive weed optimization and least square support vector machine for prediction of beef adulteration with spoiled beef based on visible near-infrared (Vis-NIR) hyperspectral imaging. Meat Sci. 151:75–81. doi: https://doi.org/10.1016/j.meatsci.2019.01.010.

Zheng, H., N. Zhao, S. Xu, J. He, R. Ospina, Z. Qiu, and Y. Liu. 2024. Deep learning-based automated cell detection-facilitated meat quality evaluation. Foods. 13:2270. doi: https://doi.org/10.3390/foods13142270.