TY - JOUR
T1 - Security threats to agricultural artificial intelligence
T2 - Position and perspective
AU - Gao, Yansong
AU - Camtepe, Seyit A.
AU - Sultan, Nazatul Haque
AU - Bui, Hang Thanh
AU - Mahboubi, Arash
AU - Aboutorab, Hamed
AU - Bewong, Michael
AU - Islam, Rafiqul
AU - Islam, Md Zahidul
AU - Chauhan, Aufeef
AU - Gauravaram, Praveen
AU - Singh, Dineshkumar
PY - 2024/12
Y1 - 2024/12
AB - In light of their remarkable predictive capabilities, artificial intelligence (AI) models driven by deep learning (DL) have witnessed widespread adoption in the agriculture sector, contributing to diverse applications such as enhancing crop management and agricultural productivity. Despite their evident benefits, the integration of AI in agriculture brings forth security risks, a concern further exacerbated by the comparatively lower security awareness among agriculture stakeholders. This position paper endeavors to amplify the security consciousness among stakeholders (e.g., end-users such as farmers and governmental bodies) engaged in the implementation of AI systems within the agricultural sector. In our systematic categorization of security threats to AI systems, we delineate three primary avenues of attack: (1) Adversarial Example Attacks, (2) Poisoning Attacks, and (3) Backdoor Attacks. Adversarial example attacks manipulate inputs during the model’s inference phase to induce incorrect predictions. Poisoning attacks corrupt the training data, compromising the model’s availability by indiscriminately degrading its performance on legitimate inputs. Backdoor attacks, typically introduced during the training phase, undermine the model’s integrity, enabling attackers to trigger specific behaviors or outputs through predetermined hidden patterns. An example of compromising AI integrity for malicious purposes is DeepLocker, highlighted by IBM researchers. A detailed examination of attacks in each category is provided, emphasizing their tangible threats to real-world agricultural applications. To illustrate the practical implications, we conduct case studies on specific agricultural applications, focusing on precise irrigation schedules and plant disease detection, utilizing authentic agricultural datasets. Comprehensive countermeasures against each attack type are presented to assist agriculture stakeholders in actively safeguarding their AI applications. Additionally, we address challenges inherent in securing agriculture AI and offer our perspectives on mitigating security threats in this context. This work aims to equip agriculture stakeholders with the knowledge and tools necessary to fortify their AI systems against evolving security challenges. The artifacts of this work are released at https://github.com/garrisongys/Casestudy.
KW - Artificial intelligence
KW - Agriculture
KW - Adversarial example
KW - Poisoning availability
KW - Backdoor
UR - http://www.scopus.com/inward/record.url?scp=85207635043&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85207635043&partnerID=8YFLogxK
U2 - 10.1016/j.compag.2024.109557
DO - 10.1016/j.compag.2024.109557
M3 - Review article
SN - 0168-1699
VL - 227
SP - 1
EP - 19
JO - Computers and Electronics in Agriculture
JF - Computers and Electronics in Agriculture
M1 - 109557
ER -