[1] S. Dong, P. Wang, K. Abbas, A survey on deep learning and its applications, Comp. Sci. Rev. 40 (2021) 100379.
[2] A. Brutzkus, A. Globerson, Why do larger models generalize better? A theoretical perspective via the XOR problem, in: ICML, 2019, pp. 822–830.
[3] B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, A. Oliva, Learning deep features for scene recognition using places database, in: NIPS, 2014, pp. 487–495.
[4] J. Li, J. Yang, A. Hertzmann, J. Zhang, T. Xu, LayoutGAN: Synthesizing graphic layouts with vector-wireframe adversarial networks, IEEE Trans. PAMI 43 (7) (2021) 2388–2399.
[5] S. Zhao, Z. Liu, J. Lin, J.-Y. Zhu, S. Han, Differentiable augmentation for data-efficient GAN training, NIPS 33 (2020) 7559–7570.
[6] H.T. Shen, X. Zhu, Z. Zhang, S.-H. Wang, Y. Chen, X. Xu, J. Shao, Heterogeneous data fusion for predicting mild cognitive impairment conversion, Inf. Fusion 66 (2021) 54–63.
[7] X. Qiu, T. Sun, Y. Xu, Y. Shao, N. Dai, X. Huang, Pre-trained models for natural language processing: A survey, Sci. China Technol. Sci. (2020) 1–26.
[8] M. Zaib, Q.Z. Sheng, W. Emma Zhang, A short survey of pre-trained language models for conversational AI-A new age in NLP, in: ACSW, 2020, pp. 1–4.
[9] S. Bahrami, F. Dornaika, A. Bosaghzadeh, Joint auto-weighted graph fusion and scalable semi-supervised learning, Inf. Fusion 66 (2021) 213–228.
[10] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A.N. Gomez, L. Kaiser, I. Polosukhin, Attention is all you need, in: NIPS, 2017.
[11] J. Devlin, M.-W. Chang, K. Lee, K. Toutanova, BERT: Pre-training of deep bidirectional transformers for language understanding, in: NAACL, 2019, pp. 4171–4186.
[12] A. Radford, K. Narasimhan, T. Salimans, I. Sutskever, Improving language understanding by generative pre-training, 2018.
[13] M. Habermann, W. Xu, M. Zollhofer, G. Pons-Moll, C. Theobalt, Deepcap: Monocular human performance capture using weak supervision, in: CVPR, 2020, pp. 5052–5063.
[14] Y. Wang, W. Yang, F. Ma, J. Xu, B. Zhong, Q. Deng, J. Gao, Weak supervision for fake news detection via reinforcement learning, in: AAAI, Vol. 34, (01) 2020, pp. 516–523.
[15] S. Jia, S. Jiang, Z. Lin, N. Li, M. Xu, S. Yu, A survey: Deep learning for hyper-spectral image classification with few labeled samples, Neurocomputing 448 (2021) 179–204.
[16] M. Diligenti, S. Roychowdhury, M. Gori, Integrating prior knowledge into deep learning, in: ICMLA, 2017, pp. 920–923.
[17] S. Chen, Y. Leng, S. Labi, A deep learning algorithm for simulating autonomous driving considering prior knowledge and temporal information, Comput.-Aided Civ. Infrastruct. Eng. 35 (4) (2020) 305–321.
[18] Y. Lin, S.L. Pintea, J.C. van Gemert, Deep Hough-transform line priors, in: ECCV, 2020, pp. 323–340.
[19] G. Hartmann, Z. Shiller, A. Azaria, Deep reinforcement learning for time optimal velocity control using prior knowledge, in: ICTAI, 2019, pp. 186–193.
[20] X. Zhang, S. Wang, J. Liu, C. Tao, Towards improving diagnosis of skin diseases by combining deep neural network and human knowledge, BMC Med. Inform. Decis. Mak. 18 (2) (2018) 69–76.
[21] R. Zhang, F. Torabi, L. Guan, D.H. Ballard, P. Stone, Leveraging human guidance for deep reinforcement learning tasks, in: IJCAI, 2019.
[22] A. Holzinger, M. Plass, M. Kickmeier-Rust, K. Holzinger, G.C. Crişan, C.-M. Pintea, V. Palade, Interactive machine learning: experimental evidence for the human in the algorithmic loop, Appl. Intell. 49 (7) (2019) 2401–2414.
[23] Y.-t. Zhuang, F. Wu, C. Chen, Y.-h. Pan, Challenges and opportunities: from big data to knowledge in AI 2.0, Front. Inf. Technol. Electron. Eng. 18 (1) (2017) 3–14.
[24] V. Kumar, A. Smith-Renner, L. Findlater, K. Seppi, J. Boyd-Graber, Why didn’t you listen to me? Comparing user control of human-in-the-loop topic models, in: ACL, 2019.
[25] D. Xin, L. Ma, J. Liu, S. Macke, S. Song, A. Parameswaran, Accelerating human-in-the-loop machine learning: Challenges and opportunities, in: Proceedings of the Second Workshop on Data Management for End-To-End Machine Learning, 2018, pp. 1–4.
[26] S. Budd, E.C. Robinson, B. Kainz, A survey on active learning and human-in-the-loop deep learning for medical image analysis, Med. Image Anal. 71 (2021) 102062.
[27] W. Jung, F. Jazizadeh, Human-in-the-loop HVAC operations: A quantitative review on occupancy, comfort, and energy-efficiency dimensions, Appl. Energy 239 (2019) 1471–1508.
[28] S. Agnisarman, S. Lopes, K.C. Madathil, K. Piratla, A. Gramopadhye, A survey of automation-enabled human-in-the-loop systems for infrastructure visual inspection, Autom. Constr. 97 (2019) 52–76.
[29] L. Benedikt, C. Joshi, L. Nolan, R. Henstra-Hill, L. Shaw, S. Hook, Human-in-the-loop AI in government: A case study, in: IUI, 2020, pp. 488–497.
[30] C. Chai, G. Li, Human-in-the-loop techniques in machine learning, Data Eng. (2020) 37.
[31] B.M. Tehrani, J. Wang, C. Wang, Review of human-in-the-loop cyber-physical systems (HiLCPS): The current status from human perspective, Comput. Civ. Eng. 2019: Data, Sens. Anal. (2019) 470–478.
[32] Z.Y. Khan, Z. Niu, S. Sandiwarno, R. Prince, Deep learning techniques for rating prediction: a survey of the state-of-the-art, Artif. Intell. Rev. 54 (1) (2021) 95–135.
[33] F. Yu, A. Seff, Y. Zhang, S. Song, T. Funkhouser, J. Xiao, LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop, 2015, arXiv:1506.03365.
[34] O. Siméoni, M. Budnik, Y. Avrithis, G. Gravier, Rethinking deep active learning: Using unlabeled data at model training, in: ICPR, 2021, pp. 1220–1227.
[35] Y. Wang, L. Zhang, Y. Yao, Y. Fu, How to trust unlabeled data? Instance credibility inference for few-shot learning, IEEE Trans. PAMI (2021) 1.
[36] Y. Shi, A.K. Jain, Boosting unconstrained face recognition with auxiliary unlabeled data, in: CVPR, 2021, pp. 2795–2804.
[37] Z. Ren, R. Yeh, A. Schwing, Not all unlabeled data are equal: Learning to weight data in semi-supervised learning, NIPS 33 (2020).
[38] S. Niu, B. Li, X. Wang, H. Lin, Defect image sample generation with GAN for improving defect recognition, IEEE Trans. Autom. Sci. Eng. 17 (3) (2020) 1611–1622.
[39] S. Khan, M. Naseer, M. Hayat, S.W. Zamir, F.S. Khan, M. Shah, Transformers in vision: A survey, ACM Comput. Surv. (2021).
[40] T.D. Pham, Classification of COVID-19 chest X-rays with deep learning: new models or fine tuning? Health Inf. Sci. Syst. 9 (1) (2021) 1–11.
[41] S. Chen, Y. Hou, Y. Cui, W. Che, T. Liu, X. Yu, Recall and learn: Fine-tuning deep pretrained language models with less forgetting, in: EMNLP, 2020, pp. 7870–7881.
[42] G. Wang, W. Li, M.A. Zuluaga, R. Pratt, P.A. Patel, M. Aertsen, T. Doel, A.L. David, J. Deprest, S. Ourselin, et al., Interactive medical image segmentation using deep learning with image-specific fine tuning, IEEE Trans. Med. Imaging 37 (7) (2018) 1562–1573.
[43] L. He, J. Michael, M. Lewis, L. Zettlemoyer, Human-in-the-loop parsing, in: EMNLP, 2016, pp. 2337–2342.
[44] J.Z. Self, R.K. Vinayagam, J. Fry, C. North, Bridging the gap between user intention and model parameters for human-in-the-loop data analytics, in: Proceedings of the Workshop on Human-in-the-Loop Data Analytics, 2016, pp. 1–6.
[45] Y. Zhuang, G. Li, Z. Zhong, J. Feng, Hike: A hybrid human-machine method for entity alignment in large-scale knowledge bases, in: CIKM, 2017, pp. 1917–1926.
[46] G. Li, Human-in-the-loop data integration, Proc. VLDB Endow. 10 (12) (2017) 2006–2017.
[47] B. Kim, B. Pardo, A human-in-the-loop system for sound event detection and annotation, ACM Trans. Interact. Intell. Syst. (TiiS) 8 (2) (2018) 1–23.
[48] A. Doan, Human-in-the-loop data analysis: a personal perspective, in: Proceedings of the Workshop on Human-in-the-Loop Data Analytics, 2018, pp. 1–6.
[49] X.L. Dong, T. Rekatsinas, Data integration and machine learning: A natural synergy, in: COMAD, 2018, pp. 1645–1650.
[50] A.L. Gentile, D. Gruhl, P. Ristoski, S. Welch, Explore and exploit. Dictionary expansion with human-in-the-loop, in: European Semantic Web Conference, 2019, pp. 131–145.
[51] S. Zhang, L. He, E. Dragut, S. Vucetic, How to invest my time: Lessons from human-in-the-loop entity extraction, in: KDD, 2019, pp. 2305–2313.
[52] L. Berti-Equille, Reinforcement learning for data preparation with active reward learning, in: International Conference on Internet Science, 2019, pp. 121–13.
[53] S. Gurajada, L. Popa, K. Qian, P. Sen, Learning-based methods with human-in-the-loop for entity resolution, in: CIKM, 2019, pp. 2969–2970.
[54] Y. Lou, M. Uddin, N. Brown, M. Cafarella, Knowledge graph programming with a human-in-the-loop: Preliminary results, in: Proceedings of the Workshop on Human-in-the-Loop Data Analytics, 2019, pp. 1–7.
[55] Z. Liu, J. Wang, S. Gong, H. Lu, D. Tao, Deep reinforcement active learning for human-in-the-loop person re-identification, in: ICCV, 2019, pp. 6122–6131.
[56] E. Wallace, P. Rodriguez, S. Feng, I. Yamada, J. Boyd-Graber, Trick me if you can: Human-in-the-loop generation of adversarial examples for question answering, Trans. Assoc. Comput. Linguist. 7 (2019) 387–401.
[57] X. Fan, C. Li, X. Yuan, X. Dong, J. Liang, An interactive visual analytics approach for network anomaly detection through smart labeling, J. Vis. 22 (5) (2019) 955–971.
[58] E. Krokos, H.-C. Cheng, J. Chang, B. Nebesh, C.L. Paul, K. Whitley, A. Varshney, Enhancing deep learning with visual interactions, ACM Trans. Interact. Intell. Syst. (TiiS) 9 (1) (2019) 1–27.
[59] J.-C. Klie, R.E. de Castilho, I. Gurevych, From zero to hero: Human-in-the-loop entity linking in low resource domains, in: ACL, 2020, pp. 6982–6993.
[60] C. Butler, H. Oster, J. Togelius, Human-in-the-loop AI for analysis of free response facial expression label sets, in: IVA, 2020, pp. 1–8.
[61] P. Ristoski, A.L. Gentile, A. Alba, D. Gruhl, S. Welch, Large-scale relation extraction from web documents and knowledge graphs with human-in-the-loop, J. Web Semant. 60 (2020) 100546.
[62] K. Qian, P.C. Raman, Y. Li, L. Popa, Partner: Human-in-the-loop entity name understanding with deep learning, in: AAAI, Vol. 34, (09), 2020, pp. 13634–13635.
[63] T.-N. Le, A. Sugimoto, S. Ono, H. Kawasaki, Toward interactive self-annotation for video object bounding box: Recurrent self-learning and hierarchical annotation based framework, in: WACV, 2020, pp. 3231–3240.
[64] M. Bartolo, A. Roberts, J. Welbl, S. Riedel, P. Stenetorp, Beat the AI: Investigating adversarial human annotation for reading comprehension, Trans. Assoc. Comput. Linguist. 8 (2020) 662–678.
[65] K. Muthuraman, F. Reiss, H. Xu, B. Cutler, Z. Eichenberger, Data cleaning tools for token classification tasks, in: Proceedings of the Second Workshop on Data Science with Human in the Loop: Language Advances, 2021, pp. 59–61.
[66] Q. Meng, W. Wang, T. Zhou, J. Shen, Y. Jia, L. Van Gool, Towards a weakly supervised framework for 3D point cloud object detection and annotation, IEEE Trans. PAMI (2021) 1.
[67] L. Zhang, X. Wang, Q. Fan, Y. Ji, C. Liu, Generating manga from illustrations via mimicking manga creation workflow, in: CVPR, 2021, pp. 5642–5651.
[68] B. Adhikari, H. Huttunen, Iterative bounding box annotation for object detection, in: ICPR, 2021, pp. 4040–4046.
[69] J.L. Martinez-Rodriguez, A. Hogan, I. Lopez-Arevalo, Information extraction meets the semantic web: A survey, Semant. Web 11 (2) (2020) 255–335.
[70] H. Ye, W. Shao, H. Wang, J. Ma, L. Wang, Y. Zheng, X. Xue, Face recognition via active annotation and learning, in: ACM International Conference on Multimedia, 2016, pp. 1058–1062.
[71] Y. LeCun, Y. Bengio, G. Hinton, Deep learning, Nature 521 (7553) (2015) 436–444.
[72] T. Karmakharm, N. Aletras, K. Bontcheva, Journalist-in-the-loop: Continuous learning as a service for rumour analysis, in: EMNLP, 2019, pp. 115–120.
[73] Y. Song, J. Wang, T. Jiang, Z. Liu, Y. Rao, Targeted sentiment classification with attentional encoder network, in: ICANN, Springer, 2019, pp. 93–103.
[74] X. Bai, P. Liu, Y. Zhang, Investigating typed syntactic dependencies for targeted sentiment classification using graph attention neural network, IEEE/ACM Trans. Audio, Speech, Lang. Process. 29 (2020) 503–514.
[75] I. Arous, L. Dolamic, J. Yang, A. Bhardwaj, G. Cuccu, P. Cudré-Mauroux, Marta: Leveraging human rationales for explainable text classification, in: AAAI, Vol. 35, (7), 2021, pp. 5868–5876.
[76] Z. Yao, X. Li, J. Gao, B. Sadler, H. Sun, Interactive semantic parsing for if-then recipes via hierarchical reinforcement learning, in: AAAI, Vol. 33, (01), 2019, pp. 2547–2554.
[77] Z. Yao, Y. Su, H. Sun, W.-t. Yih, Model-based interactive semantic parsing: A unified formulation and a text-to-SQL case study, in: EMNLP, 2019.
[78] D.M. Ziegler, N. Stiennon, J. Wu, T.B. Brown, A. Radford, D. Amodei, P. Christiano, G. Irving, Fine-tuning language models from human preferences, 2019, arXiv:1909.08593.
[79] N. Stiennon, L. Ouyang, J. Wu, D. Ziegler, R. Lowe, C. Voss, A. Radford, D. Amodei, P.F. Christiano, Learning to summarize with human feedback, NIPS 33 (2020) 3008–3021.
[80] B. Hancock, A. Bordes, P.-E. Mazare, J. Weston, Learning from dialogue after deployment: Feed yourself, chatbot!, in: ACL, 2019, pp. 3667–3684.
[81] Z. Liu, Y. Guo, J. Mahmud, When and why does a model fail? A human-in-the-loop error detection framework for sentiment analysis, in: NAACL-HLT, 2021, p. 170.
[82] S. Chopra, M. Auli, A.M. Rush, Abstractive sentence summarization with attentive recurrent neural networks, in: NAACL, 2016, pp. 93–98.
[83] Z.J. Wang, D. Choi, S. Xu, D. Yang, Putting humans in the natural language processing loop: A survey, in: Proceedings of the First Workshop on Bridging Human–Computer Interaction and Natural Language Processing, 2021, pp. 47–52.
[84] L. Xiao, X. Hu, Y. Chen, Y. Xue, D. Gu, B. Chen, T. Zhang, Targeted sentiment classification based on attentional encoding and graph convolutional networks, Appl. Sci. 10 (3) (2020) 957.
[85] L. Xiao, X. Hu, Y. Chen, Y. Xue, B. Chen, D. Gu, B. Tang, Multi-head self-attention based gated graph convolutional networks for aspect-based sentiment classification, Multimedia Tools Appl. (2020) 1–20.
[86] B. Nushi, E. Kamar, E. Horvitz, Towards accountable AI: Hybrid human-machine analyses for characterizing system failure, in: The AAAI Conference on Artificial Intelligence, 6, (1), 2018.
[87] M.T. Ribeiro, S. Singh, C. Guestrin, "Why should I trust you?": Explaining the predictions of any classifier, in: KDD, 2016, pp. 1135–1144.
[88] X. Wu, Y. Zheng, T. Ma, H. Ye, L. He, Document image layout analysis via explicit edge embedding network, Inform. Sci. 577 (2021) 436–448.
[89] X. Wu, B. Xu, Y. Zheng, H. Ye, J. Yang, L. He, Fast video crowd counting with a temporal aware network, Neurocomputing 403 (2020) 13–20.
[90] R. Girshick, Fast R-CNN, in: ICCV, 2015, pp. 1440–1448.
[91] Z. Zou, Z. Shi, Y. Guo, J. Ye, Object detection in 20 years: A survey, 2019, arXiv:1905.05055.
[92] A. Yao, J. Gall, C. Leistner, L. Van Gool, Interactive object detection, in: CVPR, 2012, pp. 3242–3249.
[93] K. Madono, T. Nakano, T. Kobayashi, T. Ogawa, Efficient human-in-the-loop object detection using bi-directional deep SORT and annotation-free segment identification, in: APSIPA ASC, 2020, pp. 1226–1233.
[94] N. Wojke, A. Bewley, D. Paulus, Simple online and realtime tracking with a deep association metric, in: ICIP, 2017, pp. 3645–3649.
[95] M.R. Banham, A.K. Katsaggelos, Digital image restoration, IEEE Signal Process. Mag. 14 (2) (1997) 24–41.
[96] A. Criminisi, P. Perez, K. Toyama, Object removal by exemplar-based inpainting, in: CVPR, 2, 2003, p. II.
[97] G. Liu, F.A. Reda, K.J. Shih, T.-C. Wang, A. Tao, B. Catanzaro, Image inpainting for irregular holes using partial convolutions, in: ECCV, 2018, pp. 85–100.
[98] T. Weber, H. Hußmann, Z. Han, S. Matthes, Y. Liu, Draw with me: Human-in-the-loop for image restoration, in: IUI, 2020, pp. 243–253.
[99] D. Ulyanov, A. Vedaldi, V. Lempitsky, Deep image prior, in: CVPR, 2018, pp. 9446–9454.
[100] J. Roels, F. Vernaillen, A. Kremer, A. Gonçalves, J. Aelterman, H.Q. Luong, B. Goossens, W. Philips, S. Lippens, Y. Saeys, A human-in-the-loop approach for semi-automated image restoration in electron microscopy, BioRxiv (2019) 644146.
[101] S. Minaee, Y.Y. Boykov, F. Porikli, A.J. Plaza, N. Kehtarnavaz, D. Terzopoulos, Image segmentation using deep learning: A survey, IEEE Trans. PAMI (2021) 1.
[102] V. Badrinarayanan, A. Kendall, R. Cipolla, Segnet: A deep convolutional encoder-decoder architecture for image segmentation, IEEE Trans. PAMI 39 (12) (2017) 2481–2495.
[103] H. Wang, T. Chen, Z. Wang, K. Ma, Efficiently troubleshooting image segmentation models with human-in-the-loop, 2020, p. 1.
[104] A. Taleb, C. Lippert, T. Klein, M. Nabi, Multimodal self-supervised learning for medical image analysis, in: IPMI, 2021, pp. 661–673.
[105] M. Ravanbakhsh, V. Tschernezki, F. Last, T. Klein, K. Batmanghelich, V. Tresp, M. Nabi, Human-machine collaboration for medical image segmentation, in: ICASSP, 2020, pp. 1040–1044.
[106] Y. Murata, Y. Dobashi, Automatic image enhancement taking into account user preference, in: CW, 2019, pp. 374–377.
[107] M. Fischer, K. Kobs, A. Hotho, Nicer: Aesthetic image enhancement with humans in the loop, in: The Thirteenth International Conference on Advances in Computer-Human Interactions, 2020, pp. 357–362.
[108] A. Benard, M. Gygli, Interactive video object segmentation in the wild, 2017, arXiv:1801.00269.
[109] S.W. Oh, J.-Y. Lee, N. Xu, S.J. Kim, Fast user-guided video object segmentation by interaction-and-propagation networks, in: CVPR, 2019, pp. 5247–5256.
[110] K.N. Shukla, A. Potnis, P. Dwivedy, A review on image enhancement techniques, IJEACS 2 (7) (2017) 232–235.
[111] X. Fu, J. Yan, C. Fan, Image aesthetics assessment using composite features from off-the-shelf deep models, in: ICIP, 2018, pp. 3528–3532.
[112] C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N. Hamilton, G. Hullender, Learning to rank using gradient descent, in: ICML, 2005, pp. 89–96.
[113] R. Yao, G. Lin, S. Xia, J. Zhao, Y. Zhou, Video object segmentation and tracking: A survey, ACM Trans. Intell. Syst. Technol. (TIST) 11 (4) (2020) 1–47.
[114] S. Caelles, K.-K. Maninis, J. Pont-Tuset, L. Leal-Taixé, D. Cremers, L. Van Gool, One-shot video object segmentation, in: CVPR, 2017, pp. 221–230.
[115] N. Xu, B. Price, S. Cohen, J. Yang, T.S. Huang, Deep interactive object selection, in: CVPR, 2016, pp. 373–381.
[116] M. Hudec, E. Mináriková, R. Mesiar, A. Saranti, A. Holzinger, Classification by ordinal sums of conjunctive and disjunctive functions for explainable AI and interpretable machine learning solutions, Knowl.-Based Syst. 220 (2021) 106916.
[117] L.F. Cranor, A framework for reasoning about the human in the loop, in: Proceedings of the 1st Conference on Usability, Psychology, and Security, 2008, pp. 1–15.
[118] H.V. Singh, Q.H. Mahmoud, Human-in-the-loop error precursor detection using language translation modeling of HMI states, in: SMC, 2020, pp. 2237–2242.
[119] G. Demartini, S. Mizzaro, D. Spina, Human-in-the-loop artificial intelligence for fighting online misinformation: Challenges and opportunities, Bull. Tech. Committee Data Eng. 43 (3) (2020) 1–10.
[120] D. Odekerken, F. Bex, Towards transparent human-in-the-loop classification of fraudulent web shops, in: Legal Knowledge and Information Systems, 2020, pp. 239–242.
[121] S. Brostoff, M.A. Sasse, Safe and sound: a safety-critical approach to security, in: Proceedings of the 2001 Workshop on New Security Paradigms, 2001, pp. 41–50.
[122] A. Machiry, R. Tahiliani, M. Naik, Dynodroid: An input generation system for android apps, in: Proceedings of the 2013 9th Joint Meeting on Foundations of Software Engineering, 2013, pp. 224–234.
[123] A. Kovashka, D. Parikh, K. Grauman, Whittlesearch: Interactive image search with relative attribute feedback, IJCV 115 (2) (2015) 185–210.
[124] L. Rosenberg, Artificial swarm intelligence, a human-in-the-loop approach to AI, in: AAAI, Vol. 30, (1), 2016.
[125] Y. Shoshitaishvili, M. Weissbacher, L. Dresel, C. Salls, R. Wang, C. Kruegel, G. Vigna, Rise of the HaCRS: Augmenting autonomous cyber reasoning systems with human assistance, in: Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, 2017, pp. 347–362.
[126] M.S. Wogalter, Communication-human information processing (C-HIP) model, in: Forensic Human Factors and Ergonomics, 2018, pp. 33–49.
[127] L. Ma, Towards understanding and simplifying human-in-the-loop machine learning, 2018, p. 1.
[128] M.A. Salam, M.E. Koone, S. Thirumuruganathan, G. Das, S. Basu Roy, A human-in-the-loop attribute design framework for classification, in: WWW, 2019, pp. 1612–1622.
[129] B.A. Plummer, M.H. Kiapour, S. Zheng, R. Piramuthu, Give me a hint! navigating image databases using human-in-the-loop feedback, in: WACV, 2019, pp. 2048–2057.
[130] F. Wrede, A. Hellander, Smart computational exploration of stochastic gene regulatory network models using human-in-the-loop semi-supervised learning, Bioinformatics 35 (24) (2019) 5199–5206.
[131] M. Böhme, C. Geethal, V.-T. Pham, Human-in-the-loop automatic program repair, in: ICST, 2020, pp. 274–285.
[132] A. Renner, Designing for the Human in the Loop: Transparency and Control in Interactive Machine Learning, (Ph.D. thesis), University of Maryland, College Park, 2020.
[133] J.B. Davidson, R.B. Graham, S. Beck, R.T. Marler, S.L. Fischer, Improving human-in-the-loop simulation to optimize soldier-systems integration, Applied Ergon. 90 (2021) 103267.
[134] H.O. Demirel, Digital human-in-the-loop framework, in: International Conference on Human-Computer Interaction, 2020, pp. 18–32.
[135] M. Metzner, D. Utsch, M. Walter, C. Hofstetter, C. Ramer, A. Blank, J. Franke, A system for human-in-the-loop simulation of industrial collaborative robot applications, in: CASE, 2020, pp. 1520–1525.
[136] A. Polisetty Venkata Sai, Information Preparation with the Human in the Loop, (Ph.D. thesis), TU Darmstadt, 2020.
[137] Z. Zhu, Y. Lu, R. Deng, H. Yang, A.B. Fogo, Y. Huo, Easierpath: An open-source tool for human-in-the-loop deep learning of renal pathology, in: Interpretable and Annotation-Efficient Learning for Medical Image Computing, 2020, pp. 214–222.
[138] N. Li, S. Adepu, E. Kang, D. Garlan, Explanations for human-on-the-loop: A probabilistic model checking approach, in: Proceedings of the IEEE/ACM 15th International Symposium on Software Engineering for Adaptive and Self-Managing Systems, 2020, pp. 181–187.
[139] P. Wiriyathammabhum, D. Summers-Stay, C. Fermüller, Y. Aloimonos, Computer vision and natural language processing: recent approaches in multimedia and robotics, ACM Comput. Surv. 49 (4) (2016) 1–44.
[140] A. Holzinger, B. Malle, A. Saranti, B. Pfeifer, Towards multi-modal causability with graph neural networks enabling information fusion for explainable AI, Inf. Fusion 71 (2021) 28–37.
[141] S. Arora, P. Doshi, A survey of inverse reinforcement learning: Challenges, methods and progress, Artificial Intelligence 297 (2021) 103500.
[142] A. Doan, A. Ardalan, J. Ballard, S. Das, Y. Govind, P. Konda, H. Li, S. Mudgal, E. Paulson, G.P. Suganthan, et al., Human-in-the-loop challenges for entity matching: A midterm report, in: Proceedings of the 2nd Workshop on Human-in-the-Loop Data Analytics, 2017, pp. 1–6.
[143] J. Li, A.H. Miller, S. Chopra, M. Ranzato, J. Weston, Dialogue learning with human-in-the-loop, ICLR (2016) 1–23.
[144] H. Amirpourazarian, A. Pinheiro, E. Fonseca, M. Ghanbari, M. Pereira, Quality evaluation of holographic images coded with standard codecs, IEEE Trans. Multimed. (2021) 1.
[145] S. Wan, Y. Hou, F. Bao, Z. Ren, Y. Dong, Q. Dai, Y. Deng, Human-in-the-loop low-shot learning, IEEE Trans. Neural Netw. Learn. Syst. 32 (7) (2021) 3287–3292.
[146] L. Yang, Q. Sun, N. Zhang, Z. Liu, Optimal energy operation strategy for we-energy of energy internet based on hybrid reinforcement learning with human-in-the-loop, IEEE Trans. Syst. Man, Cybern.: Syst. (2020) 1–11.
[147] Y. Fu, X. Zhu, B. Li, A survey on instance selection for active learning, Knowl. Inf. Syst. 35 (2) (2013) 249–283.
[148] J. Zhang, P. Fiers, K.A. Witte, R.W. Jackson, K.L. Poggensee, C.G. Atkeson, S.H. Collins, Human-in-the-loop optimization of exoskeleton assistance during walking, Science 356 (6344) (2017) 1280–1284.
[149] Y. Tay, M. Dehghani, D. Bahri, D. Metzler, Efficient transformers: A survey, 2020, arXiv:2009.06732.
[150] J. Kreutzer, S. Riezler, C. Lawrence, Offline reinforcement learning from human feedback in real-world sequence-to-sequence tasks, in: SPNLP, 2021, pp. 37–43.
[151] A. Smith, V. Kumar, J. Boyd-Graber, K. Seppi, L. Findlater, Closing the loop: User-centered design and evaluation of a human-in-the-loop topic modeling system, in: IUI, 2018, pp. 293–304.
[152] A. Kapoor, J.C. Caicedo, D. Lischinski, S.B. Kang, Collaborative personalization of image enhancement, IJCV 108 (1–2) (2014) 148–164.
[153] J.-S. Jwo, C.-S. Lin, C.-H. Lee, Smart technology–driven aspects for human-in-the-loop smart manufacturing, Int. J. Adv. Manuf. Technol. 114 (5) (2021) 1741–1752.
[154] B. Settles, Closing the loop: Fast, interactive semi-supervised annotation with queries on features and instances, in: EMNLP, 2011, pp. 1467–1478.
[155] T.Y. Lee, A. Smith, K. Seppi, N. Elmqvist, J. Boyd-Graber, L. Findlater, The human touch: How non-expert users perceive, interpret, and fix topic models, Int. J. Hum.-Comput. Stud. 105 (2017) 28–42.
[156] N.M. Marquand, Automated Modeling of Human-in-the-Loop Systems, (Ph.D. thesis), Purdue University Graduate School, 2021.
[157] J.J. Dudley, P.O. Kristensson, A review of user interface design for interactive machine learning, ACM Trans. Interact. Intell. Syst. (TiiS) 8 (2) (2018) 1–37.
[158] K. Shilton, Values and ethics in human-computer interaction, Found. Trends® Hum.–Comput. Interaction 12 (2) (2018).
[159] A. Jolfaei, M. Usman, M. Roveri, M. Sheng, M. Palaniswami, K. Kant, Guest editorial: Computational intelligence for human-in-the-loop cyber physical systems, IEEE Trans. Emerg. Top. Comput. Intell. 6 (1) (2022) 2–5.
[160] W. Xu, M.J. Dainoff, L. Ge, Z. Gao, Transitioning to human interaction with AI systems: New challenges and opportunities for HCI professionals to enable human-centered AI, Int. J. Hum.–Comput. Interaction (2022) 1–25.
[161] K. Zhou, Z. Liu, Y. Qiao, T. Xiang, C. Change Loy, Domain generalization: A survey, 2021, arXiv:2103.15053.