Introduction
Machine learning is gaining importance by the day, and its applications now extend to critical, life-changing fields. However, building a good model requires a sufficient amount of data, which is increasingly hard to obtain due to restrictions placed to protect data owners' privacy. Federated learning addresses these privacy issues by adopting an on-device model training strategy in which clients communicate only model parameters rather than raw data, thus preserving users' information and shielding them from the harm that would follow the dissemination of their private data to malicious or suspicious parties. However, federated learning comes with its own set of problems. Its distributed nature makes it vulnerable to non-IID (not independent and identically distributed) data, which reduces overall accuracy and increases convergence time. To this end, we present a genetic-based approach to the non-IID problem that uses each model's influence as the basis of a preliminary trainer-selection process.
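As a rough illustration of the idea, the genetic selection step can be sketched as a search over subsets of trainers scored by their influence. The `influence` mapping, the fitness function, and all parameter names below are hypothetical placeholders, not the paper's actual formulation:

```python
import random

def evolve_selection(influence, k, pop_size=20, generations=50,
                     mutation_rate=0.1, seed=0):
    """Genetic search for a subset of k trainers with high total influence.

    `influence` maps a client id to an assumed influence score; a chromosome
    is a list of k distinct client ids, and fitness is its summed influence.
    """
    rng = random.Random(seed)
    clients = list(influence)

    def fitness(chromosome):
        return sum(influence[c] for c in chromosome)

    # Initial population: random subsets of k clients.
    population = [rng.sample(clients, k) for _ in range(pop_size)]
    for _ in range(generations):
        # Rank by fitness and keep the better half as parents (elitism).
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            # Crossover: draw k distinct clients from the union of both parents.
            child = rng.sample(list(set(a) | set(b)), k)
            # Mutation: occasionally swap one client for an unused one.
            if rng.random() < mutation_rate:
                unused = [c for c in clients if c not in child]
                if unused:
                    child[rng.randrange(k)] = rng.choice(unused)
            children.append(child)
        population = parents + children
    return max(population, key=fitness)
```

In this toy setup the fitness is simply the sum of per-client influence scores; the actual objective and selection criteria are defined later in the paper.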