Security implications if an Android app can be installed on an emulator
It is unclear what kind of security requirements you have in the first place, and thus it is unclear whether your security measures are sufficient.
Fully protecting against a malicious user of your application is not possible as long as you do not fully control the user's device. This risk includes running the application on emulators, but also running it on a rooted or otherwise tampered-with device - and not all of this will be detected by whatever root detection method you use.
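As a minimal sketch of why such checks only raise the bar: a typical emulator/root heuristic inspects values the device reports about itself, and every one of these signals can be spoofed or hidden on a device the attacker controls. The property values and paths below are just common examples, not a reliable or exhaustive check.

```kotlin
import android.os.Build
import java.io.File

// Naive heuristic only: all of these signals are self-reported by the device
// and can be faked or hidden on an emulator image or a rooted device.
fun looksLikeEmulatorOrRooted(): Boolean {
    val emulatorHints = listOf("generic", "sdk_gphone", "emulator", "goldfish", "ranchu")
    val buildLooksEmulated = emulatorHints.any { hint ->
        Build.FINGERPRINT.contains(hint, ignoreCase = true) ||
        Build.PRODUCT.contains(hint, ignoreCase = true) ||
        Build.HARDWARE.contains(hint, ignoreCase = true)
    }
    // Common su binary locations; trivially renamed or hidden by root-hiding tools.
    val suPresent = listOf("/system/bin/su", "/system/xbin/su", "/sbin/su")
        .any { File(it).exists() }
    return buildLooksEmulated || suPresent
}
```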
Instead you need to design your application so that a malicious user cannot do any harm to you or to other users, but only to himself. This means, for example, having user-specific secrets in the application rather than global secrets. It also means that you should not trust anything the application reports, but instead verify that it makes sense (i.e. not trusting any self-reported high score in games or similar).
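As a rough, hypothetical example of "verify that it makes sense": a game server could bound a reported score by what it has actually observed about the session, instead of storing whatever number the client sends. The session fields and limits below are invented for illustration.

```kotlin
// Hypothetical server-side sanity check: the score claimed by the client is
// only accepted if it is consistent with the session the server has observed.
data class GameSession(val userId: String, val startedAtMillis: Long, val eventsSeen: Int)

const val MAX_POINTS_PER_EVENT = 10     // assumed game rule
const val MIN_MILLIS_PER_EVENT = 250L   // assumed lower bound per scoring event

fun acceptScore(session: GameSession, claimedScore: Int, nowMillis: Long): Boolean {
    val elapsed = (nowMillis - session.startedAtMillis).coerceAtLeast(0)
    val maxPlausibleEvents = elapsed / MIN_MILLIS_PER_EVENT
    val maxPlausibleScore = minOf(session.eventsSeen.toLong(), maxPlausibleEvents) * MAX_POINTS_PER_EVENT
    // Reject anything the server-side data cannot account for.
    return claimedScore >= 0 && claimedScore <= maxPlausibleScore
}
```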
Whose security are you concerned about here, and what are you trying to protect? Are you trying to protect the users from having other people access their data, or are you trying to protect the company from reverse engineers looking at how the app works because your API is insecure?
If you are purely attempting to protect the users' security, then there is no issue at all with having the app run in a VM - unless you think users will run the app in a poorly secured VM and have their data stolen, which is both very unlikely and their problem, not yours.
If you are attempting to prevent people from reverse engineering the app, then you are fighting a difficult battle, because root checkers are easily bypassed. It is also almost always a pointless effort, since the app should contain nothing useful to an attacker if it is designed securely.
Also, keep in mind that security testers will sometimes just make up non-issues if they fail to find any real ones, since a blank report makes it hard to justify the money spent. If possible, challenge them on this statement and ask them to give a real-world example of how this is actually an issue.
The classic and correct answer to your client is: NOT AN ISSUE.
No client-side software (*) should ever be considered to be designed as secure, in the sense your question asks. It can't be. The client-side software - be it web or app - is totally under the client's control, as is its environment, as is the total ability to rewrite/mod the software, or to run it on an undetectably insecure or modified environment. That isn't a bug. That's inherent in the model (*).
The purpose of your various checks is to reduce the risks and raise the bar, as is so often the case with security. They are not there to make the client secure or to ensure client-side security, and your client is incorrect in assuming that aim.
(*) With perhaps the sole exception of client-side software where the entire client-side stack and its environment are designed and controlled with the purpose of creating a highly tamper-resistant and verifiable environment, such as Trusted Execution, or the firmware of some YubiKeys (which cannot easily be extracted or modified once flashed), or when the client is a remote system with its own security in place, such as well-secured failover servers syncing to each other over SSH.
Even then, perhaps the specific module may be considered secure (for a certain threat model), but that still doesn't mean that anything else, such as a local app checking the dongle's response, is in any way secure.
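If a hardware-backed, tamper-resistant component like the above is actually available, the usual pattern on Android is still to let the server verify a hardware attestation rather than trust anything the app reports. A rough sketch using the platform keystore is below; the key alias is a placeholder, the nonce must come from your server, and the returned certificate chain still has to be validated server-side.

```kotlin
import android.security.keystore.KeyGenParameterSpec
import android.security.keystore.KeyProperties
import java.security.KeyPairGenerator
import java.security.KeyStore
import java.security.cert.Certificate

// Generates a key in the device's hardware-backed keystore and binds a
// server-supplied nonce into its attestation record. The server, not the app,
// verifies the resulting certificate chain.
fun attestedKeyChain(serverNonce: ByteArray): Array<Certificate> {
    val alias = "example_attested_key"   // placeholder alias
    val kpg = KeyPairGenerator.getInstance(KeyProperties.KEY_ALGORITHM_EC, "AndroidKeyStore")
    kpg.initialize(
        KeyGenParameterSpec.Builder(alias, KeyProperties.PURPOSE_SIGN)
            .setDigests(KeyProperties.DIGEST_SHA256)
            .setAttestationChallenge(serverNonce)  // ties the attestation to this request
            .build()
    )
    kpg.generateKeyPair()
    val ks = KeyStore.getInstance("AndroidKeyStore").apply { load(null) }
    return ks.getCertificateChain(alias)
}
```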