Is it possible to examine a Huawei device to answer definitively whether or not there is a security risk?
This article from the MIT Technology Review says that the U.K. has been vetting Huawei gear before deployment, but that the vetting provides only a limited level of assurance. The risks are loss of privacy, espionage, and sabotage.
I left the electronics/computer industry years ago, but I think the following views still apply to the state of the art.
If espionage is intended, the manufacturer can make a device's code hard to access and inspect. A flash memory device might be part of a system-on-chip or system-in-package, which obviates the need to bring memory lines out to the surfaces of the package. That makes the code very inconvenient to access; in the extreme you may have to resort to a scanning electron microscope (SEM). Courbon et al. used an SEM to read a 210 nm flash memory. (The paper is not dated, but its most recent reference is from 2015, so it is at least that recent.) Even if you can access the code, you may then find that it has been obfuscated: the programmer runs the original code through a tool that turns it into a Rube Goldberg machine. Whether this is effective in hiding backdoors is debatable.
A custom chip can be a mystery, and a chip that is "normal" in every way except its identification looks like one too: a manufacturer can simply print phony or confusing markings on one or more of the devices on a board, making a non-standard device look standard. If a well-funded entity were to invest in hiding or disguising functionality, it would take a huge amount of research to determine the "true circuit" of any single design. Until you know the true circuit, inspecting the code provides no real assurance.
If you believe you need to go to that much trouble to determine the true circuit and inspect the code for a backdoor, you may be better off just designing and building your own telecommunications equipment. More realistically, you would impose intrusive oversight on design, manufacturing, support, and updates, and perhaps take over corporate governance by nationalizing a maker.
If you are not going to be intrusive, you are implicitly accepting a certain amount of risk and perhaps relying on commercial pressures to keep the manufacturer honest.
If updates are disabled, you might conclude after a certain amount of effort that a particular design is not a security risk, though you would not know whether that effort was insufficient and a larger one was needed. It may not be practical, or even safe, to disable updates, and as soon as the code is updated you no longer have any assurance. You may not even be aware that an update has happened if the manufacturer tries to be stealthy.
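One partial defense against stealthy updates — assuming you can actually dump the firmware image, which is itself a big assumption given the packaging issues above — is to fingerprint a vetted image and periodically re-check it. A minimal sketch (the `vetted` image bytes here are placeholders, not a real firmware format):

```python
import hashlib

def fingerprint(firmware_bytes):
    """Return a SHA-256 digest of a dumped firmware image."""
    return hashlib.sha256(firmware_bytes).hexdigest()

# Suppose you vetted an image once and recorded its digest.
VETTED_IMAGE = b"\x7fELF...vetted-build..."
VETTED_DIGEST = fingerprint(VETTED_IMAGE)

def has_changed(current_dump):
    """True if the currently dumped image no longer matches the vetted one."""
    return fingerprint(current_dump) != VETTED_DIGEST

assert not has_changed(VETTED_IMAGE)
assert has_changed(VETTED_IMAGE + b"\x00patched")
```

Note the limits: this only tells you *that* something changed, not what changed, and it is useless if the dump itself is served by firmware that lies about its own contents.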
There is a lot of speculation based on intelligence that has not been explained. That's not nothing, but it also isn't something.
Can the phones be inspected? Yes, but only based on function. The problem is not a backdoor or a bootloader, and there would be no need to add a suspicious chip, because malicious functionality can be baked into the standard firmware. Any regular calls home would also be detected.
The problem is with "sleeper" functions. Imagine the phone has a "dump all data to home" function that is not enabled until the manufacturer applies an update. The update itself need not be malicious, just configured in such a way as to turn the sleeper function on. Then the phone's contents are uploaded. The upload might be detected, but by that time the damage is done.
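The sleeper pattern can be sketched in a few lines. Everything here is hypothetical (the names `CONFIG`, `apply_update`, and `dump_all_data_home` are invented for illustration): the exfiltration code ships in the firmware from day one but is inert, and a later, apparently routine settings push arms it without delivering any new code at all.

```python
CONFIG = {"log_level": "info", "telemetry": False}
UPLOADED = []  # stands in for data sent back to the manufacturer

def dump_all_data_home(data):
    """The dormant 'sleeper' function: upload the device's contents."""
    UPLOADED.append(data)

def handle_user_data(data):
    """Normal code path; the malicious branch does nothing until configured on."""
    if CONFIG["telemetry"]:
        dump_all_data_home(data)
    return len(data)  # ordinary, legitimate processing

def apply_update(new_settings):
    """A 'benign' update: no new code, only configuration values."""
    CONFIG.update(new_settings)

handle_user_data("contacts")        # inspected before the update: nothing leaks
apply_update({"telemetry": True})   # routine-looking settings push
handle_user_data("contacts")        # same code, now silently uploaded
```

This is why pre-deployment inspection gives limited assurance: the vetted code and the exfiltrating code are byte-for-byte identical, and only a configuration value separates them.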
Security depends a lot on trust, and if a vendor cannot be trusted, then it almost doesn't matter if it is possible to inspect the product; there are too many ways that the vendor can break whatever trust was granted.