What the Pixel does that the iPhone X can't.
It’s the portrait camera, of course. The portrait camera on the Pixel is superior to the one on the iPhone despite being technically inferior hardware.
Technically, the iPhone has an IR system complete with an IR camera and a dot projector to build a 3D map of your face. The Pixel? An ordinary front camera. So how is the Pixel’s front camera so much better at portrait mode than the iPhone’s?
I think it all lies in the strategy each company took to solve the problem. Apple took the more complex, hardware-heavy route. But even though the dot projector casts thousands of dots to map the face, the dots aren’t dense enough to accurately capture fine details like individual strands of hair.
The Pixel, on the other hand, took a software approach. It’s Google we’re talking about: Google has a ton of experience detecting objects and faces in images. Its system has learned from thousands of mistakes, to the point that it is now fairly accurate at estimating depth and separating the foreground from the background.
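The core idea behind this kind of software portrait mode — predict a per-pixel foreground mask, blur only the background, then composite — can be sketched in plain NumPy. This is an illustrative assumption of how such a pipeline fits together, not Google’s actual implementation: the `fg_prob` mask here is a stand-in for what a trained segmentation network would output, and the simple box blur stands in for a proper bokeh filter.

```python
import numpy as np

def box_blur(img, radius=3):
    """Cheap blur: average the image with shifted copies of itself.
    (Edges wrap around via np.roll; acceptable for a sketch.)"""
    acc = np.zeros_like(img, dtype=float)
    count = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            count += 1
    return acc / count

def portrait(img, fg_prob, radius=3):
    """Keep the foreground sharp, blur the background, and blend
    the two using the (hypothetical) predicted mask as alpha."""
    blurred = box_blur(img, radius)
    alpha = fg_prob[..., None] if img.ndim == 3 else fg_prob
    return alpha * img + (1.0 - alpha) * blurred
```

For example, with a toy 20×20 grayscale frame whose “subject” is a bright square and a mask that marks exactly that square, `portrait` leaves the subject’s pixels untouched while background pixels near the subject pick up a soft halo of blur — the same separation effect, just without the learned depth model doing the hard part.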
Google has a fairly large amount of data with which to train an accurate system — something Apple lacks.
But portrait mode on the iPhone is still in a beta-like state. As time passes and updates roll out, the feature should get better and better, learning from the mistakes it has made.