
google / jax

Submitted by Style Pass
2021-06-22 20:00:18


In TensorFlow you can just use tf.device(None) to use the TPU host's 300 GB of RAM plus the CPU for bigger operations, but after looking at XLA, the bridge, Trax (which is where I am using jaxlib), and JAX itself, I only seem to run into errors like this: 'JAX cannot work yet with n_devices != all devices: 1 != 8'.

I believe with tf.device(None): simply drops any device annotations, allowing the TF placer to use whichever device it wants. In practice, it usually makes a simple default choice.
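A minimal sketch of what that looks like on the TensorFlow side (the exact placement TF chooses will depend on the available devices):

```python
import tensorflow as tf

x = tf.constant([1.0, 2.0, 3.0])

# Entering tf.device(None) clears any enclosing device annotation,
# so the TF placer is free to pick a device for these ops itself.
with tf.device(None):
    y = x * 2.0

print(y.numpy())  # [2. 4. 6.]
```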

+1 to Peter's questions, but note that jax.devices('cpu')[0] gives you a CpuDevice, which you can pass to jit, device_put, etc. to run on the host. I'm not sure if/how Trax plumbs this through, though.
