Table of Contents
Common code snippets and tips
- Automatically select GPU or CPU
- Change the current directory
- Print model parameters
- Convert a list of tensors to a tensor
- Out of memory
- Debug tensor memory
Automatically select GPU or CPU
import torch
from torchvision import models

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# move the model (and tensors) to the selected device
vgg = models.vgg16().to(device)
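The same device object works for input tensors. A minimal sketch; the 1x3x224x224 input below is illustrative (VGG16's usual input size), not from the original post:

x = torch.randn(1, 3, 224, 224).to(device)  # move the input batch to the same device
with torch.no_grad():
    out = vgg(x)                            # model and input now live on the same device
print(out.shape)                            # torch.Size([1, 1000])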
Change the current directory
import os

try:
    # move one level up from the current working directory
    os.chdir(os.path.join(os.getcwd(), '..'))
    print(os.getcwd())
except:
    pass  # ignore errors if the directory cannot be changed
Print model parameters
from torchsummary import summary

# (1, 28, 28): 1 is in_channels, 28x28 is the input height and width
summary(model, (1, 28, 28))
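A self-contained sketch of the call above, using a small CNN for 1x28x28 input; the layer sizes are arbitrary illustrations, not from the original post, and device='cpu' (an argument of the pip torchsummary package) keeps it runnable without a GPU:

import torch.nn as nn
from torchsummary import summary

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),  # in_channels = 1
    nn.ReLU(),
    nn.MaxPool2d(2),                             # 28x28 -> 14x14
    nn.Flatten(),
    nn.Linear(16 * 14 * 14, 10),
)
summary(model, (1, 28, 28), device='cpu')  # prints layer output shapes and parameter counts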
Convert a list of tensors to a tensor
x = torch.stack(tensor_list)
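For example, with a hypothetical list of same-shaped tensors (torch.stack requires identical shapes and adds a new leading dimension):

import torch

tensor_list = [torch.randn(3, 4) for _ in range(5)]  # five tensors, each of shape (3, 4)
x = torch.stack(tensor_list)                         # shape (5, 3, 4)
print(x.shape)

If the tensors should instead be joined along an existing dimension, torch.cat(tensor_list, dim=0) is the alternative (shape (15, 4) here).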
Out of memory
Use a smaller batch size.
Call torch.cuda.empty_cache() every few minibatches, as in the sketch below.
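A minimal, runnable sketch of the second tip; the toy model, data, and the interval of 50 steps are illustrative choices, not from the original post:

import torch
import torch.nn as nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = nn.Linear(100, 10).to(device)                     # toy model, for illustration only
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

for step in range(200):
    inputs = torch.randn(16, 100, device=device)          # small batch
    targets = torch.randint(0, 10, (16,), device=device)
    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    loss.backward()
    optimizer.step()
    if step % 50 == 0 and device.type == 'cuda':
        torch.cuda.empty_cache()  # hand unused cached blocks back to the GPU allocator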
Debug tensor memory

Note: the resource module used in the snippet below is Unix-specific (https://docs.python.org/2/library/resource.html), so it works on Linux and macOS but raises an ImportError on Windows.
import collections, gc, resource, torch

def debug_memory():
    # peak resident set size of this process
    print('maxrss = {}'.format(
        resource.getrusage(resource.RUSAGE_SELF).ru_maxrss))
    # count all live tensors, grouped by device, dtype, and shape
    tensors = collections.Counter(
        (str(o.device), o.dtype, tuple(o.shape))
        for o in gc.get_objects() if torch.is_tensor(o))
    for line in sorted(tensors.items()):
        print('{}\t{}'.format(*line))

# example usage
x = torch.randn(3, 3)
debug_memory()

y = torch.randn(3, 3)
debug_memory()

z = [torch.randn(i).long() for i in range(10)]
debug_memory()
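Reusing debug_memory from above, a hedged sketch of a common leak it can surface: accumulating graph-attached loss tensors in a Python list (the toy tensors are illustrative only):

import torch

losses = []
w = torch.randn(10, requires_grad=True)
for step in range(3):
    loss = (w * torch.randn(10)).sum()
    losses.append(loss)   # keeps graph-attached scalars alive; loss.item() would store a plain float
    debug_memory()        # the ('cpu', torch.float32, ()) count grows by one each iteration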