PySyft: TypeError: object of type 'NoneType' has no len()

Describe the bug

TypeError: object of type 'NoneType' has no len()

My code:

import time
import torch
import torch.nn as nn
import syft as sy

# Run on GPU if available, otherwise fall back to CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x_train = torch.tensor(train_x)
y_train = torch.tensor(train_y)

# Hook that extends the PyTorch library to enable all computations with pointers of tensors sent to other workers
hook = sy.TorchHook(torch)

# Creating 2 virtual workers
bob = sy.VirtualWorker(hook, id="bob")
anne = sy.VirtualWorker(hook, id="anne")
# threshold indexes for dataset split (one half for Bob, other half for Anne)
train_idx = int(len(x_train)/2)


# Sending toy datasets to virtual workers
bob_train_dataset = sy.BaseDataset(x_train[:train_idx], y_train[:train_idx]).send(bob)
anne_train_dataset = sy.BaseDataset(x_train[train_idx:], y_train[train_idx:]).send(anne)

# Creating federated datasets, an extension of the PyTorch TensorDataset class
federated_train_dataset = sy.FederatedDataset([bob_train_dataset, anne_train_dataset])
# Creating federated dataloaders, an extension of the PyTorch DataLoader class
federated_train_loader = sy.FederatedDataLoader(federated_train_dataset, shuffle=True, batch_size=64)
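
A quick way to rule out an empty shard (one common source of len()-on-NoneType errors) is to check the dataset sizes and peek at one batch before training. A minimal sanity-check sketch, assuming the PySyft 0.2.x API used above:

# Both workers should hold a non-empty shard
print(len(bob_train_dataset), len(anne_train_dataset))
print(len(federated_train_dataset))  # total number of samples
# Peek at one batch: x and label should be pointer tensors living on the same worker
x0, y0 = next(iter(federated_train_loader))
print(x0.shape, y0.shape, x0.location.id)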

class GRUNet(nn.Module):
    def __init__(self, input_dim, hidden_dim, output_dim, n_layers, drop_prob=0.2):
        super(GRUNet, self).__init__()
        self.hidden_dim = hidden_dim
        self.n_layers = n_layers
        
        self.gru = nn.GRU(input_dim, hidden_dim, n_layers, batch_first=True, dropout=drop_prob)
        self.fc = nn.Linear(hidden_dim, output_dim)
        self.relu = nn.ReLU()
        
    def forward(self, x, h):
        out, h = self.gru(x, h)
        out = self.fc(self.relu(out[:,-1]))
        return out, h
    
    def init_hidden(self, batch_size):
        weight = next(self.parameters()).data
        hidden = weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().to(device)
        return hidden

class LSTMNet(nn.Module):
    def __init__(self, input_dim, hidden_dim, output_dim, n_layers, drop_prob=0.2):
        super(LSTMNet, self).__init__()
        self.hidden_dim = hidden_dim
        self.n_layers = n_layers
        
        self.lstm = nn.LSTM(input_dim, hidden_dim, n_layers, batch_first=True, dropout=drop_prob)
        self.fc = nn.Linear(hidden_dim, output_dim)
        self.relu = nn.ReLU()
        
    def forward(self, x, h):
        out, h = self.lstm(x, h)
        out = self.fc(self.relu(out[:,-1]))
        return out, h
    
    def init_hidden(self, batch_size):
        weight = next(self.parameters()).data
        hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().to(device),
                  weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().to(device))
        return hidden
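
Before going federated, it can help to smoke-test the model on a plain local batch, so that shape bugs are separated from PySyft pointer issues. A minimal sketch with hypothetical dimensions (input_dim=10 and seq_len=20 are assumptions, not values from the report):

# Local smoke test with made-up dimensions (input_dim=10, seq_len=20 are hypothetical)
m = GRUNet(input_dim=10, hidden_dim=256, output_dim=1, n_layers=2).to(device)
h0 = m.init_hidden(batch_size=4)
dummy = torch.zeros(4, 20, 10).to(device)  # (batch, seq_len, features)
out, h1 = m(dummy, h0)
print(out.shape)  # expected: torch.Size([4, 1])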

batch_size = 64
def train(federated_train_loader, learn_rate, hidden_dim=256, EPOCHS=5, model_type="GRU"):
    
    # Setting common hyperparameters
    input_dim = next(iter(federated_train_loader))[0].shape[2]
    output_dim = 1
    n_layers = 2
    # Instantiating the models
    if model_type == "GRU":
        model = GRUNet(input_dim, hidden_dim, output_dim, n_layers)
    else:
        model = LSTMNet(input_dim, hidden_dim, output_dim, n_layers)
    model.to(device)
    
    # Defining loss function and optimizer
    criterion = nn.MSELoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=learn_rate)
    
    model.train()
    print("Starting Training of {} model".format(model_type))
    epoch_times = []
    # Start training loop
    for epoch in range(1,EPOCHS+1):
        start_time = time.time()  # time.clock() was removed in Python 3.8
        h = model.init_hidden(batch_size)
        avg_loss = 0.
        counter = 0
        for x, label in federated_train_loader:
            worker = x.location
            #h = torch.Tensor(np.zeros((batch_size))).send(worker)
            model.send(worker)
            counter += 1
            if model_type == "GRU":
                h = h.data
            else:
                h = tuple([e.data for e in h])
            model.zero_grad()

            out, h = model(x.to(device).float(), h)
            loss = criterion(out, label.to(device).float())
            loss.backward()
            optimizer.step()
            avg_loss += loss.item()
            if counter % 200 == 0:
                print("Epoch {}......Step: {}/{}....... Average Loss for Epoch: {}".format(epoch, counter, len(federated_train_loader), avg_loss/counter))
        current_time = time.time()
        print("Epoch {}/{} Done, Total Loss: {}".format(epoch, EPOCHS, avg_loss/len(federated_train_loader)))
        print("Time Elapsed for Epoch: {} seconds".format(str(current_time-start_time)))
        epoch_times.append(current_time-start_time)
    print("Total Training Time: {} seconds".format(str(sum(epoch_times))))
    return model
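
For reference, the call that produced the traceback below (reconstructed from the traceback itself):

lr = 0.001
gru_model = train(federated_train_loader, lr, model_type="GRU")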

Desktop (please complete the following information):

  • OS: Ubuntu
  • Version: 16.04

Additional context

Starting Training of GRU model


TypeError                                 Traceback (most recent call last)
<ipython-input-37-6d641218ab70> in <module>()
      1 lr = 0.001
      2 #batch_size = 64
----> 3 gru_model = train(federated_train_loader, lr, model_type="GRU")

<ipython-input-36-ba6a40ede7c3> in train(federated_train_loader, learn_rate, hidden_dim, EPOCHS, model_type)
     38 
     39 
---> 40             out, h = model(x.to(device).float(), h)
     41             loss = criterion(out, label.to(device).float())
     42             loss.backward()

Actually, I do not know what is going wrong with my dataset.
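
One thing I suspect (an untested guess, not a confirmed fix): the hidden state h stays on the local machine while x is a pointer on the remote worker, and the model is sent every iteration without ever being retrieved. Assuming the PySyft 0.2.x send/get API, the inner loop might instead look like the sketch below; note too that hooked nn.GRU/nn.LSTM support in PySyft was limited at the time, which could be the real issue.

for x, label in federated_train_loader:
    worker = x.location
    # fresh hidden state sized to this batch, created while the model is still local
    h = model.init_hidden(x.shape[0])
    h = h.send(worker) if model_type == "GRU" else tuple(e.send(worker) for e in h)
    model.send(worker)                      # ship the model to where the data lives
    model.zero_grad()
    out, h = model(x.to(device).float(), h)
    loss = criterion(out, label.to(device).float())
    loss.backward()
    optimizer.step()
    model.get()                             # bring the updated weights back
    avg_loss += loss.get().item()           # the pointer loss must be retrieved too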

About this issue

  • Original URL
  • State: closed
  • Created 5 years ago
  • Comments: 23 (9 by maintainers)

Most upvoted comments

Not a problem @niklausliu! We’re happy to have people passionate about the project like yourself. It seems this particular issue has gotten off-topic. I’d suggest you post your questions in the #team_pysyft channel on Slack and give 1-2 days for someone to reply. If not, submit a new issue here.

Thanks for your help. I am going to modify my code. Can you leave me your email address? I think I may still have a lot of bugs, because this is my first time using PySyft.

@niklausliu - We’re happy to help in the OpenMined community. But as a general rule, we’re not tech support. While you may have a problem running PySyft, and there may be a legitimate issue here, everyone who works on the OpenMined project works for free. If we spent all of our time fixing people’s implementations of PySyft, we’d never have time to write code for PySyft itself. Either way, providing our personal email addresses so that you can ping us questions is not acceptable.

I’m going to close this issue for now, as it seems that the problem you’re now having is totally unrelated to the original issue. If that’s not the case, please send me a message on Slack and I’ll be happy to re-open it.