Task-based End-to-end Model Learning in Stochastic Optimization

Overview

This repository is by Priya L. Donti, Brandon Amos, and J. Zico Kolter and contains the PyTorch source code to reproduce the experiments in our paper Task-based End-to-end Model Learning in Stochastic Optimization.

If you find this repository helpful in your publications, please consider citing our paper.

@inproceedings{donti2017task,
  title={Task-based end-to-end model learning in stochastic optimization},
  author={Donti, Priya and Amos, Brandon and Kolter, J Zico},
  booktitle={Advances in Neural Information Processing Systems},
  pages={5484--5494},
  year={2017}
}

Introduction

As machine learning techniques have become more ubiquitous, it has become common to see machine learning prediction algorithms operating within some larger process. However, the criteria by which we train machine learning algorithms often differ from the ultimate criteria on which we evaluate them.

This repository demonstrates an end-to-end approach for learning probabilistic machine learning models within the context of stochastic programming, in a manner that directly captures the ultimate task-based objective for which they will be used. Specifically, we evaluate our approach in the context of (a) a generic inventory stock problem and (b) an electrical grid scheduling task based on over eight years of data from PJM.

Please see our paper Task-based End-to-end Model Learning in Stochastic Optimization and the code in this repository (locuslab/e2e-model-learning) for more details about the general approach proposed and our initial experimental implementations.
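
To make the distinction concrete, here is a minimal, hypothetical sketch (not the repository's exact code) of the two training regimes: a standard maximum-likelihood/RMSE fit updates the model to match the data, while the task-based update passes the prediction through a differentiable solver for the downstream stochastic program and backpropagates the resulting task cost. `solver` and `task_loss` below are assumed placeholders for a problem-specific solver layer (e.g., a qpth QP layer) and cost function.

    import torch
    import torch.nn as nn

    model = nn.Linear(10, 24)              # hypothetical forecaster
    opt = torch.optim.Adam(model.parameters())

    def mle_step(x, y):
        # Standard two-stage pipeline: fit the model to the data (here via
        # MSE) and only later plug its predictions into the decision problem.
        opt.zero_grad()
        loss = nn.MSELoss()(model(x), y)
        loss.backward()
        opt.step()

    def task_step(x, y, solver, task_loss):
        # Task-based pipeline: pass the prediction through a differentiable
        # solver for the stochastic program and backpropagate the realized
        # task cost through the solver itself into the model.
        opt.zero_grad()
        z_star = solver(model(x))          # optimal decision under the forecast
        loss = task_loss(z_star, y).mean() # cost incurred once y is realized
        loss.backward()
        opt.step()

The newsvendor, load forecasting, and battery storage experiments below each instantiate this pattern with their own solver and task loss.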

Setup and Dependencies

The experiments are implemented in Python with PyTorch; the grid scheduling experiments additionally rely on the qpth differentiable QP solver (the tracebacks in the comments below reference qpth 0.0.6). Note that the code targets an older PyTorch API (e.g., Variable and loss.data[0]), so several of the issues reported below stem from running it under newer PyTorch releases.

Inventory Stock Problem (Newsvendor) Experiments

Experiments considering a "conditional" variation of the inventory stock problem. Problem instances are generated via random sampling.

newsvendor
├── main.py - Run inventory stock problem experiments. (See arguments.)
├── task_net.py - Functions for our task-based end-to-end model learning approach.
├── mle.py - Functions for linear maximum likelihood estimation approach.
├── mle_net.py - Functions for nonlinear maximum likelihood estimation approach.
├── policy_net.py - Functions for end-to-end neural network policy model.
├── batch.py - Helper functions for minibatched evaluation.
├── plot.py - Plot experimental results.
└── constants.py - Constants to set GPU vs. CPU.
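
For intuition, the sketch below shows a hedged, linear-only version of the newsvendor cost that the task-based models optimize (hypothetical coefficient values; the repository's variant also includes quadratic over/under terms, defined in main.py):

    import torch

    def newsvendor_cost(z, y, c_under=10.0, c_over=1.0):
        # Hypothetical linear penalties: pay c_under per unit of unmet demand
        # and c_over per unit of excess stock.
        return c_under * torch.clamp(y - z, min=0) + c_over * torch.clamp(z - y, min=0)

    # Ordering z=5 when demand turns out to be y=7 costs 10.0 * 2 = 20.
    print(newsvendor_cost(torch.tensor(5.0), torch.tensor(7.0)))   # tensor(20.)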

Load Forecasting and Generator Scheduling Experiments

Experiments considering a realistic grid-scheduling task, in which electricity generation is scheduled based on some (unknown) distribution over electricity demand. Historical load data for these experiments were obtained from PJM.

power_sched
├── main.py - Run load forecasting problem experiments. (See arguments.)
├── model_classes.py - Models used for experiments.
├── nets.py - Functions for RMSE, cost-weighted RMSE, and task nets.
├── plot.py - Plot experimental results.
├── constants.py - Constants to set GPU vs. CPU.
└── pjm_load_data_*.txt - Historical load data from PJM.
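
As a point of reference, here is a hedged sketch of the scheduling objective (the asymmetric penalties gamma_under=50 and gamma_over=0.5, the 24-hour horizon, and the ramp limit c_ramp=0.4 all appear in the params dict visible in the tracebacks below; the exact loss, including the ramp constraints handled by the QP solver, lives in nets.py and model_classes.py):

    import torch

    def scheduling_cost(z, y, gamma_under=50.0, gamma_over=0.5):
        # z: scheduled generation (batch x 24), y: realized demand (batch x 24).
        # Shortfalls are penalized two orders of magnitude more heavily than
        # surplus, reflecting the cost asymmetry of unmet electricity demand.
        under = torch.clamp(y - z, min=0)
        over = torch.clamp(z - y, min=0)
        return (gamma_under * under + gamma_over * over).sum(dim=1).mean()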

Price Forecasting and Battery Storage Experiments

Experiments considering a realistic battery arbitrage task, in which a grid-connected battery generates a charge/discharge schedule based on some (unknown) distribution over energy prices. Historical energy price data for these experiments were obtained from PJM.

battery_storage
├── main.py - Run battery storage problem experiments. (See arguments.)
├── model_classes.py - Models used for experiments.
├── nets.py - Functions for RMSE and task nets.
├── calc_stats.py - Calculate experimental result stats.
├── constants.py - Constants to set GPU vs. CPU.
└── storage_data.csv - Historical energy price data from PJM.
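
To ground the task, below is a hedged sketch of the nominal arbitrage problem for known prices, written with cvxpy rather than the repository's qpth-based machinery and with hypothetical parameter names; the repository's models instead optimize a schedule against a predicted distribution over prices:

    import cvxpy as cp
    import numpy as np

    def arbitrage_schedule(prices, capacity=1.0, c_max=0.5, d_max=0.5, eff=0.9):
        # Choose hourly charge/discharge rates that minimize net energy cost:
        # buy when prices are low, sell when they are high, subject to rate
        # limits and the state of charge staying within [0, capacity].
        T = len(prices)
        charge = cp.Variable(T)
        discharge = cp.Variable(T)
        soc = cp.cumsum(eff * charge - discharge)    # state of charge per hour
        cost = prices @ (charge - discharge)         # pay to charge, earn to discharge
        constraints = [charge >= 0, charge <= c_max,
                       discharge >= 0, discharge <= d_max,
                       soc >= 0, soc <= capacity]
        cp.Problem(cp.Minimize(cost), constraints).solve()
        return charge.value, discharge.value

    # Example: cheap overnight power, an expensive evening peak.
    prices = np.array([20.0] * 8 + [40.0] * 8 + [80.0] * 8)
    charge, discharge = arbitrage_schedule(prices)

This nominal LP is only a stand-in: the point of the repository is precisely that the schedule should be chosen against a learned, task-calibrated price distribution rather than a single known price vector.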

Acknowledgments

This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE1252522.

Licensing

Unless otherwise stated, the source code is copyright Carnegie Mellon University and licensed under the Apache 2.0 License.

Comments
  • Error in running battery_storage files.

    Hello, I am trying to run the main.py file of battery_storage, and I am getting a size issue, as attached below.

    C:\Users\AppData\Local\Programs\Python\Python37\Scripts\battery_storage>main.py --save C:\Users\AppData\Local\Programs\Python\Python37\Scripts\battery_storage --nRuns 1 --paramSet 1

    C:\Users\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\nn\modules\loss.py:445: UserWarning: Using a target size (torch.Size([500, 24])) that is different to the input size (torch.Size([500, 24])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
      return F.mse_loss(input, target, reduction=self.reduction)

    Traceback (most recent call last):
      File "C:\Users\AppData\Local\Programs\Python\Python37\Scripts\battery_storage\main.py", line 234, in <module>
        main()
      File "C:\Users\AppData\Local\Programs\Python\Python37\Scripts\battery_storage\main.py", line 73, in main
        model_rmse = nets.run_rmse_net(model_rmse, loaders_task, params, tensors_task)
      File "C:\Users\AppData\Local\Programs\Python\Python37\Scripts\battery_storage\nets.py", line 55, in run_rmse_net
        total_train_loss += train_loss.data[0] * X_train_.size(0)
    IndexError: invalid index of a 0-dim tensor. Use tensor.item() in Python or tensor.item<T>() in C++ to convert a 0-dim tensor to a number
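
    This is a PyTorch version mismatch rather than a data problem: the code reads a scalar loss with the 0.3/0.4-era accessor train_loss.data[0], which newer PyTorch releases reject with exactly this IndexError. A minimal, hedged illustration of the change the error message itself suggests:

        import torch

        train_loss = torch.tensor(0.123)   # stand-in for the computed scalar loss
        batch = torch.zeros(500, 24)       # stand-in for X_train_
        total_train_loss = 0.0

        # Old (PyTorch <= 0.4): total_train_loss += train_loss.data[0] * batch.size(0)
        # New (PyTorch >= 0.5):
        total_train_loss += train_loss.item() * batch.size(0)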

    opened by parimuns 1
  • Error in running code on CPU.

    Hello, I am trying to reproduce the code for power scheduling, but I don't have access to a GPU, so I am running the code on a Windows system with a CPU. In your code, I have removed cuda() wherever it was written, but I am getting errors that I am not able to resolve. Your support in resolving these issues would be highly appreciated. I have attached the modified main.py and nets.py files.

    For power scheduling, I am getting an error in the main file at model_rmse = nets.run_rmse_net(model_rmse, variables_rmse, X_train, Y_train):

    files.zip

    AttributeError                            Traceback (most recent call last)
    <ipython-input> in <module>
    ----> 1 model_rmse = nets.run_rmse_net(model_rmse, variables_rmse, X_train, Y_train)

    ~\Untitled Folder 1\nets.py in run_rmse_net(model, variables, X_train, Y_train)
         42     model.train()
         43     train_loss = nn.MSELoss()(
    ---> 44         model(variables['X_train_']), variables['Y_train_'])
         45     train_loss.backward()
         46     opt.step()

    c:\users\appdata\local\programs\python\python37\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
        720             result = self._slow_forward(*input, **kwargs)
        721         else:
    --> 722             result = self.forward(*input, **kwargs)
        723         for hook in itertools.chain(
        724                 _global_forward_hooks.values(),

    c:\users\appdata\local\programs\python\python37\lib\site-packages\torch\nn\modules\loss.py in forward(self, input, target)
        443
        444     def forward(self, input: Tensor, target: Tensor) -> Tensor:
    --> 445         return F.mse_loss(input, target, reduction=self.reduction)
        446
        447

    c:\users\appdata\local\programs\python\python37\lib\site-packages\torch\nn\functional.py in mse_loss(input, target, size_average, reduce, reduction)
       2633         mse_loss, tens_ops, input, target, size_average=size_average, reduce=reduce,
       2634         reduction=reduction)
    -> 2635     if not (target.size() == input.size()):
       2636         warnings.warn("Using a target size ({}) that is different to the input size ({}). "
       2637                       "This will likely lead to incorrect results due to broadcasting. "

    AttributeError: 'tuple' object has no attribute 'size'
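
    The final frame shows that one of the two arguments reaching F.mse_loss is a Python tuple rather than a tensor, most likely introduced while editing out the .cuda() calls (a stray trailing comma is enough; this is an assumption based on the traceback, not a confirmed diagnosis). A hedged illustration:

        import torch
        import torch.nn as nn

        pred = torch.randn(8, 24)
        target = torch.randn(8, 24),        # trailing comma silently makes a tuple
        # nn.MSELoss()(pred, target)        # AttributeError: 'tuple' object has no attribute 'size'

        target = target[0]                  # unwrap (or remove the stray comma)
        loss = nn.MSELoss()(pred, target)   # both arguments are now tensors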

    opened by parimuns 1
  • Error

    mldl@mldlUB1604:~/ub16_prj/e2e-model-learning/power_sched$ python3 main.py --save .
    setGPU: Setting GPU to: 0
    0 0.07094819098711014 0.01747439242899418
    1 0.0608951672911644 0.015597431920468807
    2 0.04586026072502136 0.012867853045463562
    3 0.03554176539182663 0.011823796667158604
    ..................................
    997 0.002964276820421219 0.010087293572723866
    998 0.0029753427952528 0.010150546208024025
    999 0.0028919086325913668 0.011296983808279037

    TypeError                                 Traceback (most recent call last)
    /usr/local/lib/python3.5/dist-packages/qpth-0.0.6-py3.5.egg/qpth/solvers/pdipm/batch.py in pre_factor_kkt(Q, G, A)
        357     try:
    --> 358         Q_LU = Q.btrifact(pivot=False)
        359     except:

    TypeError: btrifact received an invalid combination of arguments - got (pivot=bool, ), but expected one of:
     * ()
       didn't match because some of the keywords were incorrect: pivot
     * (torch.cuda.IntTensor info)

    During handling of the above exception, another exception occurred:

    RuntimeError                              Traceback (most recent call last)
    /home/mldl/ub16_prj/e2e-model-learning/power_sched/main.py in <module>()
        151
        152 if __name__ == '__main__':
    --> 153     main()

    /home/mldl/ub16_prj/e2e-model-learning/power_sched/main.py in main()
         71     model_rmse = nets.run_rmse_net(
         72         model_rmse, variables_rmse, X_train, Y_train)
    ---> 73     nets.eval_net("rmse_net", model_rmse, variables_rmse, params, save_folder)
         74
         75     # Randomly construct hold-out set for task net training.
    [verbose tensor dumps elided; params = {'c_ramp': 0.4, 'n': 24, 'gamma_over': 0.5,
     'gamma_under': 50}, with cuda.FloatTensor inputs of size 2553x149 / 2553x24 (train)
     and 639x149 / 639x24 (test)]

    /home/mldl/ub16_prj/e2e-model-learning/power_sched/nets.py in eval_net(which='rmse_net', ...)
        143     # Eval model on task loss
    --> 144     Y_sched_train = solver(mu_pred_train.double(), sig_pred_train.double())
        145     train_loss_task = task_loss(
        146         Y_sched_train.float(), variables['Y_train_'], params)

    /home/mldl/ub16_prj/e2e-model-learning/power_sched/model_classes.py in forward(self, mu, sig)
        150         d2g = GQuadraticApprox(self.params["gamma_under"],
        151                                self.params["gamma_over"])(z0, mu0, sig0)
    --> 152         z0_new = SolveSchedulingQP(self.params)(z0, mu0, dg, d2g)
        153         solution_diff = (z0 - z0_new).norm().data[0]
        154         print("+ SQP Iter: {}, Solution diff = {}".format(i, solution_diff))

    /home/mldl/ub16_prj/e2e-model-learning/power_sched/model_classes.py in forward(self, z0, mu, dg, d2g)
        118         h = self.h.unsqueeze(0).expand(nBatch, self.h.size(0))
        119
    --> 120         out = QPFunction(verbose=False)(Q, p, G, h, self.e, self.e)
        121         return out
    [more tensor dumps elided; Q: 2553x24x24, p: 2553x24, G: 2553x46x24, h: 2553x46
     cuda.DoubleTensors, and self.e is a cuda.DoubleTensor with no dimension]

    /usr/local/lib/python3.5/dist-packages/qpth-0.0.6-py3.5.egg/qpth/qp.py in forward(self, Q_, p_, G_, h_, A_, b_)
         90     if self.solver == QPSolvers.PDIPM_BATCHED:
    ---> 91         self.Q_LU, self.S_LU, self.R = pdipm_b.pre_factor_kkt(Q, G, A)
         92         zhats, self.nus, self.lams, self.slacks = pdipm_b.forward(
         93             Q, p, G, h, A, b, self.Q_LU, self.S_LU, self.R,

    /usr/local/lib/python3.5/dist-packages/qpth-0.0.6-py3.5.egg/qpth/solvers/pdipm/batch.py in pre_factor_kkt(Q, G, A)
        362     Please make sure that your Q matrix is PSD and has
        363     a non-zero diagonal.
    --> 364     """)

    RuntimeError: qpth Error: Cannot perform LU factorization on Q. Please make sure that your Q matrix is PSD and has a non-zero diagonal.

    ipdb>

    opened by loveJasmine 1
  • Error

    mldl@mldlUB1604:~/ub16_prj/e2e-model-learning/newsvendor$ python3 main.py --save .
    setGPU: Setting GPU to: 0
    316.580809054 501.94985305
    TEST SET RESULTS: Average loss: 1226.7804
    Epoch: 0 [100/100 (100%)]   Loss: 986.8513
    986.851318359375 1226.7803955078125
    TEST SET RESULTS: Average loss: 1225.4849
    Epoch: 1 [100/100 (100%)]   Loss: 979.9048
    979.9048461914062 1225.48486328125

    ...........................................

    Epoch: 999 [100/100 (100%)]   Loss: 191.7679
    191.7679443359375 528.0696411132812 528.0696411132812
    /home/mldl/ub16_prj/e2e-model-learning/newsvendor/plot.py:36: MatplotlibDeprecationWarning: The set_axis_bgcolor function was deprecated in version 2.0. Use set_facecolor instead.
      ax.set_axis_bgcolor("none")
    /usr/local/lib/python3.5/dist-packages/numpy/core/numeric.py:531: UserWarning: Warning: converting a masked element to nan.
    /usr/local/lib/python3.5/dist-packages/matplotlib/axes/_base.py:2903: UserWarning: Attempting to set identical left==right results in singular transformations; automatically expanding. left=100.0, right=100.0

    *** Error in `pdonti..linear': free(): invalid pointer: 0x00007ff6f29b4ac0 ***
    [glibc backtrace and memory map elided; the frames point into matplotlib's ttconv
     extension (TTStreamWriter / get_pdf_charprocs) during PDF font conversion]

    opened by loveJasmine 1
  • The NLL loss function in the newsvendor task is wrongly used.

    NLLLoss() (in PyTorch) expects a log-probability as input, but this code passes it the raw probability. So it is actually minimizing -sum_i p_i instead of -sum_i log(p_i). Although this loss can still roughly maximize the likelihood, it differs from the description in the paper. This happens in both task_net and mle_net.

    Another bug: the "# Nonlinear MLE net" section of main.py actually runs the linear MLE net; you need to pass the parameter is_nonlinear=True.
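
    For reference, a hedged sketch of the distinction described above, using generic tensors rather than the repository's models: nn.NLLLoss consumes log-probabilities, so raw probabilities must go through torch.log (or be produced with log_softmax) first.

        import torch
        import torch.nn as nn

        probs = torch.softmax(torch.randn(32, 10), dim=1)   # raw probabilities p_i
        targets = torch.randint(0, 10, (32,))

        wrong = nn.NLLLoss()(probs, targets)                # minimizes -sum_i p_i
        right = nn.NLLLoss()(torch.log(probs), targets)     # minimizes -sum_i log p_i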

    opened by dichen9412 0
  • Low GPU utilization for sequential quadratic programming solver

    When I train the task_net from the power scheduling problem (modified to work with my data), the SQP solving process takes forever. While it runs, my GPU (Tesla K80) utilization hovers at only ~3%. I'm not sure whether that is normal or what the bottleneck may be, but this step significantly impacts training time. Training the RMSE nets is extremely quick, of course.

    opened by ryanvolpi 0