Scenarios, tutorials and demos for Autonomous Driving

Overview

The Autonomous Driving Cookbook (Preview)


NOTE:

This project is developed and maintained by Project Road Runner at Microsoft Garage. It is currently a work in progress. We will continue to add more tutorials and scenarios based on requests from our users and the availability of our collaborators.


Over the last half decade or so, autonomous driving has moved far beyond being a moonshot idea. It has quickly become one of the biggest technologies of our time, one that promises to shape our tomorrow much as the first cars did. A big driver of this change is recent advances in software (artificial intelligence), hardware (GPUs, FPGAs, etc.) and cloud computing, which have enabled the ingestion and processing of large amounts of data and made it possible for companies to push for levels 4 and 5 of autonomy. Achieving those levels of autonomy, however, requires training on hundreds of millions, and sometimes hundreds of billions, of miles worth of driving data to demonstrate reliability, according to a report from RAND.

Despite the large amount of data collected every day, it is still insufficient to meet the demands of the ever-increasing complexity of the AI models required by autonomous vehicles. One way to collect such huge amounts of data is through simulation. Simulation not only makes it easy to gather data from a variety of scenarios that would take days, if not months, to encounter in the real world (different weather conditions, varying daylight, etc.), it also provides a safe test bed for trained models. With behavioral cloning, you can prepare capable models in simulation and fine-tune them with a relatively small amount of real-world data. Then there are models built using techniques like reinforcement learning, which can practically only be trained in simulation. Simulators such as AirSim make working on these scenarios much easier.
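
To make the behavioral cloning idea concrete, the sketch below shows a minimal Keras model that maps a single (simulated) camera frame to a steering angle. It is an illustration only, not the cookbook's actual network: the image size, layer sizes and the random placeholder data are assumptions made for this example.

    import numpy as np
    from keras.models import Sequential
    from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

    # Hypothetical input size; the tutorials crop the camera frame to a region of interest.
    IMAGE_SHAPE = (59, 255, 3)

    model = Sequential([
        Conv2D(16, (3, 3), activation='relu', padding='same', input_shape=IMAGE_SHAPE),
        MaxPooling2D(2),
        Conv2D(32, (3, 3), activation='relu', padding='same'),
        MaxPooling2D(2),
        Flatten(),
        Dense(64, activation='relu'),
        Dropout(0.2),
        Dense(1)  # regression output: the predicted steering angle
    ])
    model.compile(optimizer='adam', loss='mse')

    # Placeholder arrays standing in for simulated frames and recorded steering angles.
    frames = np.random.rand(8, *IMAGE_SHAPE).astype('float32')
    steering = np.random.uniform(-1, 1, size=(8, 1)).astype('float32')
    model.fit(frames, steering, epochs=1, batch_size=4)

In practice the frames and labels come from recordings made in the simulator, and the trained network can then be fine-tuned or evaluated against real-world data.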

We believe that the best way to make a technology grow is to make it easily available and accessible to everyone, and that is best achieved by keeping the barrier to entry as low as possible. At Microsoft, our mission is to empower every person and organization on the planet to achieve more, and that has been our primary motivation for preparing this cookbook. Our aim with this project is to help you quickly get acquainted with the various onboarding scenarios in autonomous driving, so you can take what you learn here and employ it in your everyday job with a minimal barrier to entry.

Who is this cookbook for?

Our plan is to make this cookbook a valuable resource for beginners, researchers and industry experts alike. Tutorials in the cookbook are presented as Jupyter notebooks, making it very easy for you to download the instructions and get started without much setup time. To help further, wherever needed, tutorials come with their own datasets, helper scripts and binaries. While the tutorials leverage popular open-source tools (Keras, TensorFlow, etc.) as well as Microsoft open-source and commercial technology (AirSim, Azure virtual machines, Batch AI, CNTK, etc.), the primary focus is on the content and learning, enabling you to take what you learn here and apply it to your work using the tools of your choice.

We would love to hear your feedback on how we can evolve this project to reach that goal. Please use the GitHub Issues section to get in touch with us regarding ideas and suggestions.

Tutorials available

Currently, the following tutorials are available:

  • Autonomous Driving using End-to-End Deep Learning: an AirSim Tutorial (AirSimE2EDeepLearning)

The following tutorials will be available soon:

  • Lane Detection using Deep Learning

Contributing

Please read the instructions and guidelines for collaborators if you wish to add a new tutorial to the cookbook.

This project welcomes and encourages contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

Comments
  • Numpy Error, Help me


    TypeError: bad operand type for unary -: 'str'

    python version = 3.6 Numpy version = 1.14.3 scipy version = 1.10

    Help me...

    Can I know which package versions you used at the time of implementation?

    opened by Meokji 15
  • TestModel error - "Permission denied" error


    Your issue may already be reported! Please make sure to search all open and closed issues before starting a new one.

    Please fill out the sections below so we can understand your issue better and resolve it quickly.

    Problem description

    I have finished running TrainModel.ipynb and the results look good, but when I tried to run TestModel.ipynb I got the error below (the same as @llyhbuaa). It still does not work after I deleted the models and ran TrainModel.ipynb again. I have connected to the AirSim simulator.

    Problem details

    OSError                                   Traceback (most recent call last)
    <ipython-input> in <module>()
    ----> 1 model = load_model(MODEL_PATH)
          2 
          3 client = CarClient()
          4 client.confirmConnection()
          5 client.enableApiControl(True)

    C:\ProgramData\Anaconda3\lib\site-packages\keras\models.py in load_model(filepath, custom_objects, compile)
        232             return custom_objects[obj]
        233         return obj
    --> 234     with h5py.File(filepath, mode='r') as f:
        235         # instantiate model
        236         model_config = f.attrs.get('model_config')

    C:\ProgramData\Anaconda3\lib\site-packages\h5py\_hl\files.py in __init__(self, name, mode, driver, libver, userblock_size, swmr, **kwds)
        267             with phil:
        268                 fapl = make_fapl(driver, libver, **kwds)
    --> 269                 fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr)
        270 
        271         if swmr_support:

    C:\ProgramData\Anaconda3\lib\site-packages\h5py\_hl\files.py in make_fid(name, mode, userblock_size, fapl, fcpl, swmr)
         97         if swmr and swmr_support:
         98             flags |= h5f.ACC_SWMR_READ
    ---> 99         fid = h5f.open(name, flags, fapl=fapl)
        100     elif mode == 'r+':
        101         fid = h5f.open(name, h5f.ACC_RDWR, fapl=fapl)

    h5py\_objects.pyx in h5py._objects.with_phil.wrapper()

    h5py\_objects.pyx in h5py._objects.with_phil.wrapper()

    h5py\h5f.pyx in h5py.h5f.open()

    OSError: Unable to open file (unable to open file: name = 'D:\EndToEndDeepLearning\Model', errno = 13, error message = 'Permission denied', flags = 0, o_flags = 0)
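
    One hedged reading of this error (errno 13 on a path with no file extension): MODEL_PATH seems to point at the Model directory itself rather than at a specific .h5 checkpoint inside it, and h5py cannot open a directory. A minimal sketch of selecting an actual checkpoint file; the directory name and glob pattern are assumptions based on the path shown in the error:

    import glob
    import os
    from keras.models import load_model

    model_dir = 'D:\\EndToEndDeepLearning\\Model'                    # assumed checkpoint directory
    checkpoints = sorted(glob.glob(os.path.join(model_dir, '*.h5')))
    MODEL_PATH = checkpoints[-1]                                     # a concrete .h5 file, not the folder

    model = load_model(MODEL_PATH)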

    Experiment/Environment details

    • Tutorial used: AirSimE2EDeepLearning
    • Environment used: Hawaii
    • Versions of artifacts used (if applicable): Python 3.6, Keras 2.1.2, CNTK 2.5.1
    opened by Harold-lkk 15
  • Connecting issue with AirSim


    I trained the model successfully and tried to run TestModel.ipynb, but the connection with AirSim does not work.

    1. I changed MODEL_PATH to the trained model: MODEL_PATH = 'model/models/model_model.26-0.0002362.h5' -> Runs well.

    2. Run AD_Cookbook_Start_AirSim.ps1 using PowerShell. -> Runs well.

    3. No further progress while connecting to AirSim; it hangs at "Waiting for connection:". -> Does not work.

    I've already tested the CNTK-AirSim examples (Drone, Car) and they work well on my laptop. Test environment: Windows 10 64-bit, GTX 1060 6 GB, Anaconda3.

    opened by edimoon777 15
  • Steering angle values have not been completely recorded.


    I tried to run this Autonomous Driving Cookbook on other maps, like the neighborhood. However, the steering angle values in "airsim_rec.txt" take only three values: -1, 0 and 1. I have attached one file below: airsim_rec.txt

    Don't know how to deal with it. Has anyone ever met this issue? Thank you!

    [Attached plots: distribution_plot, distribution_plot2]

    opened by BrianN92 12
  • Cannot use segmentation images as data set.


    Because the shadows of trees in "Neighborhood" often cause confusion, I tried to use the recorded segmentation images as the training data set. However, the car cannot move properly during testing. I am wondering whether I can train it with segmentation images or not?

    opened by BrianN92 9
  • xbox one controller with Hawaii


    Thanks for writing this tutorial! But there is a question about the xbox controller.

    1. Using the scene "AD_Cookbook_AirSim\Hawaii" with an Xbox One controller, the input steering and throttle values are too large. How do I set this properly?

    2. When problem 1 happens, pressing Backspace does not reset all values to 0. The car keeps rotating.

    opened by mnpica 8
  • Version of packages


    I've tried to run 'TrainModel.ipynb' using Jupyter Notebook, but it is difficult to install the OpenCV libraries because of version matching.

    Can you tell me the versions of additional packages such as OpenCV, CUDA and cuDNN?

    I'm trying to run the code with: Python 3.5.2, Anaconda 4.1.1 (64-bit), CNTK 2.4, Keras 2.1.4, tensorflow-gpu 1.5.0, CUDA 9.1, cuDNN 9.0, OpenCV ??

    The error message caused by OpenCV is below.

    C:\Anaconda3\lib\site-packages\cv2\__init__.py in <module>()
          2 import os
          3 
    ----> 4 from .cv2 import *
          5 from .data import *
          6 

    ImportError: DLL load failed:

    opened by edimoon777 8
  • TypeError: unsupported operand type(s) for *: 'ZMQIOLoop' and 'float'


    Problem description

    I am trying to run the TestModel and I have found a TypeError when connecting to AirSim. It seems to me that it could come from the Tornado version, but actually I have tried with more than one version and the problem continues.

    Problem details

    Details of the problem are as follows:

    When I run the script .\AD_Cookbook_Start_Airsim.ps1 landscape, everything runs perfectly in UE4 and AirSim, but once I run the Jupyter notebook (Step 2: "Test the model") as follows, I get the error after the second cell (posted below):

    model = load_model(MODEL_PATH)
    client = CarClient()
    client.confirmConnection()
    client.enableApiControl(True)
    car_controls = CarControls()
    print('Connection established!')

    Then I found the following error message:


    TypeError                                 Traceback (most recent call last)
    <ipython-input> in <module>
          1 model = load_model(MODEL_PATH)
          2 
    ----> 3 client = CarClient()
          4 client.confirmConnection()
          5 client.enableApiControl(True)

    ~\PycharmProjects\Cookbook\AutonomousDrivingCookbook-master\AutonomousDrivingCookbook-master\AirSimE2EDeepLearning\AirSimClient.py in __init__(self, ip)
        504         if (ip == ""):
        505             ip = "127.0.0.1"
    --> 506         super(CarClient, self).__init__(ip, 42451)
        507 
        508     def setCarControls(self, controls):

    ~\PycharmProjects\Cookbook\AutonomousDrivingCookbook-master\AutonomousDrivingCookbook-master\AirSimE2EDeepLearning\AirSimClient.py in __init__(self, ip, port)
        148 class AirSimClientBase:
        149     def __init__(self, ip, port):
    --> 150         self.client = msgpackrpc.Client(msgpackrpc.Address(ip, port), timeout = 3600)
        151 
        152     def ping(self):

    ~\Anaconda3\envs\Cookbook\lib\site-packages\msgpackrpc\client.py in __init__(self, address, timeout, loop, builder, reconnect_limit, pack_encoding, unpack_encoding)
         13 
         14         if timeout:
    ---> 15             loop.attach_periodic_callback(self.step_timeout, 1000)  # each 1s
         16 
         17     @classmethod

    ~\Anaconda3\envs\Cookbook\lib\site-packages\msgpackrpc\loop.py in attach_periodic_callback(self, callback, callback_time)
         37 
         38         self._periodic_callback = ioloop.PeriodicCallback(callback, callback_time, self._ioloop)
    ---> 39         self._periodic_callback.start()
         40 
         41     def dettach_periodic_callback(self):

    ~\Anaconda3\envs\Cookbook\lib\site-packages\tornado\ioloop.py in start(self)
        885         self._running = True
        886         self._next_timeout = self.io_loop.time()
    --> 887         self._schedule_next()
        888 
        889     def stop(self) -> None:

    ~\Anaconda3\envs\Cookbook\lib\site-packages\tornado\ioloop.py in _schedule_next(self)
        913     def _schedule_next(self) -> None:
        914         if self._running:
    --> 915             self._update_next(self.io_loop.time())
        916             self._timeout = self.io_loop.add_timeout(self._next_timeout, self._run)
        917 

    ~\Anaconda3\envs\Cookbook\lib\site-packages\tornado\ioloop.py in _update_next(self, current_time)
        920         if self.jitter:
        921             # apply jitter fraction
    --> 922             callback_time_sec *= 1 + (self.jitter * (random.random() - 0.5))
        923         if self._next_timeout <= current_time:
        924             # The period should be measured from the start of one call

    TypeError: unsupported operand type(s) for *: 'ZMQIOLoop' and 'float'
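
    A hedged observation rather than a confirmed fix: the failing frame sits inside Tornado's PeriodicCallback, while the msgpack-rpc client used by the old AirSimClient.py was written against Tornado 4.x, so the installed Tornado 6.0.2 is a plausible culprit. A small diagnostic sketch (only the two packages already named in the traceback are used):

    # Print the versions of the two packages involved in the traceback above.
    import tornado
    import msgpackrpc

    print('tornado version:', tornado.version)
    print('msgpackrpc installed at:', msgpackrpc.__file__)
    # If tornado reports 5.x or 6.x here, recreating the environment with a 4.x
    # release (for example tornado 4.5.3) is a reasonable next experiment.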

    Experiment/Environment details

    • Tutorial used: AirSimE2EDeepLearning
    • Environment used: landscape (all my environment are running perfectly)
    • Versions of artifacts used: Windows 10, Anaconda, Python 3.5, Tornado 6.0.2

    Thanks a lot in advance.

    opened by gonzaloaguilarjimenez 7
  • error train the model


    Python 3.5, cuDNN 5.1, Keras 2.1.2, numpy 1.14.2

    Traceback (most recent call last):
      File "C:\Users\oleg\Anaconda3\lib\site-packages\traitlets\traitlets.py", line 526, in get
        value = obj._trait_values[self.name]
    KeyError: 'layout'

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "C:\Users\oleg\Anaconda3\lib\site-packages\traitlets\traitlets.py", line 526, in get
        value = obj._trait_values[self.name]
    KeyError: 'kernel'

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "E:/AutonomousDriving/AirSimE2EDeepLearning/test2.py", line 122, in <module>
        validation_data=eval_generator, validation_steps=num_eval_examples//batch_size, verbose=2)
      File "C:\Users\oleg\Anaconda3\lib\site-packages\keras\legacy\interfaces.py", line 87, in wrapper
        return func(*args, **kwargs)
      File "C:\Users\oleg\Anaconda3\lib\site-packages\keras\engine\training.py", line 2072, in fit_generator
        callbacks.on_train_begin()
      File "C:\Users\oleg\Anaconda3\lib\site-packages\keras\callbacks.py", line 126, in on_train_begin
        callback.on_train_begin(logs)
      File "C:\Users\oleg\Anaconda3\lib\site-packages\keras_tqdm\tqdm_callback.py", line 129, in on_train_begin
        total=epochs)
      File "C:\Users\oleg\Anaconda3\lib\site-packages\keras_tqdm\tqdm_callback.py", line 67, in build_tqdm_outer
        return self.tqdm(desc=desc, total=total, leave=self.leave_outer)
      File "C:\Users\oleg\Anaconda3\lib\site-packages\keras_tqdm\tqdm_notebook_callback.py", line 33, in tqdm
        return tqdm_notebook(desc=desc, total=total, leave=leave)
      File "C:\Users\oleg\Anaconda3\lib\site-packages\tqdm\__init__.py", line 22, in tqdm_notebook
        return _tqdm_notebook(*args, **kwargs)
      File "C:\Users\oleg\Anaconda3\lib\site-packages\tqdm\_tqdm_notebook.py", line 176, in __init__
        self.sp = self.status_printer(self.fp, self.total, self.desc)
      File "C:\Users\oleg\Anaconda3\lib\site-packages\tqdm\_tqdm_notebook.py", line 96, in status_printer
        pbar = IntProgress(min=0, max=total)
      File "C:\Users\oleg\Anaconda3\lib\site-packages\ipywidgets\widgets\widget_int.py", line 57, in __init__
        super(cls, self).__init__(**kwargs)
      File "C:\Users\oleg\Anaconda3\lib\site-packages\ipywidgets\widgets\widget_int.py", line 95, in __init__
        super(_BoundedInt, self).__init__(**kwargs)
      File "C:\Users\oleg\Anaconda3\lib\site-packages\ipywidgets\widgets\widget_int.py", line 76, in __init__
        super(_Int, self).__init__(**kwargs)
      File "C:\Users\oleg\Anaconda3\lib\site-packages\ipywidgets\widgets\domwidget.py", line 90, in __init__
        super(DOMWidget, self).__init__(*pargs, **kwargs)
      File "C:\Users\oleg\Anaconda3\lib\site-packages\ipywidgets\widgets\widget.py", line 184, in __init__
        self.open()
      File "C:\Users\oleg\Anaconda3\lib\site-packages\ipywidgets\widgets\widget.py", line 197, in open
        state, buffer_keys, buffers = self._split_state_buffers(self.get_state())
      File "C:\Users\oleg\Anaconda3\lib\site-packages\ipywidgets\widgets\widget.py", line 291, in get_state
        value = to_json(getattr(self, k), self)
      File "C:\Users\oleg\Anaconda3\lib\site-packages\traitlets\traitlets.py", line 554, in __get__
        return self.get(obj, cls)
      File "C:\Users\oleg\Anaconda3\lib\site-packages\traitlets\traitlets.py", line 533, in get
        value = self._validate(obj, dynamic_default())
      File "C:\Users\oleg\Anaconda3\lib\site-packages\ipywidgets\widgets\domwidget.py", line 23, in _layout_default
        return Layout()
      File "C:\Users\oleg\Anaconda3\lib\site-packages\ipywidgets\widgets\widget.py", line 184, in __init__
        self.open()
      File "C:\Users\oleg\Anaconda3\lib\site-packages\ipywidgets\widgets\widget.py", line 203, in open
        self.comm = Comm(**args)
      File "C:\Users\oleg\Anaconda3\lib\site-packages\ipykernel\comm\comm.py", line 56, in __init__
        self.open(data=data, metadata=metadata, buffers=buffers)
      File "C:\Users\oleg\Anaconda3\lib\site-packages\ipykernel\comm\comm.py", line 83, in open
        comm_manager = getattr(self.kernel, 'comm_manager', None)
      File "C:\Users\oleg\Anaconda3\lib\site-packages\traitlets\traitlets.py", line 554, in __get__
        return self.get(obj, cls)
      File "C:\Users\oleg\Anaconda3\lib\site-packages\traitlets\traitlets.py", line 533, in get
        value = self._validate(obj, dynamic_default())
      File "C:\Users\oleg\Anaconda3\lib\site-packages\traitlets\traitlets.py", line 589, in _validate
        value = self.validate(obj, value)
      File "C:\Users\oleg\Anaconda3\lib\site-packages\traitlets\traitlets.py", line 1681, in validate
        self.error(obj, value)
      File "C:\Users\oleg\Anaconda3\lib\site-packages\traitlets\traitlets.py", line 1528, in error
        raise TraitError(e)
    traitlets.traitlets.TraitError: The 'kernel' trait of a Comm instance must be a Kernel, but a value of class 'NoneType' (i.e. None) was specified.
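
    A hedged note on the trace: it runs through keras_tqdm's notebook progress bar and ipywidgets, which need a live Jupyter kernel, yet the entry point is a plain script (test2.py). A minimal sketch of calling fit_generator without the notebook-only callback; the model, generators and count variables are assumed to be defined as in the tutorial code:

    from keras.callbacks import ModelCheckpoint, EarlyStopping

    # `model`, `train_generator`, `eval_generator`, `num_train_examples`,
    # `num_eval_examples` and `batch_size` are assumed to come from TrainModel.ipynb.
    plain_callbacks = [
        ModelCheckpoint('model.{epoch:02d}-{val_loss:.7f}.h5', save_best_only=True),
        EarlyStopping(monitor='val_loss', patience=10),
    ]

    model.fit_generator(train_generator,
                        steps_per_epoch=num_train_examples // batch_size,
                        epochs=100,
                        callbacks=plain_callbacks,  # no TQDMNotebookCallback outside Jupyter
                        validation_data=eval_generator,
                        validation_steps=num_eval_examples // batch_size,
                        verbose=2)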

    opened by hyybuaa 7
  • TypeError: bad operand type for unary -: 'str'


    This issue may be silly; I'm new to this area. Sorry to take your time.

    When I run this cell in TrainModel.ipynb:

    def draw_image_with_label(img, label, prediction=None):
        theta = label * 0.69 #Steering range for the car is +- 40 degrees -> 0.69 radians
        line_length = 50
        line_thickness = 3
        label_line_color = (255, 0, 0)
        prediction_line_color = (0, 0, 255)
        pil_image = image.array_to_img(img, K.image_data_format(), scale=True)
        print('Actual Steering Angle = {0}'.format(label))
        draw_image = pil_image.copy()
        image_draw = ImageDraw.Draw(draw_image)
        first_point = (int(img.shape[1]/2),img.shape[0])
        second_point = (int((img.shape[1]/2) + (line_length * math.sin(theta))), int(img.shape[0] - (line_length * math.cos(theta))))
        image_draw.line([first_point, second_point], fill=label_line_color, width=line_thickness)
        
        if (prediction is not None):
            print('Predicted Steering Angle = {0}'.format(prediction))
            print('L1 Error: {0}'.format(abs(prediction-label)))
            theta = prediction * 0.69
            second_point = (int((img.shape[1]/2) + (line_length * math.sin(theta))), int(img.shape[0] - (line_length * math.cos(theta))))
            image_draw.line([first_point, second_point], fill=prediction_line_color, width=line_thickness)
        
        del image_draw
        plt.imshow(draw_image)
        plt.show()
    
    [sample_batch_train_data, sample_batch_test_data] = next(train_generator)
    for i in range(0, 3, 1):
        draw_image_with_label(sample_batch_train_data[0][i], sample_batch_test_data[i])
    

    The error message is as below:


    TypeError                                 Traceback (most recent call last)
    <ipython-input-13-6955bc10ff72> in <module>
         24     plt.show()
         25 
    ---> 26 [sample_batch_train_data, sample_batch_test_data] = next(train_generator)
         27 for i in range(0, 3, 1):
         28     draw_image_with_label(sample_batch_train_data[0][i], sample_batch_test_data[i])
    
    ~\AppData\Local\conda\conda\envs\my_airsim\lib\site-packages\keras_preprocessing\image.py in __next__(self, *args, **kwargs)
       1524 
       1525     def __next__(self, *args, **kwargs):
    -> 1526         return self.next(*args, **kwargs)
       1527 
       1528     def _get_batches_of_transformed_samples(self, index_array):
    
    D:\AirSim\AutonomousDrivingCookbook\AirSimE2EDeepLearning\Generator.py in next(self)
        240         # so it can be done in parallel
        241 
    --> 242         return self.__get_indexes(index_array)
        243 
        244     def __get_indexes(self, index_array):
    
    D:\AirSim\AutonomousDrivingCookbook\AirSimE2EDeepLearning\Generator.py in __get_indexes(self, index_array)
        262                 x_images = x_images[self.roi[0]:self.roi[1], self.roi[2]:self.roi[3], :]
        263 
    --> 264             transformed = self.image_data_generator.random_transform_with_states(x_images.astype(K.floatx()))
        265             x_images = transformed[0]
        266             is_horiz_flipped.append(transformed[1])
    
    D:\AirSim\AutonomousDrivingCookbook\AirSimE2EDeepLearning\Generator.py in random_transform_with_states(self, x, seed)
        138             x = image.random_channel_shift(x,
        139                                            self.channel_shift_range,
    --> 140                                      img_channel_axis)
        141         if self.horizontal_flip:
        142             if np.random.random() < 0.5:
    
    ~\AppData\Local\conda\conda\envs\my_airsim\lib\site-packages\keras_preprocessing\image.py in random_channel_shift(x, intensity_range, channel_axis)
        199         Numpy image tensor.
        200     """
    --> 201     intensity = np.random.uniform(-intensity_range, intensity_range)
        202     return apply_channel_shift(x, intensity, channel_axis=channel_axis)
        203 
    
    TypeError: bad operand type for unary -: 'str'
    

    It seems the data type of "intensity_range" is 'str', when it should be a number, but I cannot find where channel_shift_range was assigned a 'str'. Please help me out.
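
    A hedged guess at the cause: Keras' random_channel_shift negates the range it is given, so a channel_shift_range passed as a string (for example '0.2' instead of 0.2) fails exactly like this. The place to check is wherever the data generator is constructed (the cookbook's DriveDataGenerator builds on the same Keras preprocessing machinery). A small self-contained illustration with the plain Keras ImageDataGenerator:

    import numpy as np
    from keras.preprocessing.image import ImageDataGenerator

    frames = np.random.rand(4, 59, 255, 3).astype('float32')  # placeholder image batch

    good = ImageDataGenerator(channel_shift_range=0.2)         # numeric range: works
    next(good.flow(frames, batch_size=2))

    bad = ImageDataGenerator(channel_shift_range='0.2')        # string range: fails later with
    next(bad.flow(frames, batch_size=2))                       # TypeError: bad operand type for unary -: 'str'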

    opened by GetDarren 6
  • Tips for optimizing model in new environment?


    First of all, thank you for making these tutorials! They've been incredibly helpful for getting started with autonomous driving. My question is, do you have any tips for collecting useful data in new environments, and how to improve/optimize the model's performance?

    Problem details

    I've worked through the Autonomous Driving using End-to-End Deep Learning: an AirSim tutorial, and am now trying to apply the concepts discussed to train a model to drive in the AirSim Neighborhood environment. I've already collected training data (using a steering wheel), and, using the same network architecture and parameter values as the tutorial, I've been able to train a model where the car drives around for a few minutes before swerving off the road or crashing into a parked car, etc.

    I have begun to try modifying different parameters and variables to optimize the model's performance, such as following the suggestions in the tutorials: modifying the region of interest, zero drop percentage, network architecture, etc. However, I've run into a few problems.

    My main problem is that a trained model with lower validation loss doesn't necessarily correspond to a model with better driving performance. For example, one model I trained had a validation loss of .0002429 and crashed while making its first turn, but a different model with a val loss of .0005796 was able to drive pretty well for about five minutes. As a general trend, I've found a lower val loss does indicate a better performing model, but not enough that I can rely on it to optimize performance. This has cost me quite a bit of time, as each time I make changes and retrain, I have to manually test the model and watch the car drive, instead of just being able to rely on minimizing the val loss.

    My best guess as to why this is happening is poor training data. I understand that if the training data is "bad", then no matter how low you are able to get your val loss, your model will perform poorly, as it has learned to match the "bad" training data. I did my best to follow the ideas outlined in the Data Exploration and Preparation Jupyter notebook when I collected my training data. The majority of the data was collected by driving normally. Then I also collected data to take care of edge cases and deviations from the ideal (like the swerve data does in the tutorial), but I have no idea how "good" the data I collected is, since we're using an end-to-end model where almost everything is abstracted away, so I don't know if this is actually a problem with my data or with something else.

    At this point, I've just been iteratively modifying and testing the model manually. I know there's got to be a better way to approach this, but I'm at a loss for what to try. It's difficult to optimize performance when it doesn't directly correspond to validation loss or another quantitative value (as far as I can tell). I would greatly appreciate any tips anyone can offer for collecting better training data or improving model performance. Thank you!

    opened by NextSim 6
  • Help! training not starting! #urgent


    [Attached screenshot: Capture]

    My training is not starting. I have used Python 3.6 with tensorflow-gpu 1.8.0 and Keras 2.1.2. I also have a GeForce GTX 3060 in my computer, so that shouldn't be a problem. I also installed Norton antivirus on this new computer. On the older computer, which has a bad GPU, I had Panda Dome, but there the training was running; after over an hour, though, the training was only at 1%. That's why I bought a new computer with a good GPU and CPU. Some of this work is going to be presented in my master's thesis. I would appreciate any help soon.

    opened by danialvi 6
  • Kaggle AirSim End-to-End Learning to share


    Just to share my port on Kaggle AirSim End-to-End Learning

    To make it work on Kaggle with Keras 2.6.0, Cooking.py and Generator.py were modified in my GitHub fork.

    It seems I got better predictions than the tutorial notebook.

    opened by rkuo2000 1
  • Mr. Spryn! How can I change the Generator.py and Cooking.py to store the images in the batches in the same order as they are in the folders and then entered into the model for train?


    Problem description

    I am trying to use images in batches of consecutive frames and use Conv3D in my model, so I need to feed the images in the same order as they appear in the folders. I have made the other necessary changes and this is the only remaining problem. I am also considering changing the order of the folders so that the model also learns from the swerve images. Of course, I know it may not be a good way of data preprocessing, but please give me your ideas on this matter. Thanks.

    Problem details

    I tried some changes but none of them works; in one of them I changed generateDataMapAirSim in Cooking.py so that it does not shuffle the mappings:

    mappings = [(key, all_mappings[key]) for key in all_mappings]
    # random.shuffle(mappings)
    return mappings

    Experiment/Environment details

    • Tutorial used: AirSimE2EDeepLearning
    • Environment used: landscape
    • Versions of artifacts used (if applicable): Win 10, Python 3.8, Keras 2.4.3
    opened by M-M98 0
  • JSONDecodeError


    JSONDecodeError: Expecting value: line 14 column 1 (char 354)

    Is it a problem with the version? I get the same error when I try different versions. How do I solve this?
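
    A hedged suggestion: in this tutorial a JSONDecodeError usually points at a malformed AirSim settings.json (an assumption, since the report does not name the file), and the exception already says which line to inspect. A small sketch that prints the offending line; the path below is the default AirSim location on Windows and may differ on your machine:

    import json
    import os

    settings_path = os.path.join(os.path.expanduser('~'), 'Documents', 'AirSim', 'settings.json')

    with open(settings_path) as f:
        text = f.read()

    try:
        json.loads(text)
        print('settings.json parsed cleanly')
    except json.JSONDecodeError as err:
        print('Parse error at line', err.lineno, 'column', err.colno)
        print('Offending line:', text.splitlines()[err.lineno - 1])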

    opened by RohDonghyeon 1
  • Getting IndexError: list index out of range while running TrainModel.ipynb ([in 5]).


    Problem description

    Getting IndexError: list index out of range while running TrainModel.ipynb ([in 5]).

    Problem details

    IndexError                                Traceback (most recent call last)
    <ipython-input> in <module>
          1 image_input_shape = sample_batch_train_data[0].shape[1:]
    ----> 2 state_input_shape = sample_batch_train_data[1].shape[1:]
          3 activation = 'relu'
          4 
          5 #Create the convolutional stacks

    IndexError: list index out of range

    Experiment/Environment details

    • Tutorial used: AirSimE2EDeepLearning/TrainModel.ipynb
    • Environment used: none
    • Versions of artifacts used (if applicable): (Python 3.6, Keras 2.1.2 ,TF 1.8)
    opened by Eldart95 2
  • Modifications for throttle prediction


    I want to modify the code so that it predicts throttle as well as steering angle. Where should I add the code? I am using the landscape mountain environment with the deep learning method.
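
    A hedged sketch of one way to do this: give the network a second regression output so it predicts steering and throttle together. This is not the tutorial's own code; the input shape, layer sizes and dummy data are assumptions, and Cooking.py/Generator.py would also need to emit a throttle label per frame (not shown here).

    import numpy as np
    from keras.models import Model
    from keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense, Dropout

    image_input = Input(shape=(59, 255, 3))                     # assumed cropped-frame shape
    x = Conv2D(16, (3, 3), activation='relu', padding='same')(image_input)
    x = MaxPooling2D(2)(x)
    x = Conv2D(32, (3, 3), activation='relu', padding='same')(x)
    x = MaxPooling2D(2)(x)
    x = Flatten()(x)
    x = Dense(64, activation='relu')(x)
    x = Dropout(0.2)(x)
    steering = Dense(1, name='steering')(x)                     # first output head
    throttle = Dense(1, name='throttle')(x)                     # second output head

    model = Model(inputs=image_input, outputs=[steering, throttle])
    model.compile(optimizer='adam', loss='mse')

    # Dummy batch illustrating the label format the data generator would need to yield.
    frames = np.random.rand(8, 59, 255, 3).astype('float32')
    labels = [np.random.uniform(-1, 1, (8, 1)), np.random.uniform(0, 1, (8, 1))]
    model.fit(frames, labels, epochs=1, batch_size=4)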

    opened by aldaoctavia 0