---
layout: page
title: About
permalink: /about/
---

## What is this?

This is an RL content website: it includes annotations on a Deep RL course, summaries of "recent"/"famous" RL papers we found interesting, and, soon, some of our own projects.

In addition, we provide a comment section below each content page where readers can review, discuss, or ask about content-related matters.

## Who is this for?

This is for RL enthusiasts such as ourselves.

We select interesting papers and try to provide summaries that give you a good understanding of them (deeper than what you get by just reading the abstract) in a much more compressed form, omitting technical details, prior work, and reiterations.

Ultimately (in fact, mainly) this website is also for us, to ensure we keep reading and learning. Creating notes for Sergey Levine's Deep Reinforcement Learning course helped us organize the main ideas, while summarizing recent papers lets us follow the experiments being attempted by the most prominent researchers.

## Who are we?

We are two Machine Learning M.Sc. students from KTH with a strong interest in Reinforcement Learning research. We decided to start this website in an attempt to deepen our understanding of the field. It is a nice way of forcing ourselves to stay up to date with the most recent developments and to make our projects more presentable.

To be more precise, we are:

### Oleguer Canal


Simultaneously studied a B.Sc. in Mathematics and a B.Sc. in Industrial Engineering through the CFIS center at the Polytechnic University of Catalonia (UPC).

Researched the application of modern computer vision techniques to robotic tactile feedback at the MCube Lab at the Massachusetts Institute of Technology (MIT).

Worked as a robotics perception engineer at XYZ Robotics (Shanghai), and now doing research at the division of Robotics, Perception and Learning (RPL) at KTH Royal Institute of Technology.

LinkedIn, GitHub, Scholar

### Federico Taschin


Studied Computer Engineering at the Università degli Studi di Padova and is currently studying Machine Learning at KTH Royal Institute of Technology.

Working as a Driverless Engineer and Technical Integrator at KTH Formula Student, developing the SLAM module for the autonomous car and integrating it with the other modules of the car system.

Aiming to pursue a PhD in Reinforcement Learning.

LinkedIn, GitHub

## How can I get in contact/collaborate?

Feel free to write to us through LinkedIn! For website-related messages such as questions, collaborations, or suggestions, you can write to us at ai.campus.ai@gmail.com or directly open an issue in the website's repository.