Mellow


Head of Engineering
Mellow
Germany, Berlin
2022 - 2023

Website
https://getmellow.app

Area:
Mobile, Wearable, Medical, Machine Learning

Technologies:
iOS, Swift, Python, FastAPI, TensorFlow, CoreML, DVC, AWS, InfluxDB, MongoDB, Redis, MQTT, Mosquitto, Telegraf, Grafana, QuickSight, Label Studio, Docker, Gitlab CI/CD

Mellow is a mobile application for epilepsy management built on wearable technology and AI.

As the Head of Engineering, I successfully led the technical development of Mellow, from conceptualization to launching the application on the Apple App Store, with the Apple Watch as the main hardware platform. My contributions in system architecture, machine learning, and technical leadership were instrumental in creating a robust and innovative solution for seizure detection, catering to the needs of people with epilepsy.

Contribution and Tasks:

  • Provided principal technical contribution, advisory, and consulting to the startup, working closely with the founders and product managers on the company's roadmap, vision, and scope.

  • Led a team of iOS developers, DevOps engineers, and backend engineers, while collaborating with UI/UX designers, marketing specialists, medical advisors, and freelancers.

  • Designed and implemented the full system architecture for Mellow, including a scalable storage and processing backend for gathering sensor data, a custom backend for the mobile application, and monitoring and data analysis infrastructure

  • Designed and implemented ML models with data processing pipelines for seizure detection based on movement data, leveraging technologies such as TensorFlow and DVC for ML model management.

  • Integrated all technical subsystems into a production-ready solution, ensuring smooth communication and data flow between the wearable devices, mobile application, and backend servers.

  • Designed and executed bespoke solutions for assessing the quality of the medical application, including robust monitoring and alerting systems that ensured high performance and reliability
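
The sensor-ingestion path described above can be sketched as a small bridge between MQTT and InfluxDB. This is a minimal illustration, not Mellow's actual code: the topic layout `sensors/<device_id>/<measurement>` and the JSON payload shape are hypothetical.

```python
import json

def parse_sensor_point(topic: str, payload: bytes) -> dict:
    """Convert one MQTT message into an InfluxDB point dict.

    Assumed (hypothetical) topic layout: sensors/<device_id>/<measurement>,
    with a JSON payload like {"value": 0.82, "ts": "2023-01-01T12:00:00Z"}.
    """
    _, device_id, measurement = topic.split("/")
    data = json.loads(payload)
    return {
        "measurement": measurement,
        "tags": {"device": device_id},
        "time": data["ts"],
        "fields": {"value": float(data["value"])},
    }

def run_ingest(broker: str = "localhost") -> None:
    """Wiring sketch: subscribe via paho-mqtt, write each point to InfluxDB."""
    import paho.mqtt.client as mqtt
    from influxdb import InfluxDBClient

    db = InfluxDBClient(host=broker, database="sensors")

    def on_message(client, userdata, msg):
        db.write_points([parse_sensor_point(msg.topic, msg.payload)])

    client = mqtt.Client()
    client.on_message = on_message
    client.connect(broker, 1883)
    client.subscribe("sensors/#")
    client.loop_forever()
```

Keeping the parsing pure and the broker wiring separate makes the data path unit-testable without a running Mosquitto instance.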

Technologies Involved:

  • iOS, Swift, and FastAPI for mobile application and backend development.

  • Python and TensorFlow for machine learning model development and training.

  • InfluxDB, MongoDB, AWS, Redis, MQTT, Mosquitto, Telegraf, and Grafana for data storage, processing, and visualization.

  • Robotics and 3D printing for prototyping wearable devices and custom hardware solutions.

Arahub


Chief Technology Officer
Myled
Poland, Cracow
2019 - 2022

Website
https://arahub.ai

Area:
Machine Learning, Computer Vision, Embedded Systems, Sensor Networks

Technologies:
Python, TensorFlow, PyTorch, Nvidia Jetson, Docker, Gitlab CI/CD, Linux, Kafka, AWS, MongoDB

At Myled I co-founded the Arahub project, which aims to collect and analyze data from the environment using a proprietary machine-vision device. The device consists of a set of sensors that collect information about people moving nearby, including movement direction, interests, and demographic profile. AI/ML algorithms interpret the collected information, enabling the identification of audience characteristics within range of the devices. The system allows clients to recognize people in a given place and analyze their characteristics in terms of gender, age, appearance, and interests. By tracking the movement of people within range of the ARABOXes, the system can determine their distance from the device as well as their direction and manner of movement.

As the Chief Technology Officer and Co-founder of the project, I provided principal technical contribution, advisory, and consulting based on advancements in AI and Computer Vision. I worked closely with the co-founders to define the strategy and technological goals of the project. I also led a team of data scientists and engineers to build a Minimum Viable Product (MVP) showcasing the possibility of using machine learning for outdoor audience analysis.

My contributions to the project include delivering a Proof of Concept (PoC) showcasing the possibilities of simultaneous video and RF-signal tracking. I designed the full architecture of the system, including an embedded platform with AI at the edge and a backend system for collecting and analyzing data from a distributed network of sensors. I also designed and implemented a CI/CD system for building, testing, and distributing software for an embedded platform with a GPU for ML acceleration (Nvidia Jetson). Additionally, I supervised recruiting and team building, and ensured compliance with GDPR and other legal requirements.

This project highlights my expertise in AI and Computer Vision, as well as my leadership and technical skills in designing and building complex systems.


Brightbox


Chief Technology Officer
QED Software
Poland, Warsaw
2018 - 2021

Website
https://qed.pl/eu-projects/brightbox/

Area:
Machine Learning, Explainable Artificial Intelligence

BrightBox is a diagnostic and monitoring tool for machine learning models, which helps in investigating the reasons behind errors in model predictions. It allows for both global and local level diagnosis of ML models, continuous auditing and monitoring of model operations, uncertainty estimation and analysis, and prescriptive and what-if analysis for model decisions. The technology is based on the analysis of preprocessed reference data and model predictions, enabling the identification of model- and data-related issues without direct access to the model. BrightBox provides ML engineers with insight into the actual reasons behind errors, enabling better-informed decisions regarding the model and data updating process. It is intended to be used by Data Science teams communicating with Business Owners to improve the transparency and interpretability of AI/ML models.

Contribution:

  • Co-authored funding proposal for the BrightBox project

  • Supervised the R&D team working on the project, recruiting, evaluating, and mentoring team members

  • Provided guidance for architecture design and technological stack

  • Supervised the integration of BrightBox with other solutions developed by the company

  • Ensured that the technology was developed using the most effective and efficient tools and techniques available

Infoframes


Chief Technology Officer
QED Software
Poland, Warsaw
2018 - 2021

Website
https://qed.pl/ai-solutions/infoframes/


Area:
Machine Learning, Compact Representations, Database, Compression

InfoFrames is a powerful and scalable library for big data storage and analysis. It offers efficient, compressed storage of data in the form of summaries that can be used in machine learning and data science applications. The library supports multiple common data types and has APIs in C++ and Python for easy integration. InfoFrames can store and analyze large sets of multi-dimensional data up to 30x more efficiently than NumPy. It takes full advantage of unstructured data, such as images and videos, by providing a native Tensor data type for easy storage and processing. The library also enables an in-database information layer, allowing the analysis of unstructured data with various new types of analytics. InfoFrames helps reduce the resources, especially data and time, needed to obtain high-quality ML models.
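
The summary-based storage idea can be illustrated with a toy mergeable column summary. InfoFrames itself is a C++/Python library and far more sophisticated; the `ColumnSummary` class below is only a hypothetical sketch of how compact summaries can answer aggregate queries without touching the raw data.

```python
from dataclasses import dataclass

@dataclass
class ColumnSummary:
    """A tiny mergeable summary of one numeric column."""
    count: int = 0
    total: float = 0.0
    minimum: float = float("inf")
    maximum: float = float("-inf")

    def add(self, x: float) -> None:
        """Fold one raw value into the summary."""
        self.count += 1
        self.total += x
        self.minimum = min(self.minimum, x)
        self.maximum = max(self.maximum, x)

    def merge(self, other: "ColumnSummary") -> "ColumnSummary":
        """Combine two summaries (e.g. from two data partitions)."""
        return ColumnSummary(
            self.count + other.count,
            self.total + other.total,
            min(self.minimum, other.minimum),
            max(self.maximum, other.maximum),
        )

    @property
    def mean(self) -> float:
        return self.total / self.count
```

Because summaries merge associatively, aggregates over partitioned data can be computed without ever reassembling the raw column.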

Contribution:

  • Supervised the R&D team working on the InfoFrames project, recruiting, evaluating, and mentoring team members to ensure they had the necessary skills and resources to deliver on project goals

  • Led the process of architecture design for InfoFrames, ensuring that it was designed in a way that was scalable, efficient, and compatible with the needs of the company's other products and services

  • Supervised the integration of InfoFrames with other solutions developed by the company, ensuring that it seamlessly integrated with existing products and services

  • Provided expertise in data structures, storage and processing, and big data analytics to help the team create a robust and effective solution that offers efficient and compressed storage of data.


LITL


Chief Technology Officer
QED Software
Poland, Warsaw
2018 - 2021

Website
https://qed.pl/eu-projects/labelling-in-the-loop/

Area:
Machine Learning, Active Learning, Data Labeling

Label in the Loop (LITL) is a technology that helps data-driven organizations collaborate with or supervise domain experts over an unlabeled body of data for efficient training of machine learning models. LITL's smart selection of data samples and smart assignment of those samples to domain experts reduce the time and cost of model training. The system selects the samples from the unlabeled pool that will most influence the model's performance and assigns them to experts according to their estimated performance on similar samples. The assignment process balances exploration and exploitation of the experts' knowledge to estimate their performance more effectively. The process works iteratively, retraining the model with consecutive batches of samples. LITL features active learning sampling, initial batch selection, expert assignment, consensus, performance estimation, and new-class identification.
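
The sample-selection step can be sketched with plain entropy-based uncertainty sampling. This is a generic active-learning illustration, not LITL's actual selection criterion; the function names are hypothetical.

```python
import math

def prediction_entropy(probs):
    """Entropy of one predicted class distribution; higher = less certain."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_batch(model_probs, k):
    """Pick the k unlabeled samples the current model is least certain about.

    model_probs: list of per-sample class-probability lists.
    Returns indices into the unlabeled pool, most uncertain first.
    """
    scored = sorted(range(len(model_probs)),
                    key=lambda i: prediction_entropy(model_probs[i]),
                    reverse=True)
    return scored[:k]
```

In the iterative loop, the selected batch would be routed to experts, labeled, folded into the training set, and the model retrained before the next selection round.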

Contribution:

  • Supervised the R&D team working on the LITL project, recruiting, evaluating, and mentoring team members to ensure they had the necessary skills and resources to deliver on project goals

  • Provided guidance for architecture design and technological stack, ensuring that LITL was developed using the most effective and efficient tools and techniques available

  • Supervised the integration of LITL with other solutions developed by the company, ensuring that it seamlessly integrated with existing products and services

GRAIL


Senior Project Manager
QED Software
Poland, Warsaw
2018 - 2020

Deep Learning Consultant
Silver Bullet Labs
Poland, Warsaw
2017 - 2018

Website
https://grail.com.pl/

Area:
Artificial Intelligence, Video Games, Machine Learning, Simulations

Grail is a project that aims to create a component-based engine for embedding advanced artificial intelligence (AI) in games. The main objective is to provide game designers with tools that allow actors or characters to exhibit realistic and sophisticated behavior, efficiently driving towards their individually set goals and adapting to the human player's way of playing. The project aims to create computer characters that play more effectively than those built with existing methods, which can engage players for longer and address the problems and deficits of products currently offered on the video games market. The target user group is game developers, and various channels will be used to reach them based on the specific results achieved in the project.

Contribution:

  • As a deep learning consultant, I worked on using machine learning to create state-value and action-value models in order to enhance MCTS-based agents of the GRAIL engine for different video games

  • Implemented a reinforcement learning approach for the GRAIL engine and performed experiments with the game Hearthstone

  • Improved the Hearthstone simulator

  • Performed DevOps tasks for the project
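
The idea of combining learned value models with MCTS can be sketched as follows. This is a generic illustration of UCB1 child selection and of blending a learned state-value estimate with a rollout result, not the GRAIL engine's actual code; all names and the mixing weight `lam` are hypothetical.

```python
import math

def blended_value(model_value, rollout_value, lam=0.5):
    """Mix a learned state-value estimate with a simulated rollout result.

    lam = 0 trusts only rollouts; lam = 1 trusts only the learned model.
    """
    return (1.0 - lam) * rollout_value + lam * model_value

def ucb_score(child_value, child_visits, parent_visits, c=1.4):
    """UCB1 score used to pick which child to descend into during MCTS."""
    if child_visits == 0:
        return float("inf")  # always try unvisited children first
    return child_value + c * math.sqrt(math.log(parent_visits) / child_visits)

def select_child(children):
    """children: list of (mean_value, visits) pairs for one tree node."""
    parent_visits = sum(v for _, v in children) or 1
    return max(range(len(children)),
               key=lambda i: ucb_score(children[i][0], children[i][1],
                                       parent_visits))
```

Replacing or supplementing rollouts with a learned state-value model cuts simulation cost per node, which is the practical payoff of the value models mentioned above.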


(NDA)


AI Team Lead

2018 - 2019

Area:
Machine Learning, Video Games, Sensor data, Biomarkers

The project aimed to create an AI system capable of interpreting the actions of a video game player to dynamically adapt the game environment and keep the player maximally engaged. The system used a range of inputs, including in-game actions, controller inputs (such as mouse, keyboard, and pad), biomarkers (ECG, GSR, accelerometer), and image and sound recognition. The end result was a prototype video game that could adapt its difficulty level and scenarios based on the perceived stress level of the player.

Role: AI Team Lead

Contribution:

  • Leading a team of scientists responsible for designing and implementing AI methods to model human emotions in video games

  • Developing methods for dynamically adapting video game content based on player reactions, ensuring that the game remained engaging and challenging for the player

  • Developing methods for reading signals from input devices and biomarkers, which were critical for accurately interpreting the player's emotional state

  • Supervising the team and ensuring that they delivered final scientific and technological deliverables that met the project's goals and objectives.

MORL-DV


PhD student
University of Warsaw
Poland, Warsaw
2016 - 2019

Website
https://github.com/ttajmajer/morl-dv

Area:
Machine Learning, Reinforcement Learning, Simulations

The project aimed to develop a method using Deep Q-Networks (DQNs) to tackle multi-objective environments. While DQNs have shown exceptional results in single-objective problems using high-level visual state representations, multi-objective problems require an agent to pursue multiple objectives simultaneously, such as in robotics or games. The project proposed an architecture using separate DQNs to control the agent's behaviour with respect to specific objectives. Decision values were introduced to improve the scalarization of multiple DQNs into a single action. This allowed the decomposition of the agent's behaviour into controllable sub-behaviours learned by distinct modules, which could be changed post-learning while maintaining overall performance. The solution was evaluated using a game-like simulator in a 2D world, where an agent with high-level visual input pursued multiple objectives.
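
The decision-value scalarization can be sketched as a weighted sum over the per-objective Q-vectors. This is a minimal illustration assuming simple linear gating; the full method (see the repository above) learns the decision values jointly with the DQNs.

```python
def select_action(q_values_per_objective, decision_values):
    """Scalarize per-objective Q-values into a single action choice.

    q_values_per_objective: one per-action Q-value list per objective,
    each produced by that objective's separate DQN.
    decision_values: per-objective weights d_o that gate how strongly
    each objective influences the final action.
    """
    n_actions = len(q_values_per_objective[0])
    combined = [
        sum(d * q[a] for d, q in zip(decision_values, q_values_per_objective))
        for a in range(n_actions)
    ]
    return max(range(n_actions), key=combined.__getitem__)
```

Because each objective lives in its own module, a sub-behaviour can be amplified or suppressed after training simply by changing its decision value, without retraining the other DQNs.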

Contribution:

  • Authored, designed, and implemented the full solution using TensorFlow, OpenAI Gym, and a custom game simulator

  • Performed large-scale ML experiments

  • Published results


SmartHome


Private
2020 - 2023

Area:
Internet of Things, Sensors, Database, Automation

The project involved building a smart home using HomeAssistant, incorporating both consumer electronics and custom-built hardware. The smart home comprised over 80 physical devices using various communication protocols such as WiFi, ZigBee, Bluetooth, and other RF technologies, exposing over 500 logical endpoints for sensing, actuation, and automation. Over 200 automations were set up, ensuring efficient and effective control of the various endpoints. The smart home was integrated with various other systems such as Google Home, Roomba, SmartThings, smart TVs, smartphones, and even a car.
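
The trigger/condition/action pattern behind those automations can be sketched in a few lines. This toy evaluator is not HomeAssistant's API; the rule shape and entity names are hypothetical.

```python
def evaluate_automation(automation, state):
    """Run one trigger/condition/action rule against a state snapshot.

    automation: {"trigger": (entity, value),
                 "conditions": [(entity, value), ...],
                 "action": callable}   -- a toy shape, not HomeAssistant's.
    Returns True if the action fired.
    """
    entity, value = automation["trigger"]
    if state.get(entity) != value:
        return False  # trigger did not match
    if all(state.get(e) == v for e, v in automation["conditions"]):
        automation["action"]()
        return True
    return False
```

A real installation would attach such rules to event streams from each protocol bridge rather than polling a state dict, but the evaluation logic is the same.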

Contribution:

  • Designed the entire system including electrical, networking, wireless, power, and HVAC components

  • Built multiple custom devices using platforms such as ESP8266, STM32, and Raspberry Pi

  • Implemented a network backbone based on professional grade networking hardware

  • Wrote complete automation scripts and code for the system

RADCARE


Embedded Systems Engineer
Elnovel
Poland, Warsaw
2013 - 2016

Area:
Embedded Systems, Radars, Electronics, Signal Processing, Sensors

The RADCARE project aimed to provide care support for elderly and disabled individuals through the use of radar sensor technology. Its primary objective was to explore new possibilities for using an impulse radar sensor system in preventive care and the diagnosis of different health conditions. The project utilized a dedicated radar system to sense and gather human life parameters.

Role: Embedded Systems Engineer

Contribution:

  • Built a sensor data acquisition system with a database, processing pipeline, and web frontend based on the CoAP protocol

  • Designed a solution enabling non-routable and legacy sensor networks to use the CoAP protocol and integrate with a centralized database


Long-range sensor network


Embedded Systems Engineer
Elnovel
Poland, Warsaw
2013 - 2016

Area:
Internet of Things, Sensors, Low-power applications

The project is an experimental sensor network designed for long-range and deep-sleeping applications. It uses transceivers operating in the 868 MHz and 168 MHz bands and is primarily used for monitoring temperature in delivery chains, especially in refrigerated trucks. The network employs a custom mesh radio protocol that enables low-energy and low-bitrate RF communication. The system includes a full-stack design, starting from the link, network, and routing layers, to a backend for gathering sensor data and a frontend for data processing and visualization. Additionally, a monitoring system based on Software-Defined Radio (SDR) with custom modules for GnuRadio and Wireshark was implemented.
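
The forwarding step of such a mesh protocol can be sketched with a compact binary header. The frame layout below (`dst | src | ttl | seq | payload`) is hypothetical and much simpler than the actual proprietary protocol; it only illustrates TTL-limited flooding.

```python
import struct

# Hypothetical header for a low-bitrate mesh link:
# dst (1 B) | src (1 B) | ttl (1 B) | seq (2 B, big-endian) | payload
HEADER = struct.Struct(">BBBH")

def encode_frame(dst: int, src: int, ttl: int, seq: int,
                 payload: bytes) -> bytes:
    """Pack a mesh frame into its on-air byte representation."""
    return HEADER.pack(dst, src, ttl, seq) + payload

def decode_frame(frame: bytes):
    """Unpack a frame into (dst, src, ttl, seq, payload)."""
    dst, src, ttl, seq = HEADER.unpack_from(frame)
    return dst, src, ttl, seq, frame[HEADER.size:]

def forward(frame: bytes, my_addr: int):
    """One flooding hop: re-broadcast with TTL decremented, unless the
    frame is addressed to this node or its TTL has expired."""
    dst, src, ttl, seq, payload = decode_frame(frame)
    if dst == my_addr or ttl == 0:
        return None
    return encode_frame(dst, src, ttl - 1, seq, payload)
```

The sequence number lets nodes suppress duplicate re-broadcasts, which keeps flooding affordable at low bitrates; the SDR-based monitor mentioned below decoded exactly this kind of on-air frame for debugging.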

Role: Embedded Systems & Radio Engineer

Contribution:

  • Designed and implemented a complete mesh networking protocol

  • Implemented a custom debugging and monitoring system based on SDR, GnuRadio, and Wireshark

  • Implemented backend and frontend systems for gathering and visualizing sensor data

SmartSantander & SACCOM


System & Software Engineer
Warsaw University of Technology
Poland, Warsaw
2011 - 2013

Website
https://www.smartsantander.eu/

Area:
Internet of Things, Embedded Systems, Smart Cities

SmartSantander was a project that proposed a unique city-scale experimental research facility in support of typical applications and services for a smart city. The project aimed to deploy 20,000 sensors in Belgrade, Guildford, Lübeck, and Santander, exploiting a large variety of technologies. A scalable, heterogeneous, and trustable large-scale real-world experimental facility was deployed. The project addressed all the requirements for a real-world IoT experimental platform by specifying, designing, and implementing the necessary building blocks. One of the main objectives of the project was to fuel the use of the Experimentation Facility among the scientific community, end-users, and service providers to reduce the technical and societal barriers that prevent the IoT concept from becoming an everyday reality. 

SACCOM ("Soft Actuation over Cooperating Objects Middleware") was a project resulting from the SmartSantander First Open Call for Experiments. The project aimed to deploy an independently developed cooperating objects middleware platform, called POBICOS, on top of the Smart Campus IoT testbed at UniS in Guildford and to use the resulting system to perform two experiments, which were referred to as the middleware experiment and the application experiment, respectively.

The objective of the middleware experiment was to verify the scalability of the POBICOS Proxy Environment, through which applications would access the IoT nodes of the Guildford testbed. The ambition was to run tests that involved up to 200 nodes. The objective of the application experiment was to investigate the concept of so-called soft-actuating applications, which were sense-and-react applications that did not perform any "real" actuation but gently prompted the user to perform the actuating action manually.

My role and contribution:

In the SmartSantander/SACCOM project I served as the lead software and system engineer, responsible for integrating the POBICOS middleware platform with the SmartSantander platform. I also created a virtualization platform that allowed us to run a large number of virtual sensor network nodes connected to physical endpoints. This was a critical component of the project's infrastructure.

In addition, I played a significant role in deploying the large-scale storage area network (SAN) that was essential for storing and processing the massive amounts of data generated by the project's sensors. Finally, I was responsible for preparing and performing experiments in a smart office environment, which allowed us to test the scalability and functionality of our platform in a real-world setting.

Overall, I am proud to have contributed to a project that aimed to make the IoT concept an everyday reality by reducing the technical and societal barriers that stand in its way.


POBICOS


Embedded Systems Engineer
Warsaw University of Technology
Poland, Warsaw
2010 - 2013

Website
https://equ3.tele.pw.edu.pl/www/index.php/POBICOS

Area:
Internet of Things, Building Automation, Smart Homes, Embedded Systems, Sensors

POBICOS is a project that aims to develop a platform for Opportunistic Behaviour in Incompletely Specified, Heterogeneous Object Communities. This platform is designed for computing environments where multiple objects with different computing resources, sensors, and actuators are present. The mix of objects and the resources they offer may not be fully known at the time of application development, which creates challenges for programming and executing these applications.

As an embedded software developer, my role in the project was crucial. I worked on developing middleware with an AVR virtual machine for mobile code, which allowed for greater flexibility in programming applications. I also created virtual environments for embedded development and developed tools for testing and deploying Wireless Sensor and Actuator Networks (WSAN) infrastructure.

To accomplish these tasks, I utilized a variety of technologies including TinyOS, nesC, wireless sensor networks, C/C++, Python, and ZigBee. I worked with embedded platforms such as the AVR-based micaZ as well as Intel-based platforms.

Overall, my contributions were integral to the successful development of the POBICOS platform, which provides greater flexibility and adaptability to computing environments with diverse and changing resources.
