Training Preparation -- Current Practices of 100 Attendees
How do you test your code? | What is your target system? | What development tools do you use? | What is your code review practice? | How much time does it take to do an incremental build, load, and start running your code so you can start to test it? | What percentage of your development time is spent coding? | What percentage of your development time is spent testing? | What percentage of your development time is spent debugging? |
---|---|---|---|---|---|---|---|
Manually | 8051, MSP430, Aurix, ARM Cortex-M4, dsPIC33, STM32 | Atmel Studio, AURIX Development Studio, uVision, Eclipse, MPLAB, Visual Studio, Resharper | Fisheye with Crucible | 31-60 seconds | 60 | 20 | 20 |
Build UI and manual test | Embedded control unit w/UI | Eclipse IDE | Fairly straight-forward, via Fisheye-Crucible | 30-60 minutes | 80 | 10 | 10 |
Manually | MOCU is the UI that interfaces with a robot | DevTools for JavaScript, GDB, Valgrind | We have a designated peer review process | 11-30 seconds | 5 | 3 | 2 |
Still rely a lot on compiling the whole application and then testing the changed functionality by hand. We have some unit tests, but not enough to exercise all of the functionality. | Split between web apps and cross platform (Windows/Linux) desktop applications that process SAE J1939/J1708 vehicle diagnostic messages. | Visual Studio Code (Linux and Windows), Notepad++, Git/BitBucket, Jira | Code reviews are required. They tend to be done individually by reviewers using BitBucket pull request functionality. Sometimes I think too little time is spent and issues get missed. | 1-5 minutes | 70 | 10 | 20 |
Full system-level tests; Developing unit tests | Ubuntu, Intel Core i5 dual core @ 2.60GHz, 8GB RAM | GCC, CMake, Visual Studio Code, Docker, Bitbucket, Jira, Jenkins, SonarQube | Pull request for most changes, typically 1-2 reviewers within our dev team; Static code analysis via Jenkins pipeline for all automated builds, reports pushed to SonarQube for issue tracking and resolution | 31-60 seconds | 40? | 30? | 30? |
yes | cleancoders.com and client software | IntelliJ IDEA editors on Mac | pair programming | Under 10 seconds | 50 | 50 | 5 |
Currently, all human testing. | linux robot | VS Code/Studio, Atlassian suite | Review code on bitbucket before merging | 11-30 seconds | 30 | 30 | 40 |
Depends - some manual integration/system test, some unit test. | UI for small unmanned ground robot control/status (called MOCU for MTRS Inc II system). | Visual Studio Code, Node Webkit dev tools, gdb | We use Bitbucket PR feature (or Visual Studio Code) to review diffs, then build if needed, and verify with test steps in a JIRA issue. | Under 10 seconds | 80 | 10 | 10 |
Using Rust's built-in unit-test framework combined with hardware-in-the-loop tests. | STM32 microcontroller | Vim | Done with Mob Programming, and it's awesome. I highly suggest it. | Under 10 seconds | 20 | 50 | 30 |
The old-fashioned way, by brute force with Vim or Notepad. | CentOS, Ubuntu, Windows, Debian, RedHat, Yocto | MS Code, Visual Studio, Eclipse, Qt, GCC, G++ | Generally pull requests in Bitbucket while using Jira and Confluence. | 5-30 minutes | 30 | 15 | 55 |
Manually. I write tests in LabVIEW. My test software communicates with the target device serially, mainly RS-232 and RS-485. | We are currently designing a new line of smarter sensors. The target system will use an ARM Cortex-M microcontroller from STM. We will use a BLE module to support iOS and Android mobile applications to configure and monitor our device. | Old tools include Code Composer Studio from TI and MCUXpresso from NXP. We will use Rust and Cargo with VS Code as our IDE. I also use a mixed-signal oscilloscope regularly. | In our old projects, code reviews are done in bulk. We have tried Mob programming recently to have continuous code reviews. | 1-5 minutes | 50 | 25 | 25 |
Unit testing. On my (Agile) team, adding tests for all new code was encouraged but not required by policy, and there was a large legacy codebase without coverage, so we didn't see the full benefit of testing. Tests were run every commit by CI but not automatically on a dev workstation. I learned about TDD but never got the chance to put it into practice. | Previously I have targeted STM32 as well as other ARM platforms, either on top of Linux, RTOS, or bare-metal. | I'm most comfortable in vim but I can usually tune an IDE enough to be happy, when needed. I usually use a GCC or clang toolchain. | Organized with GitLab using Merge Requests. Reviews required two approvals from team members as well as a successful CI build/test. | 31-60 seconds | 40% | 30% | 30% |
Mainly with doctest and ApprovalTests | macOS, mainly | CLion, mainly | None - always pair-program instead | 31-60 seconds | 45 | 50 | 5 |
We have a CppUnitest framework, but I don't have deep knowledge of it. | ARM Cortex | Segger J-Link, Visual Studio, VisualGDB, Visual Studio Code, Make | I look for structure, whether it follows code standards, and make sure there is nothing malicious. I also try to make sure the application code is hardware agnostic. Then I build it myself, load it, and make sure the target is happy. | 11-30 seconds | 60% | 20% | 20% |
The programming I do now is more scripting to analyze data, so it is generally running right in front of me and I am the consumer of the output; my eyes are the test. | Various embedded MCUs. | TI's Code Composer Studio, Altium, MATLAB | N/A | 1-5 minutes | 1 | 1 | 1 |
We try to set up different stages: 1. Unit tests (e.g. Microsoft Visual Studio unit testing - but not all projects are covered yet). 2. Integration tests - right now executed manually; we distinguish between stress tests, fault injection, boundary value, and error guessing. Jenkins CI is planned here in the future. 3. Acceptance tests - finally we have a test report which we can hand (together with the system) to uninvolved people (e.g. product managers). | Depends on the project - most likely a 32-bit ARM Cortex-Mx processor. | Microsoft Visual Studio, Microsoft Visual Studio Code, Eclipse-based environment, Keil uVision | The external partners have a four-eyes principle. We applied a rule to our DevOps environment. For critical features, there is an additional review. | 31-60 seconds | 20 | 40 | 40 |
Run manual tests after implementing a feature. | An embedded system to control a conductivity sensor. | Eclipse IDE supplied by chip manufacturer. | Infrequent and incomplete. Performed to satisfy checkbox in a company process. | 1-5 minutes | 40 | 25 | 35 |
Mostly final testing of functionality and code reviews | see Eric Renger's answer. | Code Composer, ST's IDE | We review for coding standards as well as for functionality and efficiency, though much of our functionality is tested in system tests. | 5-30 minutes | 20 | 50 | 30 |
We are using Lua Test to do acceptance testing of one legacy project. Using Rust Test on a green-field project that is just starting out. | We make sensors and controllers for sensing properties in water. New systems are built on small ARM M-type processors, many older systems on MSP430. Lots of bare-metal, starting to use a small RTOS. | CLion, VSCode, rustc, cargo, clippy, etc. | My preference is to use Mob Programming and code review as you go. | 11-30 seconds | 5 | 5 | 1 |
capture and analyze communication between devices, run system in normal use pattern, run system in unusual use patterns | embedded system, robotics, medical device, multiple embedded processors, multiple communication buses | TI Code Composer Studio, Keil uVision, STM32 Cube IDE | all code is reviewed by multiple software developers, reviewers then meet to go over their comments and questions | 11-30 seconds | 60 | 20 | 20 |
using debugger and printf | Xilinx | Visual Studio, Eclipse | Peer reviews | 11-30 seconds | 50 | 25 | 25 |
JUnit, gtest | an embedded ARM system | VS Code, Xcode | Combination of Bitbucket and live (virtual) reviews. | 31-60 seconds | 40 | 30 | 20 |
Most of the time, by simulating the possibilities as a user would. I also use Fiddler and SQL Query Analyzer. I know there are built-in unit tests in Visual Studio, like Assert, and in Angular I could use Jasmine & Karma. | My target system or framework is MVVM architecture. | VS Core and Visual Studio 2015 | I pay attention to using access modifiers and data types, and to how memory is being consumed. I like the code when it is readable by other developers, not just me. | 11-30 seconds | 30 | 50 | 20 |
Unit test cases | IntelPOC UT69000R | GCC | Web-based diff/Collaborative review tool | 5-30 minutes | 30 | 60 | 10 |
System testing post releases... | RTUs (embedded systems): - ARM uCs (Atmel & Silicon Labs), sub-GHz - Small flash & RAM - OpenRTOS & FatFS - GNU ARM compiler (arm-none-eabi-gcc) - Ethernet, serial, USB, Zigbee interfaces - Dig In & Out, An In & Out, RTD - RTC, GPS | - Simplicity Studio (SiLabs), Atmel Studio & Visual Studio - J-Link - Wireshark - PuTTY (telnet) - HxD - Git & SVN | There is usually no time for code reviews. | 11-30 seconds | 40 | 20 | 40 |
I do not. | No specific target system. | Qt for C++ | We trace the parts we are interested in. | 1-5 minutes | 70% | 0% | 30% |
I don't | . | Qt | Maybe to trace the code all over | 11-30 seconds | 70 | 0 | 30 |
Bespoke Python test harness over the serial port to exercise and validate the running code via a shell. Manual testing by an experienced field engineer. | Embedded real-time control system for an oil/gas sampling system. | Keil uVision. Linux | There is usually only myself working on the code. | 5-30 minutes | 50 | 20 | 30 |
For most of my personal projects I do not employ any unit testing currently. | I do not have a specific target system. | I use Visual Studio Code for all Python programming. I use gcc from the command line and Visual Studio Code for C programming. | During my internship there was a CI/CD pipeline on our gitbucket server. Each time a push was made it would run all the tests and then you could review any code changes. The whole team reviewed all pushes to the repository at the end of each day. | 31-60 seconds | 6 | 1 | 3 |
Constant experimentation. Much of the last iOS app I wrote was 90% UI, so it was constantly running in the simulator. | iOS and Mac. Maybe Mac servers. Cloud servers. | Xcode, what Unix tools I can remember, Xcode simulator and Instruments tool | I'm currently a team of one, except on a side project: pair programming with Lance Kind and TDD.Academy. | Under 10 seconds | 80 | 18 | 2 |
I just test the lines of code I write (trying to cover the most exhaustive possible set of conditions) and then I keep the code. I realize this produces, especially on large software, code that fails easily and is difficult to maintain. | x86_64 Linux. Code I develop normally needs at least 4 cores to run in real time (lots of calculations). | gcc, cmake and Eclipse for C++ (now) | I don't understand the question. Maybe what follows answers it: usually the results of my code are data processing or model software whose results are used in our experiment and/or presented at a conference. I do not have a formal reviewing process. | 11-30 seconds | 40 | 40 | 20 |
Run test cases as a whole project | FreeRTOS and Linux | Simplicity Studio, Atmel Studio, Visual Studio | N/A | 11-30 seconds | 50 | 30 | 20 |
Unfortunately I am not writing much code at the moment, but I try to do my best to get the teams running with TDD and unit testing. | At the moment our target systems are several microservices within an internal IT department. | Mostly IntelliJ for writing code. GitLab for building and deploying the artefacts. | We mostly try to do pair programming | 1-5 minutes | 20 | 20 | 60 |
We have a suite of apps that can be deployed for simulation purposes on a desktop PC as well as on our target system. | We develop radar detection systems for fighter jets and test against that | Visual Studio, UltraEdit, BeyondCompare, MULTI | We don't have a consistent form of code review. Sometimes a senior member of our team will review my code, but usually all of our team contributes their changes to a collective baseline and the baseline is tested overall. | 1-5 minutes | 15 | 20 | 65 |
It's mainly functional testing by putting the executable directly on the target board and using printf statements for debugging. | It's an x86 board with a stripped version of Linux (Ubuntu 10.04). The kernel is 2.29.6. It has peripherals (IOs), a LAN port, a Zigbee port | Ubuntu 10.04, Eclipse (Luna) for development; for microcontrollers, Keil 5, and for ARM, IAR | Peer-to-peer code review is done. Another developer does the code review and puts comments in an Excel sheet. Modification is done based on that and proper comments are placed in the same Excel sheet. | 1-5 minutes | 40 | 30 | 30 |
Hardware prototype in the loop, trial and error. | 8-bit and 32-bit low-end microcontroller-based instrument control systems. These in some cases work in tandem with an embedded Linux based HMI written in C++/Qt. | STM32Cube, Visual Studio Code, Visual Studio, Qt Creator, Vim | No review process in place. Static analysis tools - not in embedded. | Under 10 seconds | 10 | 30 | 60 |
use gtest and gmock | Windows | Visual Studio | Having discussions with several peers. Check in when all agree with the change | 11-30 seconds | 40 | 10 | 50 |
yes | Windows low-level system programming, device driver and system-level code | C/C++, git, tc, Visual Studio | Code needs to be reviewed and approved by 2 people. | 1-5 minutes | 50 | 25 | 25 |
- Unit tests using gtest (most new code, some legacy) - Component tests, e.g. client/server, COM client/component - Automated system tests using a proprietary automation framework. Mostly regression, updated when a new feature changes some system aspect - Manual testing, to uncover more feature-specific end-to-end issues - Performance and stress testing using industry-standard tests for enterprise, e.g. LoginVSI for single-server scalability | Windows on x86 | Microsoft Visual Studio (MSVC++), trace tools (based on the Windows event trace logging system), build scripts using PowerShell, Windows debugger (WinDbg), Wireshark, git | Code reviews are done using review tools (Atlassian Bitbucket), requiring at least two human approvals, one of which is generally required from an SME. An additional bot "reviews" for build success and automated smoke/sanity tests' success | 1-5 minutes | 35 | 30 | 35 |
1.Using the Gtest framework to write unit tests 2.Automation tests using python and the RobotFramework. | - | VSCode, Linux based debugging (gdb). | - | 30-60 minutes | 50 | 30 | 20 |
1.Using the Gtest framework to write unit tests 2.Automation tests using python and the RobotFramework. | - | VSCode, Linux based debugging (gdb). | - | 30-60 minutes | 50 | 30 | 20 |
I spend most of my time testing by writing unit tests to verify the code under test is doing what I expect. Next I test the code in production to verify it's working as expected when integrated with the rest of the system. | Currently writing software that runs on multiple platforms. We have to make sure we can write as much cross platform code as possible so that the same code can be reused on all platforms. | Visual Studio, Visual Studio Code, Android Studio, Clang, CMake, Git, Windows Terminal, Docker, Windows Subsystem for Linux | Everyone on our domain team gets added to the code review and is free to comment any suggestions. Two developers need to approve the code review before it gets submitted into main branch. | 11-30 seconds | 40 | 20 | 40 |
We have unit tests, component tests and system tests. Recently there has been more emphasis on unit tests and we have been adding more. Unit tests run as part of our CI/CD steps in Jenkins during the build. The build is not successful if any one of the unit tests fails. | Windows, Linux, macOS, HTML5/WebAssembly | VS, VSCode with various plugins. WinDbg. | . | Under 10 seconds | 40 | 20 | 15 |
Wherever possible and wherever there's an ROI, we write unit tests for all new code; legacy code predominantly gets tested at the system level | Target system is a Windows 10 or 2016 VDA connecting with a client (Windows, Mac, etc.) and running along with multiple components in the middle interacting with each other at various times. | Visual Studio, Sublime | Looking at whether the code works as well as its readability, along with avoiding static analysis errors. | 30-60 minutes | 5 | 3 | 2 |
Mix of automation, unit test, component test and manual testing. | Depends... it could be Linux or Windows, simple or complex deployment | VS Code, Visual Studio, git, Bitbucket | Understand what problem this code change is addressing and how. Flow of the code and data. Avoid or minimize use of global and local variables. Error handling, resource cleanup, return values and parameters to functions, their time, etc. | 5-30 minutes | 40 | 20 | 40 |
GoogleTest - I have unit tests integrated in some parts of some projects, but it's an uphill battle to incorporate it | Mostly XMOS MCUs or embedded Linux targets. | gcc, vim, CMake | Upsource - we have a language-agnostic set of acceptance criteria | 1-5 minutes | 30 | 30 | 40 |
There are a number of ways and it varies. New code is generally developed along with unit tests - these run as part of the build on the CI system. The unit tests either run the code natively on Windows/macOS or use a simulator for hardware specific functionality. There is an automated test harness using real hardware for end-to-end/integration tests. We have some great QA engineers who do manual testing and who develop automated tests. I do some manual testing while developing the code. I work with a legacy codebase and some of these testing methods have been relatively recently adopted - a lot of code is not under test... | USB, XMOS processor, audio, real-time. | VSCode, git, XMOS toolchain (including XGDB), xTag debugger, GoogleTest, lots of bash, Beagle USB analyser, Jenkins, Artifactory, YouTrack, make, cmake | Code reviews vary from meetings to discuss the approach taken to fix a particular issue to using UpSource or using merge requests in GitLab. They vary from project to project and the availability of appropriate reviewers. | 31-60 seconds | 20% | 20% | 50% |
Run scripts through Jenkins each night. | N/A | Eclipse | Usually one on one meetings stepping through the code. Sometimes we have larger groups (around 5 people) stepping through each line. | 1-5 minutes | 4 | 2 | 4 |
There is an existing unit test structure in the C code, but nothing implemented in Matlab. | My target is an autopilot. | Matlab, Visual Studio Code | They are good-natured and useful. Sometimes the points are pedantic but I would say they are useful more often than not. | 1-5 minutes | 50 | 10 | 40 |
Verifying behaviour using memory-watching while stepping through (with code running on the target), logging trace events, and functional system testing on the hardware. | STM32 Cortex-M devices (M0 through to M7). Starting to work with Espressif ESP32 devices (Xtensa core). | IAR Embedded Workbench, VS Code and CLion for IDE / text editing etc. GCC and IAR compilers. Segger J-Link, IAR I-Jet hardware debuggers. | We peer-review design documents as part of our process. Code reviews are initiated by the code author when they feel it's needed (which happens for maybe 10-25% of the code). | 11-30 seconds | 40 | 10 | 50 |
I haven't | I don't have one | Visual Studio Code, the Internet | n/a | Under 10 seconds | 0 | 0 | 0 |
We test using manual device and system tests. | It's a small portable embedded IoT device built with a commercial RTOS on an ARM M0+ chip. It senses the world and reports data periodically to a server. Low-cost hardware is a primary project driver. | a tailored eclipse with gcc-arm-embedded toolchain, plus Msys2 and Gcc on dev box for unit test. GitLab Runner. | nada | 1-5 minutes | 25 | 25 | 50 |
Self-constructed test wrappers. | Linux PCs, embedded Linux, bare metal | git, make, vi, ssh, llvm/lldb | Line-by-line, three or four participants, over audio chat. Alternatively done through GitLab. | 31-60 seconds | 40% | 20% | 40% |
New to company, they use CPPUnit. Past company we tested per requirements on a hardware rig, then we had a validation/test group that did formal testing, using custom scripting / hardware to exercise the software. It was for aviation related equipment; so we had to have a validation group certify the code and package it up to get sent to the FAA for approval. Lots of custom tools used to test that stuff. | ARM Cortex M3 chip. | IAR compiler, Cygwin, CppUnit | Done using Jira and Bitbucket. | 11-30 seconds | 60 | 20 | 20 |
Using a custom embedded unit test harness, after the code is written (I'd like to change that!) | Embedded, size constraints | Eclipse | We've been having working meetings over video conferencing SW where the SME goes over the code with those less experienced and also makes changes. Also, we're looking at tools that integrate w/ our ALM tool for more traditional reviews. | 1-5 minutes | 10 | 80 | 10 |
CI testing using Jenkins. General unit tests for non hardware dependent code. Hardware unit tests running on hardware with PC commanding the tests using python scripts. Integration tests running on hardware as well. | Several embedded systems communicating and performing various different functions. It is a high reliability environment. | Visual Studio Code, Green Hills Software | We use Gitflow and require at least one other team member to review and approve pull requests before they are merged into the develop branch. | 11-30 seconds | 20 | 40 | 40 |
This will vary based on what I am testing, but usually with our test suite. | Mostly custom | Green Hills, Vivado | Code reviews are typically done with Git pull requests reviewed by other team members | 31-60 seconds | 40 | 30 | 30 |
I almost (99%) always use XP style TDD sometimes use ATDD (depending on the project and tools available) | Mostly web, some XR devices and mobile | JetBrains IDEs mostly | Pairs > Code Reviews | Under 10 seconds | 45 | 55 | 0 |
I know how my code should react given a certain input, so I simply input the causes I already know the answer to and verify that the outputs received are what I expect | I typically deal with low-power, resource-constrained embedded systems | So far I just use an IDE to assist with coding styles, but other than that I do not use any other tools. (Compiling and other tools are used by senior members of my team.) | I haven't been a part of an official code review yet in my position. Any code reviews I did in school were more focused on getting code working than making it more efficient or better overall. | 1-5 minutes | 50 | 20 | 30 |
current project has continuous testing; previously with unit test, integration, verification, validation; incremental development and integration | Zynq based with no OS and with Linux | Green Hills, Linux, Xilinx SDK, git, Jenkins, Visual Studio Code, Tera Term, Beyond Compare, PuTTY | informally review code before integrating into development branch; formal reviews later | 1-5 minutes | 30 | 30 | 10 |
C/C++ homegrown unit tests | NA | Visual Studio | Peer reviews conducted via tool capture | 11-30 seconds | 5 | 5 | 1 |
With CppUtest, automatically run on compile and every pull request. | Bare-metal ARM, Embedded Linux on ARM, x86_64 Linux desktop-class | Vim, CMake, CppUTest, Jenkins, Bitbucket, Fisheye/Crucible, VSCode | Uses Crucible or a Bitbucket PR, usually goes on for too long, or no activity | Under 10 seconds | 30 | 40 | 30 |
By doing my own little small tests, pass and fail tests. | N/A | Momentics IDE, Eclipse IDE, Bitbucket, GitHub | I have participated in one code review session. It was as a group of 6 or 7 people. Good discussions happened on how or why a piece of code was done | 31-60 seconds | 65 | 20 | 15 |
Outside of work I try to use a unit test framework and I try to employ TDD. I have done the most TDD with Java and JUnit, but try to use it when I can. Professionally I try to encourage our org to move in that direction more when possible, but I am not directly coding very much at all there. | Professionally our organization sees a wide variety of targets from enterprise servers to low-level microcontrollers of various types. Outside work: various microcontrollers from 8-bit Atmels to 32-bit ARM and regular x86 architecture, Cortex-M0 | I tend towards lighter-weight tools and the command line; IDE-wise I have used various including Eclipse but am trying VSCode these days, JTAG tools and compiler variants depending on specific controllers, mostly gcc based. | Professionally it can vary by project; generally a developer reviews offline with guidance on what to review, some tooling such as Bitbucket to make reviews more tool-based, ideally trying to apply more continuous integration. | 1-5 minutes | 40 | 40 | 20 |
Unit test, integration tests, system tests, stress tests | ARM 9 | GreenHills | code reviews on pull request | Under 10 seconds | 45 | 45 | 10 |
I start debugging and testing by loading the code on the hardware, running it with CLI commands or instrumented code that exercises the new features. After that we use CppUTest for unit tests and also lint the code. The unit tests and lint are run automatically when a feature branch is pushed to the git server. | Typically a fairly small MCU ranging from an 8051 to a single-core ARM with integrated peripherals as well as external SPI or I2C devices. | IAR, Keil, GNU toolchains. RTOS varies from bare metal to Yocto Linux, with small or no RTOS being more common. | Two approvals are required before code can be merged. The reviews are typically very thorough. | Under 10 seconds | 50 | 20 | 30 |
I start debugging and testing by loading the code on the hardware, running it with CLI commands or instrumented code that exercises the new features. After that we use CppUTest for unit tests and also lint the code. The unit tests and lint are run automatically when a feature branch is pushed to the git server. | Typically a fairly small MCU ranging from an 8051 to a single-core ARM with integrated peripherals as well as external SPI or I2C devices. | IAR, Keil, GNU toolchains. RTOS varies from bare metal to Yocto Linux, with small or no RTOS being more common. | Two approvals are required before code can be merged. The reviews are typically very thorough. | 11-30 seconds | 50 | 20 | 30 |
Typically I start debugging and testing by loading the code on the hardware, running it with CLI commands or instrumented code that exercises the new features. After that we use CppUTest for unit tests and also lint the code. The unit tests and lint are run automatically when a feature branch is pushed to the git server. | Typically a fairly small MCU ranging from an 8051 to a single-core ARM with integrated peripherals as well as external SPI or I2C devices. | IAR, Keil, GNU toolchains. RTOS varies from bare metal to Yocto Linux, with small or no RTOS being more common. | Two approvals are required before code can be merged. The reviews are typically very thorough. | 11-30 seconds | 50 | 20 | 30 |
Run unit tests and acceptance tests. | My main target systems are mobile printers. | VMware Player, PyCharm, QNX Momentics, Notepad++, TortoiseSVN | I haven't done many code reviews yet. My projects either have not been code intensive or have been developed in a pair programming manner that did not require a code review. | 1-5 minutes | 10% | 20% | 80% |
Using Jest to perform unit testing | Embedded devices running Linux for streaming applications, commonly using C/C++ | Visual Studio Code, Bitbucket, Node.js, Jira, Azure IoT Hub | First I pull the code and try to compile it. If the compilation finished successfully, I review the implementation and look for border cases. | 1-5 minutes | 70% | 20% | 10% |
Using system- and module-level testbenches | Communication systems, wireless comms, video and image processing, cybersecurity | Vivado, Quartus, Keil, Visual Studio, Workbench, ARM and TI tools as and when needed. | Between 30 minutes and an hour. | 5-30 minutes | 50 | 20 | 30 |
My day-to-day role involves software integration, so I depend on integration tests. But I would like to learn unit testing and I think that is the right way to go. | Linux-based System on Chip. Our development is in userspace middleware. | GCC, Python, Behave, Gherkin tests | We use Gerrit | 30-60 minutes | 15 | 10 | 75 |
We are using google test for unit testing and a proprietary system for integration testing | Set-top-box | I'm personally using VSCode | We use gerrit | 5-30 minutes | 30 | 30 | 40 |
Unit testing using CppUTest/GoogleTest and DMS testing which tests the whole code base. | Linux embedded system built on set-top boxes | Visual Studio | Code review using Gerrit. | 1-5 minutes | 20 | 60 | 20 |
Currently I use several test tracks | There are several different targets depending on the region and | Visual studio code, putty, build server. | Our code reviews are done via gerrit | 5-30 minutes | 30 | 30 | 40 |
Unit tests, integration tests and sometimes full-stack | Set-top box | We use a whole lot of custom tools, mostly Linux based. Git for version control, Gerrit for review, Jenkins for DevOps | Use Gerrit | Under 10 seconds | 30 | 50 | 20 |
Test at module/component/multi-component levels. | Embedded Linux | Coverity static analysis, Vim, Neoclide/coc (clang completion), vim ccls | We use Gerrit - code from a team member is viewed by all, but only 1 person is required for sign-off. A bot also performs checks on the change. | 1-5 minutes | 10 | 55 | 35 |
N/a | N/a | N/a | N/a | 1-5 minutes | N/a | N/a | N/a |
I have just started to unit test. Very new. Previously manual. | Embedded processors | CppUTest, IAR | Mostly line by line, peer review. | 11-30 seconds | 30 | 30 | 40 |
functional tests | Gas detectors using STM-ARM cores and SiLabs 8051s | Eclipse based or IAR | Every member of the team reviews code and makes comments on compliance with the standard and business logic. | 31-60 seconds | 30 | 30 | 40 |
ATDD, TDD, exploratory manual testing. | Web app, iOS app | JetBrains IDEs. | Face-to-face every hour | Under 10 seconds | 50 | 40 | 10 |
Debug later | Embedded C running either bare metal or FreeRTOS on an MSP430 | IAR and CCS | Peer review | 11-30 seconds | 30 | 30 | 40 |
Starting to use cpputest, but sometimes difficult. Our main way of testing is flashing to the target unit and running automated tests with real stimulus. It takes tens of minutes to hours to complete a test. | ST Micro, low power, arm | IAR | None | 1-5 minutes | 3 | 2 | 5 |
Jenkins Continuous Integration, bench top and functionality testing | STM32L4+ microcontroller with various communications interfaces and SAM4L microcontroller RF system | IAR Embedded Workbench | Informal | 1-5 minutes | 50 | 25 | 25 |
On the fly, how it seems appropriate at the time. (hoping to improve this :) ) | Low power custom embedded board (variety of ARM Cortex-M micro controllers) | IAR EW, Ozone | Fairly informal, conference room, go function by function. | 11-30 seconds | 30 | 30 | 40 |
We build tests in Python and run them in Jenkins. Hard to cover all corner cases when building the tests. Our CI system is something like this: get code from repo, compile code, lint code, flash hardware with new code, run tests, then email results to developers. | I would rather not say. | IAR, QP, Git, and Vault | When software gets to a point we feel is in a good state, we schedule a code review and invite our peers. We try to give people plenty of time to review. We highlight pieces of code that we might have questions about ourselves so that others can help. | 1-5 minutes | 25 | 50 | 25 |
Collection of unit test cases, tested on development boards. | ARM Cortex M3/4 microcontrollers, low-power environments, baremetal software | GCC, Cygwin, Eclipse, IAR | Code reviews aren't often used, getting started using Gitlab's review feature. | 31-60 seconds | 30% | 30% | 40% |
end-to-end testing using CI | embedded C for micro-controllers only, cannot use C++ due to project restrictions (not even for unit testing) | IDE | GitLab Merge Requests or Fagan depending on project type | 31-60 seconds | 60 | 10 | 30 |
using continuous integration | mcu | gcc, make, iar, | what are those? | 5-30 minutes | 60 | 20 | 20 |
I write some short functional tests and try to capture the different use cases. | n/a | IAR, Eclipse. | n/a | 31-60 seconds | 50 | 25 | 25 |
in production | i have different targets | vi | does it compile? | 1-5 minutes | 1 | 1 | 98 |
Breakpoint debugging, instrumentation, test GPIO, watch windows, memory windows | Cypress PSoC 5LP, PSoC 6, Microchip PIC18s, dsPICs, SAM E family, other processors over the years. | IDEs, programmer/debuggers, logic analyzers | Seldom; working to incorporate into new designs | 1-5 minutes | 20 | 30 | 50 |
Before Ceedling I'd often test code using print statements or toggle pins if testing on hardware. For timing-critical testing I might throw test data into an array and print it all at the end. But that took a bit of extra work and system overhead. | - | I really like Notepad++ | My team only just formed and hammered out our code standard. We haven't had any code reviews yet, but plan to. | 11-30 seconds | 40% | 40% | 20% |
Testing scripts which attach to the target via debug UART. These use python and Q_SPY | Cortex M0/M3/M4 and Xilinx Zynq | IAR/GCC SEGGER JTrace/Ozone for tracing | Depending on the speed of the project, code reviews and architecture meetings are held with varying frequency. | 11-30 seconds | 20 | 30 | 50 |
Unit test, system-level test | ARM Cortex M4/A5/A9, TI MSP430, TI DSP | IAR, gcc | Sparse | 1-5 minutes | ? | ? | ? |
Most important is to design code as separate testable modules. I can then write a test program that pumps data through the modules and checks the outputs. | ARM processor, battery powered, lots of sensors, radio communications. | IAR (maybe Green Hills also) | Project dependent. Some have none. Others have full up reviews in TeamForge + Gerrit. | 1-5 minutes | 70 (w/ design) | 20 | 10 |
Manual unit tests; full unit testing over time, sometimes with the aid of continuous integration with the embedded systems in the loop | Various custom boards with mostly various ARM Cortex-M3/M4-based microcontrollers; FreeRTOS, bare metal, or QP, depending on project; mostly C, but sometimes C++ | IAR or GCC | Usually small sections get reviewed line-by-line | 11-30 seconds | 10% | 60% | 30% |
Unit tests, functional tests, performance tests | C++, Python | Xcode, Visual Studio, PyCharm, etc. | Each code change is reviewed by reviewers of that code area. | 31-60 seconds | 70 | 15 | 15 |
I write tons of runtime asserts, step through the code, then manual verification. This is followed up with black box testing. | It's a cross platform (Windows, Mac, iOS, and Android) video editor. I work on the Win and Mac UI and a library that's shared across all platforms. | Visual Studio, Xcode | We use a browser-based tool called Code Collaborator. It's not a good tool, and is downright hostile for reviews that end up having multiple revisions to any file. | 1-5 minutes | 40 | 30 | 30 |
manual app testing, Unit Testing | Rush Android application | Android Studio, Perforce, gradle | our reviews are fairly good | 5-30 minutes | 30 | 10 | 60 |
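Many of the responses above name CppUTest, GoogleTest, or a similar framework as their unit-testing approach. For readers who have not seen one of these frameworks, here is a minimal sketch of what such a test looks like, using CppUTest (one of the tools attendees mention); the `Multiply` function and `FirstTest` group are hypothetical illustrations, not taken from any response.

```cpp
// Minimal CppUTest sketch: one test group, one test, and a main()
// that runs all registered tests.
#include "CppUTest/TestHarness.h"
#include "CppUTest/CommandLineTestRunner.h"

// Hypothetical function under test (illustration only).
static int Multiply(int a, int b)
{
    return a * b;
}

TEST_GROUP(FirstTest)
{
};

TEST(FirstTest, MultiplyReturnsProduct)
{
    // Check the expected result against the actual result.
    LONGS_EQUAL(12, Multiply(3, 4));
}

int main(int argc, char** argv)
{
    return CommandLineTestRunner::RunAllTests(argc, argv);
}
```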
Make this into a word cloud
Select the text (triple click), copy, then try one of these tag cloud generators:
- jasondavies word cloud generator - colorful
- tagcrowd - lets you show counts