Training Preparation -- Current Practices of 1030 Attendees

Each response row below answers the following questions in order:

1. How do you test your code?
2. What is your target system?
3. What development tools do you use?
4. What is your code review practice?
5. How much time does it take to do an incremental build, load, and start running your code so you can start to test it?
6. What percentage of your development time is spent coding?
7. What percentage of your development time is spent testing?
8. What percentage of your development time is spent debugging?
Consequences of bugs are low because software is for internal use by a small group. Testing is limited. I led MeterMate group years ago, and we did peer reviews of ALL code changes, and some white box testing, but nothing too structured. Generally Windows applications, but interested in meter firmware Visual Studio Author of change sits down with printout and explains changes to peer reviewer line by line. 31-60 seconds 75% 10% 15% Show
Run new code with emulator, check variables using debugger to make sure the values are as expected. currently Atmel SAM4C IAR Workbench, Visual Studio, GNU Tool Chain Very useful. 1-5 minutes 33.3% 33.3% 33.3% Show
Debugger Single Step, IO Traffic Monitoring, Protocol Simulators, Stress/Avalanche Tests, Sometimes built PC apps for testing, Simulated external interfaces, Code instrumentation, Display Special characters on LCD panel, Change LED colors, Electronic Signal Injection/Monitoring, Debug Prints Smart Meters, Substation Gateways, Data Concentrators IAR, GCC/G++ Supported both Formal and Informal. Tend to be overly defensive 31-60 seconds 25 35 40 Show
Not familiar with them at this company, yet. Not familiar yet. Visual Studio, Vim, Komodo, Not familiar yet 5-30 minutes 40 30 30 Show
System and unit test. Unit testing is not automated and system test is about 60% automated. Cortex M4, Renesas Rx IAR, RVDS, Coverity, etc. We use code collaborator but not enforced 5-30 minutes 30 30 30 Show
Emulation NEC legacy controllers IAR-NEC They need work 5-30 minutes 50 25 25 Show
code a little, test a little company made, last one had a Renesas, and before that Atmel ARM, IAR IDE, Cygwin, Windows We try to get some in officially, most are done unofficially if at all. 31-60 seconds 20 20 60 Show
Debugger. Some test code that runs on the target and typically gets discarded once everything passes. Dual core, Cortex M4 micro controller. 128K RAM, 1M of flash. NEC V850 micro. 8K RAM, 256K flash. IAR compiler, Visual SlickEdit. Weak. Review diffs between old and new with a couple of programmers. 1-5 minutes 30 30 40 Show
Debugger, step through code 80515, H8, MSP430, V850, Rx, ARM Vendor, IAR SmartBear Code Collaborator online reviews 31-60 seconds 30 20 20 Show
not doing development currently Renesas V850 32-bit RISC micro not doing development currently they are required as part of our engineering process, but we do not have a documented process. 1-5 minutes 0 0 0 Show
Debugger Windows 7 Visual Studio 2010 None at this time 5-30 minutes 60 20 20 Show
Self Test ARM IAR Impromptu 1-5 minutes 50 30 20 Show
EVT/DVT and some experimentation with VectorCast harness. Did not get to step of actually running devices through full harness to determine code coverage. Card and Label printing devices with various hardware CCS, Altera Tools, SVN, SBM, GNU occasional peer review 1-5 minutes 50 25 25 Show
C++: I mainly debug or use print statements. Java: TDD; I write tests first. QNX, desktop applications Eclipse, IntelliJ, PyCharm I do not have formal code reviews 5-30 minutes 40 10 50 Show
on the machine card and label printer Altera, QNX peer review 1-5 minutes 10 50 40 Show
By running unit tests and acceptance tests RTOS based QNX operating system QNX Momentics, Eclipse We do pair programming, so one developer codes while another developer reviews the code at the same time. 1-5 minutes 50 30 20 Show
on the target hardware mobile and card printers using Freescale/QNX and Altera Nios II/uCOS respectively Momentics, Altera IDE Previous group didn't have them, new group seems to do reviews and/or pair programming 5-30 minutes 60 20 20 Show
Mostly with acceptance tests written in Python or a home-grown acceptance test tool. Some build scripts are just run with a debug flag and the output validated. Label/receipt printer that supports various printing languages and communication protocols. QNX/GCC toolchain + Perl deployment scripts to a printer/VM Non-existent 1-5 minutes 60 10 30 Show
Force all paths to execute. Thermal printers VMware, SVM, SBM, Coverity Informal with peers 1-5 minutes 20 60 20 Show
Unittests are put together after functionality is implemented and these tests are run before committing code and upon every build. Printer Momentics, Pycharm Rare 1-5 minutes 40 20 40 Show
Unit and acceptance testing, and empirical tests QNX Eclipse, PyCharm, GCC/GDB, Clang, etc. We don't generally do code reviews; mostly pair programming. 31-60 seconds 60 20 20 Show
Unit tests, auto-regression test, manual test Arm-based freescale custom boards lint, coverity, gcc N/A. Pair programming 1-5 minutes 40 20 40 Show
Add test code as needed to test various features. A lot of runtime testing, print various cards with different configurations. Embedded processor in printer. Altera tools, Code Composer Studio, emulator, logic analyzer, protocol analyzer periodic with team, maybe a hour. 31-60 seconds 70 20 10 Show
UTs, acceptance tests, etc QNX ARM Eclipse Pair prog and code reviewed 1-5 minutes 5 4 2 Show
We have a simulation system to test our PLC code. n.a. PLC software (Step 7, ControlLogix 5000, etc.) n.a. 5-30 minutes 50 25 25 Show
User testing Sample input and output - Visual Studio, Eclipse Never did before 5-30 minutes 30 10 60 Show
I usually test individual classes using test cases before integrating them in the full program. I have never used a proper test framework, though. Mostly it's by calling the class under test through a small test program and checking that the response is as expected after giving it some inputs. Embedded systems in Seismic technology. Sometimes running linux, other times barebone applications. TI's code composer studio, Keil, etc. Yet to be determined 11-30 seconds 50 30 20 Show
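The throwaway-test-program approach described in the response above is common where no framework is in place. As a hedged illustration only (the `MedianFilter3` class and its behaviour are hypothetical, not taken from any respondent's code base), such a driver might look like:

```cpp
// Minimal sketch of an ad-hoc test program that drives one class and checks
// its responses with asserts. Requires C++11. All names are illustrative.
#include <cassert>
#include <cstdio>

class MedianFilter3 {
public:
    // Returns the median of the last three samples pushed in.
    int update(int sample) {
        history[2] = history[1];
        history[1] = history[0];
        history[0] = sample;
        int a = history[0], b = history[1], c = history[2];
        int lo = (a < b) ? a : b;
        int hi = (a < b) ? b : a;
        if (c < lo) return lo;
        if (c > hi) return hi;
        return c;
    }
private:
    int history[3] = {0, 0, 0};
};

int main() {
    MedianFilter3 filter;
    filter.update(10);
    filter.update(50);
    assert(filter.update(30) == 30);   // median of 10, 50, 30
    assert(filter.update(90) == 50);   // median of 50, 30, 90
    std::puts("MedianFilter3 ad-hoc tests passed");
    return 0;
}
```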
Write code, then, write test cases to test for the good and bad cases. arm CCS Fagan 5-30 minutes 35 35 30 Show
Unit Test followed by Integration and feature tests. Arm9 and Freon processor FlatFile, RoseRT We review with other staff 11-30 seconds 40 30 30 Show
Using Unit Test or flash it into the product to test it. Real Time Embedded System IBM rational rose, clearcase, TestRT, Ultraedit, Notepad++ it's time consuming but it's somehow effective 30-60 minutes 20 40 40 Show
Do some UT when writing code, on-target test after the code is done radio scripts, Source Insight heavy 31-60 seconds 30 60 10 Show
Bundle several highly related source files together and test them together Arm9 (TI OMAP 1710) VC, RoseRT, TestRT, Source Insight Team was following Fagan inspection for reviewing code. Now team is trying to find a lightweight solution 5-30 minutes 20 40 40 Show
Test RT Ergo CPPUTEST Fagan Inspection 30-60 minutes 40% 40% 20% Show
Perform compilation, and test the code on product. Embedded Rose Realtime, CodeComposer Studio Fagan code review 5-30 minutes 40 40 20 Show
CPPUTest, TestRT, black box test, sanity test ARM RoseRT and CCS Fagan 30-60 minutes 30 40 30 Show
Hard coded some values to inject into the function to see whether the result match with expected result. Test the overall feature whether it behave like what it should be. Radio Protocol Source Insight, Rose RT FTR 5-30 minutes 50% 20% 30% Show
I test it by using the real tool. ARM 7 Multi-6 Analyze errors, look into the code and debug. 5-30 minutes 30% 30% 40% Show
UT, off target IT and on target IT. a radio basing on dual core chip. RoseRT formal review with process and tools supports. 30-60 minutes 40% 30% 30% Show
Microsoft UnitTest and GUI test under the target system .NET and Siemens PLC Microsoft Visual Studio 2012 and Siemens SIMIT I review my own code. 11-30 seconds 60 20 20 Show
Defining test cases and see whether the output is as expected. Some of my colleagues use Google Mock Motorola LMR radio Air Tracer (similar to Wireshark), standard editing tools Formal Technical Reviews 1 day or more 10 20 70 Show
UT(Function test/class test), Scenario test(task level test, off target test); IT(on target test); feature test; expert test. Embedded system Rational tools, Visual studio, ultraedit Inviting the right person is important. 5-30 minutes 30% 50% 20% Show
In target test. Embedded system Ultraedit , visual studio Fagan inspection 5-30 minutes 50% 30% 20% Show
on target freon rational rose real time, ccs fagan 5-30 minutes 50 20 30 Show
on target test ARM9 processor with RTOS Texas Instrument Code Composer Studio Motorola Process 30-60 minutes 30% 50% 20% Show
CppUTest, black box testing on target 2 way radios notepad++ fagan inspection 5-30 minutes 30% 20% 50% Show
Unit Test, White-Box Testing and Black-Box Testing on the hardware ARM9 Rational Rose Real Time, Ultra Edit tool Fagan Inspection 5-30 minutes 50% 30% 20% Show
cpputest arm ccs - 5-30 minutes 60 10 30 Show
UT IT embedded C legacy code source insight I think it's OK 30-60 minutes 50 30 20 Show
integration test, black box test, no unit test. two-way radio Rose RealTime/UltraEdit we have formal review process 5-30 minutes 50% 30% 20% Show
CXX test, Physical unit test DSP TI BIOS, ARM eclipse Fagan and formal reviews 5-30 minutes 50 20 30 Show
Partially Embedded ARM system Eclipse, GNU GCC toolchain, vim, Notepad++ Formal fagan reviews 5-30 minutes 50 20 30 Show
Compile switch enables a special test mode that runs on simulated input Embedded systems MATLAB, Keil, Eclipse small team. no review 31-60 seconds 1 1 1 Show
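A compile-switch test mode like the one mentioned in the response above is often just a conditional build of the input source. A minimal sketch, assuming a hypothetical `read_sensor()` interface and a `TEST_MODE` preprocessor flag (both illustrative, not the respondent's actual names):

```cpp
// Build with -DTEST_MODE to feed the loop from a canned input table instead
// of the real sensor driver. Names and values are hypothetical.
#include <cstdio>

#ifdef TEST_MODE
static const int kSimulatedInput[] = {100, 102, 98, 250, 101};
static int read_sensor() {
    static unsigned i = 0;
    const unsigned n = sizeof(kSimulatedInput) / sizeof(kSimulatedInput[0]);
    return kSimulatedInput[i++ % n];
}
#else
extern int read_sensor();   // real hardware driver, linked in production builds
#endif

// Hypothetical processing step exercised by the test mode.
static bool out_of_range(int sample) { return sample > 200; }

int main() {
    for (int i = 0; i < 5; ++i) {
        int sample = read_sensor();
        std::printf("sample=%d out_of_range=%d\n", sample, out_of_range(sample));
    }
    return 0;
}
```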
Simulator, running case None STEP 7 none 5-30 minutes 10 60 30 Show
None ASIC with no OS depends basically looking for memory leak and coding convention 31-60 seconds 30 40 30 Show
Functionally. Test System for Isometrix product Microsoft Visual Studios 2010 Weak 11-30 seconds 60 20 20 Show
compile in CVI CVI CVI - 31-60 seconds - 1 hr 3hr Show
Using CCStudio debugger. I used test benches while working on FPGAs TMS320F2812 CCStudio. Keil NA 1-5 minutes 30 40 30 Show
Unit tests, deployment on hardware, regression test on hardware TI TMS320F2812 CCS, Eclipse Reviews are mainly using version control system, locally, within the project team, often more formal reviews are held which include members from other teams and some subject matter experts 1-5 minutes 20% 40% 40% Show
cpputest, eclipse, mingw, sometimes Jenkins CI FPGAs, ARMs, DSP Eclipse, Visual Studio, Multi, VisualDSP++ relatively non existent 5-30 minutes 40 50 10 Show
write my own test cases, dont use any formal test procedure firmware for TI processors for high temperature high pressure high reliability safety critical systems. Code composer studio, labwindows//CVI nothing in particular 5-30 minutes 30 20 50 Show
In the Debug Environment MSP430 Code Composer Variable name is not defined to understand the use of it 30-60 minutes 6hrs 18 hrs 12 hrs Show
not applicable str911faw42 - arm 9 Ride 7 not had any yet 11-30 seconds 60 10 30 Show
Live, on the target TI 28335, MSP430 CCS6 Had none so far 5-30 minutes 30 20 50 Show
You kidding! There are no unit tests written here. Development tests yes, but these get forgotten after rework etc. improve embeded skills Eclipse, and adb logcat Gerrits, but most people +1 5-30 minutes 40% 30% 30% Show
functional and unit testing only ARM based mobile phones. eclipse, klockwork, lint, Gdebug, gerrits 2-4 hours 30 10 60 Show
1) By putting logs in code. 2) Placing breakpoints if the development environment supports it. QCOM MSM platforms. Android, Eclipse, MS Visual Studio, VI, git, perforce, SVN etc. My code is reviewed by my teammates and I get comments based on the team's understanding. They are generally about the parts of the code that are not very clear to us. 11-30 seconds 60 30 10 Show
On actual Target ARM based Mobile Devices VS We have an automated system where each change in the code should be reviewed by other developers before the changes can go in. 30-60 minutes 40 30 30 Show
1st stage: compiles correctly 2nd stage: code review 3rd stage: blackbox testing on target device Android / Linux Eclipse, vi We use gerrit to review code of peers in the final stage of development, i.e. shortly before release. 1-5 minutes 20 30 50 Show
Manual Testing Windows/Android Visual Studio/Eclipse Good 11-30 seconds 65 25 10 Show
Visual Studio Simulator and on real target CortexM0 Visual Studio, SlickEdit, RVDS Organised by developer via Code Collaborator 1-5 minutes 20 20 60 Show
Native g-test, sample applications Mobile device Develop in text editors with android compilers Consistent 1-5 minutes 30 10 60 Show
Using logs NFC native lib for android Eclipse, VS - 1-5 minutes 10% 30% 60% Show
To the requirements Equipment on a bus used mainly for fleet tracking IDE Semi-formal 5-30 minutes 30 30 40 Show
write my own test function and run against the code to be tested ARM Source Insight NA 11-30 seconds 20 20 60 Show
Manually creating conditions to test on bench, building, loading to drive, running against appropriate test suite tests. A multi-processor SSD drive, or a simulator in Linux. SlickEdit, GIT, SVN, vim, ARM compiler Reviewed in CodeCollaborator by my team members and appropriate other teams 5-30 minutes 20 30 50 Show
unit testing, regression testing SSD NA NA 1-5 minutes 50 25 25 Show
Mostly unit tests. Embedded SSD drive. At this job, Sublime Text mostly, vim in a pinch. They're generally ok, but, sometimes they are huge, making the code review ineffective because it is hard to digest all of the information. 11-30 seconds 45% 45% 10% Show
I am just getting started at this company, but so far I am test-driving an implementation of telemetry encoding and decoding using Python, which I then intend to further test-drive in C and C++. I am looking for guidance on how to test-drive very low level code, such as direct hardware interface. We work with PIC and Cortex-M series processors at the moment Leaning toward using open source whenever possible, also ARM-MDK, MPLAB X Non-existent at this point 31-60 seconds 80 10 10 Show
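For the question raised in the response above about test-driving direct hardware interfaces, one widely used approach is to pass the register address into the driver so a test can substitute an ordinary variable for the memory-mapped port. A hedged sketch with hypothetical names (`LedDriver`, `fakeRegister`), not taken from the respondent's code:

```cpp
// The driver takes the register address as a constructor parameter, so a
// host-built test can point it at a plain variable instead of real hardware.
#include <cassert>
#include <cstdint>

class LedDriver {
public:
    explicit LedDriver(volatile uint16_t* ledRegister) : leds(ledRegister) {
        *leds = 0;                                   // all LEDs off at init
    }
    void turnOn(int led)  { *leds = static_cast<uint16_t>(*leds |  (1u << (led - 1))); }
    void turnOff(int led) { *leds = static_cast<uint16_t>(*leds & ~(1u << (led - 1))); }
private:
    volatile uint16_t* leds;
};

int main() {
    volatile uint16_t fakeRegister = 0xFFFF;   // stands in for the memory-mapped port
    LedDriver driver(&fakeRegister);
    assert(fakeRegister == 0);                 // constructor cleared the port
    driver.turnOn(1);
    assert(fakeRegister == 0x0001);
    driver.turnOff(1);
    assert(fakeRegister == 0x0000);
    return 0;
}
```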
DO-178B's Requirements Based Tests. A lot of manual test procedures. Automated tests are generally not 'for credit' tests. Trying to enforce Vertical Slice user story "requirements -> code -> test" in a sprint. Traditionally, we have followed the "V" method, where test procedures come long after 'design and code' are done. Little to no automated test procedures. PowerPC based embedded computing system. Eclipse as an IDE. PCLint for static code analysis. Informal every sprint (static code analysis, Subject Matter Expert review). Formal review (Safety , Certification, Software Quality, etc... every release) 30-60 minutes 20 70 10 Show
Work on functional pieces and with some combination of testing with oscilloscope, in circuit debuggers, printfs, and functionality/interface tests for the specific application. dspic, arm cortex mplab, keil I've never had one. Under 10 seconds 30 30 40 Show
Simulator, Functional on FPGA and Live Product 5 ARM core customer SSD controller ARM, GCC, Arium ICE Formal, Required, CodeCollaborator 1-5 minutes 0 0 0 Show
We usually test our code with inhouse scripts and 3rd party scripts and tools. SATA SSD, ASIC based on ARM source insight editor, realview compiler and debugger, LeCroy SATA analyser, MS Visual studio, etc. reviews include self review, peer review, and walk through. 1-5 minutes 50 25 25 Show
Manual testing Medical infusion pump Visual Studio I perform code reviews with a checklist and extensive knowledge base 11-30 seconds 70 20 10 Show
firmware verification medical drug delivery system eclipse 3 people, 1 recorder. Code reviewed at a meeting, prereviewed before meeting. 5-30 minutes 20 25 55 Show
yes TI Code Warrior OK 5-30 minutes 10 10 10 Show
For hardware testing, we write stubs, and for application stuff, we use the Google Test framework. STM32, ARM processor IAR workbench Static analysis tools and peer reviews 31-60 seconds 30 50 20 Show
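The split described in the response above (hand-written stubs for the hardware layer, Google Test for application logic) can be illustrated with a short sketch. The ADC stub and temperature conversion below are hypothetical examples, not the respondent's code:

```cpp
// Google Test exercises the application logic; a hand-written stub stands in
// for the hardware call and returns whatever value the test sets up.
#include <gtest/gtest.h>

// Stub for the hardware-layer call used by production code.
static int stubbed_adc_reading = 0;
int read_adc_millivolts() { return stubbed_adc_reading; }

// Application logic under test: converts millivolts to tenths of a degree C.
int read_temperature_tenths() { return (read_adc_millivolts() - 500) / 10; }

TEST(TemperatureSensor, ConvertsMillivoltsToTenthsOfDegree) {
    stubbed_adc_reading = 750;                 // 750 mV
    EXPECT_EQ(25, read_temperature_tenths());  // (750 - 500) / 10
}

TEST(TemperatureSensor, ReadsZeroAtOffsetVoltage) {
    stubbed_adc_reading = 500;
    EXPECT_EQ(0, read_temperature_tenths());
}

int main(int argc, char** argv) {
    ::testing::InitGoogleTest(&argc, argv);
    return RUN_ALL_TESTS();
}
```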
Combination of automated unit tests (generally developed after code implementation) and manual tests CODESYS CODESYS, Visual studio Some are good, some not so good (depends upon the complexity of the code being reviewed) 1-5 minutes 65 25 10 (including debugging test code) Show
Mainly manual testing with some automated googletests Currently ARM based IAR EWB Performed by a peer 11-30 seconds 50 30 20 Show
unit tests, some TDD STM32Fxxx, ARM CORTEX, FreeRTOS, VxWorks visual studio, IAR Embedded Workbench, Windreiver Workbench Manual, functionality, common security issues, etc 1-5 minutes 60 25 15 Show
Design of experiments of the control algorithms that I have implemented. CODESYS Simulink PLC coder / CODESYS N/A 30-60 minutes 40 40 20 Show
I test against test specs running the code in the defined condition and taking note of the results. 8051, AVR ATMEGA32, ARM7 IAR Embedded Workbench, KEIL MDK-ARM code is reviewed by my colleagues by means of SVN compare tool 1-5 minutes 60 20 20 Show
Ad-hoc Embedded instrumentation MS Visual C, various compilers Peer review controlled by Trac tickets 31-60 seconds 70 20 10 Show
Jtag probe debugger to debug realtime issues I usually write my own debug code STM32 microcontroller based analogue input module IAR workbench peer code reviews 31-60 seconds 30% 30% 30% Show
gtest and Klocwork static analysis Cortex M processor IAR WB peer reviews 11-30 seconds 50 25 25 Show
CODESYS test manager unit testing E+PLC400 - PowerPC based system. CODESYS, visual studio Code reviews are done at the end of the code / test development process and usually involve the engineer that wrote the code and an additional engineer. 1-5 minutes 20 70 10 Show
we have unit tests for corner cases and api testing. We also use larger system black box testing scripts in python. ARM Source Insight, VIM, gcc, armcc, git we use code collaborator to review code between domains 1-5 minutes 30 50 20 Show
running automated tests, and bench testing my drive with already made tests testing done in linux on a drive or fpga eclipse, Notepad++ we use code collaborator and at least two reviewers must be on them 5-30 minutes 30 50 20 Show
unit & regression testing ARM M3 slickedit, arium Engineers do a decent job through code collaborator tool 1-5 minutes 30 30 40 Show
Via existing system-level regression tests, and through Python-initiated SCSI commands. The target is a multi-core ASIC media controller. Arium Sourcepoint, Slickedit, Git, Code Collaborator, custom Python scripts. We use Code Collaborator for reviews. 1-5 minutes 40% 30% 30% Show
- - - - 5-30 minutes 70 15 15 Show
Jenkins using test scripts using python and shell scripts. Solid state drive SSD green hills we use code collaborator to peer review 5-30 minutes 30 30 40 Show
Combination of on-target testing and unit testing 3 CPU arm-5 based system Greenhills Software, previously ARM/RealView done on a git server tool (stash) 5-30 minutes 35 20 45 Show
Manual Unit Test Black box testing Flash Controller Xtensa, GDB, Eclipse, personal, on request 1-5 minutes 30 10 60 Show
Bench test, Cutest, checkin test, nightly regression test, internal weekly tests, reliability tests, system compatibility test embedded multi-core ARM Green Hills, doxygen, Notepad++, GIT, stash, JIRA Stash, not formal, 2 engineers must approve 1-5 minutes 25 25 50 Show
Ground and pound. TI chips -- concerto, piccolo Eclipse, MS Visual Studio, Dynamic C IDE non-existent 31-60 seconds 40 10 50 Show
Blackbox tests controlled by Python that initiates host to controller (our firmware) traffic. Firmware controlled logging of events and errors. An SSD controller. Xtensa Explorer, Eclipse Mostly nonexistent. 11-30 seconds 80 15 5 Show
If I think I need to I will write code embedded in the firmware to test functionality. Otherwise I use system tests if they exist. SSD built around multiple ARM processors. ARM compiler / vim Code Collaborator. Seems to work well. 30-60 minutes 20% 30% 50% Show
- Nightly Python system tests - Separate test organization running proprietary Python test scripts against devices SSD ASIC with Tensilica CPU cores Xtensa (Eclipse- & gcc-derived dev environment) No code reviews 1-5 minutes 40 20 40 Show
Running my code on the target against current tests or test I have written. A solid state drive running in a PC. Sublime text editor, Greenhills tools, test system, serial port Code reviews are done using an online tool, from at least 2 people. 1-5 minutes 40 20 40 Show
By performing Unit Tests using such things as printf's to monitor internal states, Fault Insertion Macros, Emulators, Python scripts... SSD Compiler, Arium Debugger CodeCollaborator 1-5 minutes 60% 20% 20% Show
Python based manual test Embedded Processor Modified Eclipse N/A 5-30 minutes 20 20 40 Show
- - - - 1-5 minutes - - - Show
NA NA Eclipse NA 1-2 hours 20 40 40 Show
Stubs FPGA & ASIC w/JTAG SlickEdit, Jenkins, JIRA, etc. collaborator 1-5 minutes 0 0 0 Show
debugger, Python test scripts prototype NVMe Flash drive Xtensa Xplorer, Eclipse none 1-5 minutes 10 40 50 Show
Python scripts that perform component testing SSD firmware Confidential CodeCollaborator 1-5 minutes 60 30 10 Show
Write specialized test in Python or SPTI aimed at the device under test. Write Analyzer scripts to test protocol specific functionality. SAS SSD Arium, SPTI, Python Code Collaborator 5-30 minutes 10 30 60 Show
RTRT DTCO Eclipse & Softune we do 5-30 minutes 50 25 25 Show
module test I do not have one. Rhapsody, Compiler, Cantata++, PC-Lint, Jenkins They are based on best knowledge. 5-30 minutes 20 30 50 Show
After the development of the Code based on performance specification Embedded Eclipse, MS-VS, Codewright Happen on every code change 1-5 minutes 30% 50% 20 Show
partially on host (cantata), Dynamic tests, partially automated. Tests done in target by developers via debugger. embedded 32-bit microcontroller eclipse, done by senior developers 31-60 seconds 1 1 1 Show
Currently I use RTRT and .ptu scripts. My target system is the digital tachograph. Eclipse and CodeWright When I do a review , I check if our coding standards are respected , and if the change affect other parts of the system. 1-5 minutes 35 45 20 Show
debugging on the emulator and module test DTRPR is a module from DTCO that is responsible for calculating and providing information to the driver Softune, Rational Test RealTime First I check for functional issues and then if the coding standards are met 5-30 minutes 15 40 45 Show
I am a SW architect as such I don't directly test code Embedded multi controller Whiteboard, Pen paper ,Word, Excel, UML editor ,Compiler ,linker , Static Code analysis, Sw test tool, Debugger, Emulator, Hardware informal 31-60 seconds 10 10 10 Show
Cantata++, mainly on black-box level, which also enables a white-box test, by having enough tests on interface level to execute all the internals (close to 100% branch coverage). Different controllers, small targets, but I don't care a lot. UML (Rhapsody), eclipse No tool in place. Under 10 seconds 40 50 10 Show
Code review, Integration testing, white box testing. Code is written in C Eclipse, Visual Studio, Tasking Not enough done, and people not communicating well enough in my opinion. 1-5 minutes 15% 80% 5% Show
module test and target test Automotive embedded system with 2 controllers Eclipse CDT Galileo functional flow and coding standard 5-30 minutes 40 40 20 Show
After development we perform UT, MT, IT and ST Softune, Eclipse, Micro controller, RTRT, Cantata++, Rhapsody Softune, Eclipse, Micro controller, RTRT, Cantata++, Rhapsody Formal and semi formal review. 31-60 seconds 45 45 10 Show
Module/Unit test, Integration test, Desktop Test (Simulator/Emulator) Microcontroller Emulator, Simulator - 5-30 minutes 25 30 45 Show
With Rational Test RealTime and desktop testing Embedded system Rhapsody, Tasking and now some new Usually I have new code reviewed in a four-eyes review 5-30 minutes 10 10 10 Show
Manually, or with scripts. Linux Jenkins n/a 1-5 minutes 5-10 50 40 Show
print statements and test case runs win8 and linux eclipse, pycharm, JTAG debuggers na 11-30 seconds 20% 30% 50% Show
Unit Test, System Test SSD Eclipse Code Collab or Pull Request 1-5 minutes 25 25 50 Show
Python test scripts SSD GNU/SourcePoint Very Good 5-30 minutes 20 40 40 Show
Custom unit tests and system tests. Internal test framework. C based embeded system slickedit, sourcepoint, pycharm online with code collaborator 5-30 minutes 20 20 60 Show
Debugger/Emulator embedded slickedit, eclipse, code collaborator Using code collaborator 5-30 minutes 70 20 10 Show
I write a python script to exercise each feature that I implement. SSD arm compiler, slickedit We use Collaborator 1-5 minutes 50% 20% 30% Show
Run it on the ASIC and see what happens. Enterprise SSD Eclipse We use Collaborator by SmartBear 1-5 minutes 40% 30% 30% Show
Unit test, regression/nightly test ARM Green Hills ? 1-5 minutes 30% 40% 40% Show
Verify with breakpoints or verify with logic analyzer to decode access code source. Embedded SOC ARM DS5 Look at the flow of the code development, and how to debug/test 31-60 seconds 30% 30% 40% Show
I test the different pieces of it as I develop it. Not sure what I can tell you about it. JTAG, Greenhills probe, Eclipse, GIT They happen before anything gets checked into trunk. 1-5 minutes 20% 50% 30% Show
system level tests, unit tests ARM, Linux CentOS ARM, GNU CodeCollaborator w/ multiple peers 1-5 minutes 40 30 30 Show
Given a firmware requirement, I contact the firmware owner and we discuss expected behavior and test cases and what the expected results should be. I then create the test based on the expected behavior and the test cases we want to hit ironing out any unexpected behavior by communicating with the firmware owner. Actual disk hardware connected to a Windows 8 mother board. Eclipse with PyDev Code reviews are required and approval of the reviewers is required prior to merging the code. 1-5 minutes 30 20 50 Show
System tests, White box tests, some Unit tests Embedded ARM system with lots of custom hardware/RTL Arm compiler and toolchain, Eclipse for C/C++ Sincerely done 5-30 minutes 40 20 40 Show
It is a mix of: - hand-walking through new code with a simulator/emulator - hand-generated dedicated unit-tests - creation of special top-level regression tests - pass through the suite of regression tests Embedded multi-core ARM SSD drive SlickEdit, gcc, ddd, ARM compiler, pylint, TestRunner (this year) Run in Code Collaborator 1-5 minutes 35% (includes design) 40% 25% Show
on hardware, write python scripts or use host testing tools to test the specific functionality. we also used google test to write unit tests. not sure I understand "target system", but I work on firmware on SSD controller xtensa IDE; source insight viewer; gdb in linux;also some customized flight recorder; when using ARM processors, I enjoyed using ETM/ETB with lauterbach debugger we use code collaborators 30-60 minutes 20 40 40 Show
python script to stimulate system. Logs/printf/emulator to step through code. gdb on simulator. SSD drive eclipse editor, arium emulator, linux host with target device attached. code collaborator peer reviews 5-30 minutes 10% 20% 20% Show
yes linux gcc, more often 2-4 hours 30 40 30 Show
combo of unit/bench test Linux Eclipse, Visual Studio CC 1-5 minutes 60 20 20 Show
Functional tests in system, because that is the only way to test. Embedded microcontroller (Multi-core ARM) with dedicated custom hardware Eclipse CDT, vim, gnu-arm tools, Arium ARM debugger thorough 5-30 minutes 30 30 40 Show
Unit testing and Jenkins. SSD Logic analyzer, Debugger Code collaborator 1-5 minutes 60 30 10 Show
Write my own tests. Simulated environment in Visual Studio. Embedded FPGA, JTAG Code Collaborator Under 10 seconds 40 5 55 Show
Depends on the function of code I'm writing. Embedded ARM processors RealView, Arium Done on-line using Code Collaborator software. 1-5 minutes Depends on the phase of the project. Depends on the phase of the project. Depends on the phase of the project. Show
Xtensa debugger and python code. Tensilica CPU Xtensa Explorer, Eclipse with PyDev plugin, SAD - in house JTAG register set/display Minimal 1-5 minutes 10 10 10 Show
Third Party tools, In-house test tools, Independent Software vendor Tools, ARM Source Insight, Microsoft Visual Studio Peer code review, Code Collaborator 5-30 minutes 30 35 35 Show
Python unit tests, examination of counters and data dumps after allowing the calibration to run. PC Xtensa Eclipse, PyDev, Idle (Python) NA 1-5 minutes 35 35 30 Show
I have built some basic unit test. HD and SSD. Windows, Python, software simulation, Arium debugger.. We use Code Collaborator 1-5 minutes 50 25 25 Show
na na na na 1-2 hours 0 0 0 Show
Manually run unit-test/module testing. ARM Cortex-M4F IAR CooCox Soon to be online peer reviews. 11-30 seconds 60 25 15 Show
Automated and functional (user-run, hands-on at desk and in the lab) tests. Welders, Wire Feeders and the User Interfaces that control them. Code Composer Studio, CodeWarrior Rare, and usually over-the-shoulder 11-30 seconds 50 25 25 Show
Functional test at a system level. Unit tests informally throughout development. arc welding power supplies, Cortex M, C2000, Coldfire Code Composer, Notepad++, Eclipse, Visual Studio Very informal over the shoulder reviews. Working to develop a formal process. 1-5 minutes 50 25 25 Show
Black box test procedures on engineering test fixtures and final project. Water heaters with controls 64K code range. CodeWarrior and KDS For many years they were scarce and done in one giant effort at the end of the project. We just started using them this past year using CodeReviewer to review changes to code and will be starting new projects using it on an ongoing basis. 31-60 seconds 30 20 50 Show
Unit testing iMX6 running WEC800 Visual Studio, Platform builder Sometimes done as a group, sometimes durring paired programming 31-60 seconds 60% 10% 30% Show
Using the debugger to hit breakpoints and manipulate values Gas detection instrument with an Energy Micro processor IAR We use Crucible for online code reviews, but it is difficult to get reviewers to participate 31-60 seconds 30 30 40 Show
Run through self-created software qualification procedure along with Reliability testing tiny 8/16-bit MCU's IAR, Code Composer, AVR internal code design reviews Under 10 seconds 20% 50% 30% Show
unit testing - manual / Vector cast system testing - manual ARM IAR manual review / static analysis 1-5 minutes 50 30 20 Show
HP ALM using test protocols and test cases Insulin pump delivery IAR for embedded Usually send the code to peer reviewer and have a meeting for collaboration 31-60 seconds 0 0 0 Show
against the low level requirements using the IAR Embedded Workbench Insulin pump IAR walk-through reviews 1-5 minutes 3 years 5 years 3 years Show
Black Box Testing Embedded Systems - Insulin Pumps IAR for embedded. I am not familiar with them yet at my new company. 31-60 seconds 0 0 0 Show
White box testing, formal verification Battery-powered insulin infusion pump with 5 microprocessors inside. IAR for embedded target, mostly Visual Studio for PC Author, QE and at least one independent reviewer examine design intent and code. 31-60 seconds 50 30 20 Show
Manually at desk NXP ASP, ARM Cortex GCC, Eclipse Do one every so often 31-60 seconds 30 10 60 Show
Unit test for the C++ modules are run automatically on each build. For the C part it is still a manual process that has to be run on every change. In addition to that there's your everyday dev testing on the different embedded targets we deploy to 3 different embedded platforms each running a different version of Linux (+1 PC Linux target) QtCreator, proprietary build system (based on Scons) Web-based reviews using Reviewboard, the reviews are posted to the whole competency team working on the component 31-60 seconds 40 20 40 Show
CUnit/CMock (customized solution that fits best the Stoneridge needs) Renesas RH850 & Renesas V850 IAR, Eclipse (C/C++ package), Visual Studio, gcc Peer-review. Codestriker 31-60 seconds 35 40 25 Show
Little bit of everything, unit tests and manually, i.e. live system Truck, Bus Eclipse, IAR Workbench Don't happen very often 1-5 minutes 15 35 35 Show
Unit tests, regression tests and finally system tests Usually 32bit embedded systems, Renesas MCUs Eclipse, IAR and in-house tools Peer-review 1-5 minutes 45 35 20 Show
Unit testing PC QtCreator 2 persons are doing review on pull request. Under 10 seconds 40 40 20 Show
Dev testing manually and by running unit tests, nightly automated test runs, automated system/integration tests from a custom framework, performance tests with KPI comparison Hifi audio system running ARM, TV system running Intel Atom, IoT running PPC and host as generic Linux QtCreator, custom build env on top of SCons using GCC 4.8 ReviewBoard with SVN commit hooks 31-60 seconds 30 30 40 Show
manual testing, unit-tests (if there is time for that) linux, soft RTOS, userspace applications, audio streaming gcc, gdb, customized Scons, QtCreator as editor, console, JIRA we use 'reviewboard', RBTools, reviewing is done before committing to SVN 1-5 minutes 70 15 15 Show
Currently I use unit tests and manual tests for GUI. Cross-platform concurrent distributed systems Eclipse, Qt Designer, Qt Creator, Visual Studio Code is reviewed by at least two peers in Atlassian Stash during pull requests. Under 10 seconds 30 40 30 Show
CUnit for unit tests C# & NUnit for test automation using various hardware simulators Atmel AVR IAR, Eclipse, gcc Before every release. Tend to gather up and then we flash through too many files. 11-30 seconds 40 40 20 Show
try and fail embedded platform visual studio no experience yet 1-5 minutes 30 50 20 Show
unit tests and integration tests To make my programs more robust Microsoft Visual studio not many code reviews Under 10 seconds 70% 10% 20% Show
TestComplete Maritime monitoring and control systems for merchant systems Visual Studio 2010 - 1-5 minutes 40 30 20 Show
Mostly by debugging through JTAG or other emulators Mostly Analog Devices Blackfin, a bit of PIC and ARM Specialized IDEs based on eclipse and netbeans Next to none 11-30 seconds 35 35 30 Show
By trial and error, the old way. PIC, EFM32, Blackfin. Eclipse, GCC, MPLab x - 1-5 minutes 40 30 30 Show
Manual self-test. Verification by test department. Some unit testing for "code approriate for unit testing". Embedded Linux (ARM), Windows Embedded (x86) Qt Creator, Visual Studio, gcc, git, TFS Some percentages of our code are reviewed. Sometimes reviewer and coder sits together. Under 10 seconds 20 30 50 Show
printf statements mostly, Some ad hoc tests to debug/verify code blocks on development host ADI Blackfin w/uClinux, Cortex A8 ARM w/yocto linux gcc/g++ mostly Used to have codereviews in my earlier job, but not in my current job. 1-5 minutes 30 20 50 Show
using self developed app, which calls various methods in my code. centos7 gcc, gdb at least 2 peers will review my code, before it is committed to the official version control system 5-30 minutes 40 30 30 Show
white/black box test x86 system with various HW types Linux & open source tool chain peer code review 5-30 minutes 20 50 30 Show
using py.test for feature test video streaming eclipse after checking in code, we submit code for review and resolve alll comments before checking in to the repository. 1-5 minutes 50% 20% 30% Show
Write short test scripts, use REPL etc. CentOS vim, WebStorm We use PRRQ. Under 10 seconds Not sure Not sure Not sure Show
Currently we have our test programs written in cppTest testing framework. Linux VI Editor, GCC compiler, Linux tools Currently its being peer reviewed 5-30 minutes 50% 30% 20% Show
n/a n/a Visual Studio/Eclipse Yes 5-30 minutes 40% 40% 20% Show
For a certain module, write production code first, then write unit test to test module as a big box. Use test coverage tool to monitor test coverage, enhance the unit testing if necessary. arm9 based SSD controller from marvell, dual core. sourceinsight, arm compiler, GIT, Jira Nothing special, just read the source code and give recommendations 31-60 seconds 30% 35% 35% Show
with test scripts M510DC ARM RVDS 4.0 git pull request 5-30 minutes 20 30 50 Show
Hardware platform Hardware platform Hardware board Yes 1-5 minutes 30 40 30 Show
Write debug code and serial command, and run DM2 scripts. Don't understand. Source Insight to edit, ARM compiler to compile. It takes 3 people a whole day to review 1000 lines of code. 1-5 minutes 20 20 20 Show
UT/Coverity arm based SOC DS5 git pull request/code collaborator 1-5 minutes 20 20 20 Show
unit test. scripts. ARM ARM realreview development suite involve 2 or more senior engineers to review the code together. 1-5 minutes 30% 20% 50% Show
yes windows 2010express we review our code before we check in the code 1-5 minutes 50% 30% 20% Show
simulator + test script embedded system source insight before check in code 31-60 seconds 20% 30% 30% Show
simulator, no OS SI collaborator 5-30 minutes 40% 30% 30% Show
none metha metha CC 31-60 seconds 30 30 40 Show
unit test,script test, serial command arm arm we will create a meeting to review it 31-60 seconds 30% 40% 30% Show
unit test Hardware development board Lauch Batch Peer review and supervisor review 1-2 hours 30 30 40 Show
No Embedded system ICE ccollab 5-30 minutes 30% 30% 40% Show
Manually using unit tests that are created to verify correct code functionality and that requirements are met. Linux OS on multiple hardware platforms. Eclipse, gcc, gdb, gdbserver Code reviews conducted using ReviewBoard. 11-30 seconds 50 20 30 Show
I think of tests and I execute them. embedded gcc informal and formal 11-30 seconds 40 40 20 Show
I run it and check the output. Reduced-resource (embedded) type system running Linux. vim, gcc Every commit is scheduled for a code review by two "peers" to the person having done the change. 1-5 minutes ... ... ... Show
on the spot checks during development Embedded system using real-time kernel Kinetis Design Studio informal 1-5 minutes 50 20 30 Show
TDD Linux firmware running on Archer C7 mips target Eclipse, vim, QM modeler, gdb, svn formal 11-30 seconds 50 35 15 Show
My team uses unit tests, and we are developing fully automated integration tests. We also use simulators or Jmeter for load testing. Jboss in a VM My team uses Eclipse or Jboss developer studio One person reviews code and provides feedback. We don't do this often, but we are trying to get in the habit of doing so in a more agile manner. 1 day or more 50 30 20 Show
Manual test, automated unit test with unit test framework, some gui automation, instrumented code, debugger step-through if necessary Linux based security/networking appliance many Some people do good reviews, some people rubber-stamp, most people don't have enough domain expertise to understand the fine details of the code they are reviewing to catch issues 5-30 minutes 20% 40% 40% Show
unit tests, component tests, functional tests Linux, OSX various all code is reviewed before committed 11-30 seconds 50 25 25 Show
Both manual and automated tests Varies, largely PC/Servers and embedded primarily vi/emacs for code editing, shell scripts for basic build environments and testing code reviews are generally few and far between 1-5 minutes 35 10 55 Show
Manually, once; then as-needed. IP video set top boxes with various different hardware, SOCs, kernels, etc. gcc, make, atom, vim All code is reviewed before merging to the tip via Gerrit; Jenkins performs build testing. Under 10 seconds 80 10 10 Show
manually and automation stb rdk none na 1 day or more na na na Show
Ad-hoc procedures. Each team member has his/her own methods. Embedded system written in C++ GNU Toolchain, Make, git We review the interfaces, we rarely review implementation. 5-30 minutes 60 10 30 Show
unit tests, manual acceptance tests ARM systems Atmel Studio code together 11-30 seconds 40 40 20 Show
Manually linux vim by tech lead 1-5 minutes 70 20 10 Show
Google Test Intel Vi Peer Review - ReviewBoard 1-2 hours 30 50 20 Show
VB6 scripts communicating with a test rig. Not sure what you are looking for VB6, Python, We have not done one in a while. 30-60 minutes 20 50 30 Show
Firmware test group test the firmware per requirement for the products and how firmware implemented the requirements N/A N/A peers reviews the code Under 10 seconds 60 30 10 Show
Test scripts in python Embedded electronic power meters based on various CPUs IAR compilers, Renesas compilers Team based code reviews with everybody (3-5 people) looking at the code 31-60 seconds 30 30 30 Show
Write scripts to test the functionality ? none Email the code out and hope someone reviews it 5-30 minutes 20 20 60 Show
Unit test using scripts electronic power meter, embedded communication modules IC emulators, J-tag debuggers peer and formal code reviews 30-60 minutes 10 50 40 Show
visual basic scripts smart meters luna, visual studio peer review 2-4 hours 70 10 20 Show
unit test, test functions embedded into firmware Communication module which resides on 3rd party device IAR, CodeWrite regular code reviews Under 10 seconds 30 50 20 Show
Python testing. N/A Eclipse, Visual Studios N/A Under 10 seconds 30 35 35 Show
Exercise code and also exercise code to cause failure to insure failures are properly trapped. CC1110 radio chip on PCMCIA card. IAR compiler. CodeWrite editor. SmartSVN version control. Call meetings with other developers to review code if change is significant 1-5 minutes 1 5 2 Show
Local testing, try to think of likely failure cases ARM system IAR Embedded Workbench Use ReviewBoard and meetings 31-60 seconds 60 20 20 Show
unit test, mini system on bench, verification, system Embedded microcontrollers IAR less common than they used to be 11-30 seconds 33 33 33 Show
Run it and debug it NA NA peer review Under 10 seconds 30 60 10 Show
Custom simulator Custom ASIC Custom written assembler and simulator I am the only one who knows it 31-60 seconds 70 20 10 Show
a small amount of unit testing (when applicable), a lot of python tests acting on the DUT. ARM Cortex M4 IAR, Visual Studio, Eclipse we use ReviewBoard. I find that most code reviews are more a factor of who attends them, than the amount of them. Having the right people in the room allows for better reviews. 11-30 seconds 50 25 25 Show
we use a combination of VB and Python to test firmware releases running on the target HW via the external interfaces that are available embedded IAR compilers, Eclipse typically desk reviews with occasional group reviews, started using ReviewBoard tool to capture/track comments 1-5 minutes <10 <10 <10 Show
Try as many test cases as possible. Python script to loop through tests. ARM Cortex-M4 with RTOS IAR, Eclipse, Visual Studio Periodic depending on the size of the commit 31-60 seconds 50 25 25 Show
python sanity tests using unittest/nose, manual testing ARM Cortex M4 IAR/Visual Studio We use a combination of ReviewBoard and in person reviews. People typically review the code ahead of time and come to the reviews with questions or concerns. When changes are made, developers will send out another review of the corrections. 31-60 seconds 20 40 40 Show
real time audio analyser processor simulator Embedded DSP xIDE, Visual Studio hahaha 1-5 minutes 20% 30% 15% Show
On bench and during system test. risc processor, small memory, no OS. Notepad++, make, in house compiler. Bluelab (In house IDE), emacs if forced. We have them. 1-5 minutes 5% 5% 20% Show
In system Soundbar/Headset MIsc PC and Linux Not very fulfilling 31-60 seconds 20% 40% 40% Show
Unit testing for the areas where unit tests currently exist. Nightly smoke tests for daily builds, full system testing for release candidates. Typically an eval board, containing one of our bluetooth chips with a whole bunch of peripherals around it to exercise all the functionality designed into our code base - buttons, I2S amplifiers, microphone inputs etc In -house IDE and tools Changes are made to the code through changelists, and every changelist must be reviewed by someone qualified to review the specific area, before submitting. 1-5 minutes 15 60 25 Show
A little bit after I write it. At work? There are a small number of C unit tests which no one uses. There are many TCL Expect scripts for testing firmware behaviour. Vim and tmux All code changes are reviewed. 1-5 minutes 50 30 20 Show
System test only, however I have been trying to get into the habit of unit testing functions, however with the current code base this is quite difficult. A 16 bit microcontroller GCC Team reviews are made per commit 5-30 minutes 20 40 40 Show
Ad-hoc developer testing followed by regular system test. XAP processor. Message based single threaded environment. In house customised IDE wrapped around make, gcc and a range of other utilities/tools. Code reviews are done by another team member after completion of the bug/feature but before they're committed to the main code base. 31-60 seconds 30 20 50 Show
small projects: functional test mid-big projects: unit test bluetooth audio CSR proprietary and VS2008 balance between good practices, readability and functionality 31-60 seconds 30% 40% 30% Show
JUnit test cases NUnit test cases Relying on System Test team Windows application for .NET (NUnit) mainly Visual Studio; Eclipse Some peer reviews for .NET development Under 10 seconds 60 20 20 Show
combination of unit tests and tests on actual devices firmware for CSR Bluetooth chips emacs done by a fellow member of the team 1-5 minutes 25 20 30 Show
Code is tested on a representative hardware platform, then manual and automated tests are carried out by a Test Team Custom embedded platform with additional DSP. xIDE (in house embedded C development tool) Code reviews are strictly enforced and work well 11-30 seconds 5 20 75 ( most of my work is finding and fixing bugs) Show
Only recently started using on Host unit tests. Before that lots of manual testing during development and System tests. CSR XAP micro CSR proprietary Not always done by the most appropriate person 5-30 minutes 60 20 20 Show
We currently test our code by running the code onto the hardware and capturing debug traces. CSR XAP processor , Kalimba DSP processor Xide tool chain Sometimes they are just obligatory but most of the times they are quite in-depth. 1-5 minutes 25 25 50 Show
Manual tests, in-house Test On Target framework, Soaktester test engine which uses TCL scripts for more extensive testing. XAP2 processor GCC, GVIM, IPython, Perforce, Notepad++ Code reviews are an integral part of our team's development process. All code we submit are reviewed by at least one other teammate. 5-30 minutes 50 20 30 Show
Currently applying minimal testing on the generated code, if not none. SOC devices for wireless communication systems Cross platform for embedded C and .NET for desktop applications Regular reviews per change list in a particular field 5-30 minutes 60% 5% 35% Show
Manual tests, debuger Embedded system, integrated Bluetooth chip Notepad++, xIDE, eclipse, code::blocks I didn't do much of them 11-30 seconds 20 60 20 Show
Unity for new modules; "system test" for existing functionality; lots of manual testing when fixing bugs. Bluetooth Audio Chip (CSR) xIDE, shell scripts, make, in-house utilites Slow & cumbersome 1-5 minutes 10 50 40 Show
Manual tests Running code in debugger Unity test harness (currently on library code only) Embedded code on XAP Processor Debugger for XAP Processor All code changes must be reviewed by someone else before submitting to our codebase 1-5 minutes 30 40 30 Show
-Manually -Some automation -Unity test framework Embedded SOCs and Controllers CSR specific development tools Reviews are done as per system requirements, coding guidelines and optimization 5-30 minutes 50 30 20 Show
With Phones and Bluetooth Profile Test Suite (PTS) CSR embedded dev kits CSR development platform Code developed is reviewed for optimization, logic check and coding standard 5-30 minutes 40 30 30 Show
- Unity testing: Unity test framework using Cmock. - Functional testing: against other standard Bluetooth enabled devices (such as phones, tablets, speakers and PCs etc) - Conformance testing: using Profile tuning suite (for qualification to Bluetooth standards) Bluetooth headsets, speakers, soundbars CSR specific Development Environment (xIDE) code reviews are clear and precise from subject experts and mainly focusing on functionality aspects 31-60 seconds 40 25 35 Show
My code usually has some unit tests. In general our code is tested by system level manual tests. It is a reference design of Bluetooth music playing headsets, speakers and soundbars. It runs on a 16 bit invented-next-door processor. Eclipse and gcc Reviews are mandatory (recently) but rather informal. Due to current system limitations usually there is only one reviewer and it is not easy to refer to a specific place in the code. 11-30 seconds 40 40 20 Show
real time debugger, i/o, end result etc as appropriate CSR xap processor, Kalimba DSP Xide all changes reviewed 11-30 seconds 50 25 25 Show
unity tests Bluetooth headset/speaker/soundbar. BlueCore (XAP) with on-chip host xIDE (SDK) and other in house tools They are generally useful 30-60 minutes 40 40 20 Show
Run regression over weeks Cisco router Linux workstation Use Cisco PRRQ 31-60 seconds 50 25 25 Show
automation/ manual test/ system test beds x86 systems with network processors GNU tools Peer reviews 5-30 minutes design + code - 60% 30% 10% Show
Run it and verify it works. N/A Eclipse Very infrequent Under 10 seconds 50 25 25 Show
CppUTest Bare-metal ARM7TDMI / Linux x86 / Linux ARM GNU, Qt Creator, SVN, git, VS Code standard practice, reasonably rigorous 31-60 seconds 15 5 10 Show
Roughly: 40% via unit test, 30% via manual testing, 10% via our Lua integration, and 20% likely not covered at all. This discipline is horrible, and I need help to change it. C++/OpenGL on Linux target with 1Ghz CPU and 1GB RAM gcc, gdb, make, QtCreator, SmartSVN, CppUTest, SciTools Understand, google perftools We don't do them, but I really wish that we did. 1-5 minutes 30% 20% 50% Show
Plugging it into a simulated test environment and observing the output x86 dual core 32 bit linux gdb, valgrind Informal, and might not occur at all unless a check-in causes problems. 31-60 seconds 20 40 40 Show
An extensive but manual process involving debugger verifications, regressions, and end to end test both in a simulated and real-world environment. Embedded Linux and bare-metal ARM. vi, gdb, cscope, Qt Creator, unix Informal; usually a second set of eyeballs and a walk-through. Sometimes exhaustive, sometimes not. 11-30 seconds 25% 50% 25% Show
Manual Functional test ARM-7 raw processor, no OS GNU - 1-5 minutes 25% 25% 50% Show
More and more unit testing with lots and lots of manual integration testing. embedded linux x86 qt creator, subversion, smartsvn We have then regularly now which is good, but I would still consider them unfocused. 1-5 minutes 33% 33% 33% Show
Load image to route/switch with debug logging information Cisco switch/router clearcase, Cisco ADS Cisco Peer Review tool 1-5 minutes 50% 25% 25% Show
Mostly manual testing. With some critical python scripts I will do unit testing. In some cases I develop integrated tests. SkyView and D3. PyCharm, QtCreator, Visual Studio They are fairly productive, I credit most of my personal development as a programmer to code review feedback. 5-30 minutes 20 70 10 Show
I make a list of things that need to be tested based on requirements, and then I run through each thing manually to make sure the requirements have been tested. Then after checking in the code, I write down a manual test procedure of sorts so that someone else can carry out the steps that I just did. Often app code inside SkyView, or possibly ARM code GDB, remote debugging, logic analyzer, bus sniffing, debug serial ports We have them when we feel one is needed or desired by the developer. 1-5 minutes 50 30 20 Show
Black box testing, some built in unit test code, some throw away unit test code. Embedded systems (IOS or Linux) running in Cisco switches and routers gcc/icc Internal code reviews based on internal review tool 30-60 minutes 20 30 50 Show
mostly unit testing via TDD, some integration/acceptance testing Linux servers deployed via AWS vim, emacs, Eclipse we primarily pair and do some pull requests 11-30 seconds 49 49 2 Show
JUNIT for java, not much for c++ LINUX Eclipse code collaborator 2-4 hours 40 30 30 Show
setup testbed cisco router none cisco code review standard 5-30 minutes 50 30 20 Show
I unit test with my daemon's client ARM Linux GCC, VIM PRRQ 1-5 minutes 30 40 40 Show
Unit test N/A eclipse Peer code review 30-60 minutes 60 30 10 Show
Mostly just ad-hoc, manual testing. Linux based embedded system GNU and Qt Creator few and far between 1-5 minutes 50 20 30 Show
laborious functional bench testing ARM (preferably Cortex-M) ARM-MDK, GNU tools (gcc, gdb, make, etc.) We do them......sometimes...... 1-5 minutes 30% 20% 50% Show
white box testing, edge cases, positive and negative testing system running uC-OSII or ThreadX GreenHills Multi, IAR Embedded Workbench Peer or Formal reviews using CodeCollaborator from SmartBear 1-5 minutes 50 30 20 Show
CPPUTest run with GCC STM32 Keil, GCC, Eclipse 3 or more people, 1 must be senior, 1 must be off platform Under 10 seconds 50 25 25 Show
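Several responses in this group mention CppUTest built with the host GCC. As a hedged sketch only (the buffer functions below are hypothetical, not any respondent's code), such a test file might look like:

```cpp
// CppUTest test file compiled with the host toolchain. The code under test is
// a trivial hypothetical sample buffer.
#include "CppUTest/TestHarness.h"
#include "CppUTest/CommandLineTestRunner.h"

// Hypothetical code under test.
static int buffer[8];
static int count = 0;
void buffer_reset() { count = 0; }
bool buffer_put(int value) {
    if (count >= 8) return false;
    buffer[count++] = value;
    return true;
}
int buffer_count() { return count; }

TEST_GROUP(SampleBuffer) {
    void setup() { buffer_reset(); }   // fresh buffer before each test
};

TEST(SampleBuffer, IsEmptyAfterReset) {
    CHECK_EQUAL(0, buffer_count());
}

TEST(SampleBuffer, PutIncrementsCount) {
    CHECK(buffer_put(42));
    CHECK_EQUAL(1, buffer_count());
}

int main(int argc, char** argv) {
    return CommandLineTestRunner::RunAllTests(argc, argv);
}
```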
mostly manually, some assembly testing custom built PCBs using Microchip dsPIC33 uCs, native code, no OS Mplab X, some code analysis tool, Spreadsheet, it'd be nice. 11-30 seconds 30 20 10 Show
Embedded test. Unit Test, Product Test Embedded Green Hills, IAR We have then. Somewhat painful 31-60 seconds 40 20 40 Show
Test its feature or test with test command. uc/OS-II, Embedded system MULTI debugger Suppose this is optional. 5-30 minutes 50 30 20 Show
Unit tests using TDD where practical and system-level feature/regression tests. For full system development, I often perform system-level fuzzing tests. Varies by customer, but often Cortex-M class microcontrollers C: gcc + openocd + eclipse, Python: PyCharm, cmake. Visual Studio on Windows. Infrequent by others as my consulting is often sole-developer for a product or specific features. Lint and -Wall -Wextra -pedantic are my friends. Under 10 seconds 35 35 30 (including system testing) Show
test vectors in terms of test code + debuggers ARM based Micro running uCos, Linux based custom target GreenHills IAD, WindRiver Workbench rigorous peer review using Code Collaborator tools 31-60 seconds 30 40 30 Show
Test application to connect unit for test/debug RFID reader MULTI project manager collaborator 1-5 minutes 30% 20% 50% Show
No Windows Mobile, CE, Android and iOS VS2005, 2008, 2013, 2015, Android Studio and XCode N/A 30-60 minutes 40 10 50 Show
Verify functionality Various ARM based targets IAR, GreenHills We use Code Collaborator 1-5 minutes 30 30 40 Show
combo of unit test and functional test. various ARM and ATmega, 8051 greenhills / IAR Yes, we have them 31-60 seconds 25 25 50 Show
Consider how a module can be tested when designing API; write unit tests at same time as implementing code; integration and system testing done as a separate exercise before verification/validation SoC ARM A-9 core; Tensilica Mini108 core Xtensa IDE toolchain; TBD for ARM Use Stash pull requests as gatekeeper; reviewer can add comments and tasks 31-60 seconds 20 40 40 Show
with tests created by us EMMC Visual Studio Collaborator 30-60 minutes 30% 20% 50% Show
Previous project used Parasoft to unit test as much as possible. ARM Greenhills Good 31-60 seconds 50 30 20 Show
CppUnit embedded Visual Studio, Eclipse Stash or Code Collaborator 1-5 minutes 60 20 20 Show
CppUnit and White Box testing ARC Visual Studio Stash, CodeCollaborator 1-5 minutes 60 20 20 Show
I create some stubs in which the function is pasted. The stub has some nested loops. Each cycle loops over all possible cases of an input variable of the tested function. Furthermore, the stub contains a print with all the input parameters and the function's expected output ARC Microcontroller Visual Studio I use the Code Collaborator tool 30-60 minutes 25 30 45 Show
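The exhaustive-stub technique described in the response above (nested loops over every value of each input, printing the inputs alongside expected and actual outputs) might look like the following sketch; the `select_channel` function and its brute-force model are hypothetical:

```cpp
// Nested loops walk every combination of two 4-bit inputs and compare the
// function under test against an independent expected-value model.
#include <cstdio>

// Hypothetical function under test: lowest enabled, requested channel wins.
int select_channel(int request_mask, int enable_mask) {
    int active = request_mask & enable_mask;
    for (int ch = 0; ch < 4; ++ch)
        if (active & (1 << ch)) return ch;
    return -1;                                 // nothing selectable
}

// Independent brute-force model of the expected behaviour.
int expected_channel(int request_mask, int enable_mask) {
    for (int ch = 0; ch < 4; ++ch)
        if ((request_mask & (1 << ch)) && (enable_mask & (1 << ch))) return ch;
    return -1;
}

int main() {
    int failures = 0;
    for (int req = 0; req < 16; ++req) {          // all request_mask cases
        for (int en = 0; en < 16; ++en) {         // all enable_mask cases
            int expected = expected_channel(req, en);
            int actual = select_channel(req, en);
            std::printf("req=%2d en=%2d expected=%2d actual=%2d%s\n",
                        req, en, expected, actual,
                        expected == actual ? "" : "  <-- MISMATCH");
            if (expected != actual) ++failures;
        }
    }
    std::printf("%d mismatches out of 256 cases\n", failures);
    return failures == 0 ? 0 : 1;
}
```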
Using Simulator and TestStub Embedded System with ARC-CPU Lauterbach, Visual Studio before merging a feature branch onto the master branch, we have to create a Pull-Request (STASH-Atlassian) 31-60 seconds 40 40 20 Show
I use the simulator embedded lauterbach, Visual Studio I use STASH-Atlassian to create pull requests and approve code reviews 31-60 seconds 50 20 30 Show
Regression runs using simulation and emulation/unittest Embedded ASIC Eclipse, VC, SVN In-house, ad-hoc 5-30 minutes 10 25 25 Show
Breakpoints, step in, analyze trace logs. Use printf. Embedded HW platforms Visual C++ Studio, Eclipse Fair 1-5 minutes 20% 60% 20% Show
Tests are primarily Python-driven via a unittest-derived class library. These execute against the product firmware in various targets in emulation, simulation and actual silicon. We also have some standalone variant builds tested via Python, SystemVerilog and C. It's a management ASIC for controlling stacked DRAM die. GNU-derived C compilers, Visual C, make, Eclipse, Source Insight Offline peer review, collaborative walkthrough, resolution action-list 1-5 minutes 20% 60% 20% Show
In system testing using Python based test driver embedded processor in-house, python peer 1-5 minutes 20 40 40 Show
- - XTensa - 11-30 seconds - - - Show
We normally use Python unittest embedded C running on an ISS or compiled for Intel and tested on an emulator Source Insight they are reviewed by all the team in a meeting room. 5-30 minutes 20 60 20 Show
just verifying that requirements and specifications are respected Street Light controllers IAR Embedded Workbench ... 31-60 seconds 40% 10% 50% Show
partially with CppUTest ARM platforms IAR, Eclipse individually done 5-30 minutes 45 25 30 Show
YES ballast iar c compiler - 31-60 seconds 10 10 10 Show
Some modules are tested with unit tests (using cpputest); the majority are tested by running the application on evaluation boards and, after that, on the real product. 32bit ARM SoC with integrated 802.15.4 radio IAR EW for ARM; debug printf on UART; 802.15.4 sniffer tools I review the code myself when I need to refactor it 1-5 minutes 60% 30% 10% Show
Most manual testing. Some automated product testing 32bit microcontrollers IAR - Under 10 seconds 40% 30% 30% Show
automatic test microcontrollers IAR system - 1-5 minutes 40 40 20 Show
I write some test code, but most of the tests are done using the debugger Embedded / ARM 7, Cortex M4 EMACS, IAR, Visual Studio Manually hand it over to the reviewer 11-30 seconds 40 10 50 Show
30% unit tests 70% manual tests ARM Cortex M0 up to 64k IAR, Eclipse on demand if I see special risks 5-30 minutes 40% 40% 20% Show
Currently, I'm writing tests in JavaScript for a DALI (digital addressable light interface) library ARM controllers IAR Compiler, TestComplete ? 11-30 seconds ? ? ? Show
After writing Embedded IAR,Subversion,Eclipse Yes 31-60 seconds 60% 40% Can be long, can be short Show
I tend to automate the tests. Embedded ARM GCC / IAR / Visual Studio Refactor 1-5 minutes 30 40 30 Show
by debugging, by logging data from inside the device embedded IAR IDE seldom Under 10 seconds 20 30 50 Show
system tests µ-C, PC IAR embedded, VS2012 ? 1-2 hours 50 20 30 Show
Because the code is close to the hardware, I test whether the whole system reacts as it should. ARM Cortex uC IAR Compiler, TortoiseSVN, Mantis bug tracking Mostly done with another developer before check-in. 31-60 seconds 40% 40% 20% Show
daily QNX system QT Creator, Visual Studio, Momentics for QNX, Eclipse Daily Code review 1-5 minutes 20 50 30 Show
basic unit testing NA Visual Studio/Eclipse Have another person in the team perform code review 5-30 minutes 40% 50% 10% Show
Simulator QNX, Windows QT, Visual Studio Technical and Functional done by entire team 1-2 hours 20 30 50 Show
On target QNX/ ARM Visual C++, TI Code composer, MI peer review and desk review 31-60 seconds 30 40 30 Show
Trace logging, in-line debugging Windows CE Visual Studio, Eclipse I like to analyze code for design and logic problems Under 10 seconds 80% 15% 5% Show
Compile time debugger, Third party code validation Embedded Eclipse, Visual Studio Peer evaluation 1-5 minutes 60 20 20 Show
Write own test code TI Piccolo, Freescale i.MX QNX, TI CCS Formal code review with peers 2-4 hours 20 50 30 Show
Build/Run X QNX Buildchain, WinCE Buildchain, GNU Buildchain Basic Peer Review 30-60 minutes 10 70 20 Show
Unit test - Test using debugger - test line by line and exercise all the paths using debugger, or unit test program Integration testing Dispenser QDE Momentics, Code Composer studio Done for every change 5-30 minutes 40 35 25 Show
bespoke test code, automated test cases Several different ones Windows CE based and also QNX based Visual Studio 2005 onwards, QNX Momentics Trained to do security code reviews 1-5 minutes 50 25 25 Show
Test on target; verify behavior with debugger/logs and faking out stimuli where possible Hard embedded (32-bit TI processor), ARM7 Freescale iMX6 using QNX RTOS Eclipse IDEs provided by vendors (Momentics for QNX and Code Composer Studio for TI) Peer reviews 5-30 minutes 50 20 30 Show
write code and test it in pieces. Really interested to learn about unit testing. Heard a lot about it, but never seen it in action debuggers mostly I love to hear from people what they think of my coding 5-30 minutes 30 40 30 Show
Code review QNX, iMX6 ARM v7 QNX SDK with IDE Daily Under 10 seconds 70 20 10 Show
JUnit Mac Eclipse prrq 1-5 minutes 60% 20% 20% Show
unit test scripts and self developed tools CAN and Ethernet based system QNX and Code Composer Review with SMEs and team 1-5 minutes 30 50 20 Show
Running PC-Lint, Tessy, and by writing and executing test cases Airbag module WindRiver, Eclipse. We have them every time a change is introduced in the code 1-5 minutes 30 20 20 Show
Static and dynamic tests are run after development: for the first we use the MISRA PC-Lint tool; for the second we use the TESSY tool. Minim system, with sensors and actuators, and some other peripheral modules. IBM Rhapsody for Architecture and Design, AUTOSAR tool for compliance. Windriver with Eclipse environment for code development. We have three types of code reviews: Inspection, Walkthrough and Screening. The first is the deepest review (three or more partners), the last is only for very minor changes. 1-5 minutes 30 min 1-2 hrs 1-2 hrs Show
with Tessy and with system level tests. We currently lack real integration testing PowerPC, Renesas RH50 Windriver We do reviews 1-5 minutes 50% 35% 15% Show
By using Python Code, and simulation environments Airbag Module Windriver, WinIdea, Eclipse They are not as good as they should be because of the lack of time 30-60 minutes 40% 20% 40% Show
Tessy, small functional tests. Airbag controller Rhapsody, WinIDEA, DOORS, Windriver compiler, Tessy No mandatory preparation time; most of the time the design is not available. 1-5 minutes 30 10 20 Show
Yes, I guess there are more efficient ways Airbags Debuggers, tests by tessy, lint. We do them as part of the process development once we finish the implementation. 5-30 minutes 40 10 50 Show
I usually use test-driven development for new code and modifications. I write automated checks at multiple levels of abstraction. For true "testing", I use exploratory testing techniques. Multiple Depends on project, language, domain, etc. Prefer continuous review using methods like pairing and mobbing. 11-30 seconds 40% 50% 10% Show
manually, adhoc QNX, Arm7 Momentics Yes, regularly 1-2 hours 10% 50% 40% Show
Create test plans with the test team that cover all use cases we can think of, then go through them manually after writing the code. Freestyle Dispenser, Embedded distributed architecture. QNX Momentics IDE, Eclipse, TFS Standing code reviews each day for 1-2 hours going over changes put in by all SW engineers on the project 1-2 hours 50 30 20 Show
Manual unit test, QA, some Automation QNX and Windows CE QNX Momentics and Visual Studio Sometimes missed 1-2 hours 30 30 40 Show
I painstakingly run manual tests again and again, thinking of every way I can to break the code, fix it, and retest. I get very good quality this way, but very labor intensive. TI Piccolo and Freescale MX6 TI CCS and QNX Momentics Suggestions are to change implementations that already work 1-5 minutes 40 40 20 Show
More of integration testing. QNX OS running on a dispenser with Qt code, Crank Storyboard and QML. Qt Creator and QNX Momentics. It is mostly group code review with cross-functional team involvement. 5-30 minutes 25% 35% 40% Show
On a real target. It is a QNX-based dispenser. - They are unproductive with enough preparation time. 1-2 hours 50% 10% 10% Show
Parasoft QNX Qt Creator, QNX Momentics peer to peer, team 1-2 hours 60 20 20 Show
Cunit when possible. Failing that I normally write small toy programs to exercise algorithms in a debugger before testing them in the real system code. X86 server running linux gcc, gdb, eclipse, sublime text, HDE (our target simulator which runs on windows), real hardware. We use code collaborator. It's excellent. Far better time/payoff tradeoff than paper Fagan inspections which I've used in previous jobs. 5-30 minutes 10 10 10 Show
Windows test environment (Win32 representation of the TM500) using signal injection. On target using real networks or network simulators. Also on target using signal injection (PXI). TM500 capacity mobile Telelogic, Visual Studio Code Collaborator 1-2 hours 50 25 25 Show
Anything from unit-test (rare as limited coverage) to full regression suite (if the change has system-wide impact) Varies. Some of our code is targeted at host machines (usually Windows but on occasion Linux) but most of my work is targeted at a heterogeneous embedded system with various processor types: PPC, TI DSP, x86. MS Office and Sharepoint, Vim, our build system (which is built on top of SCons) supporting unit-tests, a non-real-time build running on Windows and the target build. CodeCollaborator is used for code reviews. In-house regression system. Certain areas of the code are subject to these but, while they do catch some bugs, they tend to degrade to debates about academic points in many cases. 1-5 minutes 60 20 20 Show
functional testing Real Chassis Slick Edit Code Collaborator 5-30 minutes 40 40 20 Show
Development test by modifying values in the debugger, system test by writing test scripts to exercise the new functionality, regression test by running old test cases on the target. SLCA effectively reads from a database, evaluates the differences and then constructs messages to different layers based on "rules" SlickEdit, Visual C++ Performed using Code Collaborator; they are useful in finding issues, but some are skipped or never actually done to completion 11-30 seconds 30 50 20 Show
ad-hoc module testing during development Overnight system regression Various embedded targets, primarily x86 based running VxWorks or Linux MSVC, gcc, Visual SlickEdit Peer reviews including TA of whichever component is changed. 1-5 minutes 20 40 40 Show
I test the product as a whole, no unit test Windows Visual Studio 2005 and 2010 All code created is reviewed 1-5 minutes 40 30 30 Show
Mostly complete system tests, sometimes in an emulated Windows environment, then manual or automated test suites on target. It's huge (well quite big), with varying configurations and CPU architectures MSVS (emulation/debug), VxWorks, GCC and internal logging/trace tools We use a web based tool - Code Collaborator 5-30 minutes 10% 30% 60% Show
Unit testing and target testing TM500 is a multi UE test system which is used by mobile network vendors Visual studio Peer review 5-30 minutes 20 50 30 Show
Mostly using our simulation environment or the target. in early stages of development, when validating new code, I use hard-coded values, or modified versions of existing tests to spoof new features. TI DSP MSVS I review team members code. My code does not often get reviewed 1-5 minutes 50 35 15 Show
Manual visual testing, System testing. Windows PC/Linux application. Visual studio We do code review using code collaborator. 1-5 minutes 20 40 40 Show
For unit testing, I usually export C functions into MATLAB and write MATLAB test scripts to do the testing. We have a regression and validation team, with hundreds of system level tests that are run on a daily basis. . . . 1-5 minutes 20 20 20 Show
unit, integration, QA Clojure web app _ Occurs during pull requests Under 10 seconds 30 30 40 Show
With an automated test suite through leiningen (clojure project). Beaglebone Black/Raspberry Pi emacs and command line build tools Informal, ad-hoc. Post a pull-request in github and (possibly) ask someone else to review the code. No merging your own PR's. 31-60 seconds 40 45 15 Show
TDD often, but not always N/A vim, tmux Often done individually in pull requests on GitHub, but occasionally we get together for group discussion around some code 11-30 seconds 30 50 20 Show
On client projects I practice TDD to write any production code. On Arduino projects I just write some code, deploy it to the micro controller and check if it worked. Micro Controllers Currently just the Arduino IDE. Currently I am the only one exposed to this code. (No code reviews) Under 10 seconds 30 20 50 Show
Test-first, including black box integration tests Some sort of Linux Vim, Jetbrains IDEs Look for bugs, unhandled cases, inadequate testing, style concerns Under 10 seconds 30 60 10 Show
I unit test in Python using the unittest module. I've experimented with pytest. For C, I use Google Test and call my C code from C++. x86 Windows is the most important, but we also support Mac and Linux. TI DSPs are an important target as well. Visual Studio, Xcode, Code Composer Code reviews are done using Swarm. 3-4 people are involved. 30-60 minutes 30 50 20 Show
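A minimal sketch of testing C code from a C++ Google Test binary as mentioned in the row above; crc8() is a hypothetical routine defined inline with C linkage so the example is self-contained (in a real project it would live in a .c file and be pulled in through a header wrapped in extern "C"):

    #include <gtest/gtest.h>

    // Hypothetical C routine under test; extern "C" gives it C linkage even
    // though it is defined here to keep the sketch self-contained.
    extern "C" unsigned char crc8(const unsigned char* data, unsigned int len)
    {
        unsigned char crc = 0;
        for (unsigned int i = 0; i < len; ++i) {
            crc ^= data[i];
            for (int bit = 0; bit < 8; ++bit)
                crc = (crc & 0x80u) ? (unsigned char)((crc << 1) ^ 0x07u)
                                    : (unsigned char)(crc << 1);
        }
        return crc;
    }

    TEST(Crc8, EmptyBufferIsZero)
    {
        EXPECT_EQ(0u, crc8(nullptr, 0));
    }

    TEST(Crc8, SingleByteMatchesHandComputedValue)
    {
        const unsigned char byte = 0x01;
        EXPECT_EQ(0x07u, crc8(&byte, 1));  // 0x01 shifted through polynomial 0x07
    }

    int main(int argc, char** argv)
    {
        ::testing::InitGoogleTest(&argc, argv);
        return RUN_ALL_TESTS();
    }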
TDD OSX Yosemite Mostly VIM, XCode for iOS, Trying out CLion for C++ Pull requests that require multiple reviewers. Don't remind me, it hurts. 1-5 minutes 25% 25% 50% Show
I use a simulator or the HIL (hardware-in-the-loop) in our software lab most of the time. And I have to use cranes to capture the faults that cannot be duplicated in the lab. The modules communicate with each other via CAN bus (CANopen). I trace the data on the CAN bus to look for anything abnormal. I use a Lauterbach debugger for debugging. The unit testing is done by our team in India. mobile cranes GCC compiler, Tasking C/C++ compiler, Qt, Lauterbach with Trace32, dSPACE (HIL) We use Phabricator to do online reviews 5-30 minutes 30 20 50 Show
project/product specific: most recently used catch cpp framework to test C code (x86 target); wrote custom harnesses for embedded products depends on the project: DSP + FPGA, x86, ... gcc variants, MSVC, custom Code Collaborator, Perforce Swarm 1-5 minutes 80 10 10 Show
Visual Studio and/or GDB. 32-bit Linux, usually an ancient version of Ubuntu; sometimes Windows Visual Studio w/helpful plugins; emacs/GCC/GDB I do wish I could get more of my colleagues to review my code! 1-5 minutes 15 7 8 Show
Beginning to do more automated unit testing w/GTest, after baseline CppUnit tests languishing for several years. Also, do manual unit test, incremental integration, and automated functional testing as well as SAT. Telecom Tester - back-end/embedded C++; front-end GUI (Qt), sometimes two different processors Git, Crucible, Jira Crucible, usually involving 2-5 reviewers, with audience mix - those familiar/unfamiliar with code 1-5 minutes 60 30 10 Show
Manual testing during development, Integration testing, occasional unit testing. MTS network testing unit, iOS App Eclipse, OpenGrok, Git I have code reviews for any code that I write. 1-5 minutes 50 20 30 Show
Unit test MTS linux gnu Fisheye Crucible 5-30 minutes 40 30 30 Show
Compile, run, fix problems, repeat. Embedded Linux Jira, Crucible, Custom-developed tools Tedious 5-30 minutes 20 40 40 Show
combination or unit testing and manual testing Generally embedded platform(arm) sometimes pc or mobile vim, git, opengrok, jenkins, jira, crucible (used to use eclipse) code reviews are generally done through crucible and work pretty well 5-30 minutes 20% 30% 50% Show
debugger / break points, trial and error, print statements Intel x86/x64, ARM A7 w/Linux, TI C64x, C66x Visual Studio 2010, Sublime Text, Code Composer, Perforce peer review signs off on changes 5-30 minutes 60 20 20 Show
Regression tests Handheld embedded tester for Telecom applications. Kdevelop IDE Reviews done using Crucible with Fisheye. 1-5 minutes 50 20 30 Show
unit tests, automated UI tests running on hardware, and manual testing of software running on our hardware. several flavors of portable network test instruments. vim use fisheye to review changes with my development team. 11-30 seconds 45 35 20 Show
Mostly through debug statements put into the code. home grown linux system with custom processor board. gcc compiler, custom build and install scripts, sublime text editor, Setup code reviews for all code changes. Can close when at least a couple people have reviewed and comments are addressed. 11-30 seconds 20 30 50 Show
edit, build, check feature manually on hardware. Want to do TDD, but haven't bit the bullet to pay the up front cost of learning how to do it in our system. . I primarily work in the terminal. vim, tmux. Effective when used. Some of the veteran developers don't participate. 1-5 minutes 50 20 30 Show
Large system tests done by a technician A variety of safety products, legacy is mainly pic processors, future is hopefully ARM Whatever is available I have not had a code review at this company. 31-60 seconds 5% 25% 25% Show
My company does manual testing, I just started TDD in the last six months. Microchip dsPics, ST Arm MPLAB X, Keil uVision Weekly. Historically concerned with bugs, but now we cover new code for feedback. Try not to have more than 5 engineers in a review, engineers split up by customers with a few rotating. 11-30 seconds 20 30 50 Show
Mostly verifying that the "happy path" works and poking around a bit to cover all the code "branches" and making sure that the boundaries are satisfied Tcl or QT running on a linux test instrument git, eclipse, jam, opengrok, fedora 17, jira fisheye crucible 1-5 minutes 75 5 20 Show
In system debug and simulation Custom hardware Vendor provided tools Performed based on who is doing the coding 5-30 minutes 50 10 30 Show
At run time I monitor the results to be in line with how they are written. Difficult to find what is built where. Sublime Text, Website designed to print results. Spread across team members. Critiquing naming of anything to make it clear and concise to avoid comments. Elimination of function duplication. 1-5 minutes 30% 60% 10% Show
Benchtop Functional Testing. Prototype headlight systems IAR, KDS, CCS, CCES, MPLABX, LPCXpresso Almost non-existent - there's a lack of knowledgeable firmware developers. 31-60 seconds 70 10 20 Show
Mainly functional testing, by running the code on the target hardware and manually check the behavior. Embedded hardware gcc/gdb, QtCreator Reviews are "created" manually after submitting code. The developer is responsible for creating or not a review and for selecting the reviewers and observers. 1-5 minutes 40 30 30 Show
Mostly integration level testing. Embedded software running on telecommunication test equipment. Major components are test application, user interface, and FPGA. Eclipse for editing. GNU compilers. Atlassian tools to induce anguish and frustration. We use Atlassian Fisheye + Crucible, one of their better tools, to distribute reviews to various engineers. Some take the time to review the code carefully. Most just scroll through it and approve it. 1-5 minutes 40 20 40 Show
Unit tests, automated integration tests, manual testing x86 kvm targets Eclipse, maven, surefire We use them rarely. 1-5 minutes 50 20 30 Show
Project A: Running it on the target. Project B: Gtest + python test driver + manual testing. A: Embedded arm multi-core, homegrown linux distro. B. Xeon server blade vi(m), gnu toolchains, gdb. Jira, Jenkins No enforced standards. FishEye and Crucible available. 5-30 minutes 70 20 10 Show
A combination of writing tests in GoogleTest and manual testing. We cross compile for an ARM system. We compile an entire linux system for it. Eclipse, gcc, gdb, GoogleTest We use Fisheye + Crucible. 5-30 minutes 40% 40% 20% Show
Some Unit testing, Integration testing via C test program, Manual testing, Automated tests using homegrown tool called Lorna which uses REST API for issuing commands. Linux based VMs, Linux based embedded printf Sometimes 1-5 minutes 50 30 20 Show
Component tests, product stack tests Digital TV STBs, linux OS Eclipse, gcc, Depends on reviewer. Some useful, some focused on spelling & whitespace 1-5 minutes 5 25 70 Show
CUnit Linux Linux, Eclipse, unit testing, OOP, design patterns Gerrit (git), done by at least 2 other members of the team (and 2 approvals needed before releasing) 1-5 minutes 40 40 20 Show
Our code is componentised and each component consists of a number of modules, so we test at module level (white box), component level (black box, exercising at API level) and at integration level (the complete collection of components that constitute our middleware). Set Top Box GNU toolchain, DDD, Source Insight, Klocwork, GIT, GCOV,CUNIT All code is peer reviewed via Gerrit before delivery 1-5 minutes 50 30 20 Show
As an org we have: - DMS test harness (integration tests at our DMS API) - Component Tests (integration and some unit tests) We have no efficient unit test environment on the host. Set top boxes (less embedded these days and more Linux based with some special hardware) Coverity being rolled out for static analysis. Pretty standard set of dev tools but no hardware debuggers. Peer review (using Gerrit as we move from ClearCase to Git) 1-2 hours 0 0 0 Show
component tests, subsystem tests digital tv set-top boxes code editor (eclipse), some tools for viewing/analysing logs they are mandatory, done using Review Board or Gerrit 5-30 minutes 20% 40% 40% Show
CUnit, try to write test code before writing the source code being tested. Embedded linux on TV set-top boxes Eclipse, Git, Gerrit, GNU tools for compilation, debugging etc. Gerrit 31-60 seconds 5% 10% 50% Show
We have component tests for individual Fusion components, DMS tests for the whole middleware and full stack. Set Top Box Eclipse All the code we write is reviewed, at the moment we use Gerrit 31-60 seconds 20% 30% 50% Show
Unit tests, component tests and then feature level tests. Linux gcc, make, cunit Gerrit 5-30 minutes 30 30 40 Show
Produce new/use existing regression tests. Visual C Studio Desk reviews 30-60 minutes 25 50 25 Show
Windows emulation. Integration testing. C66 TI DSP family and X86 Visual C++ Eventually they are long and hard to go through. 5-30 minutes 30 30 40 Show
DEBUG Statements TI and Intel Visual Studio Code Collaborator 1-5 minutes 30% 40% 30% Show
Mostly system based testing on target hardware. Correct operation is verified using a logging system built into the system. Due to the distributed nature of the system and the real time constraints of the system, debuggers are not available. Some parts of the system have an 'emulated environment build' available which runs on the local PC and allows the use of a debugger. However, I am involved in a part of the system for which this is not available. Additionally the above testing is performed on remote systems that are shared between developers, so systems may not be available when you really want them to be. Some parts of the system have built-in unit tests that can be run as part of a build. However, coverage of such tests is very far from complete. Where this is not adequate, or it takes too long or a debugger is required to determine the nature of a fault, ad-hoc test harnesses are used to perform functional testing. Distributed system consisting of a mix of X86 processor cards (PowerPC on older targets) and Texas C64/C66 DSP cards. Processors are interconnected using SRIO. Compilers, SCONS Code Collaborator. All code must be reviewed before committing/merging to top of tree. 5-30 minutes 20 50 30 Show
gcc -Werror -Wall; debug statements to provide observable feedback that the code is doing the correct thing. Embedded Linux SBC gcc, gdb, emacs Can't get one. Under 10 seconds 30 35 35 Show
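A sketch of the kind of observable-feedback debug statements this row mentions, using a GCC-style variadic macro that prints file/line context and compiles away when NDEBUG is defined (illustrative only, not the respondent's actual code):

    #include <cstdio>

    // Debug print macro: active in debug builds, a no-op when NDEBUG is defined.
    // ##__VA_ARGS__ is a GNU extension, consistent with the gcc toolchain above.
    #ifndef NDEBUG
    #define DBG(fmt, ...) \
        std::fprintf(stderr, "%s:%d: " fmt "\n", __FILE__, __LINE__, ##__VA_ARGS__)
    #else
    #define DBG(fmt, ...) ((void)0)
    #endif

    static int divide(int num, int den)
    {
        DBG("divide(%d, %d)", num, den);
        if (den == 0) {
            DBG("division by zero, returning 0");
            return 0;
        }
        return num / den;
    }

    int main()
    {
        DBG("result = %d", divide(10, 2));
        DBG("result = %d", divide(10, 0));
        return 0;
    }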
In Host Development Environment which is a all-software version of the TM500 and 7100 products. Later verification on the hardware in RAV environment using debug logs. TM500 load test mobile Visual Studio, HDE Code Collaborator 5-30 minutes 50% 25% 25% Show
Test harnesses, simulation, target test... Performed during dev stages rather than right at end. TM500 Visual C they occur 1-5 minutes 35 50 5 Show
Using self written automated module test framework PPC, X86 cards running VxWorks Clearcase, Code Collaborator, Module Test framework, BeyondCompare, BoundsChecker, Slickedit We use code collaborator for code reviews 11-30 seconds 40 40 20 Show
Unit test stubs, Host testing TM500 gnu peer reviews 5-30 minutes 30 40 40 Show
On the most part "developer testing", exercising paths in a new piece of code until I have confidence it fulfils the formal/informal requirements. In some cases this requires exercising the same piece of code on multiple operating systems. Some regression testing is also performed using our automation system. PC based (WIN32/UBUNTU/FEDORA), TM500 target (VxWorks/Linux) Various IDEs Eclipse(Juno), Visual Studio (2005/2010/2013), CodeCollaborator, We use SmartBear's Code Collaborator tool to review all code changes. 31-60 seconds 30 30 40 Show
Workstation test environment, workstation debugger Multi-processor embedded system Slickedit, Telelogic Tau, Microsoft Visual Studio All code reviewed prior to returning to code base using online review tools 31-60 seconds 40 40 20 Show
Developer testing is done using debugger in Microsoft Visual Studio. This is used in combination with product simulator TMA (Test Mobile Application) for windows. Custom Hardware with many product variations and associated operating system Visual Studio Very good and detailed. 1-5 minutes 30 30 40 Show
Current and previous roles are architecture/management so very little coding in last 10 years proprietary h/w with mixture of processors (FPGA, DSP, x86) microsoft visual c n/a in current role 5-30 minutes 2.5 5 5 Show
Using bespoke test code Embedded Linux, vxWorks Boundschecker, MS Visual C++ We use Code Collaborator 11-30 seconds 60% 20% 20% Show
In a system simulator - takes ages to set it up and get it running. Sometimes problems are tested on target using logging to establish if it looks 'okay'. Embedded DSP + more recently, Intel server Visual Studio, Slickedit We use Code Collaborator, though I have never used it. 5-30 minutes 10 40 40 Show
Automated regression tests, some bits with cpputest. Aeroflex TM500, PPC, x86, DSP Eclipse, VS, gcc, gdb Use code collaborator, works quite well 5-30 minutes 30 40 30 Show
Self-written test harnesses Windows, Embedded DSP Visual Studio 2008 at present Peer to peer 1-5 minutes 50 25 25 Show
developer level testing which is more of a sub-system test. Use the debugger extensively to ensure testing a specific leg of code by manipulating variables at run time. Then we move to target test where a few end-to-end test cases are tried. . . It has to be done as part of the process and Code Collaborator is used for this. Reviews usually target adherence to the coding standard and also check functional needs/requirements. 5-30 minutes 30 30 40 Show
Self review - dry run. HDE testing - ad hoc testing. Test spec for RAV team system tests - pass criteria. Follow-up analysis of debug logs for code path execution and parameter values. Embedded multi-processor system. Visual Studio, text editor (gVim), ClearCase, command line tools Those I conduct - focus on correctness of behavior and robustness of code, coverage of error checking. 1-5 minutes 25% 15% 20% Show
Windows based development environment Windows, Target (TM500) GNU, Visual Studio, SCONS Code Collaborator 5-30 minutes 40 30 40 Show
through tests DSP GNU and visual studio code is reviewed by peers before commit 5-30 minutes 50 40 10 Show
Custom test functions and an external application that exercises the code. Linux/Windows/vxWorks Slickedit Use Code Collaborator 11-30 seconds 70 20 10 Show
Test harness, IDEs, HDE Linux, VxWorks real-time embedded In-house (HDE), BoundsChecker, App Verifier Code Collaborator peer review based 1-5 minutes 33 33 33 Show
Input tests Embedded C platforms In Circuit Debuggers, IDEs, DebugPorts bi-monthly 11-30 seconds 50% 25% 25% Show
Actual hardware in a bench top unit with input and load simulation running processors through IDE Debugger. Currently Microchip PIC24, PIC32 or DSPic33 based hardware. Ethernet, CAN bus, USB, UART communications. Some systems have touchscreen GUI. Microchip MPLAB IDE Has been casual in the past but currently driving for more formal reviews. Under 10 seconds 40 40 20 Show
Manual testing at work (ouch), unit tests at home. Embedded device running uClinux, ATtiny parts at home vim. Have used Visual Studio, CodeRed (Eclipse), Sublime Text, want to use CLion I ask people to take a look and they ask me if it works :( 31-60 seconds ? ? ? Show
nUnit For the purpose of this training, Microchip PIC Visual Studio, MPLab na 31-60 seconds 50 30 20 Show
Mostly at an integration level or module by module with a custom test application Mostly embedded, ARM Cortex devices IAR, GCC, CrossStudio, Visual Studio, CCS Very ad-hoc and informal. 1-5 minutes 50 25 25 Show
In the past I primarily wrote in VHDL for FPGAs where we selectively tested code using testbenches. This is all fairly analogous to unit tests. More recently I've been writing C for microcontrollers, C++ for the ARM-A9 processors in the Zynq SoC, and Python for test equipment. I'm struggling to use GoogleTest and abstract away the hardware specifics of the different devices. Primarily the Xilinx Zynq-7000 series SoC Visual Studio, Eclipse, SDK (Xilinx's port of Eclipse), XCode, Komodo We don't have code reviews. 11-30 seconds 20 50 30 Show
JTAG for difficult errors, trace for simple errors and unit test for regression testing. Typically ARM Cortex-M, ARM cortex A, DSP devices. ARM /Keil compiler, GCC not so much 1-5 minutes 33 33 33 Show
yes, mostly manual, increasingly TDD web sublime editor n/a Under 10 seconds 70 15 15 Show
Writing TUT tests and also automatic testing by calling products from NUnit tests. Currently Windows CE Visual Studio 2008 Not so often used 1-5 minutes 40% 40% 20% Show
Drivers are tested with small test applications ARM Cortex-A8, NAND/eMMC, LPDDR, cameras, WiFi, USB host/devices, I2C, SPI, SDHC, headless Visual Studio/Platform Builder, Eclipse for ARM GCC Non-existent 31-60 seconds 50 10 40 Show
I test-drive if it's gonna matter. If it matters more, I test-drive more carefully. no idea, Arduino or RPi maybe? git, vim, make, Eclipse, clang Reviewing afterward is too late. I missed the decisions as they were being made (or someone else missed mine). I prefer pairing. Under 10 seconds 40 50 10 Show
First debugger and printf. Writing code modules to stress functions in my application. Later during the development process via NUnit and scripts stressing the application via communication channels. End phase via logging when the complete system is stressed. Latest HW is using an ARM11 based controller and OSE RTOS. Lauterbach debugger and GNU compiler. Optional to ask a colleague for help... 31-60 seconds 20% 40% 40% Show
Mostly manually. Total station for surveyors. Measures angles and distance with high accuracy. ARM AT91M42800, OSE, WinCE. Visual Studio, GNU make, GCC, AXD debugger, Sublime, Eclipse, Emacs, PuTTY, Git, SVN, CVS, ClearCase. Some companies have more of them (defined in the development process). Now I usually have them on a request basis. And I prefer pair programming, or I think it's called extreme programming. 1-5 minutes 25% 25% 50% Show
Started using CppUTest for new modules using the simulator in IAR. Company-produced embedded device using a Kinetis microcontroller. It is installed into cars. IAR compiler We use a web based system called Upsource. This integrates with SVN and allows in-line comments. 11-30 seconds 30 10 30 Show
Unit tested. Programming in pairs whenever possible & sensible Cloud Server JetBrains IDEs Pair Programming: the instant code review. Another pair code reviews Github PRs Under 10 seconds 40 40 20 Show
We do unit tests in CUnit, and we have developed a test framework in Python for integration tests. However, these things are only used by the team I belong to. The other teams do only manual testing for their parts. Total system testing is only manual. Cortex-A9 CPU GreenHills Integrity OS and compiler The reviews focus on the high-level thought of the solution. We lag behind when stressed. 11-30 seconds 15 50 35 Show
mostly manual, some automatic tests at a higher level. Automatic system level tests and integration tests where the target system is accessed from a PC. The PC side runs NUnit but the tests are not what is normally called unit level tests. All are ARM based; the one I use most is a Freescale ARM i.MX35 (ARM11) running the Enea OSE RTOS. Some work on an i.MX51 (Cortex-A8) with WinCE GCC toolchain, Eclipse as editor, debugging in Lauterbach Trace32 using JTAG we do not perform them. I tried several years ago but there was no interest, but I think/hope this is changing 31-60 seconds 40 20 40 Show
Small things are driven in with unit tests. Acceptance-level tests (written in Gherkin, driven by Cucumber) are written to demonstrate feature completeness to the customer. Currently, it's cross-platform desktops IntelliJ, vim, QtCreator, Jenkins Currently we use GitHub 31-60 seconds 40 40 20 Show
TDD, with exploratory testing before handoff to QA. Tomcat servers running in AWS IntelliJ Usually fairly short, examining test coverage as much as code quality. Under 10 seconds 1 1 1 Show
Partial unit testing and manual tests ARM Cortex-M4 IAR Trialing Upsource, 1-5 minutes 40 30 30 Show
Manually Cortex M4 IAR Online 11-30 seconds 60 30 10 Show
Execute functionality with diagnostics It's a vehicle installed unit with a a Kinetis K70 (ARM Cortex-M4) processor and sensors IAR We currently use Upsource. All new blocks of functionality are reviewed. 11-30 seconds 50 30 20 Show
Manual and semi automatic test programs ARM processors Visual Studio, Platform builder Nonexistent Under 10 seconds 30 20 50 Show
NUnit, test applications, debugging, TUT, ... X86, ARM7, STM32, ... VS2008, ... - 30-60 minutes 30 50 20 Show
Unit tests. End-to-end functional tests. Custom Linux communications appliances. Linux development tools, Git, CVS, etc. Peer reviews within the dev team 1-5 minutes 40 30 30 Show
A lot. Web/Clojure IntelliJ, Emacs Pair when possible Under 10 seconds 49 49 2 Show
Functional regression testing. We started one module with TDD but discontinued it because not enough people were on board. We also use Lint. Water heater user interface and safety code Freescale/NXP Kinetis Development System (Eclipse based) We use CodeCollaborator on a daily basis. Several people review every piece of code generated. 11-30 seconds 40% 30% 30% Show
Unit tests, manual integration tests, a system test group. Various embedded platforms -- from embedded linux down to tiny bare metal microcontrollers depending on the application. Various - Yocto, Keil, various IDE's provided by chip vendors, various debuggers, etc. Like the target systems -- it depends on the application. Using a Github style code review during the pull request. Currently no checklists / requirements on the code review. Seem to be taken seriously and done well. Much more focused on functionality than boilerplate. 31-60 seconds 60 20 20 Show
gtest unit tests (that often look more like integration tests), nightly runs of our sample apps, QA-run regression tests (cucumber). I mainly focus on automotive head units, but the SDK I work on runs on desktops, in servers, on phones, etc. MS dev studio, p4, araxis merge they happen ad hoc at the discretion of the developer. as a senior guy, I am often the reviewer - sometimes offline, sometimes via shared desktop. sometimes we have group-level reviews. 31-60 seconds 40 20 40 Show
mixture of GoogleTest, manual testing and formal QA. multiple target platforms from embedded class to server class hardware. Windows, mac, & linux based systems. Visual Studio + homebrewed build system infrequent 5-30 minutes 50 10 40 Show
unit tests, functional tests a variety of linux-based systems gnu tools, autotools, cmake Use bitbucket review board system 11-30 seconds 45 15 40 Show
Primarily an automated regression test suite with some unit tests. Linux for product and multiplatform for open source version gcc and related open source tools typically one person gives a quick look-over after the feature is complete :-( 31-60 seconds 20 70 10 Show
If it compiles, it ships! :) Linux web apps VS, Sublime Text Commercially, I always pair or mob up Under 10 seconds 30 10 60 Show
Manual test fixtures Embedded displays and controllers Eclipse, Visual Studio We use Code Collaborator as a team 1-5 minutes 30 30 40 Show
C# is my current language. For this I'm using TDD, BDD and above mentioned test harnesses. Desktop applications. Previous work on ARM, TI's TMS320C54x,C55x series and Qualcomm's QDSP5 processor. Nowadays primarily Visual Studio. On-the-fly - we discuss in the mob while coding. 11-30 seconds 90 5 5 Show
Unit tests, integration tests and automated UI tests. Windows desktop Visual Studio 2015, SQL Server 2016 We code in groups ("mobs"), so code is constantly peer reviewed by 1-5 other developers at all times. 31-60 seconds 40 45 15 Show
Printing debug messages via UART, and a JTAG debugger like the ICD3. Microchip CPUs and MPUs. Microchip IDE and Microsoft Visual Studio I review my own code 5-30 minutes 50 30 20 Show
Unit tests, Integration (UI) tests, Smoke tests Web applications, embedded SW for controllers Visual Studio Mob programming code review is done on the go, no official code review Under 10 seconds 50 45 5 Show
TDD Microchip PICMZ32 and PICMX32 MPLAB-x, Visual C 2008, SlickEdit, GTK, PC-Lint, source monitor None 31-60 seconds 20 40 40 Show
We use Visual Studio's test tools with Resharper installed and providing some extensions. In addition to unit tests we run CodedUI tests which target the user interface of the application. primary-Desktop, embedded isn't my primary Visual Studio We Mob program - code is under constant review 31-60 seconds 40 30 30 Show
TDD, BDD Sprinkler control systems Visual Studio, Visual Studio Code, Atom, Note Pad ++ Mob Programming 31-60 seconds 85 85 5 Show
With a simulator and with test code. Microchip microcontroller Microsoft Visual C General reviews Under 10 seconds 60 30 10 Show
using mstest or nunit usually x86 visual studio, vs code, notepad++, resharper using mob programming, there is frequent questioning about direction and implementation of code which necessarily requires explanation and sometimes justification. 1-5 minutes 33.3% 33.3% 33.3% Show
Run debug session and see if it works. Microchip 8/16/32 cpus MPLABX and XC compilers We don't normally do that. 1-5 minutes 70% 15% 15% Show
Old fashion way PIC microcontrollers Microchip's tool chain None 1-5 minutes 60 20 20 Show
Its mainly running the end application and checking if gives desirable results. Routers and switches. GDB peer code review. 5-30 minutes 40 20 40 Show
glib unit tests, which are really more like functional tests CentOS, C, GLib git, vi, autotools Reviewboard 31-60 seconds 33 33 33 Show
Unit tests N/A Emacs They get reviewed by 2-3 people and usually address bugs, naming conventions, and coding styles. 31-60 seconds 40% 20% 40% Show
List use cases and corner cases. Create test plan, run test plan. I also step through code using JTAG and/or remote debugger. Industrial Ethernet NXP Platform VS 2015, Kinetis, TIA, Logix 5000, Android Studio, etc. Use Code Collaborator 1-5 minutes 25 30 45 Show
single-stepping it's a scanner. What is there to talk about? Greenhills We use code collaborator. 11-30 seconds 40 10 50 Show
If hardware is unavailable I create unit test stubs in Visual Studio to simulate I/O and write the actual embedded code in Visual Studio. When hardware becomes available I port it to the target embedded platform. On the target platform I do further testing in a JTAG or BDM debugger or emulator. I have used CppUTest as well. ARM RISC processors Multi IDE and Visual Studio 2013 Use Code Collaborator 5-30 minutes 10% 20% 15% Show
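A sketch of the hardware-unavailable workflow the row above describes: the application logic is written against a small I/O interface, a host-side stub stands in for the real GPIO driver while developing on the PC, and only the interface implementation changes when the code is ported to the target (all names here are hypothetical):

    #include <cstdio>
    #include <cstdint>

    // Hypothetical GPIO interface the application code is written against; on the
    // target it would be implemented with register accesses.
    struct GpioPort {
        virtual void write(uint8_t pin, bool level) = 0;
        virtual bool read(uint8_t pin) = 0;
        virtual ~GpioPort() {}
    };

    // Host-side stub used while hardware is unavailable: records pin states and
    // prints what would have happened so behavior is observable on the PC.
    struct StubGpio : GpioPort {
        bool pins[8] = {};
        void write(uint8_t pin, bool level) override {
            pins[pin] = level;
            std::printf("GPIO %u <= %d\n", (unsigned)pin, (int)level);
        }
        bool read(uint8_t pin) override { return pins[pin]; }
    };

    // Application logic under development: mirrors an input pin onto a status LED.
    void update_status_led(GpioPort& gpio)
    {
        gpio.write(/*pin=*/0, gpio.read(/*pin=*/1));
    }

    int main()
    {
        StubGpio gpio;
        gpio.pins[1] = true;          // pretend the input line is asserted
        update_status_led(gpio);      // exercise the logic on the host
        std::printf("LED pin is %d\n", (int)gpio.read(0));
        return 0;
    }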
on the target device using Android Studio. Android based display Android SDK / Android Studio NA for current project 31-60 seconds 20 70 10 Show
Depends on the project at hand, for newer projects we generally try to use a TDD approach or at least add unit tests. For legacy code, we try to add unit tests for new code when feasible, yet we still only have functional tests for most of the code, if there are tests at all. This means a lot of code requires hardware to be available for tests to be executed, and since the hardware is expensive this is a scarce resource. Thus many tests are postponed to "release test periods". Multiple brands of hardware security modules supporting custom firmware loading. 32 bit PowerPC or ARM, ~2MB-128MB RAM depending on brand. On-target debugging is not available. Visual Studio for non-hardware specific code, software emulation and executing tests. Vendor specific toolchains (mostly GCC). Make, ant, jenkins for build automation. We use code reviews for most of the developed code including tests. The primary purpose is to share knowledge of the source code and to spot misunderstandings. We try to rely on tests for finding subtle bugs. 5-30 minutes 50 25 25 Show
Manual Tests during development and manual tests based on test lists before each release. Microcontrollers, often ARM Cortex M, often STM32xy IAR EWARM Usually after first draft of concept (for each module), then concept just before coding, then after coding (with one peer). Frequency depends on work load of the company. 11-30 seconds 50 20 30 Show
A combination of printf statements to a serial port and watching LEDs flash on real hardware / dev board. Siemens C167 and more currently Xilinx Zynq (with twin ARM A9 cores) A concoction of Eclipse / GCC provided by Xilinx. Also Visual Studio for simple apps to send commands over a serial port Peer reviews using a compare package and tracing back to the PBI requesting the work. There is no formal checklist, we generally use engineering judgement 1-5 minutes 50 25 25 Show
Unit Tests created using TDD automatically executed on code commit. Semi Automated functional tests. vxWorks, linux or bare metal gcc, eclipse, gcov, jenkins, CODESYS... Client defined. My preference is to use static analysis and then review tool e.g. stash, code collaborator, GitHub. Pair programming could be great, only done a very little. 1-5 minutes 30 40 30 Show
I like to write small code to test any functions first. ? Visual Studio, grep, beyond compare ? 31-60 seconds 20 20 60 Show
I test the code on the hardware development board and units, using a JTAG interface. For unit testing, I perform boundary and memory checks, and use 'assert' statements wherever appropriate for a graceful reset of the system. ARM based scanner development system Greenhills debugger Use Code Collaborator for peer code reviews 1-5 minutes 40 30 30 Show
JTAG emulator, logic analyzer, oscilloscope, debug logs. Embedded micro-controller, sometimes with sometimes without embedded RT OS Greenhills, IAR Code Collaborator. I focus on finding unintended side effects 1-5 minutes 45% 45% 10% Show
define a test plan and unit test against it a lot of times by writing test code. Then, after feature is integrated to the system, an 'integration' test plan is executed. We also have automated regression tests which is very extensive and it is used as acceptance test before it is given to the independent test and validation team ARM based embedded systems sometimes with a network stack present eclipse, spyder, visual studio, green hills multi IDE take place very often 31-60 seconds 20% 20% 60% Show
Developer testing against requirements before passing on to test engineers Xilinx Zynq Eclipse IDE; GCC compilers/linkers; Xilinx JTAG debugger Peer review with colleague 1-5 minutes 50 20 30 Show
on hardware 1GHz CPU, DDR, DMA, caches Green Hills Code Collab Under 10 seconds 5 2 5 Show
test plans, debug statements, debugger, unit tests direct to micro, typically micro-controllers with onboard memory, communication peripherals (uarts, dma, i2c) debugger and associated IDE, uart, text editor typically a few individuals, hosted on web interface 1-5 minutes 40 20 40 Show
on PC and on real platforms embedded systems VC++ peer code reviews Under 10 seconds 60% 25% 15% Show
I work in the TnV group and so most of my code is tested internally within the group Windows 7, 8, 10 Visual Studio, Notepad++ code reviews are conducted at regular intervals in the development of code 11-30 seconds 50 25 25 Show
Test first and test often Windows and Linux systems VisualStudio, PyCharm, notepad++ I feel they are a waste of time Under 10 seconds 20 30 50 Show
Functional test. Stress testing. Engineering tests that go over corner cases. Then separate T&V group. Embedded processors. Stuff How do I put a whole process on one line? 1-2 hours 10% 10% 10% Show
Automation testing based on test application embedded ARM Greenhills, IAR, Visual studio we use collaborator tool from smartbear 11-30 seconds 50% 25% 25% Show
follow engineering test plans currently in place, also adhoc embedded system with different processor platforms (ARM based) GHS MULTI code collaborator sucks 5-30 minutes 30 30 30 Show
Xcode profiling tools Barcode scanners, Mobile devices (iOS) Xcode, MULTI IDE We use collaborator to share modified files. Reviewers leave comments and then we address the comments. 11-30 seconds 40 40 20 Show
I am yet to start working with the code. National Instruments CompactRIO platform, (ARM processors and other controllers in the future) National Instruments LabVIEW platform Haven't had any yet 1-5 minutes 0 0 0 Show
Pragmatically with system tests or with the debugger or some specific SW tests Very versatile, depends on the specific project: ARM, PIC, PSoC, MSP430... IAR Workbench, UltraEdit, MPLAB, PSoC Creator, Code Composer, etc. With a colleague. We review only difficult or complicated parts of the SW. Sometimes we use checklists for the reviews. These contain systematic points like checking all switch cases for breaks, etc. 31-60 seconds 70 15 15 Show
Most of the testing done is at a system level but some of the tests are low level unit tests. The tests are automated and run in our build system. The target system is an instrument running firmware on multiple processor platforms written in C/C++. Clients are .NET applications and C++ running on Windows desktop systems. Visual Studio 2008. We are working on improving the code review process. Most of the work is not reviewed. 1-2 hours 30 60 10 Show
TDD & ATDD as much as possible Existing embedded projects created with code generation tools and needing to use IAR in order to compile them. Very painful! Vim, Eclipse, Visual Studio, IAR, GCC pairing is usually my code reviews 1-5 minutes 40 50 10 Show
Using the Unit testing framework like gtest Omap5 with QNX Qnx IDE Yes, do the code reviews 1-5 minutes 30 40 30 Show
Functional verification at time of code implementation. QNX Neutrino ARM Eclipse IDE/QNX Momentics Peer code reviews using Code Collaborator 1-5 minutes 35 15 50 Show
NUnit, tut Windows CE Visual Studio Sporadic 1-5 minutes 40 40 20 Show
Mostly through the nunit framework, for regression testing and to develop new functionality ARM, WinCE VS2008 We have minimum to no code review. 1-5 minutes 20 40 40 Show
Using NUnit tests and manual tests. Embedded ARM CPU with an RTOS. GNU compiler and Eclipse Usually no reviews 1-5 minutes 40% 20% 40% Show
I have not done any serious coding for about 3 months. Last coding was DSP code for a cam laser tracking system. Static logic tests on this code were done by cutting out important code snippets, putting them into a class and feeding them with structures or objects with pre-setup data. Dynamic testing was done by injecting virtual targets into the live target and field testing using lots of trace code Blackfin DSP BF52x, i.MX51 Visual Studio 2015, 2008, VisualDSP no formal ones 5-30 minutes 40% 30% 30% Show
Unit testing using TUT and mocking with hippomocks. I also write higher level functional testing using nUnit. nUnit interfaces with our product as the users tablet would. Embedded system, iMX51 platform. SVN, Visual studio 2008 We don't do code reviews often. When we do them they are quite informal, mostly someone just has a look at the SVN changes or you go through them together. 31-60 seconds 20 40 40 Show
Automated testing (NUnit) Manual testing Embedded FW on a Windows CE OS Visual Studio, NUnit, TeamCity Nearly non-existent 5-30 minutes 40 40 20 Show
Mostly manually. ARM11 RTOS ARM, Eclipse None 1-5 minutes 30 40 30 Show
writing a unit test component QNX platform EA and qnx momentics IDE na 1-5 minutes na na na Show
Mostly debugger. However, I write it extremely cautiously with a very pessimistic / defensive attitude. I still generate an occasional bug and have little way to prevent others from injecting errors after release; #1 gripe: careless programmers. Many. Some embedded, some more like PCs. Various IDEs / 3rd party tool-chains. GCC is most prevalent here. I will probably comment on every changed line. I am a pretty good bug finder. 1-5 minutes 50 20 30 - Verification time. Show
Manual functional testing QNX6.5/6.6 ARM Eclipse/QNX Momentics Peer reviewed using Code Collaborator 1-5 minutes 10 30 60 Show
Qt simulator and test hardware Ford-provided test hardware, QNX based Qt Creator, QNX Momentics, Git repo Gerrit code review 30-60 minutes 50 10 40 Show
local test customer HW, VM Momentics, Netbeans, Android Code Collaborator 1-5 minutes 40 10 50 Show
Black box QNX QDE Collaborator 5-30 minutes 0 0 0 Show
using stubs and test applications QNX OS QNX IDE I use Code collaborator for reviews 1-5 minutes 40 40 20 Show
Code is getting tested as part of integration testing. ARMv7 QNX Momentics, Eclipse, Android Studio Using Collaborator 1-5 minutes 40 30 30 Show
using debugger to execute all negative paths RH850 GHS MULTI collaborator tool 31-60 seconds 30 50 20 Show
test scripts -- Qt Creator gerrit, code collaborator 1-5 minutes 65 15 20 Show
Mainly integration testing or sanity testing. Limited / No unit testing. Arm based running QNX OS. compilers/linkers, Debuggers, post mortem analysis (gdb), profilers, Eclipse based IDE (QNX Momentics), use code collaborator to review code. 5-30 minutes 50% 20% 30% Show
We perform module level testing, Integration testing and Functional testing. We have now initiated unit testing using GMock QNX QNX Momentics We use code collaborator 1-5 minutes 50% 30% 20% Show
Automated Testing, Integration Testing QNX, Android Visual Studio, QDE We use Code Collaborator 1-5 minutes 25% 25% 50% Show
Normally I don't; maybe unit tests sometimes QNX Visual Studio, QNX IDE Don't happen very frequently 5-30 minutes 40 30 30 Show
Some level of unit and mostly integration testing QNX, ARM QDE, RTC, GTest Moderate 1-5 minutes 40 30 30 Show
writing my own simulator code C++ QNX IDE Code Collaborator 5-30 minutes 50% 40% 10% Show
We test our final software against a software verification document which is intended to test all functionality in the extremes of possible configurations. Custom board based around an Analog Devices Blackfin 536 processor Analog Devices VisualDSP++ Limited resources don't always allow code review. Doesn't happen very often. 31-60 seconds 60 20 20 Show
Using execution or debug mode of the IDE running on the target system. 8 bit microcontrollers from MicroChip MPLABX from Microchip, PC-Lint, 1-on-1 typically for small changes, independent reviewer for whole projects 11-30 seconds 45 35 20 Show
Functional tests, system testing, stepping through code. (aka, did it break? no? then it's working) ARM Cortex-M3 IAR EWARM None Under 10 seconds 20 20 60 Show
Manually test features of the hardware. Have external tools parse debug log output for specific entries. 8-bit MCU / 32-bit ARM AVR Studio, LPCXpresso One team member checks in changes and requests other team members to review the SVN revisions. They offer comments, changes are made as needed, and the process repeats until everyone agrees on the changes. 1-5 minutes 40 25 35 Show
Depends what part of my code I am testing (communication, motion, temperature, servo, hours, state machine); for each of these I have a different test bench written in LabWindows Embedded Keil, LabWindows, C# Mostly done when I have a question and cannot solve or debug the software 31-60 seconds 30 60 10 Show
n/a ARM, PIC, C8051 IDEs peer review 11-30 seconds 40 30 30 Show
Test my code??! What! :) PIC32, and possibly ARM in the future. Eclipse Currently informal, but moving to Mercurial so they can be more formal. 11-30 seconds 70% 10% 20% Show
code review, static code analysis, functional module level test, system level test medical instrument IAR Use Code Collaborator 1-2 hours 30 40 30 Show
Manually TI64x processor on a small device, and a Xeon x86 in a rack mount product Eclipse, QTCreator, Visual Studio, Cygwin, Notepad++ Online using Code Collaborator. Generally, submit to depot, then start a review. 5-30 minutes 15% 25% 60% Show
My team currently uses CppUTest, and listening tests. We also have a dedicated QA team, who perform regression tests. I'm a new hire, so learning CppUTest is definitely in my training list. Windows, Mac, Linux systems, and TI C6xxx DSP processor embedded systems Visual Studio 2010. CCStudio My team uses Swarm reviews. So when I modify a file, Swarm is able to identify and visualize the difference, and my team uses that to review code. 31-60 seconds 33 33 33 Show
Write unit tests as we develop. Write Python regression tests after. Windows 64-bit Visual Studio, PyCharm A team of 5 reviews the code via a collaborator tool. 1-5 minutes 20 20 60 Show
Manual testing; system-level testing (some automation); some unit testing with internal test framework x64 based Linux; TI DSP on DSP/BIOS os Visual Studio (w/ Visual Assist addon); vim; cmake; cygwin; TI Eclipse-based IDE We use code collaborator for almost all checkins. We allow a day for code reviews to be completed before checking in. 5-30 minutes 40 20 50 Show
Manual plus basic unit tests Embedded with custom FPGA Linux based C development, Perforce, VIM and other diff tools. I don't have a good eye for catching errors through code review. 1-5 minutes 30 30 40 Show
Manually or with simple python scripts 16 bit uC (24H & 33E), runs an OBD-II stack MPLab, C30 Manual, one-on-ones, don't happen as often as they should 11-30 seconds 20 50 30 Show
Through a set of unit tests and regression tests. Windows, Linux, OSX 32 and 64 bit Visual Studio, Visual Studio Code, GNU Code reviews are performed by the team when developing features. 1-5 minutes 20 30 50 Show
TDD, in the future would like to do BDD, integration testing, and perhaps Design By Contract Ruggedized, cartridge-based power system that delivers maximum reliability and flexibility over a typical 72 hour mission, at a weight savings to the soldier of more than 65% compared to conventional lithium-ion batteries Previously AVR JTAGICE, Atmel Studio, transitioning to STM32 and IAR IDE Casual, sometimes rushed, sometimes too detailed. 31-60 seconds 35 50 15 Show
Run it on the target and perform use cases embedded infotainment running QNX os Eclipse IDE Usually 2-3 people review the code 1-5 minutes 20 40 40 Show
google unit test NA QNX NA 1-2 hours 50 20 30 Show
Automated unit test. Manual test. QNX QNX Code Collaborator 1-5 minutes 3.3 3.3 3.3 Show
Usually in the target system with PC to simulate pieces of the system that are not present VP4 Radio (for a vehicle) RTC Code Collaborator is used. 5-30 minutes 30 30 40 Show
We have started using G-Mock in Sync and previously we used to have unit testing at a developer level and archive the results/review to ensure code quality. QNX6.5 based system QNX6.5, RTC, G-Mock Using Code Collaborator 1-5 minutes I am coding close to 0 in the past 3 years but focus mostly on Architecture and Project management 0 0 Show
CLI Test cases, Visual Review, Run Through Code (logically) with predetermined input parameters ( in-boundary params, good values, out of bound params) MQX OS with GDB capability gcc, debug logs, jtag Code Reviews are done in process, and many defects are found in reviews ( but not all) 1-5 minutes 60 ( Design 40 and 20 code) 20 20 Show
TestAgent QNX RTC QNX IDE We use Code Collaborator for the reviews 1-5 minutes 10 20 20 Show
By writing functional test cases. Embedded systems QNX/Linux ARM tool chains Code collaborator 30-60 minutes 40% 30% 30% Show
We do Interface testing, Integration testing and Functional testing QNX QNX Momentics, GDB Debugger Code Collaborator 1-5 minutes 50% 25% 25% Show
test client, GMOCK/GTEST QNX QDE Use code collaborator and checklist for review 5-30 minutes 40% 30% 30% Show
using ICE, hardware in the loop Freescale console, make experience as lead 5-30 minutes 50 25 25 Show
Integration testing, Functional testing ARM QNX Momentics We use Code Collaborator to review the code changes 1-5 minutes 10 10 10 Show
Unit test and debugging tools J6 processor with QNX QNX momentics IDE NA 11-30 seconds 30 60 10 Show
Manually doing unit tests after implementing changes/new features. not yet defined Visual Studio, Borland Turbo C++ builder, Embarcadero RAD Usually do code review within our development team (3-4 developers) 31-60 seconds 40 % 30 % 30 % Show
Step by step ARMs IAR, Eclipse, NetBeans, Keil I don't have experience with reviewing other people's code for production 11-30 seconds 40% 30% 30% Show
with debugger: monitoring memory, variables. Functional testing without debugger. embedded platforms mostly based on micro-controllers and DSP's Code composer, Atmel studio, Visual DSP++ x 1-5 minutes 20 30 50 Show
1) Unit testing, but not methodologically and systematically. 2) SQA department ARM Cortex M7; Running Quantum Leaps GCC, JTAG debuggers Code reviews tend to discuss personal coding style rather than reviewing code. 11-30 seconds 30 10 60 Show
Manually Linux gcc, netbeans / 1-5 minutes 40 30 30 Show
regression / / / 11-30 seconds 30 60 10 Show
Mostly ad-hoc in the current project. Used plain "hand made" functions as unit test harness at function level testing. In Visual studio I used MS built-in testing framework a couple of times. Embedded linux Netbeans IDE, remote build server (gcc based), svn Using Fisheye/Crucible 31-60 seconds 33 33 33 Show
Component testing mostly, very little unit testing. multiple platforms (STB) gcc and vim gerrit 1-5 minutes 20% 40% 40% Show
CUnit, CppTest Embedded Linux STB Eclipse Gerrit, all team members receive the review 5-30 minutes 30 40 20 Show
Component, Full stack, Continuous Integration. I've recently been introduced to the CPPU test framework. set top box software for satellite broadcasters. Very large code base. git, gerrit, eclipse, sublime, vim, continuous integration, jenkins, coverity We use gerrit. Sometimes pair programming. 30-60 minutes 30 40 40 Show
Functional level validation with manual and scripts to cover code paths. Settop boxes GCC, GDB, Python scripts, vim, cscope, source insight, eclipse Usually done through Gerrits and other peer level reviews 5-30 minutes 40 20 40 Show
A combination of: unit test - newer initiative to use CppUTest to test new features; component test - historical test harness, large mock harness around the component under test; integration test - creating new standalone test cases in C; full stack manual testing - exercising customer use cases with end-to-end infrastructure cross compile from build server(s) to STB H/W mainly Gerrit 1-5 minutes 15 50 35 Show
Full stack testing is run in Jenkins; there are approximately 70 script-driven tests that run on all flavors of the build (the builds being production, release debug, and debug). Depending on a go/no-go result it gets delivered to the customer. Typically there are 3 or 4 builds a day. The process takes about 3 hours to complete. Linux Source Insight We use Gerrit for code reviews; a change has to be plussed +2 to deliver 2-4 hours 5 30 65 Show
Just starting to use AceUnit Various ARM based microcontrollers (M0, M4), Coldfire based ASICs Keil, Visual Studio, Eclipse Jira - Crucible - the coding process requires a review before a merge back to the trunk 11-30 seconds 30 30 40 Show
Manual Microcontrollers, Windows Keil, Visual Studio Lacking 31-60 seconds 50 25 25 Show
I don't as I have just recently started to learn programming. . Visual Studio Haven't had one yet 1-5 minutes . . . Show
Beta testing finished code to find bugs. Not ideal. High-level GUI on PC, down to low-level, real-time embedded C. Visual Studio, Keil Organized via Crucible in conjunction with Fisheye (Atlassian) 1-5 minutes 40 20 40 Show
I have been a proponent of unit testing, but never had a clear framework or training on how to go about it. Since working at Cirque, I have been starting up a new repository that was meant to be flexible and the only repository of our code. To make that more manageable, I've been writing unit tests all along whenever I ported something to this repository. Chiefly embedded micro-controllers Visual Studio, Keil (ARM-based projects), Various versions of Eclipse as put out by microcontroller vendors. We use the Atlassian tool Crucible to manage code reviews for non-trivial changes. 31-60 seconds 55% (including planning/diagramming) 30% 15% Show
Manually. Windows OS Microsoft Visual Studio They are slow and painful. 11-30 seconds 70% 10% 20% Show
Test if it does what I meant for it to do. embedded microcontrollers with lots of human and computer real time input lots they are pro forma and a waste of time (they happen too late in the process) 1-5 minutes 10 10 30 Show
GUI interaction Win 10 Visual Studio na 11-30 seconds na na na Show
Manually compare to expected values Windows 10 Visual Studio 2013 for C# internal use software so colleagues tell me if there is a bug. 31-60 seconds 60 20 20 Show
O-scope, logic analyzer, breakpoints, watch lists, throw-away stimulus code various ARM micros GNU, Keil, other vendor IDEs knowledgeable peers Under 10 seconds 60 20 20 Show
Ad-hoc, primarily. Automotive Infotainment Eclipse, Emacs, gcc, gdb, python All check-ins must be reviewed. We use an online tool called Code Collaborator Under 10 seconds 75 5 20 Show
get it running in the production environment, analyze the log QNX, uITRON eclipse, gcc, jdk, rtc, svn face to face, discuss the changes 1-5 minutes 20% 50% 30% Show
write test by myself Android Android studio medium 1-2 hours 50 % 25 % 25 % Show
Manually Android Terminal, IDE and Hammer Good 31-60 seconds 50 70 80 Show
Most of the time I mock input from client/server to middleware and verify the output in logs/prints. Since there is a third-party dependency on lower layers, most of the time it is difficult to mock the device behavior. Automotive embedded systems QNX IDE, Enterprise Architect, UML PASA code review guidelines using Collaborator 5-30 minutes 40% 40% 20% Show
Using log output QNX QNX momentics using code collaborator 1-5 minutes 10 30 60 Show
Using debugger, breakpoints and logging. Target system based on TI chipset (J6) and QC S820 (OS: QNX and Android respectively) QDE, Android Studio We have a process of 2 code reviews (using the tools Code Collaborator and Gerrit) 1-5 minutes 40 30 30 Show
Test each portion of the code as I write it Amplifier debugger, serial logs Relevant developers checking changes 1-5 minutes 45 20 35 Show
Manual testing ARM RTC Simple, parameter checking, error handling, overflow, resource handling 1-5 minutes 50% 30% 20% Show
Test stubs and simulations HU RTC, GDB, Source Insight, USB Sniffer, ValueCAN Short, in-person to discuss all impacting usecases 11-30 seconds 3 2 2 Show
I unit test by adding test stubs and faking the events. QNX QNX use code collaborator for review. 1-5 minutes 60 20 20 Show
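A minimal sketch of testing with a stub and a faked event, in the spirit of the response above; the event type, handler, and stub are all invented for the example.

    #include <assert.h>
    #include <string.h>

    /* Hypothetical event type and handler under test. */
    typedef enum { EVT_NONE, EVT_BUTTON_PRESS } event_t;

    static char last_command[16];

    /* Stub replacing the real output path; it only records what was sent. */
    static void send_command_stub(const char *cmd)
    {
        strncpy(last_command, cmd, sizeof last_command - 1);
    }

    /* Code under test: reacts to an event by sending a command. */
    static void handle_event(event_t evt, void (*send)(const char *))
    {
        if (evt == EVT_BUTTON_PRESS)
            send("TOGGLE");
    }

    int main(void)
    {
        handle_event(EVT_BUTTON_PRESS, send_command_stub);  /* fake the event */
        assert(strcmp(last_command, "TOGGLE") == 0);        /* check the stub */
        return 0;
    }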
self Android/QNX Android Studio, QT editor gerrit 1-5 minutes 40 30 30 Show
Manual testing develop -> flash it in hardware and check. QNX and MFS QDE and GCC - 11-30 seconds 30 mins 2 hr 1hr Show
Send some input to it and see if I get the expected output. If not, I'll step through the code to see where it went wrong. Microchip PIC32 MPlab x Small and fast. We are working on improving code reviews 11-30 seconds 50 20 30 Show
Automated tests developed in parallel. Some TDD development. Windows Embedded - but future will be Arm Visual Studio 2013 + Resharper Ultimate, some Keil Normally single colleague using Atlassian Crucible. Critical code will be author presenting to reviewer. Under 10 seconds 60% 35% 5% Show
C# - Unit testing and integration testing. C - Manual testing. Started unit testing this year. STM32 Keil uVision, Parasoft, Eclipse, Visual Studio. We use git pull-requests for small iterations then Crucible for final reviews of full code. 11-30 seconds 5% 70% 25% Show
Testing, debugging and validation Embedded Keil, MS Studio Peer reviews 1-5 minutes 40% 40% 20% Show
Unit testing and functional testing STM32 embedded systems Visual Studio with VisualGDB plug-in and Keil What's to say? 11-30 seconds 50 20 30 Show
Manually ARM Cortex M4. GPS, Bluetooth, UHF RF transciever gcc compiler, VisualGDB and Visual Studio, Segger J-Link debugger None (I know this needs to change) 11-30 seconds 30 30 40 Show
Regression Tests C166 uC, AVR Xmega JTAG We have informal reviews with my mentor and I, and formal reviews whenever I am publishing a new framework. 11-30 seconds 40 20 40 Show
No formal methods used regularly. Methods change depending on specific project and team members. varies usually Code Composer Studio for embedded C projects mostly self-review Under 10 seconds 60 20 20 Show
Before learning about TDD, it was tested on the hardware, with my fingers crossed that it would still work out in the field. Now, I've started taking any new code I write and testing it in a C++ environment on my PC (but not a unit test). Microcontroller running inside of an AC/DC power supply. MPLAB X on Microchip. However, we are moving towards ARM (I'm unsure what tools we will be using for that) We will work on code individually, then the person will issue a Pull Request, and all team members will review each other's changes. If every team member agrees the code is good, the PR is merged. 31-60 seconds 10 50 40 Show
Functional testing Power supply MPLAB Weekly 1-5 minutes 10 50 40 Show
Unit testing and manual testing for code that interacts directly with hardware registers. Microcontrollers in a power supply Code Composer Studio, Eclipse, MPLabX, Cygwin Team wide code review of ONLY 1 hour a week for all code that gets written. I would much prefer a tool like Collaborator to the current method. 1-5 minutes 30% 30% 40% Show
Manually Cortex-M GCC, GDB, Gerrit 1-5 minutes 10 20 70 Show
High-level requirements-based test plan Emergency lighting application using PIC16LF18346 MCU MPLAB-X Not enough software people at my company to do code reviews, use PC-Lint instead 11-30 seconds 25% 25% 25% Show
functional, pc-lint Microchip Microchip n/a 11-30 seconds 50 30 20 Show
Ad-hoc method, nothing formalized. Windows, Linux Visual Studio, Atmel Studio We use Code Collaborator on projects that require them. 1-5 minutes 25 37.5 37.5 Show
Yes. Embedded Linux System GDB Peer review using Gerrit 5-30 minutes 20 30 50 Show
unit test, system test, valgrind, debugger Linux, Windows, Solaris, HPUX, AIX Eclipse, Visual Studio, platform-specific compilers/debuggers Strong proponent of code reviews. 1-5 minutes 15 15 10 Show
We have unit tests that run on git push and a coverage report that will +1 or -1 the code review; when code is submitted, component and integration tests are run. CentOS system ssh, ftp, eclipse, pycharm, vim Code is checked in and must receive a +1 from builds, a +1 from two team members, and finally a senior member gives a +2 and submits the code. 1-5 minutes 50 30 20 Show
Ideally before writing the code, but I do write the unit tests after writing the code. I also try to make a test to run the changes on the target device. ARM Cortex-M0/M4 Keil, gcc, cmake We have to have a code review before the code is pushed to master 31-60 seconds 35 45 20 Show
Unit tests and target tests nRF52. An SoC designed for low power wireless applications, mainly Bluetooth LE. Keil, Visual Studio We use CodeCollaborator. We have very productive reviews. Recently we started doing review kickoff meetings when the review is big or complex; this has been very useful. 30-60 minutes 40 30 30 Show
I don't :( Nrf52, a cortex m4 with a 2.4 ghz radio Vim and git (youcompleteme for semantic auto completion in C) Through atlassian stash, not obligatory but recommended 11-30 seconds 30 0 70 Show
google test framework multiple, ppc family as well as arm, x86 for tests gcc, proprietary build system, git, google test, visual slickedit, salea logic peer reviews before code is submitted 11-30 seconds 60 30 10 Show
continuous integration test suite (I write tests to test HW, but they can often fail if there is a code bug, and I sometimes write a few unit tests for key functions, which also runs on CI on the test HW) ARM M4 ASICs and FPGAs SCons, GCC, GDB, Git, Stash vague guidelines of when they are required, usually 2 reviewers on a major change (except TCD additions/changes where they are mandatory, and should involve other testers and HW designers) Under 10 seconds 50 20 30 Show
Unit tests, hw-sw integration tests Cortex m4, rtos eclipse, IAR IDE, visual studio Bitbucket pull requests + checklists 5-30 minutes 40 30 30 Show
unit, target arm cortex m4 cmake, keil/armcc, gcc we use code collaborator 1-5 minutes 40 30 20 Show
Mostly manually Very hardware centric system (hardware monitoring and interaction) MPLAB IDE, JIRA, SourceTree, GitBucket Peer reviews. 2-4 hours 30 30 40 Show
Jenkins but mostly real life equipment Supercapacitor Module Mplab X Done inside and outside the team 1-5 minutes 30 30 40 Show
We have an automated test framework based on Microsoft MSTest in Visual Studio. Also, we are using Catch as our unit test module in some projects. Google GTest is also used (although I have no experience with this). Low voltage drives used for electric motors Visual Studio Code as editor, IAR ARM compiler with IAR debugging tools (I-Jet JTAG), Lauterbach Microtrace for tracing. Also oscilloscope, logic analyser etc. We are using CodeCollab and over-the-shoulder reviews 5-30 minutes 30 40 30 Show
Debugging through. TI ARM and MSP uCPUs. Code composer studio v7 Small team discussions. 31-60 seconds 20% 10% 70% Show
By checking it works. Multiple GCC Very few 31-60 seconds 50 30 20 Show
We can write unit tests and module tests, but I mostly write code very close to HW and we write too few tests in that area. Nvidia Tegra X1 (ARM Cortex-A57). gcc, clang We use a mailing list 11-30 seconds 50 5 45 Show
Mostly by visual evaluation of video. Telepresence systems Emacs, gcc, proprietary build system Doesn't do 31-60 seconds 30 40 30 Show
Mainly using the Cisco Lysaker in-house "helmet" test system. This is more an integration test system than a unit-test system. Video conferencing box or host running on PC Emacs, gcc, gdb, Qt Creator Not used so much 31-60 seconds 40 30 30 Show
via tailored bash scripts that run after the kernel/driver has compiled and booted a VM or target system Linux kernel emacs, gcc, make patch reviews via email 1-5 minutes 50 10 40 Show
Using a simulator and trying things out; in the process of starting with automated testing Equipment disinfection system Codesys - 31-60 seconds 30 20 50 Show
Unit, integration/smoke tests with Google Test/Mock Xilinx ZYNQ MS Visual Studio, Xilinx SDK, git Done in informal way (no tooling) 1-5 minutes 30 30 30 Show
Unit tests if feasible, using either GTest+GMock or QtTest Embedded Linux platform QtCreator/KDevelop/VisualStudio IDEs, gdb debugger, gcc compiler mostly, sometimes valgrind for profiling Standard (required) practice in one project, more or less non-existent in another 11-30 seconds 25 25 50 Show
We generate test cases which we test against. These are manual test cases based on the input requirements. Debugging is done via the development environment with breakpoints. Testing during development is also performed via the development environment with breakpoints. Watch windows are used to manipulate data to test various scenarios. Medical device IAR, MPLAB, Kinetics Studio, PSOC Studio Our code reviews are based on our coding standard 1-5 minutes 40 40 20 Show
Junit Unit tests per class and larger-scale integration tests Currently the JVM/Java8, previously x86 and ARM Eclipse, Emacs, GNU toolchain Code reviews are mandatory for each pull request, mostly 1-2 reviewers per PR 11-30 seconds 40 30 30 Show
I have been working more on theory than practice, and as a result do way less coding and even less testing. Usually an arduino or something similar with 1 or more sensors/devices attached. gcc, git, svn ?? 11-30 seconds 60 20 20 Show
not writing code anymore. test code is mostly written in ruby/python, etc. lithographic wafer scanner (in previous project: electron microscope, X-Ray scanner, professional harddisk recorder) not anymore n/a 30-60 minutes 0 0 0 Show
Unit test with gtest/gmock System tests with python autotesters Server running linux Eclipse CodeCollaborator 30-60 minutes 30% 30% 10% Show
Our devs use unit tests, I do additional integration and end to end testing (automated as much as possible). Currently web services, but wanting to go back into (embedded) software testing Visual Studio, Resharper Varying quality... Depending totally on the person 1-2 hours 40 50 10 Show
Through unit tests and system tests. Some system tests are automated. n/a GNU tools, Eclipse, MINGW, Jira All implementations get reviewed using Jira or Crucible 11-30 seconds 40 40 20 Show
Reviews, stubbed environment, on the target system. Windows, Embedded Linux. RSARTE Peer reviews, no formal process. 30-60 minutes 30 20 10 Show
Few unit tests, run the code, manual tests. Currently working on automating and re-adding tests for complete products. SiLabs ZigBee HVAC and Home automation. IAR, PC Lint, Bcc, python, git, redmine Do not exist 1-5 minutes 10 10 10 Show
Manually. Identify new test cases, see that they fail without the changes, add the changes, and see that they fix the problem. List them in the commit message. Multi-part system, but both targets are ARM based; built under Mac and Windows gcc as toolchain, but some processors require Windows to be built completely Using Gerrit. Requires votes for code, build, verification, and test... we ignore test. 31-60 seconds 30 20 50 Show
Manual, using test steps defined in Excel files Win + Mac Visual Studio, XCode, Jenkins Reviewboard, request review for each commit 31-60 seconds 20 30 50 Show
Manually. I run long tests and debug myself. I'm using a nordic chip with m4 arm processor. I use IAR to build and flash the code. IAR, sublime text editor We use github and create PRs where I generally tag other software engineers. We go through the code thoroughly for good practices, algorithmic correctness and general coding etiquette. 11-30 seconds 50 20 30 Show
ADA Unit Test framework Embedded Systems Eclipse , GNAT very much like pair programming 5-30 minutes 70 20 10 Show
mostly manually and also using Pyats Router IR800 and CGR1000 vi, python idle NA 1-5 minutes less little more more Show
I have recently started writing unit tests alongside new code. Previously, I would run code and observe its behavior. Proprietary ARM-based hardware for use in aircraft. Green Hills MULTI, Eclipse, Jira, Git We use pull requests to check code against our standard and look for anything concerning. More relaxed when prototyping. 11-30 seconds 55 10 35 Show
GHS scripting on MULTI IDE ARM embedded GHS, PR QAC static analysis, several IDEs Peer review, using Excel spreadsheets to track issues 11-30 seconds 50 30 20 Show
Manual Testing, if at all. I've been writing unit tests for some new code modules. 802.15.4 WSN Makefile, git N/A I work solo 1-5 minutes 45 10 45 Show
Unit test and functional test linux intellij idea , vim we use kingart for the same 1-5 minutes 30 30 40 Show
google test is used but I do not know how to use it yet. Arm processor CLION, vi, gdb GitHub 1-5 minutes 50 10 40 Show
Using Unit Test framework, White Box Testing, and Functional Testing. linux vim, tmux, ctags, cscope for C/C++, IntelliJ IDE for JAVA code. Get really good constructive feedback from senior engineers which helps me improve. 5-30 minutes 30% 10% 60% Show
Before reading Grenning's TDD book, I would write some "test" code that showed what I wanted my new code to do from a client standpoint. I would instantiate the thing, make some requests, etc., to get an idea of what I expected. Then I would write a flurry of code, possibly rewrite my "test" code and cycle that until it worked. These tests never used a test harness. Since I've read the book, I haven't had the opportunity to write a lot of code but when I do, I try to follow the TDD style in defining some tests more formally. I don't use a test harness yet but I am testing much lower-level functions now rather than testing very high-level things before (when they didn't work, finding out what was wrong in my 20-30 lines of code was where my time went). My legacy project runs on an STM32F4 that interfaces with a Broadcom WiFi/BLE chip Eclipse GCC in Linux at the moment. Considering IAR Nothing yet. Team of 1 Under 10 seconds 25 25 50 Show
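A minimal sketch of the kind of harness-free, low-level test described above, assuming a made-up circular_increment() helper; the whole test is just a main() with asserts, so no framework is needed.

    #include <assert.h>

    /* Hypothetical low-level function under test: wraps an index
       around a fixed-size ring buffer. */
    static unsigned circular_increment(unsigned index, unsigned size)
    {
        return (index + 1u) % size;
    }

    int main(void)
    {
        assert(circular_increment(0, 8) == 1);   /* nominal step       */
        assert(circular_increment(6, 8) == 7);   /* just before wrap   */
        assert(circular_increment(7, 8) == 0);   /* wraps to the start */
        return 0;                                /* exit 0 == all pass */
    }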
Manual test procedures. ASSERT macros. Low power embedded microcontrollers. IAR Embedded Workbench Email reviewers ahead of the meeting to give them time to review code individually. Sit together and review as a group. 11-30 seconds 33 33 33 Show
Existing integration tests to make sure we don't break something (regression), new tests at a functional level for new features Embedded device using C, compiled in IAR. Used by Utility companies IAR, Lint, C-Stat Every commit through Fisheye/crucible 5-30 minutes 75 15 10 Show
Whitebox, GDB, very limited JTAG debugging when doing kernel development Embedded linux built by Yocto Yocto, Visual Studio Code Don't do them 1-5 minutes 80 10 10 Show
Not doing any unit level testing. The unit testing as we have been calling it, is actually integration testing. Target Systems are mostly x86 processor based routers. Vim, Ctags, Cscope, make Code reviews are done without miss. Diffs are uploaded to an internal tool, which will be later reviewed by teammates. 2-4 hours 20 30 50 Show
unit test with unity embedded keil, IAR, codelite peer review 11-30 seconds 30 20 50 Show
Jenkins - custom automated, Manual Arm Eclipse Infrequent Under 10 seconds 60 30 10 Show
Uploading it to hardware Wireless sensor module Keil, IAR, PyCharm Looking for functionality, readability and simplicity 11-30 seconds 30 20 50 Show
Create GUI test harness to exercise interfaces. Create unit tests to automate regression testing. Desktop and Mobile application Visual Studio None 31-60 seconds 35 40 25 Show
Unit testing after implementation Low power, embedded, ARM-core microcontrollers. Typically M0. IAR, Keil, SVN, Unity, Jenkins Code reviews are requested using Review Board. They are done within the team. Not really enforced; it's the responsibility of each team member to request a code review. 31-60 seconds 50% 25% 25% Show
Manually An old system full of code from 20+ years RAD Studio, Visual Studio Sporadic at best 1-5 minutes 33 33 34 Show
Manual unit test, google test Subsea Control System uVision, Visual Studio code review 1-2 hours 60 25 15 Show
System tests Runs on Windows RAD Studio N/A 1-2 hours 60 20 20 Show
If the device is not on fire it is working! Mostly low-power ARM M0-M3 controllers or ESP8266 Platformio, Arduino None 11-30 seconds 40% 30% 30% Show
Do not have a great strategy at the moment. Windows + VxWorks. Visual Studio Not standardized yet 31-60 seconds 60 20 20 Show
N/A Linux eclipse I tend to get much in the start but now things are getting better. 1-5 minutes 40% 20% 40% Show
Mostly manual testing. No TDD; jury-rigged mocking using GCC's __wrap. Not very elegant. Various, some regular C++ under Linux, some libraries for "semi-embedded" systems. Fairly bare bones, vim with clang plugins, CMake Almost non-existent 31-60 seconds 40 30 30 Show
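For readers unfamiliar with the GCC/GNU ld --wrap trick mentioned above, a minimal sketch; hw_read_adc() and read_filtered_adc() are invented names, and the two "files" are shown in one listing for brevity.

    /* filter.c -- production code; calls a hardware routine it does not own */
    int hw_read_adc(void);                 /* provided by the BSP in production */

    int read_filtered_adc(void)
    {
        /* crude 2-sample average of the hypothetical ADC channel */
        return (hw_read_adc() + hw_read_adc()) / 2;
    }

    /* test_filter.c -- host-side test; link with:
     *   gcc filter.c test_filter.c -Wl,--wrap=hw_read_adc -o test_filter */
    #include <assert.h>

    int read_filtered_adc(void);

    static int fake_samples[2] = { 10, 30 };
    static int call_count;

    /* The linker redirects every call to hw_read_adc() here. */
    int __wrap_hw_read_adc(void)
    {
        return fake_samples[call_count++ % 2];
    }

    int main(void)
    {
        assert(read_filtered_adc() == 20);   /* (10 + 30) / 2 */
        return 0;
    }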
A mix of unit tests (Windows), manual testing (on target). Embedded system Visual Studio, gcc, scripts written in house. Peer-reviewed by another programmer 1-5 minutes 40 20 40 Show
New code is tested using unit tests developed on Windows and executed on target. Legacy code is tested using manual test procedures. PowerPC based real time controllers for automation and control systems. WindRiver Tornado, Workbench, GCC, Testwell C++ Mandatory review by at least one other person. Now based on Git pull requests. 1-5 minutes - - - Show
manual test plans (smoke tests). RHEL-like linuxes running enterprise IT systems gcc, vagrant for local testing, docker for building kernel modules of various distros, informal peer reviews. after two thumbs-up, we can merge into the code repository 5-30 minutes 10 50 40 Show
Debug printing. End-to-end testing. RHEL/Centos 6.5-6.9 and 7.0-7.4 GCC, Docker, Docker Compose, Vagrant, Virtual Box, Vmware, BBedit Push commits to a topic branch. Create a merge request in gitlab where we can review the code changes. Our team requires sign off of at least two reviewers. 11-30 seconds 50 15 35 Show
runtime instrumentation + standalone unit tests linux kernel editor, compiler, linker, Google + my head mostly useless 1-5 minutes 40 (you are missing design: 40) 20 0 Show
functional testing based on changes Red Hat based Linux systems vim, gcc, make, docker We have them, and they do uncover issues from time to time. 1-5 minutes 25 35 40 Show
gtest, custom test frameworks endpoint sensor gcc, git, Make, CMake, JIRA we do code reviews on all changes 1-5 minutes 25 25 50 Show
Unit tests (custom framework), writing test programs (either in C++ or python) for functional validation testing, manual testing Software I work on targets enterprise Mac and Linux platforms gcc, xcode, pycharm, we use review board or merge requests depending on the team/source control 5-30 minutes 20 60 20 Show
Automation framework to test end to end functionality, but no well-defined way to test automation code as its written Multiple flavors of linux, goal is to have good TDD practices established for new team Pycharm Any merge request needs be reviewed and approved by at least 1 person. (Team is small, ~6 people, so everyone has a good knowledge of most of the codebase) 5-30 minutes 50 15 35 Show
hahaha - by interactions between product code and the test framework I am using. python coded tests for sensor product on Linux using an in house test framework that simulates Server components. Intellij, emacs standards set by my Agile team Under 10 seconds 30 30 40 Show
N/A None None N/A 1 day or more None N/A N/A Show
Combination of whole system integration tests, some unit tests, and manual testing. Desktop/Mobile/Embedded Linux Visual Studio, Visual Studio Code, .NET SDK, Git, SVN - 31-60 seconds 60 20 20 Show
Not really as a routine, but as a separate process. The current job is an STM32 MCU MS Visual Studio, Eclipse, etc. About once every two weeks, supposedly 1-5 minutes 60 20 20 Show
In stages. Write, Test, Repeat. Atmel 8-bit Atmel Studio So far, so good. 11-30 seconds .85 .05 .1 Show
Haphazardly. I start with unit testing, but once the work of mocking becomes large, time pressure generally puts the focus on writing more features rather than mocks. At the intermediate level, we do some integration testing. For example, test that all endpoints of our embedded webserver respond as expected, with the hardware mocked out. At the final level, we test that the UI behaves as expected and that the device behaves as expected, but often overlook testing error conditions or trying to cause error conditions. Currently it is an Arria 10 SoC, which is serving up a webpage for user control. VS Code, Visual Studio, PyCharm Generally, the reviewers know what the code is supposed to accomplish and the primary developer explains how the code achieves the goal. 31-60 seconds 50 25 25 Show
Run it against requirements and my own list of where I'm likely to make a mistake. Boundary conditions, timing jitter, etc. Embedded devices various embedded IDEs. n/a 11-30 seconds 20 40 40 Show
Manually n/a Pycharm n/a Under 10 seconds 70 10 20 Show
Test by hand on the product. ATmega328P, SAM3X8E, Embedded Linux ARM Atmel Studio, Eclipse I haven't participated in code reviews with Bird 11-30 seconds 80 10 10 Show
I am currently a Product Manager N/A N/A N/A Under 10 seconds N/A N/A N/A Show
I usually just go through the code because, as yet, I haven't had to write long code. N/A Code Blocks, NetBeans N/A 31-60 seconds 0 0 0 Show
Python unittest for REST interface. cpputest for C/C++ unit tests Embedded Linux 3.14 or 4.9 running on armv7hf atom editor, custom build environment We have them, we facilitate with Gerrit. 1-5 minutes 50 20 30 Show
On target. Test at a system level. embedded Green Hills Gerrit 5-30 minutes 60 20 20 Show
Run it on real HW. Custom ASIC / SOCs -- Everything gets reviewed by a peer. 5-30 minutes 50 20 30 Show
Some unit test, some manual, some semi-automated integration and application acceptance testing. We target multiple uC - 8 and 32 bit, bare metal, RTOS, Linux, Windows and Android Atmel Studio, Visual Studio, GCC, a few others for legacy projects I would guess that <50% is reviewed. A couple of teams schedule and conduct them explicitly. 1-5 minutes 25 25 50 Show
test on target windows control, unix based target MS Visual Studio One on one with boss. 31-60 seconds 10% 30% 60% Show
I tend to build testing into a module and then exercise module functionality using those test functions. RF IQ Capture and Playback devices Currently Visual Studio and QT Since becoming involved in engineering at X-COM, I've been fixing & maintaining code that was written by interns. There are only two of us and we work together daily. 11-30 seconds 5-10 25 65-70 Show
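A minimal sketch of building test functions into the module itself, as the respondent above describes; the counter module and its self-test entry point are invented. The self-test can be wired to a CLI command or a bench program.

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical module state: a tiny counter with saturation. */
    static unsigned char count;

    void counter_reset(void)          { count = 0; }
    void counter_tick(void)           { if (count < 255) count++; }
    unsigned char counter_value(void) { return count; }

    /* Built-in self-test: exercises the public interface and reports
       pass/fail, so callers can invoke it from a CLI or bench harness. */
    bool counter_self_test(void)
    {
        counter_reset();
        if (counter_value() != 0) return false;

        counter_tick();
        counter_tick();
        if (counter_value() != 2) return false;

        counter_reset();
        return counter_value() == 0;
    }

    int main(void)
    {
        bool ok = counter_self_test();
        printf("counter self-test: %s\n", ok ? "PASS" : "FAIL");
        return ok ? 0 : 1;
    }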
Desk/bench testing, run smoke test. Otherwise rely on other test labs, automated testing. printer engine running on threadx Custom build, git, multi, gerrit, emacs gerrit, mandatory 1-5 minutes 40 40 20 Show
On target hardware with semi-automated test scripts Dozens of different ink jet printers with custom asics and custom smart chip solutions. Printers span a range of end user prices of $39 to $100000+. Greenhills compiler, Multi debugger We use Gerrit to do code reviews on every check in. This is automatically enforced as part of our check in process. 5-30 minutes 50 25 25 Show
Unit test, functional tests. Embedded firmware PyCharm Informal Under 10 seconds 60 20 20 Show
Running it on hardware Printer hardware Linux, Multi Use of Gerrit, all code is reviewed 30-60 minutes 50 25 25 Show
mostly manual testing since it requires working hardware. ARM based in-house tools, MULTI use Jenkins 5-30 minutes 30 30 40 Show
Ad hoc Multi-core ARM system running Linux and ThreadX (on different cores) GHS Multi, Eclipse, Visual Studio Gerrit 5-30 minutes 50 25 25 Show
Mostly manually by interacting with the product or using some built in "shell" functionality that enables invoking operations via a serial port. printer or all in one device Products contain Linux and ThreadX RTOS running on different processors. RTOS uses green hills compiler and debugger. On Linux gcc and the green hills debugger. All changes must be reviewed with at least one person via gerrit 1-5 minutes 50 30 20 Show
I test on a representative target using special (UDW) commands that are part of the embedded code and allow direct API access into specific modules. These commands can be run from a target shell or driven by external scripts. ARM based asymmetric multi-processing system on a custom ASIC running ThreadX and Linux OS. ARM gcc, git, gdb, MULTI, shell peer review using Gerrit 1-5 minutes 60 20 20 Show
Put in debug statements and manually test. linux na code reviews are done via gerrit 5-30 minutes 30 40 30 Show
unit tests, functional tests, full system tests arm based embedded linux system gnu C++, python, eclipse, pycharm We use git/bitbucket pull request mechanism for important/complex code that needs to be reviewed Under 10 seconds 30 40 40 Show
With unit tests, and have created functional integration test frameworks for black box testing Embedded linux Qt, python Currently, it's ad-hoc and will walk through the code with a peer if necessary Under 10 seconds 50 30 20 Show
I've mostly done Test After Development, but I've recently been trying to switch to TDD. Android, embedded Linux, Windows Android Studio, Eclipse, IntelliJ None 31-60 seconds 20 40 40 Show
Not formally ARM Microcontroller lpcxpresso (eclipse) NA 11-30 seconds 30 20 50 Show
system testing armv7 netbeans none 30-60 minutes 33 33 33 Show
test concurrent linux kernel, embedded linux, mcu text editor Don't do them Under 10 seconds 33 33 33 Show
Manually :( Android and Linux Android Studio N/A 31-60 seconds 70 20 5 Show
Run everything in debug mode and when it fails, check where it fails. embedded linux systems Visual Studio Do code review on major changes 11-30 seconds 50% 30% 20% Show
Unit test and manual test Embedded and Web Application Eclipse, Clion, PyCharm, IntelliJ, Android Studio Peer review 1-5 minutes 40% 20% 40% Show
Mostly Developer test Embedded Linux Lots, mostly Netbeans now. As needed 11-30 seconds 60 20 20 Show
printk Embedded Linux - kernel level and mid stack, drivers, ARM gcc, emacs, vi, gdb, oscilloscopes, logic analyzer limited 5-30 minutes 5 5 40 Show
Writing my own quick test harness if I can Kernel + User-mode Xcode One or two reviewers 1-5 minutes 40 40 20 Show
Application specific unit tests, manual tests, automated tests macOS XCode Using ReviewBoard 1-5 minutes 20 30 50 Show
depends - sometimes with test driver code, sometimes following flow with a debugger MacOS services & device drivers xcode, eclipse code reviews are via automated diff generation 5-30 minutes 35 15 50 Show
We don't really haha. Debug statements by hand I guess. There is some automation for QA but in terms of me trying to figure out if what I wrote does what I think it does there's not much there. Mac OS 10.13 Xcode Usually very thorough but not a whole lot of rules or procedures 1-5 minutes 40% 20% 40% Show
Unit/functional tests. Win7+ Visual Studio Peer-reviewed by fellow co-workers, across teams. 1-5 minutes 30% 40% 30% Show
yes windows Visual Studio, WinDbg A must-have 5-30 minutes 50 35 15 Show
test module functionality varies eclipse, gcc, visual studio Github reviews 11-30 seconds 60 20 20 Show
Honestly, logging statements for the most part. MacOS endpoints Xcode, IntelliJ 1-3 other developers and review, wait for "ship its" 5-30 minutes 45 10 45 Show
Oftentimes by running the code and seeing the results first hand. printer Eclipse and VS Code haven't been part of one yet 1-5 minutes 35 25 40 Show
run it on the printer! unit tests if written on epoch codebase. Micrium imx6 dual core Eclipse, ARM DS-5 Online, mostly find functional errors 1-5 minutes 40 10 50 Show
No unit tests, functional test at a high level. ARM Imx5, Imx6 ARM DS-5 IDE, QNX IDE We use Code Collaborator or pair with someone else 1-5 minutes 30 20 50 Show
Verify that it works as necessary to meet specification. Use Zebra unit test harness. Zebra Mobile Printers Eclipse, Notepad++, Beyond Compare, Cygwin, Toolbox Have not participated in a code review. 31-60 seconds 25 35 40 Show
unit tests, automated black box testing, system testing. NXP iMX6.SoloX ARM DS-5 toolchain Code changes are reviewed by at least two other developers via online platform, or are developed using pair programming 31-60 seconds 50 25 25 Show
Unit/Acceptance tests. Linux Visual Studio, Eclipse Code reviews are done before committing code 5-30 minutes 50 30 20 Show
Writing acceptance tests, unit tests. Sometimes manual use case testing. Zebra Printer/Web/Mobile Device Momentics, Jetbrains IDE's, etc. Peer code review, where a colleague comes to my desk before checking in. As well as pull requests. 1-5 minutes 70 10 20 Show
Create CppUTest, JUnit, sometimes manual testing linux, windows, android eclipse, VS, momentics self and peer 5-30 minutes 50 30 20 Show
Unit tests, automated acceptance tests embedded system Momentics IDE in a development VM We do either pair programming or code reviews using Code Collaborator 31-60 seconds 60 20 20 Show
Unittests, acceptance test frameworks, manual tests where necessary Ask me about my target system. vim? Linux? Git pull requests. 1-2 reviewers 1-5 minutes 33 33 33 Show
- - - - 5-30 minutes 60% 10% 30% Show
Unit Testing. Black Box Testing. Embedded IDE - Momentics, Build Server, SVN, Google Search, Collaborator Meh. Usually good for checking syntax. Normally not very effective for concept understanding. 30-60 minutes 50 10 40 Show
Develop manual and automated test cases and execute them It varies, currently testing an SDK Visual Studio, never had one 11-30 seconds ~0 60 40 Show
Unit tests arm processor Momentics I find them helpful. 5-30 minutes 40 30 30 Show
Using the debugger printers Momentics sometimes we pair program with more experienced developers 5-30 minutes 30 10 60 Show
Debugging and/or sending data and checking the output. Printers, Windows, Android Momentics (Eclipse sort of), Visual Studio, Eclipse, Android Studio Only in certain projects; pair program and review code together or review the code by myself 1-5 minutes 50 25 25 Show
By running through a suite of tests that verify output from our products. Firmware Eclipse, Visual Studio, Resharper, Various in-house tools for testing and communication We don't do them in most projects 5-30 minutes 80 5 15 Show
We put the burden on the end user to test the code (user acceptance test). New product engineering here produces unit tests for proper code coverage, however our business model does not easily facilitate creating/testing code in the typical manner. QNX/arm Momentics, GCC they don't exist 5-30 minutes 25 50 25 Show
Unit test individual code components. Write small test applications to test the code. Primarily PC application with some RT controllers Mostly the tools that come with the LabVIEW IDE. G is a graphical programming language and traditional text based tools do not work. We generally do pair programming for more complex development and use peer code reviews for other code. Under 10 seconds 40% 40% 20% Show
Manually Web Platforms (Spring and Tomcat based), Android Development, and Android / Windows SDKs Intellij, PyCharm, Atom, Beyond Compare, git, jira, jenkins, bitbucket, Android Studio, AWS rarely performed 31-60 seconds 20 30 50 Show
PDB/Pycharm debugger, prints Zebra Printers NA Pair 1-5 minutes 20 50 30 Show
- - VS,Momentics None 31-60 seconds 90 5 5 Show
In-system debugging Blackbox tests Our applications are energy harvesting or low powered radio transceiver/transmitters. Current microcontroller platform is STM32L0 (ARM-M0 core). We also worked with MSP430, 8051 platform and Microchip 18F, 16F and 12F families. Keil µVision with ARM compiler. In the past also IAR-ARM/MSP430 and MPlab Done/documented by another partner before product is going to be released. 11-30 seconds 45 55 5 Show
Manually STM32M (ARM-M4) IAR, Git Implementing via Atlassian tool-chain; Bitbucket pull requests. 1-5 minutes 33.3% 33.3% 33.4% Show
mostly printf. i.mx35 and STM32 with a ARM cortex M3 optima, visual studio, lauterbach debugger, keil µVision, svn send the code to a colleague who usually returns it with some critique, if a larger discussion is needed a forum for all developers is held every other week where such questions can be raised. 1-5 minutes 34 33 33 Show
Use a Documented Test Plan I usually write software for desktops or web. Visual Studio N/A - We have just started doing this here Under 10 seconds 25 50 25 Show
nose2 for python or make test for C. Local gitlab server with CI. Use pylint or flake8 as a linter for python. I would love to have a good linter for C. Always compile with -Wall -Werror in CI. x86, arm, blackfin vim, cscope, ctags. Sometimes an IDE like pycharm. Depends on language, codebase and build system. Have started to look into docker but have not had the time to do anything useful with it. I love constructive criticism. It helps me grow and improve. 1-5 minutes 70 10 20 Show
Unity for unit tests. Pytest on Raspberry Pi for black box testing on hardware. STM32L0 We hope to get away from Keil in the near future We use ReviewTool 31-60 seconds 45 20 35 Show
In-system uC & DSP-based RF-system gcc, gnu make sparse 5-30 minutes 40 20 40 Show
Full device I/O tests. mostly microcontrollers IDEs, debuggers, Understand Several people in a conf. room who review changes to the code. 1-5 minutes 35% 15% 50% Show
Write Unit Tests after writing the code, then altering the code to work better for the tests Medical Instrument Visual Studio Require 2 code reviewers on every code change 1-5 minutes 15 30 55 Show
By Hand For BD - Target embedded PC, Windows 10 VS 2015 CE Code reviews done in TFS/VSTS 11-30 seconds 30 50 20 Show
unit test + manual testing Win 10 IOT Visual Studio, SSMS, notepad++, linqpad we have them 5-30 minutes 40 30 30 Show
Unit Tests, Module Level Tests Windows Visual Studio 2015 Try to concentrate on function rather than standards 1-5 minutes 75 10 15 Show
Mix of automatic, simulator and manual test Medical instrument visual studio 2 code reviews 5-30 minutes 40 30 30 Show
Repeatedly Windows Embedded VS2017, VSTS Online via VSTS 1-5 minutes 40 40 20 Show
unit test, on machine, in simulator W10 SBC Visual Studio, IAR verify developer understands how what they're doing fits into larger system 1-5 minutes 70 20 10 Show
Using the tests already implemented. Windows 10 IoT Visual Studio Normally last about 3 minutes with light questions about pull requests 30-60 minutes 30 40 30 Show
automated tests and instrument tests windows visual studio we review the pull requests against the team branch. 1-5 minutes 50 10 40 Show
Manual tests and Visual Studio Microsoft Windows visual studio, NuGet packages manual and git pull requests 1-5 minutes 25 25 25 Show
Automated unit test using Moq & Fakes. Debug on instruments. Windows OS VS, Git >= 2 developers in MS Team. Follow coding standards. Review unit test effectiveness. 31-60 seconds 45 45 10 Show
unit testing, instrument testing medical device system visual studio Last project: pretty thorough. 1-5 minutes 33 33 33 Show
Debugger A mix of embedded C++ and a Windows computer running a C# application VS 2015, IAR Much better now that we use TFS. We used to print diffs and walk through with other developers in a meeting 11-30 seconds 40 30 30 Show
Unit tests and on-instrument testing Windows 10 IoT Visual Studio, TypeScript, npm, SQL Management Studio Online using VSO, or in-person if need be 5-30 minutes 20% 50% 30% Show
Unit, Integrated and directed testing directed testing Visual Studio, MS SQL Server industry standard 4-8 hours 20 50 30 Show
Automated and manual tests x64 Windows 10 IoT Visual Studio Done using VSTS tools on git pull requests. 2 peers review and comment, and pulls are gated on branch policies. 1-5 minutes 40% 40% 20% Show
1) unit tests 2) simulators 3) on instrument 4) mini test programs bd instrument visual studio often perfunctory 5-30 minutes 30 40 30 Show
We start using device simulators to run a "simulated run" Then we basically make sure it didn't break any unit tests, which are mostly integration tests in our case. Then we take it to a real instrument and test the code on the instrument, if it works it's usually approved for a merge. Lots of devices which we connect to by creating objects to represent them via XML Configuration files, we have "Device Simulator" versions of these Devices for running in simulator (testing) mode. Node, Sql Server Express, Git, Visual Studio Team Services, NuGet We require 2 reviewers for any merge - usually people who are the most knowledgeable of the areas of code you're changing. 5-30 minutes 15% 60% 25% Show
Run existing automated tests and then manually test on a real device. Windows 10 VS2015 Too infrequent; not substantial enough 5-30 minutes 30 20 50 Show
It depends on what type of testing I am doing. I do a combination of simulation, integration, and instrument testing depending on the code changes I have made. Instrument, Azure VMs Visual Studio Code reviews tend to be fast, but also tend to catch at least some errors. 5-30 minutes 60 20 20 Show
Testing involves unit test and manual testing on instruments Windows based computer that talks to different devices TFS, Visual Studio, SQL Management Studio code reviewed by two people 1-5 minutes 40 40 20 Show
After complete, hardcore functional testing. Embedded System using MicroChip controllers. CCS IDE system local colleagues and a 3rd party resource. Been favorable, though high cyclomatic and cognitive scores 1-5 minutes 40% 25% 35% Show
desktop tests, unit tests, hardware in the loop, software in the loop and system level verification Battery powered device with radios IAR, gcc, cmake We require 2 people to approve a Git pull request. We also do more formal design reviews based on risk/safety 1-5 minutes 60 20 20 Show
Some automation w/ Cygwin and gmock, but mostly manual on-target testing though a debugger 16-bit NXP MCU (S12Z) CodeWarrior, Visual Studio Code, PE Micro debugger they are done virtually through our VCS (GitLab) 11-30 seconds 40 40 20 Show
Manual unit tests and manual regression tests Battery powered microcontrollers GCC, IAR, and Eclipse Use SmartBear's Collaborator tool. Some code, but not all. 11-30 seconds 20 40 40 Show
Machine system/regression tests. Mail processing equipment controls. Specifically Atmel ARM (AT91 and ATSAM) using GPIO, CAN, Serial etc. Lots of control state machines. Rowley Crossworks, AtmelStudio, B&R Automation Studio (PLC) I'm the only experienced embedded developer, so code reviews are essentially, me explaining my code to the very junior C developer or to experienced C# developers. Under 10 seconds 50% 15% 35% (debugging before release, very few errors make it to the field) Show
Hand testing; a custom in-unit test framework is sometimes utilized. CLI environment and debugger often used to introduce conditions. Custom code to change paths. Microprocessors, typically ARM based running FreeRTOS, but a decent number of Microchip PICs. Some Embedded Linux is appearing. Some Java for non-embedded. IAR, Jenkins for continuous build. Segger J-Flash. Tracealyzer We try to review everything. Reviews are desk checks for small changes, meetings w/ 2-4 people for bigger changes. Focus on fitting in the code architecture + error paths. Led by the developer. 31-60 seconds 20 20 20 Show
Simulations, test code for test-automation, logging STM32 Keil ARM uVision Pro Only had 1. Not much help.... 1-5 minutes 70 10 20 Show
unit test code, console commands, wireless remote test commands, interfaces, emulators/debuggers, GUI/Management software for integration/system tests ARM3/4 32 bit or Linux SBC GCC, IAR, Eclipse mandatory, at least 2 developers, use Fisheye 1-5 minutes 50% 40% 10% Show
Mix of manual and gtest: Using gtest on the host to check on application logic. Using emulator (breakpoints) to check on logic at target system. Using open source toolchains, ARM GCC STMF4/STMF7, No RTOS(Bare Metal), FreeRTOS Using open source tools. Peer code reviews 5-30 minutes 30 40 30 Show
Debugger, regression test scripts using custom test harnesses, software simulator running in Windows environment. Microcontroller based government systems IAR Workbench Peer reviews usually toward the end of coding 31-60 seconds 50 15 35 Show
Gtest, currently getting familiar with parasoft tools. Some experience with Segger trace tools. STM32F4/7 Eclipse, Cmake, Manual and inefficient. Under 10 seconds 25 25 50 Show
Manual testing Cortex M-3 GNU Team based code reviews 1-5 minutes 40 30 30 Show
Some amount of unit testing, we do a certain amount of integration testing. Mostly lab testing Freescale HC12 and TI TIva Understand We do them consistently 1-2 hours 3 hours 2 days 0.5 day Show
unit tests using Ceedling + integration tests using proprietary HW platform. watch :) J-Link + Eclipse, KEIL, J-Trace using Bitbucket for each pull request 1-5 minutes 50% 20% 30% Show
Due to time constraints, I'm testing on the emulator... Trying to write unit tests when/if there is a bug, to highlight the bug and demonstrate the fix. 8-bit EM Microelectronic CoolRISC. Eclipse and WinIDEA (Raisonance); I also used to work with IAR, Keil uVision, Code Composer Studio The software shall build and all the unit tests shall pass. Regarding the way I'm reviewing, code aspect and the atomicity of the commit are important in order to be able to focus on the implementation change. 1-5 minutes 35 40 25 Show
Unit test, on the target Small 8-bit uC, 32k flash, 4k RAM Eclipse Code reviews done by 3-4 team members on Bitbucket, 2 approvals are needed 31-60 seconds 50 10 40 Show
Unit test and manual integration tests 8 bits EM chip with CoolRisc core eclipse/winidea We do a code review for each pull request 31-60 seconds 40 40 20 Show
manual manipulation, target interface control by python tests, test code emulated on PC confidential CLion IDE done on each pull request 1-5 minutes 30% 20% 50% Show
I currently test my code by playing with the target and sometimes modifying the internal states with the debugger. Mostly 32 bits ARM Cortex-M (M0-M4) gcc toolchain, Makefile + Eclipse. Sometimes Ozone for debugging. We review every branch before it's merged. We pay attention to readability, standards, function size, and try to spot weak points, although we know we're never able to get enough understanding of the code reviewed to spot real logic issues. 1-5 minutes 10 20 70 Show
Integration tests. . Eclipse and Keil Reviews are obligatory to push our changes to the server 31-60 seconds 30 30 40 Show
I wrote integration tests of the module/system, which are platform dependent and closer to reality (we use external ICs, etc.). But recently we started using a unit test approach in our company. So I write unit tests for all functions of a module, including static functions (to get 100% code and branch coverage). NA Eclipse, Ceedling, Unity, Pytest Our company review mechanism is maintained by Bitbucket, where we need two approvals from reviewers before we can put our code on the develop branch of the repository. During review we need to check coding standards and partially the functionality of the code. 1-5 minutes 45 40 15 Show
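One common way to reach static functions from a Ceedling/Unity test, as the respondent above describes, is to #include the module's .c file directly in the test; a minimal sketch, with checksum.c, rotate_left(), and the test names all invented. Ceedling normally generates the runner, so the explicit main() is only for building the test standalone.

    /* checksum.c -- module with an internal (static) helper */
    #include <stdint.h>

    static uint8_t rotate_left(uint8_t value)   /* static: normally unreachable */
    {
        return (uint8_t)((value << 1) | (value >> 7));
    }

    uint8_t checksum(const uint8_t *data, int len)
    {
        uint8_t sum = 0;
        for (int i = 0; i < len; i++)
            sum = (uint8_t)(rotate_left(sum) ^ data[i]);
        return sum;
    }

    /* test_checksum.c -- Unity test; including the .c exposes the statics */
    #include <stddef.h>
    #include "unity.h"
    #include "checksum.c"

    void setUp(void)    {}
    void tearDown(void) {}

    void test_rotate_left_wraps_high_bit(void)
    {
        TEST_ASSERT_EQUAL_UINT8(0x01, rotate_left(0x80));
    }

    void test_checksum_of_empty_buffer_is_zero(void)
    {
        TEST_ASSERT_EQUAL_UINT8(0x00, checksum(NULL, 0));
    }

    int main(void)   /* only needed outside Ceedling's generated runner */
    {
        UNITY_BEGIN();
        RUN_TEST(test_rotate_left_wraps_high_bit);
        RUN_TEST(test_checksum_of_empty_buffer_is_zero);
        return UNITY_END();
    }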
Unit tests, Automated integration tests, Exploratory testing ARM Cortex-M4 Keil MDK, GNU Arm Embedded toolchain Pull requests in Bitbucket reviewed at least by 2 people. 31-60 seconds 35 25 40 Show
Most of the time, manually with the debugger. Sometimes, I write unit test, executed directly on the target. MCU 8 bits WinIDEA We use bitbucket PR (attlassian), 2 reviews before merging 11-30 seconds 50% 30% 20% Show
With Ceedling, I do tests and try to maximize test coverage. For tests, the target is the computer. If possible, I apply test-driven development. Tests with Ceedling 31-60 seconds 60 20 20 Show
Currently, I always test my code with only some specific cases. The reason for this bad method is often a lack of time. :( uC 8 bits We use the GCC toolchain with an Eclipse IDE. For debug, we use the WinIDEA IDE. We use the pull request method of Git 1-5 minutes 60 10 20 Show
Manually test in target system based on requirements or design expectations Write a test harness "hello world" to execute code I pulled out of a system An NXP iMX6 Dual Core processor running at 800 MHz running an Integrity operating system Visual Studio, Understand, Green Hills MULTI, Cygwin Some teams perform code reviews on a story by story basis to remove defects early. This is not required by our process. Formal code reviews happen to groups of code changes associated with feature changes that can span weeks to months. 1-5 minutes 65 25 10 Show
Multiple layers. Informal bench testing. Code analysis. Code Reviews. Manual integration tests. Coverage unit tests and some functional (requirements) tests using VectorCAST. Also manual functional tests (we have a separate group set up for this). Systems V&V. Algorithm verification tests automated in MATLAB/Simulink. 4 boards, main target is iMX.6 based. Editors, Green Hills emulators, RTOS debuggers, VectorCAST. Code Reviews on all code. Many sentiments that they are not that useful. We have certain personnel who analyze code for defects. 11-30 seconds 25 0 75 Show
We have something we call CBAT (Continuous Build and Automated Testing), which includes all of automated builds that are executed within TeamCity. Once the builds are complete we run unit tests (for those that have implemented them) then integration tests on the target devices and monitor them with various lab equipment to measure performance, current draw, test interfaces, pre-compliance testing, etc. We also do separate design validation tests in our Electronics Lab, which are sometimes manual in nature. We also have an SQA department that does testing from a customer perspective. Some products use microcontrollers (ARM M3/M4, 8051, MSP430) running bare metal code. The other end of the spectrum are microprocessors (BlackFin) and FPGAs IAR, Keil, PC-lint, RSM, Doxygen, JTAG debuggers, oscilloscopes, protocol analyzers, spectrum analyzers, DC current analyzers All code merged to mainline is done via Pull Request, which generates online review on our internal Bitbucket server. 11-30 seconds 60 25 15 Show
Code inspection and running the code on the machine. PrisMax Visual Studio This is where most of our issues are found. 5-30 minutes 50 25 25 Show
Functional testing using actual system HW and events that drive all paths to uncover all timing dependent race conditions to determine sufficient margins. No OS Oscilloscopes, LA, in-house tools too much focus on syntax/coding-std and not enough on overall structure of design 11-30 seconds 30 60 10 Show
printf Embedded Greenhills Integrity OS SlickEdit code editor, command line build Use Crucible, tend to be well after code has been written/tested 1-5 minutes 60% 10% 30% Show
--- --- --- --- 5-30 minutes --- --- --- Show
Manual test procedures, unit tests written in VectorCast by engineers other than the developers imx6/ARM Cygwin Not a big believer in code reviews 5-30 minutes 50 25 25 Show
On the advice of counsel, I invoke my fifth amendment privilege against self-incrimination and respectfully decline to answer your question. 2006 cell phone Changing from Windows (Visual Studio) to Linux based Perfunctory 5-30 minutes 50 10 40 Show
Using a debugger, trying the code on the system, and adding in test functionality that can run while the system is live (for example, changing a temporary test global value to be what's needed to inject a failure value in to see that system behaves correctly) 1 main processor running the application on a real time operating system (Green Hills), 3 bare metal processors that perform other secondary/safety functions of the system Green Hills Software MULTI debugger and SlickEdit They seem to focus (in general) on simple coding standard fixes when schedule is rushed, too late and rushed for it to be useful for making design improvements 5-30 minutes 50 30 20 Show
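A minimal sketch of the live fault-injection hook described above, where a volatile test global (settable from a debugger or test shell) forces a failure value into a sensor path; all names are invented, and the real sensor read is stubbed so the sketch runs on a host.

    #include <stdbool.h>
    #include <stdio.h>

    /* Set from the debugger or a test command while the system runs. */
    volatile bool test_override_temp_enabled = false;
    volatile int  test_override_temp_value   = 0;

    /* Hypothetical real sensor read (stubbed here so the sketch runs). */
    static int read_temp_sensor_raw(void) { return 25; }

    static int read_temp_sensor(void)
    {
        if (test_override_temp_enabled)
            return test_override_temp_value;   /* injected failure value */
        return read_temp_sensor_raw();
    }

    int main(void)
    {
        printf("normal: %d C\n", read_temp_sensor());

        /* What a debugger write would do: force an over-temperature fault. */
        test_override_temp_value   = 150;
        test_override_temp_enabled = true;
        printf("injected: %d C\n", read_temp_sensor());
        return 0;
    }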
I test my code by creating test scenarios and analyzing the output. NA NA NA Under 10 seconds 1/2 1/4 1/4 Show
Running it on the target system. It is many pumps. Notepad++ Peer to Peer 1 day or more 9% 9% 9% Show
New company and still figuring that out. Before I just tested the code on the physical machines it controlled. It measures how much water is being used. IAR Embedded Workbench, Simplicity Studios, Git Flow, Bitbucket, SourceTree We use Bitbucket to do pull requests where then everyone can review the code on their own time. 11-30 seconds 60 30 10 Show
I test my code by stepping through or via breakpoints. MCUs IAR compilers (ARM, MSP430), MPLAB X, Linux Currently done via the built-in Bitbucket interface 1-5 minutes 30 0 70 Show
I add lots of asserts. Most testing is done manually, or I may write a script if manual testing is too tedious. Blackfin, 8051, MSP430, ARM. Bare-metal or Linux. GNU make, gcc, Keil, IAR, Simplicity Studio (Eclipse) Bitbucket pull requests with at least 2 approvers to merge 31-60 seconds 60 10 30 Show
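A minimal sketch of the assert-heavy style mentioned above, using an invented queue_push(); on bare metal, assert() is commonly redefined to log and halt rather than call abort().

    #include <assert.h>
    #include <stddef.h>

    #define QUEUE_CAPACITY 8

    typedef struct {
        int    items[QUEUE_CAPACITY];
        size_t count;
    } queue_t;

    /* Preconditions are checked aggressively; violations stop the program
       right where the bad call happened instead of corrupting memory. */
    void queue_push(queue_t *q, int value)
    {
        assert(q != NULL);                    /* caller passed a real queue */
        assert(q->count < QUEUE_CAPACITY);    /* caller checked for space   */
        q->items[q->count++] = value;
        assert(q->count <= QUEUE_CAPACITY);   /* postcondition sanity check */
    }

    int main(void)
    {
        queue_t q = { .count = 0 };
        queue_push(&q, 42);
        assert(q.items[0] == 42 && q.count == 1);
        return 0;
    }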
Functionally, at my desk. We also use TeamCity. currently, a SiLabs blue gecko (ARM cortex m4 w/ integrated bluetooth) VS Code, IAR IDE, vendor-specific IDE's Done on bitbucket 1-5 minutes 30% 35% 35% Show
Manually constructed test programs (no unit-test framework) put together on a case-by-case basis that run natively or on target depending on the situation. We also perform integration testing that is occasionally automated using Labview. Many of our systems have multiple processors. Most are bare metal systems, and we use a non-real time flavor of Linux. CCS (for MSP430), Doxygen, Notepad++, Cygwin, GCC/MinGW, GIT/Bitbucket Conducted within Bitbucket Pull Requests, with out of band verbal and slack conversations. 31-60 seconds 30 35 35 Show
I add code to a main function that prints the result to stdout. Then, I'll check the printed result and compare it to the expected result. Bare metal systems and linux Notepad++, Cygwin Typically, every team member is added as a reviewer for pull requests. Two approvals are needed before merging to the master branch. Under 10 seconds 25 5 70 Show
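A minimal sketch of the print-to-stdout approach above, with the expected-versus-actual comparison automated so the exit code flags failures for a script; scale_adc() and its constants are invented.

    #include <stdio.h>

    /* Hypothetical function under test: converts raw ADC counts to millivolts
       for a 12-bit ADC with a 3300 mV reference. */
    static int scale_adc(int raw)
    {
        return (raw * 3300) / 4095;
    }

    int main(void)
    {
        struct { int raw; int expected_mv; } cases[] = {
            { 0,    0    },
            { 4095, 3300 },
            { 2048, 1650 },
        };
        int failures = 0;

        for (unsigned i = 0; i < sizeof cases / sizeof cases[0]; i++) {
            int actual = scale_adc(cases[i].raw);
            printf("raw=%4d expected=%4d actual=%4d %s\n",
                   cases[i].raw, cases[i].expected_mv, actual,
                   actual == cases[i].expected_mv ? "OK" : "FAIL");
            if (actual != cases[i].expected_mv)
                failures++;
        }
        return failures;   /* nonzero exit marks a mismatch */
    }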
Automated testing is handled by TeamCity. Manual testing is handled by our electronics lab technicians (we write a work order, someone picks it up, etc). We have a mix of unit tests, on-target tests, and static analysis. The amount of that depends on the project. Most of the unit-testing was implemented by a contractor. We've done some maintenance, but almost no expanding on that. We have a few on-target tests that we've implemented, but they are mostly black-box style tests. Mostly bare-metal, 8051 or ARM. Keil uVision, IAR, CCS, Simplicity Studio, gcc+make, Visual Studios, LabVIEW We use crucible or bitbucket, depending on whether the project has been ported from SVN to git. 5-30 minutes 30 50 20 Show
The most useful tests today are all manual and take a lot of time. We have some unit tests using cpputest, and it's growing, but at current levels benefits have not outweighed efforts....yet. Linux, Windows Qt Creator, NetBeans, Spyder Usually too big, so suggestions rarely cause a noticeable impact 1-5 minutes 40 40 20 Show
TDD embedded avionics GreenHills, vscode comments go in, merge ensues (comments mostly ignored) 1-2 hours 30% 50% 20% Show
CppUTest for individual functions, integration tests for full system, and then user testing in a development environment Windows 7 and 10 QT Creator gitlab merge requests 5-30 minutes 20 60 20 Show
Unit tests, integration tests in the HIL (Hardware In the Loop), and flight tests x86 and ARM Visual Studio Code, vim, spacemacs Done using code reviews, sometimes done over the shoulder 31-60 seconds Not actively coding regularly at work Not actively coding regularly at work Not actively coding regularly at work Show
manual debugger-based testing. The target is an embedded multicore processor. GCC toolchain, vendor-provided debugger/loader/BSP generator extremely unofficial at this time 1-5 minutes 50 0 50 Show
Was using Cpputest last year and found it very useful. On my current project, the testing is more at the system level with a development breakout board and breadboard... not enough time to unit test before delivery, but hopefully will have time to dive into that after delivery (which I realize is 100% ass backwards... but it's driven by our schedule and lack of resources). Avnet PicoZed SOC Xilinx Vivado/SDK (Eclipse) Haven't had one yet for this project (time and resource limitation) 31-60 seconds 20 30 50 Show
I write unit and integration tests for it. ? vim and make mostly. Sometimes the intellij suite. We use them. I try to do mostly substantive comments, but sometimes the pedantic "too many spaces" or "brackets on the wrong line" slips through. 1-5 minutes 50 30 20 Show
Prototypes: A simple program that exercises the primary use cases Production Code: Unit tests with 100% block statement coverage, integration tests Avionics hardware Visual Studio Code, gcc Quite thorough: must have 2+ approvers. >20% of our time each week is spent reviewing. 11-30 seconds 30 60 10 Show
Various levels of test ranging from code developed with TDD all the way to "give this to my (internal) customer and hope it still works because there is no other way to tell" Combination of real time and not real time software on x86-based servers clang or gcc, cmake, google test, vim I hope people are paying attention Under 10 seconds 40 55 5 Show
cpputest on simarm and ontarget embedded ARM SOC Greenhills lightweight delta reviews (github) 1-5 minutes 20 40 40 Show
By writing a sizeable chunk of code, backfilling unit-tests to exercise its nominal behavior & edge-cases, and then iteratively developing both tests and code together. This is not real TDD because my tests usually don't come first. Embedded microcontroller vim, and the compiler, that's pretty much it... all merge requests into our mainline development branch are peer-reviewed by at least 2 other developers 11-30 seconds 40 40 20 Show
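With nothing but a compiler on hand, backfilled tests of nominal behavior and edge cases like those described above can be as simple as an assert-based program; the clamp function below is a hypothetical stand-in for the real module, not the respondent's code.

    #include <cassert>
    #include <cstdio>

    // Hypothetical module under test: clamp a raw ADC reading into range.
    static int clamp_reading(int raw, int lo, int hi)
    {
        if (raw < lo) return lo;
        if (raw > hi) return hi;
        return raw;
    }

    int main(void)
    {
        // Nominal behavior, written after the code itself.
        assert(clamp_reading(500, 0, 1023) == 500);
        // Edge cases backfilled afterwards.
        assert(clamp_reading(-1, 0, 1023) == 0);
        assert(clamp_reading(4096, 0, 1023) == 1023);
        assert(clamp_reading(0, 0, 1023) == 0);
        assert(clamp_reading(1023, 0, 1023) == 1023);
        std::puts("all backfilled checks passed");
        return 0;
    }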
MATLAB unittest and Pytest (mostly have been working in MATLAB and Python recently) Servers (for build tools) and embedded controllers (for flight code) MATLAB IDE, Eclipse, Sublime, Jenkins, Git, Subversion Varies by specific repo but usually GitLab reviews by 1-4 people 1-5 minutes 40 30 30 Show
cpputest micro-controller-based embedded platform. Green Hills probe, Jenkins for CI, GIt for SCM. Gitlab merge request 5-30 minutes 40 40 20 Show
CPPUTest, python unittest, or ad hoc testing C++ for a mission critical application <cough>MinGW 4.6.2 and python 2.7</cough>, Qt Creator and PyCharm as IDEs They take too long and nitpick about style as much as being about substance. 5-30 minutes 30% 45% 25% Show
Unit testing in CppUTest, manual system testing Windows desktop applications Qt Creator & Designer, PyCharm Gitlab code review with at least one member of the team 1-5 minutes 40 40 20 Show
CppUTest Windows Desktop CppUTest, GDB, ant, ivy, Qt Creator We use gitlab, and review all of our code. The process is somewhat fast and usually helpful. Under 10 seconds 35 60 5 Show
In CI, sometimes at interfaces, with dependency injection, emulators, etc It's a computer gcc, gdb, MULTI, pdb, pytest, cpputest, valgrind, gperf, gcov, gprof, 1 or more people, varies with content of MR Under 10 seconds 20 40 40 Show
go through entry and exit points of the code and attempt to exercise all these routes and get 100% code coverage 32-bit unix RT Visual Studio Code, Green Hills we use gitlab for code reviews, any comment that is made on a code review must be answered, discussed, and resolved before the code review can be closed. 1-5 minutes 60 10 30 Show
ECP where each node in the ECP represents a single CPPUTest Embedded platform/uController VSCode, CPPUTest, Jenkins, Git, in-house tools Code reviews are generally done on Gitlab 31-60 seconds 20 40 40 Show
Mix of light TDD, custom off-target test suite, manual testing. PIC32, AWS EC2 MPLAB X IDE, GCC Code briefly reviewed by peer team members, but no specific review guidelines. 31-60 seconds 50% 20% 10% Show
usually via automated unit tests on a build server embedded devices, FreeRTOS as well as Linux on ARM vim, assorted *nix tools, make, gcc, IAR, git, using bitbucket, git-based pull request process. 2 approvals required. 11-30 seconds 40 10 50 Show
Automated tests Linux micro services, running in docker, hosted on Arnouse BioDigital controls (C tools listed only) Vim, You Complete Me, Nerd Tree, Syntastic, clang, clang sanitizers, clang tidy, lldb, valgrind Pull requests must be reviewed before check in. Also, code is often critiqued during retrospectives Under 10 seconds 80 10 10 Show
I use unit tests in some parts of the code; however, I have difficulty getting the rest of the team to use them. If I'm the only one using them, even if they help during development, they become pointless for maintainability. The rest of the code is manually tested. This is something I would also like to understand: where should I draw the line between unit and integration tests? Currently working on an ESP32 project make, cmake, Visual Studio Code, Sublime Text, terminal, git I tend to focus on trying to find bugs instead of asking myself why something is done this way or that way 11-30 seconds 50 30 20 Show
GTest & GMock, but writing tests afterwards. These tests are more for visual testing, not logical testing. (We are developing an HMI.) PC software which is later compiled for an ECU Visual Studio 2005/2007 and its tools. During my studies, Vim with gcc/clang and other console tools Unit tests (written afterwards) in the current project, so we hope to see whether the code still does the same. But "don't touch it, it runs and there are so many other things", or "don't touch it, it will get you" (C. Martin) Under 10 seconds 3 1 6 Show
I don't currently code. n/a Comfortable with emacs. Used to use Eclipse. Fan of pair/mob programming. We did design and code reviews on aerospace stuff and they weren't super effective. 2-4 hours 0 0 0 Show
Unit testing (written after the fact), manual integration testing, automated integration testing (usually controlled by a test sequence which talks to to the target hardware) Target code: variety of ARM/PIC/AVR systems, 8-bit to 64-bit, usually no OS. I also frequently program for POSIX systems gcc/clang, clang-format, lizard, GitNStats, Jenkins, meson, ninja, gdb, openocd I am the only developer, so I use self-review checklists. When on a team: all changes reviewed by 1-2 other programmers. I encourage clients to have the team define a review process & review criteria. 31-60 seconds 25% 35% 40% Show
After I write it (usually). Primarily Linux hosts. Sublime Text, PDB, GDB, MULTI They are mandatory and gating for merge into master branches Under 10 seconds 55 35 10 Show
Not TDD :( I like to setup an integration workbench project alongside the product code and use that to drive general test development (Both Integration and Unit). Then once interfaces are stable I'll spin up a formal unit-test project and move my unit-tests over there. Embedded Multi-Core Real-Time Network Device C++ (Product), Python (Integration Testing), VIM, VSCode, Markdown + PlantUML (for documentation), Gitlab, Jenkins (CI/CD), Pytest (Integration Testing), CppUTest (Unit Testing) Gitlab Merge Requests Yo! 1-5 minutes 10 20 20 Show
b c gcc b Under 10 seconds 40 40 20 Show
Unit tests using cpputest, on sim & on target Too new VSC, probe to test on target Too new 31-60 seconds 33 33 33 Show
Unit tests and system tests. Real-time Linux based HIL systems. QT Creator, GitKraken, Crucible For flight code, crucible. For simulation code, immediate peer review. 5-30 minutes 30 50 20 Show
For personal projects, I tend to write scripted integration tests. For work, Matlab unittest suite, Simulink Test suite, native x86 Windows (personal), many (work) Visual C++ 2010 (personal), Matlab 2016b/2017b (work) Done with Fisheye/Crucible 30-60 minutes 30 30 40 Show
Unittest and integration Linux Visual Studio Required Under 10 seconds 40 40 20 Show
I use a unit testing framework and/or a debugger. The STM32L476 discovery board (bare metal) and a PC-104 running QNX Keil and QNX Momentics No formal reviews. My students use my framework code and provide feedback. 11-30 seconds 20 (40 percent doing requirements and design) 30 (mostly writing test cases) 10 Show
Manually Sensors & PC VS and MPLAB X na 11-30 seconds not sure not sure not sure Show
Programmer(s) are paired with QA tech(s). They participate together in creating the software specifications (algorithms, flow charts, etc.) Then, the programmer tests his own code, before handing it over to the QA tester. The QA tester often has a debugger, and will perform debugging tasks using it (i.e. insert breakpoints and look at internal variables/states). Other possible tests: - black-box testing - monkey tests - reusing code (however, this doesn't guarantee successful integration of this pre-tested code...) We are keenly aware that as applications get bigger, testing costs a lot, especially the regression tests, which should be automated somehow. Embedded ARM MCUs, 16-bit MCUs, non-ARM 32-bit MCUs Mostly IAR's suite of C cross-compilers/linkers/debuggers Rarely done, unless we have a problem, and a programmer needs help 1-5 minutes 30 50 20 Show
A mix of tdd and writing tests after the code. arm cortex gcc/msvc for testing, IAR for firmware Generally done during merge/pull requests or through informal meetings Under 10 seconds 50 30 20 Show
Some automated integrated tests, but a lot of manual Custom boards IAR, CCS, MPLAB, Atmel Studio Requested by developer on as needed basis 31-60 seconds 50 20 30 Show
Off target unit testing with gcc and gtest. On target integration testing driving serial communication with pyunit. Cortex M (stm32), baremetal IAR Pull requests through gitlab 1-5 minutes 25 25 (not separate phase) 50 Show
I didn’t until just recently. Typically Cortex-M based embedded systems. Eclipse, Keil, IAR Formally - non-existent. 1-5 minutes 65 10 25 Show
Limited Unit Test via gtest Manual Testing Pull Requests Cypress PSoC4, PIC, STM32 Vim, uC Vendor IDE/Toolchains, Team Foundation Server (PR's, CI), VisualStudio (when forced), Sigrok, Tiobe TiCS, TFS Pull Requests 1-5 minutes 15 10 75 Show
Mostly manual. Several embedded systems ranging from 8-32 bit. Used in power electronics Eclipse, MPLAB X No code is allowed to be pushed into our master branch(es) unless reviewed. 31-60 seconds 25 50 25 Show
Manual testing STM ARM + FreeRTOS based (Mostly) Eclipse GCC peer code review (gerrit) 1-5 minutes 4 2 3 Show
- Reviewing each others code - TiCS analyze tool from Tiobe (https://www.tiobe.com/tics/tics-framework/) STM32L4, STM32F2, PIC24HJ, PIC18F Eclipse + gcc, MPLAB X + XC16 + C18 Using Gerrit server (https://www.gerritcodereview.com/) to review each others code 11-30 seconds 40 40 20 Show
TIOBE software rating and test in hardware application DSP real time control of power electronics Code Composer Studio - 1-5 minutes ? ? ? Show
preferably with automated unit en integration tests Everything except non-arm bare metal VSCode 4 eyes minimum 11-30 seconds 50 40 10 Show
I don't :) Mostly developed windows interfaces with hardware. This includes RS232, RS485 and other serial protocols. As part of testability mostly worked with Labwindows CVI and GPIB communication. Labwindows CVI somehow no-one ever asked me... 31-60 seconds 50% 25% 25% Show
Functional testing by hand. STM32 (ARM Cortex) Eclipse, Git, GCC Through a Gerrit server 1-5 minutes 70 20 10 Show
Different methods, sometimes writing dedicated code to test functionality. STM32 / freeRTOS, TI dsp28x / bare metal platform and Microchip bare metal. editors, compilers, simulators, etc Code review / merge to the master branch is done / managed by Gerrit. SmartGit flow. 1-5 minutes 20 50 30 Show
EmbUnit tests Matlab tests Win32, HALO Visual Studio I feel some of them are superficial and just based on the code aesthetics and not on the logic or efficiency. 31-60 seconds 40 10 50 Show
I have recently joined this company. But in the past my code was first developed and tested in a Matlab environment. I then had a VS project which loaded the Matlab engine, ran the code, got the result back, and used it to compare/debug the C/C++ code. In-house DSP system Visual Studio / in-house IDE Used in-house code review software Under 10 seconds 20% 40% 40% Show
Unit tests System-level test Touch test (manual exercise of some subset of use cases) Cirrus DSP Toolchain provided by Synopsys including their IDE Notepad++ SonarQube cppcheck We use CodeCollaborator for online code review - at least 2 reviewers per review 5-30 minutes 20 20 20 Show
manual testing, jenkins unit testing and google test framework. ARM Cortex M0+ and a DSP for audio applications (headsets) Keil, Eclipse, VIM, ARM compiler, Meson, jenkins, make, git, gerrit. We review code using Gerrit. 1-5 minutes 30 30 40 Show
Unit tests for unity/cmock, googletest, junit ARM based deeply embedded, OS-less ARM based NucleusOS Windows based unit tests framework Java DS-5 CodeBench MSVC Eclipse/Maven Gerrit based reviews on every commit 1-5 minutes 33 33 33 Show
Existing unit tests, compliance tests, manual execution. Embedded ARM Cortex Keil, Git, Sublime Gerrit code reviews among team members. Need a "+2" and passing unit tests to merge. Most developers can give +1, except for admins. 1-5 minutes 10 30 60 Show
unit tests, integration tests, full regression test (system wide) ARM Cortex A53 running RTOS Mentor Codebench, ARM GNU toolchain, Microsoft Visual Studio gerrit. not much else to say really... 1-5 minutes 45 30 25 Show
Write unit test code to test the agreed API. Use case testing for end-to-end functionality and integration. Embedded, proprietary. VisualStudio VisualStudio Code Codebench DS-5 Through gerrit 31-60 seconds 60 20 20 Show
unit and integration testing based on googletest ARM on an ASIC Eclipse and Visual Studio Still have some to learn, my code usually has to be reviewed about 3 times before going to the master branch 1-5 minutes 5 2 3 Show
Using Logic Analyzers to monitor proper execution, Unit tests to look at modular code, ... I work on ARM M0+ and Arm M4 (+ possible bluetooth SOC) devices. Some of my projects also involve system tests involving a host pc or smart phone app. Keil, Jlink, Salea Logic analyser, WireShark, MCUXpresso Most of my reviews are on small features or patches. I tend to focus on readability and correct logic. There's much to the part that I'm still understanding so, this is what I focus on. 11-30 seconds 50 20 30 Show
Manual testing on hardware and in simulation. Jenkins unit test on hardware and in simulation. Win32 simulation system. Cirrus DSP cores. Synopsys tool chain Visual studio We use code collaborator for reviews with at least 2 peer reviewers for a review. 30-60 minutes 30 40 30 Show
The tools that we use provide a simulator that we use for initial testing (development). This simulator is also used for system testing (complete system tests). We develop with multi-platform support in mind, and one of the platforms is win32. Every time we create a new block (module), we write unit tests that run on the win32 platform (c mockery framework). These tests contain short vectors that exercise as much of the code as possible but still keeping a small simple test file. For complete and more comprehensive testing, the system tests are in place. The final test is to run the SW on the HW and evaluate its results for correctness. The target system is a small embedded dsp with very small memory (around 8k). This has 24-bit support and is optimised for audio processing at the instruction and memory architecture levels. in-house built c,c++ compiler We do reviews using the Gerrit/Git framework. Upon submission of the code, two peers must review and +2 the change (and the automatic build job must succeed) before the code can be merged. 1-5 minutes 2 5 3 Show
1 - unit tests of modules using embunit; 2 - system integration tests with real-world stimuli; 3 - compliance tests ARM Cortex M0+ -based SoC Keil armcc for target and unit test builds gcc for unit test builds eclipse for IDE bullseye for coverage python for misc tools our *intention* is a codecollaborator review for every SVN commit; but it tends to be at most a review for every feature developed/bug fixed Under 10 seconds 25 50 25 Show
CppUTest and ad-hoc manual testing bare-metal ARM, linux ARM, Linux x86 GCC, make, qt creator, vs code, git, svn standard practice and fairly rigorous 31-60 seconds 15 5 10 Show
A combination of running our CppUTest suite and manual testing. An STM32 Microcontroller running no OS. vim, Visual Studio Code, gcc, gdb, valgrind, eclipse, openocd All developers on a team are encouraged to participate in code reviews. A given PR cannot be merged unless it passes all Unit Tests in CI and has been approved by one or more developers and blocked by no one. 5-30 minutes 30% 20% 50% Show
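For readers unfamiliar with CppUTest, which this and several other responses mention, a minimal test file might look like the sketch below; the one-slot buffer module is invented for illustration only.

    #include "CppUTest/TestHarness.h"
    #include "CppUTest/CommandLineTestRunner.h"

    namespace {
        int  stored = 0;
        bool empty  = true;
        void Buffer_Put(int v) { stored = v; empty = false; }
        int  Buffer_Get()      { empty = true; return stored; }
        bool Buffer_IsEmpty()  { return empty; }
    }

    TEST_GROUP(Buffer)
    {
        void setup() { stored = 0; empty = true; }  // runs before every TEST
    };

    TEST(Buffer, StartsEmpty)
    {
        CHECK(Buffer_IsEmpty());
    }

    TEST(Buffer, GetReturnsWhatWasPut)
    {
        Buffer_Put(42);
        LONGS_EQUAL(42, Buffer_Get());
        CHECK(Buffer_IsEmpty());
    }

    int main(int argc, char** argv)
    {
        return CommandLineTestRunner::RunAllTests(argc, argv);
    }

Wiring a test binary like this into CI, as described above, is what turns the checks into a merge gate.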
More and more unit testing with lots and lots of manual integration testing. embedded linux x86 qt creator, subversion, smartsvn, make, cpputest We have them regularly now which is good, but I would still consider them unfocused. 1-5 minutes 33 33 33 Show
A significant amount of manual testing to validate graphics. I've recently mocked/stubbed the OpenGL ES 2.0 api allowing me to get more of this area under tests. This doesn't remove the need for manual testing of graphical elements however... This varies per project. Currently my target is a raspberry pi compute module 1 running a customized version of Rasbian lite. gvim, Visual Studio Code, gdb, git, and subversion. This changes based on the project. For one, there is no reviews that take place. For another we utilize git hub's pull request review mechanic with defined guidelines. We use other tools for other projects. 1-5 minutes 60 20 20 Show
I do unit testing most of the time. I tend to do it after the fact though. I write some code, do some debug and manual test iterations to get it working, then write unit tests to cover the use cases. target is SkyView app running under linux on custom hardware. Sometimes the target can be an app running on an arm processor that communicates with SkyView over a serial or USB interface. QT Creator, GDB, make, printf, cpputest, sometimes other IDEs to build software for different processors, etc See above! I think we do pretty well at code reviews. 1-5 minutes 20 40 20 Show
Typically through manual test. If it's a piece of critical infrastructure, I'll create unit tests. I don't really have one, I'm sort of all over the place. PyCharm, Jenkins, Qt Creator, Visual Studio Excellent, I credit most of my development as a programmer to the feedback I've received in code reviews. 5-30 minutes 20 60 20 Show
Try to run it. n/a Notepad++, Arduino, Processing Rarely. Not a SWE. 11-30 seconds 60% 5% 10% Show
Dynamic simulation. Embedded: arm Non imbedded: posix compliant vi, Qt Creator, cscope, gdb Informal. 1-5 minutes 10 25 25 Show
Recently been working on legacy code with no unit test integration. Code is tested by manually exercising the functionality Instrument control for a fluid handling instrument, written primarily in C#, with a UWP frontend Visual Studio Git Primarily done using diff comparison in Git. They're not as effective as they could be 1-5 minutes 60 10 30 Show
I do unit testing, integration testing, and if needed system testing. The verification teams do the majority of system testing. I always step through every line of new code in the debugger to check that it is behaving exactly like I intended it to behave. We have builds of the software that run in windows allowing us to do much of the testing without the real hardware. The target system is a hematology analyzer. It drives stepper motors, valves, sensors, lasers, and other hardware. The data acquisition is complicated. Over 10 MB of data is collected for each specimen. There are many threads. Our currently released product uses VxWorks Tornado, Green Hills Multi c++ compiler, Visual Studio, and Rational Rose. We do multi-pass inspections with at least 2 developers. 5-30 minutes 15 15 15 Show
Informal unit test methods .NET Visual Studio, Spyder Done through github pull requests 11-30 seconds 30 40 30 Show
We perform all 3 levels of testing (System, Integration & Unit Test), but the Unit Testing is very limited, probably covering less than 50% of the code, perhaps even less than that. Intel-Atom CPU running "WindRiver Linux" (It is a YOCTO distribution with patches and security support provided by WindRiver) Visual Studio 2017 (MSFT C++, C), GNU C++ compiler, Eclipse IDE with GNU C++, C. Rational Rose for Design Modeling. We use 2 reviewers (Safety and standard) for each code change. 5-30 minutes 35 45 20 Show
n/a n/a n/a n/a 4-8 hours n/a n/a n/a Show
For automated testing, most recently we use Google Test/Google Mock. Prior to this, we used a homegrown solution that is comprised of a collection of Perl scripts. We also do manual testing. Board with an Intel Atom CPU running embedded Linux with real-time extensions. Visual Studio 2017 We first personally inspect our own code, using a checklist-driven approach. After the personal inspection, there is an inspection by one or more teammates (usually two). 1-5 minutes 40 40 20 Show
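A bare-bones Google Test example, in case the framework named above is unfamiliar; the function under test is hypothetical and Google Mock is omitted for brevity.

    #include <gtest/gtest.h>

    // Hypothetical function under test.
    static int median_of_three(int a, int b, int c)
    {
        if ((a <= b && b <= c) || (c <= b && b <= a)) return b;
        if ((b <= a && a <= c) || (c <= a && a <= b)) return a;
        return c;
    }

    TEST(MedianOfThree, ReturnsMiddleValue)
    {
        EXPECT_EQ(2, median_of_three(1, 2, 3));
        EXPECT_EQ(2, median_of_three(3, 2, 1));
        EXPECT_EQ(2, median_of_three(2, 1, 3));
    }

    TEST(MedianOfThree, HandlesDuplicates)
    {
        EXPECT_EQ(5, median_of_three(5, 9, 5));
    }

    int main(int argc, char** argv)
    {
        ::testing::InitGoogleTest(&argc, argv);
        return RUN_ALL_TESTS();
    }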
Unit test and system test. Multi-platform. Visual Studio, Tornado Minimum 2 to 3 reviewers. 1-5 minutes 50 20 30 Show
I have in the past created unit tests as I write the code. Currently I have been manually testing the code after I write the code. Windows 10 embedded Currently I am using Visual Studio The code review consists of sending the code (differences report) to other developers to be inspected for defects and coding standard violations. Any findings are reported and then corrected by the developer. 1-5 minutes 50 20 30 Show
None embedded VS Very important 1-5 minutes 50 0 30 Show
Current assignment is to learn the old code-base from a recently acquired product line for maintenance releases. Target is windows pc workstation system manager software for urinalysis running on XP or W7. Visual Studio 2010, 2012 Two person peer review with Microsoft TFS (team foundation server). 30-60 minutes ? ? ? Show
In-house test harness, Microsoft Studio Testing Tools Pic, Arm Visual Studio, IAR Embedded Workbench, MPLab Authors performs inspection of the code then submit their code for inspection by two peers. We follow a multi-pass checklist delineated in the TSP/PSP process. 5-30 minutes 30 40 30 Show
Some unit tests, some integration tests, some end-to-end tests, some manual tests, some automated tests, some regression tests, some code checkers (valgrind etc.), some beta deployments, ... Linux-on-PC vi/vim, GNU toolchain, gdb, printf, bash, Linux utilities It is hard to get the right level of communication. Either nit-picking or glossing prevails. 1-5 minutes 40 10 50 Show
Unit test, test applications, integration test as well as end-to-end test Linux, Windows, MAC git, gcc, Visual Studio, Visual Studio Code, vim, gdb, windbg We use gitlab for code reviews, must be approved by two peers before merging can take place. I would say people are pretty involved in our code reviews. 1-5 minutes 40 15 5 Show
we don't have unit tests. We do everything end to end w/ a combination of python and manual testing. Our current challenge is dealing w/ a codebase that is maintained by two separate teams, w/o a lot of unit tests, and no will/desire to change. macOS. Some higher level, w/ some low level mixed in Xcode (terrible for C++ development) VMware Fusion, Python/pytest for testing jenkins svn we do have code reviews but I feel we can standardize them more. Also we use Review Board and svn, which feels like it's from the '90s. 1 day or more 1 7 2 Show
TDD for new projects - personal projects Integrated system test for legacy code at work Legacy C (Linux kernel) C++ (Linux daemon) CLion Vim CMake Git Jenkins Review by two other developers required for all commits, enforced by Gitlab. 5-30 minutes 5 20 15 Show
Unit tests, automated functional and end-to-end tests. Linux pycharm, git, python/pytest Source code requires 2 reviewers to approve, automation code requires 1 approval. Both run through a build pipeline with unit tests & linter 5-30 minutes 30 20 50 Show
We're just now implementing unit testing. So, prior to that, it was write code, test the changes and debug any issues. We used unit tests in my last company, but the vast majority of the tests were written after the code was complete. However, there was still great benefit achieved as it greatly reduced the time needed for testing and debugging issues as that was resolved during the unit testing phase. MacOS Xcode, VI, LLDB Our code is reviewed via merge requests and informally requires at least 1 approval. 1-5 minutes 40 40 20 Show
cpputest or pytest Linux CMake, conan, cpputest We use gitlab, 2 approvals are required before code can be merged 11-30 seconds 50 25 25 Show
Functional gray-box tests. Linux (CentOS, Red Hat, ...) cscope, ctags, gcc, vim Gitlab MR, Reviewboard, or other code review tools. 2-4 hours 25% 60% 15% Show
write unit tests linux kernel, embedded system gcc, gdb peer review, code refactoring? 1-5 minutes 50 30 20 Show
Robot framework or python scripts along with using gcov to determine how much of my code was touched Linux Command line, vim, gcc Developers look at the review and see if anything stands out to them 31-60 seconds 25 25 50 Show
Current product does a mix of automated test and manual test. Don't know. PyCharm for IDE, Jenkins as CI We use a mix of Gitlab and Reviewboard. 5-30 minutes 50 25 25 Show
Unit test, run time and debug Stable, light weight, and simplest. Sublime, Notepad++, Vim, Eclipse, VS Self review, compare with other source 11-30 seconds 1 2 2 Show
Manual test Smartwatches IAR Mostly self-review 5-30 minutes 3 3 4 Show
Yes Embedded system IAR Be reviewed by my team lead. 1-5 minutes 3 3 4 Show
Manual test Validate hardware at the factory Pycharm Manual code review and by myself 1-5 minutes 3 4 3 Show
Write tests for each function, try to achieve high code coverage Keil, IAR, j-Link Draw flowchart -> Flowchart review -> Code based on flowchart -> Do unit test -> Code review with leader. Try to cover as many corner cases as possible. Under 10 seconds 60% 20% 20% Show
No Stable and no bug IAR Test then fix bug 1-2 hours 40 20 40 Show
Write simple unit test such as create many scenarios for test. Develop/bring up device driver IAR, Keil Team leader will review code before code is released 11-30 seconds 40 40 20 Show
manual, black box/white box testing MCU IAR IDE/Compiler, Segger for debugging one-to-one 5-30 minutes 4 3 3 Show
Create test cases to test the main functionality (normal operation) of the driver/module. In my point of view, creating abnormal test cases would take a lot of time since it would need many test cases, so I just test whether the functionality of the driver/module is working correctly. I also do a self-review of the code/design of the driver/module to find out if any corner case could cause the system to run incorrectly or unstably. Embedded system IAR Doing a self code review to find out if any corner case could cause the system to run incorrectly or unstably, based on the design and limitations of the driver/module. 31-60 seconds 50% 20% 30% Show
Write test cases and do manual test Standardize the unit test system for everyone to follow. We can add more test cases and re-use the old test cases for a test module. Keil, IAR Ask somebody else to review my code(pairing review). 31-60 seconds 60 20 20 Show
Manually WearOS QT My leader reviews. 31-60 seconds 5 4 5 Show
Debug, register and memory watch Microcontroller Keil, MATLAB, CCS N/A 5-30 minutes 40 30 30 Show
I use Robolectric to create test code arm64 Android Studio I create pull request for each task, then other people will do review on it. 31-60 seconds 60 10 30 Show
Using NUnit and CPPUTest to test our C# console application as well as generated C++ code. Flight systems on a rocket. Rider Done via online merge request discussions. 11-30 seconds 60 20 20 Show
My calculations/scientific software is thoroughly tested generally with comparisons against known values from the literature and image tests to ensure that plots are working correctly. My embedded code... not so much. C being my second language I'm less familiar with the testing capabilities. My target systems are instruments that are deployed to collect data in the field, generally stm32 based (some are propeller based). PyCharm/CLion (same tool really), eclipse based tools, atom On projects with multiple people (not nearly all of them) we have a mandatory review by at least one other dev before merging. 31-60 seconds 50 20 30 Show
CppuTest; integration tests . vs code, gcc, misc other Required 2, up to 5 or 6 depending on functionality. May do offline design review for new components. 11-30 seconds 20 60 20 Show
A combination of unit testing and manual testing, with manual being more common. Avr 8 and 32 bit systems, and embedded linux Atmel studio, pycharm, visual studio code, visual studio, yocto. Code reviews are rather informal and mostly involve demoing the working firmware to colleagues. 30-60 minutes 2 3 5 Show
pytest is my go to lately. I tend to work in systems testing lately, with a lesser portion of my development on posix environments gcc, pytest, cppunit, pycharms, jenkins I work in an environment in which a minimum of two colleagues have to review code changes. These reviews range in content from critiques regarding readability to compliance with safety standards and requirement satisfaction. 1-5 minutes 35 35 30 Show
pytest ubuntu, docker, but our team manages infrastructure for rm57 and tms570 based system VS code Jenkins Docker approvals in GitLab, pipeline passes in CI is mergable into master branch. 31-60 seconds 50 25 25 Show
I'm currently not a developer. Embedded Linux. Visual Studio Code N/A 1-5 minutes N/A 7 hours 1 hour Show
execution windows, real time embedded, arduino visual studio na 31-60 seconds na na na Show
TDD for new code, manual spot testing for everything else. Various - bare metal embedded (C/C++), RTOS embedded (C), future development on embedded Linux (C++). CLion, CMake, Vim None for most legacy projects. Process using Bitbucket tools is evolving based on work in greenfields project. Under 10 seconds 70 20 10 Show
Mostly using DLP (Debug Later Programming) approach. Have read your TDD using embedded C book and have tried to start implementing TDD using Unity. Time spent in development is very heavy in code to start, then most of the time is spent testing, then most of the time is spent debugging. Continuous testing with the same test set is not done. Agriculture control systems and agriculture sensing applications. MPLAB X IDE, Unity, MPLAB REAL ICE, Atollic TrueSTUDIO. There is no code review process. Development team usually consists of one individual. 1-5 minutes 30 20 50 Show
I have just started to use Unit test to test my code Embedded system for agriculture industry CLion, MPLAB X, QtCreator We have just started to do code reviews; we use bitbucket to interact with each other to do reviews. 1-5 minutes 40 40 20 Show
The hard way! Depends on the system/application; during development I will try to construct different conditions (HW signals or comm messages) to verify expected operation, including error handling (not doing what it should not do). Current target is an STM32 Arm M4 processor running FreeRTOS. Development includes C & C++. Primary function is as a weight/force sensor. Primary interface is CAN comms. IAR Embedded Workbench We used to do them! :-/ We had tried the free-form "throw it on a projector and everyone talks about it, gives comments" approach. Very limited success at trying to methodically walk through line by line as a group. Also tried a code review tool, but it is not used now. 11-30 seconds 25 10 15 Show
Compiling and running it. I do not have one at the moment. Microsoft Visual Studio, Apple Xcode I have worked at companies that practice code reviews enforcing certain style guidelines and practices. 31-60 seconds 35% 20% 45% Show
Google Test based unit testing ARM based MCUs CLion IDE; in the past I have extensively used almost all of the popular IDEs for ARM MCUs I am mostly on the receiving end of code reviews, where my lead reviews my code. How good or bad the review outcome is after the first review depends on the task. 1-5 minutes 5/10 3/10 2/10 Show
ad hoc unit testing automated gui testing embedded software running on linux embedded software running on micro controller Qt framework c++,c++11,c++14 Peer review via desk check or reviewing a patch 1-5 minutes 25 25 25 Show
debugging, line by line. Minor test functions embedded hardware - varies IAR, rarely 1-5 minutes 10 50 60 Show
GUI testing, so manually doing what Squish could at the moment. Linux. VS Code, gdb. Quite lacking; there is now the ability to see everyone's changes through Phabricator, but it's not mandatory. 1-5 minutes 7 1 5 Show
gtest for core modules. Manually elsewhere. Mostly Linux semi-embedded gdb Occasional only. 1-5 minutes 40 30 30 Show
Manual testing is the current method. Strongly want to get TDD and unit tests as standard practice. ARM Cortex PWM, I2C, CAN, UART, and Wifi. CLion Segger JLink Plus Due to the small team it amounts to me reviewing and cleaning my own code before committing to version control. Code is altered by two other team members, currently without review. I'm currently introducing new code processes into the company. 1-5 minutes 40% 10% 50% Show
Client dependent; usually a mix of manual and unit tests. At home, manual testing for simple programs, but unit tests for more complicated ones. Usually .NET desktop development (WinForms, WPF), but various C and C++ in the past. Microsoft Visual Studio exclusively for about 20 years. Client dependent - all over the map. 11-30 seconds 70 10 20 Show
Unit tests for small modules. System tests, generally using a PC tool automating one end of the communication link. Nordic nRF52 series (ARM Cortex-M4) GCC Eclipse Visual Studio for PC test tools. Formal part of our software development process, using Gerrit. All code requires at least one reviewer to approve before integration. 31-60 seconds 20 60 20 Show
- Run code online, and using text-based logging, confirm that the expected output is obtained - Enforce correct behavior using assertions and catch assertions with a debugger ARM Cortex M4 SPI, I2C Custom RTOS Eclipse Code reviews require the owner to verify, another person to review the coding standards and functionality, and a third person to approve the change. 11-30 seconds 40 30 30 Show
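The assert-and-catch-in-the-debugger approach described above might look roughly like the sketch below on a bare-metal target; the macro name and the spin-loop trap are illustrative assumptions (on ARM Cortex-M a BKPT instruction is a common alternative).

    #include <cstdio>

    // Record where the last failed assertion fired; volatile so the writes are
    // not optimized away and can be inspected from a halted debugger.
    static const char* volatile g_assert_file = nullptr;
    static volatile int         g_assert_line = 0;

    #define MY_ASSERT(cond)                                             \
        do {                                                            \
            if (!(cond)) {                                              \
                g_assert_file = __FILE__;                               \
                g_assert_line = __LINE__;                               \
                for (;;) {                                              \
                    (void)g_assert_line;  /* debugger halts here */     \
                }                                                       \
            }                                                           \
        } while (0)

    static int divide(int num, int den)
    {
        MY_ASSERT(den != 0);  // enforce the precondition instead of failing silently
        return num / den;
    }

    int main(void)
    {
        std::printf("%d\n", divide(10, 2));  // passes; divide(1, 0) would trap
        return 0;
    }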
Google and Qt tests cover around 60% of our very large code base. Many of these test cases are not unit tests in a narrower sense but are integration tests. The remaining tests are done manually using a GUI client. We develop for a Linux platform with arm v5 and v7 processors. The device is a management console for cooling controllers (if that's the right term in English). Qt Creator, Git If branches are merged and it is nontrivial, a review is made. 5-30 minutes 25 25 50 Show
Badly. We have overnight regression tests, but code/feature coverage is very patchy, and bugs frequently get through to released code. Tests are normally only run on the main branch, so bugs are often not detected until after code is merged; this makes re-working code to fix things slow and difficult. Module-level testing often tests functions in a way that is not representative of the way they are used in released code. Often a lot of the testing done by the developer does not end up in regression tests. Windows/PC Matlab Visual Studio GIT/Bitbucket JIRA We carry out code reviews when anyone merges code onto the master branch. It is often difficult for the people reviewing the code to understand the feature (or the code). Under 10 seconds 30 35 35 Show
Using test harnesses, write test scripts to test combinations of variables such that 100% code coverage is achieved. Write test scripts to break the code and define the behaviour if it happens in production code. Windows/PC Matlab, Eclipse (occasionally) The reviewee shall give a code walk-through to the reviewer(s) to provide background on the feature, bugfix, or enhancement changes. 1-5 minutes 50 30 20 Show
Use regression tests Checking signals captured against what is stipulated in the 3GPP specs. Editor: MVS Version control: git We do peer review. 11-30 seconds 30 30 40 Show
Random testing configuration (using generation/analysis tools) Windows machines - Minimal/none 1-5 minutes 6 2 2 Show
1) debugging particular test cases 2) regression test- which generates randomized test cases. I am not sure if you mean the operating system, or something else. Anyways, I am working on windows. I have been primarily working in MATLAB for a while now. So, I don't really use any development tools. I haven't had any formal code reviews yet. 5-30 minutes 40 20 40 Show
NUnit SW-component testing and test on complete ECU in HW rigs. Several different ECUs in 2 different generations. One Motorola based architecture and one Intel based. Embedded ECUs, with several CAN and LIN buses, for communication. Vector DaVinci for Autosar. Microsoft Visual Studio with NUnit for coding and component testing. Gradle for building. I work in maintenance, so we normally only use buddy reviews before merging in Git. 5-30 minutes 47.5 47.5 5 Show
component test/unit test, HIL test of ECU, system test of end2end functions with all related ECUs Closed loop control system distributed on two I/O ECUs in the truck, running on MPC574x family CPUs MS visual studio and Vector Autosar toolchain it depends on who you ask to do the review, but ranging from only style to functional. in most cases, we do not review test cases. 31-60 seconds 70 20 10 Show
Manual debugging, trial and error for the test environment. Test cases in test suites to test end user functionality in our products we build and deliver. Instrument cluster for commercial vehicles. Running Autosar "logical components" and HMI. Mostly Visual Studio. - 1-2 hours 20 40 40 Show
Unit testing and SW component testing with google test Black box testing HIL with pytest. Telematics system, two CPUs + power CPU (ARM), one handling vehicle interfaces VSCode, Eclipse, Git, PC lint, Jenkins, Bullseye code coverage We use Atlassian tool chains at Volvo -> Bitbucket for reviews 11-30 seconds 30 60 10 Show
Unit tests, tests on component level after changes are integrated, HIL test in system rig and test in truck. Instrument cluster, displays to provide driver information Vector Davinci, CANoe, Jenkins, Visual Studio, PN-Tool, SE-Tool Mostly handled within the team. We can't merge without approval. Checking that logic looks correct (have we understood the requirements the same way), new Misra warnings aren't introduced, unit tests cover all scenarios, the code looks pretty, etc 5-30 minutes 40 40 20 Show
Unit SIL tests and integration tests on HIL PowerPC 32-bit architecture, MPCxxx family CPU code generators Minimum in pairs in front of the pull request 11-30 seconds 50% 30% 20% Show
Unit testing using NUnit and Visual Studio, verification test benches trucks Visual Studio, Vector DaVinci Peer reviews, formal reviews for critical changes 2-4 hours 40 40 20 Show
By manually comparing the actual and the expected behaviour. I hope this information is confidential, so I do not want to disclose. Visual Studio Code I review the code as if I'm writing a code again 11-30 seconds 40 30 30 Show
Unit test Embedded system Visual Studio, Vector CANoe, Wireshark Try to read the code as if I have not seen it before and see whether I can then understand it. How easy is it to make changes? 1-5 minutes 3 5 4 Show
Mostly ECU-tests and bench tests Sensor fusion node Vector Davinci Developer Vector Davinci Configurator MS Visual Studio Notepad++ Cygwin... Code reviews are P2P using stash. No formal protocol used 5-30 minutes 15 5 80 Show
1) Unit Test within Debugger 2) System Verification Tests through Test Plans TI-MSP430 with home grown scheduler (No OS) and written in C. IAR Workbench, PCAN Explorer (CAN protocol tracing), TeraTerm (Serial Traffic tracing), Enterprise Architect (Detailed Design), Understand (Code Analysis), Perforce SCM (Source Code Mgmt and Code Reviews), and Helix (Bug and Test Tracking) Done through Perforce SCM tool. 31-60 seconds 2 5 3 Show
Unit tests, ad-hoc, system level tests Mostly Cortex-M 0/3/4 chips in consumer robots. gcc, clang, Segger JLink hardware and software Required for every commit. They’re generally helpful. 31-60 seconds 50 20 30 Show
Manually with some automation. MSP430 IAR We perform code reviews. Just quickly examine software. 11-30 seconds 5 5 5 Show
Manual regression tests; bench-top testing using the debugger; manually-written test programs that run on my PC We use MSP430 microcontrollers and are transitioning to ARM. We use IAR Embedded Workbench, SurroundSCM, and HelixALM. Code reviews are performed separately using SurroundSCM. 31-60 seconds 20% 50% 30% Show
I first use unit tests; then, when I finish a library or executable, I write component tests (black box), and after integrating my piece of software with other libs or modules I write integration tests; lastly I test the whole system with system tests. I write unit tests in the same language used for the component, but further up the test pyramid I use Python for testing. At my current employer I use a Zynq-7000 target (a small 32-bit processor), and also an IMX51 from Xilinx. Ubuntu OS: Visual Code, bash, tmux and CMake. They are quite new for this company but help to improve knowledge and best practices and to spread new ideas. 1-5 minutes 30 30 40 Show
integration software-in-the-loop or target processor and hardware testing Floating point processors for production Powertrain Control ~ widely varying 5-30 minutes ~ ~ ~ Show
Manual testing of new features using the debugger. Then sometimes test cases are created to exercise those features. An online gas analyzer with 2 main target boards both running an AM335x A8, one of which has an FPGA. There are additional sub-boards for a display, analog and digital I/O, and interface to the sensor. - Visual Studio with 4 build configurations for firmware (2 for PC simulation and 2 for the target) and for C# test case development - IAR for debugging on the target - PC lint - Nant - Nunit - Narrange, StyleCop - Git Extensions, GitLab They are informal and often times done behind the scenes when someone is done with their branch and requests a merge into develop with GitLab 31-60 seconds 50 25 25 Show
Combination of unit tests, integration tests, and testing as deployed Engine controller in a vehicle Matlab (with Simulink and code generation) Editor for C code I am in research. The code is reviewed by the production team at handoff. 30-60 minutes 25 35 40 Show
All my boards use USB for communications; I write Windows apps to test my boards. These boards are part of a video slot machine which runs under Linux As I mentioned, we make video slot machines; my boards monitor doors, switches, and temps, as well as drive sound and lighting Atmel Studio, Code Composer Studio and Visual Studio As part of my release process I go through a peer review. Under 10 seconds 70 15 15 Show
googletest for unit/integration tests. System tests are done using an in-house test harness. Mostly Windows. User and kernel components. Visual Studio 2017 Wish we did them. Under 10 seconds 50 20 30 Show
Mostly in-system tests Gaming machines Code Composer Studio Visual review 11-30 seconds 50 15 35 Show
Some unit tests, mostly system/QA testing ARM based MCU IAR, Cygwin, SVN, Git After each ~2 week sprint we have 2 days of Code Review and test. Any meaningful changes should (and for the most part are) sent out to two other people, and one tester (who may also be one of the two reviewers). 1-5 minutes 70 10 20 Show
Manually, with units tests for some very small libraries Microchip PIC32 microcontrollers, usually MPLAB Not as efficient as I would like. As a newly forming group all our processes are in infancy. I am also usually the sole final say for code reviews, I'd like to transition that to others once we get more settled. 31-60 seconds 40 20 40 Show
yes Docker Engine Xcode, Eclipse/SpringBoot, Visual Studio, Vi, Pair program all production code. Spend at least 5 minutes making the code better before checking in. 11-30 seconds 90 5 5 Show
Logging, writing test code using gtest webOS based on Linux Source Insight . Under 10 seconds 40 30 30 Show
Using Python code: emit IPC commands as luna-api and verify the log message. TV vi Using Gerrit, reviewed by some co-workers. 30-60 minutes 50 20 30 Show
integration test and g-test linux Visual Studio Code check-list made by my part 2-4 hours 40 40 20 Show
Although I don't write test code directly, I guide our team members to set up a test plan for each requirement before developing it. And I review whether the test plan is enough to meet the requirement and check for any regression. Arm-based SoC Linux environment Gcc Cmake I don't understand this question 1 day or more 0 0 0 Show
Using an auto test script implemented using Shell script and Python. LG webOS TV Visual Studio Code The company is using a git. 5-30 minutes - - - Show
Using debug prints . VI, Source Insight Check the main routine, unnecessary code, optimization, and extensibility 30-60 minutes 2 3 5 Show
I've recently moved from embedded Linux systems to the web back-end field. Recently I've tested a web API by mocking the web server with nock. Web server CLion, Eclipse, IntelliJ, PyCharm, QNX momentics, Visual Studio code CMake, Make, Gradle GoogleTest, GoogleMock, Cpputest, JUnit, Mockito, TestFx, PowerMock, PIT(MutationTest), Pytest, sonarQube, Cppcheck, gcov/lcov, Cobertura, Jacoco, LLVM Fuzzer Not properly conducted 5-30 minutes 30 50 20 Show
I typically do system test. Linux Source Insight / cross toolchain Peer review (logic review) + code review (coding standard) 1-5 minutes 40 30 30 Show
I test what I changed manually. Then I test integration test cases, manually also. Linux Visual Code Editor Cross compiler Qt Creator Peer review with gerrit system. Under 10 seconds 30 30 40 Show
Using gTest ARM based embedded system gdb For our source code I focus on the following: easy to debug; easy-to-understand code flow; not too long function and file sizes; separation between pure functions and side effects 1-5 minutes 20 30 50 Show
I do unit test. Linux VI I use gerrit review 1-5 minutes 50 30 20 Show
define TC & execute in TC system webOS TV gcc, vim with team reviewer 31-60 seconds 6 2 2 Show
I test my code against the requirements. Linux embedded system vi editor and OE build I do peer review 1-5 minutes 5 3 2 Show
Differential tests Customized microcontroller system comprised of an Arm M0+ controller and an MSP430 compatible core Cygwin, gcc, make, svn, IAR embedded workbench Used occasionally in critical sections 5-30 minutes 50 30 20 Show
Run the code No PC na 1-5 minutes 60 30 10 Show
Debugging tools and oscilloscopes I don't know much about the target system uVision, Atmel Studio, Visual Studio I don't know much about the code reviews 31-60 seconds 20 30 50 Show
Load the code onto the target and run the system in a series of predetermined scenarios. The target system is an automotive fluid dispenser. It allows a user to enter a work order and the dispenser will automatically dispense the proper amount of fluid. Eclipse, Code Composer, IAR Unsure of company code review practices. Personally, I test until a bug is found, and then review code applicable to that bug to determine the cause. 11-30 seconds 40 40 20 Show
Manually on production hardware. I've developed mostly on the microchip PIC32 platform, operating a task-based RTOS. MPLAB X IDE suite, microchip ICD 3 programmer, some work done on TI code composer IDE We don't typically do code reviews. 1-5 minutes 50 20 30 Show
On a harness ARMs mainly Various Eclipse & Studio-based IDEs. There are none here 11-30 seconds 30 55 15 Show
In all honesty, White-box unit testing post development. All Badger Meter Flow Instrumentation Products Visual Studio Professional 2015/2017, Visual Studio Enterprise 2017, Eclipse, NetBeans, Git Bash, TortoiseSVN, Often review code for clarity and stream-lined functionality. Code should perform its function and perform it well. 11-30 seconds 50% 20% 30% Show
Functional testing during development and regression testing before release Various Microcontrollers and Processors Atmel Studio, MPLab, Code Composer Studio Testing Based. If I identify a bug during testing I review the related code. 31-60 seconds 40 30 30 Show
Stepping through code and functional testing in hardware Sample board connected through J-Tag. The system is IAR. IAR embedded workbench none 11-30 seconds 30 40 30 Show
System, Integration (between feature configurations), step thru and debugging each path individually, occasional unit test source code is written PCB board using Microchip SAM L22 Atmel Studio Individual, after testing code and prior to committal, I perform a code differential review with prior repository version to make sure changes are in alignment with expectations and that no rogue changes are remaining. 5-30 minutes 25 25 50 Show
Manual test procedures; trial and error. Windows PC. Visual Studio. None, since I was the only one. Under 10 seconds 40 25 35 Show
Depends on what is being tested, currently I build it, flash it the target device and test functionality or try to break it. Otherwise I try to add a piece at a time, and test it as I go to see if it works with known input/outputs I use several, AVR's, ARM, SNC, PICs Atmel Studio, Notepad++, custom tools, SONiX36K compiler N/A 1-5 minutes 20 40 40 Show
TDD, ATDD. Unknown at this time. Will depend on the client. Depends on the language. Generally an IDE from Jetbrains and whatever build/dependency management tools are part of that language ecosystem. I.e. Bundler for Ruby, Gradle for Java. CMake for C++. Pair programming. Under 10 seconds 60 40 0 Show
We practice TDD on all of our projects. We also strive to practice ATDD on our projects. I don't have a target system. We build systems for clients with various languages and constraints JetBrains Suite of tools We use PRs for our code reviews and most of the time the comments are very minor Under 10 seconds 45 50 5 Show
Debug later style STM32 Visual Studio, Atollic TrueStudio (ST specific IDE), IAR previously Dev team of 1 currently so N/A 1-5 minutes 45 40 15 Show
I try to cover more than 80% of my code with unit tests. Units are mostly classes, sets of classes, modules, libraries, up to the whole system. GUI test tools like Squish allow to test user interactions and system responses. I write the tests at the same time as the code: sometimes before the code, sometimes after the code. TDD is my goal, but I don't always achieve it. My TDD steps are sometimes too big. The big steps force me to backtrack and start over with smaller steps in a slightly different direction. When implementing the CAN communication for harvesters, I wrote a mock object for testing. The mock object can check that an expected sequence of CAN messages occur. It can even answer to requests. I get nervous, if I have to work on a code base without tests. Then, I start introducing tests, which is onerous in the beginning but pays off in the end. Embedded systems with an HMI. The Systems run on ARM SoCs with Linux: mostly microprocessors, rarely microcontrollers. QtCreator, Squish, Valgrind, Gdb, Docker, Yocto, Git, Github, Jira Code reviews often take too much time. The fewer tests there are the longer code reviews take. Code reviews trigger discussions that should have taken place when starting the implementation of a story. Under 10 seconds 45 40 15 Show
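A rough sketch of a CAN mock object of the kind described above, which records the expected outgoing sequence and can answer requests; the frame type, interface, and class names are all invented for illustration and do not come from the respondent's project.

    #include <cstdint>
    #include <deque>
    #include <stdexcept>
    #include <vector>

    // Hypothetical CAN frame and bus interface.
    struct CanFrame {
        uint32_t id;
        std::vector<uint8_t> data;
        bool operator==(const CanFrame& o) const { return id == o.id && data == o.data; }
    };

    class CanBus {                      // what the production code talks to
    public:
        virtual ~CanBus() = default;
        virtual void send(const CanFrame& frame) = 0;
        virtual bool receive(CanFrame* frame) = 0;
    };

    // Mock that verifies the expected outgoing sequence and can answer requests.
    class MockCanBus : public CanBus {
    public:
        void expectSend(const CanFrame& f) { expected_.push_back(f); }
        void queueReply(const CanFrame& f) { replies_.push_back(f); }

        void send(const CanFrame& frame) override {
            if (expected_.empty() || !(expected_.front() == frame))
                throw std::runtime_error("unexpected CAN frame sent");
            expected_.pop_front();
        }
        bool receive(CanFrame* frame) override {
            if (replies_.empty()) return false;
            *frame = replies_.front();
            replies_.pop_front();
            return true;
        }
        bool allExpectationsMet() const { return expected_.empty(); }

    private:
        std::deque<CanFrame> expected_;
        std::deque<CanFrame> replies_;
    };

A test would inject MockCanBus into the code under test, declare the expected frames and queued replies up front, run the scenario, and finally check allExpectationsMet().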
unit testing and black box integration testing Embedded system Eclipse Basic aspects like syntax, workflow are checked. After this logical aspects and unit test coverage is tested. 1-5 minutes 40 30 30 Show
developing test scripts in dSPACE AutomationDesk Microchip controller C application, C++ for Google Test Framework and python for Automations. Static Analysis 31-60 seconds 5 1 4 Show
By writing unit test scripts using Google Test Framework R-Car E2 Eclipse, GCC We have code review checklist created based on "Code Complete" by Steve McConnell book and customized based on our learning 2-4 hours 30 30 40 Show
Using the GTEST framework arm, x86_64, x86_32 GCC, KPIT proprietary toolchain, Eclipse, cpplint, clang-format Is the coding standard followed? Is the code scalable? Is commented-out code present? Is the code optimized? Is there a proper breakdown into smaller reusable classes/functions? Is responsibility reflected in names? Is the code readable and maintainable? 5-30 minutes 30 30 40 Show
Manual testing Embedded system for connected vehicles Eclipse , Android studio We do peer reviews of code from the other team member. 1-5 minutes 50 20 30 Show
Using Google gtest/gmock framework Embedded Linux Eclipse, gcc, gdb Peer review/architect view 31-60 seconds 3 3 4 Show
Using the Google Test framework arm and x86 Eclipse IDE (Photon version) 1. Does the implementation have a proper design? 2. Does the code follow SOLID principles? 3. Classes and functions should have only one responsibility, and their names should represent that responsibility. 4. Is memory allocated on the heap freed? 5. Are comments added? 1-2 hours 5 5 7 Show
We have developed a unit test framework to call our periodic and sporadic functions using that we perform our testing on the hardware itself. Micro-controller Eclipse We have basic self-checklist to perform code review. After that, we perform peer review. 5-30 minutes 50 30 20 Show
Unit tests, integration tests, regression test Linux Ubuntu vim, bazel Github + reviewable PR process 11-30 seconds 60 10 30 Show
Unit tests, perception CI, marvel log tests, on-vehicle tests. Zoox perception system, running on Linux. VSCode, Bazel Mainly try to understand each line of code. I do have difficulty in finding bugs in this process. 31-60 seconds 30 20 50 Show
gtest / CI -> bench test -> on-dyno test -> dynamic vehicle test Embedded board, runs QNX on a microprocessor and an RTOS on a separate microcontroller. Sublime 3, Bazel, clang-format, easy-clang-complete, socketcan, CANalyzer, TextPastry Process-oriented and protected against human mistakes by CI and other pre-merge checks. 1 reviewer minimum, 2 preferred for non-trivial changes. Larger changes should be split up and merged into a feature branch instead. 1-5 minutes 50% 10% 40% Show
gtest beefy x64 vim, gdb, bazel, valgrind, asan, tsan each file must be marked as reviewed by any reviewer. Typical questions are more about style than design and usually a lack of tests is considered to be OK. 1-5 minutes 30 20 50 Show
With googletest. Linux VSCode Git Reviews are done by team-members in my local group, or outside the group if there is a better expert for the subject. 11-30 seconds 20 40 40 Show
Unit test Bazel vim short and to the point. 31-60 seconds 60 10 30 Show
1. Unit tests for non-trivial methods, classes and functions (unfortunately, not everything has unit tests) 2. Simulation tests for testing the entire code end-to-end 3. Hardware-in-the-loop tests, depending on the nature of the change 4. On-vehicle tests for changes that end up on the vehicle It is a custom developed heterogeneous compute architecture bazel, git, vim Every PR must go through code review, testing and possibly CCB approval before it can be merged. There's tooling to ensure each step is done. Most of the reviews I've had have been useful but sometimes it's hard to find the right people for a review. Under 10 seconds 60 15 25 Show
I write unit tests (after writing the code :( ) Linux servers Bazel, gdb, valgrind There is no universally agreed "checklist"; it's often hard to get consensus among reviewers about "readable" coding style. 1-5 minutes 6 2 2 Show
googletest and python unit tests C++ and python driving an autonomous vehicle. Many complex, tightly coupled and performance-sensitive systems. gcc, clang, bazel, vim, VsCode We review for implementation, test coverage, code style and comments. 31-60 seconds 40 40 20 Show
Catch2 at Neato, behave for Python side projects Ubuntu 18.04, x86_64 Preferred editor: Sublime; preferred IDE: Eclipse I don't feel comfortable nitpicking others about coding standards/styling, so I focus on regression issues, checking the logic, and optimization. 5-30 minutes 20 50 30 Show
There is a small portion of the code with unit tests using Catch2. The rest of the code runs on the device and is evaluated based on its performance. Embedded device git, Stash, Slack, Jenkins, Eclipse, vim Peer review through Stash, usually involving 1 or 2 other developers 1-5 minutes 40 30 30 Show
catch2 qnx OS, 32 bit vim and cscope we make people familiar with the code we're touching / introducing as reviewers on pull requests 1-5 minutes 4 2 4 Show
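Catch2 comes up in the three responses above (and "catch" again a few rows below). A minimal Catch2 v2 single-header test of a small fixed-capacity ring buffer is sketched here; the ring_buffer class is invented for illustration and is not taken from any respondent's code base.

    #define CATCH_CONFIG_MAIN            // Catch2 v2 single-header style: generates main()
    #include <catch2/catch.hpp>

    #include <array>
    #include <cstddef>

    // Tiny fixed-capacity ring buffer, typical of what gets unit tested off-target.
    class ring_buffer {
    public:
        bool push(int v) {
            if (count_ == data_.size()) return false;
            data_[(head_ + count_) % data_.size()] = v;
            ++count_;
            return true;
        }
        bool pop(int& v) {
            if (count_ == 0) return false;
            v = data_[head_];
            head_ = (head_ + 1) % data_.size();
            --count_;
            return true;
        }
        std::size_t size() const { return count_; }
    private:
        std::array<int, 4> data_{};
        std::size_t head_ = 0;
        std::size_t count_ = 0;
    };

    TEST_CASE("ring_buffer preserves FIFO order and rejects overflow") {
        ring_buffer rb;
        for (int i = 0; i < 4; ++i) REQUIRE(rb.push(i));
        REQUIRE_FALSE(rb.push(99));          // full: push must fail
        int v = -1;
        REQUIRE(rb.pop(v));
        REQUIRE(v == 0);                     // first in, first out
        REQUIRE(rb.size() == 3);
    }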
log files and printfs QNX on arm cpu gcc, xemacs online using Stash 1-5 minutes 25 50 25 Show
Peer reviews, simulation, and hardware testing Embedded Systems Visual Studio, CLion N/A 11-30 seconds 8 8 8 Show
Simulator, unit tests, compile the code The objective of our peer reviews is to find mistakes and ways that the code can be improved. Also, team members are supposed to make sure that the different pieces of the code are used and integrated correctly. We have a handful of unit tests (which are relatively recent), but we do not have much in the way of automated testing. Embedded devices Visual Studio, Vim, git (BitBucket), Jira They are slow, but I do learn from them. 5-30 minutes 70 10 20 Show
catch Linux embedded system Visual Studio Code / Internet Use github pull requests 31-60 seconds 75 5 20 Show
Self-built unit and integration tests, interactive feature tests Linux, MacOS Command line, Make, compiler Mostly ineffective and are the largest delay in getting code deployed. Unit tests with coverage analysis are better. Under 10 seconds 5 3 2 Show
Testing different inputs to check different cases Not sure Command line/text editors Not really existent 11-30 seconds 50 10 40 Show
Functional testing via external interfaces for larger code that crosses multiple modules. Utility executables that test specific functions and sequences of functions, mainly success and fail cases. ARM processor running QNX, a Unix-like real-time OS, POSIX compliant. Linux environment & vi. Use Stash side-by-side diffs for individual review. Depending on the complexity of the code, there will be a group review and comments. 1-5 minutes 20 30 50 Show
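The response above mentions utility executables that exercise specific functions and sequences of functions, covering success and fail cases. Below is a self-contained sketch of that style of test; device_open, device_set_mode, and device_close are a hypothetical API, stubbed here so the example links on a desktop, and the real version would call the device's library and be judged by its exit code.

    #include <cstdio>
    #include <cstdlib>

    // Hypothetical device API. Stubbed so the sketch is self-contained; on the
    // target these calls would go through the device's external interface.
    static bool open_flag = false;
    int device_open(const char*)   { open_flag = true;  return 3; }
    int device_set_mode(int, int)  { return open_flag ? 0 : -1; }
    int device_close(int)          { open_flag = false; return 0; }

    int main() {
        // Success case: open, configure, close in the expected sequence.
        const int h = device_open("/dev/meter0");           // path is illustrative only
        if (h < 0)                      { std::puts("FAIL: open");     return EXIT_FAILURE; }
        if (device_set_mode(h, 1) != 0) { std::puts("FAIL: set_mode"); return EXIT_FAILURE; }
        if (device_close(h) != 0)       { std::puts("FAIL: close");    return EXIT_FAILURE; }

        // Fail case: configuring a closed handle must be rejected.
        if (device_set_mode(h, 1) == 0) { std::puts("FAIL: accepted closed handle"); return EXIT_FAILURE; }

        std::puts("PASS");
        return EXIT_SUCCESS;
    }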
Functional test and module-level test SoC boards: mostly custom Eclipse, vim, gcc, gdb, Trace32 Manual 1-5 minutes 50 30 20 Show
The tag cloud words are from attendees' 'Current test practice' responses.

Make this into a word cloud

Select the text below (triple-click), copy it, then try one of these tag cloud generators:
jasondavies word cloud generator - colorful
tagcrowd - lets you show counts