What concerned 949 people about their first TDD experience

Course code | Summary (selected) | Impressions
T***-1 concern: 851 concern 851, revised, again, one more time. and again
B***-1 concern: ??? don't know
V**-1 concern: Abstracting Concerned about how to scale and use this approach when you work in larger projects that might require a lot of abstraction/generalization/Templatization (e.g. found out when incrementing)
A*-1 concern: All the above. Slow - continuous change of focus could take you out of the zone Fake it till you make it - fake stuff getting stuck in the code Fragile test problem - hard to refactor bad interface decisions
C****-25 concern: All the above. Could write some code for a test to pass in a simplistic way, forget about simplification and leave it there.
H***-1 concern: All the above. In the real world the code is much more complex, and usually there is not much time for coding. On top of that, TDD takes some time to write test cases, then code, then unit tests for it.
J***-1 concern: All the above. How do we get started? What setup is involved in getting this to the starting point provided?
J***-1 concern: All the above. This seems okay for a new start, but how do you apply it to existing legacy code?
L***-1 concern: All the above. Definitely not how I'm used to coding!
M***-1 concern: All the above. We make progress only based on tests; it looks like we just move time from debugging to testing. We do not consider things further until we find them, just trial and error.
M***-1 concern: All the above. Some test cases might be missed, and it takes much time.
T*-1 concern: All the above. nothing yet.
T*-1 concern: All the above. nothing yet.
W**-23 concern: All the above. I'm concerned about the development time if we are going to do that for every module in my systems. But what about maintaining already developed code? How to deal with the hardware?
W**-24 concern: All the above. I need to better visualize how this can work in Embedded applications
W**-24 concern: All the above. Breaking out a small behavior seems straightforward, but breaking a complex system into small tests concerns me.
W**-31 concern: All the above. Some of the steps are so small that they break up my train of thought. TDD seems to discourage design and thinking ahead.
W**-37 concern: All the above. increase of development time.
W**-39 concern: All the above. It may be hard/very time consuming to test larger, "more undefined" modules/systems
W**-41 concern: All the above. I have some concerns about abstracting hardware dependencies.
B**-1 concern: applicable Is TDD suitable for any application / situation. For instance when code cannot be that modular.
B***-12 concern: apply about how TDD should be applied.
L*-4 concern: apply It's good. I'm not sure that I can apply it to my project.
C**-1 concern: approval from others Spending too much time on code development because of TDD and then having to justify this time to my managers - it's hard when I don't even believe that pure TDD is necessary.
F**-1 concern: architecture - How can we know that the test cases are enough? - If we only write the code to pass all the test cases, we may not think much about the architecture to expand more in the future.
B**-1 concern: asdertyhju Building software under testing can be treacherous, because it displaces thinking about state and persistence in the code being written.
L*-4 concern: asynchronous system My product relies on an asynchronous system; how can I write unit tests for that system?
W**-21 concern: atypical Doing the minimum to pass the test: what about best practices? There are things you "should" do that require a certain amount of overhead structure to create good code. TDD seems to skip that by just getting by.
T**-1 concern: Awkward I already had a vision in my head, but implementing it in this manner seemed to slow the process and felt awkward.
V**-1 concern: Awkward It feels a bit awkward and like it takes much more time than the usual way of coding
B**-1 concern: Backwards It can be hard to figure out exactly what you have to test for.
C**-1 concern: bad influence Encourages bad development practices.
C*****-2 concern: bad example The example (circular buffer) was much too simple to be instructive. It's too easy to "game" the tests: write things that are known to be wrong, just to pass the test.
B***-2 concern: badger It breaks up my global train of thought when making such small code increments.
B***-12 concern: blind I want to use a debugger and step through my code. I can't SEE why it is failing
Z**-1 concern: blinders Difficult to see the forest for the trees. Feels a little bit like finding bugs/pitfalls by falling into them rather than planning out a path between the pits.
C**-1 concern: boundary Some boundary conditions are implementation-specific, so it is hard to write tests in advance.
V**-1 concern: build-environment our build time
B***-11 concern: Burdensome Additional time in projects to write all the unit tests and legacy code not written with TDD.
C*****-2 concern: busy work Testing the test meant writing bad code just to make the test fail first. This led to some confusion as to where I was in the process: still making and verifying the test fails, or trying to fix it?
Z**-1 concern: careful If we don't have *all* the test cases written down (and implemented), I'm afraid that the result will be a mixture of TDD and Debug-later-programming, which might defeat the purpose.
L*-4 concern: case missing Takes too much time and can miss cases
W**-21 concern: challenge How do we implement this in our projects?
T**-1 concern: change defining the interface is difficult changing the test cases when the interface changes
W**-21 concern: change control How to avoid missing any cases? What happens when a component is developed according to a requirement using TDD and then the requirement changes?
W**-19 concern: Changes When can I make changes without writing a test? What criteria are there?
F**-1 concern: Code coverage Can we achieve 100% code coverage by using TDD?
B***-12 concern: Complexity With a more complex situation it might be difficult to accurately develop unit tests. However, if the situation is too complex that might mean the design needs rework!
Z**-1 concern: Complexity "Will fix later" makes me uneasy. What if I forget? Feedback is great, but it seems we are maybe being encouraged to rely on it too much. It's already difficult to remember everything.
B***-12 concern: Comprehension It seems easy to over-focus on the tests. I ended up feeling like I didn't understand what I'd written very well.
W**-18 concern: concerned Seems chaotic to write code that you know will clearly fail tests later on. It seems you could approach the problem holistically instead of ignoring your technical debt.
F**-1 concern: confusing I wonder whether it is better to write a small piece of code and then test it, or to write a module based on the design. I feel it is really hard to design a module the way we did in exercise 1.
C*****-2 concern: confusion Writing code that passes tests but is obviously not going to do what we want. You end up with a bunch of code that doesn't work.
F**-1 concern: confusion My concern about TDD is how to write full unit test cases for my project, which may be quite complex.
W**-20 concern: consuming It seems like a lot of extra work, which increases development time.
V**-1 concern: context Currently I'm a bit thrown off by looking at things from two different contexts.
B***-2 concern: context switching Context switching when "cheat". I.e. thinking about empty, cheat, move on to full
Z**-1 concern: context switching Continuous context switching between code and tests.
B***-2 concern: corner cases How do I know I get all the corner cases (by going step by step, I might miss the big picture).
B***-2 concern: counterintuitive That you have to work against your instincts when writing your implementation. That it took us 90 minutes to write something that is normally a 15 minute interview question.
F**-1 concern: Cover all cases Time spent testing. Does the test cover all cases?
F**-1 concern: cover all use-cases When do we know our test cases can cover all the use-cases?
B***-12 concern: Coverage Getting the tests right seems critical. I probably overlooked something, but I got through the entire exercise without dynamically allocating the array.
L*-4 concern: Coverage It looks like it's important to achieve high test-case coverage. If we write test cases with low coverage, we can't catch a lot of bugs.
L*-4 concern: coverage If a program has long and complex code, how can the test code cover all cases?
B***-11 concern: Culture-change TDD is a big shift from our current embedded development "culture." I have concerns about effectively introducing a process like this to my team.
B***-12 concern: cumbersome Difficult for larger systems. Sometimes you don't know how the full system will look when you begin, and it can leave even more code to refactor when you start making large changes.
C**-1 concern: cumbersome Under a deadline, I can see this process being cumbersome to manage.
Z**-1 concern: Dangerous The idea that the test should first fail, and that other logic that is untested should be put off seems (even in this example) to lead to *deliberately* writing wrong code, however temporarily.
F**-1 concern: Deadline In a real project, we may not be able to follow the full TDD workflow. It would take too much time to finish unit tests, integration tests, and system tests.
L*-4 concern: deadline to deliver My concerns about TDD are difficulty, lack of time, and maintenance. Test code would be difficult to write, I will not have enough time to write the tests, and I will have to maintain the test code after writing it.
C**-1 concern: dependencies My concern is how to handle Hardware dependencies.
T**-1 concern: dependencies How to make it work with complex dependencies.
C**-1 concern: dependency Can't see a downside for the moment, but it looks like it tends to make the developer rely heavily on the tests.
L*-4 concern: dependency This kind of unit test is useful only for pure-function testing. In an embedded system, however, there are dependencies on hardware or external input data. I think it may be a hard job to replace them with fakes.
L*-4 concern: dependency When a module depends on external environments such as a network or a server, how can that be included in the test code?
B***-11 concern: Dependent Still dependent on me to think of many tests.
B***-12 concern: design How do we do design? Is there no design in TDD development?
C*****-2 concern: design I'm worried that designing things for the short term will have large negative consequences later on.
T**-1 concern: design May lead me towards a design that I'll regret later because I didn't do as much upfront design work.
F**-1 concern: Design test case I am concerned about: 1. Hardware dependence: errors can come from hardware at any time, and I don't know how TDD can cover that. 2. Must we have a test-case design technique before applying TDD?
C**-1 concern: distracting Can be difficult to focus on good design/good refactoring
A**-1 concern: Does not fit how I code. Need the idea to be clear in advance
A*-1 concern: Does not fit how I code. It's very different from how I usually work, and I need to think in a different way. At the moment I can solve problems faster with my usual method (it might get better after getting used to the method).
A*-1 concern: Does not fit how I code. It felt artificial to only create minimal implementations that you knew were wrong and had to be changed.
B***-13 concern: Does not fit how I code. How do you apply this to target hardware and external inputs?
B***-13 concern: Does not fit how I code. It felt like cheating to pass the tests before a "final" implementation existed.
B***-13 concern: Does not fit how I code. What if your compiler takes a *very* long time to compile?
B***-13 concern: Does not fit how I code. How does this method fit in and integrate with libraries and other people's code?
B***-13 concern: Does not fit how I code. There was a moment where you had to do the big refactor, where it stopped being iterative and instead felt like the big-bang implementation that would otherwise have happened right off the bat.
B***-13 concern: Does not fit how I code. Setting up the test harness within a target environment.
B***-13 concern: Does not fit how I code. Not sure how to plug-in a tdd framework in our platform (specific compilers, C-only, etc.)
C****-25 concern: Does not fit how I code. not really sure how to apply it in a large project with multiple actors and complex distributed data...
C****-25 concern: Does not fit how I code. Simultaneously coding and testing means an interface change may propagate and break prior tests before the final product. What is a good granularity for this?
C****-25 concern: Does not fit how I code. I like putting a scratch-pad like implementation to test the concept first so I don't know the final functionality or code structure initially.
C**-1 concern: Does not fit how I code. It might be difficult to always think of all the tests required before developing the software.
E*-1 concern: Does not fit how I code. Initial time to adapt it into normal work, and will others use and maintain the unit tests.
E*-1 concern: Does not fit how I code. I did not like hardcoding values to get tests to pass. That goes against long-engrained practices from years of coding and left me feeling like I might forget to go back and fix it.
H***-1 concern: Does not fit how I code. Not sure if this can be applied to testing an existing legacy code.
H***-1 concern: Does not fit how I code. You need to do really one step after the other - even you would like to do several steps at once. First do the Test, then take care of only those tests.
H***-1 concern: Does not fit how I code. I find it strange that I have to develop in very small pieces but I can live with it.
I**-1 concern: Does not fit how I code. I am concerned that is considered good practice to (initially) hardcode tests to pass/fail. Wouldn't it be better to draft test cases that are intended to be functional?
I**-1 concern: Does not fit how I code. Implementation requirements for what I work on are not clear; the details depend on me determining the capabilities and limits of the target platform while coding. My embedded hardware lacks the resources or I/O to test well.
J***-1 concern: Does not fit how I code. Intentionally not implementing the full (or even any) functionality is uncomfortable and seems prone to future errors
J***-1 concern: Does not fit how I code. Just a new way to approach coding. If I adopt this new way of writing, it'll be a push against the current standard.
J***-1 concern: Does not fit how I code. Intentionally not implementing the full (or even any) functionality feels uncomfortable
J***-1 concern: Does not fit how I code. Purposely writing code that is wrong. Not just stubbed, but wrong.
J***-1 concern: Does not fit how I code. It was always considered bad practice to rely on a compiler to code well. This seems to be the basis of this methodology.
J***-1 concern: Does not fit how I code. It can be challenging not to get ahead of yourself when coding.
M***-1 concern: Does not fit how I code. I need to think differently from my usual working style.
M***-1 concern: Does not fit how I code. Scalability. This is a pretty simple exercise that is pretty cut and dry. how does this scale for more complex functions.
M***-1 concern: Does not fit how I code. It can be annoying to intentionally write the incorrect implementation for the production code, simply because we don't want to "get ahead of the tests".
M***-1 concern: Does not fit how I code. It's very methodical and requires developers to fight their instincts to start writing code that they know they'll need, in some cases by intentionally writing code that they know is wrong.
M***-1 concern: Does not fit how I code. I watched the video about TDD with hardware dependencies, but I still have trouble visualizing how to best do this. I'm hoping future modules give an example to show how this process looks like!
M**-1 concern: Does not fit how I code. translate this into Visual Studio
M**-2 concern: Does not fit how I code. Incremental testing may not fit for certain cases.
M**-2 concern: Does not fit how I code. It requires discipline, some of the issues I have is that test cases are written after the fact, not as a way to drive the development.
N/A concern: Does not fit how I code. I have to get used to narrowing the scope of the code I am testing and not moving too far ahead.
N/A concern: Does not fit how I code. Hard to not jump ahead towards the final solution when writing code to pass tests.
N**-2 concern: Does not fit how I code. My lack of understanding of C
R**-1 concern: Does not fit how I code. Completely breaks my way of thinking about a problem. It forces me to do things wrong, just to fix them afterwards, which is ineffective. Have I mentioned that I like "the Borg" from Star Trek?!
R**-1 concern: Does not fit how I code. It requires changing perspective very often. I would prefer to concentrate on a solution for a longer period. I don't like temporary solutions that exist just to have a response to a test case.
R**-2 concern: Does not fit how I code. Hard to carry out if something has to be adapted in legacy code.
R**-2 concern: Does not fit how I code. Usually I do not write code till I have a plan for how everything should work. First writing wrong/empty code to get failing tests feels like additional work. (But I also see the good side of it.)
R**-2 concern: Does not fit how I code. Not yet sure when faking it is harder than writing the real thing
R**-2 concern: Does not fit how I code. Theoretical approach that one has to get used to first
R**-3 concern: Does not fit how I code. Breaks the development cycle. If I am in the middle of implementing something, I need to write tests in a separate project, or prepare tests. I get back to the original problem maybe one hour later.
R**-3 concern: Does not fit how I code. Hard to get rid of old habits.
S***-1 concern: Does not fit how I code. Requires discipline and a change in thought process and activities
N/A concern: Does not fit how I code. I am not used to decomposing a module at this level of detail. E.g. with the circular buffer I pretty much know how it is supposed to work, so it is a bit tedious to force myself to go step by step.
T*-1 concern: Does not fit how I code. A bit irritating in the beginning
T*-2 concern: Does not fit how I code. As stated when registering, we need 5-30 minutes to get new software running. This does not fit those tiny steps.
T*-2 concern: Does not fit how I code. Too much test-driven...
U**-1 concern: Does not fit how I code. Slightly prescriptive test regime in terms of how it would cope with multi-threaded systems.
W**-22 concern: Does not fit how I code. how to apply this to driver code that interact with hardware
W**-22 concern: Does not fit how I code. Old leftover "test" code is left after testing. Must have a more comprehensive testing to ensure all avenues and old test code is checked.
W**-24 concern: Does not fit how I code. novice programmer, so I often don't intuit good interfaces, but that comes with experience, and hopefully the testing forces the issue!
W**-25 concern: Does not fit how I code. TDD feels unnatural and seems like it will kill being "In The Zone".
W**-29 concern: Does not fit how I code. Doing this methodology will take time for me to adapt efficiently to development. Not the coding aspect, just the mental approach to not add stuff I foresee I need early on.
W**-29 concern: Does not fit how I code. It seems a bit dogmatic in its approach. Do I really need to make every test fail first?
W**-29 concern: Does not fit how I code. Having the tests written in advance means I can think about coding and not what the tests should be. I can see this works well with pair programming. How about without a pair?
W**-30 concern: Does not fit how I code. The concept is new and I have to change my whole perspective on writing code. Whatever I have learned/worked on is now redundant with this concept.
W**-30 concern: Does not fit how I code. My concern is whether I will continue using this approach in the future or not.
W**-31 concern: Does not fit how I code. It is a bit hard not to write the code before the test. It may take awhile to get used to it.
W**-31 concern: Does not fit how I code. Need to change your mindset
W**-31 concern: Does not fit how I code. It was hard to write such small pieces of code, knowing that I would have to change it again in a couple tests later.
W**-31 concern: Does not fit how I code. Can I write tests with real hardware inputs rather than hardcoded values?
W**-32 concern: Does not fit how I code. Your trap for extra functionality was brutal. Sometimes it was easier to do the right thing than to fake it.
W**-34 concern: Does not fit how I code. Not intuitive to keep it simple, tended to keep "jumping ahead" in the process and coding too much.
W**-34 concern: Does not fit how I code. After coding for 35+ years, it's hard to not think ahead and write code I know I will need eventually before writing the test for it.
W**-35 concern: Does not fit how I code. Struggling with the WHY of the tight test-code-test-code loop. My natural inclination is to define the interface, then write the tests, then write the implementation.
W**-35 concern: Does not fit how I code. It seems like it would be faster to write several tests at once and try to make them all pass together when they are all tied to one implementation feature, rather than adding dummy code at each step.
W**-35 concern: Does not fit how I code. The tests are being passed using hard coding techniques that can be missed in refactoring code.
W**-36 concern: Does not fit how I code. Seems slow-ish, backtracking, not "able" to make simple steps forward when you should.
W**-42 concern: Does not fit how I code. Not particularly how I code, but how my coworkers code: how to integrate it into a team workflow.
W**-1 concern: Does not fit how I code. I often forgot to follow the "fake it til you make it" paradigm and moved straight into the actual 'unfaked' implementation.
W**-21 concern: double code size Writing twice as many lines to make it work.
W**-20 concern: drivers How can we use TDD to create microcontroller driver code? Is this possible, or is all of the TDD code just for business logic? Subtle bugs in driver code can be really hard to track down.
W**-18 concern: dynamic languages I find C/C++ difficult, coming from a dynamic language background, but I understand that you're focused on embedded systems.
M***-1 concern: easy to skip tests You have to think about everything that can go wrong. It's easy to skip tests.
Z**-1 concern: Efficiency Spending extra time on trivial cases. How to select which test and failure cases to try first.
C**-1 concern: effort Concern that the additional up-front development effort could be significant
B**-1 concern: Embedded Implementation in embedded systems with specialized libraries and multitasking apps using an embedded OS.
C*****-2 concern: Enumerate tests At what point do we enumerate all the tests we want to write? It felt to me that it would be good to take a first stab at enumerating the unit tests before writing the first one.
C*****-2 concern: expectations Trying to figure out what to test before you have a clear idea what you want.
B***-2 concern: Experience Nothing so far. Perhaps the decision as to when in the process to procrastinate and when to implement is a matter of experience.
T**-1 concern: experience Requires experience to know how many tests to write or how much to test on one function/class. Things you don't think of testing may remain as hidden bugs which are very difficult to figure out.
W**-21 concern: Extra effort A bit of extra effort to write many more lines of test code compared to the actual developed code.
W**-20 concern: fake That you should fake it until you make it. I am used to thinking carefully before I implement anything, so when I implement, it should be close to the finished code, not fake code.
C*****-2 concern: fake until you make At least in the first few tests, "the fake until you make it" approach seemed a lot of throw-away work.
B***-12 concern: faking Getting too good at faking it
D****-2 concern: feels slow It FEELS it takes longer to accomplish simple tasks, causing some anxiety in terms of perceived progress.
V**-1 concern: Force to write tests TDD forces you to write tests for expected results, which then have to be fixed.
B***-1 concern: forget Forgetting a test case and, therefore, forgetting to properly implement a function with a "return true" type of temporary implementation.
C*****-2 concern: Frustrating In the exercise, I wrote myself into a corner. I did not consider wrap around until I got to those test cases. I wish I had considered it from the beginning.
W**-18 concern: gaps Applying tests to the API may be a real artform and I'd be concerned about hiding failure through poorly scoped tests.
Z**-1 concern: granular I want to have a good understanding of the overall picture before coding. This granular approach made me go back and forth a couple of times.
C**-1 concern: granularity Doing this iterative test and development at very low granularity can be counterproductive.
Z**-1 concern: granularity The cycle is a bit too granular for my taste. I think the number of required steps tends to distract more than help. I prefer to collapse some steps in my everyday work.
B***-2 concern: greedy It's kind of a greedy algorithm, fear that might end up in a complex place with lots of cheats rather than incrementally getting closer to end goal
D****-2 concern: Grrr Not being able to think ahead and write the code to make design decisions that I know I am going to make. Having to write the bare minimum code to make the test pass oftentimes frustrates me.
V**-1 concern: habits Still need to force yourself not to get into deep coding quickly. Also need to break the habit of trying out possible solutions before constructing tests.
B**-1 concern: Habitual Hard to break coding habits
F**-1 concern: Hard Hard; takes more time to develop.
L*-4 concern: Hard to continue It's hard to get used to it, and hard to keep going when I have only a short time to do some prototyping.
A**-1 concern: Hard to think of the tests Too much extra work.
A**-1 concern: Hard to think of the tests I wonder how easy it will be to implement with more complex code that is not so easy to test.
A**-1 concern: Hard to think of the tests Not designing well the test or taking the wrong approach
A**-1 concern: Hard to think of the tests Hard to cover all aspects and not miss any feature to test. If you feel too confident after testing it could fail anyway.
A*-1 concern: Hard to think of the tests Difficult to perform procrastination.
A*-1 concern: Hard to think of the tests Might be hard to think of test yourself
A*-2 concern: Hard to think of the tests Figuring out how to write the correct tests
A*-2 concern: Hard to think of the tests * Could consume a bit more time than necessary * Chance of redundant test cases * Is my coverage good enough?
A*-2 concern: Hard to think of the tests Chance of redundant test cases
A*-2 concern: Hard to think of the tests Is my coverage good enough?
A*-2 concern: Hard to think of the tests How to cover all meaningful scenarios. It takes time to plan all tests.
B***-13 concern: Hard to think of the tests Immense effort and analysis required to develop the test cases upfront
B***-13 concern: Hard to think of the tests How do you figure out all the test cases? What if you don't think of a critical boundary condition?
C****-25 concern: Hard to think of the tests Incomplete production code / hardcoded values might be forgotten if the tests are not good enough. I now have to worry about test code and production code :)
C****-25 concern: Hard to think of the tests What if I forget to test something not so obvious?
C**-1 concern: Hard to think of the tests Coming up with the test and their orders can be the more challenging part. Poorly designed tests can lead to a terrible implementation flow
E*-1 concern: Hard to think of the tests It's a new way of thinking.
E*-1 concern: Hard to think of the tests Up front development time needs to be incorporated into project estimations. Understanding that features may take a little longer but less development time solving bugs in the future.
H***-1 concern: Hard to think of the tests Applying TDD on bigger functions or complex requirements
H***-1 concern: Hard to think of the tests Time is consumed thinking about and writing down the tests. I am sure it's all fruitful, but for a new user it's a big concern.
H***-1 concern: Hard to think of the tests Test cases are limited
H***-1 concern: Hard to think of the tests Test cases are limited
H***-1 concern: Hard to think of the tests What if we miss identifying the test cases?
H***-1 concern: Hard to think of the tests What if we miss identifying the test cases?
H**-1 concern: Hard to think of the tests Writing test cases seems like it might be difficult without explicit guidance on how/what to test
H**-1 concern: Hard to think of the tests The tests were written for me, how would I know what tests to write?
H**-1 concern: Hard to think of the tests Difficult to know if you have tested everything you needed to.
I**-1 concern: Hard to think of the tests still working on the simple things, would like to see how to do TDD in complex code
I**-1 concern: Hard to think of the tests I would not figure out some of the tests.
I**-1 concern: Hard to think of the tests Not sure how it works for more complex parts of a code
I**-1 concern: Hard to think of the tests It'll probably take some getting used to in order to come up with tests.
I**-1 concern: Hard to think of the tests I can still fail due to lack of imagination. If I don't have tests for all possibilities, I KNOW that's the first thing a user will do!
J***-1 concern: Hard to think of the tests In the drive to get tests passing, some shortcuts are taken. I'm concerned if the tests do not get revisited to flesh out hard-coded behavior, giving the illusion that all tests are good.
J***-1 concern: Hard to think of the tests Since TDD is new to me it is a bit challenging to do it on the spot.
M***-1 concern: Hard to think of the tests Code size may increase due to adding more test cases, and how to confirm full test coverage of a function/feature ?
M***-1 concern: Hard to think of the tests Code size may increase due to adding more test cases, and how to confirm full test coverage of a function/feature ?
M***-1 concern: Hard to think of the tests It's hard to think of good test cases.
M***-1 concern: Hard to think of the tests Not every test is easy to achieve. For example, memory population is a hard issue to debug and write the test.
M***-1 concern: Hard to think of the tests Will it take too much time to develop the test cases before starting to write code? Thinking of all the possible cases is not easy.
M***-2 concern: Hard to think of the tests User may not think of all test cases and miss a few.
M***-2 concern: Hard to think of the tests Sometimes it becomes hard to think various test scenarios. Paired or mob thinking might be helpful in this case.
M***-2 concern: Hard to think of the tests Coder must be ready with all the test-scenarios first. This may be difficult in real time situations.
M***-1 concern: Hard to think of the tests My main concern for using TDD is ensuring that I write enough tests and the correct tests as I implement the feature(s).
M***-1 concern: Hard to think of the tests Initial "brainstorming" of the concept that needs implementation is fragmented when using TDD; you don't go for the large overall use cases, since this is iterative.
M***-1 concern: Hard to think of the tests Don't know where to start the tests and code dependency on hardware.
M**-1 concern: Hard to think of the tests Initial time to adjust developing according to tdd will take some time
M**-1 concern: Hard to think of the tests Convincing others to use it could be difficult.
N/A concern: Hard to think of the tests It's hard to think what to test for and how to write them
N/A concern: Hard to think of the tests Finding meaningful ways to break your code
N/A concern: Hard to think of the tests How much testing is too much and how much testing is not enough. In some cases it is easy to determine what needs to be tested but in others it's difficult to determine edge cases.
N/A concern: Hard to think of the tests It is hard to think of tests and how to do it incrementally. Also, spending time on build errors.
N/A concern: Hard to think of the tests TDD reminded me of what I have seen past FPGA teams do in verification testing. The hardest part for me is translating that to C++/C#, but the languages were very easy to read and study.
N/A concern: Hard to think of the tests I am still concerned about the approach of TDD. I understand the concept, but do not understand how/what steps to take to write tests for our code...
N/A concern: Hard to think of the tests Tests may be skewed when working by yourself. However, working in a team can provide the right direction.
N/A concern: Hard to think of the tests It seems like you have to consider every little possibility and can get overwhelming.
N/A concern: Hard to think of the tests I am worried I wouldn't know where to start when trying to create my own test functions.
N/A concern: Hard to think of the tests As the program evolves, do we have to modify previous test code or do we have to still satisfy elementary test cases.
N/A concern: Hard to think of the tests I think it tends to be difficult to think of a test to make sure your code works as it should
N***-1 concern: Hard to think of the tests How to decide on which tests are appropriate.
N***-1 concern: Hard to think of the tests Avoiding redundant cases and balancing between bloating testcases and finding corner cases.
N**-2 concern: Hard to think of the tests It's a tighter test/write loop than I'm used to.
R**-1 concern: Hard to think of the tests Might get confused about which tests I did already and what still needs to be done.
R**-1 concern: Hard to think of the tests Maybe I'll forget something I faked in order to get my test to run but that should be implemented with real functionality.
R**-1 concern: Hard to think of the tests It is hard not to jump into implementation and test one behaviour at a time.
R**-1 concern: Hard to think of the tests At the end, I think I was writing "too much" code at once, without being able to cut it into multiple parts (the wrapping around part)
R**-2 concern: Hard to think of the tests It takes a lot of time; it's "new thinking".
R**-3 concern: Hard to think of the tests The tests should be written after writing the functional code. Sometimes it's really hard to design a good testing schedule for the functional code
R**-3 concern: Hard to think of the tests It is difficult to make a step-by-step implementation (code/test).
S***-1 concern: Hard to think of the tests Tests could become complicated and it may be difficult to tell if there is an error in the test setup vs an error in the implementation
S***-1 concern: Hard to think of the tests Here the tests were already written for me. But, having done this before, I know it can be difficult to think of tests, and to know that the test you come up with is a small enough chunk.
S***-1 concern: Hard to think of the tests If you forget one use case and it comes up later, you have to find all the instances where it comes up. Just feels like chasing my tail.
S***-1 concern: Hard to think of the tests Everything depends on robust tests, it's difficult to determine what should be tested first and how the test cases should be designed. The order in which you add tests could change the code structure.
S***-1 concern: Hard to think of the tests This is great for code that doesn't rely on outside data/sensors/environment/peripherals. I guess this is where DBC checks, etc. fit in real-time code.
N/A concern: Hard to think of the tests - Sometimes writing the tests take some time - Sometimes difficult to come up with edge cases for tests - Sometimes the test itself is wrong
N/A concern: Hard to think of the tests How to test functions that do not return a value and/or are not stateless is sometimes a headache.
N/A concern: Hard to think of the tests It is not always simple to come up with the right way to test a module; you can shoehorn yourself into a corner by getting ahead of the tests in the code.
N/A concern: Hard to think of the tests I might miss an important test
N/A concern: Hard to think of the tests At this point I don't have an overarching knowledge of the CppUTest directives, so I don't know how I could extend it.
N/A concern: Hard to think of the tests I believe a lot of skill is required to define good tests with suitable coverage but no doubt that will come with practice.
N/A concern: Hard to think of the tests Is also time consuming
T*-1 concern: Hard to think of the tests If the test code does not fully cover the problem, the code is not fully functional!
T*-2 concern: Hard to think of the tests Sometimes it may be hard to think of scenarios that could occur.
T*-2 concern: Hard to think of the tests Difficult to apply to a huge amount of source code where I just do tiny modifications.
U**-1 concern: Hard to think of the tests Tests could be wrong
U**-1 concern: Hard to think of the tests It is sometimes difficult to imagine the simplest solution.
U**-1 concern: Hard to think of the tests It may take too much time for the test-modify-more-tests-more-modifying cycle. I prefer getting a debug log to see how things work.
U**-1 concern: Hard to think of the tests I fear losing focus on final desired module behavior. While designing the code, you are also busy with designing the tests. And if you don't test in the proper way, a bug might not be hit and found
W**-22 concern: Hard to think of the tests In more complicated systems, where functions cover a lot of functionality, it can be hard to create tests.
W**-22 concern: Hard to think of the tests Sometimes it might be difficult to come up with a test case that would target an edge case or part of the function that I'm trying to test
W**-22 concern: Hard to think of the tests how not to over think or over design
W**-22 concern: Hard to think of the tests If I hard-coded everything and then forgot some test cases to check it.
W**-23 concern: Hard to think of the tests Thread safety is hard to ensure with TDD.
W**-24 concern: Hard to think of the tests I feel like it's easy to get ahead of yourself and then have gaps in the testing.
W**-24 concern: Hard to think of the tests Test code can also lead to possible errors when the target implementation is not fully exercised or understood
W**-24 concern: Hard to think of the tests Kind of lost when starting to code in the test environment; maybe I need to spend more time here.
W**-26 concern: Hard to think of the tests Concerned I won't come up with tests to cover every case.
W**-27 concern: Hard to think of the tests Sometimes I got a feeling of writing more code (drifting into requirements), starting to think ahead and losing track of the actual test I need to write.
W**-28 concern: Hard to think of the tests You have to think of all the failure modes. A test will not catch corner cases you didn't think of.
W**-29 concern: Hard to think of the tests Should you write a test for everything?
W**-29 concern: Hard to think of the tests How can I be confident that I've thought of all the tests? If I develop with hard-coded values and deliberate mistakes as part of TDD, I need to know the tests are strong enough to catch those.
W**-30 concern: Hard to think of the tests As we face more complex code bases, how can we anticipate the scenarios to test for and how do we incorporate a test harness in already well established systems?
W**-30 concern: Hard to think of the tests Writing the test cases might be harder than satisfying them. It might be difficult to break down the problem in such small increments in some unfamiliar domain scenarios.
W**-30 concern: Hard to think of the tests I'm concerned that I might miss things in the test-setup.
W**-30 concern: Hard to think of the tests I am quite concerned it will not be easy for me to write the "silly" tests like the one I wrote at the beginning. After that, everything gets easier.
W**-30 concern: Hard to think of the tests I am quite concerned about taking a long time to get used to TDD, as I have taken a long time to code a few lines.
W**-32 concern: Hard to think of the tests It relies heavily on how well you write your tests
W**-32 concern: Hard to think of the tests Writing tests could be difficult.
W**-34 concern: Hard to think of the tests Writing the test cases does take longer on the front end, though it should save time in the long run. It can also be hard to come up with test cases.
W**-34 concern: Hard to think of the tests The number of tests needed could be difficult to estimate when sizing the effort.
W**-34 concern: Hard to think of the tests Trying to think of every test case as the code is being written seems difficult
W**-35 concern: Hard to think of the tests Sometimes it is difficult to write a test when the code does not yet exist.
W**-35 concern: Hard to think of the tests Sometimes it is hard to come up with the idea of how to write the test.
W**-35 concern: Hard to think of the tests The skill of writing just enough code to pass the test.
W**-35 concern: Hard to think of the tests I got the feeling that finding the best "next test" to implement is one of, or even the single most, important aspects of TDD.
W**-35 concern: Hard to think of the tests A concern looking in from the outside perspective of development is that one may get lost in the weeds of this slower moving iterative process.
W**-35 concern: Hard to think of the tests Although the test supports a solid design, the TDD requires a different mentality than I am used to.
W**-36 concern: Hard to think of the tests I'm concerned about tests not being well developed and directing the implementation in the wrong direction.
W**-37 concern: Hard to think of the tests Might test the wrong thing. Are you adding tests that are useless/not doing anything? Hard to know sometimes.
W**-37 concern: Hard to think of the tests It seems easy when you are following a test plan; it surely would be a lot harder without one.
W**-37 concern: Hard to think of the tests I think it is hard to think of the tests and not get ahead of myself and write too much code. With practice, this should get easier.
W**-37 concern: Hard to think of the tests It's hard to think of good, non-redundant tests.
W**-37 concern: Hard to think of the tests It will be more difficult to write all the tests in the right order. Since the tests guide development, the order seems to matter a lot.
W**-38 concern: Hard to think of the tests Practicing how to write tests that don't miss edge cases. The time it takes for me to know how to write each unit test, and quality tests in general (I assume I'll improve over time).
W**-38 concern: Hard to think of the tests By not writing the correct implementation to pass the test you might put yourself at a disadvantage later down the line.
W**-40 concern: Hard to think of the tests It can be hard to write a test sometimes, and even harder to write a test that tests in the correct way.
W**-40 concern: Hard to think of the tests What if I write a bad test and don't realize it right away? Will I waste more time trying to debug issues caused by a bad test?
W**-40 concern: Hard to think of the tests You have to be sure you cover any possible condition otherwise you would end up having the same bugs.
W**-40 concern: Hard to think of the tests Discover the right tests to do
W**-40 concern: Hard to think of the tests Applying it to code that depends on the hardware, dependencies on other modules, making sure that the tests cover all requirements and use cases and boundary conditions.
W**-40 concern: Hard to think of the tests It does not seem easy to define the right tests.
W**-40 concern: Hard to think of the tests There might still be lots of holes in my code
W**-41 concern: Hard to think of the tests Shifting the thought process to think of test scenarios first
W**-41 concern: Hard to think of the tests As you progress further in TDD and previous tests fail, you tend to modify your code to make the previous tests happy.
W**-41 concern: Hard to think of the tests Difficulty in thinking about the tests needed for validation.
W**-42 concern: Hard to think of the tests I am a bit concerned that it might take time to get the feel for this type of coding. Maybe immersion and time are what it takes?
W**-42 concern: Hard to think of the tests Getting stuck designing useful testcases.
W**-42 concern: Hard to think of the tests It's important to implement your tests correctly, so you don't get a false sense of security
W**-43 concern: Hard to think of the tests It's not easy to determine how much scope to break down for each iteration loop. In addition, it's hard to think of what tests to add in hindsight.
W**-43 concern: Hard to think of the tests When to stop writing code. How do I know I wrote the right tests?
W***-1 concern: Hard to think of the tests in the example I had tests already defined - that was fun, but I would need to change my way of thinking to first find scenarios, and then implement the code
Z**-1 concern: Hard to think of the tests You have to think of the needed tests.
Z**-1 concern: Hard to think of the tests I find it difficult to think, imagine and build TEST cases that not only serve to test the more immediate SW implementation I have just done, but also help me in future steps.
Z**-1 concern: Hard to think of the tests It is difficult to think well about tests. It is necessary to train the mind to think well in simple and concrete functional tests
Z**-1 concern: Hard to think of the tests I wrote bad code
Z**-1 concern: Hard to think of the tests I am so worried.
B***-11 concern: hardware interface So far, it seems to be easy to test "stand-alone" code, i.e. code that does not depend on hardware, or external communications, for instance. How can TDD be applied to these cases?
W**-21 concern: headers The step of adding the function prototype to the header file makes the feedback cycle slower than I'm used to. Normally, in the languages I work with, I get to a running but failing unit test faster.
Z**-1 concern: hesitate Hard to determine when I should follow TDD and when I should be confident in my programming skills and skip it for efficiency. Some functions are hard or expensive to design unit tests for.
C**-1 concern: hurdle These tests are useful for a win32 platform. When it comes to putting these in place for our dsp platform, which has a different compiler, data types and architecture, the workflow is not trivial.
N/A concern: I have no concerns. There weren't any concerns I had
N/A concern: I have no concerns. No concerns
N/A concern: I have no concerns. I don't have any concerns.
N/A concern: I have no concerns. I was concerned about how much C/C++ knowledge I would need for the training, but it's not a concern anymore.
N/A concern: I have no concerns. Think so far TDD is proving very useful.
W**-1 concern: I have no concerns. Nothing.
B***-2 concern: I'm Crazy How does TDD work with techniques like "Property Based Testing"? Can they be combined?
B***-12 concern: implementation How to implement this in what we do at work.
V**-1 concern: Implementation here How we can implement this in our own systems with the build times we have for one software component. Too slow feedback
V****-2 concern: inadequate Implementing code to pass test may not guarantee it does what is supposed to if test coverage is inadequate.
M***-1 concern: incomplete The many steps make it feel as if not all things are tested, especially the first few (very simple) tests...
C**-1 concern: incorrect testing My test script for testing the code may be wrong and misleading
C**-1 concern: infeasible Too structured and not that practical in a deadline driven work environment.
W**-21 concern: Integration How can we integrate TDD with different Requirements?
Z**-1 concern: integration It is not yet obvious how TDD scales to integration or system level tests.
V****-2 concern: integration code how to test integrated or system level code
C**-1 concern: Interface Usually we lock down the interface so others can use it, this TDD approach keeps the interface in flux
B***-1 concern: Irritating Allowing compiler/linker errors that you know are going to happen doesn't seem like it adds a lot of value.
D****-2 concern: Laborious TDD has more deliverables since the functional code and the test code are required. This could appear as a hurdle to some.
C**-1 concern: lazy it is OK to be a 'lazy' programmer, and procrastinate until you are forced to. If you run out of time, you're left with unfinished work.
C*****-2 concern: legacy Integrating the concepts into our legacy codebase.
D****-2 concern: legacy It seems very straight forward for new development but very hard to wedge into legacy code.
L*-4 concern: legacy Can I apply TDD for the legacy codes when I change part of them?
T**-1 concern: legacy How to make it work with legacy code.
W**-18 concern: legacy Adding tests for new or existing functionality in code that was not at all written with TDD in mind.
B***-11 concern: lengthy It feels like it takes too long. I instinctively want to write more.
Z**-1 concern: less-coverage I felt like I had less confidence in my coverage, perhaps because I spent very little time thinking about a correct implementation.
B***-11 concern: limited identifying hard to track down integration or timing bugs that are not reproducible in a test-harness
L*-4 concern: Limited-Time Spending some time to make TESTs.
C**-1 concern: Linear I'd rather work on one function at a time; however, TDD could be applied to this approach too.
C**-1 concern: long process The long, repeated procedure breaks my flow of ideas on how the pieces are put together in the module I'm developing.
Z**-1 concern: longer Results in additional work to revise previous tests when the interface/design changes midway through the TDD cycle
B**-1 concern: Loss of fluidity I feel like I am hopping all over the place rather than getting the job done. One line, test, one line, test, rather than getting my ideas all on the table and sorting out the problems.
C**-1 concern: maintainability There is a maintenance cost for the tests. It seems possible that the volume of test code might eventually exceed that of the production code.
V****-2 concern: maintainability If tests are built around an interface, will it make changing the code more difficult, as both the interface and the tests will need changing?
W**-19 concern: Mess? Sometimes I start making a test pass and suddenly more and more tests fail ...
W**-18 concern: misleading TDD gives a misleading sense of security that the code is safe. Any missed test case will lead to bugs. This approach only tests the expected outcomes; the code could be doing far more, or not enough.
W**-19 concern: More At first, it seems like a lot of extra code to write
W**-18 concern: more basics Probably need some more low-level background information. The basics, like what all of the available test macros are, and how it can be targeted at different development environments.
W**-21 concern: More Code Lots of test code needs to be written
M***-1 concern: more time Longer development time before first working proto release version.
V**-1 concern: More work when wrong Sometimes it is really hard to drive the design using test cases. If the abstraction of the interfaces goes wrong, there is a lot to change in the test code as well.
W**-20 concern: new It goes against the grain of doing it the other way for many years
B***-13 concern: None How to jump from the mockup environment to the real HW.
B***-13 concern: None It can feel like a lot of friction to do something simple.
B***-1 concern: none Nothing yet.
C****-25 concern: None None.
C****-25 concern: None none
C**-1 concern: None Making small changes that seem obvious makes you think you are going slower.
E***-1 concern: None learning curve, takes a long time to become fast at TDD
H***-1 concern: None Reuse of the tests in other tests
H**-1 concern: None No concerns.
I**-1 concern: None tests for complex products and UI components
M***-2 concern: None Nothing!
M***-2 concern: None I don't have any concerns as of now.
M***-2 concern: None None, as of now.
M***-2 concern: None No concerns.
M***-2 concern: None No concern
N/A concern: None I don't have any concerns.
M***-1 concern: None None
M**-1 concern: None Not clear how much to test within each test. For example: fill buffer, empty buffer. Should we be testing IsFull and IsEmpty after each put and get?
M**-1 concern: None Just seeing the test fail does not explicitly tell us that what we are intending to test is failing. Possible false sense of security.
M**-2 concern: None Some of the steps seemed very small and would waste time, but see that maybe this is more in the training to learn the system
M**-2 concern: None Need to understand the test framework a little better for the helper functions.
M**-2 concern: None No concerns so far
N***-2 concern: None If a test case has any compilation errors, execution stops right there; it does not go on to the next test case.
R**-1 concern: None Making tests
R**-3 concern: None Writing tests for GUI might be more challenging.
T*-1 concern: None Do not have any concern about TDD.
T*-1 concern: None Additional effort to write the test first and then the function; I guess it will need more time if we don't just remove comments.
T*-1 concern: None A bit irritating in the beginning
T*-2 concern: None nothing yet
U**-1 concern: None Procrastination and lazy steps to get a test to pass have to be done in a smart way.
U**-1 concern: None We will still be able to add bugs that the tests didn't catch, so we need to think more about what tests are needed. That doesn't mean thinking of the tests was hard; just don't assume you have caught all bugs.
U**-1 concern: None Will it be as time efficient with bigger projects or when integrating older code?
U**-1 concern: None How it will be applied to our existing code that is more complex and already written.
W**-21 concern: None No concerns so far.
W**-22 concern: None organizing the volume of produced tests
W**-24 concern: None A lot of time is spent waiting for the compiler to discover that I forgot to define/declare a function.
W**-25 concern: None Overhead of testing framework.
W**-26 concern: None none
W**-26 concern: None None
W**-27 concern: None None
W**-28 concern: None I already try to use TDD, so I am a bit biased; not a lot of concern, but I do tend to write multiple tests and then go implement a chunk of them instead of one test at a time.
W**-28 concern: None How can this work with hardware dependent code?
W**-31 concern: None Nothing really
W**-31 concern: None (Lack of) Acceptance by other embedded programmers.
W**-32 concern: None Overlooking test cases. Accidentally leaving 'false positive' results in refactor.
W**-33 concern: None I find it too easy to skip the "test fails" step
W**-33 concern: None None so far.
W**-34 concern: None Not many, I like it, just need to sharpen up on my C for your course. :)
W**-34 concern: None I had one concern before we start: How to convince colleagues to get into TDD. But now I'm sure that if they take this course, they will love it too.
W**-35 concern: None Nothing.
W**-35 concern: None Hopefully covering all the test cases. Implementing the logic and knowing all the syntax for that.
W**-36 concern: None Liking the method too much!
W**-36 concern: None The hardest part for me is sticking to the red, green, refactor cycle. I often skip the red.
W**-36 concern: None I don't have concerns, it will take more time to code but it saves time in the end.
W**-36 concern: None Can be easy to get too far ahead
W**-37 concern: None Initial Overhead
W**-38 concern: None We don't teach it early enough in students' SW learning.
W**-38 concern: None No concerns. It's a tool I've used for 10 years.
W**-39 concern: None nothing so far. but I've been wanting to take this course for a while so that's understandable.
W**-39 concern: None None at all because I was already familiar with it
W**-40 concern: None how to apply to Python and PHP
W**-40 concern: None None
W**-40 concern: None nothing
W**-41 concern: None no concern, all fine (:
V**-1 concern: NoRequirements Where are the requirements
F**-1 concern: normal It takes me a lot of time to develop a task with TDD. But I will try it.
C**-1 concern: not Somebody will think that this is a method to fit all use cases.
V**-1 concern: Not covering all I sometimes felt afraid there were needed test cases we would miss, not covering all expected behaviour.
L*-4 concern: not enough time I don't have enough time to develop full test case on my real work.
Z**-1 concern: not good enough What concerns me the most is: what if I don't come up with good tests? Also, wouldn't it be a lot more complicated when the function is way more complex?
V**-1 concern: nothing nothing.. i think it works fine
B***-1 concern: obstacle dependencies and coupling
V**-1 concern: old habits risk falling in to old habits of not using when stressed
A***-1 concern: Other. My main concern is that unit testing on its own (not TDD) can make you focus too much on the details and miss the big picture. You risk making wrong design choices and then being heavily invested.
A**-1 concern: Other. I could miss some functions with hardcoded return values or some cases without being properly tested.
A**-1 concern: Other. Complex modules are difficult to test; sometimes modules are not so easy to test (most of the time because the design is wrong). The test framework probably won't compile on my embedded systems.
A**-1 concern: Other. You might forget some hard-coded value that you wrote just to make a test pass, which now interferes with another test.
A*-1 concern: Other. Writing code that I know is not going to work, just to pass the test.
A*-1 concern: Other. Hard to think in small incremental steps
A*-1 concern: Other. Will this approach work on embedded code, with hw dependencies, and on more complex problems ?
A*-1 concern: Other. That you are not testing the tests well enough. And that this can cause uncertainties when debugging future failed tests. Is there a bug in the implementation or a bug in the test?
A*-2 concern: Other. Though I understand the reason behind it, it felt wrong to hard-code things when I know what should actually be implemented.
B***-13 concern: Other. Trying to compile a test that doesn't have supporting code will slow cycle time and will confound scope-aware IDEs.
B***-13 concern: Other. It adds a layer of complexity that requires some understanding and skill to tackle.
C****-25 concern: Other. Writing code with the minimum needed to pass the test (e.g. return 42) has the downside that if one forgets to test a certain behavior, the code will stay wrong.
C****-25 concern: Other. You must employ tools to cover all potential bugs strictly related to the implementation. I see TDD checks adherence to requirements but can't give any guarantee on the quality of the implementation.
C****-25 concern: Other. It can happen that we develop tests that might not catch bugs introduced in the step-by-step approach. Also concerned about how it fits in more complex projects.
C****-25 concern: Other. What's the test completion criteria? How to identify test gaps?
E***-1 concern: Other. Changing a prototype could lead to lots of broken tests. Finding the right amount of up-front thinking about your interface. But this is likely something that comes with experience.
E*-1 concern: Other. overhead of test
E*-1 concern: Other. I have concerns that hardcoding functions to pass test can lead to missing requirements which will then not be caught by the test suite.
E*-1 concern: Other. inaccurate test results due to hardcoded values
H***-1 concern: Other. I'm a little concerned that the process may force multiple refactors since the early tests may allow a simple implementation.
H**-1 concern: Other. It does seem that TDD can lead to some workarounds or repetition if you do not go back and fix earlier revisions.
H**-1 concern: Other. It seems easy to fool oneself about whether test coverage is complete or not. It's easy to overlook some edge case, have all tests pass, and believe one's code is thoroughly tested.
I**-1 concern: Other. I guess that this approach should be shared and agreed upon by all the developers in the team.
I**-1 concern: Other. my concern
I**-1 concern: Other. that it might not be so easy to use in more realistic (and complex) scenarios
I**-1 concern: Other. May not be clear order of tests to write. Probably would think of tests to write simultaneously while thinking of interface.
I**-1 concern: Other. I'm concerned that I think ahead and jump to implementation. I need to slow down to think about testing
J***-1 concern: Other. How do you avoid overly brittle tests that are far too sensitive to internal changes?
J***-1 concern: Other. Getting used to the steps/mentality of TDD can be challenging because of the "fake it 'til you make it" approach to implementation. It feels strange to make something that is wrong on purpose.
M***-2 concern: Other. Not feasible to compile after every test if build process takes too much time
M***-2 concern: Other. Sometimes we need to modify the code just to maintain the old tests
M***-1 concern: Other. I need to keep a separate list of tests to implement as I think of them. It would be nice to just put the test down as I think of it even though it's not ready for prime time.
M***-1 concern: Other. TDD could create excess code.
M***-1 concern: Other. I'm concerned that I won't be able to convince my team and managers that this is a worthwhile practice.
M***-1 concern: Other. use it in real world
M**-1 concern: Other. When jumping ahead a bit too much, I was a tad miffed trying to figure out what to comment out.
M**-2 concern: Other. It's definitely a change from my current coding paradigm.
M**-2 concern: Other. The primary goal being test of the code and not the application.
N/A concern: Other. Intentionally implementing bugs instead of doing it right the first time
N/A concern: Other. I didn't like how it was penalized to think ahead
N/A concern: Other. It was difficult to adjust mentally to coding to pass tests first without solving the whole problem (i.e. hard coding values)
N/A concern: Other. There will be a lot of tests, but that's not a bad thing.
N/A concern: Other. I wish there were more guidance on how to perform TDD in the discussion notes in the exercises.
N/A concern: Other. Does one strictly adhere to Fake-it-till-you-make-it? For example, we have implemented the circular buffer before we tested for this specific behavior and this was flagged in the output result.
N***-1 concern: Other. Again about the legacy code base: my understanding is that TDD is an approach for *developing* software and that TDD is not about writing lots of tests for existing legacy code
N***-1 concern: Other. It could be useful for new code (with all the requirements/specs defined). I'm a bit concerned about how to implement this with our current software (legacy, convoluted, too many dependencies/includes).
N***-1 concern: Other. Gives me anxiety when you can see the future problems at the earlier steps and you can't code ahead
N***-1 concern: Other. It remains to be seen how this can be applied to our legacy code base
N***-1 concern: Other. Applying this to neato's codebase because of tight coupling / dependencies
N**-2 concern: Other. How does this scale up to higher order tests - perhaps functional or whole system?
N**-2 concern: Other. It requires discipline which I lack.
R**-1 concern: Other. Incrementally discovering the API might not lead to the best possible API. If I discover my API is bad after writing 60 tests, I have to change a lot!
R**-2 concern: Other. If the project grows, the compile time might get too long. What about third party dependencies? How to deal with them?
R**-2 concern: Other. When a problem becomes more complex I will not be able to write all tests. Without TDD I was not able to write the code. In any case the problem will not be solved.
R**-2 concern: Other. No worries at the moment
R**-3 concern: Other. Nothing to complain
R**-3 concern: Other. Not always easy to split tasks; I often work on many tasks at the same time. Need practice. Also, how and when do we design our code in these steps?
S***-1 concern: Other. I still don't understand the concept. If I can understand the CircularBuffer exercise at least up to the GET and PUT methods, then I will probably understand TDD.
S***-1 concern: Other. It's difficult to hold back wiring the whole implementation while doing TDD.
N/A concern: Other. Having to focus on designing the test first and then only keeping myself to write code that satisfies the test, and not code features that "I might need later" or "early optimizations".
N/A concern: Other. My only concern is that it is hard to break old habits and even during TDD, I tend to cut some corners here and there. It takes time and some effort to overcome that.
N/A concern: Other. Is it possible to miss the 'big picture' by just making the it pass the tests... No, I don't think so, IF your tests are accurate/representative.
N/A concern: Other. I'm concerned that the TDD approach may not scale well or apply to most real development efforts.
N/A concern: Other. In my embedded system firmware I configured an input capture peripheral for measuring the frequency of a signal. Can I test that the peripheral is configured correctly and that the captured values are correct?
N/A concern: Other. The test results give a summary of the number of tests run, etc. Is there a way to create a report of the tests themselves, along with a pass/fail status?
T*-1 concern: Other. Good for new software, but not so good for changing existing software (not yet covered by tests)
T*-1 concern: Other. Sometimes misunderstanding the test result (for example, what the yellow sign really means).
T*-1 concern: Other. Good for new software but I'm wondering how it will work for code already existing and without tests yet.
T*-1 concern: Other. Testing code which calls other code might be expensive (time, environment)
T*-1 concern: Other. How do I reach the end of the test code? When is it finished? Who decides this?
T*-2 concern: Other. It's still the developer who implements the test cases from his coding point of view, and he might overlook important states which an independent tester wouldn't.
U**-1 concern: Other. I am not sure there is much benefit in environments with an extremely good design stage, or in the opposite case, a poor design phase; I am not sure what I should do.
W**-22 concern: Other. I have a good feel for how this works on a micro level with functions/classes. I'm curious how it would work on a higher system level with multiple components integrated together.
W**-23 concern: Other. It really requires self-discipline to write those minimal, not-functional-but-make-the-test-green implementations.
W**-24 concern: Other. Some types of requirements may be difficult to verify this way, particularly multi-threading issues with lots of asynchronous randomness.
W**-24 concern: Other. My concern is that the test code now has to be maintained as well.
W**-24 concern: Other. By willingly implementing only specific pieces at a time, it may be difficult to know when to implement the proper functionality, vs. just enough to pass the test.
W**-26 concern: Other. As far as TDD itself, no concerns. I do see the need to ensure that I have a bit more experience with hands-on programming but I'm eager to learn the good habits sooner rather than later
W**-26 concern: Other. I don't have concerns about the concept of TDD. Maybe this is a slow way of working if you are Superman, someone who doesn't need fast feedback because he makes no mistakes.
W**-27 concern: Other. So far we have used cyber-dojo. How do I integrate CppUTest or another tool to a current project?
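On the question of moving from cyber-dojo to a real project: CppUTest provides a `CommandLineTestRunner`, so a minimal `AllTests.cpp` just calls `CommandLineTestRunner::RunAllTests(argc, argv)`, and the tests link against the CppUTest libraries. A build sketch follows — all paths and file names are placeholders for your own tree, and it assumes CppUTest is already built under `$CPPUTEST_HOME`.

```shell
# Sketch only — paths and file names are placeholders, not a specific project.
g++ -I"$CPPUTEST_HOME/include" \
    tests/AllTests.cpp tests/CircularBufferTest.cpp src/CircularBuffer.c \
    -L"$CPPUTEST_HOME/lib" -lCppUTest -lCppUTestExt \
    -o run_tests
./run_tests -c   # -c colors the output; -v lists each test name and its result
```

CppUTest also ships a `MakefileWorker.mk` that automates this once variables such as `COMPONENT_NAME`, `SRC_DIRS`, `TEST_SRC_DIRS`, and `CPPUTEST_HOME` are set in your Makefile.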
W**-28 concern: Other. That certain parts of embedded development would have a hard time utilizing TDD
W**-28 concern: Other. Strict TDD emphasizes one step at a time when experienced programmers may be able to at least combine one or two steps.
W**-28 concern: Other. lose confidence when something is "done"
W**-28 concern: Other. It's hard to not want to get ahead of yourself during implementation
W**-28 concern: Other. When I don't know how to test something, e.g. dynamic memory allocation.
W**-29 concern: Other. Well, I was pretty lost right at the beginning, so there wasn't much I could do to proceed. Of course, this is a limitation on my end...
W**-29 concern: Other. Deliberately writing a naive implementation (with hard-coded values etc) makes me feel like, if I take a break and come back, I'll lose the thread of what I was doing and forget to clean up.
W**-29 concern: Other. Hard-coding the implementation to make the tests pass feels morally wrong (this eventually wears off?)
W**-30 concern: Other. My concern is that I look stupid.
W**-30 concern: Other. When I am doing it alone, it's sometimes hard to come up with a decision.
W**-30 concern: Other. Spending too much time in the incremental code construction that I might forget some hard-coded values later on.
W**-30 concern: Other. Integration of unit-testing sw with current toolchains...
W**-31 concern: Other. I don't fully understand how to utilize the create/setup functions yet to ensure I'm testing the proper things.
W**-31 concern: Other. Though the approach is structured, I feel very tempted to make shortcuts, even if I logically believe they'll save time down the line.
W**-31 concern: Other. I wonder how to communicate the shift in time investment (more time to complete stories during sprints, less time testing) to upper management that may not understand TDD, even if the dev team does
W**-32 concern: Other. Because of the narrow scope of a singular test, it can make anticipating future problems harder. It kinda seems like you are reacting to the test rather than thinking about what could go wrong.
W**-33 concern: Other. some of the error messages are not that easy to decode (I am sure this will come with practice though)
W**-35 concern: Other. Development process is backwards compared to how most developers are trained. May make it harder to get adopted in a team
W**-35 concern: Other. I feel like there is high potential to miss steps if you keep putting off implementation details until later on.
W**-35 concern: Other. Colleagues not doing it in common code
W**-35 concern: Other. It seems impossible for one person to write a test and another to write the implementation unless we worked in close pair programming.
W**-35 concern: Other. we pair, actually we mob, and I'm wondering styles/options for TDD together, rather than solo.
W**-35 concern: Other. Refactoring seems like it can be a nightmare with 30-40 tests intermittently failing as you continue (sometimes due to a fundamental issue).
W**-37 concern: Other. Learning the framework, though that's only a concern at this point, when I haven't used it too much.
W**-37 concern: Other. I am concerned about adding a layer of complexity to my programming. I expect this to resolve with more experience with TDD.
W**-38 concern: Other. Getting whole team to do it. Takes humility and extra skills.
W**-40 concern: Other. Writing a test and then writing just enough code so that it passes becomes frustrating when you can already figure out how the final implementation is going to be
W**-41 concern: Other. It can be tricky deciding between the simplest thing to make a test to pass versus picking something a bit more complicated that moves us towards the goal.
C*****-2 concern: overhead It seems that the overhead of intentionally failing one test at a time is very high. It seems like it would be much more efficient to work towards a few tests at a time and not intentionally fail them.
B**-1 concern: overlook Intentionally ignoring errors if they aren't part of the current test. Feels like I could forget about something I overlooked if I don't write a test for it.
W**-19 concern: overlook Using hard coded values to quickly pass a test. It felt like cheating, and I was nervous that I could forget to fix that later and let the hard coded value into production code.
M***-1 concern: Overthinking I notice my mind tries to go 3 steps ahead instead of sticking to the step-program. Hard to shutdown/slowdown
M***-1 concern: Overview After a while I am doubting whether my original tests still provide the "correct" coverage /mindgames?
M***-1 concern: Overview The sheer endless amount of tests you write may clutter the overview of what is tested, and what not (yet).
W**-18 concern: overwhelmed I wish we started with a much simpler code.
W**-18 concern: overwrought Writing templated wrong code only to end up rewriting the right code. The design follows the test.
L*-4 concern: pass Making test code to only pass.
V****-2 concern: Planning How to make the TDD practices work with legacy code? Writing the TEST_GROUP function for legacy code can be a lot of work. What is a good ratio of test code to code under test?
B***-1 concern: Portability Portability.
B**-1 concern: Practice Might take some time getting better at the process at the beginning.
F**-1 concern: Practice Takes time to get familiar with.
L*-4 concern: practice Can I write all the test code when I write code in real work?
Z**-1 concern: practicality It's not suitable for things which are highly difficult to test, e.g. UI code.
W**-21 concern: predecessor Implementing it against an already established code base
B***-2 concern: procrastination Getting the experience to figure out when to drive toward a solution or delaying implementation.
C**-1 concern: progress-ish? It can feel like you are making good progress when you may actually not be developing much useful functionality.
B***-12 concern: puzzled Integrating with components that I control. Integrating with components that are 3rd-party controlled. Integrating with system components.
L*-4 concern: range How closely does the test coverage resemble real testing?
B***-1 concern: Redundancy The approach being taken in the exercise seems to result in a lot of redundant code being written to simply satisfy test cases when implementing features in a different order would have removed that
M***-1 concern: redundant Total running time of the test suite might become too large because of a lot of redundant tests.
F**-1 concern: refactor time In a large system, does it take much effort to refactor code for every new test case?
C*****-2 concern: retrofitting Retrofitting legacy code seems like it would be overwhelming.
B***-12 concern: rework risk of not planning enough by just designing for the next test, requiring a lot of rework
W**-20 concern: rework Am I wasting effort refactoring code that I *KNOW* is wrong in the long run (like with hardcoded values)?
B***-11 concern: Scalability Concerns about scaling to more complex functionality and hardware integration.
W**-18 concern: scaling How to test more complex functionality (say implementing an FFT) - it seems difficult to break that into as small of chunks. Is a result test the way to start?
B***-1 concern: schedule how to add this type of testing without impacting overall schedule
V****-2 concern: Scope Developer writing test cases for own code? How much time to devote for writing tests?
B***-12 concern: Security False sense of security, "All my tests pass so I must be alright!" But in reality the tests were just written poorly.
D****-2 concern: self-limiting It's really hard to limit myself to the smallest possible change.
C**-1 concern: Short-cuts Easy to short cut the steps by knowing where you want to get to and doing many steps at once.
Z**-1 concern: simplistic I can see how it could be adopted as a development strategy in this toy example of a CircularBuffer, but not sure how it would translate in more complicated scenarios?
M***-1 concern: situations I am not sure if we tested everything there is to test up to now
B***-1 concern: slow It will slow me down when coding. Not used to writing test before implementation.
C**-1 concern: Slow Very time consuming. Baby steps not often appropriate for fast development cycles. Antagonistic to requirements driven development as dictated by many outfits.
T**-1 concern: slow Feels slow when I think I already know the implementation required.
V****-2 concern: Slow Slows down development time
Z**-1 concern: slow 1. Development becomes slower 2. When I start with new tests, I may need to rewrite/refactor existing code.
Z**-1 concern: slow The very small iterations are substantially impacted by the speed of the build, and make it easy to take a poor design path early in development. I prefer to write more tests up front.
V**-1 concern: Slow process Slows down development process.
B***-12 concern: slower Overall more time consuming, though bug prevention may save time in the long-run. What if a critical test condition is not thought of during development?
B***-12 concern: Speed A fast build system is critical.
F**-1 concern: Stranger It feels strange to me because it's new to me. Do we need it when my system is small?
F**-1 concern: Strong How to test all functions simply, covering all conditions.
C**-1 concern: Stubs Stubbing production code to make tests pass allows these stubs to be deployed if missed by peer review. I'd rather see tests failing until production code functions are implemented.
Z**-1 concern: Subjective Just because there are tests, it doesn't mean they are good tests.
T**-1 concern: syntax New keywords/test syntax to learn. Need cheatsheet/quick reference to help make development of tests efficient
F**-1 concern: taking time It takes more time to develop features
W**-21 concern: TDD FrmWrk + DevTool During actual development projects, we should have a common platform for development and testing. My concern is how to integrate a test framework for TDD into actual development tools.
C**-1 concern: tedious very long and tedious development process, sometimes I feel like I've lost sense of direction of where the end product should look and behave like
M***-1 concern: Tedious Takes more time, especially on smaller task where you already might know all the pitfalls or how your implementation should behave. It also is a bit hard to stick to TDD in that case
B***-1 concern: Test bugs We can write bugs in the test cases.
L*-4 concern: test case Sometimes I face the limits of my imagination: how can I specify test cases that fully test my module?
L*-4 concern: test for corner case I am concerned about how to write test code for corner cases.
D****-2 concern: Test Mistakes Writing bad tests which give you a false sense of security.
F**-1 concern: test the tests - Do we ALWAYS need to make the test fail before making it pass? - How we make sure the tests cover all the corner cases?
L*-4 concern: testcase How do you systematically write test cases?
C**-1 concern: testing testing Gives fake sense of achievement by writing tests that fail, then making them pass, doing some development on the way. But the actual activity feels a lot like we are developing tests, not a code
B**-1 concern: tests reliant Becoming too reliant on the test cases and not thinking through the code. It seems like writing the test cases takes longer than writing the code.
A**-1 concern: There is no design. Think before coding
A**-1 concern: There is no design. It feels like you are going to forget to modify the initial simple tests as you modify them on demand.
A*-2 concern: There is no design. TDD creates code in a very small incremental way.. but does this generate code that is efficient, fast and optimized, or do we get drawn into a sub-optimal implementation?
A*-2 concern: There is no design. I don't like to implement things wrong
B***-13 concern: There is no design. Not sure how to organize/plan my own test iterations so it is as smooth as the demo was. I'm used to doing all the boundary checking at the start since those are common pitfalls which seems opposite.
C****-25 concern: There is no design. I could "overfit" and "cheat" and make tests pass even if my implementation is a hack
C****-25 concern: There is no design. Lack of global view we are sometimes too focused on the specific piece we are implementing and we should step back to reconsider the overall design
C****-25 concern: There is no design. Perhaps by focusing on having each individual test passing, I forget about the bigger picture and how my overall design can be optimized.
C****-25 concern: There is no design. It doesn't have a design/architecture approach in the process. Having the tests passing doesn't mean that the implementation is optimal.
C****-25 concern: There is no design. If a test case is missing, we will miss the code for a use case.
C****-25 concern: There is no design. Can lead us into a false sense of security: just because the tests pass, doesn't mean we can skip the design phase.
C****-25 concern: There is no design. TDD is not separating requirement from implementation testing.
H***-1 concern: There is no design. Not sure how it fits into a large system approach (not a simple, well-defined class). How does the bigger picture work?
J***-1 concern: There is no design. Using partial implementations with otherwise incorrect behavior to satisfy early tests opens up a risk of forgetting about something if you forget to write a test for the affected condition(s).
M***-2 concern: There is no design. Setup and tear down might work for simple tests. What if we need to test more complex scenarios? Would we need to use helper functions to get to a state?
M***-1 concern: There is no design. With the 'faking', it seems that there is no premeditated design effort and that design happens as a side effect of the testing. This seems backwards.
M***-1 concern: There is no design. Is TDD scalable?
M**-1 concern: There is no design. None to think of
M**-2 concern: There is no design. Is it possible that this method does not allow flexibility for code changes down the road? Feels like if the design specifications change it might leave us with a structure that is harder to change.
M**-2 concern: There is no design. Thinking in terms of TDD and thinking in terms of internal software design are somewhat at odds or in potential conflict or at least competition with each other.
N/A concern: There is no design. When there is no initial design code or you are unaware of what you need to create, TDD can be difficult to start
N/A concern: There is no design. Implementing the bare minimum to pass each test can lead to large code changes later that may have been easier to implement if done from the beginning.
N**-2 concern: There is no design. Without design or architecture up front, how do we keep the design from going totally off the rails?
R**-1 concern: There is no design. TDD may lead you to not think about good design or implementation, because you are only trying to get the tests running ;)
R**-1 concern: There is no design. Difficult to accept "forgetting" about the design you have in mind and concentrate on smaller steps. My head had the whole concept already in mind, and accepting to write down "temp" code was hard
R**-2 concern: There is no design. Both implementation and test are undocumented.
S***-1 concern: There is no design. for the demo, not scoping worst case of circular buffer with memory constrained devices, dynamically allocating memory concerns me.
N/A concern: There is no design. Sometimes I try to think of the problem holistically when designing software. But it seems to me that the "fake it until you make it" approach, prevents this.
N/A concern: There is no design. It isn't clear what order to start implementing the various parts.
U**-1 concern: There is no design. Code churn (in my experience churn introduces bugs) High proportion of time writing tests Don't have a good handle on the overall design Writing code like an automaton (creativity--)
W**-23 concern: There is no design. Lack of scope as the function unrolls; potential for necessary re-do of functions
W**-24 concern: There is no design. Fit this simple problem well - will it fit more complex problems requiring more complex design decisions?
W**-24 concern: There is no design. Bertrand Meyer: "No serious engineering process can skip an initial step of careful planning. It is good to put limits on it, but irresponsible to remove it."
W**-28 concern: There is no design. I thought we were going to be using C++; this seems a bit tedious and interferes with the flow a bit.
W**-28 concern: There is no design. Perhaps if you have a specific design or implementation in mind, it might "prevent" you from using that? (Perhaps, TDD will help you find a "better", simpler one).
W**-34 concern: There is no design. Forces me to fail. If I have done this in the past, I see the pitfalls, and I know how to structure this properly, it still forces me to code like a fresh college graduate.
W**-35 concern: There is no design. The design may just grow, without being related to a higher level architecture.
W**-35 concern: There is no design. In the circular buffer example we have a design, but in some actual practice we jumped directly into some test and writing code without a high level design.
W**-35 concern: There is no design. switching too often between test and development context. (e.g: for implementing algorithms may not work)
W**-38 concern: There is no design. It was not always intuitive to develop tests and know when to run to failure. It felt messy passing on hardcoded values and it is difficult to know what done is if we are writing our own tests.
W**-38 concern: There is no design. Possibly not fully thinking about entire implementation.
W**-41 concern: There is no design. It is time consuming.
V**-1 concern: Think Tfind Whether it fits the way that we work.
C**-1 concern: Tedious Sometimes we could end up spending a lot of time writing the tests and making sure that valid use cases are tested, while driving towards the end product.
D****-2 concern: time Not implementing it in the correct way. I.E. with a 'develop, test, TDD' mindset will simply produce more work.
F**-1 concern: time Does it save time to develop and maintain the system?
L*-4 concern: time Hard to categorize the test cases; time consuming.
L*-4 concern: TIME I think TDD will take a long time to develop software. How about big projects? Can TDD be applied regardless of project size?
T**-1 concern: time Painful. A lot slower than normal development. A price for the benefits but still something I don't like.
V**-1 concern: Time It takes a long time when you are new to it, because it is a different way of thinking than you are used to.
W**-19 concern: Time Time to get code written. BUT.... that is partly inexperience, and you get the benefit of code security
B**-1 concern: time consuming Full-blown TDD seems inefficient and overly time consuming.
B***-2 concern: Time consuming It seems time consuming for the real world to go step by step, instead of creating tests based on the expected behavior... and then testing against code (other implemented or later implemented)
D****-2 concern: time consuming It feels time consuming without building momentum.
V**-1 concern: Time Consuming Requires good unit testing skills to create more scenarios as this becomes essential for TDD
W**-21 concern: time consuming Time consuming; need to write code knowing that it will be scrapped later.
W**-21 concern: Time Consuming It slows down development and may affect the deliverables.
V****-2 concern: time-consuming It is hard to make a conclusion at this point. But it appears to me that this approach may be time consuming. But at the same time I understand that it could be time-consuming in the beginning only...
W**-19 concern: Tiny steps Small steps sometimes hard to reconcile with big picture.
M***-1 concern: ToAddAllTheTests When writing the tests, not to miss some test condition.
T**-1 concern: too early Just started learning it so I feel too early to raise any concern
L*-4 concern: too much code Test code is too much. For example, if source code is 5 lines, test code will be 20~25 lines.
A**-1 concern: Too much test code. For simple problems it requires more time to think and write the tests, than to write the solution.
A**-1 concern: Too much test code. The concept of more test code than production code, implies more code to maintain
A*-1 concern: Too much test code. Difficult to make interface changes without having to change the tests
A*-1 concern: Too much test code. It breaks with the flow of programming & problem solving. It seems unnecessary to break relatively simple tasks into 100 baby steps. The tests are also very narrow.
A*-2 concern: Too much test code. How many tests is it viable to write during the startup of a project? Things might change.
C****-25 concern: Too much test code. Strong coupling to the APIs. Changing an internal API means changing the places where it is used, including the tests. Even more, changing the logic of the test may be needed.
E*-1 concern: Too much test code. How quickly can this approach be adopted by a team?
H***-1 concern: Too much test code. I might end up over-engineering the test cases.
H***-1 concern: Too much test code. Like to know more details about how to use TDD to project development, and how to improve the code quality, and how to balance the test cost and development cost.
I**-1 concern: Too much test code. Probably too many test cases. I would personally fit some of them into a single test.
I**-1 concern: Too much test code. Our implementation differed from the suggested one. It passed, but it may not be bug-free. It could go wrong when an int overflows. Even with the tests, the code may be wrong.
M***-1 concern: Too much test code. It may take lots of time and lots of test code.
M***-1 concern: Too much test code. How many test cases are enough or good?
M***-2 concern: Too much test code. Our focus may move more towards the test cases writing. (May be doing first time so I have this feeling)
M***-2 concern: Too much test code. Too much test code
M***-2 concern: Too much test code. Lots of test code to be written to test each and every scenario of code.
M***-2 concern: Too much test code. Looks like it will take more time on the test code than the actual code.
M***-2 concern: Too much test code. Too much test code
R**-1 concern: Too much test code. that's a lot of code (yet simple) for a "basic" implementation of a problem. I can't imagine how long it becomes for complex systems :/
R**-3 concern: Too much test code. You get caught up into testing too many things which are maybe not necessary
S***-1 concern: Too much test code. It seems like you could end up spending as much time writing the test code as you are writing the actual code.
T*-1 concern: Too much test code. Sometimes slow in synchronizing code changes to other team member.
U**-1 concern: Too much test code. Maybe the applicability in an existing project with interaction with other modules. It takes time to have it all done.
U**-1 concern: Too much test code. There is maybe quite a bit of test code that could be hard to maintain
U**-1 concern: Too much test code. Maintainability of the test cases could be expensive and it makes your design rigid to the change. It takes more time and effort to modify the test cases if you decide to do some major change in API.
W**-22 concern: Too much test code. This seems to add a lot of upfront time. I understand that this helps with having more bug free code in the long run, but for simple code, it seems like overkill.
W**-23 concern: Too much test code. I'm concerned that tests written during TDD will have some overlap with acceptance tests. Test duplication...
W**-24 concern: Too much test code. Forging ahead and not taking time to remove duplication
W**-25 concern: Too much test code. We would spend more time writing test code.
W**-27 concern: Too much test code. Lots of tests could be hard to maintain.
W**-27 concern: Too much test code. Can get a little carried away and add 'too many' tests. Some of them might not add any value, need to be careful with that
W**-30 concern: Too much test code. This is probably just because this is a first time use, but writing so many tests for such simple scenarios seems like it would get impractical with more complex systems.
W**-30 concern: Too much test code. How do I fit all this test code into my embedded development?
W**-34 concern: Too much test code. It can lead to bloated code; security holes may be introduced.
W**-37 concern: Too much test code. More test code than actual code.
W**-39 concern: Too much test code. That it will be a challenge to start using TDD in everyday coding.
Z**-1 concern: Too much test code. In general we have a lot of code and it is quite coupled
A***-1 concern: Too much time. 1. I think time is a valuable resource in company that not exists much of, it takes time... But on the other hand, we save a lot of time in the future if we do things right from the start...
A**-1 concern: Too much time. It is taking much time to get a circular buffer to work properly. Maybe this implementation is safer, but I am not really sure.
A**-1 concern: Too much time. Waste of code
A**-1 concern: Too much time. The structured approach can take some time for basic tests.
A*-1 concern: Too much time. Using a debugger to develop the code would be faster. Tests could be written after.
A*-1 concern: Too much time. I am curious how much time this will take in more complex code. But if this approach will give fewer bugs, I guess it will outperform the time spent on debugging.
A*-2 concern: Too much time. Extra development time
B***-13 concern: Too much time. It seems like a lot of time to test the basic functionality. But I understand that the tests build on each other.
C****-25 concern: Too much time. It takes longer than coding and then testing. We keep rewriting the same code over and over.
C****-25 concern: Too much time. Would it work the same in big projects with different modules and complicated interactions and a very badly designed system where nothing is stateless? (Without sacrificing execution speed)
C****-25 concern: Too much time. It makes you avoid writing the final "correct" solution by introducing fake input. If this is not fixed and tested later it might introduce bugs. It takes more time.
C**-1 concern: Too much time. Sometimes looking into smaller requirements means undoing and redoing things, and you also have to forcefully break your train of thought. Breaking a bigger project into smaller steps may be a challenge.
E***-1 concern: Too much time. It takes longer than the quick and dirty solution.
E*-1 concern: Too much time. I would be concerned that it may take more time to develop code using this approach. Although I think with practice things would move faster.
E*-1 concern: Too much time. More time up front for coding to save time later can conflict with deadlines when management doesn't understand this method.
E*-1 concern: Too much time. overall time spent debugging test code
H***-1 concern: Too much time. It's a bit of a slow approach, and more rework is needed if we keep adding complex test cases at a later stage.
H***-1 concern: Too much time. Seems to work better with a fast compilation cycle. Ours isn't. :(
H***-1 concern: Too much time. It requires more time at the beginning
H***-1 concern: Too much time. Jumping between compiler/linker errors (which are mostly obvious) is a waste of time.
H***-1 concern: Too much time. It might be more time consuming than directly implementing it. If something is obvious, do we need to make a test fail first then implement it?
H***-1 concern: Too much time. -Need a paradigm shift in the way we approach development -Not sure we will implement it despite its advantages -Legacy code?
H***-1 concern: Too much time. takes more time to deliver
H***-1 concern: Too much time. Feels like it takes a lot of time
H**-1 concern: Too much time. It is slow at times, feel like there are some unnecessary steps in there.
H**-1 concern: Too much time. Deadlines
H**-1 concern: Too much time. It seems like it will slow down development.
I**-1 concern: Too much time. Sometimes you get the feeling that you need a lot of time for straightforward implementations
I**-1 concern: Too much time. Making the test fail first seems slow.
I**-1 concern: Too much time. That I would get too caught up in passing a test that I've written incorrectly before realizing the test is wrong
J***-1 concern: Too much time. The added time it may take initially (even if it saves you time on the back end)
J***-1 concern: Too much time. It would be difficult to implement in a complex code base that has already been developed
J***-1 concern: Too much time. How to handle changing requirements? After TDD, I have X lines of source code plus Y lines of test code. When a client changes a requirement, now I have to rework X + Y lines of code.
J***-1 concern: Too much time. The process seems to require re-writing code, which could be time consuming
J***-1 concern: Too much time. It's new to me and takes longer
L****-1 concern: Too much time. It takes too long and makes me feel like a code monkey.
M***-1 concern: Too much time. Initially it took me some time to understand what I was going to do during TDD.
M***-1 concern: Too much time. The tempo of TDD is slow; is it suitable for fast-paced development?
M***-1 concern: Too much time. It takes more time to develop the test code.
M***-1 concern: Too much time. The exercise takes too long.
M***-1 concern: Too much time. None
M***-2 concern: Too much time. We might bypass the original motive while concentrating on granular stuff or while writing the tests
M***-2 concern: Too much time. Time required for complex problem.
M***-2 concern: Too much time. Development time may increase
M***-2 concern: Too much time. Switching between test case mindset and developer mindset.
M***-1 concern: Too much time. The micro steps of 'faking' seem to create extra work.
M***-1 concern: Too much time. Going through the motions of watching the test fail to build, then build but fail, and then finally pass can feel tedious when a test case is trivial.
M***-1 concern: Too much time. The only concern that I have is with respect to the extra time it takes to develop the tests.
M***-1 concern: Too much time. Adds lots of overhead to writing code.
M***-1 concern: Too much time. I think design is taking a back seat
M**-1 concern: Too much time. Hope we will have the time to use TDD.
M**-1 concern: Too much time. Making an adjustment to a small piece of code might upset many tests.
M**-2 concern: Too much time. It is such a particular process that I have a tendency to jump ahead because the best practice seems silly and unnecessary at times. Might lead to more work.
M**-2 concern: Too much time. It's a slow process.
N/A concern: Too much time. TDD works best when implementation is known beforehand. In problems where the solution depends on experimentation, TDD slows down the process because tests have to be rewritten.
N/A concern: Too much time. It was hard to collaborate when only one person can test the code at a time
N/A concern: Too much time. I have not written C++ in the past 25 years
N***-1 concern: Too much time. Seems like a slow process.
N***-1 concern: Too much time. Iterative and might increase the development time exponentially. The design is happening during code. The lack of a high level design can seriously mess with large code-bases.
N***-1 concern: Too much time. I did not like having to wait ~5 seconds every time I ran the tests
N***-1 concern: Too much time. Might be time-consuming initially.
N**-2 concern: Too much time. May take very long to complete the overall task assigned.
R**-1 concern: Too much time. I think writing test code for hardware / testing physics can be very difficult and time consuming. For example testing what the actual output on an SPI bus is or whether a GPIO state changed.
R**-1 concern: Too much time. It's longer than usual, but I think it worth it !
R**-2 concern: Too much time. I'm worried whether I'll have time to write that many tests in real projects. Is it necessary to test every little detail?
R**-2 concern: Too much time. Based on the timeline it may take too much time upfront.
R**-3 concern: Too much time. Implementing the tests might be more work than the implementation itself.
R**-3 concern: Too much time. Slows down the development of "easy" tasks
S***-1 concern: Too much time. This approach took a lot of time, and in the beginning I was kind of faking passing the test even though I knew exactly how to fix it.
S***-1 concern: Too much time. Is it really going to be faster overall?
S***-1 concern: Too much time. When implementing more tests/code, I had to go back and update previous tests that had inadequate coverage. This made me less confident in earlier tests, since I would need to re-break/re-test them.
S***-1 concern: Too much time. This is a shift in the way we write code. Seems like you are investing in the future by doing this and therefore it requires a bit more upfront time.
N/A concern: Too much time. Takes a long time, can be tedious
T*-1 concern: Too much time. The effort for all these tests...
T*-1 concern: Too much time. TDD is very time consuming, but it will help to avoid problems at the beginning
T*-1 concern: Too much time. seems to need more time for coding
U**-1 concern: Too much time. Will it be as time efficient with bigger projects or when integrating older code?
W**-22 concern: Too much time. This can be very time consuming if the code will not be reused
W**-22 concern: Too much time. It is not the most fun way to write code.
W**-22 concern: Too much time. Time commitment and adapting to changing requirements
W**-23 concern: Too much time. Slow, and you have to limit yourself not to just code away... all my managers are concerned about extra time... I don't know yet
W**-24 concern: Too much time. The speed (slow), or the perceived speed to get to a working FW.
W**-24 concern: Too much time. It takes some time to write the tests and it might not be easy to see how much time of debugging it saves me.
W**-24 concern: Too much time. way too slow
W**-25 concern: Too much time. More time for development
W**-25 concern: Too much time. This could become tedious. The rejoinder to that is obviously "so is hunting and shipping bugs". I'd have to do this for a while before deciding, but my first impression is not uniformly positive.
W**-25 concern: Too much time. Slow build times == frequent interruptions when trying to get into a "flow".
W**-25 concern: Too much time. It could be a simulated environment, but it's slow
W**-27 concern: Too much time. TDD seems to take a fair bit of time to develop, as you have to make the test fail first, then pass.
W**-27 concern: Too much time. Integrating into an old project with a lot of dependencies.
W**-28 concern: Too much time. I am concerned that taking the time to do each of the TDD steps will take up too much time.
W**-28 concern: Too much time. it may take too much time to complete or think of enough tests before a scheduled delivery to a customer, for example.
W**-29 concern: Too much time. A LOT of iteration.
W**-29 concern: Too much time. Getting to the first passing test for a new embedded target can be time-consuming.
W**-32 concern: Too much time. It seems ideal for code you are starting out writing yourself, but seems like it could get challenging to get it up and running at first for a huge code base you inherit.
W**-32 concern: Too much time. TDD can take longer to code in the early stages of a software project, and some of the benefits only become apparent in the maintenance portion of the software development process
W**-34 concern: Too much time. Too much time to continue writing test cases before writing applicable code.
W**-34 concern: Too much time. It looks like a time-consuming effort to test, over and over, code that is not yet finished.
W**-34 concern: Too much time. As we just experienced, we could have written the code more efficiently (thinking about the capacity in the beginning). This approach seems, at least to me, to make you move slower.
W**-34 concern: Too much time. Scheduling time for setting up tests.
W**-34 concern: Too much time. The iterative process seems very time consuming
W**-34 concern: Too much time. It will take more time to write the TDD code when time is short to finish implementing all the needed features.
W**-34 concern: Too much time. More time required to work out than by just coding alone without any testing. Understand that this is time saved overall, especially if a bug is prevented from appearing later on.
W**-35 concern: Too much time. It seems that later tests can cause your previous tests to fail, taking up time to retest
W**-35 concern: Too much time. Development time could increase on the front end and there's the risk of developers writing subpar code that simply passes tests.
W**-35 concern: Too much time. too many small steps
W**-35 concern: Too much time. progress per (test-)iteration is quite small
W**-36 concern: Too much time. It seems that in the beginning the process starts too slow.
W**-36 concern: Too much time. It requires discipline and humility. May appear slow to some people and hence skipped.
W**-36 concern: Too much time. It seems very time consuming to have to refactor each function so many times, when you already have the initial requirements.
W**-37 concern: Too much time. Time intensive, learning curve, requires spending time on something other than simply writing the required code.
W**-37 concern: Too much time. - Not sure if it's too incremental? Initial implementation takes longer b/c of the constant refactoring
W**-37 concern: Too much time. Concerned it might be hard to do this efficiently when adding the time to come up with the tests as well.
W**-38 concern: Too much time. It often feels slow.
W**-38 concern: Too much time. I fear it could lead to issues only focusing on getting the test to pass
W**-38 concern: Too much time. Large amount of time spent.
W**-39 concern: Too much time. I'm concerned about the time it takes, although the response back if something is broken in code is great.
W**-39 concern: Too much time. Is it always necessary? If not, when is it necessary?
W**-40 concern: Too much time. Slowness at the moment, but I can accept it given the results
W**-40 concern: Too much time. It can get frustrating. Doing everything step by step.
W**-40 concern: Too much time. It is frustrating early on to have to write simple tests and do the minimum for them to pass, when you can easily think ahead to a more complete implementation.
W**-40 concern: Too much time. - different approach to a problem - need to get used to it
W**-41 concern: Too much time. The amount of time it takes to write any meaningful code is far greater than it would be without TDD.
W**-41 concern: Too much time. Doing it iteratively seems like it will take a lot more time than just implementing the code and testing it afterwards.
W**-41 concern: Too much time. I don't have any concerns so far. This platform is really amazing and looking forward to learning a lot from this workshop. But I feel it is taking a lot of time to complete the unit tests.
W**-41 concern: Too much time. It feels like it takes longer, but I think it's worth it.
W**-1 concern: Too much time. My biggest concern is people's tendency to comment out tests and stop writing them when a deadline draws near, often with pressure to do so from management.
W***-1 concern: Too much time. fake-it-till-you-make-it and thinking about tests first seems to take some time
Z**-1 concern: Too much time. When working on small pieces of SW, the TDD strategy seems controllable. I believe that when a SW module is scaled up to a larger dimension, TDD can be a stick in the wheel for SW development.
Z**-1 concern: triviality It seems unnecessarily trivial in many cases.
Z**-1 concern: unease I felt uneasy not thinking as much about implementation, like I'd forgotten my keys or left the oven on.
B**-1 concern: unfamiliar The C++ part; how does hardware fit in?
C*****-2 concern: unnatural It feels 'fake' to have to write intermediate 'dummy' code/steps that you know are not going to cut it
L*-4 concern: unnecessary I had to write too much code to test my code, and some of it seems unnecessary. And when I modify the interface, I also have to modify my main code as well as the test cases
W**-18 concern: unplanned Perhaps a false sense of security if you "just get the test to pass".
W**-21 concern: Unsure Didn't get to finish the lab, so not sure if there are others. Stuck at the malloc area...
B**-1 concern: Untested Getting ahead of oneself by writing code that doesn't yet have a test for it
F**-1 concern: Usability - Dependency between modules in system. - For embedded, how to test things that interact with peripherals or use peripheral inputs - Different in target and host compiler
M***-1 concern: use in embedded needs initial effort
V****-2 concern: useful vs practical How to put the approach into practice under the time constraint.
A**-1 concern: Where are the requirements? I think is important to know the requirements before start with the deployment
A*-1 concern: Where are the requirements? Tests can lock in the implementation. Will it be hard to change the code if a lot of tests break?
A*-2 concern: Where are the requirements? Spending time debugging test code instead of application code
A*-2 concern: Where are the requirements? Small steps might keep you from thinking about something obvious that is coming, which may lead to changing a lot of code and test cases.
B***-13 concern: Where are the requirements? So far we've demonstrated that it is possible to produce failing and passing tests using brute force. I worry that we might instill a false sense of confidence by writing poorly designed tests.
C**-1 concern: Where are the requirements? I was a little lost at the start without requirements
C**-1 concern: Where are the requirements? Requires a different design and development approach compared to the traditional one, where we have all the requirements up front
E*-1 concern: Where are the requirements? need to write full test case
H**-1 concern: Where are the requirements? I'm concerned about writing tests that ensure all functionality is checked.
M***-1 concern: Where are the requirements? How do I get the total overview of the requirements?
M**-2 concern: Where are the requirements? May be hard to define requirements that are easily testable when interfacing with hardware? Maybe not.
N/A concern: Where are the requirements? I felt like the requirements were not specific enough. we had to do some guesswork and we ended up "getting ahead".
N***-1 concern: Where are the requirements? A test being inaccurate and not noticing (leaving something hardcoded)
N**-2 concern: Where are the requirements? Getting to know the framework
R**-1 concern: Where are the requirements? Tests don't really cover the full functionality. Production code can be faked easily
R**-2 concern: Where are the requirements? We are moving forward from an ant's perspective. Implementing before thinking may result in bloated code.
R**-2 concern: Where are the requirements? We have not yet tested non-functional requirements like speed and memory usage.
R**-3 concern: Where are the requirements? The implementation of the circular buffer has only low complexity. I wonder whether TDD will also work on complex tasks.
N/A concern: Where are the requirements? If I don't write good design notes and test plan outside the code, shortcuts could get left in the code, and I could miss important tests and implementation
T*-1 concern: Where are the requirements? A new kind of thinking. Is a test harness available every time?
U**-1 concern: Where are the requirements? the number of steps to writing the test feels tedious at times
W**-23 concern: Where are the requirements? When should I move on from testing interfaces into the internal implementation?
W**-23 concern: Where are the requirements? Sometimes unsure whether or how test implementation details. e.g. static or dynamic memory allocation
W**-23 concern: Where are the requirements? Tests driving functional correctness without addressing qualities such as performance.
W**-24 concern: Where are the requirements? Wasn't clear where I was headed - should have read through all the test names first
W**-24 concern: Where are the requirements? Simple implementation existed for a long time, needed a specific test to cause it to change (IsEmpty using index comparison instead of count).
W**-27 concern: Where are the requirements? Not a fan of writing false positives. As the API gets written in test more, those false positives turn to true negatives, but I'm not sure that this work may be done fully.
W**-31 concern: Where are the requirements? TDD seems to promote "hacking" at the problem instead of thinking about the requirements.
W**-32 concern: Where are the requirements? Seems like a silly question: the test cases should have been formed from the requirements, I assume.
W**-32 concern: Where are the requirements? Sometimes it looks a bit daunting if you have a lot of code to write.
W**-36 concern: Where are the requirements? How to make sure we captured all behaviors? How to validate unit test framework for quality system?
W**-37 concern: Where are the requirements? Hard to think of all edge cases of functions
W**-40 concern: Where are the requirements? I worry that I'll forget a test, and miss an important requirement or edge case; focusing on one thing at a time is good, but TDD makes it easier for me to lose global perspective
W**-41 concern: Where are the requirements? Hard to set the requirements for each case.
W**-42 concern: Where are the requirements? If you may only implement code that's necessary to pass the tests, you end up with the same problem that you forget about corner cases and still have bugs in the code.
W**-42 concern: Where are the requirements? How to use in a situation of unclear requirements/ interfaces. API changes and I have to rewrite all my testcases.
T**-1 concern: which-is-wrong Discerning if (when) the test is wrong instead of the code under test
T**-1 concern: who knows TDD seems to be geared more to a system which has its requirements defined before development begins. Significant changes to requirements could be costly (time)
C*****-2 concern: wonder it can be difficult to find out how low you can go, e.g. I made a change at one point that made a couple of the subsequent tests work, which I guess was not supposed to happen
B***-12 concern: wrong Add code that you know is wrong because the "right" behavior isn't yet needed.