first commit

Branch: remotes/origin/userinput
Author: TheUltimateOptimist (11 months ago)
Commit: 74b50a7e65
34 files changed:

- .gitignore (+1)
- README.md (+1)
- build-project.sh (+3)
- docs/CMock_Summary.md (+831)
- docs/CeedlingPacket.md (+2266)
- docs/CeedlingUpgrade.md (+83)
- docs/ThrowTheSwitchCodingStandard.md (+207)
- docs/UnityAssertionsReference.md (+787)
- docs/UnityConfigurationGuide.md (+505)
- docs/UnityGettingStartedGuide.md (+242)
- docs/UnityHelperScriptsGuide.md (+245)
- docs/plugin_beep.md (+22)
- docs/plugin_bullseye.md (+76)
- docs/plugin_colour_report.md (+20)
- docs/plugin_command_hooks.md (+53)
- docs/plugin_compile_commands_json.md (+29)
- docs/plugin_dependencies.md (+254)
- docs/plugin_fake_function_framework.md (+250)
- docs/plugin_gcov.md (+433)
- docs/plugin_json_tests_report.md (+36)
- docs/plugin_junit_tests_report.md (+36)
- docs/plugin_module_generator.md (+119)
- docs/plugin_raw_output_report.md (+19)
- docs/plugin_stdout_gtestlike_tests_report.md (+19)
- docs/plugin_stdout_ide_tests_report.md (+18)
- docs/plugin_stdout_pretty_tests_report.md (+20)
- docs/plugin_subprojects.md (+63)
- docs/plugin_teamcity_tests_report.md (+18)
- docs/plugin_warnings_report.md (+19)
- docs/plugin_xml_tests_report.md (+36)
- project.yml (+101)
- src/.gitkeep (+0)
- team.md (+1)
- test/support/.gitkeep (+0)

.gitignore (+1)

@@ -0,0 +1 @@
```
build/
```

README.md (+1)

@@ -0,0 +1 @@
```
group project cstools101
```

build-project.sh (+3)

@@ -0,0 +1,3 @@
```
#!/usr/bin/env bash
ceedling test:all
```

docs/CMock_Summary.md (+831)

@@ -0,0 +1,831 @@
CMock: A Summary
================
*[ThrowTheSwitch.org](http://throwtheswitch.org)*
*This documentation is released under a Creative Commons 3.0 Attribution Share-Alike License*
What Exactly Are We Talking About Here?
---------------------------------------
CMock is a nice little tool which takes your header files and creates
a mock interface for each so that you can more easily unit test modules
that touch other modules. For each function prototype in your
header, like this one:
```
int DoesSomething(int a, int b);
```
...you get an automatically generated DoesSomething function
that you can link to instead of your real DoesSomething function.
By using this Mocked version, you can then verify that it receives
the data you want, and make it return whatever data you desire,
make it throw errors when you want, and more... Create these for
everything your latest real module touches, and you're suddenly
in a position of power: You can control and verify every detail
of your latest creation.
To make that easier, CMock also gives you a bunch of functions
like the ones below, so you can tell that generated DoesSomething
function how to behave for each test:
```
void DoesSomething_ExpectAndReturn(int a, int b, int toReturn);
void DoesSomething_ExpectAndThrow(int a, int b, EXCEPTION_T error);
void DoesSomething_StubWithCallback(CMOCK_DoesSomething_CALLBACK YourCallback);
void DoesSomething_IgnoreAndReturn(int toReturn);
```
You can pile a bunch of these back to back, and it remembers what
you wanted to pass when, like so:
```
void test_CallsDoesSomething_ShouldDoJustThat(void)
{
    DoesSomething_ExpectAndReturn(1, 2, 3);
    DoesSomething_ExpectAndReturn(4, 5, 6);
    DoesSomething_ExpectAndThrow(7, 8, STATUS_ERROR_OOPS);
    CallsDoesSomething( );
}
```
This test will call CallsDoesSomething, which is the function
we are testing. We are expecting that function to call DoesSomething
three times. The first time, we check to make sure it's called
as DoesSomething(1, 2) and we'll magically return a 3. The second
time we check for DoesSomething(4, 5) and we'll return a 6. The
third time we verify DoesSomething(7, 8) and we'll throw an error
instead of returning anything. If CallsDoesSomething gets
any of this wrong, it fails the test. It will fail if you didn't
call DoesSomething enough, or too much, or with the wrong arguments,
or in the wrong order.
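The queued, order-checked behavior described above can be modeled with a short Ruby sketch. This is purely a conceptual illustration (the `MockFunction` class is invented for this example), not CMock's actual generated code:

```ruby
# Conceptual model of a CMock expectation queue: calls are checked
# in FIFO order against queued (args, retval) pairs.
class MockFunction
  def initialize(name)
    @name = name
    @expectations = []
  end

  def expect_and_return(args, retval)
    @expectations << { args: args, retval: retval }
  end

  # Stands in for the real function when the code under test runs.
  def call(*args)
    exp = @expectations.shift
    raise "#{@name} called more times than expected" if exp.nil?
    raise "#{@name} called with #{args.inspect}, expected #{exp[:args].inspect}" unless args == exp[:args]
    exp[:retval]
  end

  def verify!
    raise "#{@name} called fewer times than expected" unless @expectations.empty?
  end
end

mock = MockFunction.new("DoesSomething")
mock.expect_and_return([1, 2], 3)
mock.expect_and_return([4, 5], 6)
results = [mock.call(1, 2), mock.call(4, 5)]
mock.verify!
```

Calling the mock a third time, or with the wrong arguments, would raise here, which mirrors how CMock fails a test for too many, too few, or out-of-order calls.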
CMock is based on Unity, which it uses for all internal testing.
It uses Ruby to do all the main work (versions 2.0.0 and above).
Installing
==========
The first thing you need to do to install CMock is to get yourself
a copy of Ruby. If you're on Linux or OSX, you probably already
have it. You can prove it by typing the following:
```
ruby --version
```
If it replied in a way that implies ignorance, then you're going to
need to install it. You can go to [ruby-lang](https://ruby-lang.org)
to get the latest version. You're also going to need to do that if it
replied with a version that is older than 2.0.0. Go ahead. We'll wait.
Once you have Ruby, you have three options:
* Clone the latest [CMock repo on github](https://github.com/ThrowTheSwitch/CMock/)
* Download the latest [CMock zip from github](https://github.com/ThrowTheSwitch/CMock/)
* Install Ceedling (which has it built in!) through your commandline using `gem install ceedling`.
Generated Mock Module Summary
=============================
In addition to the mocks themselves, CMock will generate the
following functions for use in your tests. The expect functions
are always generated. The other functions are only generated
if those plugins are enabled:
Expect:
-------
Your basic staple Expects which will be used for most of your day
to day CMock work. By calling this, you are telling CMock that you
expect that function to be called during your test. It also specifies
which arguments you expect it to be called with, and what return
value you want returned when that happens. You can call this function
multiple times back to back in order to queue up multiple calls.
* `void func(void)` => `void func_Expect(void)`
* `void func(params)` => `void func_Expect(expected_params)`
* `retval func(void)` => `void func_ExpectAndReturn(retval_to_return)`
* `retval func(params)` => `void func_ExpectAndReturn(expected_params, retval_to_return)`
ExpectAnyArgs:
--------------
This behaves just like the Expects calls, except that it doesn't really
care what the arguments are that the mock gets called with. It still counts
the number of times the mock is called and it still handles return values
if there are some. Note that an ExpectAnyArgs call is not generated for
functions that have no arguments, because it would act exactly like the existing
Expect and ExpectAndReturn calls.
* `void func(params)` => `void func_ExpectAnyArgs(void)`
* `retval func(params)` => `void func_ExpectAnyArgsAndReturn(retval_to_return)`
Array:
------
An ExpectWithArray is another variant of Expect. Like expect, it cares about
the number of times a mock is called, the arguments it is called with, and the
values it is to return. This variant has another feature, though. For anything
that resembles a pointer or array, it breaks the argument into TWO arguments.
The first is the original pointer. The second specifies the number of elements
of that array it is to verify. If you specify 1, it'll check one object. If 2,
it'll assume your pointer is pointing at the first of two elements in an array.
If you specify zero elements, it will check just the pointer if
`:smart` mode is configured or fail if `:compare_data` is set.
* `void func(void)` => (nothing. In fact, an additional function is only generated if the params list contains pointers)
* `void func(ptr * param, other)` => `void func_ExpectWithArray(ptr* param, int param_depth, other)`
* `retval func(void)` => (nothing. In fact, an additional function is only generated if the params list contains pointers)
* `retval func(other, ptr* param)` => `void func_ExpectWithArrayAndReturn(other, ptr* param, int param_depth, retval_to_return)`
Ignore:
-------
Maybe you don't care about the number of times a particular function is called or
the actual arguments it is called with. In that case, you want to use Ignore. Ignore
only needs to be called once per test. It will then ignore any further calls to that
particular mock. The IgnoreAndReturn works similarly, except that it has the added
benefit of knowing what to return when that call happens. If the mock is called more
times than IgnoreAndReturn was called, it will keep returning the last value without
complaint. If it's called fewer times, it will also ignore that. You SAID you didn't
care how many times it was called, right?
* `void func(void)` => `void func_Ignore(void)`
* `void func(params)` => `void func_Ignore(void)`
* `retval func(void)` => `void func_IgnoreAndReturn(retval_to_return)`
* `retval func(params)` => `void func_IgnoreAndReturn(retval_to_return)`
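The "keep returning the last value" behavior of IgnoreAndReturn can be sketched as follows. This is a conceptual Ruby model (the `IgnoredFunction` class is invented for illustration), not CMock's generated code:

```ruby
# Conceptual model of IgnoreAndReturn: queued return values are used
# in order, and the last one repeats once the queue is exhausted.
class IgnoredFunction
  def initialize
    @returns = []
    @last = nil
  end

  def ignore_and_return(value)
    @returns << value
  end

  def call
    @last = @returns.shift unless @returns.empty?
    @last
  end
end

f = IgnoredFunction.new
f.ignore_and_return(10)
f.ignore_and_return(20)
values = [f.call, f.call, f.call, f.call]  # 10, 20, then 20 repeated
```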
StopIgnore:
-------
Maybe you want to ignore a particular function for part of a test but don't want to
ignore it later on. In that case, you want to use StopIgnore which will cancel the
previously called Ignore or IgnoreAndReturn requiring you to Expect or otherwise
handle the call to a function.
* `void func(void)` => `void func_StopIgnore(void)`
* `void func(params)` => `void func_StopIgnore(void)`
* `retval func(void)` => `void func_StopIgnore(void)`
* `retval func(params)` => `void func_StopIgnore(void)`
IgnoreStateless:
----------------
This plugin is similar to the Ignore plugin, but the IgnoreAndReturn functions are
stateless. So the Ignored function will always return the last specified return value
and does not queue the return values as the IgnoreAndReturn of the default plugin will.
To stop ignoring a function you can call StopIgnore or simply overwrite the Ignore
(resp. IgnoreAndReturn) with an Expect (resp. ExpectAndReturn). Note that calling
Ignore (resp. IgnoreAndReturn) will clear your previously called Expect
(resp. ExpectAndReturn), so they are not restored after StopIgnore is called.
You can use this plugin by using `:ignore_stateless` instead of `:ignore` in your
CMock configuration file.
The generated functions are the same as **Ignore** and **StopIgnore** above.
Ignore Arg:
------------
Maybe you overall want to use Expect and its similar variations, but you don't care
what is passed to a particular argument. This is particularly useful when that argument
is a pointer to a value that is supposed to be filled in by the function. You don't want
to use ExpectAnyArgs, because you still care about the other arguments. Instead, after
an Expect call is made, you can call this function. It tells CMock to ignore
a particular argument for the rest of this test, for this mock function. You may call
multiple instances of this to ignore multiple arguments after each expectation if
desired.
* `void func(params)` => `void func_IgnoreArg_paramName(void)`
ReturnThruPtr:
--------------
Another option which operates on a particular argument of a function is the ReturnThruPtr
plugin. For every argument that resembles a pointer or reference, CMock generates an
instance of this function. Just as the AndReturn functions support injecting one or more
return values into a queue, this function lets you specify one or more return values which
are queued up and copied into the space being pointed at each time the mock is called.
* `void func(param1)` => `void func_ReturnThruPtr_paramName(val_to_return)`
* => `void func_ReturnArrayThruPtr_paramName(val_to_return, len)`
* => `void func_ReturnMemThruPtr_paramName(val_to_return, size)`
Callback:
---------
If all those other options don't work, and you really need to do something custom, you
still have a choice. As soon as you stub a callback in a test, it will call the callback
whenever the mock is encountered and return the retval returned from the callback (if any).
* `void func(void)` => `void func_[AddCallback,Stub](CMOCK_func_CALLBACK callback)`
where `CMOCK_func_CALLBACK` looks like: `void func(int NumCalls)`
* `void func(params)` => `void func_[AddCallback,Stub](CMOCK_func_CALLBACK callback)`
where `CMOCK_func_CALLBACK` looks like: `void func(params, int NumCalls)`
* `retval func(void)` => `void func_[AddCallback,Stub](CMOCK_func_CALLBACK callback)`
where `CMOCK_func_CALLBACK` looks like: `retval func(int NumCalls)`
* `retval func(params)` => `void func_[AddCallback,Stub](CMOCK_func_CALLBACK callback)`
where `CMOCK_func_CALLBACK` looks like: `retval func(params, int NumCalls)`
You can choose from two options:
* `func_AddCallback` tells the mock to check its arguments and calling
order (based on any Expects you've set up) before calling the callback.
* `func_Stub` tells the mock to skip all the normal checks and jump directly
to the callback instead. In this case, you are replacing the normal mock calls
with your own custom stub function.
There is also an older name, `func_StubWithCallback`, which is just an alias
for either `func_AddCallback` or `func_Stub` depending on setting of the
`:callback_after_arg_check` toggle. This is deprecated and we recommend using
the two options above.
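The `NumCalls` parameter can be modeled with a small Ruby sketch (conceptual only; the `StubbedFunction` class is invented here, and the count is assumed to start at 0):

```ruby
# Conceptual model of a stubbed mock with :callback_include_count
# enabled: the mock tracks how often it has been called and passes
# that count to the user's callback as an extra trailing argument.
class StubbedFunction
  def initialize
    @num_calls = 0
    @callback = nil
  end

  def stub(&callback)
    @callback = callback
  end

  def call(*args)
    result = @callback.call(*args, @num_calls)
    @num_calls += 1
    result
  end
end

f = StubbedFunction.new
f.stub { |a, b, num_calls| a + b + num_calls }
results = [f.call(1, 1), f.call(1, 1)]  # second call sees num_calls == 1
```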
Cexception:
-----------
Finally, if you are using Cexception for error handling, you can use this to throw errors
from inside mocks. Like Expects, it remembers which call was supposed to throw the error,
and it still checks parameters first.
* `void func(void)` => `void func_ExpectAndThrow(value_to_throw)`
* `void func(params)` => `void func_ExpectAndThrow(expected_params, value_to_throw)`
* `retval func(void)` => `void func_ExpectAndThrow(value_to_throw)`
* `retval func(params)` => `void func_ExpectAndThrow(expected_params, value_to_throw)`
Running CMock
=============
CMock is a Ruby script and class. You can therefore use it directly
from the command line, or include it in your own scripts or rakefiles.
Mocking from the Command Line
-----------------------------
After unpacking CMock, you will find cmock.rb in the 'lib' directory.
This is the file that you want to run. It takes a list of header files
to be mocked, as well as an optional yaml file for a more detailed
configuration (see config options below).
For example, this will create three mocks using the configuration
specified in MyConfig.yml:
```
ruby cmock.rb -oMyConfig.yml super.h duper.h awesome.h
```
And this will create two mocks using the default configuration:
```
ruby cmock.rb ../mocking/stuff/is/fun.h ../try/it/yourself.h
```
Mocking From Scripts or Rake
----------------------------
CMock can be used directly from your own scripts or from a rakefile.
Start by including cmock.rb, then create an instance of CMock.
When you create your instance, you may initialize it in one of
three ways.
You may specify nothing, allowing it to run with default settings:
```
require 'cmock.rb'
cmock = CMock.new
```
You may specify a YAML file containing the configuration options
you desire:
```
cmock = CMock.new('../MyConfig.yml')
```
You may specify the options explicitly:
```
cmock = CMock.new(:plugins => [:cexception, :ignore], :mock_path => 'my/mocks/')
```
Creating Skeletons:
-------------------
Not only is CMock able to generate mock files from a header file, but it is also able
to generate (and update) skeleton C files from headers. It does this by creating a
(mostly) empty implementation for every function that is declared in the header. If you later
add to that header list, just run this feature again and it will add prototypes for the missing
functions!
Like the normal use case for CMock, this feature can be used from the command line
or from within its Ruby API. For example, from the command line, add `--skeleton` to
generate a skeleton instead:
```
ruby cmock.rb --skeleton ../create/c/for/this.h
```
Config Options:
---------------
The following configuration options can be specified in the
yaml file or directly when instantiating.
Passed as Ruby, they look like this:
```
{ :attributes => ["__funky", "__intrinsic"], :when_ptr => :compare }
```
Defined in the yaml file, they look more like this:
```
:cmock:
  :attributes:
    - __funky
    - __intrinsic
  :when_ptr: :compare
```
In all cases, you can just include the things that you want to override
from the defaults. We've tried to specify what the defaults are below.
* `:attributes`:
These are attributes that CMock should ignore for you for testing
purposes. Custom compiler extensions and externs are handy things to
put here. If your compiler is choking on some extended syntax, this
is often a good place to look.
* defaults: ['__ramfunc', '__irq', '__fiq', 'register', 'extern']
* **note:** this option will reinsert these attributes onto the mock's calls.
If that isn't what you are looking for, check out :strippables.
* `:c_calling_conventions`:
Similarly, CMock may need to understand which C calling conventions
might show up in your codebase. If it encounters something it doesn't
recognize, it's not going to mock it. We have the most common covered,
but there are many compilers out there, and therefore many other options.
* defaults: ['__stdcall', '__cdecl', '__fastcall']
* **note:** this option will reinsert these attributes onto the mock's calls.
If that isn't what you are looking for, check out :strippables.
* `:callback_after_arg_check`:
Tell `:callback` plugin to do the normal argument checking **before** it
calls the callback function by setting this to true. When false, the
callback function is called **instead** of the argument verification.
* default: false
* `:callback_include_count`:
Tell `:callback` plugin to include an extra parameter to specify the
number of times the callback has been called. If set to false, the
callback has the same interface as the mocked function. This can be
handy when you're wanting to use callback as a stub.
* default: true
* `:cexception_include`:
Tell `:cexception` plugin where to find CException.h... You only need to
define this if it's not in your build path already... which it usually
will be for the purpose of your builds.
* default: *nil*
* `:enforce_strict_ordering`:
CMock always enforces the order that you call a particular function,
so if you expect GrabNabber(int size) to be called three times, it
will verify that the sizes are in the order you specified. You might
*also* want to make sure that all different functions are called in a
particular order. If so, set this to true.
* default: false
* `:framework`:
Currently the only option is `:unity`. Eventually if we support other
unity test frameworks (or if you write one for us), they'll get added
here.
* default: :unity
* `:includes`:
An array of additional include files which should be added to the
mocks. Useful for global types and definitions used in your project.
There are more specific versions if you care WHERE in the mock files
the includes get placed. You can define any or all of these options.
* `:includes`
* `:includes_h_pre_orig_header`
* `:includes_h_post_orig_header`
* `:includes_c_pre_header`
* `:includes_c_post_header`
* default: nil #for all 5 options
* `:memcmp_if_unknown`:
C developers create a lot of types, either through typedef or preprocessor
macros. CMock isn't going to automatically know what you were thinking all
the time (though it tries its best). If it comes across a type it doesn't
recognize, you have a choice on how you want it to handle it. It can either
perform a raw memory comparison and report any differences, or it can fail
with a meaningful message. Either way, this feature will only happen after
all other mechanisms have failed (The thing encountered isn't a standard
type. It isn't in the :treat_as list. It isn't in a custom unity_helper).
* default: true
* `:mock_path`:
The directory where you would like the mock files generated to be
placed.
* default: mocks
* `:mock_prefix`:
The prefix to prepend to your mock files. For example, if it's `Mock`, a file
"USART.h" will get a mock called "MockUSART.c". This CAN be used with a suffix
at the same time.
* default: Mock
* `:mock_suffix`:
The suffix to append to your mock files. For example, if it's `_Mock`, a file
"USART.h" will get a mock called "USART_Mock.h". This CAN be used with a prefix
at the same time.
* default: ""
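Taken together, the naming rule can be illustrated with a few lines of Ruby (the `mock_name` helper is invented for this sketch):

```ruby
# Sketch of how :mock_prefix and :mock_suffix shape the mock's name.
def mock_name(header, prefix: "Mock", suffix: "")
  base = File.basename(header, ".h")
  "#{prefix}#{base}#{suffix}"
end

name1 = mock_name("USART.h")                              # "MockUSART"
name2 = mock_name("USART.h", prefix: "", suffix: "_Mock") # "USART_Mock"
```

CMock then writes the corresponding .h/.c pair (e.g. MockUSART.c) into `:mock_path`.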
* `:plugins`:
An array of which plugins to enable. `:expect` is always active. Also
available currently:
* `:ignore`
* `:ignore_stateless`
* `:ignore_arg`
* `:expect_any_args`
* `:array`
* `:cexception`
* `:callback`
* `:return_thru_ptr`
* `:strippables`:
An array containing a list of items to remove from the header
before deciding what should be mocked. This can be something simple
like a compiler extension CMock wouldn't recognize, or could be a
regex to reject certain function name patterns. This is a great way to
get rid of compiler extensions when your test compiler doesn't support
them. For example, use `:strippables: ['(?:functionName\s*\(+.*?\)+)']`
to prevent a function `functionName` from being mocked. By default, it
is ignoring all gcc attribute extensions.
* default: `['(?:__attribute__\s*\(+.*?\)+)']`
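You can try the default pattern directly in Ruby; this sketch just applies the regex to a declaration string (the declaration itself is made up for illustration):

```ruby
# Applies the default :strippables regex to remove a gcc attribute
# extension before the declaration would be parsed for mocking.
strippables = [/(?:__attribute__\s*\(+.*?\)+)/]

decl = "void SendData(int x) __attribute__((nonnull));"
cleaned = decl.dup
strippables.each { |pattern| cleaned.gsub!(pattern, "") }
```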
* `:exclude_setjmp_h`:
Some embedded systems don't have `setjmp.h` available. Setting this to true
removes references to this header file and the ability to use cexception.
* default: false
* `:subdir`:
This is a relative subdirectory for your mocks. Set this to e.g. "sys" in
order to create a mock for `sys/types.h` in `(:mock_path)/sys/`.
* default: ""
* `:treat_as`:
The `:treat_as` list is a shortcut for when you have created typedefs
of standard types. Why create a custom unity helper for UINT16 when
the unity function TEST_ASSERT_EQUAL_HEX16 will work just perfectly?
Just add 'UINT16' => 'HEX16' to your list (actually, don't. We already
did that one for you). Maybe you have a type that is a pointer to an
array of unsigned characters? No problem, just add 'UINT8_T*' =>
'HEX8*'
* NOTE: unlike the other options, your specifications MERGE with the
default list. Therefore, if you want to override something, you must
reassign it to something else (or to *nil* if you don't want it)
* default:
* 'int': 'INT'
* 'char': 'INT8'
* 'short': 'INT16'
* 'long': 'INT'
* 'int8': 'INT8'
* 'int16': 'INT16'
* 'int32': 'INT'
* 'int8_t': 'INT8'
* 'int16_t': 'INT16'
* 'int32_t': 'INT'
* 'INT8_T': 'INT8'
* 'INT16_T': 'INT16'
* 'INT32_T': 'INT'
* 'bool': 'INT'
* 'bool_t': 'INT'
* 'BOOL': 'INT'
* 'BOOL_T': 'INT'
* 'unsigned int': 'HEX32'
* 'unsigned long': 'HEX32'
* 'uint32': 'HEX32'
* 'uint32_t': 'HEX32'
* 'UINT32': 'HEX32'
* 'UINT32_T': 'HEX32'
* 'void*': 'HEX8_ARRAY'
* 'unsigned short': 'HEX16'
* 'uint16': 'HEX16'
* 'uint16_t': 'HEX16'
* 'UINT16': 'HEX16'
* 'UINT16_T': 'HEX16'
* 'unsigned char': 'HEX8'
* 'uint8': 'HEX8'
* 'uint8_t': 'HEX8'
* 'UINT8': 'HEX8'
* 'UINT8_T': 'HEX8'
* 'char*': 'STRING'
* 'pCHAR': 'STRING'
* 'cstring': 'STRING'
* 'CSTRING': 'STRING'
* 'float': 'FLOAT'
* 'double': 'FLOAT'
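The merge behavior can be sketched with plain Ruby hashes (a trimmed-down stand-in for the full default list above):

```ruby
# Unlike other options, :treat_as entries MERGE with the defaults.
# A trimmed default map stands in for the full list for illustration.
defaults = { 'int' => 'INT', 'char*' => 'STRING', 'float' => 'FLOAT' }
user     = { 'UINT16' => 'HEX16', 'float' => nil }  # add one, disable one

merged = defaults.merge(user)
```

Existing entries survive, `'UINT16'` is added, and reassigning `'float'` to nil disables it.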
* `:treat_as_array`:
A specialized sort of `:treat_as` to be used when you've created a
typedef of an array type, such as `typedef int TenIntegers[10];`. This
is a hash of typedef name to element type. For example:
```
{ "TenIntegers" => "int",
  "ArrayOfFloat" => "float" }
```
Telling CMock about these typedefs allows it to be more intelligent
about parameters of such types, so that you can use features like
ExpectWithArray and ReturnArrayThruPtr with them.
* `:treat_as_void`:
We've seen "fun" legacy systems typedef 'void' with a custom type,
like MY_VOID. Add any instances of those to this list to help CMock
understand how to deal with your code.
* default: []
* `:treat_externs`:
This specifies how you want CMock to handle functions that have been
marked as extern in the header file. Should it mock them?
* `:include` will mock externed functions
* `:exclude` will ignore externed functions (default).
* `:treat_inlines`:
This specifies how you want CMock to handle functions that have been
marked as inline in the header file. Should it mock them?
* `:include` will mock inlined functions
* `:exclude` will ignore inlined functions (default).
CMock will look for the following default patterns (simplified from the actual regex):
- "static inline"
- "inline static"
- "inline"
- "static"
You can override these patterns, check out :inline_function_patterns.
Enabling this feature does require a change in the build system that
is using CMock. To understand why, we need to give some more info
on how we are handling inline functions internally.
Let's say we want to mock a header called example.h. If example.h
contains inline functions, we cannot include this header in the
mocks or test code if we want to mock the inline functions, simply
because the inline functions contain an implementation that we want
to override in our mocks!
So, to circumvent this, we generate a new header, also named
example.h, in the same directory as mock_example.h/c. This newly
generated header is exactly the same as the original header, except
that the inline functions are transformed into 'normal' function
declarations. Placing the new header in the same directory as
mock_example.h/c ensures that the mocks will include the new
header and not the old one.
However, CMock has no control over how the build system is configured
and which include paths the test code is compiled with. In order
for the test code to also see the newly generated header, and not
the old header with inline functions, the build system has to add
the mock folder to the include paths.
Furthermore, we need to keep the order of include paths in mind. We
have to set the mock folder before the other includes to avoid the
test code including the original header instead of the newly
generated header (without inline functions).
* `:unity_helper_path`:
If you have created a header with your own extensions to unity to
handle your own types, you can set this argument to that path. CMock
will then automagically pull in your helpers and use them. The only
trick is to make sure you follow the naming convention:
`UNITY_TEST_ASSERT_EQUAL_YourType`. If it finds macros of the right
shape that match that pattern, it'll use them.
* default: []
* `:verbosity`:
How loud should CMock be?
* 0 for errors only
* 1 for errors and warnings
* 2 for normal (default)
* 3 for verbose
* `:weak`:
When you set this to some value, the generated mocks are defined as weak
symbols using the configured format. This allows them to be overridden
in particular tests.
* Set to `__attribute__((weak))` for weak mocks when using GCC.
* Set to any non-empty string for weak mocks when using IAR.
* default: ""
* `:when_no_prototypes`:
When you give CMock a header file and ask it to create a mock out of
it, it usually contains function prototypes (otherwise what was the
point?). You can control what happens when this isn't true. You can
set this to `:warn`, `:ignore`, or `:error`.
* default: :warn
* `:when_ptr`:
You can customize how CMock deals with pointers (c strings result in
string comparisons... we're talking about **other** pointers here). Your
options are `:compare_ptr` to just verify the pointers are the same,
`:compare_data` or `:smart` to verify that the data is the same.
`:compare_data` and `:smart` behaviors will change slightly based on
if you have the array plugin enabled. By default, they compare a
single element of what is being pointed to. So if you have a pointer
to a struct called ORGAN_T, it will compare one ORGAN_T (whatever that
is).
* default: :smart
* `:array_size_type`:
* `:array_size_name`:
When the `:array` plugin is disabled, these options do nothing.
When the `:array` plugin is enabled, these options allow CMock to recognize
functions with parameters that might refer to an array, like the following,
and treat them more intelligently:
* `void GoBananas(Banana * bananas, int num_bananas)`
* `int write_data(int fd, const uint8_t * data, uint32_t size)`
To recognize functions like these, CMock looks for a parameter list
containing a pointer (which could be an array) followed by something that
could be an array size. "Something", by default, means an `int` or `size_t`
parameter with a name containing "size" or "len".
`:array_size_type` is a list of additional types (besides `int` and `size_t`)
that could be used for an array size parameter. For example, to get CMock to
recognize that `uint32_t size` is an array size, you'd need to say:
```
cfg[:array_size_type] = ['uint32_t']
```
`:array_size_name` is a regular expression used to match an array size
parameter by name. By default, it's 'size|len'. To get CMock to recognize a
name like `num_bananas`, you could tell it to also accept names containing
'num_' like this:
```
cfg[:array_size_name] = 'size|len|num_'
```
Parameters must match *both* `:array_size_type` and `:array_size_name` (and
must come right after a pointer parameter) to be treated as an array size.
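The two-part rule can be sketched in Ruby (an illustrative predicate, not CMock's actual parser):

```ruby
# A parameter is treated as an array size only if BOTH its type is in
# the accepted list AND its name matches the :array_size_name regex.
size_types = ['int', 'size_t', 'uint32_t']  # :array_size_type adds 'uint32_t'
size_name  = /size|len|num_/                # :array_size_name extended

def array_size_param?(type, name, types, name_re)
  types.include?(type) && !(name =~ name_re).nil?
end

a = array_size_param?('int',      'num_bananas', size_types, size_name)  # true
b = array_size_param?('uint32_t', 'size',        size_types, size_name)  # true
c = array_size_param?('float',    'size',        size_types, size_name)  # false
```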
Once you've told it how to recognize your arrays, CMock will give you `_Expect`
calls that work more like `_ExpectWithArray`, and compare an array of objects
rather than just a single object.
For example, if you write the following, CMock will check that GoBananas is
called and passed an array containing a green banana followed by a yellow
banana:
```
Banana b[2] = {GreenBanana, YellowBanana};
GoBananas_Expect(b, 2);
```
In other words, `GoBananas_Expect(b, 2)` now works just the same as:
```
GoBananas_ExpectWithArray(b, 2, 2);
```
* `:fail_on_unexpected_calls`:
By default, CMock will fail a test if a mock is called without `_Expect` or `_Ignore`
being called first. While this forces test writers to be more explicit in their expectations,
it can clutter tests with `_Expect` or `_Ignore` calls for functions which are not the focus
of the test. While this is a good indicator that this module should be refactored, some
users are not fans of the additional noise.
Therefore, `:fail_on_unexpected_calls` can be set to false to force all mocks to start with
the assumption that they are operating as `_Ignore` unless otherwise specified.
* default: true
* **note:**
If this option is disabled, the mocked functions will return
a default value (0) when called (and only if they have to return something of course).
* `:inline_function_patterns`:
An array containing a list of strings to detect inline functions.
This option is only taken into account if you enable :treat_inlines.
These strings are interpreted as regex patterns so be sure to escape
certain characters. For example, use `:inline_function_patterns: ['static inline __attribute__ \(\(always_inline\)\)']`
to recognize `static inline __attribute__ ((always_inline)) int my_func(void)`
as an inline function.
The default patterns are:
* default: ['(static\s+inline|inline\s+static)\s*', '(\bstatic\b|\binline\b)\s*']
* **note:**
The order of patterns is important here!
We go from specific patterns ('static inline') to general patterns ('inline'),
otherwise we would miss functions that use 'static inline' instead of just 'inline'.
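Applying the default patterns in their documented order can be sketched in Ruby (illustrative only):

```ruby
# Strips the inline qualifiers from a declaration by applying the
# default :inline_function_patterns in order, most specific first.
patterns = ['(static\s+inline|inline\s+static)\s*', '(\bstatic\b|\binline\b)\s*']

decl = "static inline int my_func(void)"
cleaned = decl.dup
patterns.each { |p| cleaned.gsub!(Regexp.new(p), "") }
```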
Compiled Options:
-----------------
A number of #defines also exist for customizing the CMock experience.
Feel free to pass these into your compiler or whatever is most
convenient. CMock will otherwise do its best to guess what you want
based on other settings, particularly Unity's settings.
* `CMOCK_MEM_STATIC` or `CMOCK_MEM_DYNAMIC`
Define one of these to determine if you want to dynamically add
memory during tests as required from the heap. If static, you
can control the total footprint of CMock. If dynamic, you will
need to make sure you make some heap space available for CMock.
* `CMOCK_MEM_SIZE`
In static mode this is the total amount of memory you are allocating
to CMock. In dynamic mode this is the size of each chunk allocated
at once (larger numbers grab more memory but require fewer mallocs).
* `CMOCK_MEM_ALIGN`
The way to align your data to. Not everything is as flexible as
a PC, as most embedded designers know. This defaults to 2, meaning
align to the closest 2^2 -> 4 bytes (32 bits). You can turn off alignment
by setting 0, force alignment to the closest uint16 with 1 or even
to the closest uint64 with 3.
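The exponent arithmetic can be sketched in Ruby (an illustration of the rounding rule, not CMock's source):

```ruby
# CMOCK_MEM_ALIGN is an exponent: alignment = 2**CMOCK_MEM_ALIGN bytes.
# Rounding a size up to that boundary uses the usual mask trick.
def align_up(size, align_exp)
  mask = (1 << align_exp) - 1
  (size + mask) & ~mask
end

a = align_up(5, 2)  # align to 4 bytes -> 8
b = align_up(5, 0)  # alignment off    -> 5
c = align_up(5, 3)  # align to 8 bytes -> 8
```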
* `CMOCK_MEM_PTR_AS_INT`
This is used internally to hold pointers... it needs to be big
enough. On most processors a pointer is the same as an unsigned
long... but maybe that's not true for yours?
* `CMOCK_MEM_INDEX_TYPE`
This needs to be something big enough to point anywhere in CMock's
memory space... usually it's a size_t.
Other Tips
==========
resetTest
---------
While this isn't strictly a CMock feature, often users of CMock are using
either the test runner generator scripts in Unity or using Ceedling. In
either case, there is a handy function called `resetTest` which gets
generated with your runner. You can then use this function in your tests
themselves. Call it during a test to have CMock validate everything up to that point
and start over clean. This is really useful when wanting to test a function in
an iterative manner with different arguments.
C++ Support
---------
C++ unit test/mocking frameworks often use a completely different approach (vs.
CMock) that relies on overloading virtual class members and does not support
directly mocking static class member methods or free functions (i.e., functions
in plain C). One workaround is to wrap the non-virtual functions in an object
that exposes them as virtual methods and modify your code to inject mocks at
run-time... but there is another way!
Simply use CMock to mock the static member methods and a C++ mocking framework
to handle the virtual methods. (Yes, you can mix mocks from CMock and a C++
mocking framework together in the same test!)
Keep in mind that since C++ mocking frameworks often link the real object to the
unit test too, we need to resolve multiple definition errors with something like
the following in the source of the real implementation for any functions that
CMock mocks:
#if defined(TEST)
__attribute__((weak))
#endif
To address potential issues with re-using the same function name in different
namespaces/classes, the generated function names include the namespace(s) and
class. For example:
namespace MyNamespace {
class MyClass {
static int DoesSomething(int a, int b);
};
}
Will generate functions like
void MyNamespace_MyClass_DoesSomething_ExpectAndReturn(int a, int b, int toReturn);
Examples
========
You can look in the [examples directory](/examples/) for a couple of examples on how
you might tool CMock into your build process. You may also want to consider
using [Ceedling](https://throwtheswitch.org/ceedling). Please note that
these examples are meant to show how the build process works. They have
failing tests ON PURPOSE to show what that would look like. Don't be alarmed. ;)
docs/CeedlingPacket.md (2266 lines): file diff suppressed because it is too large

docs/CeedlingUpgrade.md (83 lines)
# Upgrading Ceedling
You'd like to stay in sync with the latest Ceedling... and who wouldn't? Depending on
how you've made use of Ceedling, that may vary slightly. No matter what, though, our first
step is to update Ceedling itself.
## Step 1: Update Ceedling Itself
```
gem update ceedling
```
That should do it... unless you don't have a valid connection to the internet. In that case,
you might have to download the gem from rubygems.org and then install it manually:
```
gem install --local ceedling-filename.gem
```
## Step 2: Update Projects Using Ceedling
When you set up your project(s), it was either configured to use the gem directly, or it was
configured to install itself locally (often into a vendor directory).
For projects that are of the first type, congratulations, you're finished. The project will
automatically use the new ceedling. There MAY be things that need to be tweaked if features have
moved significantly. (And we apologize if that's your situation... as we get to version 1, we're
going to have a stronger focus on backwards compatibility). If your project isn't working perfectly,
skip down to Step 3.
If the project was installed to have a copy of ceedling locally, you have a choice. You may
choose to continue to run THIS project on the old version of Ceedling. Often this is the
preferred method for legacy projects which only get occasional focus. Why go through the effort
of updating for new tools if it's serving its purpose and you're unlikely to actually use the new
features?
The other choice, of course, is to update it. To do so, we open a command prompt and address ceedling
from *outside* the project. For example, let's say we have the following structure:
- projects
  - myproject
    - project.yml
    - src
    - tgt
    - vendor
In this case, we'd want to be in the `projects` directory. At that point, we can ask Ceedling to
update our project.
```
ceedling upgrade myproject
```
Ceedling will automatically look for your project yaml file and do its best to determine what needs
to be updated. If installed locally, this will mean copying the latest copy of Unity, CMock, and
Ceedling. It will also involve copying documentation, if you had that installed.
## Step 3: Solving Problems
We wish every project would update seamlessly... unfortunately there is a lot of customization that
goes into each project, and Ceedling often isn't aware of all of it. To make matters worse, Ceedling
has been in pre-release for a while, meaning it occasionally has significant changes that may break
current installations. We've tried to capture the common ones here:
### rakefile
Ceedling is built on a utility called Rake. In the past, rake was how the user actually
interacted with Ceedling. That's no longer the case. Using a modern version of Ceedling means that
you issue commands like `ceedling test:all` instead of `rake test:all`. If you have a continuous
integration server or other calling service, it may need to be updated to comply.
Similarly, older versions of Ceedling actually placed a rakefile in the project directory, allowing
the project to customize its own flow. For the most part this went unused and better ways were later
introduced. At this point, the `rakefile` is more trouble than it's worth and often should just be
removed.
### plugins
If you have custom plugins installed to your project, the plugin architecture has gone through some
revisions and it may or may not be compatible at this time. Again, this is a problem which should
not exist soon.
docs/ThrowTheSwitchCodingStandard.md (207 lines)
# ThrowTheSwitch.org Coding Standard
Hi. Welcome to the coding standard for ThrowTheSwitch.org. For the most part,
we try to follow these standards to unify our contributors' code into a cohesive
unit (puns intended). You might find places where these standards aren't
followed. We're not perfect. Please be polite where you notice these discrepancies
and we'll try to be polite when we notice yours.
;)
## Why Have A Coding Standard?
Being consistent makes code easier to understand. We've made an attempt to keep
our standard simple because we also believe that we can only expect someone to
follow something that is understandable. Please do your best.
## Our Philosophy
Before we get into details on syntax, let's take a moment to talk about our
vision for these tools. We're C developers and embedded software developers.
These tools are great to test any C code, but catering to embedded software has
made us more tolerant of compiler quirks. There are a LOT of quirky compilers
out there. By quirky I mean "doesn't follow standards because they feel like
they have a license to do as they wish."
Our philosophy is "support every compiler we can". Most often, this means that
we aim for writing C code that is standards compliant (often C89... that seems
to be a sweet spot that is almost always compatible). But it also means these
tools are tolerant of things that aren't common. Some that aren't even
compliant. There are configuration options to override the size of standard
types. There are configuration options to force Unity to not use certain
standard library functions. A lot of Unity is configurable and we have worked
hard to make it not TOO ugly in the process.
Similarly, our tools that parse C do their best. They aren't full C parsers
(yet) and, even if they were, they would still have to accept non-standard
additions like gcc extensions or specifying `@0x1000` to force a variable to
compile to a particular location. It's just what we do, because we like
everything to Just Work™.
Speaking of having things Just Work™, that's our second philosophy. By that, we
mean that we do our best to have EVERY configuration option have a logical
default. We believe that if you're working with a simple compiler and target,
you shouldn't need to configure very much... we try to make the tools guess as
much as they can, but give the user the power to override it when it's wrong.
## Naming Things
Let's talk about naming things. Programming is all about naming things. We name
files, functions, variables, and so much more. While we're not always going to
find the best name for something, we actually put quite a bit of effort into
finding *What Something WANTS to be Called*™.
When naming things, we more or less follow this hierarchy, the first being the
most important to us (but we do all four whenever possible):
1. Readable
2. Descriptive
3. Consistent
4. Memorable
#### Readable
We want to read our code. This means we like names and flow that are more
naturally read. We try to avoid double negatives. We try to avoid cryptic
abbreviations (sticking to ones we feel are common).
#### Descriptive
We like descriptive names for things, especially functions and variables.
Finding the right name for something is an important endeavor. You might notice
from poking around our code that this often results in names that are a little
longer than the average. Guilty. We're okay with a tiny bit more typing if it
means our code is easier to understand.
There are two exceptions to this rule that we also stick to as religiously as
possible:
First, while we realize hungarian notation (and similar systems for encoding
type information into variable names) is providing a more descriptive name, we
feel that (for the average developer) it takes away from readability and
therefore is to be avoided.
Second, loop counters and other local throw-away variables often have a purpose
which is obvious. There's no need, therefore, to get carried away with complex
naming. We find i, j, and k are better loop counters than loopCounterVar or
whatnot. We only break this rule when we see that more description could improve
understanding of an algorithm.
#### Consistent
We like consistency, but we're not really obsessed with it. We try to name our
configuration macros in a consistent fashion... you'll notice a repeated use of
UNITY_EXCLUDE_BLAH or UNITY_USES_BLAH macros. This helps users avoid having to
remember each macro's details.
#### Memorable
Wherever it doesn't violate the above principles, we try to apply memorable
names. Sometimes this means using something that is simply descriptive, but
often we strive for descriptive AND unique... we like quirky names that stand
out in our memory and are easier to search for. Take a look through the file
names in Ceedling and you'll get a good idea of what we are talking about here.
Why use preprocess when you can use preprocessinator? Or what better describes a
module in charge of invoking tasks during releases than release_invoker? Don't
get carried away. The names are still descriptive and fulfill the above
requirements, but they don't feel stale.
## C and C++ Details
We don't really want to add to the style battles out there. Tabs or spaces?
How many spaces? Where do the braces go? These are age-old questions that will
never be answered... or at least not answered in a way that will make everyone
happy.
We've decided on our own style preferences. If you'd like to contribute to these
projects (and we hope that you do), then we ask if you do your best to follow
the same. It will only hurt a little. We promise.
#### Whitespace
Our C-style is to use spaces and to use 4 of them per indent level. It's a nice
power-of-2 number that looks decent on a wide screen. We have no more reason
than that. We break that rule when we have lines that wrap (macros or function
arguments or whatnot). When that happens, we like to indent further to line
things up in nice tidy columns.
```C
if (stuff_happened)
{
    do_something();
}
```
#### Case
- Files - all lower case with underscores.
- Variables - all lower case with underscores
- Macros - all caps with underscores.
- Typedefs - all caps with underscores (also ending with _T).
- Functions - camel cased. Usually named ModuleName_FuncName
- Constants and Globals - camel cased.
#### Braces
The left brace is on the next line after the declaration. The right brace is
directly below that. Everything in between is indented one level. If you're
catching an error and you have a one-liner, go ahead and put it on the same line.
```C
while (blah)
{
    //Like so. Even if only one line, we use braces.
}
```
#### Comments
Do you know what we hate? Old-school C block comments. BUT, we're using them
anyway. As we mentioned, our goal is to support every compiler we can,
especially embedded compilers. There are STILL C compilers out there that only
support old-school block comments. So that is what we're using. We apologize. We
think they are ugly too.
## Ruby Details
Is there really such a thing as a Ruby coding standard? Ruby is such a free-form
language, it seems almost sacrilegious to suggest that people should comply with
one style! We'll keep it really brief!
#### Whitespace
Our Ruby style is to use spaces and to use 2 of them per indent level. It's a
nice power-of-2 number that really grooves with Ruby's compact style. We have no
more reason than that. We break that rule when we have lines that wrap. When
that happens, we like to indent further to line things up in nice tidy columns.
#### Case
- Files - all lower case with underscores.
- Variables - all lower case with underscores
- Classes, Modules, etc - Camel cased.
- Functions - all lower case with underscores
- Constants - all upper case with underscores
## Documentation
Egad. Really? We use markdown and we like pdf files because they can be made to
look nice while still being portable. Good enough?
*Find The Latest of This And More at [ThrowTheSwitch.org](https://throwtheswitch.org)*
docs/UnityAssertionsReference.md (787 lines)
# Unity Assertions Reference
## Background and Overview
### Super Condensed Version
- An assertion establishes truth (i.e. boolean True) for a single condition.
Upon boolean False, an assertion stops execution and reports the failure.
- Unity is mainly a rich collection of assertions and the support to gather up
and easily execute those assertions.
- The structure of Unity allows you to easily separate test assertions from
source code in, well, test code.
- Unity's assertions:
- Come in many, many flavors to handle different C types and assertion cases.
- Use context to provide detailed and helpful failure messages.
- Document types, expected values, and basic behavior in your source code for
free.
### Unity Is Several Things But Mainly It's Assertions
One way to think of Unity is simply as a rich collection of assertions you can
use to establish whether your source code behaves the way you think it does.
Unity provides a framework to easily organize and execute those assertions in
test code separate from your source code.
### What's an Assertion?
At their core, assertions are an establishment of truth - boolean truth. Was this
thing equal to that thing? Does that code doohickey have such-and-such property
or not? You get the idea. Assertions are executable code (to appreciate the big
picture on this read up on the difference between
[link:Dynamic Verification and Static Analysis]). A failing assertion stops
execution and reports an error through some appropriate I/O channel (e.g.
stdout, GUI, file, blinky light).
Fundamentally, for dynamic verification all you need is a single assertion
mechanism. In fact, that's what the [assert() macro][] in C's standard library
is for. So why not just use it? Well, we can do far better in the reporting
department. C's `assert()` is pretty dumb as-is and is particularly poor for
handling common data types like arrays, structs, etc. And, without some other
support, it's far too tempting to litter source code with C's `assert()`'s. It's
generally much cleaner, manageable, and more useful to separate test and source
code in the way Unity facilitates.
### Unity's Assertions: Helpful Messages _and_ Free Source Code Documentation
Asserting a simple truth condition is valuable, but using the context of the
assertion is even more valuable. For instance, if you know you're comparing bit
flags and not just integers, then why not use that context to give explicit,
readable, bit-level feedback when an assertion fails?
That's what Unity's collection of assertions do - capture context to give you
helpful, meaningful assertion failure messages. In fact, the assertions
themselves also serve as executable documentation about types and values in your
source code. So long as your tests remain current with your source and all those
tests pass, you have a detailed, up-to-date view of the intent and mechanisms in
your source code. And due to a wondrous mystery, well-tested code usually tends
to be well designed code.
## Assertion Conventions and Configurations
### Naming and Parameter Conventions
The convention of assertion parameters generally follows this order:
```c
TEST_ASSERT_X( {modifiers}, {expected}, actual, {size/count} )
```
The very simplest assertion possible uses only a single `actual` parameter (e.g.
a simple null check).
- `Actual` is the value being tested and, unlike the other parameters in an
assertion construction, is the only parameter present in all assertion variants.
- `Modifiers` are masks, ranges, bit flag specifiers, floating point deltas.
- `Expected` is your expected value (duh) to compare to an `actual` value; it's
marked as an optional parameter because some assertions only need a single
`actual` parameter (e.g. null check).
- `Size/count` refers to string lengths, number of array elements, etc.
Many of Unity's assertions are clear duplications in that the same data type
is handled by several assertions. The differences among these are in how failure
messages are presented. For instance, a `_HEX` variant of an assertion prints
the expected and actual values of that assertion formatted as hexadecimal.
#### TEST_ASSERT_X_MESSAGE Variants
_All_ assertions are complemented with a variant that includes a simple string
message as a final parameter. The string you specify is appended to an assertion
failure message in Unity output.
For brevity, the assertion variants with a message parameter are not listed
below. Just tack on `_MESSAGE` as the final component to any assertion name in
the reference list below and add a string as the final parameter.
_Example:_
```c
TEST_ASSERT_X( {modifiers}, {expected}, actual, {size/count} )
```
becomes messageified like thus...
```c
TEST_ASSERT_X_MESSAGE( {modifiers}, {expected}, actual, {size/count}, message )
```
Notes:
- The `_MESSAGE` variants intentionally do not support `printf` style formatting
since many embedded projects don't support or avoid `printf` for various reasons.
It is possible to use `sprintf` before the assertion to assemble a complex fail
message, if necessary.
- If you want to output a counter value within an assertion fail message (e.g. from
a loop), building up an array of results and then using one of the `_ARRAY`
assertions (see below) might be a handy alternative to `sprintf`.
#### TEST_ASSERT_X_ARRAY Variants
Unity provides a collection of assertions for arrays containing a variety of
types. These are documented in the Array section below. These are almost on par
with the `_MESSAGE` variants of Unity's Asserts in that for pretty much any Unity
type assertion you can tack on `_ARRAY` and run assertions on an entire block of
memory.
```c
TEST_ASSERT_EQUAL_TYPEX_ARRAY( expected, actual, {size/count} )
```
- `Expected` is an array itself.
- `Size/count` is one or two parameters necessary to establish the number of array
elements and perhaps the length of elements within the array.
Notes:
- The `_MESSAGE` variant convention still applies here to array assertions. The
`_MESSAGE` variants of the `_ARRAY` assertions have names ending with
`_ARRAY_MESSAGE`.
- Assertions for handling arrays of floating point values are grouped with float
and double assertions (see immediately following section).
### TEST_ASSERT_EACH_EQUAL_X Variants
Unity provides a collection of assertions for arrays containing a variety of
types which can be compared to a single value as well. These are documented in
the Each Equal section below. These are almost on par with the `_MESSAGE`
variants of Unity's Asserts in that for pretty much any Unity type assertion you
can inject `_EACH_EQUAL` and run assertions on an entire block of memory.
```c
TEST_ASSERT_EACH_EQUAL_TYPEX( expected, actual, {size/count} )
```
- `Expected` is a single value to compare to.
- `Actual` is an array where each element will be compared to the expected value.
- `Size/count` is one or two parameters necessary to establish the number of array
elements and perhaps the length of elements within the array.
Notes:
- The `_MESSAGE` variant convention still applies here to Each Equal assertions.
- Assertions for handling Each Equal of floating point values are grouped with
float and double assertions (see immediately following section).
### Configuration
#### Floating Point Support Is Optional
Support for floating point types is configurable. That is, by defining the
appropriate preprocessor symbols, floats and doubles can be individually enabled
or disabled in Unity code. This is useful for embedded targets with no floating
point math support (i.e. Unity compiles free of errors for fixed point only
platforms). See Unity documentation for specifics.
#### Maximum Data Type Width Is Configurable
Not all targets support 64 bit wide types or even 32 bit wide types. Define the
appropriate preprocessor symbols and Unity will omit all operations from
compilation that exceed the maximum width of your target. See Unity
documentation for specifics.
## The Assertions in All Their Blessed Glory
### Basic Fail, Pass and Ignore
#### `TEST_FAIL()`
#### `TEST_FAIL_MESSAGE("message")`
This fella is most often used in special conditions where your test code is
performing logic beyond a simple assertion. That is, in practice, `TEST_FAIL()`
will always be found inside a conditional code block.
_Examples:_
- Executing a state machine multiple times that increments a counter your test
code then verifies as a final step.
- Triggering an exception and verifying it (as in Try / Catch / Throw - see the
[CException](https://github.com/ThrowTheSwitch/CException) project).
#### `TEST_PASS()`
#### `TEST_PASS_MESSAGE("message")`
This will abort the remainder of the test, but count the test as a pass. Under
normal circumstances, it is not necessary to include this macro in your tests...
a lack of failure will automatically be counted as a `PASS`. It is occasionally
useful for tests with `#ifdef`s and such.
#### `TEST_IGNORE()`
#### `TEST_IGNORE_MESSAGE("message")`
Marks a test case (i.e. function meant to contain test assertions) as ignored.
Usually this is employed as a breadcrumb to come back and implement a test case.
An ignored test case has effects if other assertions are in the enclosing test
case (see Unity documentation for more).
#### `TEST_MESSAGE(message)`
This can be useful for outputting `INFO` messages into the Unity output stream
without actually ending the test. Like pass and fail messages, it will be output
with the filename and line number.
### Boolean
#### `TEST_ASSERT (condition)`
#### `TEST_ASSERT_TRUE (condition)`
#### `TEST_ASSERT_FALSE (condition)`
#### `TEST_ASSERT_UNLESS (condition)`
A simple wording variation on `TEST_ASSERT_FALSE`. The semantics of
`TEST_ASSERT_UNLESS` aid readability in certain test constructions or
conditional statements.
#### `TEST_ASSERT_NULL (pointer)`
#### `TEST_ASSERT_NOT_NULL (pointer)`
Verify if a pointer is or is not NULL.
#### `TEST_ASSERT_EMPTY (pointer)`
#### `TEST_ASSERT_NOT_EMPTY (pointer)`
Verify if the first element dereferenced from a pointer is or is not zero. This
is particularly useful for checking for empty (or non-empty) null-terminated
C strings, but can be just as easily used for other null-terminated arrays.
### Signed and Unsigned Integers (of all sizes)
Large integer sizes can be disabled for build targets that do not support them.
For example, if your target only supports up to 16 bit types, by defining the
appropriate symbols Unity can be configured to omit 32 and 64 bit operations
that would break compilation (see Unity documentation for more). Refer to
Advanced Asserting later in this document for advice on dealing with other word
sizes.
#### `TEST_ASSERT_EQUAL_INT (expected, actual)`
#### `TEST_ASSERT_EQUAL_INT8 (expected, actual)`
#### `TEST_ASSERT_EQUAL_INT16 (expected, actual)`
#### `TEST_ASSERT_EQUAL_INT32 (expected, actual)`
#### `TEST_ASSERT_EQUAL_INT64 (expected, actual)`
#### `TEST_ASSERT_EQUAL_UINT (expected, actual)`
#### `TEST_ASSERT_EQUAL_UINT8 (expected, actual)`
#### `TEST_ASSERT_EQUAL_UINT16 (expected, actual)`
#### `TEST_ASSERT_EQUAL_UINT32 (expected, actual)`
#### `TEST_ASSERT_EQUAL_UINT64 (expected, actual)`
### Unsigned Integers (of all sizes) in Hexadecimal
All `_HEX` assertions are identical in function to unsigned integer assertions
but produce failure messages with the `expected` and `actual` values formatted
in hexadecimal. Unity output is big endian.
#### `TEST_ASSERT_EQUAL_HEX (expected, actual)`
#### `TEST_ASSERT_EQUAL_HEX8 (expected, actual)`
#### `TEST_ASSERT_EQUAL_HEX16 (expected, actual)`
#### `TEST_ASSERT_EQUAL_HEX32 (expected, actual)`
#### `TEST_ASSERT_EQUAL_HEX64 (expected, actual)`
### Characters
While you can use the 8-bit integer assertions to compare `char`, another option is
to use this specialized assertion which will show printable characters as printables,
otherwise showing the HEX escape code for the characters.
#### `TEST_ASSERT_EQUAL_CHAR (expected, actual)`
### Masked and Bit-level Assertions
Masked and bit-level assertions produce output formatted in hexadecimal. Unity
output is big endian.
#### `TEST_ASSERT_BITS (mask, expected, actual)`
Only compares the masked (i.e. high) bits of `expected` and `actual` parameters.
#### `TEST_ASSERT_BITS_HIGH (mask, actual)`
Asserts the masked bits of the `actual` parameter are high.
#### `TEST_ASSERT_BITS_LOW (mask, actual)`
Asserts the masked bits of the `actual` parameter are low.
#### `TEST_ASSERT_BIT_HIGH (bit, actual)`
Asserts the specified bit of the `actual` parameter is high.
#### `TEST_ASSERT_BIT_LOW (bit, actual)`
Asserts the specified bit of the `actual` parameter is low.
### Integer Less Than / Greater Than
These assertions verify that the `actual` parameter is less than or greater
than `threshold` (exclusive). For example, if the threshold value is 0, the
greater-than assertion will fail if `actual` is 0 or less. There are assertions for
all the various sizes of ints, as for the equality assertions. Some examples:
#### `TEST_ASSERT_GREATER_THAN_INT8 (threshold, actual)`
#### `TEST_ASSERT_GREATER_OR_EQUAL_INT16 (threshold, actual)`
#### `TEST_ASSERT_LESS_THAN_INT32 (threshold, actual)`
#### `TEST_ASSERT_LESS_OR_EQUAL_UINT (threshold, actual)`
#### `TEST_ASSERT_NOT_EQUAL_UINT8 (threshold, actual)`
### Integer Ranges (of all sizes)
These assertions verify that the `expected` parameter is within +/- `delta`
(inclusive) of the `actual` parameter. For example, if the expected value is 10
and the delta is 3 then the assertion will fail for any value outside the range
of 7 - 13.
#### `TEST_ASSERT_INT_WITHIN (delta, expected, actual)`
#### `TEST_ASSERT_INT8_WITHIN (delta, expected, actual)`
#### `TEST_ASSERT_INT16_WITHIN (delta, expected, actual)`
#### `TEST_ASSERT_INT32_WITHIN (delta, expected, actual)`
#### `TEST_ASSERT_INT64_WITHIN (delta, expected, actual)`
#### `TEST_ASSERT_UINT_WITHIN (delta, expected, actual)`
#### `TEST_ASSERT_UINT8_WITHIN (delta, expected, actual)`
#### `TEST_ASSERT_UINT16_WITHIN (delta, expected, actual)`
#### `TEST_ASSERT_UINT32_WITHIN (delta, expected, actual)`
#### `TEST_ASSERT_UINT64_WITHIN (delta, expected, actual)`
#### `TEST_ASSERT_HEX_WITHIN (delta, expected, actual)`
#### `TEST_ASSERT_HEX8_WITHIN (delta, expected, actual)`
#### `TEST_ASSERT_HEX16_WITHIN (delta, expected, actual)`
#### `TEST_ASSERT_HEX32_WITHIN (delta, expected, actual)`
#### `TEST_ASSERT_HEX64_WITHIN (delta, expected, actual)`
#### `TEST_ASSERT_CHAR_WITHIN (delta, expected, actual)`
### Structs and Strings
#### `TEST_ASSERT_EQUAL_PTR (expected, actual)`
Asserts that the pointers point to the same memory location.
#### `TEST_ASSERT_EQUAL_STRING (expected, actual)`
Asserts that the null terminated (`'\0'`) strings are identical. If strings are
of different lengths or any portion of the strings before their terminators
differ, the assertion fails. Two NULL strings (i.e. zero length) are considered
equivalent.
#### `TEST_ASSERT_EQUAL_MEMORY (expected, actual, len)`
Asserts that the contents of the memory specified by the `expected` and `actual`
pointers are identical. The size of the memory blocks in bytes is specified by
the `len` parameter.
### Arrays
`expected` and `actual` parameters are both arrays. `num_elements` specifies the
number of elements in the arrays to compare.
`_HEX` assertions produce failure messages with expected and actual array
contents formatted in hexadecimal.
For array of strings comparison behavior, see comments for
`TEST_ASSERT_EQUAL_STRING` in the preceding section.
Assertions fail upon the first element in the compared arrays found not to
match. Failure messages specify the array index of the failed comparison.
#### `TEST_ASSERT_EQUAL_INT_ARRAY (expected, actual, num_elements)`
#### `TEST_ASSERT_EQUAL_INT8_ARRAY (expected, actual, num_elements)`
#### `TEST_ASSERT_EQUAL_INT16_ARRAY (expected, actual, num_elements)`
#### `TEST_ASSERT_EQUAL_INT32_ARRAY (expected, actual, num_elements)`
#### `TEST_ASSERT_EQUAL_INT64_ARRAY (expected, actual, num_elements)`
#### `TEST_ASSERT_EQUAL_UINT_ARRAY (expected, actual, num_elements)`
#### `TEST_ASSERT_EQUAL_UINT8_ARRAY (expected, actual, num_elements)`
#### `TEST_ASSERT_EQUAL_UINT16_ARRAY (expected, actual, num_elements)`
#### `TEST_ASSERT_EQUAL_UINT32_ARRAY (expected, actual, num_elements)`
#### `TEST_ASSERT_EQUAL_UINT64_ARRAY (expected, actual, num_elements)`
#### `TEST_ASSERT_EQUAL_HEX_ARRAY (expected, actual, num_elements)`
#### `TEST_ASSERT_EQUAL_HEX8_ARRAY (expected, actual, num_elements)`
#### `TEST_ASSERT_EQUAL_HEX16_ARRAY (expected, actual, num_elements)`
#### `TEST_ASSERT_EQUAL_HEX32_ARRAY (expected, actual, num_elements)`
#### `TEST_ASSERT_EQUAL_HEX64_ARRAY (expected, actual, num_elements)`
#### `TEST_ASSERT_EQUAL_CHAR_ARRAY (expected, actual, num_elements)`
#### `TEST_ASSERT_EQUAL_PTR_ARRAY (expected, actual, num_elements)`
#### `TEST_ASSERT_EQUAL_STRING_ARRAY (expected, actual, num_elements)`
#### `TEST_ASSERT_EQUAL_MEMORY_ARRAY (expected, actual, len, num_elements)`
`len` is the memory in bytes to be compared at each array element.
### Integer Array Ranges (of all sizes)
These assertions verify that the `expected` array parameter is within +/- `delta`
(inclusive) of the `actual` array parameter. For example, if the expected value is
\[10, 12\] and the delta is 3 then the assertion will fail for any value
outside the range of \[7 - 13, 9 - 15\].
#### `TEST_ASSERT_INT_ARRAY_WITHIN (delta, expected, actual, num_elements)`
#### `TEST_ASSERT_INT8_ARRAY_WITHIN (delta, expected, actual, num_elements)`
#### `TEST_ASSERT_INT16_ARRAY_WITHIN (delta, expected, actual, num_elements)`
#### `TEST_ASSERT_INT32_ARRAY_WITHIN (delta, expected, actual, num_elements)`
#### `TEST_ASSERT_INT64_ARRAY_WITHIN (delta, expected, actual, num_elements)`
#### `TEST_ASSERT_UINT_ARRAY_WITHIN (delta, expected, actual, num_elements)`
#### `TEST_ASSERT_UINT8_ARRAY_WITHIN (delta, expected, actual, num_elements)`
#### `TEST_ASSERT_UINT16_ARRAY_WITHIN (delta, expected, actual, num_elements)`
#### `TEST_ASSERT_UINT32_ARRAY_WITHIN (delta, expected, actual, num_elements)`
#### `TEST_ASSERT_UINT64_ARRAY_WITHIN (delta, expected, actual, num_elements)`
#### `TEST_ASSERT_HEX_ARRAY_WITHIN (delta, expected, actual, num_elements)`
#### `TEST_ASSERT_HEX8_ARRAY_WITHIN (delta, expected, actual, num_elements)`
#### `TEST_ASSERT_HEX16_ARRAY_WITHIN (delta, expected, actual, num_elements)`
#### `TEST_ASSERT_HEX32_ARRAY_WITHIN (delta, expected, actual, num_elements)`
#### `TEST_ASSERT_HEX64_ARRAY_WITHIN (delta, expected, actual, num_elements)`
#### `TEST_ASSERT_CHAR_ARRAY_WITHIN (delta, expected, actual, num_elements)`
### Each Equal (Arrays to Single Value)
`expected` are single values and `actual` are arrays. `num_elements` specifies
the number of elements in the arrays to compare.
`_HEX` assertions produce failure messages with expected and actual array
contents formatted in hexadecimal.
Assertions fail upon the first element in the compared arrays found not to
match. Failure messages specify the array index of the failed comparison.
#### `TEST_ASSERT_EACH_EQUAL_INT (expected, actual, num_elements)`
#### `TEST_ASSERT_EACH_EQUAL_INT8 (expected, actual, num_elements)`
#### `TEST_ASSERT_EACH_EQUAL_INT16 (expected, actual, num_elements)`
#### `TEST_ASSERT_EACH_EQUAL_INT32 (expected, actual, num_elements)`
#### `TEST_ASSERT_EACH_EQUAL_INT64 (expected, actual, num_elements)`
#### `TEST_ASSERT_EACH_EQUAL_UINT (expected, actual, num_elements)`
#### `TEST_ASSERT_EACH_EQUAL_UINT8 (expected, actual, num_elements)`
#### `TEST_ASSERT_EACH_EQUAL_UINT16 (expected, actual, num_elements)`
#### `TEST_ASSERT_EACH_EQUAL_UINT32 (expected, actual, num_elements)`
#### `TEST_ASSERT_EACH_EQUAL_UINT64 (expected, actual, num_elements)`
#### `TEST_ASSERT_EACH_EQUAL_HEX (expected, actual, num_elements)`
#### `TEST_ASSERT_EACH_EQUAL_HEX8 (expected, actual, num_elements)`
#### `TEST_ASSERT_EACH_EQUAL_HEX16 (expected, actual, num_elements)`
#### `TEST_ASSERT_EACH_EQUAL_HEX32 (expected, actual, num_elements)`
#### `TEST_ASSERT_EACH_EQUAL_HEX64 (expected, actual, num_elements)`
#### `TEST_ASSERT_EACH_EQUAL_CHAR (expected, actual, num_elements)`
#### `TEST_ASSERT_EACH_EQUAL_PTR (expected, actual, num_elements)`
#### `TEST_ASSERT_EACH_EQUAL_STRING (expected, actual, num_elements)`
#### `TEST_ASSERT_EACH_EQUAL_MEMORY (expected, actual, len, num_elements)`
`len` is the memory in bytes to be compared at each array element.
### Floating Point (If enabled)
#### `TEST_ASSERT_FLOAT_WITHIN (delta, expected, actual)`
Asserts that the `actual` value is within +/- `delta` of the `expected` value.
The nature of floating point representation is such that exact evaluations of
equality are not guaranteed.
#### `TEST_ASSERT_EQUAL_FLOAT (expected, actual)`
Asserts that the `actual` value is "close enough to be considered equal" to the
`expected` value. If you are curious about the details, refer to the Advanced
Asserting section for more details on this. Omitting a user-specified delta in a
floating point assertion is both a shorthand convenience and a requirement of
code generation conventions for CMock.
#### `TEST_ASSERT_EQUAL_FLOAT_ARRAY (expected, actual, num_elements)`
See Array assertion section for details. Note that individual array element
float comparisons are executed using `TEST_ASSERT_EQUAL_FLOAT`. That is, using
user-specified delta comparison values requires a custom-implemented floating
point array assertion.
#### `TEST_ASSERT_FLOAT_IS_INF (actual)`
Asserts that `actual` parameter is equivalent to positive infinity floating
point representation.
#### `TEST_ASSERT_FLOAT_IS_NEG_INF (actual)`
Asserts that `actual` parameter is equivalent to negative infinity floating
point representation.
#### `TEST_ASSERT_FLOAT_IS_NAN (actual)`
Asserts that `actual` parameter is a Not A Number floating point representation.
#### `TEST_ASSERT_FLOAT_IS_DETERMINATE (actual)`
Asserts that `actual` parameter is a floating point representation usable for
mathematical operations. That is, the `actual` parameter is neither positive
infinity nor negative infinity nor Not A Number floating point representations.
#### `TEST_ASSERT_FLOAT_IS_NOT_INF (actual)`
Asserts that `actual` parameter is a value other than positive infinity floating
point representation.
#### `TEST_ASSERT_FLOAT_IS_NOT_NEG_INF (actual)`
Asserts that `actual` parameter is a value other than negative infinity floating
point representation.
#### `TEST_ASSERT_FLOAT_IS_NOT_NAN (actual)`
Asserts that `actual` parameter is a value other than Not A Number floating
point representation.
#### `TEST_ASSERT_FLOAT_IS_NOT_DETERMINATE (actual)`
Asserts that `actual` parameter is not usable for mathematical operations. That
is, the `actual` parameter is either positive infinity or negative infinity or
Not A Number floating point representations.
### Double (If enabled)
#### `TEST_ASSERT_DOUBLE_WITHIN (delta, expected, actual)`
Asserts that the `actual` value is within +/- `delta` of the `expected` value.
The nature of floating point representation is such that exact evaluations of
equality are not guaranteed.
#### `TEST_ASSERT_EQUAL_DOUBLE (expected, actual)`
Asserts that the `actual` value is "close enough to be considered equal" to the
`expected` value. If you are curious about the details, refer to the Advanced
Asserting section for more details. Omitting a user-specified delta in a
floating point assertion is both a shorthand convenience and a requirement of
code generation conventions for CMock.
#### `TEST_ASSERT_EQUAL_DOUBLE_ARRAY (expected, actual, num_elements)`
See Array assertion section for details. Note that individual array element
double comparisons are executed using `TEST_ASSERT_EQUAL_DOUBLE`. That is, using
user-specified delta comparison values requires a custom-implemented double
array assertion.
#### `TEST_ASSERT_DOUBLE_IS_INF (actual)`
Asserts that `actual` parameter is equivalent to positive infinity floating
point representation.
#### `TEST_ASSERT_DOUBLE_IS_NEG_INF (actual)`
Asserts that `actual` parameter is equivalent to negative infinity floating point
representation.
#### `TEST_ASSERT_DOUBLE_IS_NAN (actual)`
Asserts that `actual` parameter is a Not A Number floating point representation.
#### `TEST_ASSERT_DOUBLE_IS_DETERMINATE (actual)`
Asserts that `actual` parameter is a floating point representation usable for
mathematical operations. That is, the `actual` parameter is neither positive
infinity nor negative infinity nor Not A Number floating point representations.
#### `TEST_ASSERT_DOUBLE_IS_NOT_INF (actual)`
Asserts that `actual` parameter is a value other than positive infinity floating
point representation.
#### `TEST_ASSERT_DOUBLE_IS_NOT_NEG_INF (actual)`
Asserts that `actual` parameter is a value other than negative infinity floating
point representation.
#### `TEST_ASSERT_DOUBLE_IS_NOT_NAN (actual)`
Asserts that `actual` parameter is a value other than Not A Number floating
point representation.
#### `TEST_ASSERT_DOUBLE_IS_NOT_DETERMINATE (actual)`
Asserts that `actual` parameter is not usable for mathematical operations. That
is, the `actual` parameter is either positive infinity or negative infinity or
Not A Number floating point representations.
## Advanced Asserting: Details On Tricky Assertions
This section helps you understand how to deal with some of the trickier
assertion situations you may run into. It will give you a glimpse into some of
the under-the-hood details of Unity's assertion mechanisms. If you're one of
those people who likes to know what is going on in the background, read on. If
not, feel free to ignore the rest of this document until you need it.
### How do the EQUAL assertions work for FLOAT and DOUBLE?
As you may know, directly checking for equality between a pair of floats or a
pair of doubles is sloppy at best and an outright no-no at worst. Floating point
values can often be represented in multiple ways, particularly after a series of
operations on a value. Initializing a variable to the value of 2.0 is likely to
result in a floating point representation of 2 x 2^0, but a series of
mathematical operations might result in a representation of 8 x 2^-2
that also evaluates to a value of 2. At some point, repeated operations cause
equality checks to fail.
So Unity doesn't do direct floating point comparisons for equality. Instead, it
checks if two floating point values are "really close." If you leave Unity
running with defaults, "really close" means "within a significant bit or two."
Under the hood, `TEST_ASSERT_EQUAL_FLOAT` is really `TEST_ASSERT_FLOAT_WITHIN`
with the `delta` parameter calculated on the fly. For single precision, delta is
the expected value multiplied by 0.00001, producing a very small proportional
range around the expected value.
If you are expecting a value of 20,000.0 the delta is calculated to be 0.2. So
any value between 19,999.8 and 20,000.2 will satisfy the equality check. This
works out to be roughly a single bit of range for a single-precision number, and
that's just about as tight a tolerance as you can reasonably get from a floating
point value.
So what happens when it's zero? Zero - even more than other floating point
values - can be represented many different ways. It doesn't matter if you have
0 x 2^0 or 0 x 2^63. It's still zero, right? Luckily, if you
subtract these values from each other, they will always produce a difference of
zero, which will still fall between 0 plus or minus a delta of 0. So it still
works!
Double precision floating point numbers use a much smaller multiplier, again
approximating a single bit of error.
If you don't like these ranges and you want to make your floating point equality
assertions less strict, you can change these multipliers to whatever you like by
defining `UNITY_FLOAT_PRECISION` and `UNITY_DOUBLE_PRECISION`. See Unity
documentation for more.
### How do we deal with targets with non-standard int sizes?
It's "fun" that C is a standard where something as fundamental as an integer
varies by target. According to the C standard, an `int` is to be the target's
natural register size, and it should be at least 16-bits and a multiple of a
byte. It also guarantees an order of sizes:
```C
char <= short <= int <= long <= long long
```
Most often, `int` is 32-bits. In many cases in the embedded world, `int` is
16-bits. There are rare microcontrollers out there that have 24-bit integers,
and this remains perfectly standard C.
To make things even more interesting, there are compilers and targets out there
that have a hard choice to make. What if their natural register size is 10-bits
or 12-bits? Clearly they can't fulfill _both_ the requirement to be at least
16-bits AND the requirement to match the natural register size. In these
situations, they often choose the natural register size, leaving us with
something like this:
```C
char (8 bit) <= short (12 bit) <= int (12 bit) <= long (16 bit)
```
Um... yikes. It's obviously breaking a rule or two... but they had to break SOME
rules, so they made a choice.
When the C99 standard rolled around, it introduced alternate standard-size types.
It also introduced macros for pulling in MIN/MAX values for your integer types.
It's glorious! Unfortunately, many embedded compilers can't be relied upon to
use the C99 types (Sometimes because they have weird register sizes as described
above. Sometimes because they don't feel like it?).
A goal of Unity from the beginning was to support every combination of
microcontroller or microprocessor and C compiler. Over time, we've gotten really
close to this. There are a few tricks that you should be aware of, though, if
you're going to do this effectively on some of these more idiosyncratic targets.
First, when setting up Unity for a new target, you're going to want to pay
special attention to the macros for automatically detecting types
(where available) or manually configuring them yourself. You can get information
on both of these in Unity's documentation.
What about the times where you suddenly need to deal with something odd, like a
24-bit `int`? The simplest solution is to use the next size up. If you have a
24-bit `int`, configure Unity to use 32-bit integers. If you have a 12-bit
`int`, configure Unity to use 16 bits. There are two ways this is going to
affect you:
1. When Unity displays errors for you, it's going to pad the upper unused bits
with zeros.
2. You're going to have to be careful of assertions that perform signed
operations, particularly `TEST_ASSERT_INT_WITHIN`. Such assertions might wrap
your `int` in the wrong place, and you could experience false failures. You can
always back down to a simple `TEST_ASSERT` and do the operations yourself.
*Find The Latest of This And More at [ThrowTheSwitch.org][]*
[assert() macro]: http://en.wikipedia.org/en/wiki/Assert.h
[ThrowTheSwitch.org]: https://throwtheswitch.org
# Unity Configuration Guide
## C Standards, Compilers and Microcontrollers
The embedded software world contains its challenges.
Compilers support different revisions of the C Standard.
They ignore requirements in places, sometimes to make the language more usable in some special regard.
Sometimes it's to simplify their support.
Sometimes it's due to specific quirks of the microcontroller they are targeting.
Simulators add another dimension to this menagerie.
Unity is designed to run on almost anything that is targeted by a C compiler.
It would be awesome if this could be done with zero configuration.
While there are some targets that come close to this dream, it is sadly not universal.
It is likely that you are going to need at least a couple of the configuration options described in this document.
All of Unity's configuration options are `#defines`.
Most of these are simple definitions.
A couple are macros with arguments.
They live inside the unity_internals.h header file.
We don't necessarily recommend opening that file unless you really need to.
That file is proof that a cross-platform library is challenging to build.
From a more positive perspective, it is also proof that a great deal of complexity can be centralized primarily to one place to provide a more consistent and simple experience elsewhere.
### Using These Options
It doesn't matter if you're using a target-specific compiler and a simulator or a native compiler.
In either case, you've got a couple choices for configuring these options:
1. Because these options are specified via C defines, you can pass most of these options to your compiler through command line compiler flags. Even if you're using an embedded target that forces you to use their overbearing IDE for all configuration, there will be a place somewhere in your project to configure defines for your compiler.
2. You can create a custom `unity_config.h` configuration file (present in your toolchain's search paths).
In this file, you will list definitions and macros specific to your target. All you must do is define `UNITY_INCLUDE_CONFIG_H` and Unity will rely on `unity_config.h` for any further definitions it may need.
Unfortunately, it doesn't usually work well to just `#define` these options in the test itself.
These defines need to take effect wherever `unity.h` is included.
That means the test itself, the test runner (if you're generating one), and `unity.c` when it's compiled.
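Pulling the second approach together, a minimal `unity_config.h` might look like the sketch below. The specific values are illustrative assumptions for a hypothetical 16-bit target, not defaults you must use; remember to also define `UNITY_INCLUDE_CONFIG_H` (e.g. via a compiler flag) so Unity actually reads this file:

```c
/* unity_config.h - illustrative sketch for a hypothetical 16-bit target */
#ifndef UNITY_CONFIG_H
#define UNITY_CONFIG_H

#define UNITY_INT_WIDTH     16   /* int is 16 bits on this part */
#define UNITY_POINTER_WIDTH 16   /* pointers fit in 16 bits */
#define UNITY_EXCLUDE_FLOAT      /* no FPU, so skip float assertions */

#endif /* UNITY_CONFIG_H */
```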
## The Options
### Integer Types
If you've been a C developer for long, you probably already know that C's concept of an integer varies from target to target.
The C Standard has rules about the `int` matching the register size of the target microprocessor.
It has rules about the `int` and how its size relates to other integer types.
An `int` on one target might be 16 bits while on another target it might be 64.
There are more specific types in compilers compliant with C99 or later, but that's certainly not every compiler you are likely to encounter.
Therefore, Unity has a number of features for helping to adjust itself to match your required integer sizes.
It starts off by trying to do it automatically.
#### `UNITY_EXCLUDE_STDINT_H`
The first thing that Unity does to guess your types is check `stdint.h`.
This file includes defines like `UINTPTR_MAX` that Unity can use to learn a lot about your system.
It's possible you don't want it to do this (um. why not?) or (more likely) it's possible that your system doesn't support `stdint.h`.
If that's the case, you're going to want to define this.
That way, Unity will know to skip the inclusion of this file and you won't be left with a compiler error.
_Example:_
```C
#define UNITY_EXCLUDE_STDINT_H
```
#### `UNITY_EXCLUDE_LIMITS_H`
The second attempt to guess your types is to check `limits.h`.
Some compilers that don't support `stdint.h` could include `limits.h` instead.
If you don't want Unity to check this file either, define this to make it skip the inclusion.
_Example:_
```C
#define UNITY_EXCLUDE_LIMITS_H
```
If you've disabled both of the automatic options above, you're going to have to do the configuration yourself.
Don't worry.
Even this isn't too bad... there are just a handful of defines that you are going to specify if you don't like the defaults.
#### `UNITY_INT_WIDTH`
Define this to be the number of bits an `int` takes up on your system.
The default, if not autodetected, is 32 bits.
_Example:_
```C
#define UNITY_INT_WIDTH 16
```
#### `UNITY_LONG_WIDTH`
Define this to be the number of bits a `long` takes up on your system.
The default, if not autodetected, is 32 bits.
This is used to figure out what kind of 64-bit support your system can handle.
Does it need to specify a `long` or a `long long` to get a 64-bit value?
On 16-bit systems, this option is going to be ignored.
_Example:_
```C
#define UNITY_LONG_WIDTH 16
```
#### `UNITY_POINTER_WIDTH`
Define this to be the number of bits a pointer takes up on your system.
The default, if not autodetected, is 32-bits.
If you're getting ugly compiler warnings about casting from pointers, this is the one to look at.
_Hint:_ In order to support exotic processors (for example TI C55x with a pointer width of 23-bit), choose the next power of two (in this case 32-bit).
_Supported values:_ 16, 32 and 64
_Example:_
```C
// Choose one of these #defines to set your pointer width (if not autodetected)
//#define UNITY_POINTER_WIDTH 16
//#define UNITY_POINTER_WIDTH 32
#define UNITY_POINTER_WIDTH 64 // Set UNITY_POINTER_WIDTH to 64-bit
```
#### `UNITY_SUPPORT_64`
Unity will automatically include 64-bit support if it auto-detects it, or if your `int`, `long`, or pointer widths are greater than 32-bits.
Define this to enable 64-bit support if none of the other options already did it for you.
There can be a significant size and speed impact to enabling 64-bit support on small targets, so don't define it if you don't need it.
_Example:_
```C
#define UNITY_SUPPORT_64
```
### Floating Point Types
In the embedded world, it's not uncommon for targets to have no support for floating point operations at all or to have support that is limited to only single precision.
We are able to guess integer sizes on the fly because integers are always available in at least one size.
Floating point, on the other hand, is sometimes not available at all.
Trying to include `float.h` on these platforms would result in an error. This leaves manual configuration as the only option.
#### `UNITY_INCLUDE_FLOAT`
#### `UNITY_EXCLUDE_FLOAT`
#### `UNITY_INCLUDE_DOUBLE`
#### `UNITY_EXCLUDE_DOUBLE`
By default, Unity guesses that you will want single precision floating point support, but not double precision.
It's easy to change either of these using the include and exclude options here.
You may include neither, either, or both, as suits your needs.
For features that are enabled, the following floating point options also become available.
_Example:_
```C
//what manner of strange processor is this?
#define UNITY_EXCLUDE_FLOAT
#define UNITY_INCLUDE_DOUBLE
```
#### `UNITY_EXCLUDE_FLOAT_PRINT`
Unity aims for as small of a footprint as possible and avoids most standard library calls (some embedded platforms don’t have a standard library!).
Because of this, its routines for printing integer values are minimalist and hand-coded.
Therefore, the display of floating point values during a failure is optional.
By default, Unity will print the actual results of a floating point assertion failure (e.g. "Expected 4.56 Was 4.68").
If you do not need this extra support, you can use this define to instead respond to a failed assertion with a terser message like "Values Not Within Delta".
Leave it undefined if you prefer verbose, explicit failure messages for floating point assertions.
_Example:_
```C
#define UNITY_EXCLUDE_FLOAT_PRINT
```
#### `UNITY_FLOAT_TYPE`
If enabled, Unity assumes you want your `FLOAT` asserts to compare standard C floats.
If your compiler supports a specialty floating point type, you can always override this behavior by using this definition.
_Example:_
```C
#define UNITY_FLOAT_TYPE float16_t
```
#### `UNITY_DOUBLE_TYPE`
If enabled, Unity assumes you want your `DOUBLE` asserts to compare standard C doubles.
If you would like to change this, you can specify something else by using this option.
For example, defining `UNITY_DOUBLE_TYPE` to `long double` could enable gargantuan floating point types on your 64-bit processor instead of the standard `double`.
_Example:_
```C
#define UNITY_DOUBLE_TYPE long double
```
#### `UNITY_FLOAT_PRECISION`
#### `UNITY_DOUBLE_PRECISION`
If you look up `UNITY_ASSERT_EQUAL_FLOAT` and `UNITY_ASSERT_EQUAL_DOUBLE` as documented in the big daddy Unity Assertion Guide, you will learn that they are not really asserting that two values are equal but rather that two values are "close enough" to equal.
"Close enough" is controlled by these precision configuration options.
If you are working with 32-bit floats and/or 64-bit doubles (the normal on most processors), you should have no need to change these options.
They are both set to give you approximately 1 significant bit in either direction.
The float precision is 0.00001 while the double precision is 10^-12.
For further details on how this works, see the appendix of the Unity Assertion Guide.
_Example:_
```C
#define UNITY_FLOAT_PRECISION 0.001f
```
### Miscellaneous
#### `UNITY_EXCLUDE_STDDEF_H`
Unity uses the `NULL` macro, which defines the value of a null pointer constant, defined in `stddef.h` by default.
If you want to provide your own macro for this, you should exclude the `stddef.h` header file by adding this define to your configuration.
_Example:_
```C
#define UNITY_EXCLUDE_STDDEF_H
```
#### `UNITY_INCLUDE_PRINT_FORMATTED`
Unity provides a simple (and very basic) printf-like string output implementation, which is able to print a string modified by the following format string modifiers:
- __%d__ - signed value (decimal)
- __%i__ - same as __%d__
- __%u__ - unsigned value (decimal)
- __%f__ - float/double (if float support is activated)
- __%g__ - same as __%f__
- __%b__ - binary prefixed with "0b"
- __%x__ - hexadecimal (upper case) prefixed with "0x"
- __%X__ - same as __%x__
- __%p__ - pointer (same as __%x__ or __%X__)
- __%c__ - a single character
- __%s__ - a string (e.g. "string")
- __%%__ - The "%" symbol (escaped)
_Example:_
```C
#define UNITY_INCLUDE_PRINT_FORMATTED
int a = 0xfab1;
TEST_PRINTF("Decimal %d\n", -7);
TEST_PRINTF("Unsigned %u\n", 987);
TEST_PRINTF("Float %f\n", 3.1415926535897932384);
TEST_PRINTF("Binary %b\n", 0xA);
TEST_PRINTF("Hex %X\n", 0xFAB);
TEST_PRINTF("Pointer %p\n", &a);
TEST_PRINTF("Character %c\n", 'F');
TEST_PRINTF("String %s\n", "My string");
TEST_PRINTF("Percent %%\n");
TEST_PRINTF("Color Red \033[41mFAIL\033[00m\n");
TEST_PRINTF("\n");
TEST_PRINTF("Multiple (%d) (%i) (%u) (%x)\n", -100, 0, 200, 0x12345);
```
### Toolset Customization
In addition to the options listed above, there are a number of other options which will come in handy to customize Unity's behavior for your specific toolchain.
It is possible that you may not need to touch any of these... but certain platforms, particularly those running in simulators, may need to jump through extra hoops to run properly.
These macros will help in those situations.
#### `UNITY_OUTPUT_CHAR(a)`
#### `UNITY_OUTPUT_FLUSH()`
#### `UNITY_OUTPUT_START()`
#### `UNITY_OUTPUT_COMPLETE()`
By default, Unity prints its results to `stdout` as it runs.
This works perfectly fine in most situations where you are using a native compiler for testing.
It works on some simulators as well so long as they have `stdout` routed back to the command line.
There are times, however, where the simulator will lack support for dumping results or you will want to route results elsewhere for other reasons.
In these cases, you should define the `UNITY_OUTPUT_CHAR` macro.
This macro accepts a single character at a time (as an `int`, since this is the parameter type of the standard C `putchar` function most commonly used).
You may replace this with whatever function call you like.
_Example:_
Say you are forced to run your test suite on an embedded processor with no `stdout` option.
You decide to route your test result output to a custom serial `RS232_putc()` function you wrote, like so:
```C
#include "RS232_header.h"
...
#define UNITY_OUTPUT_CHAR(a) RS232_putc(a)
#define UNITY_OUTPUT_START() RS232_config(115200,1,8,0)
#define UNITY_OUTPUT_FLUSH() RS232_flush()
#define UNITY_OUTPUT_COMPLETE() RS232_close()
```
_Note:_
`UNITY_OUTPUT_FLUSH()` can be set to the standard out flush function simply by specifying `UNITY_USE_FLUSH_STDOUT`.
No other defines are required.
#### `UNITY_OUTPUT_FOR_ECLIPSE`
#### `UNITY_OUTPUT_FOR_IAR_WORKBENCH`
#### `UNITY_OUTPUT_FOR_QT_CREATOR`
When managing your own builds, it is often handy to have messages output in a format which is recognized by your IDE.
These are some standard formats which can be supported.
If you're using Ceedling to manage your builds, it is better to stick with the standard format (leaving these all undefined) and allow Ceedling to use its own decorators.
#### `UNITY_PTR_ATTRIBUTE`
Some compilers require a custom attribute to be assigned to pointers, like `near` or `far`.
In these cases, you can give Unity a safe default for these by defining this option with the attribute you would like.
_Example:_
```C
#define UNITY_PTR_ATTRIBUTE __attribute__((far))
#define UNITY_PTR_ATTRIBUTE near
```
#### `UNITY_PRINT_EOL`
By default, Unity outputs `\n` at the end of each line of output.
This is easy for scripts, Ceedling, etc. to parse, but it might not be ideal for YOUR system.
Feel free to override this and to make it whatever you wish.
_Example:_
```C
#define UNITY_PRINT_EOL { UNITY_OUTPUT_CHAR('\r'); UNITY_OUTPUT_CHAR('\n'); }
```
#### `UNITY_EXCLUDE_DETAILS`
This is an option for if you absolutely must squeeze every byte of memory out of your system.
Unity stores a set of internal scratchpads which are used to pass extra detail information around.
It's used by systems like CMock in order to report which function or argument flagged an error.
If you're not using CMock and you're not using these details for other things, then you can exclude them.
_Example:_
```C
#define UNITY_EXCLUDE_DETAILS
```
#### `UNITY_PRINT_TEST_CONTEXT`
This option allows you to specify your own function to print additional context as part of the error message when a test has failed.
It can be useful if you want to output some specific information about the state of the test at the point of failure, and `UNITY_SET_DETAILS` isn't flexible enough for your needs.
_Example:_
```C
#define UNITY_PRINT_TEST_CONTEXT PrintIterationCount
extern int iteration_count;
void PrintIterationCount(void)
{
UnityPrintFormatted("At iteration #%d: ", iteration_count);
}
```
#### `UNITY_EXCLUDE_SETJMP`
If your embedded system doesn't support the standard library setjmp, you can exclude Unity's reliance on this by using this define.
This dropped dependence comes at a price, though.
You will be unable to use custom helper functions for your tests, and you will be unable to use tools like CMock.
Very likely, if your compiler doesn't support setjmp, you wouldn't have had the memory space for those things anyway, though... so this option exists for those situations.
_Example:_
```C
#define UNITY_EXCLUDE_SETJMP
```
#### `UNITY_OUTPUT_COLOR`
If you want to add color using ANSI escape codes you can use this define.
_Example:_
```C
#define UNITY_OUTPUT_COLOR
```
#### `UNITY_SHORTHAND_AS_INT`
#### `UNITY_SHORTHAND_AS_MEM`
#### `UNITY_SHORTHAND_AS_RAW`
#### `UNITY_SHORTHAND_AS_NONE`
These options give you control of the `TEST_ASSERT_EQUAL` and the `TEST_ASSERT_NOT_EQUAL` shorthand assertions.
Historically, Unity treated the former as an alias for an integer comparison.
It treated the latter as a direct comparison using `!=`.
This asymmetry was confusing, but there was much disagreement as to how best to treat this pair of assertions.
These four options will allow you to specify how Unity will treat these assertions.
- AS_INT - the values will be cast to integers and directly compared.
Arguments that don't cast easily to integers will cause compiler errors.
- AS_MEM - the address of both values will be taken and the entire object's memory footprint will be compared byte by byte.
Directly placing constant numbers like `456` as expected values will cause errors.
- AS_RAW - Unity assumes that you can compare the two values using `==` and `!=` and will do so.
No details are given about mismatches, because it doesn't really know what type it's dealing with.
- AS_NONE - Unity will disallow the use of these shorthand macros altogether, insisting that developers choose a more descriptive option.
#### `UNITY_SUPPORT_VARIADIC_MACROS`
This will force Unity to support variadic macros when using its own built-in RUN_TEST macro.
This will rarely be necessary. Most often, Unity will automatically detect if the compiler supports variadic macros by checking to see if it's C99+ compatible.
In the event that the compiler supports variadic macros, but is primarily C89 (ANSI), defining this option will allow you to use them.
This option is also not necessary when using Ceedling or the test runner generator script.
## Getting Into The Guts
There will be cases where the options above aren't quite going to get everything perfect.
They are likely sufficient for any situation where you are compiling and executing your tests with a native toolchain (e.g. clang on Mac).
These options may even get you through the majority of cases encountered in working with a target simulator run from your local command line.
But especially if you must run your test suite on your target hardware, your Unity configuration will
require special help.
This special help will usually reside in one of two places: the `main()` function or the `RUN_TEST` macro.
Let's look at how these work.
### `main()`
Each test module is compiled and run on its own, separate from the other test files in your project.
Each test file, therefore, has a `main` function.
This `main` function will need to contain whatever code is necessary to initialize your system to a workable state.
This is particularly true for situations where you must set up a memory map or initialize a communication channel for the output of your test results.
A simple main function looks something like this:
```C
int main(void) {
UNITY_BEGIN();
RUN_TEST(test_TheFirst);
RUN_TEST(test_TheSecond);
RUN_TEST(test_TheThird);
return UNITY_END();
}
```
You can see that our main function doesn't bother taking any arguments.
For our most barebones case, we'll never have arguments because we just run all the tests each time.
Instead, we start by calling `UNITY_BEGIN`.
We run each test (in whatever order we wish).
Finally, we call `UNITY_END`, returning its return value (which is the total number of failures).
It should be easy to see that you can add code before any test cases are run or after all the test cases have completed.
This allows you to do any needed system-wide setup or teardown that might be required for your special circumstances.
#### `RUN_TEST`
The `RUN_TEST` macro is called with each test case function.
Its job is to perform whatever setup and teardown is necessary for executing a single test case function.
This includes catching failures, calling the test module's `setUp()` and `tearDown()` functions, and calling `UnityConcludeTest()`.
If using CMock or test coverage, there will be additional stubs in use here.
A simple minimalist RUN_TEST macro looks something like this:
```C
#define RUN_TEST(testfunc) \
UNITY_NEW_TEST(#testfunc) \
if (TEST_PROTECT()) { \
setUp(); \
testfunc(); \
} \
if (TEST_PROTECT() && (!TEST_IS_IGNORED)) \
tearDown(); \
UnityConcludeTest();
```
So that's quite a macro, huh?
It gives you a glimpse of what kind of stuff Unity has to deal with for every single test case.
For each test case, we declare that it is a new test.
Then we run `setUp` and our test function.
These are run within a `TEST_PROTECT` block, the function of which is to handle failures that occur during the test.
Then, assuming our test is still running and hasn't been ignored, we run `tearDown`.
No matter what, our last step is to conclude this test before moving on to the next.
Let's say you need to add a call to `fsync` to force all of your output data to flush to a file after each test.
You could easily insert this after your `UnityConcludeTest` call.
Maybe you want to write an xml tag before and after each result set.
Again, you could do this by adding lines to this macro.
Updates to this macro are for the occasions when you need an action before or after every single test case throughout your entire suite of tests.
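For instance, the flush-after-each-test idea above might look like this non-compilable sketch (assuming a POSIX host; `results_fd` is a hypothetical descriptor that your output routine writes to):

```C
#include <unistd.h>  /* for fsync() */

#define RUN_TEST(testfunc) \
    UNITY_NEW_TEST(#testfunc) \
    if (TEST_PROTECT()) { \
      setUp(); \
      testfunc(); \
    } \
    if (TEST_PROTECT() && (!TEST_IS_IGNORED)) \
      tearDown(); \
    UnityConcludeTest(); \
    fsync(results_fd); /* hypothetical: force results to disk after each test */
```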
## Happy Porting
The defines and macros in this guide should help you port Unity to just about any C target we can imagine.
If you run into a snag or two, don't be afraid of asking for help on the forums.
We love a good challenge!
*Find The Latest of This And More at [ThrowTheSwitch.org][]*
[ThrowTheSwitch.org]: https://throwtheswitch.org
# Unity - Getting Started
## Welcome
Congratulations.
You're now the proud owner of your very own pile of bits!
What are you going to do with all these ones and zeros?
This document should be able to help you decide just that.
Unity is a unit test framework.
The goal has been to keep it small and functional.
The core Unity test framework is three files: a single C file and a couple header files.
These team up to provide functions and macros to make testing easier.
Unity was designed to be cross-platform.
It works hard to stick with C standards while still providing support for the many embedded C compilers that bend the rules.
Unity has been used with many compilers, including GCC, IAR, Clang, Green Hills, Microchip, and MS Visual Studio.
It's not much work to get it to work with a new target.
### Overview of the Documents
#### Unity Assertions reference
This document will guide you through all the assertion options provided by Unity.
This is going to be your unit testing bread and butter.
You'll spend more time with assertions than any other part of Unity.
#### Unity Assertions Cheat Sheet
This document contains an abridged summary of the assertions described in the previous document.
It's perfect for printing and referencing while you familiarize yourself with Unity's options.
#### Unity Configuration Guide
This document is the one to reference when you are going to use Unity with a new target or compiler.
It'll guide you through the configuration options and will help you customize your testing experience to meet your needs.
#### Unity Helper Scripts
This document describes the helper scripts that are available for simplifying your testing workflow.
It describes the collection of optional Ruby scripts included in the auto directory of your Unity installation.
Neither Ruby nor these scripts are necessary for using Unity.
They are provided as a convenience for those who wish to use them.
#### Unity License
What's an open source project without a license file?
This brief document describes the terms you're agreeing to when you use this software.
Basically, we want it to be useful to you in whatever context you want to use it, but please don't blame us if you run into problems.
### Overview of the Folders
If you have obtained Unity through Github or something similar, you might be surprised by just how much stuff you suddenly have staring you in the face.
Don't worry, Unity itself is very small.
The rest of it is just there to make your life easier.
You can ignore it or use it at your convenience.
Here's an overview of everything in the project.
- `src` - This is the code you care about! This folder contains a C file and two header files.
These three files _are_ Unity.
- `docs` - You're reading this document, so it's possible you have found your way into this folder already.
This is where all the handy documentation can be found.
- `examples` - This contains a few examples of using Unity.
- `extras` - These are optional add ons to Unity that are not part of the core project.
If you've reached us through James Grenning's book, you're going to want to look here.
- `test` - This is how Unity and its scripts are all tested.
If you're just using Unity, you'll likely never need to go in here.
If you are the lucky team member who gets to port Unity to a new toolchain, this is a good place to verify everything is configured properly.
- `auto` - Here you will find helpful Ruby scripts for simplifying your test workflow.
They are purely optional and are not required to make use of Unity.
## How to Create A Test File
Test files are C files.
Most often you will create a single test file for each C module that you want to test.
The test file should include unity.h and the header for your C module to be tested.
Next, a test file will include a `setUp()` and `tearDown()` function.
The setUp function can contain anything you would like to run before each test.
The tearDown function can contain anything you would like to run after each test.
Both functions accept no arguments and return nothing.
You may leave either or both of these blank if you have no need for them.
If you're using Ceedling or the test runner generator script, you may leave these off completely.
Not sure?
Give it a try.
If your compiler complains that it can't find setUp or tearDown when it links, you'll know you need to at least include an empty function for these.
The majority of the file will be a series of test functions.
Test functions follow the convention of starting with the word "test_" or "spec_".
You don't HAVE to name them this way, but it makes it clear what functions are tests for other developers.
Also, the automated scripts that come with Unity or Ceedling will default to looking for test functions to be prefixed this way.
Test functions take no arguments and return nothing. All test accounting is handled internally in Unity.
Finally, at the bottom of your test file, you will write a `main()` function.
This function will call `UNITY_BEGIN()`, then `RUN_TEST` for each test, and finally `UNITY_END()`.
This is what will actually trigger each of those test functions to run, so it is important that each function gets its own `RUN_TEST` call.
Remembering to add each test to the main function can get to be tedious.
If you enjoy using helper scripts in your build process, you might consider making use of our handy [generate_test_runner.rb][] script.
This will create the main function and all the calls for you, assuming that you have followed the suggested naming conventions.
In this case, there is no need for you to include the main function in your test file at all.
When you're done, your test file will look something like this:
```C
#include "unity.h"
#include "file_to_test.h"
void setUp(void) {
// set stuff up here
}
void tearDown(void) {
// clean stuff up here
}
void test_function_should_doBlahAndBlah(void) {
//test stuff
}
void test_function_should_doAlsoDoBlah(void) {
//more test stuff
}
// not needed when using generate_test_runner.rb
int main(void) {
UNITY_BEGIN();
RUN_TEST(test_function_should_doBlahAndBlah);
RUN_TEST(test_function_should_doAlsoDoBlah);
return UNITY_END();
}
```
It's possible that you will need more customization than this, eventually.
For that sort of thing, you're going to want to look at the configuration guide.
This should be enough to get you going, though.
### Running Test Functions
When writing your own `main()` function for a test runner, there are two ways to execute a test.
The classic variant:
``` c
RUN_TEST(func, linenum)
```
Or its simpler replacement, which reports the test as starting at the beginning of the function (no line number required):
``` c
RUN_TEST(func)
```
These macros perform the necessary setup before the test is called and handles clean-up and result tabulation afterwards.
### Ignoring Test Functions
There are times when a test is incomplete or not valid for some reason.
At these times, TEST_IGNORE can be called.
Control will immediately be returned to the caller of the test, and no failures will be returned.
This is useful when your test runners are automatically generated.
``` c
TEST_IGNORE()
```
Ignore this test and return immediately
```c
TEST_IGNORE_MESSAGE (message)
```
Ignore this test and return immediately.
Output a message stating why the test was ignored.
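For example, a test case that is parked until it can actually run might look like this (the function and message are illustrative):

```c
void test_MotorSpeed_should_RampSmoothly(void)
{
    TEST_IGNORE_MESSAGE("Requires rev B hardware - not ready yet");
    /* Nothing below this line executes or counts as a failure. */
    TEST_ASSERT_EQUAL(100, MotorGetSpeed());
}
```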
### Aborting Tests
There are times when a test will contain an infinite loop on error conditions, or there may be reason to escape from the test early without executing the rest of the test.
A pair of macros support this functionality in Unity.
The first `TEST_PROTECT` sets up the feature, and handles emergency abort cases.
`TEST_ABORT` can then be used at any time within the tests to return to the last `TEST_PROTECT` call.
```c
TEST_PROTECT()
```
Setup and Catch macro
```c
TEST_ABORT()
```
Abort Test macro
Example:
```c
int main(void)
{
    if (TEST_PROTECT())
    {
        MyTest();
    }
    return 0;
}
```
If MyTest calls `TEST_ABORT`, program control will immediately return to `TEST_PROTECT` with a return value of zero.
## How to Build and Run A Test File
This is the single biggest challenge to picking up a new unit testing framework, at least in a language like C or C++.
These languages are REALLY good at getting you "close to the metal" (why is the phrase metal? Wouldn't it be more accurate to say "close to the silicon"?).
While this feature is usually a good thing, it can make testing more challenging.
You have two really good options for toolchains.
Depending on where you're coming from, it might surprise you that neither of these options is running the unit tests on your hardware.
There are many reasons for this, but here's a short version:
- On hardware, you have too many constraints (processing power, memory, etc),
- On hardware, you don't have complete control over all registers,
- On hardware, unit testing is more challenging,
- Unit testing isn't System testing. Keep them separate.
Instead of running your tests on your actual hardware, most developers choose to develop them as native applications (using gcc or MSVC for example) or as applications running on a simulator.
Either is a good option.
Native apps have the advantages of being faster and easier to set up.
Simulator apps have the advantage of working with the same compiler as your target application.
The options for configuring these are discussed in the configuration guide.
To get either to work, you might need to make a few changes to the file containing your register set (discussed later).
In either case, a test is built by linking unity, the test file, and the C file(s) being tested.
These files create an executable which can be run as the test set for that module.
Then, this process is repeated for the next test file.
This flexibility of separating tests into individual executables allows us to much more thoroughly unit test our system and it keeps all the test code out of our final release!
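As a concrete sketch, a native build of a single test module might look like this (file names and paths are illustrative, and assume a runner `main` already exists in the test file or a generated runner):

```Shell
# Build one test module as a native executable:
# unity.c is the framework, file_to_test.c is the module under test,
# and test_file_to_test.c contains the test cases and main().
gcc -I src -I test src/unity.c src/file_to_test.c test/test_file_to_test.c -o test_file_to_test.out

# Run the suite for this module; the exit code is the failure count.
./test_file_to_test.out
```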
*Find The Latest of This And More at [ThrowTheSwitch.org][]*
[generate_test_runner.rb]: ../auto/generate_test_runner.rb
[ThrowTheSwitch.org]: https://throwtheswitch.org
# Unity Helper Scripts
## With a Little Help From Our Friends
Sometimes what it takes to be a really efficient C programmer is a little non-C.
The Unity project includes a couple of Ruby scripts for making your life just a tad easier.
They are completely optional.
If you choose to use them, you'll need a copy of Ruby, of course.
Just install whatever the latest version is, and it is likely to work. You can find Ruby at [ruby-lang.org][].
### `generate_test_runner.rb`
Are you tired of creating your own `main` function in your test file?
Do you keep forgetting to add a `RUN_TEST` call when you add a new test case to your suite?
Do you want to use CMock or other fancy add-ons but don't want to figure out how to create your own `RUN_TEST` macro?
Well then we have the perfect script for you!
The `generate_test_runner` script processes a given test file and automatically creates a separate test runner file that includes a `main` function to execute the test cases within the scanned test file.
All you do then is add the generated runner to your list of files to be compiled and linked, and presto you're done!
This script searches your test file for void function signatures having a function name beginning with "test" or "spec".
It treats each of these functions as a test case and builds up a test suite of them.
For example, the following includes three test cases:
```C
void testVerifyThatUnityIsAwesomeAndWillMakeYourLifeEasier(void)
{
ASSERT_TRUE(1);
}
void test_FunctionName_should_WorkProperlyAndReturn8(void) {
ASSERT_EQUAL_INT(8, FunctionName());
}
void spec_Function_should_DoWhatItIsSupposedToDo(void) {
ASSERT_NOT_NULL(Function(5));
}
```
You can run this script a couple of ways.
The first is from the command line:
```Shell
ruby generate_test_runner.rb TestFile.c NameOfRunner.c
```
Alternatively, if you include only the test file parameter, the script will copy the name of the test file and automatically append `_Runner` to the name of the generated file.
The example immediately below will create TestFile_Runner.c.
```Shell
ruby generate_test_runner.rb TestFile.c
```
You can also add a [YAML][] file to configure extra options.
Conveniently, this YAML file is of the same format as that used by Unity and CMock.
So if you are using YAML files already, you can simply pass the very same file into the generator script.
```Shell
ruby generate_test_runner.rb TestFile.c my_config.yml
```
The contents of the YAML file `my_config.yml` could look something like the example below.
If you're wondering what some of these options do, you're going to love the next section of this document.
```YAML
:unity:
:includes:
- stdio.h
- microdefs.h
:cexception: 1
:suite_setup: "blah = malloc(1024);"
:suite_teardown: "free(blah);"
```
If you would like to force your generated test runner to include one or more header files, you can just include those at the command line too.
Just make sure these are _after_ the YAML file, if you are using one:
```Shell
ruby generate_test_runner.rb TestFile.c my_config.yml extras.h
```
Another option, particularly if you are already using Ruby to orchestrate your builds - or more likely the Ruby-based build tool Rake - is requiring this script directly.
Anything that you would have specified in a YAML file can be passed to the script as part of a hash.
Let's push the exact same requirement set as we did above but this time through Ruby code directly:
```Ruby
require "generate_test_runner.rb"
options = {
:includes => ["stdio.h", "microdefs.h"],
:cexception => 1,
:suite_setup => "blah = malloc(1024);",
:suite_teardown => "free(blah);"
}
UnityTestRunnerGenerator.new.run(testfile, runner_name, options)
```
If you have multiple files to generate in a build script (such as a Rakefile), you might want to instantiate a generator object with your options and call it to generate each runner afterwards.
Like thus:
```Ruby
gen = UnityTestRunnerGenerator.new(options)
test_files.each do |f|
  gen.run(f, File.basename(f, '.c') + "Runner.c")
end
```
#### Options accepted by generate_test_runner.rb
The following options are available when executing `generate_test_runner`.
You may pass these as a Ruby hash directly or specify them in a YAML file, both of which are described above.
In the `examples` directory, Example 3's Rakefile demonstrates using a Ruby hash.
##### `:includes`
This option specifies an array of file names to be `#include`'d at the top of your runner C file.
You might use it to reference custom types or anything else universally needed in your generated runners.
##### `:suite_setup`
Define this option with C code to be executed _before any_ test cases are run.
Alternatively, if your C compiler supports weak symbols, you can leave this option unset and instead provide a `void suiteSetUp(void)` function in your test suite.
The linker will look for this symbol and fall back to a Unity-provided stub if it is not found.
##### `:suite_teardown`
Define this option with C code to be executed _after all_ test cases have finished.
An integer variable `num_failures` is available for diagnostics.
The code should end with a `return` statement; the value returned will become the exit code of `main`.
You can normally just return `num_failures`.
Alternatively, if your C compiler supports weak symbols, you can leave this option unset and instead provide a `int suiteTearDown(int num_failures)` function in your test suite.
The linker will look for this symbol and fall back to a Unity-provided stub if it is not found.
##### `:enforce_strict_ordering`
This option should be defined if you have the strict order feature enabled in CMock (see CMock documentation).
This generates extra variables required for everything to run smoothly.
If you provide the same YAML to the generator as used in CMock's configuration, you've already configured the generator properly.
##### `:externc`
This option should be defined if you are mixing C and CPP and want your test runners to automatically include extern "C" support when they are generated.
##### `:mock_prefix` and `:mock_suffix`
Unity automatically generates calls to Init, Verify and Destroy for every file included in the main test file that starts with the given mock prefix and ends with the given mock suffix, file extension not included.
By default, Unity assumes a `Mock` prefix and no suffix.
##### `:plugins`
This option specifies an array of plugins to be used (of course, the array can contain only a single plugin).
This is your opportunity to enable support for CException, which will add a check for unhandled exceptions in each test, reporting a failure if one is detected.
To enable this feature using Ruby:
```Ruby
:plugins => [ :cexception ]
```
Or in a YAML file:
```YAML
:plugins:
  - :cexception
```
If you are using CMock, it is very likely that you are already passing an array of plugins to CMock.
You can just use the same array here.
This script will just ignore the plugins that don't require additional support.
##### `:include_extensions`
This option specifies the pattern for matching acceptable header file extensions.
By default it will accept hpp, hh, H, and h files.
If you need a different combination of files to search, update this from the default `'(?:hpp|hh|H|h)'`.
##### `:source_extensions`
This option specifies the pattern for matching acceptable source file extensions.
By default it will accept cpp, cc, C, c, and ino files.
If you need a different combination of files to search, update this from the default `'(?:cpp|cc|ino|C|c)'`.
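For example, to restrict the generator to plain `.h` headers and C-only sources (a narrower set than the defaults), you might add the following to your YAML configuration:

```YAML
:unity:
  :include_extensions: '(?:h)'
  :source_extensions: '(?:c)'
```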
### `unity_test_summary.rb`
A Unity test file contains one or more test case functions.
Each test case can pass, fail, or be ignored.
Each test file is run individually producing results for its collection of test cases.
A given project will almost certainly be composed of multiple test files.
Therefore, the suite of tests is comprised of one or more test cases spread across one or more test files.
This script aggregates individual test file results to generate a summary of all executed test cases.
The output includes how many tests were run, how many were ignored, and how many failed. In addition, the output includes a listing of which specific tests were ignored and failed.
A good example of the breadth and details of these results can be found in the `examples` directory.
Intentionally ignored and failing tests in this project generate corresponding entries in the summary report.
If you're interested in other (prettier?) output formats, check into the [Ceedling][] build tool project that works with Unity and CMock and supports xunit-style xml as well as other goodies.
This script assumes the existence of files ending with the extensions `.testpass` and `.testfail`.
The contents of these files includes the test results summary corresponding to each test file executed with the extension set according to the presence or absence of failures for that test file.
The script searches a specified path for these files, opens each one it finds, parses the results, and aggregates and prints a summary.
Calling it from the command line looks like this:
```Shell
ruby unity_test_summary.rb build/test/
```
You can optionally specify a root path as well.
This is really helpful when you are using relative paths in your tools' setup, but you want to pull the summary into an IDE like Eclipse for clickable shortcuts.
```Shell
ruby unity_test_summary.rb build/test/ ~/projects/myproject/
```
Or, if you're more of a Windows sort of person:
```Shell
ruby unity_test_summary.rb build\test\ C:\projects\myproject\
```
When configured correctly, you'll see a final summary, like so:
```Shell
--------------------------
UNITY IGNORED TEST SUMMARY
--------------------------
blah.c:22:test_sandwiches_should_HaveBreadOnTwoSides:IGNORE
-------------------------
UNITY FAILED TEST SUMMARY
-------------------------
blah.c:87:test_sandwiches_should_HaveCondiments:FAIL:Expected 1 was 0
meh.c:38:test_soda_should_BeCalledPop:FAIL:Expected "pop" was "coke"
--------------------------
OVERALL UNITY TEST SUMMARY
--------------------------
45 TOTAL TESTS 2 TOTAL FAILURES 1 IGNORED
```
How convenient is that?
*Find The Latest of This And More at [ThrowTheSwitch.org][]*
[ruby-lang.org]: https://www.ruby-lang.org/
[YAML]: http://www.yaml.org/
[Ceedling]: http://www.throwtheswitch.org/ceedling
[ThrowTheSwitch.org]: https://throwtheswitch.org
ceedling-beep
=============
This is a simple plugin that just beeps at the end of a build and/or test sequence. Are you getting too distracted surfing
the internet, chatting with coworkers, or swordfighting while it's building or testing? The friendly beep will let you know
it's time to pay attention again.
This plugin has very few configuration options. At this time it can beep on completion of a task and/or on an error condition.
For each of these, you can configure the method by which it should beep.
```
:tools:
:beep_on_done: :bell
:beep_on_error: :bell
```
Each of these has the following options:
- :bell - this option outputs the ASCII bell character to stdout
- :speaker_test - this uses the Linux speaker-test command, if installed
Very likely, we'll be adding to this list if people find this to be useful.
ceedling-bullseye
=================
# Plugin Overview
Plugin for integrating Bullseye code coverage tool into Ceedling projects.
This plugin requires a working license to Bullseye code coverage tools. The tools
must be within the path or the path should be added to the environment in the
`project.yml` file.
## Configuration
The bullseye plugin supports configuration options via your `project.yml` provided
by Ceedling. The following is a typical configuration example:
```
:bullseye:
:auto_license: TRUE
:plugins:
:bullseye_lib_path: []
:paths:
:bullseye_toolchain_include: []
:tools:
:bullseye_instrumentation:
:executable: covc
:arguments:
- '--file $': ENVIRONMENT_COVFILE
- -q
- ${1}
:bullseye_compiler:
:executable: gcc
:arguments:
- -g
- -I"$": COLLECTION_PATHS_TEST_SUPPORT_SOURCE_INCLUDE_VENDOR
- -I"$": COLLECTION_PATHS_BULLSEYE_TOOLCHAIN_INCLUDE
- -D$: COLLECTION_DEFINES_TEST_AND_VENDOR
- -DBULLSEYE_COMPILER
- -c "${1}"
- -o "${2}"
:bullseye_linker:
:executable: gcc
:arguments:
- ${1}
- -o ${2}
- -L$: PLUGINS_BULLSEYE_LIB_PATH
- -lcov
:bullseye_fixture:
:executable: ${1}
:bullseye_report_covsrc:
:executable: covsrc
:arguments:
- '--file $': ENVIRONMENT_COVFILE
- -q
- -w140
:bullseye_report_covfn:
:executable: covfn
:stderr_redirect: :auto
:arguments:
- '--file $': ENVIRONMENT_COVFILE
- --width 120
- --no-source
- '"${1}"'
:bullseye_browser:
:executable: CoverageBrowser
:background_exec: :auto
:optional: TRUE
:arguments:
- '"$"': ENVIRONMENT_COVFILE
```
## Example Usage
```sh
ceedling bullseye:all utils:bullseye
```
ceedling-colour-report
======================
## Overview
The colour_report replaces the normal ceedling "pretty" output with
a colorized variant, in order to make the results easier to read from
a standard command line. This is very useful on developer machines, but
can occasionally cause problems with parsing on CI servers.
## Setup
Enable the plugin in your project.yml by adding `colour_report`
to the list of enabled plugins.
``` YAML
:plugins:
:enabled:
- colour_report
```
ceedling-command-hooks
======================
Plugin for easily calling command line tools at various points in the build process
Define any of these sections in :tools: to provide additional hooks to be called on demand:
```
:pre_mock_generate
:post_mock_generate
:pre_runner_generate
:post_runner_generate
:pre_compile_execute
:post_compile_execute
:pre_link_execute
:post_link_execute
:pre_test_fixture_execute
:pre_test
:post_test
:pre_release
:post_release
:pre_build
:post_build
```
Each of these tools can support an :executable string and an :arguments list, like so:
```
:tools:
:post_link_execute:
:executable: objcopy.exe
:arguments:
- ${1} #This is replaced with the executable name
- output.srec
- --strip-all
```
You may also specify an array of executables to be called in a particular place, like so:
```
:tools:
:post_test:
- :executable: echo
:arguments: "${1} was glorious!"
- :executable: echo
:arguments:
- it kinda made me cry a little.
- you?
```
Please note that it varies which arguments are passed down to the
hooks. For now, see `command_hooks.rb` to figure out which hook suits you best.
Happy Tweaking!
compile_commands_json
=====================
## Overview
Syntax highlighting and code completion are hard. Historically, each editor or IDE has implemented its own and then competed amongst themselves to offer the best experience for developers. Often developers would stick with an IDE that felt cumbersome and slow just because it had the best syntax highlighting on the market. If doing it for one language is hard (and it is), imagine doing it for dozens of them. Imagine a full stack developer who has to work with CSS, HTML, JavaScript, and some Ruby - they need excellent support in all those languages, which just makes things even harder.
In June of 2016, Microsoft with Red Hat and Codenvy got together to create a standard called the Language Server Protocol (LSP). The idea was simple, by standardising on one protocol, all the IDEs and editors out there would only have to support LSP, and not have custom plugins for each language. In turn, the backend code that actually does the highlighting can be written once and used by any IDE that supports LSP. Many editors already support it such as Sublime Text, vim and emacs. This means that if you're using a crufty old IDE or worse, you're using a shiny new editor without code completion, then this could be just the upgrade you're looking for!
For C and C++ projects, many people use the `clangd` backend. So that it can do things like "go to definition", `clangd` needs to know how to build the project so that it can figure out all the pieces to the puzzle. There are manual tools such as `bear` which can be run with `gcc` or `clang` to extract this information, but they have a big limitation: if run with `ceedling release`, you won't get any auto-completion for Unity and you'll also get error messages reported by your IDE because of what it perceives as missing headers. If you do the same with `ceedling test`, you get Unity but you might miss things that are only seen in the release build.
This plugin resolves that issue. As it is run by Ceedling, it has access to all the build information it needs to create the perfect `compile_commands.json`. Once enabled, this plugin will generate that file and place it in `./build/artifacts/compile_commands.json`. `clangd` will search your project for this file, but it is easier to symlink it into the root directory (for example `ln -s ./build/artifacts/compile_commands.json`).
For more information on LSP and to find out if your editor supports it, check out https://langserver.org/
## Setup
Enable the plugin in your project.yml by adding `compile_commands_json` to the list
of enabled plugins.
``` YAML
:plugins:
:enabled:
- compile_commands_json
```
## Configuration
There is no additional configuration necessary to run this plugin.
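For example, after enabling the plugin, a typical workflow might be (the symlink step mirrors the suggestion above):

```Shell
# Any build will (re)generate the database:
ceedling test:all

# Expose it at the project root, where clangd looks first:
ln -sf ./build/artifacts/compile_commands.json compile_commands.json
```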
ceedling-dependencies
=====================
Plugin for supporting release dependencies. It's rare for an embedded project to
be built completely free of other libraries and modules. Some of these may be
standard internal libraries. Some of these may be 3rd party libraries. In either
case, they become part of the project's ecosystem.
This plugin is intended to make that relationship easier. It allows you to specify
a source for dependencies. If required, it will automatically grab the appropriate
version of that dependency.
Most 3rd party libraries have a method of building already in place. While we'd
love to convert the world to a place where everything downloads with a test suite
in Ceedling, that's not likely to happen anytime soon. Until then, this plugin
will allow the developer to specify what calls Ceedling should make to oversee
the build process of those third party utilities. Are they using Make? CMake? A
custom series of scripts that only a mad scientist could possibly understand? No
matter. Ceedling has you covered. Just specify what should be called, and Ceedling
will make it happen whenever it notices that the output artifacts are missing.
Output artifacts? Sure! Things like static and dynamic libraries, or folders
containing header files that might want to be included by your release project.
So how does all this magic work?
First, you need to add the `:dependencies` plugin to your list. Then, we'll add a new
section called :dependencies. There, you can list as many dependencies as you desire. Each
has a series of fields which help Ceedling to understand your needs. Many of them are
optional. If you don't need that feature, just don't include it! In the end, it'll look
something like this:
```
:dependencies:
:libraries:
- :name: WolfSSL
:source_path: third_party/wolfssl/source
:build_path: third_party/wolfssl/build
:artifact_path: third_party/wolfssl/install
:fetch:
:method: :zip
:source: \\shared_drive\third_party_libs\wolfssl\wolfssl-4.2.0.zip
:environment:
- CFLAGS+=-DWOLFSSL_DTLS_ALLOW_FUTURE
:build:
- "autoreconf -i"
- "./configure --enable-tls13 --enable-singlethreaded"
- make
- make install
:artifacts:
:static_libraries:
- lib/wolfssl.a
:dynamic_libraries:
- lib/wolfssl.so
:includes:
- include/**
```
Let's take a deeper look at each of these features.
The Starting Dash & Name
------------------------
Yes, that opening dash tells the dependencies plugin that the rest of these fields
belong to our first dependency. If we had a second dependency, we'd have another
dash, lined up with the first, and followed by all the fields indented again.
By convention, we use the `:name` field as the first field for each dependency. Ceedling
honestly doesn't care which order the fields are given... but as humans, it makes
it easier for us to see the name of each dependency next to its starting dash.
The `:name` field is only used to print progress while Ceedling is running. You may
name each dependency whatever you wish.
Working Folders
---------------
The `:source_path` field allows us to specify where the source code for each of our
dependencies is stored. If fetching the dependency from elsewhere, it will be fetched
to this location. All commands to build this dependency will be executed from
this location (override this by specifying a `:build_path`). Finally, the output
artifacts will be referenced relative to this location (override this by specifying an `:artifact_path`).
If unspecified, the `:source_path` will be `dependencies\dep_name` where `dep_name`
is the name specified in `:name` above (with special characters removed). It's best,
though, if you specify exactly where you want your dependencies to live.
If the dependency is directly included in your project (you've specified `:none` as the
`:method` for fetching), then `:source_path` should be where Ceedling can find the
source for your dependency in your repo.
All artifacts are relative to the `:artifact_path` (which defaults to be the same as
`:source_path`).
Fetching Dependencies
---------------------
The `:dependencies` plugin supports the ability to automatically fetch your dependencies
for you... using some common methods of fetching source. This section contains a
handful of fields:
- `:method` -- This is the method by which this dependency is fetched.
- `:none` -- This tells Ceedling that the code is already included in the project.
- `:zip` -- This tells Ceedling that we want to unpack a zip file to our source path.
- `:git` -- This tells Ceedling that we want to clone a git repo to our source path.
- `:svn` -- This tells Ceedling that we want to checkout a subversion repo to our source path.
- `:custom` -- This tells Ceedling that we want to use a custom command or commands to fetch the code.
- `:source` -- This is the path or URL to fetch code from when using the zip or git method.
- `:tag`/`:branch` -- This is the specific tag or branch that you wish to retrieve (git only. optional).
- `:hash` -- This is the specific SHA1 hash you want to fetch (git only. optional, requires a deep clone).
- `:revision` -- This is the specific revision you want to fetch (svn only. optional).
- `:executable` -- This is a list of commands to execute when using the `:custom` method.
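For instance, a git-based fetch might be sketched like this (the repository URL and tag here are purely illustrative):

```yaml
:fetch:
  :method: :git
  :source: https://github.com/wolfSSL/wolfssl.git
  :tag: v4.2.0-stable
```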
Environment Variables
---------------------
Many build systems support customization through environment variables. By specifying
an array of environment variables, Ceedling will customize the shell environment before
calling the build process.
Environment variables may be specified in three ways. Let's look at one of each:
```
:environment:
- ARCHITECTURE=ARM9
- CFLAGS+=-DADD_AWESOMENESS
- CFLAGS-=-DWASTE
```
In the first example, you see the most straightforward method. The environment variable
`ARCHITECTURE` is set to the value `ARM9`. That's it. Simple.
The next two options modify an existing symbol. In the first one, we use `+=`, which tells
Ceedling to add the define `ADD_AWESOMENESS` to the environment variable `CFLAGS`. The second
tells Ceedling to remove the define `WASTE` from the same environment variable.
There are a couple of things to note here.
First, when adding to a variable, Ceedling has no way of knowing
what delimiter you are expecting, so you may need to include it yourself (a leading space, for example).
If we had been modifying `PATH` instead, we might have had to use a `:` on a Unix-like system or `;` on
Windows.
Second, removing an argument will have no effect if that argument isn't found
precisely. It's case sensitive and the entire string must match. If the symbol doesn't already exist,
it WILL after executing this command... however it will be set to an empty value.
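Putting the delimiter note into practice, a hypothetical `PATH` modification might look like this (the toolchain directory is made up; note the delimiter is baked into the value):

```yaml
:environment:
  - ARCHITECTURE=ARM9
  - PATH+=:/opt/toolchain/bin   # leading ':' delimiter for Unix-like systems; use ';' on Windows
```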
Building Dependencies
---------------------
The heart of the `:dependencies` plugin is the ability for you, the developer, to specify the
build process for each of your dependencies. You will need to have any required tools installed
before using this feature.
The steps are specified as an array of strings. Ceedling will execute those steps in the order
specified, moving from step to step unless an error is encountered. By the end of the process,
the artifacts should have been created by your process... otherwise an error will be produced.
Artifacts
---------
These are the outputs of the build process. There are four types of artifacts. Any dependency
may have none or some of these. Calling out these files tells Ceedling that they are important.
Your dependency's build process may produce many other files... but these are the files that
Ceedling understands it needs to act on.
### `static_libraries`
Specifying one or more static libraries will tell Ceedling where it should find static libraries
output by your build process. These libraries are automatically added to the list of dependencies
and will be linked with the rest of your code to produce the final release.
If any of these libraries don't exist, Ceedling will trigger your build process in order for it
to produce them.
### `dynamic_libraries`
Specifying one or more dynamic libraries will tell Ceedling where it should find dynamic libraries
output by your build process. These libraries are automatically copied to the same folder as your
final release binary.
If any of these libraries don't exist, Ceedling will trigger your build process in order for it
to produce them.
### `includes`
Often when libraries are built, the same process will output a collection of includes so that
your release code knows how to interact with that library. It's the public API for that library.
By specifying the directories that will contain these includes (don't specify the files themselves,
Ceedling only needs the directories), Ceedling is able to automatically add these to its internal
include list. This allows these files to be used while building your release code, as well as making
them mockable during unit testing.
### `source`
It's possible that your external dependency will just produce additional C files as its output.
In this case, Ceedling is able to automatically add these to its internal source list. This allows
these files to be used while building your release code.
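As a sketch, a dependency that emits generated C files might declare them like this (the key name follows this section's heading and the file paths are hypothetical):

```yaml
:artifacts:
  :source:
    - src/generated_parser.c
    - src/generated_tables.c
```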
Tasks
-----
Once configured correctly, the `:dependencies` plugin should integrate seamlessly into your
workflow and you shouldn't have to think about it. In the real world, that doesn't always happen.
Here are a number of tasks that are added or modified by this plugin.
### `ceedling dependencies:clean`
This can be issued in order to completely remove the dependency from its source path. On the
next build, it will be refetched and rebuilt from scratch. This can also apply to a particular
dependency. For example, by specifying `dependencies:clean:DepName`.
### `ceedling dependencies:fetch`
This can be issued in order to fetch each dependency from its origin. This will have no effect on
dependencies that don't have fetch instructions specified. This can also apply to a particular
dependency. For example, by specifying `dependencies:fetch:DepName`.
### `ceedling dependencies:make`
This will force the dependencies to all build. This should happen automatically when a release
has been triggered... but if you're just getting your dependency configured at this moment, you
may want to just use this feature instead. A single dependency can also be built by specifying its
name, like `dependencies:make:MyTunaBoat`.
### `ceedling dependencies:deploy`
This will force any dynamic libraries produced by your dependencies to be copied to your release
build directory... just in case you clobbered them.
### `paths:include`
Maybe you want to verify that all the include paths are correct. If you query Ceedling with this
request, it will list all the header file paths that it's found, including those produced by
dependencies.
### `files:include`
Maybe you want to take that query further and actually get a list of ALL the header files
Ceedling has found, including those belonging to your dependencies.
Testing
=======
Hopefully all your dependencies are fully tested... but we can't always depend on that.
In the event that they are tested with Ceedling, you'll probably want to consider using
the `:subprojects` plugin instead of this one. The purpose of this plugin is to pull in
third party code for release... and to provide a mockable interface for Ceedling to use
during its tests of other modules.
If that's what you're after... you've found the right plugin!
Happy Testing!


---

*`docs/plugin_fake_function_framework.md`*
# A Fake Function Framework Plug-in for Ceedling
This is a plug-in for [Ceedling](https://github.com/ThrowTheSwitch/Ceedling) to use the [Fake Function Framework](https://github.com/meekrosoft/fff) for mocking instead of CMock.
Using fff provides less strict mocking than CMock, and allows for more loosely-coupled tests.
And, when tests fail -- since you get the actual line number of the failure -- it's a lot easier to figure out what went wrong.
## Installing the plug-in
To use the plugin you need to 1) get the contents of this repo and 2) configure your project to use it.
### Get the source
The easiest way to get the source is to just clone this repo into the Ceedling plugin folder for your existing Ceedling project.
(Don't have a Ceedling project already? [Here are instructions to create one.](http://www.electronvector.com/blog/try-embedded-test-driven-development-right-now-with-ceedling))
From within `<your-project>/vendor/ceedling/plugins`, run:
`git clone https://github.com/ElectronVector/fake_function_framework.git`
This will create a new folder named `fake_function_framework` in the plugins folder.
### Enable the plug-in.
The plug-in is enabled from within your project.yml file.
In the `:plugins` configuration, add `fake_function_framework` to the list of enabled plugins:
```yaml
:plugins:
:load_paths:
- vendor/ceedling/plugins
:enabled:
- stdout_pretty_tests_report
- module_generator
- fake_function_framework
```
*Note that you could put the plugin source in some other location.
In that case you'd need to add a new path to the `:load_paths`.*
## How to use it
You use fff with Ceedling the same way you used to use CMock.
Modules can still be generated with the default module generator: `rake module:create[my_module]`.
If you want to "mock" `some_module.h` in your tests, just `#include "mock_some_module.h"`.
This creates a fake function for each of the functions defined in `some_module.h`.
The name of each fake is the original function name with an appended `_fake`.
For example, if we're generating fakes for a stack module with `push` and `pop` functions, we would have the fakes `push_fake` and `pop_fake`.
These fakes are linked into our test executable so that any time our unit under test calls `push` or `pop` our fakes are called instead.
Each of these fakes is actually a structure containing information about how the function was called, and what it might return.
We can use Unity to inspect these fakes in our tests, and verify the interactions of our units.
There is also a global structure named `fff` which we can use to check the sequence of calls.
The fakes can also be configured to return particular values, so you can exercise the unit under test however you want.
The examples below explain how to use fff to test a variety of module interactions.
Each example uses fakes for a "display" module, created from a display.h file with `#include "mock_display.h"`. The `display.h` file must exist and must contain the prototypes for the functions to be faked.
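The examples assume a `display.h` along these lines. This is only a sketch reconstructed from the function names used in the examples; the exact signatures in your own project may differ:

```c
/* display.h -- sketch of the header assumed by the examples below.
 * Signatures are inferred from how the fakes are used; adjust to match
 * your real module. */
#ifndef DISPLAY_H
#define DISPLAY_H

#include <stdbool.h>

void display_turnOnStatusLed(void);
void display_turnOffStatusLed(void);
void display_setVolume(int volume);
void display_setModeToMinimum(void);
void display_setModeToMaximum(void);
void display_setModeToAverage(void);
bool display_isError(void);
void display_powerDown(void);
void display_getKeyboardEntry(char *entry, int length);
void display_updateData(int data, void (*callback)(void));

#endif /* DISPLAY_H */
```

Including `mock_display.h` in a test then generates a `_fake` structure for each of these prototypes.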
### Test that a function was called once
```c
void
test_whenTheDeviceIsReset_thenTheStatusLedIsTurnedOff()
{
// When
event_deviceReset();
// Then
TEST_ASSERT_EQUAL(1, display_turnOffStatusLed_fake.call_count);
}
```
### Test that a function was NOT called
```c
void
test_whenThePowerReadingIsLessThan5_thenTheStatusLedIsNotTurnedOn(void)
{
// When
event_powerReadingUpdate(4);
// Then
TEST_ASSERT_EQUAL(0, display_turnOnStatusLed_fake.call_count);
}
```
### Test that a single function was called with the correct argument
```c
void
test_whenTheVolumeKnobIsMaxed_thenVolumeDisplayIsSetTo11(void)
{
// When
event_volumeKnobMaxed();
// Then
TEST_ASSERT_EQUAL(1, display_setVolume_fake.call_count);
TEST_ASSERT_EQUAL(11, display_setVolume_fake.arg0_val);
}
```
### Test that calls are made in a particular sequence
```c
void
test_whenTheModeSelectButtonIsPressed_thenTheDisplayModeIsCycled(void)
{
// When
event_modeSelectButtonPressed();
event_modeSelectButtonPressed();
event_modeSelectButtonPressed();
// Then
TEST_ASSERT_EQUAL_PTR((void*)display_setModeToMinimum, fff.call_history[0]);
TEST_ASSERT_EQUAL_PTR((void*)display_setModeToMaximum, fff.call_history[1]);
TEST_ASSERT_EQUAL_PTR((void*)display_setModeToAverage, fff.call_history[2]);
}
```
### Fake a return value from a function
```c
void
test_givenTheDisplayHasAnError_whenTheDeviceIsPoweredOn_thenTheDisplayIsPoweredDown(void)
{
// Given
display_isError_fake.return_val = true;
// When
event_devicePoweredOn();
// Then
TEST_ASSERT_EQUAL(1, display_powerDown_fake.call_count);
}
```
### Fake a function with a value returned by reference
```c
void
test_givenTheUserHasTypedSleep_whenItIsTimeToCheckTheKeyboard_theDisplayIsPoweredDown(void)
{
// Given
char mockedEntry[] = "sleep";
void return_mock_value(char * entry, int length)
{
if (length > strlen(mockedEntry))
{
strncpy(entry, mockedEntry, length);
}
}
display_getKeyboardEntry_fake.custom_fake = return_mock_value;
// When
event_keyboardCheckTimerExpired();
// Then
TEST_ASSERT_EQUAL(1, display_powerDown_fake.call_count);
}
```
### Fake a function with a function pointer parameter
```c
void
test_givenNewDataIsAvailable_whenTheDisplayHasUpdated_thenTheEventIsComplete(void)
{
// A mock function for capturing the callback handler function pointer.
void(*registeredCallback)(void) = 0;
void mock_display_updateData(int data, void(*callback)(void))
{
//Save the callback function.
registeredCallback = callback;
}
display_updateData_fake.custom_fake = mock_display_updateData;
// Given
event_newDataAvailable(10);
// When
if (registeredCallback != 0)
{
registeredCallback();
}
// Then
TEST_ASSERT_EQUAL(true, eventProcessor_isLastEventComplete());
}
```
## Helper macros
For convenience, there are also some helper macros that create new Unity-style asserts:
- `TEST_ASSERT_CALLED(function)`: Asserts that a function was called once.
- `TEST_ASSERT_NOT_CALLED(function)`: Asserts that a function was never called.
- `TEST_ASSERT_CALLED_TIMES(times, function)`: Asserts that a function was called a particular number of times.
- `TEST_ASSERT_CALLED_IN_ORDER(order, function)`: Asserts that a function was called in a particular order.
Here's how you might use one of these instead of simply checking the call_count value:
```c
void
test_whenTheDeviceIsReset_thenTheStatusLedIsTurnedOff()
{
// When
event_deviceReset();
// Then
// This how to directly use fff...
TEST_ASSERT_EQUAL(1, display_turnOffStatusLed_fake.call_count);
// ...and this is how to use the helper macro.
TEST_ASSERT_CALLED(display_turnOffStatusLed);
}
```
## Test setup
All of the fake functions and any fff global state are reset automatically between each test.
## CMock configuration
You can still use some of the CMock configuration options for setting things like the mock prefix, and for including additional header files in the mock files.
```yaml
:cmock:
:mock_prefix: mock_
:includes:
-
:includes_h_pre_orig_header:
-
:includes_h_post_orig_header:
-
:includes_c_pre_header:
-
:includes_c_post_header:
```
## Running the tests
There are unit and integration tests for the plug-in itself.
These are run with the default `rake` task.
The integration test runs the tests for the example project in examples/fff_example.
For the integration tests to succeed, this repository must be placed in a Ceedling tree in the plugins folder.
## More examples
There is an example project in examples/fff_example.
It shows how to use the plug-in with some full-size examples.


---

*`docs/plugin_gcov.md`*
ceedling-gcov
=============
# Plugin Overview
Plugin for integrating GNU GCov code coverage tool into Ceedling projects.
Currently it is only designed for the gcov command. In the future we could
configure this to work with other code coverage tools (LCOV, for example).
This plugin currently uses [gcovr](https://www.gcovr.com/) and / or
[ReportGenerator](https://danielpalme.github.io/ReportGenerator/)
as utilities to generate HTML, XML, JSON, or Text reports. The normal gcov
plugin _must_ be run first for these reports to generate.
## Installation
gcovr can be installed via pip like so:
```sh
pip install gcovr
```
ReportGenerator can be installed via .NET Core like so:
```sh
dotnet tool install -g dotnet-reportgenerator-globaltool
```
It is not required to install both `gcovr` and `ReportGenerator`. Either utility
may be installed to create reports.
## Configuration
The gcov plugin supports configuration options via your `project.yml` provided
by Ceedling.
### Utilities
Gcovr and / or ReportGenerator may be enabled to create coverage reports.
```yaml
:gcov:
:utilities:
- gcovr # Use gcovr to create the specified reports (default).
- ReportGenerator # Use ReportGenerator to create the specified reports.
```
### Reports
Various reports are available and may be enabled with the following
configuration item. See the specific report sections in this README
for additional options and information. All generated reports will be found in `build/artifacts/gcov`.
```yaml
:gcov:
# Specify one or more reports to generate.
# Defaults to HtmlBasic.
:reports:
# Make an HTML summary report.
# Supported utilities: gcovr, ReportGenerator
- HtmlBasic
# Make an HTML report with line by line coverage of each source file.
# Supported utilities: gcovr, ReportGenerator
- HtmlDetailed
  # Make a Text report, which may be output to the console with gcovr, or to a file with either gcovr or ReportGenerator.
# Supported utilities: gcovr, ReportGenerator
- Text
# Make a Cobertura XML report.
# Supported utilities: gcovr, ReportGenerator
- Cobertura
# Make a SonarQube XML report.
# Supported utilities: gcovr, ReportGenerator
- SonarQube
# Make a JSON report.
# Supported utilities: gcovr
- JSON
# Make a detailed HTML report with CSS and JavaScript included in every HTML page. Useful for build servers.
# Supported utilities: ReportGenerator
- HtmlInline
# Make a detailed HTML report with a light theme and CSS and JavaScript included in every HTML page for Azure DevOps.
# Supported utilities: ReportGenerator
- HtmlInlineAzure
# Make a detailed HTML report with a dark theme and CSS and JavaScript included in every HTML page for Azure DevOps.
# Supported utilities: ReportGenerator
- HtmlInlineAzureDark
# Make a single HTML file containing a chart with historic coverage information.
# Supported utilities: ReportGenerator
- HtmlChart
# Make a detailed HTML report in a single file.
# Supported utilities: ReportGenerator
- MHtml
# Make SVG and PNG files that show line and / or branch coverage information.
# Supported utilities: ReportGenerator
- Badges
# Make a single CSV file containing coverage information per file.
# Supported utilities: ReportGenerator
- CsvSummary
  # Make a single TEX file containing a summary for all files and detailed reports for each file.
# Supported utilities: ReportGenerator
- Latex
# Make a single TEX file containing a summary for all files.
# Supported utilities: ReportGenerator
- LatexSummary
# Make a single PNG file containing a chart with historic coverage information.
# Supported utilities: ReportGenerator
- PngChart
# Command line output interpreted by TeamCity.
# Supported utilities: ReportGenerator
- TeamCitySummary
# Make a text file in lcov format.
# Supported utilities: ReportGenerator
- lcov
  # Make an XML file containing a summary for all classes and detailed reports for each class.
# Supported utilities: ReportGenerator
- Xml
# Make a single XML file containing a summary for all files.
# Supported utilities: ReportGenerator
- XmlSummary
```
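Pulling a few of these options together, a minimal configuration that uses gcovr to generate a detailed HTML report plus a Cobertura XML report might look like this (report choices here are just one plausible combination):

```yaml
:gcov:
  :utilities:
    - gcovr
  :reports:
    - HtmlDetailed
    - Cobertura
```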
### Gcovr HTML Reports
Generation of Gcovr HTML reports may be modified with the following configuration items.
```yaml
:gcov:
# Set to 'true' to enable HTML reports or set to 'false' to disable.
# Defaults to enabled. (gcovr --html)
# Deprecated - See the :reports: configuration option.
:html_report: [true|false]
# Gcovr supports generating two types of HTML reports. Use 'basic' to create
# an HTML report with only the overall file information. Use 'detailed' to create
# an HTML report with line by line coverage of each source file.
# Defaults to 'basic'. Set to 'detailed' for (gcovr --html-details).
# Deprecated - See the :reports: configuration option.
:html_report_type: [basic|detailed]
:gcovr:
# HTML report filename.
:html_artifact_filename: <output>
# Use 'title' as title for the HTML report.
# Default is 'Head'. (gcovr --html-title)
:html_title: <title>
# If the coverage is below MEDIUM, the value is marked as low coverage in the HTML report.
# MEDIUM has to be lower than or equal to value of html_high_threshold.
# If MEDIUM is equal to value of html_high_threshold the report has only high and low coverage.
# Default is 75.0. (gcovr --html-medium-threshold)
:html_medium_threshold: 75
# If the coverage is below HIGH, the value is marked as medium coverage in the HTML report.
# HIGH has to be greater than or equal to value of html_medium_threshold.
# If HIGH is equal to value of html_medium_threshold the report has only high and low coverage.
    # Default is 90.0. (gcovr --html-high-threshold)
:html_high_threshold: 90
# Set to 'true' to use absolute paths to link the 'detailed' reports.
# Defaults to relative links. (gcovr --html-absolute-paths)
:html_absolute_paths: [true|false]
# Override the declared HTML report encoding. Defaults to UTF-8. (gcovr --html-encoding)
:html_encoding: <html_encoding>
```
### Cobertura XML Reports
Generation of Cobertura XML reports may be modified with the following configuration items.
```yaml
:gcov:
# Set to 'true' to enable Cobertura XML reports or set to 'false' to disable.
# Defaults to disabled. (gcovr --xml)
# Deprecated - See the :reports: configuration option.
:xml_report: [true|false]
:gcovr:
# Set to 'true' to pretty-print the Cobertura XML report, otherwise set to 'false'.
# Defaults to disabled. (gcovr --xml-pretty)
:xml_pretty: [true|false]
:cobertura_pretty: [true|false]
# Cobertura XML report filename.
:xml_artifact_filename: <output>
:cobertura_artifact_filename: <output>
```
### SonarQube XML Reports
Generation of SonarQube XML reports may be modified with the following configuration items.
```yaml
:gcov:
:gcovr:
# SonarQube XML report filename.
:sonarqube_artifact_filename: <output>
```
### JSON Reports
Generation of JSON reports may be modified with the following configuration items.
```yaml
:gcov:
:gcovr:
# Set to 'true' to pretty-print the JSON report, otherwise set 'false'.
# Defaults to disabled. (gcovr --json-pretty)
:json_pretty: [true|false]
# JSON report filename.
:json_artifact_filename: <output>
```
### Text Reports
Generation of text reports may be modified with the following configuration items.
Text reports may be printed to the console or output to a file.
```yaml
:gcov:
:gcovr:
# Text report filename.
# The text report is printed to the console when no filename is provided.
:text_artifact_filename: <output>
```
### Common Report Options
There are a number of options to control which files are considered part of
the coverage report. Most often, we only care about coverage on our source code, and not
on tests or automatically generated mocks, runners, etc. However, there are times
where this isn't true... or there are times where we've moved Ceedling's directory
structure so that the project file isn't at the root of the project anymore. In these
cases, you may need to tweak `report_include`, `report_exclude`, and `exclude_directories`.
One important note about `report_root`: gcovr will take only a single root folder, unlike
Ceedling's ability to take as many as you like. So you will need to choose a folder which is
a superset of ALL the folders you want, and then use the include or exclude options to set up
patterns of files to pay attention to or ignore. It's not ideal, but it works.
Finally, there are a number of settings which can be specified to adjust the
default behaviors of gcovr:
```yaml
:gcov:
:gcovr:
# The root directory of your source files. Defaults to ".", the current directory.
# File names are reported relative to this root. The report_root is the default report_include.
:report_root: "."
# Load the specified configuration file.
# Defaults to gcovr.cfg in the report_root directory. (gcovr --config)
:config_file: <config_file>
# Exit with a status of 2 if the total line coverage is less than MIN.
# Can be ORed with exit status of 'fail_under_branch' option. (gcovr --fail-under-line)
:fail_under_line: 30
# Exit with a status of 4 if the total branch coverage is less than MIN.
# Can be ORed with exit status of 'fail_under_line' option. (gcovr --fail-under-branch)
:fail_under_branch: 30
# Select the source file encoding.
# Defaults to the system default encoding (UTF-8). (gcovr --source-encoding)
:source_encoding: <source_encoding>
# Report the branch coverage instead of the line coverage. For text report only. (gcovr --branches).
:branches: [true|false]
# Sort entries by increasing number of uncovered lines.
# For text and HTML report. (gcovr --sort-uncovered)
:sort_uncovered: [true|false]
# Sort entries by increasing percentage of uncovered lines.
# For text and HTML report. (gcovr --sort-percentage)
:sort_percentage: [true|false]
# Print a small report to stdout with line & branch percentage coverage.
# This is in addition to other reports. (gcovr --print-summary).
:print_summary: [true|false]
# Keep only source files that match this filter. (gcovr --filter).
:report_include: "^src"
# Exclude source files that match this filter. (gcovr --exclude).
:report_exclude: "^vendor.*|^build.*|^test.*|^lib.*"
# Keep only gcov data files that match this filter. (gcovr --gcov-filter).
:gcov_filter: <gcov_filter>
# Exclude gcov data files that match this filter. (gcovr --gcov-exclude).
:gcov_exclude: <gcov_exclude>
# Exclude directories that match this regex while searching
# raw coverage files. (gcovr --exclude-directories).
:exclude_directories: <exclude_dirs>
# Use a particular gcov executable. (gcovr --gcov-executable).
:gcov_executable: <gcov_cmd>
# Exclude branch coverage from lines without useful
# source code. (gcovr --exclude-unreachable-branches).
:exclude_unreachable_branches: [true|false]
# For branch coverage, exclude branches that the compiler
# generates for exception handling. (gcovr --exclude-throw-branches).
:exclude_throw_branches: [true|false]
# Use existing gcov files for analysis. Default: False. (gcovr --use-gcov-files)
:use_gcov_files: [true|false]
# Skip lines with parse errors in GCOV files instead of
# exiting with an error. (gcovr --gcov-ignore-parse-errors).
:gcov_ignore_parse_errors: [true|false]
# Override normal working directory detection. (gcovr --object-directory)
:object_directory: <objdir>
# Keep gcov files after processing. (gcovr --keep).
:keep: [true|false]
# Delete gcda files after processing. (gcovr --delete).
:delete: [true|false]
# Set the number of threads to use in parallel. (gcovr -j).
:num_parallel_threads: <num_threads>
# When scanning the code coverage, if any files are found that do not have
# associated coverage data, the command will abort with an error message.
:abort_on_uncovered: true
# When using the ``abort_on_uncovered`` option, the files in this list will not
# trigger a failure.
# Ceedling globs described in the Ceedling packet ``Path`` section can be used
# when directories are placed on the list. Globs are limited to matching directories
# and not files.
:uncovered_ignore_list: []
```
### ReportGenerator Configuration
The ReportGenerator utility may be configured with the following configuration items.
All generated reports may be found in `build/artifacts/gcov/ReportGenerator`.
```yaml
:gcov:
:report_generator:
# Optional directory for storing persistent coverage information.
# Can be used in future reports to show coverage evolution.
:history_directory: <history_directory>
# Optional plugin files for custom reports or custom history storage (separated by semicolon).
:plugins: CustomReports.dll
    # Optional list of assemblies that should be included or excluded in the report (separated by semicolons).
# Exclusion filters take precedence over inclusion filters.
# Wildcards are allowed, but not regular expressions.
:assembly_filters: "+Included;-Excluded"
    # Optional list of classes that should be included or excluded in the report (separated by semicolons).
# Exclusion filters take precedence over inclusion filters.
# Wildcards are allowed, but not regular expressions.
:class_filters: "+Included;-Excluded"
    # Optional list of files that should be included or excluded in the report (separated by semicolons).
# Exclusion filters take precedence over inclusion filters.
# Wildcards are allowed, but not regular expressions.
:file_filters: "-./vendor/*;-./build/*;-./test/*;-./lib/*;+./src/*"
# The verbosity level of the log messages.
# Values: Verbose, Info, Warning, Error, Off
:verbosity: Warning
# Optional tag or build version.
:tag: <tag>
# Optional list of one or more regular expressions to exclude gcov notes files that match these filters.
:gcov_exclude:
- <exclude_regex1>
- <exclude_regex2>
# Optionally use a particular gcov executable. Defaults to gcov.
:gcov_executable: <gcov_cmd>
# Optionally set the number of threads to use in parallel. Defaults to 1.
:num_parallel_threads: <num_threads>
# Optional list of one or more command line arguments to pass to Report Generator.
# Useful for configuring Risk Hotspots and Other Settings.
# https://github.com/danielpalme/ReportGenerator/wiki/Settings
:custom_args:
- <custom_arg1>
- <custom_arg2>
```
## Example Usage
```sh
ceedling gcov:all utils:gcov
```
## To-Do list
- Generate overall report (combined statistics from all files with coverage)
## Citations
Most of the comment text which describes the options was taken from the
[Gcovr User Guide](https://www.gcovr.com/en/stable/guide.html) and the
[ReportGenerator Wiki](https://github.com/danielpalme/ReportGenerator/wiki).
The text is repeated here to provide the most accurate option functionality.


---

*`docs/plugin_json_tests_report.md`*
json_tests_report
=================
## Overview
The json_tests_report plugin creates a JSON file of test results, which is
handy for Continuous Integration build servers or as input into other
reporting tools. The JSON file is output to the appropriate
`<build_root>/artifacts/` directory (e.g. `artifacts/test/` for test tasks,
`artifacts/gcov/` for gcov, or `artifacts/bullseye/` for bullseye runs).
## Setup
Enable the plugin in your project.yml by adding `json_tests_report` to the list
of enabled plugins.
``` YAML
:plugins:
:enabled:
- json_tests_report
```
## Configuration
Optionally configure the output / artifact filename in your project.yml with
the `artifact_filename` configuration option. The default filename is
`report.json`.
You can also configure the path where this artifact is stored. This can be done
by setting `path`. The default is that it will be placed in a subfolder under
the `build` directory.
``` YAML
:json_tests_report:
  :artifact_filename: report_spectacularly.json
```
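For example, to set both the filename and the output path mentioned above (the folder name here is purely illustrative):

```yaml
:json_tests_report:
  :artifact_filename: report.json
  :path: reports/ci
```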


---

*`docs/plugin_junit_tests_report.md`*
junit_tests_report
====================
## Overview
The junit_tests_report plugin creates an XML file of test results in JUnit
format, which is handy for Continuous Integration build servers or as input
into other reporting tools. The XML file is output to the appropriate
`<build_root>/artifacts/` directory (e.g. `artifacts/test/` for test tasks,
`artifacts/gcov/` for gcov, or `artifacts/bullseye/` for bullseye runs).
## Setup
Enable the plugin in your project.yml by adding `junit_tests_report`
to the list of enabled plugins.
``` YAML
:plugins:
:enabled:
- junit_tests_report
```
## Configuration
Optionally configure the output / artifact filename in your project.yml with
the `artifact_filename` configuration option. The default filename is
`report.xml`.
You can also configure the path where this artifact is stored by setting
`path`. By default, the report is placed in a subfolder under the
`build` directory.
``` YAML
:junit_tests_report:
:artifact_filename: report_junit.xml
```
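If you also want the report somewhere other than the default artifacts subfolder, `path` can be set alongside the filename; the `ci_reports` folder name below is only an example value:

``` YAML
:junit_tests_report:
  :artifact_filename: report_junit.xml
  :path: ci_reports
```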
---

*File: `docs/plugin_module_generator.md`*
ceedling-module-generator
=========================
## Overview
The module_generator plugin adds a pair of new commands to Ceedling, allowing
you to make or remove modules according to predefined templates. With a single call,
Ceedling can generate a source, header, and test file for a new module. If given a
pattern, it can even create a series of submodules to support specific design patterns.
Finally, it can just as easily remove related modules, avoiding the need to delete
each individually.
Let's say, for example, that you want to create a single module named `MadScience`.
```
ceedling module:create[MadScience]
```
This command tells the module plugin that we want to create a new module. The
name of that module goes between the brackets. The name's case is preserved, unless you have
specified a different naming convention (see configuration). It will create three files:
`MadScience.c`, `MadScience.h`, and `TestMadScience.c`. *NOTE* that it is important that
there are no spaces between the brackets. We know, it's annoying... but it's the rules.
You can also create an entire pattern of files. To do that, just add the pattern ID as a
second argument. Something like this:
```
ceedling module:create[SecretLair,mch]
```
In this example, we'd create 9 files total: 3 headers, 3 source files, and 3 test files. These
files would be named `SecretLairModel`, `SecretLairConductor`, and `SecretLairHardware`. Isn't
that nice?
Similarly, you can create stubs for all functions in a header file just by making a single call
to your handy `stub` feature, like this:
```
ceedling module:stub[SecretLair]
```
This call will look in `SecretLair.h` and will generate a file `SecretLair.c` that contains a stub
for each function declared in the header! Even better, if `SecretLair.c` already exists, it will
add only the new functions, leaving your existing implementations alone so that nothing breaks.
## Configuration
Enable the plugin in your project.yml by adding `module_generator`
to the list of enabled plugins.
Then, like much of Ceedling, you can just run as-is with the defaults, or you can override those
defaults for your own needs. For example, new source and header files will be automatically
placed in the `src/` folder while tests will go in the `test/` folder. That's great if your project
follows the default Ceedling structure... but what if you have a different structure?
```
:module_generator:
:project_root: ./
:source_root: source/
:inc_root: includes/
:test_root: tests/
```
Now I've redirected the location where modules are going to be generated.
### Includes
You can make it so that all of your files are generated with a standard include list. This is done
by adding to the `:includes` array. For example:
```
:module_generator:
:includes:
:tst:
- defs.h
- board.h
:src:
- board.h
```
### Boilerplates
You can specify the actual boilerplate used for each of your files. This is a handy place to
put that corporate copyright notice (or maybe a copyleft notice, if that's your preference?)
```
:module_generator:
:boilerplates: |
/***************************
* This file is Awesome. *
* That is All. *
***************************/
```
### Test Defines
You can replace the `#ifdef TEST` guard at the top of generated test files with a custom define.
This example will put an `#ifdef CEEDLING_TEST` at the top of the test files.
```
:module_generator:
:test_define: CEEDLING_TEST
```
### Naming Convention
Finally, you can force a particular naming convention. Even if someone calls the generator
with something like `MyNewModule`, if they have the naming convention set to `:caps`, it will
generate files like `MY_NEW_MODULE.c`. This keeps everyone on your team behaving the same way.
Your options are as follows:
- `:bumpy` - BumpyFilesLooksLikeSo
- `:camel` - camelFilesAreSimilarButStartLow
- `:snake` - snake_case_is_all_lower_and_uses_underscores
- `:caps` - CAPS_FEELS_LIKE_YOU_ARE_SCREAMING
---

*File: `docs/plugin_raw_output_report.md`*
ceedling-raw-output-report
==========================
## Overview
The raw_output_report plugin allows you to capture all the output from the called
tools in a single document, so you can trace back through it later. This is
useful for debugging... but can eat through memory quickly if left running.
## Setup
Enable the plugin in your project.yml by adding `raw_output_report`
to the list of enabled plugins.
``` YAML
:plugins:
:enabled:
- raw_output_report
```
---

*File: `docs/plugin_stdout_gtestlike_tests_report.md`*
ceedling-stdout-gtestlike-tests-report
======================
## Overview
The stdout_gtestlike_tests_report replaces the normal ceedling "pretty" output with
a variant that resembles the output of gtest. This is most helpful when trying to
integrate into an IDE or CI server that is meant to work with Google Test.
## Setup
Enable the plugin in your project.yml by adding `stdout_gtestlike_tests_report`
to the list of enabled plugins.
``` YAML
:plugins:
:enabled:
- stdout_gtestlike_tests_report
```
---

*File: `docs/plugin_stdout_ide_tests_report.md`*
ceedling-stdout-ide-tests-report
================================
## Overview
The stdout_ide_tests_report replaces the normal ceedling "pretty" output with
a simplified variant intended to be easily parseable.
## Setup
Enable the plugin in your project.yml by adding `stdout_ide_tests_report`
to the list of enabled plugins.
``` YAML
:plugins:
:enabled:
- stdout_ide_tests_report
```
---

*File: `docs/plugin_stdout_pretty_tests_report.md`*
ceedling-pretty-tests-report
============================
## Overview
The stdout_pretty_tests_report plugin provides the default output of Ceedling. Instead of
showing most of the raw output of CMock, Unity, etc., it shows a simplified
view. It also creates a nice summary at the end of execution which groups the
results into ignored and failed tests.
## Setup
Enable the plugin in your project.yml by adding `stdout_pretty_tests_report`
to the list of enabled plugins.
``` YAML
:plugins:
:enabled:
- stdout_pretty_tests_report
```
---

*File: `docs/plugin_subprojects.md`*
ceedling-subprojects
====================
Plugin for supporting subprojects that are built as static libraries. It continues to support
dependency tracking, without getting confused between your main project files and your
subproject files. It accepts different compiler flags and linker flags, allowing you to
optimize for your situation.
First, you're going to want to add the extension to your list of known extensions:
```
:extension:
:subprojects: '.a'
```
Define a new section called `:subprojects`. There, you can list as many subprojects
as you need under the `:paths` key. For each, you specify a unique name and a unique
place to build.
```
:subprojects:
:paths:
- :name: libprojectA
:source:
- ./subprojectA/first/dir
- ./subprojectA/second/dir
:include:
- ./subprojectA/include/dir
:build_root: ./subprojectA/build/dir
:defines:
- DEFINE_JUST_FOR_THIS_FILE
- AND_ANOTHER
- :name: libprojectB
:source:
- ./subprojectB/only/dir
:include:
- ./subprojectB/first/include/dir
- ./subprojectB/second/include/dir
:build_root: ./subprojectB/build/dir
:defines: [] #none for this one
```
You can specify the compiler and linker, just as you would for a release build:
```
:tools:
:subprojects_compiler:
:executable: gcc
:arguments:
- -g
- -I"$": COLLECTION_PATHS_SUBPROJECTS
- -D$: COLLECTION_DEFINES_SUBPROJECTS
- -c "${1}"
- -o "${2}"
:subprojects_linker:
:executable: ar
:arguments:
- rcs
- ${2}
- ${1}
```
That's all there is to it! Happy Hacking!
---

*File: `docs/plugin_teamcity_tests_report.md`*
ceedling-teamcity-tests-report
==============================
## Overview
The teamcity_tests_report replaces the normal ceedling "pretty" output with
a version that has results tagged to be consumed with the teamcity CI server.
## Setup
Enable the plugin in your project.yml by adding `teamcity_tests_report`
to the list of enabled plugins.
``` YAML
:plugins:
:enabled:
- teamcity_tests_report
```
---

*File: `docs/plugin_warnings_report.md`*
warnings-report
===============
## Overview
The warnings_report captures all warnings throughout the build process
and collects them into a single report at the end of execution. It places all
of this into a warnings file in the output artifact directory.
## Setup
Enable the plugin in your project.yml by adding `warnings_report`
to the list of enabled plugins.
``` YAML
:plugins:
:enabled:
- warnings_report
```
---

*File: `docs/plugin_xml_tests_report.md`*
xml_tests_report
================
## Overview
The xml_tests_report plugin creates an XML file of test results in xUnit
format, which is handy for Continuous Integration build servers or as input
into other reporting tools. The XML file is output to the appropriate
`<build_root>/artifacts/` directory (e.g. `artifacts/test/` for test tasks,
`artifacts/gcov/` for gcov, or `artifacts/bullseye/` for bullseye runs).
## Setup
Enable the plugin in your project.yml by adding `xml_tests_report` to the list
of enabled plugins.
``` YAML
:plugins:
:enabled:
- xml_tests_report
```
## Configuration
Optionally configure the output / artifact filename in your project.yml with
the `artifact_filename` configuration option. The default filename is
`report.xml`.
You can also configure the path where this artifact is stored by setting
`path`. By default, the report is placed in a subfolder under the
`build` directory.
``` YAML
:xml_tests_report:
:artifact_filename: report_xunit.xml
```
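As with the other report plugins, `artifact_filename` and `path` can be combined; the `test_output` folder name here is purely illustrative:

``` YAML
:xml_tests_report:
  :artifact_filename: report_xunit.xml
  :path: test_output
```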
---

*File: `project.yml`*
---
# Notes:
# Sample project C code is not presently written to produce a release artifact.
# As such, release build options are disabled.
# This sample, therefore, only demonstrates running a collection of unit tests.
:project:
:use_exceptions: FALSE
:use_test_preprocessor: TRUE
:use_auxiliary_dependencies: TRUE
:build_root: build
# :release_build: TRUE
:test_file_prefix: test_
:which_ceedling: gem
:ceedling_version: 0.31.1
:default_tasks:
- test:all
#:test_build:
# :use_assembly: TRUE
#:release_build:
# :output: MyApp.out
# :use_assembly: FALSE
:environment:
:extension:
:executable: .out
:paths:
:test:
- +:test/**
- -:test/support
:source:
- src/**
:support:
- test/support
:libraries: []
:defines:
# in order to add common defines:
# 1) remove the trailing [] from the :common: section
# 2) add entries to the :common: section (e.g. :test: has TEST defined)
:common: &common_defines []
:test:
- *common_defines
- TEST
:test_preprocess:
- *common_defines
- TEST
:cmock:
:mock_prefix: mock_
:when_no_prototypes: :warn
:enforce_strict_ordering: TRUE
:plugins:
- :ignore
- :callback
:treat_as:
uint8: HEX8
uint16: HEX16
uint32: UINT32
int8: INT8
bool: UINT8
# Add gcov to the plugins list to make use of the gcov plugin
# You will need to have gcov and gcovr both installed to make it work.
# For more information on these options, see docs in plugins/gcov
:gcov:
:reports:
- HtmlDetailed
:gcovr:
:html_medium_threshold: 75
:html_high_threshold: 90
#:tools:
# Ceedling defaults to using gcc for compiling, linking, etc.
# As [:tools] is blank, gcc will be used (so long as it's in your system path)
# See documentation to configure a given toolchain for use
# LIBRARIES
# These libraries are automatically injected into the build process. Those specified as
# common will be used in all types of builds. Otherwise, libraries can be injected in just
# tests or releases. These options are MERGED with the options in supplemental yaml files.
:libraries:
:placement: :end
:flag: "-l${1}"
:path_flag: "-L ${1}"
:system: [] # for example, you might list 'm' to grab the math library
:test: []
:release: []
:plugins:
:load_paths:
- "#{Ceedling.load_path}"
:enabled:
- stdout_pretty_tests_report
- module_generator
...
---

*File: `src/.gitkeep`* (empty)

---

*File: `team.md`*

- TheUltimateOptimist, fdai8031

---

*File: `test/support/.gitkeep`* (empty)