Compare commits

13 Commits

Author SHA1 Message Date
Petteri Aimonen
283a8f36cb Publishing nanopb-0.2.3 2013-09-18 12:44:46 +03:00
Petteri Aimonen
8accc25710 Update changelog 2013-09-13 13:41:57 +03:00
Petteri Aimonen
73142ba082 Add a new very simple example 2013-09-13 13:35:25 +03:00
Petteri Aimonen
f47410ea4b Move examples into subfolders, add READMEs 2013-09-13 12:59:31 +03:00
Petteri Aimonen
fd9a79a06d Merge branch 'dev_get_rid_of_ternary_operator' 2013-09-13 11:31:45 +03:00
Petteri Aimonen
9ada7e7525 Fine-tune the naming of new macros before merging into master.
Requires re-generation of files generated with dev_get_rid_of_ternary_operator.
2013-09-13 11:30:58 +03:00
Petteri Aimonen
59cba0beea Expand extra_fields test to cover field skipping in case of streams. 2013-09-11 17:33:50 +03:00
Petteri Aimonen
152c2c910c Disable warning about uint64_t (long long) 2013-09-11 16:51:53 +03:00
Petteri Aimonen
2b72815036 Fix build error when path contains spaces 2013-09-11 16:45:52 +03:00
Petteri Aimonen
61ad04afd5 Merge branch 'dev_tests_using_scons' 2013-09-11 16:13:19 +03:00
Petteri Aimonen
840e213b9f Get rid of the ternary operator in the pb_field_t initialization.
Some compilers were unable to detect that the ternary operator
can be evaluated at compile time. This commit does the evaluation
on the Python side, which should fix the problem.

The new .pb.c files are generated using PB_FIELD2() macro. The old
PB_FIELD() macro remains, so that previously generated files keep
working.
2013-09-11 09:53:51 +03:00
Petteri Aimonen
5b9ad17dc2 Move the declarations of _pb_ostream_t and _pb_istream_t before first use.
Otherwise Microsoft Visual C++ treats them as C++ classes instead of plain
structs, forbidding use in C linkage functions.

Thanks to Markus Schwarzenberg for the patch.

Update issue 84
Status: Started
2013-09-09 10:53:04 +03:00
Petteri Aimonen
4821e7f457 Add support for running the nanopb generator as protoc plugin.
Will be used to implement issue 47.

For now, symlink nanopb_generator.py as protoc-gen-nanopb and
use protoc --nanopb_out=. to call it.
2013-09-08 19:55:05 +03:00
39 changed files with 825 additions and 143 deletions


@@ -1,3 +1,12 @@
+nanopb-0.2.3
+Improve compatibility by removing ternary operator from initializations (issue 88)
+Fix build error on Visual C++ (issue 84, patch by Markus Schwarzenberg)
+Don't stop on unsupported extension fields (issue 83)
+Add an example pb_syshdr.h file for non-C99 compilers
+Reorganize tests and examples into subfolders (issue 63)
+Switch from Makefiles to scons for building the tests
+Make the tests buildable on Windows
+
 nanopb-0.2.2
 Add support for extension fields (issue 17)
 Fix unknown fields in empty message (issue 78)


@@ -90,22 +90,37 @@ After that, buffer will contain the encoded message.
 The number of bytes in the message is stored in *stream.bytes_written*.
 You can feed the message to *protoc --decode=Example message.proto* to verify its validity.

-For complete examples of the simple cases, see *tests/test_decode1.c* and *tests/test_encode1.c*. For an example with network interface, see the *example* subdirectory.
+For a complete example of the simple case, see *example/simple.c*.
+For a more complex example with network interface, see the *example/network_server* subdirectory.

 Compiler requirements
 =====================
-Nanopb should compile with most ansi-C compatible compilers. It however requires a few header files to be available:
+Nanopb should compile with most ansi-C compatible compilers. It however
+requires a few header files to be available:

 #) *string.h*, with these functions: *strlen*, *memcpy*, *memset*
 #) *stdint.h*, for definitions of *int32_t* etc.
 #) *stddef.h*, for definition of *size_t*
 #) *stdbool.h*, for definition of *bool*

-If these header files do not come with your compiler, you should be able to find suitable replacements online. Mostly the requirements are very simple, just a few basic functions and typedefs.
+If these header files do not come with your compiler, you can use the
+file *compat/pb_syshdr.h* instead. It contains an example of how to provide
+the dependencies. You may have to edit it a bit to suit your custom platform.

-Alternatively, you can define *PB_SYSTEM_HEADER*, which should be the name of a single header file including all the necessary definitions.
+To use the pb_syshdr.h, define *PB_SYSTEM_HEADER* to be the name of your custom
+header file. It should provide all the dependencies listed above.

-Debugging and testing
-=====================
-Extensive unittests are included under the *tests* folder. Just type *make* there to run the tests.
+Running the test cases
+======================
+Extensive unittests and test cases are included under the *tests* folder.
+To build the tests, you will need the `scons`__ build system. The tests should
+be runnable on most platforms. Windows and Linux builds are regularly tested.
+
+__ http://www.scons.org/
+
+In addition to the build system, you will also need a working Google Protocol
+Buffers *protoc* compiler, and the Python bindings for Protocol Buffers. On
+Debian-based systems, install the following packages: *protobuf-compiler*,
+*python-protobuf* and *libprotobuf-dev*.
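The diff only references *compat/pb_syshdr.h* without showing its contents. As a rough illustration of what such a replacement header can look like, here is a minimal sketch in C; the file name, the typedef widths and the assumption that the toolchain still ships <string.h> are assumptions of this sketch, not the contents of the actual compat/pb_syshdr.h.

/* my_syshdr.h - hypothetical stand-in for the C99 headers nanopb needs.
 * Typedef widths assume a typical 32-bit target. */
#ifndef MY_SYSHDR_H
#define MY_SYSHDR_H

#include <string.h>   /* assumed available: strlen, memcpy, memset, size_t */

/* stdint.h substitutes */
typedef signed char        int8_t;
typedef unsigned char      uint8_t;
typedef signed short       int16_t;
typedef unsigned short     uint16_t;
typedef signed long        int32_t;
typedef unsigned long      uint32_t;
typedef signed long long   int64_t;
typedef unsigned long long uint64_t;

/* stddef.h substitute: pb.h also needs offsetof */
#ifndef offsetof
#define offsetof(type, member) ((size_t)&(((type *)0)->member))
#endif

/* stdbool.h substitute */
typedef int bool;
#define false 0
#define true  1

#endif

With such a file in place, building with -DPB_SYSTEM_HEADER='"my_syshdr.h"' makes pb.h include it instead of the standard headers, as described in the README text above.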


@@ -1,14 +0,0 @@
CFLAGS=-ansi -Wall -Werror -I .. -g -O0
DEPS=../pb_decode.c ../pb_decode.h ../pb_encode.c ../pb_encode.h ../pb.h
all: server client
clean:
rm -f server client fileproto.pb.c fileproto.pb.h
%: %.c $(DEPS) fileproto.pb.h fileproto.pb.c
$(CC) $(CFLAGS) -o $@ $< ../pb_decode.c ../pb_encode.c fileproto.pb.c common.c
fileproto.pb.c fileproto.pb.h: fileproto.proto ../generator/nanopb_generator.py
protoc -I. -I../generator -I/usr/include -ofileproto.pb $<
python ../generator/nanopb_generator.py fileproto.pb


@@ -1,22 +0,0 @@
CFLAGS=-Wall -Werror -I .. -g -O0
DEPS=double_conversion.c ../pb_decode.c ../pb_decode.h ../pb_encode.c ../pb_encode.h ../pb.h
all: run_tests
clean:
rm -f test_conversions encode_double decode_double doubleproto.pb.c doubleproto.pb.h
test_conversions: test_conversions.c double_conversion.c
$(CC) $(CFLAGS) -o $@ $^
%: %.c $(DEPS) doubleproto.pb.h doubleproto.pb.c
$(CC) $(CFLAGS) -o $@ $< double_conversion.c ../pb_decode.c ../pb_encode.c doubleproto.pb.c
doubleproto.pb.c doubleproto.pb.h: doubleproto.proto ../generator/nanopb_generator.py
protoc -I. -I../generator -I/usr/include -odoubleproto.pb $<
python ../generator/nanopb_generator.py doubleproto.pb
run_tests: test_conversions encode_double decode_double
./test_conversions
./encode_double | ./decode_double


@@ -1,17 +0,0 @@
CFLAGS=-ansi -Wall -Werror -I .. -g -O0
DEPS=../pb_decode.c ../pb_decode.h ../pb_encode.c ../pb_encode.h ../pb.h
all: encode decode
./encode 1 | ./decode
./encode 2 | ./decode
./encode 3 | ./decode
clean:
rm -f encode unionproto.pb.h unionproto.pb.c
%: %.c $(DEPS) unionproto.pb.h unionproto.pb.c
$(CC) $(CFLAGS) -o $@ $< ../pb_decode.c ../pb_encode.c unionproto.pb.c
unionproto.pb.h unionproto.pb.c: unionproto.proto ../generator/nanopb_generator.py
protoc -I. -I../generator -I/usr/include -ounionproto.pb $<
python ../generator/nanopb_generator.py unionproto.pb


@@ -0,0 +1,19 @@
CFLAGS = -ansi -Wall -Werror -g -O0
# Path to the nanopb root folder
NANOPB_DIR = ../..
DEPS = $(NANOPB_DIR)/pb_decode.c $(NANOPB_DIR)/pb_decode.h \
$(NANOPB_DIR)/pb_encode.c $(NANOPB_DIR)/pb_encode.h $(NANOPB_DIR)/pb.h
CFLAGS += -I$(NANOPB_DIR)
all: server client
clean:
rm -f server client fileproto.pb.c fileproto.pb.h
%: %.c $(DEPS) fileproto.pb.h fileproto.pb.c
$(CC) $(CFLAGS) -o $@ $< $(NANOPB_DIR)/pb_decode.c $(NANOPB_DIR)/pb_encode.c fileproto.pb.c common.c
fileproto.pb.c fileproto.pb.h: fileproto.proto $(NANOPB_DIR)/generator/nanopb_generator.py
protoc -ofileproto.pb $<
python $(NANOPB_DIR)/generator/nanopb_generator.py fileproto.pb


@@ -0,0 +1,60 @@
Nanopb example "network_server"
===============================
This example demonstrates the use of nanopb to communicate over network
connections. It consists of a server that sends file listings, and of
a client that requests the file list from the server.
Example usage
-------------
user@host:~/nanopb/examples/network_server$ make # Build the example
protoc -ofileproto.pb fileproto.proto
python ../../generator/nanopb_generator.py fileproto.pb
Writing to fileproto.pb.h and fileproto.pb.c
cc -ansi -Wall -Werror -I .. -g -O0 -I../.. -o server server.c
../../pb_decode.c ../../pb_encode.c fileproto.pb.c common.c
cc -ansi -Wall -Werror -I .. -g -O0 -I../.. -o client client.c
../../pb_decode.c ../../pb_encode.c fileproto.pb.c common.c
user@host:~/nanopb/examples/network_server$ ./server & # Start the server on background
[1] 24462
petteri@oddish:~/nanopb/examples/network_server$ ./client /bin # Request the server to list /bin
Got connection.
Listing directory: /bin
1327119 bzdiff
1327126 bzless
1327147 ps
1327178 ntfsmove
1327271 mv
1327187 mount
1327259 false
1327266 tempfile
1327285 zfgrep
1327165 gzexe
1327204 nc.openbsd
1327260 uname
Details of implementation
-------------------------
fileproto.proto contains the portable Google Protocol Buffers protocol definition.
It could be used as-is to implement a server or a client in any other language, for
example Python or Java.
fileproto.options contains the nanopb-specific options for the protocol file. This
sets the amount of space allocated for file names when decoding messages.
common.c/h contains functions that allow nanopb to read and write directly from
network socket. This way there is no need to allocate a separate buffer to store
the message.
server.c contains the code to open a listening socket, to respond to clients and
to list directory contents.
client.c contains the code to connect to a server, to send a request and to print
the response message.
The code is implemented using the POSIX socket api, but it should be easy enough
to port into any other socket api, such as lwip.
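The README above says that common.c/h let nanopb read and write the socket directly instead of buffering whole messages. A minimal sketch of how such an output stream can be set up with the nanopb 0.2.x callback API is shown below; the helper name pb_ostream_from_socket, the error handling and the message type in the usage comment are illustrative assumptions, not necessarily identical to the example's common.c.

/* Sketch: an output stream that writes encoded bytes straight to a socket. */
#include <sys/types.h>
#include <sys/socket.h>
#include <stdint.h>
#include <pb_encode.h>

static bool write_callback(pb_ostream_t *stream, const uint8_t *buf, size_t count)
{
    int fd = (int)(intptr_t)stream->state;   /* socket descriptor carried in the stream */
    return send(fd, buf, count, 0) == (ssize_t)count;
}

pb_ostream_t pb_ostream_from_socket(int fd)
{
    /* No size limit; remaining struct members start out zeroed. */
    pb_ostream_t stream = {&write_callback, (void*)(intptr_t)fd, SIZE_MAX, 0};
    return stream;
}

/* Usage (message type name hypothetical):
 *   pb_ostream_t output = pb_ostream_from_socket(connfd);
 *   pb_encode(&output, FileResponse_fields, &response); */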

examples/simple/Makefile (new file, 22 lines)

@@ -0,0 +1,22 @@
# Compiler flags to enable all warnings & debug info
CFLAGS = -Wall -Werror -g -O0
# Path to the nanopb root folder
NANOPB_DIR = ../..
CFLAGS += -I$(NANOPB_DIR)
# C source code files that are required
CSRC = simple.c # The main program
CSRC += simple.pb.c # The compiled protocol definition
CSRC += $(NANOPB_DIR)/pb_encode.c # The nanopb encoder
CSRC += $(NANOPB_DIR)/pb_decode.c # The nanopb decoder
# Build rule for the main program
simple: $(CSRC)
$(CC) $(CFLAGS) -osimple $(CSRC)
# Build rule for the protocol
simple.pb.c: simple.proto
protoc -osimple.pb simple.proto
python $(NANOPB_DIR)/generator/nanopb_generator.py simple.pb

examples/simple/README (new file, 30 lines)

@@ -0,0 +1,30 @@
Nanopb example "simple"
=======================
This example demonstrates the very basic use of nanopb. It encodes and
decodes a simple message.
The code uses four different API functions:
* pb_ostream_from_buffer() to declare the output buffer that is to be used
* pb_encode() to encode a message
* pb_istream_from_buffer() to declare the input buffer that is to be used
* pb_decode() to decode a message
Example usage
-------------
On Linux, simply type "make" to build the example. After that, you can
run it with the command: ./simple
On other platforms, you first have to compile the protocol definition using
the following two commands::
protoc -osimple.pb simple.proto
python nanopb_generator.py simple.pb
After that, add the following four files to your project and compile:
simple.c simple.pb.c pb_encode.c pb_decode.c

examples/simple/simple.c (new file, 68 lines)

@@ -0,0 +1,68 @@
#include <stdio.h>
#include <pb_encode.h>
#include <pb_decode.h>
#include "simple.pb.h"
int main()
{
/* This is the buffer where we will store our message. */
uint8_t buffer[128];
size_t message_length;
bool status;
/* Encode our message */
{
/* Allocate space on the stack to store the message data.
*
* Nanopb generates simple struct definitions for all the messages.
* - check out the contents of simple.pb.h! */
SimpleMessage message;
/* Create a stream that will write to our buffer. */
pb_ostream_t stream = pb_ostream_from_buffer(buffer, sizeof(buffer));
/* Fill in the lucky number */
message.lucky_number = 13;
/* Now we are ready to encode the message! */
status = pb_encode(&stream, SimpleMessage_fields, &message);
message_length = stream.bytes_written;
/* Then just check for any errors.. */
if (!status)
{
printf("Encoding failed: %s\n", PB_GET_ERROR(&stream));
return 1;
}
}
/* Now we could transmit the message over network, store it in a file or
* wrap it to a pigeon's leg.
*/
/* But because we are lazy, we will just decode it immediately. */
{
/* Allocate space for the decoded message. */
SimpleMessage message;
/* Create a stream that reads from the buffer. */
pb_istream_t stream = pb_istream_from_buffer(buffer, message_length);
/* Now we are ready to decode the message. */
status = pb_decode(&stream, SimpleMessage_fields, &message);
/* Check for errors... */
if (!status)
{
printf("Decoding failed: %s\n", PB_GET_ERROR(&stream));
return 1;
}
/* Print the data contained in the message. */
printf("Your lucky number was %d!\n", message.lucky_number);
}
return 0;
}


@@ -0,0 +1,7 @@
// A very simple protocol definition, consisting of only
// one message.
message SimpleMessage {
required int32 lucky_number = 1;
}
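For orientation, the struct that the generator derives from this message, and that simple.c above fills in, looks roughly like the following. This is a sketch only; the real simple.pb.h also contains size macros and the field descriptor contents.

/* Sketch of the interesting part of the generated simple.pb.h */
#include <pb.h>

typedef struct _SimpleMessage {
    int32_t lucky_number;
} SimpleMessage;

/* One entry per field plus the PB_LAST_FIELD terminator. */
extern const pb_field_t SimpleMessage_fields[2];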


@@ -0,0 +1,29 @@
CFLAGS = -Wall -Werror -g -O0
# Path to the nanopb root directory
NANOPB_DIR = ../..
DEPS = double_conversion.c $(NANOPB_DIR)/pb.h \
$(NANOPB_DIR)/pb_decode.c $(NANOPB_DIR)/pb_decode.h \
$(NANOPB_DIR)/pb_encode.c $(NANOPB_DIR)/pb_encode.h
CFLAGS += -I$(NANOPB_DIR)
all: run_tests
clean:
rm -f test_conversions encode_double decode_double doubleproto.pb.c doubleproto.pb.h
test_conversions: test_conversions.c double_conversion.c
$(CC) $(CFLAGS) -o $@ $^
%: %.c $(DEPS) doubleproto.pb.h doubleproto.pb.c
$(CC) $(CFLAGS) -o $@ $< double_conversion.c \
$(NANOPB_DIR)/pb_decode.c $(NANOPB_DIR)/pb_encode.c doubleproto.pb.c
doubleproto.pb.c doubleproto.pb.h: doubleproto.proto $(NANOPB_DIR)/generator/nanopb_generator.py
protoc -odoubleproto.pb $<
python $(NANOPB_DIR)/generator/nanopb_generator.py doubleproto.pb
run_tests: test_conversions encode_double decode_double
./test_conversions
./encode_double | ./decode_double


@@ -1,3 +1,6 @@
+Nanopb example "using_double_on_avr"
+====================================
+
 Some processors/compilers, such as AVR-GCC, do not support the double
 datatype. Instead, they have sizeof(double) == 4. Because protocol
 binary format uses the double encoding directly, this causes trouble
@@ -9,7 +12,7 @@ platforms. The file double_conversion.c provides functions that
 convert these values to/from floats, without relying on compiler
 support.

-To use this method, you need to make two modifications to your code:
+To use this method, you need to make some modifications to your code:

 1) Change all 'double' fields into 'fixed64' in the .proto.
@@ -17,6 +20,6 @@ To use this method, you need to make two modifications to your code:
 3) Whenever reading a 'double' field, use double_to_float().

-The conversion routines should be as accurate as the float datatype can
+The conversion routines are as accurate as the float datatype can
 be. Furthermore, they should handle all special values (NaN, inf, denormalized
 numbers) correctly. There are testcases in test_conversions.c.
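To make the numbered steps concrete, here is a hedged usage sketch in C. The message and field names are invented, and the helper signatures (uint64_t float_to_double(float) and float double_to_float(uint64_t)) are assumed from the description above rather than copied from double_conversion.h; check that header for the real ones.

#include <stdint.h>
#include "double_conversion.h"   /* assumed header for the helpers */

typedef struct {
    uint64_t value;   /* declared as 'fixed64 value = 1;' in the .proto */
} AvrDoubleMessage;

void fill_and_read(void)
{
    AvrDoubleMessage msg;
    float reading = 1.5f;
    float back;

    msg.value = float_to_double(reading);     /* writing the 'double' field */
    back = double_to_float(msg.value);        /* reading it back (step 3 above) */
    (void)back;
}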


@@ -0,0 +1,22 @@
CFLAGS = -ansi -Wall -Werror -g -O0
# Path to the nanopb root folder
NANOPB_DIR = ../..
DEPS = $(NANOPB_DIR)/pb_decode.c $(NANOPB_DIR)/pb_decode.h \
$(NANOPB_DIR)/pb_encode.c $(NANOPB_DIR)/pb_encode.h $(NANOPB_DIR)/pb.h
CFLAGS += -I$(NANOPB_DIR)
all: encode decode
./encode 1 | ./decode
./encode 2 | ./decode
./encode 3 | ./decode
clean:
rm -f encode unionproto.pb.h unionproto.pb.c
%: %.c $(DEPS) unionproto.pb.h unionproto.pb.c
$(CC) $(CFLAGS) -o $@ $< $(NANOPB_DIR)/pb_decode.c $(NANOPB_DIR)/pb_encode.c unionproto.pb.c
unionproto.pb.h unionproto.pb.c: unionproto.proto $(NANOPB_DIR)/generator/nanopb_generator.py
protoc -ounionproto.pb $<
python $(NANOPB_DIR)/generator/nanopb_generator.py unionproto.pb


@@ -0,0 +1,52 @@
Nanopb example "using_union_messages"
=====================================
Union messages is a common technique in Google Protocol Buffers used to
represent a group of messages, only one of which is passed at a time.
It is described in Google's documentation:
https://developers.google.com/protocol-buffers/docs/techniques#union
This directory contains an example on how to encode and decode union messages
with minimal memory usage. Usually, nanopb would allocate space to store
all of the possible messages at the same time, even though at most one of
them will be used at a time.
By using some of the lower level nanopb APIs, we can manually generate the
top level message, so that we only need to allocate the one submessage that
we actually want. Similarly when decoding, we can manually read the tag of
the top level message, and only then allocate the memory for the submessage
after we already know its type.
Example usage
-------------
Type `make` to run the example. It will build it and run commands like
following:
./encode 1 | ./decode
Got MsgType1: 42
./encode 2 | ./decode
Got MsgType2: true
./encode 3 | ./decode
Got MsgType3: 3 1415
This simply demonstrates that the "decode" program has correctly identified
the type of the received message, and managed to decode it.
Details of implementation
-------------------------
unionproto.proto contains the protocol used in the example. It consists of
three messages: MsgType1, MsgType2 and MsgType3, which are collected together
into UnionMessage.
encode.c takes one command line argument, which should be a number 1-3. It
then fills in and encodes the corresponding message, and writes it to stdout.
decode.c reads a UnionMessage from stdin. Then it calls the function
decode_unionmessage_type() to determine the type of the message. After that,
the corresponding message is decoded and the contents of it printed to the
screen.
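As a sketch of the tag-peeking idea described above (not necessarily the literal decode.c), decode_unionmessage_type() can be written with the nanopb 0.2.x decode API roughly like this; pb_decode_tag() and pb_skip_field() are assumed to be the public pb_decode.h entry points.

#include <pb_decode.h>
#include "unionproto.pb.h"

/* Returns the field descriptor of the submessage present in the incoming
 * UnionMessage, or NULL if none was recognized. */
const pb_field_t *decode_unionmessage_type(pb_istream_t *stream)
{
    pb_wire_type_t wire_type;
    uint32_t tag;
    bool eof;

    while (pb_decode_tag(stream, &wire_type, &tag, &eof))
    {
        if (wire_type == PB_WT_STRING && !eof)
        {
            /* Compare the tag against the known fields of UnionMessage. */
            const pb_field_t *field;
            for (field = UnionMessage_fields; field->tag != 0; field++)
            {
                if (field->tag == tag)
                    return field;
            }
        }
        /* Not a submessage we know: skip it and keep looking. */
        pb_skip_field(stream, wire_type);
    }
    return NULL;
}

The caller can then allocate only the submessage struct that matches the returned descriptor, which is exactly the memory saving the README describes.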


@@ -1,2 +1,5 @@
 nanopb_pb2.py: nanopb.proto
 	protoc --python_out=. -I /usr/include -I . nanopb.proto
+
+plugin_pb2.py: plugin.proto
+	protoc --python_out=. -I /usr/include -I . plugin.proto

generator/nanopb_generator.py (124 lines changed; mode changed from normal file to executable)

@@ -1,5 +1,7 @@
+#!/usr/bin/python
 '''Generate header file for nanopb from a ProtoBuf FileDescriptorSet.'''
-nanopb_version = "nanopb-0.2.3-dev"
+nanopb_version = "nanopb-0.2.3"

 try:
     import google.protobuf.descriptor_pb2 as descriptor
@@ -244,10 +246,11 @@ class Field:
         '''Return the pb_field_t initializer to use in the constant array.
         prev_field_name is the name of the previous field or None.
         '''
-        result = '    PB_FIELD(%3d, ' % self.tag
+        result = '    PB_FIELD2(%3d, ' % self.tag
         result += '%-8s, ' % self.pbtype
         result += '%s, ' % self.rules
         result += '%s, ' % self.allocation
+        result += '%s, ' % ("FIRST" if not prev_field_name else "OTHER")
         result += '%s, ' % self.struct_name
         result += '%s, ' % self.name
         result += '%s, ' % (prev_field_name or self.name)
@@ -602,7 +605,7 @@ def generate_header(dependencies, headername, enums, messages, extensions, optio
     # End of header
     yield '\n#endif\n'

-def generate_source(headername, enums, messages, extensions):
+def generate_source(headername, enums, messages, extensions, options):
     '''Generate content for a source file.'''
     yield '/* Automatically generated nanopb constant definitions */\n'
@@ -780,27 +783,28 @@ optparser.add_option("-v", "--verbose", dest="verbose", action="store_true", def
 optparser.add_option("-s", dest="settings", metavar="OPTION:VALUE", action="append", default=[],
     help="Set generator option (max_size, max_count etc.).")

-def process(filenames, options):
-    '''Process the files given on the command line.'''
-    if not filenames:
-        optparser.print_help()
-        return False
-    if options.quiet:
-        options.verbose = False
-    Globals.verbose_options = options.verbose
+def process_file(filename, fdesc, options):
+    '''Process a single file.
+    filename: The full path to the .proto or .pb source file, as string.
+    fdesc: The loaded FileDescriptorSet, or None to read from the input file.
+    options: Command line options as they come from OptionsParser.
+
+    Returns a dict:
+        {'headername': Name of header file,
+         'headerdata': Data for the .h header file,
+         'sourcename': Name of the source code file,
+         'sourcedata': Data for the .c source code file
+        }
+    '''
     toplevel_options = nanopb_pb2.NanoPBOptions()
     for s in options.settings:
         text_format.Merge(s, toplevel_options)

-    for filename in filenames:
+    if not fdesc:
         data = open(filename, 'rb').read()
-        fdesc = descriptor.FileDescriptorSet.FromString(data)
+        fdesc = descriptor.FileDescriptorSet.FromString(data).file[0]

-    # Check if any separate options are specified
+    # Check if there is a separate .options file
     try:
         optfilename = options.options_file % os.path.splitext(filename)[0]
     except TypeError:
@@ -816,37 +820,89 @@ def process(filenames, options):
     Globals.separate_options = []

     # Parse the file
-    file_options = get_nanopb_suboptions(fdesc.file[0], toplevel_options, Names([filename]))
-    enums, messages, extensions = parse_file(fdesc.file[0], file_options)
+    file_options = get_nanopb_suboptions(fdesc, toplevel_options, Names([filename]))
+    enums, messages, extensions = parse_file(fdesc, file_options)

+    # Decide the file names
     noext = os.path.splitext(filename)[0]
     headername = noext + '.' + options.extension + '.h'
     sourcename = noext + '.' + options.extension + '.c'
     headerbasename = os.path.basename(headername)

-    if not options.quiet:
-        print "Writing to " + headername + " and " + sourcename
-
     # List of .proto files that should not be included in the C header file
     # even if they are mentioned in the source .proto.
     excludes = ['nanopb.proto', 'google/protobuf/descriptor.proto'] + options.exclude
-    dependencies = [d for d in fdesc.file[0].dependency if d not in excludes]
+    dependencies = [d for d in fdesc.dependency if d not in excludes]

-    header = open(headername, 'w')
-    for part in generate_header(dependencies, headerbasename, enums,
-                                messages, extensions, options):
-        header.write(part)
+    headerdata = ''.join(generate_header(dependencies, headerbasename, enums,
+                                         messages, extensions, options))

-    source = open(sourcename, 'w')
-    for part in generate_source(headerbasename, enums, messages, extensions):
-        source.write(part)
+    sourcedata = ''.join(generate_source(headerbasename, enums,
+                                         messages, extensions, options))

-    return True
+    return {'headername': headername, 'headerdata': headerdata,
+            'sourcename': sourcename, 'sourcedata': sourcedata}

-if __name__ == '__main__':
+def main_cli():
+    '''Main function when invoked directly from the command line.'''
     options, filenames = optparser.parse_args()
-    status = process(filenames, options)
-    if not status:
+    if not filenames:
+        optparser.print_help()
         sys.exit(1)
+
+    if options.quiet:
+        options.verbose = False
+    Globals.verbose_options = options.verbose
+
+    for filename in filenames:
+        results = process_file(filename, None, options)
+        if not options.quiet:
+            print "Writing to " + results['headername'] + " and " + results['sourcename']
+        open(results['headername'], 'w').write(results['headerdata'])
+        open(results['sourcename'], 'w').write(results['sourcedata'])

+def main_plugin():
+    '''Main function when invoked as a protoc plugin.'''
+    import plugin_pb2
+    data = sys.stdin.read()
+    request = plugin_pb2.CodeGeneratorRequest.FromString(data)
+
+    import shlex
+    args = shlex.split(request.parameter)
+    options, dummy = optparser.parse_args(args)
+
+    # We can't go printing stuff to stdout
+    Globals.verbose_options = False
+    options.verbose = False
+    options.quiet = True
+
+    response = plugin_pb2.CodeGeneratorResponse()
+
+    for filename in request.file_to_generate:
+        for fdesc in request.proto_file:
+            if fdesc.name == filename:
+                results = process_file(filename, fdesc, options)
+
+                f = response.file.add()
+                f.name = results['headername']
+                f.content = results['headerdata']
+
+                f = response.file.add()
+                f.name = results['sourcename']
+                f.content = results['sourcedata']
+
+    sys.stdout.write(response.SerializeToString())
+
+if __name__ == '__main__':
+    # Check if we are running as a plugin under protoc
+    if 'protoc-gen-' in sys.argv[0]:
+        main_plugin()
+    else:
+        main_cli()

generator/plugin.proto (new file, 145 lines)

@@ -0,0 +1,145 @@
// Protocol Buffers - Google's data interchange format
// Copyright 2008 Google Inc. All rights reserved.
// http://code.google.com/p/protobuf/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
// Author: kenton@google.com (Kenton Varda)
//
// WARNING: The plugin interface is currently EXPERIMENTAL and is subject to
// change.
//
// protoc (aka the Protocol Compiler) can be extended via plugins. A plugin is
// just a program that reads a CodeGeneratorRequest from stdin and writes a
// CodeGeneratorResponse to stdout.
//
// Plugins written using C++ can use google/protobuf/compiler/plugin.h instead
// of dealing with the raw protocol defined here.
//
// A plugin executable needs only to be placed somewhere in the path. The
// plugin should be named "protoc-gen-$NAME", and will then be used when the
// flag "--${NAME}_out" is passed to protoc.
package google.protobuf.compiler;
import "google/protobuf/descriptor.proto";
// An encoded CodeGeneratorRequest is written to the plugin's stdin.
message CodeGeneratorRequest {
// The .proto files that were explicitly listed on the command-line. The
// code generator should generate code only for these files. Each file's
// descriptor will be included in proto_file, below.
repeated string file_to_generate = 1;
// The generator parameter passed on the command-line.
optional string parameter = 2;
// FileDescriptorProtos for all files in files_to_generate and everything
// they import. The files will appear in topological order, so each file
// appears before any file that imports it.
//
// protoc guarantees that all proto_files will be written after
// the fields above, even though this is not technically guaranteed by the
// protobuf wire format. This theoretically could allow a plugin to stream
// in the FileDescriptorProtos and handle them one by one rather than read
// the entire set into memory at once. However, as of this writing, this
// is not similarly optimized on protoc's end -- it will store all fields in
// memory at once before sending them to the plugin.
repeated FileDescriptorProto proto_file = 15;
}
// The plugin writes an encoded CodeGeneratorResponse to stdout.
message CodeGeneratorResponse {
// Error message. If non-empty, code generation failed. The plugin process
// should exit with status code zero even if it reports an error in this way.
//
// This should be used to indicate errors in .proto files which prevent the
// code generator from generating correct code. Errors which indicate a
// problem in protoc itself -- such as the input CodeGeneratorRequest being
// unparseable -- should be reported by writing a message to stderr and
// exiting with a non-zero status code.
optional string error = 1;
// Represents a single generated file.
message File {
// The file name, relative to the output directory. The name must not
// contain "." or ".." components and must be relative, not be absolute (so,
// the file cannot lie outside the output directory). "/" must be used as
// the path separator, not "\".
//
// If the name is omitted, the content will be appended to the previous
// file. This allows the generator to break large files into small chunks,
// and allows the generated text to be streamed back to protoc so that large
// files need not reside completely in memory at one time. Note that as of
// this writing protoc does not optimize for this -- it will read the entire
// CodeGeneratorResponse before writing files to disk.
optional string name = 1;
// If non-empty, indicates that the named file should already exist, and the
// content here is to be inserted into that file at a defined insertion
// point. This feature allows a code generator to extend the output
// produced by another code generator. The original generator may provide
// insertion points by placing special annotations in the file that look
// like:
// @@protoc_insertion_point(NAME)
// The annotation can have arbitrary text before and after it on the line,
// which allows it to be placed in a comment. NAME should be replaced with
// an identifier naming the point -- this is what other generators will use
// as the insertion_point. Code inserted at this point will be placed
// immediately above the line containing the insertion point (thus multiple
// insertions to the same point will come out in the order they were added).
// The double-@ is intended to make it unlikely that the generated code
// could contain things that look like insertion points by accident.
//
// For example, the C++ code generator places the following line in the
// .pb.h files that it generates:
// // @@protoc_insertion_point(namespace_scope)
// This line appears within the scope of the file's package namespace, but
// outside of any particular class. Another plugin can then specify the
// insertion_point "namespace_scope" to generate additional classes or
// other declarations that should be placed in this scope.
//
// Note that if the line containing the insertion point begins with
// whitespace, the same whitespace will be added to every line of the
// inserted text. This is useful for languages like Python, where
// indentation matters. In these languages, the insertion point comment
// should be indented the same amount as any inserted code will need to be
// in order to work correctly in that context.
//
// The code generator that generates the initial file and the one which
// inserts into it must both run as part of a single invocation of protoc.
// Code generators are executed in the order in which they appear on the
// command line.
//
// If |insertion_point| is present, |name| must also be present.
optional string insertion_point = 2;
// The file contents.
optional string content = 15;
}
repeated File file = 15;
}

generator/plugin_pb2.py (new file, 161 lines)

@@ -0,0 +1,161 @@
# Generated by the protocol buffer compiler. DO NOT EDIT!
from google.protobuf import descriptor
from google.protobuf import message
from google.protobuf import reflection
from google.protobuf import descriptor_pb2
# @@protoc_insertion_point(imports)
import google.protobuf.descriptor_pb2
DESCRIPTOR = descriptor.FileDescriptor(
name='plugin.proto',
package='google.protobuf.compiler',
serialized_pb='\n\x0cplugin.proto\x12\x18google.protobuf.compiler\x1a google/protobuf/descriptor.proto\"}\n\x14\x43odeGeneratorRequest\x12\x18\n\x10\x66ile_to_generate\x18\x01 \x03(\t\x12\x11\n\tparameter\x18\x02 \x01(\t\x12\x38\n\nproto_file\x18\x0f \x03(\x0b\x32$.google.protobuf.FileDescriptorProto\"\xaa\x01\n\x15\x43odeGeneratorResponse\x12\r\n\x05\x65rror\x18\x01 \x01(\t\x12\x42\n\x04\x66ile\x18\x0f \x03(\x0b\x32\x34.google.protobuf.compiler.CodeGeneratorResponse.File\x1a>\n\x04\x46ile\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x17\n\x0finsertion_point\x18\x02 \x01(\t\x12\x0f\n\x07\x63ontent\x18\x0f \x01(\t')
_CODEGENERATORREQUEST = descriptor.Descriptor(
name='CodeGeneratorRequest',
full_name='google.protobuf.compiler.CodeGeneratorRequest',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
descriptor.FieldDescriptor(
name='file_to_generate', full_name='google.protobuf.compiler.CodeGeneratorRequest.file_to_generate', index=0,
number=1, type=9, cpp_type=9, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='parameter', full_name='google.protobuf.compiler.CodeGeneratorRequest.parameter', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=unicode("", "utf-8"),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='proto_file', full_name='google.protobuf.compiler.CodeGeneratorRequest.proto_file', index=2,
number=15, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
extension_ranges=[],
serialized_start=76,
serialized_end=201,
)
_CODEGENERATORRESPONSE_FILE = descriptor.Descriptor(
name='File',
full_name='google.protobuf.compiler.CodeGeneratorResponse.File',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
descriptor.FieldDescriptor(
name='name', full_name='google.protobuf.compiler.CodeGeneratorResponse.File.name', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=unicode("", "utf-8"),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='insertion_point', full_name='google.protobuf.compiler.CodeGeneratorResponse.File.insertion_point', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=unicode("", "utf-8"),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='content', full_name='google.protobuf.compiler.CodeGeneratorResponse.File.content', index=2,
number=15, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=unicode("", "utf-8"),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
extension_ranges=[],
serialized_start=312,
serialized_end=374,
)
_CODEGENERATORRESPONSE = descriptor.Descriptor(
name='CodeGeneratorResponse',
full_name='google.protobuf.compiler.CodeGeneratorResponse',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
descriptor.FieldDescriptor(
name='error', full_name='google.protobuf.compiler.CodeGeneratorResponse.error', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=unicode("", "utf-8"),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='file', full_name='google.protobuf.compiler.CodeGeneratorResponse.file', index=1,
number=15, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[_CODEGENERATORRESPONSE_FILE, ],
enum_types=[
],
options=None,
is_extendable=False,
extension_ranges=[],
serialized_start=204,
serialized_end=374,
)
_CODEGENERATORREQUEST.fields_by_name['proto_file'].message_type = google.protobuf.descriptor_pb2._FILEDESCRIPTORPROTO
_CODEGENERATORRESPONSE_FILE.containing_type = _CODEGENERATORRESPONSE;
_CODEGENERATORRESPONSE.fields_by_name['file'].message_type = _CODEGENERATORRESPONSE_FILE
DESCRIPTOR.message_types_by_name['CodeGeneratorRequest'] = _CODEGENERATORREQUEST
DESCRIPTOR.message_types_by_name['CodeGeneratorResponse'] = _CODEGENERATORRESPONSE
class CodeGeneratorRequest(message.Message):
__metaclass__ = reflection.GeneratedProtocolMessageType
DESCRIPTOR = _CODEGENERATORREQUEST
# @@protoc_insertion_point(class_scope:google.protobuf.compiler.CodeGeneratorRequest)
class CodeGeneratorResponse(message.Message):
__metaclass__ = reflection.GeneratedProtocolMessageType
class File(message.Message):
__metaclass__ = reflection.GeneratedProtocolMessageType
DESCRIPTOR = _CODEGENERATORRESPONSE_FILE
# @@protoc_insertion_point(class_scope:google.protobuf.compiler.CodeGeneratorResponse.File)
DESCRIPTOR = _CODEGENERATORRESPONSE
# @@protoc_insertion_point(class_scope:google.protobuf.compiler.CodeGeneratorResponse)
# @@protoc_insertion_point(module_scope)

pb.h (62 lines changed)

@@ -43,7 +43,7 @@
 /* Version of the nanopb library. Just in case you want to check it in
  * your own program. */
-#define NANOPB_VERSION nanopb-0.2.3-dev
+#define NANOPB_VERSION nanopb-0.2.3

 /* Include all the system headers needed by nanopb. You will need the
  * definitions of the following:
@@ -331,58 +331,69 @@ struct _pb_extension_t {
 };

 /* These macros are used to declare pb_field_t's in the constant array. */
+/* Size of a structure member, in bytes. */
 #define pb_membersize(st, m) (sizeof ((st*)0)->m)
+/* Number of entries in an array. */
 #define pb_arraysize(st, m) (pb_membersize(st, m) / pb_membersize(st, m[0]))
+/* Delta from start of one member to the start of another member. */
 #define pb_delta(st, m1, m2) ((int)offsetof(st, m1) - (int)offsetof(st, m2))
-#define pb_delta_end(st, m1, m2) (int)(offsetof(st, m1) == offsetof(st, m2) \
-    ? offsetof(st, m1) \
-    : offsetof(st, m1) - offsetof(st, m2) - pb_membersize(st, m2))
+/* Marks the end of the field list */
 #define PB_LAST_FIELD {0,(pb_type_t) 0,0,0,0,0,0}

+/* Macros for filling in the data_offset field */
+/* data_offset for first field in a message */
+#define PB_DATAOFFSET_FIRST(st, m1, m2) (offsetof(st, m1))
+/* data_offset for subsequent fields */
+#define PB_DATAOFFSET_OTHER(st, m1, m2) (offsetof(st, m1) - offsetof(st, m2) - pb_membersize(st, m2))
+/* Choose first/other based on m1 == m2 (deprecated, remains for backwards compatibility) */
+#define PB_DATAOFFSET_CHOOSE(st, m1, m2) (int)(offsetof(st, m1) == offsetof(st, m2) \
+    ? PB_DATAOFFSET_FIRST(st, m1, m2) \
+    : PB_DATAOFFSET_OTHER(st, m1, m2))

 /* Required fields are the simplest. They just have delta (padding) from
  * previous field end, and the size of the field. Pointer is used for
  * submessages and default values.
  */
-#define PB_REQUIRED_STATIC(tag, st, m, pm, ltype, ptr) \
+#define PB_REQUIRED_STATIC(tag, st, m, fd, ltype, ptr) \
     {tag, PB_ATYPE_STATIC | PB_HTYPE_REQUIRED | ltype, \
-    pb_delta_end(st, m, pm), 0, pb_membersize(st, m), 0, ptr}
+    fd, 0, pb_membersize(st, m), 0, ptr}

 /* Optional fields add the delta to the has_ variable. */
-#define PB_OPTIONAL_STATIC(tag, st, m, pm, ltype, ptr) \
+#define PB_OPTIONAL_STATIC(tag, st, m, fd, ltype, ptr) \
     {tag, PB_ATYPE_STATIC | PB_HTYPE_OPTIONAL | ltype, \
-    pb_delta_end(st, m, pm), \
+    fd, \
     pb_delta(st, has_ ## m, m), \
     pb_membersize(st, m), 0, ptr}

 /* Repeated fields have a _count field and also the maximum number of entries. */
-#define PB_REPEATED_STATIC(tag, st, m, pm, ltype, ptr) \
+#define PB_REPEATED_STATIC(tag, st, m, fd, ltype, ptr) \
     {tag, PB_ATYPE_STATIC | PB_HTYPE_REPEATED | ltype, \
-    pb_delta_end(st, m, pm), \
+    fd, \
     pb_delta(st, m ## _count, m), \
     pb_membersize(st, m[0]), \
     pb_arraysize(st, m), ptr}

 /* Callbacks are much like required fields except with special datatype. */
-#define PB_REQUIRED_CALLBACK(tag, st, m, pm, ltype, ptr) \
+#define PB_REQUIRED_CALLBACK(tag, st, m, fd, ltype, ptr) \
     {tag, PB_ATYPE_CALLBACK | PB_HTYPE_REQUIRED | ltype, \
-    pb_delta_end(st, m, pm), 0, pb_membersize(st, m), 0, ptr}
+    fd, 0, pb_membersize(st, m), 0, ptr}

-#define PB_OPTIONAL_CALLBACK(tag, st, m, pm, ltype, ptr) \
+#define PB_OPTIONAL_CALLBACK(tag, st, m, fd, ltype, ptr) \
     {tag, PB_ATYPE_CALLBACK | PB_HTYPE_OPTIONAL | ltype, \
-    pb_delta_end(st, m, pm), 0, pb_membersize(st, m), 0, ptr}
+    fd, 0, pb_membersize(st, m), 0, ptr}

-#define PB_REPEATED_CALLBACK(tag, st, m, pm, ltype, ptr) \
+#define PB_REPEATED_CALLBACK(tag, st, m, fd, ltype, ptr) \
     {tag, PB_ATYPE_CALLBACK | PB_HTYPE_REPEATED | ltype, \
-    pb_delta_end(st, m, pm), 0, pb_membersize(st, m), 0, ptr}
+    fd, 0, pb_membersize(st, m), 0, ptr}

 /* Optional extensions don't have the has_ field, as that would be redundant. */
-#define PB_OPTEXT_STATIC(tag, st, m, pm, ltype, ptr) \
+#define PB_OPTEXT_STATIC(tag, st, m, fd, ltype, ptr) \
     {tag, PB_ATYPE_STATIC | PB_HTYPE_OPTIONAL | ltype, \
     0, \
     0, \
     pb_membersize(st, m), 0, ptr}

-#define PB_OPTEXT_CALLBACK(tag, st, m, pm, ltype, ptr) \
+#define PB_OPTEXT_CALLBACK(tag, st, m, fd, ltype, ptr) \
     {tag, PB_ATYPE_CALLBACK | PB_HTYPE_OPTIONAL | ltype, \
     0, 0, pb_membersize(st, m), 0, ptr}
@@ -421,7 +432,20 @@
  */
 #define PB_FIELD(tag, type, rules, allocation, message, field, prevfield, ptr) \
-    PB_ ## rules ## _ ## allocation(tag, message, field, prevfield, \
+    PB_ ## rules ## _ ## allocation(tag, message, field, \
+    PB_DATAOFFSET_CHOOSE(message, field, prevfield), \
+    PB_LTYPE_MAP_ ## type, ptr)
+
+/* This is a new version of the macro used by nanopb generator from
+ * version 0.2.3 onwards. It avoids the use of a ternary expression in
+ * the initialization, which confused some compilers.
+ *
+ * - Placement: FIRST or OTHER, depending on if this is the first field in structure.
+ *
+ */
+#define PB_FIELD2(tag, type, rules, allocation, placement, message, field, prevfield, ptr) \
+    PB_ ## rules ## _ ## allocation(tag, message, field, \
+    PB_DATAOFFSET_ ## placement(message, field, prevfield), \
     PB_LTYPE_MAP_ ## type, ptr)
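To illustrate what this change means for generated code, a field list built with the new macro might look like the sketch below. The message and field names are made up; only the PB_FIELD2 argument order (with the added FIRST/OTHER placement argument) comes from the macros above and the generator diff earlier on this page.

#include <pb.h>

typedef struct _DemoMessage {
    int32_t lucky_number;
    char    name[16];
} DemoMessage;

const pb_field_t DemoMessage_fields[3] = {
    /* Older generators emitted PB_FIELD(...) and relied on the ternary in
     * PB_DATAOFFSET_CHOOSE; PB_FIELD2 states FIRST/OTHER explicitly. */
    PB_FIELD2(  1, INT32  , REQUIRED, STATIC, FIRST, DemoMessage, lucky_number, lucky_number, 0),
    PB_FIELD2(  2, STRING , REQUIRED, STATIC, OTHER, DemoMessage, name, lucky_number, 0),
    PB_LAST_FIELD
};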


@@ -73,11 +73,14 @@ if 'gcc' in env['CC']:
     env.Append(CFLAGS = '-ansi -pedantic -g -O0 -Wall -Werror --coverage -fstack-protector-all')
     env.Append(LINKFLAGS = '--coverage')

+    # We currently need uint64_t anyway, even though ANSI C90 otherwise..
+    env.Append(CFLAGS = '-Wno-long-long')
+
     # More strict checks on the nanopb core
     env.Append(CORECFLAGS = '-Wextra -Wcast-qual -Wlogical-op -Wconversion')
 elif 'clang' in env['CC']:
     # CLang
-    env.Append(CFLAGS = '-ansi -pedantic -g -O0 -Wall -Werror')
+    env.Append(CFLAGS = '-ansi -g -O0 -Wall -Werror')
     env.Append(CORECFLAGS = ' -Wextra -Wcast-qual -Wconversion')
 elif 'cl' in env['CC']:
     # Microsoft Visual C++


@@ -6,5 +6,9 @@ dec = env.GetBuildPath('#basic_buffer/${PROGPREFIX}decode_buffer${PROGSUFFIX}')
 env.RunTest('person_with_extra_field.output', [dec, "person_with_extra_field.pb"])
 env.Compare(["person_with_extra_field.output", "person_with_extra_field.expected"])

+dec = env.GetBuildPath('#basic_stream/${PROGPREFIX}decode_stream${PROGSUFFIX}')
+env.RunTest('person_with_extra_field_stream.output', [dec, "person_with_extra_field.pb"])
+env.Compare(["person_with_extra_field_stream.output", "person_with_extra_field.expected"])
+
 dec2 = env.GetBuildPath('#alltypes/${PROGPREFIX}decode_alltypes${PROGSUFFIX}')
 env.RunTest('alltypes_with_extra_fields.output', [dec2, 'alltypes_with_extra_fields.pb'])


@@ -37,7 +37,8 @@ def add_nanopb_builders(env):
         src_suffix = '.pb',
         emitter = nanopb_targets)
     env.Append(BUILDERS = {'Nanopb': nanopb_file_builder})
-    env.SetDefault(NANOPB_GENERATOR = 'python ' + env.GetBuildPath("#../generator/nanopb_generator.py"))
+    gen_path = env['ESCAPE'](env.GetBuildPath("#../generator/nanopb_generator.py"))
+    env.SetDefault(NANOPB_GENERATOR = 'python ' + gen_path)
     env.SetDefault(NANOPB_FLAGS = '-q')

     # Combined method to run both protoc and nanopb generator
@@ -71,8 +72,10 @@ def add_nanopb_builders(env):
     # Build command that decodes a message using protoc
     def decode_actions(source, target, env, for_signature):
-        dirs = ' '.join(['-I' + env.GetBuildPath(d) for d in env['PROTOCPATH']])
-        return '$PROTOC $PROTOCFLAGS %s --decode=%s %s <%s >%s' % (dirs, env['MESSAGE'], source[1], source[0], target[0])
+        esc = env['ESCAPE']
+        dirs = ' '.join(['-I' + esc(env.GetBuildPath(d)) for d in env['PROTOCPATH']])
+        return '$PROTOC $PROTOCFLAGS %s --decode=%s %s <%s >%s' % (
+            dirs, env['MESSAGE'], esc(str(source[1])), esc(str(source[0])), esc(str(target[0])))

     decode_builder = Builder(generator = decode_actions,
                              suffix = '.decoded')