BluAge Documentation

Batch Components

This page explains how to set up the components of your Batch modeling, i.e. your Reader, Processor and Writer declarations.

Introduction

Readers, Processors and Writers are the core pieces of your Batch Modeling, each representing a single task of a batch process. In your UML model, Processors are represented by Call Behavior Actions, while Readers and Writers are represented by blank Call Operation Actions (no linked operation).

Here is an example of a Step, defining :

  • A Reader in red, which is a blank Call Operation Action with a specific stereotype depending on the reader type;
  • A PreProcessor, Processor and PostProcessor block in green, which are process operations defined in the Service layer of your application and whose activity diagrams are referenced here by means of Call Behavior Actions;
  • A Writer in blue, which is a blank Call Operation Action with a specific stereotype depending on the writer type.

Component_Step

Readers Modeling

A Reader is the core piece of your process which allows you to retrieve data to be manipulated from multiple source types explained below.

JDBC Reader / SQL Reader

A JDBC Reader allows you to retrieve data from a JDBC Datasource by means of an SQL request. You can implement such a JDBC Reader by creating an empty Call Operation Action in the flow of your Step and by applying the JDBCReader stereotype to it.

The following tagged values can be configured for your JDBCReader :

Tagged value (Type) Description
driverSupportsAbsolute (boolean) Indicates whether the JDBC driver supports setting the absolute row on a ResultSet. It is recommended to set this to true for JDBC drivers that support ResultSet.absolute().
Default is false.
fetchSize (int) Indicates the number of rows that should be fetched when more rows are needed for the ResultSet object.
ignoreWarnings (boolean) Whether to ignore SQL warnings (simply logging them) or to throw errors.
Default is true.
maxRows (int) Specifies the maximum number of rows that any ResultSet object can contain.
orderBy (String) Deprecated.
query (Class) Indicates the Class used to map the result of the SQL request.
queryTimeout (int) Sets the number of seconds the driver will wait for a Statement object to execute.
restartSql (String) Deprecated.
saveState (boolean) Whether or not to save internal data for ExecutionContext.
Default is true.
setUseSharedExtendedConnection (boolean) Whether or not to share the connection used for the cursor with all other processing.
Default is false.
sql (String) The SQL select request to execute.
verifyCursorPosition (boolean) Specifies whether or not to verify the cursor position after the current row has been processed by RowMapper or RowCallbackHandler.
Default value is false.

The Entity targeted by the SQL request must be modeled as a transient Entity, as described here.

Here is an example of a JDBCReader fetching data into an Entity called JDBCMovie :

Component_JDBCReader1 Component_JDBCReader2

JDBC Mapping

There are two ways to map the ResultSet retrieved by the SQL request to the attributes of the transient Entity specified by the query tagged value :

  • NAME : The first way is to bind the ResultSet using column names.
    This is the default behavior when you don't specify anything. In that case, the attributes of the transient Entity are mapped to the columns of the same names. You can use aliases in your SQL request to map an attribute to that alias, without being forced to name your attributes after your columns.
  • INDEX : The second way is to bind this ResultSet using column indexes.
    This behavior is activated whenever you add the COLUMN stereotype to an attribute in this Entity. Each COLUMN-stereotyped attribute will then be mapped to the ResultSet so that the order of the attributes matches the order of the selected columns in the SQL request.
    You can also override the default order by explicitly defining the index of each attribute using the index tagged value of the COLUMN stereotype. In that case, it is strongly recommended that you specify all indexes in order to avoid multiple columns mapped to the same index.

ExecutionContext Parameters

Dynamic jobExecutionContext and stepExecutionContext parameters can be used in your SQL query. If you want to use a jobExecutionContext parameter, use the $j([name]) syntax, and the $s([name]) syntax for stepExecutionContext parameters, where [name] is the key of your ExecutionContext parameter.
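
As a minimal illustration of this syntax, the following sketch shows how such placeholders could be resolved against the two contexts. The generated runtime performs this resolution for you; the query, keys and class name below are hypothetical :

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative sketch only: resolves $j([name]) against the jobExecutionContext
// and $s([name]) against the stepExecutionContext, both modeled as plain Maps.
public class ContextParamResolver {
    private static final Pattern JOB_PARAM = Pattern.compile("\\$j\\(([^)]+)\\)");
    private static final Pattern STEP_PARAM = Pattern.compile("\\$s\\(([^)]+)\\)");

    public static String resolve(String sql, Map<String, String> jobCtx, Map<String, String> stepCtx) {
        return replace(replace(sql, JOB_PARAM, jobCtx), STEP_PARAM, stepCtx);
    }

    private static String replace(String sql, Pattern pattern, Map<String, String> ctx) {
        Matcher m = pattern.matcher(sql);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            // [name] is the key of the ExecutionContext parameter
            m.appendReplacement(out, Matcher.quoteReplacement(ctx.get(m.group(1))));
        }
        m.appendTail(out);
        return out.toString();
    }
}
```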

SQL Reader

The SQLReader stereotype is a simplified version of the JDBCReader, offering only the query and sql tagged values as configuration, but otherwise strictly equivalent to the JDBCReader in usage.

Group Reader

JDBCReaders can be customized so that the records retrieved by the query are grouped together depending on rupture fields of that query. To implement this behavior, you can add the GroupReader stereotype alongside the JDBCReader one to your blank Call Operation Action. You can then set the rupture fields used for grouping by filling the field tagged value with a comma-separated list of field names.

Here is an example of a JDBCReader configured as a GroupReader :

Component_GroupReader
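
The grouping semantics can be sketched as follows: consecutive records sharing the same values for the rupture fields end up in the same group. This is an illustrative model only (records are plain Maps here, not your Entities), not the generated implementation :

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.stream.Collectors;

// Sketch of rupture-field grouping: a new group starts each time the tuple of
// rupture field values changes between two consecutive records.
public class RuptureGrouper {
    public static List<List<Map<String, Object>>> group(List<Map<String, Object>> records,
                                                        List<String> ruptureFields) {
        List<List<Map<String, Object>>> groups = new ArrayList<>();
        List<Map<String, Object>> current = new ArrayList<>();
        List<Object> previousKey = null;
        for (Map<String, Object> record : records) {
            // the grouping key is the tuple of rupture field values
            List<Object> key = ruptureFields.stream().map(record::get).collect(Collectors.toList());
            if (previousKey != null && !Objects.equals(key, previousKey)) {
                groups.add(current);          // rupture: close the current group
                current = new ArrayList<>();
            }
            current.add(record);
            previousKey = key;
        }
        if (!current.isEmpty()) {
            groups.add(current);
        }
        return groups;
    }
}
```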

JPA Reader

A JPA Reader allows you to retrieve data from a JPA Datasource by means of an HQL request. You can implement such a JPA Reader by creating an empty Call Operation Action in the flow of your Step and by applying the JPAReader stereotype to it.

The following tagged values can be configured for your JPAReader :

Tagged value (Type) Description
jpql (String) The HQL select request to execute.
restartJpql (String) Deprecated.

The Entity targeted by the HQL request must be modeled as a persisted Entity, as described here.

Here is an example of a JPAReader fetching data into an Entity called JPAMovie :

Component_JPAReader1 Component_JPAReader2

ExecutionContext Parameters

Dynamic jobExecutionContext and stepExecutionContext parameters can be used in your JPQL query. If you want to use a jobExecutionContext parameter, use the $j([name]) syntax, and the $s([name]) syntax for stepExecutionContext parameters, where [name] is the key of your ExecutionContext parameter.

Multi-Datasource

Entities can be fetched from multiple JPA Datasources. If you don't specify anything, the default JPA Datasource configured in the BluAge Forward process is used both for Readers and their associated Entities in the business layer.

If you want to target a different JPA Datasource, you have to add the Datasource stereotype to the Call Operation Action defining your JPA reader. You can then configure its datasource tagged value with the key of your secondary persistence unit.

The Entities targeted by your JPAReaders pointing to a different Datasource must also be configured to be linked to this Datasource. The steps to configure the business layer to use another Datasource are described here.

Here is an example of a JPAReader linked to a different Datasource (both Datasource and PersistenceUnit stereotypes have the key otherDs as tagged value) :

Component_JPAReaderOtherDS1 Component_JPAReaderOtherDS2

FlatFile Reader

A FlatFile Reader allows you to retrieve data from a formatted text file. You can implement such a FlatFile Reader by creating an empty Call Operation Action in the flow of your Step and by applying the FlatFileReader stereotype to it.

The following tagged values can be configured for your FlatFileReader :

Tagged value (Type) Description
autoMapping (boolean) Deprecated.
dateParserClass (String) Specifies the FQN of a class implementing the IDateParser interface provided by BluAge. This interface provides a way to customize the encoding/decoding of Date types. This configuration is only needed when at least one attribute of the Class specified in the file tagged value is of type Date.
An implementation provided by BluAge is used by default if needed.
delimiter (String) Specifies the String used as delimiter between each field of the text file.
This tagged value must only be set if the position tagged value is not set, to provide a DelimitedLineTokenizer.
encoding (String) Defines the encoding of the input text file.
Default is Spring DEFAULT_CHARSET.
fields (String) Optional. List of fields separated by ",". See "FlatFile Mapping" section below.
file (Class) Indicates the Class used to map each line of the formatted text file.
multiFiles (boolean) Whether or not to provide multiple formatted text files as inputs.
Default is false.
position (String) Comma-separated list of ranges of form beginCol-endCol, used to parse each line of the formatted text file with fixed length columns strategy.
This tagged value must only be set if the delimiter tagged value is not set, to provide a FixedLengthLineTokenizer.

The Entity targeted by the LineMapper must be modeled as a transient Entity, as described here.

Here is an example of a FlatFileReader using a DelimitedLineTokenizer (left), one using a FixedLengthLineTokenizer (center), and one using delimiter and fields list (right) :

Component_FlatFileReaderDel Component_FlatFileReaderPos Component_FlatFileReaderFields

FlatFile Mapping

To bind the FieldSet containing the values of the columns in your flat file, you have to specify which attributes in your target transient Entity participate in the mapping.

If the "fields" tagged value is supplied in the reader, this list of fields will be used to populate the entity (by bean introspection).

Otherwise, you must apply the COLUMN stereotype on each required attribute. Attributes are then mapped by index in the same order as the parsed columns of each line.

You can explicitly specify the order of your indexes by filling the index tagged value of the COLUMN stereotype. In that case, it is strongly recommended that you specify all indexes in order to avoid multiple columns mapped to the same index.
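
Assuming the position ranges are 1-based and inclusive, the two tokenizing strategies can be sketched as follows. This is an illustration only; the generated reader relies on the actual Spring Batch tokenizers :

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch of the two FlatFileReader strategies: a delimiter String
// (DelimitedLineTokenizer) or "beginCol-endCol" ranges (FixedLengthLineTokenizer).
public class LineTokenizerSketch {
    // delimiter strategy: split the line on the configured delimiter String
    public static List<String> tokenizeDelimited(String line, String delimiter) {
        return Arrays.asList(line.split(java.util.regex.Pattern.quote(delimiter), -1));
    }

    // position strategy: extract 1-based, inclusive "begin-end" column ranges
    public static List<String> tokenizeFixed(String line, String positions) {
        List<String> tokens = new ArrayList<>();
        for (String range : positions.split(",")) {
            String[] bounds = range.trim().split("-");
            int begin = Integer.parseInt(bounds[0]);
            int end = Integer.parseInt(bounds[1]);
            tokens.add(line.substring(begin - 1, end));
        }
        return tokens;
    }
}
```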

EBCDIC Reader

An EBCDIC Reader allows you to retrieve data from copybook files. You can implement such an EBCDIC Reader by creating an empty Call Operation Action in the flow of your Step and by applying the EBCDIC_READER stereotype to it.

The following tagged values can be configured for your EBCDIC_READER :

Tagged value (Type) Description
booleanTrueExpressionValue (String) Defines a regex pattern: String values matching this pattern are parsed as the Boolean true.
Default is [Y1].
copybook (String) Convenient way to provide a hardcoded classpath path pointing to the input copybook. The other and more dynamic way to provide this information, if this tagged value is not filled, is through the BluAge Forward process configuration.
dateParserClass (String) Specifies the FQN of a class implementing the IDateParser interface provided by BluAge. This interface provides a way to customize the encoding/decoding of Date types. This configuration is only needed when at least one attribute of the Classes specified in the file tagged value is of type Date.
An implementation provided by BluAge is used by default if needed.
file (Class[0..*]) List of Classes to be used as EBCDIC mappers.
hasRDW (boolean) Whether or not the input copybook has a Record Descriptor Word (RDW).
Default is false.
legacyMode (boolean) Whether or not to stick to the legacy mode when parsing packed and zoned copybook values.
Default is true.
multiFiles (boolean) Whether or not to provide multiple EBCDIC files as inputs.
Default is false.

The Entities used as EBCDIC mappers must be modeled as transient Entities, as described here.

Here is an example of a simple EBCDIC_READER with an Ebcdic1MovieIn mapper :

Component_EBCDIC_READER1 Component_EBCDIC_READER2
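
For instance, the booleanTrueExpressionValue behavior can be sketched as a simple regex match; with the default [Y1] pattern, "Y" and "1" are read as true and anything else as false (illustration only) :

```java
// Minimal sketch of booleanTrueExpressionValue: a raw copybook String is parsed
// as a Boolean by matching it against the configured regex pattern.
public class BooleanParseSketch {
    public static boolean parseBoolean(String raw, String truePattern) {
        return raw.matches(truePattern); // [Y1] is the default pattern
    }
}
```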

EBCDIC Mapping And Submappers

There are two ways to map the values retrieved by the copybook to the attributes of the transient Entity specified by the file tagged value :

  • NAME : The first way is to bind these values using attributes names.
    This is the default behavior when you don't specify anything. In that case, the names of the attributes in the transient Entity are mapped to the names specified in the copybook configuration.
  • INDEX : The second way is to bind these values using indexes.
    This behavior is activated whenever you add the COLUMN stereotype to an attribute in this Entity. Each COLUMN-stereotyped attribute will then be mapped to the list of values so that the order of the attributes matches the order in the retrieved list.
    You can also override the default order by explicitly defining the index of each attribute using the index tagged value of the COLUMN stereotype. In that case, it is strongly recommended that you specify all indexes in order to avoid multiple values mapped to the same attribute.

You can also configure multiple levels of mappers if submappers are required for managing FieldsGroups. You can add the FieldsGroup stereotype to the complex attribute representing the submapper in the main mapper, as demonstrated in the example below.

Component_EBCDICSubmappers

Please note that you only have to fill the file tagged value with the main mappers in that case, and not all mappers including submappers.

The FieldsGroup stereotype also has the index tagged value, which has the same purpose as the index tagged value of the COLUMN stereotype. Please note that you don't have to apply both the COLUMN and FieldsGroup stereotypes to the attribute; only the latter is required in that case.

Discriminator Patterns

Multiple classes can be specified as EBCDIC mappers. In that case, you may have to specify the discriminator pattern used by each mapper so that it only maps the records which are directed to it.

You can specify this pattern by applying the Discriminator stereotype to your transient Entities targeted by the file tagged value, then by filling the appropriate pattern in the regex tagged value of the Discriminator stereotype.
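
The dispatch can be sketched as follows: each candidate mapper carries the regex from its Discriminator stereotype, and a raw record is routed to the first mapper whose pattern matches it. Mapper and pattern names below are hypothetical, and this is an illustration of the idea, not the generated implementation :

```java
import java.util.Map;

// Sketch of discriminator-pattern dispatch across several EBCDIC mappers.
public class DiscriminatorDispatch {
    public static String selectMapper(String record, Map<String, String> regexByMapper) {
        for (Map.Entry<String, String> entry : regexByMapper.entrySet()) {
            if (record.matches(entry.getValue())) {
                return entry.getKey();   // first mapper whose regex matches wins
            }
        }
        return null; // record matched no mapper's discriminator pattern
    }
}
```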

Processors Modeling

A Processor is the core piece of your process which allows you to manipulate the data you received from your Reader, with the aim of triggering actions and/or passing it to a Writer which will later persist your modified data. This Processor is executed for each item retrieved from the Reader.

To mark an operation as a Processor of your Step, simply drag-and-drop the operation into the flow of your Step, without further stereotype. It will automatically create a Call Behavior Action pointing to your operation.

A Processor must be an operation with no input argument if no Reader is provided in the Step or with an input argument matching the Reader type otherwise. A Processor must return a value type which will be the output type of the Step.

Operations in process

Almost all Readers and Writers explained in previous and next sections have an equivalent which can be executed during the process.

The table below lists every available "operation in process" stereotype and, if one exists, its equivalent Reader, Writer or Tasklet stereotype :

Operation in process (Stereotype) Equivalent Reader / Writer / Tasklet (Stereotype) Input parameters Return value
read_ebcdic_operation EBCDIC_READER. See here. none EBCDIC mapped Class
read_flat_file_operation FlatFileReader. See here. none Flatfile mapped Class
sort_ebcdic_operation SortTasklet. See here. none void
sql_operation JDBCReader. See here. Request parameters using name binding Depends on the request
sql_update_operation JDBCWriter. See here. Request parameters using name binding Depends on the request
write_ebcdic_operation EBCDIC_WRITER. See here. EBCDIC mapped Class void
write_flat_file_operation FlatFileWriter. See here. Flatfile mapped Class void
write_template_line_operation TemplateLineWriter. See here. Template ID String (optional), List of Flatfile mapped Class void
write_text_file_operation TextFileWriter. See here. Flatfile mapped Class void
xml_read No equivalent. XML-formatted String to parse XML mapped Class
xml_write No equivalent. XML mapped Class XML-formatted String result

All tagged values provided by the operation in process stereotypes are the same as the tagged values provided by their equivalent Reader, Writer or Tasklet and won't be detailed further in this section.

To create an operation in process, create the prototype of your operation in a service using parameters and stereotype defined in the previous table. You can then directly reference this operation in your process operation using a Call Operation Action. Here is an example of a readActor flatfile reader in process and its usage in a readActors process operation :

Component_ReadFlatFile1 Component_ReadFlatFile2

XML Read / XML Write

XML Read and XML Write operations are convenient ways to convert an Entity from/into its corresponding XML representation.

To get the behavior that suits your needs, you have to configure your Entity to indicate which attributes have to be mapped and with which names, by means of the xml_element and xml_property stereotypes, which configure Entities and attributes respectively.

The following tagged values can be configured through the xml_element stereotype :

Tagged value (Type) Description
indent (boolean) Deprecated.
xml_list_name (String) Specifies a root XML tag name enclosing the Entity's attributes list.
xml_name (String) Specifies the corresponding XML tag name for an Entity.
If this tagged value is not filled, the Entity name will be used as default.

The following tagged values can be configured through the xml_property stereotype :

Tagged value (Type) Description
propertyAsAttribute (boolean) Whether this attribute has to be rendered as an attribute of the enclosing tag rather than as a nested tag.
xml_list_name (String) Specifies the corresponding root XML tag name for enclosing list attributes.
xml_name (String) Specifies the corresponding XML tag name for an attribute.
If this tagged value is not filled, the attribute name will be used as default.

Here is an example of three configured Entities and their corresponding XML representation with dummy values :

Component_XmlConfiguration Component_XmlRepresentation
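
The effect of these tagged values on an xml_write result can be sketched as follows: xml_name gives the tag names, and propertyAsAttribute decides whether a value is rendered as an attribute of the enclosing tag or as a nested tag. Entity and tag names here are made up for illustration; the real conversion is handled by the generated code :

```java
import java.util.Map;

// Sketch of xml_write output shaped by xml_name and propertyAsAttribute.
public class XmlWriteSketch {
    public static String render(String entityXmlName,
                                Map<String, String> attributeProperties,  // propertyAsAttribute = true
                                Map<String, String> nestedProperties) {   // propertyAsAttribute = false
        StringBuilder xml = new StringBuilder("<").append(entityXmlName);
        attributeProperties.forEach((name, value) ->
                xml.append(' ').append(name).append("=\"").append(value).append('"'));
        xml.append('>');
        nestedProperties.forEach((name, value) ->
                xml.append('<').append(name).append('>').append(value)
                   .append("</").append(name).append('>'));
        return xml.append("</").append(entityXmlName).append('>').toString();
    }
}
```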

Pre-processors & Post-processors

Pre-processors and post-processors are operations which can be executed respectively at the beginning / at the end of a Step, typically for initialization / cleaning, logging or to override the default ExitStatus depending on some custom conditions.

You can mark a Call Behavior Action pointing to an operation as a pre-processor or a post-processor by adding the PreProcessor or PostProcessor stereotype to this Call Behavior Action.

Here is an example Step configuration defining a PreProcessor and a PostProcessor :

Component_PrePostProcessor

  • A PreProcessor must be an operation with no input parameter and returning void;
  • A PostProcessor must be an operation with no input parameter and returning void or a value type compatible with the input argument of the String.valueOf Java method :

    • If void is returned, the ExitStatus returned by the PostProcessor will be the standard one from the call to super.postprocess();
    • Otherwise, the returned ExitStatus is the result of calling String.valueOf on the returned value of your custom operation.
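
This rule can be sketched as follows. A void PostProcessor is modeled here as a null result, and the hypothetical "COMPLETED" constant stands in for the standard ExitStatus returned by super.postprocess() :

```java
// Minimal sketch of how a PostProcessor return value becomes the Step's ExitStatus.
public class PostProcessorExitStatus {
    public static String exitStatus(Object postProcessorResult) {
        if (postProcessorResult == null) {
            return "COMPLETED"; // stand-in for the standard ExitStatus
        }
        return String.valueOf(postProcessorResult); // custom operation result
    }
}
```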

Job Context / Step Context / Service Stop

You can access the Job Context and the Step Context of your Batch application by accessing special interfaces provided by BluAge in the PK_UTILS package.

Component_JobStepContext

Here is an example of a PreProcessor putting a number of written records in the Step Context Manager, with the count key, and a PostProcessor retrieving this value :

Component_JobStepContextPut Component_JobStepContextGet

You can also access the ServiceStop interface which provides several convenient methods to stop the Job or skip items :

Component_JobStepContext

NOTE : Methods crossed out are deprecated.

Last Item

Another piece of information you can access in your process operation is a boolean indicating whether or not the current item is the last one to be processed. You can access this value by specifying a second input parameter of type boolean in your process operation. The generation will automatically take care of providing the right value for this parameter using the Step Context.

Writers Modeling

A Writer is the core piece of your process which allows you to persist your data to multiple target types explained below. You can specify as many writers as you want in your Step activity diagram.

JDBC Writer / SQL Writer

A JDBC Writer allows you to persist/update data into a JDBC Datasource. You can implement such a JDBC Writer by creating an empty Call Operation Action in the flow of your Step and by applying the JDBCWriter stereotype to it.

The following tagged values can be configured for your JDBCWriter :

Tagged value (Type) Description
assertUpdates (boolean) Whether or not to force the assertion that each item needs to update at least one row.
Default is false.
condition (Operation) Specifies an operation used to conditionally trigger the writer. This operation must take as a parameter the transient Entity used to store the data and return a boolean indicating if this Entity needs to be processed or not.
For more information about Conditional Writers, see section Conditional Writers.
sql (String) The SQL query to be executed.

Here is an example of a JDBCWriter updating data from a JdbcMovie Entity to a JDBC Datasource :

Component_JDBCWriter1

SQL Writer

The SQLWriter stereotype is a simplified version of the JDBCWriter, offering only the condition and sql tagged values as configuration but strictly equivalent to the JDBCWriter as for its usage.

SQL Script Writer

If the standard SQLWriter doesn't cover your needs, you can also create a writer which will execute some SQL statements contained in a text file. You can implement such an SQL Script Writer by creating an empty Call Operation Action in the flow of your Step and by applying the SQLScriptWriter stereotype to it.

The following tagged values can be configured for your SQLScriptWriter :

Tagged value (Type) Description
condition (Operation) Specifies an operation used to conditionally trigger the writer. This operation must take as a parameter the transient Entity used to store the data and return a boolean indicating if this Entity needs to be processed or not.
For more information about Conditional Writers, see section Conditional Writers.

The path to the text file containing the statements is configured by the BluAge Forward process through a properties file entry.

JPA Writer

A JPA Writer allows you to persist/update data into a JPA Datasource. You can implement such a JPA Writer by creating an empty Call Operation Action in the flow of your Step and by applying the JPAWriter stereotype to it.

The following tagged values can be configured for your JPAWriter :

Tagged value (Type) Description
serviceOp (Operation) Specifies the process operation which will provide the update logic. This operation usually calls a CRUD operation leading to the update of the persisted Entity.
See below for more information about CRUD operations calls.

CRUD Operations

There are two ways to access the JPA repository associated with a persisted Entity :

  • In all Batch stacks, you can call the standard CRUD methods declared in the JpaBaseDAO service of the Batch BluAge Profile directly into your process, as shown below :

    Component_JPAWriterJpaBaseDAOWriter Component_JPAWriterJpaBaseDAOCall Component_JPAWriterJpaBaseDAO

  • In newer SpringBoot Batch stack, you can also directly call a CRUD operation linked to your Entity as described here. Here is an example of a JPAWriter updating data from a Jpa1Movie Entity to a JPA Datasource using this CRUD operation call strategy :

    Component_JPAWriter1 Component_JPAWriter2 Component_JPAWriter3

FlatFile / TemplateLine / TextFile Writers

FlatFile, TemplateLine and TextFile writers are three variants which allow you to persist/update data into output text files using different configurations :

  • The FlatFileWriter allows you to write your data to a text file either using a delimiter character or a fixed characters length for each column;
  • The TemplateLineWriter allows you to write your data to a text file using a line pattern defined in a text file provided as an input. The writer uses this pattern to map the data for each line of the text file;
  • The TextFileWriter allows you to write your data to a text file using a service process operation which will define the String to write for each line of the text file.

You can implement such writers by creating an empty Call Operation Action in the flow of your Step and by applying the FlatFileWriter, TemplateLineWriter or TextFileWriter stereotype to it depending on your needs.

The following tagged values can be configured for these 3 types of writers :

Tagged value (Type) Description
append (boolean) Whether to write data at the end of the file (true) or to erase previous content (false).
Default is false.
condition (Operation) Specifies an operation used to conditionally trigger the writer. This operation must take as a parameter the transient Entity used to store the data and return a boolean indicating if this Entity needs to be processed or not.
For more information about Conditional Writers, see section Conditional Writers.
encoding (String) Defines the encoding of the output file.
Default is Spring DEFAULT_CHARSET.

The path to the text file to be written to as well as the file holding the template for the TemplateLineWriter, are configured by the BluAge Forward process through properties file entries.

FlatFileWriter

The following tagged values can also be configured for your FlatFileWriter :

Tagged value (Type) Description
booleanEncoder (String) Specifies the FQN of a class implementing the IBooleanEncoder interface provided by BluAge. This interface provides a way to customize the encoding of Boolean types. This configuration is only needed when at least one attribute of the input Class used by the writer is of type Boolean.
An implementation provided by BluAge is used by default if needed, and returns the "1" String if true and "0" otherwise.
delimiter (String) Specifies the String used as delimiter between each field of the text file.
This tagged value must only be set if the format tagged value is not set, to provide a DelimitedLineAggregator.
format (String) Specifies the format String used to build each line of the formatted text file with the fixed length columns strategy.
This tagged value must only be set if the delimiter tagged value is not set, to provide a FormatterLineAggregator.
For more information about the syntax used by the java Formatter, please refer to this page.
fieldsToWrite (String) Comma-separated list of fields to be written to the output text file. The fields names must match the attributes names of the Class used by the writer.

Here is an example of a FlatFileWriter using a DelimitedLineAggregator (left) and one using a FormatterLineAggregator (right) :

Component_FlatFileWriter2 Component_FlatFileWriter1 Component_FlatFileWriter3
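
The two aggregation strategies can be sketched with plain Java: the delimiter join mirrors a DelimitedLineAggregator, and the String.format call a FormatterLineAggregator. Field values below are hypothetical, and this is an illustration of the configuration semantics, not the generated code :

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch of the two FlatFileWriter line-aggregation strategies.
public class LineAggregatorSketch {
    // delimiter strategy: join the field values with the configured delimiter
    public static String aggregateDelimited(List<?> fields, String delimiter) {
        return fields.stream().map(String::valueOf).collect(Collectors.joining(delimiter));
    }

    // format strategy: apply a java.util.Formatter format String to the fields
    public static String aggregateFormatted(String format, Object... fields) {
        return String.format(format, fields);
    }
}
```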

TemplateLineWriter

The following tagged values can also be configured for your TemplateLineWriter :

Tagged value (Type) Description
footer_id (String) Specifies the ID used at the beginning of the template line to discriminate which pattern is used for footers.
This tagged value must only be set alongside the footerWriter tagged value if needed.
footerWriter (Operation) Specifies the target service process operation used to provide the String to be parsed for the footer, based on the pattern provided by the footer_id value. This operation must not take any parameter and must return a String.
This tagged value must only be set alongside the footer_id tagged value if needed.
format (TemplateFormat) Specifies the formatter to use for the patterns, among Cobol, String and Template. For more information about available formatters, see below.
Default is Template.
header_id (String) Specifies the ID used at the beginning of the template line to discriminate which pattern is used for headers.
This tagged value must only be set alongside the headerWriter tagged value if needed.
headerWriter (Operation) Specifies the target service process operation used to provide the String to be parsed for the header, based on the pattern provided by the header_id value. This operation must not take any parameter and must return a String.
This tagged value must only be set alongside the header_id tagged value if needed.
input_encoding (String) Defines the encoding of the template file.
Default is Java's Charset.defaultCharset.
recordWriter (Operation) Specifies the target service process operation used to provide the list of parameters to be matched to the provided pattern. This operation must take as a parameter the Entity used to store the data and must return a List.
templateId (String) Specifies the ID used at the beginning of the template line to discriminate which pattern is used for records.

There are three available formats that can be used to build the patterns to apply to each record and/or header/footer :

  • Cobol : This format is based on COBOL format, in conjunction with an EbcdicEncoder;
  • String : This format uses the standard Java Formatter, used in String.format static method for instance;
  • Template : This format is based on the Java MessageFormat with the addition of offering the possibility to specify a length and an alignment for each pattern. The following syntax is provided :
    • MessageFormatPattern
    • MessageFormatPattern [ Length ]
    • MessageFormatPattern [ Length Alignment ]
    • Length is an integer that forces the result of the pattern to this size, by removing characters if the result is too long or by adding spaces if it is too short.
    • Alignment is either 'l' or 'r', respectively for left and right alignment. If not specified, right alignment is implied.
    • Examples :
      • Template.format("_{0}[9]_", 123456) -> _   123456_
      • Template.format("_{0}[9l]_", 123456) -> _123456   _
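
Assuming only plain {n} placeholders, the length/alignment extension can be sketched as follows. The real Template formatter supports full MessageFormat patterns, so this only illustrates the bracket syntax shown in the examples above :

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of the Template format: {n} optionally followed by [length] or
// [length alignment], padded or truncated to the requested size.
public class TemplateSketch {
    private static final Pattern TOKEN = Pattern.compile("\\{(\\d+)\\}(?:\\[(\\d+)([rl])?\\])?");

    public static String format(String pattern, Object... args) {
        Matcher m = TOKEN.matcher(pattern);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            String value = String.valueOf(args[Integer.parseInt(m.group(1))]);
            if (m.group(2) != null) {
                int length = Integer.parseInt(m.group(2));
                boolean left = "l".equals(m.group(3));
                if (value.length() > length) {
                    value = value.substring(0, length); // too long: truncate
                } else {
                    String pad = " ".repeat(length - value.length());
                    value = left ? value + pad : pad + value; // default: right-aligned
                }
            }
            m.appendReplacement(out, Matcher.quoteReplacement(value));
        }
        m.appendTail(out);
        return out.toString();
    }
}
```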

Here is an example of a TemplateLineWriter based on the following template file content :

EMPLOYEE: EMPID:%1d EMPNAME:%2s EMPDOB:%3s EMPSALARY:%4.2f EMAILID:%5s

Component_TemplateLineWriterStep Component_TemplateLineWriterRecordWriter

TextFileWriter

The following tagged values can also be configured for your TextFileWriter :

Tagged value (Type) Description
headerWriter (Operation) Specifies the target service process operation used to provide the String to be written at the top of the output text file. This operation must not take any parameter and must return a String.
recordWriter (Operation) Specifies the target service process operation used to provide the String to be written for each record. This operation must take as a parameter the Entity used to store the data and must return a String.

EBCDIC Writer

An EBCDIC Writer allows you to persist/update data into an EBCDIC file. You can implement such an EBCDIC Writer by creating an empty Call Operation Action in the flow of your Step and by applying the EBCDIC_WRITER stereotype to it.

The following tagged values can also be configured for your EBCDIC_WRITER :

Tagged value (Type) Description
append (boolean) Whether to write data at the end of the file (true) or to erase previous content (false).
Default is false.
booleanEncoder (String) Specifies the FQN of a class implementing the IBooleanEncoder interface provided by BluAge. This interface provides a way to customize the encoding of Boolean types. This configuration is only needed when at least one attribute of the input Class used by the writer is of type Boolean.
An implementation provided by BluAge is used by default if needed, and returns the "1" String if true and "0" otherwise.
condition (Operation) Specifies an operation used to conditionally trigger the writer. This operation must take as a parameter the transient Entity used to store the data and return a boolean indicating if this Entity needs to be processed or not.
For more information about Conditional Writers, see section Conditional Writers.
copybook (String) Convenient way to provide a hardcoded classpath path pointing to the output copybook. The other and more dynamic way to provide this information, if this tagged value is not filled, is through the BluAge Forward process configuration.
dateParserClass (String) Specifies the FQN of a class implementing the IDateParser interface provided by BluAge. This interface provides a way to customize the encoding/decoding of Date types. This configuration is only needed when at least one attribute of the input Class used by the writer is of type Date.
An implementation provided by BluAge is used by default if needed.
items (Class[0..*]) Deprecated.
legacyMode (boolean) Whether or not to stick to the legacy mode when parsing packed and zoned copybook values.
Default is true.
rejectedFields (Property[0..*]) Deprecated.
writeRDW (boolean) Whether or not to write a Record Descriptor Word (RDW) at the beginning of the EBCDIC output file.
Default is false.

Composite Writers

Multiple writers can be used in a single step, allowing you to persist the same data to different targets without having to fetch them through a reader each time. If multiple writers are present in a Step activity diagram, this will implicitly trigger the creation of a CompositeItemWriter which will delegate the writing process to each declared writer.

Conditional Writers

Writers can be triggered conditionally in several ways :

  • The first one is to fill the condition tagged value provided by all writer stereotypes except the JPAWriter. This tagged value allows you to point to an operation which will serve as a condition to check whether or not the writer has to process this item. This condition operation must take the Entity as input parameter and must return a boolean.
    Here is an example of a FlatFileWriter conditioned by a process operation which will filter only male actors :

    Component_ConditionWriterOperation1 Component_ConditionWriterOperation2

  • The second one is roughly equivalent except that the expected returned boolean can be configured. This behavior is obtained by adding an InputPin to the Call Operation Action supporting your writer and providing as type an Interface stereotyped ItemCondition. With this in place, you can specify the target condition operation with the service tagged value of the ItemCondition stereotype, and specify whether the expected returned boolean should be true or false with the onValue tagged value :

    Component_ItemCondition1 Component_ItemCondition2

  • The last one can be used to provide a type checking and is implemented by adding an InputPin to the Call Operation Action supporting your writer. You can then specify a type to this pin which will be checked against using instanceof in order to be written. It can be useful in case of multiple readers leading to generic objects being manipulated at process time. Here is an example of a Job defining a CompositeWriter for movies and actors, and defining one different writer for each type of class :

    Component_ConditionWriterTarget1

  • You can also combine the use of the condition tagged value with this last check to provide type checking: if the type is correct, the item is cast and passed to your condition operation. It can be useful to avoid manual type checking in your condition operation and to work directly with the correct type as a parameter.
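
The combined check can be sketched as a two-stage filter: the item must first pass the instanceof type check, then the condition operation (modeled here as a predicate on the item) decides whether the writer processes it. Class and predicate below are hypothetical illustrations :

```java
import java.util.function.Predicate;

// Sketch of a conditional writer combining the type check and the condition operation.
public class ConditionalWriterSketch {
    public static <T> boolean shouldWrite(Object item, Class<T> expectedType, Predicate<T> condition) {
        if (!expectedType.isInstance(item)) {
            return false; // wrong type: skip this writer, another delegate may accept it
        }
        return condition.test(expectedType.cast(item)); // cast, then apply the condition
    }
}
```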