Thursday, 31 July 2014

Implementations-

Machinery- Komatsu, ABB, Andritz, Kalmar, Alstom, Liebherr, Rolls-Royce, Husky, Metso, OTIS, Solar Turbines, Brugman International, Brill, Gildemeister, Heimer

Electrical Engineering, Electronics- NEWAYS, Philips, MSL, Thrane, Barco, Hitachi, Elcoteq, Flextronics, Solectron, Tyco, Siemens

Aircraft and Defence- Boeing, Sypris, Piper, Stork, BAE Systems, Telair, Gulfstream, Driessen, Bombardier, CAE, Dutch Aero

Automotive & Transport- Honda, Ferrari, Dana, Paccar, Visteon, Aprilia, Magna Donnelly, Volkswagen

Wednesday, 23 July 2014

Product Version

Triton 1.0 to 2.2d, and 3.0 to 3.1bx (the last Triton version); the product was then renamed to Baan
Baan 4.0 (last version of BaanIV is BaanIVc4 SP30) & Industry extensions (A&D,...)
Baan 5.0 (last version of BaanV is Baan5.0 c SP26)
Baan 5.1, 5.2 (for specific customers only)
SSA ERP 6.1 /Infor ERP LN 6.1 / Infor10 ERP Enterprise / Infor LN
ERP LN 6.1 FP6, released in December 2009
ERP LN 6.1 FP7, released in January 2011
ERP LN 6.1 FP8, released in 2012
Infor LN 10.3, released in July 2013


Infor ERP LN 6.1 supports Unicode and comes with additional language translations.

Wednesday, 9 July 2014

Changing sort order of index fields

Situation
Information in Infor LN is mostly sorted using one of the indices. The default sort order is always ascending. Suppose you write history each time a user changes an item. The item history is stored in a separate table, tcibd321, and the first index consists of the fields “item” and “changed on date”. If you show the history in a list window, the data is generally sorted ascending by “item” and “changed on date”. However, the requirement is to show the latest change on top. So, we need to influence the sort order of one of the index fields.

Solution
A new tools function is available to change the order of the index fields.
The syntax is as follows:
function long query.define.sort.order(long table.index, const string field.var, direction.var [, const string ..., ...])

Use this function to set the sort order of a field of a table index. The function overrides the sort order of fields in the table index for the current session only.
The function does not override the field sort order of an active filter. You can use this function in the before.program section. In other sections, you need to activate this feature by calling the function rebuild.query(). You can set the required sort order for one or more index fields.


Example
To retrieve the history information from the example above in the required order, use the new function in before.program:

before.program:
    long ret

    ret = query.define.sort.order(1, "tcibd321.date", "desc")
    if ret < 0 then
        |* an error message can be shown ...
    endif
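The additional optional pairs allow more than one index field to be set in a single call. A minimal sketch for a hypothetical requirement to keep "item" ascending while showing the latest date first (note: the "asc" direction string is an assumption; the example above only shows "desc"):

```
before.program:
    long ret

    |* Hypothetical: sort "item" ascending and "changed on date"
    |* descending for table index 1, set in one call.
    ret = query.define.sort.order(1, "tcibd321.item", "asc",
                                     "tcibd321.date", "desc")
    if ret < 0 then
        |* an error message can be shown ...
    endif
```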

TIV
This function is available from TIV level 1900 and higher.

Function -> sum.records.in.view

Situation
Many grid sessions in Infor LN contain total fields at the bottom. In most cases, the value of such a total field is calculated using an SQL statement with a SUM aggregate. Customers complain that this total field does not show the proper value when the user applies column filtering.

Solution
A new tools function, available from FP7 onwards, calculates the value of the total field. The syntax is as follows:
function void sum.records.in.view(const string column1.var, double result1.var [..., ...])
This function performs a query on the main table of the session, including all reference tables, and will sum the values of each specified column. The query will take the current view fields, active user filter and any query extensions into account. An even number of parameters must be passed and any specified column must exist in the data dictionary. If one of these two conditions is not met, an assert is given.

Note: For performance reasons, call this function only once, passing all field pairs in a single call!

Wrong usage:
sum.records.in.view("tpctm070.perc", total.percentage)
sum.records.in.view("tpctm070.flmt", total.funding.limit)
sum.records.in.view("tpctm070.apra", total.approved.amount)
Right usage:
sum.records.in.view("tpctm070.perc", total.percentage, "tpctm070.flmt", total.funding.limit, "tpctm070.apra", total.approved.amount)

TIV
This function is available from TIV levels 1753 through 1800, and from 1802 and higher.

Implementation
This new function should be used from the current development release onwards, unless filtering should not impact the total field. The function can also be used to solve customer problems on FP7 and FP8, although we will not proactively implement this in these maintenance versions.

Also check the usage of the following function:
function void display.total.fields (string fieldname1, void value1, string fieldname2,..., void value2,...)
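As a sketch of how the two functions could work together (the field and variable names are taken from the usage example above; combining them this way is an assumption based on the signatures, not a documented pattern):

```
|* Recalculate the totals for the current view, active filter
|* included, and display them in the total fields of the session.
sum.records.in.view("tpctm070.perc", total.percentage,
                    "tpctm070.flmt", total.funding.limit,
                    "tpctm070.apra", total.approved.amount)
display.total.fields("total.percentage", total.percentage,
                     "total.funding.limit", total.funding.limit,
                     "total.approved.amount", total.approved.amount)
```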

Check if a new record is inserted or copied

 
Situation
The DAL is decoupled from the user interface as much as possible. Obviously, the DAL knows whether a record is being added, changed, or deleted. If a record is added, it can be very useful to know whether this new record is being copied from another one.
 
Solution
A tools function is available to check whether a new record is inserted or copied. The syntax is as follows:
function boolean dal.is.copy.active()
 
You can use this function in all field hooks and in the after.new.object(), before.save.object(), and after.save.object() hooks of the DAL. Note that you should only use this function if the mode is DAL_NEW.
If the function returns false, a normal insert is done.
If the function returns true, a dal.copy.object() has been done or the 4GL engine is handling the choice DUPL.OCCUR or GLOBAL.COPY.
 
 
Example

function extern long before.save.object(long mode)
{
    on case mode
    case DAL_NEW:
        if dal.is.copy.active() then
            |** Record is being copied, also copy children
            if copy.children() <> 0 then
                return(DALHOOKERROR)
            endif
        else
            |** Record is new ...
        endif
        break
    case DAL_UPDATE:
        break
    endcase
    return(0)
}
 
TIV
This function is available from TIV level 1801 and higher.
 
 

Tuesday, 8 July 2014

Session is started with the context of the Parent

Introduction

Sessions started from the menu are not started with a certain context. However, many sessions can be started from another session via the session menu, a button, etc. In that case, the session is started with the context of the parent and should show the proper data. This document explains what is needed in your program script in order to handle this behavior properly.

Situation


Suppose that table 'Items' (tcibd001) contains some fields and two indexes. The first index is the primary key and consists of the field 'Item' (tcibd001.item). The second index consists of the fields 'Item Group' (tcibd001.citg) and 'Item' (tcibd001.item). Index 1 and index 2 are available in the 'Sort by' option of the session 'Items'. Suppose you have started the session 'Items', changed the sort order to index 2, and closed the session. The next time you start this session, it should start with index 2.

Note: Currently sessions are not always starting with the last-used index, but with index 1. This is an error and has been solved in Tools.

Suppose you start the session 'Items' from the menu of another session, so with context. In many cases, the zoom.from.all or init.group section is used to import some data from the parent session. See the following code for an example. Note that this example is kept as simple as possible, without the usage of the UI template.
 
 
group.1:
init.group:
    import("tcibd001.item", "imported.item")
    if not isspace(imported.item) then
        tcibd001.item = imported.item
        ignore.first.event = true
        execute(find.data)
    endif


Find below the content of the item table in our example, ordered by index 2.

Item         Item Group
APPLE        FRUIT
BANANA       FRUIT
ORANGE       FRUIT
CHISEL       TOOLS
DRILL        TOOLS
HAMMER       TOOLS
SAW          TOOLS
SCREWDRIVER  TOOLS

Problem Description


Suppose that the selected item code in a certain parent session is 'HAMMER', which belongs to item group 'TOOLS'. What happens if we start session 'Items' from the parent session? It will use the imported item to show the proper object. The choice find.data will retrieve the data from the database using the current active index. In our example, index 2 ('Item Group', 'Item') is used. The value of tcibd001.item (HAMMER) has been imported and is available, but the value of tcibd001.citg is empty. So, choice find.data returns item APPLE as the first record to be displayed. And this is wrong.

Summarized: A session, started from a parent, will not show the proper data if one of the fields in the active key is unknown.

Solution


The solution for this problem is simple: make sure that all fields of the active key are known. This can be done in different ways. The preferred and most robust way is to read this data with a simple select. The solution in our example would be:

group.1:
init.group:
    import("tcibd001.item", "imported.item")
    if not isspace(imported.item) then
        if curr.key <> 1 then
            select tcibd001.*
            from   tcibd001
            where  tcibd001._index1 = {:imported.item}
            |* sometimes you need: where tcibd001._index1 >= {:imported.item}
            as set with 1 rows
            selectdo
            endselect
        else
            tcibd001.item = imported.item
        endif
        ignore.first.event = true
        execute(find.data)
    endif


Notes

1.    If you start a session from a parent and a certain key is required from a functionality perspective, you need to set the desired index in your program logic.

2.    In most cases a standard read function is available. In our example the select on tcibd001 should be replaced by tcibd.dll0001.read.item(imported.item).

3.    Reading all fields ensures that all fields of all indexes are filled.

4.    Make sure this solution does not conflict with the function to.key() if you use it in your script.

5.    In our example index 1 is imported. If you import another index, you need to change the code accordingly.
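Following note 2, the inline select can be replaced by the standard read function. A sketch, assuming tcibd.dll0001.read.item() fills the tcibd001 record buffer and returns 0 on success (the return convention is an assumption):

```
group.1:
init.group:
    import("tcibd001.item", "imported.item")
    if not isspace(imported.item) then
        if curr.key <> 1 then
            |* Assumption: this DLL function reads the item into the
            |* tcibd001 buffer, so all index fields become known.
            if tcibd.dll0001.read.item(imported.item) <> 0 then
                |* item not found; handle the error ...
            endif
        else
            tcibd001.item = imported.item
        endif
        ignore.first.event = true
        execute(find.data)
    endif
```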

Transaction Management and DAL errors

Introduction
This newsletter is intended to make sure you are aware of good transaction management related to error handling.


Situation A
Suppose a database transaction is being done on the item master table tcibd001. For all items of type ‘Purchased’ the field tcibd001.fldx is changed to ‘value_x’. There are 50 items of type ‘Purchased’ in table tcibd001. What is the best way to update these 50 records?

Solution 1 (not so good)
The following code is written to do the update as mentioned in the situation A description:
long ret.val

db.retry.point()
select tcibd001.*
from   tcibd001 for update
where  tcibd001._index6 = {tckitm.purchase}
selectdo
    ret.val = dal.change.object("tcibd001")
    dal.set.field("tcibd001.fldx", value_x)
    ret.val = dal.save.object("tcibd001")
    if ret.val <> 0 then
        print.error.to.report(...)
    endif
endselect
commit.transaction()

As you can see, the return value ret.val is used to print an error to the report. What happens with the records for which the dal.save.object(…) hook returns a DALHOOKERROR? Obviously, these items are printed to the report. But the update to the database may still have been performed, for example if the DALHOOKERROR was returned from the after.save.object() hook. Updates done to dependent objects may also have been performed!

Important Note: a DALHOOKERROR is only a signal that something is not correct. It does not mean that the insert/update/delete action on the table or dependent tables has been skipped.

Solution 2 (updating a small range)
There is only one way to skip the update of a record to the database: use abort.transaction(). The following code is written to do the update as mentioned in the situation A description:
long ret.val

db.retry.point()
select tcibd001.*
from   tcibd001 for update
where  tcibd001._index6 = {tckitm.purchase}
order by tcibd001._index6 with retry
selectdo
    ret.val = dal.change.object("tcibd001")
    if ret.val = 0 then
        dal.set.field("tcibd001.fldx", value_x)
        ret.val = dal.save.object("tcibd001")
    endif
    if ret.val <> 0 then
        abort.transaction()
        print.error.to.report(...)
    else
        commit.transaction()
    endif
endselect

This solution is much better than the previous one. All return values are checked, and the records are only saved if no errors occur. The 'order by ... with retry' is added to the select statement to make sure that successfully updated records are not updated again in case of a db.retry.

Important Note: Use logical database transactions. A logical transaction is a sequence of related database actions that are treated as a unit: the actions are processed in their entirety, or not at all. A logical transaction mostly applies to one parent record and its dependent objects. This means that the complete transaction must be committed or, in case of errors, aborted. Neither a commit nor an abort will then lead to inconsistent data in the database.




Is there any disadvantage to this solution? Unfortunately, yes. For 50 records the solution is fine. But what about the following situation?

Situation B
Suppose a database transaction is being done on the item master table tcibd001. For all items of type ‘Purchased’ field tcibd001.fldx is changed to ‘value_x’. There are one million items of type ‘Purchased’ in table tcibd001. What is the best way to update all these records?

Solution 3 (updating a large range)
The code from solution 2 would result in one million separate transactions, which will very likely lead to performance issues. In many cases, a so-called counted commit is used to reduce the number of transactions. The code is then mostly as follows:
#define COMMIT.SIZE 1000

long ret.val
long commit.counter

db.retry.point()
commit.counter = 0
select tcibd001.*
from   tcibd001 for update
where  tcibd001._index6 = {tckitm.purchase}
order by tcibd001._index6 with retry
selectdo
    ret.val = dal.change.object("tcibd001")
    if ret.val = 0 then
        dal.set.field("tcibd001.fldx", value_x)
        ret.val = dal.save.object("tcibd001")
    endif
    if ret.val <> 0 then
        print.error.to.report(...)
    endif
    if commit.counter < COMMIT.SIZE then
        commit.counter = commit.counter + 1
    else
        commit.transaction()
        commit.counter = 0
    endif
selecteos
    if commit.counter > 0 then
        commit.transaction()
    endif
endselect

In this solution the performance has obviously been improved: commits are only done after 1000 records have been updated. Unfortunately, we are again saving the records for which a DALHOOKERROR occurred, and this is quite difficult to solve.

Important Note: It is not easy to perform proper error handling if you do not use logical database transactions because of performance or any other reasons.

Some possible solutions if record 1234 gives an error:
1. Save the key values of this record. Do an abort, perform the transaction again for records 1001-1233, skip record 1234, and continue.
2. Set commit.counter to 1 and do an abort. Restart from record 1001; a commit is now performed per record. As soon as a record (very likely record 1234) gives an error, an abort is done. After that, set commit.counter back to COMMIT.SIZE and continue with record 1235.

It is clear that each solution requires quite some (technical) code. Moreover, we do not have a good method to abort a transaction and force the system to restart from the last retry point. With the current tool set, it is recommended to use a counted commit only if the transactions will very likely not result in a DALHOOKERROR.


Conclusion
After reading this newsletter you understand that:
1. A DALHOOKERROR is only a signal that something is not correct; it does not mean that the insert/update/delete action on the table or dependent tables has been skipped.
2. Database transactions should be logically organized.
3. It is not easy to implement error handling properly if you do not use logical database transactions. 

To Open a script in debugger on Web-UI

  1. In Worktop, run the object 'Connector Daemon' (ottstpcondaemon). Click Listen and take note of the Listener Port.
  2. For WebUI, change your URL from "http://<webaddress>/Login" to "http://<webaddress>/debugLogin" (the correct capitalization 'debugLogin' is important).
  3. Use the Listener Port and log in with your user name and password.
  4. When an object runs in debug mode, it comes up in the debugger in Worktop.