Recent Posts

1
Prelude Support / Re: Please explain Error Message
« Last post by Tom Pellitieri on June 20, 2018, 12:28:06 pm »
If you're doing this from BASIC, you might only need to select ORDER.HISTORY.  The corresponding Line Keys are in Invoice order in Attribute 200.  Here's the BASIC framework.

Code: [Select]
CMD = "SELECT ORDER.HISTORY WITH ":(criteria):" BY ":(sort options)
EXECUTE CMD
LOOP WHILE READNEXT ID DO
   READ HREC FROM F.ORDER.HISTORY,HID THEN
      IX = DCOUNT(HREC<200>,@VM)
      FOR I = 1 TO IX
         READ LREC FROM F.ORDER.HISTORY.LINE,HREC<200,I> THEN
            (decide if you need the line or not, and print what you need)
         END
      NEXT I
   END
REPEAT

You could use /RW and a work file as well.  Rather than copying ALL of the data to a new file, I suggest that you create a file with just the record keys from both files, with no data.  You can then use dictionary items to pull the data you need from the appropriate file.  Since ORDER.HISTORY.LINE uses a two-part key (ORD.KEY!LINE), you don't have to worry about having the same key in both files. 

Since you might have multiple users running the report at the same time, I would suggest pre-pending @PORT (or PORT in BASIC) to the key when you write it to the file.  Header IDs would be port!orderkey and Line IDs would be port!orderkey!linenumber.
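 
Here's a rough sketch of loading such a key-only work file.  The file name WRK.KEYS, the PORT variable, and the selection criteria are placeholders for whatever you actually use:

Code: [Select]
OPEN 'WRK.KEYS' TO F.WRK ELSE STOP 'Cannot open WRK.KEYS'
*
* Header keys become port!orderkey
EXECUTE 'SELECT ORDER.HISTORY WITH ':(criteria)
LOOP WHILE READNEXT HID DO
   WRITE '' TO F.WRK, PORT:'!':HID
REPEAT
*
* Line keys become port!orderkey!linenumber (the line key is already ORD.KEY!LINE)
EXECUTE 'SELECT ORDER.HISTORY.LINE WITH ':(criteria)
LOOP WHILE READNEXT LID DO
   WRITE '' TO F.WRK, PORT:'!':LID
REPEAT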

I do this for some tracking I need to do for our Sales reports.  I have to get Freight Charges from ORDER.HISTORY, and other Sales information from ORDER.HISTORY.LINE.  I have my own file with the appropriate keys only, and set up the few derived fields I need (e.g., Fiscal Period from ORDER.HISTORY, Ship Quantity/Ext. Price from ORDER.HISTORY.LINE).

For example, you can set up these derived fields in SB+:

PORT - A3:  <0>"G0!1"
ORD.KEY - A15:  <0>"G1!1"
BASE.LN - N3:  <0>"G2!1"
SORT.LN - N3:  (IF(BASE.LN="",0,BASE.LN))

You can also use SB+ F(file,key) derivations to get whatever fields you need from either file.  For example:

SHP.QTY - N6:  (IF(BASE.LN="",0,F("ORDER.HISTORY.LINE",KEY)<41>))
PERIOD - A5:  (F("ORDER.HISTORY",ORD.KEY)<72>)

Another alternative would be to run the report strictly from ORDER.HISTORY.LINE, and get the information from the header only for the first record of each order.  You would have to track when your order number changes to get the header information out.
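 
In case it helps, a minimal sketch of that control-break approach (the criteria and the field handling are placeholders):

Code: [Select]
OPEN 'ORDER.HISTORY' TO F.ORDER.HISTORY ELSE STOP 'Cannot open ORDER.HISTORY'
OPEN 'ORDER.HISTORY.LINE' TO F.ORDER.HISTORY.LINE ELSE STOP 'Cannot open ORDER.HISTORY.LINE'
EXECUTE 'SSELECT ORDER.HISTORY.LINE WITH ':(criteria)
LAST.ORD = ''
LOOP WHILE READNEXT LID DO
   ORD.KEY = FIELD(LID,'!',1)          ;* line keys are ORD.KEY!LINE
   IF ORD.KEY # LAST.ORD THEN
      READ HREC FROM F.ORDER.HISTORY, ORD.KEY ELSE HREC = ''
      * (pull the header-level data here, once per order)
      LAST.ORD = ORD.KEY
   END
   READ LREC FROM F.ORDER.HISTORY.LINE, LID THEN
      * (pull the line-level data here)
   END
REPEAT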

Yet another alternative would be to use /RW from ORDER.HISTORY, and use a Process After Read paragraph to read the lines and load appropriate MVs into an @WORK slot.

Needless to say, there are a lot of ways to attack this.  Not knowing exactly what you need from both files makes it hard to give you better advice.
2
Prelude Support / Re: Please explain Error Message
« Last post by DonQuixote on June 20, 2018, 09:29:01 am »
My original problem is to blend the ORDER.LINE and ORDER.HISTORY.LINE files for a report.
The report is done using a BASIC program, so if the ID is not in one file, I read the other.
Instead of creating a work file as originally planned, I'm now thinking of working around that issue.
I could select the IDs I need from one file and then the other, and merge the two lists using:
merge.list 1 UNION 2

It worked except for one thing: the sort.
I need to sort the blended lists by the order number and by the line number.
Remember the size of this select list is huge.
Any ideas?
3
Prelude Support / Re: Please explain Error Message
« Last post by DonQuixote on June 19, 2018, 07:38:04 am »
Thank you. That's exactly the answer. Thank you.
4
Prelude Support / Re: Please explain Error Message
« Last post by Tom Pellitieri on June 19, 2018, 04:49:44 am »
Check the physical size of the file at the O/S level.  Linux/Unix/AIX has a file size limit of around 2Gb.  If the file gets too big, you need to make it dynamic so the data can be split into multiple parts.

Generally, any file over 1.5Gb should be reviewed to purge old records or made dynamic instead.
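 
For example, you can check the size from the shell (the path below is just a placeholder for wherever the file lives in your account) or from ECL:

Code: [Select]
ls -l /path/to/account/WRK.port
FILE.STAT WRK.port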

Given that you are creating an historic copy of ORDER.HISTORY.LINE, I would recommend you use

CREATE.FILE WRK.port 3701,16 DYNAMIC KEYDATA

If no one is accessing the file, you may reformat your existing file using

memresize WRK.port 3701,16 MEMORY 64000 DYNAMIC KEYDATA

The MEMORY keyword allocates 64Mb to use during the resize.  The default is 8Mb, which is extremely slow.  You could easily bump that to 256000 if you don't have others on the system.

There are two options for determining when to split the file.  KEYONLY uses just the size of the record keys, while KEYDATA uses the size of both the keys and data.  Given that you are using ORDER.HISTORY.LINE, I recommend KEYDATA.

Hope this helps.
5
Prelude Support / Please explain Error Message
« Last post by DonQuixote on June 18, 2018, 03:58:21 pm »
I created a work file:  WRK.port
I read the ORDER.HISTORY.LINE file, add "H*" before the ID, and write the record to this work file.
Many records are written to the work file, and then it aborts with the following errors.

'new block's offset is over the limit
error in write_record for file 'WRK.227C'
over the limit error in U_add_record for file 'WRK.227'
insertion failed error in add_to_group for file 'WRK.227C',
key 'H*001007G2464!5', number = 790
insertion failed error in U_apprend_strtuple for file 'WRK.227C'
key 'H*001007G2464!5', number = 790
Fatal error: WRITE error

What does it mean, and how can I correct it?
I thought it was the size, as these work files are created using modulo 101,1.
So I forced it to be created using modulo 3701,8.
No change... same errors.
6
Prelude Support / Join us for Prelude 2018!
« Last post by precisonline on June 06, 2018, 02:30:22 pm »
October 8-10, 2018, Prelude users will be gathering at the Oak Ridge Hotel & Conference Center in Chaska, MN!
   Three full days of user and expert-led Conference Sessions
   Balanced agenda with sessions for technical and business application users
   Users, vendors and developers providing interaction, collaboration and support
   A great value for an exceptional learning, sharing and networking experience!
   Hotel and meals included in the conference fee!

Rates for Conference Sessions
Day Package:  $845     Early Bird rate: $745 (expires July 1, 2018)
3 days of conference sessions, including registration, conference fees, breakfast buffet, lunch and snacks & beverages throughout the day.

Overnight Package: $1395  Early Bird rate: $1295 (expires July 1, 2018)
3 days of sessions, plus overnight accommodations. This includes registration, conference fees, a standard hotel guest room for 3 nights, breakfast, lunch, dinner and snacks & beverages throughout the day.

You can register for the event here: http://www.novoroisystems.com/nugm-2018
7
Prelude Support / Re: Process Definition Sort mv
« Last post by precisonline on March 28, 2018, 01:42:24 pm »
Any single BY.EXP is going to make the whole selection BY.EXP.  The selection criteria on a /PD.S eventually translate to a TCL SELECT statement, and the same rule applies to a TCL statement.
8
Prelude Support / Process Definition Sort mv
« Last post by DonQuixote on February 01, 2018, 10:53:11 am »
/PD.S     Process Definition Sort
Is there a way to get a mix of BY and BY-EXP in the process?
9
Announcements / EUG 2017
« Last post by precisonline on September 06, 2017, 07:49:13 pm »
Who will be at the Epicor User Group meeting in Minneapolis in October?
10
Your problem shows one of the philosophical differences between an SQL TABLE and a UniData/MultiValue Dictionary.  The TABLE forces the data into a particular structure, while the Dictionary allows you to access the data by multiple methods.

For example, our CUSTOMER file has TAX.JUR.NUM as a Multi-Valued field (Attribute 16).  It also has TAX.JUR.NUM1, TAX.JUR.NUM2 and TAX.JUR.NUM3 defined for the first three values (Attributes 16.1, 16.2, and 16.3).  Which field(s) would you want the SQL SELECT to return?  That's why LIST ALL provides the data by Attribute, with little regard to the Dictionary (it converts Dates, apparently).
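 
For instance, both of these are legitimate ways to ask for the same underlying attribute (the commands are just an illustration):

Code: [Select]
LIST CUSTOMER TAX.JUR.NUM
LIST CUSTOMER TAX.JUR.NUM1 TAX.JUR.NUM2 TAX.JUR.NUM3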

Even if you apply CONVERT.SQL to a file, you may still have issues with the Multi-Valued fields.

*** Warning ***  Gory "Under-the-Hood" details follow

A "roll your own" solution would be to write a program to build the field names for the LIST/SORT command from the SB+ Dictionary Items.  In the file's DICT, these items have "Z" in Attribute 1.  Attribute 2.1 is the corresponding Attribute number (zero for Derived fields), and Attribute 2.2 has the Value number (zero for the entire attribute, -1 for MV fields). 

The SB+ field names are the same as the UniData field names, with a prepended dot.  E.g., SB+ stores its information in .PROD.NUM for the UniData field PROD.NUM (/FD maintains both).
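 
A rough sketch of that "roll your own" program, assuming the data file name is in FNAME and using only the layout described above; the final SORT and any selection criteria are placeholders:

Code: [Select]
OPEN 'DICT', FNAME TO F.DICT ELSE STOP 'Cannot open DICT ':FNAME
EXECUTE 'SELECT DICT ':FNAME                     ;* walk every dictionary item
FIELDS = ''
LOOP WHILE READNEXT DID DO
   READ DREC FROM F.DICT, DID THEN
      IF DREC<1> = 'Z' AND DID[1,1] = '.' THEN   ;* SB+ field definitions only
         IF DREC<2,1> > 0 THEN                   ;* skip Derived fields (Attribute number 0)
            FIELDS = FIELDS:' ':DID[2,LEN(DID)-1] ;* strip the leading dot for the UniData name
         END
      END
   END
REPEAT
EXECUTE 'SORT ':FNAME:FIELDS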

--Tom