Problem Statement: Table CRDDIST is used to define the structure of theoretical or logical crude units. Rows ESTxxx in table CRDDIST configure the possible crude feeds to particular crude units. Any non-blank numerical entry at the intersection of row ESTxxx with a crude unit column indicates that feed xxx is allowed to go to that crude unit. However, how can a user change the feed configuration in table CASE, for example, to disallow a particular feed to a crude unit?
Solution: Let's look at a typical feed configuration in table CRDDIST (only part of the table is shown):

* TABLE CRDDISTL
* Crude Distillation Map
            TEXT                 CD1     CD2     CD3
***
* Crude Type ->                  SWEET   SOUR    SOUR
* Operating Mode ->              Fuels   Fuels   Lube
* Feed
ESTANS      Alaskan N Slope      50.0
ESTNSF      North Sea Forties    25.0
ESTTJL      TiaJuana Light       25.0

The intersection of row ESTANS and column CD1 is 50, which means crude ANS is allowed to feed crude unit CD1. The same logic applies to NSF and TJL feeding CD1. Because the intersection of row ESTANS and column CD2 is blank, no ANS is allowed into crude unit CD2. However, if a user wants to change this configuration in table CASE and not allow ANS to CD1, but only to CD2, the way to do it is to replace the cell by entering the word EMPTY, as shown below.

* TABLE CASE
TABLE CRDDIST
            TEXT                 CD1     CD2
* Crude Type ->                  SWEET   SOUR
* Operating Mode ->              Fuels   Fuels
ESTANS      Alaskan N Slope      EMPTY   50

PIMS will calculate the actual activities into the crude units during optimization. Keywords: CRDDIST CASE EMPTY Feed configuration References: None
Problem Statement: You get an error or warning message highlighting that UBALXYZ is not a row name, where XYZ is a utility tag. This error or warning does not come from Aspen PIMS but rather from the optimizer, after the matrix has already been created. What is causing this and how can you solve it? Note: with the XPRESS optimizer in version 7.1.xx of Aspen PIMS, this situation issues a warning; in previous versions it is an error. With the CPLEX optimizer it is an error in all versions. These are the actual messages that appear with different versions of the optimizers:
XPRESS-MP version 18.00: 98 Error: At line 1772 no match for row UBALTST
XPRESS version 19.00.17: 780 Warning: Ignoring unknown row name 'UBALTST ' found in column 'SCD1ANS ' -- in line 1772 while reading in fixed format the file 'Volume Sample\MPSPROB.mps'
CPLEX version 11.0.0: CPLEX Error 1448: Line 1772: 'UBALTST' is not a row name.
Solution: The message appears because the utility XYZ is declared in table CRDDISTL but has not been declared in table UTILBUY. The solution is to either remove it from table CRDDISTL, or add it to table UTILBUY as a purchase. If this is an internally generated utility (i.e. not purchased), you still need to declare it in table UTILBUY, with an entry of zero under column FIX. Keywords: UTILBUY References: None
Problem Statement: We are using PBONUS in our PIMS model and have found it to be a good way to keep PIMS blending recipes close to the actual ones without adding additional complexity to our models. We have also found that we need to change PBONUS coefficients when switching between summer and winter blending. Now we would like to move to PPIMS and would like to know how to make PBONUS entries period-specific, just like BLNSPEC entries. We would like to avoid keeping separate winter and summer blends.
Solution: Table PBONUS was never designed with period-specific entries in mind. We are considering introducing this in a future release of AO. If you would like to use period-specific blending bonus values, the best way is to create your own user-defined structure in place of PBONUS. In this example we show only one component (HCN) in one grade (URG) and one property (DON), using our PPIMS Volume Sample model. It is necessary to remove the PBONUS structure. Then build additional columns into the model and place E-rows to drive the new columns to exactly the same activity as the blending component columns for each period. Now, at the intersection of these columns with the blending quality rows of each period, we add entries equal to the PBONUS value. Here we have replaced the value in the 3rd period with a higher one. Tables PBONUS and ROWS are shown below, before and after.

Before:

*TABLE PBONUS
*           TEXT          DON     D11
***
LCNURG      LCN in URG   -0.7
HCNURG      HCN in URG    0.3
RFTURG      RFT in URG   -0.8
ALKURG      ALK in URG    0.6
*
LCNUPR      LCN in UPR   -0.4
HCNUPR      HCN in UPR    0.7
RFTUPR      RFT in UPR   -0.5    1.0
ALKUPR      ALK in UPR    0.3   -0.6
***

* TABLE ROWS
* User Defined Rows
*           TEXT                    FIX  FREE
***
VBALRFL     Refinery Fuel           1
VBALTGT     SRU Tail Gas            1
VBALLOS     Loss/Gain               1
VBALLC1     LCN C5-350 (Hi Conv)    1
VBALLC2     LCN C5-350 (Lo Conv)    1
VBALR90     90 RONC Reformate       1.00000
VBALR94     94 RONC Reformate       1.00000
VBALR98     98 RONC Reformate       1.00000
VBALR02     102 RONC Reformate      1.00000
***

After:

*TABLE PBONUS
*           TEXT          DON     D11
***
LCNURG      LCN in URG   -0.7
*HCNURG     HCN in URG    0.3
RFTURG      RFT in URG   -0.8
ALKURG      ALK in URG    0.6
*
LCNUPR      LCN in UPR   -0.4
HCNUPR      HCN in UPR    0.7
RFTUPR      RFT in UPR   -0.5    1.0
ALKUPR      ALK in UPR    0.3   -0.6
***

* TABLE ROWS
* User Defined Rows
*           TEXT                    FIX  BHCNURG1  BHCNURG2  BHCNURG3  PHCNURG1  PHCNURG2  PHCNURG3  FREE
***
VBALRFL     Refinery Fuel           1
VBALTGT     SRU Tail Gas            1
VBALLOS     Loss/Gain               1
VBALLC1     LCN C5-350 (Hi Conv)    1
VBALLC2     LCN C5-350 (Lo Conv)    1
VBALR90     90 RONC Reformate       1.00000
VBALR94     94 RONC Reformate       1.00000
VBALR98     98 RONC Reformate       1.00000
VBALR02     102 RONC Reformate      1.00000
EPBDNHU1    Driver row              1    1
EPBDNHU2    Driver row             -1    1
EPBDNHU3    Driver row             -1    1
NDONURG1    Minimum Octane URG     -0.3
NDONURG2    Minimum Octane URG     -0.3
NDONURG3    Minimum Octane URG     -0.5
***
Keywords: PBONUS PPIMS Winter grade Summer grade Period specific References: None
Problem Statement: How do I prevent Global Optimization from failing when using Convex Relaxation?
Solution: Global Optimization using Convex Relaxation is only suitable for models whose only non-linearities are bi-linear or tri-linear terms. Any external non-linearities such as ABML, RFG gasoline, Table CURVE/NONLIN, etc., make a model unsuitable for use with Convex Relaxation. If you try to use this feature with a model that is not a valid candidate, you will see a failure message, and a log file with more information, called XMPS_new###.lst, is written to the model directory. Keywords: None References: None
Problem Statement: When using CRDTANKS / CRDALLOC to implement crude tank logic, PIMS automatically creates an alias tag for each crude stream involved in the crude de-pooling structure; for example, for Alaskan N Slope (ANS) it creates an1. A description of the new stream is also created automatically; by default it is "Crude Stream xx1". How can I change this description so the crude streams in the reports are easier to recognize?
Solution: You can add an EST row in Table CRDDISTL with the crude alias tag and input a description in the TEXT column. This replaces the automatically generated description in both the Crude Unit and De-Pooling submodel reports. Please note that you do not have to provide mapping input, as it will be generated automatically if left blank:

* TABLE CRDDISTL
* Crude Distillation Map
            TEXT               CD1        CD2   CD3
ESTan1      Alaskan N Slope
ESTANS      Alaskan N Slope    50.00000

Keywords: De-Pooling description References: None
Problem Statement: When setting up a pooling structure to recurse some properties of the pool, -999 entries are usually used as placeholders in the RprpPol rows. This works as long as the yield coefficient of the components is -1, which is typically the case in a feed pool. However, when the yield coefficient of the pool components is not -1 (which is typical in product pools), -999 cannot be used, because the coefficient in the RprpPol row is always Yield Coefficient * Property. Since the yield coefficient is not -1, using -999 (which assumes a -1 yield coefficient) will produce erroneous property calculations for the pools. How can you model this situation correctly?
Solution: There are two ways of handling this situation:
1. Use explicit Yield Coefficient * Property values in the recursion rows.
2. Create additional structure that allows the use of -999 by creating -1 yield coefficients.
Both methods are highlighted below with an example of two product pools, PR1 and PR2. There are two components for each pool, coming from feeds FD1 and FD2.
Explicit yield coefficients: The recursion coefficients are Yield Coefficient * Property. For example, for the contribution of FD1 to pool PR1: Yield Coefficient * Property = -0.7 * 0.75 = -0.5250. In this case, the properties of each pool member have to be explicitly defined in this unit.
Creating driver rows to allow -1 yield coefficients: In this case, for every member of every pool, you need to define a driver row that captures the amount of product and drives it into a collector column. Rows EFD1PR1, EFD2PR1, EFD1PR2 and EFD2PR2 have been used here. The activity of the collector columns is now the desired quantity of pool components, so the yield coefficients are -1. With this structure, -999 can now be used in the recursion rows. Although it is more extensive, this structure is desirable if some properties come from PCALC, because in that case you do not know the values of those properties and therefore cannot hardcode them. Keywords: None References: None
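The coefficient arithmetic above can be sketched as follows (a minimal illustration of the rule, using the example's -0.7 yield and 0.75 property value; the function name is ours, not a PIMS API):

```python
def rprp_coefficient(yield_coeff, property_value):
    """RprpPol recursion-row coefficient = Yield Coefficient * Property.

    Only when the yield coefficient is -1 does the coefficient reduce to
    the property value itself, which is what the -999 placeholder assumes.
    """
    return yield_coeff * property_value

# Contribution of FD1 to pool PR1 from the example above:
print(round(rprp_coefficient(-0.7, 0.75), 4))  # -0.525
```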
Problem Statement: In table CRDCUTS, swing cuts are defined as type 4 cuts. By default, they have two destinations, up and down to combine with the adjacent streams. Can swing cuts also be explicitly produced in the crude units and consumed downstream?
Solution: Swing cuts can be consumed downstream. The only thing that you need to do is provide a destination, which could be in a submodel, in blending, or in sales. Those destinations are defined by adding the appropriate entries to the corresponding tables, as for any other stream. For example, if swing cut HN1 can go to unit SNHT, you need to add VBALHN1 (or WBALHN1) to the unit and consume it as any other feed. In the case that you have multiple logical crude units, the swing cuts must be segregated, i.e. you must enter a different number under each crude unit, as shown below. In this case, each segregated swing cut (e.g. HN1, HN2, HN3) can also be used downstream. Keywords: Swing Cut Type 4 Crude Units References: None
Problem Statement: My network is very slow, especially when I am on the road. What is the best way to configure the license?
Solution: You can use the Aspen 'SLM Commute License' tool to check out the license so that it acts as a standalone license. This eliminates the need for a network connection when running Aspen products. From Start | All Programs | AspenTech | Common Utilities | SLM Commute, select the license(s) you need to check out. Keep in mind that you need to return them when you no longer need them; the tokens remain checked out until you return them. Keywords: SLM Commute References: None
Problem Statement: How do I detect if RPENALTY is being used from the ExecutionLog file?
Solution: Compare the following two partial logs from the ExecutionLog.lst file. Before PIMS starts solving the problem, it writes a 'Problem Statistics' section to the ExecutionLog file. When no RPENALTY is used, the problem should be close to square, meaning the number of 'Rows' and the number of 'Structural Columns' should be close to each other. In the first example, that is 708 vs. 673.

Problem Statistics:
   708 Rows
   673 Structural Columns
     0 Integer Variables
     0 Special Ordered Sets
     1 RHS Columns
  4978 Non-Zero Matrix Elements
Density = 1.04 Percent
CASE 1: Base Case

In the second example, 'Rows' = 708 but 'Structural Columns' = 1347. This indicates that RPENALTY is being used in this case.

Problem Statistics:
   708 Rows
  1347 Structural Columns
     0 Integer Variables
     0 Special Ordered Sets
     1 RHS Columns
  6326 Non-Zero Matrix Elements
Density = 0.66 Percent
CASE 1: Base Case

To learn more about how to use RPENALTY, please refer to Solution article 127439. Keywords: ExecutionLog RPENALTY References: None
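This check can be automated with a short script (a sketch; the 1.5 ratio threshold is our illustrative choice, not a PIMS rule):

```python
import re

def rpenalty_suspected(log_text, ratio_threshold=1.5):
    """Compare Rows vs Structural Columns from the Problem Statistics block.

    A roughly square problem (columns close to rows) suggests no RPENALTY;
    far more columns than rows suggests RPENALTY vectors were added.
    """
    rows = int(re.search(r"(\d+)\s+Rows", log_text).group(1))
    cols = int(re.search(r"(\d+)\s+Structural Columns", log_text).group(1))
    return cols > ratio_threshold * rows

log = "Problem Statistics: 708 Rows 1347 Structural Columns 0 Integer Variables"
print(rpenalty_suspected(log))  # True for the second example above
```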
Problem Statement: Where can I find a list of properties that can be defined in Aspen Assay Manager? Can the units for these properties be different?
Solution: The user can find the list of available properties that can be used in Assay Manager in the Libraries branch of the model tree. Once you select a property, the units of measure can be chosen from various dropdowns in the system. The user can have different units of measure and can mix units. For example, the user can have pour point in degrees Celsius and freeze point in degrees Fahrenheit. Keywords: Properties list Units of measure References: None
Problem Statement: ABML (Aspen Blend Model Library) provides a portfolio of pre-defined linear and non-linear blending prediction methods and correlations. The ABML portfolio can be expanded by adding user-defined correlations using the following ABML functions: GBLNVAL, GNDXR, GPRPCALC, and UBML (User Blending Model Library). The use of the GPRPCALC function is described here. This function allows you to incorporate a user-defined second-order correlation, created with the Property Calculator Formula, to impose a blending specification on a property defined for the final blend.
Solution: In this example, we will model the properties G15 and G59, defined as follows:
G15 = T50 - T10
G59 = T90 - T50
The following steps are required:
1. Add the formulas to the Property Calculation Formula facility. In the Aspen PIMS Model Tree, go to the Property Calculation Formula facility and add the new calculations as indicated in the figure below. Three input properties are required: T10 (D86 T10), T50 (D86 T50) and T90 (D86 T90). These properties are provided in the model in table BLNPROP.
2. Access the new properties in table ABML through correlation GPRPCALC. In table ABML, introduce correlation GPRPCALC. No input is required; the outputs are the tags defined in the Property Calculation Formula (G15 and G59 in this case). This ensures that the G15 and G59 properties can now be used to impose blending specifications. Up to 20 output properties can be identified in the OUTPUT section of function GPRPCALC. Note: the property names are G15 and G59; the suffix 10 is added to create unique rows (i.e. G1510).
3. Set up a blending specification for these properties. Impose a specification in table BLNSPEC as required. In this example, a MAX limit of 150 on G15 and G59 is imposed on all gasolines.
4. Review the results for the final blend. The property limit and the final value of the blend can be seen in the Specification Blend section of the Full Solution or Down/Across reports.
Keywords: ABML GPRPCALC Property Calculation Formula References: None
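The two formulas and the specification check can be sketched as follows (the D86 point values are hypothetical; only the G15/G59 definitions and the MAX 150 limit come from the example above):

```python
def distillation_gaps(t10, t50, t90):
    """User-defined properties from D86 points: G15 = T50 - T10, G59 = T90 - T50."""
    return t50 - t10, t90 - t50

def meets_spec(t10, t50, t90, max_gap=150.0):
    """Check the MAX 150 blending specification used in the example."""
    g15, g59 = distillation_gaps(t10, t50, t90)
    return g15 <= max_gap and g59 <= max_gap

# Hypothetical D86 points for a gasoline blend:
print(distillation_gaps(130.0, 220.0, 340.0))  # (90.0, 120.0)
print(meets_spec(130.0, 220.0, 340.0))         # True
```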
Problem Statement: How to use 'USER TOOLS MENU' to copy Aspen PIMS files from one model directory to another?
Solution: First create a batch file using Notepad. In this example, the file is called xcopy.bat. There is one line in xcopy.bat, as follows:
XCOPY /S /Y * "d:\volume sample"
Where:
/S copies directories and subdirectories except empty ones.
/Y suppresses prompting to confirm whether to overwrite an existing destination file.
"d:\volume sample" is the destination directory; the user can change it as desired (the quotes are needed because the path contains a space).
To see other XCOPY options, simply type 'HELP XCOPY' at a DOS command prompt. Second, in Aspen PIMS:
1. Select TOOLS | Edit User Tools Menu.
2. In the Tools dialog box, click on the 'Add...' button.
3. Browse to the file xcopy.bat on your machine and click 'Open'. PIMS will generate the command line and menu text. You can modify the menu text as desired. Note that in the example, this text is 'Click OK to select and run XCOPY or CANCEL to stop'. This text will be displayed at the end of the PIMS run.
4. Select 'Run After Model Execution' and 'Ask for Arguments' in the lower left corner. If you select the 'Ask for Arguments' option, PIMS will prompt the user to continue at the end of each PIMS run. This provides a means for the user to decline the XCOPY run. If you would like XCOPY to run after every PIMS run, do not select this option.
5. In the 'Arguments' box, add "d:\utility\xcopy.bat". This can be changed as desired, but should match the location chosen in step 3.
When the Aspen PIMS model is run, the user will see a prompt after Aspen PIMS has completed. If the user wishes to stop here, clicking CANCEL returns Aspen PIMS to the end of the execution log. To proceed with XCOPY, click OK and Aspen PIMS will initiate the copy. The user will see a DOS window open, and all the files in the current PIMS folder and its subdirectories will be copied to the destination directory (in this example, D:\Volume Sample).
If you wish to turn off the automatic running of XCOPY, go to TOOLS | Edit User Tools Menu, select the User Tool defined for this function, and de-select the 'Run after Execution' option. Keywords: User Tools Menu Copy files References: None
Problem Statement: How are 999 and -999 placeholders handled in the data validation stage in PIMS AO and PIMS DR?
Solution: In PIMS DR, the placeholders are handled in the following manner during the validation stage:
1. For 999 placeholders, the values are updated from the PGUESS table.
2. For -999 placeholders, the values are updated from the BLNPROP, ASSAY, BUY, and PCALC tables.
3. If PIMS cannot get a value for a placeholder, it shows the 999 or -999 in the validation report.
In PIMS AO, the placeholders are handled in the following manner during the validation stage:
1. If values are missing for recursed qualities or properties in the BLNPROP, ASSAY, BUY, or PCALC tables, there will be an explicit error message for the placeholders.
2. The 999 and -999 placeholder values are changed only during the generation step, not the validation step, so the data validation report will show 999 and -999 in the model.
Keywords: Data validation Placeholders References: None
Problem Statement: How to disable a property (quality) variable from the PIMS model
Solution: Sometimes users would like to disable a specific quality variable in the PIMS model because they are no longer using or tracking it. Leaving the structure in the model may cause PIMS to take more recursion passes to solve. To disable a quality, the rows and columns associated with it would have to be commented out manually. This method is cumbersome and time consuming, and if the quality needs to be enabled again for a different situation, undoing all the changes is also complicated. The easy way to disable the quality, while keeping the option to enable it again, is to set the ATOL for this quality to a higher value in Table SCALE. This large tolerance helps the property converge in a few recursion passes and almost eliminates its influence on the model. Keywords: Remove error vectors of a quality Strip quality error vector Remove quality variable References: None
Problem Statement: If I have created a VPRICE structure for a blended product and SPG values are missing for one of the components, will Aspen PIMS convert the API property values to SPG to obtain the weight-to-volume conversion?
Solution: PIMS-AO will automatically consider the API values and convert them to SPG in order to get the necessary VPRICE structure working. However, PIMS-DR needs to be provided with SPG values in the BLNPROP tables for all the components in order to get the right objective function when using the VPRICE structure. If this is not done, PIMS-DR will also issue SPG-related warnings for the blended product in the execution log. Keywords: VPRICE SPG API References: None
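The API-to-SPG conversion follows the standard API gravity relation; a minimal sketch (the function name is ours, and the example gravities are hypothetical):

```python
def api_to_spg(api):
    """Standard API gravity relation: SPG (60/60 F) = 141.5 / (API + 131.5)."""
    return 141.5 / (api + 131.5)

print(round(api_to_spg(10.0), 4))  # 1.0  (10 API corresponds to the density of water)
print(round(api_to_spg(31.1), 4))  # 0.8702
```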
Problem Statement: How do I link one of the provided PowerPivot templates to my SQL database?
Solution: Templates for PIMS Analytics are provided with a typical PIMS installation. These templates are located in a folder called Analytics in the same location as the PIMS sample models. They can be used with your own SQL database to provide an example of the reporting that can be done using the automatically generated PIMS OLAP cube in your SQL output database. The steps below assume that you have already run PIMS with output going to a SQL database and that you have Microsoft PowerPivot installed.
1. Open the desired template in Excel. For V8.x, these are located at C:\Users\Public\Documents\AspenTech\Aspen PIMS\Analytics.
2. Open the PowerPivot window from the Excel ribbon.
3. Link to the desired SQL database: click on the Design tab, click on the Existing Connections button, select the PowerPivot Data Connection, and click EDIT.
4. Select the appropriate database name in the dropdown box and click SAVE.
5. Click REFRESH. When completed, click Close, and Close again. PowerPivot should now show the current data.
Keywords: None References: None
Problem Statement: Can I use a base-delta structure for yields that vary depending on the percentage of atmospheric residue in the FCC feed?
Solution: The tables where you would add the structure are: the feed pool submodel, the FCC submodel, SCALE, PGUESS, and BLNPROP. I took these tables from the Volume Sample Model. Note that the 'important' changes are marked in the yellow cells. In my case, the name of the Atmospheric Residue stream is 'ATB' and the Atmospheric Residue % property is 'atb'. In order to 'convert' the volume % of AR into a property, you would add a recursion row in the feed pool submodel. You recurse property atb in the CFP stream (cat feed pool), with the row named RatbCFP. As you can see in the table, I added zeros under all the feed streams except for ATB, which is 100 (if you want fraction instead of percentage, this number would be 1). Then, in FCC you would add this property as your delta vector along with an E-row. Notice that I am not adding any other property (SUL, AFC, etc.) as delta vectors. In table SCALE I add the MIN and MAX for this property, which are 0 to 100%. In table PGUESS I add an initial estimate of property atb in CFP. Finally, in table BLNPROP I add this property for ATB, which does not change and is 100. Keywords: References: None
Problem Statement: How to use the Submodel Calculator (SMC) in Aspen PIMS global models
Solution: The Submodel Calculator feature is not enabled in Aspen MPIMS or XPIMS because the main model that is open is the global model, which does not have any submodels. The submodels to be evaluated with the Submodel Calculator belong to the local models. Open one of the local models of the XPIMS structure and run the Submodel Calculator from there for units of that local model. For example, if you open model A, you can open submodel SXYZ from model A, but you cannot open submodel SXYZ from model B. You can choose 'Resolve' to fill in the 999 placeholders in the initial data, or 'Import Solution' to fill in the 999 placeholders and the solution activities. Keywords: Submodel Calculator SMC smc MPIMS XPIMS References: None
Problem Statement: This solution demonstrates how to build a discrete yield submodel in PIMS.
Solution: Use the submodel tables to construct and link process submodels that represent different process units in a plant. Subject only to a minimal set of restrictions, the submodels can be constructed to be as simple or as complex as necessary. Submodels typically include material balances and capacity and utility consumption, but might also include energy and component balances, as well as implications of a variety of component and operating characteristics. A standard set of submodels for common refinery operations is provided in the PIMS library. You may modify and incorporate these submodels into your model as necessary. In this particular example, a simple submodel with discrete yield, a diesel hydrotreater (SDHT), is built for demonstration purposes only. GSO is the feed to SDHT; DSL and H2S are the outputs.
1. Define the submodel in table SUBMODS with a 4-character tag starting with S, i.e. SXXX, where XXX can be any alphabetical characters. In this example, we define SDHT in table SUBMODS.
2. Create table SDHT and provide input on the material balance (VBAL rows) and capacity (CCAP row). The example below shows that GSO enters SDHT, whereas DSL and H2S are produced in a ratio of 0.80:0.20. Note the sign convention of the material balance rows: sales or consumption takes a positive sign, whereas purchase or production takes a negative sign. The value entered for CCAPDHT is the ratio of capacity consumed in DHT per unit of feed. In this example, 1 unit of DHT capacity is consumed per 1 unit of feed.
3. Define the capacity of SDHT in table CAPS.
4. The submodel is now built successfully. If any process limits are imposed, define them in table PROCLIM.
Keywords: Submodel, SUBMODS, CAPS, discrete yield References: None
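The sign convention in step 2 can be sanity-checked with a short script (an illustration of the convention, not part of PIMS; the row names match the SDHT example):

```python
def material_balance(coefficients):
    """Sum the VBAL coefficients for one submodel column.

    Convention: consumption/sales positive, production/purchases negative,
    so a mass-balanced column sums to zero.
    """
    return sum(coefficients.values())

# SDHT example: 1 unit GSO in, 0.80 DSL and 0.20 H2S out.
sdht = {"VBALGSO": 1.0, "VBALDSL": -0.80, "VBALH2S": -0.20}
print(abs(material_balance(sdht)) < 1e-9)  # True: the column is balanced
```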
Problem Statement: How can I model quality dependent unit capacities?
Solution: Some situations may require unit capacities to depend on the quality of the feedstock. In this case, the model is modified so that PIMS generates a coefficient in the capacity row that reflects the quality of the unit's feed. Consider the following example. Assume we need to change the capacity of the PRO unit with varying PRP of the FD1 stream.
1. Table SPRO (partial table shown). This table says that for every 1-unit change of quality (PRP) above 40, PIMS will reduce the capacity of the PRO submodel by 1 unit. The greater-than row (G-row) is used in this example to accommodate feed qualities below 40 units; depending on the situation, an L-row could also be used. Column FX1 is used to capture the quality of the feed upon which the capacity of the unit depends. The quality gets into the -999 placeholder through the PCALC table. Columns FX1 and FX2 must be fixed using the keyword FIX.
2. PCALC table (partial table shown). The capacity profile is a constant unit consumption of capacity up to the point where the feed quality reaches 40.0. If the feed quality exceeds 40.0, there is a capacity penalty. The effect is to reduce the throughput of the unit for lower-quality feeds. A similar structure can be used to increase the throughput of the unit based on feed quality.
Keywords: Process Unit Capacity Quality dependent Feed Quality References: None
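The intended capacity profile can be sketched as follows (a simplified illustration of the example's intent, not PIMS's internal matrix arithmetic; the base coefficient and threshold names are our assumptions):

```python
def capacity_coefficient(prp, base=1.0, threshold=40.0):
    """Illustrative CCAP coefficient: constant unit consumption of capacity
    up to the quality threshold, plus one extra unit of capacity consumed
    per quality unit above it (the penalty described in the example)."""
    return base + max(0.0, prp - threshold)

print(capacity_coefficient(38.0))  # 1.0  (below threshold: no penalty)
print(capacity_coefficient(42.5))  # 3.5  (2.5 quality units above 40)
```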
Problem Statement: How do I control total transfers in an M-PIMS model?
Solution: Inter-plant transfers in an M-PIMS model can be controlled by creating an L-row or a CCAP row in the global Table ROWS. If a CCAP row is used, the limits and activities corresponding to this row will be reported in the Full Solution report. In this example the transfers from plant C to plant A are controlled:

* TABLE TRANSFER
* Inter-Plant Transfers
         TEXT              MIN   MAX   COST   !Product  !Source  !Destination  !Mode
HYHCAP   Hi Purity H2      0     5     0.05   HYH       C        A             P
PGSCAP   Fuel Gas          0           0.05   PGS       C        A             P
PYXCAP   PyGas             0     5     0.05   PYX       C        A             P
LFOCAR   Liquid Fuel Oil   0     5     0.2    LFO       C        A             R
***

The first step is to create a CCAP constraint restricting the transfers in Table ROWS:

* TABLE ROWS
           TEXT   THYHCAP  TPGSCAP  TPYXCAP  TLFOCAR
CCAPTTTA          1        1        1        1

Then add a C-row in the global Table CAPS. The key point is that at the global level only C-rows with 5 characters are allowed, and the fifth character stands for the plant designation. Since a C-row in the global model must be modeled this way, a C-row must also be added in Table CAPS of a local model; in this example Model A is selected. The structure added to the global Table CAPS is given below:

*TABLE CAPS
*        TEXT               MIN   MAX   REPORT
***
CTTTA    Transfer Control         6

Table CAPS for the local model A:

* TABLE CAPS
* Process Capacities
CTTT     6.0000

The above structure will also allow this capacity information to be reported in the 'Model A' section of the Full Solution report. Detailed results listing the activities of each transfer can be viewed in the Matrix Analyzer. The matrix solution files are the .Xlp file in PIMS-AO and the MPSBCD file in PIMS-DR. Keywords: Control transfer Interplant transfer MPIMS M-PIMS Report transfer References: None
Problem Statement: When physical crude units are specified in the ATMTWR row of Table CRDDISTL, Aspen PIMS automatically generates a capacity for each physical unit. These are called CCAPAT1, CCAPAT2, etc. If I have more than 10 physical crude units, how are those after #9 named?
Solution: Aspen PIMS allows up to 50 physical crude units. Below is a chart showing the physical crude unit number and the corresponding capacity name. After unit 9, the 7th character simply continues through the ASCII sequence following '9'.

 1 CCAPAT1    2 CCAPAT2    3 CCAPAT3    4 CCAPAT4    5 CCAPAT5
 6 CCAPAT6    7 CCAPAT7    8 CCAPAT8    9 CCAPAT9   10 CCAPAT:
11 CCAPAT;   12 CCAPAT<   13 CCAPAT=   14 CCAPAT>   15 CCAPAT?
16 CCAPAT@   17 CCAPATA   18 CCAPATB   19 CCAPATC   20 CCAPATD
21 CCAPATE   22 CCAPATF   23 CCAPATG   24 CCAPATH   25 CCAPATI
26 CCAPATJ   27 CCAPATK   28 CCAPATL   29 CCAPATM   30 CCAPATN
31 CCAPATO   32 CCAPATP   33 CCAPATQ   34 CCAPATR   35 CCAPATS
36 CCAPATT   37 CCAPATU   38 CCAPATV   39 CCAPATW   40 CCAPATX
41 CCAPATY   42 CCAPATZ   43 CCAPAT[   44 CCAPAT\   45 CCAPAT]
46 CCAPAT^   47 CCAPAT_   48 CCAPAT`   49 CCAPATa   50 CCAPATb

Keywords: crude unit capacity References: None
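The naming pattern in the chart can be reproduced programmatically, since the 7th character is the ASCII character n positions after '0' (a sketch, not an AspenTech utility):

```python
def crude_capacity_name(n):
    """Capacity row name for physical crude unit n (1-50): 'CCAPAT' plus the
    ASCII character n positions after '0', which walks 1-9, then ':;<=>?@',
    then A-Z, then the characters between 'Z' and 'a', then 'a' and 'b'."""
    if not 1 <= n <= 50:
        raise ValueError("Aspen PIMS allows up to 50 physical crude units")
    return "CCAPAT" + chr(ord("0") + n)

print(crude_capacity_name(9))   # CCAPAT9
print(crude_capacity_name(10))  # CCAPAT:
print(crude_capacity_name(17))  # CCAPATA
print(crude_capacity_name(50))  # CCAPATb
```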
Problem Statement: What are the matrix row and column names for an MPIMS model?
Solution: The matrix column and row names for a standard PIMS model are 7 characters in length. For an MPIMS matrix, the names have 8 characters; the 8th character is typically the local plant ID. The global model has some additional tables that are not part of a standard PIMS model. These include MODELS, MODES, MARKETS, TRANSFER, DEPOTS, SUPPLY, GSUPPLY, and DEMAND. MPIMS generates the matrix structure based on the input of those tables in conjunction with the input of the local models. Here is a list of the most common row and column names in MPIMS.
Row Names: Local plant row names are 8 characters in length. The first 7 characters are the same as in a standard model and the 8th character is the local plant ID. Some additional matrix structures in an MPIMS model:
ASUPxxx    Global Purchases xxx from GSUPPLY by GROUP
ASELxxx    Global Sales xxx from DEMAND by Market Group
ABUYxxxA   Purchases xxx limited by GROUP in local plant A
EPURCxxx   Sum of Material or Utility Purchases
ESELxxxK   Material or Utility Sales to market K
Note: For the row names above, xxx represents a material tag.
Column Names: Local plant column names are 8 characters in length; the 8th character is the local plant ID.
PURCxxx    Global purchase from GSUPPLY
PURCxxxA   Global purchase from SUPPLY for plant A
SELxxxK    Global Material or Utility Sale into market K
SELxxxKA   Material or Utility Sale into market K from plant A
TxxxABM    Transfer of Material from source plant A to destination plant B via mode M
Note:
o xxx - Material or Utility tag
o A - local plant, source plant, or depot
o B - destination plant or depot
o M - transfer mode
o K - market
Keywords: MPIMS Column Row Matrix local plant local References: None
Problem Statement: Assuming we are using Property Calculation Formulas to calculate index RVI from property RVP, can we see RVI for a stream in a given submodel using reporting rows (Prows)?
Solution: It depends. Let's say a submodel has input streams A and B, whose RVP values are known (and whose RVI values are supposed to be calculated from the property formula). The output stream is C, and you want to report its RVI using a report row. In this case, you will have a problem, because PIMS does not have any RVI information for streams A and B; you did not tell PIMS to calculate it. However, if you recurse RVI for C, then PIMS will calculate RVI for streams A and B based on their RVP, using the Property Calculation Formulas (or Index table) to calculate their RVI values. Now you can create a Prow to report the RVI value for stream C. The same logic applies if you define a spec on RVI for blending: you can then report the related RVP of its input streams. Keywords: Prow Reporting Rows PCF (Property Calculation Formulas) References: None
Problem Statement: Upon investigation of the Material Sales section of the Full Solution Report, it turned out that some sold products have an incorrect aggregate value. Please refer to the example below. If the aggregate is calculated manually, the result is different:
346 Units/DAY * 73.180 $/Unit = 25,320 $/DAY
Where does this difference come from?
Solution: As you can see, HSF pricing is on a volume basis. That means it is defined using VPRICE in T.SELL. Therefore the Units/DAY are on a weight basis and the $/Unit are on a volume basis, and they cannot be multiplied as in the example above due to inconsistent units. To calculate the aggregate $/DAY value, PIMS recalculates the HSF price to a weight basis using the following data:
SPG of HSF = 0.9941
VTW conversion factor = 0.1587
Weight-basis price of HSF = 73.18 / (0.9941 * 0.1587) = 463.86 $/Unit
The calculation with the correct price:
346 Units/DAY * 463.86 $/Unit = 160,496 $/DAY
This is consistent with the Full Solution report. The small discrepancy comes from rounding of the HTML values. Keywords: material sales $/DAY aggregate References: None
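The recalculation PIMS performs can be sketched as follows (figures from the HSF example; the function name is ours):

```python
def volume_to_weight_price(vol_price, spg, vtw):
    """Recalculate a volume-basis price to weight basis, as PIMS does for the
    aggregate $/DAY figure: weight price = volume price / (SPG * VTW factor)."""
    return vol_price / (spg * vtw)

# Figures from the HSF example above:
wt_price = round(volume_to_weight_price(73.18, 0.9941, 0.1587), 2)
print(wt_price)               # 463.86
print(round(346 * wt_price))  # 160496, matching the report
```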
Problem Statement: A user is trying to commute an Aspen PIMS license, but it is asking for an Administrator user name and password. Does the user need an Administrator right to commute the Aspen License?
Solution: SLM Commute V7.3 needs Administrator rights to run. In SLM V8, we removed the administrator privilege requirement. You can use the Aspen 'SLM License Configuration Wizard' and click the 'Config' button to check which SLM version you have. SLM version 2012.0.1.304 or above is SLM V8.0. Keywords: License, Commute, wizard, config, Administrator, Administrator right References: None
Problem Statement: The California Air Board updated the Amended CARB3 equations at the end of September, 2008. How can these changes be incorporated into PIMS, ORION, and MBO?
Solution: ABML version 1.7.43 contains the correction to the amended CARB3 spreadsheet that CARB posted toward the end of September 2008. The change will affect winter gasoline grades that have oxygenate in them. The main impact is in the potency-weighted toxics. The ABML.dll file attached to this solution incorporates these changes. To install this updated ABML.dll file, first go to the directory containing PIMSWIN.exe (or ORION.exe / MBO.exe as appropriate) and save a copy of the existing ABML.dll file as a backup. Then download the attached ABML.dll file and save it in that directory. You will need to do this for each of the products that you are using (PIMS, ORION, MBO). This version of the ABML file will also be included in our future installations. NOTE: Gasoline types 12 and 13 exist in PIMS version 17.1.12 and higher. Gasoline types 14 and 15 exist only in PIMS version V7.1 (expected to be released in January, 2009). Keywords: ABML CARB CARB3 References: None
Problem Statement: When users select SQL Server as the output database and execute Aspen PIMS, PIMS generates the report in the PIMS SQL database. When Platinum is then opened to generate the flowsheet, why does Platinum not generate a flowsheet from the active model?
Solution: When this happens, check whether you have different models saved in the same database. You can check from the table PrModel by right-clicking 'PrModel' and then selecting 'Select Top 1000 Rows'. Check to see how many models you have written into this database. If you have more than one model there, you may have a problem: PIMS SQL only supports one model in one database. Keywords: Platinum SQL SQL server flowsheet active model database connection References: None
Problem Statement: How can I model a stream that can be either sent to the crude unit or sent directly downstream to blending -- for example a purchased kerosene stream?
Solution: Attached is an example demonstrating this. In this model stream zzz can go directly to SCD1 or to blending into product ZZZ. The steps include: 1) Set up stream zzz in Table ASSAYS. In this example the yield is specified as 100% kerosene. Populate the properties as completely as possible. 2) Add row ESTzzz to Table CRDDISTL to allow this stream to be processed in the appropriate crude unit(s). 3) Set up zzz as a blend component into a new blend called ZZZ. This involves the following changes: a) Add new blend ZZZ to Table BLENDS. b) Add blend ZZZ and component zzz to Table BLNMIX. 4) Add component zzz to Table BUY. 5) Add blend ZZZ to Table SELL. In the attached example, when you run case 2, you will see that PIMS runs zzz to both dispositions. Keywords: crude recycle References: None
Problem Statement: When running my model with Multi-Start, I get the following error: Html Report Failed! There is an inconsistency in the result database! What can I do to solve it?
Solution: Do a Model Cleanup. On the Execution dialog box, select the Access Database Maintenance option of Only Unique Cases, instead of Purge Existing. Also, turn off any unneeded database options under MODEL SETTINGS | GENERAL | Output Database tab, click on OPTIONS. Keywords: multistart stacked cases References: None
Problem Statement: When I load the PIMS model, I am unable to see any tables in the model tree. How can I get the excel files on the model tree?
Solution: After installing Aspen PIMS it might happen that the user loads a model and is unable to see any Excel files attached to the model tree. This might be either a PIMS issue or an Excel issue. When this happens the user can try the following two approaches: 1. Try to follow the steps given in Solution ID 136561. 2. Excel needs an XML upgrade. In order to do this, follow the steps given in the following article on the Microsoft support web page: https://support.microsoft.com/kb/973688 After following these steps, restart PIMS and load the model again. The model tree should have the Excel files attached. Keywords: XML upgrade Excel tables References: None
Problem Statement: What rules are to be followed for pool segregation of Crude cut types?
Solution: The following rules are to be followed for the pool segregation of different types of crude cuts 1. All Type-2 cuts must be segregated 2. All Type-4 cuts must be segregated 3. Segregated pools can be combined through user defined sub-model tables 4. Type-3 cuts are not required to be segregated, but segregation is recommended PIMS will give an error message if the mandatory rules for pool segregation of the crude cuts are not implemented Keywords: Table CRDCUTS Segregation rules Type 4 cut References: None
Problem Statement: How do I identify very large or very small coefficients in the matrix?
Solution: It is important to eliminate very large and very small coefficients in the matrix because these can cause convergence problems. In general we recommend that coefficient magnitudes are not above 1e+7 or below 1e-7. There is a tool available in the Matrix Analyzer that can identify coefficients that are outside your desired range. First the MPSBCD file must be created. The setting to generate this file is located under MODEL SETTINGS | Reporting | Outputs tab, and is called Create MPSBCD File. Once this is selected, run the model. Once the model has run and created the MPSBCD file, you can open the Matrix Analyzer by going to the Solution Files tab on the left side navigation pane and expanding the MPS Files branch. Here you can right-click on the desired MPSBCD file and select Analyze. The Matrix Analyzer will open. Click on TOOLS, then Program Options and you will see this dialog box. If you are searching for large coefficients, select Large Coefficient and set the desired threshold under the Filter Tolerances. When you click OK, the dialog will close and on the right side of the Matrix Analyzer window you will see all the rows that contain coefficients larger than your designated threshold. The same can be done for small coefficients. Now that you know where the coefficients occur, you can work to eliminate them as appropriate for your model. Keywords: References: None
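For readers who want to perform the same range check outside the Matrix Analyzer, the sketch below scans the COLUMNS section of an MPS-format file for coefficients whose magnitude falls outside the recommended 1e-7 to 1e+7 window. The function name and the whitespace-based parsing are our own assumptions (it handles free-format lines, not MARKER records); it is not an AspenTech utility.

```python
# Rough standalone sketch of the check the Matrix Analyzer performs:
# scan the COLUMNS section of an MPS file for coefficients outside a
# chosen magnitude range. Thresholds mirror the 1e-7 / 1e+7 guideline.
def out_of_range_coefficients(mps_path, small=1e-7, large=1e7):
    hits = []
    in_columns = False
    with open(mps_path) as f:
        for line in f:
            tag = line.strip().upper()
            if tag == "COLUMNS":
                in_columns = True
                continue
            if tag in ("RHS", "RANGES", "BOUNDS", "ENDATA"):
                in_columns = False
            if not in_columns or not line.strip():
                continue
            fields = line.split()
            # COLUMNS lines look like: column row1 value1 [row2 value2]
            for row, val in zip(fields[1::2], fields[2::2]):
                try:
                    v = abs(float(val))
                except ValueError:
                    continue
                if v != 0 and (v < small or v > large):
                    hits.append((fields[0], row, float(val)))
    return hits
```

Each hit is a (column, row, coefficient) tuple, pointing you at the matrix entry to investigate.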
Problem Statement: How do I temporarily worsen my yields without recalculating all the assays? E.g. experiencing temporary increase in Vacuum Residue yields or just wanting to make a sensitivity in an investment study without too much effort.
Solution: Table SWING gives an excellent way to make quick crude unit yield adjustments. For those not familiar with it: it is a quick way of sending a certain portion of one stream into another. This affects not just the material balance; all of the qualities are recalculated as well. The downside is that for the quality calculation a whole cut's quality is multiplied by its material balance contribution. Quality calculated that way is therefore somewhat imprecise, though good enough when changes are minor and temporary, especially in situations where the exact extent of the yield worsening is only estimated and needs to be analysed in several scenarios. Table SWING can also be modified through table CASE, which makes it ideal for sensitivity calculations. Generally the source stream should have all of the qualities of the sink stream present. In case the entire stream is swung, its material balance row should be made FREE, as described in more detail in the PIMS Help File. Keywords: Assays, Swing, Sensitivity analysis References: None
Problem Statement: The properties of yield stream are calculated by Recursion in the Base-Delta submodel without using PCALC. How do I calculate the coefficients for the Recursion in Base-Delta submodel?
Solution: Below is an explanation of the calculation using a simple example. BAS is a Base vector, SUL is a Delta vector. Recursion Matrix (suppose Error vector is zero) BAS SUL PRD RHS RBALPRD -a -b 1 =0 RSULPRD -c -d f =0 We need to calculate the coefficient d from RSULPRD -c*BAS -d*SUL +f*PRD =0 d*SUL = f*PRD -c*BAS where PRD = a*BAS + b*SUL --- from RBALPRD so d*SUL = f*(a*BAS + b*SUL) -c*BAS where SUL = x*BAS x: Factor for Delta Prop of Feed then d*x = f*a + f*b*x -c where c=a*y y: Base quality of Pool then d*x = f*a + f*b*x - a*y d*x = (f-y)*a + f*x*b therefore d = (f-y)/x*a + f*b f: Pool Quality x: Factor for Delta Prop of Feed y: Base quality of Pool Keywords: RECURSION PCALC References: None
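The derivation above can be checked numerically. In the sketch below we pick arbitrary illustrative values for a, b, x, y and f (they are not from any real model) and confirm that the coefficient d computed from the final formula makes the RSULPRD row balance exactly.

```python
# Numerical check of the base-delta recursion derivation above,
# using illustrative values.
a, b = 0.40, 0.02   # yield coefficients of PRD on BAS and SUL (row RBALPRD)
x = 0.5             # factor for the delta property of the feed: SUL = x*BAS
y = 1.2             # base quality of the pool (so c = a*y)
f = 1.5             # pool quality

d = (f - y) / x * a + f * b   # closed-form result derived above
c = a * y

# Verify the RSULPRD row balances for an arbitrary BAS activity:
BAS = 100.0
SUL = x * BAS
PRD = a * BAS + b * SUL                 # from RBALPRD
residual = -c * BAS - d * SUL + f * PRD # RSULPRD should equal zero
assert abs(residual) < 1e-9
print(d)  # 0.27 for these illustrative inputs
```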
Problem Statement: When trying to run Assay Manager I get the following error messages and the application crashes: There is already a listener on IP endpoint... This could happen if there is another application already listening on this endpoint or if you have multiple service endpoints in your service host with the same IP endpoint but with incompatible binding configurations. Error occurred in initializing Assay Management. Assay Management will be closed... How can I resolve this problem and run Assay Manager?
Solution: Go to Windows Services, right click on Aspen Enterprise Logging Server and select Stop: Assay Manager should run properly now. Keywords: There is already a listener on IP endpoint Assay Manager error References: None
Problem Statement: Can Impose Nonzero Pool Flows and Fix Quality of zero flow pools to initial values be used at the same time?
Solution: This is not recommended. The Impose Nonzero Pool Flows solver setting can potentially conflict with the Fix Quality of zero flow pools to initial values XSLP solver setting. These settings aim to address the same issue and should not be used together. Keywords: References: None
Problem Statement: There are sometimes questions about the sign convention of the PI values (shadow prices or marginal values) of the Rows, specifically for the Capacity (CCAP) and Material Balance (WBAL, VBAL) Rows. Intuition indicates that the sign of the PI value should be positive when the Row Activity hits a MAX (e.g. a Capacity hits a MAX) and negative when the Row Activity hits a MIN. This would be consistent with the interpretation of the PI as the derivative of the OBJFN with respect to the RHS of that Row. In the .LST reports (available in older versions of PIMS), the PI values on the Capacities had this sign convention. However, the mathematically correct sign convention is that the PI value is negative when the Row Activity hits a MAX (e.g. a Capacity hits a MAX) and positive when the Row Activity hits a MIN. This sign convention is respected in this interpretation of the PI value: The current HTML / TXT reports follow this convention for Capacities and Material Balances. See below for the correct definition of the PI value, as well as for a discussion on the history of the sign conventions in PIMS.
Solution: 1. The mathematically correct definition of the shadow prices or PI values on the rows is the marginal values on the slack variables of each row. That is, they are the Dual Activities of the Dual Solution. The sign on them is consistent with the PIMS sign convention: · Costs are negative · Sales are positive · Consumptions have positive coefficients · Yields have negative coefficients · OBJFN is maximized The slack variable moves in the opposite direction of the RHS limit; that is the reason for the somewhat confusing resulting sign of the PI value. For example, if a capacity hits a maximum, the slack variable on that row will have an activity = 0, therefore it is hitting its lower limit, and its marginal value is negative. The opposite is true when the capacity is hitting the lower limit: the slack variable is hitting its maximum, and its marginal value will be positive. 2. The shadow prices (which can also be called the Dual Activities) on the material balance rows have always been reported as negative numbers. That is the correct sign for the Dual Activity. 3. The shadow prices on the capacity rows have been problematic because while they have actually been reported mathematically wrong, the sign seemed intuitively correct. They are mathematically correct in the HTML / TXT reports and consistent with the LP solution. Capacities at an upper limit have a negative Dual Activity, the same as the material balance rows. Both the material balance and capacity rows are L type rows and should have the same Dual Activity sign convention. 4. The shadow prices for capacities in the .LST reports are mathematically wrong but intuitively correct. We have found that there are two camps on this sign issue: the mathematically correct camp and the intuitive camp. Whatever the choice, half the people are unhappy. The PIMS HTML / TXT output reports will not be changing the signs on the shadow prices of limited rows.
Rows that are at an upper limit (status=UL) will be shown with negative Dual Activities. However, if you prefer the intuitive sign convention, then you can use the SIGNREV setting to indicate this to PIMS. The SIGNREV setting prompts PIMS to change the sign convention of PI values in the reports. Note that this entire discussion is specifically about the PI values of rows in the matrix. Matrix columns are not affected by this discussion since for them there is no difference between a mathematically correct view and an intuitive view. Keywords: PI Values, Shadow Price, Marginal Value, Sign Convention References: None
Problem Statement: What causes the 'Unable to Write' error when running Case Comparison?
Solution: To avoid this issue, turn on the Use Excel to create solution spreadsheets in current Excel format option. This is located under MODEL SETTINGS | Reporting | Miscellaneous. NOTE that some users may not see the Unable to Write message and may instead get this dialog in Excel as it is trying to generate the Case Comparison file: Keywords: None References: None
Problem Statement: How do I resolve the 'server busy' message that appears when PIMS runs Excel automation?
Solution: The 'server busy' message appears when Excel automation runs slowly. This can be caused by Excel add-ins, incorrect Excel formulas, data-reading problems, and so on. A fix for this issue was added in patch Aspen PIMS V8.7 CP7, which you can download from this link: http://support.aspentech.com/webteamcgi/SolutionDisplay_view.cgi?key=143447 If you cannot install the patch or must use a PIMS version earlier than V8.7 CP7, the following points can help speed up Excel automation: · Minimize the use of Excel add-ins · Minimize the use of conditional formatting in the Aspen PIMS Excel tables · Minimize the use of macros in the Aspen PIMS Excel tables · Minimize the file size of the Aspen PIMS Excel tables (remove data and worksheets that are no longer used) · Save the Aspen PIMS Excel tables in the same format as the Excel version installed on the computer. If you change an Excel table from the 2003-or-earlier format (*.xls) to the 2010-or-later format (*.xlsx), you must re-attach the *.xlsx table to the PIMS model tree; otherwise PIMS will continue to read the *.xls table. Real-time virus scanning software can also cause the 'server busy' message. To avoid this, exclude the folder containing the PIMS model from real-time scanning. Of course, the PIMS model files still need to be scanned at appropriate times; users can choose how frequently to scan them. Keywords: Server Busy Excel, switch to, retry Chinese 中文 References: None
Problem Statement: In the PrSolution table of the database there is a field “Description” where a description of the Solution ID can be found. How can you specify this description of the Solution ID so that this field in the database is filled?
Solution: The Description column entry in the PrSolution table comes from the model execution dialog box as shown below: Initially, for an SQL results database, in the model execution tab, the Description section is greyed out, so we cannot enter anything there. But there is a workaround for this: 1. Go to Model Settings tab - General settings 2. In the Output Database tab, click on the checkbox “Output to Access Database Also” 3. Open the Model Execution dialog box and you can see that the Description section is now active, so we can enter data there 4. If you enter anything there, it will be displayed in the SQL database table PrSolution under Description. Keywords: PIMS results database, PrSolution References: None
Problem Statement: In the Scenario Evaluation Tool, for Override, Selection, and Parametric scenarios I see there are two columns: Activity and MarginalValue. What are these for and how do I get them populated?
Solution: These columns show the base case solution Activity and Marginal Value for all the scenario tables (BUY, SELL, PROCLIM, and CAPS). To get the values populated once you have configured and saved your set file: 1. Run as a standalone case the one you configured as the Base Case for the scenario. In the following example, the Base Case is Case 1 as defined in table CASE: 2. Go back to the Scenario Evaluation Tool and open the set file. Select any type of scenario (Override, Selection, or Parametric). Make sure that the columns Activity and MarginalValue are selected for the table you want to modify. In the example below, Activity and MarginalValue are selected as columns for table CAPS in the Override scenario. You should be able to see the values populated: Keywords: scenario dialog box scenario table SET node References: None
Problem Statement: How to interpret Quality Change By Occurrence and Quality Change By Relative Change in the Solution Tracking report?
Solution: The Solution Tracking report includes Quality Change By Occurrence and Quality Change By Relative Change results to track the quality data. 1. Quality Change By Occurrence Under Quality Change By Occurrence, the report lists the top 50 properties based on how many times the property value changed. 2. Quality Change By Relative Change Under Quality Change By Relative Change, the report lists the top 50 properties based on how much the property value changed. Keywords: References: None
Problem Statement: The warning message 'Solution files with no/multiple Solution IDs selected. Only spreadsheet report is supported' appears while trying to generate a Case Comparison. For example, here while generating a case comparison of cases 1 and 2, the warning message appears.
Solution: The reason for this warning message is that the cases were not run at the same time. The information in the Case Comparison report will be the same; however, only the xls format is available. If you would like the HTML or TXT formats, then you must re-run all the desired cases in a single execution. Keywords: Case comparison Warning References: None
Problem Statement: Most refiners that process both low sulphur and high sulphur crudes and produce fuel oil, or any other product that is blended from straight-run streams or where straight-run stream properties are critical for the product meeting specifications, tend to split their crude units into logical units which process either low or high sulphur crudes. Only streams from low sulphur crudes are then allowed to enter the low sulphur (usually 1 wt% S) fuel oil. This is done for a number of reasons: preventing teaspoon blending, disabling blending which is infeasible from a scheduling or tank farm point of view, and also increasing model stability by removing infeasible options as a means of helping the solver reach the optimal solution more easily.
When comparing planned and actual refinery yields, refiners who process different types of crude may notice that their actual ratio of 1% vs 3% sulphur fuel oil is different than planned. This is due to the fact that crude and vacuum residue from the crude processed previously are present in the heel of the tank and in the pipes. Refiners processing predominantly cargoes of high sulphur crude and occasionally cargoes of low sulphur crude may find that part of their low sulphur crude, and of the streams coming from its processing, gets mixed with higher sulphur material. This phenomenon is inevitable; it cannot be optimized away and sometimes it is necessary to acknowledge it. Most of the time the amount of low sulphur fuel oil is lower than the one in the PIMS results. It must be noted that for crudes whose sulphur is low enough, this can even mean that part of the high sulphur crude may end up in the low sulphur fuel oil, but the usual case is lower-than-planned blending of low sulphur fuel oil. High and low sulphur crudes are just an example here; a similar situation can occur when only some crudes are suitable for bitumen or lubes production and a portion of them becomes unavailable for that purpose. Solution: There is a fairly simple way to represent this in PIMS and to ensure that this phenomenon is properly reflected in the optimization: we can use Table RATIO to force a proportional part of each low sulphur crude to the high sulphur unit.
Note that this proportion varies with cargo size and may have to be reduced when evaluating scenarios where we process two low sulphur crudes one after another. In the example of processing crude Siberian Light with code SIB and logical crude submodels named SCR1 (low sulphur) and SCR2 (high sulphur), we would create the following structure in table RATIO:
* TABLE RATIO
* ROWNAMES TEXT RT1
***
SCR1SIB Sib Light in LS 9
SCR2SIB Sib Light in HS 1
***
Do note that the 3-letter code for the ratio should be one that is not used elsewhere in the model, and that the crude in question has to be allowed by CRDDISTL to be processed in both crude units. For the less common scenario where part of the high sulphur material may end up in low sulphur fuel oil owing to the very low sulphur of the light crude, we may use table CRDBLEND to allow the high sulphur crude to be partially blended with the low sulphur crude in question, and then use RATIO to tie this to the processing level of that low sulphur crude. Keywords: Low sulphur crude Overlapping RATIO CRDBLEND Plan vs Actual References: None
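The effect of RATIO rows like the ones above can be sketched numerically: RT1 entries of 9 and 1 force the SIB processing to split 9:1 between SCR1 and SCR2, whatever the total rate. The total rate below is illustrative.

```python
# Sketch of the split forced by Table RATIO: the RT1 column weights
# (9 and 1) fix the proportion of crude SIB sent to each logical unit.
ratio = {"SCR1SIB": 9.0, "SCR2SIB": 1.0}  # RT1 column of Table RATIO
total_sib = 500.0                         # total SIB processed (any units)

total_weight = sum(ratio.values())
split = {unit: total_sib * w / total_weight for unit, w in ratio.items()}
print(split)  # 90% to SCR1 (450.0), 10% to SCR2 (50.0)
```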
Problem Statement: What does 'uncrunching matrix' in the execution log mean?
Solution: This message is simply an indication that the optimizer is returning to the original matrix after using the presolved matrix. It is normal to see this when the Presolve function is active. There is no problem and no action is required. Keywords: uncrunching matrix References: None
Problem Statement: What is the difference between T.Gases and T.Units while using them to define units for gases?
Solution: Note the following when using T.GASES and T.UNITS: 1. You can only use one of T.GASES or T.UNITS for a particular product. 2. For T.GASES, the conversion factor used is taken from the General Settings tab in the model tree. By default it converts from BBLS (Volume) to KSCF (Gas). You can enter user-defined units and conversion factors to define gas units; however, the same unit and conversion factor will be used for all the gases defined. 3. T.UNITS uses the conversion factor and units defined by the user in the table, which can be specific to an individual product depending on the requirements. If you want to define units for a specific gaseous product, it is recommended to use T.UNITS; this table gives the user more flexibility. T.GASES can be used when you need to change the units of all the gases in the process. Keywords: Table GASES Table UNITS Units Definition References: None
Problem Statement: During a PIMS run, there is warning message 'Specification Blend Component Quality Data Missing'. Should I do something about it?
Solution: Warning message W086, 'SPECIFICATION BLEND COMPONENT QUALITY DATA MISSING', means that you have a specification on a finished product, but one or more of the blend components does not have that property assigned. For example, let us assume that finished product C is made up of two blend components A and B. Product C has a minimum octane specification (NRON) of 95 in BLNSPEC. In order for the blending equations to work, both components A and B need to have the property RON defined. If component A or B is missing the property RON, PIMS will assume that the property value is zero. Therefore the finished blend C will either be inaccurate or unachievable. You must make sure that all the blend components have the required properties defined to support the corresponding specifications in table BLNSPEC. To find out which component and which property it is missing, run a data validation from RUN | Data Validation, and check the messages at the beginning of the report. The messages will tell exactly what component is missing which property. Keywords: Properties Missing Specification Component References: None
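The consistency check behind warning W086 can be sketched as follows. The table layouts are simplified stand-ins for BLNSPEC, BLNMIX and the component property data, using the A/B/C example above; the function name is ours.

```python
# Sketch of the check behind warning W086: every component of a blend
# must carry every property that the blend has a specification for.
# The data below mirrors the A/B/C example: B is missing RON.
blnspec = {"C": {"RON": ("MIN", 95)}}            # finished-product specs
blnmix = {"C": ["A", "B"]}                       # blend recipes
component_props = {"A": {"RON": 92.0}, "B": {}}  # component quality data

def missing_spec_properties(blnspec, blnmix, component_props):
    missing = []
    for blend, specs in blnspec.items():
        for comp in blnmix.get(blend, []):
            for prop in specs:
                if prop not in component_props.get(comp, {}):
                    missing.append((blend, comp, prop))
    return missing

print(missing_spec_properties(blnspec, blnmix, component_props))
# → [('C', 'B', 'RON')]
```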
Problem Statement: When running in DR, to improve convergence in a model, you may try changing PGUESS, inverting PDIST, using MINOBJ, etc. What are the equivalent options, settings and tips to improve the convergence and stability of a model using XNLP?
Solution: Here is a list of options and settings to try to improve the OBJFN and the stability of the solution using XNLP. 1. Global Optimization | MultiStart Use the Global Optimization MultiStart option under Model Settings | XNLP | Global Optimization. This option will force-test the solution by starting the run from different areas (different variable initializations) and will produce the reports with the best solution. The main objective of using this option is to get the highest OBJFN and avoid local optimality. Using this option can be time consuming, as each run in the MultiStart is similar to running that case again. 2. Input Solution You can use an Input Solution based on the results from a previous successful run. A good starting solution would be created by running a Global Optimization with MultiStart of the model. Every time a run finishes, a file named XNLP_Solution.dat is created. If you rename the extension of this file to .nnn (e.g. XNLP_Solution.003), where nnn is a number, you can use it as an input for another run (either for the same case or another one). This provides a starting point for the run that normally helps the convergence of the run and achieves a higher OBJFN. Keep in mind that a given starting solution should be used for similar cases: for example, if you are running a set of cases with Winter gasolines, you can use one input solution, but this solution will probably not be a good starting point for a set of cases running Summer gasolines. To access this solution file, in the Model Execution window, select the Input Solution drop-down box and choose a number (you will see as many numbers as you have XNLP_Solution.nnn files). You can also specify this option in the CASE table by using the keywords SAVESOL and LOADSOL with given numbers for the files (i.e. to rename and use solution files with a given number). Use the keyword SAVESOL in table CASE to instruct PIMS to save the XNLP_Solution.xxx file.
Use the keyword LOADSOL in table CASE to instruct PIMS to load the XNLP_Solution.xxx file. 3. Verify Solution To force-test the quality of the solution, you can use the Verify Solution setting under Model Settings | XNLP | Advanced (use the default values of 10 and 0.001 for the passes and tolerances). This will rerun the model up to 10 times, using in every new run the solution file from the one that just finished. This is the same as using a different input solution file every time (e.g. XNLP_Solution.001). The benefit is that this is automated; the drawback is that it will take more time than using an Input Solution, as it has to run at least two times to check the solution. This does not replace the Global Optimization, but it is a good check on the health of the solution. This option can be used in conjunction with an Input Solution. 4. Improve Local Solution Use the option Improve Local Solution under Model Settings | XSLP | Advanced 1 to force the pool-collector columns to be non-zero. This option introduces temporary bounds on the pool collector columns to force them to be non-zero. These constraints are automatically dropped after executing a few XNLP iterations. Use this option to improve the local solution. By keeping all pools open, the visibility of the model is improved to look for alternative courses of action. Keep the Smallest Initial Pool Collector Column Activity at the default value of 0.001. This setting adjusts the initial values for all the pool collector columns to a specified non-zero value. The higher the number, the further away you are from the initial point and the harder it becomes to solve. 5. Use Complementarity Sets In Model Settings | XSLP | Advanced 2, turn the setting Use Complementarity Sets on. This means that properties that are not updated because the pool is very small will use the property value from PGUESS if the pool comes back into the solution.
If not, it will take either the lower or upper bound for that property, and that is a bad guess. The (flow * quality) pairs are defined as complementarity sets. When the flow is zero, the LP solver will push the quality to either its lower or upper bound. When this option is selected, the quality value will be left at its last value when the flow was non-zero. 6. Property limits in Table SCALE Introduce the appropriate limits for properties in table SCALE under columns MIN and MAX. This is especially important if using table ABML for blending. For composition properties, MIN should be 0 and MAX 100 (or 1 if using fractions). Keywords: Local Optima Local Optimality Convergence Stability References: None
Problem Statement: What is the purpose of column VOL in T. BUY in weight based model and column “WGT” in a volume based model?
Solution: In T.BUY for a weight based model For a weight based model, if you want to specify the purchase on volume basis, you can do so by giving a non-blank entry under the column VOL. An entry “1” under the column VOL indicates that even though this is a weight based model, the purchase is on volume basis (volume /time) along with the cost in Currency/volume units. Basically it indicates that the material is priced and constrained on a volume basis In T. BUY for a volume based model For a volume based model, if you want to specify the purchase on weight basis, you can do so by giving a non-blank entry under the column WGT. An entry “1” under the column WGT indicates that even though this is a volume based model, the purchase is on weight basis (weight /time) along with the cost in Currency/weight units. Basically it indicates that the material is priced and constrained on a weight basis Keywords: Table BUY VOL in BUY WGT in BUY References: None
Problem Statement: What does the ‘Load Previous Case Basis’ option do in Aspen PIMS DR?
Solution: The ‘Load Previous Case Basis’ option can be found on the execution window as shown below: This option initializes the basis for the subsequent case to be solved by starting from the solution basis file of the previously solved case. It also initializes the flow basis in addition to the properties. So, for example, if the user is running 3 cases with ‘Load Previous Case Basis’ checked, assuming no cases were run previously, Pguess will be used to start case 1, the basis from case 1 will be used to start case 2, and the basis from case 2 will be used to start case 3. If this option is not checked, the cases are solved using the default starting basis (Pguess file). Keywords: PIMS DR Execution dialog box Load Previous Case Basis References: None
Problem Statement: Help file for T.DEMALLOC states: Enter a five to eight character tag. The first three characters identify the material to be sold into a specific market, or the characters (…) that signify that the information provided in this row applies to all materials being sold in the specified market. The fourth character identifies the market into which the material can be sold. The optional fifth through eighth characters are plant identifiers. Could you please clarify the last sentence and provide an example?
Solution: “The optional fifth through eighth characters are plant identifiers” statement means that user can specify up to 4 plants that can satisfy the demand for each product into each market. For example: UPRBACD would specify UPR to Market B from Plants A, C and D: * TABLE DEMALLOC Table of Contents * TEXT MIN MAX FIX COST * UPRBACD UPR To Mkt B from Plts A, C and D 0.00000 0.03000 Keywords: DEMALLOC MPIMS XPIMS References: None
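The tag layout described in the help text can be sketched as a small parser. The function name is ours; it simply slices the tag into the material (or '...'), market, and optional plant characters.

```python
# Sketch of the DEMALLOC row-tag layout described above:
# characters 1-3 = material (or '...'), character 4 = market,
# optional characters 5-8 = up to four plant identifiers.
def parse_demalloc_tag(tag):
    return {
        "material": tag[0:3],
        "market": tag[3],
        "plants": list(tag[4:8]),
    }

print(parse_demalloc_tag("UPRBACD"))
# → {'material': 'UPR', 'market': 'B', 'plants': ['A', 'C', 'D']}
```

With no plant identifiers (e.g. a four-character tag like "UPRB"), the plants list simply comes back empty, meaning the row applies regardless of plant.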
Problem Statement: I have a submodel which can operate in 2 modes, based on the feed quality. For example, if paraffin content of feed, PAR is less than 80%, it operates in mode 1. Else, operate in mode 2. How do I model this kind of situation?
Solution: You can use table SXXX (submodel) and table PROCLIM to model this kind of situation. For example, you have a submodel named SDHT whose feed material is DS1. It operates in mode 2 when the feed quality PAR (in this case paraffin content) is greater than 80%, and in mode 1 when PAR is less than 80%. Create 2 submodel tables, one called SDHT and one called SDH2, and supply the data (e.g. feed, yield, capacity consumption and utility consumption) for each mode of operation. In SDHT, add a 7-character row starting with the character Z, followed by the 3-character quality tag and a 3-character user-defined tag, for example ZPARDH1. In the intersection of the feed DS1 and the Z row, enter the paraffin content of the feed for mode 1, or enter -999 (a placeholder) if the paraffin content of the feed is calculated upstream or defined in the model. In SDH2, add a row ZPARDH2. In the intersection of the feed DS1 and the Z row, enter the paraffin content of the feed for mode 2, or enter -999 (a placeholder) if the paraffin content of the feed is calculated upstream or defined in the model. This Z row, which corresponds to a Z row in T.PROCLIM, is used to control process operation in a submodel. Then in table PROCLIM, enter the limit for the operating conditions: Keywords: 2-mode operation submodel PROCLIM process limit submodel Z row References: None
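The branching rule described above can be sketched in a few lines of Python. This is a hypothetical illustration only; in the actual model the switching is enforced by the Z rows (ZPARDH1/ZPARDH2) together with the limits in Table PROCLIM, not by external code.

```python
# Hypothetical sketch of the mode-selection rule described above.  In the
# actual PIMS model this is enforced by the Z rows and Table PROCLIM limits.

PAR_THRESHOLD = 80.0   # paraffin content (%) separating the two modes

def select_mode_table(par_content):
    """Return which submodel table should process a feed of this PAR quality."""
    # PAR < 80% -> mode 1 (table SDHT); PAR >= 80% -> mode 2 (table SDH2)
    return "SDHT" if par_content < PAR_THRESHOLD else "SDH2"
```

For example, a feed with PAR = 75 would route to the SDHT (mode 1) structure, while PAR = 85 would route to SDH2 (mode 2).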
Problem Statement: How to interpret the base-delta submodel structure
Solution: A base-delta submodel is used for the scenario where the submodel feed quality affects the yields. In the following example, the N+2A quality of the feed stream affects the yields of H2L, LPG and R95. The delta vectors in the base-delta submodel represent the slope of each change (∆yield / ∆feed quality). So we can calculate the delta vectors for the yield streams in this SREF submodel. We pick a base feed quality and use the corresponding delta vector to set up the base-delta structure. The base vector represents yields per feed unit generated at the base feed quality. The delta vector represents the amount of yield adjustment per designated unit of feed quality deviation. We have an Exxxyyy row to balance the feed vector with the base vector, and an Eqqqyyy row to balance quality barrels, comparing the actual feed quality with the base feed quality. For example, in this SREF submodel: ECHGREF: -1*SREFRFD + 1*SREFBAS = 0 → SREFBAS = SREFRFD EN2AREF: -999*SREFRFD + 50*SREFBAS + 5*SREFN2A = 0 → SREFN2A = ((N2A - 50)/5)*SREFRFD, where the -999 placeholder is replaced by the actual feed quality N2A. And we can calculate the VBAL material rows for the yields of this submodel: VBALH2L: -0.025*SREFBAS - 0.0003*SREFN2A = -(0.025 + 0.0003*(N2A - 50)/5)*SREFRFD So, for example, if N+2A of the feed stream equals 60, the VBALH2L row calculation for this submodel becomes: VBALH2L: -(0.025 + 0.0003*(60 - 50)/5)*SREFRFD = -(0.025 + 0.0003*2)*SREFRFD Keywords: Base Vector Delta Vector Base-Delta Submodel References: None
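The base-delta arithmetic above can be checked numerically. The sketch below is illustrative only; the coefficients are the SREF example values from this article (base N+2A of 50, delta unit of 5).

```python
# Numerical check of the base-delta arithmetic above, using the SREF example
# coefficients from this article (base N+2A = 50, delta unit = 5).

BASE_QUALITY = 50.0       # N+2A value at which the base vector was generated
DELTA_UNIT = 5.0          # designated unit of feed-quality deviation
BASE_YIELD_H2L = 0.025    # base-vector coefficient for H2L
DELTA_YIELD_H2L = 0.0003  # delta-vector coefficient for H2L

def h2l_yield(n2a):
    """H2L yield per unit of SREF feed at the given N+2A feed quality."""
    deviation = (n2a - BASE_QUALITY) / DELTA_UNIT  # quality barrels per feed unit
    return BASE_YIELD_H2L + DELTA_YIELD_H2L * deviation
```

At the base quality (N+2A = 50) this returns the base yield 0.025; at N+2A = 60 it returns 0.025 + 0.0003*2 = 0.0256, matching the worked example.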
Problem Statement: What is the best way to shut off/disconnect VPOOL for just certain cases within T Case?
Solution: The best way to disable vpool for a particular case would be to disable the column activity of the Virtual Pool in the T.SVPL, by fixing its min and max limits at zero in T.Bounds. For example, if there is a virtual pool L+A consisting of ALK and LCN, then in table CASE: CASE 5 TABLE BOUNDS TEXT MIN MAX SvplL+A Pool containing ALK and LCN 0 0 This would mean that for this case, the activity of L+A will be 0, so that the other tables can be modified suitably such that ALK and LCN have their separate dispositions, in the same case. Also note that you can view the svpl table from the model documenter report. Warning: Disabling the column activity of the virtual pool may lead to material imbalances if the stream also has other dispositions Keywords: Table CASE, Table VPOOL, virtual pool, Table BOUNDS, column activity References: None
Problem Statement: When I right click on the Assays branch of my model and select Manage Assay Data, I get the following error message: Duplicate coefficient in table <directory>\filename.xls[sheetname] for row *IFVTXXX column CRD where XXX is the crude cut tag and CRD is the crude tag. However, the only duplicates I see are commented out. Why am I getting this error?
Solution: Aspen Assay Manager will read commented out rows of the type *IFVT (also *IIVT, *FVT, and *IVT), along with the specified whole crude comment rows, such as *SULCRD. Therefore, either delete the duplicate rows or, if you need them for calculations, change their names, so that there are no duplicates between comment rows and non-comment rows. Keywords: None References: None
Problem Statement: How can I blend two crudes to generate a new crude in Assay Manager
Solution: Blended crudes are a combination of two or more existing crudes and will be treated as a single feed. PIMS Assay Management can calculate the assay properties for the blended crude using the data of the component crudes. Steps to blend crudes in Assay Manager: 1. Select Assays Summary on the navigation pane and click the Blend button on the ribbon 2. Blend tab 3. Name the blend 4. The Add Assay button allows you to select the blend components 5. Select and enter component amounts (Weight or Volume) 6. The Normalize button will ensure the component crudes add up to 1 7. Create Assay will blend the crudes and add the blend to your Case View Keywords: Assay Manager Blend References: None
Problem Statement: How can I report streams with no activities?
Solution: The purchases, sales and transfers with zero activities are reported unless the ZEROPS setting is selected. Uncheck this option to report all the flows with zero activities in the full solution report. Keywords: ZEROPS Reporting settings References: None
Problem Statement: In different versions of PIMS, I see that the value for PrDBVersion in the output database (Results.mdb) has not changed (V7.3.1). What does it mean?
Solution: The value of V7.3.1 is currently hard-coded and does not get regularly updated. Nevertheless the database schema may have changed between versions. Keywords: PrDBVersion, Results.mdb References: None
Problem Statement: How do I identify matrix rows with potential scaling problems?
Solution: It is important to avoid scaling issues in matrix rows because these can cause convergence problems. In general we recommend that the range of values for coefficients in a matrix row does not exceed 1e+7. For example, avoid a matrix row where one coefficient is 100,000 and another coefficient in the same row is 0.0001. There is a tool available in the Matrix Analyzer that can identify coefficients that are outside the recommended range. First the MPSBCD file must be created. The setting to generate this file is located under MODEL SETTINGS | Reporting | Outputs tab, and is called Create MPSBCD File. Once this is selected, run the model. Once the model has run and created the MPSBCD file, you can open the Matrix Analyzer by going to the Solution Files tab on the left side navigation pane and expanding the MPS Files branch. Here you can right click on the desired MPSBCD file and select Analyze. The Matrix Analyzer will open. Click on TOOLS, then Program Options and you will see this dialog box. To find rows which have a very wide range of coefficient values select Ratio and set the desired threshold under the Filter Tolerances. When you click OK, the dialog will close and on the right side of the Matrix Analyzer window you will see all the rows that contain coefficients with a range larger than your designated threshold. Now that you know where the coefficients occur, you can work to eliminate them as appropriate for your model. Keywords: ratio, scaling, filter References: None
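The ratio check performed by the Matrix Analyzer can be approximated with a short script. The sketch below uses a hypothetical row-to-coefficients data structure (not the MPSBCD file format) and flags any row whose largest-to-smallest nonzero coefficient magnitude exceeds the recommended 1e+7 threshold.

```python
# Script-level approximation of the Matrix Analyzer's Ratio filter.
# The rows dict {row_name: [coefficients]} is a hypothetical structure,
# not the MPSBCD file format.

def badly_scaled_rows(rows, max_ratio=1e7):
    """Return names of rows whose nonzero coefficients span too wide a range."""
    flagged = []
    for name, coefficients in rows.items():
        magnitudes = [abs(c) for c in coefficients if c != 0.0]
        if len(magnitudes) >= 2 and max(magnitudes) / min(magnitudes) > max_ratio:
            flagged.append(name)
    return flagged
```

Applied to the example from the article, a row containing both 100,000 and 0.0001 (ratio 1e+9) would be flagged, while a row with coefficients 1.0 and 2.0 would not.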
Problem Statement: Can we use a finished product as a component of another blend? Although in general, as a best practice, AspenTech does not recommend using a finished blend as a component of another blend, it is possible to do so.
Solution: Yes. A finished product can be used as a component of another blend. Consider this example: Product PRE and Product FIN are the blends defined. The user needs to use Product PRE as a component in Product FIN. This can be implemented as follows: 1. Both PRE and FIN must be specification blends. 2. PRE must have specifications for the same properties as FIN. However, the specification values of PRE will be different from those of FIN. 3. PRE must have recursed properties. Normally, PIMS does not recurse on the properties of a specification blend. You can force PIMS to recurse the properties of PRE by putting guesses for the property values of PRE into Table PGUESS. PIMS will then set up the recursion structure for PRE. 4. PRE and FIN should both be set up in Tables BLENDS, BLNMIX, and BLNSPEC, as usual. 5. In Table BLNMIX, PRE should be listed as a component of FIN. Keywords: BLENDS BLNMIX References: None
Problem Statement: When Excel Automation runs slowly, it can cause the “server busy” message, which may be caused by Excel add-ins. When add-ins like PI-Datalink cannot be eliminated for business reasons, registry settings can be changed to reduce or resolve the problem. Note that these steps have worked for some cases, but not all; it depends on the specific add-in and how the user's machine is configured.
Solution: The following registry changes that may prevent the problems caused by Excel Add-Ins. Before making any registry changes you should consider if a backup of the registry should be made. 1) Click Start, click Run, type regedit, and then click OK. 2) Expand the following registry subkey: HKEY_CURRENT_USER\SOFTWARE\Microsoft\Office\Common 3) Right-click Common, point to New, and then click Key. 4) Type Security, and then press ENTER to name the new subkey. 5) Right-click Security, point to New, and then click DWORD Value. 6) Type UFIControls, and then press ENTER to name the value. 7) Double-click UFIControls. 8) In the Value data box, type 1, and then click OK. 9) Expand the following registry subkey: HKEY_CURRENT_USER\Software\Microsoft\VBA\ 10) Right-click VBA, point to New, and then click Key. 11) Type Security, and then press ENTER to name the new subkey. 12) Right-click Security, point to New, and then click DWORD Value. 13) Type LoadControlsInForms, and then press ENTER to name the value. 14) Double-click LoadControlsInForms 15) In the Value data box, type 1, and then click OK. 16) Quit Registry Editor. Keywords: Excel Add-ins Registry References: None
Problem Statement: Attached is a simple LP model that has multiple layers of cascaded pools where a source pool goes to more than one destination; in this example, source pool ABD goes to destination pools ABE and ABF. Note that this LP model is not a representation of a refinery or petrochemical PIMS model; it is for illustration purposes only. Note that the error distribution coefficient (D) is calculated as follows: D = Volume of ABD traveling to ABE / Total Volume of ABD produced = 1 bbl / 400000 bbl = 0.0000025. Hence we can see that the error distribution coefficient of ABD to ABE is 0.0000025 in the Matrix Analyzer. However, when I check !PDIST.xls, it shows the value of the error distribution coefficient (D) of ABD to ABE as 0.001, instead of 0.0000025. Why?
Solution: The actual error distribution coefficient is 0.0000025, but it is reported as 0.001 in !PDIST.xls because a default value of 0.001 is defined in DSMALL under Model Settings | General | Tolerance. DSMALL defines the minimum allowable value in the PDIST table. Keywords: DSMALL PDIST Error Distribution Coefficient Cascaded Pools References: None
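The reporting behavior can be reproduced with a small sketch. This is illustrative only, assuming DSMALL acts as a floor on the value written to !PDIST.xls, with the default of 0.001 described above.

```python
# Sketch of the reporting rule: the actual coefficient is the fraction of the
# pool volume sent to the destination, but !PDIST.xls reports at least DSMALL.

DSMALL = 0.001  # default minimum allowable value in the PDIST table

def reported_pdist_coefficient(volume_to_destination, total_pool_volume):
    """Actual error-distribution coefficient, floored to DSMALL for reporting."""
    actual = volume_to_destination / total_pool_volume
    return max(actual, DSMALL)
```

With the article's numbers (1 bbl of ABD to ABE out of 400,000 bbl produced), the actual coefficient is 0.0000025 but the reported value is 0.001.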
Problem Statement: This Solution describes the functions of the Aspen Process Industry Modeling System (PIMS) and how it is used.
Solution: PIMS is an economic planning tool used to model industry processes, mainly refinery processes. It is used to generate an operating production plan for either a single plant or multiple plants, for a single period or over multiple periods. The duration of the plan is typically a week, a month or a year. PIMS takes into account the availability of crude feedstocks, market supply and demand, plant operating capacities and limits, and product specifications during the optimization to find an optimum point of operation. This optimum operating point ensures that the client is gaining the maximum profit while satisfying all the constraints. PIMS employs linear-programming techniques to optimize the operation and design of refineries, petrochemical plants, or other industry facilities. It also models nonlinear problems using Successive Linear Programming (SLP) and Distributive Recursion (DR) as well as simulator interfaces that link to rigorous simulator models. It can be used for a wide variety of short-term and strategic-planning purposes, including the following: · Evaluation of alternative feedstocks · Sizing of plant units in grass roots studies · Optimization of product mix for a given feed state · Optimization of product blending and other operating decisions · Evaluation of grass roots opportunities or expansions, and many others · Capital improvement studies to simulate the material and financial impact of adding new units, new regulations, debottlenecking, different catalysts, varying severities, changing cutting schemes, etc. A key feature of the system is the integrated use of Microsoft Excel for input and maintenance of data. Since the spreadsheet program is extremely easy and effective to use, PIMS data management becomes a pleasure rather than a burden. This benefit leads to improved results since erroneous data is both easier to locate and to correct.
Data is entered or modified in Microsoft Excel and can be transferred to PIMS by attaching the Excel sheets to PIMS tables. Based on the data provided by user, PIMS generates an optimized production plan that fulfills all the user requirements. Keywords: Planning tool, constraints, optimization, successive linear programming (SLP), distributive recursion (DR) References: None
Problem Statement: What is the difference between the V8.0 and V8.2 installation of Aspen PIMS Platinum?
Solution: Platinum Solo is installed with the PIMS installation. In V8.2, there is no shortcut icon under 'All Programs | AspenTech | Planning 8.2'. This also means you can no longer run Platinum outside PIMS with a client installation. In V8.0, Platinum has its own icon on the Start menu, and you can also run Platinum Solo standalone, outside PIMS. Keywords: Platinum Platinum solo installation install V8.2 V8.0 References: None
Problem Statement: Ranging Analysis is one of the features in the Aspen PIMS Advanced Optimization package (PIMS-AO). It is a tool to analyze the minimum and maximum range for feeds/products/capacities at or near optimal conditions.
Solution: The first step is to establish a base. Once we have the optimal solution for this base case, we set the allowable percent giveaway in the objective function. During the Ranging Analysis, PIMS will minimize the lower bound for each selected variable and maximize the upper bound for each selected variable while keeping the solution within the designated percent change in objective function. The combination of the Utility Index and the Flexibility Index provides the maximum operational range under optimality conditions for each feedstock. Utility Index (Maximum Feedstock Addition) For a given feedstock, this metric is defined as the fraction of the remaining total feed that this particular crude can displace for a predefined marginal drop in the economic objective function (typically 1%). This index ranges from 0 to 1. The formula for this is: In other words, the Utility Index tells how much of a particular Feedstock/Product/Capacity can be added to the solution. 0 : No additional quantity can be added (at maximum) 0.5 : It can replace 50% of all the other items at the solution 1 : It can replace all of the other items Flexibility Index (Maximum Feedstock Removal) For a given feedstock, this metric is defined as the fraction (in terms of the total feed) of the optimum amount that can be displaced for a predefined marginal drop in the economic objective function. This index ranges from 0 to 1. In other words, the Flexibility Index tells how much of a particular Feedstock/Product/Capacity can be removed from the solution. 0 : It cannot be removed (at minimum) 0.5 : Half of it can be removed 1 : All of it can be removed (minimum zero) Where: Xi, MAX : the maximum fraction of the total feed this feedstock can reach at or near optimal conditions, Xi, MIN : the minimum fraction of the total feed this feedstock can reach at or near optimal conditions. 
Xi, BASE : the fraction of total feed this feedstock represents at the base optimal solution, Fi, MAX : the maximum this feedstock can reach at or near optimal conditions, Fi, MIN : the minimum this feedstock can reach at or near optimal conditions, Fi, BASE : the base value of this feedstock at the base optimal solution N : the total number of feeds used for range analysis Note: If the particular crude constitutes the ENTIRE optimum feed, i.e., Xi, BASE = 1, the index is by definition 1. If the particular crude is NOT part of the optimum feed, i.e. Xi, BASE = 0, the index is by definition 1. Solution: In the attached model, we use feed analysis as an example to perform Ranging Analysis. Procedure to run Ranging Analysis: 1. Use a model setup with PIMS-AO, 2. Run a Base case 3. From Menu Run | Advanced Optimization | Range Analysis | Perform. Here we select 2 feedstocks, AHV and ANS, to analyze. 4. Select the case we ran at step 1 as a base case, set 4% as Acceptable Percentage Reduction in Objective. 5. The partial execution log shows, --- Base Case Feedstock AHV = 41.6612577799 Feedstock ANS = 0. Display Equation OBJFN = 1192.95799205 ================================ Starting AHV Maximization Case =================================== Major Iteration ======= Variable Convergence Function ======= Residual Convergence Function ======= Objective Convergence Function ======= Objective Function Value ======= Non Linearity Ratio ======= Time ======= Most Infeasible Row ======= Total Infeasible Row ======= 0 8.8385e-001 2.1397e-001 1.0002e+000 50.00000 1.07 9:27:00 AM 1 3.1910e-001 4.9834e-004 0.0000e+000 50.00000 0.995 9:27:00 AM 2 5.9976e-004 4.9801e-007 0.0000e+000 50.00000 1 9:27:00 AM --- Case Maximize AHV - Results Feedstock AHV = 50. Feedstock ANS = 0. 
Display Variable OBJFN = 1145.23957237 =============================== Starting AHV Minimization Case =============================== Variable Residual Objective Objective Non Most Total Major Convergence Convergence Convergence Function Linearity Infeasible Infeasible Iteration Function Function Function Value Ratio Time Row Value ========= =========== =========== =========== ============= ========= =========== ========== ========== 0 1.1865e+001 1.4868e-003 9.9984e-001 19.82250 0.853 9:27:00 AM 1 1.9810e+000 3.4555e-004 2.7814e-002 20.65199 0.776 9:27:00 AM 2 1.4384e+001 5.5734e-004 8.2812e-003 20.39816 0.42 9:27:00 AM 3 2.6742e+000 5.7829e-004 7.6313e-003 20.63014 -0.0789 9:27:00 AM 4 1.5307e+001 5.3961e-004 7.6012e-003 20.39731 -0.0772 9:27:01 AM 5 2.6722e+000 5.7730e-004 7.6650e-003 20.63031 0.0354 9:27:01 AM 6 1.5307e+001 5.3977e-004 7.6056e-003 20.39735 0.0301 9:27:01 AM 7 1.0376e+000 2.1130e-004 5.8599e-003 20.57547 0.623 9:27:01 AM 8 6.6840e-001 1.1945e-004 1.1373e-003 20.61025 0.487 9:27:01 AM 9 9.5714e-001 6.1257e-005 7.6235e-004 20.63358 0.5 9:27:01 AM 10 4.8932e-001 5.3835e-005 5.3753e-004 20.65005 0.0741 9:27:01 AM 11 4.8134e-001 1.3571e-005 2.8150e-004 20.65868 0.744 9:27:01 AM 12 1.6205e-001 3.6004e-006 1.0743e-004 20.66197 0.742 9:27:01 AM --- Case Minimize AHV - Results Feedstock AHV = 20.6619705496 Feedstock ANS = 0. 
Display Variable OBJFN = 1145.23957237 =============================== Starting ANS Maximization Case =============================== Variable Residual Objective Objective Non Most Total Major Convergence Convergence Convergence Function Linearity Infeasible Infeasible Iteration Function Function Function Value Ratio Time Row Value ========= =========== =========== =========== ============= ========= =========== ========== ========== 0 1.2378e+000 1.3593e-004 9.9998e-001 5.590149 1.03 9:27:02 AM 1 7.1433e+000 1.3370e-003 1.3487e-001 6.478962 -2.91 9:27:02 AM 2 8.7239e-001 1.1351e-003 1.3032e-002 6.381494 0.296 9:27:02 AM 3 6.7292e+000 1.0040e-003 4.3105e-002 6.063314 0.263 9:27:02 AM 4 8.7105e-001 1.0550e-003 2.2667e-002 6.223419 -0.117 9:27:02 AM 5 6.7764e+000 1.0613e-003 9.8332e-003 6.152390 -0.0659 9:27:02 AM 6 8.7133e-001 1.0715e-003 1.4557e-002 6.256509 0.0211 9:27:02 AM 7 6.7671e+000 1.0491e-003 1.6934e-002 6.133624 0.0134 9:27:02 AM 8 8.7127e-001 1.0680e-003 1.6243e-002 6.249497 -0.00459 9:27:02 AM 9 6.7691e+000 1.0516e-003 1.5436e-002 6.137593 -0.00299 9:27:02 AM 10 8.7128e-001 1.0687e-003 1.5886e-002 6.250979 0.00107 9:27:02 AM 11 1.9820e-001 9.1155e-004 4.1029e-003 6.221229 0.148 9:27:03 AM 12 1.2005e+000 7.9958e-004 3.7536e-002 5.950172 0.662 9:27:03 AM 13 5.6075e-001 3.8641e-005 6.7301e-003 5.903397 0.931 9:27:03 AM 14 9.9356e-001 3.5984e-006 2.6537e-004 5.901565 0.574 9:27:03 AM --- Case Maximize ANS - Results Feedstock AHV = 36.8632232532 Feedstock ANS = 5.90156506135 Display Variable OBJFN = 1145.23957237 6. PIMS generates a report 'FeedstockRanging001.xls'. 7. From Menu Run | Advanced Optimization | Range Analysis | View, we can view the results graphically. Keywords: Ranging Analysis AO XLP XNLP Flexibility Index Utility Index References: None
Problem Statement: Aspen issues PIMS product major release, such as v7.1, v7.2, v7.3, v7.3.1, v7.3.2, v8.0, and v8.1. However within each major release, we may issue some patches with certain fixes. How does a user check which patch or which build they currently have installed or in use?
Solution: Open PIMS, from menu HELP | About Aspen PIMS. Next to the major release on the right, there is a series number which is the product patch/build number. In the screen below it is 18.60.6. Keywords: patch build version check number References: None
Problem Statement: How do I resolve the “Getting Solution” error message?
Solution: Please follow the steps below: · Perform a model cleanup once · Archive the model, unarchive it into a different folder, and try to execute · Delete the PMLOG11.MDB file and run again If the above solutions are not helpful: · Disable antivirus software and run PIMS again · Stop the Windows indexing service for the model folder and run PIMS again (KB 134228) Additionally, if you have BitLocker software installed, please make sure it is not interfering with the model folder. Keywords: Getting Solution References: None
Problem Statement: What is the cause of the error “‘System Transactions Diagnostic Trace’ threw an exception”?
Solution: This error pops up while trying to open PIMS, preventing the application from opening. The problem can be resolved by opening PIMS as an administrator. Keywords: System transaction error System transactions Diagnostic trace References: None
Problem Statement: How to formulate the LP (Linear Programming) model for a periodic problem?
Solution: In the refinery industry, planning for several time periods may be required to meet demands and increase profits. In PIMS, we can use P-PIMS (Periodic PIMS) to set up a multiple-period model, which is based on a linear programming system. To better explain the equations and relationships in the P-PIMS LP system, an example of a lumber company is listed below, which includes influence factors similar to those in the refinery industry, such as price variation, seasonal demand differences, storage capacity, etc. The Woodstock Company purchases and sells lumber. The price of lumber varies across seasons, so the manager decides to develop the most profitable plan for the company. The situation is as follows: 1. If lumber purchased is to be stored for sale in a later season, a handling and storage fee of $10 per 1000 board feet per season is incurred. 2. A maximum of 2 million board feet can be stored in the warehouse at any one time. 3. Purchase & Selling Price: Season Purchase Price Selling Price Maximum Sales Winter 410 425 1000 Spring 430 440 1400 Summer 460 465 2000 Autumn 450 455 1600 According to the above requirements, we can number the seasons 1=winter, 2=spring, 3=summer, 4=autumn. Let xi and yi be the amounts purchased and sold in season i (in 1000 board feet). Let zij be the amount stored in season i for sale in season j. The linear programming system for this periodic problem can then be set up as follows: The objective function is to maximize the total profit for the whole year: Z = -410x1 + 425y1 - 10z12 - 20z13 - 30z14 - 430x2 + 440y2 - 10z23 - 20z24 - 460x3 + 465y3 - 10z34 - 450x4 + 455y4 The objective function is subject to the constraints below: 1) Everything purchased in each season is eventually sold. x1 - y1 - z12 - z13 - z14 = 0 x2 - y2 + z12 - z23 - z24 = 0 x3 - y3 + z13 + z23 - z34 = 0 x4 - y4 + z14 + z24 + z34 = 0 2) The warehouse has a maximum storage capacity. 
x1 <= 2000 z12 + z13 + z14 + x2 <= 2000 z13 + z14 + z23 + z24 + x3 <= 2000 3) For each season, there is a limit on the maximum sales of lumber. y1 <= 1000 y2 <= 1400 y3 <= 2000 y4 <= 1600 4) By the definition of zij, at least zij must be sold in season j. z12 - y2 <= 0 z13 + z23 - y3 <= 0 5) All variables must be non-negative. xi >= 0 yi >= 0 zij >= 0 To solve this multiple-period linear programming problem, we can use the simplex method; specific solving steps can be found in KB141376. Keywords: P-PIMS LP system References: None
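The formulation above can be exercised numerically. The sketch below is not an LP solver; it simply evaluates the objective (profit) and checks the material-balance and maximum-sales constraints for a candidate plan. Warehouse-capacity checks are omitted for brevity, and the plan in the usage note is one feasible plan, not necessarily the optimum.

```python
# Illustrative sketch (not PIMS): evaluate the profit of a candidate plan
# for the Woodstock example and check its balance and sales constraints.

BUY  = {1: 410, 2: 430, 3: 460, 4: 450}   # purchase price per 1000 board feet
SELL = {1: 425, 2: 440, 3: 465, 4: 455}   # selling price per 1000 board feet
MAX_SALES = {1: 1000, 2: 1400, 3: 2000, 4: 1600}
STORAGE_FEE = 10                          # $ per 1000 board feet per season held

def evaluate_plan(x, y, z):
    """x[i] = purchases, y[i] = sales, z[(i, j)] = stored in season i for sale
    in season j.  Returns the profit; raises AssertionError if infeasible."""
    for i in range(1, 5):
        into_storage = sum(q for (a, _), q in z.items() if a == i)
        from_storage = sum(q for (_, b), q in z.items() if b == i)
        # xi - yi + (arrivals from storage) - (departures to storage) = 0
        assert x.get(i, 0) + from_storage == y.get(i, 0) + into_storage
        assert y.get(i, 0) <= MAX_SALES[i]          # yi <= seasonal max sales
    profit = sum(SELL[i] * y.get(i, 0) - BUY[i] * x.get(i, 0)
                 for i in range(1, 5))
    profit -= sum(STORAGE_FEE * (j - i) * q for (i, j), q in z.items())
    return profit
```

For example, the plan x = {1: 2000, 2: 1000, 4: 1600}, y = {1: 1000, 3: 2000, 4: 1600}, z = {(1, 3): 1000, (2, 3): 1000} satisfies the constraints and yields a profit of 83,000.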
Problem Statement: In which report can I find the matrix size of an Aspen PIMS DR model?
Solution: You may refer to the ExecutionLog.lst “Problem Size Characteristic” section to find out the matrix size. Rows = number of rows/constraints in the matrix Structural Columns = number of variables in the matrix Integer Variables and Special Ordered Sets are for MIP-related variables RHS Columns = number of Right Hand Side columns (which is always 1) Non-Zero Matrix Elements = number of matrix coefficients In addition, there is also a “Problem Size Characteristic” section in the Validation report “Setup report” section. The information in this section is a result of internal validation done by PIMS; we recommend users to always refer to the matrix size information in the Execution Log, not in the Validation Report. Here is an explanation of some of the key information in the Problem Size Characteristics of the Validation Report: Table ROWS = the table with the largest number of rows among the PIMS tables in the model. Table COLUMNS = the table with the largest number of columns in the model. Table ELEMENTS = the table with the largest number of matrix coefficients in the model. TOTAL MODEL TABLES = number of tables in the PIMS model plus internal tables created by PIMS in memory for other tables. Keywords: Matrix Size Problem Size Characteristic Execution Log Validation Rows Columns Tables Matrix Coefficients RHS References: None
Problem Statement: How do I turn off the Multi-Start graph when running multi-start on several cases at one time?
Solution: In Aspen PIMS V8.2, the Multi-Start graph can be turned off if desired. To do this, go to Model Settings | Non-linear Model (XNLP) | Global Optimization. When Perform Global Optimization is checked and Multi-Start is selected, you can click on the Multi-Start... button, which opens the MultiStartOptionsForm shown below. On this form, you can de-select the Show Plot option. When this is de-selected, the Multi-Start graphs will not be shown during model execution. Keywords: None References: None
Problem Statement: What factors can contribute to model convergence problems?
Solution: Factors that can contribute to model non-convergence include: 1. ATOL and RTOL too tight. By default, ATOL (Absolute Tolerance) and RTOL (Relative Tolerance) are 0.001 in the model setting. But in some models, this tolerance can be too tight for some recursed properties to converge. We can change the ATOL and RTOL in the Recursion>>Tolerances settings, or T. SCALE table can be used to change the tolerance for certain recursed property values. KB143405: When will Aspen PIMS-DR stop doing the recursion for properties? KB111993: How do SCALE, MIN, MAX, ATOL and RTOL in T.SCALE affect recursion? 2. Over-constrained models. For over-constrained models, please try to check the primal-dual report to see if there are large marginal values which are caused by tight constraints. 3. Weak pricing used in model. Weak pricing will lead to weak economic drive in the model. 4. Too many cascaded pools. Try to combine or delete unnecessary pool structures in the model. 5. Too much recursion. If users recurse and track all possible product specifications instead of just those that are truly relevant, PIMS will create unnecessary structure in the matrix. To identify if the recursion structure is necessary, please refer to KB125128: How to identify recursed properties that are not used downstream, to clear them from the model? 6. Parallel paths for materials to destinations To fix the parallel path/multipath issue, please refer to KB134339: What does the multipath message in the iteration log file mean? 7. Large number of periods in multiperiod models. Large number of periods will cause the matrix size to be very large, which could lead to the convergence issue. For how equations and constraints are created in P-PIMS model, please refer to KB141544: How PPIMS model is set up. Keywords: Non-convergence References: None
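As an illustration of item 1, a recursion pass is typically considered converged for a property when the change between passes is within the absolute or relative tolerance. The sketch below assumes that convergence test with the default ATOL and RTOL of 0.001; the exact test used internally by PIMS-DR may differ.

```python
# Sketch of the ATOL/RTOL convergence test implied by item 1 above.
# Illustrative only; the exact internal PIMS-DR test may differ.

def property_converged(previous, current, atol=0.001, rtol=0.001):
    """True if a recursed property's change between passes is within tolerance."""
    change = abs(current - previous)
    if change <= atol:                   # absolute tolerance test
        return True
    if previous != 0:                    # relative tolerance test
        return change <= rtol * abs(previous)
    return False
```

Under this test a property moving from 100.0 to 100.05 passes on the relative tolerance (0.05 ≤ 0.001 × 100), while a move from 10.0 to 10.5 fails both tests and forces another recursion pass.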
Problem Statement: When a client machine tries to run PIMS, it fails to access the license server, even though other machines can. What can be the cause?
Solution: In order for the Aspen application to run on the client machine, the machine has to be correctly configured so it can check out a license from the license server. This is implemented through the SLM tools. The SLM tools are installed in sub-folders under C:\Program Files\ (or C:\Program Files (x86)\ on a 64-bit machine), such as C:\Program Files (x86)\Common Files\Hyprotech\Shared. Run the SLM Configuration Wizard either by double-clicking the file 'SLMConfigWizard.exe' in this location, or from Start | All Programs | AspenTech | Common Utilities | SLM Configuration Wizard. Please refer to article 117820, from step 7 forward, to configure the license. If you have trouble accessing the server when you try to configure the license, check the versions of the SLM tools on the server and on the client machine. The SLM version on the client machine has to be lower than or the same as the SLM version on the server. For example, if the server SLM version is V7.3 and the client SLM version is V8.0, it will not work. In that case, you need to uninstall the SLM tools V8.0 on the client machine and install SLM V7.3, or upgrade the license server to V8.0. Keywords: License server SLM PIMS installation access configuration References: None
Problem Statement: I am running a model highly influenced by crude slate. We are a small refinery only processing 2 cargoes of crude per month and the envisaged crude changes the stability of our model from one case to another. We have discovered that this is mostly due to the PGUESS being too far away from the expected result. Unfortunately we can't refresh PGUESS by importing it from results as PGUESS is where our opening inventories qualities are defined and that would make the results of our calculation uncomparable with other cases. What are our options to preserve the baseline and make our runs more stable?
Solution: The obvious solution here would be to switch to defining the opening inventory quality in PINV. If that is not a viable option, one alternative would be to use the 999 PGUESS entry for the streams defined in the assays table and to couple this with a CRDDISTL update, which can then be done via table CASE. The 999 entries are replaced by the weighted average of the figures found in ASSAYS. The ESTxxx entries in CRDDISTL are used here as the weighting factors, so these need to be updated together with the crude-scenario update unless tables CRDTANKS and CRDALLOC are used. This is safe to do because the CRDDISTL values only define the estimated charge used to create the first guesses of distributions. It is still important to have numeric entries, regardless of their values, at all the needed places, as this instructs PIMS to create crude unit submodels. Crudes which will not be used can be represented with a very small value or a zero, and the crudes to be used with an actual estimated rate. One needs to make sure that none of the entries in the opening inventories is a material coming directly from the crude unit, as then the original consideration would not be met. Keywords: PGUESS CRDDISTL 999 Model stability Opening inventories quality References: None
Problem Statement: What does the invalid data in Excel during table loading error mean?
Solution: When PIMS-AO (XLP) reads the tables when you open a model, you may see the new message: “There was invalid data in Excel during table loading. These cell entries will be ignored if you execute the model.” This is not an error in terms of solving the model. It appears whenever there is data in an Excel spreadsheet that PIMS cannot process. Examples of this may be #N/A, #REF, #VALUE, etc. This message is to make you aware that this data will not be processed. Clicking OK in the notification dialog will allow the model to run as expected. The cell with the unknown data is treated as if the cell was blank. It is recommended that you review your Excel spreadsheets to find and correct the data. Keywords: None References: None
Problem Statement: When importing an smc file into Aspen Petroleum Scheduler (APS), would -999 placeholders that derived their values from PCALC/PCALCB or BLNPROP tables have these calculations and values automatically brought over from Aspen PIMS?
Solution: The import is not designed to bring the fixed properties and PCALC factors over with SMC files into APS. The fixed properties and PCALC'ed properties from PIMS imported with SMC files can be used to resolve the -999's in the SMC submodel matrix, because those values are input values. The fixed properties and PCALC factors in the SMC files can only be used to calculate SMC unit product properties where mapped properly and where the product properties are not calculated by other means, such as recursion structures or UBML functions. In other words, the fixed properties will be used for mapped product properties where not supplied by a recursion structure. The PCALC factors will be used to provide mapped product properties where not supplied by a recursion structure. The PCALC factors are also not used if the product properties are calculated by UBML functions. To resolve the -999's (input), these property values must be mapped to inputs that are calculated before the SMC unit (most likely as output from another unit, or possibly in a PREP sheet). Keywords: -999 placeholder placeholders PCALC BLNPROP Properties SMC APS References: None
Problem Statement: In the Scenario Evaluation Tool, for Override, Selection, and Parametric scenarios I see there are two columns: Activity and MarginalValue. What are these for and how do I get them populated?
Solution: These columns show the base-case solution Activity and Marginal Value for all the scenario tables (BUY, SELL, PROCLIM, and CAPS). To get the values populated once you have configured and saved your set file: 1. Run as a standalone case the one you configured as Base Case for the scenario. In the following example, the Base Case is Case 1 as defined in table CASE. 2. Go back to the Scenario Evaluation Tool and open the set file. Select any type of scenario (Override, Selection, or Parametric). Make sure that the columns Activity and MarginalValue are selected for the table you want to modify. In the example below, Activity and Marginal Value are selected as columns for table CAPS in the Override scenario. You should be able to see the values populated. Keywords: scenario dialog box scenario table SET node References: None
Problem Statement: How do I interpret Quality Change By Occurrence and Quality Change By Relative Change in the Solution Tracking?
Solution: In the Solution Tracking, the Quality Change By Occurrence and Quality Change By Relative Change results are included to track the quality data. 1. Quality Change By Occurrence: lists the top 50 properties ranked by how many times the property value was changed. 2. Quality Change By Relative Change: lists the top 50 properties ranked by how much the property value was changed. Keywords: References: None
Problem Statement: Warning message "Only spreadsheet report is supported" while trying to generate a Case Comparison for files with no/multiple Solution IDs selected. For example, the warning message appears here while generating a case comparison of cases 1 and 2.
Solution: The reason for this warning message is that the cases were not run at the same time. The information in the Case Comparison report will be the same; however, only the .xls format is available. If you would like the HTML or TXT formats, then you must re-run all the desired cases in a single execution. Keywords: Case comparison Warning References: None
Problem Statement: Most refiners that process both low sulphur and high sulphur crudes and produce fuel oil, or any other product which is blended from straight-run streams or where straight-run stream properties are critical for the product meeting specifications, tend to split their crude units into logical units which process either low or high sulphur crudes. Later on, only streams from low sulphur crudes are allowed to enter the low sulphur (usually 1 wt% S) fuel oil. This is done for a number of reasons: preventing teaspoon blending, disabling blending which is infeasible from a scheduling or tank-farm point of view, and increasing model stability by removing infeasible options as a means of helping the solver reach an optimal solution more easily. When comparing planned and actual refinery yields, refiners who process different types of crude may notice that their actual ratio of 1% vs 3% sulphur fuel oil is different than planned. This is due to the fact that crude and vacuum residue from the previously processed crude are present in the heel of the tank and in the pipes. Refiners processing predominantly cargoes of high sulphur crude and occasionally cargoes of low sulphur crude may find that part of their low sulphur crude, and the streams coming from its processing, get mixed with higher sulphur materials. This phenomenon is inevitable; it cannot be optimized, and sometimes it is necessary to acknowledge it. Most of the time the amount of low sulphur fuel oil is lower than the one in the PIMS results. It must be noted that for crudes where sulphur is low enough, this can even mean that part of the high sulphur crude may end up in low sulphur fuel oil, but the usual case is a lower-than-planned blending of low sulphur fuel oil. High and low sulphur crudes are just an example here. A similar situation can occur when only some crudes are suitable for bitumen or lubes production and a portion of them becomes unavailable for that purpose.
Solution: There is a fairly simple way to present this in PIMS and to assure that this phenomenon is properly represented in the optimization. We can use table RATIO to force a proportional part of each low sulphur crude to the high sulphur unit.
Note that this proportion varies with cargo size and may have to be reduced when evaluating scenarios where we process two low sulphur crudes one after another. In the example of processing crude Siberian Light with code SIB and logical crude submodels named SCR1 (low sulphur) and SCR2 (high sulphur), we would create the following structure in table RATIO:

* TABLE RATIO
*
            TEXT              RT1
***
SCR1SIB     Sib Light in LS     9
SCR2SIB     Sib Light in HS     1
***

Do note that the 3-letter code for the ratio should be one that is not used elsewhere in the model, and that the crude in question has to be allowed by CRDDISTL to be processed in both crude units. For the not-so-often-encountered scenario where part of the high sulphur material may end up in the low sulphur fuel oil owing to the very low sulphur of the light crude, we may use table CRDBLEND to allow the high sulphur crude to be partially blended with the low sulphur crude in question, and then use RATIO to connect this with the processing level of that low sulphur crude. Keywords: Low sulphur crude Overlapping RATIO CRDBLEND Plan vs Actual References: None
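The split that the RT1 column enforces can be illustrated with a short sketch. The 9:1 weights are taken from the RATIO example; the cargo size is hypothetical.

```python
# Sketch: how a RATIO row apportions a crude between two logical crude units.
def split_by_ratio(total, weights):
    """Split a total charge among destinations in proportion to RATIO weights."""
    s = sum(weights.values())
    return {dest: total * w / s for dest, w in weights.items()}

# 9:1 weights from the RT1 column; 50 kbbl cargo size is hypothetical.
ratio = {"SCR1SIB": 9, "SCR2SIB": 1}
print(split_by_ratio(50.0, ratio))  # {'SCR1SIB': 45.0, 'SCR2SIB': 5.0}
```

If the contamination proportion changes with cargo size, only the weights need to be revised.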
Problem Statement: What does uncrunching matrix in the execution log mean?
Solution: This message is simply an indication that the optimizer is returning to the original matrix after using the presolved matrix. It is normal to see this when the Presolve function is active. There is no problem and no action is required. Keywords: uncrunching matrix References: None
Problem Statement: What is the difference between T.Gases and T.Units while using them to define units for gases?
Solution: There are the following things to take note of while using T.GASES and T.UNITS: 1. You can only use one of T.GASES or T.UNITS for a particular product. 2. For T.GASES, the conversion factor used is taken from the General Settings tab in the model tree. By default it converts from BBLS (volume) to KSCF (gas). You can enter user-defined units and conversion factors to define gas units; however, the same unit and conversion factor will be used for all the gases defined. 3. T.UNITS uses the conversion factor and units defined by the user in the table, which can be specific to an individual product depending on the requirements. If you want to define units for a specific gaseous product, it is recommended to use T.UNITS, as this table gives the user more flexibility. T.GASES can be used when you need to change the units of all the gases in the process. Keywords: Table GASES Table UNITS Units Definition References: None
Problem Statement: One important step in the transition between Distributive Recursion (DR) and XNLP is the comparison of results between the two methods. As you have to run the model twice, changing the XNLP setting in between, the best way to do this analysis is with a Case Comparison report. From Aspen PIMS version 7.1 on, the Case Comparison report is created from the Results.mdb database, therefore you have to follow these instructions to create the Case Comparison for cases created in different runs.
Solution: To be able to run a Case Comparison between two cases, one with DR, the other with XNLP follow this procedure. 1. Change the Output Database Maintenance selection to Keep Existing or Only Unique Cases as shown below, so that different cases from different runs are stored in the database. 2. Create two dummy cases for DR and XNLP Set up a Case Table with two dummy cases, one for DR, the other for XNLP, as shown. 3. Run the cases in two runs First deactivate XNLP (uncheck the setting under Model Settings | Miscellaneous | XNLP) and run Case 1. Then activate XNLP and run Case 2. 4. Run the Case Comparison Report in .xls format Go to Run | Case Comparison and create the Case Comparison report. Note that only the .xls format is available for cases created in different runs. Keywords: Case Comparison XNLP PIMS AO Output Database Results.mdb References: None
Problem Statement: What does the server busy message mean when running Excel Automation and how do I resolve it?
Solution: The Server Busy message is displayed when Excel Automation is running slowly. This could be due to add-ins, formula errors, access issues, etc. Some tips for speeding up Excel Automation include:
Minimize Excel add-ins
Minimize conditional formatting in the Aspen PIMS input tables
Minimize macros in the Aspen PIMS input tables
Minimize the size of the Aspen PIMS input tables (eliminate obsolete data or worksheets within the workbook)
Save the PIMS input file in the same Excel version format as resides on the machine. Be aware that if you change an Excel file from the 2003 or earlier format (*.xls) to the 2010 or later format (*.xlsx), then the file must be re-attached to the model tree because Aspen PIMS will still be pointed to the .xls file.
The Server Busy message can also be seen if on-access virus scanning is checking the file while Aspen PIMS is using Excel Automation to read it. This can be avoided by setting the Aspen PIMS model directories as an exception to on-access virus scanning. Of course, the files should still be periodically scanned, and you can set that up on a desired frequency. Keywords: Server Busy, Excel References: None
Problem Statement: My model generates the error message during data validation: *** Error. Is Both an Input and Output Tag in Correlation EPA. It is not indicating what tag is duplicated, so how do I troubleshoot this?
Solution: This can happen if you have used an * to tell PIMS to ignore some of the input/output variable rows in Table ABML. For example, if you don't intend to use some of the input or output variables for a particular correlation, you may enter an *, as seen in the left figure below. This can also happen if some of the input/output variable rows are omitted, as shown in the right figure below. The resolution is to provide variable names for all the input and output variables for the correlation. You can fill in names that do not correspond to anything in your matrix if it is not a variable that you use or track. For example: Keywords: ABML, correlation References: None
Problem Statement: How does Aspen PIMS generate a matrix for entries under IPRICE and ICOST?
Solution: IPRICE and ICOST are used in Aspen PIMS as the infeasibility breakers in the model. For example, in Table BUY in the Volume Sample model, IPRICE for crude BAC is $89/BBL, which means BAC can be sold at $89/BBL while it is bought at $90/BBL. The purpose is to eliminate the model infeasibilities although penalties will be added to the OBJFN. According to the entries in IPRICE and ICOST, PIMS will automatically generate additional SELLxxx and PURCxxx columns for corresponding feedstock or product in the matrix. For example, in Volume Sample model, PIMS will generate both PURCBAC, SELLBAC to calculate the objective function. Similarly, SELLUPR and PURCUPR are created according to the ICOST entry in Table Sell. Keywords: Matrix ICOST IPRICE References: None
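The penalty mechanics of the PURC/SELL pair that PIMS builds from an IPRICE entry can be sketched numerically. The $90 cost and $89 IPRICE come from the Volume Sample example above; the quantities are hypothetical.

```python
# Sketch: net objective effect of the SELLBAC column PIMS generates from IPRICE.
def net_cost(purchased, resold, cost=90.0, iprice=89.0):
    """Cost of crude bought minus revenue from the IPRICE sell-back column."""
    return purchased * cost - resold * iprice

# Reselling 10 BBL of a 100 BBL purchase leaves a $10 infeasibility penalty
# relative to having bought only the 90 BBL actually needed:
print(net_cost(100, 10) - net_cost(90, 0))  # 10.0
```

The $1/BBL spread is what makes the infeasibility breaker unattractive to the optimizer except when it is needed to keep the model feasible.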
Problem Statement: Why are Excel outputs not generated if I choose .xlsx as the output spreadsheet extension?
Solution: There is an Excel issue related to Office 2010, and probably 2007 as well. The problem is that Microsoft has configured Excel 2010 such that files saved in some older versions of Excel will be blocked from being opened or saved. If opened, they will be opened in Protected View. An example of this scenario would be the output file not being generated because the file was in the Excel 2 format and was blocked from being opened through automation. To fix this issue, please do the following steps: 1. In Excel, go to File | Options | Trust Center. 2. Click on Trust Center Settings. 3. Go to File Block Settings. 4. Locate Excel 2 Worksheets and uncheck the Open check box. Keywords: Excel output not generated Excel protective view error Excel 2 error .xlsx output References: None
Problem Statement: How can I change a recursive quality to a fixed quality via table CASE?
Solution: You can use the keyword EMPTY in the CASE table to eliminate structure that exists in the base model tables. We will use the recursion structure below to demonstrate. This represents an excerpt of the base model table for SXYZ.

TABLE SXYZ
          TEXT    XXX    YYY    ZZZ
RBALZZZ            -1     -1      1
RSPGZZZ          -999   -999    999
RSULZZZ          -999   -999    999

To eliminate the SUL recursion, but keep the SPG recursion, I can enter the following in Table CASE. I also provide the fixed quality in Table BLNPROP (in this case, 0.5).

CASE 1 Eliminate recursion of SUL of ZZZ
TABLE SXYZ
          TEXT    XXX    YYY    ZZZ
RSULZZZ         EMPTY  EMPTY  EMPTY
*
TABLE BLNPROP
          TEXT    SUL
ZZZ               0.5

Note that if ALL the recursive qualities of a recursed pool are eliminated, then the model will give error 250: Recursed Stream XXX Has No Recursed Properties E250. This is because there is still a row name ZZZ in Table PGUESS. The resolution is either to remove the row from Table PGUESS of the base model, or to use table REPORT to downgrade E250 from an error to a warning. The Table REPORT entry can be done in the base model or in the CASE table. Keywords: None References: None
Problem Statement: Are there any conversion issues while trying to send data from a weight based PIMS model to a volume based APS model?
Solution: If you are seeing and comparing volumes, PIMS weight data have to be converted. It is better to do it in PIMS; create duplicate tags on the weight basis and send those to APS. If densities of the streams do not vary very much it is recommended to use constant conversion factors. Keywords: PIMS APS Integration, weight based to volume based conversion References: None
Problem Statement: Table BLNCAP allows the user to impose a capacity limit on each blender's volumetric throughput for each set of blenders. It also allows the user to impose a component pumping limit on the blenders. In this solution, we will demonstrate how it works.
Solution: We are using the Volume Sample model as the example here. We will demonstrate how PIMS works based on the specifications in Table BLNCAP. Let's assume we have 2 blenders. Blender 1 is defined as GB1 in PIMS. The maximum capacity is 100 for gasolines LRG, UPR and URG. The row names NC4, LN1, etc., are the 3-character component names. The limit of 50 is the capacity limit for the component-pumping constraint on the blenders. In the matrix, you will find additional equations created based on those constraints. Row BCAPGB1: BVBLURG + BVBLUPR <= 100.000000. Since the LRG production is disabled in SELL, the BVBLLRG term is absent. Row BCAPNC4: 0.1 * FBLNLPG + BNC4URG + BNC4UPR <= 50.000000. The limit is imposed only on the total flow of NC4 that goes to the blenders. From the VBALNC4 row, we see that NC4 also goes to submodel SLPR. Keywords: BLNCAP Blender capacity Pump capacity blend blender limit References: None
Problem Statement: While working with PIMS-AO, sometimes the Internal Tables disappear from the model tree
Solution: If I close the table grid after running the model and then select “Restore Table Grid”, the internal tables are still present and are displayed again. Note that if “Reload Table Grid” is selected, PIMS reloads the model data from scratch, so the internal tables are gone; you cannot see them unless you regenerate the matrix. This is by design: if there have been changes in the tables which prompted you to reload them, then the internal tables may have changed, and therefore the matrix must be regenerated before they can be displayed. Keywords: grid, load, reload, restore References: None
Problem Statement: What is the matrix structure for process limits?
Solution: To add process limits in the PIMS model, users need to set up ZLIMxxx or Zquaxxx rows in the submodel and enter MIN/MAX limits in Table PROCLIM. The example below demonstrates this structure for ZSULCFP, which controls the sulfur content of the cat feed pool (CFP). The above table entries will prompt the matrix structure shown below. PIMS will internally create three control rows (E, L, and G) to set up the stream quality limitations. Keywords: PROCLIM References: None
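A PROCLIM MAX on a pooled quality is, in effect, a constraint that the flow-weighted average quality of the pool not exceed the spec (PIMS writes this as linear rows in the matrix). A minimal sketch, with hypothetical cat-feed components and sulfur values:

```python
# Sketch: what a ZSULCFP MAX limit enforces on the cat feed pool.
def pool_quality_ok(vols, quals, max_spec):
    """Check a PROCLIM-style MAX: flow-weighted average quality <= spec."""
    total = sum(vols.values())
    avg = sum(vols[s] * quals[s] for s in vols) / total
    return avg <= max_spec

vols = {"LV1": 10.0, "HV1": 20.0}   # hypothetical component volumes
sul  = {"LV1": 0.8,  "HV1": 1.4}    # hypothetical sulfur, wt%
print(pool_quality_ok(vols, sul, 1.3))  # avg = (8 + 28)/30 = 1.2, so True
```

In the LP itself the same condition is kept linear, i.e. sum(SUL_i * V_i) - spec * sum(V_i) <= 0, which is what the internally generated control rows represent.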
Problem Statement: If you are using BLNMIP (multiple integer tables for blending), you are most likely familiar with the possibility of limiting the number of components that enter a blend. Usually this is done because in-line blending equipment has a limited number of ports through which components are blended. You may also wish to do the same with a pool in your model, as sometimes you have a limited number of tanks or pipes from which you can create a feed for a unit.
Solution: The solution presented here is based on the SCFP submodel pool, consisting of 6 components, which can be found in the Volume Sample model present within the AspenTech folder of Public Documents on every computer with Aspen PIMS installed. This pool has 6 potential streams as components, and the base-case solution uses 3 of them. The steps presented here will limit this to 2 components only. In order to do this, we will create 6 new columns in the SCFP submodel and designate these columns as bivalent variables through table MIP. Bivalent variables by definition can only take values of 0 or 1. Furthermore, we will create an L-row for each stream, connecting its activity with the activity of one of the bivalent variables. Here we use a very high number for the coefficient on the bivalent variable. This way, the activity of the column through which a stream enters the pool will either be zero (if the bivalent variable activity is zero) or it will be less than the value of the coefficient. Do note that it is critical to choose a coefficient value high enough not to constrain the model. Next, we build an additional L-row in the submodel collecting all of the bivalent variable activities. In table ROWS we assign an RHS (right-hand side) to this row which limits the number of streams. Since the L-row total has to be lower than or equal to this number, and bivalent variables can only be 0 or 1, we achieve that at most the desired number of bivalent variables are active, and in this way we limit the number of pool components.
The structures that need to be built in the Volume Sample model look like this:

* TABLE SCFP                                                     Table of Contents
* FCC Feed
          TEXT         LV1    LV2    HV1    HV2    DCG    AR2   CFP  TOT    BV1    BV2    BV3    BV4    BV5    BV6
***
VBALLV1   CD1 LVGO       1
VBALLV2   CD2 LVGO              1
VBALHV1   CD1 HVGO                     1
VBALHV2   CD2 HVGO                            1
VBALDCG   DC GO                                      1
VBALAR2   CD2 AR                                            1
*
VBALCFP   FCC Feed      -1     -1     -1     -1     -1     -1
*
ETOTFED   Total Feed    -1     -1     -1     -1     -1     -1          1
LPCTAR2   Max AR2                                          100        -5
***
LBV1CFP   CD1 LVGO   0.001                                                 -1000
LBV2CFP   CD2 LVGO          0.001                                                 -1000
LBV3CFP   CD1 HVGO                 0.001                                                 -1000
LBV4CFP   CD2 HVGO                        0.001                                                 -1000
LBV5CFP   DC GO                                  0.001                                                 -1000
LBV6CFP                                                 0.001                                                 -1000
LBV1BV2                                                                        1      1      1      1      1      1
***

Table ROWS:

* TABLE ROWS
*
          TEXT    RHS
***
LBV1BV2             2
***

Table MIP:

* TABLE MIP
*
          TEXT                        BV
***
SCFPBV1   CD1 LVGO 670-680F            1
SCFPBV2   CD2 LVGO 650-680F            1
SCFPBV3   CD1 HVGO 680-1050F           1
SCFPBV4   CD2 HVGO 680-1000F           1
SCFPBV5   Delayed Coker Gas Oil        1
SCFPBV6   CD2 Atmos Btms 650+          1
***

By looking at the solution we can confirm that the base case, which previously contained 3 streams in the pool, now has 2 streams only. Keywords: Limiting the number of pool components MIP References: None
Problem Statement: How does Aspen PIMS Advanced Optimization using XLP initialize the first optimization?
Solution: In Aspen PIMS, the initialization process for XLP involves the default initial settings, the values from Table PGUESS, and the input solution file. PIMS follows these 3 steps to initialize the optimization: 1. PIMS sets all values to the default initial settings, which will mostly affect non-qualities, since all recursed quality variables must have a PGUESS entry. 2. PIMS updates the recursed quality variables based on the PGUESS values and does some initialization. 3. PIMS opens the solution file and searches for exact matches in names to overwrite the values from steps 1 and 2. Keywords: First Initialization Input References: None
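The three-step layering can be sketched as successive dictionary overrides, later sources winning over earlier ones. The variable names and values below are hypothetical.

```python
# Sketch of the XLP initialization order: defaults, then PGUESS, then the
# input solution file (only exact name matches overwrite earlier values).
defaults = {"VBALCFP": 0.0, "RSULCFP": 0.0}   # step 1: default initial settings
pguess   = {"RSULCFP": 1.2}                   # step 2: Table PGUESS entries
sol_file = {"VBALCFP": 250.0}                 # step 3: matches from the solution file

init = {**defaults, **pguess, **sol_file}     # later sources overwrite earlier ones
print(init)  # {'VBALCFP': 250.0, 'RSULCFP': 1.2}
```

Note how a name present in the solution file wins over both the default and PGUESS, while a PGUESS-only name keeps its PGUESS value.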
Problem Statement: What is the purpose of the Nqqqxxx and the Xqqqxxx rows in T. Sxxx?
Solution: An example would be if there was something like an additive that the user needs to insert into the spec row. Note that it needs to be a normal blend that is set up in Tables BLENDS, BLNMIX, BLNSPEC. This would only be necessary if the PIMS additive method (Table ADDITIVE) could not be used because the response curve is not convex. An example of the structure would be something like this: “aaa” is a quality. If necessary, the 0.1 coefficient could be changed based on Table CURVE/NONLIN to adjust the response the blend has at various levels of aaa. So you would populate the data for the non-convex curve in tables CURVE/NONLIN. Keep in mind that this need is extremely rare; while it is allowed, I have not seen a model with things set up this way. Note: these rows are applicable only for a non-convex relationship (like a bell curve) between a quality and the additive. They should be defined in the submodel based on whose feed the additive quantity will vary. Keywords: Quality restriction in Submodel table. Nqqqxxx and Xqqqxxx in T. Sxxx Blending rows in T. Submodels References: None
Problem Statement: What does the Primal Simplex method in the execution log of PIMS-DR mean?
Solution: Primal Simplex is one of the common algorithms for solving Linear Programming (LP) problems. Details on the algorithm can be found in texts such as: Advances in Linear and Integer Programming, Oxford Science, 1996, J. E. Beasley. PIMS refinery LPs are large-scale problems (4,000-75,000 variables) and cannot easily be used to demonstrate the solving process, so this article presents a simple LP model (only two variables) and explains how LPs are solved using the Simplex method. The article has two purposes: 1) to show how to formulate the LP problem from a verbal description of the problem statement (the formulated problem is often referred to as the LP model or the matrix), and 2) to show how to solve the formulated problem using the primal Simplex method. The content of this article is academic.

The following is a simple 2-dimensional LP (2 variables). Wood Co is a furniture manufacturing company that manufactures tables and chairs. The company makes a profit of $8/table and $6/chair. The resources required to make tables and chairs, lumber and man hours, are in limited supply. Each table requires 4 ft. of lumber and 2 man hours; each chair requires 2 ft. of lumber and 4 man hours. There is also a market restriction that Wood Co should make at most 12 tables. A summary of the problem is listed in the table below.

                 TABLE     CHAIR     AVAIL.
RESOURCES:
  LUMBER         4 BD FT   2 BD FT   60 BD FT
  LABOR          2 MHRS    4 MHRS    48 MHRS
DEMAND:
  MAXIMUM        12
PROFITS:         $8        $6

Based on the above information, the LP formulation can be written as:

Maximize: 8*TAB + 6*CHR
Such that
4*TAB + 2*CHR <= 60   (Lumber constraint)
2*TAB + 4*CHR <= 48   (Labor constraint)
TAB <= 12             (Limitation on table demand)
TAB, CHR >= 0

Solving the above problem using the Simplex method is given below.

Step 1: Convert the LP to standard form
The LP problem is converted to standard form by adding slack variables:

Row 1   4*TAB + 2*CHR + S1 = 60
Row 2   2*TAB + 4*CHR + S2 = 48
Row 3   TAB + S3 = 12
Row 0   Z - 8*TAB - 6*CHR + 0*S1 + 0*S2 + 0*S3 = 0

The tableau representation is given below:

            TAB   CHR    S1    S2    S3   RHS
Row1  S1      4     2     1     0     0    60
Row2  S2      2     4     0     1     0    48
Row3  S3      1     0     0     0     1    12
Row0  C      -8    -6     0     0     0     0

The Simplex algorithm begins with an initial basic feasible solution and attempts to find a better solution. Here the initial basic variables are S1, S2, S3 and the non-basic variables are TAB, CHR.

Step 2: Is the current basic feasible solution optimal?
Once we obtain a basic feasible solution, we determine whether it is optimal by Rule 1.
Rule 1: If all variables have a nonnegative coefficient in Row 0, the current basic solution is optimal. Otherwise, pick a variable with a negative coefficient in Row 0 as the entering variable.
In this problem, TAB and CHR have negative coefficients in Row 0, which means that increasing the number of tables or chairs will increase the objective function, so the current solution is not optimal.

Step 3: Determine the entering variable and pivot row
We choose the entering variable and the pivot row by Rule 2.
Rule 2: For each Row i (except Row 0) with a strictly positive entering-variable coefficient, compute the ratio of the right-hand side to the entering-variable coefficient. Choose as the pivot row the one with the MINIMUM ratio.

In this problem, we choose TAB as the entering variable, which increases the objective function the most, and Row 3 as the pivot row:

            TAB   CHR    S1    S2    S3   RHS   ratio
Row1  S1      4     2     1     0     0    60      15
Row2  S2      2     4     0     1     0    48      24
Row3  S3      1     0     0     0     1    12      12
Row0  C      -8    -6     0     0     0     0

Step 4: Find a new basic feasible solution by pivoting in the entering variable
In this case, we have the new basic feasible solution S1, S2, TAB:

            TAB   CHR    S1    S2    S3   RHS   ratio
Row1  S1      0   0.5  0.25     0    -1     3       6
Row2  S2      0     2     0   0.5    -1    12       6
Row3  TAB     1     0     0     0     1    12
Row0  C       0    -6     0     0     8    96

To find the optimal solution, repeat steps 2 to 4; the optimal solution is given by the RHS values of the final tableau:

            TAB   CHR    S1      S2     S3   RHS
Row1  S1      0     0  0.25  -0.125  -0.75     0
Row2  CHR     0     1     0    0.25   -0.5     6
Row3  TAB     1     0     0       0      1    12
Row0  C       0     0     0     1.5      5   132

So in this LP problem, the final optimal solution is that Wood Co should manufacture 6 chairs and 12 tables to maximize profit, and the final objective function (profit) reaches $132. Keywords: Linear Programming Simplex Method References: None
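The Wood Co result above can be cross-checked without running a Simplex implementation: a 2-variable LP attains its optimum at a vertex of the feasible region, so enumerating the constraint intersections and keeping the feasible one with the best objective gives the same answer.

```python
# Cross-check of the worked example by vertex enumeration of the Wood Co LP.
from itertools import combinations

# Constraints as a*TAB + b*CHR <= rhs (the last two encode TAB >= 0, CHR >= 0)
cons = [(4, 2, 60), (2, 4, 48), (1, 0, 12), (-1, 0, 0), (0, -1, 0)]

def vertices():
    """Yield feasible intersection points of each pair of constraint boundaries."""
    for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
        det = a1 * b2 - a2 * b1
        if det == 0:
            continue  # parallel boundaries, no intersection
        t = (c1 * b2 - c2 * b1) / det      # Cramer's rule
        c = (a1 * c2 - a2 * c1) / det
        if all(a * t + b * c <= rhs + 1e-9 for a, b, rhs in cons):
            yield t, c

best = max(vertices(), key=lambda p: 8 * p[0] + 6 * p[1])
print(best, 8 * best[0] + 6 * best[1])  # (12.0, 6.0) 132.0
```

This agrees with the tableau result: 12 tables, 6 chairs, profit $132. The Simplex method finds the same vertex, but by walking along edges of the feasible region instead of enumerating all intersections.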
Problem Statement: When I run the attached Gulf Coast sample model in PIMS, I find the messages below in the iteration log: DISTRIBUTION MN1 TO NHT VALUE 1.13 EXCEEDS 1.00 DISTRIBUTION MN1 TO NHF VALUE 1.13 EXCEEDS 1.00. Why does the error distribution of MN1 to NHT and NHF exceed 1, and how can this be resolved?
Solution: MN1 is a deferred cut split from whole naphtha WN1 in submodel SNSP and fed into the Naphtha Hydrotreater SNHT. Referring to the matrix analyzer, we find a difference between the activity of SCD1MN1 (pool collector column for the recursed quality of MN1 = 5089.93 bbl) and both SNHTMN1 (amount of MN1 fed to SNHT = 5767.24 bbl) and SNSPMN1 (amount of MN1 produced from SNSP = 5767.24 bbl). This difference is due to the swing-up amount of the Naphtha/Kerosene swing cut NK1 not being accounted for in SCD1MN1. NK1 is a swing cut that can swing up to combine with WN1, which in turn is split out as LN1 and MN1. Hence, when NK1 swings up to combine with WN1, it actually swings up to MN1. When PIMS calculates the error distribution coefficients of MN1 to the destinations NHT (in SNHT) and NHF (in SNSP), they are calculated as: error distribution coefficient of MN1 to NHT = SNHTMN1/SCD1MN1 = 5767.24/5089.93 = 1.13; error distribution coefficient of MN1 to NSP = SNSPMN1/SCD1MN1 = 5767.24/5089.93 = 1.13. To resolve this, the user can add the structure below: 1. In T. ROWS, add an E-row to drive the yield of the NK1 swing-up amount in SCD1. 2. In T. SNSP, drive the NK1 swing-up amount here; also, for the properties of MN1, take the property of the swing into account by adding RBALMN1 and RprpMN1 rows intersecting the NK1 swing-up column (these RBALMN1 and RprpMN1 rows were originally generated by the crude architecture from T. ASSAYS IprpMN1 rows). This structure resolves the “distribution coefficient exceeds 1” issue and also correctly reflects the recursed properties of MN1 by taking the portion of the NK1 swing into account. Keywords: Error Distribution Coefficient Swing Cut Defer cut Recursion Gulf Coast References: None
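The coefficient reported in the log can be reproduced directly from the matrix-analyzer activities quoted above:

```python
# Reproducing the error distribution coefficient PIMS reports for MN1.
scd1mn1 = 5089.93   # pool collector column activity for MN1
snhtmn1 = 5767.24   # amount of MN1 fed to SNHT

coeff = snhtmn1 / scd1mn1
print(round(coeff, 2))  # 1.13
```

Any time the destination flows of a recursed stream sum to more than the pool collector activity, the distribution coefficients exceed 1, which is exactly what the missing NK1 swing-up volume caused here.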
Problem Statement: From the Aspen PIMS menu, select Model | Open and select any model which has not yet been converted to V8.2. Choose Archive in the next screen, and Aspen PIMS may pop up the window below; the model is not opened. What does this mean? “Unable to find table in model: Volume Sample. This error usually occurs because of an invalid specification in table TABLIB.”
Solution: When a table is suppressed on the Aspen PIMS model tree, PIMS is not going to use that file when generating the matrix. However, when PIMS archives the model before conversion to v8.2, PIMS requires that all the tables attached to the model tree exist - including the suppressed ones. So the message will appear if such a file does not exist in the model folder. To resolve the issue, either remove the file from the model tree or put the file back in the model directory. Keywords: archive, suppressed, suppress, exist, convert, conversion References: None
Problem Statement: How do I use Table CASE in 3rd-normal format?
Solution: An alternate format of table CASE is fully supported. That form is known as 3rd-normal. In that format, the column names in the table are constant and the rows contain the data for each entry to be revised.

Predefined Columns:
· CASE
· TITLE
· MODIFIES
· GENERATE
· EXPERT
· TABLE
· COLUMN
· ROW
· VALUE
· PERIOD

Example Table (one record per row):

CASE  TITLE  MODIFIES  GENERATE  EXPERT  TABLE  COLUMN  ROW  VALUE  TEXT  tb  PERIOD  REPLACE  REPLACEALL
1  BASE CASE
1  ALTAGS  TEXT  DSX  DSN
2  MIN BAC  1
2  tfs22  BUY  MIN  BAC  3
3  FCCU CAP
3  CAPS  MAX  CCCU  33  1
4  REF YLD  3
4  SLPR  R90  VBALR90  -0.881
4  SLPR  R94  VBALR94  -0.8525
4  SLPR  R98  VBALR98  -0.82
4  SLPR  R02  VBALR02  -0.7822
5  MIN BAC
5  BUY  MIN  ANS  25.7741
5  BUY  MIN  NSF  14.2259
5  BUY  MIN  TJL  0
5  BUY  MIN  ARL  40

Keywords: 3rd Normal Form Table Case References: None
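Because every row in the 3rd-normal form carries its own CASE number, the records group naturally back into per-case updates. A minimal sketch, using a CSV rendering of a few records from the example above (the TABLE/COLUMN/ROW/VALUE subset only):

```python
# Sketch: grouping 3rd-normal CASE records into per-case update lists.
import csv
import io

rows = """CASE,TABLE,COLUMN,ROW,VALUE
2,BUY,MIN,BAC,3
3,CAPS,MAX,CCCU,33
5,BUY,MIN,ANS,25.7741
5,BUY,MIN,NSF,14.2259
"""

cases = {}
for r in csv.DictReader(io.StringIO(rows)):
    cases.setdefault(r["CASE"], []).append((r["TABLE"], r["COLUMN"], r["ROW"], r["VALUE"]))

print(len(cases["5"]))  # 2
```

This mirrors what PIMS does conceptually: all rows sharing a CASE value are applied together as that case's table revisions.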