Hello Juergen and Tim,

"Weiss, Juergen" <address@hidden> writes:

>> I'm trying at the moment to rebuild the database but I'm getting
>> a value stack overflow in GCL while loading the domains. (Daly's
>> law: there is no such thing as a simple job.) Other than that
>> problem and a few remaining domains that I need to fix, the whole
>> Axiom world builds from scratch. I never thought I'd see the day
>> that would happen.
>
> This is strange indeed. I used the most recent cvs files and
> compiled under Debian (with the supplied gcl).

Are you using woody or sid? If sid, then maybe Camm has uploaded an updated version of GCL? Anyway, for the value stack overflow, Camm proposed a patch to GCL, which is in the bug database:

The proposed patch of Camm: The C stack size is too limited. Apply the following patch:

--- /fix/s/camm/gcl/o/main.c    Thu Feb 13 17:31:27 2003
+++ main.c    Thu Jul 17 16:30:18 2003
@@ -235,7 +235,7 @@
 #ifdef BSD
 #ifndef MAX_STACK_SIZE
-#define MAX_STACK_SIZE (1<<23) /* 8Mb */
+#define MAX_STACK_SIZE (1<<24) /* 16Mb */
 #endif
 #ifdef RLIMIT_STACK
   getrlimit(RLIMIT_STACK, &rl);

I also suppose that you, Tim, have increased the VSSIZE. The sources in a public CVS would definitely help track down bugs. ;)

Yours, d.
resmgr_context_t
Context information that's passed between resource-manager functions
Synopsis:
#include <sys/resmgr.h>

typedef struct _resmgr_context {
    int              rcvid;
    struct _msg_info info;
    resmgr_iomsgs_t  *msg;
    dispatch_t       *dpp;
    int              id;
    unsigned         tid;
    unsigned         msg_max_size;
    int              status;
    int              offset;
    int              size;
    iov_t            iov[1];
} resmgr_context_t;
Since:
BlackBerry 10.0.0
Description:
The resmgr_context_t structure defines context information that's passed to resource-manager functions.
The members include:
- rcvid
- The receive ID to use for messages to and from the client.
- info
- A _msg_info structure that contains information about the message received by the resource manager.
- msg
- A pointer to the message received by the resource manager, expressed as a union of all the possible message types.
- dpp
- The dispatch handle, created by a successful call to dispatch_create().
- id
- The link ID, returned by resmgr_attach().
- tid
- Not used; always zero.
- msg_max_size
- The minimum amount of space reserved for receiving a message.
- status
- A place to store the status of the current operation. Always use _RESMGR_STATUS() to set this member.
- offset
- The offset, in bytes, into the client's message. You'll use this when working with combine messages.
- size
- The number of valid bytes in the message area.
- iov
- An I/O vector where you can place the data that you're returning to the client.
Classification:
Last modified: 2014-06-24
There are 128 characters in the ASCII (American Standard Code for Information Interchange) table with values ranging from 0 to 127.
Each character has a corresponding ASCII value; for example, the value of 'A' is 65 and the value of 'a' is 97.
A program that finds the ASCII value of a character is given as follows −
#include <iostream>
using namespace std;

void printASCII(char c) {
   int i = c;
   cout << "The ASCII value of " << c << " is " << i << endl;
}

int main() {
   printASCII('A');
   printASCII('a');
   printASCII('Z');
   printASCII('z');
   printASCII('$');
   printASCII('&');
   printASCII('?');
   return 0;
}
The ASCII value of A is 65
The ASCII value of a is 97
The ASCII value of Z is 90
The ASCII value of z is 122
The ASCII value of $ is 36
The ASCII value of & is 38
The ASCII value of ? is 63
In the above program, the function printASCII() prints the ASCII value of a character. It defines an int variable i and assigns the character c to it. Because i is an int, the assignment stores the character's ASCII code in i. The values of c and i are then displayed.
This is demonstrated by the following code snippet.
void printASCII(char c) {
   int i = c;
   cout << "The ASCII value of " << c << " is " << i << endl;
}
Monte Carlo Options Pricing in Two Lines of Python
Tom Starke September 1, 2017 Uncategorized 0
This is an old video that I produced sitting on my bed one morning in order to learn how to make basic YouTube videos. Nevertheless, it became quite popular. I apologise for looking a bit rough; hope you enjoy it regardless.
Here is the code:
from numpy import cumprod, random, sqrt, mean

k = cumprod(1 + random.randn(10000, 252) * 0.2 / sqrt(252), 1) * 100
mean((k[:, -1] - 100) * ((k[:, -1] - 100) > 0))
If that seems a bit complex, check out the video for an explanation how the code works. | http://aaaquants.com/2017/09/01/monte-carlo-options-pricing-in-two-lines-of-python/ | CC-MAIN-2019-39 | refinedweb | 107 | 73.37 |
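For comparison, here is the same computation unpacked into named steps, with a closed-form Black-Scholes price (zero rate, matching the zero-drift simulation) to sanity-check the Monte Carlo estimate. The function and parameter names are my own; the video only uses the two-liner above.

```python
import math
import numpy as np

def mc_call_price(s0=100.0, strike=100.0, sigma=0.2, days=252,
                  n_paths=10_000, seed=0):
    """Monte Carlo price of a European call under zero drift/rates."""
    rng = np.random.default_rng(seed)
    # Daily arithmetic returns scaled so the annualized volatility is sigma
    daily_returns = 1 + rng.standard_normal((n_paths, days)) * sigma / math.sqrt(days)
    terminal = s0 * np.cumprod(daily_returns, axis=1)[:, -1]  # price at expiry
    payoff = np.maximum(terminal - strike, 0.0)               # call payoff
    return payoff.mean()

def bs_call_price(s0=100.0, strike=100.0, sigma=0.2, t=1.0):
    """Black-Scholes call with zero interest rate, via the error function."""
    d1 = (math.log(s0 / strike) + 0.5 * sigma**2 * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    cdf = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
    return s0 * cdf(d1) - strike * cdf(d2)

print(mc_call_price(), bs_call_price())  # both land near 7.97
```

With 10,000 paths the Monte Carlo estimate typically falls within a few tens of cents of the analytic value of about 7.97, which is a quick way to convince yourself the two-liner is doing the right thing.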
2021SC@SDUSC
Contents
request Interceptor at login
response Interceptor at login
This article analyzes the JavaScript files related to login.
Login related
First, analyze the src/utils/request.js file. The source code is as follows:
import axios from 'axios'
import { Message, MessageBox } from 'element-ui'
import store from '../store'
import { getToken } from '@/utils/auth'

const service = axios.create({
  baseURL: process.env.BASE_API,
  timeout: 20000
})
First, axios and element-ui's Message and MessageBox are imported. Then store and getToken are imported, two modules we haven't touched yet; the next sections look at what each of them does.
Use of vuex store
Open the file where the store is defined; the source code is as follows:
Vue.use(Vuex)

const initPlugin = store => { }

const store = new Vuex.Store({
  modules: {
    app,
    user,
    cacheView,
    help
  },
  state: { },
  plugins: [initPlugin],
  actions,
  mutations,
  getters
})

export default store
Vuex is a state management pattern and library developed specifically for Vue.js applications. It uses a centralized store to manage the state of all of the application's components, with rules ensuring that the state can only change in a predictable way. Vuex also integrates with Vue's official devtools extension, providing advanced debugging features such as zero-configuration time-travel debugging and state snapshot import/export. After importing it in request.js, we could use it for centralized state management. A complete store architecture includes:
state: {
  // Stored state
},
getters: {
  // Computed properties derived from state
},
mutations: {
  // Synchronous logic that changes the state
},
actions: {
  // Commit mutations; may contain asynchronous operations
},
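To make the four pieces concrete, here is a tiny plain-JavaScript stand-in (not the real Vuex API; all names are illustrative) showing how state, getters, mutations, and actions relate:

```javascript
// Minimal stand-in for the store pattern, for illustration only
function createStore({ state, getters, mutations, actions }) {
  const store = {
    state,
    commit(type, payload) { mutations[type](state, payload) },       // synchronous
    dispatch(type, payload) { return actions[type](store, payload) } // may be async
  }
  store.getters = {}
  for (const name of Object.keys(getters)) {
    // Expose each getter as a computed, read-only property
    Object.defineProperty(store.getters, name, { get: () => getters[name](state) })
  }
  return store
}

const store = createStore({
  state: { token: null },
  getters: { isLoggedIn: s => s.token !== null },
  mutations: { SET_TOKEN(s, t) { s.token = t } },
  actions: {
    // An action commits a mutation; real ones often await a network call first
    async login({ commit }, token) { commit('SET_TOKEN', token) }
  }
})

store.dispatch('login', 'abc123').then(() => {
  console.log(store.getters.isLoggedIn) // true
})
```

The real Vuex store works the same way at this level: components dispatch actions, actions commit mutations, mutations are the only code that touches state, and getters derive values from it.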
Oddly, though, the store is imported in request.js but never actually used there.
token related
Next, open the utils/auth.js file. It is not difficult to see that it handles authentication and provides a series of token-related methods, including the getToken() imported above.
What is a token
The point of a token is this: the client requests data from the server frequently, and without a token the server would have to query the database for the user name and password on every request, compare them, and report whether they are correct. That is wasteful, and a token avoids the trouble.
A token is a string generated by the server and handed to the client as a credential. After the first successful login, the server generates a token and returns it to the client; from then on, the client only needs to send the token with each request instead of the user name and password.
Back in request.js, the code above creates an axios instance and defines its baseURL and timeout.
request Interceptor at login
The following source code in the request.js file is as follows:
service.interceptors.request.use(
  config => {
    config.headers['Authorization'] =
      (getToken() == null || getToken() == undefined) ? '' : getToken()
    return config
  },
  error => {
    // Do something with request error
    Promise.reject(error)
  }
)
The purpose of this interceptor is to attach a token to every outgoing request by assigning it to the Authorization field in the headers. The logic is as follows:

1. If the getToken method (from the imported auth module mentioned above) returns null or undefined, an empty string is assigned.

2. Otherwise, the real token is copied into the Authorization field in the headers.
In this way, every request carries the token; if the backend then fails to validate it, the problem lies on the backend side.
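A side note on the null check in that interceptor: with loose equality, x == null is already true for both null and undefined, so the double test (and the repeated getToken() calls) can be collapsed. A small sketch, with a hypothetical helper name:

```javascript
// Build the Authorization header value from whatever getToken() returns
function buildAuthHeader(getToken) {
  const token = getToken()
  return token == null ? '' : token // == null matches undefined too
}

console.log(buildAuthHeader(() => 'abc'))      // "abc"
console.log(buildAuthHeader(() => null))       // ""
console.log(buildAuthHeader(() => undefined))  // ""
```

The behavior is identical to the original ternary, just with one lookup and one comparison.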
response Interceptor at login
The following code sets up the response interceptor, which handles error codes in the server's reply. The source code is as follows:
service.interceptors.response.use(
  response => {
    const res = response.data
    if (res.code == '50008' || res.code == '50012' || res.code == '50014') {
      MessageBox.confirm(
        'You have been logged out. You can cancel staying on this page or log in again',
        'Login again',
        {
          confirmButtonText: 'Login again',
          cancelButtonText: 'cancel',
          type: 'warning'
        }
      ).then(() => {
        sessionStorage.clear()
        localStorage.clear()
        window.location.href = "/";
      })
    } else if (res.code !== '200') {
      Message({
        message: res.message,
        type: 'error',
        duration: 5 * 1000
      })
      return response.data
    } else {
      return response.data
    }
  },
  error => {
    Message({
      message: error.message,
      type: 'error',
      duration: 5 * 1000
    })
    return Promise.reject(error)
  }
)
You can see that the whole logic is very similar to the response interceptor in AJ report project analysis (6).
The meanings of the status codes are as follows:

- 50008: illegal token
- 50012: another client has logged in
- 50014: the token has expired
If any of the three codes matches, a confirmation dialog pops up. If the user chooses to log in again, sessionStorage and localStorage are cleared and then this runs:
window.location.href = "/";
This returns the user to the login page.
Recycling is one of those things that is good for the planet, and it’s an common sense way to make sure we don’t find ourselves buried in our own rubbish or without sufficient resources in the future.
A few Android engineers thought about the benefits of recycling and realized that an OS can also run more efficiently if it recycles. The result of this inspiration was millions of eco-warriors and recycling enthusiasts rejoicing when the RecyclerView widget was introduced in Android Lollipop — or something like that. :]
There was even more celebration when Google announced a support library to make this clean, green, recycling machine backwards compatible all the way to Android Eclair (2.1), which was released back in 2010!
In this tutorial, you’re going to experience the power of RecyclerView in action and learn:
- The purpose of a RecyclerView
- The components that make up a RecyclerView
- How to change the layout of a RecyclerView
- How to add some nice animations to your RecyclerView
You’re also going to blast off into outer space with the sample app Galacticon . You’ll use it to build out a feed of daily astronomy photos from a public NASA API.
Prerequisites : You should have a working knowledge of developing for Android before working through this tutorial. If you need a refresher, take a look at some of our introductory tutorials !
Heading to Cape Canaveral: Getting Started
Download the starter project and open it up in Android Studio. There isn't much to it yet, nor is the almighty RecyclerView anywhere to be seen.
Click the Run app button at the top and you’ll see something that resembles outer space in all the wrong ways:
It’s empty, but that’s ok. You wouldn’t learn much if all the work was done for you! Before you can add that amazing astrophotography from NASA, you’ll need to do some set up work.
Obtaining The (API) Keys to the Shuttle
You’ll use the Astronomy Picture of the Day API , one of the most popular web services provided by NASA. To ensure it doesn’t fall victim to unsolicited traffic, the service requires you to have an API key to use it in an application.
Fortunately, getting a key is as simple as putting your name and email address into api.nasa.gov and copying the API key that appears on the screen or in the email you receive a few moments later:
Once you’ve acquired your API key, copy it and open the strings.xml file in your project. Paste your API key into the
api_key string resource, replacing
INSERT API KEY HERE :
Space Oddity: Learning About RecyclerView
You’re about to blast off into outer space to explore the vastness of RecyclerViews, but no competent commander heads into the unknown without preparation. You have questions, and you need answers before you go any further. Consider this section as your mission brief.
A RecyclerView can be thought of as a combination of a ListView and a GridView . However, there are extra features that separate your code into maintainable components even as they enforce memory-efficient design patterns.
But how could it be better than the tried and tested ListView and GridView you’re used to? Could it be some kind of alien technology? The answers, as always, are in the details.
Why You Need RecyclerView
Imagine you’re creating a ListView where the custom items you want to show are quite complicated. You take time to lovingly create a row layout for these items, and then use that layout inside your adapter.
Inside your getView() method, you inflate your new item layout. You then reference every view within by using the unique ids you provided in your XML to customize and add some view logic. Once finished, you pass that view to the ListView, ready to be drawn on the screen. All is well…or is it?
The truth is that this approach is very processor-intensive, especially for complicated layouts, because every row triggers another round of inflation and view lookups. Furthermore, it can cause your ListView scrolling to become jerky or non-responsive as it frantically tries to grab references to the views you need.
Android engineers initially provided a solution to this problem in the Android Developers guidance on smooth scrolling, via the power of the View Holder pattern.
When you use this pattern, you create a class that becomes an in-memory reference to all the views needed to fill your layout. The benefit is you set the references once and reuse them, effectively working around the performance hit that comes with repeatedly calling findViewById().
The problem is that it’s an optional pattern for a ListView or GridView. If you’re unaware of this detail, then you may wonder why your precious ListViews and GridViews are so slow.
First Contact: RecyclerView and Layouts
The arrival of the RecyclerView changed everything. It still uses an Adapter to act as a data source; however, you have to create ViewHolders to keep references in memory.
When you need a new view, it either creates a new ViewHolder to inflate and hold the layout, or recycles one from a view that has scrolled off-screen. Either way, the view references are already cached and ready to go, and even item animations no longer require extra plumbing: they just work.
Thanks to the requirement for a ViewHolder, the RecyclerView knows exactly which animation to apply to which item. Best of all, it just does it as required. You can even create your own animations and apply them as needed.
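If the recycling mechanics feel abstract, this plain-Java sketch (my own illustration, not RecyclerView's actual source) simulates the core trick: a pool hands back an off-screen holder when one is available and only "inflates" a new one when it must.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class RecyclePoolDemo {
    static class Holder {
        String boundData; // view references would be cached here, set once
    }

    static final Deque<Holder> pool = new ArrayDeque<>();
    static int inflations = 0; // counts expensive "inflate + findViewById" work

    static Holder obtain() {
        Holder h = pool.poll();     // reuse an off-screen holder if possible
        if (h == null) {
            h = new Holder();       // otherwise pay the inflation cost once
            inflations++;
        }
        return h;
    }

    static void recycle(Holder h) { // called as an item scrolls off-screen
        pool.push(h);
    }

    public static void main(String[] args) {
        // "Scroll" through 100 items with only ~5 visible at a time
        Deque<Holder> visible = new ArrayDeque<>();
        for (int i = 0; i < 100; i++) {
            Holder h = obtain();
            h.boundData = "photo #" + i;  // the onBindViewHolder() equivalent
            visible.add(h);
            if (visible.size() > 5) recycle(visible.poll());
        }
        System.out.println(inflations); // 6: everything else was recycled
    }
}
```

One hundred items are bound, but only six holders are ever created; RecyclerView applies the same idea to real view inflation, which is what keeps scrolling smooth.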
The last and most interesting component of a RecyclerView is its LayoutManager . This object positions the RecyclerView’s items and tells it when to recycle items that have transitioned off-screen.
Layout Managers come in three default flavors:
- LinearLayoutManager positions your items to look like a standard ListView
- GridLayoutManager positions your items in a grid format similar to a GridView
- StaggeredGridLayoutManager positions your items in a staggered grid format.
You can also create your own LayoutManagers to use with a RecyclerView if you want an extra bit of customization.
Hopefully that answers all your questions, commander. Now, onto the mission!
Preparing for Launch: Creating the RecyclerView
To create the RecyclerView, you’ll break the work into four parts:
- Declare the RecyclerView in an activity layout and reference it in your activity Java file.
- Create a custom item XML layout for your RecyclerView to use for its items.
- Create the view holder for your view items, hook up the RecyclerView's data source and handle the view logic by creating a RecyclerView adapter.
- Attach the adapter to the RecyclerView.

Note: The sample project already adds the RecyclerView Support Library as a dependency in your app's build.gradle file. If you want more information on how to do it yourself, check out the Android developer website.
Open MainActivity.java and declare the following member variables at the top of the class:
private RecyclerView mRecyclerView;
private LinearLayoutManager mLinearLayoutManager;
In onCreate(), add the following lines after setContentView:

mRecyclerView = (RecyclerView) findViewById(R.id.recyclerView);
mLinearLayoutManager = new LinearLayoutManager(this);
mRecyclerView.setLayoutManager(mLinearLayoutManager);
Phase one of ignition is complete! You’ve declared and allocated memory for two parts of the puzzle that RecyclerViews need to work: The RecyclerView and its Layout Manager.
Ignition Phase 2: Laying out the RecyclerView Items
Phase two of ignition involves creating a custom layout for the item you want your RecyclerView to use. It works exactly the same as it does when you create a custom layout for a ListView or Gridview.
Head over to your layout folder and create a new layout with the name recyclerview_item_row and set the root element as a LinearLayout. In your new layout, add the following XML elements as children of your LinearLayout:
<ImageView
    android:id="@+id/item_image"
    android:layout_width="match_parent"
    android:layout_height="wrap_content" />

<TextView
    android:id="@+id/item_date"
    android:layout_width="match_parent"
    android:layout_height="wrap_content" />

<TextView
    android:id="@+id/item_description"
    android:layout_width="match_parent"
    android:layout_height="wrap_content" />

(The sizing attributes here are placeholders; the ids are the important part, since the adapter code below looks them up.)
No rocket science here: You declared a few views as children of your layout, and can now use them in your adapter.
Adapters: Rocket Fuel for Your RecyclerView
Right-click on the com.raywenderlich.galacticon folder, select New / Java Class, and name it RecyclerAdapter. At the top of the file below the package declaration, import the support library's version of RecyclerView:
import android.support.v7.widget.RecyclerView;
Make the class extend RecyclerView.Adapter so it looks like the following:
public class RecyclerAdapter extends RecyclerView.Adapter<RecyclerAdapter.PhotoHolder> { }
Android Studio will detect that you’re extending a class that has required methods and will underline your class declaration with a red squiggle.
To resolve this, click on the line of code to insert your cursor, then press alt + return (or option + return) to bring up a context menu. Select Implement Methods :
Confirm you want to implement the suggested methods by clicking OK :
These methods are the driving force behind your RecyclerView adapter. Note how there is still a compiler error for the moment; this is because your adapter and the required methods are actually defined using your ViewHolder class, PhotoHolder, which doesn't exist just yet. You'll get to define your ViewHolder and see what each required method does shortly, so just hang tight, Commander!
As with every adapter, you need to provide the corresponding view a means of populating items and deciding how many items there should be.
Item clicks were previously managed by a ListView’s or GridView’s
onItemClickListener . A RecyclerView doesn’t provide methods like this because it has one focus: ensuring the items inside are positioned properly and managed efficiently. to hold your photos:
private ArrayList<Photo> mPhotos;
Next, add a constructor to set it up:
public RecyclerAdapter(ArrayList<Photo> photos) {
  mPhotos = photos;
}
Nice job, Commander! Your adapter now knows where to look for data. Soon you’ll have an ArrayList of photos filled with the finest astrophotography!
Next, you’ll populate the stubbed methods that were added by Android Studio.
The first method, getItemCount(), is pretty simple and should be familiar from your work with ListViews or GridViews.
The adapter will work out how many items to display. In this case, you want the adapter to show every photo you’ve downloaded from NASA’s API. To do that, add the following line between the method braces:
return mPhotos.size();
Next, you’re going to exploit the
ViewHolder pattern to make an object that holds all your view references.
Velcro For All: Keeping Hold Of Your Views
To create a PhotoHolder for your view references, you’ll create a static inner class for your adapter. You’ll add it here rather than in a separate class because its behavior is tightly coupled with the adapter.
It’s a static class, so regardless of how many instances you create, its values are shared amongst all of them — it’s pretty handy for holding references.
Add the following code underneath your adapter class member variables, but before any of the methods:
//1
public static class PhotoHolder extends RecyclerView.ViewHolder
    implements View.OnClickListener {
  //2
  private ImageView mItemImage;
  private TextView mItemDate;
  private TextView mItemDescription;
  private Photo mPhoto;
  //3
  private static final String PHOTO_KEY = "PHOTO";

  //4
  public PhotoHolder(View v) {
    super(v);
    mItemImage = (ImageView) v.findViewById(R.id.item_image);
    mItemDate = (TextView) v.findViewById(R.id.item_date);
    mItemDescription = (TextView) v.findViewById(R.id.item_description);
    v.setOnClickListener(this);
  }

  //5
  @Override
  public void onClick(View v) {
    Log.d("RecyclerView", "CLICK!");
  }
}
So what did you do here?
- Made the class extend RecyclerView.ViewHolder, allowing it to be used as a ViewHolder for the adapter.
- Added a list of references to the lifecycle of the object to allow the ViewHolder to hang on to your ImageView and TextView, so it doesn’t have to repeatedly query the same information.
- Added a key for easier reference to the particular item being used to launch your RecyclerView.
- Set up a constructor to handle grabbing references to various subviews of the photo layout.
- Implemented the required method for View.OnClickListener, since ViewHolders are responsible for their own event handling.
You should now be able to build and run the app again, but it'll look about the same as before; the adapter still needs to create and bind your view holders. Turn your attention to onCreateViewHolder().
Replace the return null line between the curly braces with the following:
View inflatedView = LayoutInflater.from(parent.getContext())
    .inflate(R.layout.recyclerview_item_row, parent, false);
return new PhotoHolder(inflatedView);
Here you add the suggested LayoutInflater import. Then you inflate the view from its layout and pass it in to a PhotoHolder.
And with that, you’ve made it so the object holds onto those references while it’s recycled, but there are still more pieces to put together before you can launch your rocketship.
Start a new activity by replacing the log in the ViewHolder's onClick with this code:
Context context = itemView.getContext();
Intent showPhotoIntent = new Intent(context, PhotoActivity.class);
showPhotoIntent.putExtra(PHOTO_KEY, mPhoto);
context.startActivity(showPhotoIntent);

Here you grab a context from the item's view, create an intent to launch PhotoActivity, attach the photo as an extra under PHOTO_KEY, and start the activity.
The next thing to do is to add this method inside PhotoHolder:
public void bindPhoto(Photo photo) {
  mPhoto = photo;
  Picasso.with(mItemImage.getContext()).load(photo.getUrl()).into(mItemImage);
  mItemDate.setText(photo.getHumanDate());
  mItemDescription.setText(photo.getExplanation());
}
This binds the photo to the PhotoHolder, giving your item the data it needs to work out what it should show.
It also adds the suggested Picasso import; Picasso is a library that makes downloading and displaying images significantly easier. Next, update onBindViewHolder() with the following:
Photo itemPhoto = mPhotos.get(position);
holder.bindPhoto(itemPhoto);
Here you’re passing in a copy of your ViewHolder and the position where the item will show in your RecyclerView.
And that’s all you needed to do here on the assembly — just use the position where your ViewHolder will appear to grab the photo out of your list, and then pass it to your ViewHolder.
Step three of your ignition check protocol is complete!
Countdown And Liftoff: Hooking up the Adapter And RecyclerView
This is the moment you’ve been waiting for, the final stage before blast off! All you need to do is hook your adapter up to your RecyclerView and make sure it retrieves photos when it’s created so you can explore space — in pictures.
Open MainActivity.java , and add this variable at the top:
private RecyclerAdapter mAdapter;
Next, underneath the creation of your array list in onCreate() add the following:

mAdapter = new RecyclerAdapter(mPhotosList);
mRecyclerView.setAdapter(mAdapter);
Here you’re creating the adapter, passing in the constructors it needs and setting it as the adapter for your RecyclerView.
Although the adapter is connected, there’s one more thing to do to make sure you don’t have an empty screen.
In onStart(), underneath the call to super, add this code:

if (mPhotosList.size() == 0) {
  requestPhoto();
}
This adds a check to see if your list is empty, and if yes, it requests a photo.
Next, in receivedNewPhoto(), update the method so it looks like the following:

@Override
public void receivedNewPhoto(final Photo newPhoto) {
  runOnUiThread(new Runnable() {
    @Override
    public void run() {
      mPhotosList.add(newPhoto);
      mAdapter.notifyItemInserted(mPhotosList.size());
    }
  });
}
Here you are informing the recycler adapter that an item was added after the list of photos was updated.
Now you’re ready to commence the ignition sequence, er…I mean run the app.
Run the app , load up the emulator and before long, Galacticon should look something like this:
That’s not all. Tap on the photo, and you should be greeted with a new activity that brings that item into focus:
But that’s still not all! Try rotating your device or emulator (function + control + F11/F12) and you’ll see the image in full screen glory!
Depending on the size of the image and your device screen it may look a little distorted, but don’t worry about that.
Congratulations! You have a working RecyclerView and can take your journey amongst the stars.
Taking A Spacewalk: Adding Scrolling support
If you head back to MainActivity on your device and try to scroll down, you’ll notice something is amiss — your RecyclerView isn’t retrieving any new photos.
Your RecyclerView is doing exactly as it's told by showing the contents of mPhotosList. The problem is that the app only retrieves one photo when it loads; it has no idea when or how to grab more photos.
So next, you’ll retrieve the number of the photos and the last visible photo index while scrolling. Then you’ll check to see if the last photo is visible and if there are no photos already on request. If these are both true, then your app goes away and downloads more pretty photos!
This patch will require a spacewalk, so break out your spacesuit and get ready for a zero gravity experience.
In MainActivity.java, add this method below onStart:

private int getLastVisibleItemPosition() {
  return mLinearLayoutManager.findLastVisibleItemPosition();
}
This uses your RecyclerView’s LinearLayoutManager to get the index of the last visible item on the screen.
Next, you add a method that attaches an onScrollListener to your RecyclerView, so it can get a callback when the user scrolls:

private void setRecyclerViewScrollListener() {
  mRecyclerView.addOnScrollListener(new RecyclerView.OnScrollListener() {
    @Override
    public void onScrollStateChanged(RecyclerView recyclerView, int newState) {
      super.onScrollStateChanged(recyclerView, newState);
      int totalItemCount = mRecyclerView.getLayoutManager().getItemCount();
      if (!mImageRequester.isLoadingData() && totalItemCount == getLastVisibleItemPosition() + 1) {
        requestPhoto();
      }
    }
  });
}
Now the RecyclerView has a scroll listener attached to it that is triggered by scrolling. During scrolling, the listener retrieves the count of the items in its LayoutManager and calculates the last visible photo index. Once done, it compares these numbers (incrementing the index by 1 because the index begins at 0 while the count begins at 1). If they match and there are no photos already on request, then you request a new photo.
Finally, hook everything to the RecyclerView by calling this method from onCreate, just beneath where you set your RecyclerView Adapter:
setRecyclerViewScrollListener();
Hop back in the ship: it's time to give your users a grid option. Add a member variable for a GridLayoutManager to the top of MainActivity.java:
private GridLayoutManager mGridLayoutManager;
Note that this is a default LayoutManager, but it could just as easily be custom.
In onCreate(), initialize the LayoutManager below the existing Linear Layout Manager:
mGridLayoutManager = new GridLayoutManager(this, 2);
Just like you did with the previous LayoutManager, you pass in the context the manager will appear in, but unlike the former, it takes an integer parameter. In this case, you’re setting the number of columns the grid will have.
Add this method to MainActivity:
private void changeLayoutManager() {
  if (mRecyclerView.getLayoutManager().equals(mLinearLayoutManager)) {
    //1
    mRecyclerView.setLayoutManager(mGridLayoutManager);
    //2
    if (mPhotosList.size() == 1) {
      requestPhoto();
    }
  } else {
    //3
    mRecyclerView.setLayoutManager(mLinearLayoutManager);
  }
}

Next, you need to make some changes to getLastVisibleItemPosition() to help it handle the new LayoutManager. Replace its current contents with the following:
int itemCount;
if (mRecyclerView.getLayoutManager().equals(mLinearLayoutManager)) {
  itemCount = mLinearLayoutManager.findLastVisibleItemPosition();
} else {
  itemCount = mGridLayoutManager.findLastVisibleItemPosition();
}
return itemCount;

To let the user trigger the switch from the toolbar, handle the menu item by adding this method:

@Override
public boolean onOptionsItemSelected(MenuItem item) {
  if (item.getItemId() == R.id.action_change_recycler_manager) {
    changeLayoutManager();
    return true;
  }
  return super.onOptionsItemSelected(item);
}
This checks the ID of the item tapped in the menu, then works out what to do about it. In this case, there should only be one ID that will match up, effectively telling the app to go away and rearrange the RecyclerView’s LayoutManager.
And just like that, you’re ready to go! Load up the app and tap the button at the top right of the screen, and you’ll begin to see the stars shift:
Star Killer
Sometimes you’ll see things you just don’t like the look of, perhaps a galaxy far, far away that has fallen to the dark side or a planet that is prime for destruction. How could you go about killing it with a swipe?
Luckily, Android engineers have provided a useful class named ItemTouchHelper that gives you easy swipe behavior. Creating and attaching it to a RecyclerView requires just a few lines of code.
In MainActivity.java, underneath setRecyclerViewScrollListener() add the following method:

private void setRecyclerViewItemTouchListener() {
  //1
  ItemTouchHelper.SimpleCallback itemTouchCallback = new ItemTouchHelper.SimpleCallback(0,
      ItemTouchHelper.LEFT | ItemTouchHelper.RIGHT) {
    @Override
    public boolean onMove(RecyclerView recyclerView, RecyclerView.ViewHolder viewHolder,
        RecyclerView.ViewHolder viewHolder1) {
      //2
      return false;
    }

    @Override
    public void onSwiped(RecyclerView.ViewHolder viewHolder, int swipeDir) {
      //3
      int position = viewHolder.getAdapterPosition();
      mPhotosList.remove(position);
      mRecyclerView.getAdapter().notifyItemRemoved(position);
    }
  };

  //4
  ItemTouchHelper itemTouchHelper = new ItemTouchHelper(itemTouchCallback);
  itemTouchHelper.attachToRecyclerView(mRecyclerView);
}
Let’s go through this step by step:
- You create the callback and tell it what events to listen for. It takes two parameters, one for drag directions and one for swipe directions, but you’re only interested in swipe, so you pass 0 to inform the callback not to respond to drag events.
- You return false in onMove because you don't want to perform any special behavior here.
- onSwiped is called when you swipe an item in the direction specified in the ItemTouchHelper. Here, you request the viewHolder parameter passed for the position of the item view, then you remove that item from your list of photos. Finally, you inform the RecyclerView adapter that an item has been removed at a specific position.
- You initialize the ItemTouchHelper with the callback and attach it to your RecyclerView.

Build and run, then swipe away any photos you don't like; the remaining items will reorganize themselves to cover the hole. How cool is that?
Where To Go From Here?
Nice job! You’ve been on quite an adventure, but now it’s time to head back to Earth and think about what you’ve learned.
- You’ve created a RecyclerView and all the components it needs, such as a LayoutManager, an adapter and a ViewHolder.
- You’ve updated and removed items from an Adapter.
- You’ve added some cool features like changing layouts and adding swipe functionality.
Above all, you’ve experienced how separation of components — a key attribute of RecyclerViews — provides so much functionality with such ease. If you want your collections to be flexible and provide some excitement, then look no further than the all-powerful RecyclerView.
The final project for this tutorial is available here.
If you want to learn more about RecyclerViews then check out the Android documentation to see what it can do. Take a look at the support library for RecyclerViews to learn how to use it on older devices. If you want to make them fit with the material design spec then check out the list component design specification.
Join us in the forums to discuss this tutorial and your findings as you work with RecyclerViews!
Until next time, space traveler!
using System;
class C
{
public static void Main ()
{
const int i = 9; // Set a breakpoint here
Console.WriteLine (i);
}
}
The breakpoint is not hit; VS behaviour is that the breakpoint is automatically moved to the next line with symbol info.
MonoDevelop seems to leave the red marker on the "const int" line until the app is run, and then once the breakpoint is resolved, moves it to the CWL.
It seems broken, though, that when we are done debugging, it moves the bp back to the const int decl.
That appears to be by design, the debugger keeps track of the "adjustment" so that it can be restored afterwards. But I agree, I'm not sure that's the best thing to do.
That's not what is happening for me with MD master + Mono master. It does not break at all.
The breakpoint adjustment when debugging starts and when it ends is by design. If it does not break after the adjustment, then this is a debugger issue.
I cannot see any adjustment happening.
This is from Application Output
Could not insert breakpoint at '/home/marek/Projects/g2/g2/Main.cs:7': The breakpoint location is invalid. Perhaps the source line does not contain any statements, or the source does not correspond to the current binary
The adjustment only happens when the invalid position is inside a range of valid positions. For example, if you try to place the breakpoint outside of a method, no adjustment will be made. Maybe in this case that line falls outside the range of valid positions of the method.
Okay, so it seems to make a difference having the class in a namespace.
If you wrap a namespace around class C, then this works. Otherwise it says invalid breakpoint and doesn't adjust the breakpoint (and thus no breakpoint is hit).
n/m, now it's not working with the namespace either.
What else did I remove??...
aha! I had changed the Active Configuration to Mono 2.11
Mono 2.10.9 works as expected.
The problem was that Mono 2.10 emitted Location info for line 7, but Mono 2.11 does not.
The fix was to keep scanning locations beyond line 7 until we find something we can break on. | https://xamarin.github.io/bugzilla-archives/32/3238/bug.html | CC-MAIN-2019-43 | refinedweb | 376 | 75 |
Red Hat Bugzilla – Bug 126629
rhnpush very slow
Last modified: 2013-07-03 09:05:10 EDT
Description of problem:
running rhnpush for 300MB package took in the order of an hour.
Version-Release number of selected component (if applicable):
RHN Satellite 3.2 (from build 12 ISO).
How reproducible:
Seen on two different systems.
Actual results:
With rpm package file initially located locally on the Satellite,
running rhnpush successfully uploads the package but it takes
much longer than expected. The first time this delay was seen
to take over an hour. The second time, while not measured, was
consistent with the first (different systems, different places).
Expected results:
Additional info:
Satellite host has an embedded Oracle configuration.
It is a dual CPU system and has 2GB RAM.
How big is the RPM?
Was the upload happening over SSL? (i.e. did the URL say https://)?
Package in question is 231MB.
The rhnpush command indicates that upload is to http://<hostname>/APP.
On one occasion when the upload was occurring at the same time as
a satellite-sync the system hung and was showing iterations of the
following message on the console:
"count_ramdisk_pages: pagemap_lru_lock locked".
When that occurred the system had only 1GB RAM. Subsequently the
system was upgraded to 2GB and the rhnpush succeeded, but the time
was still excessive.
Kernel version 2.4.9-e.41smp.
rhnpush of small packages (<2MB) is very quick (< 4 seconds)
on the same system.
another data point. rhnpush of a 13MB package took ~24 seconds
(all other variables being unchanged).
We have this specific problem (big rpms taking a lot of time to
upload) addressed in satellite 3.4
The change cannot be backported easily to 3.2. Will keep the bug open
and will see how we can upgrade the customer or something.
I encountered this issue after applying all updates (kernel included)
to stock RHEL2.1U3 version. rhnpush of 220MB package taken over 30
mins. Whereas, with just the standard RHEL2.1U3 packages with no
updates applied rhnpush of the same package (220MB) took just under 2
minutes.
May be kernel related, downgraded to kernel 2.4.9-e.34enterprise
(standard RHEL2.1U3 kernel) and rhnpush now takes 1min 40secs to
import 220MB package.
For lack of a better answer, closing. | https://bugzilla.redhat.com/show_bug.cgi?id=126629 | CC-MAIN-2016-44 | refinedweb | 381 | 68.57 |
After knowing primitive data types and Java rules of Data Type Casting (Type Conversion), let us cast long to float as an example.
A float carries a mantissa part (the value after the decimal point), whereas a long takes a whole number. Assignment of long to float is done implicitly. Observe the following order of data types:

byte -> short -> int -> long -> float -> double

A value of a type on the left can be assigned to any type on its right, and this is done implicitly. The reverse, like float to int, requires explicit casting.
Examples of implicit casting
int i1 = 10; // 4 bytes
long l1 = i1; // 4 bytes to 8 bytes
byte x = 10; // 1 byte
short y = x; // 1 byte to 2 bytes
byte x = 10; // 1 byte
int y = x; // 1 byte to 4 bytes
The following program on long to float explains implicit casting in Java, where a long value is assigned to a float. It is equivalent to a simple assignment.
```java
public class Conversions {
    public static void main(String args[]) {
        long l1 = 10;
        float f1 = l1;
        System.out.println("long value: " + l1);             // prints 10
        System.out.println("Converted float value: " + f1);  // prints 10.0
    }
}
```
Output screenshot of long to float Java
float f1 = l1;
As long value l1 is assigned to float f1, the whole number is converted into floating-point value 10.0.
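Going the other way is not implicit: assigning a float back to a long fails to compile, and the explicit cast truncates the mantissa toward zero. A small sketch (the class name is mine, not from the article):

```java
public class ReverseCast {

    // Explicit narrowing: float -> long drops the mantissa (truncates toward zero).
    static long toLong(float f) {
        // "long l = f;" would not compile: possible lossy conversion
        return (long) f;
    }

    public static void main(String[] args) {
        System.out.println(toLong(10.75f));  // prints 10
        System.out.println(toLong(-3.9f));   // prints -3
    }
}
```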
View all for 65 types of Conversions
12 thoughts on “long to float Java”
This makes no sense to me. I literally changed just one character (f to F).
This fails:
long l1 = 10;
Float f1 = l1;
System.out.println(“long value: ” + l1); // prints 10
System.out.println(“Converted float value: ” + f1); // prints 10.0
result:
error: incompatible types: long cannot be converted to Float
Float f1 = l1;
I know this is old, but maybe someone else stumbles on this and is scratching his or her head like I am. This is still true in java 8. The original (using float instead of Float) is fine. Don’t try new Float(l1) either, that fails too.
Upto now we don’t get exact answer from you
By passing an 8-byte value into a 4-byte value, we may get loss of data. But Java allows it implicitly, how?
Java will not allow to assign 8 bytes value like long to 4 bytes value like int.
Upto now we don’t get exact answer from you
It requires explicit casting.
I did not understand.. can you please explain in detail
I understand your doubt. Your doubt is how a 8 bytes of value of long can be assigned to 4 bytes value of float, that too implicitly? It is an exceptional case where a whole number long is assigned to floating-point number float of 4 bytes. Designers permitted this because float contains fractional part but not long.
ok, Thanks
As you told, the designer has designed it this way. But how does it convert from long to float?
long does not have mantissa value. For this reason, it is possible to assign to float where float adds .0
But the reverse is not true, as a mantissa value cannot be given to a long.
Sir,
Float size is of 4 bytes whereas long size is of 8 bytes.
But here long to float conversion is of implicit. Then how is it possible to store long value in float value.
Can you explain in detail.
Thanks in advance.
Your thinking is right, but it is to accommodate the floating point value. | https://way2java.com/casting-operations/java-long-to-float/ | CC-MAIN-2022-33 | refinedweb | 569 | 75.3 |
I want to connect raspberry pi to thingworx via mqtt. I am using Mosquitto broker and Paho mqtt as a client on raspberry pi to send data to thingworx. But i am not able to see the message on thingworx can anyone help. The steps I followed are given below.
Raspberry Pi
Steps for Thingworx
Thanks in advance.
Hi,
Did you manage to solve this issue?
I did this a while ago have a look here and let me know if this helps.
The full link to the thread is at the bottom.
Hi,
I have managed to get this working using MQTT, After reading quite a bit of documentation and the sites that you provided I'm 99% happy with the results.
This is what I did to get it working.
```python
# MQTT Publish Demo
# Publish two messages, to two different topics

import paho.mqtt.publish as publish

#publish.single("CoreElectronics/test", "Hello", hostname="test.mosquitto.org")
publish.single("Thingworx/Name", "Hello", hostname="192.168.2.207")
#publish.single("CoreElectronics/topic", "World!", hostname="test.mosquitto.org")
publish.single("Thingworx/Topic", "World!", hostname="192.168.2.207")
print("Done")
```
```python
# MQTT Client demo
# Continuously monitor two different MQTT topics for data,
# check if the received data matches two predefined 'commands'

import paho.mqtt.client as mqtt
from sense_hat import SenseHat

sense = SenseHat()

# The callback for when the client connects to the broker;
# subscribing here means the subscriptions survive reconnects.
def on_connect(client, userdata, flags, rc):
    client.subscribe("Thingworx/Name")
    client.subscribe("Thingworx/Topic")
    client.subscribe("Thingworx/Color")
    #client.subscribe("CoreElectronics/topic")

# The callback for when a PUBLISH message is received from the server.
def on_message(client, userdata, msg):
    print(msg.topic + " " + str(msg.payload))
    #sense.show_message(msg.payload)  # placing this here would print all the messages on the Sense HAT screen, which is not optimal
    if msg.payload == "Hello":
        print("Received message #1, do something")
        # Do something
    if msg.payload == "World!":
        print("Received message #2, do something else")
        # Do something else
    if msg.payload == "red":
        print("Received a message from the Pi")
        # Do something here, like print a message to the Sense HAT screen
        sense.show_message(msg.payload)  # a better place to set the screen to red/blue/clear etc.

# Create an MQTT client and attach our routines to it.
client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message

#client.connect("test.mosquitto.org", 1883, 60)
client.connect("192.168.2.207", 1883, 60)

# Process network traffic and dispatch callbacks. This will also handle
# reconnecting. Check the documentation at
#
# for information on how to use other loop*() functions
client.loop_forever()
```
Now that that was working I can connect my thingworx server and test the theory.
Create a new thing and use the MQTT template.
Create 3 properties called Name, Topic and Screen_Color.

In the configuration, the Name must be the same as the property, and the value that you want it to be set to must be the full path to the topic.
For example Name: Screen_Color Topic Thingworx/Color... | https://community.ptc.com/t5/ThingWorx-Developers/Sending-data-from-Raspberry-Pi-to-Thingworx-using-MQTT/m-p/550560/thread-id/28731/highlight/true?attachment-id=64216 | CC-MAIN-2019-22 | refinedweb | 487 | 59.7 |
Hello, I'm working on a program that should manage data structures like queues and stacks.
For exercise, I made structs to model those structures, using two different .c files, each with the struct and the methods.
Now the problem is: if I include my files in a main program, there's name ambiguity. Both structures have "empty" and "null" methods.
Is there a way to disambiguate the methods without changing the name? Of course first solution is change empty(..) to emptyQueue(...) but is tricky and annoying to use.
Maybe namespaces? I know about their existence but I dont know how to use them.
Thanks for help! | https://cboard.cprogramming.com/c-programming/95261-name-disamb-multifile-project.html | CC-MAIN-2017-13 | refinedweb | 107 | 77.74 |
Quick Thumbnails in Django
Posted on September 13th, 2008 by Greg Allard in Django, Programming | Comments
I normally like to write code myself instead of installing some large script just to do one task for me. There were a few scripts out there that could create thumbnails, but I wanted something simple and wouldn’t use most of those features. Plus, I wanted to know how to use the Python Image Library with Django 1.0 and learn on my own how to take an uploaded picture and create a few thumbnails of them.
After searching for a while I was able to piece some things together to get something working. In my model I added these two functions.
```python
def thumbnailed(self, file, width, height):
    from django.core.files.base import ContentFile
    from StringIO import StringIO
    import Image
    try:
        size = width, height
        tmp_file = StringIO()
        im = Image.open(StringIO(file.read()))
        format = im.format  # since new im won't have format
        if format == "gif" or format == "GIF":
            im = im.convert("RGB")
        im.thumbnail(size, Image.ANTIALIAS)
        if format == "gif" or format == "GIF":
            im = im.convert("P", dither=Image.NONE, palette=Image.ADAPTIVE)
        im.save(tmp_file, format, quality=95)
    except IOError:
        return None
    return ContentFile(tmp_file.getvalue())
```
Using StringIO I was able to create a temporary file in memory to hold the thumbnail data and return it where it would later be passed to django. I was trying to create 3 thumbnails which presented another problem: django was obliterating the uploaded file and the data in my temp files when I was saving. So I figured out how to get around that, but there may be better ways.
```python
def create_thumbnails(self, file):
    pic = self.thumbnailed(file, 160, 400)
    if pic is not None:
        self.picture.save(file.name, pic, save=False)
    # Django's InMemoryUploadedFile uses StringIO
    # This will reset it to be ready to use again
    file.seek(0)
    thumb = self.thumbnailed(file, 288, 96)
    if thumb is not None:
        self.thumbnail.save(file.name, thumb, save=False)
    # Django's InMemoryUploadedFile uses StringIO
    # This will reset it to be ready to use again
    file.seek(0)
    tiny = self.thumbnailed(file, 144, 48)
    if tiny is not None:
        self.tiny_image.save(file.name, tiny, save=False)
```
The model has picture, thumbnail, and tiny_image as ImageFields and create_thumbnails was originally called from the view and passed the uploaded file from request.FILES.
There was a lot of trial and error trying to get this together, so I hope it helps someone get past that.
Updated July 10, 2009
I added a condition to check if the image is a gif. If it is, it is converted so that the thumnails will look much better than they would without converting them. I also set the quality to 95 so that all images will have the best possible thumbnails.... | http://codespatter.com/2008/09/13/quick-thumbnails-in-django/ | CC-MAIN-2017-17 | refinedweb | 473 | 64.81 |
This is a blog post about handling circle-rectangle collisions. For some reason, these seem to be generally regarded as something complicated, even though they aren't.
First things first, you may already know how to check circle-point collision - it's simply checking that the distance between the circle' center and the point is smaller than the circle' radius:
DeltaX = CircleX - PointX; DeltaY = CircleY - PointY; return (DeltaX * DeltaX + DeltaY * DeltaY) < (CircleRadius * CircleRadius);
Surprisingly or not, rectangle-circle collisions are not all too different - first you find the point of rectangle that is the closest to the circle' center, and check that point is in the circle.
And, if the rectangle is not rotated, finding a point closest to the circle' center is simply a matter of clamping the circle' center coordinates to rectangle coordinates:
NearestX = Max(RectX, Min(CircleX, RectX + RectWidth)); NearestY = Max(RectY, Min(CircleY, RectY + RectHeight));
So, combining the above two snippets yields you a 3-line function for circle-rectangle check:
DeltaX = CircleX - Max(RectX, Min(CircleX, RectX + RectWidth)); DeltaY = CircleY - Max(RectY, Min(CircleY, RectY + RectHeight)); return (DeltaX * DeltaX + DeltaY * DeltaY) < (CircleRadius * CircleRadius);
And here's it in action, along with a bit of debug drawing:
Click and drag elements.
And that is it. I've told you that it really isn't complicated, didn't I?
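For readers who want something runnable, the same three lines translate directly into, say, Python (a sketch using the same names as above):

```python
def circle_rect_collide(circle_x, circle_y, radius, rect_x, rect_y, rect_w, rect_h):
    # Clamp the circle's center to the rectangle to get the nearest point on it.
    nearest_x = max(rect_x, min(circle_x, rect_x + rect_w))
    nearest_y = max(rect_y, min(circle_y, rect_y + rect_h))
    # The shapes overlap iff that nearest point lies inside the circle.
    delta_x = circle_x - nearest_x
    delta_y = circle_y - nearest_y
    return delta_x * delta_x + delta_y * delta_y < radius * radius

print(circle_rect_collide(0, 0, 5, 3, 3, 10, 10))  # True: corner (3,3) lies inside the circle
print(circle_rect_collide(0, 0, 4, 3, 3, 10, 10))  # False: 3*3 + 3*3 = 18, which is not < 16
```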
Bonus: Visualization ported to GameMaker
This is beautifully simple. Thanks for sharing it, and I love the interactive demo!
Excellent algorithm, explanation and demo! Thank you for sharing it!
what are the rectangles points? is it the center?
It’s top-left and bottom-right corners of the rectangle – in other words, bounds of it.
Hi there! I’ve been searching for ages for this! I’m not really a math person so even if this looked really simple, I can’t seem to understand how to apply this into my code. Are you willing to share the project file with me? I need to also see the debug drawing to understand it.
You can right-click on the page, pick “View source”, and scroll down to document.getElementById("canvas-605"). That’s exactly how the demo code was written – right in the blog post.
Thanks for getting back! And I just saw it. This may seem silly.. but how do I translate that into GML?
In GameMaker you wouldn’t need it at all, since there is a built-in rectangle_in_circle for this.
Here’s the demo ported to GML.
This is way to simple ^^ its something i have been thinking about for some time, but this is really qute nice 🙂
Awesome!
Thank you very much, author!
Big fan of your stuff, especially this article. Thank you for making it. : )
Hey, so for this what does Max mean, which one of the two values is bigger, or something else?
I moved the code into java and it doesn’t work properly…
Indeed – the bigger of two arguments, not too unlike an extra line with an if-branch.
I think the only things to watch for are:
– that both objects are in same coordinate space
– that you are passing (x, y, width, height) for rectangle rather than (x1, y1, x2, y2)
– that circle radius is not negative
Where the point (RectX, RectY) is located? Bottom-Left or Top-Left?
Top-left, assuming that +x is to the right and +y is downwards.
If your coordinate system assumes +y to be upwards, it would be bottom-left instead. | https://yal.cc/rectangle-circle-intersection-test/ | CC-MAIN-2017-30 | refinedweb | 579 | 64.3 |
When I tell the file or folder picker that I want only file system files and folders, why does it still show virtual folders?
Raymond
You can ask a file picker dialog to limit itself to files in the file system by passing the FOS_FORCEFILESYSTEM flag. There is an analogous BIF_RETURNONLYFSDIRS flag for the folder picker.
But if you pass this flag, you'll still see virtual folders in the user interface. Things like Network and This PC (formerly My Computer). And if the user picks it, the OK button grays out, which is confusing. Why are these virtual folders showing up when I explicitly asked that they not show up?
Well, that’s not what you asked.
You asked that the user be able to select only file system files or folder. You didn’t ask that non-file-system object be removed from view.
But why are these non-file-system objects shown in the view if the user can’t select them?
Because there might be a file system object inside them.
The shell namespace has two attributes related to the file system. One is SFGAO_FILESYSTEM, which means that the item is itself in the file system. The other is SFGAO_FILESYSANCESTOR, which means that the item or its children are in the file system.
If an item has the SFGAO_FILESYSANCESTOR attribute, then it will show up in the "make sure the user picks an item in the file system" dialogs: Even if the item itself is not a file system item, it may contain a file system item, so the dialog shows the item so the user can navigate into it to find the actual file system item.
The SFGAO_FILESYSANCESTOR attribute is like a sign that says "This way to the file system."
If the file and folder picker dialogs showed only file system objects and not also file system ancestors, then your dialog would be pretty blank, seeing as the root items like This PC and Network are themselves not file system items. But if you want to pick something from your D: drive, you’ll probably need to go through This PC to get there.
I also miss My Computer. | https://devblogs.microsoft.com/oldnewthing/20191021-00/?p=103014/ | CC-MAIN-2020-34 | refinedweb | 370 | 70.02 |
Firstly, we booked the room and the details stated that there was a bathtub, but there wasn't one.
Secondly, we asked for a non-smoking room, but the first thing we smelled upon opening the door was the stench of cigarettes (there was an ashtray in the room). We went to the front desk to check whether the room given to us was non-smoking; she kept nodding and saying our room was a non-smoking room. We didn't confront them anymore as we were too tired and they didn't understand a single word we said, in either English or Mandarin.
We borrowed 2 umbrellas from them and broke one as the wind was too strong (typhoon). The counter girl wasn't very happy; she gave us an irritated face and made rude sounds. We were going to buy a new umbrella the next day to compensate, as it was already late when we returned (we did buy a brand new one after).
Anyhow I wouldn’t recommend anyone with young children to stay in this hotel as it’s situated in a red light district kinda area lots of bar and men’s club etc.
The only redeeming part was the room size and the huge TV.
- Free Wifi
- Free parking | https://www.tripadvisor.com/ShowUserReviews-g297885-d3356496-r622959767-Hotel_W_Shin_Jeju-Jeju_Jeju_Island.html | CC-MAIN-2019-26 | refinedweb | 209 | 67.72 |
I have to find the sum of all even numbers that are not divisible by 7 using recursion. I tried this code but it seems I am making a mistake somewhere, because it returns 0:
public static void main(String[] args) {
System.out.println(specialSum(50));
}
public static int specialSum(int a) {
if ((a >= 1) && ((specialSum(a-1))%7 !=0)) {
return a + specialSum(a -1);
} else{
return 0;
}
}
}
Here you have the solution in one line. You should have a case to stop the recursion; in your code you stop the recursion at 49 because it is divisible by 7, and you don't take into account numbers less than 49.
```java
public static void main(String[] args) {
    System.out.println(specialSum(50, 7));
}

public static int specialSum(int a, int toDivide) {
    return (a == 0) ? 0
            : (a % 2 == 0 && a % toDivide != 0) ? a + specialSum(a - 1, toDivide)
            : specialSum(a - 1, toDivide);
}
```
I have an image stack with two channels containing the image and corresponding mask of the contained objects (which I extract using thresholding). Each slice is a time step. I generate the masks in Python an store them with the original image to TIFF in the two separate channels.
I would now like to semi-transparently overlay the object mask in the second channel over the image in the first channel from within the ImageJ GUI. That way I could easily browse through the time steps to check, if the masks are correct. See the Python code below to have a better idea of the results I am going for.
I did already try using the "glasbey inverted" LUT to color the mask channel (which AFAIK should assign different colors independently of intensity), but for some reason only some of the objects are transparent (see the image below).
```python
import matplotlib.pyplot as plt
from skimage.color import label2rgb
from skimage.measure import label

plt.imshow(objects_image, cmap='gray')
plt.imshow(label2rgb(label(objects_mask, connectivity=1), bg_label=0), alpha=0.05)
plt.show()
```
| https://forum.image.sc/t/showing-a-semi-transparent/29186 | CC-MAIN-2019-39 | refinedweb | 182 | 57.47 |
On Jun 22, 2007, at 18:51:10, C. Scott Ananian wrote:

> Back to kernel-land: in an IPv6 only world, it might make sense to export a /proc file compatible with the format of /etc/resolv.conf, with one DNS server address per line. If glibc uses/used inotify on /etc/resolv.conf, then symlinking /etc/resolv.conf to /proc/net/ipv6_dns allows glibc to get kernel DNS autoconfiguration updates without a special case. [Assuming that glibc was smart enough to watch the referenced file and not the symlink...]
>
> A draft patch to implement /proc/net/ipv6_dns is attached, just to make the discussion concrete. [Not guaranteed to apply cleanly, as I'm not sure that gmail won't munge the whitespace. But it should be readable at least.]

Ewwww, I suspect you're likely to get a lot of NAKs from people on this one.

1) Why must the kernel grok the DNS portions of the packets? Can't you just have a little userspace daemon which listens for the appropriate ICMPv6 messages and updates /etc/resolv.conf accordingly? That way you could even have userspace policy about which DNS information is acceptable for the given system.

2) New files in /proc which aren't directly related to processes are strictly forbidden. Hopefully eventually (IE: in several years when appropriate replacements are widely used) the /proc/meminfo, /proc/cpuinfo, /proc/mdstat, and other similar non-process-related files can be made to go away, but we certainly aren't adding new ones.

3) It's really ugly to generate random text data from kernelspace, because then people write 42 different userspace parsers for the text data and each one has subtle incompatibilities which make it impossible to extend the file in the future. This is why (2) is true.

4) Within 30 sec of such a patch going in, the virtualization people are going to start griping at you for not properly implementing a virtual namespace-ized IPv6 DNS autoconfiguration proc-file. Since that really can't be done easily without putting lots of policy in the kernel, it's probably easier to just follow the advice in (1) and do it in userspace.

Cheers,
Kyle Moffett
DropArea signals seem wonky. Entered, exited, dropped aren't when I expect.
The Entered, exited, and dropped signals aren't being emitted when I would expect them to be. I've created a DropZone thusly:
```qml
import QtQuick 2.7

DropArea {
    id: dropArea
    height: 120; width: 120

    onEntered: console.log("enter")
    onExited: console.log("exit")
    onDropped: console.log("drop")

    Rectangle {
        id: rect
        anchors.fill: parent
        color: "grey"
        border.color: "black"
        border.width: 4
    }
}
```
When I drag an item over this I get no signal. When I drop it the first time I get no signal. However, if I move it around within the DropZone (without dragging it out), I will get both signals.

Subsequently, when I drag the item out and drop it somewhere else, the DropZone emits entered and Dropped again one last time. I never get Exited.
Is this the expected behavior? | https://forum.qt.io/topic/70582/droparea-signals-seem-wonky-entered-exited-dropped-aren-t-when-i-expect | CC-MAIN-2018-09 | refinedweb | 143 | 71.71 |
This is the second blog of the blog series about Remote Code Analysis in ABAP Test Cockpit (ATC).
See also blogs:
Technical Requirements
To use Remote Code Analysis in ABAP Test Cockpit (ATC) you need to install and configure one ATC central check system on SAP_BASIS 7.51 or SAP_BASIS 7.52. Depending on how many custom objects you need to consider, the SAP recommendation for system sizing is about 1 CPU and 4 GB RAM per 16,000 objects per day; the data volume on the DB is about 400 kB per object.

The systems which you want to check in your landscape must be on SAP_BASIS 7.00, 7.01, 7.02, 7.31, 7.40 or 7.50 releases.
The RFC destinations for all checked systems must be provided in the ATC central check system and RFC-stubs must be implemented in all checked systems.
Depending on your support package level you need to apply the respective SAP Notes for using ATC to perform remote analysis. See the collective SAP Note 2364916.
If you intend to run ATC for the objects in your own custom namespaces (other as Z* or Y* namespaces), you will need to register custom namespaces of the checked systems at the central ATC system.
You would also need authorizations for administrative and quality assurance activities. See Authorizations for the ABAP Test Cockpit.
1. Setup System Role
By setting up the system role you can specify the current system as a central check system for remote ATC checks.
Log on to the ATC central check system as ATC administrator and call transaction ATC. Under ATC Administration -> Setup double-click the Set System Role entry:
Switch to change mode, choose ATC Checks by Object Providers Only option and click Save button.
This option defines that the current system takes the role of the ATC central check system and ATC is used to analyze development objects in remote systems.
2. Define RFC destinations for checked systems
Use transaction SM59 to create RFC destinations for each ABAP system, which will be checked in the current central ATC system.
3. Maintain Object Providers
While ATC check runs in the central system, the ATC framework uses RFC connections to the remotely checked systems to extract a model from the custom code for analysis. Object Providers define these RFC connections and therefore they must be configured for usage.
Every Object Provider must be assigned to a System Group, therefore before you configure Object providers, you need to create System Groups. A System Group contains multiple SAP systems of the same SAP release. It can be helpful to define more than one System Group (e.g. for each subsidiary of your company with its own custom code). In the central check system ATC exemptions are valid only for the whole relevant System Group.
To create a System Group, in the ATC transaction, under ATC Administration -> Setup double-click Maintain Object Providers and then double-click the item System Groups for selection. Switch to change mode, click the New Entries button in the toolbar and enter an ID and short Description for the new system group to be added.
Now you can maintain Object Providers. Select RFC Object Providers by double-clicking, switch to change mode and click the New Entries button. Specify ID, Description, System Group and the valid RFC destination to the remote SAP system. Repeat these steps for each Object Provider you want to set up.
Correction Systems define RFC connections which can be used to view and change the source code.
4. Configure Run Series
Now you need to configure ATC run series for remote ATC checks. In the ATC transaction under ATC Administration -> Runs double-click Schedule Runs and click the Create button in the toolbar. Enter the name for the series in the dialog and click Enter. Specify the data for the new series configuration: Description, Check Variant (global Code Inspector Variant), Object Provider and Object Selection (choose the packages or the object set in the remote system.). Save your configuration.
5. Schedule Run Series
Now you can schedule an ATC check run in the central system to check remote systems. In the ATC transaction, in the Overview, under ATC Administration -> Runs, double-click the Schedule Runs entry. Select the run series in question from the list of run series and click the Schedule button in the toolbar. Choose Execute (or F8) to run the ATC checks.
6. View Results
After successful execution of the ATC run, the check results from the remotely checked SAP systems will be available in the ATC central check system for analysis.
Now you can logon to ATC central check system as developer, and view the ATC results in the ATC Result Browser (switch to the ATC Result Browser in the SE80).
In the ATC Result Browser, choose Results by system group, select the relevant system group and display its ATC run series.
Double-clicking the node of the run series allows to view the list of findings. Double-click a finding to view the details. And now you can examine the finding in detail, access the related documentation with information on how to correct it, navigate to the relevant source code line and correct the finding or request an exemption. That’s it.
This is very informative.
Thank you for the detailed setup instructions.
BR
Mahadevan
This will create a one time run. However what if you want this to run every 30 days? How would you automate this so that you would not have to manually schedule this to run each time?
Hi Greg,
on the “Schedule Runs” UI you can maintain regular run series via the “Execute in Background” Option.
Regards,
Thomas.
Dear Olga,
Thank you presenting a series of detailed step by step ATC set up.
We have Evaluation system set up done and SAP Notes implemented on source ECC system. But when we run remote ATC check from evaluation system, we observe considerable number of tool failures. Along with priority 1,2 & 3, the number in tool failure is in hundreds. The error listed is primarily “Prerequisites not met”.
Could you please let us know what action needs to be taken to address this issue.
Hi Ilyas,
Did you also process the manual post-implementation steps mentioned in the SAP Notes you applied? E.g., did you execute program RS_ABAP_INIT_ANALYSIS as menitoned in SAP Note 2270689?
Michael
Hi Michael,
Yes I ran program RS_ABAP_INIT_ANALYSIS in check system. If I were to run the program again, the message says “Tables have already been created”.
Our check system is SAP_BASIS 701. So I don’t know if there needs to be any additional notes applied. Could you please suggest..
FYI, I can confirm below SAP notes are applied in check system:
2270689, 2011106.
Below manual activity is also done:
Create the function group SABP_COMP_PROCS_E in the package SABP_COMPILER (the master language is German – DE).
Activate the whole function group immediately after the creation.
Hi,
i already implement this solution on different sap servers, but with one i had aproblem that I always get the exception CX_SY_IMPORT_MISMATCH_ERROR.
Code:
import SYMBOLS = P_SYMBOLS CHECKSUM = P_CHECKSUM from data buffer L_SCR_ABAP_SYMB-CONTENT accepting padding.
Is there any solution for this problem? I couldn’t find something at SDN or in support.sap.com
Hi Marcel,
Could you check whether the issue is solved after applying SAP Note 2527903 in the central check system?
Thanks,
Michael
Hi Michael,
thank you. Now it is working.
Regards
Marcel
Hello,
after a hard and long way to bring this bird to fly we received our first scan result. But a lot of tool failures attend the scan result.
One tool error I do not understand:
It seams that the name of report is not forwarted to the message. There is a blank between the dot (.) and “program”
The note 2270689 is already implemented.
Also the message type (others) and the check title (unavailable) are not really precise.
But which exception or runtime error raise this message? What is the cause for this error? How can I analysis this?
Many thanks in advance.
with kind regards
Stephan Scheithauer
Hello Stephan,
Did you also process the manual post-implementaton steps of SAP note 2270689 (run program RS_ABAP_INIT_ANALYSIS) in the checked system?
Michael
Hello Michael,
thanks for your reply Michael.
Yes, the report was executed. At least I could see, that the tables were created and also filled.
When I start the report again, I get the message, that the report did already run.
Could the report RS_ABAP_INIT_ANALYSIS be really the reson why I did not receive remote data?
Other reports was checked correctly (at least over 2000). So this is not a general issue but I want to understand the issue and how to fix it so that I can check the rest of the reports.
Thanks in advance
With Kind regards
Stephan Scheithauer
Hello Stephan,
did you find a solution? I encounter the same issue and would like to know how to analyze the messages with check title ‘Unavailable’.
Best regards
David
Hello there,
I have 2 questions re the setup of such a ATC system and performing the custom code check for S/4 adaptation.
I would be grateful for any clarifications!
Thanks,
Michael
Hi Michael,
1. yes, HANA database is not required for the central ATC system.
2. custom code is not transferred and is checked by the central ATC system remotely (RFC).
Regards,
Olga.
Hi Olga,
Thanks for your response.
If the ATC check is done remotely without the (even temporary) transfer of custom code, doesn’t this mean that the check might as well be done in the target system itself?
Sorry to be specific here, but from the viewpoint of where the Intellectual Property (IP) even temporarily resides -esp when the ATC central system is cloud-based- is important to us.
Thank you!
M
Hi Michael,
the ATC check is RFC-based, it is executed in the ATC central system for remote systems, the underlying Code Inspector check variant must consist of RFC-based checks only. See also Remote ATC checks in the SAP Help.
Best regards,
Olga.
Hi Olga,
We have configured the central ATC system – SAP_BASIS 7.52 and able to get S4H ATC violations from checked system (SAP_BASIS 7.40, SP17). Applied notes as stated in #2364916. Any how we are getting few “Check Failures” message along with ATC result. Can you plz suggest how it can be remediated.
In addition – if we need to run ATC on code base in system configured as central system how can it can be performed. Do we need to define current system in object provider?
Thanks & Regards
Rajesh
Hi Rajesh,
did you also process all manual post-implementaton steps as stated in the applied SAP notes?
You cannot check the ATC central check system itself with the remote ATC infrastructure. You need to check it locally (switch the system role to “Local ATC Checks only”).
Regards,
Olga.
Thanks Olga for responding,
We are getting tool failure for only 8 objects. Yes we had implemented manual corrections as stated in notes. Only 2270689 is pending as we are not able to download this note. Is this can be reason for tool failure?
Thanks & Regards
Rajesh Dadwal
Thanks Olga for responding,
We are getting tool failure for only 8 objects. Yes we had implemented manual corrections as stated in 2364916. Only 2270689 is pending in checked system as we are not able to download this note. But report RS_ABAP_INIT_ANALYSIS is available and we executed it.
Thanks & Regards
Rajesh Dadwal
Hi Rajesh,
yes, the 2270689 is relevant for RFC extractor, it must be implemented in the checked system. See “Technical requirements” chapter of the blog.
Regards,
Olga.
Thanks Olga,
As you suggested, Now we have enabled remote ATC check. The remote ATC output received was not consistent with results earlier extracted using custom code migration worklist (Using – SYCM*) . Result from remote ATC are fewer in violation count & SAP notes #.
Can you suggest what can be the possible reason. Right now remote ATC is not giving all results as compare to Custom code migration worklist.
Thanks & Regards
Rajesh
Hi Rajesh,
generally it should not happen.
Maybe you have your own custom namespaces (other as Z* or Y* namespaces) to be checked with ATC? Then you need to register custom namespaces of the checked systems at the central ATC system.
Otherwise if you could provide a concrete example of the findings detected with SYCM and not found with remote ATC, please open the OSS ticket. It must be looked at in detail.
Hope this helps.
Best Regards,
Olga.
Hi Olga,
Yes, we have custom namespace. I can only see 2 namespace in this report which are not registered but anyhow Remote ATC is still showing the violation from these unregistered packages.
One question – customer name space role is ‘C’ in current source system.It can be the possible issue? and need to be changed to ‘P’. But these settings are same for both approach. Please suggest.
Thanks & Regards
Rajesh
Hi Rajesh,
please register the namespaces for ATC as I proposed and decide on the namespace role (P or C) based on the Setting Up the Namespace for Development.
If you still get less ATC findings as by using SYCM, open OSS ticket and make concrete example.
Regards,
Olga.
Hi Olga,
This seems to be possible issue. When we are running ATC remotely it is only getting violations for package marked with role (P) in sub-system. All these packages are for custom development (Non Y* / Z*). However with SYCM extract based approach it is giving simplification violation.
I have changed role for few packages and now its giving results thou ATC run. Still output is not exact same as compared to SYCM.
I checked few programs which were giving issues in SYCM but if run ATC on same program in Central system locally (without remote ATC) it is not giving any violations.
As ATC variants is recommended approach, so believe better is trust on ATC results.
Thanks & Regards
Rajesh Dadwal
Hi,
I could make all the configurations and installed many notes. So far so good, I also could check the test remote system (C13) with Z check variants.
But when I select one of these variants FUNCTIONAL_DB or PERFORMANCE_DB, then I get a syntax error:
But: When I switch off the first 4 of 5 checks in this variant, then the error disappears.
I was searching the SAP knowledge database and also the internet, but nothing helps me to solve this problem.
Can you explain and solve for me what might be the problem?
Regards, Frowin
Hi
We solved this problem: SAP Note 2270689 was inconsistent.
We removed this note and installed it with the latest version.
Regards, Frowin
Hi,
We have many SAP systems in the company on different SAP_BASIS levels: 7.11, 7.31, 7.40.
Currently we could establish remote ATC connections to systems with SAP_BASIS 7.31 and 7.40, and also we could run remote ATC tests successfully.
Unfortunately in the technical requirements is written SAP_BASIS 7.11 is no supported.
These systems with SAP_BASIS 7.11 are our main system with target to do remote ATC, because we will upgrade for SAP S4/HANA in near future. Remote ATC would be a great help to do this Job.
Questions:
Any help for that?
Regards
Frowin
Hi Frowin,
unfortunately, the SAP_BASIS 7.11 is not supported due to the technical feasibility in this particular release. Please consider, 7.11 is not the underlying release for SAP ERP and therefore the SAP S/4HANA conversion path doesn’t exists for 7.11 and consequently S/4HANA related custom code checks with remote ATC don’t make any sense.
Best Regards,
Olga.
Hello,
we have set up an ATC Central System (SAP_BASIS 7.52). With this central ATC we want to check a remote system for S/4 HANA readiness.
Which Variant do i need for the Checkrun? I found the following variants:
S4HANA_READINESS_REMOTE and S4HANA_READINESS_1709 , which is the right one?
As the Object Provider i use the remote system.
Gruß
Toni
Hi Toni,
the S/4HANA release specific variants check for the simplification items of the corresponding S/4HANA releases. For example the S4HANA_READINESS_1610 checks all relevant simplification items for S/4HANA 1610 and so on.
If you need to check for S/4HANA 1709 readiness, you can use either S4HANA_READINESS_REMOTE or S4HANA_READINESS_1709. Both would be correct.
Viele Grüße,
Olga.
Thanks,
i did now the remote S/4 HANA Checks with the ATC and also with the SCI.
I wonder why do I get different results?
ATC = 168 “Errors”
SCI = 25 “Errors”
Gruß
Toni
Hi Toni,
if you used SCI based on the SAP NetWeaver 7.5 approach (Code Inspector in the context of an S/4HANA migration), then you would get other results, because in 7.52 with remote ATC we delivered more S/4HANA readiness checks).
You need to use remote ATC, this is the recommended approach.
Regards,
Olga.
OK,
thank you for the Info, that ATC should be used if possible.
One Question again. During the upgrade process to S/4 HANA via SUM, the code will also be checked, right?
Which mechanism does the SUM use?
Gruß
Toni
During the system conversion from ERP to S/4HANA (we call it system conversion, not an upgrade, because S/4HANA is a different product family) the SUM executes different tasks: database migraton, software update to S/4HANA, tables content conversion to the S/4HANA data model, but doesn’t check custom code.
If you mean SAP Readiness Check, which runs in the preparation phase before SUM, then it uses Code Inspector and Custom Code Analyzer (SYCM), but delivers only high- level overview over the affected custom code by the S/4HANA simplifications, for a deep-dive analysis still the remote ATC must be used.
Regards,
Olga.
Hello Olga,
thank you
and when i want to check my custom code for S/4 HANA, which Variants do i need to take? The same?
Gruß
Toni
Hi Toni,
this question duplicates your previous question about S/4HANA readiness variants. See my answer above.
Regards,
Olga.
Hi Olga,
Just wanted to say thanks for this informative blog.
I would like to ask if could we use a SAP CAL system as the central ATC check system or do we have to have to install a standalone SAP NetWeaver system (SAP_BASIS >=7.51) as we only require to do the S4 HANA checks as a one-off exercise.
if we can use a SAP CAL system do you have any links which I can reference.
Thanks, Raj
Hi Raj,
yes, you can use AS ABAP System in the CAL as ATC central check system.
Best regards,
Olga.
Hi Olga/All
I’m facing an issue with viewing code in the remote system.
For the Object Provider I have configured an RFC Destination with a Service User and in the Correction Systems I have configured an RFC Destination with a normal dialog user (me).
I am assuming that when I try to view the source code of a finding then the Correction System RFC destination would be used?
However it appears that the Object Provider is used.
Can you confirm if my thinking is correct.
Thanks
Ian
Hi Ian,
Did you try to change the target system for navigation to the correction system in the ATC result list as shown in the screenshot below?
If you did not change this setting, the target system for navigation would be the checked system.
Kind regard,
Michael
Hi Michael
No I hadn’t! Many thanks for this. I’d obviously not seen or certainly understood the option.
Problem solved!
Cheers
Ian
Hi Martin/All
As a follow on question, how have you defined the user for the RFC Destination for correction system? Do you use the current user, a specified dialog type user or a system user?
Thanks
Ian | https://blogs.sap.com/2016/12/13/remote-code-analysis-in-atc-technical-setup-step-by-step/ | CC-MAIN-2018-30 | refinedweb | 3,330 | 64.91 |
I have downloaded the Florida dataset from the state website at Master Address List Download
It seems to be paid for by the State of Florida, so hopefully okay to use. I brought it into MySQL, and it includes street number, prefix, street, city, zip and LAT and LON for over 9 million locations. It also contains sales tax information by destination, the reason I compiled it.
However,it seems like an amazing data source for OpenStreetMap The data on the Florida map is very sketchy in this area... in fact I have to look up LAT and LON on Google when making a favorite prior to my leaving the house.. using the Android App OsmAnd ... which I think uses the OpenStreetMap dats. As I have it in my local MySQL database, it could be exported nearly any way. Thoughts? Sound terrific? Sounds copyrighted so unavailable? I hope not! Please let me know what you think.
asked
24 Aug '16, 03:32
LouiePaul
26●1●1●2
accept rate:
0%
You should take a look at the guidelines for uploading datasets from other sources:
answered
24 Aug '16, 03:50
logrady
129●2●6
accept rate:
0%
And if the license is okay, you need a plan to deal with addresses that are already in OSM. Usually the existing ones are kept, so you need a way to identify and remove duplicates from the import. And you should come up with a QA plan to assure things go well. For this getting local mappers involved is a good idea.
@LouiePaul you need to follow the import guidelines and please: they have been born out of a decade of bad imports and while might seem strict, are so because it is necessary.
You will need community buy in and a plan how to conflate the imported data with existing data. This will be a fair bit of work.
The link I provided includes very detailed instructions about these issues (and more):
Step 1 - Prerequisites,
Step 2 - Community Buy-in,
Step 3 - License approval,
Step 4 - Documentation,
Step 5 - Import Review,
Step 6 - Uploading.
If you want a lat long it can be found moving the map centre then read the URL... so 13 is zoom level, 35.1541 North ( negative for South) and -90.0321 West ( positive = East)
Once you sign in you will be able to subscribe for any updates here
Answers
Answers and Comments
Markdown Basics
learn more about Markdown
This is the support site for OpenStreetMap.
Question tags:
import ×192
database ×115
florida ×2
question asked: 24 Aug '16, 03:32
question was seen: 2,570 times
last updated: 24 Aug '16, 13:46
[closed] Missing DB.php
How to setup PostGIS server and import .osm-file on Windows
How to create a development environment with an empty planet map?
How to keep a local modified database in sync with osm database
Failed to allocate space for node cache file
Importing PDF JOSM
What OSM is not
How do I import map data from a .dwg file to OpenStreetMap?
How to get data for two countries ?
Uploading a GPX from TTGpsLogger doesn't work
First time here? Check out the FAQ! | https://help.openstreetmap.org/questions/51686/over-9-million-street-addresses-available-in-florida-with-lat-lon-good-to-include | CC-MAIN-2022-27 | refinedweb | 533 | 71.55 |
Results 1 to 3 of 3
- Join Date
- Feb 2013
- 2
- Thanks
- 0
- Thanked 0 Times in 0 Posts
Vbs command structure to add several files to a single zip
What I am trying to do is to select several files that are dragged to the icon for my current vbs script and copy them all to a folder which can then be zipped as a single object to be attached to an email.
That is the long explanation but my current script can only handle file-types that are not restricted by Outlook. It works well for several zip or txt files but cannot handle .exe or other types that Outlook will not accept as attachments.
To make it a truly handy desktop icon, it needs to work for anything I drop onto it.
I have been told that I need to put everything into a single location before being zipped but nothing I have tried can do that without crashing the already working part of the script.
At this point, I am completely lost would appreciate assistance if anyone knows a way to manage this. I feel certain that this is not an impossible task.
- Join Date
- Dec 2009
- Location
- Earth
- 8,666
- Thanks
- 58
- Thanked 1,063 Times in 988 Posts
If you post the script we can make some suggestions.
cheers, Paul
- Join Date
- Mar 2004
- Location
- Manning, South Carolina
- 9,762
- Thanks
- 400
- Thanked 1,546 Times in 1,402 Posts
Questorfla,
I saw you post and thought "this is a job for PowerShell!" and where did I leave my cape.
So here's some powershell code I found on the web and did some modifications to to make it do closer to what I think you want.
You need to call this from a shortcut or if you want it available all the time from a Scheduled Task set to run on startup.
Your Shortcut's target needs to be: C:\Windows\System32\WindowsPowerShell\v1.0\powersh ell.exe -sta -file "G:\BEKDocs\Scripts\Test-DragAndDrop.ps1" of course you'll need to adjust the file path for the location of the .ps1 file on your machine.
Here's the code:
Code:
# PowerShell Drag & Drop sample # Usage: # powershell -sta -file dragdrop.ps1 # (-sta flag is required) # Sourec of Base Code: Function DragDropSample() { Add-Type -AssemblyName System.Windows.Forms $form = New-Object Windows.Forms.Form $form.text = "Drag&Drop sample" $listBox = New-Object Windows.Forms.ListBox $listBox.Dock = [System.Windows.Forms.DockStyle]::Fill $handler = { If ($_.Data.GetDataPresent( [Windows.Forms.DataFormats]::FileDrop)) { ForEach ($Filename in $_.Data.GetData( [Windows.Forms.DataFormats]::FileDrop)) { #***Delete following line if you don't want names listed *** $listBox.Items.Add($Filename) Copy-Item -Path "$Filename" -Destination "$FileDest" -Force } #End ForEach ($Filename... } #End If ($_.Data... } #End $handler $form.AllowDrop = $true $form.Add_DragEnter($handler) $form.Controls.Add($listBox) $form.ShowDialog() } #End Function DragDropSample #------------------------ Add a Helper ------------------------ $SWAArgs = @{name = “Win32ShowWindowAsync” namespace = 'Win32Functions'} $showWindowAsync = Add-Type –memberDefinition @” [DllImport("user32.dll")] public static extern bool ShowWindowAsync( IntPtr hWnd, int nCmdShow); “@ @SWAargs -PassThru Function Show-PowerShell() { [void]$showWindowAsync::ShowWindowAsync( (Get-Process –id $pid).MainWindowHandle, 10) } #End Function Show-PowerShell Function Hide-PowerShell() { [void]$showWindowAsync::ShowWindowAsync( (Get-Process –id $pid).MainWindowHandle, 2) } #End Function Hide-PowerShell #------------------------ Main Program ------------------------ Hide-PowerShell $FileDest = "G:\Test\FilesToZip" #*** Place your Filespec here! DragDropSample | Out-Null
Here's a sample run so you can see what it looks like and a view of File Explorer showing the files copied.
PaulTCopy.PNG
Read the comments in the file for adjusting to your environment and needs. You can resize the window if you want it more the size of an Icon but that makes the file names a little hard to read.
If you have never used PS you can find setup instructions here. Setting up PowerShell: Items 1-3
BTW: I notice that this is only your second post so let me Welcome you to the Lounge!
HTHMay the Forces of good computing be with you!
RG
PowerShell & VBA Rule!
My Systems: Desktop Specs
Laptop Specs | http://windowssecrets.com/forums/showthread.php/176186-Vbs-command-structure-to-add-several-files-to-a-single-zip | CC-MAIN-2017-22 | refinedweb | 674 | 64.91 |
proc stringDistance {a b} {
    set n [string length $a]
    set m [string length $b]
    for {set i 0} {$i<=$n} {incr i} {set c($i,0) $i}
    for {set j 0} {$j<=$m} {incr j} {set c(0,$j) $j}
    for {set i 1} {$i<=$n} {incr i} {
        for {set j 1} {$j<=$m} {incr j} {
            set x [expr {$c([- $i 1],$j)+1}]
            set y [expr {$c($i,[- $j 1])+1}]
            set z $c([- $i 1],[- $j 1])
            if {[string index $a [- $i 1]]!=[string index $b [- $j 1]]} {
                incr z
            }
            set c($i,$j) [min $x $y $z]
        }
    }
    set c($n,$m)
}

# some little helpers:
if {[catch {
    # DKF - these things (or rather improved versions) are provided by the 8.5 core
    package require Tcl 8.5
    namespace path {tcl::mathfunc tcl::mathop}
}]} then {
    proc min args {lindex [lsort -real $args] 0}
    proc max args {lindex [lsort -real $args] end}
    proc - {p q} {expr {$p-$q}}
}

proc stringSimilarity {a b} {
    set totalLength [string length $a$b]
    max [expr {double($totalLength-2*[stringDistance $a $b])/$totalLength}] 0.0
}

# Testing...
% stringSimilarity hello hello  ;# identity implies perfect similarity
1.0
% stringSimilarity hello hallo  ;# changed one out of five letters
0.8
% stringSimilarity hello Hallo  ;# case matters
0.6
% stringSimilarity hello world  ;# one match of five (l or o)
0.2
% stringSimilarity hello helplo ;# insert costs slightly less
0.818181818182
% stringSimilarity hello lohel  ;# same letters but all in different positions
0.2
% stringSimilarity hello again  ;# total dissimilarity
0.0

[Nice work, of course; I particularly applaud the example evaluations.]
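For readers who want to check the arithmetic outside Tcl, here is a small Python transliteration of the same pair of functions (mine, not part of the wiki page); it reproduces the transcript values above:

```python
def distance(a, b):
    # Plain Levenshtein distance, row by row, equivalent to stringDistance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        row = [i]
        for j, cb in enumerate(b, 1):
            row.append(min(prev[j] + 1,          # deletion
                           row[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = row
    return prev[-1]

def similarity(a, b):
    # (totalLength - 2*distance) / totalLength, clipped at 0,
    # exactly the stringSimilarity formula above.
    total = len(a) + len(b)
    if total == 0:
        return 1.0  # two empty strings are identical
    return max((total - 2 * distance(a, b)) / total, 0.0)
```

For example, similarity("hello", "helplo") gives 9/11, matching the 0.818181818182 in the transcript.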
Both string* functions may be tuned to better fit the needs of the application. In stringDistance, the cost for inequality (presently the constant 1, applied by the incr z) could be derived from the characters in question; e.g. 0/O or 1/I could cost only 0.1, etc. In stringSimilarity, if the strings are qualified as being either standard (like from a dictionary) or (possible) deviation, one could divide the distance by the length of the standard string. (This would prevent the effect seen above where an insert costs slightly less, because the insert increases the total length.)
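As a sketch of the weighted-cost idea, here is a Python version with a pluggable substitution cost; the confusable pairs and the 0.1 cost are illustrative values of mine, not something defined on this page:

```python
def weighted_distance(a, b, subst_cost=None):
    """Levenshtein distance with a pluggable substitution cost.

    subst_cost(x, y) returns the cost of replacing x with y.  The
    default treats the look-alike pairs 0/O and 1/I as cheap (0.1)
    and everything else as the ordinary cost 1 - purely illustrative
    values, tune them for your own data.
    """
    if subst_cost is None:
        cheap = {frozenset("0O"), frozenset("1I")}
        def subst_cost(x, y):
            if x == y:
                return 0.0
            return 0.1 if frozenset((x, y)) in cheap else 1.0
    prev = [float(j) for j in range(len(b) + 1)]
    for i in range(1, len(a) + 1):
        row = [float(i)]
        for j in range(1, len(b) + 1):
            row.append(min(prev[j] + 1,      # deletion
                           row[j - 1] + 1,   # insertion
                           prev[j - 1] + subst_cost(a[i - 1], b[j - 1])))
        prev = row
    return prev[-1]
```

With these weights, "0K" versus "OK" costs only 0.1, while an ordinary one-letter difference still costs 1.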
[jruv] 2009-12-21 Testing with the below parameters.
% stringSimilarity hello h
0.0

Obviously, the two strings are not totally dissimilar. I made a small change to stringSimilarity.
proc stringSimilarity {a b} {
    set totalLength [max [string length $a] [string length $b]]
    max [expr {double($totalLength-[levenshteinDistance $a $b])/$totalLength}] 0.0
}

Testing...
% stringSimilarity hello h
0.2

Here is another version of stringSimilarity:
proc stringSimilarity {s t} {
    set sl [string length $s]
    set tl [string length $t]
    set ml [max $sl $tl]
    set dn [levenshteinDistance $s $t]
    # -- get match characters number
    set mn [expr $ml - $dn]
    # -- match number != 0? (mn-1)/tl + (1/tl)*(mn/sl)
    return [expr $mn==0?0:($mn-1+double($mn)/$sl)/$tl]
}
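To compare how the two normalizations behave, here is a Python transliteration of both similarity formulas on top of a plain Levenshtein routine (again my own sketch, not wiki code; the second formula assumes non-empty strings):

```python
def levenshtein(s, t):
    # Plain Levenshtein distance, row by row.
    prev = list(range(len(t) + 1))
    for i, sc in enumerate(s, 1):
        row = [i]
        for j, tc in enumerate(t, 1):
            row.append(min(prev[j] + 1, row[j - 1] + 1,
                           prev[j - 1] + (sc != tc)))
        prev = row
    return prev[-1]

def similarity_maxlen(a, b):
    # First corrected version: (maxlen - distance) / maxlen, clipped at 0.
    ml = max(len(a), len(b)) or 1
    return max((ml - levenshtein(a, b)) / ml, 0.0)

def similarity_weighted(s, t):
    # Second version: (mn - 1 + mn/sl) / tl with mn = maxlen - distance,
    # and 0 when nothing matches; assumes s and t are non-empty.
    ml = max(len(s), len(t))
    mn = ml - levenshtein(s, t)
    return 0.0 if mn == 0 else (mn - 1 + mn / len(s)) / len(t)
```

Both give 0.2 for "hello" versus "h" and 1.0 for identical strings, but they weight partial matches differently for strings of unequal length.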
Speed tuning: The subtractor helper "-" above makes the code look nicer than if an explicit expr were thrown in for each occurrence; however, keeping a second iterator whose value is one less than the original iterator brings runtime from 5.7 ms down to 4.2 ms (Sun Solaris; tested both "hello hello" and "hello again"):
proc stringDistance2 {a b} {
    set n [string length $a]
    set m [string length $b]
    for {set i 0} {$i<=$n} {incr i} {set c($i,0) $i}
    for {set j 0} {$j<=$m} {incr j} {set c(0,$j) $j}
    for {set i 1; set i0 0} {$i<=$n} {incr i; incr i0} {
        for {set j 1; set j0 0} {$j<=$m} {incr j; incr j0} {
            set x [expr {$c($i0,$j)+1}]
            set y [expr {$c($i,$j0)+1}]
            set z $c($i0,$j0)
            if {[string index $a $i0]!=[string index $b $j0]} {
                incr z
            }
            set c($i,$j) [min $x $y $z]
        }
    }
    set c($n,$m)
} ;# RS
Artur Trzewik 2006-03-31: Here is another implementation, which I found in the OmegaT Java program and have rewritten in Tcl; it seems to be a little bit faster (about 30%).
# author Vladimir Levenshtein
# author Michael Gilleland, Merriam Park Software
# author Chas Emerick, Apache Software Foundation
# author Maxym Mykhalchuk
proc levenshteinDistance {s t} {
    set n [string length $s]
    set m [string length $t]
    if {$n==0} {
        return $m
    } elseif {$m==0} {
        return $n
    }
    for {set i 0} {$i<=$n} {incr i} {
        lappend d 0
    }
    # indexes into strings s and t
    # int i; // iterates through s
    # int j; // iterates through t
    # int cost; // cost
    for {set i 0} {$i<=$n} {incr i} {
        lappend p $i
    }
    for {set j 1} {$j<=$m} {incr j} {
        set t_j [string index $t [expr {$j-1}]]
        lset d 0 $j
        for {set i 1} {$i<=$n} {incr i} {
            set s_i [string index $s [expr {$i-1}]]
            if {$s_i eq $t_j} {
                set cost 0
            } else {
                set cost 1
            }
            # minimum of cell to the left+1, to the top+1, diagonally left and up +cost
            lset d $i [min [expr {[lindex $d [expr {$i-1}]]+1}] [expr {[lindex $p $i]+1}] [expr {[lindex $p [expr {$i-1}]]+$cost}]]
        }
        # copy current distance counts to 'previous row' distance counts
        set _d $p
        set p $d
        set d $_d
    }
    # our last action in the above loop was to switch d and p, so p now
    # actually has the most recent cost counts
    lindex $p $n
}
aricb 11-11-2008 Here is another implementation of the Levenshtein Distance algorithm. This one is based on the Wikipedia pseudocode at
proc levenshteinDistance {s t} {
    # special case: $s is an empty string
    if {$s eq ""} {
        return [string length $t]
    }
    # initialize first row in table
    for {set i 0} {$i <= [string length $t]} {incr i} {
        lappend prevrow $i
    }
    set i 0
    foreach schar [split $s ""] {
        incr i
        set j 0
        set row [list $i]
        foreach tchar [split $t ""] {
            incr j
            set cost [expr {$schar ne $tchar}] ;# $cost is 0 if same, 1 if different
            # difference between $schar and $tchar is minimum of:
            #   [lindex $prevrow $j] + 1       = cost of deletion
            #   [lindex $row $j-1] + 1         = cost of insertion
            #   [lindex $prevrow $j-1] + $cost = cost of substitution (zero if same char)
            lappend row [expr {min([lindex $prevrow $j] + 1,
                                   [lindex $row [expr {$j-1}]] + 1,
                                   [lindex $prevrow [expr {$j-1}]] + $cost)}]
        }
        set prevrow $row
    }
    # Levenshtein distance is value at last cell of last row
    return [lindex $row end]
}
DKF: An optimized version of Artur's code above is this:
proc levenshteinDistance {s t} {
    if {![set n [string length $t]]} {
        return [string length $s]
    } elseif {![set m [string length $s]]} {
        return $n
    }
    for {set i 0} {$i <= $m} {incr i} {
        lappend d 0
        lappend p $i
    }
    for {set j 0} {$j < $n} {} {
        set tj [string index $t $j]
        lset d 0 [incr j]
        for {set i 0} {$i < $m} {} {
            set a [expr {[lindex $d $i]+1}]
            set b [expr {[lindex $p $i]+([string index $s $i] ne $tj)}]
            set c [expr {[lindex $p [incr i]]+1}]
            lset d $i [expr {$a<$b ? $c<$a ? $c : $a : $c<$b ? $c : $b}]
        }
        set nd $p; set p $d; set d $nd
    }
    return [lindex $p end]
}

Note that this is 100% bytecoded in Tcl 8.5.
IL This is a topic I've been dealing with a lot lately, and I'm finding (of course) that the nature of the math really depends on the data you're trying to score. In the field of genetic sequence matching you might be looking for common subsequences; in editorial fields you might be looking for misspellings. (escargo - Is that a joke?) IL - no, I'm not that corny :(, thanks for the heads-up. I've found that under a certain number of characters in a string, any percentage measurement really doesn't do much good. CAT and CAR (especially in an editorial context) are entirely different concepts, but you can make the argument that they differ by only one character; in this case that difference just ends up being 33%. It raised the question to me: can you ever really assign a percentage of relevancy by sheer numbers alone? Probably not. In most cases I regard string matching as highly domain specific.

You should build a semantic network of networks simulating a semiotic network, constructed from nontrivial symbol knots - you could do this by really simple statistical analysis and dictionary definitions from some commonly given source. From the dictionary you will get a definition, a list of synonyms and antonyms, and a list of related terms and phrases (it *doesn't* *matter* *how* they are related). Select a word A and word B you are interested in (like CAT and CAR). Get the definitions. Now find uncommon terms in the definitions. Assign a number telling us how probable its appearance is by chance only, calculated over some large body of language (documents related in any way to the document we have the words from, preferably).
Go from the beginning of the definition to the end (just like we are reading material), selecting words-nodes.

Calculate the p of a relation forming an arc from term a to term b by multiplying probabilities of appearance - call that relation "retrospection" or "retroaction" or "implication by chance" - it tells how much one of the words reflects the other's semantic connections, thus increasing the chance that an intelligent agent will put it there, in that order (we think backwards, like Google). Lappend it to a vector.

Calculate the p of the relation "precognition", which tells us that a word b has a chance to appear after word a by reverse creation of sense (like an afterthought of a writer), making it more probable that an intelligent agent will backtrack and write up a paragraph that contains it - this time add probabilities, as those events are not really connected causally and should get some bonus for being so lonely. Lappend it to a vector.

Calculate the probability of the relation "negation" - subtract probabilities, just don't get a negative number - negation of nodes works both ways (is reflective) - it tells us how much they don't like each other - so if they are here in the text we are analyzing, it's an instance of unusual writing style: they should be far away from each other and they are not. Lappend it to a vector.

Calculate the probability of the relation "self-propagation" by mutating the p of the node by inverting it (subtracting from 1 - the MUST in the RFC sense) - if a node is unlikely to show up at random, it will get a large number of self-propagation, for it did show up - this relation is symmetrical, obviously. Lappend it to a vector.

If there are several identical terms that are also chosen as a node, it means we have an unlikely event of heavy semantic coupling - divide the p of the node by the number of occurrences plus one (one less degree of freedom). Otherwise simply lappend the p of the singular node to a vector.
You get a vector consisting of five relations and connections to word a at the beginning and word b at the end. This gives you a way to build a matrix that looks like a picture of a face for the word a and word b - add those vectors, one after another, until you are out of features (rare words, that are likely to be in turn defined somewhere using a lot of common, related words) to extract from the definition itself. Analyze the matrix - calculate connection of relations of the same type from two vectors that lie one after another (all the structure is really a faerie ring ;-}), don't worry if you get funny numbers - we do not really deal with a simple probabilistic space, so the 0..1 does not apply, you can normalize them later to fit.Now you could train our network by finding relations of higher order, but that's a long story and you can skip this step if you want. Take what you have - a list of lists, like a matrix of pixels showing some picture. Use some image recognition algorithms to calculate various degrees of similarity between the matrix-image complex symbolic representation anologous to real images showing us a CAT and a CAR (numbers converted to colors of pixels and other values important in image processing). If you are not sure that is close enough - make such images for every mutation of a word CAT and CAR you can think of to get more accurate results - warning - mutated words must be connected to some real-world objects, that is words you can find defined in a dictionary. 
If you want you can replace this with finding definitions for hypernodes in dictionary definitions of our words - first level is a level of image reflected truly in a mirror, second level is a level of reflection of image from the other side of the mirror (relation matrix is inverted), third level is a level of metalanguage - that is images that are about image and reflections of it (relation matrix is transposed, rows become columns, columns rows), fourth level is a level of disconnection - think of it like a secret society layer in a neural network, it affects the metalanguage level, but is not seen - nodes that are considered there are the words that are popular, not those that are rare (other way to invert without inverting anything in substance).Fifth level is a level of primodial chaos and love or desire to connect no matter what - it is connected back to itself and propagates random changes back to level one to create some lively noise, simmulating several people reading the same words and then talking leisurely to each other about their contents, naturally using those rare words they found out and a lot of common ones to keep conversation lively. Think about five mirrors, that are reflecting image of some entity one to another and at the same time reflecting itself in second set of mirrors that serves as memory, that changes constantly, reflecting reality realized. Easy to remember - you have two hands and each one has five fingers, now imagine yourself weaving something. It's you, weaving the reality and the story cannot ever end, obviously. You can get sense out of nonsense. It's a kind of magic. The similarity between the word CAT and CAR are a dynamic process too - in 50 years those words will mean something completely different, spelling might chance... I gave you a way to capture a gestalt of this process. Calculations are really simple and I hope my poetic zeal did not offend anyone. 
Milk and kisses...Also there is the notion that A has a relevancy towards B, but the reverse might not be true. ie. you can say THE has a 100% match against THE FLYING CIRCUS.[4] Here is a good summary I found on the topic of genetic sequence matching algorithms. I was using a variation of the Smith-Waterman algorithm to run tests against some supposedly related datasets.
For faster (though not as exact) string comparisons, see Fuzzy string search
To compare how similar sounding two strings are, try the soundex page. | http://wiki.tcl.tk/3070 | CC-MAIN-2016-07 | refinedweb | 2,498 | 50.91 |
Hello
I know that x86_64 is not supported at the moment (although VM does support
this mode in interpreter only way if ran with -Dvm.use_interpreter=true), so
I tried to do some porting at home where I run gentoo linux [1]. I didn't
succeed in running anything but moved somewhat in building classlib and VM
and want to share some thoughts which might be useful for all linux builds.
1. Many shared libraries in classlib are built without -fPIC option. As far as
I understand this prevents effective sharing of one library between many
processes, and for me linking just didn't work if sources were compiled
without -fPIC. I had to patch the following files to make classlib build on
x86_64:
build/make/components/classlib/pool.xml
build/make/components/vm/hythr.xml
build/make/components/vm/vmi.xml
build/make/targets/build.native.xml
build/make/targets/common_classlib.xml
I can create a JIRA issue with the patch because I think that all classlib
shared libraries should be built with -fPIC. Maybe there are other places
which I missed because my compilation was not finished.
2. The build/make/targets/common_classlib.xml file had -march=pentium3 which
to me doesn't seem to be necessary. I just deleted this option.
3. File vm/vmcore/src/thread/hythr/hythreads.h defines type
hythread_entrypoint_t like this:
typedef int(* hythread_entrypoint_t)(void*);
so this is a pointer to a function which returns int. In the function
hystart_wrapper in file vm/vmcore/src/thread/hythr/hythreads.cpp:243 there is
a line
return (void*) entrypoint(hyargs);
which converts int returned by entrypoint to a void* which producess a gcc
warning
[cc] /home/gregory/work/Harmony/Harmony-work/vm/vmcore/src/thread/hythr/hythreads.cpp:243:
warning: cast to pointer from integer of different size
I don't know exactly how safe it is to convert int to a void* in this place so
I just removed -Werror from build/make/targets/common_vm.xml but I think that
int should not be used in places where it may be treated as a pointer. Quite
possibly that code may cause a crash.
4. At this moment I've got VM built (JIT is not built on this platform so I
didn't even have to apply patches from HARMONY-443) but in deploy directory
there were very few API libraries which failed to be preloaded by VM. I've
added em64t architecture to build/make/deploy.xml for vmi and hy* libraries.
That's where compilation didn't work any more. Since port library comes only
in IA32 version, the file hysignal.c fails to compile, specifically functions
infoForGPR, infoForControl and infoForModule. And finally at this point I
found that a question that I was going to ask about x86_64 version of
hysignal.c was asked on Harmony already at [2]. I'll have to look
specifically how the aforementioned functions are used to try to do the port
to x86_66 but probably not tonight.
[1]
My configuration is
kernel: 2.6.15-gentoo-r1
gcc: gcc (GCC) 3.4.5 (Gentoo 3.4.5, ssp-3.4.5-1.0, pie-8.7.9)
binutils: 2.16.1
glibc: GNU C Library stable release version 2.3.5
[2]
--
Gregory Shimansky, Intel Middleware Products Division
---------------------------------------------------------------------
To unsubscribe, e-mail: harmony-dev-unsubscribe@incubator.apache.org
For additional commands, e-mail: harmony-dev-help@incubator.apache.org | http://mail-archives.apache.org/mod_mbox/harmony-dev/200605.mbox/%3C200605110031.31171.gshimansky@gmail.com%3E | CC-MAIN-2014-10 | refinedweb | 574 | 56.76 |
Hi,
i am trying to run all tests in test/unit through Rubymine, which ends up in an error.
Running each test individually through RM works fine though.
The error is like so:
---------
NoMethodError: undefined method `kind_of?=' for #<Person:0x0000010809ad18>
activemodel (3.1.0) lib/active_model/attribute_methods.rb:385:in `method_missing'
activerecord (3.1.0) lib/active_record/attribute_methods.rb:60:in `method_missing'
factory_girl (2.1.0) lib/factory_girl/proxy/build.rb:21:in `set'
factory_girl (2.1.0) lib/factory_girl/attribute/static.rb:12:in `add_to'
...
---------
Doing a bit of debugging I ended up with tunit_in_folder_runner.rb (line 80) imposing this on the object in question, which can be shown, if the kind_of? is changed to is_a? . Then is_a? is shown in the NoMethodError.
--------- tunit_in_folder_runner.rb, around 80 ---------
def is_test_case_class?(obj)
begin
is_test_case = (obj.kind_of?(Class) == true) && obj.ancestors.include?(Test::Unit::TestCase) && (obj != Test::Unit::TestCase)
rescue Exception => e
is_test_case = false
end
is_test_case
end
---------
Iterating over the ObjectSpace adds a second attributes array to the Factories with a FactoryGirl::Attribute::Static (@name = 'kind_of?', @value = 'Class'). This eventually then gets send to ActiveRecord for evaluation ending up in the error above.
Environment:
RubyMine 3.2.3
rails (3.1.0)
factory_girl_rails (1.2.0)
shoulda (2.11.3)
Any idea on how to avoid this? Am I missing a configuration issue? Thanks.
I got the same problem and posted to
Except that I found the kind_of? attribute is a FactoryGirl::Attribute::Dynamic (rather than static)
Do someone have a solution? It is hard to figure out what FactoryGirl is doing. What else can we expect from Ruby.
'test-unit' gem provides 'testrb' script for launching tests.
Probably "testrb -b dir" runs all tests in a folder. If so, then we can use such way instead of iterating through ObjectSpace. Could you check either testrb script works with FactoryGirl or not?
Also you can launch all tests in project using rake tasks.
thanks for the answer, this works fine in RAILS_ROOT:
> testrb -Itest test/unit/
test has to be included due to "require 'test_helper'" in test/unit/* files.
From somewhere else (cd ..) -b has to be included and -I changed
> testrb -b./rails -Irails/test test/unit/
This seems to me a better approach then iterating through ObjectSpace. Even more as
ObjectSpace(aClassOrMod).each_object
does not work for most Classes or Modules. At least not for the ancestors of the Testclasses, but the class Object. Which does not narrow down the objects to a useful set ;-). Otherwise restricting to the root of Testclasses would suffice.
Thanks for the hint again.
In case anyone else finds this and is having the same problem... the following hack seems to solve the problem:
It's ugly, but it does prevent the ObjectSpace iteration from screwing up your factories... and now all of my tests pass without throwing the kind_of?= error.
: running all unit-tests does not work (FactoryGirl,Shoulda,TestUnit) tags the thick within a sloppy incident. The nostalgia disputes the freeing changeover. : running all unit-tests does not work (FactoryGirl,Shoulda,TestUnit) blanks the software. : running all unit-tests does not work (FactoryGirl,Shoulda,TestUnit) butters a headache. When will : running all unit-tests does not work (FactoryGirl,Shoulda,TestUnit) shy away over a crowded fever?
---------------------------------------------------------------------------------
[url=]russian brides women[/url]
Attachment(s):
manjula.jpg
heshika.jpg | https://intellij-support.jetbrains.com/hc/en-us/community/posts/206074249-running-all-unit-tests-does-not-work-FactoryGirl-Shoulda-TestUnit-?page=1 | CC-MAIN-2020-10 | refinedweb | 554 | 61.93 |
Re: Access Control Best Practices for shared hosting seem at odds with Web Site Starters
From: M. M. Rafferty (mmr_at_vistagrande.com)
Date: 09/08/05
- Previous message: Tom Kaminski [MVP]: "Re: NTLM"
- In reply to: David Wang [Msft]: "Re: Access Control Best Practices for shared hosting seem at odds with Web Site Starters"
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ] [ attachment ]
Date: Wed, 7 Sep 2005 22:20:59 -0700
Hi David,
I understand the HTTP PUT. And that it is different. (An IIS thing rather
than NTFS.)
However, the full context of the bullet I quoted appears to be the HTTP POST
situation. Here is the longer version:
"Never Allow Anonymous User (IUSR) Write Permission
Do not allow anonymous user (IUSR) to have write permission. Allowing
write permission means that if the attacker can gain a method to upload the
content to the server, then they can write anything onto the server. Guest
books and forum software/Web pages are typical applications that require
anonymous write access. More secure alternatives are applications that store
data in an external database such as Microsoft SQL Server, rather than in
database files stored in the Web content directory. There are many free or
inexpensive guest books and forums that support using a database in this
way."
I always thought "Best Practices" were sort of like a 10 for a gymnast --
the ideal one strives to achieve -- not the practical reality one accepts
because that is all they can manage under the circumstances.
As the server administrator, I must assume that an application will be
insecure. Attempting to make a decision to the contrary on a case by case
situation is not a scalable solution. And it is basically impossible with
ASP.NET applications where only the DLLs are uploaded.
As far as I can tell, in order to run most web applications, we need Full
Trust allowed. (Database access and a few other things required it?) I
don't grasp what that means in terms of what the application pool users can
do. Just because I can't do more than the first "hello world" sample in
ASP.NET doesn't mean someone else can't create an application that uses
features of Windows I didn't even realize existed. So I am worried that if
we were to loosen up security for these sorts of applications, there could
be unpleasant results.
For instance, if a client application (unintentionally) allows visitors to
upload anything to any place on the website, what then? Other than burning
CPU and disk space with the traditional uses of compromised sites (warez,
file sharing, spamming) will the risks be confined to that Application pool
and the associated web structure? Is there enough isolation built into
ASP.NET so that the OS and other users' code and data will be protected? Or
can DLLs in this configuration do damage that was impossible with the older
ASP scripts?
I've read the information in the Resource Kit and several other documents,
but I just don't know enough to jump to a conclusion from that data.
Especially when the supporting evidence seems to be pointing in two
different directions. I don't want to open a Pandora's box unknowingly
while I'm just at the beginner level with ASP.NET skills.
mmr
"David Wang [Msft]" <someone@online.microsoft.com> wrote in message
news:OqA2gT7sFHA.1168@TK2MSFTNGP10.phx.gbl...
> I think you misunderstand what "anonymous write" means:
>
>
> How the two types of "write" differ is this.
>
> In the case of HTTP PUT, it means that anyone who can send a PUT request
to
> the server can write a file somewher. This is clearly dangerous if
> authentication is not required or unprivileged user is authorized to send
> PUT, and it is the "anonymous write" that is clearly warned.
>
> In the case of HTTP POST or any other request to a server-side application
> who subsequently decides to write a file, this is less dangerous because
it
> is up to the server-side application to decide how read/write works. IIS
has
> very little say in this instance, if you read and understand my blog post.
> Security depends on the application itself.
>
>
> Yes, there is risk in allowing IUSR Write ACLs in the URL namespace, but
> that is something for the hoster to decide for their web applications.
> Personally, security is not absolute, and best practices are not
perfect --
> from a security perspective, I would never make it into a series of
> check-boxes -- I would seek to understand the web app and then make my
> choices.
>
> Security is NOT absolute. It is relative and a balance between
functionality
> and risk. There is no way that a single check list of Best Practices is
> going to functionally apply to all situations; no way. For the same
reason,
> the list of recommend ACLs that you are asking for also does not exist and
> needs to be generated by yourself for your situation.
>
> --
> //David
> IIS
>
> This posting is provided "AS IS" with no warranties, and confers no
rights.
> //
> "M. M. Rafferty" <mmr@vistagrande.com> wrote in message
> news:OxSbv0nsFHA.912@TK2MSFTNGP11.phx.gbl...
> The MS Shared Hosting Deployment Guide lists among best practices:
> Ensure strong permissions are used on Web content
> Use separate anonymous (IUSR) accounts for each Web site
> Never allow anonymous user (IUSR) Write permission
>
> The document also describes Isolated Shared Web Hosting where each
customer
> has their own application pool with a unique identity. It states that the
> host should "Ensure that the Customer-specific identity has the minimal
> necessary permissions to system resources" but exactly what that means in
> terms of ACLs apparently has been left as an exercise for the reader.
>
> And therein lies the problem. Or at least part of it.
>
> We are looking at the web site starter kits -- DotNetNuke and the
Community
> Server. It appears that both require write access for the application
pool
> identity below the web root. This has also come up in a few other
> applications such as shopping carts we have encountered. Usually, the
> requirement is to allow content management features, generally image
> uploads, via the browser.
>
> How is this not anonymous write access? Why would Microsoft recommend
this
> to its hosting partners?
>
> Also, another quirk that appears to be the case, at least from what we
> encountered in some experiments with the Community Server applications, is
> that one seems to require the application pool identity have list
permission
> starting at the root of the drive all the way down to the web folder.
This
> seems like a bit of a privacy breach at best since it would rely only on
> security through obscurity in our folder naming.
>
> Can someone explain this apparent contradiction?
>
> Ideally, can someone lay out the recommended ACLs for this scenario for a
> web host to have in place so that customers' sites are secured and
> isolated... and still able to run real ASP.NET applications?
>
> Thanks,
>
> Mary M. Rafferty
>
>
>
- Previous message: Tom Kaminski [MVP]: "Re: NTLM"
- In reply to: David Wang [Msft]: "Re: Access Control Best Practices for shared hosting seem at odds with Web Site Starters"
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ] [ attachment ] | http://www.derkeiler.com/Newsgroups/microsoft.public.inetserver.iis.security/2005-09/0082.html | crawl-002 | refinedweb | 1,199 | 60.95 |
Sends messages from a connected socket.
Standard C Library (libc.a)
#include <sys/types.h> #include <sys/socketvar.h> #include <sys/socket.h>
int send (Socket, Message, Length, Flags) int Socket; const void *Message; size_t Length; int Flags;
The send subroutine sends a message only when the socket is connected. The sendto and sendmsg subroutines can be used with unconnected or connected sockets.
To broadcast on a socket, first issue a setsockopt subroutine using the SO_BROADCAST option to gain broadcast permissions.
Specify the length of the message with the Length parameter. If the message is too long to pass through the underlying protocol, the system returns an error and does not transmit the message.
No indication of failure to deliver is implied in a send subroutine. A return value of -1 indicates some locally detected errors.
If no space for messages is available at the sending socket to hold the message to be transmitted, the send subroutine blocks unless the socket is in a nonblocking I/O mode. Use the select subroutine to determine when it is possible to send more data.
Upon successful completion, the send subroutine returns the number of characters sent.
If the send subroutine is unsuccessful, the subroutine handler performs the following functions:
The subroutine is unsuccessful if any of the following errors occurs:
The send subroutine is part of Base Operating System (BOS) Runtime.
The socket applications can be compiled with COMPAT_43 defined. This will make the sockaddr structure BSD 4.3 compatible. For more details refer to socket.h.
The connect subroutine, getsockopt subroutine, recv subroutine, recvfrom subroutine, recvmsg subroutine, select subroutine, sendmsg subroutine, sendto subroutine, setsockopt subroutine. shutdown subroutine, socket subroutine.
Sockets Overview and Understanding Socket Data Transfer in AIX Version 4.3 Communications Programming Concepts. | http://ps-2.kev009.com/tl/techlib/manuals/adoclib/libs/commtrf2/send.htm | CC-MAIN-2022-33 | refinedweb | 293 | 50.53 |
contains some excluded files (deprecated modules we don’t care about cleaning up and some third-party code that Django vendors) as well as some excluded errors that we don’t consider as gross violations. (i.e.
poll.get_unique_voters(), not
poll.getUniqueVoters()).
Use
InitialCapsfor class names (or for factory functions that return classes).
In docstrings, follow the style of existing docstrings and PEP 257.
In tests, use
assertRaisesMessage()instead of
assertRaises()so you can check the exception message. Use
assertRaisesRegex()only if you need regular expression matching.¶
Use isort to automate import sorting using the guidelines below.
Quick start:
$ pip install isort $ isort -rc .
This runs
isortrecursively from your current directory, modifying any files that don’t conform to the guidelines. If you need to have imports out of order (to avoid a circular import, for example) use a comment like this:
import module # isort:skip
Put imports in these groups: future, standard library, third-party libraries, other Django components, local Django component, try/excepts. Sort lines in each group alphabetically by the full module name. Place all
import modulestatements before
from module import objectsin each section. Use absolute imports for other Django.
For example (comments are for explanatory purposes only):django/contrib/admin/example.py
# future from __future__ import unicode_literals # standard library import json from itertools import chain # third-party import bcrypt # Django from django.http import Http404 from django.http.response import ( Http404, HttpResponse, HttpResponseNotAllowed, StreamingHttpResponse, cookie, ) # local Django from .models import LogEntry # try/except try: import yaml except ImportError: yaml = None CONSTANT = 'foo' class Example: # ...
Use convenience imports whenever available. For example, do this:
from django.views import View
instead of:
from django.views.generic.base import View
Template style¶
In Django template code, put one (and only one) space between the curly brackets and the tag contents.
Do this:
{{ foo }}
Don’t do this:
{{foo}}
View style¶
In Django views, the first parameter in a view function should be called
request.
Do this:
def my_view(request, foo): # ...
Don’t do this:
def my_view(req, foo): # ...
Model style¶ Metashould
- Custom manager attributes
class Meta
def __str__()
def save()
def get_absolute_url()
- Any custom methods
If
choicesis defined for a given model field, define each choice as a tuple of tuples, with an all-uppercase name as a class attribute on the model. Example:
class MyModel(models.Model): DIRECTION_UP = 'U' DIRECTION_DOWN = 'D' DIRECTION_CHOICES = ( (DIRECTION_UP, 'Up'), (DIRECTION_DOWN, 'Down'), )
Use of
django.conf.settings¶
Modules should not in general use settings stored in
django.conf.settings
at the top level (i.e. evaluated when the module is imported). The explanation
for this is as follows:
Manual configuration of settings (i.e. not relying on the
DJANGO_SETTINGS_MODULE environment variable) is allowed and possible as
follows:
from django.conf import settings settings.configure({}, SOME_SETTING='foo')
However, if any setting is accessed before the
settings.configure line,
this will not work. (Internally,
settings is a
LazyObject which
configures itself automatically when the settings are accessed if it has not
already been configured).
So, if there is a module containing some code as follows:
from django.conf import settings from django.urls import get_callable default_foo_view = get_callable(settings.FOO_VIEW)
…then importing this module will cause the settings object to be configured. That means that the ability for third parties to import the module at the top level is incompatible with the ability to configure the settings object manually, or makes it very difficult in some circumstances.
Instead of the above code, a level of laziness or indirection must be used,
such as
django.utils.functional.LazyObject,
django.utils.functional.lazy() or
lambda.
Miscellaneous¶
- Mark all strings for internationalization; see the i18n documentation for details.
- Remove
importstatements that are no longer used when you change code. flake8 will identify these imports for you. If an unused import needs to remain for backwards-compatibility, mark the end of with
# NOQAto.
- Please don’t put your name in the code you contribute. Our policy is to keep contributors’ names in the
AUTHORSfile. | https://docs.djangoproject.com/en/2.0/internals/contributing/writing-code/coding-style/ | CC-MAIN-2020-16 | refinedweb | 663 | 50.73 |
I've been trying to find a way to count the number of times sets of Strings occur in a transaction database (implementing the Apriori algorithm in a distributed fashion). The code I have currently is as follows:
val cand_br = sc.broadcast(cand)
transactions.flatMap(trans => freq(trans, cand_br.value))
.reduceByKey(_ + _)
}
def freq(trans: Set[String], cand: Array[Set[String]]) : Array[(Set[String],Int)] = {
var res = ArrayBuffer[(Set[String],Int)]()
for (c <- cand) {
if (c.subsetOf(trans)) {
res += ((c,1))
}
}
return res.toArray
}
RDD[Set[String]]
RDD[(K, V)
cand
cand
transactions.count() ~= 88000
cand.length ~= 24000
Probably, the right question to ask in this case would be: "what is the time complexity of this algorithm". I think it is very much unrelated to Spark's flatMap operation.
Given 2 collections of Sets of size
m and
n, this algorithm is counting how many elements of one collection are a subset of elements of the other collection, so it looks like complexity
m x n. Looking one level deeper, we also see that 'subsetOf' is linear of the number of elements of the subset.
x subSet y ==
x forAll y, so actually the complexity is
m x n x s where
s is the cardinality of the subsets being checked.
In other words, this
flatMap operation has a lot of work to do.
Now, going back to Spark, we can also observe that this algo is embarrassingly parallel and we can take advantage of Spark's capabilities to our advantage.
To compare some approaches, I loaded the 'retail' dataset [1] and ran the algo on
val cand = transactions.filter(_.size<4).collect. Data size is a close neighbor of the question:
Some comparative runs on local mode:
transactionspartitions up to # of cores (8): 33 secs
I also tried an alternative implementation, using
cartesian instead of
flatmap:
transactions.cartesian(candRDD).map{case (tx,cd) => (cd,if (cd.subsetOf(tx)) 1 else 0)}.reduceByKey(_ + _).collect
But that resulted in much longer runs as seen in the top 2 lines of the Spark UI (cartesian and cartesian with a higher number of partitions): 2.5 min
Given I only have 8 logical cores available, going above that does not help.
Is there any added 'Spark flatMap time complexity'? Probably some, as it involves serializing closures and unpacking collections, but negligible in comparison with the function being executed.
Let's see if we can do a better job: I implemented the same algo using plain scala:
val resLocal = reduceByKey(transLocal.flatMap(trans => freq(trans, cand)))
Where the
reduceByKey operation is a naive implementation taken from [2]
Execution time: 3.67 seconds.
Sparks gives you parallelism out of the box. This impl is totally sequential and therefore takes longer to complete.
Last sanity check: A trivial flatmap operation:
transactions.flatMap(trans => Seq((trans, 1))).reduceByKey( _ + _).collect
Execution time: 0.88 secs
Spark is buying you parallelism and clustering and this algo can take advantage of it. Use more cores and partition the input data accordingly.
There's nothing wrong with
flatmap. The time complexity prize goes to the function inside it. | https://codedump.io/share/0OhE2wSSsJbi/1/apache-spark-flatmap-time-complexity | CC-MAIN-2017-04 | refinedweb | 523 | 57.67 |
I have shiny new Particle Electron that I am trying to connect to Ubidots (Electron firmware 0.4.8). Ideally, I’d like to start sending batches of variables, presumably using the “collections” part of the API.
Using the HTTPCLIENT (V 0.0.5) in the Particle Library, I am able to send two variables using two independent POST commands (POST api/v1.6/variables/{variable_id}/values, “Writes a new value to a variable”), but when I try to up my game to posting “collections” (POST api/v1.6/collections/values, “Send values to several variables in a single request.”), I get a response status of “-1”. (I vaguely recall having the exact same problem with the Particle Core many months ago.)
This is how I seem to be populating the elements of the HTTP request (middle of variable ID replaced by dots):
Hostname: things.ubidots.com Path: /api/v1.6/collections/values Body: [{"variable": "56c...d16","value":119.000000}, {"variable": "56c...a30","value":134.000000}]
From this request, I get a response status of -1 (sometimes it’s 0, usually the first request after reboot).
Below is a skeleton of my Particle code showing the key elements of the procedure. Does anything look wrong? My basic connectivity is fine, as demonstrated by the ability to use POST for one variable at a time. Perhaps a bracket or quote is out of place, or could there be an issue with the HTTP library?
Thanks.
#include "HttpClient/HttpClient.h" HttpClient http; #define VARIABLE_ID_1 "56c...d16" #define VARIABLE_ID_2 "56c...a30" #define TOKEN "rVU...dyG" http_header_t headers[] = { { "Content-Type", "application/json" }, { "X-Auth-Token" , TOKEN }, { NULL, NULL } }; http_request_t request_1; http_response_t response_1; #define CLOUD_UPDATETIMERINTERVALSEC 30 String resultstr; double g_f1 = 0.0; double g_f2 = 0.0; unsigned long runTime; unsigned long runTimeSec; // Initialize void setup() { request_1.hostname = "things.ubidots.com"; request_1.port = 80; request_1.path = "/api/v1.6/collections/values"; Serial1.begin(115200); // communication between Electron and my device Serial.begin(9600); // diagnostic feed runTime = millis(); } // Main loop void loop() { static unsigned long cloud_UpdateTimer = millis(); //cloud_ update timer //do sensor readings (populate the g_f1 and g_f2 variables) sensorUpdate(); // Periodically send data to cloud if (millis()-cloud_UpdateTimer > 1000*CLOUD_UPDATETIMERINTERVALSEC) { // Convert g_f1 and g_f2 into strings String f1str = String(g_f1); String f2str = String(g_f2); resultstr = "[{\"variable\": \""VARIABLE_ID_1"\",\"value\":" + f1str + "}, {\"variable\": \""VARIABLE_ID_2"\",\"value\":" + f2str + "}]"; request_1.body = resultstr; http.post(request_1, response_1, headers); //reset update timer cloud_UpdateTimer = millis(); } } | https://ubidots.com/community/t/solved-unable-to-use-collections-api-to-post-2-variables-with-particle-electron/277 | CC-MAIN-2021-43 | refinedweb | 399 | 50.84 |
NAME
pmap_enter - insert a virtual page into a physical map
SYNOPSIS
#include <sys/param.h> #include <vm/vm.h> #include <vm/pmap.h> void pmap_enter(pmap_t pmap, vm_offset_t va, vm_page_t p, vm_prot_t prot, boolean_t wired);
DESCRIPTION
The pmap_enter() function inserts the given physical page p, into the physical map pmap, at the virtual address va, with the protection prot. If wired is TRUE, then increment the wired count for the page as soon as the mapping is inserted into pmap.
IMPLEMENTATION NOTES
This routine MAY NOT lazy-evaluate the entry; it is required by specification to make the requested entry at the time it is called.
SEE ALSO
pmap(9)
AUTHORS
This manual page was written by Bruce M Simpson 〈bms@spc.org〉. | http://manpages.ubuntu.com/manpages/hardy/man9/pmap_enter.9.html | CC-MAIN-2013-20 | refinedweb | 123 | 55.24 |
Hello,After I finished ep.8 (taking a vacation) last lesson I came back to it and tried to the same but with raw input.I screenshotted the code here : code worked fine before I changed it to this. I can write the answers on the output console but then this bug shows... I don't know what to do and I still try to figure out all the raw_input thing (they didnt use it much after the first explanation).If you can help me it will be very nice (: thank you
and of course i would like to know how to fix the code too
raw_input() returns a string. days and spending_money are used as numbers, so those inputs need to be converted with int(); city is compared against strings, so it can stay as it is.

city = raw_input( ... )
days = int(raw_input( ... ))
spending_money = int(raw_input( ... ))
print trip_cost(city, days, spending_money)
thanks, but where am I supposed to write those? inside the trip_cost function? on line 25? where? btw, how can I write code in the forum like you did?
The code above is written with four leading spaces on each line. This is known as basic block format.
We can preserve the formatting of program code and give it syntax highlighting with three backticks before and after the code block sample.
See this page for more details:
As to your first question, the exercise does not actually ask us to get user input, just call the function at the end. See the instructions and be sure to complete that last step. The above example code can be pasted at the very end, and may be rejected by the SCT, so pass the lesson first, then try that code.
I passed the lesson without it, before I added the raw_input. I just wanted to try raw_input because they don't really explain how to use raw_input in functions anywhere, and it makes more sense to use raw_input there so the program can actually be useful (if all the entered values are right)
its working!!! thanks for the help!
def hotel_cost(nights):
    return 140 * nights

def plane_ride_cost(city):
    if city == "Charlotte":
        return 183
    elif city == "Tampa":
        return 220
    elif city == "Pittsburgh":
        return 222
    elif city == "Los Angeles" or "LA":
        return 475

def rental_car_cost(days):
    car_cost = days * 40
    if days >= 7:
        car_cost -= 50
    elif days >= 3:
        car_cost -= 20
    return car_cost

def trip_cost(city, days, spending_money):
    return hotel_cost(days) + plane_ride_cost(city) + rental_car_cost(days) + spending_money

days = int(raw_input('how much time is the vacation?'))
city = raw_input('where are u going?')
spending_money = int(raw_input('how much money are you planning to spend?'))

print trip_cost(city, days, spending_money)
"LA" doesn't get compared to city here. That expression is always truthy regardless of what value city has
"LA"
city
if "blah blah":
print "strings are truthy"
print 0 or "and or is incredibly simple, as in dumb"
strings are truthy
and or is incredibly simple, as in dumb
ionatan that wasn't the problem. it works fine with 'LA'
I'm not saying it's the problem. Your function exhibits correct behavior, but that comparison does not do what you think it does.
Your function will return 475 for any input that doesn't match one of your cities. Because that "condition" is always truthy. | https://discuss.codecademy.com/t/why-is-this-code-not-working-ep-8-related-but-my-changes-are-the-problem/31017/5 | CC-MAIN-2017-47 | refinedweb | 544 | 71.14 |
Nope. Look at the other posts in this thread, and even your own example that you posted first.
Code:
while (!Graph.eof())
Doesn't work for me:
Graph >> Matrika[j][i]; // Matrika = matrix in which the numbers from the file are saved
should skip commas automatically, so you should be able to remove that Graph>>buu; line.
Code:
#include <iostream>
#include <fstream>

using namespace std;

int main()
{
    ifstream inFile("C:\\TestData\\data.txt");
    /* data.txt: -1,55,25,45,-1,-1,-1,-1 */

    int num[10] = {0};

    inFile >> num[0];
    cout << num[0] << endl;   // -1

    inFile >> num[1];
    cout << num[1] << endl;   // 0 ???

    return 0;
}
On Thu, Nov 13, 2008 at 1:28 PM, Bruce Snyder <bruce.snyder@gmail.com> wrote:
> On Thu, Nov 13, 2008 at 11:02 AM, Garrett Rooney
> <rooneg@electricjellyfish.net> wrote:
>> On Thu, Nov 13, 2008 at 11:43 AM, Bruce Snyder <bruce.snyder@gmail.com> wrote:
>>> On Thu, Nov 13, 2008 at 9:25 AM, Garrett Rooney
>>> <rooneg@electricjellyfish.net> wrote:
>>>
>>>> Does the document actually have a <feed> element at it's root? That's
>>>> the kind of error you'd get if you parsed (for example) an Atom
>>>> <entry> instead of an Atom <feed>.
>>>
>>> Yep, it sure does. I'm just using the Google News Atom URL for my testing:
>>>
>>>
>>
>> That's the problem. That's an atom 0.3 feed, abdera only supports the
>> actual 1.0 standard. The namespaces are different, which is why it
>> doesn't think it's the right kind of document. It wouldn't be
>> impossible to add support for 0.3, but it doesn't do it yet.
>
> Damn :-(.
>
> Would it be difficult for Abdera to support other Atom versions by
> just poking the feed and then deciding which parser version to use?
Well, I know you could do it by adding the old elements as essentially
an extension, but that's a fair amount of work, as you'd be adding a
lot of classes and essentially duplicating a lot of work that's
already done for the 1.0 version. Not sure if there's an easier way,
maybe convincing the existing code to accept either namespace. It's
hard to say what the cost/benefit would be here, as atom 0.3 does seem
to be going away relatively quickly.
-garrett | http://mail-archives.apache.org/mod_mbox/abdera-user/200811.mbox/%3C4585c4a60811131034j4b1fb2fcmf57beb8d40c07d66@mail.gmail.com%3E | CC-MAIN-2017-04 | refinedweb | 283 | 74.39 |
1. What is a datagram socket
A socket is a communication mechanism by which client/server systems (that is, processes that need to communicate) can be developed, either locally on a single machine or across a network. In other words, it allows processes that are not on the same machine but are connected over a network to communicate. Because of this, sockets clearly distinguish between clients and servers.
Compared to stream sockets, datagram sockets are easier to use. They are specified by the type SOCK_DGRAM and do not require a connection to be established or maintained. They are usually implemented in AF_INET through the UDP/IP protocol. They limit the length of data that can be sent, and each datagram is transmitted as a single network message. UDP is not a reliable protocol, but it is faster because it does not need to establish and maintain a connection.
2. Workflow of client/server based on datagram sockets
1. Server
As with stream sockets, the server application first creates a socket with the system call socket(); this is a file-descriptor-like resource that the system assigns to the server process and that cannot be shared with other processes.

Next, the server process names (binds) the socket using the system call bind(). The server then waits for clients to send datagrams to that socket.
The difference is that the system then calls recvfrom() to receive the data sent from the client program. The server program processes the data accordingly, and then sends the processed data back to the client program through a system call sendto().
Compared to Stream Sockets:
- In a program on a stream socket, data is received through a system call read, and data is sent through a system call write(), whereas in a datagram socket program, it is achieved through recvfrom() and sendto() calls.
Server programs that use datagram sockets do not require listen() calls to create a queue to store connections, nor accept() calls to receive connections and create a new socket descriptor
2. Client
- Clients based on datagram sockets are simpler than servers. As on the server side, the client application first calls socket() to create an unnamed socket. The client then sends data to the server and receives data from the server program through sendto() and recvfrom().
Compared to Stream Sockets:
- Clients using datagram sockets do not need to use the connect() system call to connect to the server program; they only need to send information to the IP port that the server is listening for and receive data sent back from the server when needed.
3. Socket Interface
- Universal Socket Interface See another blog: Streaming Sockets
1. recvfrom() system call
- This function stores information sent to the program in a buffer and records the IP address and port of the sending program. The function blocks until a datagram arrives.
int recvfrom(int sockfd, void *buffer, size_t len,int flags, struct sockaddr *src_from, socklen_t *src_len);
- buffer stores the received data, len specifies the length of the buffer, and flags is usually set to 0 in applications. If src_from is not NULL, the address (IP and port) of the sending program is recorded there. src_len is a value-result argument: it must initially hold the size of src_from, and on return the actual address length is stored in the variable that src_len points to.
2. sendto() system call
- This function sends information from the buffer buffer buffer to the program on the specified IP port
int sendto(int sockfd, void *buffer, size_t len, int flags, struct sockaddr *to, socklen_t tolen);
- Buffer stores the data to be sent, len is the length of the buffer, flags in applications are usually set to 0, to is the IP port of the program to which the data is to be sent, and to len is the length of the to parameter. The number of bytes of data to be sent is returned on success and -1 on failure.
4. Examples
server.c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <signal.h>

int main(int arc, char **argv)
{
    int server_sockfd = -1;
    socklen_t server_len = 0;
    socklen_t client_len = 0;
    char buffer[512];
    ssize_t result = 0;
    struct sockaddr_in server_addr;
    struct sockaddr_in client_addr;

    // Create datagram socket
    server_sockfd = socket(AF_INET, SOCK_DGRAM, 0);

    // Set the port and IP to listen on
    server_addr.sin_family = AF_INET;
    server_addr.sin_addr.s_addr = htonl(INADDR_ANY);
    server_addr.sin_port = htons(9739);
    server_len = sizeof(server_addr);

    // Bind (name) the socket
    bind(server_sockfd, (struct sockaddr *)&server_addr, server_len);

    // Ignore information about child processes stopping or exiting, so
    // finished children are reaped by the kernel instead of becoming zombies
    signal(SIGCHLD, SIG_IGN);

    while (1) {
        // recvfrom() requires client_len to be initialized to the size of
        // client_addr; on return it holds the actual address length
        client_len = sizeof(client_addr);

        // Receive data; client_addr records the IP and port of the sender.
        // The call blocks until a datagram arrives from a client.
        result = recvfrom(server_sockfd, buffer, sizeof(buffer), 0,
                          (struct sockaddr *)&client_addr, &client_len);

        if (fork() == 0) {
            // Use a child process to handle the data
            buffer[0] += 'a' - 'A';

            // Send the processed data back
            sendto(server_sockfd, buffer, sizeof(buffer), 0,
                   (struct sockaddr *)&client_addr, client_len);
            printf("%c\n", buffer[0]);

            // Note that the child must exit or the program will not work properly
            exit(0);
        }
    }

    // Close
    close(server_sockfd);
}
client.c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(int argc, char **argv)
{
    struct sockaddr_in server_addr;
    socklen_t server_len = 0;
    int sockfd = -1;
    char c = 'A';

    // Take the first character of the first argument
    if (argc > 1) {
        c = argv[1][0];
    }

    // Create datagram socket
    sockfd = socket(AF_INET, SOCK_DGRAM, 0);

    // Set server IP and port
    server_addr.sin_family = AF_INET;
    server_addr.sin_addr.s_addr = inet_addr("127.0.0.1");
    server_addr.sin_port = htons(9739);
    server_len = sizeof(server_addr);

    // Send data to the server
    sendto(sockfd, &c, sizeof(char), 0, (struct sockaddr *)&server_addr, server_len);

    // The last two parameters are 0 because we do not care about the
    // sender's address when receiving the server's reply
    recvfrom(sockfd, &c, sizeof(char), 0, 0, 0);
    printf("char from server = %c\n", c);

    // Close
    close(sockfd);
    exit(0);
}
## Distributing pkgsrc builds across different OSes

You may want to use several machines to speed up your *pkgsrc* builds, but
as those computers are not running NetBSD, you may think they are useless as
build-helpers. This is where NetBSD's *cross-compiling* system enters, in
conjunction with [distcc]().

### A classic scenario

GNU/Linux is a very common OS you probably have on your network. In this
tutorial, we will focus on how to use a Debian GNU/Linux system as a
*distcc* node, but this procedure can apply to almost any pkgsrc-supported
platform.

First things first, your GNU/Linux machine must have the following packages
installed

* gcc
* g++
* zlib1g-dev
* ncurses-base

### Building the cross-compiling chain

Once done, download NetBSD's source tree. For example, if your target is a
NetBSD 5.1.2 system

    # pwd
    /home/netbsd
    # cvs -d anoncvs@anoncvs.fr.netbsd.org:/cvsroot co -rnetbsd-5-1-2-RELEASE src

We will then use the *build.sh* script in order to build the tools needed
for cross-compiling

    # cd src
    # ./build.sh -m amd64 tools

Do *not* use the *-u* flag, as we need to configure *nbcompat* for the tools
to build correctly.

Once finished, you should have a directory like

    tooldir.Linux-2.6.32-5-xen-amd64-x86_64

in the *src* directory. This is where the cross-compiling toolkit resides.

### Installing and configuring distcc

It is mandatory to configure *distcc* so it uses our cross-compiling tools to
build binaries for another platform. On a Debian system, this is done in
*/etc/default/distcc*

    STARTDISTCC="true"
    # [...]
    # Allowed networks
    ALLOWEDNETS="127.0.0.1 192.168.0.0/24"
    # IP where distcc will listen
    LISTENER="192.168.0.7"
    ###
    # Here's the real trick, supersede $PATH so the first binaries
    # distcc will look for are NetBSD's ones
    ###
    PATH=/home/netbsd/src/tooldir.Linux-2.6.32-5-xen-amd64-x86_64/x86_64--netbsd/bin:$PATH

After that, simply start *distcc*

    /etc/init.d/distcc start

### Testing the setup

On a NetBSD machine located in *distcc*'s allowed network, add the following to
*/etc/mk.conf*

    PKGSRC_COMPILER=ccache distcc gcc
    MAKE_JOBS=4
    DISTCC_HOSTS=192.168.0.7 localhost

And fire up the *make* command in a *pkgsrc* subdirectory containing a C-based
package. You should see compile jobs appearing in the helper's
*/var/log/distccd.log* file.

Of course, you may want to adjust the *MAKE_JOBS* according to the number of
nodes your build-cluster has.

### pbulk and distcc

If you intend to use distributed builds while running
[pbulk](), you **must** add the
following to the */etc/mk.conf* of your *sandbox*

    .for DISTCCDEPS in devel/ccache sysutils/checkperms pkgtools/digest devel/distcc devel/popt devel/libtool-base lang/f2c devel/gmake
    .  if ${PKGPATH} == ${DISTCCDEPS}
    IGNORE_DISTCC= yes
    IGNORE_CCACHE= yes
    .  endif
    .endfor

Or the scanning phase will end up with circular dependencies

    Cyclic dependency for package:
        ccache-3.1.4nb1
        checkperms-1.11
        digest-20111104
        distcc-3.1nb1
        popt-1.16nb1
        libtool-base-2.2.6bnb5
        f2c-20100903
        ccache-3.1.4nb1

Enjoy your faster builds!
== How to use SQLObject with one connection per thread ==
'''Note: This recipe does not work in CherryPy 3'''
This example follows the CherryPyDatabase recipe.
Many database libraries are not thread safe and can have problems when connections are shared between threads. SQLite has this problem and will be used in the example below. SQLObject solves this problem with the ConnectionHub connection. The following needs to be done:
* Create the SQLObject connection with ConnectionHub.
* Tell CherryPy to set up the connection when each thread starts up.
Here is an example:
{{{
import cherrypy
from sqlobject import *
from sqlobject.sqlite.sqliteconnection import SQLiteConnection

# Create the ConnectionHub object
conn = dbconnection.ConnectionHub()

class Test(SQLObject):
    """ Basic table object that has one field (name) """
    _connection = conn
    name = StringCol()

class Root:
    def index(self):
        """ Page will print each name in the database """
        names = Test.select()
        for name in names:
            yield name.name
            yield '<br />'
    index.exposed = True

    def reset(self):
        """ Run this page first to create the table, and add data """
        Test.dropTable(True)
        Test.createTable()
        Test(name='Bob')
        Test(name='Vlad')
        return "Reset Complete"
    reset.exposed = True

def connect(threadIndex):
    """ Function to create a connection at the start of the thread """
    conn.threadConnection = SQLiteConnection('test.db')

# Tell cherrypy to run the connect() function when creating threads
cherrypy.engine.on_start_thread_list = [connect]

# Start the server
cherrypy.quickstart(Root())
}}}

When you run the example, make sure to access the reset page first to set up the database and create the table.
In this article: Google like Full Text Search, I had the joy of experimenting with the implementation of a Google like FTS Search engine and was wondering if I could get something more general to work. In that quest, I downloaded and tested the Irony project from.
And I began playing around with it… a lot. There are still some documentation pieces missing but the general idea is really simple and I suggest you test it yourself and try doing some grammars (it’s always useful to remember those Compiler days in the university).
To generalize the idea, I talked with Ivan and converged that the new searcher should be able to search in any kind of database configuration with any type of information. This should allow the user to configure a little library (the one that I will describe here) and provide powerful functionality as the one shown in these examples:
What I did was to simplify the problem to 4 types of information that cover all of the possibilities for a SQL Server column:
For each of these columns, I defined a grammar with similar operations between them. Some examples are given here:
You are welcome to browse each of the grammars defined for this library. They contain the operations needed for all combinations of the above (as long as the column type is coherent with the operation).
Now that we have the required grammars we need to know how to interpret them. For example, the numeric grammar should interpret to a SQL query that conforms to the requested expression.
The library contains the interpretation for all grammars to SQL like expressions. Please note that each of the interpreters contain an association to the column that is going to be filtered with this expression. In the following lines, we will explore the structure that defines the search to be performed.
All these interpreters were inspired by the excellent FTS interpreter given in the link above and so I leave for you the analysis of these classes.
Up until now, all we have is a way to interpret an expression (in any of the 4 defined types) and form a SQL expression with it. In other words, all we have is the WHERE clause for each of the columns that participate in the search.
WHERE
Now, we will define a grammar to define the columns and the search expressions for each of the columns. Let’s say we have 2 searchable columns: bio and name. We would like to have a grammar that understands the following:
bio=lived in germany & name=Jurgen
We could parse this expression, and interpret each of the columns with the corresponding interpreter. Finally, we would obtain a nicely formed SQL expression that would do the search for us. The grammar defined to deal with this is:
Expression =>
ColumnSearch |
Expression QuerySearchOp ColumnSearch
ColumnSearch =>
Name EqualOp Term
QuerySearchOp =>
AndOp |
OrOp
EqualOp =>
"="
AndOp =>
"&"
OrOp =>
"|"
The class defining this grammar gives the correct priorities to our operators, and we quickly have a simple grammar to deal with the desired expression. However, the grammar needs to define only one search criterion per line, so the search component adds a new line before each search term.
The search structure that works with this library contains the following:
A search structure contains a collection of tables. Each table contains a join (except for the first table) and a collection of columns. A column contains:
join
We are ready to pass a human like query to our interpreter and this should validate the expression and interpret each part of it to produce a valid SQL expression conforming to the given query. Specifically, the intervening classes are:
All we have to do is call the GetQuery() method of a SqlServerInterpreter object to obtain the requested SQL query. The constructor of the SqlServerInterpreter will ask for a structure that defines the columns and tables to search.
GetQuery()
SqlServerInterpreter
This solution is needed many times in a system, for different tables and different columns. This is why we should have many configurations to work with our library. Also, we need the configurations to be read from a database or defined directly on the client. The library provides the ability to collect, in a singleton hash table, the different configurations of the search structures.
Also, the resulting object is very important. The SearchResult class defines an object that contains the resulting query for the query expression and the resulting DataSet after performing the search. The class diagram for the Searcher and the SearchResult is the following:
SearchResult
DataSet
Searcher
The use of this structure is very simple. Here we have an example of a search to be performed in two tables: tbl_in_articulos and tbl_fa_precios. The join between these two tables is when tbl_in_articulos.id = tbl_fa_precios.id_articulo. The resulting DataSet should contain:
tbl_in_articulos
tbl_fa_precios
tbl_in_articulos.id = tbl_fa_precios.id_articulo
id
codigo_articulo
descripcion
precio
The constructor expects a SearchStructure and so we subclass this class, define the columns and tables in the constructor of this subclass, and add them to the structure with the given methods.
SearchStructure
public class ArticlesSearch : SearchStructure
{
    public ArticlesSearch()
    {
        TableStructure basetbl = new TableStructure("tbl_in_articulos");

        Column col = new Column("id", "id", "id",
                                Column.ColumnType.Numeric, basetbl.Name);
        basetbl.addColumn(col);
        col = new Column("codigo_articulo", "Codigo", "cod",
                         Column.ColumnType.String, basetbl.Name);
        basetbl.addColumn(col);
        col = new Column("descripcion", "Descripcion", "desc",
                         Column.ColumnType.String, basetbl.Name);
        basetbl.addColumn(col);

        this.addTable(basetbl, null);

        TableStructure tbl = new TableStructure("tbl_fa_precios");

        col = new Column("id_articulo", "id_articulo", "none",
                         Column.ColumnType.Numeric, tbl.Name);
        col.Searchable = false;
        col.AppearInResults = false;
        tbl.addColumn(col);
        col = new Column("precio", "Precio", "pre",
                         Column.ColumnType.Numeric, tbl.Name);
        tbl.addColumn(col);

        JoinStatement join = new JoinStatement(basetbl.Name + ".id",
                                               tbl.Name + ".id_articulo",
                                               JoinStatement.JoinType.Inner);
        this.addTable(tbl, join);
    }
}
Now, all we need to do is instantiate an object of this structure and use it with our searcher:
string query = txt.Text;
Searcher srch = new Searcher(new ArticlesSearch());
srch.ConnectionString =
    ConfigurationManager.ConnectionStrings["SearchConnectionString"].ConnectionString;
SearchResult res = srch.search(query);
try
{
    lbl.Text = res.Message.Replace("\n", "<br/>") + "<br/>" + res.GeneratedQuery;

    DataView dtView = new DataView(res.Data.Tables[0]);
    gdv.DataSource = dtView;
    gdv.DataBind();
} catch { ; }
The Searcher object is instantiated passing an ArticlesSearch object as a parameter. This defines the grammar to be used to parse the expressions. Next, we can define the connection string to be used by the searcher (the connection string used by default is ‘SearchConnectionString’). Finally, we search passing the text from the text box in our client.
ArticlesSearch
SearchConnectionString
The result is read as a DataSet in the data field of the SearchResult object returned. This can be passed to a gridview and we can have a very powerful searcher in seconds.
gridview
Another way to obtain the same result is to define the structure directly in the database. The library expects the following structure:
This structure can easily be loaded with the scripts distributed with the library below. The only thing here is that you should insert the records directly with SQL instructions. A complete example of a configuration equivalent to the one seen programmatically will be the following:
insert into SearchConfig
values (newid(), 'test')
/* inserted record with guid = DD9DD3DF-5340-4461-86CA-B3A7A589F404 */

insert into SearchTable
values
(newid(), 'tbl_in_articulos', 0, 'DD9DD3DF-5340-4461-86CA-B3A7A589F404')

insert into SearchTable
values
(newid(), 'tbl_fa_precios', 1, 'DD9DD3DF-5340-4461-86CA-B3A7A589F404')

select * from SearchTable
/* inserted records with guid =
   FBDDB7EB-66D1-425E-A22F-5DEBC16662DC
   F84F27CD-D40A-473F-9A5E-B67346E641AA
   respectively */

insert into SearchJoin
values
(newid(), 'F84F27CD-D40A-473F-9A5E-B67346E641AA',
 'tbl_in_articulos.id', 'tbl_fa_precios.id_articulo', 'INNER')

/** The columns */
insert into SearchColumn
values
(newid(), 'FBDDB7EB-66D1-425E-A22F-5DEBC16662DC', 'id',
 'Id', 'NUM', 'id', 1, 1, 0)

insert into SearchColumn
values
(newid(), 'FBDDB7EB-66D1-425E-A22F-5DEBC16662DC', 'codigo_articulo',
 'Codigo', 'STR', 'cod', 1, 1, 0)

insert into SearchColumn
values
(newid(), 'FBDDB7EB-66D1-425E-A22F-5DEBC16662DC', 'descripcion',
 'Descripcion', 'STR', 'desc', 1, 1, 0)

/** This is needed for the join */
insert into SearchColumn
values
(newid(), 'F84F27CD-D40A-473F-9A5E-B67346E641AA', 'id_articulo',
 'ID_Articulo', 'NUM', 'none', 0, 0, 0)

insert into SearchColumn
values
(newid(), 'F84F27CD-D40A-473F-9A5E-B67346E641AA', 'precio',
 'Precio', 'NUM', 'pre', 1, 1, 0)
In this case, to use the Searcher, all we need is the following lines:
Searcher
1 Searcher srch = new Searcher("test");
2 srch.ConnectionString =
3 ConfigurationManager.ConnectionStrings["SearchConnectionString"].ConnectionString;
4 SearchResult res = srch.search(query);
I have tried to provide the most functionality to the searcher. I currently use it with no problems for most intranet applications.
It certainly needs some extra work before it could be published to the world as an internet-facing searcher. The most critical issue for an internet deployment would be the ability to suggest the possible terms to expect. This should easily be done by Irony, and I expect to include this feature in the library in the short term.
Another pending task in my to-do list is the ability to apply the search term to all columns. For example, when we just type: ‘lived in Germany’; we would like it to apply to all columns that are searchable. This is why the Column structure has a StandardSearch flag.
StandardSearch
Although the less-than and greater-than operators are straightforward, there are some issues when using them in an ASP.NET page because of request validation; once that is disabled, everything works fine.
A generic searcher is possible combining different Human-like-query-to-SQL interpreters for different grammars. In this article, we have seen how this can be done for 4 different types of data that cover most of the possibilities in a SQL Server.
The library provides a very simple interface to incorporate powerful search capabilities to any table(s) in your database. The view capability (order by, filtering, etc.) is left to be worked in the client. The only task the searcher does is exactly that: search and return a data. | http://www.codeproject.com/Articles/104299/Very-Powerful-Generic-Irony-Based-Database-Searche?PageFlow=FixedWidth | CC-MAIN-2014-41 | refinedweb | 1,737 | 53.92 |
#include <sys/stream.h> #include <sys/ddi.h> void enableok(queue_t *q);
Architecture independent level 1 (DDI/DKI).
A pointer to the queue to be rescheduled.
The enableok() function enables queue q to be rescheduled for service. It reverses the effect of a previous call to noenable(9F) on q by turning off the QNOENB flag in the queue.
The enableok() function can be called from user, interrupt, or kernel context.
The qrestart() routine uses two STREAMS functions to restart a queue that has been disabled. The enableok() function turns off the QNOENB flag, allowing the qenable(9F) to schedule the queue for immediate processing.
void
qrestart(rdwr_q)
    register queue_t *rdwr_q;
{
    enableok(rdwr_q);
    /* re-enable a queue that has been disabled */
    (void) qenable(rdwr_q);
}
noenable(9F), qenable(9F)
Writing Device Drivers for Oracle Solaris 11.2
STREAMS Programming Guide | http://docs.oracle.com/cd/E36784_01/html/E36886/enableok-9f.html | CC-MAIN-2015-40 | refinedweb | 147 | 66.54 |
Walkthrough: Using the Visual Studio IDE
The Visual Studio Integrated Development Environment (IDE) offers a set of tools that help you write and modify the code for your programs, and also detect and correct errors in your programs.
In this topic, you create a new standard C++ program and test its functionality by using features available in Visual Studio for the C++ developer.
This walkthrough covers the following:
Working with Projects and Solutions
Using Solution Explorer
Adding a Source File
Fixing Compilation Errors
Testing a Program
Debugging a Program
Visual Studio organizes your work in projects and solutions. A solution can contain more than one project, such as a DLL and an executable that references that DLL. For more information, see Introduction to Solutions, Projects, and Items.
The first step in writing a Visual C++ program with Visual Studio is to choose the type of project. For each project type, Visual Studio sets compiler settings and generates starter code for you.
To create a new project
On the File menu, point to New, and then click Project….
In the Project Types area, click Win32,and then, in the Visual Studio installed templates pane, click Win32 Console Application.
Type game as the project name.
When you create a new project, Visual Studio puts the project in a solution. Accept the default name for the solution, which by default is the same name as the project.
You can accept the default location, type a different location, or browse to a directory where you want to save the project.
Press OK to start the Win32 Application Wizard.
On the Overview page of the Win32 Application Wizard dialog box, press Next.
On the Application Settings page under Application type, select Console Application. Select the Empty Project setting under Additional options and click Finish.
You now have a project without source code files.
Solution Explorer makes it easy for you to work with files and other resources in your solution.
In this step, you add a class to the project and Visual Studio adds the .h and .cpp files to your project. You then add a new source code file to the project for the main program that tests the class.
To add a class to a project
If the Solution Explorer window is not visible, on the View menu click Solution Explorer.
Right-click the Header Files folder in Solution Explorer and point to Add. Then click Class.
In the Visual C++ category, click C++ and click C++ Class in the Visual Studio installed templates area. Click Add.
In the Generic C++ Class Wizard, type Cardgame as the Class name and accept the default file names and settings. Then click Finish.
Make these changes to the Cardgame.h file displayed in the editing area:
Add two private data members after the opening brace of the class definition:
Add a public constructor prototype that takes one parameter of type int:
Delete the default constructor generated for you. A default constructor is a constructor that takes no arguments. The default constructor looks similar to the following:
The Cardgame.h file should resemble this after your changes:
The line #pragma once indicates that the file will be included only one time by the compiler. For more information, see once.
For information about other C++ keywords included in this header file, see class (C++), int, Static (C++), and public (C++).
Double-click Cardgame.cpp in the Source Files folder to open it for editing.
Add the code for the constructor that takes one int argument:
When you begin typing pl or to, you can press Ctrl-Spacebar and auto-completion will finish typing players or totalparticipants for you.
Delete the default constructor that was generated for you:
The Cardgame.cpp file should resemble this after your changes:
For an explanation of #include, see The #include Directive.
In this step, you add a source code file for the main program that tests the class.
To add a new source file
From the Project menu, click Add New Item.
Alternatively, to use Solution Explorer to add a new file to the project, right-click the Source Files folder in Solution Explorer and point to Add. Then click New Item.
In the Visual C++ area, select Code. Then click C++ File (.cpp).
Type testgames as the Name and click Add.
In the testgames.cpp editing window, type the following code:
#include "Cardgame.h"

int Cardgame::totalparticipants = 0;

int main()
{
    Cardgame *bridge = 0;
    Cardgame *blackjack = 0;
    Cardgame *solitaire = 0;
    Cardgame *poker = 0;

    bridge = new Cardgame(4);
    blackjack = new Cardgame(8);
    solitaire = new Cardgame(1);
    delete blackjack;
    delete bridge;

    poker = new Cardgame(5);
    delete solitaire;
    delete poker;

    return 0;
}
For information about C++ keywords included in this source file, see new Operator (C++), delete Operator (C++), The if-else Statement, and The try, catch, and throw Statements.
On the Build menu, click Build Solution.
You should see output from the build in the Output window indicating that the project compiled without errors.
In this step, you deliberately introduce a Visual C++ syntax error in your code to see what a compilation error looks like and how to fix it. When you compile the project, an error message indicates what the problem is and where it occurred.
To fix compilation errors using the IDE
In testgames.cpp, delete the semicolon in the last line so that it resembles this:
On the Build menu, click Build Solution.
A message in the Output window indicates that building the project failed.
Click on the Go To Next Message button (the green, right-pointing arrow) in the Output window. The error message in the Output window and status bar area indicates there is a missing semicolon before the closing brace.
You can press the F1 key to view more help information about an error.
Add the semicolon back to the end of the line with the syntax error:
On the Build menu, click Build Solution.
A message in the Output window indicates that the project compiled correctly.
Running a program in Debug mode enables you to use breakpoints to pause the program to examine the state of variables and objects.
In this step, you watch the value of a variable as the program runs and deduce why the value is not what you might expect.
To run a program in Debug mode
Click on the testgames.cpp tab in the editing area if that file is not visible.
Set the current line in the editor by clicking the following line:
To set a breakpoint on that line, on the Debug menu, click Toggle Breakpoint, or press F9. Alternatively, you can click in the area to the left of a line of code to set or clear a breakpoint.
A red circle appears to the left of a line with a breakpoint set.
On the Debug menu, click Start Debugging (or press F5).
When the program reaches the line with the breakpoint, execution stops temporarily (because your program is in Break mode). A yellow arrow to the left of a line of code indicates that it is the next line to be executed.
To examine the value of the totalparticipants variable, hover over it with the mouse. The variable name and its value of 12 are displayed in a tooltip window.
Right-click the totalparticipants variable and click Add Watch to display that variable in the Watch window. You can also select the variable and drag it to the Watch window.
On the Debug menu, click Step Over or press F10 to step to the next line of code.
The value of totalparticipants is now displayed as 13.
Right-click the last line of the main method (return 0;) and click Run to Cursor. The yellow arrow to the left of the code points to the next statement to be executed.
The totalparticipants number should decrease when a Cardgame terminates. At this point, totalparticipants should equal 0 because all Cardgame pointers have been deleted, but the Watch 1 window indicates totalparticipants equals 18.
There is a bug in the code that you will detect and fix in the next section.
On the Debug menu, click Stop Debugging or press Shift-F5 to stop the program.
In this step, you modify the program to fix the problem that was discovered above.
To fix a program that has a bug
To see what occurs when a Cardgame object is destroyed, view the destructor for the Cardgame class.
On the View menu, click Class View or click the Class View tab in the Solution Explorer window.
Expand the game project tree and click the Cardgame class.
The area underneath shows the class members and methods.
Right-click the ~Cardgame(void) destructor and click Go To Definition.
To decrease the totalparticipants when a card game terminates, type the following code between the opening and closing braces of the Cardgame::~Cardgame destructor:
The Cardgame.cpp file should resemble this after your changes:
On the Build menu, click Build Solution.
On the Debug menu, click Run or press F5 to run the program in Debug mode. The program pauses at the first breakpoint.
On the Debug menu, click Step Over or press F10 to step through the program.
Note that after each Cardgame constructor executes, the value of totalparticipants increases and after each pointer is deleted (and the destructor is called), totalparticipants decreases.
If you step to the last line of the program, just before the return statement is executed, totalparticipants equals 0.
Continue stepping through the program until it exits or on the Debug menu, click Run or press F5 to allow the program to continue to run until it exits. | http://msdn.microsoft.com/en-US/library/ms235632(v=vs.80).aspx | CC-MAIN-2014-41 | refinedweb | 1,606 | 71.85 |
Accessing XmlRpcPlugin from C#
It is possible to access Trac via the XmlRpcPlugin plug-in from Microsoft's .NET framework. This should work for any .NET language (including Visual Basic). The steps below are written in C#.
Step 1. Install the Trac XmlRpcPlugin
see XmlRpcPlugin
Step 2. Download and compile XML-RPC.NET
Step 3. Create a .NET project
To your new project, add the library you just compiled (above) as a reference.
Step 4. Create the interface
Some day it would be nice to have the complete WikiRPC described; for now, you can just piece together the methods you need as you need them. Create an interface that extends IXmlRpcProxy. In this example, the interface is named "Trac". Then, declare methods that are tagged as XmlRpcMethod as below.
I have included two examples. The first returns a list of all the pages on your wiki. The second posts a new page (or a new revision to an existing page) to the wiki.
using CookComputing.XmlRpc;

namespace TracPusher
{
    [XmlRpcUrl("")]
    public interface Trac : IXmlRpcProxy
    {
        [XmlRpcMethod("wiki.getAllPages")]
        string[] getAllPages();

        [XmlRpcMethod("wiki.putPage")]
        bool putPage(string pagename, string content, PageAttributes attr);
    }

    // define the structure needed by the putPage method
    struct PageAttributes
    {
        public string comment;
    }
}
Step 5. Application Code
Here's an example of how to use this. For simplicity, we'll just add a Main to the above
static void Main(string [] args)
{
    Trac proxy;

    // Fill these in appropriately
    string user = "yourTracUserName";
    string password = "yourTracPassword";

    // Create an instance of the Trac interface
    proxy = XmlRpcProxyGen.Create<Trac>();

    // If desired, point this to your URL. If you do not do this,
    // it will use the one specified in the service declaration.
    // proxy.Url = "";

    // Attach your credentials
    proxy.Credentials = new System.Net.NetworkCredential(user, password);

    PageAttributes attr;
    attr.comment = "This is the comment that goes with the new page";

    bool rc = proxy.putPage(
        "SandBox",  // new page name
        "''hi chris'', this page was automatically added via XmlRpc from .NET",  // new page contents
        attr        // new page attributes
    );

    Console.WriteLine("Result: " + rc);
}
Step 6. Handling Certificate Problems (if any)
If your trac uses https, make sure it has a valid, verifiable certificate. If you use a self-signed or expired certificate, all is not lost, but you have to take a few additional steps.
First, you need to create a server certificate validation callback that always says the certificate is OK, even if it's really not. Example:
private bool AcceptCertificateNoMatterWhat(object sender,
    System.Security.Cryptography.X509Certificates.X509Certificate cert,
    System.Security.Cryptography.X509Certificates.X509Chain chain,
    System.Net.Security.SslPolicyErrors errors)
{
    return true;
}
Second, before you create the proxy, do this (for example, at the beginning of Main)
ServicePointManager.ServerCertificateValidationCallback = AcceptCertificateNoMatterWhat;
Attachments (1)
- Signatures.cs (5.0 KB) - added by rjollos 5 years ago.
attach a pathname prefix
#include <sys/prfx.h>

int qnx_prefix_attach( char *prefix,
                       char *replace,
                       unsigned unit );
The qnx_prefix_attach() function attaches a pathname prefix passed as prefix. You may optionally specify a replacement string passed as replace. If replace is NULL, no replacement is done. You may associate a unit with the prefix to simplify your code if you have multiple prefixes.
This function is used in one of two ways:
As an example of a resource manager, a filesystem manager might call qnx_prefix_attach() as follows, if it had two logical drives named / (a hard disk) and /fd (a floppy disk):
qnx_prefix_attach( "/", NULL, 0 ); qnx_prefix_attach( "/fd", NULL, 1 );
The device manager calls qnx_prefix_attach() with:
qnx_prefix_attach( "/dev", NULL, 0 );
When the open() function is called with a pathname that matches one of these prefixes, the prefix is stripped, the unit number returned in a message, and the open message is sent to the resource manager that created the prefix.
The prefix utility can be used to create an alias. If the prefix utility is invoked with:
prefix -A /lpt=//20/dev/par1
it calls qnx_prefix_attach() with:
qnx_prefix_attach( "/lpt", "//20/dev/par1", 0 );
When the open() function is called with a pathname that matches one of these alias prefixes, the prefix is replaced with replace, and the newly formed pathname is retried. This replacement and retry is limited to 32 times to prevent alias loops.
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <sys/prfx.h>

char buf[1000];
void print( void );

void main()
{
    print();
    qnx_prefix_attach( "/dev/console", "//0/dev/tty1", 0 );
    print();
    qnx_prefix_detach( "/dev/console" );
    print();
}

void print( void )
{
    if( qnx_prefix_query( 0, "", buf, 1000 ) != -1 )
        printf( "%s\n", buf );
    else
        printf( "Unable to print prefix tree.\n" );
}
QNX
If your application calls this function, it must run as root.
See also: errno, qnx_prefix_detach(), qnx_prefix_query()
I have been frustrated, like many others apparently, by the lack of XOR drawing capabilities in the .NET Framework. This is of particular interest when it�s needed to outline a selected Region of Interest in a graphic. I�ve seen several approaches to solve this problem that have, for the most part, stayed within the .NET Framework context, but they were fairly complex. Thus, I was inspired when I saw the article "Using GDI and GDI+ mixed drawing rubber band lines" by sedatkurt in the MFC/C++ >> GDI >> Unedited Reader Contributions in The CodeProject. I was sure it could be extended to rectangles as well.
Anyone who has used a graphics program has probably also used a selection rectangle to act on only a portion of the displayed image. In historical Windows® applications, this selection rectangle is normally drawn by the mouse from a starting corner with the left mouse button depressed until the button is released. The mouse position at each move event is the second corner of the drawn rectangle. Along the way, the rectangle is therefore drawn and erased many times until the final version is drawn at mouse button up. If the drawing mode is set to an XOR raster operation (ROP), the rectangle is usually completely visible since all bits of the pixels are XORed with the color of the drawing pen. More importantly, redrawing in XOR mode along the same rectangle with the same pen automatically restores the original pixel colors, effectively erasing the rectangle. Unfortunately, the .NET Framework Team did not include ROP drawing control in GDI+.
While I adopted the basic concept that sedatkurt showed in his code, I was primarily interested in developing a strictly rubber band rectangle drawing capability. Consequently, all and only the rubber band rectangle related code is concentrated into the RubberbandRectangle class in the RubberbandRects namespace in the file RubberbandRects.cs. This means hardcoding the pen style to draw the rectangle border as a dotted line (PS_DOT), an XOR drawing mode for the pen (R2_XORPEN), a pen width of 1 pixel (doesn't have to be pretty, just visible), and a brush style to NOT fill in the rectangle (NULL_BRUSH) so the underlying graphics are still visible. The resultant public API of RubberbandRectangle is quite compact:
public class RubberbandRectangle
{
    public RubberbandRectangle();
    public int PenStyle { get/set }
    public void DrawXORRectangle( Graphics grp, int X1, int Y1,
                                  int X2, int Y2 );
}
The default constructor sets the pen style to the default value of PS_DOT. I was not originally going to include a capability to reset this value, but I relented and made the pen style a property with get/set capability. The pen color is fixed internally in the class as a predefined BLACK_PEN in the CreatePen call, although I kept sedatkurt's RGB conversion macro just in case.
The process of drawing the XORed rectangle begins with extracting the Win32 GDI device context from the GDI+ Graphics object passed to the function. A black dotted pen is created one pixel wide. The ROP drawing mode is then set to XOR and the new pen selected into the device context; the old pen's handle is saved for replacement later (always clean up resources when you're finished with them). A stock NULL_BRUSH is created and simultaneously selected into the device context, again saving the old brush handle for later. The drawing is now performed, the old brush and pen put back into the device context, and the new pen deleted. Note that stock resources do not need to be deleted since they're only borrowed anyway. The device context is released and the function is finished.
The demo application is in the file MainForm.cs. It is a simple WindowsForm with a number of rectangles painted on its client area in the MainForm_Paint event handler. However, I've also added a call to DrawXORRectangle() if a rubber band rectangle was present (the flag haveRect is set) so that the dotted rectangle is also then redrawn.
The rectangle drawing functionality is nearly the same as in sedatkurt's code. The rubber banding operation is initiated in MainForm_MouseDown where the mouseDown flag is set to indicate that the mouse button is down. The initial point of the mouse down event is stored in XDown and YDown. All of this presumes that it was the Left mouse button that is being pressed. I use a Right mouse button press to initiate an operation to clear the rubber band rectangle from the screen (clear the haveRect flag and call Invalidate()).
The stage is now set for the actual drawing, which takes place in the MainForm_MouseMove event handler. Drawing will actually only occur if the mouse button is down. This prevents an attempt at drawing when the mouse is simply run across the app. If the mouse is down and moving, a rectangle has already been drawn and must be erased with a call to DrawXORRectangle(). The rectangle is then redrawn through a call to DrawXORRectangle() with one corner at the new mouse coordinates. The new coordinates are saved and the moving flag set.
The final part of the rubber band rectangle drawing occurs in the MainForm_MouseUp event handler. The mouseDown and mouseMove flags are cleared and the haveRect flag is set. We now have a dotted rectangle in the client area of the app window.
This code was developed in SharpDevelop, an IDE available for free at. It is an evolving, beta level project written in C# and comes with full source code. While it still has warts, it is improving by the month and the price is right. For anything but large team, production code development, it works fine. I used version 0.96 which compiles with .NET Framework v1.1, so you will need the latest .NET version to run the demo app. There is no apparent reason it won't compile with Visual Studio 2003 or any other Visual Studio / .NET Framework pair as well.
The code in the RubberbandRectangle class is a stripped down version I had evolved, keeping a lot of the enums and functions implied by sedatkurt's code and adding the enums and functions appropriate for the rectangle (or rounded rectangle or ellipse or polygon) drawing cases. In the end, I stripped it all out since I had no interest in demonstrating it and the extension of what's left is fairly straightforward. A possible extension in the same vein as rubber banding is the ability to move bitmaps around the client area with the mouse. A brush made from the bitmap should be as effective in XOR drawing mode as a dotted pen. At any rate, enjoy.
Are C++ standard routines like std::cout written using C library functions like fgets()?
Are C++ standard routines like std::cout written using C library functions like fgets()?
cout is an object representing an output stream, so a function like fgets has no real place there. What you may be thinking of is cin.getline, which has a couple of signatures. This one:
istream& getline( char* buffer, streamsize num, char delim );
acts the most like fgets, so it's probably implemented that way.
>Are C++ stnadard routines like std::cout wrriten using C liberary functions like fgets()?
Possibly, but not likely. It's easier (not really, but it's *better*) to work through a lower level API than to use a middle man.
[>>>>>>>>>>>>>>>>>>>>>>>>>>>EDITED<<<<<<<<<<<<<<<<< <<<<<<<<]
I read in msdn that fgets() and similar functions are standard in C Run-Time liberary. The CRT functions should be platform independent (Am I right?), so the lowest level of platform independent functions are fgetc(), _putw(), _getch(), _setmode(), etc. Am I right?
If C++ std liberaries like iostream has their own low level rutines and does not use C rutines, then why msdn Run-Time liberary reference is full of C rutines?
Another point: When we use Win32 API for creating a file for example. Some very low level rutines that are written by microsoft and are in kernel32.dll for example execute. These low level rutines does not use C or C++ std libraries to do their job. Am I correct?
Now if we imagine that Windows don't prophibit us from executing low level rutines and I want to write a program that formats a hard drive, am I beyond the OS hardware abstraction layer?
How Partition Magic formats a drive for example? What rutines does it use?
If you feel my questions are stupid or confuzing please tell me to explain more. It is very important for me to find the correct answers.
> Maybe I should ask how fgets() works? What rutines does it use?
Whatever the current platform has to offer probably.
Consider the std:: the nice level concrete on which you build your house.
Underneath, there is the whole jaggy mess of the current platform, but you don't need to know about that. In some places it can be quite thin, but in other places it could be much thicker.
I've edited my last post. So please read it again.
Platform independence in C++ just means that the interface will be the same across platforms. You can find the same functions that will do the same thing on any platform that has a conforming compiler. That does not mean that the functions are all implemented in the same way.
So the C and C++ standard libraries are both platform independent in their interfaces, but they both can be (and are) implemented differently on different platforms. Sometimes they are implemented in terms of each other, sometimes they are implemented with platform specific functions, sometimes they are implemented with platform specific assembly.
The C runtime library is available to C++ programmers if they choose to use it, which is why it is covered in MSDN, just like the C++ standard library.
OK. So for example when I use std::cin.get() in my code, the real code that it is replaced with by the linker (or that is executed at run time, if its body is in a DLL) can be different on each platform. The standard is only for the arguments, return value and the duty of the function; the body can be different.
Am I correct?
How does Partition Magic format a drive, for example? How does the OS let it do its job?
That is correct... in a way. It's at compile-time, not linking-time.
In the context of this discussion, i'm defining a C++ implementation as the combination of the compiler, linker and standard library (including the STL). Your implementation is portable if it conforms to the ISO C++ specifications. These specifications rule the language syntax and how the entities of the standard library should be presented to the user (we, the coders).
If an implementation changes these rules, it is not portable. For instance, if the syntax for a function declaration was altered by some implementation to function_name return_type(arglist,...); instead of the ISO defined return_type function_name (arglist,...);, this implementation would not be portable across other implementations.
On another implementation that conformed to ISO C++, that change would be caught at compile time and flagged as an error.
If the implementation overloaded getline() to include yet a new declaration with different parameters than those already defined in the standard library, the same would happen if the user decided to code with the new getline function and compile his code on another implementation.
However, despite being not portable across implementations, it could still be portable across systems. That implementation (assuming it was a Win32 one) could have a corresponding one for Linux, and Solaris, and DEC,...
That level of portability (portability across systems) is achieved by the implementation provider in how he coded the standard library (STL included). It is obvious that a std::cout cannot work the same way in windows, linux, Macintosh,... However, to the user it must look like so. Regardless of the system the user is coding for, std::cout syntax must be the same. The code that implements std::cout, on the other hand, is necessarily different.
But the code of std::cout is "added" at compile time. Not linking time.
>>How does Partition Magic format a drive, for example? How does the OS let it do its job?
Partition Magic does not use either C or C++ standard functions for that because there are no such standard functions. It will have to go directly to the os api system calls, such as win32 api functions. I don't know which specific functions because I've never attempted to format a drive in a win32 program. Those os-specific functions can be called just like any of the standard c or c++ functions, just include the correct header file(s) and libraries. For win32 api functions, see or your compiler's online help files for win32 api functions.
Quote: But the code of std::cout is "added" at compile time. Not linking time.
Why at compile time? std::cout has external linkage and should be linked after compilation.
If I found out correctly there are some functions in WinIoCtl.h that can do such things. Is it possible to write my own code for formatting and partitioning?
Yes it is. ASM is your friend and worst nightmare.
The linkage type regulates the visibility of a name. Not if that name is added during compile or link-time.
External linkage names are those names that are visible from every translation unit of a program (global objects, extern const objects, classes...)
An Internal linkage name is a name that is local to the translation unit where it was declared (non extern const objects, names declared inside namespaces, static objects,...).
No linkage objects are all those names local to the scope in which they were declared, but not visible across the entire translation unit (local variables, enumerators declared inside a function, classes defined inside a function,...)
OK, maybe I should use the term external reference. But I thought std::cin is in a lib and should be linked by the linker to the .obj. Or is in a DLL.
You know, cin.get() is not an intrinsic function. I am now really confused.
If I am wrong please correct me (above post). | http://cboard.cprogramming.com/cplusplus-programming/81083-cplusplus-std-routines-printable-thread.html | CC-MAIN-2014-10 | refinedweb | 1,278 | 66.64 |
Sherpa can use template models and combine them with other models. Here we show a simple template fit to the SED of a quasar. A set of accretion disk spectral models with the standard parameters (mass, accretion rate, inclination angle) has been stored in the subdirectory Templates. The table.txt ascii file indexes these spectra as required by Sherpa.
First import Sherpa packages
from sherpa.astro.ui import *
Define the optimization method and load the ascii data file ($\log {\nu}, \log {\nu F_{\nu}}$, $1\sigma$ errors), plot the data, and set the data filter. The SED covers a broad band, and we filter the data to include only the optical-UV part for fitting with the disk models.
set_method('simplex')
load_ascii('3098_errors.dat', ncols=3, dstype=Data1D)
plot_data()
notice(13.5,16.)
Input the template model defined via the ascii index file table.txt. A model name for the Sherpa session is required and is given as 'tbl' in this initial step. set_model() assigns the template model to the SED data. get_model() returns information about the initial model parameters, which are based on the first entry in the table.txt index file.
load_template_model('tbl', 'table.txt')
set_model(tbl)
get_model()
In the next step the SED is fit with the templates using simplex neldermead algorithm. The chi2 statistics will be used with the measurement errors entered together with the data. The fit returns the screen output with the information about the best fit parameters and related statistical values.
fit()
plot_fit()
Dataset = 1
Method = neldermead
Statistic = chi2
Initial fit statistic = 426180
Final fit statistic = 91.4691 at function evaluation 324
Data points = 7
Degrees of freedom = 4
Probability [Q-value] = 6.4175e-19
Reduced statistic = 22.8673
Change in statistic = 426089
tbl.mass 8.98957
tbl.rate 0.305722
tbl.angle 0.995458
plot_fit() generates a figure showing the data points overplotted with the line defined by the best fit model parameters given above. The fit goes over the data points, but the reduced chi2 statistic is large, mainly due to the deviations at lower frequencies. We add a second model component to account for these deviations.
ignore()
notice(14.5,15.4)
fit()
plot_fit()
Dataset = 1
Method = neldermead
Statistic = chi2
Initial fit statistic = 26.2289
Final fit statistic = 19.3396 at function evaluation 275
Data points = 5
Degrees of freedom = 2
Probability [Q-value] = 6.31632e-05
Reduced statistic = 9.66979
Change in statistic = 6.88929
tbl.mass 8.98154
tbl.rate 0.29325
tbl.angle 0.980385
The reduced chi2 is still high. In order to obtain the confidence bounds for the best fit parameters, we set up the options for confidence before running it. 'sigma' defines the confidence level; 'max_rstat' defines the maximum value of the reduced statistic allowed by confidence: if rstat is higher than max_rstat, conf() will fail.
set_conf_opt('sigma',1)
set_conf_opt('max_rstat',10)
conf()
tbl.mass -: WARNING: The confidence level lies within (8.961713e+00, 8.965722e+00)
tbl.mass lower bound: -0.0178194
tbl.rate lower bound: -0.0232344
tbl.rate +: WARNING: The confidence level lies within (3.397280e-01, 3.428440e-01)
tbl.rate upper bound: 0.0480356
tbl.angle -: WARNING: The confidence level lies within (9.657205e-01, 9.657121e-01)
tbl.angle lower bound: -0.0146684
tbl.angle upper bound: -----
tbl.mass +: WARNING: The confidence level lies within (9.035475e+00, 9.035465e+00)
tbl.mass upper bound: 0.0539335
Dataset = 1
Confidence Method = confidence
Iterative Fit Method = None
Fitting Method = neldermead
Statistic = chi2gehrels
confidence 1-sigma (68.2689%) bounds:
Param Best-Fit Lower Bound Upper Bound
----- -------- ----------- -----------
tbl.mass 8.98154 -0.0178194 0.0539335
tbl.rate 0.29325 -0.0232344 0.0480356
tbl.angle 0.980385 -0.0146684 -----
The screen output contains the confidence levels as well as the print out of the runs. These values can be accessed later as well using get_conf_results()
get_conf_results()
get_conf_results().parmins
(-0.017819382963949693, -0.023234427094558918, -0.014668384427378611)
get_model()
plot_fit()
ignore()
notice(14.5,15.4)
plot_fit()
In this C++ tutorial, let us have a look at Polymorphism, its implementation and sample C++ program for polymorphism.
Introduction of Polymorphism
Polymorphism refers to having more than one form. So, polymorphism means multiple forms of something. A real-life example of polymorphism is that a person can have different characteristics at the same time. A man, at the same time, is a father, a husband, and an employee. So the same person possesses different behaviour in different situations.
Implementation of Polymorphism
In C++, polymorphism is implemented through:
- Methods Overloading
- Constructor Overloading
- Operator Overloading
- Virtual Function
Methods Overloading
Two or more methods with the same name but different number or type of arguments are known as method overloading. The return type of the function has no impact on method overloading. When the call to such a function is made, the compiler matches the function call with the function having the same number and type of arguments.
Program for Method overloading in C++
#include <iostream>
using namespace std;

class shape {
public:
    void area(int x, int y) {
        cout << "\nArea of Rectangle = " << x * y;
    }
    void area(int x) {
        cout << "\nArea of Square = " << x * x;
    }
    void area(float x, float y) {
        cout << "\nArea of Cylinder = " << 3.14 * x * x * y;
    }
    void area(float x) {
        cout << "\nArea of Circle = " << 3.14 * x * x;
    }
};

int main() {
    shape sh;
    sh.area(3);
    sh.area(4.6f);
    sh.area(12, 45);
    sh.area(5.6f, 7.8f);
    return 0;
}
Output
Area of Square = 9
Area of Circle = 66.442397
Area of Rectangle = 540
Area of Cylinder = 768.069113
Constructor Overloading
Constructors are called whenever an object is instantiated. The name of a constructor is the same as that of the class in which it is defined. A class may have more than one constructor. Whenever a class has multiple constructors, they are implemented as overloaded methods, and each constructor must have a different number or type of parameters. This is known as constructor overloading.
Program for Constructor Overloading in C++
#include <iostream>
using namespace std;

class Example {
    // Variable declaration
    int a, b;
public:
    // Constructor without arguments
    Example() {
        a = 50;
        b = 100;
        cout << "\nIm Constructor";
    }
    // Constructor with arguments
    Example(int x, int y) {
        a = x;
        b = y;
        cout << "\nIm Constructor";
    }
    void Display() {
        cout << "\nValues :" << a << "\t" << b;
    }
};

int main() {
    Example Object(10, 20);
    Example Object2;
    Object.Display();
    Object2.Display();
    return 0;
}
Output
Im Constructor
Im Constructor
Values :10 20
Values :50 100
Hey.
I want to make this program so that the user can move the position of his character. I have made an attempt but I am confused about some bits.
// Code Shark
#include <iostream>
#include <cstdlib> // for system()
using namespace std;

int main()
{
    char character = 'X', movement;
    do
    {
        system("Cls");
        cout << "Do you want your character to move [L]eft, [R]ight, [U]p or";
        cout << " [D]own or [E]xit: ";
        cin >> movement;
        switch (movement)
        {
            case 'L':
                cout << "\b" << character;
                break;
            case 'R':
                cout << " " << character;
                break;
            case 'U':
                //cout << ???;
                break;
            case 'D':
                cout << endl << character;
                /* the thing is if i just go endl here the character will go
                   straight to the start of the new line and most likelys the
                   character will be spaces away from it. */
                break;
            case 'E':
                break;
        }
    } while (movement != 'E');
    cin.get();
    return 0;
}
Some of the problems include: at the start of the do loop, the character's position will be deleted in my code because of the system clear. Another worry is that when the user wants to move up or down, the character will be brought to the start of the line, and that may not be directly under/above the last position. If you could help I would be very grateful.
storeserver demo application problem - Andraž Stošić, Jan 26, 2013 10:33 AM
Hi,
I was following the tutorial on the docs site to create my first rhoconnect application. I was pretty much copy/pasting everything but the application still doesn't work - to be exact, syncing doesn't work and I can't seem to find out where the problem is.
In the rhoconnect project (named storeserver) I defined a new model "product" and under the query method of the Product object I've passed in:
parsed = JSON.parse(RestClient.get("#{@base}.json").body)
@result = {}
parsed.each do |item|
@result[item["product"]["id"].to_s] = item["product"]
end
where the @base variable is defined in the initialize method as the tutorial states:
@base = ''
I didn't change anything else in the storeserver project. I've created another project (a rhomobile application named storemanager) to test syncing with my rhoconnect application. The steps I've taken are:
- create a new model with the following parameters: name, brand, price, quantity, sku
- uncomment the sync line in product.rb
- added the syncserver = '' line in my rhoconfig.txt
I've started everything up (rhoconnect, redis, rhomobile in RhoSimulator as a Windows Mobile app) and logged in. I can see the newly created user on the rhoconnect console, and I get a device id, but no data are synced. I've noticed one message when I was browsing the rhoconnect console.
If I click on my newly created user, then on Product (under the sources section) and source:application:test:errors, I see the following message:
"query-error":{
"message":"Please provide some code to read records from the backend data source"
}
It all seems like I have a problem in the query method of my rhoconnect product adapter, but I don't see any errors there.
Thanks for your help!
Regards.
Re: storeserver demo application problem - Lars Burgess, Jan 26, 2013 10:53 AM (in response to Andraž Stošić)
Did you comment out the raise in the method? When you generate a source adapter, the boilerplate query method looks like this:
def query(params=nil)
  # TODO: Query your backend data source and assign the records
  # to a nested hash structure called @result. For example:
  # @result = {
  #   "1"=>{"name"=>"Acme", "industry"=>"Electronics"},
  #   "2"=>{"name"=>"Best", "industry"=>"Software"}
  # }
  raise SourceAdapterException.new("Please provide some code to read records from the backend data source")
end
Line 8 is intended to be removed when you put your code in there. Otherwise it will always just raise an exception.
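To make the reshaping step concrete, here is a sketch (not from the thread) of the transformation the query method performs, pulled out into a plain function. The `body` argument stands in for the RestClient.get("#{@base}.json").body response described earlier in the thread.

```ruby
require 'json'

def build_result(body)
  parsed = JSON.parse(body)
  result = {}
  parsed.each do |item|
    # Key the hash by the product id, as the tutorial's adapter does.
    result[item["product"]["id"].to_s] = item["product"]
  end
  result
end

body = '[{"product":{"id":1,"name":"iPhone"}},{"product":{"id":2,"name":"Accord"}}]'
puts build_result(body).keys.inspect   # => ["1", "2"]
```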
Re: storeserver demo application problem - Andraž Stošić, Jan 26, 2013 11:08 AM (in response to Lars Burgess)
Thanks. I missed that out
Cheers!
18 Feb 15:03 2005
Re: Re: static versus dynamic typing policy
<jastrachan@...>
2005-02-18 14:03:39 GMT
On 18 Feb 2005, at 13:54, Martin C. Martin wrote:
>
> Pascal DeMilly wrote:
>
>> On Fri, 2005-02-18 at 01:47, Russel Winder wrote:
>>> On Fri, 2005-02-18 at 08:10 +0000, jastrachan@... wrote:
>>>
>>>> foo.bar // property bar or field maybe
>>>> foo.getBar() // getter
>>>>
>>>> but
>>>>
>>>> foo.getBar
>>>> foo.toString
>>>>
>>>> is weird & feels strange
>>>
>>> Personally I agree with you on this but this is just my opinion. I
>>> would like to have the () after to show they are method calls.
>>
>> Personally I disagree in forcing me to put () after anything. As the
>> script writer let me make that decision. I got bitten by the size and
>> size() ambiguities but it didn't kill me. For the most part, there is
>> no ambiguities and it looks so much better.
>> Go and explain an end-user the difference between a property and a
>> method. I believe Groovy has a great future in being a plugin language
>> for end user wanting more from their application as well as a scripting
>> language for developer
>
> Personally I agree with Pascal that parens should be optional, even
> for 0-arg methods. I don't think "is weird & feels strange" is a good
> enough reason; the question is not your first reaction, but whether
> you'll get used to it after a couple months and it will feel natural.
> When I moved from Applesoft Basic, where line numbers were central, to
> Amiga Basic, which didn't have line numbers, it felt weird and strange
> -- for a few days.
>
> I see no reason property vs. field vs. method shouldn't be an
> implementation detail. foo.bar does some things and returns a result.
> Why does it matter what goes on under the hood? If you want the user
> to think of one as a property/field, and the other as an action, then
> use a noun phrase for the first and a verb phrase for the second.
> Thus bar vs. getBar/toString.

FWIW we could still say that this is true from a language perspective...

foo.bar    // property
foo.bar()  // method

and the runtime could, if we want it to, expose zero argument methods as
properties. i.e. if there is no property available for a name, try a zero
arg method of the same name. So you could

foo.toString

It's more a question of would folks think this is weird; as that'd open
the door to strange code such as

foo.bar
foo.getBar
foo.getBar()

all being interchangeable. So I'm thinking we should be a little careful
with zero arg methods to avoid polluting the namespace too much.

James
Have you ever wondered whether generating API documentation automatically is possible while writing the code? Yes, this is possible. For developers, writing documentation is a painful part of the process. Swagger, also known as OpenAPI, solves this problem by generating useful documentation and help pages for web APIs. It not only generates read-only help pages, but ones that are interactive as well, which can even be used for testing APIs.
I will walk you through the steps needed to use document generation capability for your API. We’ll be using the Swashbuckle NuGet package to add document generation ability to our ASP.Net Core project.
First, we install a NuGet package in our project. The NuGet package used here is Swashbuckle.AspNetCore.
In Visual Studio, go to Tools -> NuGet Package Manager -> Manage Nuget Packages for Solution. Search for the package named Swashbuckle.AspNetCore and install it in your project.
Installing Swashbuckle.AspNetCore NuGet package
The next few steps will show you how to configure Swagger by adding a few lines of code in your project Startup.cs file.
Step 1: Include the Swagger namespace.
Including Swagger namespace
Step 2: Then, register the Swagger service in the ConfigureServices method.
Configuring Swagger service
Step 3: Enable Swagger middleware in the Configure method to configure Swagger UI. That’s it, the configuration is now over.
Configuring Swagger UI
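Putting steps 1 through 3 together, the Startup.cs wiring typically looks like the following sketch (standard Swashbuckle calls; the title and version strings are placeholders):

```csharp
using Microsoft.OpenApi.Models;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddControllers();

        // Step 2: register the Swagger generator
        services.AddSwaggerGen(c =>
        {
            c.SwaggerDoc("v1", new OpenApiInfo { Title = "My API", Version = "v1" });
        });
    }

    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        // Step 3: serve the generated JSON document and the Swagger UI
        app.UseSwagger();
        app.UseSwaggerUI(c =>
        {
            c.SwaggerEndpoint("/swagger/v1/swagger.json", "My API V1");
        });

        app.UseRouting();
        app.UseEndpoints(endpoints => endpoints.MapControllers());
    }
}
```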
Step 4: Now run your API in a browser and navigate to your API base URL. In my case, it is localhost:44314. You will see an interactive docs page for your API up and running in no time without you writing even a single line for the document.
Conclusion
So, get started and add docs to your existing API following these simple steps. Using this, you can even create version-based docs for your API. Also, be sure to add authorization to your API docs to prevent outsiders from misusing them.
Additional reference:
[…] 4 Steps to Automatically Generate API Documentation for ASP.NET Core Projects (Bharat Dwarkani) […]
Hi actually no one gives solution , can u tell how to make angular code to hit API in server?
Hope you’re looking for this, it’s going to help you start connecting the angular app to the API.
You can refer this guide for a more detailed reference –
Or you could use a first world tool like Postman and let it automatically generate the documentation for you. Work smarter, not harder!
import "github.com/ipfs/go-ipfs/fuse/mount"
package mount provides a simple abstraction around a mount point
ForceUnmount attempts to forcibly unmount a given mount. It does so by calling diskutil or fusermount directly.
ForceUnmountManyTimes attempts to forcibly unmount a given mount, many times. It does so by calling diskutil or fusermount directly. Attempts a given number of times.
UnmountCmd creates an exec.Cmd that is GOOS-specific for unmount a FUSE mount
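As an illustration (not part of the package API), the GOOS-specific choice that UnmountCmd makes could be sketched like this; the exact flags are assumptions based on the description above.

```go
package main

import (
	"fmt"
	"os/exec"
	"runtime"
)

// unmountCmd sketches the GOOS-specific choice the description mentions:
// diskutil on macOS, fusermount elsewhere.
func unmountCmd(point string) *exec.Cmd {
	switch runtime.GOOS {
	case "darwin":
		return exec.Command("diskutil", "umount", "force", point)
	default:
		return exec.Command("fusermount", "-u", point)
	}
}

func main() {
	// Prints "fusermount" on Linux, "diskutil" on macOS.
	fmt.Println(unmountCmd("/ipfs").Args[0])
}
```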
type Mount interface {
	// MountPoint is the path at which this mount is mounted
	MountPoint() string

	// Unmounts the mount
	Unmount() error

	// Checks if the mount is still active.
	IsActive() bool

	// Process returns the mount's Process to be able to link it
	// to other processes. Unmount upon closing.
	Process() goprocess.Process
}
Mount represents a filesystem mount
Mount mounts a fuse fs.FS at a given location, and returns a Mount instance. parent is a ContextGroup to bind the mount's ContextGroup to.
Package mount imports 11 packages (graph) and is imported by 142 packages. Updated 2020-02-11.
When a request is received by the web server (IIS), it is sent to the worker process. The worker process initializes the ASP.NET runtime, which handles the request. The request flows through a well-defined pipeline to generate the response.
Following are the main components of the HTTP Request pipeline
- HttpModules
- HttpHandler
An HttpModule is implemented as a class which has access to both the request and the response. It handles the request lifecycle events and can modify the request or the response information. There can be multiple HttpModules for a request.
The events that modules handle can also be handled in the global application class. To understand its use, please refer to the global application class.
Using Module to handle the request events has the following advantages.
- Module can be created once and used across different applications.
- Modules allow to keep the even handling logic separate from the application class.This results in better segregation of code.
An HttpModule implements the IHttpModule interface. This interface provides the following methods:
- Init(HttpApplication) This method is used to register the event handlers.
- Dispose() Disposes the resources which are used by the module.
Following are the steps to implement a custom HttpModule:
- Implement a class which implements the IHttpModule interface.
- Implement the Init method in the class. This method should subscribe to the events we need. For example, we can subscribe to the EndRequest event.
- Write the custom logic for the event handler we subscribed to in step 2.
- Finally, we register the module in the Web.config file.
public class ICustomModule : IHttpModule { public void Dispose() { } public void Init(HttpApplication context) { context.EndRequest += new EventHandler(context_EndRequest); } void context_EndRequest(object sender, EventArgs e) { //modify the response } }
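The last step, registering the module, would look along these lines in Web.config (a sketch using the class name from the code above):

```xml
<httpModules>
  <add name="CustomModule" type="ICustomModule"/>
</httpModules>
```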
An HttpHandler is a class which ultimately receives the request and generates the response. There can be only one HttpHandler for a request. There are handlers provided by the framework, such as the page handler which generates the response for .aspx pages. An HttpHandler implements the IHttpHandler interface. This interface provides the following members:
- ProcessRequest: method which generates the response for the request.
- IsReusable: property which specifies if the handler can be reused.
Some of the inbuilt handlers provided by ASP.NET are
- ASP.NET page handler: HTTP handler for ASP.NET pages (*.aspx).
- Web service handler: HTTP handler for web services (*.asmx).
Following are the steps to implement a custom HttpHandler
- Implement a class which implements the IHttpHandler interface.
- Implement the ProcessRequest method in the class.This method should generate the response.
- Implement IsReusable boolean property to specify if the HttpHandler can be reused.
- Finally we register the Handler in the Web.config file.
public class ICustomHandler : IHttpHandler
{
    public bool IsReusable
    {
        get { return false; }
    }

    public void ProcessRequest(HttpContext context)
    {
        // Generate the response for the request here,
        // e.g. context.Response.Write(...)
    }
}
In web.config
<httpHandlers>
  <add verb="*" path="*.cst" type="ICustomHandler"/>
</httpHandlers>
Hello! My name is Clara and I would like to have some help with my project.
I have never taken any programming classes in my entire life and this is also my first engineering class. I know my country kind of sucks
For my first project I only copied and pasted the code from the arduino website.
I have another project and I have been working on some coding with a classmate but I am kind of lost because I would like to add a personal final touch to the project.
The first part of the code is for the humidity and temperature sensor. The second part is for the alarm (buzzer) and my classmate was trying to explain to me what the coding actually means but I am a slow learner and he is fast for me because he knows programming.
Now, I would like to add a part where when I push a button, the alarm stops.
I have been looking over the internet and everybody keeps talking about both the “on” and “off” button. I just want that button to stop the alarm, not to make it ring. I think it might be something easy to do but this coding is killing me.
One thing I noticed is that we have a lot of "void"and I read somewhere about the “break” It apparently can do what I want to accomplish but I am not so sure. Don’t get me wrong, I love what people can do with arduino and I’m all about taking risks but my classmate told me to be careful because I could easily break something or even burn my LCD screen if I keep playing around.
Oh! I would also like it to play some real music (I used my imagination for what it is doing right now. I did not care about the keys and all that! I personally think it is kind of annoying lol) or say something like “Danger! Please, check your 3D printing filament.” but I can worry about it later. Right now I’m only looking for a code for the stop button.
I don’t have my arduino with me right now but I think my pin 8 is free, if that’s a significant information.
Thank you in advance for your help
Here is the code:
#include <dht.h>
#include <LiquidCrystal.h>
LiquidCrystal lcd(12, 11, 5, 4, 3, 2);
dht DHT;
#define DHT11_PIN 7
void setup(){
lcd.begin(16, 2);
Serial.begin(9600);
}

void loop(){
// ... (sensor-reading code missing from the original post)
Serial.println(DHT.humidity);
buzzer();
}
void buzzer()
{
if (DHT.humidity >= 19 or DHT.temperature >= 25) {
tone (8,5000,1000);
delay(250);
tone (8,1500,1000);
//delay(250); //buzzer
delay(400);
tone (8,2500,1000);
delay(450);
tone(8,3500,1000);
delay(400);
tone(8, 4000, 1000);
delay(250);
tone(8, 4200, 1000);
//delay(500);// buzzer
}}
Well, I don't know if this is progress, but it's at least different.
I went back and got a new copy of pyserial from the main site. That installed a serial folder and seems to have resolved a lot of dependencies. However, it also seems to have added quite a few. Here's the situation now.
When I build my program in py2exe I get an executable with the following warnings:
The following modules appear to be missing
['FCNTL', 'System', 'System.IO.Ports', 'TERMIOS', 'clr', 'gdk', 'ltihooks']
I suppose the first 4 are requirements of pyserial, but that's just a guess.
Next, when I run the program, I get a new error in the log file:
Traceback (most recent call last):
File "main.py", line 12, in <module>
File "serial\__init__.pyc", line 19, in <module>
File "serial\serialwin32.pyc", line 12, in <module>
ImportError: No module named win32
I can confirm I have pywin32-214 installed. However, if I try to run "import win32" on the command line, it tells me there's no such module. I don't really know much about this module, just that I needed it for pyserial on XP. Oh, and I did check, and I do have a win32 directory in my site-packages path.
Last but not least, I thought I'd include my setup file. Leaving it out was an oversight earlier today:
from distutils.core import setup
import py2exe
setup( name = "main",
description = "Main program",
version = "1.0",
windows = [ {"script" : "main.py"} ],
options = { 'py2exe' : { 'packages':'encodings',
'includes':'ctypes, cairo, pango, pangocairo, atk, gobject'} },
data_files=['frontend.glade'] )
Thanks for your feedback, I'd like to try and get this resolved tomorrow. Oh, and as before, this is working through the interpreter, it's probably just some dumb dependency issues.
-Max
________________________________________
From: Alexander Belchenko [bialix@...]
Sent: Wednesday, September 23, 2009 4:50 PM
To: py2exe-users@...
Subject: Re: [Py2exe-users] py2exe and pyserial
I'm using pyserial since 2005 and have no problems with it in either python interpreter run, or exe
run. But I'm using Python 2.5.
Bottiger, Maxwell - AES пишет:
> Hey Folks,
>
> I'm trying to build an application using py2exe and everything was working well until I added pyserial to the mix. Now, when I build my executable I get the following messages:
>
> copying C:\Python26\lib\site-packages\py2exe\run_w.exe -> C:\Documents and Settings\Max\Desktop\Filter Changer\dist\main.exe
> The following modules appear to be missing
> ['gdk', 'ltihooks', 'serial']
>
> I'm not sure what's going on, but when I run my newly built application it fails, leaving me a log file which says:
>
> Traceback (most recent call last):
> File "main.py", line 12, in <module>
> ImportError: No module named serial
>
> Now, this works just fine when I run my script through the interpreter, but obviously py2exe isn't seeing the pyserial package and isn't loading it. I found a very brief mention of this in the FAQ, but I'm not clear on what I need to do. Anyway, has anyone else run into this problem? Thanks!
>
> -Max
Hey check out my blog post, it explains it all
-Thadeus
On Wed, Sep 16, 2009 at 1:20 PM, Kris Schnee <kschnee@...> wrote:
> Hello. I've built EXEs using Py2EXE before, but recently started doing it
> again after a year or so off, using the latest versions of Python, Pygame,
> and Py2EXE. And now I can't seem to make it work. Has anyone successfully
> used Pygame with Py2EXE, and what am I doing wrong?
>
> I've got WinXP, and freshly installed Python 2.6 and matching versions of
> Pygame and Py2EXE. I wrote a minimal Python program and setup.py script
> (listed at bottom). I used the command line "python setup.py py2exe". An
> EXE got built. It worked.
>
> I wrote a minimal Pygame program that opens an SDL window and draws some
> text and dots. I tried building an EXE. The EXE crashed and referenced line
> 3 (pygame.font.init()), saying, "NotImplemented Error: font module not
> available (ImportError: DLL load failed: The specified module could not be
> found.)"
>
> I tried copying every relevant DLL: "copy
> C:\python26\lib\site-packages\pygame\*.dll C:\python26\dist". (Advice seen
> elsewhere said that the needed one is SDL_ttf.dll, but that wasn't enough.)
> Result: "Fatal Python error: (pygame parachute) Segmentation Fault". That's
> actually progress, as merely copying the one dll didn't get me even that
> far. This program works fine in IDLE, so the code's okay.
>
> So, I'm getting frustrated. I can't even install Python 2.3 now (a version
> I think I'd used successfully in the past) because the Windows installer is
> now missing and the Python source version crashes. I've also tried the
> alternate program "bbfreeze", which crashed in multiple ways, and have
> tried Py2EXE with the sample script "pygame2exe" seen at
> <>. Also I've tried
> copying "freesansbold.ttf" into the dist directory, which used to fix a
> similar mysterious error.
>
> What do I need to do to make my program work as an EXE? I'd appreciate
>
> Kris
>
> ------Details------
> Program "basic_test.py" (WORKS):
> <code>
> print "Here you can see a nice ice key which you can have for free."
> </code>
>
> Program "pygame_test.py" (DOESN'T WORK AS EXE):
> <code>
> import pygame
> pygame.init()
> pygame.font.init()
> screen = pygame.display.set_mode((640,480))
> screen.fill((0,0,128))
> f = pygame.font.Font(None,24)
> t = f.render("Here you can see a nice ice key which you can have for
> free.",1,(255,255,255))
> screen.blit(t,(10,50))
> screen.set_at((10,100),(255,255,0))
> screen.set_at((110,100),(255,255,0))
> screen.set_at((60,100),(255,255,0))
> pygame.display.update()
> </code>
>
> Program "setup.py" (amended w/appropriate program name):
> <code>
> from distutils.core import setup
> import py2exe
> setup(console=["basic_test.py"])
> ## Also tried "windows=..."
> ## Also tried
>
> "setup(console=["pygame_test.py"],options={"py2exe":{"includes":["pygame"]}})",
> also w/"pygame.font"
> </code>
>
> Exact error on running the EXE:
> <code>
> pygame_test:3: RuntimeWarning: use font: MemoryLoadLibrary failed loading
> pygame\font.pyd
> (ImportError: MemoryLoadLibrary failed loading pygame\font.pyd)
> Traceback (most recent call last):
> File "pygame_test.py", line 3, in <module>
> File "pygame\__init__.pyo", line 70, in __getattr__
> NotImplementedError: font module not available
> (ImportError: MemoryLoadLibrary failed loading pygame\font.pyd)
> </code>
>
>
>
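A fix that often works for this pygame/py2exe combination is to ship SDL's DLLs and pygame's default font next to the generated exe. A sketch of assembling such a list for setup()'s data_files argument (the paths are illustrative, not from the thread):

```python
import glob
import os

# Illustrative path: point this at your own Python/pygame installation.
pygame_dir = os.path.join("C:\\", "python26", "lib", "site-packages", "pygame")

# data_files entries are (target_dir, [source_files]) pairs; "" means
# "next to the exe".  freesansbold.ttf is pygame.font's default font.
data_files = [("", glob.glob(os.path.join(pygame_dir, "*.dll"))
                   + [os.path.join(pygame_dir, "freesansbold.ttf")])]

print(data_files[0][1][-1])
```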
Phoenix LiveView live_link
Subscribe to get FREE Tutorials by email. Start learning Elixir and Phoenix to build features and apps!.
We are going to see how to refactor the code with live_link, making the code simpler and reducing the number of messages exchanged between our browser and LiveView.
Pictures example with live_redirect/2
Let’s consider the LiveView Pictures page example, and see how, in this case, it is convenient to use live_link/2 instead of live_redirect/2.
In this example we have a list of thumbnails. When we click on a thumbnail, LiveView changes the URL and updates the page showing the full-size image.
defmodule DemoWeb.PicturesLive do
...
def render(assigns) do
~L"""
...
<%= for {id, pic} <- pictures do %>
<div class="column"
phx-click="show" phx-value="<%= id %>">
<%= pic.author %>
<img src="<%= picture_url(pic.img, :thumb) %>">
</div>
<% end %>
...
"""
end
end
Each thumbnail element has phx-click="show" and phx-value="<%= id %>" attributes. In this way, when we click a thumbnail, the "show" event is sent to the server along with the picture id.
def handle_event("show", id, socket) do
{:noreply, live_redirect(socket, to: Routes.live_path(socket, DemoWeb.PicturesLive, id))}
end
The LiveView process handles this event with the handle_event("show", id, socket) function, sending a live_redirect message back to the browser.
Inspect LiveView messages
Let’s see better what happens, checking the messages exchanged between the browser and Phoenix.
I’m using the Chrome inspector, but it should be similar with other browsers.
We open the inspector, select the network tab and, by clicking WS, show only the WebSocket connections. Refreshing the page, we now see the WebSocket connection used by Phoenix LiveView, and clicking on it we can see the messages exchanged between the browser and LiveView.
Clicking on a thumbnail we see four messages
[...,"event",{"type": "click", "event": "show", "value":"XesILKdmkwM"}]
[...,"live_redirect",{"kind": "push", "to": "/pictures/XesILKdmkwM"}]
- When clicking a thumbnail, the browser sends a "show" event message to LiveView.
- Phoenix sends back a live_redirect message with the new URL, and the LiveView front-end JavaScript library changes the URL to the new one using history.pushState().
[...,"link",{"url":"/pictures/XesILKdmkwM"}]
[...,"phx_reply",{"response":{"diff": {...}}}]
- After changing the URL, the browser sends a message back to the server with the new URL.
- LiveView processes this message with handle_params/3, renders the new page and sends back a response with the differences to apply on the page.
This back and forth of messages is redundant — we can avoid the first two messages (click and live_redirect) with live_link.
live_link
By using live_link we don't need to handle the click event ourselves. We can then remove the two attributes, phx-click and phx-value, along with the handle_event("show", id, socket) function.
defmodule DemoWeb.PicturesLive do
def render(assigns) do
...
<%= for {id, pic} <- pictures do %>
<%= live_link to: Routes.live_path(@socket, DemoWeb.PicturesLive, id) do %>
<div class="column">
<%= pic.author %>
<img src="<%= picture_url(pic.img, :thumb) %>">
</div>
<% end %>
<% end %>
...
end
end
We wrap our thumbnail tag with live_link and use the same Routes.live_path/3 function as we did with live_redirect - since we are inside a LiveView template, we just need to use @socket instead of socket.
We see how in this way the code becomes simpler and using the browser inspector, we also see that for each click we now have just two messages.
[...,"link",{"url":""}]
[...,"phx_reply",{"response":{"diff": {...}}}]
With live_link, the browser changes immediately the URL without sending any click event (like we did before). Once changed, the new URL is sent to the server which re-renders the page and sends back a message with the differences to apply.
When using
live_link and when
live_redirect?
So, when to use
live_link and when
live_redirect?
In the Pictures example, the user interacts with the frontend by clicking the links. It's clear that live_link is the right choice - it makes the code simpler and we get fewer exchanged messages.
In the animated URL example though, there is no user interaction (apart from starting/stopping the animation). Every 100 milliseconds the server tells the browser to change the URL, so in this case live_redirect is the way to go.
Originally published at Poeticoding.
If I’m going to borrow some C++ code for a folder-picker dialog box, then first I have to figure out how to write a C++ DLL that is useable from C#. Microsoft has a nice tutorial about creating and using DLLs, but it’s in pure C++. “No problem,” I thought. “I know how to use P/Invoke.” Well, it turns out that it’s not so easy.
To keep things simple, I figured I’d write an app to do math, as in that MS tutorial. So I used Visual Studio to create a C# console application, and I wrote this code:
using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Runtime.InteropServices; namespace MixedDll { class Program { [DllImport("MathDll.dll")] static extern int Add(int a, int b); static void Main(string[] args) { Console.WriteLine(Add(21, 21)); Console.ReadLine(); } } }
As you can see, I’m planning to write a dll called MathDll.dll, which will export one function:
int Add(int a, int b). This requires adding a second project to our solution. In Visual Studio, each “solution” can have multiple “projects,” where each project builds some artifact like a dll or exe. As far as I can tell, you may only use one language within a given project.
Following the MS tutorial, I chose to create an empty C++ project rather than following any of the templates. Then in the settings (under Configuration Properties:General), I changed Configuration Type from .exe to .dll. I also added support for the Common Language Runtime (/clr). Then I added two files. First was MathDll.h:
extern "C" { __declspec(dllexport) int Add(int a, int b); }
The second file was MathDll.cpp:
#include "MathDll.h" extern int Add(int a, int b) { return a + b; }
Finally, I added a reference from the C# project to the C++ project. In the Solution Explorer on the right, I right-clicked on
References under
MixedDll. Then I went to the Projects tab and selected MathDll. Oops! Visual Studio wouldn’t let me add the reference! Everything built, but I couldn’t get the IDE to supply access to the dll at runtime. Well, I figured I could do that myself. So after building everything, I manually copied MathDll.dll to MixedDll/bin/Debug. That’s where MixedDll.exe gets built, so I figured it should be able to pick up the dll.
Unfortunately, my little hack didn’t work. When I ran the program, I got a
BadImageFormatException. It turns out this is what happens when you mix 32-bit and 64-bit binaries. As far as I can tell, a 32-bit exe can only use 32-bit dlls, and a 64-bit exe can only use 64-bit dlls. My C++ project was building as 32-bit, but my C# app was building as “Any CPU” (under Properties:Build:Platform Target), which I guess results in a 64-bit product.
The right solution would have been to build a 64-bit dll, but for some reason Visual Studio wasn’t giving me that option. So I switched the C# app to 32-bit and tried again. This time I was able to add the reference (no more kludgy manual copy)! When I rebuilt everything and ran the program, I saw my output:
42! | https://illuminatedcomputing.com/posts/2010/01/calling-a-cpp-dll-from-cs/ | CC-MAIN-2021-21 | refinedweb | 558 | 76.72 |
import “chaos”
As engineers we expect our systems and applications to be reliable. And we often test to ensure that at a small scale or in development. But when you scale up, the assumption that conditions will remain stable is wrong. Reliability at scale does not mean eliminating failure, failure is inevitable. It matters when it impacts our users and it matters how we handle it.
Ana will talk about the practice of Chaos Engineering and how we can proactively embrace failure as we scale our systems.
About Ana.
Feel free to follow and connect with Ana on Twitter, LinkedIn, and GitHub!
Have questions for Ana?
Please submit them in this thread. Ana would love to answer them!
Haven’t signed up for the free conference yet?
Grab your free tickets here
| https://discuss.dgraph.io/t/ana-margarita-medina-import-chaos/11553 | CC-MAIN-2020-50 | refinedweb | 132 | 77.33 |
NAME
poll, ppoll - wait for some event on a file descriptor
SYNOPSIS
#include <poll.h>

int poll(struct pollfd *fds, nfds_t nfds, int timeout);

#define _GNU_SOURCE         /* See feature_test_macros(7) */
#include <signal.h>
#include <poll.h>

int ppoll(struct pollfd *fds, nfds_t nfds,
        const struct timespec *tmo_p, const sigset_t *sigmask);
DESCRIPTION
poll() performs a similar task to select(2): it waits for one of a set of file descriptors to become ready to perform I/O.
struct pollfd {
    int   fd;         /* file descriptor */
    short events;     /* requested events */
    short revents;    /* returned events */
};

The caller should specify the number of items in the fds array in nfds.
- a file descriptor becomes ready;
- the call is interrupted by a signal handler; or
- the timeout expires.
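As an illustrative sketch of how poll() is typically called (the helper wait_readable and its return convention are invented for this example and are not part of any API):

```c
#include <poll.h>

/* Wait up to timeout_ms milliseconds for fd to become readable.
   Returns 1 if readable, 0 on timeout, -1 on error or unexpected revents. */
int wait_readable(int fd, int timeout_ms)
{
    struct pollfd pfd;
    pfd.fd = fd;           /* descriptor to watch */
    pfd.events = POLLIN;   /* request readability only */
    pfd.revents = 0;

    int ready = poll(&pfd, 1, timeout_ms);
    if (ready > 0)
        return (pfd.revents & POLLIN) ? 1 : -1;
    return ready;          /* 0 = timeout, -1 = error (errno set by poll) */
}
```

A negative timeout blocks indefinitely; a timeout of 0 makes poll() return immediately.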
- POLLRDNORM: Equivalent to POLLIN.
- POLLRDBAND: Priority band data can be read (generally unused on Linux).
- POLLWRNORM: Equivalent to POLLOUT.
- POLLWRBAND: Priority data may be written.
ppoll()
struct timespec {
    long tv_sec;     /* seconds */
    long tv_nsec;    /* nanoseconds */
};
RETURN VALUE
ERRORS
- EFAULT: The array given as argument was not contained in the calling program's address space.
- EINVAL: The nfds value exceeds the RLIMIT_NOFILE value.
- ENOMEM: There was no space to allocate file descriptor tables.
JS Interview Help: Prototype, Class(ES6), IIFE, Scope, Closures, Module Pattern
Please note — this is a BRIEF explanation, meant for 2–3 minute interview answers. Please seek out further resources for a deeper understanding of each topic.
Prototypes
When a new Object is formed, the JS engine adds a __proto__ property to the new object. This points to the prototype object of the constructor function. All JS objects take in, or inherit, properties and methods from a prototype. For instance, Dog objects inherit from Dog.prototype. String objects inherit from String.prototype.
The JS prototype property lets you add new properties to object constructors. For example, using ES2015 notation:
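The embedded gist did not survive in this copy; a sketch consistent with the description (the constructor shape is inferred, only the names Dog and about and the output string come from the surrounding text):

```javascript
function Dog(firstName, lastName, age, breed) {
  this.firstName = firstName;
  this.lastName = lastName;
  this.age = age;
  this.breed = breed;
}

// Add a method to the constructor's prototype: every Dog inherits it.
Dog.prototype.about = function () {
  return `${this.firstName} ${this.lastName} is my name. ` +
         `I am ${this.age} years old and my breed is ${this.breed}`;
};

const ernie = new Dog('Ernie', 'Bade', 6, 'Golden Retriever');
console.log(ernie.about());
// → Ernie Bade is my name. I am 6 years old and my breed is Golden Retriever
```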
The above prints “Ernie Bade is my name. I am 6 years old and my breed is Golden Retriever,” because we have added a property of about to the Dog prototype.
Classes
Classes are everywhere, especially in popular modern libraries, like React.js. Whether we realize it or not, (almost) every component utilizes class inheritance to use the React library. We can over complicate this thought process, and dive into some very deep discussion. However, for simplicity of this article, let’s use w3schools definition: JavaScript Classes are templates for JavaScript Objects.
What does this mean? Well, basically, they’re modernized prototypes. They’re “primarily syntactical sugar over JS’s existing prototype-based inheritance.”
So, taking the above Prototypal example, let’s convert it to an ES6 Class:
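The original gist is missing here as well; a reconstruction along the lines the article describes:

```javascript
class Dog {
  constructor(firstName, lastName, age, breed) {
    this.firstName = firstName;
    this.lastName = lastName;
    this.age = age;
    this.breed = breed;
  }

  about() {
    return `${this.firstName} ${this.lastName} is my name. ` +
           `I am ${this.age} years old and my breed is ${this.breed}`;
  }
}

const ernie = new Dog('Ernie', 'Bade', 6, 'Golden Retriever');
console.log(ernie.about());
// → Ernie Bade is my name. I am 6 years old and my breed is Golden Retriever
```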
Above, we have the exact same Object being created. However, this time, we’re using ES6. We use a constructor method to instantiate each new instance of the Dog class, then passing in certain arguments to the class, we use those arguments in the context of the new instance itself. So, ernie is a new Dog with firstName set to “Ernie”, lastName set to “Bade”, age set to 6, and breed set to “Golden Retriever.”
There are many benefits to classes:
- Everything is contained in one area, allowing for easy navigation of separation of concerns. Each new instance of the Dog class will have the same properties.
- This way creates a single way to emulate classes in JS.
- More familiar to coders who use a class-based language.
So, why did we even discuss Prototypes, if classes are more common now? Legacy code. The internet wasn’t invented in 2015… There’s a lot of legacy code that utilizes Prototypes, and it’s our job to be able to understand, and eventually, modernize that code to today’s standards. Who knows — maybe in 2025 the ES6 standard will be considered legacy!
Instantly-Invoked Function Expressions (IIFE)
An Immediately-invoked Function Expression (IIFE) is a way to execute functions immediately, as soon as they are created. The pattern is simply invoking a function expression. A function declaration is the “normal” way of creating a named function. IIFEs are great because they don’t pollute the global object, and are a simple way to isolate variable declarations.
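The gist the next paragraph walks through is missing from this copy; a sketch (only the name tennisBalls comes from the text, its body is a guess, and the IIFE sits on lines 5 through 7 as the paragraph assumes):

```javascript
function tennisBalls() {
  console.log('I love tennis balls!');   // declared, but not yet invoked
}

(function () {
  console.log('I love tennis balls!');   // runs as soon as the script does
})();
```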
In the above example, we have a “normal” function declaration, called tennisBalls. When the JS engine sees this function, it returns undefined. When the JS engine sees lines 5–7, it not only returns undefined, it also invokes the function immediately. Since we placed parentheses around the entire function, and added an additional set of parentheses after it, it is immediately invoked.
The main reason to use IIFEs is to obtain data privacy. Since JS’s var scopes variables to their containing function, any variables declared in the IIFE can’t be accessed by the outside world.
Scope
Simply put, scope determines the accessibility of variables. There are two basic forms of scope: Local or Global scope. JS has function scope — each function creates a new scope. So, variables that are defined inside a function are in local scope, while variables outside of a function are in the global scope. For example, what do you think will print below?
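The gist is missing here; a reconstruction (wrapped in try/catch so the snippet runs to completion; in the original, the final console.log on line 5 simply throws):

```javascript
function namingErnie() {
  var ernie = 'Ernie is the goodest boy.';   // local to namingErnie
}
namingErnie();
try {
  console.log(ernie);   // the article's "line 5": not accessible out here
} catch (e) {
  console.log(e.name);  // → ReferenceError
}
```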
It should print “Ernie is the goodest boy.” However, since the variable, ernie, is defined within the scope of namingErnie(), when line 5 is executed, the JS engine does not have access to this variable. In order to access ernie, we need to either move this variable into global scope OR move our console.log statement inside the function.
This brings up another topic, lexical scope. In lexical scope, a variable defined outside a function can be accessible inside another function defined after the variable declaration. However, just like with local scope, the variables defined inside a function are not accessible outside that function. For example:
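The original gist is missing; a sketch matching the line numbers the next paragraph refers to (declaration on line 1, lookup on line 3):

```javascript
var ernie = 'Ernie is the goodest boy.';   // line 1: outer scope
function namingErnie() {
  console.log(ernie);                      // line 3: found via lexical scope
}
namingErnie();   // → Ernie is the goodest boy.
```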
This will print “Ernie is the goodest boy.” because the function, namingErnie hits line 3, it will look up in scope, to find a variable named ernie. Since this variable is declared and assigned a value on line 1, namingErnie looks up its lexical scope to use that declared variable in its context.
Closures
Every function in JavaScript has a closure. From w3schools, “a closure is a function having access to the parent scope, even after the parent function has closed.” Primarily, the closure has three scope chains: it has access to its own scope (variables defined between its curly brackets), it has access to the outer function’s variables, and it has access to the global variables.
This makes one of the most complicated, but amazing features of JS. Without closures, callbacks and event handlers would be hard to implement.
A closure is created when you define a function — not when you execute it. Then, every time you execute that function, its already-defined closure gives it access to all the function scopes available around it. Let’s get into some code.
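The code the next paragraph walks through is missing from this copy; a reconstruction that matches the description (variable names come from the text; using .repeat to multiply the string is a guess):

```javascript
function ernieGoesOnAWalk() {
  let timesAsked = 0;                        // re-created on every call

  timesAsked += 1;                           // "line 4" in the article
  const canWeGo = 'Can we go on a walk?';
  console.log(canWeGo.repeat(timesAsked));   // "line 6"
}

// three calls, but the counter starts over each time:
ernieGoesOnAWalk();                          // "line 10"
ernieGoesOnAWalk();                          // "line 11"
ernieGoesOnAWalk();                          // "line 12"
```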
Let’s walk through this. Inside our function, we declare a timesAsked variable, which counts the number of times Ernie has asked to go on a walk. We also increment it each time we call the function (line 4). Then, we establish our canWeGo variable, which points to a simple string. That string is then console.logged the number of times Ernie has asked to go on a walk (line 6). In theory, since we are calling this function three times (lines 10–12), each declaration should print an additional ‘Can we go on a walk?’ statement. Line 10 should console.log ‘Can we go on a walk?’. Line 11 should console.log ‘Can we go on a walk?Can we go on a walk?’. Finally, Line 12 should console.log ‘Can we go on a walk?Can we go on a walk?Can we go on a walk?’ However, each time this function is called, the counter variable is reset to 0, meaning that when Ernie asks us to go for a walk on line 12, our brains have forgotten about the previous 2 times he asked (lines 10 and 11), making him very agitated and antsy.
So, how do we fix this? Closure:
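Again the gist is missing; a reconstruction of the fixed version described below (the IIFE returns an inner function whose closure keeps timesAsked private and persistent):

```javascript
const ernieGoesOnAWalk = (function () {
  let timesAsked = 0;                        // private: lives in the closure

  return function () {                       // the inner function ("line 4")
    timesAsked += 1;
    console.log('Can we go on a walk?'.repeat(timesAsked));
  };
})();

ernieGoesOnAWalk();   // Can we go on a walk?
ernieGoesOnAWalk();   // Can we go on a walk?Can we go on a walk?
ernieGoesOnAWalk();   // ...and so on: the counter persists between calls
```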
This is a very similar function to the original ernieGoesOnAWalk, but this time, we use closure to solve our issue. Here, we’ve established ernieGoesOnAWalk as an IIFE (this is so that the function is invoked as soon as it’s called), and within this IIFE, we’ve declared another function, with its own closure (line 4). Inside this returned function (still line 4), we increment the timesAsked by one, and console.log our phrase. Now, every time ernieGoesOnAWalk is called, the timesAsked variable is not re-established at 0, but rather, it’s incremented to the number of times ernieGoesOnAWalk is called. Closure makes it possible for the inner function to access its parent function and have private variables. The timesAsked is protected by the scope of the anonymous function, and can only be changed using the ernieGoesOnAWalk function.
Please note… This is a VERY complicated topic, and since scope is related to closure, it’s easy to mistake closure for scope. Read, re-read, and re-re-read articles on this subject to get a firm grasp on it.
Module Pattern
From Eloquent JS: A module is a piece of program that specifies which other pieces it relies on and which functionality it provides for other modules to use (its interface). It’s easy to use and encapsulates our code. Modules can load each other and use import or export for functionality. The export keyword labels variables and functions as accessible outside of the current module. The import keyword allows the functionality from outside modules. For instance, say we have two files: ernieBarks.js and familyGreeting.js:
//ernieBarks.js
export function bark() {
console.log('Ruff Ruff Ruff! Squirrel?')
}
In the above module, we have a simple function which console.logs ‘Ruff Ruff Ruff! Squirrel?’ (Ernie gets very distracted…). By export-ing this module, we are allowing it to be used in any imported module:
//familyGreeting.js
import { bark } from './ernieBarks.js'
bark()
Since the bark method is imported to familyGreeting.js, it is now accessible to this module. By calling this method, ‘Ruff Ruff Ruff! Squirrel?’ is printed in our console.
Why do all of this? Since all of our code is in one logical block, our code is more maintainable and easier to update. Also, we can reuse each module as many times as we want, and don’t need to define the same functions in multiple modules. In the above example, every file I import bark will be able to call that function, and use it. Say, if Ernie is sick and can only bark ‘Ruff!’ one time, I can change the code in this module alone, and it will automatically update all imported modules as well.
//updating ernieBarks.js
export function bark() {
console.log('Ruff!')
}
Thanks for reading! To view Michael’s portfolio, click here. Michael is a recent Flatiron School graduate, open for work, and always happy to talk code. Let’s connect on LinkedIn! Questions or comments are always welcome! | https://medium.com/swlh/js-interview-help-prototype-class-es6-iife-scope-closures-module-pattern-fd67c68aacb8 | CC-MAIN-2022-05 | refinedweb | 1,653 | 66.33 |
How on-demand rendering can improve mobile performance
It’s not always desirable to render a project at the highest frame rate possible, for a variety of reasons, especially on mobile platforms. Historically, Unity developers have used Application.targetFrameRate or Vsync count to throttle the rendering speed of Unity. This approach impacts not just rendering but the frequency at which every part of Unity runs. The new on-demand rendering API allows you to decouple the rendering frequency from the player loop frequency.
What is on-demand rendering?
On-demand rendering allows you to skip rendering frames while still running the rest of the player loop at a high frequency. This can be especially useful on mobile; bypassing rendering can bring significant performance and power savings, while still allowing the application to be responsive to touch events.
Why would I use on-demand rendering?
Here are some example scenarios of when you may want to lower the frame rate:
- Menus (e.g., the application entry point or a pause menu): Menus tend to be relatively simple Scenes and as such do not need to render at full speed. If you render menus at a lower frame rate, you will still receive input during a frame that is not rendered, allowing you to reduce power consumption and to keep the device temperature from rising to a point where the CPU frequency may be throttled, while keeping a smooth UI interaction.
- Turn-based games (e.g., chess): Turn-based games have periods of low activity when users think about their next move or wait for other users to make their move. During such times, you can lower the frame rate to prevent unnecessary power usage and prolong the battery life.
- Static content: You can lower the frame rate in applications where the content is static for much of the time, such as automotive user interface (UI).
- Performance management: If you want to manage power usage and device thermals to maximize battery life and prevent CPU throttling, particularly if you are using the Adaptive Performance package, you can adjust the rendering speed.
- Machine learning or AI applications: Reducing the amount of work the CPU devotes to rendering may give you a little bit of a performance boost for the heavy processing that is the central focus of your application.
Where is it supported?
Everywhere! On-demand rendering works on Unity 2019.3 with every supported platform (see the system requirements) and rendering API (built-in render pipeline, Universal Render Pipeline and High Definition Render Pipeline).
How do I use on-demand rendering?
The on-demand rendering API consists of only three properties in the namespace UnityEngine.Rendering.
OnDemandRendering.renderFrameInterval
This is the most important part. It allows you to get or set the render frame interval, which is a dividing factor of Application.targetFrameRate or QualitySettings.vSyncCount, to define the new frame rate. For example, if you set Application.targetFrameRate to 60 and OnDemandRendering.renderFrameInterval to 2, only every other frame will render, yielding a frame rate of 30 fps.
OnDemandRendering.effectiveFrameRate
This property gives you an estimate of the frame rate that your application will render at. The estimate is determined using the values of OnDemandRendering.renderFrameInterval, Application.targetFrameRate, QualitySettings.vSyncCount and the display refresh rate. But bear in mind that this is an estimate and not a guarantee; your application may render more slowly if the CPU is bogged down by work from other things such as scripts, physics, networking, etc.
OnDemandRendering.willThisFrameRender
This simply tells you if the current frame will be rendered to the screen. You can use non-rendered frames to do some additional CPU-intensive work such as heavy math operations, loading assets or spawning prefabs.
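For instance, a script could check the flag each frame and shift heavy work onto skipped frames (an illustrative sketch, not code from the blog post; ProcessBackgroundWork is a placeholder for your own method):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

public class BackgroundWorker : MonoBehaviour
{
    void Update()
    {
        if (!OnDemandRendering.willThisFrameRender)
        {
            // This frame will not be rendered, so spend the budget elsewhere.
            ProcessBackgroundWork();
        }
    }

    void ProcessBackgroundWork()
    {
        // e.g. spawn pooled prefabs, stream assets, run pathfinding...
    }
}
```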
What else do I need to know?
- Even though frames will not be rendered as often, events will be sent to scripts at a normal pace. This means that you may receive input during a frame that is not rendered. To prevent the appearance of input lag we recommend that you set OnDemandRendering.renderFrameInterval = 1 for the duration of the input to keep buttons, movement, etc. responsive.
- Situations that are very heavy on scripting, physics, animation, etc. but are not rendering will not benefit from using on-demand rendering. The results may appear choppy and with negligible reduction in CPU and power usage.
Example
Here is a simple example showing how on-demand rendering could be used in a menu to render at 20 fps unless there is input.
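The embedded code sample is missing from this copy of the post; a sketch of the idea (not the blog's exact code; the class name and input checks are illustrative):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

public class MenuRendering : MonoBehaviour
{
    void Start()
    {
        Application.targetFrameRate = 60;
        // 60 fps / 3 = an effective 20 fps while the menu is idle.
        OnDemandRendering.renderFrameInterval = 3;
    }

    void Update()
    {
        // Render at full speed while the user is interacting,
        // so the UI never feels laggy.
        if (Input.anyKey || Input.touchCount > 0)
        {
            OnDemandRendering.renderFrameInterval = 1;
        }
        else
        {
            OnDemandRendering.renderFrameInterval = 3;
        }
    }
}
```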
Here is an example project demonstrating how on-demand rendering can be used in a variety of situations.
Are you using on-demand rendering?
Let us know in the forums how on-demand rendering is working for you. We’ve tested it on Windows, macOS, WebGL, iOS, and Android, both in the Unity Editor and with Standalone players, but we’re always open to more feedback.
19 replies on “How on-demand rendering can improve mobile performance”.
————————————————————-
Editor:
TargetFramerate isn’t intended to be manipulated frequently at runtime
What are the differences between use of OnDemandRendering.renderFrameInterval 2 with Application.targetFrameRate 60 or directly use Application.targetFrameRate 30 manually? Also, if you recommend using OnDemandRendering.renderFrameInterval 1. Where do we gain performance? Thanks in advance!
1) Application.targetFrameRate will affect how often Update is called. OnDemandRendering will keep your Update cycle at the targeted rate, but rendering will only be performed once every n cycles.
2) targetFramerate is not intended to be manipulated frequently at runtime. Using it as an alternative to renderFrameInterval seems to function well enough on iOS, but can cause flickering on some Android devices.
Thank you very much for the explanation, now is very clear for me!
I submitted a bug report regarding OnDemandRendering on Jan 28th and haven’t heard back. (1214921)
The issue seems to be that cameras rendering into render textures don’t get throttled correctly, and update their textures additively, affecting transparency.
Should I also start a forum thread to actually get attention?
Thanks for reporting the issue. We’re looking into it right now.
Thank you so much for the great information that really helps improve mobile performance.
I think that any resource can be used unethically. I had not considered this way of discussing a bad habit of on-demand rendering, but this sounds to me an easy case to be verified in stress tests.
I’m getting errors upon opening the example project.
«The name ‘RenderPipeline’ does not exist in the current context»
Sorry about that. The project has been updated now to resolve those issues. If you download the project again you should be good to go.
Thank you for adding this feature! This see this feature helping the application I am developing significantly.
Recently there were a lot of smartphones coming with advertised 90 & 120Hz screen refresh rates, which in reality are refreshing screen at 60Hz or less most of the time.
With 120Hz Samsung and iPhone phones coming this year, I foresee already an avalanche of mobile games coming soon with *120 screen refresh rate support!!!* ads labelled all over them, which in reality won’t reach even 60FPS most of the time. With developers saying «oh, you know, our game actually runs at 120FPS, we are just using on-demand rendering at 30FPS most of the time to save your battery, you know». So this feature will come in handy for developers who want to cash in on the newest hype of high refresh screens, I suppose. Not so good for customers, who will be fooled again by yet another creative marketing.
I was hoping to see «Rendering layers», still its a cool feature and absolutely will use it as soon as possible. Thanks Unity, Keep it up
Congrats, this feature is very useful.
What’s the difference between this and setting the target frame rate manually?
You reduce input lag
What about rendering in layers e.g. Onion Skybox’s so basic turning without moving can work with minimum rendering overhead?
I tried it
10-06-2009 02:09 PM
Hi,
I am new to BB programming and, as you can see in my other posts, I am trying to transfer data from a PC to a BlackBerry. I tried web services (too complicated; integration with .NET web services was faulty), so I moved to socket connections. From that I learned that I have to take care of the protocol myself, e.g. error correction, flow control, etc. I wrote some kind of simple protocol but sometimes got the problem that the connection simply "hangs".
I am using a BB 8900, JDE 4.6.1 and a direct connection via WIFI, C# 2008 on W7 on the other side.
I did not want to give up, so I tried HTTP connections - hooray - lots of my code is not important anymore, so I can post my short apps here:
C# server side
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Net;
using System.Threading;
using System.IO;

namespace FileSyncer2
{
    class Program
    {
        static void Main(string[] args)
        {
            HttpListener oHttpListener = new HttpListener();
            oHttpListener.Prefixes.Add("");
            oHttpListener.Start();
            Console.WriteLine("Listener started");
            for (;;)
            {
                HttpListenerContext oContext = oHttpListener.GetContext();
                new Thread(new Worker(oContext).ProcessRequest).Start();
            }
        }

        class Worker
        {
            private HttpListenerContext oContext;

            public Worker(HttpListenerContext oContext)
            {
                this.oContext = oContext;
            }

            public void ProcessRequest()
            {
                string msg = oContext.Request.HttpMethod + " " + oContext.Request.Url + " " + oContext.Request.RawUrl;
                Console.WriteLine(msg);
                oContext.Response.StatusCode = (int)HttpStatusCode.OK;
                byte[] vBuffer = new byte[409600];
                oContext.Response.ContentLength64 = new FileInfo("C:\\dummyfile.bin").Length;
                FileStream oSR = new FileStream("C:\\dummyfile.bin", FileMode.Open);
                BinaryReader oBR = new BinaryReader(oSR);
                int vReadBytes = -1;
                Stream oStream = oContext.Response.OutputStream;
                while (vReadBytes != 0)
                {
                    vReadBytes = oBR.Read(vBuffer, 0, 409600);
                    if (vReadBytes != 0)
                        try
                        {
                            oStream.Write(vBuffer, 0, vReadBytes);
                        }
                        catch (Exception ex)
                        {
                            Console.WriteLine(ex.Message);
                        }
                    Console.WriteLine("Still Reading...");
                }
                oStream.Close();
                oStream.Dispose();
                oBR.Close();
                oSR.Close();
            }
        }
    }
}
This does not seem to be too complicated, I am listening for a GET request on port 8080 and for any request I am sending the dummy file.
Now for the Blackberry part:
HttpConnection httpConnection = (HttpConnection) Connector.open(
        "http://...;deviceside=true;interface=wifi", Connector.READ_WRITE);
httpConnection.setRequestMethod(HttpConnection.GET);
int responseCode = httpConnection.getResponseCode();
if (responseCode == HttpConnection.HTTP_OK) {
    InputStream inputStream = httpConnection.openInputStream();
    FileConnection oFileConnection = (FileConnection) Connector.open(
            "file:///...ckberry/music/dummyfile.bin", Connector.READ_WRITE);
    ...
    while (...) {
        ...
        oGaugeField.setValue((int) net.rim.device.api.util.MathUtilities.round(100.0 / vTotal * vRead));
    }
    httpConnection.close();
    oFileOutputStream.close();
    oFileConnection.close();
}
That's not too complicated either. I put this in a loop, and sometimes on the first try and sometimes later, the progress simply stops. It then throws an exception in the C# code at oStream.Close(), but the progress bar is still somewhere between 0 and 100.
I've had the same problems with simple SocketConnections; the issue exists on both the simulator and the 8900. Any ideas? This can't be so complicated ....
TIA!
10-06-2009 02:43 PM - last edited on 10-06-2009 02:44 PM
Please post the detail message of the exception as well as the stack trace.
P.S. Try connecting your BlackBerry client to a known good HTTP server to confirm that the issue is not related to your server.
10-06-2009 02:51 PM
How big is the file you are sending?
The BlackBerry should throw an Exception at some stage, which will give you some clues. In your hang situations, what Exception do you see on the BlackBerry.
One thing that worries me in your Blackberry code is:
oGaugeField.setValue((int)net.rim.device.api.util.
This indicates that you are directly updating the GUI with the progress of your network activity, which suggests you are running on the Event Thread. Are you starting a separate Thread to process your http connection?
10-06-2009 03:14 PM
Hi,
thanks for the fast answer. The C# app throws an InvalidOperationException saying that the stream cannot be closed until all bytes have been written; in the try/catch section I get the message that the app tries to access a non-existing network connection. The BB gives java.io.IOException: TCP receive timed out.
I wrapped the gauge update into synchronized(Application.getEventLock()) - no change.
The file is about 7MB.
Which stack trace do you want?
I tried an IIS7 with the same file - same result!
10-06-2009 03:16 PM
10-06-2009 03:24 PM - last edited on 10-06-2009 03:26 PM
As peter_strange has already pointed out, please make sure this BlackBerry code is running in thread that is not the event thread. I also suggest you read the Javadocs for the HttpConnection interface and then also try the example mentioned in the "Example using ContentConnection" subsection (basically, open a DataInputStream and read the full content-length in one big operation) to see whether it makes a difference.
P.S. I suspect the issue might be down to the now classic issue where requesting to read too much from an InputStream of a socket/http connection may block until the stream is closed/reset by peer. If that's the case, then using the DataInputStream.readFully with the correct length should work...
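A sketch of that suggestion in BlackBerry Java (illustrative only; url stands for the request URL, and error handling is omitted):

```java
HttpConnection conn = (HttpConnection) Connector.open(url);
int length = (int) conn.getLength();   // Content-Length reported by the server; -1 if absent
DataInputStream dis = conn.openDataInputStream();
byte[] data = new byte[length];
dis.readFully(data);                   // blocks until exactly 'length' bytes have arrived
dis.close();
conn.close();
```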
10-06-2009 03:24 PM
Are there limits to the payload you can send with an http response? 7M seems a lot, but this is not an area of expertise.
Aviator168 - are you using http connection to send/receive this data?
10-06-2009 08:53 PM - last edited on 10-06-2009 09:07 PM
No. Just direct TCP (socket://server:port;deviceside=true). I was able to transmit and recv many MBytes over a few hours until I killed it. There are two separate threads, one for recv and one for send, and I keep counters (keeping track of bytes sent and received) on the thread objects. I made a menu item action to peek into those counters. I was able to leave the app running for hours on end.
EDIT: The only thing I am struggling with is finding a way to increase the speed. Too bad I only have access to one phone.
EDIT: Doesn't http connection go through the carrier's proxy? Wonder if that has anything to do with it.
10-07-2009 03:14 AM
Hi,
again thanks for the answers. I doubt that it would be possible to read the contents of the remote file completely because the filesizes can vary and do not have a limit, so there could be 100MB or more.
Aviator: could you send a code snippet of your app? HTTP traffic can go directly over WIFI, too, not only through the carrier's network.
There was also the notice that I shouldn't put the download in the event thread. What does this mean? I have a main thread where I track the buttons and start a new thread from there which starts the download.
10-07-2009 06:10 AM - last edited on 10-07-2009 06:10 AM | http://supportforums.blackberry.com/t5/Java-Development/TCP-connections-suck-any-of-them/m-p/350059 | crawl-003 | refinedweb | 1,196 | 59.8 |
All,
I have a visual python program whose behavior changes
when I alternately comment and uncomment these two lines:
s.v += (force / s.mass) * dt
s.v = s.v + (force / s.mass) * dt
where s is a sphere instance and... well, see below.
I thought this was odd, so I asked Jonathan to take a look.
He did, and:
Gotme! (as in Gotcha!)
Jonathan's reply to me is reproduced below. There are some
good lessons here. I think there are three: 1.) the reference
vs. value feature, 2.) a += b is not entirely equivalent to a = a+b,
3.) default arguments in function defs are built into the function
object when the function is defined, not when it is called.
The reference to MakeMass() is a reference to my program. I *think*
Jonathan's explanation is clear enough on its own without having to know
exactly what's inside MakeMass.
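(A plain-Python stand-in illustrating all three lessons, added for clarity; it uses a list where the original uses a visual vector, and make_mass is only a sketch of what Gary's MakeMass does:)

```python
def make_mass(v=[0, 0, 0]):      # the default list is built ONCE, at def time
    return {"v": v}

a = make_mass()
b = make_mass()                  # a["v"] and b["v"] are the SAME list object

a["v"] += [1]                    # +=: mutates the shared list in place
print(b["v"])                    # -> [0, 0, 0, 1]  (b sees a's change!)

c = make_mass()
c["v"] = c["v"] + [2]            # x = x + y: builds a NEW list, rebinds c["v"]
print(a["v"])                    # -> [0, 0, 0, 1]  (a and the default untouched)
```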
regards, with thanks to J.B.,
-gary
---------------------------- Original Message ----------------------------
Subject: Re: incrementing vectors "problem"
From: "Jonathan Brandmeyer" <jbrandmeyer@...>
Date: Wed, August 18, 2004 5:09 pm
To: gpajer@...
--------------------------------------------------------------------------
>On Wed, 2004-08-18 at 09:28, gpajer@... wrote:
> This is a work in progress. There must be a coding problem somewhere
> (there are a number of changes in the works) but I'm stuck at this
> spot. If you can just run it and let me know if they are the same or
> different for you, I'll have some idea about what I should do next.
> (I'm not asking you to debug my code!)
I was sufficiently perplexed that I did debug your code. You have been
burned by reference vs. value semantics.
Consider the following code:
from visual import *
spheres = []
velocity = vector(0,0,0)
spheres.append( sphere( v=velocity))
spheres.append( sphere( v=velocity))
spheres.append( sphere( v=velocity))
spheres.append( sphere( v=velocity))
twice = 2
while twice:
    print "before iteration:"
    for i in spheres:
        print i.v
    ctr = 0
    for i in spheres:
        print ctr
        ctr += 1
        print "before:", i.v
        i.v += vector(.01, .01, .01)
        # i.v = i.v + vector(.01, .01, .01)
        print "after", i.v
    twice -= 1
Run it, and be surprised. The problem is that when each sphere is
created, its 'v' attribute is a reference to the single vector pointed to
by 'velocity'. So, when the loop runs with += expressions, each of them
is changing the single global vector 'velocity', but when it is run with
'x = x + y' expressions, on the first iteration, each 'v' attribute is
reassigned to a new, unique vector: the returned result of the
addition.
What is the fix? Whenever you want a true copy, you can invoke the "copy
constructor" for a vector to break the reference cycle:
velocity = vector(0,0,0)
v = vector(velocity)
Note that all of the visual objects' vector attributes underlying "set
functions" do essentially the same thing.
Actually, there is one additional piece of information that is specific to
your code. In your case, the common vector was the default value for the
'velocity' argument in the MakeMass() function. When the
interpreter passes the closing line of the function definition, it creates
a callable object, named "MakeMass", and that object has a single copy of
any default arguments within it. Every time MakeMass's __call__() member
function is invoked without one of the arguments for which there is a
default, that parameter is replaced with a reference to the single
instance of the default value that was created when MakeMass was created.
I know its a damned subtle problem, but those are Python's semantics.
HTH,
-Jonathan | http://sourceforge.net/p/visualpython/mailman/visualpython-users/thread/3062.68.81.232.17.1093204778.squirrel@68.81.232.17/ | CC-MAIN-2014-49 | refinedweb | 601 | 67.35 |
Hi there. I'm building a small chat in which there is a main server that hosts a service, and there are many clients. Each client hosts a service too, so it can get any messages sent by other clients, through a call done by the server.
But I have a little problem. I need to know every client IP. I searched a lot, and I found it to be easier using HttpChannel, where I can just use something like:
This is great as I can do it inside the service.This is great as I can do it inside the service.Code:string clientAddress = HttpContext.Current.Request.UserHostAddress;
But I'm using a TcpChannel as a connection.
How can I, inside of the ServerService, get the client IP?How can I, inside of the ServerService, get the client IP?Code:namespace Server { class Server { static void Main(string[] args) { TcpChannel channel = new TcpChannel(8086); ChannelServices.RegisterChannel(channel, true); RemotingConfiguration.RegisterWellKnownServiceType( typeof(ServerService), "server", WellKnownObjectMode.Singleton); System.Console.WriteLine("press <enter> to quit..."); System.Console.ReadLine(); } } }
Thanks in advance.
My best regards. | http://cboard.cprogramming.com/csharp-programming/100802-getting-ip-service-using-tcp-connection.html | CC-MAIN-2015-48 | refinedweb | 182 | 61.43 |
Flask-Failsafe 0.2
A failsafe for the Flask reloader
Flask-Failsafe
A failsafe for the Flask reloader.
The Flask reloader works great until you make a syntax error and it fails importing your app. This extension helps keep you working smoothly by catching errors during the initialization of your app, and provides a failsafe fallback app to display those startup errors instead.
To use it, run your app via a small script script with a factory function to initialize your app:
from flask_failsafe import failsafe @failsafe def create_app(): # note that the import is *inside* this function so that we can catch # errors that happen at import time from myapp import app return app if __name__ == "__main__": create_app().run()
The @failsafe decorator catches any errors calling create_app() and returns a fallback app that will instead display the Flask error debugger.
If you use Flask-Script, you can pass the same @failsafe-decorated factory function to the Manager() class:
from flask.ext.script import Manager, Server from flask_failsafe import failsafe @failsafe def create_app(): from myapp import app return app manager = Manager(create_app) manager.add_command("runserver", Server()) if __name__ == "__main__": manager.run()
Changes
0.2 (2014-01-03)
Python 3 support (thanks to Asger Drewsen for the help)
0.1 (2012-09-14)
Initial release
- Author: Matt Good
- License: BSD
- Platform: any
- Package Index Owner: mgood
- DOAP record: Flask-Failsafe-0.2.xml | https://pypi.python.org/pypi/Flask-Failsafe | CC-MAIN-2017-34 | refinedweb | 231 | 52.19 |
22 November 2007 02:37 [Source: ICIS news]
SINGAPORE (ICIS news)--China’s Ministry of Commerce said on Thursday it had imposed preliminary anti-dumping duties (ADD) ranging from 5% to 54.1% on acetone imports from Japan, Singapore, South Korea and Taiwan.
The lowest ADD at 5% was levied on cargoes from ?xml:namespace>
The duties will kick in from Friday, 23 November.
Key sellers such as Japan’s Mitsui Chemicals as well as Taiwanese producers Formosa Chemicals & Fibre Corp and Taiwan Prosperity had ADD of 11.9%, 6.2% and 6.5% levied on their exports respectively ( please see table below).
The government had launched an ADD probe in March 2007 to assess the damage to the domestic acetone industry after receiving applications filed by domestic producers | http://www.icis.com/Articles/2007/11/22/9080653/china-imposes-add-on-asian-acetone.html | CC-MAIN-2015-18 | refinedweb | 129 | 56.35 |
/* Definitions for managing subprocesses in GNU Make.. */ #ifndef SEEN_JOB_H #define SEEN_JOB_H /* Structure describing a running or dead child process. */ struct child { struct child *next; /* Link in the chain. */ struct file *file; /* File being remade. */ char **environment; /* Environment for commands. */ char **command_lines; /* Array of variable-expanded cmd lines. */ unsigned int command_line; /* Index into above. */ char *command_ptr; /* Ptr into command_lines[command_line]. */ pid_t pid; /* Child process's ID number. */ #ifdef VMS int efn; /* Completion event flag number */ int cstatus; /* Completion status */ #endif char *sh_batch_file; /* Script file for shell commands */ unsigned int remote:1; /* Nonzero if executing remotely. */ unsigned int noerror:1; /* Nonzero if commands contained a `-'. */ unsigned int good_stdin:1; /* Nonzero if this child has a good stdin. */ unsigned int deleted:1; /* Nonzero if targets have been deleted. */ }; extern struct child *children; extern void new_job PARAMS ((struct file *file)); extern void reap_children PARAMS ((int block, int err)); extern void start_waiting_jobs PARAMS ((void)); extern char **construct_command_argv PARAMS ((char *line, char **restp, struct file *file, char** batch_file)); #ifdef VMS extern int child_execute_job PARAMS ((char *argv, struct child *child)); #else extern void child_execute_job PARAMS ((int stdin_fd, int stdout_fd, char **argv, char **envp)); #endif #ifdef _AMIGA extern void exec_command PARAMS ((char **argv)); #else extern void exec_command PARAMS ((char **argv, char **envp)); #endif extern unsigned int job_slots_used; extern void block_sigs PARAMS ((void)); #ifdef POSIX extern void unblock_sigs PARAMS ((void)); #else #ifdef HAVE_SIGSETMASK extern int fatal_signal_mask; #define unblock_sigs() sigsetmask (0) #else #define unblock_sigs() #endif #endif #endif /* SEEN_JOB_H */ | http://opensource.apple.com//source/gnumake/gnumake-110/make/job.h | CC-MAIN-2016-40 | refinedweb | 238 | 
54.73 |
However, there is always room for improvement. My pupils wanted to change the BALLSPEED and have a winner, when a certain score is reached. Therefore, we amended the code as attached. BALLSPEED is initially 3 (versus 5) and gets faster every 60 seconds. No problem.
And we included an if-statement, whether p1Score or p2Score has reached END (for test purposes =3, later =10). We included a print-statement that gives the correct result in the shell. However, here comes our problem: the score on the game board is not updated to the final score, although the print-out in the shell is correct. Why do the statements in line 40 an 48 do not work?
attached the modified code and a photo of our home-made paddles with 10k-potentiometers.
Code: Select all
# pong game # source MagPi 77, pages 32 - 35 # modified by Raspberry Pi course at JFS Bad Bramstedt, Germany import pgzrun import random from gpiozero import MCP3004 import math from time import time, sleep # neu pot1 = MCP3004(0) pot2 = MCP3004(1) # Set up the colours BLACK = (0 ,0 ,0 ) WHITE = (255,255,255) p1Score = p2Score = 0 END = 3 # new: end of game, when first player has scored END RESTART = 15 # restart after xx seconds BALLSPEED = 3 # changed from 5 to initially 3 p1Y = 300 p2Y = 300 TIME = time() def draw(): global screen screen.fill(BLACK) screen.draw.line((400,0),(400,600),"green") drawPaddles() drawBall() screen.draw.text(str(p1Score) , center=(105, 40), color=WHITE, fontsize=60) screen.draw.text(str(p2Score) , center=(705, 40), color=WHITE, fontsize=60) winner() def winner(): #new global screen, p1Score, p2Score, BALLSPEED, TIME if p2Score==END: print("p1 = ",p1Score,"p2 = ",p2Score) print("Winner is Player2") screen.draw.tex[attachment=0]Pong_Paddles1024.jpg[/attachment]t(str(p2Score) , center=(705, 40), color=WHITE, fontsize=60) # ??? BALLSPEED = 0 sleep(RESTART) p1Score = p2Score = 0 BALLSPEED = 3 if p1Score==END: print("p1 = ",p1Score,"p2 = ",p2Score) print("inner is Player1") screen.draw.text(str(p1Score), center=(105, 40), color=WHITE, fontsize=60) # ??? 
BALLSPEED = 0 sleep(RESTART) p1Score = p2Score = 0 BALLSPEED = 3 def update(): updatePaddles() updateBall() def init(): global ballX, ballY, ballDirX, ballDirY ballX = 400 ballY = 300 a = random.randint(10, 350) while (a > 80 and a < 100) or (a > 260 and a < 280): a = random.randint(10, 350) ballDirX = math.cos(math.radians(a)) ballDirY = math.sin(math.radians(a)) def drawPaddles(): global p1Y, p2Y p1rect = Rect((100, p1Y-30), (10, 60)) p2rect = Rect((700, p2Y-30), (10, 60)) screen.draw.filled_rect(p1rect, "red") screen.draw.filled_rect(p2rect, "red") def updatePaddles(): global p1Y, p2Y p1Y = (pot1.value * 540) +30 p2Y = (pot2.value * 540) +30 if keyboard.up: if p2Y > 30: p2Y -= 2 if keyboard.down: if p2Y < 570: p2Y += 2 if keyboard.w: if p1Y > 30: p1Y -= 2 if keyboard.s: if p1Y < 570: p1Y += 2 def updateBall(): global ballX, ballY, ballDirX, ballDirY, p1Score, p2Score, BALLSPEED, TIME ballX += ballDirX*BALLSPEED ballY += ballDirY*BALLSPEED ballRect = Rect((ballX-4,ballY-4),(8,8)) p1rect = Rect((100, p1Y-30), (10, 60)) p2rect = Rect((700, p2Y-30), (10, 60)) if time() - TIME > 60: #new BALLSPEED += 1 #new TIME = time() #new print("BALLSPEED is now: ", BALLSPEED) if checkCollide(ballRect, p1rect) or checkCollide(ballRect, p2rect): ballDirX *= -1 if ballY < 4 or ballY > 596: ballDirY *= -1 if ballX < 0: p2Score += 1 init() if ballX > 800: p1Score += 1 init() def checkCollide(r1,r2): return ( r1.x < r2.x + r2.w and r1.y < r2.y + r2.h and r1.x + r1.w > r2.x and r1.y + r1.h > r2.y ) def drawBall(): screen.draw.filled_circle((ballX, ballY), 8, "white") pass init() pgzrun.go() | https://www.raspberrypi.org/forums/viewtopic.php?f=106&t=234306&p=1434614 | CC-MAIN-2020-29 | refinedweb | 599 | 65.73 |
In this tutorial we will learn how to use the cpplinq skip operator. The tests shown on this tutorial were performed using an ESP32 board from DFRobot.
Introduction
In this tutorial we will learn how to use the cpplinq skip operator. This operator will allow us to create a new sequence from an original one, by skipping a predefined number of elements and taking all the others.
In other words, if we have an array with 10 elements and we pass the value 2 to the skip operator, the final array will only have the last 8 elements.
The tests shown on this tutorial were performed using an ESP32 board from DFRobot, running the Arduino core.
The code
To get started, as usual, we include the cpplinq library and we declare the using of the cpplinq namespace.
#include "cpplinq.hpp" using namespace cpplinq;
Then, we move on to the Arduino setup, where we will write the rest of the code for our testing use case.
The first thing we will do is opening a serial connection, to later output the final results.
Serial.begin(115200);
After that, we will declare an array with 9 elements. This will be the original array on which we will apply the skip operator.
int ints[] = {1,2,3,4,5,6,7,8,9};
Then we will convert our array to a range object, allowing us to apply the cpplinq operators. This is done with a call to the from_array function, as we have done in other previous tutorials.
from_array(ints)
After this we will apply the skip operator. As already mentioned, this operator receives as input the number of elements we want to skip from the original sequence, and it will return a new one with the rest of the non-skipped elements.
For testing purposes, we will use the value 4, which means the first 4 elements should not be present in the final result. Looking into the original array, this means the values 1, 2, 3 and 4 should be the ones skipped.
skip(4)
After applying the skip operator, we will convert the resulting range to a C++ vector, so we can iterate it and print all its elements. Below we can see the full chain of operator invocations.
auto result = from_array(ints) >> skip(4) >> to_vector();
To finalize, we will iterate through all the elements of the array and print them to the serial port.,8,9}; auto result = from_array(ints) >> skip(4) >> to_vector(); for(int i=0; i<result.size(); ++i){ Serial.print(result[i]); Serial.print("|"); } } void loop() {}
Testing the code
To test the code, simply compile it and upload it to your ESP32, using the Arduino IDE. Once the procedure is finished, open the serial monitor tool.
You should get an output similar to figure 1. As can be seen, the first 4 elements from the original array were skipped, as expected.
>
- ESP32 Arduino cpplinq: Reversing an array
- ESP32 Arduino cpplinq: Removing duplicate elements | https://techtutorialsx.com/2019/05/17/esp32-arduino-cpplinq-the-skip-operator/ | CC-MAIN-2020-40 | refinedweb | 497 | 62.27 |
How to create a simple REST API with Python and Flask in 5 minutes
Duomly
・6 min read
This article was originally published at: Python API tutorial
Python is one of the most in-demand programming languages in 2020. There are a lot of job offers for Python developers and lots of people who would like to learn this programming language. As we mention in one of the previous articles about learning Python, practicing knowledge is the most important.
Taking into consideration that Python can be used to build an application’s back-end, I decided to create an article, describing how to create a simple REST API using Python, Flask, and flask_restful library. I’m going to build a basic CRUD resource for the list of students. To follow this tutorial, you need Python and pip installed on your computer. To check the API, I will use Postman.
Besides, this tutorial is focusing mostly on building the API, so I’m using the mocked data. In most cases, while you are making API, it would be connected to the database.
We will go through the following points during the development:
- Installing flask and flask_restful
- Create and initialize the file
- Mocked data
- Create StudentsList class and route
- Create get() and post() methods for StudentsList()
- Define Student class and route
- Create get(), update() and delete() methods
- Test the endpoints
To make it easier and more convenient, I prepared a video version of this tutorial for those who prefer learning from the movies.
Let's start!
1. Installing Flask and Flask_RESTful
In the beginning, we have to install all the required libraries. Flask is a microframework written in Python, used to build web apps. So, let’s use the following command to install it:
pip install Flask
If it’s ready, we can start installing the flask library Flask_RESTful:
pip install Flask-RESTful
If it’s done, we are ready to start building our API!
2. Create and initialize the file
When we installed everything necessary for creating our API, let’s create a file. I’ll call it api.py, and you can use any name you prefer, but remember that Python files should have .py extension. Please open the file in your favorite code editor, and let’s import a few things which are necessary to start our project.
from flask import Flask from flask_restful import Resource, Api, reqparse
While everything is essential in the top of our file, let’s initialize our API with the following code:
app = Flask(__name__) api = Api(app) STUDENTS = {} if __name__ == "__main__": app.run(debug=True)
Great, now our API is initialized. Let’s go to the next point where we are going to create our mocked data.
3. Mocked data
Inside the STUDENTS variable, we are going to create a dictionary of students ordered by id. Every student will have a name, age, and spec property. Let’s create four simple users:
STUDENTS = { '1': {'name': 'Mark', 'age': 23, 'spec': 'math'}, '2': {'name': 'Jane', 'age': 20, 'spec': 'biology'}, '3': {'name': 'Peter', 'age': 21, 'spec': 'history'}, '4': {'name': 'Kate', 'age': 22, 'spec': 'science'}, }
It’s ready, so we can move one step ahead and start creating our first class with a route.
4. Create StudentsList class and route
Now we can start doing interesting stuff. In the beginning, let’s create a class StudentsList and two methods inside it: get and post.
class StudentsList(Resource): def get(self); def post(self):
And when it’s ready, we should add a route that will be used as an URL to call the data from this class.
api.add_resource(StudentsList, '/students/')
Great, now we are almost ready to display our firs data from the endpoint, the last thing which left is to fill in the methods with some logic and run the first endpoints.
5. Create get() and post() methods for StudentsList()
This is a straightforward step. In the first get method of our API, we would like to return a list of all students. To do this, we are going to return our dictionary:
def get(self): return STUDENTS
Great, now it’s the time to create a post() method to have a possibility to add a new student to our list. For this, we need to create a parser variable just above the class StudentsList to be able to add params to our post() call, and later we can build a post method, where we generate new id and save new student based on passed arguments.
parser = reqparse.RequestParser()
def post(self): parser.add_argument("name") parser.add_argument("age") parser.add_argument("spec") args = parser.parse_args() student_id = int(max(STUDENTS.keys())) + 1 student_id = '%i' % student_id STUDENTS[student_id] = { "name": args["name"], "age": args["age"], "spec": args["spec"], } return STUDENTS[student_id], 201
Now, we are ready to check the first calls to our API. First, let’s run the code. I will do it from my code editor. While the code is running you should see the following image in the console:
Then, please go the Postman and set the GET method, paste the localhost like where our server works and pass the route at the end. In my case link looks like follows:
The result should display the full list of the students:
Let’s also check if the post method works as well. For this, you have to change the method to POST, and pass the arguments: name, age, and spec:
It looks like everything works great! Now it’s time to create another class and other endpoints.
6. Define Student class and route
Now we will create another class and route for that class. The Student class will manage get, update, and delete. Everything in this class concerns a single student got by student_id.
class Student(Resource): def get(self, student_id): def put(self, student_id): def delete(self, student_id):
Next, we are going to add a new route below the current one:
api.add_resource(Student, '/students/<student_id>')
7. Create get(), update() and delete() methods
In this step we will create a logic for get(), update() and delete() methods. First, we would like to return a single student by student_id. Let’s do it:
def get(self, student_id): if student_id not in STUDENTS: return "Not found", 404 else: return STUDENTS[student_id]
Great, next we will create the update() method logic. It will be very similar to the post() method from the previous class, but we won’t create the new id. First, we are going to check if the student with the given id exists. If yes, we will update the values; if no, we will return the information.
def put(self, student_id): parser.add_argument("name") parser.add_argument("age") parser.add_argument("spec") args = parser.parse_args() if student_id not in STUDENTS: return "Record not found", 404 else: student = STUDENTS[student_id] student["name"] = args["name"] if args["name"] is not None else student["name"] student["age"] = args["age"] if args["age"] is not None else student["age"] student["spec"] = args["spec"] if args["spec"] is not None else student["spec"] return student, 200
And as the last thing, we will create a delete() method. In this case, we also have to check if the student with the given id exists to be able to delete the item.
def delete(self, student_id): if student_id not in STUDENTS: return "Not found", 404 else: del STUDENTS[student_id] return '', 204
It seems like everything is ready! Let’s check it!
8. Testing the endpoints
Let’s run our code and open the Postman to be able to test the endpoints. Let’s start by getting a single student. For this we have to pass the link with user id at the end:
It works! Let’s try to update the student, set the PUT method, pass the link with the user id, and add some parameter to change them:
This one works as well! The last thing to check is delete method. So, let’s create a link with student id at the end and change the method to DELETE:
Everything works correctly!
Conclusion
In this article, we created a simple rest API with Python. We used the Flask framework and Flask_RESTful library to make it fast and easy. Our API allows us to get the list of all items, get one item by id, add a new item to the list, update item by id, and delete an item with the given id. For testing the endpoints, I used Postman. To write and run the code, I used the Visual Studio Code.
I hope you will find this tutorial helpful and use it as a base for your first Python API training! If you would like to master your Python knowledge, join Duomly and complete our renewed Python course!
Have a nice coding!
Anna from Duomly
Nice & Useful. Thank you!
The link to your
Python course!is broken.
O! Thanks for pointing! Fixed :)
Good post, one of the best features of flask_restfull is easy swagger documentation
More of it | https://dev.to/duomly/how-to-create-a-simple-rest-api-with-python-and-flask-in-5-minutes-3edg | CC-MAIN-2020-05 | refinedweb | 1,497 | 69.62 |
Thorsten is a co-owner of Dezide (), which specializes in troubleshooting programs based on Bayesian-network technology. He is currently completing a second major in computer science in the area of expert systems. He can be contacted at nesotto@cs.aau.dk or thorsten.ottosen@dezide.com.
Object-oriented programming in C++ has always been a bit awkward for me because of the need to manage memory when filling containers with pointers. Not only must we manage memory, but the memory management strategy implies lots of extra syntax and typedefs that unnecessarily clutter the code. Ignoring garbage collection, there are two approaches to memory management:
- Smart pointers, as in Listing One(a).
- Making your own implementation, such as Listing One(b).
The good thing about smart pointers is that they are safe and fast to implement (just a typedef). On the downside, they are inefficient (in space and time) and have a clumsy syntax. Consequently, in this article, I describe the design and implementation of the second approach, which I call "pointer containers," that has all of the benefits of smart pointers but without the downside. The complete source code that implements this technique is available electronically; see "Resource Center," page 4.
Exception Safety
When dealing with memory, exception safety is always a concern. Basically, the concept can be summarized as follows (from strongest to weakest):
- The "nothrow" guarantee means code will not throw.
- The "strong" guarantee means code has roll-back semantics in case of exceptions.
- The "basic" guarantee means code will only preserve invariants in case of exceptionsin all cases, no resources leak.
I start by extending the interface of the container, as in Listing Two(a). Granted, it is easy to implement push_back(), as in Listing Two(b), but vector<T*>::push_back() might throw an exception if a new buffer cannot be allocated (leaking the passed pointer). Consequently, you have to change the implementation, as in Listing Two(c). Now, push_back() has the strong exception-safety guarantee. Although this seems simple, even experienced programmers seem to forget it.
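The idea behind Listing Two(c) can be sketched as follows. All names here are mine (the article's actual code is available electronically), and std::unique_ptr stands in for the auto_ptr of the era:

```cpp
#include <memory>
#include <vector>

// Illustrative skeleton of a vector-of-pointers wrapper whose
// push_back has the strong exception-safety guarantee: if the
// underlying vector<T*>::push_back throws while growing its buffer,
// the guard still owns and deletes the new pointer.
template<class T>
class ptr_vec_sketch {
    std::vector<T*> vec_;
public:
    ~ptr_vec_sketch() {
        for (std::size_t i = 0; i != vec_.size(); ++i) delete vec_[i];
    }
    void push_back(T* x) {
        std::unique_ptr<T> guard(x); // owns x until the insertion succeeds
        vec_.push_back(x);           // may throw; guard then deletes x
        guard.release();             // the vector now owns the pointer
    }
    std::size_t size() const { return vec_.size(); }
    const T& operator[](std::size_t i) const { return *vec_[i]; }
};
```

Note that the guard is released only after push_back() returns, so an exception at any point leaves no leak and no change to the container.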
Listing Three(a) is a naïve implementation of the iterator range insert(). You must heap allocate copies because the container takes ownership of its pointers. This is not only a naïve implementation, but it is also horribly flawed. First, the elements are inserted in reverse order, if they ever get inserted! Second, before might be invalidated by vector<T*>::insert(), leading to illegal code. Third, there are potential memory leaks (as with push_back()). And fourth, it is potentially slow because you might cause several reallocations of a larger buffer. Listing Three(b) is better.
That's it, right? Well, yes and no. Yes, because I have removed all the stupid errors; and no, because you might still get several reallocations. So why not just call:
vec_.reserve( distance(first,last) + vec_.size() );
as the first line in insert()? The answer is that reserve() can invalidate before. (As a caveat, the iterator type I might be an InputIterator, which means you cannot determine the distance between them, so you would need to do your own tagging. I will ignore that issue for now, although the implementation must not.) Furthermore, if you decided to implement several pointer containers in one generic wrapper, you cannot call container-specific operations, such as reserve(), without extra traits. Still, you have only achieved the basic exception-safety guarantee (which would be acceptable), you're left with a clumsy implementation, and inserting pointers one-at-a-time means you're not taking advantage of the fact that you know that N pointers must be inserted.
What you really would like to do is to use the iterator range version of insert() in vector. One idea would be to make an iterator wrapper whose operator*() would return a newly allocated copy. Iterators that do something extra are sometimes called "smart iterators" (see "Smart Iterators and STL," by Thomas Becker, C/C++ Users Journal, September 1998). This would again lead to basic exception safety; however, the C++ Standard does not exactly specify how and when insertions take place and I could imagine prohibited memory errors occurring. In particular, the Standard does not guarantee that one element is inserted before the next iterator indirection happens.
Therefore, I decided to use a different strategy and go for the strong exception-safety guarantee. The only thing it requires is a temporary exception-safe, fixed-size buffer to hold the new pointers while they are being allocated. This is my scoped_deleter class, and its role is similar to that of auto_ptr<T> in the implementation of push_back(): It holds pointers and deletes them if an exception occurs. Once it is created, the implementation looks like Listing Four. There you have it: a generic, strongly exception-safe, and elegant implementation. Because copying a T* cannot throw, vec_.insert() is guaranteed to have the strong exception guarantee; hence, you never insert pointers that will also be deleted by the scoped_deleter if an exception occurs. The price you pay is one extra heap allocation for the internal buffer in scoped_deleter. An extra heap allocation is normally expensive, but it is acceptable here because you make N heap allocations while copying the pointers (and you could optimize this by using stack space for small buffers).
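As a rough sketch of the idea (all names are mine, and a std::vector stands in for the single heap-allocated fixed-size buffer the article describes):

```cpp
#include <vector>

// Sketch of a scoped_deleter-style guard: it holds newly allocated
// clones and deletes them unless commit() is called after the target
// container has taken ownership.
template<class T>
class scoped_deleter_sketch {
    std::vector<T*> ptrs_;
    bool committed_;
public:
    scoped_deleter_sketch() : committed_(false) {}
    void add(T* p) { ptrs_.push_back(p); }       // one call per clone
    T* const* begin() const { return ptrs_.data(); }
    T* const* end() const { return ptrs_.data() + ptrs_.size(); }
    void commit() { committed_ = true; }         // target now owns all
    ~scoped_deleter_sketch() {
        if (!committed_)
            for (std::size_t i = 0; i != ptrs_.size(); ++i) delete ptrs_[i];
    }
};

// A strongly exception-safe iterator-range insert built on top of it:
template<class T, class I>
void safe_insert(std::vector<T*>& vec, std::size_t before, I first, I last) {
    scoped_deleter_sketch<T> sd;
    for (; first != last; ++first)
        sd.add(new T(*first));   // may throw: sd then cleans up
    // Copying a T* cannot throw, so this insert has the strong guarantee:
    vec.insert(vec.begin() + before, sd.begin(), sd.end());
    sd.commit();                 // nothing below this line can throw
}
```

If any allocation or the insertion itself throws, the guard deletes every clone made so far and the target vector is untouched, which is exactly the roll-back behavior the strong guarantee demands.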
You can then continue in this fashion to ensure that the insert(), assign(), erase(), and clear() interfaces from std::vector are reimplemented in ptr_vector in an exception-safe manner.
Iterator Design
You might have wondered why new T( *first ) would compile; after all, if the iterator is defined as the iterator of std::vector<T*>, you would certainly need two indirections (unless the constructor took a pointer argument).
The reason is that you want normal iterators to be indirected by default, and the "old" style iterators to be available if needed. Therefore, you have Listing Five in ptr_vector, along with similar versions for const iterators.
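The key behavior of such an indirected iterator can be illustrated with a stripped-down adaptor. This is only a sketch (the name is mine); Boost's indirect_iterator supplies the full iterator interface:

```cpp
// Minimal illustrative iterator adaptor over a T**: dereferencing
// yields T&, not T*&, so element access never exposes the raw pointer.
template<class T>
class indirect_iter_sketch {
    T** p_;
public:
    explicit indirect_iter_sketch(T** p) : p_(p) {}
    T& operator*() const { return **p_; }
    indirect_iter_sketch& operator++() { ++p_; return *this; }
    bool operator!=(const indirect_iter_sketch& o) const { return p_ != o.p_; }
};
```

Because operator*() strips one level of indirection, an expression such as new T( *first ) sees a T, not a T*, which is why it compiles with the default iterators.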
The indirect_iterator adaptor from Boost () will most likely be part of the new Standard. The adaptor does all the hard work for you and has several good consequences:
- First of all, it promotes pointerless programming and gives code a clean look. For example, compare Listing Six(a) to Listing Six(b). If you allowed direct pointer assignment, you would at least have a memory leak and (most likely) a pending crash if the same pointer is deleted twice.
- The second good consequence is that it allows for seamless integration between ptr_vector<T> and (for example) std::vector<T>. This can be useful if we deal with copyable, nonpolymorphic objects. In that case, you could say something like Listing Six(c). Of course, it will also work the other way around from v2 to v.
- The third benefit is that it is just safer to use standard algorithms.
- The fourth benefit is that you can use normal functors directly without speculating with wrappers that do the indirection. However, there are also situations where you want to use the old ptr_iterator. These situations are characterized by using mutating algorithms from the Standard Library, which need to copy objects. Copying the object is expensive (compared to copying a pointer) or simply prohibited (as in most polymorphic classes). So, if copying is cheap and possible, then you can just stick to the normal, safer iterators.
In short, iterators promote safety and interoperability while not restricting access to the nonindirected iterators when needed.
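When you do fall back to the raw ptr_iterators, the kind of indirecting functor mentioned above might look like this sketch (the name is mine):

```cpp
#include <algorithm>
#include <vector>

// Indirecting comparator for raw pointer iterators: std::sort then
// rearranges the pointers while comparing the pointees, so no object
// is ever copied.
struct indirect_less_sketch {
    template<class T>
    bool operator()(const T* a, const T* b) const { return *a < *b; }
};
```

Sorting a range of pointers this way only swaps pointers, which is cheap and legal even for non-copyable, polymorphic objects.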
The Clonable Concept
Even though we have a solid iterator range insert(), we still demand that new T( *first ) is a valid expression. There are several reasons why this is not ideal. The first example I can think of involves objects that are created by object factories. A second example involves polymorphic objects that are not copyable; a polymorphic object should very rarely be copyable. In both cases, you cannot use the call new T( *first ), so you need a hook of some kind.
The required indirection is given by the "Clonable" concept: let t be an object of type T; then T is clonable if new T( t ) is a valid expression or if allocate_clone() has been overloaded or explicitly specialized for T; see Listing Seven. What the implementation of ptr_vector now has to ensure is that the right version of allocate_clone is actually called. There are several pitfalls here and they all resemble the problems that the Standard has with calling std::swap(). One solution is simply to call allocate_clone() unqualified within ptr_vector and to define the overloaded version in T's namespace. This way, you rely on argument-dependent lookup and replace the function template specialization with a simpler overloaded function. (Another possibility would be to add another template parameter to ptr_vector; the parameter could then be a type with allocate_clone() as a static function.)
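The lookup mechanics can be sketched like this. The Widget type, the lib namespace, and the +100 marker are all illustrative inventions, used only to show which overload argument-dependent lookup selects:

```cpp
// Default version: requires new T( t ) to be a valid expression.
template<class T>
T* allocate_clone(const T& t) { return new T(t); }

namespace lib {
    struct Widget {
        int id;
        explicit Widget(int i) : id(i) {}
    };
    // Found by argument-dependent lookup when called unqualified; a
    // real overload might delegate to an object factory instead. The
    // +100 is only a marker proving that this overload ran.
    Widget* allocate_clone(const Widget& w) { return new Widget(w.id + 100); }
}

// Mimics the container's unqualified call site:
template<class T>
T* clone_element(const T& t) { return allocate_clone(t); }
```

For lib::Widget, overload resolution prefers the non-template function found via ADL; for a plain int, only the default template is visible, so new int(t) is used.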
Domain-Specific Functions
Because the pointer container manages resources, there is suddenly a whole new interface that makes sense: an interface that deals with releasing, acquiring, and transferring ownership. Consider first the possibility, shown in Listing Eight(a), of releasing a single object, which makes it simple and safe to hand objects out of the container. Should the caller forget to save the return value, you still have no leaks.
While all standard containers have copy semantics, providing the same for ptr_vector would be a bit strange; after all, the objects you insert into the ptr_vector are only required to be clonable. The fact that copying (or cloning) a whole ptr_vector can be expensive also suggests that you need something different. Hence, you add the two functions in Listing Eight(b). The first makes a deep copy of the whole container using allocate_clone() and is easily made strongly exception safe. The second simply releases ownership of the whole container and it is also strongly exception safe. Notice that it cannot have the "nothrow" guarantee because you must allocate a new ptr_vector for the returned auto_ptr. The new constructor is then used to take ownership of the result of clone() or release(), and it gives the "nothrow" guarantee. What is really elegant about these functions is that they let you return whole containers as results from functions in an efficient and exception-safe manner.
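A simplified sketch of these semantics, using std::unique_ptr where the article uses std::auto_ptr (the class and its members are illustrative, and the exception-safety machinery is elided):

```cpp
#include <memory>
#include <vector>

// Sketch of whole-container clone()/release().
template<class T>
struct ptr_vec_sketch2 {
    std::vector<T*> v;
    ptr_vec_sketch2() {}
    ptr_vec_sketch2(const ptr_vec_sketch2&) = delete; // not copyable
    ~ptr_vec_sketch2() {
        for (std::size_t i = 0; i != v.size(); ++i) delete v[i];
    }
    // Deep copy: every element is cloned.
    std::unique_ptr<ptr_vec_sketch2> clone() const {
        std::unique_ptr<ptr_vec_sketch2> r(new ptr_vec_sketch2);
        for (std::size_t i = 0; i != v.size(); ++i)
            r->v.push_back(new T(*v[i]));
        return r;
    }
    // Hand over all pointers: *this is left empty, the result owns them.
    std::unique_ptr<ptr_vec_sketch2> release() {
        std::unique_ptr<ptr_vec_sketch2> r(new ptr_vec_sketch2);
        r->v.swap(v);
        return r;
    }
};
```

Returning a smart pointer to a freshly allocated container is what lets whole containers be passed out of functions efficiently: only pointers move, and no ownership is ever left dangling if the caller drops the result.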
Recall that the iterators prohibited direct pointer assignment. This is certainly a good thing, but you lack the ability to reset a pointer within the container. Therefore, you add two functions to ptr_vector; see Listing Eight(c). The first is a rather general version that makes sense on other containers, too, whereas the last only makes sense on random-access containers such as ptr_vector.
Now you can add these functions to your container; see Listing Nine. The idea behind these functions is that they let you transfer objects between different pointer containers efficiently without messing with the pointers directly. In a way, they are similar to splice() in std::list and can easily fulfill the strong exception-safety guarantee.
Caveats
There are still several issues that must be dealt withsupport for custom deleters, for instance. Because you can specify how allocation is performed with allocate_clone(), you should also be able to specify deallocation with deallocate_clone(). The introduction will affect the implementation greatly because you need to use it whenever deletion is done; this means that you must scrap std::auto_ptr<T> because it cannot support a custom deleter, and change scoped_deleter<T> similarly.
There are certain cases where you still want to use the old-fashioned ptr_iterators with mutating algorithms. The good news is that some mutating algorithms can be used with an indirecting functor; that is, a functor that compares the objects instead of the pointers. The bad news is that "some" is not "all," and algorithms such as remove() and unique() just copy elements instead of swapping them. This leads to both memory leaks and undefined behavior (see "The Standard Librarian: Containers of Pointers," by Matt Austern, cujcexp1910austern/). There is a workaround for remove() and its cousin remove_if(), but it is not very practical (see "A remove_if for vector<T*>," by Harold Nowak, C/C++ Users Journal, July 2001). Though I have not yet decided how to deal with this, I am leaning toward implementing these few error-prone functions as member functions.
While I've focused on the ptr_vector class, the same rules apply to all the standard containers. Thus, you have a wide range of pointer containers that suit almost any special situation. For sequences, the default choice should (as usual) be ptr_vector<T>. A ptr_list<T> should be reserved for the rare cases where you have a large container and insertions/deletions are done at places other than the two ends. By a "large container," I mean one that holds more than 100 elements (but this is only a rough estimate and the guideline differs from platform to platform). You have to remember that a list node often (but not always with special allocators) has to be on the heap each time an insertion takes place. Such an allocation can easily cost the same as moving 100 pointers in a ptr_vector<T>.
Conclusion
Clearly, implementing your own pointer containers is not a walk in the park. However, once done, you have a useful and safe utility that enables flawless object-oriented programming.
If you consider all the extra work that you have to do to make containers exception safe, it should not be surprising that garbage collection can be just as fast as manual memory management. The downside to garbage collectors, of course, is that they waste more memory (for example, a compacting garbage collector will probably double the memory used). It is ironic that with the use of smart pointers and pointer containers, we're close to not needing garbage collection at all.
DDJ
typedef vector< boost:: shared_ptr<T> > ptr_vector;(b)
template< class T > class ptr_vector { std::vector<T*> vec_; public: ~ptr_vector(); // delete objects // ... much more typedef <something> iterator; };Back to article
Listing Two (a)
void push_back( T* ); template< class I > void insert( iterator before, I first, I last );(b)
void push_back( T* t ) { vec_.push_back( t ); }(c)
void push_back( T* t ) { std::auto_ptr<T> p( t ); vec_.push_back( t ); p.release(); }Back to article
Listing Three (a)
void insert( iterator before, I first, I last ) template< class I > { while( first != last ) { vec_.insert( before, new T( *first ) ); ++first; } }(b)
void insert( iterator before, I first, I last ) template< class I > { while( first != last ) { auto_ptr<T> p(new T( *first )); before = vec_.insert( before, p.get() ); ++before; // to preserve order ++first; p.release(); } }Back to article
Listing Four
void insert( iterator before, I first, I last ) { size_t n = distance(first, last); scoped_deleter<T> sd( n ); for( ; first != last; ++first ) sd.add( new T( *first ) ) ); vec_.insert( before, sd.begin(), sd.end() ); sd.release(); // cannot throw }Back to article
Listing Five
typedef boost::indirect_iterator<ptr_iterator>::type iterator; typedef boost::indirect_iterator<ptr_iterator> iterator;/ ptr_iterator ptr_begin(); ptr_iterator ptr_end(); iterator begin(); iterator end();Back to article
Listing Six (a)
( *v.begin() )->foo(); *v.front() = X; v[4] = &X; // doh! v[3] = new X; // ooops!(b)
v.begin()->foo();. v.front() = X; v[3] = &X; // compile error(c)
vector<int> v; ptr_vector<int> v2; ... std::copy( v.begin(), v.end(), std::back_inserter( v2 ) );Back to article
Listing Seven
// primary template: template< class T > T* allocate_clone( const T& t ) { return new T( t ); } // overloaded function template template< class T > X<T>* allocate_clone( const X<T>& t ) { return factory::clone( t ); } // function template specialization template<> Polymorphic* allocate_clone<Polymorphic >( const Polymorphic& p ) { return p->clone(); }Back to article
Listing Eight (a)
auto_ptr<T> release_back(); auto_ptr<T> release( iterator );(b)
auto_ptr<ptr_vector<T> > clone() const; auto_ptr<ptr_vector<T> > release(); ptr_vector( auto_ptr<ptr_vector<T> > );(c)
void replace( iterator where, T* x ); void replace( size_type idx, T* x );Back to article
Listing Nine
template< class I > void transfer( iterator before, I first, I last, ptr_vector<T>& from ); void transfer( iterator before, I i, ptr_vector<T>& from );Back to article | http://www.drdobbs.com/cpp/pointer-containers/184406287 | CC-MAIN-2015-32 | refinedweb | 2,639 | 53.51 |
0 Members and 1 Guest are viewing this topic.
The problem with using the multi-byte functions is that the strings used with them is mixed with other string inside Simutrans, which is UTF-8. As I understood prissi, he used the short paths because they are in pure ASCII (which seems to be true from a quick test).
As far as I can tell all appropriate conversion is already implemented and occurring. The W postfixed functions receive their UTF-16 encoded forms of the internet UTF-8 strings and all UTF-16 results are encoded into UTF-8 for internal use. Exception being the 2 things mentioned in my last post, where with MSVC one of the API calls was to an old deprecated function which might not return UTF-8 (still not sure if the new one does...) and the other to the short file name conversion function which might mangle encoding or fail as it was not really designed for Unicode use.
When dealing with the everything related to the file system, Simutrans relies, directly or indirectly, on the ANSI API.
char const* dr_query_homedir(){ static char buffer[PATH_MAX+24];#if defined _WIN32 WCHAR bufferW[PATH_MAX+24]; if( SHGetFolderPathW(NULL, CSIDL_PERSONAL, NULL, SHGFP_TYPE_CURRENT, bufferW) ) { DWORD len = PATH_MAX; HKEY hHomeDir; if( RegOpenKeyExA(HKEY_CURRENT_USER, "Software\\Microsoft\\Windows\\CurrentVersion\\Explorer\\Shell Folders", 0, KEY_READ, &hHomeDir) != ERROR_SUCCESS ) { return 0; } RegQueryValueExW(hHomeDir, L"Personal", 0, 0, (LPBYTE)bufferW, &len); } wcscat( bufferW, L"\\Simutrans" ); WideCharToMultiByte( CP_UTF8, 0, bufferW, -1, buffer, MAX_PATH, NULL, NULL );#elif defined __APPLE__ sprintf(buffer, "%s/Library/Simutrans", getenv("HOME"));#elif defined __HAIKU__ BPath userDir; find_directory(B_USER_DIRECTORY, &userDir); sprintf(buffer, "%s/simutrans", userDir.Path());#else sprintf(buffer, "%s/simutrans", getenv("HOME"));#endif dr_mkdir(buffer); // create other subdirectories#ifdef _WIN32 strcat(buffer, "\\");#else strcat(buffer, "/");#endif char b2[PATH_MAX+24]; sprintf(b2, "%smaps", buffer); dr_mkdir(b2); sprintf(b2, "%ssave", buffer); dr_mkdir(b2); sprintf(b2, "%sscreenshot", buffer); dr_mkdir(b2); return buffer;}
void dr_mkdir(char const* const path){#if defined(_WIN32) && !defined(__CYGWIN__) WCHAR pathW[MAX_PATH]; MultiByteToWideChar( CP_UTF8, 0, path, -1, pathW, sizeof(pathW) ); CreateDirectoryW( pathW, NULL );#else mkdir(path, 0777);#endif}
Making Unicode out of this multibyte string results in gabarge, since it assumes a wrong codepage.
The problem is that windows on Japanese installation using Fat32 may return a path in ShiftJIS.
Unfourtunately, the documentation states that _setmbcp does not allow to set MB codepage to UTF-8. I think this needs some forther experimentation, since the curretn routines assume the wide char is unicode and not CP932 or something else.
(And you are allowed to mix short and long notation. Win95 used that internally all the time, and hence Winodws 10 still allows it.)
The correct solution is to simply stay away from all Windows API using "codepages". That is just a backwards compatibility layer for programs with no knowledge of Unicode.
The main problem are no fopen, but the calls to gzopen for the compression libaries. There are no UTF16 calls for it. So the only way to do this (and the main reason for using not the w functions) is to be able to open savegames with gzopen.
#if (defined(_WIN32) || defined(__CYGWIN__)) && !defined(Z_SOLO)ZEXTERN gzFile ZEXPORT gzopen_w OF((const wchar_t *path, const char *mode));#endif
Ok, getcwd does not returns a short path
The main problem are no fopen, but the calls to gzopen for the compression libaries. There are no UTF16 calls for it.
Ok, getcwd does not returns a short path, but an invalid one. "simutransト” is returned as "C\:simutrans?" in western code page and with shiftJIS ト in the japanese codepage. Of course any attempt at accessing this is doomed to fail at this point.
Now added short path conversion for program and user dir to fix this. | https://forum.simutrans.com/index.php?topic=17414.0 | CC-MAIN-2018-51 | refinedweb | 625 | 52.49 |
Jeff Turner wrote:
...
>
> What happened to ViProM? Wasn't that meant to be a virtual project
> model, built from Gump/Maven/Whatever? Or is that just my imagination?
It's in the works, yes.
>.
It's the authors. Not yet used, but it would have all the extra info
about the pwoplw involved.
> IMHO, we should:
>
> 1) Deprecate the <todo> section in favour of using an issue tracker.
> Even in Forrest, no-one bothers to keep it up to date.
On other projects it's used. Only because Forrest does not use it
doesn't mean that it's useless.
Furthermore, one thing is having a possible todo format, another is
requiring it.
Once we have in place the possibility of specifying the sources of the
files in a user sitemap, we will be able to integrate all incoming todo
feeds; what we need to define now is our internal format for them.
> 2) Adopt a Wiki-based text format for the changelog. Two excellent
> examples are Log4j's and Ant's:
>
>
>
>
Let's start with the xml formats first. Then we can also add a more
textual representation of them.
> 3) Adopt Maven's project.xml as our default metadata format, as it seems
> clean, simple, and in wide use.
Gump's format is also in wide use.
Maven's format has a shortcoming, that it does not provide the semantics
for multiple subprojects. But if I had a Maven-built program, I would
not want to use the Gump format of course.
Hence I would propose that we adopt Gump's format with extra namespaced
elements for things we need, and also support the Maven format.
This is the same concept of what I proposed before of also supporting
the xml Maven format and navigation.xml.
One thing is supporting, another is adopting as "standard".
--
Nicola Ken Barozzi nicolaken@apache.org
- verba volant, scripta manent -
(discussions get forgotten, just code remains)
--------------------------------------------------------------------- | http://mail-archives.apache.org/mod_mbox/forrest-dev/200309.mbox/%3Cbk4j9n$lcg$1@sea.gmane.org%3E | CC-MAIN-2016-44 | refinedweb | 323 | 68.16 |
Closed Bug 504587 Opened 13 years ago Closed 13 years ago
TM: investigate v8-richards performance
Categories
(Core :: JavaScript Engine, defect, P1)
Tracking
()
RESOLVED DUPLICATE of bug 518487
mozilla1.9.2
People
(Reporter: sayrer, Assigned: brendan)
References
Details
Attachments
(1 file, 22 obsolete files)
This test supposedly traces relatively well, but our perf is not particularly competitive (it is much faster than Fx3). Julian is going to take a look at the quality of LIR and asm we're generating.
A couple of preliminary comments: js/src/v8/run-richards.js isn't so useful for repeatable comparative benchmarking, because it appears to run as many iterations as it can in one second. Am using SunSpider/tests/v8-richards.js instead, as it repeatably runs for 350 iterations.. Of course this doesn't say anything specific about instruction counts or where all the cycles went. But it gives some ballpark feel for what the main costs are. Continuing to investigate.
status update: jsshell running v8-richards runs for about 3.5 billion x86 insns, depending on build options. Of these: 246 050 915 insns in 22 368 256 calls to js_UnboxDouble 68 058 108 insns in 7 562 012 calls to js_UnboxInt32 70 388 157 insns in 3 351 817 calls to js_BoxInt32 94 199 942 insns in 2 140 908 calls to js_BoxDouble Further investigation shows almost all of the insns in these 4 calls are on the fast paths (lowest bit is set, so just >> 1 and return, etc). I implemented a partial inline of js_UnboxInt32, so that the fast path is done directly in LIR, calling out to the fn only if the lowest bit is not set. (using forward branches and labels in LIR). That works, but is slower. Profiling shows a substantial increase in icache misses and branch mispredictions. Am investigating why. This benchmark appears to have a surprisingly high icache miss rate anyway, about 1.2%. Considering that there are only 34 active traces, this is a bit strange. (can 34 traces fill up a 64k I1 cache?)
This is very obviously WIP, but shows it's relatively easy to do. It gives a small (1%) slowdown on v8-richards, so is not recommended. New strategy (now I realise I'm looking for roughly a factor of 2 performance loss) is to build chrome and try reduce the size of the test case whilst preserving the performance difference.
Some more low level figures. These compare 32-bit TM compiled "-O2" with 32-bit V8 compiled "-O3 -fomit-frame-pointer", so TM is at a small relative disadvantage, but not much. What's evident is:

* V8 requires just over half the instructions that TM does

* V8 manages to do the whole thing in fixed point, no FP, which is not the case with TM. TM does almost 29 million FP loads/stores, V8 does about 34 thousand.

* Both V8 and TM comfortably keep the data in D1 (essentially zero D1 miss rate), but only V8 manages to keep the insns in I1 (268k I1 misses vs almost 42 million for TM). I'm sure this has nothing much to do with Nanojit -- it must be a higher level phenomenon to do with jstracer.

--------------
#### v8 ####
--------------
user times: 0.308 0.332 0.360

==24316== I refs:        1,669,315,545
==24316== I1 misses:     268,223
==24316== L2i misses:    8,569
==24316== I1 miss rate:  0.01%
==24316== L2i miss rate: 0.00%
==24316==
==24316== D refs:        713,605,023 (500,682,881 rd + 212,922,142 wr)
==24316== D1 misses:     148,545 (88,461 rd + 60,084 wr)
==24316== L2d misses:    40,063 (6,893 rd + 33,170 wr)
==24316== D1 miss rate:  0.0% (0.0% + 0.0%)
==24316== L2d miss rate: 0.0% (0.0% + 0.0%)
==24316==
==24316== L2 refs:       416,768 (356,684 rd + 60,084 wr)
==24316== L2 misses:     48,632 (15,462 rd + 33,170 wr)
==24316== L2 miss rate:  0.0% (0.0% + 0.0%)
==24316==
==24316== Branches:      346,592,089 (343,912,720 cond + 2,679,369 ind)
==24316== Mispredicts:   9,462,357 (8,114,948 cond + 1,347,409 ind)
==24316== Mispred rate:  2.7% (2.3% + 50.2%)

==24435== IR-level counts by type:
==24435== Type        Loads          Stores         AluOps
==24435== -------------------------------------------
==24435== I1          0              0              688,484,494
==24435== I8          10,831,622     612,699        135,873,871
==24435== I16         16,250         20,675         140,534
==24435== I32         489,816,561    212,667,488    1,496,876,771
==24435== I64         1,340          2,040          6,550
==24435== I128        0              0              0
==24435== F32         870            6              6
==24435== F64         15,241         18,652         44,521
==24435== V128        0              0              0

--------------
JSSHELL ORIGINAL
--------------
user times: 0.864 0.872 0.888

==21158== I refs:        3,584,825,170
==21158== I1 misses:     41,953,492
==21158== L2i misses:    5,672
==21158== I1 miss rate:  1.17%
==21158== L2i miss rate: 0.00%
==21158==
==21158== D refs:        1,670,310,720 (1,122,641,178 rd + 547,669,542 wr)
==21158== D1 misses:     158,251 (132,178 rd + 26,073 wr)
==21158== L2d misses:    36,050 (17,362 rd + 18,688 wr)
==21158== D1 miss rate:  0.0% (0.0% + 0.0%)
==21158== L2d miss rate: 0.0% (0.0% + 0.0%)
==21158==
==21158== L2 refs:       42,111,743 (42,085,670 rd + 26,073 wr)
==21158== L2 misses:     41,722 (23,034 rd + 18,688 wr)
==21158== L2 miss rate:  0.0% (0.0% + 0.0%)
==21158==
==21158== Branches:      542,647,807 (542,522,855 cond + 124,952 ind)
==21158== Mispredicts:   11,744,745 (11,713,792 cond + 30,953 ind)
==21158== Mispred rate:  2.1% (2.1% + 24.7%)

==25013== IR-level counts by type:
==25013== Type        Loads          Stores         AluOps
==25013== -------------------------------------------
==25013== I1          0              0              1,234,299,003
==25013== I8          1,630,040      591,149        131,858,369
==25013== I16         6,766,107      4,364,883      4,385,942
==25013== I32         1,081,810,453  491,730,952    2,599,907,232
==25013== I64         30,245,994     25,237,135     4,287
==25013== I128        0              0              0
==25013== F32         14             0              0
==25013== F64         2,176,798      26,686,976     94,042,506
==25013== V128        0              0              2,140,908
(In reply to comment #1)
>.

Ignore these numbers, they are bogus. AFAIK all other numbers in this report are legit, though.
I believe the slowness is at least in part due to inadequate removal of redundant guards in the LIR of the innermost loop. That is:

 1  Scheduler.prototype.schedule = function () {
 2    this.currentTcb = this.list;
 3    while (this.currentTcb != null) {
 4      if (this.currentTcb.isHeldOrSuspended()) {
 5        this.currentTcb = this.currentTcb.link;
 6      } else {
 7        this.currentId = this.currentTcb.id;
 8        this.currentTcb = this.currentTcb.run();
 9      }
10    }
11  };

The hottest trace begins at the start of the while loop and runs through the then-branch of the if. It contains 48 guards. Examination of these shows that at least one of the guards to do with the access of .currentTcb at the line marked 3 is redundant w.r.t guards which appear earlier in the trace, which have to do with the access of .currentTcb at the line marked 2. Am investigating why these didn't get CSEd and removed. I think all the hot traces involve this loop, so perhaps they are all similarly afflicted.
Julian, this is great. We've known of redundant guard problems since last year; it will be huge to eliminate them. Hope it's straightforward once diagnosed.

/be
In the traces I looked at, the redundant guards were not being removed because their guard expressions were not CSEd together, due to containing loads. It appears that LIR offers a way to say a load is CSEable, and NJ knows what to do with these, but jstracer.cpp almost completely fails to mark any loads it generates as CSEable. grep LIR_ldc jstracer.cpp shows just one instance.

I got a quick upper bound on how much better we could do if loads generated by jstracer were properly annotated. There are 43 places in jstracer where loads are formed. I did a binary search, switching them individually between cse-able and non-cse-able, to find the largest set that could be marked cse-able and still have trace-tests.js work properly, and found that only 5 places needed to be marked non-cse-able. Of course this is a complete hack and is unsafe, but it does give an upper bound.

Results are modest. In the hottest trace, the number of guards is reduced from 45 to 41. Overall run time is reduced by around 8%, and the instruction count is reduced by about 6%, from 3.584 billion to 3.370 billion. So not the factor 2 we need, but a start.

One interesting thing is that if I make it treat all loads as cse-able, then the number of guards in the hottest trace falls from 45 to 26. Of course it then doesn't work, but it would be interesting to look at why so many more guards vanish.
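To see why unmarked loads block guard removal, here is a toy value-numbering pass in the spirit of NanoJIT's CseFilter (purely illustrative; the real filter differs): identical loads only get the same value number — which is what lets the guard expressions built on them be commoned up — when they are flagged CSE-able.

```python
# Toy common-subexpression elimination over a linear IR.
# Each instruction is a tuple (op, operands...). A plain 'ld' load is
# never commoned; an 'ldc' (CSE-able load) participates like a pure op.

def cse(instrs):
    table = {}   # expression -> value number of first occurrence
    out = []     # value number assigned to each instruction
    for ins in instrs:
        if ins[0] == 'ld':
            # non-CSE-able load: always a fresh value number
            out.append(len(out))
            continue
        if ins in table:
            out.append(table[ins])   # reuse the earlier value number
        else:
            vn = len(out)
            table[ins] = vn
            out.append(vn)
    return out
```

Two identical `ldc` loads collapse to one value number, so a guard expression built on the second is recognizably identical to the first; two identical plain `ld` loads never do, which matches the behaviour described in this comment.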
I extended the profiling patch in bug 503424 to count the number of static exits and static instruction bytes in each fragment, in order to study the effects of messing with CSE & guard removal. The results are astonishing (or at least, I was astonished). It seems like the tracer has gone wild on this benchmark. It generates 34 traces, the longest of which has no less than 860 exits in it. The total amount of code generated is 92kb, so it's no surprise that cachegrind reports serious icache thrashing on a Core 2 with a 32K I1 (41.9 million I1 misses, vs a mere 268k I1 misses for v8, as per comment #4).

I wonder if there's a way to quantify the extent to which the tracer is generating multiple overlapping traces. I'm guessing here, but 92k of code with 3661 static exits feels excessive for this small benchmark, and I suspect rampant trace duplication.

The longest fragment, FragID=000034, begins at the top level loop that calls runRichards() multiple times, although where it goes after that I don't know. It seems reasonable (desirable, even) to compile this whole benchmark into one huge trace. But it doesn't seem desirable to compile it into a whole set of mostly mutually redundant overlapping traces, if that's what is happening.

Profile follows. "se" is static exits, "sib" is static instruction bytes.
----------------- Per-fragment execution counts ------------------
Total count = 11836222

 0: ( 31.55%  3734848)  31.55%  3734848  (se  45, sib  1000)  FragID=000002
 1: ( 50.99%  6035396)  19.44%  2300548  (se 149, sib  3304)  FragID=000003
 2: ( 63.55%  7521844)  12.56%  1486448  (se 181, sib  4091)  FragID=000010
 3: ( 71.98%  8519342)   8.43%   997498  (se 118, sib  2847)  FragID=000016
 4: ( 76.35%  9037339)   4.38%   517997  (se 218, sib  5030)  FragID=000004
 5: ( 79.31%  9387336)   2.96%   349997  (se  92, sib  2360)  FragID=000023
 6: ( 82.08%  9714934)   2.77%   327598  (se  27, sib   807)  FragID=000008
 7: ( 84.82% 10039380)   2.74%   324446  (se 185, sib  4274)  FragID=000019
 8: ( 87.55% 10363128)   2.74%   323748  (se  52, sib  1294)  FragID=000017
 9: ( 90.04% 10657476)   2.49%   294348  (se  67, sib  1700)  FragID=000018
10: ( 92.08% 10899324)   2.04%   241848  (se  52, sib  1261)  FragID=000012
11: ( 93.73% 11093922)   1.64%   194598  (se  23, sib   574)  FragID=000011
12: ( 95.20% 11267869)   1.47%   173947  (se  73, sib  1861)  FragID=000025
13: ( 95.89% 11349767)   0.69%    81898  (se  54, sib  1324)  FragID=000013
14: ( 96.58% 11431315)   0.69%    81548  (se 128, sib  3315)  FragID=000014
15: ( 97.27% 11512513)   0.69%    81198  (se 117, sib  2864)  FragID=000020
16: ( 97.71% 11565711)   0.45%    53198  (se  52, sib  1297)  FragID=000021
17: ( 98.06% 11606658)   0.35%    40947  (se 119, sib  3150)  FragID=000024
18: ( 98.33% 11638856)   0.27%    32198  (se  52, sib  1297)  FragID=000022
19: ( 98.59% 11668954)   0.25%    30098  (se  67, sib  1658)  FragID=000009
20: ( 98.84% 11699050)   0.25%    30096  (se  78, sib  2074)  FragID=000028
21: ( 99.08% 11727048)   0.24%    27998  (se  89, sib  2186)  FragID=000026
22: ( 99.31% 11755046)   0.24%    27998  (se 105, sib  2721)  FragID=000029
23: ( 99.49% 11776044)   0.18%    20998  (se  52, sib  1351)  FragID=000027
24: ( 99.64% 11793541)   0.15%    17497  (se  61, sib  1596)  FragID=000031
25: ( 99.75% 11806139)   0.11%    12598  (se   7, sib   107)  FragID=000005
26: ( 99.85% 11818736)   0.11%    12597  (se 136, sib  3399)  FragID=000015
27: ( 99.96% 11830984)   0.10%    12248  (se  21, sib   700)  FragID=000030
28: ( 99.99% 11834482)   0.03%     3498  (se 132, sib  3253)  FragID=000032
29: ( 99.99% 11835179)   0.01%      697  (se  64, sib  1699)  FragID=000006
30: ( 99.99% 11835527)   0.00%      348  (se 128, sib  3385)  FragID=000007
31: (100.00% 11835875)   0.00%      348  (se  57, sib  1544)  FragID=000033
32: (100.00% 11836222)   0.00%      347  (se 860, sib 23196)  FragID=000034
33: (100.00% 11836222)   0.00%        0  (se   0, sib     0)  FragID=000001

Total code bytes = 92519, total static exits = 3661
Sounds like overspecialization. Look at the individual side exits where the tree splits up to see what we specialized and why its so variant (maybe object shape?).
Investigating further using enhanced exit-profiling patch in bug 503424 comment #14. One thing that's immediately obvious is that for the top 10 hot frags (accounting for 90% of the total frag count), most runs never make it to the trace end -- that's definitely a minority. Cursory examination suggests traces are exiting early due to control flow in the app, not due to shape or type problems.
#11: This is pretty good news. We can actually do something about excessive tail duplication due to control flow. Ed had a mechanism in tamarin tracing to estimate tail duplication and when a trace gets too branchy, it just stops early (inside the loop) and we start a new trace. That new trace is then targeted by all future traces that reach the same point in the program. Essentially we merge traces back together, at the expense of some memory traffic (take a side exit, and immediately reload the state). This inhibits tree compilation and loop-based optimization, but it beats getting annihilated by too many traces. Implementing this would also greatly speed up 3d-cube.
Poking around with the profiler on a couple of other tests (trace-test, v8-crypto) it looks like it is very common for hot traces not to make it to the end. Some do, but many don't.
.
> Implementing this would also greatly speed up 3d-cube.

A quick spin with the profiler doesn't support the notion that 3d-cube is suffering from heavily duplicated or overlong traces. Although I'm not sure if I used the right workload.
(In reply to comment #14)
> (In reply to comment #12)
> > Ed had a mechanism in tamarin tracing to
> > estimate tail duplication and when a trace gets too branchy,
>
> Got any further details on this? (pointers to code or comments?)

the general sketch: insert special opcodes into the interpreted bytecode stream which mark merge nodes in the original control flow graph. when tracing, limit the # of merge nodes traversed on a single trace. this would let the tracer do some initial tail duplication but eventually it would stop. by stopping, traces could begin on a CFG merge node and all other traces leading there would transfer to the single trace starting there. if the limit was set to 1, then trace trees == superblocks, and there would be no tail duplication.

implementation details are still visible in tamarin-tracing. It did help control code duplication but we never came up with a good way to set the limit -- any given setting was only good for a narrow range of tests, and we didn't explore more adaptive ways to choose a trace length limit.

> > Implementing this would also greatly speed up 3d-cube.
>
> A quick spin with the profiler doesn't support the notion that 3d-cube
> is suffering from heavily duplicated or overlong traces. Although I'm
> not sure if I used the right workload.

last time I looked, s3d-cube suffered more from code bloat due to excessive inlining than it did from tail duplication. its been a little while so obviously YMMV.
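The merge-node limit described above could be sketched roughly as follows (all names and the limit value are invented for illustration; tamarin-tracing's actual implementation differs): the recorder counts CFG merge markers crossed and ends the trace early once a budget is exhausted, so later traces can re-join at that merge point instead of duplicating the tail.

```python
# Sketch: stop recording a trace after crossing MERGE_LIMIT CFG merge
# nodes, so traces re-join at merge points instead of tail-duplicating.

MERGE_LIMIT = 4  # invented tuning knob; per comment 16, no single value
                 # worked well across a wide range of tests

def record_trace(bytecode, start, merge_limit=MERGE_LIMIT):
    """Record ops from `start` until the loop closes or the merge-node
    budget runs out. Returns (trace, end_reason)."""
    trace, merges = [], 0
    pc = start
    while True:
        op = bytecode[pc]
        if op == 'MERGE':          # marker inserted at CFG merge nodes
            merges += 1
            if merges > merge_limit:
                # end the trace here; a new trace anchored at this
                # merge node absorbs all paths that reach it
                return trace, 'merge-limit'
        trace.append(op)
        pc += 1
        if pc == len(bytecode):    # treat end of stream as the back edge
            return trace, 'loop'
```

With `merge_limit=1` this degenerates to superblock formation with no tail duplication, matching the remark above that trace trees become superblocks.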
(In reply to comment #14)
> .

This is interesting, but I'm not sure I understand correctly. Can you quantify this in more detail? Here is an example of what I'm getting at:

Apparently for these 10 traces, 27% of entries to those traces reach the end. (Btw, what does "reach the end" mean? Exit via LOOP_EXIT?) So presumably this means the native code at the end of the trace is executed 27% of the time..

So we run 2x as many x86 instructions? Can you measure our overhead (recording, compilation, and trace switching costs)?
.

---- jsshell ----
3,583,157,156  PROGRAM TOTAL
3,103,031,257  ???:??? (generated code)
  223,682,650  jsbuiltins.cpp:js_UnboxDouble(int)
   79,213,596  jsbuiltins.cpp:js_BoxDouble(JSContext*,
   68,058,108  jsbuiltins.cpp:js_UnboxInt32(int)
   36,869,987  jsbuiltins.cpp:js_BoxInt32(JSContext*,
    9,384,010  jsarray.cpp:js_Array_dense_setelem(JSCo
    3,415,448  jstracer.cpp:bool VisitFrameSlots<Deter
    3,158,402  nanojit/Assembler.cpp:nanojit::Assemble

---- v8 ----
1,669,313,816  PROGRAM TOTALS
1,613,301,031  ???:??? (generated code)
    7,058,265  ???:v8::internal::Scanner::ScanIdentifi
    4,817,178  ???:v8::internal::Scanner::Scan()
    3,977,770  ???:strcmp
    3,072,383  ???:v8::internal::String::ComputeLength
    3,059,655  ???:unibrow::Utf8::ReadBlock(unibrow::B
    1,926,402  ???:v8::internal::Utf8SymbolKey::IsMatc
    1,331,737  ???:v8::internal::Runtime::FunctionForN
    1,273,022  ???:v8::internal::Token::Lookup(char co
    1,207,470  ???:v8::internal::Scanner::Next()
    1,097,319  ???:unibrow::CharacterStream::Length()
Numbers in comment #17 are insn counts, btw.
(In reply to comment #17)
> )

Sounds right to me. For some reason I also find this confusing to think about. I suppose if we had basic block profiling it would be easier to understand sub-trace execution counts. :-)

> >.

That is a very good question and well worth revisiting. AIUI, HOTLOOP=2 was chosen by empirical tuning. I think that for the initial tuning set (mainly SunSpider), most code runs either once or 1000+ times. That makes 2 the ideal choice because then we start running natively ASAP on code that will pay off.

> >.

OK. Your stats [snipped] are clear enough--it's definitely something with the generated code. We could probably estimate trace switching costs as the number of trace entries times the estimated cost of jumping from one trace to another. I think it's 10-15 instructions or so. If the traces are long, it's probably not that important, unless we tend to exit them *really* early.
All this stuff about overlong traces is, I believe, secondary. Current front-runner theory (with much help from jorendorff) is that property accesses are poorly optimized. v8-richards has lots of accesses of the form foo.bar.xyzzy, and many of these are duplicated along the traces. For example: (around line 304):

TaskControlBlock.prototype.isHeldOrSuspended = function () {
  return (this.state & STATE_HELD) != 0 || (this.state == STATE_SUSPENDED);
};

produces a trace where the left side of the || is false, and so goes to the right side:

00000: 304 getthisprop "state"
00003: 304 name "STATE_HELD"
00006: 304 bitand
00007: 304 zero
00008: 304 ne
00009: 304 or 19 (10)
00012: 304 getthisprop "state"
00015: 304 name "STATE_SUSPENDED"
00018: 304 eq
00019: 304 return
00020: 304 stop

The first getthisprop "state" produces 5 guards, fine, ok:

xt6: xt guard(class is With) -> pc=0x81f2048 imacpc=(nil) sp+24 rp+4
xt7: xt eq9 -> pc=0x81f2048 imacpc=(nil) sp+24 rp+4
xf9: xf guard(native-map) -> pc=0x81f2048 imacpc=(nil) sp+24 rp+4
xf10: xf guard_kshape -> pc=0x81f2048 imacpc=(nil) sp+24 rp+4
xt8: xt eq11 -> pc=0x81f2048 imacpc=(nil) sp+24 rp+4

but the second one produces 4, which strikes me as grossly redundant:

xt10: xt guard(class is With) -> pc=0x81f2054 imacpc=(nil) sp+24 rp+4
xf11: xf guard(native-map) -> pc=0x81f2054 imacpc=(nil) sp+24 rp+4
xf12: xf guard_kshape -> pc=0x81f2054 imacpc=(nil) sp+24 rp+4
xt11: xt eq16 -> pc=0x81f2054 imacpc=(nil) sp+24 rp+4

Examination of the controlling expressions for xf9 and xf11 shows they are identical, so xf11 should be removable. NJ can nearly do that, but fails because they both involve three memory loads, and so it fails to CSE those and hence fails to identify the controlling expressions overall as identical. To fix this properly would require marking those loads as non-aliasing any intervening stores or helper calls, of which there are both in this example. NJ can then common them up.
Required actions:

- (definitely) look in jstracer where these loads are made, and mark them ldc (cse-able). This of course requires justifying that they are cse-able.

- (probably) make LIR's notion of aliasing more flexible, so that we can express the facts like: this store writes the stack, but that load reads the heap, so the store doesn't invalidate CSEs involving the load. Redo CseFilter to handle this. Also add such alias-set annotations to helper calls.

So if that works out it'd help remove one guard, at least. Whether it will help remove others, I don't know. There are lots of other similar examples. For example, the main loop of this program is:

while (this.currentTcb != null) {
  if (this.currentTcb.isHeldOrSuspended()) {
    this.currentTcb = this.currentTcb.link;
  } else {
    this.currentId = this.currentTcb.id;
    this.currentTcb = this.currentTcb.run();
  }
}

Lots of duplicated poking of this, this.currentTcb, et al, within traces. Feels like there should be quite some mileage to be had here.

----------

Jason also mentioned something to the effect that the "class is With" tests are redundant, or mutually redundant, or constant, or some such. Unfortunately I didn't understand the details, or how it's proposed to detect and exploit such redundancy. Could you please clarify?
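The alias-set idea in the second bullet might look something like this toy CSE filter (a sketch under stated assumptions; NanoJIT's real CseFilter and any eventual alias annotations would differ): each load carries a coarse alias class, and a store only invalidates remembered loads of its own class, so a stack store no longer kills CSE of heap loads like obj->map.

```python
# Sketch: CSE of loads with coarse alias classes. A store conservatively
# kills only remembered loads in its own class, so e.g. writes to the
# native stack leave heap-field loads CSE-able across them.

class AliasCSE:
    def __init__(self):
        self.known = {}   # (alias_class, address) -> value number
        self.next_vn = 0

    def load(self, alias_class, addr):
        key = (alias_class, addr)
        if key not in self.known:
            self.known[key] = self.next_vn   # fresh value number
            self.next_vn += 1
        return self.known[key]

    def store(self, alias_class, addr):
        # forget every remembered load in this alias class only
        self.known = {k: v for k, v in self.known.items()
                      if k[0] != alias_class}

f = AliasCSE()
v1 = f.load('heap', 'obj.map')
f.store('stack', 'sp[8]')         # unrelated store: heap load survives
v2 = f.load('heap', 'obj.map')    # same value number -> guard is CSE-able
f.store('heap', 'obj.map')        # same class: must forget
v3 = f.load('heap', 'obj.map')    # fresh value number, reloaded
```

With the coarse split, the guard expression built on the second heap load is recognizably identical to the first even across the intervening stack store.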
(minor additional comment): Also in a getprop translation (not a getthisprop), I saw the code below, which amounts to if (ld5 == 0) leave the trace if ((ld5 & 7) != 0) leave the trace I set GNU superopt the task of finding a short branch free sequence that computes x == 0 || (x & 7) != 0, but so far it only found sequences using the x86 carry flag, which aren't expressible in LIR. Hence no good. About to try emitting guard code for SideExit=0xf7f193e4 exitType=BRANCH ld4 = ld cmov2[28] ld5 = ld ld4[4] eq4 = eq ld5, 0 xt3: xt eq4 -> pc=0x81f0d0d imacpc=(nil) sp+8 rp+0 (GuardID=006) About to try emitting guard code for SideExit=0xf7f193e4 exitType=BRANCH JSVAL_TAGMASK and2 = and ld5, JSVAL_TAGMASK eq5 = eq and2, 0 xf4: xf eq5 -> pc=0x81f0d0d imacpc=(nil) sp+8 rp+0 (GuardID=007) I _think_ that's generated by guard(false, lir->ins_eq0(v_ins), exit); guard(true, lir->ins2i(LIR_eq, lir->ins2(LIR_piand, v_ins, INS_CONSTWORD(JSVAL_TAGMASK)), JSVAL_OBJECT), exit); in TraceRecorder::unbox_jsval.
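A flag-free combined form does exist, though. The sketch below is an editorial suggestion (tested in JS, which has 32-bit semantics for bitwise ops), not superopt output: it folds both guards into one zero test using only neg/or/shr/xor/and, all expressible in LIR.

```javascript
// Leave the trace iff x == 0 || (x & 7) != 0; equivalently, stay iff
// x is a nonzero word with the low 3 tag bits clear.
// Branchy form of the two guards:
function leaveBranchy(x) {
  return (x >>> 0) === 0 || (x & 7) !== 0;
}
// Flag-free form: (x | -x) has its sign bit set iff x != 0 (mod 2^32),
// so ((x | -x) >>> 31) ^ 1 normalizes "x == 0" without a carry flag.
function leaveBranchFree(x) {
  return ((((x | -x) >>> 31) ^ 1) | (x & 7)) !== 0;
}

var tests = [0, 1, 5, 7, 8, 9, 16, 24, 4096,
             0x7ffffff8, 0x7ffffff9, 0x80000000, 0x80000008,
             0xfffffff8, 0xffffffff];
for (var i = 0; i < tests.length; i++) {
  if (leaveBranchy(tests[i]) !== leaveBranchFree(tests[i]))
    throw new Error("mismatch at " + tests[i]);
}
console.log("branchy and branch-free forms agree");
```

That is five ALU ops plus one guard, versus two compares, an or, and (as emitted today) two separate guards.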
Sigh, this was a forgotten action item from early tracemonkey daze: get CSE going for shape guards. I remember talking about it with Andreas but I failed to file a bug. Sorry about that. We should be able to use ldc instructions for things like loading obj->map and then obj->map (cast to JSScope*) -> shape, for a given obj. Jason, thoughts? Comment 21 suggests a better jsval tagging scheme (bug 360324), to separate null from object tag (we already distinguish the trace types). I'll update bug 360324. /be
I would suggest solving this at a slightly higher level, since shape guards are not merely redundant when they are common-able. Consider the following sequence of operations:

x.a = 1
x.b = 2
x.c = 3

The shape evolves, and each time we emit a guard for a different shape. That can't be CSE'd. However, the guards are still redundant, because traces are a linear sequence of instructions: you can't get to x.b = 2 without going past x.a = 1, which already contains a guard. The shape evolves in a predictable (constant) fashion. My suggested solution is a set of objects that have been shape-guarded along this trace (or a bitset in parallel to the tracker). It's not as high-level and pretty as solving this in LIR, but it might be easier and more effective. Along branches it would be nice if this information could be communicated to attaching traces, but I guess that is not super important. It will lead to a couple more redundant guards at every branch. At least we don't get tons of redundant guards on the same trace, though. Let's just hack it up and see how much things improve with the simple per-trace fix. I bet this also helps SunSpider here and there.
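The per-trace set idea can be sketched as a toy model (the names guardShape and obj_ins here are illustrative stand-ins, not the real recorder API):

```javascript
// Toy model: remember which definitions have already been shape-guarded
// on this trace, and only emit a guard on first sight.
function makeRecorder() {
  var guarded = new Set();
  var guards = [];
  return {
    guardShape: function (objIns, shape) {
      if (guarded.has(objIns))
        return;                         // redundant along this linear trace
      guarded.add(objIns);
      guards.push({ objIns: objIns, shape: shape });
    },
    forgetGuardedShapes: function () {  // e.g. on an unpredictable reshape
      guarded.clear();
    },
    guards: guards
  };
}

var r = makeRecorder();
r.guardShape("obj_ins_1", 42);
r.guardShape("obj_ins_1", 42); // elided
r.guardShape("obj_ins_2", 42);
r.forgetGuardedShapes();
r.guardShape("obj_ins_1", 43); // re-guarded after the purge
console.log(r.guards.length);  // 3
```

Keying the set on the definition (the LIR value) rather than the runtime object, and purging it when shapes can change unpredictably, are the two subtleties the rest of this thread works through.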
I don't know that the last part is sound yet. Jason, your brainwaves are wanted. /be
This patch speeds up richards for me nicely, from 1212 (best of three runs without the patch) to 1409. But for some reason I haven't had time to find, the patch slows down v8 raytrace from 240 to 213 (approximately: best-of-three again). Julian, would you have time to measure the detailed difference this makes? Thanks. /be
(In reply to comment #25) > This patch speeds up richards for me nicely, from 1212 (best of three runs > without the patch) to 1409. Nice! Unfortunately causes 2 regressions on trace-tests: testComparisons,testMethodInitSafety I'd be willing to bet it's due to the use of ldcs. I was playing around last night with ldc-ifying loads associated with offsetof(JSScope, shape) and offsetof(JSObject, map) and got precisely the same two failures. I think that backing off on the latter ones made the failure go away. Investigating.
(In reply to comment #26) I spoke too soon re the ldcs, unfortunately: * 31064:d2362511eaa9 (tip as of just now) runs trace-tests, no fails * with Brendan's patch, 2 failures: testComparisons,testMethodInitSafety * changing the 4 x ldcp back to ldp in the patch does not help
> We should be able to use ldc instructions for things like loading obj->map > and then obj->map (cast to JSScope*) -> shape, for a given obj. Jason, > thoughts? ldc-ifying a field load is OK if the field is const. ldc-ifying obj->map or map->shape (or JSObjectMap::shape as brendan's patch has it) would definitely be bad. Both can change! (If there are no intervening calls/stores that we can't account for) But all the ldc-ification in Brendan's actual patch looks ok ...though this one is a little precarious: guard(HAS_FUNCTION_CLASS(JSVAL_TO_OBJECT(v)), lir->ins2(LIR_eq, lir->ins2(LIR_piand, - lir->insLoad(LIR_ldp, v_ins, offsetof(JSObject, classword)), + lir->insLoad(LIR_ldcp, v_ins, offsetof(JSObject, classword)), INS_CONSTWORD(~JSSLOT_CLASS_MASK_BITS)), INS_CONSTPTR(&js_FunctionClass)), This one is safe not because the classword is actually constant over the lifetime of all objects--for arrays, it isn't--but because we never mutate functions into nonfunctions or vice versa. Tricksy. That deserves a comment maybe. > Nice! Unfortunately causes 2 regressions on trace-tests: > testComparisons,testMethodInitSafety > > I'd be willing to bet it's due to the use of ldcs. I just wrote "I'll take that bet!" and mid-aired with your retraction. Rats. :)
Comment on attachment 393245 [details] [diff] [review] patch to remove obj-is-native tests/guards, use ldc*, and minimize shape guard to once per object per trace > >-static const JSObjectMap SharedArrayMap = { &js_ArrayObjectOps }; >+static const JSObjectMap SharedArrayMap = { &js_ArrayObjectOps, JSObjectMap::SHAPELESS }; > Native array objects. Nice. >+bool >+TraceRecorder::alreadyGuardedShape(JSObject* obj) >+{ >+ if (!guardedShapeTable.ops) { >+ JS_DHashTableInit(&guardedShapeTable, JS_DHashGetStubOps(), NULL, >+ sizeof(JSDHashEntryStub), JS_DHASH_MIN_SIZE); >+ } >+ >+ JSDHashEntryStub* stub = (JSDHashEntryStub*) >+ JS_DHashTableOperate(&guardedShapeTable, obj, JS_DHASH_ADD); >+ if (!stub) >+ return false; >+ >+ if (!stub->key) { >+ stub->key = obj; >+ return false; >+ } >+ JS_ASSERT(stub->key == obj); >+ return true; >+} This isn't safe. Imagine we are tracing a function f(a, b) { a.x = b.x = 1; } and at recording time its called with a == b == obj. We would emit a guard for a, but not for b. Next time we might call with a != b and then miss the shape guard for b (since we don't enforce/guard that a and b must alias). What you should check instead is that the definition of a (tracker.get(a)) has been shape-guarded on. That's always safe. > map_ins = map(obj2_ins); >- LIns* ops_ins; >- if (!map_is_native(obj2->map, map_ins, ops_ins)) >- ABORT_TRACE("non-native map"); >- >- LIns* shape_ins = addName(lir->insLoad(LIR_ld, map_ins, offsetof(JSScope, shape)), >+ LIns* shape_ins = addName(lir->insLoad(LIR_ld, map_ins, offsetof(JSObjectMap, shape)), > "obj2_shape"); This is a really nice simplification. Does this improve our SS score? There is a lot of array access in there.
Thanks, Andreas -- I should have known better, was in a hurry. Indeed still have not had time to measure this patch's effect on SS or other benchmarketing scores. This rev does fix the raytrace regression, and still wins big on richards (1744 new, 1445 old best of 3). Help wanted on perf-testing and analysis, as before. /be
Attachment #393245 - Attachment is obsolete: true
Did someone say perf analysis? Adding Gregor :)
Comment on attachment 393317 [details] [diff] [review] remove obj-is-native tests/guards, use ldc*, and minimize shape guard to once per obj_ins per trace Looks good. Does this pass tryserver?
Fails testMethodInitSafety in trace-test.js -- digging into it as I am able, not able to spend much time on it tonight, seem to have some kind of food poisoning... /be
> What you should check instead is that the definition of a (tracker.get(a)) has > been shape-guarded on. That's always safe. It's not always safe, because objects can change shape! Calling out from trace to most builtins, probably including everything that uses bailExit, must invalidate all preceding shape guards.
I'd love to review a patch with just ldc and JSObjectMap::shape. Those seem like they are probably ready to land (though I haven't thought through the consequences of moving shape--at least we're losing some assertion coverage there). Eliminating redundant guards will take a while.
Jason, nothing is safe in the presence of shape-modifying builtins. Not even elimination of redundant ldc loads, since the shape mutation might occur invisibly in the builtin. So you might fail a guard because you CSE'd the load (ldc), even though it's different now (and expected to be different). We should do Brendan's patch but purge as needed.
(In reply to comment #36) > (ldc), even though its different now (and expected to be different). We should > do Brendan's patch but purge as needed. IIUC you mean we should do something like Brendan's patch, and emit ldcs, but at a (potentially) shape modifying builtin, we need to invalidate all from-memory CSEs. Right? In that case I don't think we have a way to express that in LIR. LIR only knows "this load aliases no stores (including stores done by helper calls)" or "this load aliases all stores (...)".
No, I don't think that's what comment 36 means to say. Brendan's patch uses CSEable loads in the appropriate cases, i.e. where the field never changes, so a CSE load actually fits Nanojit's model. (In reply to comment #36) > Jason, nothing is safe in the presence of shape-modifying builtins. Not even > elimination of redundant ldc loads since the shape mutation might occur > invisible in the builtin. Well, yeah, using LIR_ldc to load JSScope::shape is a total non-starter. But it is safe to use it for those other fields, right? > So you might fail a guard because you CSEed the load > (ldc), even though its different now (and expected to be different). Heh. That too. > We should do Brendan's patch but purge as needed.? /be
(In reply to comment #39) > ? Strike "Instead", we need what Jason said. The testMethodInitSafety failure is due to a (currently; see bug 497789 soon -- I hope -- for less random method evolution shaping) random shape generated due to method call scope branding[1]. If this happens during recording, we should purge the guardedShapeTable entries for the reshaped object. If it happens on-trace then we should bail off trace. /be [1] JS lacks classes with methods, but when you call a function-valued property for the first time on a given receiver, the receiver's property map is branded in a cheap-ish way to approximate inferred nominal typing.
Maybe I should spin this out into a separate bug. Happy to do so. I could also split it up, but I'm trying to go forward in a straight line (which carries risk, for sure). /be
Attachment #393317 - Attachment is obsolete: true
Attachment #393693 - Flags: review?(jorendorff)
(In reply to comment #28) > This one is safe not because the classword is actually constant over the > lifetime of all objects--for arrays, it isn't--but because we never mutate > functions into nonfunctions or vice versa. I didn't comment on this (the context shows the array class exclusion), but I will do it now. /be
(In reply to comment #41) > Created an attachment (id=393693) [details] > patch, v3 Cool. Runs trace-tests ok. Gives a fairly reliable 8%-9% reduction in run time for a less than 6% reduction in insn count, possibly due to 23% reduction in I1 misses. Other figures (data refs, etc) are unsurprising. cpu(s) insns(M) I1miss(M) Before: 0.81 3581 30.52 After: 0.74 3381 23.61 Cpu times averaged over 10 runs. Further details to follow, incl overall sunspider measurements.
Attachment #393693 - Attachment is obsolete: true
Attachment #393834 - Flags: review?(jorendorff)
Attachment #393693 - Flags: review?(jorendorff)
(In reply to comment #43)

SunSpider, averaged over 500 runs, shows no overall gain; it's a mixed bag, with changes in the -3% to +3% range; overall a 0.4% lossage. I need to peer at traces & trace profiles for selected slowdowns, e.g. string/unpack-code. However, bug 509648 is atm blocking performance analysis, so that's the current priority.

TEST                   COMPARISON            FROM                 TO
=============================================================================
** TOTAL **:           ??                1000.1ms +/- 0.4%   1004.1ms +/- 0.4%
=============================================================================

  3d:                  -                  153.8ms +/- 0.6%    153.0ms +/- 0.6%
    cube:              ??                  42.0ms +/- 0.8%     42.3ms +/- 1.0%
    morph:             1.026x as fast      35.5ms +/- 0.8%     34.6ms +/- 0.7%
    raytrace:          -                   76.2ms +/- 0.6%     76.1ms +/- 0.6%

  access:              1.007x as fast     147.8ms +/- 0.5%    146.7ms +/- 0.5%
    binary-trees:      1.029x as fast      37.0ms +/- 0.7%     36.0ms +/- 0.6%
    fannkuch:          *1.007x as slow*    68.6ms +/- 0.5%     69.1ms +/- 0.5%
    nbody:             1.032x as fast      29.5ms +/- 0.7%     28.5ms +/- 0.7%
    nsieve:            *1.028x as slow*    12.7ms +/- 1.5%     13.1ms +/- 0.8%

  bitops:              1.031x as fast      40.9ms +/- 0.7%     39.7ms +/- 0.6%
    3bit-bits-in-byte: *1.046x as slow*     1.6ms +/- 2.9%      1.7ms +/- 3.0%
    bits-in-byte:      ??                   8.8ms +/- 0.8%      8.9ms +/- 0.8%
    bitwise-and:       ??                   2.5ms +/- 2.0%      2.5ms +/- 3.2%
    nsieve-bits:       1.053x as fast      28.0ms +/- 0.8%     26.6ms +/- 0.6%

  controlflow:         ??                  37.1ms +/- 0.4%     37.2ms +/- 0.6%
    recursive:         ??                  37.1ms +/- 0.4%     37.2ms +/- 0.6%

  crypto:              *1.016x as slow*    57.7ms +/- 0.5%     58.6ms +/- 0.7%
    aes:               *1.016x as slow*    31.8ms +/- 0.6%     32.3ms +/- 0.8%
    md5:               *1.019x as slow*    16.3ms +/- 0.5%     16.6ms +/- 0.8%
    sha1:              *1.012x as slow*     9.6ms +/- 0.7%      9.7ms +/- 1.0%

  date:                1.013x as fast     142.8ms +/- 0.4%    141.0ms +/- 0.5%
    format-tofte:      ??                  67.6ms +/- 0.5%     67.8ms +/- 0.6%
    format-xparb:      1.028x as fast      75.2ms +/- 0.5%     73.2ms +/- 0.6%

  math:                ??                  42.7ms +/- 0.7%     42.9ms +/- 0.7%
    cordic:            ??                  12.9ms +/- 1.4%     13.0ms +/- 1.0%
    partial-sums:      ??                  22.2ms +/- 0.6%     22.3ms +/- 0.7%
    spectral-norm:     ??                   7.6ms +/- 1.1%      7.6ms +/- 0.9%

  regexp:              ??                  37.9ms +/- 0.9%     38.4ms +/- 0.9%
    dna:               ??                  37.9ms +/- 0.9%     38.4ms +/- 0.9%

  string:              *1.021x as slow*   339.4ms +/- 0.5%    346.5ms +/- 0.5%
    base64:            *1.028x as slow*    18.0ms +/- 0.7%     18.5ms +/- 0.7%
    fasta:             *1.026x as slow*    66.6ms +/- 0.5%     68.4ms +/- 0.5%
    tagcloud:          *1.013x as slow*   107.8ms +/- 0.6%    109.2ms +/- 0.6%
    unpack-code:       *1.021x as slow*   116.0ms +/- 0.6%    118.4ms +/- 0.6%
    validate-input:    *1.028x as slow*    31.1ms +/- 1.5%     31.9ms +/- 0.8%
Happy to split this up if it helps, or give it away to someone else to split. Not sure why this isn't pure win. This revision dumps the guardedShapeTable from TraceRecorder::deepAbort, which would seem to be important for correctness. /be
Attachment #393834 - Attachment is obsolete: true
Attachment #394060 - Flags: review?(jorendorff)
Attachment #393834 - Flags: review?(jorendorff)
Ahem. /be
Attachment #394060 - Attachment is obsolete: true
Attachment #394137 - Flags: review?(jorendorff)
Attachment #394060 - Flags: review?(jorendorff)
Tryserver results look good -- please double-check me here (look for a0b08627fb50 at). /be
(In reply to comment #48) > Tryserver results look good -- please double-check me here lgtm
Sunspider results for patch v4 (comment #46): Sunspider v8 indicates clear wins for richards and crypto, minor improvements for earley-boyer and raytrace. Minor regression for deltablue -- investigating. Sunspider default set differences are practically at the noise level, but there are as many minor regressions as minor improvements. Most of these are listed as significant by the analysis program. Overall stated result is 1.003x as fast (fwiw).

------------------------- v8 set --- 100 runs -------------------------

TEST                   COMPARISON            FROM                 TO
=============================================================================
** TOTAL **:           -                13770.0ms +/- 0.3%  13720.0ms +/- 0.2%
=============================================================================

  v8:                  -                13770.0ms +/- 0.3%  13720.0ms +/- 0.2%
    crypto:            1.040x as fast     939.8ms +/- 1.5%    903.4ms +/- 1.6%
    deltablue:         *1.008x as slow*  8313.1ms +/- 0.4%   8382.1ms +/- 0.3%
    earley-boyer:      1.006x as fast    2009.5ms +/- 0.3%   1998.1ms +/- 0.4%
    raytrace:          1.005x as fast    1709.4ms +/- 0.3%   1700.7ms +/- 0.3%
    richards:          1.085x as fast     798.2ms +/- 0.2%    735.7ms +/- 0.1%

----------------------- default set --- 100 runs -----------------------

dflt 500 runs results:

TEST                   COMPARISON            FROM                 TO
=============================================================================
** TOTAL **:           1.003x as fast     959.3ms +/- 0.1%    956.2ms +/- 0.1%
=============================================================================

  3d:                  -                  148.9ms +/- 0.3%    148.8ms +/- 0.3%
    cube:              *1.017x as slow*    39.7ms +/- 0.4%     40.3ms +/- 0.5%
    morph:             *1.009x as slow*    35.3ms +/- 0.5%     35.6ms +/- 0.5%
    raytrace:          1.014x as fast      73.9ms +/- 0.5%     72.9ms +/- 0.3%

  access:              1.019x as fast     143.6ms +/- 0.2%    140.9ms +/- 0.2%
    binary-trees:      1.025x as fast      35.5ms +/- 0.5%     34.7ms +/- 0.4%
    fannkuch:          -                   66.9ms +/- 0.3%     66.8ms +/- 0.2%
    nbody:             1.067x as fast      29.2ms +/- 0.4%     27.4ms +/- 0.3%
    nsieve:            *1.013x as slow*    11.8ms +/- 0.4%     12.0ms +/- 0.5%

  bitops:              -                   39.9ms +/- 0.3%     39.8ms +/- 0.3%
    3bit-bits-in-byte: ??                   1.6ms +/- 2.7%      1.6ms +/- 2.7%
    bits-in-byte:      -                    8.5ms +/- 0.6%      8.5ms +/- 0.6%
    bitwise-and:       ??                   2.4ms +/- 1.8%      2.4ms +/- 2.0%
    nsieve-bits:       -                   27.4ms +/- 0.3%     27.3ms +/- 0.3%

  controlflow:         1.018x as fast      36.7ms +/- 0.3%     36.1ms +/- 0.3%
    recursive:         1.018x as fast      36.7ms +/- 0.3%     36.1ms +/- 0.3%

  crypto:              ??                  56.1ms +/- 0.4%     56.3ms +/- 0.7%
    aes:               ??                  30.9ms +/- 0.5%     31.0ms +/- 1.3%
    md5:               ??                  15.8ms +/- 0.5%     15.9ms +/- 0.4%
    sha1:              -                    9.4ms +/- 0.5%      9.3ms +/- 0.7%

  date:                1.020x as fast     139.7ms +/- 0.2%    137.0ms +/- 0.2%
    format-tofte:      1.012x as fast      66.9ms +/- 0.2%     66.1ms +/- 0.3%
    format-xparb:      1.028x as fast      72.8ms +/- 0.2%     70.8ms +/- 0.2%

  math:                -                   41.4ms +/- 0.3%     41.4ms +/- 0.3%
    cordic:            -                   12.4ms +/- 0.6%     12.4ms +/- 0.5%
    partial-sums:      -                   21.6ms +/- 0.3%     21.6ms +/- 0.3%
    spectral-norm:     -                    7.3ms +/- 0.6%      7.3ms +/- 0.7%

  regexp:              1.018x as fast      35.7ms +/- 0.2%     35.1ms +/- 0.2%
    dna:               1.018x as fast      35.7ms +/- 0.2%     35.1ms +/- 0.2%

  string:              *1.011x as slow*   317.4ms +/- 0.1%    320.9ms +/- 0.1%
    base64:            *1.022x as slow*    17.5ms +/- 0.5%     17.8ms +/- 0.4%
    fasta:             *1.009x as slow*    65.9ms +/- 0.1%     66.5ms +/- 0.2%
    tagcloud:          -                  103.3ms +/- 0.2%    103.3ms +/- 0.2%
    unpack-code:       *1.011x as slow*   100.6ms +/- 0.2%    101.7ms +/- 0.2%
    validate-input:    *1.050x as slow*    30.0ms +/- 0.2%     31.5ms +/- 0.3%
Trace-level comments for patch v4 (comment #46), for v8-richards: 34 traces (Fragments) created. Number of instruction bytes generated falls from 92461 to 82955 (good) Number of static exits generated falls from 3661 to 3069 (quite striking) The longest generated trace falls from 23715 bytes/860 exits to 19215 bytes/614 exits. This is all good. It's pretty obvious that the greatest improvements are on the longer traces, but that's as expected given the nature of the patch.
(In reply to comment #51) To reiterate comment #14, the trace completion rate stands at a measly 27%. This means that 73% of trace starts involve a slow trace-to-trace transition, right? Of course the patch doesn't change this, since it merely removes exits which would never have been taken anyway. (hmm. that'd be a good way to verify correctness of the patch. hmm)
Improving trace-to-trace transition sounds like a natural next step. We can now recover register allocation information from LIR, so we could peek into registers or read straight from spill slots, bypassing the stores to the stack/globals the trace we just came from did.
(In reply to comment #50) > Minor regression for deltablue -- investigating. Summary: patch v4 is effective even here, but tickles inherent badness in the trace selector. Patch v4 appears to give 0.8% slowdown on deltablue. Turns out: v4 is actually effective on deltablue too. It reduces the instruction count by 3.4%, yet it runs slower, probably because it increases the number of icache misses by 11.5%. All other microarchitectural bogeys I can measure (D1/L2 misses, & mispredicts) are more or less unchanged. It appears that the patch interacts with the trace selector in some way, causing it to generate (even) more duplicated translations of innermost-y loops. The total number of fragments started increases by 0.1% (to 61.0 million). Their average static length decreases by more than 6%, reflecting the effectiveness of the patch. But the total number of fragments seems to have gone up, hence the total amount of code being run, and I1 misses, is up. Before the patch, the top 50 traces account for 84% of the total executed. Afterwards, they account for only 79%.
(In reply to comment #47) > Ahem. Brendan, what's the difference between v5 and v4? Is it just spaceleak avoidance fixes, or do I need to rerun all the perf measurements?
As per discussion with Julian, earlier investigation of deltablue showed that it has deeply nested loops that get their vars hoisted: for (var ...) function for (var ...) function for (var ...) Each loop is captured with a tree. For each var we capture one tree with type(var) = undefined and then the actual loop version of it. This leads to a combinatorial explosion as we go further out from the innermost loop. This urgently calls for block shrink-wrapping vars (var to let conversion), or add liveness analysis that can tell that the undefined value of the var is never read. But that basically is the proof necessary for shrink wrapping, so I think we should go with that.
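A minimal, self-contained illustration of the hoisting problem (not taken from deltablue itself):

```javascript
// 'v' is hoisted to function scope, so before the loop's first iteration
// it already exists with value undefined. A tracer that types slots
// therefore has to record one tree variant for type(v) == undefined and
// another for the steady-state number type -- and with nested loops the
// variants multiply combinatorially.
function inner() {
  var sawUndefined = (typeof v === "undefined"); // true: hoisted, unassigned
  for (var i = 0; i < 3; i++) {
    var v = i * 2;
  }
  return [sawUndefined, v];
}
console.log(inner()); // [ true, 4 ]
```

Shrink-wrapping v to the loop (var-to-let conversion), or proving the undefined value is never read, removes the undefined-typed variant entirely.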
(In reply to comment #46)
> Created an attachment (id=394060) [details]
> patch, v4

Summary: Congratulations if you made it this far without drowning in numbers. Speaking wrt patch v4, not v5, which I haven't tested:

* IMO, we should land this and have done. It's a clear win at the individual-trace-translation level.

* In both richards and deltablue, we are generating way too much code, due to excessive trace duplication. This is particularly marked for deltablue, in which:
  - there are many duplicated traces (at least 28 variants of the innermost loop)
  - almost all of them exit practically at the start

TM's performance on deltablue is very poor compared to v8. Andreas will open a new bug for deltablue performance.
(In reply to comment #54) > Turns out: v4 is actually effective on deltablue too. It reduces > the instruction count by 3.4%, yet it runs slower, probably because > it increases the number of icache misses by 11.5%. On rereading: 11.5% by itself is meaningless. I mean, 11.5% increase from an already high baseline. If each executed insn counts as one icache read, then the icache miss rate has increased from 1.69% to 1.96%.
I think there are holes in this scheme. Testing today.
(In reply to comment #60) Yes. I thought it odd that it produces different traces for deltablue, not merely shorter versions of the same ones.
3. It looks like JSScope::extend and JSScope::replacingShapeChange change an object's shape without going through JSScope::generateOwnShape. (JSScope::remove could do it too, but doesn't bother.)
>- * properties without magic getters and setters), and its scope->shape >+ * properties without magic getters and setters), and its scope->map.shape This change is bogus. >-#define OBJ_SHAPE(obj) (OBJ_SCOPE(obj)->shape) >+#define OBJ_SHAPE(obj) ((obj)->map->shape) Up to now, OBJ_SCOPE asserts that obj is native. To retain assertion coverage, this could assert that the result is not SHAPELESS. Maybe that is silly. It's a shame to lose good asserts though. >@@ -1819,6 +1823,7 @@ TraceRecorder::deepAbort() > { > debug_only_print0(LC_TMTracer|LC_TMAbort, "deep abort"); > deepAborted = true; >+ forgetGuardedShapes(); > } Why is this necessary? The trace is going to be thrown away, so it seems like this can't affect correctness. >@@ -7828,8 +7877,8 @@ TraceRecorder::map_is_native(JSObjectMap > return false; > #undef OP > >- ops_ins = addName(lir->insLoad(LIR_ldp, map_ins, int(offsetof(JSObjectMap, ops))), "ops"); >- LIns* n = lir->insLoad(LIR_ldp, ops_ins, op_offset); >+ ops_ins = addName(lir->insLoad(LIR_ldcp, map_ins, int(offsetof(JSObjectMap, ops))), "ops"); >+ LIns* n = lir->insLoad(LIR_ldcp, ops_ins, op_offset); This means JSObjectMap::ops and the relevant members of JSObjectMap must be immutable. To a great extent we can enforce this in C++, and I think we should. struct JSObjectMap { // Here the first const means individual ops don't change, and the second // means we never swap out the whole vtable for another. The JIT depends // on both promises of immutability. const JSObjectOps * const ops; ... explicit JSObjectMap(const JSObjectOps *ops) : ops(ops) {} }; At the least a comment there about this especially unobvious dependency is merited. >@@ -7988,7 +8042,7 @@ TraceRecorder::guardPropertyCacheHit(LIn > lir->insLoad(LIR_ld, >- addName(lir->insLoad(LIR_ldp, cx_ins, offsetof(JSContext, runtime)), >+ addName(lir->insLoad(LIR_ldcp, cx_ins, offsetof(JSContext, runtime)), > "runtime"), > offsetof(JSRuntime, protoHazardShape)), Similarly: struct JSContext { ... 
JSRuntime * const runtime; ... explicit JSContext(JSRuntime *rt) : runtime(rt) {} }; >@@ -8348,12 +8404,11 @@ TraceRecorder::guardPrototypeHasNoIndexe > return JSRS_STOP; > > while (guardHasPrototype(obj, obj_ins, &obj, &obj_ins, exit)) { >+ if (alreadyGuardedShape(obj_ins, obj)) >+ continue; >+ Is it possible to have a TR::guardShape method instead of calling alreadyGuardedShape before each shape guard?
(In reply to comment #55) > (In reply to comment #47) > > Ahem. > > Brendan, what's the difference between v5 and v4? Is it just > spaceleak avoidance fixes, or do I need to rerun all the perf > measurements? Belated reply: yes, just the leak fix. (In reply to comment #56) > ... > This urgently calls for block shrink-wrapping vars (var to let conversion), or > add liveness analysis that can tell that the undefined value of the var is > never read. But that basically is the proof necessary for shrink wrapping, so I > think we should go with that. Bug 456588, on my list and I should get to it soon. But mrbkap or jorendorff could steal and get it done with my advice. (In reply to comment #62) >. Oops. Will fix. Jason, many thanks for reading this -- I've been preoccupied over the last week with various stuff that has meant I needed your brain (but in a good non-zombie way ;-). >. I changed JSScope::generateOwnShape to call js_LeaveTrace, but then convinced myself that the recording-time defense of purging guardedShapeTable meant we did not need the runtime pessimism. The assumption is that shape evolution other than by generateOwnShape() is deterministic from recording start to end, and then replays exactly once we've guarded (due to LIR's linear SSA nature) on trace -- and of course re-guarded after a purge. Is this sound? It ought to be, in my opinion. Anything adding non-determinism here is a bug to fix. (In reply to comment #63) > 3. It looks like JSScope::extend and JSScope::replacingShapeChange change an > object's shape without going through JSScope::generateOwnShape. But such changes are predictable. > (JSScope::remove could do it too, but doesn't bother.) Yes, that's long-standing code. I'm not sure we can rewind to the previous shape when lastProp is deleted and there are no middle deletes. Have to think about delete harder. Will work on a new patch based on all these comments now. /be
To be more precise, if the recorder witnesses a generateOwnShape, it purges guardedShapeTable but does not abort recording. New shapes for the given scope evolve unpredictably from this point on, and the first dependency on such a shape will be guarded again. When the trace executes, the generateOwnShape need not leave trace, however. Perhaps there is no subsequent guard on a mispredicted shape. So it's premature to leave trace always. The re-guarded shape should mismatch. This all suggests that generateOwnShape is too blunt an instrument, and we should build up more deterministic and trace-stable shape evolutionary paths. But that is for bug 497789. /be
Passes the spanky new trace-test python3.1-driven tests. Hope it's ok to take this bug -- further work in followup bugs so we can have one patch landing per resolved bug still seems best. Julian, please keep up the great analysis, here or in new bugs. /be
Assignee: jseward → brendan
Attachment #394137 - Attachment is obsolete: true
Attachment #394567 - Flags: review?(jorendorff)
Comment on attachment 394567 [details] [diff] [review] patch, v6 function B() {} B.prototype.x = 1; var d = new B; var names = ['z', 'z', 'z', 'z', 'z', 'z', 'z', 'x']; for (var i = 0; i < names.length; i++) { x = d.x; // guard on shapeOf(d) d[names[i]] = 2; // unpredicted shape change y = d.x; // guard here is elided } assertEq(y, 2); // Assertion failed: got 1, expected 2
Attachment #394567 - Flags: review?(jorendorff) → review-
In comment 68, the shape change happens in SetProperty_tn (soon to be SetPropertyByName), but the moral of the story is, whenever we call into arbitrary native code, anything can happen. You could probably arrange it so that if a shape changes unexpectedly in native code, we detect it in JSScope and deep-bail there. That's a trade-off though: better best-case behavior for horrible worst-case behavior. (Deep-bailing unconditionally kicks us back to the interpreter; we cannot at present patch a deep-bail exit.) It's not clear to me that the worst case is pathological. I bet it could happen in real-world code (e.g. two variables refer to different objects at record time, but the same object at run time).
(In reply to comment #68) > (From update of attachment 394567 [details] [diff] [review]) > function B() {} [snip] I'd like to make sure test cases like this, that find a bug even in development, get added to the suite. Should we just add this right away?
(In reply to comment #69) > In comment 68, the shape change happens in SetProperty_tn (soon to be > SetPropertyByName), but the moral of the story is, whenever we call into > arbitrary native code, anything can happen. But the recorder is generating that call, so it can purge the entire guardedShapeTable, eh? Great feedback as always -- got any more on the other bits of the patch? I like the const-ipation (painless). I'll get a new patch up shortly. /be
Attachment #394567 - Attachment is obsolete: true
Attachment #394617 - Flags: review?(jorendorff)
Bugzilla interdiff lies. Here's the non-noise diff-of-patches output: 545c545,555 < @@ -10880,13 +10934,10 @@ TraceRecorder::prop(JSObject* obj, LIns* --- > @@ -9894,6 +9948,9 @@ TraceRecorder::enterDeepBailCall() > // Tell nanojit not to discard or defer stack writes before this call. > LIns* guardRec = createGuardRecord(exit); > lir->insGuard(LIR_xbarrier, guardRec, guardRec); > + > + // Forget about guarded shapes, since deep bailers can reshape the world. > + forgetGuardedShapes(); > } > > JS_REQUIRES_STACK void /be
For further nutty consistency, I renamed like so: 249c249 < + tr->purgeGuardedShapeTableByObject(object); --- > + tr->forgetGuardedShapesForObject(object); 405c405 < +PurgeGuardedShapeByObject(JSDHashTable *table, JSDHashEntryHdr *hdr, uint32 number, void *arg) --- > +ForgetGuardedShapesForObject(JSDHashTable *table, JSDHashEntryHdr *hdr, uint32 number, void *arg) 414c414 < +TraceRecorder::purgeGuardedShapeTableByObject(JSObject* obj) --- > +TraceRecorder::forgetGuardedShapesForObject(JSObject* obj) 417c417 < + JS_DHashTableEnumerate(&guardedShapeTable, PurgeGuardedShapeByObject, obj); --- > + JS_DHashTableEnumerate(&guardedShapeTable, ForgetGuardedShapesForObject, obj); 628c628 < + void purgeGuardedShapeTableByObject(JSObject* obj); --- > + void forgetGuardedShapesForObject(JSObject* obj); /be
There's a place where we emit a deep-bail call without calling enterDeepBailCall. It's in TR::emitNativeCall. (definitely silly, please fix it) If this new approach is correct, then I think we can assert in JSScope, when the shape changes: // Check for unpredictable shape change. If this fails, the alreadyGuarded // cache may have elided necessary shape guards. JS_ASSERT_IF(JS_ON_TRACE(cx), cx->deepBail); Maybe that assertion is too strict and we would get false alarms. I hope not.
(In reply to comment #70) > > Should we just add this right away? yes. r=me
As requested (cx->bailExit, not cx->deepBail). Passes good old js/tests and the new trace-test regime. Tryserver'ing now, expect a clean bill of health. Great to see daylight at the end of this tunnel (and it's not an oncoming train! ;-). /be
Attachment #394617 - Attachment is obsolete: true
Attachment #394617 - Flags: review?(jorendorff)
(In reply to comment #61) > (In reply to comment #60) > > Yes. I thought it odd that it produces different traces for > deltablue, not merely shorter versions of the same ones. Julian, if you have time could you please test the latest patch and confirm it doesn't do something unexpected on deltablue? Thanks, /be
(In reply to comment #77) > Passes good old js/tests and the new trace-test regime. Can someone explain what the new trace-test regime is? I seemed to have missed the memo...
You have to install python from source and then run the python script in trace-test.
(In reply to comment #80) > You have to install python from source and then run the python script in > trace-test. there should be one that works with python 2.3+ at this point, covering any recent OSX or linux.
I tested patch v7 (not the latest) overnight and it still doesn't seem conclusively a win to me. I'll try patch v8 today. One thing I've learned the hard way is that Linux is a really sucky platform to do measurement runs on, since the per-run variation is so high.

Compared to the patch v4 results, the gain for richards has fallen from 8.5% to 6.9%, and the 4% gain for crypto is now less than 1%. This might be measurement noise or it might reflect increasing conservatism in v7 resulting from efforts to make it completely correct. I'll try with patch v8 on a quiet MacOS box. Am tired of dealing with Linux's measurement noise.

Are we sure that v8-crypto is entirely deterministic? (eg, it doesn't start off doing any random number generation nonsense that varies from run to run?) I ask because it seems to have a much higher measurement noise level than the rest of them (see below).

(50 run averages):

** TOTAL **:              -            13182.4ms +/- 0.3%   13177.2ms +/- 0.3%

  v8:                     -            13182.4ms +/- 0.3%   13177.2ms +/- 0.3%
    crypto:               -              948.8ms +/- 1.9%     941.8ms +/- 2.2%
    deltablue:            ??            8140.9ms +/- 0.4%    8153.4ms +/- 0.4%
    earley-boyer:  *1.013x as slow*     1976.7ms +/- 0.4%    2002.4ms +/- 0.3%
    raytrace:      *1.012x as slow*     1312.3ms +/- 0.3%    1327.8ms +/- 0.4%
    richards:       1.069x as fast       803.7ms +/- 0.1%     751.7ms +/- 0.1%
(In reply to comment #79)
> (In reply to comment #77)
> > Passes good old js/tests and the new trace-test regime.
>
> Can someone explain what the new trace-test regime is? I seemed to have
> missed the memo...

$ make -n check
/usr/bin/python ../trace-test/trace-test.py \
    -x slow ./dist/bin/js

To add a new test, put some code in a file, js/src/trace-test/tests/*/*.js and hg add it. The new regime will automatically pick it up.
$ grep -w random v8/crypto.js
  while(rng_pptr < rng_psize) {  // extract some randomness from Math.random()
    t = Math.floor(65536 * Math.random());
// PKCS#1 (type 2, random) pad input string s to n bytes, and return a bigint
  while(n > 2) { // random non-zero pad
// Undo PKCS#1 (type 2, random) padding and, if valid, return the plaintext
// Generate a new random private key B bits long, using public expt E

/be
(In reply to comment #77) > Created an attachment (id=394756) [details] >%
(In reply to comment #77)
> Created an attachment (id=394756) [details]
> patch, v8

Numbers for default test set for patch v8, same setup as #85, 500 iterations. Seems like much of a muchness.

** TOTAL **:         1.001x as fast     1214.3ms +/- 0.0%   1213.1ms +/- 0.0%

  3d:                1.003x as fast      188.6ms +/- 0.0%    188.1ms +/- 0.0%
    cube:            -                    51.0ms +/- 0.1%     51.0ms +/- 0.1%
    morph:           -                    39.2ms +/- 0.1%     39.2ms +/- 0.1%
    raytrace:        1.005x as fast       98.3ms +/- 0.0%     97.9ms +/- 0.1%

  access:            *1.005x as slow*    178.9ms +/- 0.1%    179.9ms +/- 0.0%
    binary-trees:    *1.040x as slow*     50.5ms +/- 0.1%     52.5ms +/- 0.1%
    fannkuch:        1.006x as fast       76.2ms +/- 0.0%     75.8ms +/- 0.1%
    nbody:           1.024x as fast       34.6ms +/- 0.1%     33.8ms +/- 0.1%
    nsieve:          *1.013x as slow*     17.6ms +/- 0.2%     17.8ms +/- 0.2%

  bitops:            1.010x as fast       47.8ms +/- 0.1%     47.4ms +/- 0.1%
    3bit-bits-in-byte:  -                  2.1ms +/- 1.4%      2.1ms +/- 1.4%
    bits-in-byte:    1.002x as fast       11.0ms +/- 0.2%     11.0ms +/- 0.1%
    bitwise-and:     -                     3.2ms +/- 1.1%      3.2ms +/- 1.1%
    nsieve-bits:     1.013x as fast       31.4ms +/- 0.1%     31.0ms +/- 0.1%

  controlflow:       *1.017x as slow*     44.8ms +/- 0.1%     45.5ms +/- 0.1%
    recursive:       *1.017x as slow*     44.8ms +/- 0.1%     45.5ms +/- 0.1%

  crypto:            *1.011x as slow*     72.7ms +/- 0.2%     73.6ms +/- 0.2%
    aes:             ??                   41.0ms +/- 0.4%     41.1ms +/- 0.4%
    md5:             *1.031x as slow*     20.3ms +/- 0.2%     20.9ms +/- 0.1%
    sha1:            *1.010x as slow*     11.4ms +/- 0.4%     11.5ms +/- 0.4%

  date:              1.009x as fast      173.9ms +/- 0.0%    172.4ms +/- 0.0%
    format-tofte:    1.014x as fast       82.5ms +/- 0.1%     81.4ms +/- 0.1%
    format-xparb:    1.004x as fast       91.4ms +/- 0.0%     91.0ms +/- 0.0%

  math:              *1.003x as slow*     43.3ms +/- 0.2%     43.4ms +/- 0.2%
    cordic:          *1.011x as slow*     14.1ms +/- 0.2%     14.3ms +/- 0.3%
    partial-sums:    -                    19.9ms +/- 0.2%     19.8ms +/- 0.2%
    spectral-norm:   ??                    9.3ms +/- 0.4%      9.4ms +/- 0.4%

  regexp:            ??                   52.7ms +/- 0.1%     52.8ms +/- 0.1%
    dna:             ??                   52.7ms +/- 0.1%     52.8ms +/- 0.1%

  string:            1.004x as fast      411.5ms +/- 0.0%    409.9ms +/- 0.0%
    base64:          1.013x as fast       24.0ms +/- 0.1%     23.7ms +/- 0.2%
    fasta:           *1.004x as slow*     82.7ms +/- 0.1%     83.0ms +/- 0.0%
    tagcloud:        *1.002x as slow*    130.2ms +/- 0.0%    130.4ms +/- 0.0%
    unpack-code:     1.013x as fast      132.9ms +/- 0.1%    131.2ms +/- 0.0%
    validate-input:  -                    41.6ms +/- 0.1%     41.6ms +/- 0.1%
(In reply to comment #85) > >% For the raytrace regression there, instruction counts are down around 1.5% but the I1 and to a lesser extent L2 miss rates are up. I'll investigate, but requires first rebasing the frag profiling patch.
No LIR_dbreak-generated int3 traps in my testing. /be
Attachment #394756 - Attachment is obsolete: true
Attachment #394883 - Flags: review?(jorendorff)
Comment on attachment 394883 [details] [diff] [review] patch, v9 with dvander's lir->insAssert/LIR_dbreak patch For the nanojit changes. /be
Attachment #394883 - Flags: review?(graydon)
Status: NEW → ASSIGNED
Priority: -- → P1
Target Milestone: --- → mozilla1.9.2
Comment on attachment 394883 [details] [diff] [review] patch, v9 with dvander's lir->insAssert/LIR_dbreak patch Debug-break stuff? looks fine to me.
Attachment #394883 - Attachment is obsolete: true
Attachment #394944 - Flags: review?(jorendorff)
Attachment #394883 - Flags: review?(jorendorff)
(In reply to comment #90) > (From update of attachment 394883 [details] [diff] [review]) > Debug-break stuff? looks fine to me. Looks ok to me too. I tried changing the insAssert condition to something obviously false, and checked it really does trap. Which it does.
Fragment profiling results for patch v8 (which, aiui, is functionally equivalent to v9 and v9a).

Compared to baseline (which you can see in comment #9), the number of fragment starts increases from 11.836 million to 12.863 million. This is worrying, since my understanding was that the patch merely removed unused guards, and would not affect the higher level trace selection mechanism. Indeed, this is new behavior. In version 4 of patch (see comment #51), which was the last version I fragprofiled, the number of frag starts was unaffected by the patch (11.836 mill).

The program still runs circa 5% faster because the traces are shorter overall, despite the extra 1 million trace-to-trace transfers.

-------------

Details: I think almost all the extra activity manifests itself in extra entries to FragID=000013. This trace is 50ish guards long. I just show the first three here:

Recording starting from ../SunSpider/tests/v8-richards.js:184@11 (FragID=000013)
00070:  331  callprop "run"
About to try emitting guard code for SideExit=0x825f9bc exitType=BRANCH
start state = iparam 0 ecx
sp = ld state[0]
rp = ld state[4]
cx = ld state[8]
eos = ld state[12]
eor = ld state[16]
ld630 = ld state[880]
$global0 = i2f ld630
ld631 = ld state[872]
$global1 = i2f ld631
ld632 = ld state[888]
$global2 = i2f ld632
ld633 = ld state[864]
$global3 = i2f ld633
ld634 = ld state[840]
$global4 = i2f ld634
ld635 = ld state[856]
$global5 = i2f ld635
ld636 = ld state[936]
$global6 = i2f ld636
$args0 = ld sp[-24]
$args1 = ld sp[-16]
$arguments0 = ld sp[-8]
$stack0 = ld sp[0]
$stack1 = ld sp[8]
$stack2 = ld sp[16]
$arguments0 = ld sp[24]
$var0 = ld sp[32]
$stack0 = ld sp[40]
map = ld $stack0[0]
ops = ld map[0]
ld637 = ld ops[12]
ptr
guard(native-map) = eq ld637, ptr
xf463: xf guard(native-map) -> pc=0x81ee922 imacpc=(nil) sp+48 rp+4 (GuardID=001)
About to try emitting guard code for SideExit=0x825fa60 exitType=BRANCH
shape = ld map[16]
265
guard_kshape = eq shape, 265
xf464: xf guard_kshape -> pc=0x81ee922 imacpc=(nil) sp+48 rp+4 (GuardID=002)
About to try emitting guard code for SideExit=0x825fb04 exitType=BRANCH
proto = ld $stack0[8]
0
eq398 = eq proto, 0
xt359: xt eq398 -> pc=0x81ee922 imacpc=(nil) sp+48 rp+4 (GuardID=003)

Without the patch, the trace is executed 81898 times, and always goes to the end:

FragID=000013, total count 81898:
Looped (057)    81898 (100.00%)

With patch v8, this fragment is executed 12 x more often. But all of those extra entries are thrown out at the first shape check:

FragID=000013, total count 1079398:
GuardID=002    997500 ( 92.41%)
Looped (052)    81898 (  7.59%)

So it seems like the trace is being entered a lot more, but all those new entries fail the first shape check.
(In reply to comment #76) > (In reply to comment #70) > > > > Should we just add this right away? > > yes. r=me A few days behind on this, but I think everyone should feel free, at any time, to add any tracing testcase to the suite, reviewed at the discretion of the committer. I don't think anyone will disagree, but it seems best to make sure everyone's on the same page with this.
at tm-tip, before this patch:

recorder: started(6), aborted(2), completed(34), different header(0), trees trashed(0), slot promoted(0), unstable loop variable(2), breaks(0), returns(0), unstableInnerCalls(1), blacklisted(0)
monitor: triggered(72), exits(72), type mismatch(0), global mismatch(0)

after:

recorder: started(4), aborted(2), completed(31), different header(0), trees trashed(0), slot promoted(0), unstable loop variable(1), breaks(0), returns(0), unstableInnerCalls(1), blacklisted(0)
monitor: triggered(66), exits(66), type mismatch(0), global mismatch(0)

I could have sworn there was something much different yesterday but now I can't reproduce it.
Revised to include #ifdef DEBUG code to dump shape guards to /tmp/shapes.dump. Julian, this is rough but it might be usable to say more about guards in profiler output. We would need to map GuardId to this information somehow, possibly just in the .dump file. /be
Attachment #394944 - Attachment is obsolete: true
Attachment #394944 - Flags: review?(jorendorff)
(In reply to comment #94) > A few days behind on this, but I think everyone should feel free, at any time, > to add any tracing testcase to the suite, reviewed at the discretion of the > committer. Well, if (as in this case) the test passes on tip, absolutely. But usually a new test case is relevant because it fails.
(In reply to comment #95)
> at tm-tip, before this patch:

I see that too, although the numbers are not as different as yours. There's one more "started" and one more "aborted", but that's it. This is with 31574:12a9bea2d331 (Sat Aug 15).

Without patch:

recorder: started(4), aborted(1), completed(33), different header(0), trees trashed(4), slot promoted(0), unstable loop variable(1), breaks(0), returns(0), unstableInnerCalls(1), blacklisted(0)
monitor: triggered(70), exits(70), type mismatch(0), global mismatch(0)

With patch v8:

recorder: started(5), aborted(2), completed(33), different header(0), trees trashed(4), slot promoted(0), unstable loop variable(1), breaks(0), returns(0), unstableInnerCalls(1), blacklisted(0)
monitor: triggered(71), exits(71), type mismatch(0), global mismatch(0)
> Revised to include #ifdef DEBUG code to dump shape guards to /tmp/shapes.dump. > Julian, this is rough but it might be usable to say more about guards in > profiler output. Trying to clarify what you mean. AIUI a shape is a 24 bit int which uniquely identifies a structural type. The shape guards throw us off-trace if the shape of an object does not match its shape at recording-time. So what info do you want the profiler to collect? I'm thinking, something like: for each shape guard that fails, the expected and actual shape. And also a mapping from the shapes themselves to the type it refers to, as printed out by the v10 patch into /tmp/shapes.log. Is that right? Or completely the wrong direction?
I collected more info re the with/without-patch discrepancy noted in comments #95 and #98. This is easily reproduced using r31775 with and without patch v9 (r31775 since it no longer applies to tip).

I first changed all the ldcp's back to ldp's. Difference persists. So it's not to do with bogus claims about non-aliasing.

I disabled eliding of unnecessary shape checks in TraceRecorder::guardShape. Difference persists.

So the impression I get is, even after disabling the code generation changes that are driven by (what I assume is) a redundant shape check analysis, the difference is still there.

-----------------

diffing the output of TMFLAGS=minimal shows that, relatively late in the day (after 32 of 34 total recordings are made), the v9-ised version starts to record a new trace, that the baseline one doesn't. But then it aborts:

Recording starting from ../../../SunSpider/tests/v8-richards.js:535@608
Abort recording of tree ../../../SunSpider/tests/v8-richards.js:535@608 at ../../../SunSpider/tests/v8-richards.js:331@70: Inner tree is trying to grow, abort outer recording.

and that's for the reason BRANCH_EXIT.

----------------

I also compared outputs for TMFLAGS=tracer,minimal,stats,abort. This seems to show a divergence very much earlier in the process, after completion of the 6th recording. The first differing line is

trying to attach another branch to the tree (hits = 0)

which only the baseline version does, not the v9 version. Baseline version continues with a few lines of "Looking for compat peer", leaving/entering a trace, synthesising a shallow frame. The two versions finally reconverge at the line

trying to attach another branch to the tree (hits = 1)

IOW the baseline version has

trying to attach another branch to the tree (hits = 0)
(a few lines of other stuff)
trying to attach another branch to the tree (hits = 1)

whereas the v9 version only has

trying to attach another branch to the tree (hits = 1)

-----------------------

Another thing I noticed is that in many places where lists of types are printed, the v9 version would have an extra type listed:

checkType slot 10: interp=I typemap=I isNum=1 promoteInt=1

that's not present with the baseline version.

------------

Is any of this meaningful/relevant?
Here's a small case which produces one trace without the patch and two with it:

$ TMFLAGS=minimal ./Rbase/js/src/BuildD/js -j ./foo.js
Recording starting from ./foo.js:5@8 (FragID=000001)
Recording completed at ./foo.js:5@25 via endLoop (FragID=000001)

$ TMFLAGS=minimal ./Rv10/js/src/BuildD/js -j ./foo.js
Recording starting from ./foo.js:5@8 (FragID=000001)
Recording completed at ./foo.js:5@25 via endLoop (FragID=000001)
Recording starting from ./foo.js:5@8 (FragID=000002)
Recording completed at ./foo.js:5@25 via endLoop (FragID=000002)

Test case is testDeepPropertyShadowing from trace-tests:

function testDeepPropertyShadowing() {
    function h(node) {
        var x = 0;
        while (node) {
            x++;
            node = node.parent;
        }
        return x;
    }
    var tree = {__proto__: {__proto__: {parent: null}}};
    h(tree);
    h(tree);
    tree.parent = {};
    assertEq(h(tree), 2);
}
testDeepPropertyShadowing();
No change other than merging. We understand better what's going on. I'll write it up when I have time and see about a real fix. /be
Attachment #395192 - Attachment is obsolete: true
Tested on top of patch for bug 471214 (see bug 471214 comment 73), I stopped in every JSScope::generateOwnShape and verified that each such shape was generated for a specific method assignment to a prototype object, or a scope branding on first call to a method of an as-yet unbranded prototype. These look minimal, although there is more to do in bug 497789 to share shape evolutionary paths.

However, this patch gives three aborts:

Abort recording of tree richards.js:190@11 at richards.js:531@30: No compatible inner tree.
Abort recording of tree base.js:159@23 at richards.js:470@47: Inner tree is trying to grow, abort outer recording.
Abort recording of tree base.js:159@23 at richards.js:337@70: Inner tree is trying to grow, abort outer recording.

Without this patch, with the patch for bug 471214, the aborts are:

Abort recording of tree richards.js:190@11 at richards.js:531@30: No compatible inner tree.
Abort recording of tree base.js:159@23 at richards.js:470@47: Inner tree is trying to grow, abort outer recording.

With no patches applied, the same two aborts present.

I could use fragment profiler data, again -- Julian, I hope your travel went well enough!

Thanks,
/be
Attachment #397381 - Attachment is obsolete: true
(In reply to comment #103)
>.

The relevant code is in AttemptToExtendTree:

    Fragment* c;
    if (!(c = anchor->target)) {
        Allocator& alloc = *JS_TRACE_MONITOR(cx).allocator;
        c = new (alloc) Fragment(cx->fp->regs->pc);
        c->root = anchor->from->root;
        debug_only_printf(...);
        anchor->target = c;
        ...

Where anchor is the VMSideExit* passed in. With redundant guards, we have two anchors for the same ip, so we see null anchor->target twice and create a new fragment, c, with zero hits count. With the patch, we have one guard, one side exit, one place to create and save anchor->target aka c, so we find it and its hits counter reaches the critical HOTEXIT threshold.

With the v11 patch on top of the bug 471214 patch, I wonder whether the fragment profiler still shows same compiled fragments, more executions of one particular one -- where the extra executions all branch-exit early. Need that profiler!

/be
Interdiff lies, patch v12 is just a refresh. /be
(In reply to comment #105)
> Created an attachment (id=397582) [details]
> patch v12

Comparing (32099:c1a97865c476 + bug 471214 att 397899) to (32099:c1a97865c476 + bug 471214 att 397899 + this bug patch v12). Note that bug 471214 att 397899 is not the final version of that bug's patch, but at least does not appear to cause any perf regressions.

Summary: v12 is faster than baseline (as before), and has lower insn counts and I1 misses. However it still has different behaviour at the fragprofiling level -- entries increase from 11,836,222 to 12,863,820.

ten runs native:
  base 8.76
  v12  8.36

base

==8163== I refs:        3,735,417,294
==8163== I1 misses:        32,483,178
==8163== L2i misses:            5,789
==8163== I1 miss rate:           0.86%
==8163== L2i miss rate:          0.00%
==8163==
==8163== D refs:        1,790,558,493  (1,187,668,439 rd + 602,890,054 wr)
==8163== D1 misses:           159,349  (      133,304 rd +      26,045 wr)
==8163== L2d misses:           41,320  (       20,611 rd +      20,709 wr)
==8163== D1 miss rate:            0.0% (          0.0%   +         0.0%  )
==8163== L2d miss rate:           0.0% (          0.0%   +         0.0%  )
==8163==
==8163== L2 refs:          32,642,527  (   32,616,482 rd +      26,045 wr)
==8163== L2 misses:            47,109  (       26,400 rd +      20,709 wr)
==8163== L2 miss rate:            0.0% (          0.0%   +         0.0%  )
==8163==
==8163== Branches:        541,992,592  (  541,898,572 cond +     94,020 ind)
==8163== Mispredicts:      11,523,392  (   11,497,063 cond +     26,329 ind)
==8163== Mispred rate:            2.1% (          2.1%     +       28.0%   )

v12

==8187== I refs:        3,594,439,441
==8187== I1 misses:        28,277,581
==8187== L2i misses:            5,794
==8187== I1 miss rate:           0.78%
==8187== L2i miss rate:          0.00%
==8187==
==8187== D refs:        1,737,612,513  (1,131,324,134 rd + 606,288,379 wr)
==8187== D1 misses:           191,409  (      163,599 rd +      27,810 wr)
==8187== L2d misses:           44,319  (       23,062 rd +      21,257 wr)
==8187== D1 miss rate:            0.0% (          0.0%   +         0.0%  )
==8187== L2d miss rate:           0.0% (          0.0%   +         0.0%  )
==8187==
==8187== L2 refs:          28,468,990  (   28,441,180 rd +      27,810 wr)
==8187== L2 misses:            50,113  (       28,856 rd +      21,257 wr)
==8187== L2 miss rate:            0.0% (          0.0%   +         0.0%  )
==8187==
==8187== Branches:        494,290,098  (  494,191,015 cond +     99,083 ind)
==8187== Mispredicts:      11,331,061  (   11,304,453 cond +     26,608 ind)
==8187== Mispred rate:            2.2% (          2.2%     +       26.8%   )

base

Total count = 11836222

      Entry counts       Entry counts      Static
    ------Self------   ----Cumulative---   Exits  IBytes  FragID
0:   31.55% 3734848     31.55% 3734848       46    1043   000002
1:   19.44% 2300548     50.99% 6035396      150    3315   000003
2:   12.56% 1486448     63.55% 7521844      182    4107   000010
3:    8.43% 997498      71.98% 8519342      119    2874   000016
4:    4.38% 517997      76.35% 9037339      219    5047   000004
5:    2.96% 349997      79.31% 9387336       93    2390   000023
6:    2.77% 327598      82.08% 9714934       27     831   000008
7:    2.74% 324446      84.82% 10039380     186    4280   000019
8:    2.74% 323748      87.55% 10363128      53    1324   000017
9:    2.49% 294348      90.04% 10657476      68    1729   000018

v12

Total count = 12863820

      Entry counts       Entry counts      Static
    ------Self------   ----Cumulative---   Exits  IBytes  FragID
0:   29.03% 3734848     29.03% 3734848       45    1079   000002
1:   17.88% 2300548     46.92% 6035396      142    3432   000003
2:   11.56% 1486448     58.47% 7521844      176    4238   000010
3:    8.39% 1079398     66.86% 8601242       52    1368   000013
4:    7.75% 997498      74.62% 9598740      112    2933   000016
5:    4.03% 517997      78.64% 10116737     211    5250   000004
6:    2.72% 349997      81.37% 10466734      88    2424   000023
7:    2.55% 327598      83.91% 10794332      27     855   000008
8:    2.52% 324446      86.43% 11118778     180    4407   000018
9:    2.52% 323748      88.95% 11442526      50    1310   000017
(In reply to comment #109) > rebased again, still essentially v12 plus tracking changes Performance numbers for this patch are essentially the same as in comment #107. (when applied to TM r32149, which is the latest version to which it will apply cleanly).
Splitting the patch up into dependency bugs. First one is bug 516069. /be
Fragment profiling help welcome. I really hope this doesn't introduce anything unexpected. The LIR_dbreak-based assertions should confirm it's memoizing constant shape guards correctly.

The SubShapeOf(cx, obj, shape) built-in, used when a predictable shape (one along OBJ_SCOPE(obj)'s property tree lineage at scope->lastProp, where the scope has not had its own overriding shape assigned) is guarded, tries to match the recording-time shape to an older property on the ancestor line, in case the object being used to access the wanted property has been extended with new properties since recording-time. SubShapeOf helps when objects related by pure property extension are accessed from the same site.

The comment in TR::guardShape says what I think we must do next: figure out how to implement a PIC instead of branch-exiting on unrelated shape access from the same site. v8/richards.js is all about this: it uses different JS constructors with prototypes, where the unrelated shapes have methods and properties named the same and at the same offset (slot), and it expects the VM to optimize.

Taking a branch exit means re-recording and growing the tree, but with the same code except for the guarded shape. Branch exit from an inner tree while recording an outer tree gets you an aborted outer tree, and we hope to re-record inner then outer with the new shape guarded in the inner, but again if the code is the same (same slot, etc.), then this is suboptimal compared to a PIC. And branch exits can reach MAX_BRANCHES and fail hard.

I'll file a followup bug if this patch passes review and frag-profiling muster, and get on with the PIC work tomorrow.

/be
Attachment #400163 - Attachment is obsolete: true
Attachment #400655 - Flags: review?(jorendorff)
Attachment #400655 - Attachment is obsolete: true
Attachment #400655 - Flags: review?(jorendorff)
Comment on attachment 400655 [details] [diff] [review] the residuum SubShapeOf is badly broken (say trace-tests) but I have no time left today to work on this. /be
I think this is a dup -- anyone disagree, undup and take ownership. /be
Status: ASSIGNED → RESOLVED
Closed: 13 years ago
Resolution: --- → DUPLICATE
FWIW, I've (safely) implemented the CSE-ing of loads and am getting roughly a 20% speed-up. That's higher than the 8% Julian got in comment 8 when he did it unsafely; I guess other things have changed in the meantime that make the CSE-ing have a bigger benefit. | https://bugzilla.mozilla.org/show_bug.cgi?id=504587 | CC-MAIN-2022-33 | refinedweb | 13,341 | 76.22 |
On a new page (an object of the Page class) I have two entries, and I want to use their Text properties to pass content to the MainActivity tab (in Application.Android). For example, the first entry will be "name" and the second will be "key". I want to pass name.Text and key.Text into MainActivity so I can work with them further, for example to turn on a WiFi network with this name and this key. I tried plugin namespaces and constructing my own class (fortunately I can see the common classes from MainActivity), but had no luck with it. What am I doing wrong? Can anyone help me with this problem? As I understand it, I can only turn on the device's services through the main Activity, but I may be wrong.
P.S. I'd started learning Xamarin about 2 month ago
Answers
@Andrey_V
Do you want to pass the data in the PCL project to the Android project?
You could try to use MessagingCenter. For example, you could use Subscribe() in MainActivity and call Send() to pass the data to MainActivity by a button click or something else.
I just want to pass the current data from the current controls (for example, an Entry) on the C# page, which I would call the first part of the project, to the MainActivity tab in the second part of the project.

But first I will check your suggestion, and if I get stuck again or it isn't what I'm looking for, I'll leave a new comment.
Thank you
@BillyLiu
I suppose that I have to use DependencyService.
The QAction class provides an abstract user interface action that can be inserted into widgets. More...
#include <QAction>
Inherits QObject.
Inherited by QMenuItem and QWidgetAction.
Once a QAction has been created, it should be added to the relevant menu and toolbar, then connected to the slot which will perform the action. For example:

openAct = new QAction(QIcon(":/images/open.png"), tr("&Open..."), this);
openAct->setShortcut(tr("Ctrl+O"));
openAct->setStatusTip(tr("Open an existing file"));
connect(openAct, SIGNAL(triggered()), this, SLOT(open()));

fileMenu->addAction(openAct);
fileToolBar->addAction(openAct);
We recommend that actions are created as children of the window they are used in. In most cases actions will be children of the application's main window.
See also QMenu, QToolBar, and Application Example.
This enum type is used when calling QAction::activate().
This enum describes how an action should be moved into the application menu on Mac OS X.
This property holds whether the action is a checkable action.
A checkable action is one which has an on/off state. For example, in a word processor, a Bold toolbar button may be either on or off. By default, this property is false.
See also QAction::setChecked().
This property holds whether the action is checked.
Only checkable actions can be checked. By default, this is false (the action is unchecked).
Access functions:
See also checkable.
This property holds whether the action is enabled.
Disabled actions cannot be chosen by the user. They do not disappear from menus or toolbars, but they are displayed in a way which indicates that they are unavailable. By default, this property is true (actions are enabled).
Access functions:
This property holds the action's font.
The font property is used to render the text set on the QAction. The font can be considered a hint as it will not be consulted in all cases based upon application and style.
Access functions:
See also QAction::setText() and QStyle.
This property holds the action's descriptive icon text.
There is no default icon text.
Access functions:
See also setToolTip() and setStatusTip().
This property holds the action's menu role.
The menu role can only be changed before the actions are put into the menu bar in Mac OS X (usually just before the first application window is shown).
This property was introduced in Qt 4.2.
Access functions:
This property holds the action's primary shortcut key.
Valid keycodes for this property can be found in Qt::Key and Qt::Modifier. There is no default shortcut key.
Access functions:
This property holds the context for the action's shortcut.
Valid values for this property can be found in Qt::ShortcutContext. The default value is Qt::WindowShortcut.
Access functions:
This property holds the action's status tip.
The status tip is displayed on all status bars provided by the action's top-level parent widget.
Access functions:
See also setToolTip() and showStatusText().
This property holds the action's descriptive text.
If the action is added to a menu, the menu option will consist of the icon (if there is one), the text, and the shortcut (if there is one). There is no default text.
See also iconText.
This property holds the action's tooltip.
This text is used for the tooltip. If no tooltip is specified, the action's text is used.
Access functions:
See also setStatusTip() and setShortcut().
This property holds whether the action can be seen (e.g. in menus and toolbars).
If visible is true the action can be seen (e.g. in menus and toolbars) and chosen by the user; if visible is false the action cannot be seen or chosen. By default, this property is true.
Access functions:
This property holds the action's "What's This?" help text.
The "What's This?" text is used to provide a brief description of the action. The text may contain rich text. There is no default "What's This?" text.
Access functions:
See also QWhatsThis and Q3StyleSheet.
Constructs an action with parent. If parent is an action group the action will be automatically inserted into the group.
Constructs an action with some text and parent. If parent is an action group the action will be automatically inserted into the group.
Constructs an action with an icon and some text and parent. If parent is an action group the action will be automatically inserted into the group.
Destroys the object and frees allocated resources.
Returns the action group for this action. If no action group manages this action then 0 will be returned.
See also QActionGroup and QAction::setActionGroup().
Sends the relevant signals for ActionEvent event.
Action based widgets use this API to cause the QAction to emit signals as well as emitting their own.
Returns a list of widgets this action has been added to.
This function was introduced in Qt 4.2.
See also QWidget::addAction().
This signal is emitted when an action has changed. If you are only interested in actions in a given widget, you can watch for QWidget::actionEvent() sent with an QEvent::ActionChanged.
See also QWidget::actionEvent().
Returns the user data as set in QAction::setData.
See also setData().
This is a convenience slot that calls activate(Hover).
This signal is emitted when an action is highlighted by the user; for example, when the user pauses with the cursor over a menu option, toolbar button, or presses an action's shortcut key combination.
See also QAction::activate().
Returns true if this action is a separator action; otherwise it returns false.
See also QAction::setSeparator().
Returns the menu contained by this action. Actions that contain menus can be used to create menu items with submenus, or inserted into toolbars to create buttons with popup menus.
See also setMenu() and QMenu::addAction().
Returns the parent widget.
Sets this action group to group. The action will be automatically added to the group's list of actions.
Actions within the group will be mutually exclusive.
See also QActionGroup and QAction::actionGroup().
Sets the action's internal data to the given userData.
See also data().
This is a convenience function for the enabled property that is useful for signal-slot connections. If b is true the action is disabled; otherwise it is enabled.
Sets the menu contained by this action to the specified menu.
See also menu().().
Sets shortcuts as the list of shortcuts that trigger the action. The first element of the list is the primary shortcut.
This function was introduced in Qt 4.2.
See also shortcuts() and shortcut.
This is an overloaded member function, provided for convenience.
Returns the list of shortcuts, with the primary shortcut as the first element of the list.
This function was introduced in Qt 4.2.
See also setShortcuts().
Updates the relevant status bar for the widget specified by sending a QStatusTipEvent to its parent widget. Returns true if an event was sent; otherwise returns false.
If a null widget is specified, the event is sent to the action's parent.
See also statusTip.
This is a convenience function for the checked property. Connect to it to change the checked state to its opposite state.
This is a convenience slot that calls activate(Trigger).
Maya User Interface Overview5:27 with Jason Baskin
The first time you open up Maya, you may be overwhelmed by all of the icons and menu options that are scattered across the screen. Once you become familiar with Maya's interface design, getting around in Maya becomes much simpler.
Check out your Project Downloads for a .pdf on all the Maya Hotkeys, as well as all the project files for this course!
Important Hotkey:
- Tap spacebar over Camera View to maximize it. Tap again to show additional Camera Views.
Status Line/Toolbar: Allows you to specify which 3D action to focus on.
Channel Box: Allows you to view and edit specific details about objects you have created within your scene.
Marking Menu: Hold down the Spacebar to display Maya’s various menu items at the location of the mouse cursor. Make use of this shortcut to access menu items rather than moving the mouse to specific regions of the screen.
Quick Layout Buttons: Allows for different window arrangements, and are specified by clicking on the icons along the lower left side of the Maya UI. These reveal additional windows useful for animation, scene organization, and material creation.
Maya Tool Box: Includes tools for selecting, translating, rotating, and scaling scene elements.
Layer section: Allows you to lock and hide elements within your scene. (Not used in this course)
Attribute Editor: Displays detailed information about how an object is constructed along with its display and render settings. (Not used in this course)
- 0:00
The first time you open up Maya you might be a little overwhelmed by
- 0:03
all of the icons and menu options that are scattered across the screen.
- 0:07
But once you become familiar with Maya's interface design,
- 0:10
getting around in Maya becomes a lot simpler.
- 0:13
Let's take a look at the various sections of the user interface or UI.
- 0:16
Running across the top of the screen you'll see Maya's main menu.
- 0:20
Through a series of drop down menus you can gain access to
- 0:23
every common menu Maya has to offer.
- 0:26
The commands on the upper left are always visible to you whether you're focusing on
- 0:30
modeling, texturing, animation or other aspects of 3D production.
- 0:34
These six basic menu options allow you to open and
- 0:38
import files, create basic 3D shapes,
- 0:41
activate important modification tools and access additional display windows.
- 0:45
The remaining drop down menus along the upper right portion of the screen
- 0:49
will give you access to menus specifically related
- 0:52
to the tasks your are focusing on in Maya.
- 0:55
Just below the main menu you'll find a section called a Status Line Toolbar.
- 0:59
This section allows you to specify which 3D task you wanna focus on in Maya.
- 1:05
The drop down menu on the far left allows you to activate a specific module or
- 1:09
set of commands.
- 1:11
Maya LT includes Modeling, Rigging, Animation and Shading modules,
- 1:16
and the student and professional versions of Maya also include additional modules.
- 1:21
You'll notice that whenever a new module is selected,
- 1:24
many items in the main menu update to show module specific commands.
- 1:28
For example when we're working in the modeling module
- 1:31
we'll have access to mesh and curve tools.
- 1:34
But if we were to switch to the animation module we'd see a list of
- 1:37
animation specific tools.
- 1:39
Such as these commands that relate to animation key frames and
- 1:43
playback settings.
- 1:44
But in this course we will only be working within the modeling modules.
- 1:47
So you don't need to worry about any of these other areas of Maya for
- 1:50
the time being.
- 1:52
To the right of the module drop down menu you'll see a few icons that give the user
- 1:56
easy access to file commands.
- 1:58
Opening and saving scenes for example, as well as selection and snapping tools
- 2:03
that will allow us to precisely move and shape geometry within our scene.
- 2:08
Beneath the status line toolbar you'll find a section of shelves.
- 2:12
These are just icons that give you easy access to common commands.
- 2:17
The shelf commands are organized into different tabs but
- 2:20
all the commands found in the shelves can also be found up in the main menu above.
- 2:24
The intuitive shelf icons just offer easy access to tools.
- 2:30
Like the modules, the shelf tabs organize commands based on different categories.
- 2:34
And, in this course, we'll only be working with tools found in the polygons tab.
- 2:39
I'll click on this cylinder icon to add a primitive object to our scene.
- 2:43
This will give us something to look at as we begin working with cameras.
- 2:47
Under the shelves you'll find the camera view port,
- 2:49
which displays the objects within the 3D scene.
- 2:51
The first time you open Maya, you may just see a single perspective view.
- 2:56
But tapping on the spacebar with the mouse over this view
- 3:00
will bring up three additional cameras, the top, front and side view.
- 3:05
These cameras are orthographic,
- 3:07
which means that they display scene elements without perspective.
- 3:10
They're therefore especially useful for accurately gauging a model's size and
- 3:14
proportions.
- 3:15
Across the top edge of each of these camera views you'll see additional
- 3:18
camera-specific menus that allow you to modify display settings.
- 3:23
Moving the mouse over any of the camera view ports and
- 3:26
tapping the spacebar again, will maximize the window to full size.
- 3:30
So by positioning the mouse and
- 3:31
using the spacebar, you can easily switch views as needed.
- 3:35
Holding down the spacebar also displays Maya's various menu items
- 3:39
at the location of the mouse cursor.
- 3:41
As you become more familiar with Maya's commands you can make use of this shortcut
- 3:45
known as a marking menu.
- 3:47
To access menu items rather than moving the mouse
- 3:49
to specific regions of the screen.
- 3:52
Different window arrangements can also be specified
- 3:54
by clicking on the icons along the lower left side of the Maya UI.
- 3:58
These quick layout buttons, will also reveal additional windows useful for
- 4:03
animation, scene organization and material creation.
- 4:07
Just above the quick layout buttons on the left side of the screen
- 4:10
you'll find the Maya toolbox which includes tools for
- 4:13
selecting, translating, rotating and scaling scene elements.
- 4:17
And Maya includes an extensive set of hot keys
- 4:20
which allow even quicker access to many of these tools.
- 4:24
On the far right you'll find the channel box and attribute editor.
- 4:27
This is the section in Maya where you'll be able to view and
- 4:30
edit specific details about objects you've created within your scene.
- 4:34
For example, if I click on the cylinder I created a moment ago the channel box will
- 4:38
display information about where that object is positioned in 3D space.
- 4:42
Animators pay special attention to the channel box as they pose characters and
- 4:46
set key frames.
- 4:48
Beneath the channel box is the layers section of Maya
- 4:51
which allows us to lock and hide elements within our scene.
- 4:54
We won't need to work with layers much for
- 4:56
this project since we'll be building a relatively simple object.
- 4:59
But layers can definitely be helpful for managing larger scenes.
- 5:04
Just beneath the camera view ports, you'll see an animation timeline used for
- 5:07
adding motion to a scene.
- 5:09
And beneath the timeline is a section that allows users to enter commands
- 5:13
using the Maya scripting language.
- 5:15
We won't be working with animation or scripting in this course either but
- 5:19
it's good to have a sense of how the Maya UI's organized from the start so
- 5:22
that it doesn't feel so intimidating as you begin developing your 3D skills. | https://teamtreehouse.com/library/3d-art-with-maya-lt/getting-started-in-maya-lt/maya-user-interface-overview | CC-MAIN-2016-50 | refinedweb | 1,478 | 68.91 |
The purpose of this program is for the PC to 'generate' 4 numbers (and put them in computer[]).The purpose of this program is for the PC to 'generate' 4 numbers (and put them in computer[]).Code:#include <iostream> using namespace std; const int MAX_P = 4; const int MAX_C = 10; int inArray(int a[], int sc) { for(int i = 0; i < MAX_P; i++) { if(sc == a[i]) { return 1; } else { return 0; } } } int main() { int computer[MAX_P], user[MAX_P]; int i; for (i = 0; i < MAX_P; i++) { computer[i] = (rand() % MAX_C); cout << computer[i]; } cout << endl << endl; for (i = 0; i < MAX_P; i++) { cout << "Geef een nummer voor " << i << ": "; cin >> user[i]; } for (i = 0; i < MAX_P; i++) { if (inArray(computer, user[i])) { if (user[i] == computer[i]) { cout << "getal op de goede plek" << endl; } else { cout << "getal wel in de reeks" << endl; } } else { cout << "getal niet in reeks" << endl; } } cin.get(); return 0; }
Our user has to fill in 4 numbers and the computer matches them and tells the user which ones are on the right position or not even part of the array.
I really can't grasp why, but this code doesn't seem to work out..
If i lower MAX_P to 1 it works, but with any higher number it justs checks the first number for validity.
Maybe someone can help me with this?
thanx in advance | http://cboard.cprogramming.com/cplusplus-programming/61261-loop-doesn%27t-seem-function-properly.html | CC-MAIN-2014-52 | refinedweb | 231 | 69.96 |
Details
Bug
- Status: Resolved (View Workflow)
Major
- Resolution: Done
- 1.2.Final
-
-
- None
Description
My scenario is the following:
I have an @Alternative MockMailService class which should only be used during testing to not send out 5k mails to customers and employees when running the unit and integration test suite.
@Alternative @ApplicationScoped @Exclude(ifProjectStage=Production.class) public class MockMailService implements MailService {...}
Of course I only need to activate it in beans.xml:
<beans> <alternatives> <class>org.acme.MockMailService</class> </alternatives> </beans>
This is perfectly fine in CDI 1.0 but might be interpreted as not be allowed in the CDI 1.2 wording paragraph 5.1.1.2. "Declaring selected alternatives for a bean archive".
Please note that we introduced a check in CDI 1.0 purely to help the customer eagerly detect possible wrong configuration. E.g. if he simply has a typo in the classname. It was not intended to restrict useful use cases!
What the intention was: all is fine IF one of
- There exists a class T with the given name
- That class T (or a contained producer field or producer method) is annotated with @Alternative
- There is a Bean<T> with isAlternative() == true | https://issues.redhat.com/browse/CDI-627 | CC-MAIN-2022-27 | refinedweb | 198 | 50.84 |
This is a mobile version, full one is here.
Yegor Bugayenko
14 May 2014
Object-Oriented GitHub API
GitHub is an awesome platform for maintaining Git sources and tracking project issues. I moved all my projects (both private and public) to GitHub about three years ago and have no regrets. Moreover, GitHub gives access to almost all of its features through RESTful JSON API.
There are a few Java SDK-s that wrap and expose the API. I tried to use them, but faced a number of issues:
- They are not really object-oriented (even though one of them has a description that says it is)
- They are not based on JSR-353 (JSON Java API)
- They provide no mocking instruments
- They don't cover the entire API and can't be extended
Keeping in mind all those drawbacks, I created my own library—jcabi-github. Let's look at its most important advantages.
Object Oriented for Real
GitHub server is an object. A collection of issues is an object, an individual issue is an object, its author is an author, etc. For example, to retrieve the name of the author we use:
GitHub github = new RtGitHub(/* credentials */); Repos repos = github.repos(); Repo repo = repos.get(new Coordinates.Simple("jcabi/jcabi-github")); Issues issues = github.issues(); Issue issue = issues.get(123); User author = new Issue.Smart(issue).author(); System.out.println(author.name());
Needless to say,
GitHub,
Repos,
Repo,
Issues,
Issue,
and
User are interfaces. Classes that implement them are not visible in the library.
Mock Engine
MkGitHub class is a mock version of a GitHub server. It behaves
almost exactly the same as a real server and is the perfect
instrument for unit testing. For example, say that you're
testing a method that is supposed to post a new issue to GitHub
and add a message into it. Here is how the unit test would look:
public class FooTest { @Test public void createsIssueAndPostsMessage() { GitHub github = new MkGitHub("jeff"); github.repos().create( Json.createObjectBuilder().add("name", owner).build() ); new Foo().doTheThing(github); MatcherAssert.assertThat( github.issues().get(1).comments().iterate(), Matchers.not(Matchers.emptyIterable()) ); } }
This is much more convenient and compact than traditional mocking via Mockito or a similar framework.
Extensible
It is based on JSR-353 and uses
jcabi-http for HTTP request
processing. This combination makes it highly customizable and extensible,
when some GitHub feature is not covered by the library (and there are many of them).
For example, you want to get the value of
hireable attribute of a
User.
Class
User.Smart doesn't have a method for it. So, here is how you would get it:
User user = // get it somewhere // name() method exists in User.Smart, let's use it System.out.println(new User.Smart(user).name()); // there is no hireable() method there System.out.println(user.json().getString("hireable"));
We're using method
json() that returns an instance of
JsonObject
from JSR-353 (part of Java7).
No other library allows such direct access to JSON objects returned by the GitHub server.
Let's see another example. Say, you want to use some feature
from GitHub that is not covered by the API. You get a
Request
object from
GitHub interface and directly access the HTTP entry point of the server:
GitHub github = new RtGitHub(oauthKey); int found = github.entry() .uri().path("/search/repositories").back() .method(Request.GET) .as(JsonResponse.class) .getJsonObject() .getNumber("total_count") .intValue();
jcabi-http HTTP client is used by jcabi-github.
Immutable
All classes are truly immutable and annotated with
@Immutable.
This may sound like a minor benefit, but it was very important for me.
I'm using this annotation in all my projects to ensure my classes are truly immutable.
Version 0.8
A few days ago we released the latest version 0.8. It is a major release, that included over 1200 commits. It covers the entire GitHub API and is supposed to be very stable. The library ships as a JAR dependency in Maven Central (get its latest versions in Maven Central):
<dependency> <groupId>com.jcabi</groupId> <artifactId>jcabi-github</artifactId> </dependency> | http://www.yegor256.com/2014/05/14/object-oriented-github-java-sdk.amp.html | CC-MAIN-2018-09 | refinedweb | 686 | 57.37 |
memcached_callback_get, memcached_callback_set - Get and set a callback
C Client Library for memcached (libmemcached, -lmemcached)
#include <memcached.h> memcached_return_t memcached_callback_set (memcached_st *ptr, memcached_callback_t flag, void *data); void * memcached_callback_get (memcached_st *ptr, memcached_callback_t flag, memcached_return_t *error);
libmemcached(3) can have callbacks set key execution points. These either provide function calls at points in the code, or return pointers to structures for particular usages.
memcached_callback_get() takes a callback flag and returns the structure or function set by memcached_callback_set().
memcached_callback_set() changes the function/structure assigned by a callback flag. No connections are reset.
You can use MEMCACHED_CALLBACK_USER_DATA to provide custom context if required for any of the callbacks
When memcached_delete() is called this function will be excuted. At the point of its execution all connections have been closed.
When memcached_delete() is called this function will be excuted. At the point of its execution all connections have been closed.
You can set a value which will be used to create a domain for your keys. The value specified here will be prefixed to each of your keys. The value can not be greater then MEMCACHED_PREFIX_KEY_MAX_SIZE - 1 and will reduce MEMCACHED_MAX_KEY by the value of your key. The prefix key is only applied to the primary key, not the master key. MEMCACHED_FAILURE will be returned if no key is set. In the case of a key which is too long MEMCACHED_BAD_KEY_PROVIDED will be returned.
This allows you to store a pointer to a specifc piece of data. This can be retrieved from inside of memcached_fetch_execute(). Cloning a memcached_st will copy the pointer to the clone.
DEPRECATED: use memcached_set_memory_allocators instead.
DEPRECATED: use memcached_set_memory_allocators instead.
DEPRECATED: use memcached_set_memory_allocators instead.
This function implements the read through cache behavior. On failure of retrieval this callback will be called. You are responsible for populating the result object provided. This result object will then be stored in the server and returned to the calling process. You must clone the memcached_st in order to make use of it. The value will be stored only if you return MEMCACHED_SUCCESS or MEMCACHED_BUFFERED. Returning MEMCACHED_BUFFERED will cause the object to be buffered and not sent immediatly (if this is the default behavior based on your connection setup this will happen automatically).
The prototype for this is: memcached_return_t (*memcached_trigger_key)(memcached_st *ptr, char *key, size_t key_length, memcached_result_st *result);
This function implements a trigger upon successful deletion of a key. The memcached_st structure will need to be cloned in order to make use of it.
The prototype for this is: typedef memcached_return_t (*memcached_trigger_delete_key)(memcached_st *ptr, char *key, size_t key_length);
memcached_callback_get() return the function or structure that was provided. Upon error, nothing is set, null is returned, and the memcached_return_t argument is set to MEMCACHED_FAILURE.
memcached_callback_set() returns MEMCACHED_SUCCESS upon successful setting, otherwise MEMCACHED_FAILURE on error.
To find out more information please check:
Brian Aker, <brian@tangent.org>
memcached(1) libmemcached(3) memcached_strerror(3) | http://search.cpan.org/~timb/Memcached-libmemcached-1.001701/lib/Memcached/libmemcached/memcached_callback.pm | CC-MAIN-2017-13 | refinedweb | 472 | 50.02 |
I'm curious as to where OSX stores the names of mounted volumes. For example, if I connect my external USB hard drive, mount it and change the name to something else, how does OSX remember the name the next time I mount it? It seems like this should be stored on the volume itself, but I don't see any file that might contain this name. The only file that's created is the ".DS_Store" file, but this does not contain the volume name as far as I can tell.
If it's not store on the volume then how does the OS realize it's the same device being connected?
This information is stored outside the file systems in the device's partition table (or equivalent data structure). That's why you don't have to mount volumes to e.g. see their names in Disk Utility.
It depends on how the disks are formatted, modern OS X uses GPT by default.
Internally, OS X also uses GUIDs/UUIDs (128 bit numbers) to identify volumes.
Use /usr/sbin/diskutil to access metadata about disks and volumes.
/usr/sbin/diskutil
I posted this as a comment to the accepted answer, but I think it really is an answer, so I'm reposting it.
I think it is stored on the volume itself, but not in a file. I'm not 100% certain however of where it is on the disk. I believe it is stored in what Apple calls the "Finder Info" of the volume (which, if I remember correctly, is part of the volume header data which is stored in sector 2 of the volume). Note that an HFS+ volume has a name even when it is not saved on a partition (for example if it's just a file, as is the case for so called "disk images", which really, at least in some cases, are just "partition images").
Even the volume identifier (which is not actually an UUID and is 64 bits only) is stored there.
The volume UUID that Apple shows you (which is 128 bits) is computed every time for display purposes using the Version 3 UUID algorithm from the 64 bits volume id and a fixed "namespace" id.
The UUID of the partition which is stored in the GPT is a separate thing (Apple calls it the "Media UUID").
By posting your answer, you agree to the privacy policy and terms of service.
asked
3 years ago
viewed
928 times
active
5 months ago | http://superuser.com/questions/369796/where-does-osx-store-volume-names/369799 | CC-MAIN-2015-22 | refinedweb | 421 | 69.72 |
I'm reading through the C++ header file supplied with the 2011-12-19 release of the AMD APP SDK.
I noticed an unimplemented static member function:
const cl::Platform cl::Platform::null()
It has no docs. What is that all about?
Also, I was trying to sort out what this function did:
static cl_int cl::Platform::get(cl::Platform *platform)
Also no docs. It appears to assign the pointer with a new Platform that's constructed from the first Platform ID. I actually thought there was a memory leak, but then I looked up alloca(). I'd have just used a boost::scoped_array<> for that.
This appears to do the same thing, though returning the first Platform instead of assigning it through a provided pointer:
static cl::Platform cl::Platform::get (cl_int *errResult=NULL)
Does anyone know if these are in the standard? Is there any other doc describing the C++ interface besides opencl-cplusplus-1.1.pdf on Khronos' site?
A scoped_array would work, but boost wasn't a possibility here for two reasons. The first is that boost was disallowed int he design, the second is that scoped_ptr would do a heap allocation which is, I think, not allowed in the C++ bindings as per the requirements (they would be unusable for too many people if they did heap allocation).
The C++ bindings could certainly do with more documentation. The purpose of those get methods is to support default platforms, queues etc that we were unable to support in the standard OpenCL interfaces.
Thanks for confirming my understanding of the functions.
The bit about scoped_array<> was simply meant as an example of an alternate approach, though I agree with your decision not to introduce a boost dependency. Given the availability of a C interface that's already optimized for use on heavily restricted platforms, my own opinion would be to aim a bit higher, with the C++ interface. Though I have the luxury of having opinions without responsibility in the matter.
I suppose alloca() could have portability implications, but I guess those can be addressed if & when necessary.
--
One idea worth considering might be to place nonstandard extensions in a subclass. That would tend to minimize conflicts with future versions of the the standard. The subclass could even have the same name, differing only in the namespace in which it exists. | https://community.amd.com/t5/archives-discussions/questions-about-cl-platform/m-p/96948 | CC-MAIN-2022-05 | refinedweb | 394 | 62.78 |
I am trying to create a program where it will read in the student name, nationality, and grades, and calculate their tuition fee based on those information.
I began to write a bit of code by I have ran into some problems. When I try to run my code, the compiler says "warning C4700: local variable 'name' used without having been initialized" but continues to run the code. Then the program shuts down. What is happening?
In addition, on my fifth line I have "char *firstName;". I know that char* represents a pointer, but what does that mean and why can't I just use a string?
Code:
#include <iostream>
using namespace std;
struct Student {
char *firstName;
char *lastName;
bool canadianship;
int grades[10];
};
Student newStudent[10];
void addFirstName() {
char *name;
cout << "Student Name: ";
cin >> name;
newStudent[0].firstName = name;
}
int main() {
addFirstName();
cout << newStudent[0].firstName;
return 0;
} | http://cboard.cprogramming.com/cplusplus-programming/119736-noob-program-keeps-crashing-printable-thread.html | CC-MAIN-2015-22 | refinedweb | 149 | 65.52 |
new Rock();
storage is allocated and the constructor is called. It is guaranteed that the object will be properly initialized before you can get your hands on it.
Note that the coding style of making the first letter of all methods lowercase does not apply to constructors, since the name of the constructor must match the name of the class exactly.
Tree t = new Tree(12); // 12-foot tree
If Tree(int) is your only constructor, then the compiler won't let you create a Tree object any other way.
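A minimal sketch of the overloaded-constructor solution (the class body here is invented for illustration, not taken from the book's listings):

```java
class Tree {
  int height;
  Tree() { height = 0; }                       // a "seedling"
  Tree(int initialHeight) { height = initialHeight; }
}

public class TreeDemo {
  public static void main(String[] args) {
    Tree seedling = new Tree();                // calls Tree()
    Tree oak = new Tree(12);                   // calls Tree(int)
    System.out.println(seedling.height + " " + oak.height); // prints "0 12"
  }
}
```

With both constructors present, both creation forms compile; with only Tree(int), the no-arg form is rejected.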
You refer to all objects and methods by using names. Well-chosen names make it easier for you and others to understand your code.
Most programming languages (C in particular) require you to have a unique identifier for each function. So you could not have one function called print( ) for printing integers and another called print( ) for printing floats; each function requires a unique name.
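In Java the restriction disappears: both methods can share the name print( ), and the compiler selects one by argument type. A small sketch (returning strings rather than printing, so the results are easy to inspect; the names are illustrative):

```java
public class PrintOverload {
  // Two methods sharing one name; the argument type selects which runs.
  static String print(int i) { return "int: " + i; }
  static String print(float f) { return "float: " + f; }

  public static void main(String[] args) {
    System.out.println(print(10));    // resolves to print(int)
    System.out.println(print(6.2f));  // resolves to print(float)
  }
}
```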
If the methods have the same name, how can Java know which method you mean? There's a simple rule: each overloaded method must take a unique list of argument types.
If you think about this for a second, it makes sense. How else could a programmer tell the difference between two methods that have the same name, other than by the types of their arguments?
Even differences in the ordering of arguments are sufficient to distinguish two methods: (Although you don't normally want to take this approach, as it produces difficult-to-maintain code.)
//: c04:OverloadingOrder.java
// Overloading based on the order of the arguments.
import com.bruceeckel.simpletest.*;

public class OverloadingOrder {
  static Test monitor = new Test();
  static void print(String s, int i) {
    System.out.println("String: " + s + ", int: " + i);
  }
  static void print(int i, String s) {
    System.out.println("int: " + i + ", String: " + s);
  }
  public static void main(String[] args) {
    print("String first", 11);
    print(99, "Int first");
    monitor.expect(new String[] {
      "String: String first, int: 11",
      "int: 99, String: Int first"
    });
  }
} ///:~
The two print( ) methods have identical arguments, but the order is different, and that's what makes them distinct.
A primitive can be automatically promoted from a smaller type to a larger one, and this can be slightly confusing in combination with overloading. The following example demonstrates what happens when a primitive is handed to an overloaded method:
//: c04:PrimitiveOverloading.java
// Promotion of primitives and overloading.
import com.bruceeckel.simpletest.*;

public class PrimitiveOverloading {
  static Test monitor = new Test();
  void f1(char x) { System.out.println("f1(char)"); }
  void f1(byte x) { System.out.println("f1(byte)"); }
  void f1(short x) { System.out.println("f1(short)"); }
  void f1(int x) { System.out.println("f1(int)"); }
  void f1(long x) { System.out.println("f1(long)"); }
  void f1(float x) { System.out.println("f1(float)"); }
  void f1(double x) { System.out.println("f1(double)"); }
  void f2(byte x) { System.out.println("f2(byte)"); }
  void f2(short x) { System.out.println("f2(short)"); }
  void f2(int x) { System.out.println("f2(int)"); }
  void f2(long x) { System.out.println("f2(long)"); }
  void f2(float x) { System.out.println("f2(float)"); }
  void f2(double x) { System.out.println("f2(double)"); }
  void f3(short x) { System.out.println("f3(short)"); }
  void f3(int x) { System.out.println("f3(int)"); }
  void f3(long x) { System.out.println("f3(long)"); }
  void f3(float x) { System.out.println("f3(float)"); }
  void f3(double x) { System.out.println("f3(double)"); }
  void f4(int x) { System.out.println("f4(int)"); }
  void f4(long x) { System.out.println("f4(long)"); }
  void f4(float x) { System.out.println("f4(float)"); }
  void f4(double x) { System.out.println("f4(double)"); }
  void f5(long x) { System.out.println("f5(long)"); }
  void f5(float x) { System.out.println("f5(float)"); }
  void f5(double x) { System.out.println("f5(double)"); }
  void f6(float x) { System.out.println("f6(float)"); }
  void f6(double x) { System.out.println("f6(double)"); }
  void f7(double x) { System.out.println("f7(double)"); }
  void testConstVal() {
    System.out.println("Testing with 5");
    f1(5); f2(5); f3(5); f4(5); f5(5); f6(5); f7(5);
  }
  void testChar() {
    char x = 'x';
    System.out.println("char argument:");
    f1(x); f2(x); f3(x); f4(x); f5(x); f6(x); f7(x);
  }
  void testByte() {
    byte x = 0;
    System.out.println("byte argument:");
    f1(x); f2(x); f3(x); f4(x); f5(x); f6(x); f7(x);
  }
  void testShort() {
    short x = 0;
    System.out.println("short argument:");
    f1(x); f2(x); f3(x); f4(x); f5(x); f6(x); f7(x);
  }
  void testInt() {
    int x = 0;
    System.out.println("int argument:");
    f1(x); f2(x); f3(x); f4(x); f5(x); f6(x); f7(x);
  }
  void testLong() {
    long x = 0;
    System.out.println("long argument:");
    f1(x); f2(x); f3(x); f4(x); f5(x); f6(x); f7(x);
  }
  void testFloat() {
    float x = 0;
    System.out.println("float argument:");
    f1(x); f2(x); f3(x); f4(x); f5(x); f6(x); f7(x);
  }
  void testDouble() {
    double x = 0;
    System.out.println("double argument:");
    f1(x); f2(x); f3(x); f4(x); f5(x); f6(x); f7(x);
  }
  public static void main(String[] args) {
    PrimitiveOverloading p = new PrimitiveOverloading();
    p.testConstVal();
    p.testChar();
    p.testByte();
    p.testShort();
    p.testInt();
    p.testLong();
    p.testFloat();
    p.testDouble();
    monitor.expect(new String[] {
      "Testing with 5",
      "f1(int)", "f2(int)", "f3(int)", "f4(int)",
      "f5(long)", "f6(float)", "f7(double)",
      "char argument:",
      "f1(char)", "f2(int)", "f3(int)", "f4(int)",
      "f5(long)", "f6(float)", "f7(double)",
      "byte argument:",
      "f1(byte)", "f2(byte)", "f3(short)", "f4(int)",
      "f5(long)", "f6(float)", "f7(double)",
      "short argument:",
      "f1(short)", "f2(short)", "f3(short)", "f4(int)",
      "f5(long)", "f6(float)", "f7(double)",
      "int argument:",
      "f1(int)", "f2(int)", "f3(int)", "f4(int)",
      "f5(long)", "f6(float)", "f7(double)",
      "long argument:",
      "f1(long)", "f2(long)", "f3(long)", "f4(long)",
      "f5(long)", "f6(float)", "f7(double)",
      "float argument:",
      "f1(float)", "f2(float)", "f3(float)", "f4(float)",
      "f5(float)", "f6(float)", "f7(double)",
      "double argument:",
      "f1(double)", "f2(double)", "f3(double)", "f4(double)",
      "f5(double)", "f6(double)", "f7(double)"
    });
  }
} ///:~
You'll see that the constant value 5 is treated as an int, so if an overloaded method is available that takes an int, it is used. In all other cases, if you have a data type that is smaller than the argument in the method, that data type is promoted. char produces a slightly different effect, since if it doesn't find an exact char match, it is promoted to int.
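The char case is worth a second look. In this stripped-down sketch (the method names are invented for illustration), no g(char) overload exists, so the char is promoted straight to int:

```java
public class CharPromotion {
  static String g(int i) { return "g(int)"; }
  static String g(long l) { return "g(long)"; }

  public static void main(String[] args) {
    char c = 'x';
    System.out.println(g(c)); // no g(char) exists, so c promotes to int: "g(int)"
  }
}
```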
What happens if your argument is bigger than the argument expected by the overloaded method? A modification of the preceding program gives the answer:
//: c04:Demotion.java
// Demotion of primitives and overloading.
import com.bruceeckel.simpletest.*;

public class Demotion {
  static Test monitor = new Test();
  void f1(char x) { System.out.println("f1(char)"); }
  void f1(byte x) { System.out.println("f1(byte)"); }
  void f1(short x) { System.out.println("f1(short)"); }
  void f1(int x) { System.out.println("f1(int)"); }
  void f1(long x) { System.out.println("f1(long)"); }
  void f1(float x) { System.out.println("f1(float)"); }
  void f1(double x) { System.out.println("f1(double)"); }
  void f2(char x) { System.out.println("f2(char)"); }
  void f2(byte x) { System.out.println("f2(byte)"); }
  void f2(short x) { System.out.println("f2(short)"); }
  void f2(int x) { System.out.println("f2(int)"); }
  void f2(long x) { System.out.println("f2(long)"); }
  void f2(float x) { System.out.println("f2(float)"); }
  void f3(char x) { System.out.println("f3(char)"); }
  void f3(byte x) { System.out.println("f3(byte)"); }
  void f3(short x) { System.out.println("f3(short)"); }
  void f3(int x) { System.out.println("f3(int)"); }
  void f3(long x) { System.out.println("f3(long)"); }
  void f4(char x) { System.out.println("f4(char)"); }
  void f4(byte x) { System.out.println("f4(byte)"); }
  void f4(short x) { System.out.println("f4(short)"); }
  void f4(int x) { System.out.println("f4(int)"); }
  void f5(char x) { System.out.println("f5(char)"); }
  void f5(byte x) { System.out.println("f5(byte)"); }
  void f5(short x) { System.out.println("f5(short)"); }
  void f6(char x) { System.out.println("f6(char)"); }
  void f6(byte x) { System.out.println("f6(byte)"); }
  void f7(char x) { System.out.println("f7(char)"); }
  void testDouble() {
    double x = 0;
    System.out.println("double argument:");
    f1(x); f2((float)x); f3((long)x); f4((int)x);
    f5((short)x); f6((byte)x); f7((char)x);
  }
  public static void main(String[] args) {
    Demotion p = new Demotion();
    p.testDouble();
    monitor.expect(new String[] {
      "double argument:",
      "f1(double)",
      "f2(float)",
      "f3(long)",
      "f4(int)",
      "f5(short)",
      "f6(byte)",
      "f7(char)"
    });
  }
} ///:~
Here, the methods take narrower primitive values. If your argument is wider, then you must cast to the necessary type by placing the type name inside parentheses. If you don't do this, the compiler will issue an error message.
You should be aware that this is a narrowing conversion, which means you might lose information during the cast. This is why the compiler forces you to do it, to flag the narrowing conversion.
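A compact sketch of the narrowing rule (identifiers invented for illustration): the call compiles only once you acknowledge the possible information loss with a cast:

```java
public class Narrowing {
  static String h(int i) { return "h(int)"; }

  public static void main(String[] args) {
    double d = 3.9;
    // h(d);                         // would not compile: possible lossy conversion
    System.out.println(h((int) d)); // explicit narrowing cast; truncates 3.9 down to 3
  }
}
```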
It is common to wonder, "Why only class names and method argument lists? Why not distinguish between methods based on their return values?" For example, these two methods, which have the same name and arguments, are easily distinguished from each other:

void f() {}
int f() {}

This might work fine as long as the compiler can unequivocally determine the meaning from the context, as in int x = f(). However, you can also call a method and ignore the return value; this is often referred to as calling a method for its side effect, since you don't care about the return value, but instead want the other effects of the method call. So if you call the method this way:
f();
how can Java determine which f( ) should be called? And how could someone reading the code see it? Because of this sort of problem, you cannot use return value types to distinguish overloaded methods.
As mentioned previously, a default constructor (a.k.a. a no-arg constructor) is one without arguments that is used to create a basic object. If you create a class that has no constructors, the compiler will automatically create a default constructor for you. For example: Feedback
//: c04:DefaultConstructor.java

class Bird {
  int i;
}

public class DefaultConstructor {
  public static void main(String[] args) {
    Bird nc = new Bird(); // Default!
  }
} ///:~
The line
new Bird();
creates a new object and calls the default constructor, even though one was not explicitly defined. Without it, we would have no method to call to build our object. However, if you define any constructors (with or without arguments), the compiler will not synthesize one for you:
class Hat {
  Hat(int i) {}
  Hat(double d) {}
}
Now if you say:
new Hat();
the compiler will complain that it cannot find a constructor that matches. It's as if when you don't put in any constructors, the compiler says "You are bound to need some constructor, so let me make one for you." But if you write a constructor, the compiler says "You've written a constructor so you know what you're doing; if you didn't put in a default it's because you meant to leave it out."
If you have two objects of the same type called a and b, you might wonder how it is that you can call a method f( ) for both those objects:
class Banana {
  void f(int i) { /* ... */ }
}
Banana a = new Banana(), b = new Banana();
a.f(1);
b.f(2);
If there's only one method called f( ), how can that method know whether it's being called for the object a or b?
To allow you to write the code in a convenient object-oriented syntax in which you "send a message to an object," the compiler does some undercover work for you. There's a secret first argument passed to the method f( ), and that argument is the reference to the object that's being manipulated. So the two method calls become something like:
Banana.f(a,1);
Banana.f(b,2);
This is internal and you can't write these expressions and get the compiler to accept them, but it gives you an idea of what's happening.
Suppose you're inside a method and you'd like to get the reference to the current object. Since that reference is passed secretly by the compiler, there's no identifier for it. However, for this purpose there's a keyword: this. The this keyword, which can be used only inside a method, produces the reference to the object the method has been called for. You can treat this reference just like any other object reference. Keep in mind that if you're calling a method of your class from within another method of your class, you don't need to use this. You simply call the method. The current this reference is automatically used for the other method. Thus you can say:
class Apricot {
  void pick() { /* ... */ }
  void pit() { pick(); /* ... */ }
}
Inside pit( ), you could say this.pick( ) but there's no need to.[20] The compiler does it for you automatically. The this keyword is used only for those special cases in which you need to explicitly use the reference to the current object. For example, it's often used in return statements when you want to return the reference to the current object:
//: c04:Leaf.java
// Simple use of the "this" keyword.
import com.bruceeckel.simpletest.*;

public class Leaf {
  static Test monitor = new Test();
  int i = 0;
  Leaf increment() {
    i++;
    return this;
  }
  void print() {
    System.out.println("i = " + i);
  }
  public static void main(String[] args) {
    Leaf x = new Leaf();
    x.increment().increment().increment().print();
    monitor.expect(new String[] {
      "i = 3"
    });
  }
} ///:~
Because increment( ) returns the reference to the current object via the this keyword, multiple operations can easily be performed on the same object.
When you write several constructors for a class, there are times when you'd like to call one constructor from another to avoid duplicating code. You can make such a call by using the this keyword.
Normally, when you say this, it is in the sense of "this object" or "the current object," and by itself it produces the reference to the current object. In a constructor, the this keyword takes on a different meaning when you give it an argument list. It makes an explicit call to the constructor that matches that argument list. Thus you have a straightforward way to call other constructors:
//: c04:Flower.java
// Calling constructors with "this."
import com.bruceeckel.simpletest.*;

public class Flower {
  static Test monitor = new Test();
  int petalCount = 0;
  String s = new String("null");
  Flower(int petals) {
    petalCount = petals;
    System.out.println(
      "Constructor w/ int arg only, petalCount= "
      + petalCount);
  }
  Flower(String ss) {
    System.out.println(
      "Constructor w/ String arg only, s=" + ss);
    s = ss;
  }
  Flower(String s, int petals) {
    this(petals);
    //! this(s); // Can't call two!
    this.s = s; // Another use of "this"
    System.out.println("String & int args");
  }
  Flower() {
    this("hi", 47);
    System.out.println("default constructor (no args)");
  }
  void print() {
    //! this(11); // Not inside non-constructor!
    System.out.println(
      "petalCount = " + petalCount + " s = " + s);
  }
  public static void main(String[] args) {
    Flower x = new Flower();
    x.print();
    monitor.expect(new String[] {
      "Constructor w/ int arg only, petalCount= 47",
      "String & int args",
      "default constructor (no args)",
      "petalCount = 47 s = hi"
    });
  }
} ///:~
The constructor Flower(String s, int petals) shows that, while you can call one constructor using this, you cannot call two. In addition, the constructor call must be the first thing you do, or you'll get a compiler error message.
This example also shows another way you'll see this used. Since the name of the argument s and the name of the member data s are the same, there's an ambiguity. You can resolve it using this.s, to say that you're referring to the member data. You'll often see this form used in Java code, and it's used in numerous places in this book.
In print( ) you can see that the compiler won't let you call a constructor from inside any method other than a constructor.
With the this keyword in mind, you can more fully understand what it means to make a method static. It means that there is no this for that particular method. You cannot call non-static methods from inside static methods[21] (although the reverse is possible), and you can call a static method for the class itself, without any object. In fact, that's primarily what a static method is for. It's as if you're creating the equivalent of a global function (from C). However, global functions are not permitted in Java, and putting the static method inside a class allows it access to other static methods and to static fields.
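The "no this" rule can be seen directly in code. The following sketch is my own illustration, not from the book; the class and member names (StaticDemo, nonStatic, staticMethod) are invented for the example. A static method can touch static data, but it cannot call an instance method directly because it has no object to call it on:

```java
// Illustrative sketch: a static method has no "this", so it can
// reach static members but not instance members.
public class StaticDemo {
  int i = 0;             // instance field: one copy per object
  static int count = 0;  // static field: one copy per class

  void nonStatic() { i++; } // implicitly receives "this"

  static void staticMethod() {
    count++;             // OK: static data needs no object
    // nonStatic();      // Compile error: no "this" available here
  }

  public static void main(String[] args) {
    StaticDemo.staticMethod();      // called on the class itself
    StaticDemo d = new StaticDemo();
    d.nonStatic();                  // called through an object reference
    System.out.println(StaticDemo.count + " " + d.i);
  }
}
```

Uncommenting the nonStatic( ) call inside staticMethod( ) produces the compiler error the text describes, since there is no object reference to pass as the hidden first argument.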
Some people argue that static methods are not object-oriented, since they do have the semantics of a global function; with a static method, you don't send a message to an object, since there's no this. This is probably a fair argument, and if you find yourself using a lot of static methods, you should probably rethink your strategy. However, statics are pragmatic, and there are times when you genuinely need them, so whether or not they are "proper OOP" should be left to the theoreticians. Indeed, even Smalltalk has the equivalent in its "class methods."
Programmers know about the importance of initialization, but often forget the importance of cleanup. After all, who needs to clean up an int? But with libraries, simply "letting go" of an object once you're done with it is not always safe. Of course, Java has the garbage collector to reclaim the memory of objects that are no longer used. Now consider an unusual case: suppose your object allocates "special" memory without using new. The garbage collector only knows how to release memory allocated with new, so it won't know how to release the object's "special" memory. To handle this case, Java provides a method called finalize( ) that you can define for your class. Here's how it's supposed to work. When the garbage collector is ready to release the storage used for your object, it will first call finalize( ), and only on the next garbage-collection pass will it reclaim the object's memory. So if you choose to use finalize( ), it gives you the ability to perform some important cleanup at the time of garbage collection. It is important to distinguish between C++ and Java here, because in C++ objects always get destroyed (in a bug-free program), whereas in Java objects do not always get garbage collected. Or, put another way:
1. Your objects might not get garbage collected.
2. Garbage collection is not destruction.

If there is some activity that must be performed before you no longer need an object, you must perform that activity yourself. For example, suppose that in the process of creating your object, it draws itself on the screen. If you don't explicitly erase its image from the screen, it might never get cleaned up. If you put some kind of erasing functionality inside finalize( ), then if an object is garbage collected and finalize( ) is called (there's no guarantee this will happen), the image will first be removed from the screen, but if it isn't, the image will remain.
It would seem that finalize( ) is in place because of the possibility that you'll do something C-like by allocating memory using a mechanism other than the normal one in Java. This can happen primarily through native methods, which are a way to call non-Java code from Java (native methods are covered in the electronic 2nd edition of this book, available on the book's CD ROM). C and C++ are the only languages currently supported by native methods, but since they can call subprograms in other languages, you can effectively call anything. Inside the non-Java code, C's malloc( ) family of functions might be called to allocate storage, and unless you call free( ), that storage will not be released, causing a memory leak. Of course, free( ) is a C and C++ function, so you'd need to call it in a native method inside your finalize( ).
After reading this, you probably get the idea that you won't use finalize( ) much.[22] You're correct; it is not the appropriate place for normal cleanup to occur. So where should normal cleanup be performed? To clean up an object, the user of that object must call a cleanup method at the point the cleanup is desired. This collides a bit with the C++ concept of the destructor. In C++, all objects are destroyed. Or rather, all objects should be destroyed. If the C++ object is created as a local (i.e., on the stack, which is not possible in Java), then the destruction happens at the closing curly brace of the scope in which the object was created. If the object was created using new (as in Java), the destructor is called when the programmer calls the C++ operator delete (which doesn't exist in Java).
In contrast, Java doesn't allow you to create local objects; you must always use new. But in Java, there's no "delete" to call to release the object, because the garbage collector releases the storage for you. So from a simplistic standpoint, you could say that because of garbage collection, Java has no destructor. You'll see as this book progresses, however, that the presence of a garbage collector does not remove the need for or utility of destructors. (And you should never call finalize( ) directly, so that's not an appropriate avenue for a solution.) If you want some kind of cleanup performed other than storage release, you must still explicitly call an appropriate method in Java, which is the equivalent of a C++ destructor without the convenience.
Remember that neither garbage collection nor finalization is guaranteed. If the JVM isn't close to running out of memory, then it might not waste time recovering memory through garbage collection.
If you come from a programming language where allocating objects on the heap is expensive, you may naturally assume that Java's scheme of allocating everything (except primitives) on the heap is also expensive. However, it turns out that the garbage collector can have a significant impact on increasing the speed of object creation. This might sound a bit odd at first, that storage release affects storage allocation, but it's the way some JVMs work, and it means that allocating storage for heap objects in Java can be nearly as fast as creating storage on the stack in other languages.
For example, you can think of the C++ heap as a yard where each object stakes out its own piece of turf. This real estate can become abandoned sometime later and must be reused. In some JVMs, the Java heap is quite different; it's more like a conveyor belt that moves forward every time you allocate a new object. This means that object storage allocation is remarkably rapid. The "heap pointer" is simply moved forward into virgin territory, so it's effectively the same as C++'s stack allocation. (Of course, there's a little extra overhead for bookkeeping, but it's nothing like searching for storage.)
Now you might observe that the heap isn't in fact a conveyor belt, and if you treat it that way, you'll eventually start paging memory a lot (which is a big performance hit) and later run out. The trick is that the garbage collector steps in, and while it collects the garbage it compacts all the objects in the heap so that you've effectively moved the heap pointer closer to the beginning of the conveyor belt and farther away from a page fault. The garbage collector rearranges things and makes it possible for the high-speed, infinite-free-heap model to be used while allocating storage.
To understand how this works, you need to get a little better idea of the way different garbage collector (GC) schemes work. A simple but slow garbage collection technique is called reference counting. This means that each object contains a reference counter, and every time a reference is attached to an object, the reference count is increased. Every time a reference goes out of scope or is set to null, the reference count is decreased. Thus, managing reference counts is a small but constant overhead that happens throughout the lifetime of your program. The garbage collector moves through the entire list of objects, and when it finds one with a reference count of zero it releases that storage. The one drawback is that if objects circularly refer to each other they can have nonzero reference counts while still being garbage. Locating such self-referential groups requires significant extra work for the garbage collector. Reference counting is commonly used to explain one kind of garbage collection, but it doesn't seem to be used in any JVM implementations.
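As a toy illustration of the bookkeeping just described, here is a minimal reference-counting sketch. This is purely illustrative: as the text notes, real JVMs do not manage memory this way, and all the names here (Counted, attach, detach, count) are invented for the example:

```java
// A toy reference-counting sketch. Each "attach" models a reference
// being pointed at the object; each "detach" models a reference going
// out of scope or being set to null. When the count hits zero, the
// storage would be released.
public class RefCountSketch {
  static class Counted {
    private int refCount = 0;
    private final String name;
    Counted(String name) { this.name = name; }
    void attach() { refCount++; }
    void detach() {
      if (--refCount == 0)
        System.out.println("releasing " + name);
    }
    int count() { return refCount; }
  }

  public static void main(String[] args) {
    Counted c = new Counted("c");
    c.attach(); // first reference points at the object
    c.attach(); // a second reference (e.g., an assignment)
    c.detach(); // one reference goes out of scope; still alive
    c.detach(); // count reaches zero: storage is released
  }
}
```

Note that the circular-reference drawback in the text falls out of this scheme directly: two objects holding references to each other would each keep the other's count above zero forever.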
In faster schemes, garbage collection is not based on reference counting. Instead, it is based on the idea that any nondead object must ultimately be traceable back to a reference that lives either on the stack or in static storage. The chain might go through several layers of objects. Thus, if you start in the stack and the static storage area and walk through all the references, you'll find all the live objects. For each reference that you find, you must trace into the object that it points to and then follow all the references in that object, tracing into the objects they point to, etc., until you've moved through the entire web that originated with the reference on the stack or in static storage. Each object that you move through must still be alive. Note that there is no problem with detached self-referential groups; these are simply not found, and are therefore automatically garbage.
In the approach described here, the JVM uses an adaptive garbage-collection scheme, and what it does with the live objects that it locates depends on the variant currently being used. One of these variants is stop-and-copy. This means that, for reasons that will become apparent, the program is first stopped (this is not a background collection scheme). Then, each live object that is found is copied from one heap to another, leaving behind all the garbage. In addition, as the objects are copied into the new heap, they are packed end-to-end, thus compacting the new heap (and allowing new storage to simply be reeled off the end as previously described).
Of course, when an object is moved from one place to another, all references that point at (i.e., that reference) the object must be changed. The reference that goes from the heap or the static storage area to the object can be changed right away, but there can be other references pointing to this object that will be encountered later during the "walk." These are fixed up as they are found (you could imagine a table that maps old addresses to new ones).
There are two issues that make these so-called copy collectors inefficient. The first is the idea that you have two heaps and you slosh all the memory back and forth between these two separate heaps, maintaining twice as much memory as you actually need. Some JVMs deal with this by allocating the heap in chunks as needed and simply copying from one chunk to another.
The second issue is the copying. Once your program becomes stable, it might be generating little or no garbage. Despite that, a copy collector will still copy all the memory from one place to another, which is wasteful. To prevent this, some JVMs detect that no new garbage is being generated and switch to a different scheme (this is the "adaptive" part). This other scheme is called mark-and-sweep, and it's what earlier versions of Sun's JVM used all the time. For general use, mark-and-sweep is fairly slow, but when you know you're generating little or no garbage, it's fast.
Mark-and-sweep follows the same logic of starting from the stack and static storage and tracing through all the references to find live objects. However, each time it finds a live object, that object is marked by setting a flag in it, but the object isn't collected yet. Only when the marking process is finished does the sweep occur. During the sweep, the dead objects are released. However, no copying happens, so if the collector chooses to compact a fragmented heap, it does so by shuffling objects around.
"Stop-and-copy" refers to the idea that this type of garbage collection is not done in the background; instead, the program is stopped while the garbage collection occurs. In the Sun literature you'll find many references to garbage collection as a low-priority background process, but it turns out that the garbage collection was not implemented that way, at least in earlier versions of the Sun JVM. Instead, the Sun garbage collector ran when memory got low. In addition, mark-and-sweep requires that the program be stopped.
As previously mentioned, in the JVM described here memory is allocated in big blocks. If you allocate a large object, it gets its own block. Strict stop-and-copy requires copying every live object from the source heap to a new heap before you can free the old one, which translates to lots of memory. With blocks, the garbage collection can typically copy objects to dead blocks as it collects. Each block has a generation count to keep track of whether it's alive. In the normal case, only the blocks created since the last garbage collection are compacted; all other blocks get their generation count bumped if they have been referenced from somewhere. This handles the normal case of lots of short-lived temporary objects. Periodically, a full sweep is made; large objects are still not copied (they just get their generation count bumped), and blocks containing small objects are copied and compacted. The JVM monitors the efficiency of garbage collection, and if it becomes a waste of time because all objects are long-lived, then it switches to mark-and-sweep. Similarly, the JVM keeps track of how successful mark-and-sweep is, and if the heap starts to become fragmented, it switches back to stop-and-copy. This is where the "adaptive" part comes in, so you end up with a mouthful: "Adaptive generational stop-and-copy mark-and-sweep."
There are a number of additional speedups possible in a JVM. An especially important one involves the operation of the loader and what is called a just-in-time (JIT) compiler. A JIT compiler partially or fully converts a program into native machine code so that it doesn't need to be interpreted by the JVM and thus runs much faster. When a class must be loaded (typically, the first time you want to create an object of that class), the .class file is located, and the byte codes for that class are brought into memory. At this point, one approach is to simply JIT compile all the code, but this has two drawbacks: it takes a little more time, which, compounded throughout the life of the program, can add up; and it increases the size of the executable (byte codes are significantly more compact than expanded JIT code), and this might cause paging, which definitely slows down a program. An alternative approach is lazy evaluation, which means that the code is not JIT compiled until necessary. Thus, code that never gets executed might never be JIT compiled. The Java HotSpot technologies in recent JDKs take a similar approach by increasingly optimizing a piece of code each time it is executed, so the more the code is executed, the faster it gets.
Java goes out of its way to guarantee that variables are properly initialized before they are used. In the case of variables that are defined locally to a method, this guarantee comes in the form of a compile-time error. So if you say:
void f() {
  int i;
  i++; // Error -- i not initialized
}
you'll get an error message that says that i might not have been initialized. Of course, the compiler could have given i a default value, but it's more likely that this is a programmer error and a default value would have covered that up. Forcing the programmer to provide an initialization value is more likely to catch a bug.
If a primitive is a field in a class, however, things are a bit different. Since any method can initialize or use that data, it might not be practical to force the user to initialize it to its appropriate value before the data is used. However, it's unsafe to leave it with a garbage value, so each primitive field of a class is guaranteed to get an initial value. Those values can be seen here:
//: c04:InitialValues.java
// Shows default initial values.
import com.bruceeckel.simpletest.*;

public class InitialValues {
  static Test monitor = new Test();
  boolean t;
  char c;
  byte b;
  short s;
  int i;
  long l;
  float f;
  double d;
  void print(String s) { System.out.println(s); }
  void printInitialValues() {
    print("Data type Initial value");
    print("boolean " + t);
    print("char [" + c + "]");
    print("byte " + b);
    print("short " + s);
    print("int " + i);
    print("long " + l);
    print("float " + f);
    print("double " + d);
  }
  public static void main(String[] args) {
    InitialValues iv = new InitialValues();
    iv.printInitialValues();
    /* You could also say:
    new InitialValues().printInitialValues();
    */
    monitor.expect(new String[] {
      "Data type Initial value",
      "boolean false",
      "char [" + (char)0 + "]",
      "byte 0",
      "short 0",
      "int 0",
      "long 0",
      "float 0.0",
      "double 0.0"
    });
  }
} ///:~
You can see that even though the values are not specified, they automatically get initialized (the char value is a zero, which prints as a space). So at least there's no threat of working with uninitialized variables.
You'll see later that when you define an object reference inside a class without initializing it to a new object, that reference is given a special value of null (which is a Java keyword). You can also give a variable an initial value at the point of its definition in the class. If Depth is a class, you can create an object and initialize it like so:
class Measurement {
  Depth d = new Depth();
  // . . .
}
If you haven't given d an initial value and you try to use it anyway, you'll get a run-time error called an exception (covered in Chapter 9).
You can even call a method to provide an initialization value:
class CInit {
  int i = f();
  //...
}
This method can have arguments, of course, but those arguments cannot be other class members that haven't been initialized yet. Thus, you can do this:
class CInit {
  int i = f();
  int j = g(i);
  //...
}
But you cannot do this:
class CInit {
  int j = g(i);
  int i = f();
  //...
}
This is one place in which the compiler, appropriately, does complain about forward referencing, since this has to do with the order of initialization and not the way the program is compiled.
This approach to initialization is simple and straightforward. It has the limitation that every object of type InitialValues will get these same initialization values. Sometimes this is exactly what you need, but at other times you need more flexibility.
The constructor can be used to perform initialization, and this gives you greater flexibility in your programming because you can call methods and perform actions at run time to determine the initial values. There's one thing to keep in mind, however: you aren't precluding the automatic initialization, which happens before the constructor is entered. So, for example, if you say:
class Counter {
  int i;
  Counter() { i = 7; }
  // . . .
}
then i will first be initialized to 0, then to 7. This is true with all the primitive types and with object references, including those that are given explicit initialization at the point of definition. For this reason, the compiler doesn't try to force you to initialize elements in the constructor at any particular place, or before they are used; initialization is already guaranteed.[24]
Within a class, the order of initialization is determined by the order that the variables are defined within the class. The variable definitions may be scattered throughout and in between method definitions, but the variables are initialized before any methods can be called, even the constructor. For example:
//: c04:OrderOfInitialization.java
// Demonstrates initialization order.
import com.bruceeckel.simpletest.*;

// When the constructor is called to create a
// Tag object, you'll see a message:
class Tag {
  Tag(int marker) {
    System.out.println("Tag(" + marker + ")");
  }
}

class Card {
  Tag t1 = new Tag(1); // Before constructor
  Card() {
    // Indicate we're in the constructor:
    System.out.println("Card()");
    t3 = new Tag(33); // Reinitialize t3
  }
  Tag t2 = new Tag(2); // After constructor
  void f() {
    System.out.println("f()");
  }
  Tag t3 = new Tag(3); // At end
}

public class OrderOfInitialization {
  static Test monitor = new Test();
  public static void main(String[] args) {
    Card t = new Card();
    t.f(); // Shows that construction is done
    monitor.expect(new String[] {
      "Tag(1)",
      "Tag(2)",
      "Tag(3)",
      "Card()",
      "Tag(33)",
      "f()"
    });
  }
} ///:~
In Card, the definitions of the Tag objects are intentionally scattered about to prove that they'll all get initialized before the constructor is entered or anything else can happen. In addition, t3 is reinitialized inside the constructor.
From the output, you can see that the t3 reference gets initialized twice: once before and once during the constructor call. (The first object is dropped, so it can be garbage collected later.) This might not seem efficient at first, but it guarantees proper initialization; what would happen if an overloaded constructor were defined that did not initialize t3 and there wasn't a default initialization for t3 in its definition?
When the data is static, the same thing happens; if it's a primitive and you don't initialize it, it gets the standard primitive initial values. If it's a reference to an object, it's null unless you create a new object and attach your reference to it.
If you want to place initialization at the point of definition, it looks the same as for non-statics. There's only a single piece of storage for a static, regardless of how many objects are created. But the question arises of when the static storage gets initialized. An example makes this question clear:
//: c04:StaticInitialization.java
// Specifying initial values in a class definition.
import com.bruceeckel.simpletest.*;

class Bowl {
  Bowl(int marker) {
    System.out.println("Bowl(" + marker + ")");
  }
  void f(int marker) {
    System.out.println("f(" + marker + ")");
  }
}

class Table {
  static Bowl b1 = new Bowl(1);
  Table() {
    System.out.println("Table()");
    b2.f(1);
  }
  void f2(int marker) {
    System.out.println("f2(" + marker + ")");
  }
  static Bowl b2 = new Bowl(2);
}

class Cupboard {
  Bowl b3 = new Bowl(3);
  static Bowl b4 = new Bowl(4);
  Cupboard() {
    System.out.println("Cupboard()");
    b4.f(2);
  }
  void f3(int marker) {
    System.out.println("f3(" + marker + ")");
  }
  static Bowl b5 = new Bowl(5);
}

public class StaticInitialization {
  static Test monitor = new Test();
  public static void main(String[] args) {
    System.out.println("Creating new Cupboard() in main");
    new Cupboard();
    System.out.println("Creating new Cupboard() in main");
    new Cupboard();
    t2.f2(1);
    t3.f3(1);
    monitor.expect(new String[] {
      "Bowl(1)",
      "Bowl(2)",
      "Table()",
      "f(1)",
      "Bowl(4)",
      "Bowl(5)",
      "Bowl(3)",
      "Cupboard()",
      "f(2)",
      "Creating new Cupboard() in main",
      "Bowl(3)",
      "Cupboard()",
      "f(2)",
      "Creating new Cupboard() in main",
      "Bowl(3)",
      "Cupboard()",
      "f(2)",
      "f2(1)",
      "f3(1)"
    });
  }
  static Table t2 = new Table();
  static Cupboard t3 = new Cupboard();
} ///:~
From the output, you can see that the static initialization occurs only if it's necessary. If you don't create a Table object and you never refer to Table.b1 or Table.b2, the static Bowl b1 and b2 will never be created. They are initialized only when the first Table object is created (or the first static access occurs). After that, the static objects are not reinitialized.
The order of initialization is statics first, if they haven't already been initialized by a previous object creation, and then the non-static objects. You can see the evidence of this in the output.
It's helpful to summarize the process of creating an object. Consider a class called Dog:
1. The first time an object of type Dog is created, or the first time a static method or static field of class Dog is accessed, the Java interpreter must locate Dog.class, which it does by searching through the classpath.
2. As Dog.class is loaded, all of its static initializers are run. Thus, static initialization takes place only once, as the class is loaded for the first time.
3. When you create a new Dog( ), the construction process first allocates enough storage for a Dog object on the heap.
4. This storage is wiped to zero, automatically setting all the primitives in that Dog object to their default values (zero for numbers and the equivalent for booleans and chars) and the references to null.
5. Any initializations that occur at the point of field definition are executed.
6. Constructors are executed.
Java allows you to group other static initializations inside a special "static clause" (sometimes called a static block) in a class. It looks like this:
class Spoon {
  static int i;
  static {
    i = 47;
  }
  // . . .
}
It appears to be a method, but it's just the static keyword followed by a block of code. This code, like other static initializations, is executed only once: the first time you make an object of that class or the first time you access a static member of that class (even if you never make an object of that class). For example:
//: c04:ExplicitStatic.java
// Explicit static initialization with the "static" clause.
import com.bruceeckel.simpletest.*;

class Cup {
  Cup(int marker) {
    System.out.println("Cup(" + marker + ")");
  }
  void f(int marker) {
    System.out.println("f(" + marker + ")");
  }
}

class Cups {
  static Cup c1;
  static Cup c2;
  static {
    c1 = new Cup(1);
    c2 = new Cup(2);
  }
  Cups() {
    System.out.println("Cups()");
  }
}

public class ExplicitStatic {
  static Test monitor = new Test();
  public static void main(String[] args) {
    System.out.println("Inside main()");
    Cups.c1.f(99); // (1)
    monitor.expect(new String[] {
      "Inside main()",
      "Cup(1)",
      "Cup(2)",
      "f(99)"
    });
  }
  // static Cups x = new Cups(); // (2)
  // static Cups y = new Cups(); // (2)
} ///:~
The static initializers for Cups run when either the access of the static object c1 occurs on the line marked (1), or if line (1) is commented out and the lines marked (2) are uncommented. If both (1) and (2) are commented out, the static initialization for Cups never occurs. Also, it doesn't matter if one or both of the lines marked (2) are uncommented; the static initialization only occurs once.
Java provides a similar syntax for initializing non-static variables for each object. Here's an example:
//: c04:Mugs.java
// Java "Instance Initialization."
import com.bruceeckel.simpletest.*;

class Mug {
  Mug(int marker) {
    System.out.println("Mug(" + marker + ")");
  }
  void f(int marker) {
    System.out.println("f(" + marker + ")");
  }
}

public class Mugs {
  static Test monitor = new Test();
  Mug c1;
  Mug c2;
  {
    c1 = new Mug(1);
    c2 = new Mug(2);
    System.out.println("c1 & c2 initialized");
  }
  Mugs() {
    System.out.println("Mugs()");
  }
  public static void main(String[] args) {
    System.out.println("Inside main()");
    Mugs x = new Mugs();
    monitor.expect(new String[] {
      "Inside main()",
      "Mug(1)",
      "Mug(2)",
      "c1 & c2 initialized",
      "Mugs()"
    });
  }
} ///:~
You can see that the instance initialization clause:
{
  c1 = new Mug(1);
  c2 = new Mug(2);
  System.out.println("c1 & c2 initialized");
}
looks exactly like the static initialization clause except for the missing static keyword. This syntax is necessary to support the initialization of anonymous inner classes (see Chapter 8).
Initializing arrays in C is error-prone and tedious. C++ uses aggregate initialization to make it much safer.[25] Java has no "aggregates" like C++ does, since everything is an object in Java. It does have arrays, and these are supported with array initialization.
An array is simply a sequence of either objects or primitives that are all the same type and packaged together under one identifier name. Arrays are defined and used with the square-brackets indexing operator [ ]. To define an array, you simply follow your type name with empty square brackets:
int[] a1;
You can also put the square brackets after the identifier to produce exactly the same meaning:
int a1[];
This conforms to expectations from C and C++ programmers. The former style, however, is probably a more sensible syntax, since it says that the type is "an int array." That style will be used in this book.
The compiler doesn't allow you to tell it how big the array is. This brings us back to that issue of references. All that you have at this point is a reference to an array, and there's been no space allocated for the array. To create storage for the array, you must write an initialization expression. For arrays, initialization can appear anywhere in your code, but you can also use a special kind of initialization expression that must occur at the point where the array is created. This special initialization is a set of values surrounded by curly braces. The storage allocation (the equivalent of using new) is taken care of by the compiler in this case. For example:
int[] a1 = { 1, 2, 3, 4, 5 };
So why would you ever define an array reference without an array? Feedback
int[] a2;
Well, its possible to assign one array to another in Java, so you can say: Feedback
a2 = a1;
What youre really doing is copying a reference, as demonstrated here: Feedback
//: c04:Arrays.java // Arrays of primitives. import com.bruceeckel.simpletest.*; public class Arrays { static Test monitor = new Test(); public static void main(String[] args) { int[] a1 = { 1, 2, 3, 4, 5 }; int[] a2; a2 = a1; for(int i = 0; i < a2.length; i++) a2[i]++; for(int i = 0; i < a1.length; i++) System.out.println( "a1[" + i + "] = " + a1[i]); monitor.expect(new String[] { "a1[0] = 2", "a1[1] = 3", "a1[2] = 4", "a1[3] = 5", "a1[4] = 6" }); } } ///:~
You can see that a1 is given an initialization value but a2 is not; a2 is assigned laterin this case, to another array. Feedback
Theres something new here: All arrays have an intrinsic member (whether theyre arrays of objects or arrays of primitives) that you can querybut not changeto tell you how many elements there are in the array. This member is length. Since arrays in Java, like C and C++, start counting from element zero, the largest element you can index is length - 1. If you go out of bounds, C and C++ quietly accept this and allow you to stomp all over your memory, which is the source of many infamous bugs. However, Java protects you against such problems by causing a run-time error (an exception, the subject of Chapter 9) if you step out of bounds. Of course, checking every array access costs time and code and theres no way to turn it off, which means that array accesses might be a source of inefficiency in your program if they occur at a critical juncture. For Internet security and programmer productivity, the Java designers thought that this was a worthwhile trade-off. Feedback
What if you dont know how many elements youre going to need in your array while youre writing the program? You simply use new to create the elements in the array. Here, new works even though its creating an array of primitives (new wont create a nonarray primitive): Feedback
//: c04:ArrayNew.java // Creating arrays with new. import com.bruceeckel.simpletest.*; import java.util.*; public class ArrayNew { static Test monitor = new Test(); static Random rand = new Random(); public static void main(String[] args) { int[] a; a = new int[rand.nextInt(20)]; System.out.println("length of a = " + a.length); for(int i = 0; i < a.length; i++) System.out.println("a[" + i + "] = " + a[i]); monitor.expect(new Object[] { "%% length of a = \\d+", new TestExpression("%% a\\[\\d+\\] = 0", a.length) }); } } ///:~
The expect( ) statement contains something new in this example: the TestExpression class. A TestExpression object takes an expression, either an ordinary string or a regular expression as shown here, and a second integer argument that indicates that the preceding expression will be repeated that many times. TestExpression not only prevents needless duplication in the code, but in this case, it allows the number of repetitions to be determined at run time. Feedback
The size of the array is chosen at random by using the Random.nextInt( ) method, which produces a value from zero to that of its argument. Because of the randomness, its clear that array creation is actually happening at run time. In addition, the output of this program shows that array elements of primitive types are automatically initialized to empty values. (For numerics and char, this is zero, and for boolean, its false.) Feedback
Of course, the array could also have been defined and initialized in the same statement:
int[] a = new int[rand.nextInt(20)];
This is the preferred way to do it, if you can. Feedback
If youre dealing with an array of nonprimitive objects, you must always use new. Here, the reference issue comes up again, because what you create is an array of references. Consider the wrapper type Integer, which is a class and not a primitive: Feedback
//: c04:ArrayClassObj.java // Creating an array of nonprimitive objects. import com.bruceeckel.simpletest.*; import java.util.*; public class ArrayClassObj { static Test monitor = new Test(); static Random rand = new Random(); public static void main(String[] args) { Integer[] a = new Integer[rand.nextInt(20)]; System.out.println("length of a = " + a.length); for(int i = 0; i < a.length; i++) { a[i] = new Integer(rand.nextInt(500)); System.out.println("a[" + i + "] = " + a[i]); } monitor.expect(new Object[] { "%% length of a = \\d+", new TestExpression("%% a\\[\\d+\\] = \\d+", a.length) }); } } ///:~
Here, even after new is called to create the array: Feedback
Integer[] a = new Integer[rand.nextInt(20)];
its only an array of references, and not until the reference itself is initialized by creating a new Integer object is the initialization complete: Feedback
a[i] = new Integer(rand.nextInt(500));
If you forget to create the object, however, youll get an exception at run time when you try to use the empty array location. Feedback
Take a look at the formation of the String object inside the print statements. You can see that the reference to the Integer object is automatically converted to produce a String representing the value inside the object. Feedback
Its also possible to initialize arrays of objects by using the curly-brace-enclosed list. There are two forms:
//: c04:ArrayInit.java // Array initialization. public class ArrayInit { public static void main(String[] args) { Integer[] a = { new Integer(1), new Integer(2), new Integer(3), }; Integer[] b = new Integer[] { new Integer(1), new Integer(2), new Integer(3), }; } } ///:~
The first form is useful at times, but its more limited since the size of the array is determined at compile time. The final comma in the list of initializers is optional. (This feature makes for easier maintenance of long lists.) Feedback
The second form provides a convenient syntax to create and call methods that can produce the same effect as Cs variable argument lists (known as varargs in C). These can include unknown quantities of arguments as well as unknown types. Since all classes are ultimately inherited from the common root class Object (a subject you will learn more about as this book progresses), you can create a method that takes an array of Object and call it like this: Feedback
//: c04:VarArgs.java // Using array syntax to create variable argument lists. import com.bruceeckel.simpletest.*; class A { int i; } public class VarArgs { static Test monitor = new Test(); static void print(Object[] x) { for(int i = 0; i < x.length; i++) System.out.println(x[i]); } public static void main(String[] args) { print(new Object[] { new Integer(47), new VarArgs(), new Float(3.14), new Double(11.11) }); print(new Object[] {"one", "two", "three" }); print(new Object[] {new A(), new A(), new A()}); monitor.expect(new Object[] { "47", "%% VarArgs@\\p{XDigit}+", "3.14", "11.11", "one", "two", "three", new TestExpression("%% A@\\p{XDigit}+", 3) }); } } ///:~
You can see that print( ) takes an array of Object, then steps through the array and prints each one. The standard Java library classes produce sensible output, but the objects of the classes created hereA and VarArgsprint the class name, followed by an @ sign, and yet another regular expression construct, \p{XDigit}, which indicates a hexadecimal digit. The trailing + means there will be one or more hexadecimal digits. Thus, the default behavior (if you dont define a toString( ) method for your class, which will be described later in the book) is to print the class name and the address of the object. Feedback
Java allows you to easily create multidimensional arrays:
//: c04:MultiDimArray.java // Creating multidimensional arrays. import com.bruceeckel.simpletest.*; import java.util.*; public class MultiDimArray { static Test monitor = new Test(); static Random rand = new Random(); public static void main(String[] args) { int[][] a1 = { { 1, 2, 3, }, { 4, 5, 6, }, }; for(int i = 0; i < a1.length; i++) for(int j = 0; j < a1[i].length; j++) System.out.println( "a1[" + i + "][" + j + "] = " + a1[i][j]); // 3-D array with fixed length: int[][][] a2 = new int[2][2][4]; for(int i = 0; i < a2.length; i++) for(int j = 0; j < a2[i].length; j++) for(int k = 0; k < a2[i][j].length; k++) System.out.println("a2[" + i + "][" + j + "][" + k + "] = " + a2[i][j][k]); // 3-D array with varied-length vectors: int[][][] a3 = new int[rand.nextInt(7)][][]; for(int i = 0; i < a3.length; i++) { a3[i] = new int[rand.nextInt(5)][]; for(int j = 0; j < a3[i].length; j++) a3[i][j] = new int[rand.nextInt(5)]; } for(int i = 0; i < a3.length; i++) for(int j = 0; j < a3[i].length; j++) for(int k = 0; k < a3[i][j].length; k++) System.out.println("a3[" + i + "][" + j + "][" + k + "] = " + a3[i][j][k]); // Array of nonprimitive objects: Integer[][] a4 = { { new Integer(1), new Integer(2)}, { new Integer(3), new Integer(4)}, { new Integer(5), new Integer(6)}, }; for(int i = 0; i < a4.length; i++) for(int j = 0; j < a4[i].length; j++) System.out.println("a4[" + i + "][" + j + "] = " + a4[i][j]); Integer[][] a5; a5 = new Integer[3][]; for(int i = 0; i < a5.length; i++) { a5[i] = new Integer[3]; for(int j = 0; j < a5[i].length; j++) a5[i][j] = new Integer(i * j); } for(int i = 0; i < a5.length; i++) for(int j = 0; j < a5[i].length; j++) System.out.println("a5[" + i + "][" + j + "] = " + a5[i][j]); // Output test int ln = 0; for(int i = 0; i < a3.length; i++) for(int j = 0; j < a3[i].length; j++) for(int k = 0; k < a3[i][j].length; k++) ln++; monitor.expect(new Object[] { "a1[0][0] = 1", "a1[0][1] = 2", "a1[0][2] = 3", "a1[1][0] = 4", "a1[1][1] = 5", "a1[1][2] = 6", 
new TestExpression( "%% a2\\[\\d\\]\\[\\d\\]\\[\\d\\] = 0", 16), new TestExpression( "%% a3\\[\\d\\]\\[\\d\\]\\[\\d\\] = 0", ln), "a4[0][0] = 1", "a4[0][1] = 2", "a4[1][0] = 3", "a4[1][1] = 4", "a4[2][0] = 5", "a4[2][1] = 6", "a5[0][0] = 0", "a5[0][1] = 0", "a5[0][2] = 0", "a5[1][0] = 0", "a5[1][1] = 1", "a5[1][2] = 2", "a5[2][0] = 0", "a5[2][1] = 2", "a5[2][2] = 4" }); } } ///:~
The code used for printing uses length so that it doesnt depend on fixed array sizes. Feedback
The first example shows a multidimensional array of primitives. You delimit each vector in the array by using curly braces:
int[][] a1 = { { 1, 2, 3, }, { 4, 5, 6, }, };
Each set of square brackets moves you into the next level of the array. Feedback
The second example shows a three-dimensional array allocated with new. Here, the whole array is allocated at once:
int[][][] a2 = new int[2][2][4];
But the third example shows that each vector in the arrays that make up the matrix can be of any length:
int[][][] a3 = new int[rand.nextInt(7)][][]; for(int i = 0; i < a3.length; i++) { a3[i] = new int[rand.nextInt(5)][]; for(int j = 0; j < a3[i].length; j++) a3[i][j] = new int[rand.nextInt(5)]; }
The first new creates an array with a random-length first element and the rest undetermined. The second new inside the for loop fills out the elements but leaves the third index undetermined until you hit the third new. Feedback
You will see from the output that array values are automatically initialized to zero if you dont give them an explicit initialization value.
You can deal with arrays of nonprimitive objects in a similar fashion, which is shown in the fourth example, demonstrating the ability to collect many new expressions with curly braces:
Integer[][] a4 = { { new Integer(1), new Integer(2)}, { new Integer(3), new Integer(4)}, { new Integer(5), new Integer(6)}, };
The fifth example shows how an array of nonprimitive objects can be built up piece by piece:
Integer[][] a5; a5 = new Integer[3][]; for(int i = 0; i < a5.length; i++) { a5[i] = new Integer[3]; for(int j = 0; j < a5[i].length; j++) a5[i][j] = new Integer(i*j); }
The i*j is just to put an interesting value into the Integer. Feedback
This seemingly elaborate mechanism for initialization, the constructor, should give you a strong hint about the critical importance placed on initialization in the language. As Bjarne Stroustrup, the inventor of C++, was designing that language, one of the first observations he made about productivity in C was that improper initialization of variables causes a significant portion of programming problems. These kinds of bugs are hard to find, and similar issues apply to improper cleanup. Because constructors allow you to guarantee proper initialization and cleanup (the compiler will not allow an object to be created without the proper constructor calls), you get complete control and safety. Feedback
In C++, destruction is quite important because objects created with new must be explicitly destroyed. In Java, the garbage collector automatically releases the memory for all objects, so the equivalent cleanup method in Java isnt necessary much of the time (but when it is, as observed in this chapter, you must do it yourself). In cases where you dont need destructor-like behavior, Javas garbage collector greatly simplifies programming and adds much-needed safety in managing memory. Some garbage collectors can even clean up other resources like graphics and file handles. However, the garbage collector does add a run-time cost, the expense of which is difficult to put into perspective because of the historical slowness of Java interpreters. Although Java has had significant performance increases over time, the speed problem has taken its toll on the adoption of the language for certain types of programming problems. Feedback
Because of the guarantee that all objects will be constructed, theres actually more to the constructor than what is shown here. In particular, when you create new classes using either composition or inheritance, the guarantee of construction also holds, and some additional syntax is necessary to support this. Youll learn about composition, inheritance, and how they affect constructors in future chapters. Feedback
Solutions to selected exercises can be found in the electronic document The Thinking in Java Annotated Solution Guide, available for a small fee from.
[19] In some of the Java literature from Sun, they instead refer to these with the awkward but descriptive name no-arg constructors. The term default constructor has been in use for many years, so I will use. | http://www.faqs.org/docs/think_java/TIJ306.htm | CC-MAIN-2019-18 | refinedweb | 9,211 | 54.73 |
C++ Determine Perfect Square Program
Hello Everyone!
In this tutorial, we will demonstrate the logic of determine if the given number is a Perfect Square or not, in the C++ programming language.
Logic:
The
sqrt(x) method returns the square root the
x.
For a better understanding of its implementation, refer to the well-commented CPP code given below.
Code:
#include <iostream> #include <math.h> using namespace std; // Returns true if the given number is a perfect square bool isPerfectSquare(int n) { // sqrt() method is defined in the math.h library // to return the square root of a given number int sr = sqrt(n); if (sr * sr == n) return true; else return false; } int main() { cout << "\n\nWelcome to Studytonight :-)\n\n\n"; cout << " ===== Program to determine if the entered number is perfect square or not ===== \n\n"; // variable declaration int n; bool perfect = false; // taking input from the command line (user) cout << " Enter a positive integer : "; cin >> n; // Calling a method that returns true if the // number is a perfect square perfect = isPerfectSquare(n); if (perfect) { cout << "\n\nThe entered number " << n << " is a perfect square of the number " << sqrt(n); } else { cout << "\n\nThe entered number " << n << " is not a perfect square"; } cout << "\n\n\n"; return 0; }
Output:
Let's see another number with number 81.
We hope that this post helped you develop a better understanding of the concept to check for perfect square, in C++. For any query, feel free to reach out to us via the comments section down below.
Keep Learning : ) | https://studytonight.com/cpp-programs/cpp-determine-perfect-square-program | CC-MAIN-2021-04 | refinedweb | 258 | 52.73 |
Hello everyone! In this article, I am going to share with you my approach for creating and managing a global store in Vue 3 without Vuex.
Required skills: Vue, JavaScript, TypeScript
Advantages:
1. No Vuex (extra dependency)
2. Compatible with a new Composition API
3. Similar to Vuex syntax, so you don't need to get used to something new😄
5. Typed
Let's start.
@/store/index.ts
import auth from "./modules/auth";import { Store } from "./types";const store: Store = { modules: { auth } };export function commit<T>(
moduleName: string,
mutation: string,
payload?: T
) {
const foundModule = store.modules![moduleName];…
Front-end Vue developer | https://oshap1044.medium.com/?source=post_internal_links---------4---------------------------- | CC-MAIN-2021-25 | refinedweb | 101 | 54.69 |
We can improve on the diagnostics given by the rules in the previous article in this series, Progressive validation for complex content models.
Diagnosing Similar Names
One of the most common typos is simply to make a mistake in upper-case/lower-case. We can generate Schematron code to check this:
<sch:rule <sch:reportThe unexpected element "<sch:name/>" has been used, which is close to an element in the schema: the element "Address". </sch:report> </sch:rule>
And here is the XSLT for generating those Schematron rules:
<xsl:for-each <xsl:sort <xsl:variable <xsl:if <sch:rule <sch:reportThe unexpected element "<sch:name/>" has been used, which is close to an element in the schema: the element "<xsl:value-of" <xsl:if in the {<xsl:value-of} namespace</xsl:if>. </sch:report> </sch:rule> <xsl:if> </xsl:for-each>
This code actually catches two problems: have you made an upper-/lower-case typo or have you used an element with a name in the current namespace but using a different namespace.
Actually, the code as it is will generate a false positive if the same element name is used in multiple namespaces. So I will give it a
role attribute of “Note” (as in Note, Caution, Warning). The
role attribute lets you know what function a particular assertion plays in its rule or pattern.
These generated rules get put in the pattern that checks for typos, after the checks for defined names, but before the wildcard catch-all entry at the end: this way elements that have correct names and namespaces are dealt with before these rules, and any names that have other problems get dealt with by the default. In Schematron, a schema is made from patterns: each pattern contains rules, and each rules contains assertions (assert or report elements): every assertion in a rule is tested in the context (an XPath that may match nodes of interest from the document) provided by the rule; the rules however form a case statement, so that if some node matches one rule they won’t be tested by a subsequent rule in the same pattern.
Towards terser, more declarative schemas
It is almost axiomatic that automatically generated code is ugly and unfriendly. Look at compiler generators for example. Of course, getting consistent code that does the same thing many times is why you use a code generator like Schematron in the first place rather than writing the XSLT yourself, in many cases.
But it is certainly possible to make the code more friendly and more declarative. In Converting Schematron to XML Schemas I showed how to use abstract rules to provide extra declarative information so that there is enough information to convert back to a kind of W3C XML Schema. It doesn’t go so far, but the idea is that abstract rules (and abstract patterns, together with the role attribute) provide the abstraction for grouping assertions and representing types.
I won’t go into the code, it is trivial, but the idea is that there are quite a few rules or assertions that don’t have any dynamic content (sometimes it is handled by the
diagnostic element, other times we don’t expect the rule to ever generate messages, see Expressing untested and untestable constaints in Schematron) and we can use abstract patterns to make things much more declative, readable and terse.
Here is an example, for the rules that swallow elements names that are defined in the current namespace
<sch:rule <sch:assertThe element name "<sch:name/>" is defined.</sch:assert> </sch:rule> <sch:rule <sch:extends </sch:rule> <sch:rule <sch:extends </sch:rule>
And here is an example for detecting various kinds of text content:
<sch:rule <sch:assertElement "<sch:name/>" should have no text content.</sch:assert> </sch:rule> <sch:rule <sch:assertElement "<sch:name/>" should be completely empty (no XML comments, PIs, or elements).</sch:assert> </sch:rule> <sch:rule <sch:extends <sch:extends <sch:assertElement "<sch:name/>" should be completely empty (no XML comments, PIs).</sch:assert> <sch:rule> <sch:rule <sch:extends </sch:rule> <sch:rule <sch:extends <sch:rule> <sch:rule <sch:extends </sch:rule> <sch:rule <sch:extends </sch:rule>
Much easier to read than having all those assertions expanded!
Also of interest: | http://www.oreillynet.com/xml/blog/2008/01/converting_xml_schemas_to_sche_8.html?CMP=OTC-TY3388567169&ATT=Converting+XML+Schemas+to+Schematron+9+Friendlier+schemas | crawl-003 | refinedweb | 714 | 51.92 |
Hi everyone,
Its my first post, and by the way its my first semester studying C language. And it's the first programming experience.
I was asked to write a program that , for each even number between 700 and 1100 , it lists its equivalent value as a sum of primes (assuming godbach was right so far!! lol ) ..the thing is that i couldn't do it using ordinary loops. so i used an array in which i have stored the primes i want to process.
For that said, Can anyone tell me why the output of my program outputs only relatively bigger primes as operands for the sum. I mean it's correct but why it skip situations in which small prime numbers can be used like 3.
700 = 419 + 281 ya right but why not 700 = 23 + 677 .
here is my code
Code:#include <stdio.h> #include <stdlib.h> #include <math.h> #define start 700 #define finish 1100 int is_prime(int); int check(int); int main(){ for(int i = start ; i <= finish ; i += 2){ check(i); } system("PAUSE"); return(0); } int is_prime(int n){ for(int i = 2 ; i <= sqrt(n) ; i++){ if(!(n%i)) return(0); } return(1); } int check(int x){ int k[x]; int j = 0; for (int i = 1 ; i < x ; i += 2){ if (is_prime(i)){ k[j] = i; j++; } } for (int n = 0 ; n < j ; n++){ for (int a = 0 ; a <= j/2 ;a++){ if (x == k[n]+k[a]){ printf("%d = %d + %d\n",x,k[n],k[a]); return (0); } } } } | http://cboard.cprogramming.com/c-programming/102044-godbach-conjecture.html | CC-MAIN-2014-52 | refinedweb | 258 | 79.4 |
On Tue, Feb 16, 2010 at 3:05 PM, Michal Suchanek <address@hidden> wrote: > 2010/2/16 Vladimir 'φ-coder/phcoder' Serbinenko <address@hidden>: >> Michal Suchanek wrote: >>>> With typeof macro this can be made type-neutral avoiding potential >>>> mistakes. >>>> +static inline long >>>> +grub_min (long x, long y) >>>> +{ >>>> + if (x > y) >>>> + return y; >>>> + else >>>> + return x; >>>> +} >>>> + >>>> >>> >>> I don't see how typeof would be used. As I understand the docs it can >>> only set types relative to something what is already defined (and in >>> some cases actually dereference/call it) and there is nothing defined >>> at the point these functions are declared to copy the type from. >>> >> #include <stdio.h> >> #define swap(a,b) {typeof (a) mytemp ## __LINE__; mytemp ## __LINE__ = >> b; b = a; a = mytemp ## __LINE__; } >> > > Unlike inlines this pollutes the local namespace with unexpected > identifiers.. Perhaps the temporary variable should be at least > prefixed with an underscore or something. The braces introduce a block and the variable goes out of scope, in fact there's no need for __LINE__ because of this. | https://lists.gnu.org/archive/html/grub-devel/2010-02/msg00138.html | CC-MAIN-2022-33 | refinedweb | 172 | 61.7 |
What is a Sorting Algorithm?
Sorting algorithms are a set of instructions that take an array or list as an input and arrange the items into a particular order.
Sorts are most commonly in numerical or a form of alphabetical (called lexicographical) order, and can be in ascending (A-Z, 0-9) or descending (Z-A, 9-0) order.
Why Sorting Algorithms are Important
Since sorting can often reduce the complexity of a problem, it is an important algorithm in Computer Science. These algorithms have direct applications in searching algorithms, database algorithms, divide and conquer methods, data structure algorithms, and many more.
Trade-Offs of Algorithms
When using different algorithms, some questions have to be asked. How big is the collection being sorted? How much memory is at your disposal? Does the collection need to grow?
The answers to these questions may determine which algorithm is going to work best for the situation. Some algorithms, like Merge Sort, may need a lot of space to run, while Insertion Sort is not always the fastest, but it doesn't require many resources to run.
You should determine what the requirements of the system are and its limitations before deciding what algorithm to use.
Some Common Sorting Algorithms
Some of the most common sorting algorithms are:
- Selection Sort
- Bubble Sort
- Insertion Sort
- Merge Sort
- Quick Sort
- Heap Sort
- Counting Sort
- Radix Sort
- Bucket Sort
But before we get into each of these, let's learn a bit more about how sorting algorithms are classified.
Classification of a Sorting Algorithm
Sorting algorithms can be categorized based on the following parameters:
- Based on Number of Swaps or Inversions: This is the number of times the algorithm swaps elements to sort the input. Selection Sort requires the minimum number of swaps.
- Based on Number of Comparisons: This is the number of times the algorithm compares elements to sort the input. Using Big-O notation, the sorting algorithm examples listed above require at least O(n log n) comparisons in the best case, and O(n^2) comparisons in the worst case, for most of the outputs.
- Based on Recursion or Non-Recursion: Some sorting algorithms, such as Quick Sort, use recursive techniques to sort the input. Other sorting algorithms, such as Selection Sort or Insertion Sort, use non-recursive techniques. Finally, some sorting algorithms, such as Merge Sort, make use of both recursive and non-recursive techniques to sort the input.
- Based on Stability: Sorting algorithms are said to be stable if the algorithm maintains the relative order of elements with equal keys. In other words, two equivalent elements remain in the same order in the sorted output as they were in the input. Insertion Sort, Merge Sort, and Bubble Sort are stable. Heap Sort and Quick Sort are not stable.
- Based on Extra Space Requirement: Sorting algorithms are said to be in place if they require a constant O(1) extra space for sorting. Insertion Sort and Quick Sort are in-place sorts, as we move the elements about the pivot and do not actually use a separate array, which is NOT the case in Merge Sort, where an array the size of the input must be allocated beforehand to store the output during the sort. Merge Sort is an example of an out-of-place sort, as it requires extra memory space for its operations.
Best possible time complexity for any comparison-based sorting
Any comparison-based sorting algorithm must make at least n log2(n) comparisons in the worst case to sort the input array, and Heapsort and Merge Sort are asymptotically optimal comparison sorts. This can be proved by drawing a decision tree diagram.

Bucket Sort

Say you have the following problem in front of you: you need to create an array of lists, that is, of buckets. Elements now need to be inserted into these buckets on the basis of their properties. Each of these buckets can then be sorted individually using Insertion Sort.
Pseudo Code for Bucket Sort:
    void bucketSort(float[] a, int n)
    {
        for (each floating-point number 'x' in a)
        {
            insert x into bucket[n * x];   // assumes 0 <= x < 1
        }
        for (each bucket)
        {
            sort(bucket);
        }
        concatenate the buckets back into a;
    }
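Since the pseudocode above assumes values in the range [0, 1), here is a minimal runnable sketch in JavaScript under that same assumption; the function and variable names are my own, not from a library.

```javascript
// Bucket sort for floating-point numbers in [0, 1):
// scatter into n buckets, insertion-sort each bucket, then concatenate.
function bucketSort(a) {
  const n = a.length;
  const buckets = Array.from({ length: n }, () => []);

  // 1) Insert each x into bucket[floor(n * x)]
  for (const x of a) {
    buckets[Math.floor(n * x)].push(x);
  }

  // 2) Sort each bucket individually (insertion sort)
  for (const bucket of buckets) {
    for (let j = 1; j < bucket.length; j++) {
      const key = bucket[j];
      let i = j - 1;
      while (i >= 0 && bucket[i] > key) {
        bucket[i + 1] = bucket[i];
        i--;
      }
      bucket[i + 1] = key;
    }
  }

  // 3) Concatenate the buckets back together
  return [].concat(...buckets);
}

console.log(bucketSort([0.42, 0.32, 0.23, 0.52, 0.25, 0.47, 0.51]));
// → [0.23, 0.25, 0.32, 0.42, 0.47, 0.51, 0.52]
```

When the inputs are uniformly distributed, each bucket holds only a few elements, which is what gives bucket sort its average-case O(n) behavior.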
Counting Sort
Counting Sort is a sorting technique based on keys between a specific range. It works by counting the number of objects having distinct key values (kind of hashing). Then doing some arithmetic to calculate the position of each object in the output sequence.
Example:
    For simplicity, consider the data in the range 0 to 9.
    Input data: 1, 4, 1, 2, 7, 5, 2

    1) Take a count array to store the count of each unique object.
       Index:  0  1  2  3  4  5  6  7  8  9
       Count:  0  2  2  0  1  1  0  1  0  0

    2) Modify the count array such that each element at each index
       stores the sum of previous counts.
       Index:  0  1  2  3  4  5  6  7  8  9
       Count:  0  2  4  4  5  6  6  7  7  7

    The modified count array indicates the position of each object in
    the output sequence.

    3) Output each object from the input sequence, followed by
       decreasing its count by 1.
       Process the input data: 1, 4, 1, 2, 7, 5, 2. Position of 1 is 2.
       Put data 1 at index 2 in the output. Decrease the count by 1 to
       place the next data 1 at an index 1 smaller than this index.
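The three steps above describe the stable, prefix-sum variant of counting sort. Here is a sketch of exactly those steps in JavaScript (the variable names are my own):

```javascript
// Stable counting sort for integers in the range 0..maxValue,
// following the three steps of the worked example above.
function countingSort(input, maxValue) {
  // Step 1: count each distinct key
  const count = new Array(maxValue + 1).fill(0);
  for (const x of input) count[x]++;

  // Step 2: running sums, so count[v] = number of elements <= v,
  // i.e. the (1-based) position of the last v in the output
  for (let v = 1; v <= maxValue; v++) count[v] += count[v - 1];

  // Step 3: place each element at its position, decrementing the count.
  // Walking the input right-to-left keeps equal keys in their original order.
  const output = new Array(input.length);
  for (let j = input.length - 1; j >= 0; j--) {
    const x = input[j];
    output[count[x] - 1] = x;
    count[x]--;
  }
  return output;
}

console.log(countingSort([1, 4, 1, 2, 7, 5, 2], 9));
// → [1, 1, 2, 2, 4, 5, 7]
```

The simpler implementations below skip steps 2 and 3 and instead emit each value directly from the counts, which sorts correctly but does not preserve the identity of equal elements (and so cannot be used as a stable subroutine, e.g. inside Radix Sort).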
Properties
- Space complexity: O(K)
- Best case performance: O(n+K)
- Average case performance: O(n+K)
- Worst case performance: O(n+K)
- Stable: Yes (here K is the range of the input values, i.e. the size of the count array)
Implementation in JavaScript
    let numbers = [1, 4, 1, 2, 7, 5, 2];
    let count = [];
    let i, z = 0;
    let max = Math.max(...numbers);

    // initialize counter
    for (i = 0; i <= max; i++) {
      count[i] = 0;
    }

    for (i = 0; i < numbers.length; i++) {
      count[numbers[i]]++;
    }

    for (i = 0; i <= max; i++) {
      while (count[i]-- > 0) {
        numbers[z++] = i;
      }
    }

    // output sorted array
    for (i = 0; i < numbers.length; i++) {
      console.log(numbers[i]);
    }
C++ Implementation
    #include <iostream>
    #include <vector>

    // Sorts numbersToSort in place, given the lower and upper bounds
    // of the values it contains
    void countSort(int upperBound, int lowerBound, std::vector<int>& numbersToSort)
    {
        int range = upperBound - lowerBound + 1;  // +1 so both bounds are included
        std::vector<int> counts(range, 0);        // one counter per possible value

        for (int i = 0; i < (int)numbersToSort.size(); i++) {
            // For example, if 5 is the lower bound and numbersToSort[i] is 5,
            // index will be 0, so the count of 5 is stored in counts[0]
            int index = numbersToSort[i] - lowerBound;
            counts[index] += 1;
        }

        // Write the values back in sorted order
        int z = 0;
        for (int i = 0; i < range; i++) {
            while (counts[i]-- > 0) {
                numbersToSort[z++] = i + lowerBound;
            }
        }
    }
Swift Implementation
    func countingSort(_ array: [Int]) {
        // Create an array to store the count of each element
        let maxElement = array.max() ?? 0
        var countArray = [Int](repeating: 0, count: maxElement + 1)
        for element in array {
            countArray[element] += 1
        }

        var z = 0
        var sortedArray = [Int](repeating: 0, count: array.count)
        // Start at index 0 so that zero values are also emitted
        for index in 0 ..< countArray.count {
            // emit each index the required number of times
            while countArray[index] > 0 {
                sortedArray[z] = index
                z += 1
                countArray[index] -= 1
            }
        }
        print(sortedArray)
    }
Insertion Sort

Insertion sort builds the final sorted array one element at a time: each element (the key) is compared with the elements before it and inserted into its correct position among them. Consider the following array:
[ 8 3 5 1 4 2 ]
Step 1:

    key = 3   // starting from the 1st index
    // Here `key` is compared with the previous elements. In this case,
    // `key` is compared with 8. Since 8 > 3, move the element 8 to the
    // next position and insert `key` at the previous position.
    Result: [ 3 8 5 1 4 2 ]

Step 2:

    key = 5   // 2nd index
    8 > 5     // move 8 to the 2nd index and insert 5 at the 1st index
    Result: [ 3 5 8 1 4 2 ]

Step 3:

    key = 1   // 3rd index
    8 > 1  =>  [ 3 5 1 8 4 2 ]
    5 > 1  =>  [ 3 1 5 8 4 2 ]
    3 > 1  =>  [ 1 3 5 8 4 2 ]
    Result: [ 1 3 5 8 4 2 ]

Step 4:

    key = 4   // 4th index
    8 > 4  =>  [ 1 3 5 4 8 2 ]
    5 > 4  =>  [ 1 3 4 5 8 2 ]
    3 > 4  ≠>  stop
    Result: [ 1 3 4 5 8 2 ]

Step 5:

    key = 2   // 5th index
    8 > 2  =>  [ 1 3 4 5 2 8 ]
    5 > 2  =>  [ 1 3 4 2 5 8 ]
    4 > 2  =>  [ 1 3 2 4 5 8 ]
    3 > 2  =>  [ 1 2 3 4 5 8 ]
    1 > 2  ≠>  stop
    Result: [ 1 2 3 4 5 8 ]
The algorithm shown below is a slightly optimized version that avoids swapping the key element in every iteration. Here, the key element is written to its final position once, at the end of each iteration (step).
    InsertionSort(arr[])
        for j = 1 to arr.length - 1
            key = arr[j]
            i = j - 1
            while i >= 0 and arr[i] > key
                arr[i+1] = arr[i]
                i = i - 1
            arr[i+1] = key
Here is a detailed implementation in JavaScript:
function insertion_sort(A) {
    var len = A.length;
    var i = 1;
    while (i < len) {
        var x = A[i];
        var j = i - 1;
        while (j >= 0 && A[j] > x) {
            A[j + 1] = A[j];
            j = j - 1;
        }
        A[j + 1] = x;
        i = i + 1;
    }
}
A quick implementation in Swift is shown below:
var array = [8, 3, 5, 1, 4, 2]

func insertionSort(array: inout Array<Int>) -> Array<Int> {
    for j in 1..<array.count {
        let key = array[j]
        var i = j - 1
        while (i >= 0 && array[i] > key) {
            array[i + 1] = array[i]
            i = i - 1
        }
        array[i + 1] = key
    }
    return array
}
The Java example is shown below:
public int[] insertionSort(int[] arr) {
    for (int j = 1; j < arr.length; j++) {
        int key = arr[j];
        int i = j - 1;
        while (i >= 0 && arr[i] > key) {
            arr[i + 1] = arr[i];
            i -= 1;
        }
        arr[i + 1] = key;
    }
    return arr;
}
And in c....
void insertionSort(int arr[], int n) { int i, key, j; for (i = 1; i < n; i++) { key = arr[i]; j = i-1; while (j >= 0 && arr[j] > key) { arr[j+1] = arr[j]; j = j-1; } arr[j+1] = key; } }
Properties:
- Space Complexity: O(1)
- Time Complexity: O(n), O(n^2), O(n^2) for Best, Average, Worst cases respectively.
- Best Case: array is already sorted
- Average Case: array is randomly sorted
- Worst Case: array is sorted in reverse order.
- Sorting In Place: Yes
- Stable: Yes
Heapsort
Heapsort is an efficient sorting algorithm based on the use of max/min heaps. A heap is a tree-based data structure that satisfies the heap property – that is for a max heap, the key of any node is less than or equal to the key of its parent (if it has a parent).
This property can be leveraged to extract the maximum element from the heap in O(log n) time using the maxHeapify method (reading the maximum itself is O(1); restoring the heap property after removing it is what costs O(log n)). We perform this operation n times, each time moving the maximum element in the heap to the top of the heap and extracting it from the heap and into a sorted array. Thus, after n iterations we will have a sorted version of the input array.
The algorithm first arranges the input into a heap; this can be done within the input array itself, which is why heapsort is usually considered an in-place algorithm. The algorithm is unstable, which means that when comparing objects with the same key, the original ordering is not preserved.
This algorithm runs in O(nlogn) time and O(1) additional space [O(n) including the space required to store the input data] since all operations are performed entirely in-place.
The best, worst and average case time complexity of Heapsort is O(n log n). Although heapsort has a better worst-case complexity than quicksort, a well-implemented quicksort runs faster in practice. This is a comparison-based algorithm, so it can be used for non-numerical data sets insofar as some relation (heap property) can be defined over the elements.
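The extract-the-maximum loop can also be illustrated with Python's built-in heapq module. heapq is a min-heap, so values come out in ascending order; note that this sketch copies the input and therefore uses O(n) extra space, unlike an in-place array implementation:

```python
import heapq

def heapsort(items):
    heap = list(items)                 # copy so the input is left untouched
    heapq.heapify(heap)                # build the heap in O(n)
    return [heapq.heappop(heap)        # n pops, each O(log n): O(n log n) total
            for _ in range(len(heap))]

print(heapsort([12, 11, 13, 5, 6, 7]))  # [5, 6, 7, 11, 12, 13]
```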
An implementation in Java is shown below:
import java.util.Arrays; public class Heapsort { public static void main(String[] args) { //test array Integer[] arr = {1, 4, 3, 2, 64, 3, 2, 4, 5, 5, 2, 12, 14, 5, 3, 0, -1}; String[] strarr = {"hope you find this helpful!", "wef", "rg", "q2rq2r", "avs", "erhijer0g", "ewofij", "gwe", "q", "random"}; arr = heapsort(arr); strarr = heapsort(strarr); System.out.println(Arrays.toString(arr)); System.out.println(Arrays.toString(strarr)); } //O(nlogn) TIME, O(1) SPACE, NOT STABLE public static <E extends Comparable<E>> E[] heapsort(E[] arr){ int heaplength = arr.length; for(int i = arr.length/2; i>0;i--){ arr = maxheapify(arr, i, heaplength); } for(int i=arr.length-1;i>=0;i--){ E max = arr[0]; arr[0] = arr[i]; arr[i] = max; heaplength--; arr = maxheapify(arr, 1, heaplength); } return arr; } //Creates maxheap from array public static <E extends Comparable<E>> E[] maxheapify(E[] arr, Integer node, Integer heaplength){ Integer left = node*2; Integer right = node*2+1; Integer largest = node; if(left.compareTo(heaplength) <=0 && arr[left-1].compareTo(arr[node-1]) >= 0){ largest = left; } if(right.compareTo(heaplength) <= 0 && arr[right-1].compareTo(arr[largest-1]) >= 0){ largest = right; } if(largest != node){ E temp = arr[node-1]; arr[node-1] = arr[largest-1]; arr[largest-1] = temp; maxheapify(arr, largest, heaplength); } return arr; } }
Implementation in C++
#include <iostream> using namespace std; void heapify(int arr[], int n, int i) { int largest = i; int l = 2*i + 1; int r = 2*i + 2; if (l < n && arr[l] > arr[largest]) largest = l; if (r < n && arr[r] > arr[largest]) largest = r; if (largest != i) { swap(arr[i], arr[largest]); heapify(arr, n, largest); } } void heapSort(int arr[], int n) { for (int i = n / 2 - 1; i >= 0; i--) heapify(arr, n, i); for (int i=n-1; i>=0; i--) { swap(arr[0], arr[i]); heapify(arr, i, 0); } } void printArray(int arr[], int n) { for (int i=0; i<n; ++i) cout << arr[i] << " "; cout << "\n"; } int main() { int arr[] = {12, 11, 13, 5, 6, 7}; int n = sizeof(arr)/sizeof(arr[0]); heapSort(arr, n); cout << "Sorted array is \n"; printArray(arr, n); }
Radix Sort
Prerequisite: Counting Sort
QuickSort, MergeSort, and HeapSort are comparison-based sorting algorithms. CountSort is not. It has the complexity of O(n+k), where k is the maximum element of the input array. So, if k is O(n), CountSort becomes linear sorting, which is better than comparison based sorting algorithms that have O(nlogn) time complexity.
The idea is to extend the CountSort algorithm to get a better time complexity when k is O(n^2). Here comes the idea of Radix Sort.
The idea of Radix Sort is to sort the numbers digit by digit, starting from the least significant digit and moving towards the most significant digit, using a stable counting sort as the subroutine for each digit.

An implementation in C:

#include <stdio.h>

// A utility function to get the maximum value in arr[]
int getMax(int arr[], int n)
{
    int mx = arr[0];
    for (int i = 1; i < n; i++)
        if (arr[i] > mx)
            mx = arr[i];
    return mx;
}

// Counting sort of arr[] according to the digit represented by exp
void countSort(int arr[], int n, int exp)
{
    int output[n];
    int count[10] = {0};

    // Store count of occurrences of each digit
    for (int i = 0; i < n; i++)
        count[(arr[i] / exp) % 10]++;

    // Change count[i] so that it contains the position of this digit in output[]
    for (int i = 1; i < 10; i++)
        count[i] += count[i - 1];

    // Build the output array; walking backwards keeps the sort stable
    for (int i = n - 1; i >= 0; i--)
    {
        output[count[(arr[i] / exp) % 10] - 1] = arr[i];
        count[(arr[i] / exp) % 10]--;
    }

    // Copy the output array back to arr[]
    for (int i = 0; i < n; i++)
        arr[i] = output[i];
}

// Sorts arr[] of size n using Radix Sort
void radixsort(int arr[], int n)
{
    int m = getMax(arr, n);
    // Do counting sort for every digit; exp is 10^i for the current digit
    for (int exp = 1; m / exp > 0; exp *= 10)
        countSort(arr, n, exp);
}
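The same digit-by-digit idea can be sketched compactly in Python using stable bucketing per digit (a simplification: classic radix sort uses counting sort as the per-digit subroutine, but any stable pass works; non-negative integers assumed):

```python
def radix_sort(arr):
    if not arr:
        return arr
    exp = 1
    while max(arr) // exp > 0:
        buckets = [[] for _ in range(10)]       # one bucket per digit 0-9
        for x in arr:
            buckets[(x // exp) % 10].append(x)  # stable: earlier order is preserved
        arr = [x for b in buckets for x in b]   # concatenate buckets in digit order
        exp *= 10
    return arr

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# [2, 24, 45, 66, 75, 90, 170, 802]
```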
Selection Sort
Selection Sort is one of the simplest sorting algorithms. This algorithm gets its name from the way it iterates through the array: it selects the current smallest element, and swaps it into place.
Here's how it works:
- Find the smallest element in the array and swap it with the first element.
- Find the second smallest element and swap it with the second element in the array.
- Find the third smallest element and swap it with the third element in the array.
- Repeat the process of finding the next smallest element and swapping it into the correct position until the entire array is sorted.
But, how would you write the code for finding the index of the second smallest value in an array?
An easy way is to notice that the smallest value has already been swapped into index 0, so the problem reduces to finding the smallest element in the array starting at index 1.
Selection sort always takes the same number of key comparisons — N(N − 1)/2.
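That comparison count is easy to check empirically. A quick Python sketch that counts key comparisons (the total is N(N − 1)/2 regardless of the input order):

```python
def selection_sort_count(arr):
    comparisons = 0
    a = list(arr)
    for i in range(len(a)):
        min_i = i
        for j in range(i + 1, len(a)):
            comparisons += 1            # one key comparison per inner step
            if a[j] < a[min_i]:
                min_i = j
        a[i], a[min_i] = a[min_i], a[i]
    return a, comparisons

for n in (4, 10):
    _, c = selection_sort_count(range(n, 0, -1))
    print(n, c, n * (n - 1) // 2)       # counts always match N(N-1)/2
```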
Implementation in C/C++
The following C++ program contains an iterative as well as a recursive implementation of the Selection Sort algorithm. Both implementations are invoked in the
main() function.
#include <iostream> #include <string> using namespace std; template<typename T, size_t n> void print_array(T const(&arr)[n]) { for (size_t i = 0; i < n; i++) std::cout << arr[i] << ' '; cout << "\n"; } int minIndex(int a[], int i, int j) { if (i == j) return i; int k = minIndex(a, i + 1, j); return (a[i] < a[k]) ? i : k; } void recurSelectionSort(int a[], int n, int index = 0) { if (index == n) return; int k = minIndex(a, index, n - 1); if (k != index) swap(a[k], a[index]); recurSelectionSort(a, n, index + 1); } void iterSelectionSort(int a[], int n) { for (int i = 0; i < n; i++) { int min_index = i; int min_element = a[i]; for (int j = i + 1; j < n; j++) { if (a[j] < min_element) { min_element = a[j]; min_index = j; } } swap(a[i], a[min_index]); } } int main() { int recurArr[6] = { 100,35, 500, 9, 67, 20 }; int iterArr[5] = { 25, 0, 500, 56, 98 }; cout << "Recursive Selection Sort" << "\n"; print_array(recurArr); // 100 35 500 9 67 20 recurSelectionSort(recurArr, 6); print_array(recurArr); // 9 20 35 67 100 500 cout << "Iterative Selection Sort" << "\n"; print_array(iterArr); // 25 0 500 56 98 iterSelectionSort(iterArr, 5); print_array(iterArr); // 0 25 56 98 500 }
Implementation in JavaScript
function selection_sort(A) { var len = A.length; for (var i = 0; i < len - 1; i = i + 1) { var j_min = i; for (var j = i + 1; j < len; j = j + 1) { if (A[j] < A[j_min]) { j_min = j; } else {} } if (j_min !== i) { swap(A, i, j_min); } else {} } } function swap(A, x, y) { var temp = A[x]; A[x] = A[y]; A[y] = temp; }
Implementation in Python
def selection_sort(arr):
    if not arr:
        return arr
    for i in range(len(arr)):
        min_i = i
        for j in range(i + 1, len(arr)):
            if arr[j] < arr[min_i]:
                min_i = j
        arr[i], arr[min_i] = arr[min_i], arr[i]
Implementation in Java
public void selectionsort(int array[]) { int n = array.length; //method to find length of array for (int i = 0; i < n-1; i++) { int index = i; int min = array[i]; // taking the min element as ith element of array for (int j = i+1; j < n; j++) { if (array[j] < array[index]) { index = j; min = array[j]; } } int t = array[index]; //Interchange the places of the elements array[index] = array[i]; array[i] = t; } }
Implementation in MATLAB
function [sorted] = selectionSort(unsorted) len = length(unsorted); for i = 1:1:len minInd = i; for j = i+1:1:len if unsorted(j) < unsorted(minInd) minInd = j; end end unsorted([i minInd]) = unsorted([minInd i]); end sorted = unsorted; end
Properties
- Space Complexity: O(1) (selection sort sorts in place, using only a constant amount of extra memory)
- Time Complexity: O(n^2)
- Sorting in Place: Yes
- Stable: No
Bubble Sort
Just like the way bubbles rise from the bottom of a glass, bubble sort is a simple algorithm that sorts a list, allowing either lower or higher values to bubble up to the top. The algorithm traverses a list and compares adjacent values, swapping them if they are not in the correct order.
With a worst-case complexity of O(n^2), bubble sort is very slow compared to other sorting algorithms like quicksort. The upside is that it is one of the easiest sorting algorithms to understand and code from scratch.
From a technical perspective, bubble sort is reasonable for sorting small arrays, especially when executing sorting algorithms on computers with remarkably limited memory resources.
Example:
First pass through the list:
- Starting with
[4, 2, 6, 3, 9], the algorithm compares the first two elements in the array, 4 and 2. It swaps them because 2 < 4:
[2, 4, 6, 3, 9]
- It compares the next two values, 4 and 6. As 4 < 6, these are already in order, and the algorithm moves on:
[2, 4, 6, 3, 9]
- The next two values are also swapped because 3 < 6:
[2, 4, 3, 6, 9]
- The last two values, 6 and 9, are already in order, so the algorithm does not swap them.
Second pass through the list:
- 2 < 4, so there is no need to swap positions:
[2, 4, 3, 6, 9]
- The algorithm swaps the next two values because 3 < 4:
[2, 3, 4, 6, 9]
- No swap as 4 < 6:
[2, 3, 4, 6, 9]
- Again, 6 < 9, so no swap occurs:
[2, 3, 4, 6, 9]
The list is already sorted, but the bubble sort algorithm doesn't realize this. Rather, it needs to complete an entire pass through the list without swapping any values to know the list is sorted.
Third pass through the list:
[2, 3, 4, 6, 9]=>
[2, 3, 4, 6, 9]
[2, 3, 4, 6, 9]=>
[2, 3, 4, 6, 9]
[2, 3, 4, 6, 9]=>
[2, 3, 4, 6, 9]
[2, 3, 4, 6, 9]=>
[2, 3, 4, 6, 9]
Clearly bubble sort is far from the most efficient sorting algorithm. Still, it's simple to wrap your head around and implement yourself.
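That observation — a complete pass with no swaps means the list is sorted — is usually coded as an early-exit flag. A Python sketch:

```python
def bubble_sort_early_exit(arr):
    n = len(arr)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:   # a full pass with no swaps: the list is sorted, stop early
            break
    return arr

print(bubble_sort_early_exit([4, 2, 6, 3, 9]))  # [2, 3, 4, 6, 9]
```

On an already-sorted input this version does a single pass and exits, giving the O(n) best case listed in the properties below.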
Properties
- Space complexity: O(1)
- Best case performance: O(n)
- Average case performance: O(n*n)
- Worst case performance: O(n*n)
- Stable: Yes
Example in JavaScript
let arr = [1, 4, 7, 45, 7, 43, 44, 25, 6, 4, 6, 9],
    sorted = false;

while (!sorted) {
    sorted = true;
    for (var i = 1; i < arr.length; i++) {
        if (arr[i] < arr[i - 1]) {
            let temp = arr[i];
            arr[i] = arr[i - 1];
            arr[i - 1] = temp;
            sorted = false;
        }
    }
}
Example in Java
public class BubbleSort {

    static void sort(int[] arr) {
        int n = arr.length;
        int temp = 0;
        for (int i = 0; i < n; i++) {
            for (int x = 1; x < (n - i); x++) {
                if (arr[x - 1] > arr[x]) {
                    temp = arr[x - 1];
                    arr[x - 1] = arr[x];
                    arr[x] = temp;
                }
            }
        }
    }

    public static void main(String[] args) {
        int[] arr = new int[15];
        for (int i = 0; i < 15; i++) {
            arr[i] = (int) (Math.random() * 100 + 1);
        }
        System.out.println("array before sorting\n");
        for (int i = 0; i < arr.length; i++) {
            System.out.print(arr[i] + " ");
        }
        sort(arr);
        System.out.println("\n array after sorting\n");
        for (int i = 0; i < arr.length; i++) {
            System.out.print(arr[i] + " ");
        }
    }
}
Example in C++
// Recursive Implementation void bubblesort(int arr[], int n) { if(n==1) //Initial Case return; bool swap_flag = false; for(int i=0;i<n-1;i++) //After this pass the largest element will move to its desired location. { if(arr[i]>arr[i+1]) { int temp=arr[i]; arr[i]=arr[i+1]; arr[i+1]=temp; swap_flag = true; } } // IF no two elements were swapped in the loop, then return, as array is sorted if(swap_flag == false) return; bubblesort(arr,n-1); //Recursion for remaining array }
Example in Swift
func bubbleSort(_ inputArray: [Int]) -> [Int] { guard inputArray.count > 1 else { return inputArray } // make sure our input array has more than 1 element var numbers = inputArray // function arguments are constant by default in Swift, so we make a copy for i in 0..<(numbers.count - 1) { for j in 0..<(numbers.count - i - 1) { if numbers[j] > numbers[j + 1] { numbers.swapAt(j, j + 1) } } } return numbers // return the sorted array }
Example in Python
def bubbleSort(arr):
    n = len(arr)
    for i in range(n):
        for j in range(0, n-i-1):
            if arr[j] > arr[j+1]:
                arr[j], arr[j+1] = arr[j+1], arr[j]
    print(arr)
Example in PHP
function bubble_sort($arr) { $size = count($arr)-1; for ($i=0; $i<$size; $i++) { for ($j=0; $j<$size-$i; $j++) { $k = $j+1; if ($arr[$k] < $arr[$j]) { // Swap elements at indices: $j, $k list($arr[$j], $arr[$k]) = array($arr[$k], $arr[$j]); } } } return $arr;// return the sorted array } $arr = array(1,3,2,8,5,7,4,0); print("Before sorting"); print_r($arr); $arr = bubble_sort($arr); print("After sorting by using bubble sort"); print_r($arr);
Example in C
#include <stdio.h>

void BubbleSort(int array[], int n);

int main(void)
{
    int arr[] = {10, 2, 3, 1, 4, 5, 8, 9, 7, 6};
    BubbleSort(arr, 10);
    for (int i = 0; i < 10; i++)
    {
        printf("%d ", arr[i]);
    }
    return 0;
}

void BubbleSort(int array[], int n)
{
    for (int i = 0; i < n - 1; i++)
    {
        for (int j = 0; j < n - i - 1; j++) // n is the length of the array
        {
            if (array[j] > array[j+1]) // For decreasing order use < instead
            {
                int swap = array[j];
                array[j] = array[j+1];
                array[j+1] = swap;
            }
        }
    }
}
Quick Sort
Quick sort is an efficient divide and conquer sorting algorithm. Average case time complexity of Quick Sort is O(nlog(n)) with worst case time complexity being O(n^2) depending on the selection of the pivot element, which divides the current array into two sub arrays.
For instance, the time complexity of Quick Sort is approximately
O(nlog(n)) when the selection of pivot divides original array into two nearly equal sized sub arrays.
On the other hand, if the procedure that selects the pivot consistently produces two sub arrays with a large difference in size, the quick sort algorithm can hit its worst case time complexity of O(n^2).
The steps involved in Quick Sort are:
- Choose an element to serve as a pivot, in this case, the last element of the array is the pivot.
- Partitioning: Sort the array in such a manner that all elements less than the pivot are to the left, and all elements greater than the pivot are to the right.
- Call Quicksort recursively, taking into account the previous pivot to properly subdivide the left and right arrays. (A more detailed explanation can be found in the comments below)
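The partitioning step (2) can be sketched on its own. Here is a Lomuto-style partition in Python, taking the last element as the pivot as in step 1; i tracks the boundary of the "less than or equal to pivot" region:

```python
def partition(arr, start, end):
    pivot = arr[end]
    i = start - 1                                # end of the "<= pivot" region
    for j in range(start, end):
        if arr[j] <= pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]      # grow the "<= pivot" region
    arr[i + 1], arr[end] = arr[end], arr[i + 1]  # drop the pivot between the regions
    return i + 1                                 # the pivot's final index

a = [6, 2, 5, 3, 8, 7, 1, 4]
p = partition(a, 0, len(a) - 1)
print(p, a)  # 3 [2, 3, 1, 4, 8, 7, 5, 6]
```

Everything left of index p is less than or equal to the pivot 4, and everything to its right is greater, so the two halves can be sorted independently.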
Example Implementations in Various Languages
Implementation in JavaScript:
const arr = [6, 2, 5, 3, 8, 7, 1, 4]; const quickSort = (arr, start, end) => { if(start < end) { // You can learn about how the pivot value is derived in the comments below let pivot = partition(arr, start, end); // Make sure to read the below comments to understand why pivot - 1 and pivot + 1 are used // These are the recursive calls to quickSort quickSort(arr, start, pivot - 1); quickSort(arr, pivot + 1, end); } } const partition = (arr, start, end) => { let pivot = end; // Set i to start - 1 so that it can access the first index in the event that the value at arr[0] is greater than arr[pivot] // Succeeding comments will expound upon the above comment let i = start - 1, j = start; // Increment j up to the index preceding the pivot while (j < pivot) { // If the value is greater than the pivot increment j if (arr[j] > arr[pivot]) { j++; } // When the value at arr[j] is less than the pivot: // increment i (arr[i] will be a value greater than arr[pivot]) and swap the value at arr[i] and arr[j] else { i++; swap(arr, j, i); j++; } } //The value at arr[i + 1] will be greater than the value of arr[pivot] swap(arr, i + 1, pivot); //You return i + 1, as the values to the left of it are less than arr[i+1], and values to the right are greater than arr[i + 1] // As such, when the recursive quicksorts are called, the new sub arrays will not include this the previously used pivot value return i + 1; } const swap = (arr, firstIndex, secondIndex) => { let temp = arr[firstIndex]; arr[firstIndex] = arr[secondIndex]; arr[secondIndex] = temp; } quickSort(arr, 0, arr.length - 1); console.log(arr);
Implementation in C
#include <stdio.h>

void swap(int* a, int* b)
{
    int t = *a;
    *a = *b;
    *b = t;
}

// Lomuto partition: places the pivot (last element) at its final position
int partition(int arr[], int low, int high)
{
    int pivot = arr[high];
    int i = (low - 1);
    for (int j = low; j <= high - 1; j++)
    {
        if (arr[j] <= pivot)
        {
            i++;
            swap(&arr[i], &arr[j]);
        }
    }
    swap(&arr[i + 1], &arr[high]);
    return (i + 1);
}

void quickSort(int arr[], int low, int high)
{
    if (low < high)
    {
        int pi = partition(arr, low, high);
        quickSort(arr, low, pi - 1);
        quickSort(arr, pi + 1, high);
    }
}

void printArray(int arr[], int size)
{
    for (int i = 0; i < size; i++)
        printf("%d ", arr[i]);
    printf("\n");
}

int main()
{
    int arr[] = {10, 7, 8, 9, 1, 5};
    int n = sizeof(arr)/sizeof(arr[0]);
    quickSort(arr, 0, n-1);
    printf("Sorted array: \n");
    printArray(arr, n);
    return 0;
}
Implementation in Python3
import random

z = [random.randint(0, 100) for i in range(0, 20)]

def quicksort(z):
    if(len(z) > 1):
        piv = int(len(z) / 2)
        val = z[piv]
        lft = [i for i in z if i < val]
        mid = [i for i in z if i == val]
        rgt = [i for i in z if i > val]
        res = quicksort(lft) + mid + quicksort(rgt)
        return res
    else:
        return z

ans1 = quicksort(z)
print(ans1)
Implementation in MATLAB
a = [9,4,7,3,8,5,1,6,2]; sorted = quicksort(a,1,length(a)); function [unsorted] = quicksort(unsorted, low, high) if low < high [pInd, unsorted] = partition(unsorted, low, high); unsorted = quicksort(unsorted, low, pInd-1); unsorted = quicksort(unsorted, pInd+1, high); end end function [pInd, unsorted] = partition(unsorted, low, high) i = low-1; for j = low:1:high-1 if unsorted(j) <= unsorted(high) i = i+1; unsorted([i,j]) = unsorted([j,i]); end end unsorted([i+1,high]) = unsorted([high,i+1]); pInd = i+1; end
The space complexity of quick sort is O(log(n)), used by the recursion stack. This is an improvement over merge sort, which requires O(n) additional space. Quick sort achieves this by changing the order of elements within the given array, rather than allocating new arrays. Compare this with the merge sort algorithm, which creates 2 arrays, each of length n/2, in each function call.

However, the algorithm degrades to O(n*n) time if the chosen pivot is consistently the smallest or largest element of its sub array (for example, an already sorted input with the last element taken as the pivot). This can be overcome by utilizing a random pivot.
Complexity
Best: n log(n); Average: n log(n); Worst: n*n; Memory: log(n). It's not a stable algorithm, and quicksort is usually done in-place with O(log(n)) stack space.
Timsort
Timsort is a fast, stable sorting algorithm that runs in O(N log(N)) time.
Timsort is a blend of Insertion Sort and Mergesort. This algorithm is implemented in Java’s Arrays.sort() as well as Python’s sorted() and sort(). The smaller parts are sorted using Insertion Sort and are later merged together using Mergesort.
A quick implementation in Python:
def binary_search(the_array, item, start, end): if start == end: if the_array[start] > item: return start else: return start + 1 if start > end: return start mid = round((start + end)/ 2) if the_array[mid] < item: return binary_search(the_array, item, mid + 1, end) elif the_array[mid] > item: return binary_search(the_array, item, start, mid - 1) else: return mid """ Insertion sort that timsort uses if the array size is small or if the size of the "run" is small """ def insertion_sort(the_array): l = len(the_array) for index in range(1, l): value = the_array[index] pos = binary_search(the_array, value, 0, index - 1) the_array = the_array[:pos] + [value] + the_array[pos:index] + the_array[index+1:] return the_array def merge(left, right): """Takes two sorted lists and returns a single sorted list by comparing the elements one at a time. [1, 2, 3, 4, 5, 6] """ if not left: return right if not right: return left if left[0] < right[0]: return [left[0]] + merge(left[1:], right) return [right[0]] + merge(left, right[1:]) def timsort(the_array): runs, sorted_runs = [], [] length = len(the_array) new_run = [the_array[0]] # for every i in the range of 1 to length of array for i in range(1, length): # if i is at the end of the list if i == length - 1: new_run.append(the_array[i]) runs.append(new_run) break # if the i'th element of the array is less than the one before it if the_array[i] < the_array[i-1]: # if new_run is set to None (NULL) if not new_run: runs.append([the_array[i]]) new_run.append(the_array[i]) else: runs.append(new_run) new_run = [] # else if its equal to or more than else: new_run.append(the_array[i]) # for every item in runs, append it using insertion sort for item in runs: sorted_runs.append(insertion_sort(item)) # for every run in sorted_runs, merge them sorted_array = [] for run in sorted_runs: sorted_array = merge(sorted_array, run) print(sorted_array) timsort([2, 3, 1, 5, 6, 7])
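For contrast, here is a much-simplified sketch of the core idea — fixed-size runs sorted with insertion sort, then merged pairwise. Real Timsort additionally detects natural runs and uses galloping merges, which this sketch omits:

```python
def insertion_sort_slice(a, lo, hi):          # sort a[lo:hi] in place
    for j in range(lo + 1, hi):
        key, i = a[j], j - 1
        while i >= lo and a[i] > key:
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key

def merge(left, right):                       # standard stable two-list merge
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def simple_timsort(a, run=32):
    for lo in range(0, len(a), run):          # phase 1: insertion-sort each run
        insertion_sort_slice(a, lo, min(lo + run, len(a)))
    width = run
    while width < len(a):                     # phase 2: merge runs pairwise
        for lo in range(0, len(a), 2 * width):
            merged = merge(a[lo:lo + width], a[lo + width:lo + 2 * width])
            a[lo:lo + len(merged)] = merged
        width *= 2
    return a

print(simple_timsort([9, 3, 7, 1, 8, 2, 6, 4, 5, 0], run=3))
```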
Complexity:
Timsort has a guaranteed O(N log(N)) complexity, even in the worst case, and compares really well with Quicksort.
Merge Sort
Merge Sort is a Divide and Conquer algorithm. It divides input array in two halves, calls itself for the two halves and then merges the two sorted halves. The major portion of the algorithm is given two sorted arrays, and we have to merge them into a single sorted array. The whole process of sorting an array of N integers can be summarized into three steps-
- Divide the array into two halves.
- Sort the left half and the right half using the same recurring algorithm.
- Merge the sorted halves.
There is something known as the Two Finger Algorithm that helps us merge two sorted arrays together. Using this subroutine and calling the merge sort function on the array halves recursively will give us the final sorted array we are looking for.
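The "Two Finger" merge subroutine can be sketched on its own in Python — two indices walk the sorted halves, always copying the smaller front element (using <= keeps the merge stable):

```python
def merge(left, right):
    result = []
    i = j = 0
    while i < len(left) and j < len(right):  # both "fingers" still in range
        if left[i] <= right[j]:              # <= keeps equal keys in original order
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])                  # at most one of these is non-empty
    result.extend(right[j:])
    return result

print(merge([3, 5, 8], [1, 2, 9]))  # [1, 2, 3, 5, 8, 9]
```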
Since this is a recursion based algorithm, we have a recurrence relation for it. A recurrence relation is simply a way of representing a problem in terms of its subproblems.
T(n) = 2 * T(n / 2) + O(n)
Putting it in plain English, we break down the subproblem into two parts at every step and we have some linear amount of work that we have to do for merging the two sorted halves together at each step.
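Unrolling this recurrence shows where the n log(n) bound comes from (a sketch, writing the O(n) merge cost as cn):

```latex
T(n) = 2T(n/2) + cn
     = 4T(n/4) + 2cn
     = 8T(n/8) + 3cn
     = \dots
     = 2^k \, T(n/2^k) + k \, cn
```

The halving stops when n/2^k = 1, that is, after k = log2(n) levels, giving T(n) = n T(1) + cn log2(n) = O(n log(n)).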
Complexity
The biggest advantage of using Merge sort is that the time complexity is only O(n*log(n)) to sort an entire array. It is a lot better than the O(n^2) running time of bubble sort or insertion sort.
Before we write code, let us understand how merge sort works with the help of an example.
- Initially we have an array of 6 unsorted integers Arr(5, 8, 3, 9, 1, 2)
- We split the array into two halves Arr1 = (5, 8, 3) and Arr2 = (9, 1, 2).
- Again, we divide them into two halves: Arr3 = (5, 8) and Arr4 = (3) and Arr5 = (9, 1) and Arr6 = (2)
- Again, we divide the two-element arrays: Arr7 = (5), Arr8 = (8) and Arr9 = (9), Arr10 = (1); the single-element arrays Arr4 = (3) and Arr6 = (2) remain as they are.
- We will now compare the elements in these sub arrays in order to merge them.
Properties:
- Space Complexity: O(n)
- Time Complexity: O(n*log(n)). The time complexity for the Merge Sort might not be obvious from the first glance. The log(n) factor that comes in is because of the recurrence relation we have mentioned before.
- Sorting In Place: No in a typical implementation
- Stable: Yes
- Parallelizable: yes (several parallel variants are discussed in the third edition of Cormen, Leiserson, Rivest, and Stein's Introduction to Algorithms.)
C++ Implementation
void merge(int array[], int left, int mid, int right) { int i, j, k; // Size of left sublist int size_left = mid - left + 1; // Size of right sublist int size_right = right - mid; /* create temp arrays */ int Left[size_left], Right[size_right]; /* Copy data to temp arrays L[] and R[] */ for(i = 0; i < size_left; i++) { Left[i] = array[left+i]; } for(j = 0; j < size_right; j++) { Right[j] = array[mid+1+j]; } // Merge the temp arrays back into arr[left..right] i = 0; // Initial index of left subarray j = 0; // Initial index of right subarray k = left; // Initial index of merged subarray while (i < size_left && j < size_right) { if (Left[i] <= Right[j]) { array[k] = Left[i]; i++; } else { array[k] = Right[j]; j++; } k++; } // Copy the remaining elements of Left[] while (i < size_left) { array[k] = Left[i]; i++; k++; } // Copy the rest elements of R[] while (j < size_right) { array[k] = Right[j]; j++; k++; } } void mergeSort(int array[], int left, int right) { if(left < right) { int mid = (left+right)/2; // Sort first and second halves mergeSort(array, left, mid); mergeSort(array, mid+1, right); // Finally merge them merge(array, left, mid, right); } }
JavaScript Implementation
function mergeSort (arr) { if (arr.length < 2) return arr; var mid = Math.floor(arr.length /2); var subLeft = mergeSort(arr.slice(0,mid)); var subRight = mergeSort(arr.slice(mid)); return merge(subLeft, subRight); }
First we check the length of the array. If it is 1 then we simply return the array. This would be our base case. Else, we will find out the middle value and divide the array into two halves. We will now sort both of the halves with recursive calls to MergeSort function.
function merge (a,b) { var result = []; while (a.length >0 && b.length >0) result.push(a[0] < b[0]? a.shift() : b.shift()); return result.concat(a.length? a : b); }
When we merge the two halves, we store the result in an auxiliary array. We compare the starting element of the left array to the starting element of the right array. Whichever is smaller is pushed into the results array, and we remove it from its respective array using the shift() method. If we still end up with values in either the left or the right array, we simply concatenate them to the end of the result. Here is the sorted result:
var test = [5,6,7,3,1,3,15]; console.log(mergeSort(test)); >> [1, 3, 3, 5, 6, 7, 15]
Implementation in JS
const list = [23, 4, 42, 15, 16, 8, 3] const mergeSort = (list) =>{ if(list.length <= 1) return list; const middle = list.length / 2 ; const left = list.slice(0, middle); const right = list.slice(middle, list.length); return merge(mergeSort(left), mergeSort(right)); } const merge = (left, right) => { var result = []; while(left.length || right.length) { if(left.length && right.length) { if(left[0] < right[0]) { result.push(left.shift()) } else { result.push(right.shift()) } } else if(left.length) { result.push(left.shift()) } else { result.push(right.shift()) } } return result; } console.log(mergeSort(list)) // [ 3, 4, 8, 15, 16, 23, 42 ]
Implementation in C
#include <stdlib.h>
#include <stdio.h>

void merge(int arr[], int l, int m, int r)
{
    int i, j, k;
    int n1 = m - l + 1;
    int n2 = r - m;
    int L[n1], R[n2];
    for (i = 0; i < n1; i++)
        L[i] = arr[l + i];
    for (j = 0; j < n2; j++)
        R[j] = arr[m + 1 + j];
    i = 0;
    j = 0;
    k = l;
    while (i < n1 && j < n2)
    {
        if (L[i] <= R[j])
        {
            arr[k] = L[i];
            i++;
        }
        else
        {
            arr[k] = R[j];
            j++;
        }
        k++;
    }
    while (i < n1)
    {
        arr[k] = L[i];
        i++;
        k++;
    }
    while (j < n2)
    {
        arr[k] = R[j];
        j++;
        k++;
    }
}

void mergeSort(int arr[], int l, int r)
{
    if (l < r)
    {
        int m = l + (r - l) / 2;
        mergeSort(arr, l, m);
        mergeSort(arr, m + 1, r);
        merge(arr, l, m, r);
    }
}

void printArray(int A[], int size)
{
    int i;
    for (i = 0; i < size; i++)
        printf("%d ", A[i]);
    printf("\n");
}

int main()
{
    int arr[] = {12, 11, 13, 5, 6, 7};
    int arr_size = sizeof(arr) / sizeof(arr[0]);
    printf("Given array is \n");
    printArray(arr, arr_size);
    mergeSort(arr, 0, arr_size - 1);
    printf("\nSorted array is \n");
    printArray(arr, arr_size);
    return 0;
}
Implementation in C++
Let us consider array A = {2,5,7,8,9,12,13} and array B = {3,5,6,9,15} and we want array C to be in ascending order as well.
void mergesort(int A[],int size_a,int B[],int size_b,int C[]) { int token_a,token_b,token_c; for(token_a=0, token_b=0, token_c=0; token_a<size_a && token_b<size_b; ) { if(A[token_a]<=B[token_b]) C[token_c++]=A[token_a++]; else C[token_c++]=B[token_b++]; } if(token_a<size_a) { while(token_a<size_a) C[token_c++]=A[token_a++]; } else { while(token_b<size_b) C[token_c++]=B[token_b++]; } }
Implementation in Python
def merge(left, right, compare):
    result = []
    i, j = 0, 0
    while (i < len(left) and j < len(right)):
        if compare(left[i], right[j]):
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    while (i < len(left)):
        result.append(left[i])
        i += 1
    while (j < len(right)):
        result.append(right[j])
        j += 1
    return result

def merge_sort(arr, compare = lambda x, y: x < y):
    # Used lambda function to sort array in both (increasing and decreasing) order.
    # By default it sorts the array in increasing order
    if len(arr) < 2:
        return arr[:]
    else:
        middle = len(arr) // 2
        left = merge_sort(arr[:middle], compare)
        right = merge_sort(arr[middle:], compare)
        return merge(left, right, compare)

arr = [2, 1, 4, 5, 3]
print(merge_sort(arr))
Implementation) { // TODO Auto-generated method stub int arr[]= {2,9,8,3,6,4,10,7}; int[] so=mergesort(arr,0,arr.length-1); for(int i=0;i<arr.length;i++) System.out.print(so[i]+" "); } }
Example) { int arr[] = {2, 9, 8, 3, 6, 4, 10, 7}; int[] so = mergesort(arr, 0, arr.length - 1); for (int i = 0; i < arr.length; i++) System.out.print(so[i] + " "); } } | https://www.freecodecamp.org/news/sorting-algorithms-explained-with-examples-in-python-java-and-c/ | CC-MAIN-2020-45 | refinedweb | 6,676 | 55.17 |
I am converting some JavaScript code into C# and having a bit of trouble with the Google Closure goog.math.Long values and how they function. This is actually a version of the Delphi random function, according to my co-dev.
In javascript I have this.
function _nextRandom(maxValue, seedValue) { if (seedValue !== null) _seed = new goog.math.Long(seedValue); _seed = _seed.multiply(134775813).add(_one); _seed = new goog.math.Long(_seed.getLowBitsUnsigned()); return _seed.multiply(new goog.math.Long(maxValue)).getHighBits() >>> 0; }
In C# I have this - so far.
private int _nextRandom(int maxValue, int seedValue) { if (seedValue != 0) _seed = seedValue; _seed = _seed * 134775813 + 1; _seed = (long)((int)_seed); // get lower 32 bits return (int)(((ulong)_seed * (ulong)maxValue) >> 32); // get upper 32 bits }
maxValue is always 254, and the first time _nextRandom is run seedValue is 1024; every time afterwards it's 0 (in C#) or null (in JS).
Here, the output from the C# is correct only for positive values; negative ones are incorrect.

Casting the values as byte makes the values nearly match, but not exactly.
Does anyone have any ideas why this is happening?
A couple of problems:

- You declared _seed to be a 64 bit long. It should be a 32 bit int.
- You need to cast _seed and maxValue to uint before performing the 64 bit multiplication.
The following C# code replicates the Delphi PRNG:
private static int _seed = 0; private static int _nextRandom(int maxValue, int seedValue) { if (seedValue != 0) _seed = seedValue; _seed = _seed * 0x08088405 + 1; return (int)(((ulong)(uint)_seed * (uint)maxValue) >> 32); }
Obviously this code is not threadsafe but I am sure you already know that. A cleaner implementation would be to wrap this in a class so that you could create distinct instances of the PRNG with their own seed. | http://www.devsplanet.com/question/35269333 | CC-MAIN-2017-22 | refinedweb | 294 | 57.37 |
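The arithmetic is easy to sanity-check in any language with 64-bit integers. A Python sketch of the same recurrence (using the 0x08088405 multiplier from the answer and the 1024 seed / 254 max value described in the question; the closure stands in for the _seed field):

```python
def make_next_random():
    seed = 0
    def next_random(max_value, seed_value=0):
        nonlocal seed
        if seed_value != 0:
            seed = seed_value
        seed = (seed * 0x08088405 + 1) & 0xFFFFFFFF  # LCG step, truncated to 32 bits
        return (seed * max_value) >> 32              # high 32 bits of the 64-bit product
    return next_random

rng = make_next_random()
print(rng(254, 1024))  # 33 -- same as the goog.math.Long version for this seed
```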
08 July 2011 17:27 [Source: ICIS news]
LONDON (ICIS)--Bayer plans to extend its jobs guarantee for workers in Germany.
The original 2009 pact with Bayer’s works council is due to expire in 2012. Under the first agreement, Bayer hoped to avoid job cuts for operational reasons.
Bayer and the works council expect to come to an agreement on the extension before the end of the year, company and council said in a joint statement.
Bayer is committed to investment in Germany.
However, Bayer said it needs to be able to react quickly to changes and challenges in the market, which implies “flexibility and mobility” in working conditions.
As such, the company cannot exclude that some tasks may need to be shifted or outsourced to other firms in Germany.
Bayer also affirmed previously announced plans to outsource some jobs within its services and administrative functions in Germany.
The company also intends to proceed with plans to cut some 700 positions in healthcare in Germany, as well as 300 positions in its Bayer CropScience business, it said. | http://www.icis.com/Articles/2011/07/08/9476268/bayer-plans-to-extend-jobs-pact-for-workers-in-germany-to-2015.html | CC-MAIN-2015-22 | refinedweb | 177 | 54.86 |
JavaScript Object-Oriented Programming Part 1
An Object
An object is a collection of properties. These properties can either be primitive data types, other objects, or functions (which in this case are called methods, but more on this later). A constructor function (or simply, constructor) is a function used to create an object; this too we'll discuss in detail later. JavaScript comes with many built-in objects, such as the Array, Image, and Date objects. Many of you are familiar with Image objects from creating those ever-so-cute rollover effects. Well, when you use the code
var Image1 = new Image();
Image1.src = "myDog.gif";
you have in fact created a new Image object, and assigned a property of your new Image object: the src property. Image1 is a new Image object; in other words, it is an instance of the Image object. Using JavaScript's dot notation ( . ), the code above then accesses and sets the src property of your new Image object. Now, let's learn how to create our own objects.
function myFunc(){
}
var myObject = new myFunc();
alert(typeof myObject); // displays "object"
We've just created our own object. In fact we've created a myFunc object. myFunc() is a constructor function; it lays out the blueprint which objects created from it will follow (although, in this case, it doesn't lay out much of a blueprint). So, how does JavaScript know to create an instance of the myFunc object, rather than to return its results? Let's compare the example above with the following, more conventional use of a function:
function myFunc(){
return 5;
}
var myObject = myFunc();
alert(typeof myObject); // displays "number"
In this case, we've assigned 5 to myObject. So, what's the difference between these two scripts? Answer: the new keyword. It tells JavaScript to create an object following the blueprint set forth in the myFunc() constructor function. In fact, when we create an Image object, we do the same thing, except that instead of using our own constructor function, we use one of JavaScript's built-in constructor functions, the Image() constructor function.
So far, we've learned how to create a constructor function, and how to create an object from that constructor function. In our example, we've created a myFunc() constructor and created an instance of the myFunc object, which we assigned to the variable myObject. This is all fine and dandy, but what's the point? Well, just like our Image object, myObject can be assigned properties:
function myFunc(){
}
var myObject = new myFunc();
myObject.StringValue = "This is a String";
alert(myObject.StringValue); // displays "This is a String"
And voila, we've now created a property for our object. However, if we create another instance of the myFunc object (using the myFunc() constructor function), we also have to assign the StringValue property to this new instance. For example:
function myFunc(){
}
var myObject = new myFunc();
myObject.StringValue = "This is a String";
var myObject2 = new myFunc();
alert(myObject2.StringValue); // displays "undefined"
So, how can we create properties that exist for all myFunc objects? Within the myFunc() constructor function, we can do just that. The this keyword inside a constructor function refers to the object that's being created. Example:
function myFunc(){
this.StringValue = "This is a String";
}
var myObject = new myFunc();
var myObject2 = new myFunc();
alert(myObject2.StringValue); // displays "This is a String"
Now, all myFunc objects will have a StringValue property, assigned with the initial value of "This is a String", but every object can have its own distinctive value for StringValue. In other words, we can change the StringValue property for one myFunc object, without affecting the others:
function myFunc(){
this.StringValue = "This is a String";
}
var myObject = new myFunc();
myObject.StringValue = "This is myObject's string";
var myObject2 = new myFunc();
alert(myObject.StringValue); // displays "This is myObject's string"
alert(myObject2.StringValue); // displays "This is a String"
We can also achieve similar results if we pass arguments to our constructor function:
function myFunc(StringValue){
this.StringValue = StringValue;
}
var myObject = new myFunc("This is myObject's string");
var myObject2 = new myFunc("This is a String");
alert(myObject.StringValue); // displays "This is myObject's string"
alert(myObject2.StringValue); // displays "This is a String"
In the myFunc() constructor, this.StringValue refers to the property being assigned to the newly created object, while StringValue refers to the function's local variable that was passed as an argument. So, now that we've assigned properties to objects, what about methods?
Object Methods
In addition to properties, objects can have methods. An object's method is a function it can perform. Let's take a look at this example. For this one, let's create a Circle object. First, we're going to have to define our functions, and then make them methods of our Circle object. Let's define our Circle() constructor and a Circle object or two:
function Circle(radius){
this.radius = radius;
}
var bigCircle = new Circle(100);
var smallCircle = new Circle(2);
Now, let’s define some functions that we might use:
function getArea(){
return (this.radius*this.radius*3.14);
}
function getCircumference(){
var diameter = this.radius*2;
var circumference = diameter*3.14;
return circumference;
}
Note that if you were going for accuracy, you could use Math.PI instead of 3.14, but we’ll use this simplified representation of pi to keep the numbers in our examples nice and round.
These functions are easy, except for one thing: what does this.radius refer to? this always refers to the current object, in this case, the Circle object. So this.radius refers to the radius property of the Circle object. So, how do we attach these functions to our object? It's not as hard as you might think. Let's change our Circle() constructor:
function Circle(radius){
this.radius = radius;
this.getArea = getArea;
this.getCircumference = getCircumference;
}
The above assigns the functions getArea and getCircumference to our Circle object, making them methods: functions belonging to our Circle object. We can use methods just like any normal function, but we must first access the object in which the method is encapsulated:
alert(bigCircle.getArea()); // displays 31400
alert(bigCircle.getCircumference()); // displays 618
alert(smallCircle.getArea()); // displays 12.56
alert(smallCircle.getCircumference()); // displays 12.56 | https://www.sitepoint.com/oriented-programming-1-2/ | CC-MAIN-2019-13 | refinedweb | 1,043 | 58.08 |
Writing a Kubernetes Operator in Golang
I decided to write this post after struggling to find documentation on how to write a Kubernetes operator that went through the code with the reader using a real life example.
The example we're going to use here is: in our Kubernetes cluster each Namespace represents a team's sandbox environment, and we would like to lock down access so that teams can only play in their own sandbox.
This can be achieved by assigning a group to a user that has a RoleBinding for their particular Namespace and a ClusterRole with edit access. Here is the YAML representation of what I mentioned above:
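The YAML embed did not survive in this copy; a sketch of such a RoleBinding, with assumed team and group names, bound to Kubernetes' built-in edit ClusterRole:

```yaml
# Illustrative sketch, not the author's original gist; names are assumptions.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-edit        # assumed name
  namespace: team-a        # the team's sandbox namespace
subjects:
- kind: Group
  name: team-a             # group assigned to the team's users
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit               # built-in ClusterRole with edit access
  apiGroup: rbac.authorization.k8s.io
```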
We could manually create this RoleBinding each time but after 100 namespaces this might get a bit tedious. This is where Kubernetes operators help as they allow us to automate the creation of resources in Kubernetes based on changes to resources. In our case we want to create a RoleBinding when a Namespace is created.
First we start by creating a
main function which will perform the required setup for running the operator and then invoke the operator action:
Here we’re doing the following:
- Setting up a listener for specific OS signals to trigger a graceful shutdown of the operator.
- Using a WaitGroup to make sure that before terminating the application we stop all executing go functions gracefully.
- Gaining access to the cluster by creating a clientset.
- Running the NamespaceController, which is where all our logic will sit.
Then we need the core of the logic, which in our case is called the NamespaceController:
Here we're setting up a SharedIndexInformer, which will listen to changes to Namespaces in an efficient way using a data cache. Then we attach an EventHandler to the informer so that when a Namespace is added the createRoleBinding function is invoked.
The next step is to define the createRoleBinding function:
Here we're receiving the Namespace as obj and casting it into a Namespace object. We then define our RoleBinding based loosely on the YAML mentioned at the beginning, using the provided Namespace object, and then create the RoleBinding. Finally we log whether the creation is successful or not.
The last function we need to define is the Run function:
Here we're telling the WaitGroup that we're about to invoke a go function, and then we run the namespaceInformer that we defined earlier. When the stop signal is closed it will end the go function and tell the WaitGroup it's no longer running, and then this function will return.
Information about building and running this operator in a Kubernetes cluster can be found in the github repository.
Now we have an operator that creates a RoleBinding when a Namespace is created within the Kubernetes cluster. | https://medium.com/@mtreacher/writing-a-kubernetes-operator-a9b86f19bfb9 | CC-MAIN-2018-51 | refinedweb | 462 | 55.68 |
RL-ARM User's Guide (MDK v4)
#include <net_config.h>
U8* http_get_var (
U8* env, /* Pointer to a string of environment variables */
void* ansi, /* Buffer to store the environment variable value */
U16 maxlen ); /* Maximum length of environment variable value */
The http_get_var function processes the string env,
which contains the environment variables, and identifies where the
first variable ends. The function obtains and stores the first
variable and its value into the buffer pointed by ansi, in
ansi format.
The maxlen specifies the maximum length that can be stored
in the ansi buffer. If the decoded environment variable value
is longer than this limit, the function truncates it to maxlen
to fit it into the buffer.
The http_get_var function is a system function that is in
the RL-TCPnet library. The prototype is defined in net_config.h.
Note
The http_get_var function returns a pointer to the
remaining environment variables to process. It returns NULL if there
are no more environment variables to process.
cgi_process_data, cgi_process_var
void cgi_process_var (U8 *qs) {
U8 var[40];
do {
/* Loop through all the parameters. */
qs = http_get_var (qs, var, 40);
/* Check the returned string, 'qs' now points to the next. */
if (var[0] != 0) {
/* Returned string is non 0-length. */
if (str_scomp (var, "ip=") == __TRUE) {
/* My IP address parameter. */
sscanf (&var[3], "%bd.%bd.%bd.%bd",&LocM.IpAdr[0],&LocM.IpAdr[1],
&LocM.IpAdr[2],&LocM.IpAdr[3]);
}
else if (str_scomp (var, "msk=") == __TRUE) {
/* Net mask parameter. */
sscanf (&var[4], "%bd.%bd.%bd.%bd",&LocM.NetMask[0],&LocM.NetMask[1],
&LocM.NetMask[2],&LocM.NetMask[3]);
}
else if (str_scomp (var, "gw=") == __TRUE) {
..
}
}
} while (qs != NULL);
}
React Suite is a set of React component libraries for enterprise system products, built by the HYPERS front-end and UX teams, mainly serving the company's big data products.
After three major revisions, a large number of components and rich functionality have been accumulated.
The React Suite design prototype and specification are available to view.
React Suite supports the latest, stable releases of all major browsers and platforms. IE<=9 is no longer supported since React Suite 3.0. React Suite is designed and implemented for use on modern desktop browsers rather than mobile browsers.
React Suite supports server side rendering. Support Next.js to build applications.
React Suite is available as an npm package.
npm i rsuite --save
or if you prefer Yarn
yarn add rsuite
Here's a simple example
import { Button } from 'rsuite';
import 'rsuite/styles/less/index.less'; // or 'rsuite/dist/styles/rsuite.min.css'

ReactDOM.render(<Button>Button</Button>, mountNode);
Live preview on CodeSandbox
You can go through the full documentation or start with the following sections
Detailed changes for each release are documented in the release notes.
You can learn about our development plan through Trello, and we hope that you will get involved.
$ git clone git@github.com:<YOUR NAME>/rsuite.git
$ cd rsuite
$ npm install
$ npm run dev
$ git clone git@github.com:<YOUR NAME>/rsuite.github.io.git
$ cd rsuite.github.io
$ npm install
$ npm run dev
Make sure you've read the guidelines before you start contributing.
Editor preferences are available in the .prettierrc for easy use in common code editors. Read more and download plugins at.
If you like React Suite, you can show your support by either
This project exists thanks to all the people who contribute.
React Suite is MIT licensed. Copyright (c) 2016-present, HYPERS.
Thread: TypeCasting
I have a program that is supposed to find the percentage of occupied rooms in a hotel. I have most of my program running, but when I try to typecast my percentage, the percentage isn't coming out right and I am not sure why.
Can someone please take a look at my code to see why this isn't working?
Code:
#include <iostream>
#include <iomanip>
#include <string>
using namespace std;

int main()
{
    //Varibles
    int occupied = 0, floors = 0, rooms = 0, heartBreak = 0;
    int percentage = 0, total = 0, occupiedTotal = 0, unoccupied = 0, unoccupiedTotal = 0;
    int mostempty = 0, HBfloor;

    //Input
    cout << "How many floors does the hotel have? ";
    cin >> floors;
    while (floors < 1) // Validate input
    {
        cout << "Please enter floors greater than or equal to ten.\n";
        cout << "How many floors does the hotel have? ";
        cin >> floors;
    }

    //Run a for loop to get the amount of rooms, and it's occupancy rate!
    for (int cnt = 1; cnt <= floors; cnt++)
    {
        if (cnt == 2) //Skip Floor two
        {
            continue;
        }
        cout << "How many rooms are on floor " << cnt << "?";
        cin >> rooms;
        while (rooms < 10) // Validate input
        {
            cout << "Please enter rooms greater than or equal to ten.\n";
            cout << "How many rooms are on floor " << cnt << "?";
            cin >> rooms;
        }
        total += rooms; //Accumulator
        cout << "How many of those are occupied?";
        cin >> occupied;
        while (occupied > rooms) //validate input
        {
            cout << "Error, Number of occupied rooms cannot exceed rooms listed \n";
            cout << "How many of those are occupied?";
            cin >> occupied;
        }
        occupiedTotal += occupied; //Accumulator
        unoccupied = rooms - occupied;
        if (unoccupied > mostempty) //find the floor with the least amount of rooms occupied
        {
            mostempty = unoccupied;
            HBfloor = cnt;
        }
    }

    //Processing Section
    unoccupied = total - occupiedTotal;
    percentage = (double)total / occupiedTotal;

    //output
    cout << "The hotel has a total " << total << " rooms!" << endl;
    cout << occupiedTotal << " are occupied." << endl;
    cout << unoccupied << " are empty." << endl;
    cout << fixed << showpoint << setprecision(1);
    cout << "The occupancy rate is " << percentage << "%" << endl;
    cout << "The heartbreak floor is " << HBfloor << " with " << mostempty << " empty rooms." << endl;
    system("pause");
    return 0;
}
This doesn't make sense. You would see why this isn't working if you run it in a debugger and look at what is happening with each value.
percentage = (double)total/ occupiedTotal;
Why are you using int type here for all 3 variables? And why are you trying to typecast one of them?
- Spookster
Sorry, I forgot to change that. I was playing with it earlier and forgot to change it back, but it didn't work the other way around either; it should've been this (and was to begin with):
Code:
percentage = occupiedTotal/ (double)total;
Last edited by pbracing33b; 10-27-2012 at 02:36 AM. | http://www.codingforums.com/computer-programming/279592-typecasting.html | CC-MAIN-2016-07 | refinedweb | 487 | 58.42 |
Introduction
Basics of FxCop
Using the existing rules
Adding a custom rule: all Connection objects should be closed
Step 1 to Step 7

I have been writing and recording a lot of architecture-related videos on design patterns, UML, FPA estimation, enterprise application blocks, C# projects, etc.
As the name suggests, FxCop is a 'cop' for your code: a code analysis tool which runs rules against .NET assemblies and gives a complete report about the quality of the project. As it runs on the assembly, it can be used with any .NET language, like C# or VB.NET. You can download the latest copy of FxCop from
The figure below gives a visual overview of how FxCop runs the rules on the assembly and then displays the broken rules in a different pane.
Existing rules can be selected in the second tab, 'Rules'. To make custom rules, the first step is to create a class library. FxCop expects the rule definitions in an XML file embedded in the DLL, which it reads at a later stage. The description tag defines what error message should be thrown in case the rules are broken.
<>
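The embedded XML did not make it into this copy of the article; the following is a sketch of the usual FxCop Rules.xml shape (the rule name, CheckId, and message texts here are assumptions):

```xml
<Rules FriendlyName="My Custom Rules">
  <Rule TypeName="clsCheck" Category="MyRules.Connection" CheckId="CR0001">
    <Name>Connection objects should be closed</Name>
    <Description>All opened connection objects must be closed.</Description>
    <Resolution>Close the connection object opened in {0}.</Resolution>
    <MessageLevel Certainty="90">Warning</MessageLevel>
    <FixCategories>NonBreaking</FixCategories>
    <Owner />
    <Url />
    <Email />
  </Rule>
</Rules>
```

Note that the TypeName and Category must match the strings passed to the base constructor of the rule class.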
Step 2:- The first thing is to reference and import the FXCOP SDK. So add reference to the FxCopSdk.DLL. You can find the FxCopSdk.dll where FxCop is installed.
Once you have added the reference, you need to import the namespace in the class file.
using System;
using System.Collections.Generic;
using System.Text;
using Microsoft.FxCop.Sdk;
Step 3 :- To define a custom rule you need to inherit from FxCop's 'BaseIntrospectionRule' class. Below is the class definition.
public class clsCheck : BaseIntrospectionRule
{
public clsCheck(): base("clsCheck", "MyRules.Connection", typeof(clsCheck).Assembly)
{
}
public override ProblemCollection Check(Member member)
{
        // ...
    }
}
Step 4 :- The first thing we do is convert the member into a Method object. We have defined two Boolean variables: one which specifies that a connection object has been opened, and a second which specifies that the connection object has been closed.
Method method = member as Method;
bool boolFoundConnectionOpened = false;
bool boolFoundConnectionClosed = false;
Instruction objInstr=null;
Step 5 :- Every method has an instruction collection, which is nothing but the actual code. So we loop through all the instructions and check whether we find the string 'SqlConnection'; if yes, we set the opened flag to true. If we find 'DbConnection.Close', we set the closed flag to true.
Step 6 :- Now the last thing: we check whether a connection that was opened was also closed; if not, we add a Problem to the rule's Problems collection.
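The loop and the problem-reporting code were lost in this copy. The sketch below shows what steps 5 and 6 might look like against the FxCop introspection SDK; the member names (Instructions, Problems, GetResolution) are from memory and should be verified against the SDK you have installed:

```csharp
for (int i = 0; i < method.Instructions.Count; i++)
{
    objInstr = method.Instructions[i];
    if (objInstr.Value == null)
        continue;

    string strInstr = objInstr.Value.ToString();
    if (strInstr.Contains("SqlConnection"))
        boolFoundConnectionOpened = true;   // a connection object was created
    if (strInstr.Contains("Close"))
        boolFoundConnectionClosed = true;   // ...and later closed
}

// Step 6: opened but never closed -> report a problem.
if (boolFoundConnectionOpened && !boolFoundConnectionClosed)
    Problems.Add(new Problem(GetResolution(method.Name.Name)));
return Problems;
```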
Step 7:- Once you have compiled the DLL, add it in FxCop as a rule and click the analyze button. Below is the project which was analyzed; the analysis flags that the 'objConnection' object which was opened was never closed.
You can download the source code, which is attached to the article.
In the last tutorial we discussed inheritance. Here we will discuss polymorphism, which is one of the features of object-oriented programming (OOP).
What is polymorphism in programming?
Polymorphism is the capability of a method to do different things based on the object that it is acting upon. In other words, polymorphism allows you to define one interface and have multiple implementations. I know it sounds confusing. Don't worry, we will discuss this in detail.
- It is a feature that allows one interface to be used for a general class of actions.
- An operation may exhibit different behavior in different instances.
- The behavior depends on the types of data used in the operation.
- It plays an important role in allowing objects having different internal structures to share the same external interface.
- Polymorphism is extensively used in implementing inheritance.
In Java, it is possible to define two or more methods of the same name in a class, provided that their argument lists or parameters are different. This concept is known as Method Overloading.
I have covered method overloading and overriding below. To know more about polymorphism types, refer to my post Types of polymorphism in java: Static, Dynamic, Runtime and Compile time Polymorphism.
1) Method Overloading
- To call an overloaded method in Java, you must use the type and/or number of arguments to determine which version of the overloaded method to actually call.
- Overloaded methods may have different return types; the return type alone is insufficient to distinguish two versions of a method.
- When Java encounters a call to an overloaded method, it simply executes the version of the method whose parameters match the arguments used in the call.
- It allows the user to achieve compile time polymorphism.
- An overloaded method can throw different exceptions.
- It can have different access modifiers. In the example, the first method has one int parameter, the second has two int parameters, and the third takes a double argument. The methods are invoked with matching types and numbers of arguments.
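The example code itself did not survive in this copy; the following is a reconstruction consistent with the description and the output shown below (class names are assumed):

```java
class Overload {
    void demo(int a) {
        System.out.println("a: " + a);
    }
    void demo(int a, int b) {
        System.out.println("a and b: " + a + "," + b);
    }
    double demo(double a) {
        System.out.println("double a: " + a);
        return a * a;
    }
}

class Main {
    public static void main(String[] args) {
        Overload obj = new Overload();
        obj.demo(10);                   // resolves to demo(int)
        obj.demo(10, 20);               // resolves to demo(int, int)
        double result = obj.demo(5.5);  // resolves to demo(double)
        System.out.println("O/P : " + result);
    }
}
```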
Output:
a: 10
a and b: 10,20
double a: 5.5
O/P : 30.25
Rules for Method Overloading
- Overloading can take place in the same class or in its sub-class.
- Constructor in Java can be overloaded
- Overloaded methods must have a different argument list.
- Overloaded methods should always be part of the same class (overloading can also take place in a subclass), with the same name but different parameters.
- The parameters may differ in their type or number, or in both.
- They may have the same or different return types.
- It is also known as compile time polymorphism.
2) Method Overriding
A child class has the same method as its base class. In such cases the child class overrides the parent class method without even touching the source code of the base class. This feature is known as method overriding.
Example:
public class BaseClass
{
    public void methodToOverride() //Base class method
    {
        System.out.println("I'm the method of BaseClass");
    }
}

public class DerivedClass extends BaseClass
{
    public void methodToOverride() //Derived Class method
    {
        System.out.println("I'm the method of DerivedClass");
    }
}

public class TestMethod
{
    public static void main(String args[])
    {
        // BaseClass reference and object
        BaseClass obj1 = new BaseClass();
        // BaseClass reference but DerivedClass object
        BaseClass obj2 = new DerivedClass();
        // Calls the method from BaseClass class
        obj1.methodToOverride();
        // Calls the method from DerivedClass class
        obj2.methodToOverride();
    }
}
Output:
I'm the method of BaseClass
I'm the method of DerivedClass
Rules for Method Overriding:
- applies only to inherited methods
- object type (NOT reference variable type) determines which overridden method will be used at runtime
- Overriding methods can have different (covariant) return types
- Overriding methods must not have a more restrictive access modifier
- Abstract methods must be overridden
- Static and final methods cannot be overridden
- Constructors cannot be overridden
- It is also known as Runtime polymorphism.
super keyword in Overriding:
When invoking a superclass version of an overridden method the super keyword is used.
Example:
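The example code did not survive in this copy; the following is a reconstruction consistent with the output that follows (class names are assumed):

```java
class Vehicle {
    public void displayInfo() {
        System.out.println("Vehicles are used for moving from one place to another");
    }
}

class Car extends Vehicle {
    @Override
    public void displayInfo() {
        super.displayInfo();  // invoke the superclass version first
        System.out.println("Car is a good medium of transport.");
    }
}

class Main {
    public static void main(String[] args) {
        new Car().displayInfo();
    }
}
```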
Output:
Vehicles are used for moving from one place to another
Car is a good medium of transport.
import "gopkg.in/src-d/go-git.v4/plumbing/format/config"
Package config implements encoding and decoding of git config files.
Configuration File
------------------

The Git configuration file contains a number of variables that affect the Git commands' behavior.

Includes
~~~~~~~~

You can include one config file from another by setting the special `include.path` variable to the name of the file to be included. The variable takes a pathname as its value, and is subject to tilde expansion. See below for examples.

    [include]
        path = /path/to/foo.inc ; include by absolute path
        path = foo ; expand "foo" relative to the current file
        path = ~/foo ; expand "foo" in your `$HOME` directory
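A minimal usage sketch built only from the API documented on this page (shown untested here; it assumes the import path at the top of the page):

```go
package main

import (
	"fmt"
	"strings"

	format "gopkg.in/src-d/go-git.v4/plumbing/format/config"
)

func main() {
	raw := "[core]\n\tbare = false\n"

	cfg := format.New()
	if err := format.NewDecoder(strings.NewReader(raw)).Decode(cfg); err != nil {
		panic(err)
	}

	fmt.Println(cfg.Section("core").Option("bare")) // "false"

	// SetOption takes (section, subsection, key, value); pass NoSubsection
	// when no subsection is wanted.
	cfg.SetOption("core", format.NoSubsection, "filemode", "true")
}
```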
common.go decoder.go doc.go encoder.go option.go section.go
const ( // NoSubsection token is passed to Config.Section and Config.SetSection to // represent the absence of a section. NoSubsection = "" )
Config contains all the sections, comments and includes from a config file.
New creates a new config instance.
AddOption adds an option to a given section and subsection. Use the NoSubsection constant for the subsection argument if no subsection is wanted.
RemoveSection removes a section from a config file.
RemoveSubsection removes a subsection from a config file.
Section returns a existing section with the given name or creates a new one.
SetOption sets an option to a given section and subsection. Use the NoSubsection constant for the subsection argument if no subsection is wanted.
A Decoder reads and decodes config files from an input stream.
NewDecoder returns a new decoder that reads from r.
Decode reads the whole config from its input and stores it in the value pointed to by config.
An Encoder writes config files to an output stream.
NewEncoder returns a new encoder that writes to w.
Encode writes the config in git config format to the stream of the encoder.
Include is a reference to an included config file.
Includes is a list of Includes in a config file.
type Option struct { // Key preserving original caseness. // Use IsKey instead to compare key regardless of caseness. Key string // Original value as string, could be not normalized. Value string }
Option defines a key/value entity in a config file.
IsKey returns true if the given key matches this option's key in a case-insensitive comparison.
Get gets the value for the given key if set, otherwise it returns the empty string.
This matches git behaviour since git v1.8.1-rc1, if there are multiple definitions of a key, the last one wins.
See:
In order to get all possible values for the same key, use GetAll.
GetAll returns all possible values for the same key.
type Section struct { Name string Options Options Subsections Subsections }
Section is the representation of a section inside git configuration files. Each Section contains Options that are used by both the Git plumbing and the porcelains. Sections can be further divided into subsections. To begin a subsection put its name in double quotes, separated by space from the section name, in the section header, like in the example below:
[section "subsection"]
All the other lines (and the remainder of the line after the section header) are recognized as option variables, in the form "name = value" (or just name, which is a short-hand to say that the variable is the boolean "true"). The variable names are case-insensitive, allow only alphanumeric characters and -, and must start with an alphabetic character:
[section "subsection1"] option1 = value1 option2 [section "subsection2"] option3 = value2
AddOption adds a new Option to the Section. The updated Section is returned.
HasSubsection checks if the Section has a Subsection with the specified name.
IsName checks if the name provided is equals to the Section name, case insensitive.
Option returns the value for the specified key. An empty string is returned if the key does not exist.
Remove an option with the specified key. The updated Section is returned.
SetOption adds a new Option to the Section. If the option already exists, is replaced. The updated Section is returned.
func (s *Section) Subsection(name string) *Subsection
Subsection returns a Subsection from the specified Section. If the Subsection does not exists, new one is created and added to Section.
func (s *Subsection) AddOption(key string, value string) *Subsection
AddOption adds a new Option to the Subsection. The updated Subsection is returned.
func (s *Subsection) IsName(name string) bool
IsName checks if the name of the subsection is exactly the specified name.
func (s *Subsection) Option(key string) string
Option returns the option value for the specified key. If the option does not exist, an empty string is returned.
func (s *Subsection) RemoveOption(key string) *Subsection
RemoveOption removes the option with the specified key. The updated Subsection is returned.
func (s *Subsection) SetOption(key string, value ...string) *Subsection
SetOption adds a new Option to the Subsection. If the option already exists, is replaced. The updated Subsection is returned.
type Subsections []*Subsection
func (s Subsections) GoString() string
Package config imports 4 packages and is imported by 16 packages. Updated 2019-08-03.