System Information
Operating system: Windows 10 Home
Graphics card: GTX 1070 8GB

Blender Version
Broken: 2.80, 4dd0a90f4213, 2019-01-01
Worked: 2.79b release

Short description of error
Blender crashes while rendering an animation when object data is modified inside the frame_change_pre handler using the Python API. The crashes only seem to occur while rendering, not during playback. The frequency of the crashes seems to be related to how often the object data is modified.

This is part of an addon that reads and renders geometry data from a sequence of files. The addon does this by creating a Blender object and, on each frame, swapping out the object's mesh data with new geometry and then deleting the old geometry. I have created a simplified script that reproduces this crash within the first few frames of rendering:

```python
import bpy

def frame_change_pre(scene):
    # The addon would load geometry from a file, but for a
    # simplified test an icosphere works.
    vertices = [
        (0.0000, 0.0000, -1.0000), (0.7236, -0.5257, -0.4472),
        (-0.2764, -0.8506, -0.4472), (-0.8944, 0.0000, -0.4472),
        (-0.2764, 0.8506, -0.4472), (0.7236, 0.5257, -0.4472),
        (0.2764, -0.8506, 0.4472), (-0.7236, -0.5257, 0.4472),
        (-0.7236, 0.5257, 0.4472), (0.2764, 0.8506, 0.4472),
        (0.8944, 0.0000, 0.4472), (0.0000, 0.0000, 1.0000)
    ]
    triangles = [
        (0, 1, 2), (1, 0, 5), (0, 2, 3), (0, 3, 4), (0, 4, 5),
        (1, 5, 10), (2, 1, 6), (3, 2, 7), (4, 3, 8), (5, 4, 9),
        (1, 10, 6), (2, 6, 7), (3, 7, 8), (4, 8, 9), (5, 9, 10),
        (6, 10, 11), (7, 6, 11), (8, 7, 11), (9, 8, 11), (10, 9, 11)
    ]

    # Create a new mesh with geometry
    new_mesh_data = bpy.data.meshes.new("mesh_data" + str(scene.frame_current))
    new_mesh_data.from_pydata(vertices, [], triangles)

    # Swap in the new mesh data and delete the old mesh data
    mesh_cache = bpy.data.objects.get("mesh_cache")
    old_mesh_data = mesh_cache.data
    mesh_cache.data = new_mesh_data
    bpy.data.meshes.remove(old_mesh_data)

    # This is what causes the crash: the more frequently the mesh cache
    # data is accessed, the more frequently the crash occurs. For a
    # simplified test, we repeatedly set smooth shading on the mesh data
    # polygons. This also happens when setting the object
    # location/scale/matrix_world, mesh data materials, and other data.
    for i in range(1000):
        for p in mesh_cache.data.polygons:
            p.use_smooth = True

# Create a cache object to store the current frame mesh
mesh_cache_data = bpy.data.meshes.new("mesh_cache_data")
mesh_cache_data.from_pydata([], [], [])
mesh_cache_object = bpy.data.objects.new("mesh_cache", mesh_cache_data)
bpy.context.scene.collection.objects.link(mesh_cache_object)

bpy.app.handlers.frame_change_pre.append(frame_change_pre)
```

This is the error Blender reports (EXCEPTION_ACCESS_VIOLATION):

```
Error   : EXCEPTION_ACCESS_VIOLATION
Address : 0x00007FF6F95E6650
Module  : C:\Users\ryanl\Downloads\blender-2.80.0-git.4dd0a90f4213-windows64\blender.exe
```

Exact steps for others to reproduce the error
Attached is a .blend file including the script that reproduces this issue.
- Open the .blend file
- Press 'Run Script'
- Begin rendering the animation (Blender > Render > Render Animation)
https://developer.blender.org/T60094
Hi There!

Is your Android app lacking pizazz or direction? Adding Google Maps is an excellent way to take your app from dull to distinct. But how does one make this transition? In this tutorial, I will walk you through the process of configuring and implementing Google Maps in your Android mobile and wearable applications. Feeling up to the challenge? Then let's get started!

What you will need:

Required:
- 1 - Smartphone with Android operating system
- 1 - A copy of Android Studio
- 1 - A Google (Gmail) Account

Optional:
- 1 - Android wearable device

Once you have all these things, head over to the first step...

Step 1: Setting Up

Before diving into Google Maps development, there are a couple of things we need to set up first. Some pertaining to software, some pertaining to hardware...

On the software side of things...

Android Studio: To begin development using Google Maps you must first have Android Studio installed. If you're asking yourself, "What is this Android Studio you speak of?" then head over to the Android developer website and get yourself a copy of this nifty tool for free.

Google Play Services SDK: You will also need the Google Play Services SDK package for Android Studio. While installing Android Studio, it should give you an option to install the package. If not, Google has an excellent guide on installing and configuring the Google Play Services SDK on their developer website. You can check out the guide here.

On the hardware side of things...

If you haven't already taken the Android programming plunge before, then chances are you'll need to set up a couple of things on your device before getting started.

Enabling Developer Options: On your device, head over to: Settings > About Tablet. Scroll down to the very bottom of the page and tap on the Build number section seven (7) times until you see a little popup window informing you of your new status as an Android Developer.
From there, back out to the main settings screen and you should see a new section at the bottom of the page called Developer options; click on it. Make sure that Developer options is turned on at the top of the screen. Once enabled, there's one more thing you need to do. On the Developer options page, find a section called USB debugging and make sure that it is set to enabled.

Once you have all this set up, let's create our first Google Maps project!

Step 2: Create a New Project

Are you excited to start working with Maps yet? If so, then let's continue!

New Project: Our Google Maps endeavors deserve their own space, so let's create a new app! There are a couple of ways you can go about it:
- Follow this guide! :D
- Create a new one yourself.
- Import the included .zip project into your Android Studio.

For this tutorial, we'll start fresh and create a new project in Android Studio. Start up the editor and when the welcome screen pops up, click: Quick Start > Start a new Android Studio project.

When the new project dialog comes up, choose a name for your app (you can leave all the other fields as-is). I'm calling mine MapsPlayground. Once you've chosen a name, click next.

Target Android Devices: On this next screen, select the SDK that you would like your app to compile with. If you have a newer phone, or plan to use Android Wear, then make sure to select API 21 or higher. In this tutorial, I'm going to show you how to develop for Google Maps on both mobile and wearable platforms, so I'm choosing API 21. If you're feeling adventurous, then try out Android Wear! (If you're feeling apprehensive, don't worry. I'll guide you through every step of the way. :D) When you've chosen your target SDK, click next.

Add an Activity to Mobile: Here, Android Studio gives you the option to add some initial elements to your app. Because this tutorial is building from the ground up, we'll go with a Blank Activity. Click next to name your activity.
Customize the Mobile Activity: On this next screen, choose a name for your activity. For this tutorial, the default MainActivity is good enough for what we need. Click next.

(optional) Add an Activity to Wear: If you chose to incorporate Android Wear into this app, then awesome! Here you can pick the type of activity that will be automatically generated by Android Studio. Just like the mobile side, I'll be choosing a Blank Wear Activity to demonstrate Google Maps from scratch.

(optional) Customize the Wearable Activity: If you added a wearable activity, then go ahead and name it here. To make the activities easily distinguishable, I'll be changing the wearable activity name to MainWearActivity. Click finish to create your app!

Build... After clicking finish, it might take a little while for Android Studio to initially compile the project. Feel free to take a break and grab a cup of tea (or coffee if that's your thing...). By the time you come back, your project will be built.

Now that we've got all that set up, we are really close to starting development! In the next step we'll configure our project to work with the Google Maps API.

Step 3: Configuring Google Maps API

If you've made it this far, awesome! The process we follow here to configure our app is exactly the same for every app you create that uses the Google Maps Android API. It seems tedious at first, but after a little repetition it becomes second nature. This section is EXTREMELY important, so be sure to pay attention to each step!

API Keys: In order to use the Google Maps API, you need to configure your project with a couple of different API keys. To do that, you need to register your app on the Google Developers Console (don't worry, it's free :D). First, we need to get your app's certificate information. There are two different types of certificates: a debug certificate and a release certificate.
For this project, we only need a debug certificate because we are not releasing our app on the Google Play Store (yet...). Obtaining the debug certificate depends on the type of computer you have. Follow the steps below for your operating system:

Mac and Linux: Open a terminal window and navigate to ~/.android/. List the SHA-1 fingerprint:

keytool -list -v -keystore ~/.android/debug.keystore -alias androiddebugkey -storepass android -keypass android

Windows: Open a cmd window and navigate to C:\Users\your_user_name\.android\. List the SHA-1 fingerprint:

keytool -list -v -keystore "%USERPROFILE%\.android\debug.keystore" -alias androiddebugkey -storepass android -keypass android

After running the commands above, you should see output describing the certificate. Out of all that seemingly meaningless garbage, make sure to hold on to the line that begins with SHA1. We'll need that string when we register on the Google Developers Console.

Create an API Project: Now that we have our app's debug certificate, we can register our app on the Google Developers Console. There are two ways of doing that:
- The Console will actually guide you through the process of activating the Google Maps Android API. If you want to go this route, follow this link. Once you have activated the API, skip to the next step!
- You can activate the Google Maps Android API yourself. To do that, follow the steps below:

Head over to the Google Developers Console and sign in with your Google account. Create a new project and give it a name of your choosing. Once the project is created, look over to the sidebar on the left side of the window. There, click: APIs > Google Maps Android API > Enable API. Once added, go back to the left sidebar and click: Credentials > Create new Key > Android key. There, copy/paste the SHA1 line from the commands we ran earlier into the window. Make sure to copy the API key that the Console gives you. We'll need it in our next step!
Next, we need to configure our manifest with these shiny new API keys. When finished, head over to the next step!

Step 4: Configuring AndroidManifest.xml

This is the final stretch before implementing our Google Maps app. Just one more step to go! Now that we have the API keys configured on the Developers Console, we need to shift gears and configure the AndroidManifest.xml in your project. So what are we waiting for? Start up Android Studio and open your AndroidManifest.xml. (If you have an Android Wear module too, then open that manifest as well and copy the code below into both files.)

Add your API key as a child of the <application> element:

<meta-data
    android:name="com.google.android.maps.v2.API_KEY"
    android:value="API_KEY"/>

Replace the API_KEY value with the API key that you got from the Developers Console.

Specify the Google Play services version number by adding the following as a child of the <application> element:

<meta-data
    android:name="com.google.android.gms.version"
    android:value="@integer/google_play_services_version"/>

Add the permissions that Google Maps needs (internet access, network state, external storage for the tile cache, and location) as children of the <manifest> element:

<uses-permission android:name="android.permission.INTERNET"/>
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE"/>
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"/>

One last thing: Google Maps requires OpenGL ES version 2 to render the maps on screen. We need to specify this as a requirement in the manifest. To do that, add the following element as a child of the <manifest> element:

<uses-feature
    android:glEsVersion="0x00020000"
    android:required="true"/>

Excellent! When you've pasted in all that code, rebuild your project by going to: Build > Rebuild Project.

Congratulations! You have just completed configuring all the necessary Google Maps API requirements. You are one step closer to becoming a Maps developer. :D Ready to dive into the code? Head over to the next step and let's get coding!

Step 5: Google Maps for Mobile

Great job for making it this far! In this section we'll cover how to add Google Maps to your mobile application. Getting excited? Let's get started!

Concept: Google Map layouts for Android are based extensively on fragments.
Instead of creating your own, the Google Maps API includes them with the majority of the backend done for you. As the developer, all you have to do is manage their lifecycle, layout, and interaction. I'll cover all of these topics below.

Layout: Adding a Google Map to your app is very simple. Because the maps are fragments, you can inflate them into virtually any ViewGroup that you want. To start, open up activity_main.xml and remove the default TextView that is placed in the layout. Instead, drag and drop a FrameLayout in its place. Give the FrameLayout an ID called "mapContainer".

Lifecycle: If you are familiar with fragments from any previous development experience, then you understand how great they are. Google Maps is no different. The API provides a fragment class called MapFragment that can be manipulated to your liking. Also, because they are fragments, part of their lifecycle and their layout manipulation is handled via the FragmentManager. Pretty cool, huh?

Here is where things get interesting. Open your MainActivity.java. At the top of your class declaration, implement OnMapReadyCallback and then override its method:

public class MainActivity extends Activity implements OnMapReadyCallback {
    ...
    @Override
    public void onMapReady(GoogleMap googleMap) {}
    ...

This callback will trigger when your map is ready to be used. We'll get to that in a minute. Now add two private member variables to your activity: one for the GoogleMap, another for the MapFragment. Your code should look like this:

private GoogleMap googleMap;
private MapFragment mapFragment;

Now in your onCreate() method, let's create the fragment. To do that, do the following:

...
this.mapFragment = MapFragment.newInstance(); // Initialize mapFragment object.
FragmentTransaction ft = getFragmentManager().beginTransaction();
ft.add(R.id.mapContainer, mapFragment); // Inflate mapFragment inside the FrameLayout.
ft.commit(); // Commit the transaction.
this.mapFragment.getMapAsync(this); // Sets MainActivity as the OnMapReady callback listener.
...

In order for us to be able to edit our map later on, we need a reference to it once it has been created. So, add this code to your onMapReady() callback method:

this.googleMap = googleMap; // Initialize the local GoogleMap object.

That's it! We have implemented all the callback listeners that we need and now have a reference to our GoogleMap object. Go ahead and try running your app at this point. You should see your map inflate into the FrameLayout inside your activity_main.xml.

Interaction: Now that we have a reference to our GoogleMap object, there are a lot of cool things that we can do. I'll show you a few examples.

Drawing a Marker: Say you want to mark a point on the map. The API allows you to create markers with optional content inside them and place them at a given location. Try this in your app for placing a marker:

...
MarkerOptions markerOptions = new MarkerOptions();
markerOptions.position(new LatLng(-12.34, 56.78)) // Place marker at given lat/lng.
    .visible(true) // Make the marker visible.
    .title("hello world") // Content of the marker.
    .icon(BitmapDescriptorFactory.fromResource(R.drawable.blue_dot)); // Custom icon.
Marker currentMarker = googleMap.addMarker(markerOptions); // Add marker and save.
...

Drawing a Path: You know how directions in the actual Google Maps app draw your route on the map? Well, with a reference to your map object, you can do something very similar using a neat object called a Polyline. Polylines allow you to connect the dots between multiple latitude/longitude coordinates on the map. Here's an example of a method that you can also put in your app:

...
// Updates route path on map with a given location.
private void updatePrimaryPath(LatLng current) {
    PolylineOptions options = new PolylineOptions(); // Create new polyline options.
    options.color(Color.RED).width(5).visible(true); // Polyline attributes.
    if (lastLatLng != null) {
        options.add(lastLatLng); // Add coordinate to polyline.
        options.add(current); // Add coordinate to polyline.
        googleMap.addPolyline(options); // Draw polyline on map.
    }
    lastLatLng = current; // Set the last used position as the current one.
}
...

Centering the Camera on a Location: One last thing that you can do with the map that will be readily useful in your programming endeavors is centering the camera on a specific latitude/longitude coordinate. This allows you to automatically shift the view of the map to different spots with different zoom levels. Here is some implementation to get you started:

...
// Center the map on a specified location with a desired zoom level.
private void centerMapOnLocation(LatLng loc) {
    googleMap.animateCamera(CameraUpdateFactory.newLatLngZoom(loc, ZOOM_LEVEL));
}
...

There are tons of things that you can do with the Google Maps Android API. When you feel comfortable with the basics, feel free to explore the API and add new functionality to your app. If you incorporated Android Wear into your MapsPlayground app, then head on over to the next step to learn about the Google Maps API for Android Wear. Otherwise, skip to the last step!

Step 6: Google Maps for Wear

Welcome, adventurous ones! This section is for those brave souls that ventured into the uncharted territory of Android Wear. Congratulations on making it through each step of the tutorial :D This section will touch on the Google Maps API implementation for Android Wear. Feeling up to it? Great, let's get started!

Concept: As I'm sure you've guessed, Android Wear is a relatively new mobile platform designed for wearables. What's really cool about Android Wear is that development for it is nearly identical to the traditional mobile platform. So, all the development we just did in the previous step for Google Maps on mobile is exactly the same for Android Wear!
In light of the dominant parallelism between the two platforms, I'll focus on a few key elements that really only pertain to Android Wear.

Layout Design: When incorporating Google Maps into your Wear app, ideally you want your view to be immersive. The problem with that is the user cannot use traditional swipe-to-exit gestures to quit out of the map. Luckily, there is a nifty little tool we can use called a DismissOverlayView that, when invoked via an OnLongClickListener, overlays the entire view with an exit button. Using a DismissOverlayView has two components: the layout placeholder and the Java implementation. Check out the implementation for both below:

<!-- Place this DismissOverlayView anywhere in your layout for it to be called. -->
<android.support.wearable.view.DismissOverlayView
    android:id="@+id/dismiss_overlay"
    android:layout_width="match_parent"
    android:layout_height="match_parent"/>

// Set up DismissOverlayView in onCreate().
mDismissOverlay = (DismissOverlayView) findViewById(R.id.dismiss_overlay);
mDismissOverlay.setIntroText(R.string.basic_wear_long_press_intro);
mDismissOverlay.showIntroIfNecessary();
...
// Show the DismissOverlayView when the map is long-pressed.
@Override
public void onMapLongClick(LatLng latLng) {
    mDismissOverlay.show();
}

Step 7: Wrap Up

Summing it all up, we've seen how to set up and configure Google APIs with new projects, and how to implement the Google Maps API on the Android mobile and Android Wear platforms. Now you have all the tools you need to begin development for Google Maps on Android Mobile and Android Wear! I hope that this tutorial has helped you learn about Android development, increasing your skill set and confidence in developing more complex apps in the future. Exploring new topics is fun and exciting, and I hope that by learning Google Maps you have broadened your creative horizons. Cheers!
http://www.instructables.com/id/Google-Maps-API-for-Android/
TextView

Synopsis

#include <ts/TextView.h>

This class acts as a view into memory allocated / owned elsewhere. It is in effect a pointer and should be treated as such (e.g. care must be taken to avoid dangling references by knowing where the memory really is). The purpose is to provide string manipulation that is fast, efficient, and non-modifying, particularly when temporary "copies" are needed.

Description

TextView is a subclass of std::string_view and has all of its methods. In addition it provides a number of ancillary common string manipulation methods. A TextView should be treated as an enhanced character pointer that has both a location and a size. This is what makes it possible to pass substrings around without having to make copies or allocate additional memory. This comes at the cost of keeping track of the actual owner of the string memory and making sure the TextView does not outlive the memory owner, just as with a normal pointer type.

Internally in Traffic Server, any place that passes a char* and a size is an excellent candidate for using a TextView, as it is more convenient and no more risky than the existing arguments.

In deciding between std::string_view and TextView remember that these easily and cheaply cross convert. In general, if the string is treated as a block of data, std::string_view is better. If the contents of the string are to be examined / parsed non-uniformly then TextView is better. For example, if the string is used simply as a key or a hash source, use std::string_view. Or, if the string may contain substrings of interest such as key / value pairs, then use a TextView.

TextView provides a variety of methods for manipulating the view as a string. These are provided as families of overloads differentiated by how characters are compared. There are four flavors.

- Direct, a pointer to the target character.
- Comparison, an explicit character value to compare.
- Set, a set of characters (described by a TextView) which are compared, any one of which matches.
- Predicate, a function that takes a single character argument and returns a bool to indicate a match.

If the latter three are inadequate, the first, the direct pointer, can be used after finding the appropriate character through some other mechanism.

The increment operator for TextView shrinks the view by one character from the front, which allows stepping through the view in the normal way, although the view itself should be the loop condition, not a dereference of it.

    TextView v;
    size_t hash = 0;
    for ( ; v ; ++v)
      hash = hash * 13 + *v;

Because the view acts as a container of characters, this can be done non-destructively.

    TextView v;
    size_t hash = 0;
    for (char c : v)
      hash = hash * 13 + c;

Views are cheap to construct, therefore making a copy to use destructively is very inexpensive.

MemSpan provides a find method that searches for a matching value. The type of this value can be anything that is fixed size and supports the equality operator. The view is treated as an array of the type and searched sequentially for a matching value. The value type is treated as having no identity and being cheap to copy, in the manner of an integral type.

Parsing with TextView

A primary use of TextView is to do field oriented parsing. It is easy and fast to split strings into fields without modifying the original data. For example, assume that value contains a null terminated string which is possibly several tokens separated by commas.

    #include <ctype.h>

    void parse_token(const char* value) {
      TextView v(value); // construct assuming null terminated string.
      while (v) {
        TextView token(v.extractPrefix(','));
        token.trim(&isspace);
        if (token) {
          // process token
        }
      }
    }

If value was "bob ,dave, sam" then token would be successively "bob", "dave", "sam". After "sam" was extracted the view would be empty and the loop would exit. token can be empty in the case of adjacent delimiters or a trailing delimiter.
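For experimenting with this pattern outside the Traffic Server tree, the same loop can be approximated with plain std::string_view. This is a hypothetical sketch: extract_prefix and trim here are invented stand-ins for the TextView methods, not the actual API.

```cpp
#include <cctype>
#include <cstddef>
#include <string_view>

// Hypothetical stand-in for TextView::extractPrefix: remove and return
// everything up to the first delimiter (consuming the delimiter), or
// the whole view if the delimiter is absent.
std::string_view extract_prefix(std::string_view &src, char delim) {
    std::size_t at = src.find(delim);
    std::string_view token = src.substr(0, at);
    src.remove_prefix(at == std::string_view::npos ? src.size() : at + 1);
    return token;
}

// Hypothetical stand-in for TextView::trim(&isspace): drop leading and
// trailing whitespace without touching the underlying memory.
std::string_view trim(std::string_view v) {
    while (!v.empty() && std::isspace(static_cast<unsigned char>(v.front())))
        v.remove_prefix(1);
    while (!v.empty() && std::isspace(static_cast<unsigned char>(v.back())))
        v.remove_suffix(1);
    return v;
}
```

Each returned token is a window into the original buffer; nothing is copied or allocated.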
Note that no memory allocation at all is done, because each view is a pointer into value and there is no need to put nul characters in the source string, meaning no need to duplicate it to prevent permanent changes.

What if the tokens were key / value pairs of the form key=value? This can be done as in the following example.

    #include <ctype.h>

    void parse_token(const char* source) {
      TextView in(source); // construct assuming null terminated string.
      while (in) {
        TextView value(in.extractPrefix(','));
        TextView key(value.trim(&isspace).splitPrefix('=').rtrim(&isspace));
        if (key) {
          // it's a key=value token with key and value set appropriately.
          value.ltrim(&isspace); // clip potential space after '='.
        } else {
          // it's just a single token which is in value.
        }
      }
    }

Nested delimiters are handled by further splitting in a recursive way, which, because the original string is never modified, is straightforward.

History

The first attempt at this functionality was in the TSConfig library in the ts::Buffer and ts::ConstBuffer classes. Originally intended just as raw memory views, ts::ConstBuffer in particular was repeatedly enhanced to provide better support for strings. The header was eventually moved from lib/tsconfig to lib/ts and was used in various parts of the Traffic Server core. There was then a proposal to make these classes available to plugin writers, as they had proved handy in the core. A suggested alternative was Boost.StringRef, which provides similar functionality using std::string as the base of the pre-allocated memory. A version of the header was ported to Traffic Server (by stripping all the Boost support and cross includes) but in use it proved to provide little of the functionality available in ts::ConstBuffer. If extensive reworking was required in any case, it seemed better to start from scratch and build just what was useful in the Traffic Server context. The next step was the TextView class, which turned out reasonably well.
It was then suggested that more support for raw memory (as opposed to memory presumed to contain printable ASCII data) would be useful. An attempt was made to do this, but the differences in arguments, subtle method differences, and return types made that infeasible. Instead, MemSpan was split off to provide a void*-oriented view. String-specific methods were stripped out and a few non-character-based methods added.
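As a rough illustration of the kind of non-character method MemSpan ended up with, here is a hypothetical sketch of a typed find over raw memory. span_find is an invented name for illustration, not the actual MemSpan API:

```cpp
#include <algorithm>
#include <cstddef>

// Treat a raw memory region as an array of T and scan sequentially for
// the first element equal to `value`. Returns a pointer to the match,
// or nullptr if no element matches.
template <typename T>
T *span_find(void *data, std::size_t bytes, T const &value) {
    T *begin = static_cast<T *>(data);
    T *end = begin + bytes / sizeof(T);
    T *spot = std::find(begin, end, value);
    return spot == end ? nullptr : spot;
}
```

The value type only needs to be fixed size and equality-comparable, matching the description above.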
https://docs.trafficserver.apache.org/en/latest/developer-guide/internal-libraries/TextView.en.html
Embed flash file in jsp page

Q: This is an example of JSP code to embed a flash animation file named 'pic.swf' in a web page using JSPs, but it's not loading. I used a tag like this: <object id ...

A: A JSP page can embed video, sound, and flash files; Real Media and QuickTime content can be added the same way.

Related questions and snippets:

- JSP-embedded flash games — Is it possible to get the scores from an embedded flash game in a JSP?
- embed ganttChart on JSP page — How can I embed a Gantt chart on a JSP page? The chart is already created from an example beginning: import java.awt.Color; import ...
- HTML5 embed tag example — With the <embed> tag you can add video, audio, or images; it is a singleton tag.
- Tomahawk document tag — Used in place of the <html> tag in a JSP page.
- Tag extension in JSP — What is tag extension in the JSP model? JSP lets authors add custom tags that perform custom actions, written as Java classes using the JSP tag extension API.
- Add a jsp file to java application — Include one JSP in another with <%@include file="success.jsp" %>; the include directive reuses JSP, HTML, or plain text.
- How to embed an executable plugin in my website? — Wants a plugin that returns the client's MAC address at runtime from a JSP/servlet application.
- JSP Tag Libraries — Collections of standard tags that a JSP page can reuse, reducing the need to embed large blocks of Java code.
- ADD ROW - JSP-Servlet — How to add and delete table rows in JSP.
- include directive tag — What is the include directive tag in JSP? Syntax: <%@include ... %>.
- Embed flex video — How to hit a URL, get "application/x-shockwave-flash" content, and display it in an MXML/SWF.
- jsp:include action tag — Includes another resource in the page at request time.
- jsp:forward tag — Usage and syntax with an example.
- JSTL — The JavaServer Pages Standard Tag Library adds tags for common tasks, such as XML data processing, providing an effective way to embed logic within a JSP page without scriptlets.
- JSP Actions — jsp:plugin is used to embed applets; jsp:param adds a parameter to the current request and can be used inside jsp:include, jsp:forward, etc.
- Action / Include / Set / Param / Bean / Text Tag (Data Tag) examples — Generic data tags for calling actions directly from a JSP page, including other resources, setting variables, parameterizing other tags, instantiating beans, and rendering text from a resource bundle.
- Struts ForwardAction vs forward tag in JSP — Difference between the Struts ForwardAction class and the forward tag in JSP.
- JSF validator Tag — Adds and registers a validator; the validator id in the faces-config file and in the JSP file must match.
- jsp + Oracle connectivity — Snippet iterating a ResultSet (rs.getString("name"), rs.getString("address")) and printing the rows.
- Scripting Variables in JSP Custom Tag — In JSP 1.1 you define the scripting variables in the tag handler; JSP 1.2 changes how they are defined.
- How to access the Title tag from xml to jsp — How to read the Title element of an XML file from a JSP.
- jsp — User and admin categories, each with their own add/edit/delete operations.
- Declaring Tag Libraries In JSP — Done with the <%@taglib %> directive, which has its own attributes; the tag library can be declared through a folder in the JSP application directory.
- Select / Optgroup / Password / Textarea / Checkbox / Textfield / File / Label / Optiontransferselect Tag (Form Tag) examples — UI tags that render the corresponding HTML form controls from a JSP.
- JSP declaration tag example.
- Tag Handler in JSP — A tag library descriptor file plus tag handler classes let a JSP use custom tags.
- jsp:plugin in jsp — What is the jsp:plugin action? It inserts the browser-specific OBJECT or EMBED element needed to specify that the browser run an applet using the Java plugin.
- Custom Iterator Tag in JSP — Example program demonstrating how to make a custom iterator tag in JSP.
into jsp table JSP - Java Beginners the following link: Hope...JSP Hai friends, I want to use the flash image on jsp or java script, if i click the image then it goes to the next page...... If any one Reset Tag (Form Tag) Example ; Create a jsp using the tag <s:reset>. It renders a reset button...Reset Tag (Form Tag) Example In this section, we are going to describe the reset tag. The reset tag is a UI Updownselect Tag (Form Tag) Example ; Create a jsp using the tag <s:updownselect> that creates a select component...Updownselect Tag (Form Tag) Example In this section, we are going to describe the updownselect tag Introduction to the JSP Java Server Pages to embed java code in html pages. JSP files are finally compiled...; with .We can embed any amount of java code in the JSP Declaratives... JSP Scriptlets begins with <% and ends %> .We can embed any amount Property Tag (Data Tag) Example Property Tag (Data Tag) Example In this section, we are going to describe the property tag. The property tag is a generic tag that is used to get the property of a value, which Radio Tag (Form Tag) Example ;} } Create a jsp using the tag <s:radio>... .style1 { color: #FFFFFF; } Radio Tag (Form Tag) Example... to describe the radio tag. The radio tag is a UI tag that renders a radio button input Advertisements If you enjoyed this post then why not add us on Google+? Add us to your Circles
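Several of the snippets above contrast the include directive with the jsp:include action. Here is a minimal sketch of the two (the file names and the parameter are illustrative, not from the original snippets):

```jsp
<%-- include directive: header.jspf is merged at translation (compile) time --%>
<%@ include file="header.jspf" %>

<%-- jsp:include action: footer.jsp is invoked at request time,
     with a parameter added via jsp:param --%>
<jsp:include page="footer.jsp">
    <jsp:param name="year" value="2024" />
</jsp:include>
```

The practical difference: the directive produces one merged servlet, so changes to the included file may require recompilation of the including page, while the action dispatches to the included resource on every request.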
http://www.roseindia.net/tutorialhelp/comment/93389
CC-MAIN-2015-18
refinedweb
1,991
64.2
That is a nice message to get. Well, that is easy. There are many ways to do it. I will make use of a dictionary to make my life easy.

def generate_key(n):
    letters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    key = {}
    cnt = 0
    for c in letters:
        key[c] = letters[(cnt + n) % len(letters)]
        cnt += 1
    return key

def get_decryption_key(key):
    dkey = {}
    for c in key:
        dkey[key[c]] = c
    return dkey

def encrypt(key, message):
    cipher = ""
    for c in message:
        if c in key:
            cipher += key[c]
        else:
            cipher += c
    return cipher

# This is setting up your Caesar cipher key
key = generate_key(3)

# Hmm... I guess this will print the key
print(key)

# This will encrypt the message you have chosen with your key
message = "YOU ARE AWESOME"
cipher = encrypt(key, message)

# I guess we should print out your AWESOME message
print(cipher)

If you look at it like this, there is a flaw in the system. Can you see what it is? Yes, of course you can. We are in the 2020s and not back in the times of Caesar. The key space is too small. Breaking it basically takes the following code.

# this is us breaking the cipher
print(cipher)
for i in range(26):
    dkey = generate_key(i)
    message = encrypt(dkey, cipher)
    print(message)

You read the code correctly: there are only 26 keys. That means that even back in the days of Caesar this could be done by hand. This leads us to the most valuable lesson and the most important principle in cryptography. Let's just recap what happened here. Alice sent a message to Bob that Eve captured. Eve did not understand it at first, but since there are only 26 possible keys, she could simply try them all. This leads to the most important lesson in cryptography, Kerckhoffs' principle:

Eve should not be able to break the cipher even when she knows how the cipher works. (Kerckhoffs' principle)

That seems counterintuitive, right? Yes, but think about it: if your system is secure against any attack even when you reveal your algorithm, that gives you more confidence that it is secure. Your security should not be based on keeping the algorithm secret. No, it should be based on the secret key.
Most government ciphers are kept secret. Many secret encryption algorithms that leaked were broken. This also includes the ones used for mobile traffic in the old 2G (GSM) network: A5/1 and the export version A5/2.
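As a side note, the post defines get_decryption_key but never uses it. Decryption is just encryption with the inverse key; here is a self-contained round-trip sketch (compact rewrites of the functions above):

```python
def generate_key(n):
    """Map each uppercase letter to the letter n positions later (wrapping)."""
    letters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    return {c: letters[(i + n) % len(letters)] for i, c in enumerate(letters)}

def get_decryption_key(key):
    """Invert the mapping: cipher letter -> plain letter."""
    return {v: k for k, v in key.items()}

def encrypt(key, message):
    """Substitute mapped characters; leave others (e.g. spaces) untouched."""
    return "".join(key.get(c, c) for c in message)

key = generate_key(3)
dkey = get_decryption_key(key)

cipher = encrypt(key, "YOU ARE AWESOME")
print(cipher)                 # BRX DUH DZHVRPH
print(encrypt(dkey, cipher))  # YOU ARE AWESOME
```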
https://www.learnpythonwithrune.org/how-caesar-cipher-teaches-us-the-most-valuable-lesson-learn-kerckhoffs-principle-in-5-steps-with-python-code/
From time to time, I run into developers who ask the question, "what is the point of using an interface?" This is usually followed by a statement such as…

"I just new up a class whenever I need one, and don't really see the point of using an interface."

When I hear this, the first thing I do is ask to see some of their unit tests. Typically, this is met with a few moments of awkward silence, and then a declaration similar to…

"Well, I don't write unit tests. I don't have time for them, and I really don't see the point of them."

And my favorite…

"Where's the value add?"

Perhaps you have worked with developers like this. I have sometimes observed that an associated trait of this approach to programming comes along with a statement or two on how "fast" they write their code. Now, sometimes fast is good, but rarely at the expense of code that is tightly-coupled, untestable, or not object oriented. The reason for this is that, according to Agile Development (2007), far more time (as much as 80-90 percent of the life of production code) is spent maintaining your code than is spent initially writing it. So take the time to write it correctly using proven methods. According to Osherove (2015), tightly-coupled and untestable code will cost you and the team greatly in the long run in terms of bug fixes, changing existing features, or adding new features. Left to continue, some systems erode to the point where they just stop working and the only alternative is a complete rewrite. In any case, let's suppose that our developer's name is Bart. Everything goes along okay for a while, but then one day, inevitably, Bart is either out sick, or Bart decides to take some vacation. Of course a bug is found, and the development manager asks you to take a look at Bart's code and please come up with a fix.
So you fire up your debugger and start walking through the code, and oh my, what do you find? An assortment of monolithic classes and long methods that, in turn, call other monolithic classes and long methods. Since there are no interfaces in play, there are dependencies strewn about directly accessing things like databases, web services, the file system, the EventLog, or any number of external dependencies. Logging is taking place, but each class creates an instance of the logging class as needed. You understand that the logging code is a cross-cutting concern. However, you see that the same logging code repeats over and over again, and is scattered throughout many methods and classes. This adds additional noise to the code, making it that much more unreadable. Mr. Groves, in his book AOP in .NET (2015), discusses the advantages of Aspect-Oriented Programming (AOP), which relies on interfaces. Bart is not familiar with these techniques, as he is not a fan of interfaces. However, an Aspect is precisely where cross-cutting concerns like logging, security, authentication, and common things like defensive coding belong. (We will see how to create an Aspect to get rid of the repeating code in another post.) In any case, perhaps you would like to write some tests for this code, but because there are no interfaces, there are no seams to pass in the required dependencies. So you are stuck. Most likely, you will run a console application, set breakpoints, and try to find out where the bug lives. And when you think you've found a bug and want to make changes to the code, you're left with a sinking feeling that perhaps when you check in your changes, you just might break something else. Because Bart wrote no tests providing a safety net, at best, this approach is now a crap shoot. Does this scenario sound familiar? If so, you might be wondering, what does this have to do with interfaces? The answer is: everything!
It is the use of an interface that allows us to begin to write decoupled code. Each interface can serve as a seam where we can pass in a dependency of our own choice, real or faked, instead of relying on some hard coded reference to an external resource or other dependency such as the logging class dependency. The power of the interface lies in the fact that an interface is nothing more than a required behavior. The interface is often called a contract, but I like to use the word behavior as the true focus and intent of an interface. By stating the ‘behavioral intent’ through an interface, we “decouple” any concrete class’s implementation of the interface. Put another way, we are saying that any class that implements our interface must exhibit certain behaviors. How the class implements the behavior, we no longer care. As long as the behavioral contract is implemented and followed, we are good to go! Before we take a look at using an interface, let’s take a look at a class that does not use an interface, and hence provides us no seam. This class uses an external dependency and its tight coupling prevents us from writing true unit tests. At best, we are forced to write an integration test that must call an external dependency. Now, is this bad? I would argue three times: yes, yes, yes! It is essential that we are able to test the behavior of our code without depending on external entities or services. These types of tests are called unit tests by definition. (If you follow Test Driven Development, you naturally will have testable and decoupled code, but this is the subject of another post.) Integration tests that rely on external entities and other systems certainly have their place, but they do not replace the need for solid unit tests that allow us to “refactor with impunity.” Let’s look at our Service class without an interface. 
public class Service
{
    // HTTP implementation pseudocode
    public string GetUserFirstName(string logonId)
    {
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create("");
        var result = request.GetResponse(logonId);
        return result.ToString();
    }
}

We have no way to pass in our own dependency, because the Service class has taken on the hard-coded responsibility of creating an HTTP request method. So even with this simple class and its one simple method, we are forever tied to an HTTP external dependency. But what if the endpoint is down and we want to test our code, or we'd like to make and test some related business logic changes? We are pretty much stuck. Let's introduce a simple interface to see how our dependency situation changes. The behavior that we want to enforce and test is in the GetUserFirstName method. So, let's put that behind a simple interface and create our first seam.

public interface IService
{
    string GetUserFirstName(string LogonId);
}

Now any class that implements our interface must provide an implementation of this method. Let's do two things to leverage the interface's behavior specification. First, we will modify our existing Service class to require an IService instance as part of its constructor. Secondly, we will have the Service class implement the IService interface. In the first case, requiring an instance of the interface in the constructor is a typical dependency inversion technique. We have moved the responsibility for creating the dependency from the Service class and pushed it (or inverted it) back to the caller, hence the name Dependency Inversion. The caller now must supply an instance of IService, and this is exactly what we want in our decoupled code! In the second modification, implementing the IService interface, we want to guarantee that the GetUserFirstName method appears on the Service class. Let's take a look at the resulting Service class code to see what we have done.
public class Service : IService
{
    IService service;

    public Service(IService service)
    {
        this.service = service;
    }

    public string GetUserFirstName(string LogonId)
    {
        return service.GetUserFirstName(LogonId);
    }
}

Notice that our interface is helping to shape our code! By implementing IService, we have to provide the GetUserFirstName method. We also leveraged the Service class's constructor to require an instance of IService. Using the passed-in instance of IService, we can now delegate the work of GetUserFirstName to whatever instance of IService we choose to pass in. How do we know that we can successfully delegate our return call to service.GetUserFirstName(LogonId)? We can be sure because we know any instance of IService must implement the GetUserFirstName method. Further, we required and set the local service variable from the Service class constructor to be of the IService type. In our case, any class that implements the IService interface "IS" an IService and can be treated as such. It is important to note that the interface behavior is at work for us by definition, both in the constructor and in the Service class method implementation. By taking these simple steps, we have added polymorphic behavior to our code, and this is one of the pillars of object-oriented programming. Polymorphic means "many forms." In terms of our IService example, it simply means that the caller is not aware of the specific implementation details, and it does not need to be aware; it has merely to call the GetUserFirstName method and the work is handled by our IService instance. Now the IService instance could be a web service class (we will have a separate class for each implementation), a file system or SharePoint implementation, a test class implementation of IService, or any other implementation that may come along based on future requirements.
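To see the seam in action, here is a minimal sketch of a hand-rolled test double. The FakeService class, its canned return value, and the NUnit-style test are my own illustration, not from the article:

```csharp
// Any class implementing IService can be passed through the seam,
// so this test never touches HTTP or any other external dependency.
public class FakeService : IService
{
    public string GetUserFirstName(string LogonId)
    {
        return "FakeFirstName";   // canned value for the test
    }
}

[TestFixture]
public class ServiceTests
{
    [Test]
    public void GetUserFirstName_WithFakeService_ReturnsCannedValue()
    {
        var service = new Service(new FakeService());
        Assert.AreEqual("FakeFirstName", service.GetUserFirstName("101"));
    }
}
```

This is exactly what the interface buys us: the production Service delegates to whatever IService it is handed, real or fake.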
This constructor inversion technique is the basis for implementing the Strategy Pattern, where we "encapsulate what varies." Many other design patterns in SOLID rely on Dependency Inversion to supply specific concrete instances of interfaces, based on the client's needs. An IoC (Inversion of Control) container is often used to pre-wire and register these dependencies, relating each specific interface to a specific concrete class implementation. This is done in the root of the application at startup, known as the composition root. (IoC is the subject of another post.) In any case, we have created our first seam, and it is this concept that enables decoupled and testable object-oriented code. Now, a unit test might easily be written that looks like this:

[Test]
[Category("Unit")]
public void GetFirstName_UsingNamedTestService_ReturnsFirstNameFromTestService()
{
    var factory = new Factory();
    IService instance = factory.FactoryMethod("testService");
    var result = instance.GetUserFirstName("101");
    Assert.AreEqual("FirstNameFromTestService", result);
}

In my next post, I'll demonstrate using StructureMap to create the factory method shown above. Without using a switch, the factory will return a specific instance of IService, based solely on the client passing in a simple string (the named instance). This named instance can be seen on line 6 in the unit test above, and is the string "testService". This is the essence of the strategy pattern and is used throughout codebases by developers who adhere to the principles of SOLID programming. All of this object-oriented goodness is made possible thanks to the use of the interface.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

Hello, and thank you for your comment! In his book, 50 Specific Ways to Improve Your C#, author Bill Wagner addresses your question with his Item 14: "Prefer Defining and Implementing Interfaces to Inheritance." He goes on to say why in detail, but the opening lines are quite telling: "Abstract base classes provide a common ancestor for a class hierarchy. An interface describes related methods comprising functionality that can be implemented by a type… Base classes describe what an object is; interfaces describe one way in which an object behaves." Now, to be fair, SOLID principles suggest "programming to abstractions instead of concretions," and an abstract class is an abstraction by definition, as is an interface. But like Bill, I prefer to describe a behavior via an interface and leave it to the class creator to decide how best to implement that behavior. I see that as a much more granular approach to creating a seam than starting with a base class and then overriding methods. This is not to say that abstract base classes do not have their place; the .NET Framework contains many. In the IService example, yes, of course there will be a concrete web service class that does call out to an HTTP endpoint, and its test will be an integration test. But with the new version that takes an IService, the Service class is no longer tightly coupled. The point of introducing an interface is to open a seam where unit tests can be written that do not rely on an external endpoint or any external system. This is because the interface defines only the behavior that must be present.
Again, doing this allows us to test the behavior and shape of the method calls (the business or domain logic, if you prefer) without relying on external entities. It allows us to assert known values and test against those values. The unit tests give us a safety net to help ensure that whatever refactoring we decide to do later on has not broken any existing production code. If you can write your unit tests using abstract classes and that meets your testing needs, then it sounds fine to use abstract classes. But compared to an interface, I suspect that might be complicating the unit tests rather than simplifying them. I hope this helps.

References

Wagner, B. (2018). More effective C#: 50 specific ways to improve your C#. Boston: Addison-Wesley.
https://www.codeproject.com/Articles/1222790/Why-Use-An-Interface?msg=5472799
27 May 2010 13:08 [Source: ICIS news]

SINGAPORE (ICIS news)--BASF-YPC Co is to restart its 320,000 tonne/year monoethylene glycol (MEG) plant at Nanjing, in China's Jiangsu province, in mid-June after nearly two months of maintenance work, a source close to the company said on Thursday. The unit was shut on 20 April along with its upstream ethylene cracker, which had its capacity expanded from 600,000 tonnes/year to 740,000 tonnes/year. "There is no fixed date for the restart [of the MEG plant], as this would depend on whether its upstream cracker could resume full operation," said the source. He added that the company had planned to bring the cracker on line in early June. BASF-YPC is a 50:50 joint venture between German chemical major BASF and China Petroleum & Chemical Corp (Sinopec).
http://www.icis.com/Articles/2010/05/27/9363039/basf-ypc-to-restart-nanjing-meg-plant-in-mid-june.html
Re: What characters are allowed in Mathematica variable names? i.e. how

On Monday, April 16, 2012 6:10:20 AM UTC-4, David Bailey wrote:
> On 13/04/2012 10:00, Jesse Perla wrote:
> Even if you like your program to be stored in a clear-text .m file, do
> you realise that you can edit these in Mathematica, much as if they were
> a notebook, and the result will save back as a .m file. It is even
> possible to add headings, subheadings and text cells to your code.
> These are hidden as comments in the .m file! Probably notebooks should
> have been organised in this way!

Actually, I like debugging in the FrontEnd or Workbench (or at least editing .m files, as the binary notebook files do not play nice with SVN, diffs, separating code from output, or having multiple people working on the same code base). As long as there are no escaped characters in variable names, the code is reasonably easy to read in both text and diff form. As people seem to wonder why I (and likely others) need to run command-line, let me give some context to the issue: I need to run both on the Windows FrontEnd and on the Linux command line, as my problem is too time-consuming to run on my desktop. The issue is that the job/batch mode of clusters is the way they force everyone to submit jobs so that they can schedule when and where jobs are run, which makes any hope of using a live session in a notebook impossible. Another issue is that every cluster has elaborate multi-level ssh security methods which make it difficult or impossible to connect directly with the FrontEnd. Sorry if I overstated the hypothetical of not using the FrontEnd; I just wanted to filter out comments on how I shouldn't be using the command line or reading clear-text files. Thanks for all of the responses. It is a matter of taste, but to summarize people's answers to my questions:

* You can use a $, as in "my$variable". But you need to be extremely careful with Mathematica symbols and name mangling; in particular, it is very dangerous to use numbers, such as "my$12".
* camelCase is always an option, of course, but besides the aesthetic concerns (many find it more difficult to read), you need to be very careful with capitalized first letters, as they can clash with built-in Mathematica symbols. This is an issue if you are trying to transcribe a lot of math faithfully in the standard notation of your discipline (e.g. "x" for an element of a set and "X" for the set itself, or "x" for the log of a variable and "X" for the variable in levels).
* I originally used the "`" symbol, which is also used for scoping, e.g. "my`variable". I figured it couldn't really hurt, even if it was also used for scoping/namespaces. Don't do this, as the subtle changes in scoping made other coding difficult. In particular, I ran into a problem when I tried to store out all globals with DumpSave.
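To illustrate the points above, here is a short Mathematica sketch (my own summary of the thread's advice; the exact behavior and messages should be treated as an assumption rather than a definitive reference):

```mathematica
my$variable = 5;      (* fine: $ is a legal letter-like character in symbol names *)
myLogX = 1.5;         (* camelCase with a lowercase first letter avoids built-ins *)

(* Risky, per the thread: $-plus-digit names such as my$12 can interact badly
   with Mathematica's name mangling and internal $-prefixed system symbols. *)

(* Dangerous: capitalized names can collide with built-ins.
   N, D, E, I, and C are all reserved; evaluating N = 3 raises Set::wrsym. *)

(* A backtick is a context mark, not an ordinary name character: *)
my`variable = 7;      (* creates the symbol "variable" in context "my`" *)
```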
http://forums.wolfram.com/mathgroup/archive/2012/Apr/msg00360.html
Hey folks, I've researched this before asking, but all I can find are trivial apps that all use the same type of example. My first of many questions involves controllers and views. Of all of the examples I have seen, they always return one view. For example, the Home/Index controller will return a view like so:

return View(model);

But what if you need multiple views for the same page? Say you have to return two pieces of data for different parts of the page. Would you need to add all control logic in the single Index controller action, or can I add different ActionResults for the same page? Kind of a noob question, but I finally stopped planning and started building. Thanks for your help.

Welcome aboard Patri....uh Chaser! Ok, the View() method has many overloads: Controller.View Method (System.Web.Mvc). You can return any kind of view you want at any time during an action. The same goes for action results. ::smile::

My first Ah ha! Basically, you can return all data for different parts of the page by creating a "master" model, such as your "IndexViewModel"? Return it to the view, and pull the data from the model into the corresponding parts of the page, à la Razor, for example. I hope I am understanding it right.

EDIT: Or a ViewData["Hello"] = "World"; for simple string variables.

For multiple pieces of data, you'd use a compound view model. Let's say I wanted to return a list of menu items and some articles...

public class MenuItem
{
    public string Title { get; set; }
    public string Url { get; set; }
}

public class ArticleItem
{
    public string Title { get; set; }
    public string Message { get; set; }
}

public class IndexViewModel
{
    public IEnumerable<MenuItem> MenuItems { get; set; }
    public IEnumerable<ArticleItem> ArticleItems { get; set; }
}

Just build and return the IndexViewModel.

Correct, though usually, if you are using a master view, then create a MasterViewModel class and put anything in it that is common to all views...
public class MasterViewModel
{
    public string DEBUG_MESSAGE { get; set; }
    public IEnumerable<MenuItem> MenuItems { get; set; }
    public string CopyrightNotice { get; set; }
}

Then base your other top-level view models on that...

public class HomeIndexViewModel : MasterViewModel
{
    public IEnumerable<ArticleItem> ArticleItems { get; set; }
}

Then create a base controller class and add a special handler...

public abstract class MasterController : Controller
{
    public ActionResult PreparedView(MasterViewModel model)
    {
        // set up master view model elements here...
        return View(model);
    }
}

And in your top-level controller...

public ActionResult Index()
{
    var articles = repository.GetAll();
    var items = new List<ArticleItem>();
    // fill items and set model...
    var model = new HomeIndexViewModel { ArticleItems = items };
    return PreparedView(model);
}

Hope that made sense. I am also available on MSN Live Chat (serenarules@yadtel.net) at nearly all hours. I have a lot of time on my hands. =)

So, the MasterViewModel is basically like a master page from WebForms, where certain parts never change. Then you add page-specific features to each page by inheriting from it. Followed by the master controller, which inherits from the Controller object, with an ActionResult which will display the master page items, and also the specific Index() ActionResult that calls PreparedView(model)? So, in a sense, PreparedView() is the last step before it reaches the view (and not the ActionResult Index())? You're definitely the go-to person! Thanks for your help.

Essentially. Think of it like this:

View uses HomeIndexViewModel
_Layout uses MasterViewModel

The "model" object is passed down from view to _layout and is downcast in the process, so that _layout has access to only the base model properties. I've prepared a little solution for you that might help. Just unzip, load into VS and run. No DB required, though it does have an in-memory db and some repository examples, just to have something to pull from.
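The "View uses the derived model, _Layout uses the base model" idea can be sketched in Razor like this (a minimal illustration; file contents are hypothetical and namespace/view setup is omitted):

```cshtml
@* _Layout.cshtml declares the base type, so it sees only MasterViewModel members *@
@model MasterViewModel
<ul>
    @foreach (var item in Model.MenuItems)
    {
        <li><a href="@item.Url">@item.Title</a></li>
    }
</ul>
@RenderBody()
<footer>@Model.CopyrightNotice</footer>

@* Index.cshtml declares the derived type; the same object reaches _Layout,
   downcast to MasterViewModel *@
@model HomeIndexViewModel
@foreach (var article in Model.ArticleItems)
{
    <h2>@article.Title</h2>
    <p>@article.Message</p>
}
```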
Yeah, it's so much easier to understand when you actually have it in front of you! OK, I'm starting to get it, slowly but surely. I'm going to alter your code to see what I come up with, after the foundation you laid. I will definitely get back with you tomorrow! Again, thanks for your time.

You're welcome Chaser. =)

Morning SerenaRules, I added my question to the code, to make it easier to understand:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;
using PaperChaser.Models;
using PaperChaser.Models.Domain;
using PaperChaser.Models.Domain.Persistence;
using PaperChaser.Models.Persistence;

namespace PaperChaser.Controllers
{
    [HandleError]
    public class PersonController : MasterController
    {
        private IPersonRepository personRepository;

        public PersonController(IPersonRepository personRepository)
        {
            this.personRepository = personRepository;
        }

        public PersonController()
        {
            this.personRepository = new PersonRepository();
        }

        public ActionResult Index()
        {
            var people = personRepository.GetAll();
            var items = new List<PersonIndexViewModelItem>();

            foreach (var person in people)
                items.Add(new PersonIndexViewModelItem
                {
                    Id = person.Id,
                    Name = person.Name,
                    Comment = person.Comment
                });

            var model = new PersonIndexViewModel { Items = items };

            /* Would I add LINQ here, and use the same "model" variable as above?
               As you add the "model" var to the PrepareView(model) call below.
               So I can create multiple bindings to a page. */
            return PrepareView(model);
        }
    }
}

Such as binding 2+ grids to a page....

var model = new PersonIndexViewModel { Items = items, Others = others };
/* just add it to the existing model (of course, you'll need to edit the model definition also) */
return PrepareView(model);

Alright, that makes sense. This isn't as scary as I thought it would be.
Learning a new architecture can be daunting, but 6 months from now, I will be able to help people out with their MVC questions. Time to strain my eyes with dark/light contrast in the IDE!

What persistence engine are you planning on using? Now that you're a bit more familiar with MVC, seeing a more complete app (with comments) might ease the task. Also, here are some things I consider to be good practice tips, with reasons.

1) Don't pass entities to your views. Depending on how you've scoped your persistence context / session, it will most likely have been disposed before the view is rendered. If you've used lazy loading to iterate through child collections on an entity in a view, it will throw an exception. Not passing entities into a view also reduces the chances of accidental changes to the back end.

2) Use a view model specific to your view. If you only need to see name information on a contact list, but not phone and address (saving that for a detail view), then simply don't include the extra information in your view model definition. View models are similar to a db View into a given Table, showing just what you want, and how you want it.

3) Use a master view model to pass things common to all pages into your _layout template. Self explanatory.

4) Keep your controllers clean. If you find a given action's code becoming large and spaghetti-like, offload it to a service class.

5) Familiarize yourself with inversion of control, and use that to inject interfaced objects into your controllers, services and repositories. Being able to remove the default ctors from these objects also means you can remove the "using" clauses needed to make them work, decoupling your objects from the interfaced objects.

6) Use AutoMapper to map your entities to view models. Mapping complex view models can generate a lot of looped code. This can get messy real quick. Set up simple this-to-that maps with AutoMapper and get your models later with a single call.
7) Separate your application layers! Remember how I had other namespaces in the sample's "Models" folder for domain and persistence? In practice, these would be in their own projects and referenced from the MVC app.

8) Use the Authorize attribute. Regardless of whether or not you're using built-in membership, or rolled your own using IPrincipal and IIdentity, use this attribute to quickly resolve the current user's rights to an action.

There's a lot more, but for now, these should help you out. Don't fret if one or more of the above are alien right now. You'll run across things in your own explorations.

Some of that makes sense, while some of it looks like Mandarin. lol. Like you said, I will learn as I go. Quick question, why do you overload this method:

private IPersonRepository personRepository;

public PersonController(IPersonRepository personRepository)
{
    this.personRepository = personRepository;
}

public PersonController()
{
    // do really do this in production, use IoC
    this.personRepository = new PersonRepository();
}

Do I have to overload every page? And I take it this piece of code is to connect to the data layer/EF?

this.personRepository = new PersonRepository();

And I will definitely come back to this thread to remember those tips, hell, I'm just going to copy-and-paste it!

EDIT: I'm going to persist with the URL or TempData... Thanks

Good questions! You noticed my original comment in the "default" ctor that said "do not do this in production"? This relates to #5 on my list above.

First, one of the goals of good programming (not just MVC) is decoupling consuming classes from the objects they consume. That means the consuming class (the controller in this case) should only care about the consumed classes' interfaces.

Second, you should eventually use an IoC container to resolve these dependencies, eliminating the need for a default ctor. Leaving just the one with the repository interface as a parameter.
Third, this is also important during testing, where you might want to substitute a mock class built on the same interface!

On the question of the repository class: yes, repositories are wrapper classes used to delegate complex queries to an underlying provider. Whether that is EF, L2S, or NH is unimportant. They abstract away the need to know exact details:

var people = personRepository.GetWhereNameLike("sam");

should return the same thing, regardless of which provider you are using. One thing to note here is that when you introduce IoC into your app, and remove that "default" ctor, you won't have that line:

this.personRepository = new PersonRepository();

That line is bad because it couples the controller with knowledge of implementation.

[EDIT] I just realised I left out the word "not" in my original code comment. Sorry if that confused!

Tell you what I'm going to do... Since you've made the leap, I want you to get it right from the beginning. Therefore, I will provide a simple solution that details each element with comments and thoughts. Maybe others will find it useful as well. I will return shortly with a link (it'll probably be too big to upload here).

Here is something I wrote for another member, as an example. It shows probably more than you want or need at this point, but it's a good way to see how things work in a larger application (even though I only provided the foundation and basic account stuff). In this sample I didn't have a need for a master model, so overlook that. The rest should be informative.

Right off the bat, I followed three related classes/pages: AccountRegisterModel.cs, AccountController.cs, and Register.cshtml. After scanning the pages up and down I realized how it all goes together, as far as MVC. I plan on going through your other classes, Domain and Enterprise Foundation, which is where they are called from the Controller. This is f'n awesome man. I'm finally seeing how this pattern works.
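The constructor-injection idea SerenaRules describes is language-agnostic. Here is a minimal, self-contained sketch of the same pattern in Java (the thread's code is C#; all class and method names below are illustrative, not from the sample solution):

```java
import java.util.List;

// The controller only knows this interface, never a concrete repository.
interface PersonRepository {
    List<String> getAll();
}

class PersonController {
    private final PersonRepository personRepository;

    // The single constructor: an IoC container (or a test) supplies the
    // dependency, so there is no "default" ctor coupling the controller
    // to one implementation.
    PersonController(PersonRepository personRepository) {
        this.personRepository = personRepository;
    }

    String index() {
        return String.join(", ", personRepository.getAll());
    }
}

public class DiSketch {
    public static void main(String[] args) {
        // In a unit test we can substitute a fake built on the same interface.
        PersonRepository fake = () -> List.of("Alice", "Bob");
        PersonController controller = new PersonController(fake);
        System.out.println(controller.index());
    }
}
```

Because PersonRepository has a single abstract method, the test double can be a lambda; swapping in an EF- or NHibernate-style implementation later requires no change to the controller.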
I'm going to go over this some more for a couple of hours to learn as much as I can. SP, give SerenaRules member of the Year! Thanks dude. Just helping fill the gap left behind by my retirement. =) Let me know if you have any questions. Keep in mind, that solution uses DDD-Lite (not full blown DDD) and there are no unit tests. We'll cover those much later on.
https://www.sitepoint.com/community/t/the-mvc-journey-begins/80701
In C#:

private static const string strSomething = "lalala"

or

static readonly string strSomething = "lalala"

Can't something like this be done in managed C++? Upper case at will.

I'm not sure where you're going with this... you could just:

#pragma once
#pragma managed

using namespace System;

namespace Links {
    static const System::String ^baseUrl = "";
}

The only issue is this only works with managed code. Just include this snippet in a header file for all your constants. Then you just reference as Links::baseUrl

Definition:

namespace Links {
    public static string baseUrl = "";
}

Usage:

public string getFullUrl() {
    return Links.baseUrl + "/stuff.html";
}

Unfortunately, "static const String^ baseUrl = "http://…";" inside a namespace generates an error about static/global variables not being allowed to be of type "String^". Otherwise I wouldn't be asking :)

O. i .c — try this then: wrap it in a singleton object, that way you can access it as mykeys::baseUrl (shameless rip from elsewhere)

Erh, ok? You still need a way to name a string literal inside your singleton wrapper. If you tell me how you're going to do that in a one-liner that's as convenient as static const char* baseUrl = "http://"; and that doesn't create any objects, then we can forget the whole singleton thing and just use the string-literal-naming technique you were going to use inside MySettings. For now I'll stick with:

static const System::String^ baseUrl() { return "http://…" ; }

The code produced by the following three examples seems to be the same:

using namespace System ;

namespace Foo {
    String^ AddStrings(String^ left, String^ right) { return left + right ; }

    // Example 1: With /clr not /clr:pure or /clr:safe
    static const char* c_baseUrl = "" ;
    String^ Foo() { return AddStrings(c_baseUrl, "foo.html") ; }

    // Example 2: A workable but annoying way of naming literals
    static const String^ mc_baseUrl() { return "" ; }
    String^ Bar() { return AddStrings(mc_baseUrl(), "bar.html") ; }

    // Example 3: Literals.
    String^ Baz() { return AddStrings("", "baz.html") ; }
} ;

My knowledge about C++/CLI is not very deep, but if I remember right, this could work:

#include <vcclr.h>
using namespace System;

namespace test {
    gcroot<String^> const foo = "Hello World";
}

Drave

I hate that comment function. Somebody needs to tell me how I display code in the comments … Well, nopaste FTW:

Drave

Olly, noticed you had an abysmal looking tag cloud on the site, have a look at for a better one :)

Ta muchly … By the way, to do the sourcecode thing … you do the following:

[ sourcecode language='c++' ] … code [ /sourcecode ]

Without the extra spaces inside the []s.

Can't say for sure in C++, but in C#, static is redundant if you are doing const. The simplest form is:

public class Literals {
    public const string FOO = "Foo";
}

So in general code you would do:

string myVar = Literals.FOO;

Ah, now I see what you are getting at. I don't think, in the .NET world, you can have a variable outside of a class even in C++. So what you want (in C# form) is this:

public class Links {
    /// Prevent instantiation
    private Links() {}
    public const string BASE_URL = "";
}

which would be referenced as so:

public void SomeMethod() {
    Console.WriteLine(Links.BASE_URL);
}

C++/CLI actually allows you to have non-encapsulated globals (it just puts them in a secret "global" namespace). But yep, that's what I was trying to do. Although the scary thing is that the C# construct appears to actually cause each instance of the class to have its own reference to the string in memory, which increases the size of the instanced members of the class for every named string literal. Kinda bloaty. But I might be wrong on this, having to spend more time in C++/CLI than C#, which is annoying.

Use kw "static" to make it/them class members instead of instance members -> no temporary kicks in. Using kw "ref" class marks it as a managed class, lets it live on the "heap" and avoids copying. Comes as near to a global named set of constants as possible (to me).
I am not familiar with managed C++, so you might want to check it. LMM

#include "stdafx.h"
using namespace System;

ref class Links {
public:
    static System::String^ baseUrl = "";
    static System::String^ kfs = "";
};

int main(array<System::String ^> ^args)
{
    Console::WriteLine(Links::baseUrl);
    Console::WriteLine(Links::kfs);
    Console::ReadLine();
    return 0;
}
https://kfsone.wordpress.com/2009/03/23/net-named-string-literals/
Can anyone help on removing the g factor from accelerometer readings? I am using SensorEventListener with the onSensorChanged() method for getting Sensor.TYPE_ACCELEROMETER data. I need only pure acceleration values in all directions, so at any state if the device is stable (or at constant speed), it should give roughly (0.0, 0.0, 0.0). Currently, depending on its pitch and roll, it gives me va

Dear fellow iOS Developers: I am working on an app that would play a beep for each position the user holds their iOS device (standing, lying on its front/back, or on its side). At the moment, I am able to play a sound when the user has the device on its side; however, the problem is that because I have the accelerometer values linked with the slider, the beeping sound is continuous

How to use the Android Accelerometer feature to measure the distance when the phone moves position? What is the best way to build this kind of application? Example on Youtube

I have a problem with the accelerometer update interval, or a sensitivity filter or something third, I don't know. No matter what I put in the update interval, the image jumps every second, you can see it. If I put 1/60 it's moving smoothly but jumps a little every second. Help anybody, the code is below:

-(void)accelerometer:(UIAccelerometer *)accelerometer didAccelerate:(UIAccelerati

I'm trying to use a shaking motion to clear the content of an InkPresenter; however, shaking is just crashing my application. Here is the code:

public class ShakeDetect {
    private Accelerometer _accelerometer = null;
    object SyncRoot = new object();
    private int _minimumShakes;
    ShakeRecord[] _shakeRecordList;
    private int _shakeRec

I'm having some problems with the accelerometer. When I first started developing my game the controls felt very snappy and precise, but when adding more graphical elements the accelerometer feels like it's reacting very late and sometimes not as precisely as before. I'm having a framerate of around 40fps.
This is where I read the values (as I'm supposed to, I guess):

- (void

I am new to Android. I want to play an audio file when my mobile is shaken. I know the acceleration code, but when I wrote the media player code I didn't get any output from my device.

public void onSensorChanged(SensorEvent event) {
    // TODO Auto-generated method stub
    if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
        getAcceleromete

I have read Apple's documentation on CMMotion, and everything that needs to be done in order to NSLog accelerometer data appears to be done in my app. However, when I run my app and flip the screen there is no data returned from the accelerometer. I have no runtime errors. Here is my code:

- (void)viewWillAppear:(BOOL)animated {
    [super viewWillAppear:animated];

Accelerometer working: I am doing a somewhat strange implementation. My application uses GPS data continuously, and using that I have to do some task. Also, I have implemented accelerometer stuff to detect whether the device is stationary or moving. When the device is stationary I stop GPS fetching by calling stopUpdatingLocation, and when the device is moving I call startUpdatingLocation. This works.

I am in the process of developing an Android application with the intent of monitoring accelerometer data in the background. My own device is running Android 2.2. When the device is locked it no longer collects the accelerometer data. I am aware of the partial wake lock option, but due to the nature of my application it simply isn't ideal. I have done quite a bit of intensive Googli
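The first question above (removing the g factor) is usually answered with a low-pass filter: estimate gravity as a slow-moving average of the raw readings, then subtract it to get linear acceleration. A minimal sketch in plain Java follows; the filter constant and the sample reading are illustrative, and on newer Android versions Sensor.TYPE_LINEAR_ACCELERATION does this for you:

```java
import java.util.Locale;

public class GravityFilter {
    static final double ALPHA = 0.8;        // smoothing constant; tune per device
    private final double[] gravity = new double[3];

    // Feed each raw accelerometer sample; returns linear acceleration.
    double[] filter(double[] accel) {
        double[] linear = new double[3];
        for (int i = 0; i < 3; i++) {
            // Low-pass: gravity tracks the slow component of the signal.
            gravity[i] = ALPHA * gravity[i] + (1 - ALPHA) * accel[i];
            // High-pass: what remains is the fast (linear) component.
            linear[i] = accel[i] - gravity[i];
        }
        return linear;
    }

    public static void main(String[] args) {
        GravityFilter f = new GravityFilter();
        double[] linear = new double[3];
        // A device lying flat and still reads roughly (0, 0, 9.81).
        for (int i = 0; i < 100; i++) {
            linear = f.filter(new double[] { 0.0, 0.0, 9.81 });
        }
        // Once the filter converges, a stationary device yields ~zero.
        System.out.printf(Locale.ROOT, "%.3f %.3f %.3f%n",
                linear[0], linear[1], linear[2]);
    }
}
```

In an Android app you would call filter() from onSensorChanged() with event.values; the same update also answers the "device stationary or moving" question further down this list (small linear magnitude over a window means stationary).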
http://bighow.org/tags/Accelerometer/1
Typewriter

Typewriter-inspired modes for Sublime Text 3

This plugin provides two typewriter-inspired modes for Sublime Text 3*:

Typewriter Scrolling keeps the view centered on the current line, when there is more than half a screenful of text, à la iA Writer, WriteRoom and the like. This mode keeps you from craning your neck to look at the bottom of the screen for hours on end. (If you happen to write for hours on end.) Thanks to Rahul Ramadas, scrolling mode includes mouse support.

Typewriter Typing moves your cursor to the end of the file and disables mouse clicks and all commands that move the cursor and/or select text – leaving you only with letters, numbers, symbols, Backspace, Delete and Enter. Stay in the flow of writing and don't let your inner editor stifle your verbiage ever again. (Also: experience the joy of not being able to go back and correct your typos.)

* The ST2 version is no longer being maintained.

Installation

This plugin is available through Package Control. Select the Package Control: Install Package command via the palette and choose the Typewriter package.

Manual Install

- Download the latest release from here:
- Extract it in your Packages folder. (You can find out its location by clicking the Preferences > Browse Packages… menu inside Sublime Text.)

– or –

Clone the repository in your Packages folder: git clone

Usage

The typewriter modes can be toggled via the command palette. Just search for typewriter. There are three commands: one to toggle each mode and one to toggle both modes together.

You can use the typewriter_mode_scrolling setting to enable or disable the Scrolling mode. For example, I have "typewriter_mode_scrolling": true, in my Distraction Free settings. The Typing mode needs to be triggered by the typewriter_typing_toggle command.
In earlier versions, you could also toggle this mode via a setting, but this doesn't work well in the current version.

Settings

The scrolling mode should work fine as configured. But there are two settings which allow you to tweak the commands that trigger scrolling mode. Look in the settings file for more details. You can also offset the centered line in scrolling mode using the typewriter_mode_scrolling_offset setting; positive numbers move the centered line up, while negative ones move it down.

Warnings

- For best results in Scrolling mode you should set "scroll_past_end": true. (By default it is set to true in Windows and Linux, but false in OSX.)
- Mouse clicking is disabled in typing mode, but it is possible to scroll.

Changelog & History

- 0.4.0 - Rahul Ramadas added mouse support.
- 0.3.1 - Expanded usage warnings.
- 0.3.0 - The mouse is now disabled in both Scrolling and Typing mode. The Typing mode now moves the cursor to the end of the file and is much more robust in general. Because the APIs used are only available for ST3, I will no longer be maintaining the much glitchier ST2 version. (It's still available, though.)
- 0.2.3 - Added offset option for Scrolling mode (requested by Luis Martins), along with some small fixes & tweaks.
- 0.2.2 - Typing mode now supports OSX as well.
- 0.2.1 - Renamed settings to typewriter_mode_scrolling and typewriter_mode_typing so they won't conflict with BufferScroll.
- 0.2 - Added Typing mode for Windows & Linux. typewriter_mode renamed typewriter_scrolling.
- 0.1 - Initial release.

Typewriter was created upon my request by castles_made_of_sand & facelessuser.

Issues/Todo

- Neither mode is designed to work with multiple cursors, though nothing terrible is likely to happen. I need to do some more testing with cloned views as well.
- Figure out how to toggle Typing mode via setting correctly.
- Per Issue #6, change Scrolling mode to a command that can toggle scroll_past_end if needed.
- Add Markdown syntax (Current state: rough draft)
- Add color scheme designed for prose/long text (Current state: near complete) See: Writerly

Alternatives

- BufferScroll also provides a version of "typewriter scrolling" and many other features besides.
- MarkdownEditing has added typewriter scrolling among other more Markdown-specific features.
https://packagecontrol.io/packages/Typewriter
Chapter 14. Using Swing Components

In the previous chapter, we discussed a number of concepts, including how Java's user interface facility is put together and how the larger pieces work. You should understand what components and containers are and how you use them; Swing provides more components than we can cover in a single chapter. In this chapter, we'll cover all the basic user interface components. In the next chapter, we'll cover some of the more involved topics: text components, trees, tables, and creating your own components.

Buttons and Labels

We'll start with the simplest components: buttons and labels. Frankly, there isn't much to say about them. If you've seen one button, you've seen them all; and you've already seen buttons in the applications in Chapter 2, A First Application (HelloJava3 and HelloJava4).

A button generates an ActionEvent when the user presses it. To receive these events, your program registers an ActionListener, which must implement the actionPerformed() method. The argument passed to actionPerformed() is the event itself.

There's one more thing worth saying about buttons, which applies to any component that generates an action event. Java lets us specify an "action command" string for buttons (and other components, like menu items, that can generate action events). The action command is less interesting than it sounds. It is just a String that serves to identify the component that sent the event. By default, the action command of a JButton is the same as its label; it is included in action events, so you can use it to figure out which button an event came from. To get the action command from an action event, call the event's getActionCommand() method. The following code checks whether the user pressed the Yes button:

public void actionPerformed(ActionEvent e) {
    if (e.getActionCommand().equals("Yes")) {
        // the user pressed "Yes"; do something
        ...
    }
}

You can change the action command by calling the button's setActionCommand() method.
The following code changes button myButton's action command to "confirm":

myButton.setActionCommand("confirm");

It's a good idea to get used to setting action commands explicitly; this helps to prevent your code from breaking when you or some other developer "internationalizes" it, or otherwise changes the button's label. If you rely on the button's label, your code will stop working as soon as that label changes; a French user might see the label Oui rather than Yes. By setting the action command, you eliminate one source of bugs; for example, the button myButton in the previous example will always generate the action command confirm, regardless of what its label says.

Swing buttons can have an image in addition to a label. The JButton class includes constructors that accept an Icon object, which knows how to draw itself. You can create buttons with captions, images, or both. A handy class called ImageIcon takes care of loading an image for you and can be used to easily add an image to a button. The following example shows how this works:

//file: PictureButton.java
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;

public class PictureButton extends JFrame {

    public PictureButton() {
        super("PictureButton v1.0");
        setSize(200, 200);
        setLocation(200, 200);

        Icon icon = new ImageIcon("rhino.gif");
        JButton button = new JButton(icon);
        button.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent ae) {
                System.out.println("Urp!");
            }
        });

        Container content = getContentPane();
        content.setLayout(new FlowLayout());
        content.add(button);
    }

    public static void main(String[] args) {
        JFrame f = new PictureButton();
        f.addWindowListener(new WindowAdapter() {
            public void windowClosing(WindowEvent we) {
                System.exit(0);
            }
        });
        f.setVisible(true);
    }
}

The example creates an ImageIcon from the rhino.gif file. Then a JButton is created from the ImageIcon. The whole thing is displayed in a JFrame.
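The label-versus-action-command behavior described above can be verified without showing any window. This small standalone sketch (the class name is ours, not from the book) prints the command before and after setting it explicitly:

```java
import javax.swing.JButton;

public class ActionCommandDemo {
    public static void main(String[] args) {
        JButton yes = new JButton("Yes");
        // With no explicit action command, getActionCommand() falls back
        // to the button's label.
        System.out.println(yes.getActionCommand());

        // An explicit action command survives later label changes,
        // e.g. after internationalization.
        yes.setActionCommand("confirm");
        yes.setText("Oui");
        System.out.println(yes.getActionCommand());
    }
}
```

The first line printed is "Yes" (the label), the second is "confirm", even though the visible label is now "Oui".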
This example also shows the idiom of using an anonymous inner class as an ActionListener.

There's even less to be said about JLabel components. They're just text strings or images housed in a component. There aren't any special events associated with labels; about all you can do is specify the text's alignment, which controls the position of the text within the label's display area. As with buttons, JLabels can be created with Icons if you want to create a picture label. The following code creates some labels with different options:

// default alignment (CENTER)
JLabel label1 = new JLabel("Lions");
// left aligned
JLabel label2 = new JLabel("Tigers", SwingConstants.LEFT);
// label with no text, default alignment
JLabel label3 = new JLabel();

// create image icon
Icon icon = new ImageIcon("rhino.gif");
// create image label
JLabel label4 = new JLabel(icon);

// assigning text to label3
label3.setText("and Bears");
// set alignment
label3.setHorizontalAlignment(SwingConstants.RIGHT);

The alignment constants are defined in the SwingConstants interface. Now we've built several labels, using a variety of constructors and several of the class's methods. To display the labels, just add them to a container by calling the container's add() method.

The other characteristics you might like to set on labels, such as changing their font or color, are accomplished using the methods of the Component class, JLabel's distant ancestor. For example, you can call setFont() and setForeground() on a label, as with any other component.

Given that labels are so simple, why do we need them at all? Why not just draw a text string directly on the container object? Remember that a JLabel is a JComponent. That's important; it means that labels have the normal complement of methods for setting fonts and colors that we mentioned earlier, as well as the ability to be managed sensibly by a layout manager.
Therefore, they're much more flexible than a text string drawn at an absolute location within a container.

Speaking of layouts: if you use the setText() method to change the text of your label, the label's preferred size may change. But the label's container will automatically lay out its components when this happens, so you don't have to worry about it.

Swing can interpret HTML-formatted text in JLabel and JButton labels. The following example shows how to create a button with HTML-formatted text:

JButton button = new JButton(
    "<html>" +
    "S<font size=-1>MALL<font size=+0> " +
    "C<font size=-1>APITALS");

Checkboxes and Radio Buttons

A checkbox is a labeled toggle switch. Each time the user clicks it, its state toggles between checked and unchecked. Swing implements the checkbox as a special kind of button. Radio buttons are similar to checkboxes, but they are usually arranged in groups. Click on one radio button in the group, and the others automatically turn off. They are named for the preset buttons on old car radios.

Checkboxes and radio buttons are represented by instances of JCheckBox and JRadioButton, respectively. Radio buttons can be tethered together using an instance of another class called ButtonGroup. By now you're probably well into the swing of things (no pun intended) and could easily master these classes on your own. We'll use an example to illustrate a different way of dealing with the state of components and to show off a few more things about containers.

A JCheckBox sends ItemEvents when it's pushed. Since a checkbox is a kind of button, it also fires ActionEvents when it becomes checked. For something like a checkbox, we might want to be lazy and check on the state of the buttons only at some later time, such as when the user commits an action. It's like filling out a form; you can change your choices until you submit the form.

The following application, DriveThrough, lets us check off selections on a fast food menu, as shown in Figure 14-1.
DriveThrough prints the results when we press the Place Order button. Therefore, we can ignore all the events generated by our checkboxes and radio buttons and listen only for the action events generated by the regular button.

//file: DriveThrough.java
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;

public class DriveThrough {

    public static void main(String[] args) {
        JFrame f = new JFrame("Lister v1.0");
        f.setSize(300, 150);
        f.setLocation(200, 200);
        f.addWindowListener(new WindowAdapter() {
            public void windowClosing(WindowEvent we) { System.exit(0); }
        });

        JPanel entreePanel = new JPanel();
        final ButtonGroup entreeGroup = new ButtonGroup();
        JRadioButton radioButton;
        entreePanel.add(radioButton = new JRadioButton("Beef"));
        radioButton.setActionCommand("Beef");
        entreeGroup.add(radioButton);
        entreePanel.add(radioButton = new JRadioButton("Chicken"));
        radioButton.setActionCommand("Chicken");
        entreeGroup.add(radioButton);
        entreePanel.add(radioButton = new JRadioButton("Veggie", true));
        radioButton.setActionCommand("Veggie");
        entreeGroup.add(radioButton);

        final JPanel condimentsPanel = new JPanel();
        condimentsPanel.add(new JCheckBox("Ketchup"));
        condimentsPanel.add(new JCheckBox("Mustard"));
        condimentsPanel.add(new JCheckBox("Pickles"));

        JPanel orderPanel = new JPanel();
        JButton orderButton = new JButton("Place Order");
        orderPanel.add(orderButton);

        Container content = f.getContentPane();
        content.setLayout(new GridLayout(3, 1));
        content.add(entreePanel);
        content.add(condimentsPanel);
        content.add(orderPanel);

        orderButton.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent ae) {
                String entree =
                    entreeGroup.getSelection().getActionCommand();
                System.out.println(entree + " sandwich");
                Component[] components = condimentsPanel.getComponents();
                for (int i = 0; i < components.length; i++) {
                    JCheckBox cb = (JCheckBox)components[i];
                    if (cb.isSelected())
                        System.out.println("With " + cb.getText());
                }
            }
        });

        f.setVisible(true);
    }
}
DriveThrough lays out three panels. The radio buttons in the entreePanel are tied together through a ButtonGroup object. We add() the buttons to a ButtonGroup to make them mutually exclusive. The ButtonGroup object is an odd animal. One expects it to be a container or a component, but it isn't; it's simply a helper object that allows only one RadioButton to be selected at a time. In this example, the button group forces you to choose a beef, chicken, or veggie entree, but not more than one. The condiment choices, which are JCheckBoxes, aren't in a button group, so you can request any combination of ketchup, mustard, and pickles on your sandwich.

When the Place Order button is pushed, we receive an ActionEvent in the actionPerformed() method of our inner ActionListener. At this point, we gather the information in the radio buttons and checkboxes and print it. actionPerformed() simply reads the state of the various buttons. We could have saved references to the buttons in a number of ways; this example demonstrates two. First, we find out which entree was selected. To do so, we call the ButtonGroup's getSelection() method. This returns a ButtonModel, upon which we immediately call getActionCommand(). This returns the action command as we set it when we created the radio buttons. The action commands for the buttons are the entrée names, which is exactly what we need.

To find out which condiments were selected, we use a more complicated procedure. The problem is that condiments aren't mutually exclusive, so we don't have the convenience of a ButtonGroup. Instead, we ask the condiments JPanel for a list of its components. The getComponents() method returns an array of references to the container's child components. We'll use this to loop over the components and print the results. We cast each element of the array back to JCheckBox and call its isSelected() method to see if the checkbox is on or off.
If we were dealing with different types of components in the array, we could determine each component's type with the instanceof operator.

Lists and Combo Boxes

JLists and JComboBoxes are a step up on the evolutionary chain from JButtons and JLabels. Lists let the user choose from a group of alternatives. They can be configured to force the user to choose a single selection or to allow multiple choices. Usually, only a small group of choices is displayed at a time; a scrollbar lets the user move to the choices that aren't visible. The user can select an item by clicking on it. He or she can expand the selection to a range of items by holding down Shift and clicking on another item. To make discontinuous selections, the user can hold down the Control key instead of the Shift key.

A combo box is a crossbreed between a text field and a list. It displays a single line of text (possibly with an image) and a downward-pointing arrow at one side. If you click on the arrow, the combo box opens up and displays a list of choices. You can select a single choice by clicking on it. After a selection is made, the combo box closes up; the list disappears and the new selection is shown in the text field.

Like every other component in Swing, lists and combo boxes have data models that are distinct from visual components. The list also has a selection model that controls how selections may be made on the list data. Lists and combo boxes are similar because they have similar data models. Each is simply an array of acceptable choices. This similarity is reflected in Swing, of course: the type of a JComboBox's data model is a subclass of the type used for a JList's data model. The next example demonstrates this relationship.

The following example creates a window with a combo box, a list, and a button. The combo box and the list use the same data model. When you press the button, the program writes out the current set of selected items in the list.
Figure 14-2 shows the example; the code itself follows.

//file: Lister.java
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;

public class Lister {
    public static void main(String[] args) {
        JFrame f = new JFrame("Lister v1.0");
        f.setSize(200, 200);
        f.setLocation(200, 200);
        f.addWindowListener(new WindowAdapter() {
            public void windowClosing(WindowEvent we) { System.exit(0); }
        });

        // create a combo box
        String[] items = { "uno", "due", "tre", "quattro", "cinque",
            "sei", "sette", "otto", "nove", "dieci", "undici", "dodici" };
        JComboBox comboBox = new JComboBox(items);
        comboBox.setEditable(true);

        // create a list with the same data model
        final JList list = new JList(comboBox.getModel());

        // create a button; when it's pressed, print out
        // the selection in the list
        JButton button = new JButton("Per favore");
        button.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent ae) {
                Object[] selection = list.getSelectedValues();
                System.out.println("-----");
                for (int i = 0; i < selection.length; i++)
                    System.out.println(selection[i]);
            }
        });

        // put the controls in the content pane
        Container c = f.getContentPane();
        JPanel comboPanel = new JPanel();
        comboPanel.add(comboBox);
        c.add(comboPanel, BorderLayout.NORTH);
        c.add(new JScrollPane(list), BorderLayout.CENTER);
        c.add(button, BorderLayout.SOUTH);

        f.setVisible(true);
    }
}

The combo box is created from an array of strings. This is a convenience; behind the scenes, the JComboBox constructor creates a data model from the strings you supply and sets the JComboBox to use that data model. The list is created using the data model of the combo box. This works because JList expects to use a ListModel for its data model, and the ComboBoxModel used by the JComboBox is a subclass of ListModel. The button's action event handler simply prints out the selected items in the list, which are retrieved with a call to getSelectedValues(). This method actually returns an object array, not a string array.
List and combo box items, like many other things in Swing, are not limited to text. You can use images, or drawings, or some combination of text and images.

You might expect that selecting one item in the combo box would select the same item in the list. In Swing components, selection is controlled by a selection model. The combo box and the list have distinct selection models; after all, you can select only one item from the combo box, while it's possible to select multiple items from the list. Thus, while the two components share a data model, they have separate selection models.

We've made the combo box editable. By default, it would not be editable: the user could choose only one of the items in the drop-down list. With an editable combo box, the user can type in a selection, as if it were a text field. Non-editable combo boxes are useful if you just want to offer a limited set of choices; editable combo boxes are handy when you want to accept any input but offer some common choices.

There's a great class tucked away in the last example that deserves some recognition: JScrollPane. In Lister, you'll notice we created one when we added the JList to the main window. JScrollPane simply wraps itself around another Component and provides scrollbars as necessary. The scrollbars show up if the contained Component's preferred size (as returned by getPreferredSize( )) is greater than the size of the JScrollPane itself. In the previous example, the scrollbars show up whenever the size of the JList exceeds the available space. You can use JScrollPane to wrap any Component, including components with drawings or images or complex user interface panels. We'll discuss JScrollPane in more detail later in this chapter, and we'll use it frequently with the text components in the next chapter.

Borders

Any Swing component can have a decorative border.
JComponent includes a method called setBorder( ); all you have to do is call setBorder( ), passing it an appropriate implementation of the Border interface. Swing provides many useful Border implementations in the javax.swing.border package. You could create an instance of one of these classes and pass it to a component's setBorder( ) method, but there's an even simpler technique. The BorderFactory class can create any kind of border for you using static "factory" methods. Creating and setting a component's border, then, is simple:

JLabel labelTwo = new JLabel("I have an etched border.");
labelTwo.setBorder(BorderFactory.createEtchedBorder( ));

Every component has a setBorder( ) method, from simple labels and buttons right up to the fancy text and table components we'll cover in the next chapter.

BorderFactory is convenient, but it does not offer every option of every border type. For example, if you want to create a raised EtchedBorder instead of the default lowered border, you'll need to use EtchedBorder's constructor rather than a method in BorderFactory, like this:

JLabel labelTwo = new JLabel("I have a raised etched border.");
labelTwo.setBorder( new EtchedBorder(EtchedBorder.RAISED) );

The Border implementation classes are listed and briefly described here:

- BevelBorder - This border draws raised or lowered beveled edges, giving an illusion of depth.
- SoftBevelBorder - This border is similar to BevelBorder, but thinner.
- EmptyBorder - Doesn't do any drawing, but does take up space. You can use it to give a component a little breathing room in a crowded user interface.
- EtchedBorder - A lowered etched border gives the appearance of a rectangle that has been chiseled into a piece of stone. A raised etched border looks like it is standing out from the surface of the screen.
- LineBorder - Draws a simple rectangle around a component. You can specify the color and width of the line in LineBorder's constructor.
- MatteBorder - A souped-up version of LineBorder.
You can create a MatteBorder with a certain color and specify the size of the border on the left, top, right, and bottom of the component. MatteBorder also allows you to pass in an Icon that will be used to draw the border. This could be an image (ImageIcon) or any other implementation of the Icon interface.
- TitledBorder - A regular border with a title. TitledBorder doesn't actually draw a border; it just draws a title in conjunction with another border object. You can specify the location of the title, its justification, and its font. This border type is particularly useful for grouping different sets of controls in a complicated interface.
- CompoundBorder - A border that contains two other borders. This is especially handy if you want to enclose a component in an EmptyBorder and then put something decorative around it, like an EtchedBorder or a MatteBorder.

The following example shows off some different border types. It's only a sampler, though; many more border types are available. Furthermore, the example only encloses labels with borders. You can put a border around any component in Swing. The example is shown in Figure 14-3; the source code follows.

//file: Borders.java
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
import javax.swing.border.*;

public class Borders {
  public static void main(String[] args) {
    // create a JFrame to hold everything
    JFrame f = new JFrame("Borders");
    f.addWindowListener(new WindowAdapter( ) {
      public void windowClosing(WindowEvent we) { System.exit(0); }
    });
    f.setSize(300, 300);
    f.setLocation(200, 200);

    // Create labels with borders.
    int center = SwingConstants.CENTER;
    JLabel labelOne = new JLabel("raised BevelBorder", center);
    labelOne.setBorder(
        BorderFactory.createBevelBorder(BevelBorder.RAISED));
    JLabel labelTwo = new JLabel("EtchedBorder", center);
    labelTwo.setBorder(BorderFactory.createEtchedBorder( ));
    JLabel labelThree = new JLabel("MatteBorder", center);
    labelThree.setBorder(
        BorderFactory.createMatteBorder(10, 10, 10, 10, Color.pink));
    JLabel labelFour = new JLabel("TitledBorder", center);
    Border etch = BorderFactory.createEtchedBorder( );
    labelFour.setBorder(
        BorderFactory.createTitledBorder(etch, "Title"));
    JLabel labelFive = new JLabel("TitledBorder", center);
    Border low = BorderFactory.createLoweredBevelBorder( );
    labelFive.setBorder(
        BorderFactory.createTitledBorder(low, "Title",
            TitledBorder.RIGHT, TitledBorder.BOTTOM));
    JLabel labelSix = new JLabel("CompoundBorder", center);
    Border one = BorderFactory.createEtchedBorder( );
    Border two =
        BorderFactory.createMatteBorder(4, 4, 4, 4, Color.blue);
    labelSix.setBorder(BorderFactory.createCompoundBorder(one, two));

    // add components to the content pane
    Container c = f.getContentPane( );
    c.setLayout(new GridLayout(3, 2));
    c.add(labelOne);
    c.add(labelTwo);
    c.add(labelThree);
    c.add(labelFour);
    c.add(labelFive);
    c.add(labelSix);

    f.setVisible(true);
  }
}

Menus

A JMenu is a standard pull-down menu with a fixed name. Menus can hold other menus as submenu items, enabling you to implement complex menu structures. In Swing, menus are first-class components, just like everything else. You can place them wherever a component would go. Another class, JMenuBar, holds menus in a horizontal bar. Menu bars are real components, too, so you can place them wherever you want in a container: top, bottom, or middle. But in the middle of a container, it usually makes more sense to use a JComboBox rather than some kind of menu. Menu items may have associated images and shortcut keys; there are even menu items that look like checkboxes and radio buttons.
Menu items are really a kind of button. Like buttons, menu items fire action events when they are selected. You can respond to menu items by registering action listeners with them.

There are two ways to use the keyboard with menus. The first is called mnemonics. A mnemonic is one character in the menu name. If you hold down the Alt key and type a menu's mnemonic, the menu will drop down, just as if you had clicked on it with the mouse. Menu items may also have mnemonics; once a menu is dropped down, you can select individual items in the same way.

Menu items may also have accelerators. An accelerator is a key combination that selects the menu item, whether or not the menu that contains it is showing. A common example is the accelerator Ctrl-C, which is frequently used as a shortcut for the Copy item in the Edit menu.

The following example demonstrates several different features of menus. It creates a menu bar with three different menus. The first, Utensils, contains several menu items, a submenu, a separator, and a Quit item that includes both a mnemonic and an accelerator. The second menu, Spices, contains menu items that look and act like checkboxes. Finally, the Cheese menu demonstrates how radio button menu items can be used. This application is shown in Figure 14-4 with one of its menus dropped down. Choosing Quit from the menu (or pressing Ctrl-Q) exits the application. Give it a try.
//file: DinnerMenu.java
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;

public class DinnerMenu extends JFrame {
  public DinnerMenu( ) {
    super("DinnerMenu v1.0");
    setSize(200, 200);
    setLocation(200, 200);

    // create the Utensils menu
    JMenu utensils = new JMenu("Utensils");
    utensils.setMnemonic(KeyEvent.VK_U);
    utensils.add(new JMenuItem("Fork"));
    utensils.add(new JMenuItem("Knife"));
    utensils.add(new JMenuItem("Spoon"));
    JMenu hybrid = new JMenu("Hybrid");
    hybrid.add(new JMenuItem("Spork"));
    hybrid.add(new JMenuItem("Spife"));
    hybrid.add(new JMenuItem("Knork"));
    utensils.add(hybrid);
    utensils.addSeparator( );

    // do some fancy stuff with the Quit item
    JMenuItem quitItem = new JMenuItem("Quit");
    quitItem.setMnemonic(KeyEvent.VK_Q);
    quitItem.setAccelerator(
        KeyStroke.getKeyStroke(KeyEvent.VK_Q, Event.CTRL_MASK));
    quitItem.addActionListener(new ActionListener( ) {
      public void actionPerformed(ActionEvent e) { System.exit(0); }
    });
    utensils.add(quitItem);

    // create the Spices menu
    JMenu spices = new JMenu("Spices");
    spices.setMnemonic(KeyEvent.VK_S);
    spices.add(new JCheckBoxMenuItem("Thyme"));
    spices.add(new JCheckBoxMenuItem("Rosemary"));
    spices.add(new JCheckBoxMenuItem("Oregano", true));
    spices.add(new JCheckBoxMenuItem("Fennel"));

    // create the Cheese menu
    JMenu cheese = new JMenu("Cheese");
    cheese.setMnemonic(KeyEvent.VK_C);
    ButtonGroup group = new ButtonGroup( );
    JRadioButtonMenuItem rbmi;
    rbmi = new JRadioButtonMenuItem("Regular", true);
    group.add(rbmi);
    cheese.add(rbmi);
    rbmi = new JRadioButtonMenuItem("Extra");
    group.add(rbmi);
    cheese.add(rbmi);
    rbmi = new JRadioButtonMenuItem("Blue");
    group.add(rbmi);
    cheese.add(rbmi);

    // create a menu bar and use it in this JFrame
    JMenuBar menuBar = new JMenuBar( );
    menuBar.add(utensils);
    menuBar.add(spices);
    menuBar.add(cheese);
    setJMenuBar(menuBar);
  }

  public static void main(String[] args) {
    JFrame f = new DinnerMenu( );
    f.addWindowListener(new WindowAdapter( ) {
      public void windowClosing(WindowEvent we) { System.exit(0); }
    });
    f.setVisible(true);
  }
}

Figure 14-4. The DinnerMenu application

Yes, we know. Quit doesn't belong in the Utensils menu. If it's driving you crazy, you can go back and add a File menu as an exercise when we're through.

Creating menus is pretty simple work.
You create a JMenu object, specifying the menu's title.[1] Then you just add JMenuItems to the JMenu. You can also add JMenus to a JMenu; they show up as submenus. This is shown in the creation of the Utensils menu:

JMenu utensils = new JMenu("Utensils");
utensils.setMnemonic(KeyEvent.VK_U);

In the second line, we set the mnemonic for this menu using a constant defined in the KeyEvent class. You can add those pretty separator lines with a single call:

utensils.addSeparator( );

The Quit menu item has some bells and whistles we should explain. First, we create the menu item and set its mnemonic, just as we did before for the Utensils menu:

JMenuItem quitItem = new JMenuItem("Quit");
quitItem.setMnemonic(KeyEvent.VK_Q);

Now we want to create an accelerator for the menu item. We do this with the help of a class called KeyStroke:

quitItem.setAccelerator(
    KeyStroke.getKeyStroke(KeyEvent.VK_Q, Event.CTRL_MASK));

Finally, to actually do something in response to the menu item, we register an action listener:

quitItem.addActionListener(new ActionListener( ) {
  public void actionPerformed(ActionEvent e) { System.exit(0); }
});

Our action listener exits the application when the Quit item is selected.

Creating the Spices menu is just as easy, except that we use JCheckBoxMenuItems instead of regular JMenuItems. The result is a menu full of items that behave like checkboxes.

The next menu, Cheese, is a little more tricky. We want the items to be radio buttons, but we need to place them in a ButtonGroup to ensure they are mutually exclusive. Each item, then, is created, added to the button group, and added to the menu itself.

The final step is to place the menus we've just created in a JMenuBar. This is simply a component that lays out menus in a horizontal bar. We have two options for adding it to our JFrame. Since the JMenuBar is a real component, we could add it to the content pane of the JFrame. Instead, we use a convenience method called setJMenuBar( ), which automatically places the JMenuBar at the top of the frame's content pane.
This saves us the trouble of altering the layout or size of the content pane; it is adjusted to coexist peacefully with the menu bar.

The JPopupMenu Class

One of Swing's nifty components is JPopupMenu, a menu that automatically appears when you press the appropriate mouse button inside of a component. (On a Windows system, for example, clicking the right mouse button invokes a popup menu.) Which button you press depends on the platform you're using; fortunately, you don't have to care--Swing figures it out for you.

The care and feeding of JPopupMenu is basically the same as any other menu. You use a different constructor (JPopupMenu( )) to create it, but otherwise, you build a menu and add elements to it the same way. The big difference is that you don't need to attach it to a JMenuBar. Instead, just pop up the menu whenever you need it.

The following example, PopUpColorMenu, contains three buttons. You can use a JPopupMenu to set the color of each button or the frame itself, depending on where you press the mouse. Figure 14-5 shows the example in action; the user is preparing to change the color of the bottom button.
//file: PopUpColorMenu.java
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;

public class PopUpColorMenu extends JFrame
    implements ActionListener {
  JPopupMenu colorMenu;
  Component selectedComponent;

  public PopUpColorMenu( ) {
    super("PopUpColorMenu v1.0");
    setSize(100, 200);
    setLocation(200, 200);
    addWindowListener(new WindowAdapter( ) {
      public void windowClosing(WindowEvent e) { System.exit(0); }
    });

    MouseListener mouseListener = new MouseAdapter( ) {
      public void mousePressed(MouseEvent e) { checkPopup(e); }
      public void mouseClicked(MouseEvent e) { checkPopup(e); }
      public void mouseReleased(MouseEvent e) { checkPopup(e); }
      private void checkPopup(MouseEvent e) {
        if (e.isPopupTrigger( )) {
          selectedComponent = e.getComponent( );
          colorMenu.show(e.getComponent(), e.getX(), e.getY( ));
        }
      }
    };

    final Container content = getContentPane( );
    content.setLayout(new FlowLayout( ));
    JButton button = new JButton("Uno");
    button.addMouseListener(mouseListener);
    content.add(button);
    button = new JButton("Due");
    button.addMouseListener(mouseListener);
    content.add(button);
    button = new JButton("Tre");
    button.addMouseListener(mouseListener);
    content.add(button);

    colorMenu = new JPopupMenu("Color");
    colorMenu.add(makeMenuItem("Red"));
    colorMenu.add(makeMenuItem("Green"));
    colorMenu.add(makeMenuItem("Blue"));

    getContentPane( ).addMouseListener(mouseListener);
    setVisible(true);
  }

  public void actionPerformed(ActionEvent e) {
    String color = e.getActionCommand( );
    if (color.equals("Red"))
      selectedComponent.setBackground(Color.red);
    else if (color.equals("Green"))
      selectedComponent.setBackground(Color.green);
    else if (color.equals("Blue"))
      selectedComponent.setBackground(Color.blue);
  }

  private JMenuItem makeMenuItem(String label) {
    JMenuItem item = new JMenuItem(label);
    item.addActionListener( this );
    return item;
  }

  public static void main(String[] args) {
    new PopUpColorMenu( );
  }
}

Because the popup menu is triggered by mouse events, we need to register a MouseListener for any of the components to which it applies. In this example, all three buttons and the content pane of the frame are eligible for the color popup menu. Therefore, we add a mouse event listener for all of these components explicitly.
The same instance of an anonymous inner MouseAdapter subclass is used in each case. In this class, we override the mousePressed( ), mouseReleased( ), and mouseClicked( ) methods to display the popup menu when we get an appropriate event. How do we know what an "appropriate event" is? Fortunately, we don't need to worry about the specifics of our user's platform; we just need to call the event's isPopupTrigger( ) method. If this method returns true, we know the user has done whatever normally displays a popup menu on his or her system. Once we know that the user wants to raise a popup menu, we display it by calling its show( ) method with the mouse event coordinates as arguments.

If we wanted to provide different menus for different types of components or the background, we'd create different mouse listeners for each different kind of component. The mouse listeners would invoke different kinds of popup menus as appropriate.

The only thing left is to handle the action events from the popup menu items. We use a helper method called makeMenuItem( ) to register the PopUpColorMenu window as an action listener for every item we add. The example implements ActionListener and has the required actionPerformed( ) method. This method reads the action command from the event, which is equal to the selected menu item's label by default. It then sets the background color of the selected component appropriately.

The JScrollPane Class

We used JScrollPane earlier in this chapter without explaining much about it. In this section we'll remedy the situation. A JScrollPane is a container that can hold one component. Said another way, a JScrollPane wraps another component. By default, if the wrapped component is larger than the JScrollPane itself, the JScrollPane supplies scrollbars. JScrollPane handles the events from the scrollbars and displays the appropriate portion of the contained component.
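To make the wrapping idea concrete, here is a minimal sketch (not from the book's examples; the class and variable names are invented) that puts an oversized panel inside a JScrollPane:

```java
import java.awt.GridLayout;
import javax.swing.*;

public class WrapSketch {
    public static void main(String[] args) {
        // a panel far taller than the window that will hold it
        JPanel tall = new JPanel(new GridLayout(50, 1));
        for (int i = 1; i <= 50; i++)
            tall.add(new JButton("Button " + i));

        JFrame f = new JFrame("WrapSketch");
        // the scroll pane supplies a vertical scrollbar as needed
        f.getContentPane().add(new JScrollPane(tall));
        f.setSize(200, 200);
        f.setVisible(true);
    }
}
```

Only the single call to the JScrollPane constructor is needed; the scroll pane tracks the panel's preferred size and shows or hides its scrollbars on its own.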
Technically, JScrollPane is a Container, but it's a funny one. It has its own layout manager, which can't be changed, and it can accommodate only one component at a time. This seems like a big limitation, but it isn't. If you want to put a lot of stuff in a JScrollPane, just put your components into a JPanel, with whatever layout manager you like, and put that panel into the JScrollPane.

When you create a JScrollPane, you can specify the conditions under which its scrollbars will be displayed. This is called the scrollbar display policy; a separate policy is used for the horizontal and vertical scrollbars. The following constants can be used to specify the policy for each of the scrollbars:

- HORIZONTAL_SCROLLBAR_AS_NEEDED - Displays a scrollbar only if the wrapped component doesn't fit.
- HORIZONTAL_SCROLLBAR_ALWAYS - Always shows a scrollbar, regardless of the contained component's size.
- HORIZONTAL_SCROLLBAR_NEVER - Never shows a scrollbar, even if the contained component won't fit. If you use this policy, you should provide some other way to manipulate the JScrollPane.
- VERTICAL_SCROLLBAR_AS_NEEDED - Displays a scrollbar only if the wrapped component doesn't fit.
- VERTICAL_SCROLLBAR_ALWAYS - Always shows a scrollbar, regardless of the contained component's size.
- VERTICAL_SCROLLBAR_NEVER - Never shows a scrollbar, even if the contained component won't fit. If you use this policy, you should provide some other way to manipulate the JScrollPane.

By default, the policies are HORIZONTAL_SCROLLBAR_AS_NEEDED and VERTICAL_SCROLLBAR_AS_NEEDED.

Here's an example that uses a JScrollPane to display a large image. The application itself is very simple; all we do is place the image in an ImageComponent, wrap a JScrollPane around it, and put the JScrollPane in a JFrame's content pane.
Here's the code:

//file: ScrollPaneFrame.java
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;

public class ScrollPaneFrame {
  public static void main(String[] args) {
    String filename = "Piazza di Spagna.jpg";
    if (args.length > 0)
      filename = args[0];

    JFrame f = new JFrame("ScrollPaneFrame v1.0");
    f.setSize(300, 300);
    f.setLocation(200, 200);
    f.addWindowListener(new WindowAdapter( ) {
      public void windowClosing(WindowEvent e) { System.exit(0); }
    });

    Image image = Toolkit.getDefaultToolkit( ).getImage(filename);
    f.getContentPane( ).add(
        new JScrollPane(new ImageComponent(image)));
    f.setVisible(true);
  }
}

And here's the ImageComponent. It waits for the image to load, using a MediaTracker, and sets its size to the size of the image. It also provides a paint( ) method to draw the image. This takes a single call to drawImage( ). The first argument is the image itself; the next two are the coordinates of the image relative to the ImageComponent; and the last is a reference to the ImageComponent itself (this), which serves as an image observer. (We'll discuss image observers in Chapter 18, Working with Images and Other Media; for the time being, take this on faith.)

//file: ImageComponent.java
import java.awt.*;
import javax.swing.*;

public class ImageComponent extends JComponent {
  Image image;
  Dimension size;

  public ImageComponent(Image image) {
    this.image = image;
    MediaTracker mt = new MediaTracker(this);
    mt.addImage(image, 0);
    try {
      mt.waitForAll( );
    } catch (InterruptedException e) {
      // error ...
    };
    size = new Dimension(image.getWidth(null), image.getHeight(null));
    setSize(size);
  }

  public void paint(Graphics g) {
    g.drawImage(image, 0, 0, this);
  }

  public Dimension getPreferredSize( ) {
    return size;
  }
}

Finally, ImageComponent provides a getPreferredSize( ) method, overriding the method it inherits from Component. This method simply returns the image's size, which is a Dimension object.
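The scrollbar display policies described earlier can also be passed straight to the JScrollPane constructor rather than accepting the defaults. A small sketch (the view component here is just an invented placeholder):

```java
import javax.swing.*;

public class PolicySketch {
    public static void main(String[] args) {
        JPanel view = new JPanel();  // stands in for any large component

        // always show the vertical scrollbar, never the horizontal one
        JScrollPane pane = new JScrollPane(view,
                JScrollPane.VERTICAL_SCROLLBAR_ALWAYS,
                JScrollPane.HORIZONTAL_SCROLLBAR_NEVER);

        System.out.println(pane.getVerticalScrollBarPolicy()
                == JScrollPane.VERTICAL_SCROLLBAR_ALWAYS);  // prints "true"
    }
}
```

The two-argument policy form is handy when the default AS_NEEDED behavior would let the layout jump around as scrollbars appear and disappear.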
When you're using JScrollPane, it's important for the object you're scrolling to provide a reliable indication of its size. Figure 14-6 shows the ScrollPaneFrame with the ImageComponent.

The JSplitPane Class

A split pane is a special container that holds two components, each in its own sub-pane. A splitter bar adjusts the sizes of the two sub-panes. In a document viewer, you could use a split pane to show a table of contents next to a full document.

The following example capitalizes on the ImageComponent class from the previous example. It displays two ImageComponents, wrapped in JScrollPanes, in either side of a JSplitPane. You can drag the splitter bar back and forth to adjust the sizes of the two contained components.

//file: SplitPaneFrame.java
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
import javax.swing.border.*;

public class SplitPaneFrame {
  public static void main(String[] args) {
    String fileOne = "Piazza di Spagna.jpg";
    String fileTwo = "L1-Light.jpg";
    if (args.length > 0)
      fileOne = args[0];
    if (args.length > 1)
      fileTwo = args[1];

    // create a JFrame to hold everything
    JFrame f = new JFrame("SplitPaneFrame");
    f.addWindowListener(new WindowAdapter( ) {
      public void windowClosing(WindowEvent we) { System.exit(0); }
    });
    f.setSize(300, 200);
    f.setLocation(200, 200);

    Image leftImage = Toolkit.getDefaultToolkit( ).getImage(fileOne);
    Component left = new JScrollPane(new ImageComponent(leftImage));
    Image rightImage = Toolkit.getDefaultToolkit( ).getImage(fileTwo);
    Component right = new JScrollPane(new ImageComponent(rightImage));

    JSplitPane split =
        new JSplitPane(JSplitPane.HORIZONTAL_SPLIT, left, right);
    split.setDividerLocation(100);
    f.getContentPane( ).add(split);
    f.setVisible(true);
  }
}

This example is shown in Figure 14-7.

The JTabbedPane Class

If you've ever dealt with the System control panel in Windows, you already know what a JTabbedPane is. It's a container with labeled tabs.
When you click on a tab, a new set of controls is shown in the body of the JTabbedPane. In Swing, JTabbedPane is simply a specialized container.

Each tab has a name. To add a tab to the JTabbedPane, simply call addTab( ). You'll need to specify the name of the tab as well as a component that supplies the tab's contents. Typically, it's a container holding other components. Even though the JTabbedPane only shows one set of components at a time, be aware that all the components on all the pages are in memory at one time. If you have components that hog processor time or memory, try to put them into some "sleep" state when they are not showing.

The following example shows how to create a JTabbedPane. It adds standard Swing components to a first tab, named Controls. The second tab is filled with an instance of ImageComponent, which was presented earlier in this chapter.

//file: TabbedPaneFrame.java
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
import javax.swing.border.*;

public class TabbedPaneFrame {
  public static void main(String[] args) {
    // create a JFrame to hold everything
    JFrame f = new JFrame("TabbedPaneFrame");
    f.addWindowListener(new WindowAdapter( ) {
      public void windowClosing(WindowEvent we) { System.exit(0); }
    });
    f.setSize(200, 200);
    f.setLocation(200, 200);

    JTabbedPane tabby = new JTabbedPane( );

    // create a controls pane
    JPanel controls = new JPanel( );
    controls.add(new JLabel("Service:"));
    JList list = new JList(
        new String[] { "Web server", "FTP server" });
    list.setBorder(BorderFactory.createEtchedBorder( ));
    controls.add(list);
    controls.add(new JButton("Start"));

    // create an image pane
    String filename = "Piazza di Spagna.jpg";
    Image image = Toolkit.getDefaultToolkit( ).getImage(filename);
    JComponent picture = new JScrollPane(new ImageComponent(image));

    tabby.addTab("Controls", controls);
    tabby.addTab("Picture", picture);

    f.getContentPane( ).add(tabby);
    f.setVisible(true);
  }
}

The code is not especially fancy, but the result is an
impressive-looking user interface. The first tab is a JPanel that contains some other components, including a JList with an etched border. The second tab simply contains an ImageComponent wrapped in a JScrollPane. The running example is shown in Figure 14-8.

Scrollbars and Sliders

JScrollPane is such a handy component that you may not ever need to use scrollbars by themselves. In fact, if you ever do find yourself using a scrollbar by itself, chances are you really want to use another component called a slider. There's not much point in describing the appearance and functionality of scrollbars and sliders. Instead, let's jump right in with an example that includes both components. Figure 14-9 shows a simple example with both a scrollbar and a slider. Here is the source code for this example:

//file: Slippery.java
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
import javax.swing.event.*;

public class Slippery extends JFrame {
  public Slippery( ) {
    super("Slippery v1.0");
    setSize(220, 160);
    setLocation(200, 200);

    Container content = getContentPane( );
    JPanel main = new JPanel(new GridLayout(2, 1));

    JPanel scrollBarPanel = new JPanel( );
    final JScrollBar scrollBar =
        new JScrollBar(JScrollBar.HORIZONTAL, 0, 48, 0, 255);
    int height = scrollBar.getPreferredSize( ).height;
    scrollBar.setPreferredSize(new Dimension(175, height));
    scrollBarPanel.add(scrollBar);
    main.add(scrollBarPanel);

    JPanel sliderPanel = new JPanel( );
    final JSlider slider =
        new JSlider(JSlider.HORIZONTAL, 0, 255, 128);
    slider.setMajorTickSpacing(48);
    slider.setMinorTickSpacing(16);
    slider.setPaintTicks(true);
    sliderPanel.add(slider);
    main.add(sliderPanel);

    content.add(main, BorderLayout.CENTER);
    final JLabel statusLabel = new JLabel("Welcome to Slippery v1.0");
    content.add(statusLabel, BorderLayout.SOUTH);

    // wire up the event handlers
    scrollBar.addAdjustmentListener(new AdjustmentListener( ) {
      public void adjustmentValueChanged(AdjustmentEvent e) {
        statusLabel.setText("JScrollBar's current value = "
            + scrollBar.getValue( ));
      }
    });
    slider.addChangeListener(new ChangeListener( ) {
      public void stateChanged(ChangeEvent e) {
        statusLabel.setText("JSlider's current value = "
            + slider.getValue( ));
      }
    });
  }

  public static void main(String[] args) {
    JFrame f = new Slippery( );
    f.addWindowListener(new WindowAdapter( ) {
      public void windowClosing(WindowEvent e) { System.exit(0); }
    });
    f.setVisible(true);
  }
}

All we've really done here is add a JScrollBar and a JSlider to our main window. If the user adjusts either of these components, the current value of the component is displayed in a JLabel at the bottom of the window.

The JScrollBar and JSlider are both created by specifying an orientation, either HORIZONTAL or VERTICAL. You can also specify the minimum and maximum values for the components, as well as the initial value. The JScrollBar supports one additional parameter, the extent. The extent simply refers to the range of values represented by the slider within the scrollbar. For example, in a scrollbar that runs from 0 to 255, an extent of 128 means that the slider will be half the width of the scrollable area of the scrollbar.

JSlider supports the idea of tick marks, which are lines drawn at certain values along the slider's length. Major tick marks are slightly larger than minor tick marks. To draw tick marks, just specify an interval for major and minor tick marks, and then paint the tick marks:

slider.setMajorTickSpacing(48);
slider.setMinorTickSpacing(16);
slider.setPaintTicks(true);

JSlider also supports labeling the ticks with text strings, using the setLabelTable( ) method.

Responding to events from the two components is straightforward. The JScrollBar sends out AdjustmentEvents every time something happens; the JSlider fires off ChangeEvents when its value changes. In our simple example, we display the new value of the changed component in the JLabel at the bottom of the window.

Dialogs

A dialog is another standard feature of user interfaces.
Dialogs are frequently used to present information to the user ("Your fruit salad is ready.") or to ask a question ("Shall I bring the car around?"). Dialogs are used so commonly in GUI applications that Swing includes a handy set of pre-built dialogs. These are accessible from static methods in the JOptionPane class. Many variations are possible; JOptionPane groups them into four basic types:

- message dialog - Displays a message to the user, usually accompanied by an OK button.
- confirmation dialog - Asks a question and displays answer buttons, usually Yes, No, and Cancel.
- input dialog - Asks the user to type in a string.
- option dialog - The most general type--you pass it your own components, which are displayed in the dialog.

A confirmation dialog is shown in Figure 14-10. Let's look at examples of each kind of dialog. The following code produces a message dialog:

JOptionPane.showMessageDialog(f, "You have mail.");

The first parameter to showMessageDialog( ) is the parent component (in this case f, an existing JFrame). The dialog will be centered on the parent component. If you pass null for the parent component, the dialog is centered on your screen. The dialogs that JOptionPane displays are modal, which means they block other input to your application while they are showing.

Here's a slightly fancier message dialog. We've specified a title for the dialog and a message type, which affects the icon that is displayed:

JOptionPane.showMessageDialog(f, "You are low on memory.",
    "Apocalyptic message", JOptionPane.WARNING_MESSAGE);

Here's how to display the confirmation dialog shown in Figure 14-10:

int result = JOptionPane.showConfirmDialog(null,
    "Do you want to remove Windows now?");

In this case, we've passed null for the parent component. Special values are returned from showConfirmDialog( ) to indicate which button was pressed. There's a full example below that shows how to use this return value. Sometimes you need to ask the user to type some input.
The following code puts up a dialog requesting the user's name:

String name = JOptionPane.showInputDialog(null,
    "Please enter your name.");

Whatever the user types is returned as a String, or null if the user presses the Cancel button.

The most general type of dialog is the option dialog. You supply an array of objects that you wish to be displayed; JOptionPane takes care of formatting them and displaying the dialog. The following example displays a text label, a JTextField, and a JPasswordField. (Text components are described in the next chapter.)

JTextField userField = new JTextField( );
JPasswordField passField = new JPasswordField( );
String message = "Please enter your user name and password.";
result = JOptionPane.showOptionDialog(f,
    new Object[] { message, userField, passField },
    "Login", JOptionPane.OK_CANCEL_OPTION,
    JOptionPane.QUESTION_MESSAGE, null, null, null);

We've also specified a dialog title ("Login") in the call to showOptionDialog( ). We want OK and Cancel buttons, so we pass OK_CANCEL_OPTION as the dialog type. The QUESTION_MESSAGE argument indicates we'd like to see the question mark icon. The last three items are optional: an Icon, an array of different choices, and a current selection. Since the icon parameter is null, a default is used. If the array of choices and the current selection parameters were not null, JOptionPane might try to display the choices in a list or combo box.
The following application includes all the examples we've covered:

    import javax.swing.*;

    public class ExerciseOptions {
        public static void main(String[] args) {
            JFrame f = new JFrame("ExerciseOptions v1.0");
            f.setSize(200, 200);
            f.setLocation(200, 200);
            f.setVisible(true);

            JOptionPane.showMessageDialog(f, "You have mail.");
            JOptionPane.showMessageDialog(f, "You are low on memory.",
                "Apocalyptic message", JOptionPane.WARNING_MESSAGE);

            int result = JOptionPane.showConfirmDialog(null,
                "Do you want to remove Windows now?");
            switch (result) {
                case JOptionPane.YES_OPTION: System.out.println("Yes"); break;
                case JOptionPane.NO_OPTION: System.out.println("No"); break;
                case JOptionPane.CANCEL_OPTION: System.out.println("Cancel"); break;
                case JOptionPane.CLOSED_OPTION: System.out.println("Closed"); break;
            }

            String name = JOptionPane.showInputDialog(null, "Please enter your name.");
            System.out.println(name);

            JTextField userField = new JTextField();
            JPasswordField passField = new JPasswordField();
            String message = "Please enter your user name and password.";
            result = JOptionPane.showOptionDialog(f,
                new Object[] { message, userField, passField }, "Login",
                JOptionPane.OK_CANCEL_OPTION, JOptionPane.QUESTION_MESSAGE,
                null, null, null);
            if (result == JOptionPane.OK_OPTION)
                System.out.println(userField.getText() + " "
                    + new String(passField.getPassword()));

            System.exit(0);
        }
    }

File Selection Dialog

A JFileChooser is a standard file-selection box. As with other Swing components, JFileChooser is implemented in pure Java, so it looks and acts the same on different platforms. Selecting files all day can be pretty boring without a greater purpose, so we'll exercise the JFileChooser in a mini-editor application. Editor provides a text area in which we can load and work with files. (The JFileChooser created by Editor is shown in Figure 14-11.)
We'll stop just shy of the capability to save and let you fill in the blanks (with a few caveats):

    import java.awt.*;
    import java.awt.event.*;
    import java.io.*;
    import javax.swing.*;

    public class Editor extends JFrame implements ActionListener {
        public static void main(String[] s) { new Editor(); }

        private JEditorPane textPane = new JEditorPane();

        public Editor() {
            super("Editor v1.0");
            addWindowListener(new WindowAdapter() {
                public void windowClosing(WindowEvent e) { System.exit(0); }
            });
            Container content = getContentPane();
            content.add(new JScrollPane(textPane), BorderLayout.CENTER);
            JMenu menu = new JMenu("File");
            menu.add(makeMenuItem("Open"));
            menu.add(makeMenuItem("Save"));
            menu.add(makeMenuItem("Quit"));
            JMenuBar menuBar = new JMenuBar();
            menuBar.add(menu);
            setJMenuBar(menuBar);
            setSize(300, 300);
            setLocation(200, 200);
            setVisible(true);
        }

        public void actionPerformed(ActionEvent e) {
            String command = e.getActionCommand();
            if (command.equals("Quit")) System.exit(0);
            else if (command.equals("Open")) loadFile();
            else if (command.equals("Save")) saveFile();
        }

        private void loadFile() {
            JFileChooser chooser = new JFileChooser();
            int result = chooser.showOpenDialog(this);
            if (result == JFileChooser.CANCEL_OPTION) return;
            try {
                File file = chooser.getSelectedFile();
                java.net.URL url = file.toURL();
                textPane.setPage(url);
            }
            catch (Exception e) {
                textPane.setText("Could not load file: " + e);
            }
        }

        private void saveFile() {
            JFileChooser chooser = new JFileChooser();
            chooser.showSaveDialog(this);
            // Save file data...
        }

        private JMenuItem makeMenuItem(String name) {
            JMenuItem m = new JMenuItem(name);
            m.addActionListener(this);
            return m;
        }
    }

Editor is a JFrame that lays itself out with a JEditorPane (which will be covered in the next chapter) and a pull-down menu. From the pull-down File menu, we can Open, Save, or Quit. The actionPerformed() method catches the events associated with these menu selections and takes the appropriate action.
The interesting parts of Editor are the private methods loadFile() and saveFile(). loadFile() creates a new JFileChooser and calls its showOpenDialog() method. A JFileChooser does its work when the showOpenDialog() method is called. This method blocks the caller until the dialog completes its job, at which time the file chooser disappears. After that, we can retrieve the designated file with the getSelectedFile() method. In loadFile(), we convert the selected File to a URL and pass it to the JEditorPane, which displays the selected file. As you'll learn in the next chapter, JEditorPane can display HTML and RTF files. You can fill out the unfinished saveFile() method if you wish, but it would be prudent to add the standard safety precautions. For example, you could use one of the confirmation dialogs we just looked at to prompt the user before overwriting an existing file.

The Color Chooser

Swing is chock full of goodies. JColorChooser is yet another ready-made dialog supplied with Swing; it allows your users to choose colors. The following very brief example shows how easy it is to use JColorChooser:

    import java.awt.*;
    import java.awt.event.*;
    import javax.swing.*;

    public class LocalColor {
        public static void main(String[] args) {
            final JFrame f = new JFrame("LocalColor v1.0");
            f.addWindowListener(new WindowAdapter() {
                public void windowClosing(WindowEvent e) { System.exit(0); }
            });
            f.setSize(200, 200);
            f.setLocation(200, 200);
            final Container content = f.getContentPane();
            content.setLayout(new GridBagLayout());
            JButton button = new JButton("Change color...");
            content.add(button);
            button.addActionListener(new ActionListener() {
                public void actionPerformed(ActionEvent e) {
                    Color c = JColorChooser.showDialog(f, "Choose a color",
                        content.getBackground());
                    if (c != null) content.setBackground(c);
                }
            });
            f.setVisible(true);
        }
    }

This example shows a frame window with a single button. When you click on the button, a color chooser pops up.
After you select a color, it becomes the background color of the frame window. Basically all we have to do is call JColorChooser's static showDialog() method. In this example, we've specified a parent component, a dialog title, and an initial color value. But you can get away with just specifying a parent component. Whatever color the user chooses is returned; if the user presses the Cancel button, null is returned.

1. Like the text of JButtons and JLabels, menu labels can contain simple HTML.

Back to: Learning Java © 2001, O'Reilly & Associates, Inc. webmaster@oreilly.com
http://oreilly.com/catalog/learnjava/chapter/ch14.html
Hi guys, I really need some help here. I am not a developer, but in the last year I developed with a friend an algorithm in Python "that works like a trading system". This algorithm consists of three different files (.py), all linked together. But after one year and thousands of lines of code, developing with Spyder is becoming a pain in the ass: the IDE is fat and slow, the editor lacks useful features that PyCharm has, and the general UX is too much to handle, so I started looking for something else. And I discovered Atom. It looks awesome, it seems fast as **ck, so I installed it and tried to run our code. Remember when I said I am not a developer? Let's begin with the noob questions:

- How can I run my Python code from the Atom editor in a console, like in Spyder? I mean, we like to select portions of code and run those specific lines separately; is that possible in Atom?
- I tried to install the Hydrogen package but Atom seems unable to complete the process, so I installed Script, but I don't understand how it works. For example, when I run (cmd + i) import pandas it works, or at least it seems like that, but when I run database = pandas.DataFrame(xxx) the "console" (?) that pops up below returns:

    Traceback (most recent call last):
      File "<string>", line 47, in <module>
    NameError: name 'pandas' is not defined
    [Finished in 0.054s]

  How is it possible that pandas is not defined if I just imported it?
- Can someone explain to me what I need to replicate the Spyder IDE with Atom?

Thanks and sorry for the boring questions. Have a nice day and good work!
https://discuss.atom.io/t/from-anaconda-spyder-ide-to-atom-how-pro-noob-here/26839
Homework help

I'm having trouble with this problem: The Maclaurin series for arctan(x) is a formula which allows us to compute an approximation to arctan(x) as a polynomial in x. The formula is:

    arctan(x) = x - x^3/3 + x^5/5 - x^7/7 + x^9/9 - x^11/11 + . . .

Write a method called calculateArctan in the class MyMath. The method reads in a double x and a positive integer k, and prints out the partial sum of the first k terms in this series. Write a main method to test the method calculateArctan.

Sample output:

    Enter x: 0.5
    Enter integer k: 3
    The partial sum from the first k terms: 0.4645833333333333
    The arctan of 0.5 : 0.4636476090008061

I started it out and this is all I got:

    import java.util.Scanner;

    public class MyMath {
        public void calculateArctan() {
            double x;
            double sum;
            double z;
            int i = 1;
            Scanner input = new Scanner(System.in);
            System.out.println("input x");
            x = input.nextDouble();
            double k;
            System.out.println("input k");
            z = 0;
            sum = 0;
            while (i <= k) {
                {
                    sum = sum + Math.pow(x, i) / i;
                    if (i % 2 == 0)
                        sum = sum - Math.pow(x, i) / i;
                    z = z + 1;
                }
            } // end while
        }

        public static void main(String[] args) {
            MyMath m;
            m = new MyMath();
            m.calculateArctan();
        }
    } // end method

There's a lot of errors, and I'm having a lot of trouble with it, so PLEASE HELP.
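For comparison, here is a working sketch of the method the assignment asks for (a rewrite, not the poster's code: it reads k as an integer, uses only the odd powers 1, 3, 5, ..., and alternates the sign of each term):

```java
import java.util.Scanner;

public class MyMath {
    // Partial sum of the first k terms of the Maclaurin series:
    // arctan(x) ~= x - x^3/3 + x^5/5 - x^7/7 + ...
    public static double calculateArctan(double x, int k) {
        double sum = 0.0;
        for (int n = 0; n < k; n++) {
            int exponent = 2 * n + 1;                 // odd powers: 1, 3, 5, ...
            double term = Math.pow(x, exponent) / exponent;
            sum += (n % 2 == 0) ? term : -term;       // alternating signs
        }
        return sum;
    }

    public static void main(String[] args) {
        Scanner input = new Scanner(System.in);
        System.out.print("Enter x: ");
        double x = input.nextDouble();
        System.out.print("Enter integer k: ");
        int k = input.nextInt();
        System.out.println("The partial sum from the first k terms: "
            + calculateArctan(x, k));
        System.out.println("The arctan of " + x + " : " + Math.atan(x));
    }
}
```

With x = 0.5 and k = 3 this reproduces the sample's partial sum, 0.46458333... (0.5 - 0.125/3 + 0.03125/5).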
https://www.daniweb.com/programming/software-development/threads/229175/please-help
Think of Akamai as a very intelligent, caching, reverse proxy server which sits in front of your SharePoint farm. External user requests go to Akamai servers first. If Akamai has the content cached, it serves the content directly back to the client; otherwise Akamai requests the content from your SharePoint farm and then sends it to the client. Akamai does intelligent caching; that is, it can cache static portions of a page, like images and stylesheets, while still getting the dynamic parts of the page from your SharePoint server. In any case, the client never directly hits your SharePoint farm. Akamai is much more than just a reverse proxy server. Akamai has many farms of servers (at last report, more than 20,000 servers) distributed in 70 countries. Akamai has a dynamic mapping system, which uses heuristics and historical network performance data to route the client request to the optimal Akamai farm. Your SharePoint farm is referred to as the "origin farm" in Akamai literature. The basic content flow is:

1. The user's browser makes a request, which is routed to the optimal Akamai farm.
2. The Akamai edge server checks its cache and requests any non-cached content from the origin (SharePoint) farm.
3. The origin farm generates the non-cached or dynamic content and responds.
4. Akamai combines the origin's response with locally cached content.
5. The client browser renders Akamai's response.

Akamai is the only external user your origin farm will see. Because of this, blob and output caching are usually of limited benefit when using Akamai. You contract with Akamai for its services. Akamai will assign a representative who will assist you in setting up the origin farm to communicate with Akamai's edge servers, and in configuring the Akamai edge server to meet your application's needs. Akamai provides training and support as needed. You should plan on several weeks to get everything working. The high-level steps are:
a. Whether to use SSL.
b.
The origin farm URL and port number, if non-standard.
c. The cache key format, and whether to ignore case when comparing cache keys.
d. The compression to use when communicating with the origin farm.
e. Time to Live (TTL) rules, which determine what types of content Akamai should cache, and for how long.
f. Prefetch rules.
4. Testing the Akamai configuration. Akamai has extensive reports to help you identify and tune page performance by adjusting the caching rules and other edge server settings.
5. Go live. You can access the Akamai portal to periodically review performance reports, so you can adjust the edge server configuration as needed.

Akamai is most beneficial in content publishing scenarios; that is, where most users are anonymous and the content is mostly static. This scenario makes the caching provided by Akamai most effective. This translates to a SharePoint publishing portal. Since edge server cache TTL should be as long as possible, there needs to be a way to notify the edge server when a new page version has been published, thereby forcing the Akamai-cached version of the page to be refreshed. Akamai provides the Content Control Utility (CCU) web service for this purpose. The CCU is a SOAP web service that allows you to specify the refreshing of specific cached objects, or to remove specific objects. The CCU provides the option of using invalidation-based or removal-based refreshing. Requests are propagated through the Akamai network, and most removals are completed within 10 minutes of the request. One limit is that files submitted for CCU requests should contain no more than about 100 URLs per SOAP request. Your Akamai representative will give you the end-point URL for this web service, and a user name and password required for authentication to the web service. A typical SharePoint configuration is to have an Authoring farm where content pages are created and edited.
Content deployment jobs periodically push new page versions to the Public farm, which is the origin farm. To automate the CCU web service calls, you can install event handlers in the Public farm. The event handlers call the CCU web service for new, changed, and deleted pages. The architecture might look like the following diagram. Event receivers are installed on all libraries and lists which contain content that can be changed by authors. This includes modifications by content deployment jobs, or manual copying/deleting of assets in the Public farm. The asynchronous (after) events are used to minimize performance impacts. The event handler is installed in the Global Assembly Cache (GAC). This allows it to be called from any site in the Public farm. The event handler packages the URLs and then sends them to Akamai's CCU web service. These events are captured by the event handlers:

- ItemDeleted
- ItemUpdated

By default these libraries and lists in the public portal root site collection, and every subsite, should have event receivers installed. Additional lists and libraries may be added as needed:

- Pages Library
- Site Collection Image Library
- Site Collection Documents Library
- Style Library
- Images Library
- Reusable Content List

A SOAP request is formatted according to Akamai's Content_Control_Interfaces.pdf document. The Akamai user name and password are stored in SharePoint's single sign-on repository for protection. SharePoint object model calls are used to retrieve the user name and password to make the web service call. The invalidate action is specified to minimize impact if the cache does not truly need to be updated. The response code is checked to determine if a retry is necessary. It is most effective to call the CCU web service asynchronously, to prevent excessive blocking if the CCU web service is slow in responding. The async call might look like this picture.
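One detail worth making concrete is the roughly-100-URLs-per-request limit. A sketch (in Java for illustration; the actual event handler would run in SharePoint's .NET environment, and the CCU SOAP envelope and endpoint come from Akamai's documentation, so the submit step is omitted) of splitting changed-page URLs into CCU-sized batches:

```java
import java.util.ArrayList;
import java.util.List;

public class CcuBatching {
    // Akamai's documented guideline: about 100 URLs per CCU SOAP request.
    static final int MAX_URLS_PER_REQUEST = 100;

    // Split the changed-page URLs into batches no larger than the CCU limit.
    static List<List<String>> toBatches(List<String> changedUrls) {
        List<List<String>> batches = new ArrayList<>();
        for (int i = 0; i < changedUrls.size(); i += MAX_URLS_PER_REQUEST) {
            int end = Math.min(i + MAX_URLS_PER_REQUEST, changedUrls.size());
            batches.add(new ArrayList<>(changedUrls.subList(i, end)));
        }
        return batches;
    }
}
```

Each batch would then be submitted as one invalidation request, with the response code checked per batch to decide on retries.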
The async response handler would look like this picture: Akamai can provide a huge performance boost for public portals serving global audiences. Most of the effort to set up Akamai is transparent to SharePoint. The exception is the need to forcefully update the Akamai cache when new content is published. Cache refreshing can be forced by calling the CCU web service provided by Akamai.

Total response time is composed of three major components: server processing time, network transmission time, and client rendering time. Expressed as a formula: total response time = server processing time + network transmission time + client rendering time. The network transmission time can be a major component for remote users accessing SharePoint over WAN links. Reducing the number of bytes transmitted can reduce the network time. IIS compression can accomplish this reduction in the number of bytes transmitted. IIS compression is highly configurable. Compression can be scoped to the global, site, or site element level. The compression level is a number between 0 and 10, where 10 is the greatest compression. More CPU resources are required for higher compression levels. When SharePoint is installed, IIS is configured to compress both static and dynamic files. By default, static compression is on for the file types HTM, HTML, and TXT at level 10. By default, dynamic compression is on for the file types ASP and EXE at level 0. Compressed static responses are cached to disk. Once the compressed page is cached, there is no further CPU overhead until the cache expires. Static compression can have dramatic effects; for example, core.js is 257 KB on disk, but IIS static compression reduces it to 54 KB. Dynamic compression requires trial-and-error testing to find the optimal settings. Dynamic compression can affect CPU resources because IIS does not cache compressed dynamic responses; consider enabling it only on servers with underutilized CPUs. If the web site generates a large volume of dynamic content, consider whether the additional processing cost of HTTP compression can be reasonably afforded. If the % Processor Time counter is already 80 percent or higher, enabling HTTP compression is not recommended.
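To put the core.js figures above in perspective, the static compression reduction is nearly 79 percent; the arithmetic:

```java
public class CompressionSavings {
    // Percent size reduction from compressing a file.
    static double percentSaved(double originalKb, double compressedKb) {
        return 100.0 * (originalKb - compressedKb) / originalKb;
    }

    public static void main(String[] args) {
        // core.js figures quoted above: 257 KB on disk, 54 KB after static compression.
        System.out.printf("core.js: %.1f%% smaller%n", percentSaved(257, 54));
    }
}
```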
To evaluate how much of your processor is typically being used, follow these steps: use the performance logs to determine the sweet spot at which Network Interface Bytes is reduced the most while % Processor Time remains below 80%. Both static and dynamic compression can be configured at multiple scopes by scripting or by using the Metabase GUI tool. The script C:\Inetpub\AdminScripts\adsutil.vbs is the recommended approach for configuration, because automated scripts can be applied uniformly to all web servers. There are two types of compression: gzip and deflate. Gzip is actually a superset of deflate. Both should be configured the same way (same types of files and same level of compression), so browsers using either compression method get similar results. These are the recommended starting values for gzip and deflate compression; the values can be adjusted based upon performance counter captures, to optimize the network bytes transmitted while keeping the CPU load within reasonable bounds.

Open a command prompt.
Change directory to C:\Inetpub\AdminScripts.
Capture the current settings for a historical record.
Enable a setting; for example, turn on global static compression if it is currently off:
Add CSS and JS file types to the static compressed file types. CSS (Cascading Stylesheets) and JS (JavaScript) will provide the most performance gains with SharePoint.
Add AXD, ASMX, and ASPX to the dynamic file list.

Tip: Don't compress JPG/JPEG images (they are already compressed). Consider testing the impact of varying compression levels in a laboratory environment, closely monitoring CPU utilization and potential impact to the web servers. Typically a compression level between 7 and 9 provides optimum performance vs. CPU load in most circumstances.

Tip: Start with the dynamic compression level set to 4, and then try increasing it to see the CPU impact. In some cases you may wish to enable or disable compression at only the site or site element level, as opposed to the global level.
Use the path to the web site in the adsutil command line. You can determine the site path using the IIS Metabase Explorer application available in the IIS 6.0 Resource Kit, or by enumerating sites using the adsutil.vbs script:

    >cscript adsutil.vbs enum w3svc
    Microsoft (R) Windows Script Host Version 5.6
    Copyright (C) Microsoft Corporation 1996-2001. All rights reserved.
    KeyType            : (STRING) "IIsWebService"
    MaxConnections     : (INTEGER) 4294967295
    AnonymousUserName  : (STRING) "IUSR_EPGOPSR2BASE"
    AnonymousUserPass  : (STRING) "**********"
    ConnectionTimeout  : (INTEGER) 120
    AllowKeepAlive     : (BOOLEAN) True
    DefaultDoc         : (STRING) "Default.htm,Default.asp,index.htm"
    HttpCustomHeaders  : (LIST) (1 Items) "X-Powered-By: ASP.NET"
    (many lines removed here)
    [/w3svc/1513483211]
    [/w3svc/1669737538]
    [/w3svc/1720207907]
    [/w3svc/2004785039]
    [/w3svc/809964160]
    [/w3svc/941433650]
    [/w3svc/AppPools]
    [/w3svc/Filters]
    [/w3svc/Info]

You can match the w3svc web site numbers to the web site name using the IIS Manager MMC. In the following picture we see the collaboration web site is web site ID 809964160. If we assume the collaboration web site is only accessed by users on the local LAN, where network bandwidth is not an issue, we can disable dynamic compression for just this web site by using the web site metabase path (w3svc/809964160/root/) to set the DoDynamicCompression parameter. After completing the configuration changes, always restart IIS from a command prompt.

With static compression set to 10 and dynamic compression set to 4, Fiddler captured these statistics for a publishing site home page. The round trip cost and elapsed time values are estimates based upon typical network latency. The second set of results is for the same page with static and dynamic compression disabled.
Note the following differences:

    Description                     With Compression                     Without Compression
    Bytes received                  218,719                              772,669
    Response bytes by content type  image/gif: 30,479                    image/gif: 30,565
                                    text/css: 17,003                     text/css: 103,482
                                    ~headers: 18,432                     ~headers: 15,093
                                    image/jpeg: 13,634                   text/html: 72,976
                                    text/html: 28,335                    application/x-javascript: 536,919
                                    application/x-javascript: 110,836
    Japan / Northern Europe (DSL)   Round trip cost: 6.45s               Round trip cost: 5.85s
                                    Elapsed time: 14.45s                 Elapsed time: 31.85s
    China (DSL)                     Round trip cost: 19.35s              Round trip cost: 17.55s
                                    Elapsed time: 27.35s                 Elapsed time: 43.55s

References:
- Using Granular Compression in IIS 6.0 Webcast
- Metabase Property Reference (IIS 6.0)
- Using HTTP Compression for Faster Downloads (IIS 6.0)
- HTTP Compression, Internet Information Services 6.0, and SharePoint Products and Technologies

The Problem

Before talking about SharePoint, it is necessary to talk about the Windows operating system. Security authorization is based on Access Control Lists, or ACLs. An ACL is a list of access control entries (ACEs). Each ACE identifies a security principal, such as a user or AD group, and the access rights allowed, denied, or audited for that security principal. The original designers of the Windows OS set a maximum size of 64 KB for ACLs, which at the time seemed much more than would be needed. An obvious question is how many users or AD groups can an ACL contain? The answer is, "It depends". Since an ACL is composed of ACEs, the question becomes "How many ACEs can an ACL contain?" An ACE is a variable-length structure. The variable part is the user or group's security ID, or SID. The security identifier (SID) structure is a variable-length structure used to uniquely identify users or groups. The variability is driven by the domain topology in which the SID was created and assigned. A deeply nested forest domain structure will result in longer SIDs than a flat domain structure, for example.
Given this variability, the commonly heard numbers of ACEs in an ACL range from 1,000 to 2,000. Up to this point, we have been talking about operating system ACLs. These types of ACLs can impact search crawling (discussed later), but not authorization within SharePoint itself. SharePoint has its own type of ACL it uses for authorization within SharePoint. SharePoint uses its ACLs to make access decisions, as well as UI trimming decisions. SharePoint ACEs are optimized for SharePoint's use cases; hence the format is simpler than operating system ACEs. In SharePoint, every user or AD group explicitly added to a site collection gets an entry in the site collection's UserInfo table, is assigned a principal ID (PID), and gets an entry in the Perms table. The Perms table contains SharePoint's ACLs, which are composed of SharePoint ACEs. A SharePoint ACE consists of the PID and an 8-byte permissions mask. So if inheritance is NOT broken, every user or NT group given access to a site collection has one row in the UserInfo table and one row in the Perms table. If a user who receives access indirectly through an NT group accesses the site, a row is added to the UserInfo table to contain that user's display name, email address, and, depending upon the operations that user performs on the site, a copy of their operating system SID; but no new row is added to the Perms table, since the original row for the user's group is sufficient to define the SharePoint permissions for that user. The SharePoint search crawler keeps a copy of an object's ACL in the search database to enable security trimming at search query time. To keep things simple and efficient for query processing, the search crawler translates SharePoint ACLs into operating system ACLs when building the index. Herein lies a problem. If the translated SharePoint ACL generates an operating system ACL larger than 64 KB, the search crawl throws an error, "parameter is incorrect".
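The 1,000-2,000 figure can be sanity-checked with back-of-the-envelope arithmetic. The byte sizes below are illustrative assumptions, not exact Windows structure layouts: each ACE is roughly a small fixed header plus access mask, followed by the variable-length SID.

```java
public class AclEstimate {
    static final int ACL_MAX_BYTES = 64 * 1024; // the 64 KB ACL size limit

    // Rough ACE size: ~8 bytes of fixed header and access mask (assumed),
    // plus the variable-length SID.
    static int maxAces(int sidBytes) {
        return ACL_MAX_BYTES / (8 + sidBytes);
    }

    public static void main(String[] args) {
        System.out.println(maxAces(28)); // shorter SID, flat domain structure
        System.out.println(maxAces(60)); // longer SID, deeply nested domains
    }
}
```

With a ~28-byte SID this gives about 1,820 ACEs, and with a ~60-byte SID about 960, bracketing the commonly quoted 1,000-2,000 range.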
If this error occurs at the site collection level, none of that site collection’s content will be indexed. This results in a situation in which the site collection itself works fine, but none of the site collection content can be searched. One way is to explicitly add enough individual users or AD groups to a site collection’s membership so that the mapping of the SharePoint ACL to operating system ACL during search crawling results in an operating system ACL greater than 64K. The first section of this post says this will happen between 1,000 and 2,000 unique users or AD groups, depending upon the size of each user or group’s SID. While unlikely, this is a possibility. The other, more likely way, is through broken permission inheritance within a site collection. Let’s walk through a common scenario. A site collection is created for a large community of users; for example, a sales and marketing group with 5,000 members. Using best practice, we create a Sales and Marketing AD group, add the 5,000 users either directly or by nesting already existing AD groups. We then grant this AD group access to the site collection. The end result is one row in the UserInfo table and one row in the Perms table. Both the SharePoint ACL and mapped operating system ACL are less than 100 bytes long. Life is good; search indexing is happy and efficient. We then start loading sales and marketing documents into an array of subsites and document libraries. It becomes apparent one subsite is for marketing research. We only want 25 people to have access. What do we do? We break inheritance with the site collection and give those 25 people explicit access to the marketing research subsite. We now have at least 25 rows in the UserInfo table, but more importantly, 25 additional rows in the Perms table. Let’s say this scenario repeats itself for subsites divided by product line and country/region; since it is important to keep product strategy restricted. 
Maybe we break inheritance for 10 other subsites of 25 users each. The net result is an additional 250 rows in the Perms table. Now comes the killer scenario. We add another AD group to the site collection membership which includes all full-time employees (FTEs), containing a total of 30,000 members, either directly or through nested AD group membership. This group has read access to all sales and marketing content except the previously mentioned subsites with broken inheritance. This action by itself only adds one more row to the Perms table, which is not a problem; however, the plot thickens… It is determined that certain FTEs in manufacturing and other support organizations need contributor access to specific documents scattered throughout the sales and marketing site collection. One by one, inheritance is broken on specific documents, and individual FTE users are granted contributor access to just those documents. Each time this happens, rows are added to the Perms table. Over time, perhaps 5 unique FTE users are granted broken-inheritance access to 300 different individual documents, adding 1,500 rows to the Perms table. Broken inheritance has become a growing cancer. Eventually the Perms table reaches 1,800 rows. The Perms row for the site collection itself now contains 1,800+ ACEs. The incremental crawl that night starts throwing "parameter is incorrect" errors while crawling the site collection. Why? Because the operating system ACL the search crawler is trying to generate from the 1,800+ ACEs in the site collection Perms table exceeds the 64 KB ACL maximum size. Plan ahead when designing the information architecture (IA). Besides the number of AD groups and users explicitly granted membership in your site collections, consider what would happen if you need to start breaking inheritance for granular access control at the list, library, or individual document level. Several customers have already hit this problem.
I know of one major manufacturer whose Extranet supply chain site is based on one site collection. As more and more broken inheritance was applied to individual documents to keep each supplier's information private to that supplier, the site collection became unsearchable. The only after-the-fact solution is painful refactoring of the site collection structure into more granularly designed site collections, whose usage is carefully aligned with the intended user communities.

I recently encountered a mystery. I was testing a newly written utility that called the object model method Content.ContentSources to get a list of the search content sources. The utility functioned perfectly on my single-server virtual development farm. When it came time to deploy to a medium farm, the utility was installed on the Central Administration server, which was separate from the Index server. The utility stopped working! (This is why you should always test in a multiple-server environment.) Now I had to figure out what was causing the failure. The root cause was Office Server Web Service security. When the utility ran on a single server, SharePoint automatically made direct object model calls (known as short-circuiting the web service calls, to optimize performance), so the SearchAdmin.asmx web service was not called. When the utility ran on a separate server, SharePoint under the covers converted the object model call into a web service call to the Index server. The first problem was quickly discovered in the Application Event Log: there was an SSL failure reported. The Office Server Web Service by default uses SSL to secure intra-farm communications. A search of KB articles found KB962928. This article matched my scenario. The .NET Framework 3.5 SP1 had just been installed on the farm. This installation can corrupt the SSL certificate used by the Office Server Web Service. You can refer to the KB article for details.
In summary, the fix is to run SelfSSL.exe, found in the IIS 6.0 Resource Kit. This must be done on all servers in the farm. To run SelfSSL.exe you need to know the Office Server Web Service identifier, which you can find in the IIS Manager MMC. It is 1720207907, as seen in the following picture. Run the command as explained in the KB article on every farm server, using the correct identifier value. A sample command session follows:

    C:\Program Files\IIS Resources\SelfSSL>net stop osearch
    The Office SharePoint Server Search service is stopping.
    The Office SharePoint Server Search service was stopped successfully.

    C:\Program Files\IIS Resources\SelfSSL>selfssl /s:1720207907 /v:1000
    Microsoft (R) SelfSSL Version 1.0
    Do you want to replace the SSL settings for site 1720207907 (Y/N)?y
    The self signed certificate was successfully assigned to site 1720207907.

    C:\Program Files\IIS Resources\SelfSSL>

Unfortunately, the next attempt at testing the utility also failed. This time the Application Event Log on the Index server had an entry 1314, from ASP.NET. The web service call was not even getting to SharePoint; it was being refused by the ASP.NET handler in IIS. (I highly recommend reading Inside SharePoint Enterprise Project Management with SharePoint; this article goes into great detail on how SharePoint web service authentication/authorization works.) There is a wealth of information in the event description. Notice the message says "URL authorization failed". We can also see the requested URL was "/SharedServices1/Search/SearchAdmin.asmx", and the requesting user identity was "LITWAREINC\Administrator". But wait, the requesting user was a farm administrator. How could a farm administrator be denied access to a SharePoint URL? This sounds like a web.config issue, not a SharePoint issue.

    Event code: 4007
    Event message: URL authorization failed for the request.
Event time: 4/11/2009 4:28:24 PM
Event time (UTC): 4/11/2009 9:28:24 PM
Event ID: 418e2b58d47e4e0e81c213f24f64d642
Event sequence: 2
Event occurrence: 1
Event detail code: 0

Application information:
Application domain: /LM/W3SVC/1720207907/root/SharedServices1-1-128839589029265968
Trust level: Full
Application Virtual Path: /SharedServices1
Application Path: C:\Program Files\Microsoft Office Servers\12.0\WebServices\Shared\
Machine name: MOSS

Process information:
Process ID: 4448
Process name: w3wp.exe
Account name: LITWAREINC\sspservice

Request information:
Request URL:
Request path: /SharedServices1/Search/SearchAdmin.asmx
User host address: 192.168.150.2
User: LITWAREINC\Administrator
Is authenticated: True
Authentication Type: Negotiate
Thread account name: LITWAREINC\sspservice

The web.config file on the Index server is listed below. Unimportant sections have been removed for brevity. The key lines are the authorizations. The first authorization, at the root level, looks fine. WSS_ADMIN_WPG membership includes all farm administrators, including the requesting account "LITWAREINC\Administrator", so this is not the problem. The second authorization, for the specific location "SharedServices1", looks more interesting. This authorization lists only 2 accounts and no groups. Neither account is "LITWAREINC\Administrator", which is why ASP.NET denied access to the web service call.

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <configSections>
    . . .
  </configSections>
  <system.web>
    <authorization>
      <allow roles=".\WSS_ADMIN_WPG" />
      <deny users="*" />
    </authorization>
    <webServices>
      . . .
    </webServices>
  </system.web>
  <location path="SharedServices1" inheritInChildApplications="true">
    <microsoft.office.server>
    </microsoft.office.server>
    <system.web>
      <authorization>
        <allow users="litwareinc\SPAppPool,litwareinc\SPFarmAdmin" />
      </authorization>
    </system.web>
  </location>
</configuration>

So now the question is: where do these account names come from?
A little digging through documentation and blogs revealed the answer. These are the application pool accounts for SharePoint web application sites. Every time you create a new application pool through Central Administration when creating or extending a web application, the application pool identity is added to this list. In this farm there are 2 application pool identities, SPAppPool (for non-administrative application pools) and SPFarmAdmin (for the Central Admin and SSP application pools). It is important to notice that since litwareinc\administrator is not used as an application pool identity, this account does not appear in the authorized users list.

Now that we know what the problem is, how do we get litwareinc\administrator into the authorized list? We cannot directly edit web.config. Supportability questions aside, SharePoint automatically rewrites this list every minute; even if we manually changed web.config, SharePoint would remove our change. Since the utility had to run as litwareinc\administrator, the only way to make this happen was to create a dummy web application through Central Administration, specifying that a new application pool be created with the identity litwareinc\administrator. Since we are not really using this application pool, we can stop the dummy web application and the application pool to minimize the performance impact. After this, SharePoint added litwareinc\administrator to the web.config authorized list. The utility now works perfectly from any server in the farm.

The takeaway of this story: if you are writing code that has to call the Office Server Web Service, impersonate a SharePoint application pool account when making the web service call, realizing that some object model method calls are converted under the covers to web service calls, depending upon which server in the farm your code is running on.

The Content Editor Web Part (CEWP) has a Rich Text Editor.
This allows non-technical authors to generate custom content using a web part. This is a great feature for team collaboration sites, but some customers also use the CEWP on publishing site pages. Avoiding the discussion of web parts versus field controls in published pages, there are issues using the CEWP to create content on published pages. A Rich Text Editor sounds like a great feature; however, there is a problem with content deployment, described by Andrew Connell. (There is a related problem for sites that can be accessed through multiple AAMs, as described by Maxime Bombardier.)

The basic problem is that the Rich Text Editor forces all URLs to be absolute. If you look at the HTML generated by the above HTML editor, you will see: As Andrew Connell points out: If you have a link to in a CEWP on a page and then do content deployment to, the link will be pointing back to the staging site (which will... or should... be inaccessible). The absolute URL is not fixed up automatically during content deployment, so the target page will still point to the original URL location, not a location within the target farm. This means the absolute URL must be corrected in the target farm itself. In the preceding example, we want the target farm HTML to be a relative URL that points to locations within the target farm:

Maxime Bombardier's control adapter strategy can be leveraged. With slight modification, Maxime's code can be changed to convert absolute URLs to relative URLs, so they effectively point to the appropriate location in the target farm. How do we convert an absolute URL to a relative URL? Looking at the above example, we need to strip out the host portion of the URL; that is, we need to remove "" The difference between the content deployment fix we need and Maxime's AAM fix is what gets stripped. Maxime's AAM fix strips the AAM host names of the current web application.
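The host-stripping step can be illustrated with a short sketch (Python here for brevity; the actual adapter below is C#). The host list and HTML snippet are invented examples. Note the longest-first ordering, which the adapter code also uses, so that a more specific URL is replaced before any shorter URL that is a prefix of it:

```python
def make_relative(html, hosts):
    """Replace each absolute host prefix with '/' to yield relative URLs."""
    # Normalize: ensure each host is '/'-terminated for consistent replacement.
    normalized = [h if h.endswith("/") else h + "/" for h in hosts]
    # Process the longest (most specific) URLs first, so e.g. an AAM entry
    # with a path is stripped before a bare host name that is its prefix.
    for host in sorted(normalized, key=len, reverse=True):
        html = html.replace(host, "/")
    return html

html = '<a href="http://authoring.litwareinc.com/Pages/news.aspx">News</a>'
print(make_relative(html, ["http://authoring.litwareinc.com"]))
# -> <a href="/Pages/news.aspx">News</a>
```

A simple string replace is all the adapter does as well; the real work is deciding which host names to strip, which is what the Config Store setting provides.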
The content deployment fix needs to strip the AAM host names of the authoring web application, which the target farm does not know. In other words, we need a way to tell the control adapter which host names to strip. The solution is to make the host names configurable. This solution uses the SharePoint Config Store solution developed by Chris O'Brien. So, the primary difference between this control adapter and Maxime's control adapter is the GetAlternativeUrls method. This method's logic is changed to read AAMs from the Config Store list, rather than using the object model to get the AAMs from the current web application.

using COB.SharePoint.Utilities;
using System;
using System.Collections.Generic;
using System.IO;
using System.Text;
using System.Web;
using System.Web.UI;
using System.Web.UI.Adapters;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Administration;

namespace Litware.SharePoint.WebPartPages.CewpControlAdapter
{
    public class ContentEditorWebPartAdapter : ControlAdapter
    {
        protected override void Render(System.Web.UI.HtmlTextWriter writer)
        {
            StringBuilder sb = new StringBuilder();

            // Allow the CEWP to render itself into a string that we provide
            HtmlTextWriter htw = new HtmlTextWriter(new StringWriter(sb));
            base.Render(htw);
            string output = sb.ToString();

            // Wrap the adapter rendering logic in a try-catch so any error
            // in the adapter won't prevent the CEWP from rendering.
            try
            {
                // Now we post-process the CEWP rendering to convert absolute URLs to relative URLs
                string[] alternativeUrls = GetAlternativeUrls();
                if (alternativeUrls != null)
                {
                    foreach (string replaceableUrl in alternativeUrls)
                    {
                        // Do a simple String.Replace() of the alternativeUrls to generate a relative URL
                        string searchFor = replaceableUrl;
                        string replaceWith = "/";
                        output = output.Replace(searchFor, replaceWith);
                    }
                }
            }
            catch (Exception ex)
            {
                // log exception here
            }

            // Finally, write the rendering to the page
            writer.Write(output);
        }

        private string[] GetAlternativeUrls()
        {
            string[] alternativeUrls = null;
            try
            {
                alternativeUrls = (string[])HttpContext.Current.Cache["alternativeUrls"];
                if (alternativeUrls == null)
                {
                    string temp = String.Empty;

                    // Get the URLs to be replaced from the config store
                    temp = ConfigStore.GetValue("CEWP Adapter", "AAMs");
                    if (String.IsNullOrEmpty(temp))
                        throw new ArgumentNullException("AAMs config store parameter is null or empty");

                    // Split apart the config parameter using a semicolon separator
                    char[] separators = { ';' };
                    alternativeUrls = temp.Split(separators);

                    // Validate the config value
                    if (alternativeUrls == null || alternativeUrls.Length == 0)
                        throw new ArgumentNullException(
                            "AAMs config store parameter is null or empty after split");

                    for (int i = 0; i < alternativeUrls.Length; i++)
                    {
                        // Ensure the URL is "/" terminated for consistent replacement behavior
                        string replaceableUrl = alternativeUrls[i];
                        if (!string.IsNullOrEmpty(replaceableUrl) && !replaceableUrl.EndsWith("/"))
                            alternativeUrls[i] += "/";
                    }

                    // Sort, and then reverse the array to put the longest,
                    // that is, the most specific, URLs first in the list
                    Array.Sort(alternativeUrls);
                    Array.Reverse(alternativeUrls);

                    // Cache for 5 minutes to allow for a somewhat quick refresh
                    // if the configuration values are changed
                    HttpContext.Current.Cache.Add("alternativeUrls", alternativeUrls, null,
                        DateTime.Now.AddMinutes(5),
                        System.Web.Caching.Cache.NoSlidingExpiration,
                        System.Web.Caching.CacheItemPriority.Normal, null);
                }
            }
            catch (Exception ex)
            {
                // log exception here
            }
            return alternativeUrls;
        }
    }
}

Build the output assembly strongly named, and then deploy it to the GAC on every WFE. A simple solution package (wsp file) can be created to automate the deployment to all WFEs. Creating a solution package is automatic if you create your project using the Visual Studio 2008 extensions for Windows SharePoint Services 3.0, v1.2.

The control adapter is associated with the CEWP through browser file entries. The default browser file, compat.browser, is in the App_Browsers folder of each web application's virtual directory. We don't want to update this file; instead, we will create a separate file for our control adapter. Additional browser files can be added to the same directory; however, additional files will not be recognized by ASP.NET until compat.browser is recompiled. Recompilation is forced by opening compat.browser in a text editor like Notepad, making an innocuous change (e.g., add and then delete a space), and then saving the file.

The customized browser file, litware.browser, is created and placed in the App_Browsers folder of every web application where the control adapter is to be enabled. Without this browser file, the control adapter is not called even though it may be installed in the GAC. The default compat.browser is then updated and saved to force a recompilation of the local application browser files, as described in the preceding paragraph. Looking at the litware.browser code, the controlType attribute tells ASP.NET to associate this control adapter with the CEWP. The adapterType attribute has the control adapter type and assembly. The refID="Default" attribute tells ASP.NET to apply the adapter to all browser types.
<?xml version="1.0" encoding="utf-8" ?>
<browsers>
  <browser refID="Default">
    <controlAdapters>
      <adapter controlType="Microsoft.SharePoint.WebPartPages.ContentEditorWebPart"
        adapterType="Litware.SharePoint.WebPartPages.CewpControlAdapter.ContentEditorWebPartAdapter, Litware.SharePoint.WebPartPages.CewpControlAdapter, Version=1.0.0.0, Culture=neutral, PublicKeyToken=daf6fd1bbe0cfc20" />
    </controlAdapters>
  </browser>
</browsers>

Note, the App_Browsers directory must be updated on every WFE in the target farm.

Browser Definition File Schema (browsers Element)
Securing Browser Definition Files

The Office Server Web Services service is used by Office SharePoint Server 2007 as a communication channel between Web servers and application servers. This service uses the following ports: Access to the web service methods is restricted to the farm administrator group, WSS_ADMIN_WPG. None of the web service methods can be called from user code. Depending on features installed, the Office Server Web Services Web application exposes the following internal Web services, which are not available for calls from custom code:

Friendly Name: Search Web Service
Location: SearchAdmin.asmx
Description: Microsoft Office SharePoint Server 2007 Search Administration Web Service.

Friendly Name: Search Application Web Service
Location: /SSP/Search/SearchAdmin.asmx
Description: Microsoft Office SharePoint Server 2007 Search Application Administration Web Service.

Friendly Name: Excel Service Soap
Location: /SSP/ExcelCalculationServer/ExcelService.asmx
Description: Microsoft Office SharePoint Server 2007 Excel Services Application Web Service.

The object model automatically short-circuits the web services, i.e. invokes the underlying functionality without invoking the web service, when the target server is also the client, primarily for performance reasons. Hence, the web services are not used... Runs in the Office Server Web Services virtual server root application pool, i.e. an application pool that does not belong to any SSP. This GLOBAL application pool runs as NetworkService.
It is used to retrieve low-level computer configuration settings before any SSP is created, e.g. system drive info, path correctness verification, the computer's IP address. It is also used to create/configure a propagation share. The web method that implements this functionality is special: it impersonates the WindowsIdentity making the request. That identity must be a local admin on the remote server (only local administrators can create/configure shares). Allowed access: WSS_ADMIN_WPG.

Primarily used for SSP administration of Search configuration. A web service associated with a specific SSP on a specific server (indexer and/or query server). Runs as the SSP web service credentials (the credentials that you enter in the SSP creation/details page). The SSP web service account can read/write from/to the SSP database and the Search database (only the ones that belong to its SSP). Allowed access: WSS_ADMIN_WPG and the SSP administration application pool identity.

Network traffic can be secured either with SSL on port 56738, or with IPSec on either port. IPSec is an IP-level feature, which means all traffic on the configured ports is protected; whereas SSL is an application-level protection mechanism. IPSec has the advantage of limiting which pairs of servers can communicate, by configuring the IP addresses. This feature can significantly lock down a server farm.

Search service account
SSP administration site application pool identity
Global web service account
SSP (Application) web service

It has read/write access to the Search registry hive.

The question is, when an information management expiration policy is defined, is the expiration period applied immediately or at some future time? If the answer is at some future time, exactly what is that time, and can you set it?

Contrived Example

To take a concrete example, create a document library. Next, modify the default view to allow us to see what is happening. Go to Settings, Views, and click on All Documents.
Add the columns Created, Exempt from Policy, and Expiration Date to the view. Press the OK button to save the view changes. Now go to Settings, Document Library Settings, and click on Information management policy settings. Return to the library's All Documents view. Upload several documents. The uploaded documents will have an Expiration Date of the current date plus 30 days, as expected. Go back to the Settings, and change the retention period to Created + 60 days. Upload some more documents. The newly uploaded documents correctly show an expiration date of the current date plus 60 days, but the previously uploaded documents still have an expiration date of the current date plus 30 days. It appears there is an inconsistency.

Information Policy Jobs

The key to this inconsistency is the Information Management Policy timer job. This job runs once daily by default. It iterates over all the web applications/site collections/sites/lists in the farm, looking for information policy changes. When a policy change is found, all affected items' metadata is updated; consequently, the expiration dates of the library documents in our contrived example are not updated until this job runs. When this job eventually runs, the inconsistency will be corrected.

There is an stsadm command to change the schedule of this job, SetPolicySchedule:

stsadm -o setpolicyschedule -schedule <recurrence string>

Parameter: schedule
Value: A valid Windows SharePoint Services Timer service (SPTimer) schedule. An acceptable default value is "once every 24 hours."
Required? Yes
Description: Sets how often the policy framework processes changes to a policy. The value should be a properly formatted SPTimer argument.

Since this job could affect performance in a large farm, be careful when scheduling it. Daily is probably sufficient.
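The inconsistency in the contrived example comes down to simple date arithmetic. Here is a small sketch (plain Python, not SharePoint code; the dates are invented) of what the Expiration Date column holds before and after the timer job recomputes item metadata:

```python
from datetime import date, timedelta

def expiration_date(created, retention_days):
    # Expiration Date = Created + the retention period in force
    # when the item's metadata was last processed.
    return created + timedelta(days=retention_days)

created = date(2009, 4, 1)

# Uploaded while the policy was Created + 30 days:
first_batch = expiration_date(created, 30)    # 2009-05-01

# Policy changed to Created + 60 days; new uploads pick it up immediately:
second_batch = expiration_date(created, 60)   # 2009-05-31

# The first batch keeps its stale 30-day date until the daily Information
# Management Policy timer job reprocesses it with the new policy:
after_timer_job = expiration_date(created, 60)

print(first_batch, second_batch, after_timer_job)
```

Until the timer job runs, two documents created on the same day can show expiration dates a month apart.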
It is also a good idea to have it run about an hour before the Expiration Policy job, so the Expiration Policy job will find up-to-date item metadata when it applies the expiration policy to items. You can find current information on both the Information Management Policy and Expiration Policy jobs by going to Central Administration > Operations > Timer Job Definitions. This will show you the frequency and last run time of each job, but not the complete schedule. There is currently no stsadm command to change the Expiration Policy job; however, Mattias Lindberg has a code sample to force the job to run using the object model.

Summary

The following My Site recommendations are a composite of best practices taken from experiences at Microsoft and other large customers.

My Sites (even if they are as small as possible and only really used to store a profile picture) complicate backup/recovery, and add complexity and risk to ensuring the availability of the rest of the SSP farm. Large organizations (100K+ employees) should consider putting My Sites into a separate farm. Very large organizations might consider multiple My Site farms, perhaps regionally located. This minimizes the number of content databases per farm and places the My Sites geographically closer to the site owners. Having fewer content databases per farm eases administrative burdens. Having My Sites hosted closer to site owners reduces the effects of network latency, thereby enhancing their usage experience. Multiple My Site farms can also provide more flexibility in managing the effort and time required to deploy updates and service packs to any given farm.

Always create a dedicated Web Application to host My Sites. This allows leveraging web application policies to define security, facilitates content database management, and enables creation of zones for external access. Do not customize the My Site site definition.
Besides being unsupported, poorly designed customizations can severely impact server performance and unnecessarily consume valuable CPU and memory resources. Any customization should be done through feature stapling.

Install the latest service pack (currently SP1). Be sure to get the latest post-service-pack hotfixes applied, in particular 21243 (Office QFE). There is an issue where incremental crawls will not pick up all changes on My Sites without the post-SP1 hotfix.

Use a separate content source for People Profiles rather than allowing it to default to the Local Office SharePoint Server Sites content source. My Sites full crawls can be time consuming due to the large number of site collections. Creating a separate content source enables independent crawl configuration, such as the type of crawl and crawl frequency for My Sites. Remember, a user profile page will exist for all employees following a full Active Directory profile import, even if no My Sites have been created yet. Profile pages allow basic employee information to be exposed in search results.

Allow users to create their own My Site on demand. Do not pre-provision My Sites. Generally, pre-provisioning is a time consuming process potentially taking many days or weeks. It gains little, and costs storage and administrative headaches.

Make My Sites available to everyone on day 1. This allows for "viral" adoption by early adopters. This will eventually encourage others to create My Sites, thereby getting the momentum rolling. Send out invitations to a small group of "pilot" users who would be interested in trying out My Sites, based on their role in the organization. The pilot group might contain a few hundred users. This gets a critical mass of My Sites in place quickly. Try regional roll-outs via "soft launches", which include poster campaigns or brown bag lunch training at selected campuses and offices.
About the 3rd or 4th month, promote the My Site feature in a story on the Intranet portal home page. Around the 6th month, incorporate the concept of "filling out your profile" into new employee orientation as a specific training exercise. Now essentially all new hires will have a My Site (because they need one to store their profile picture).

Encourage high-profile "executive blogging" to drive awareness and adoption of My Sites. Blogging topics might include annual business planning, corporate strategy, rumor control, etc. This can demystify blogging by encouraging many participants to make daily posts about what they were doing and what they are thinking about. Note: this implies more frequent incremental crawling to incorporate blog entries into the search index. Also, consider setting up a "Blogs" scope on the search home page to facilitate blog discovery. This can be accomplished by setting up a scope based on the Content Type of blog posts. An example follows: Note that this scope will pick up Blog Posts no matter where the Blog resides, as long as those are SharePoint Blogs crawled by a SharePoint Content Source.

Consider adding a link on the profile page that allows others to send an email to ask the person to fill out their profile ("peer nagging"). Explore holding a contest or raffle: if you fill out (or update) your profile this month, you are eligible to win a prize.

A customer recently asked for "vanity" URLs for each of the major departments (HR, Finance, Legal, etc.); so for example, HR would be, and Legal would be. No problem, you say: just create a web application for each department, give the web application default zone the vanity URL, create corresponding DNS entries, and the requirement is fulfilled. The problem is there are over 30 departments, not to mention foreign subsidiaries, and possibly other "I want my own vanity URL" requests from other corporate groups.
Since web applications are heavy resource consumers, each one requiring one IIS web site per zone, basing vanity URLs on web applications would not be feasible. The traditional out-of-the-box site collection paths did not meet the requirements:

Wildcard inclusion; e.g.,
Explicit inclusion; e.g.,

For both wildcard inclusion and explicit inclusion, the vanity part of the URL is at the end, which is not what was desired. Host-named site collections (what used to be called "scalable hosting mode" in WSS v2) provide exactly the needed capability. Don't confuse the terms host-named and host headers. They are different concepts. The host-named concept applies to the internal SharePoint virtual path mapping mechanism, whereas host headers apply to IIS web sites, independent of SharePoint.

Host-named site collections effectively allow use of an arbitrary URL, which is associated with an existing web application. There can be many host-named sites for a web application. The net result is freedom to have as many vanity URLs as necessary, while limiting the number of web applications. We can have URLs like,,, etc.

Host-named sites cannot be created through the UI. You must use stsadm. This should not deter you, since the syntax is simple. The secret is to add the -hhurl parameter to the stsadm createsite command. As with all good things, there are some limitations and complications. The following blog is an excellent read. Quoting from this blog: This whitepaper is also highly recommended reading. It discusses how to enable SSL, and many other configuration issues.

Assume we want to create a site collection with the vanity URL in the web application.
Step 1: Open a command prompt as a farm administrator, and then enter this command, being sure to include the -hhurl parameter:

>stsadm -o createsite -url -ownerlogin litwareinc\administrator -owneremail administrator@litwareinc.com -hhurl -sitetemplate STS#1 -title "Finance Department" -quota DepartmentalSite
Operation completed successfully.

The new site collection appears in the site collection list of the extranet.litwareinc.com web application.

Step 2: Create a DNS entry for this new name, pointing to the IP address. For a local test, make an entry in the hosts file. You can now open a browser and navigate to the site collection.

Step 3: Create a search content source for the new site collection, or include it as a starting address within an existing content source. Start a crawl to include the site contents in the search index. To test the search, create a simple text document in a shared document library. When the next incremental crawl completes, you can then search for the document to verify the search results are using the vanity URL.

The stsadm site creation command will give a warning if the host web application is using Kerberos authentication (negotiate):

>stsadm -o createsite -url -ownerlogin mossfs\administrator -owneremail administrator@mossfs.com -hhurl -sitetemplate STS#1 -title "Finance Department"
WARNING: SharePoint no longer customizes Integrated Authentication security settings. This Web application may be using Kerberos, which can require manual configuration. See om/?id=832769 for more information.

It is necessary to register the vanity URL with Active Directory using setspn:

>setspn -a http/finance.litwareinc.com litwareinc\administrator

In my testing, I stumbled into another complication. Without realizing it at first, I created a site collection in the Intranet zone of a web application, which was configured for Kerberos.
After creating the site collection, adding the DNS (hosts file) entry, and executing setspn, I got a "This site is under construction" error page every time I tried to browse to the site. I eventually worked around this by explicitly adding a host header to the IIS web site. I consider this an unsupported solution, and so I recommend extensive testing before applying it to a production scenario.

Not all timer jobs are visible in the Central Administration timer job definition page. There are MOSS 2007 timer jobs in the SSP application which don't appear in the Central Administration page. It makes sense that these jobs are not visible, since there is nothing you can modify or disable. All the same, it would be nice to know what these jobs are, and what their schedules are, in case you want to schedule other potentially conflicting activities. You can see the names of these SSP jobs by using the stsadm enumssptimerjobs command, as in the following console sample. (view entire article ...)

I recently heard a question: "How do you change SSP web application associations through the object model (OM)?" Searching the OM online help didn't find any results. And yet theoretically, it must be possible, since you can set the web application SSP association through Central Administration. Then the thought occurred: why not reverse engineer the Central Administration page to see how it is done? Here are the steps. This technique can be applied to any administration page if you are curious how the product team wrote the code. (view entire article ...)

The out-of-the-box UI provides a means to manually remove user profiles. Navigate to SSP Admin > User Profile and Properties > View User Profiles. Search for the user's profile, and then click the "Delete" context menu item or the Delete toolbar button. This is fine for an occasional profile deletion; but what if you need to delete thousands of profiles? We recently ran into this situation. (View entire article ...)
Now that daylight savings time has arrived in the United States, have you noticed problems with timer jobs not running when expected? I recently encountered this trying to deploy solution packages using stsadm scripts. In the past, these scripts ran flawlessly. The old solutions were retracted and deleted, the new solution versions added and deployed across all farm servers within a few minutes, and life was good.

Last week I was working with a customer to deploy a new staging farm. We got to the point at which the solution deployment scripts were run. We waited, and waited, and waited. The script was hung at stsadm -o execadmsvcjobs, which was called after executing several stsadm -o deploysolution -name solutionpackage1.wsp -immediate -force -allowGacDeployment statements. What should have happened is that the timer jobs to deploy the solution across the farm servers would execute within about a minute. Instead, over an hour passed. Then suddenly, the jobs ran.

What caused this odd behavior, which had never happened before with the same scripts? Was there a farm configuration problem? Then a thought occurred. The time had changed to daylight savings a week before, moving the clocks ahead one hour. Could there be a connection? A little research brought the problem to light. WSS 3.0 SP1 includes fixes to timer job DST scheduling problems. See 938663: One-time timer jobs in Windows SharePoint Services 3.0 are delayed by at least one hour when the jobs are scheduled to occur during daylight saving time (DST).

A quick check in Central Administration > Operations > Servers in Farm showed the installed version number was 12.0.0.4518. It should have been 12.0.0.6219 if SP1 was installed. We installed WSS 3.0 SP1, and the timer job scheduling problem disappeared! I strongly encourage installation of WSS/MOSS SP1. It fixes many issues, DST being just one example. Take time to read and follow the installation instructions precisely!
Remember, you have to update all servers in the farm at once. There is no rolling update, so if you have a multiple-server farm with large content databases, plan on doing this over a weekend to avoid disrupting your user community.

There is a table in the SSP database named UserProfileEventLog. The table maintains a history of user profile property changes. By default, it retains 7 days of history. This table can cause problems in a couple of ways.

First issue: size. This table contains one row per change of a property in a user profile. The row contains the old and new property values, along with associated metadata like the datetime the property was changed. The implication is that the table can grow large. Assume you just configured the profile import and are ready to start the first Active Directory import. Further, assume your organization has 100,000 user accounts, and each account is populated with 12 AD attributes. The full import will result in 100,000 x 12 = 1.2 million rows being inserted into UserProfileEventLog. To extend the example, assume you also have a BDC import connection populating another 20 properties from your company's HR system. That adds another 2 million rows. There are now 3.2 million rows in the event log table. If each row is 100 bytes (old value, new value, plus metadata), the table is now approximately 320 MB in size.

Second issue: deleting old entries. The change history is kept for a configurable number of days. The concern is how many rows will have to be deleted on a given day. The number of days of history is 7 by default, but can be set using stsadm.exe -o profilechangelog -title <SSP Name> -daysofhistory <number of days> -generateanniversaries. The critical issue is that MOSS has to remove a full day of history every day to honor the daysofhistory setting. Using the numbers in the preceding paragraph, 7 days after the first full import, MOSS is going to delete 3.2 million rows of data all at once!
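The arithmetic above can be double-checked with a quick sketch. The 100-byte average row size is the same rough assumption used in the text:

```python
users = 100_000
ad_attributes = 12       # properties populated by the Active Directory import
bdc_attributes = 20      # properties populated by the BDC (HR system) import
bytes_per_row = 100      # rough average: old value, new value, plus metadata

ad_rows = users * ad_attributes        # rows logged by the AD import
bdc_rows = users * bdc_attributes      # rows logged by the BDC import
total_rows = ad_rows + bdc_rows        # total UserProfileEventLog rows
approx_mb = total_rows * bytes_per_row / 1_000_000

print(total_rows, approx_mb)  # 3200000 320.0
```

So one full import of a 100K-user organization produces roughly 3.2 million history rows, all of which come due for deletion together 7 days later.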
What makes this an issue is that the deletion is done with a single SQL statement, something like DELETE FROM UserProfileEventLog WHERE EventId < @MinEventTime. Think of the implications. As a single statement, this will hold locks until all 3.2 million rows are deleted. These locks might prevent other database transactions from completing, and it also means the transaction log (even with simple recovery mode) cannot be truncated, and will therefore grow until at least this DELETE statement completes. I have seen this delete statement run for 4 hours, with the transaction log quadrupling in size. There is little you can do to avoid this. Although you can adjust the number of days of history, MOSS will eventually try to delete an entire day's history at some point. The deletion is done by an internal timer job buried within the SSP. The job is hard-coded to run at 10:00 PM daily. You cannot change this scheduled time. What are the take-aways?
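For comparison, the general DBA technique for this situation is a batched delete. To be clear, this is purely an illustration of the technique, not a supported procedure: Microsoft does not support modifying SSP databases directly, and the predicate here just mirrors the statement quoted above (@MinEventTime stands for whatever cutoff the timer job computes). A batched T-SQL delete might look like:

```sql
-- Sketch only: delete in batches so each batch commits quickly,
-- locks are held briefly, and the log can truncate between batches
-- under the simple recovery model.
DECLARE @BatchSize int;
SET @BatchSize = 10000;

WHILE 1 = 1
BEGIN
    -- Same predicate as the single-statement delete quoted above,
    -- limited to @BatchSize rows per pass.
    DELETE TOP (@BatchSize) FROM UserProfileEventLog
    WHERE EventId < @MinEventTime;

    IF @@ROWCOUNT < @BatchSize BREAK;
END
```

Since SharePoint's own timer job issues the single-statement version, the realistic take-away is sizing: keep daysofhistory small enough that each nightly delete stays manageable.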
http://blogs.msdn.com/jimmiet/
#include <eventdispatcher.h>

List of all members. Definition at line 33 of file eventdispatcher.h.

EventDispatcher(): Creates a new EventDispatcher object. You should not need to use this class directly. Definition at line 20 of file eventdispatcher.cpp.

~EventDispatcher() [virtual]: Virtual destructor. Definition at line 24 of file eventdispatcher.cpp.

Looks for handlers for the given Event, identified by its type. Definition at line 43 of file eventdispatcher.cpp.

Looks for handlers for the given Event, and removes the handlers if requested. Definition at line 28 of file eventdispatcher.cpp.

Registers the given EventHandler to be notified about Events with the given context. The context will usually be an IQ ID. Definition at line 53 of file eventdispatcher.cpp.

Removes the given EventHandler. Definition at line 61 of file eventdispatcher.cpp.
http://camaya.net/api/gloox-trunk/classgloox_1_1EventDispatcher.html
#include <unistd.h>

pid_t fork(void);

[...] variables, and other pthreads objects; the use of pthread_atfork(3) may be helpful for dealing with problems that this can cause.

* The child inherits copies of the parent's set of open file descriptors.

CONFORMING TO
SVr4, 4.3BSD, POSIX.1-2001.

NOTES
Under Linux, fork() is implemented using copy-on-write pages, so the only penalty that it incurs is the time and memory required to duplicate the parent's page tables, and to create a unique task structure for the child. Since version 2.3.3, rather than invoking the kernel's fork() system call, the glibc fork() wrapper that is provided as part of the NPTL threading implementation invokes clone(2) with flags that provide the same effect as the traditional system call.
http://www.linuxguruz.com/man-pages/fork/
F1 Score vs ROC AUC vs Accuracy vs PR AUC: Which Evaluation Metric Should You Choose? So you are working on a machine learning project and thinking: - When is accuracy a better evaluation metric than ROC AUC? - When should I not use accuracy? - What is the F1 score good for? - If my problem is highly imbalanced, should I use ROC AUC or PR AUC? As always it depends, but understanding the trade-offs between different metrics is crucial when it comes to making the correct decision. In this blog post I will: - Talk about some of the most common binary classification metrics like F1 score, ROC AUC, PR AUC, and Accuracy - Compare them using an example binary classification problem - Tell you what you should consider when deciding to choose one metric over the other (F1 score vs ROC AUC). Ok, let's do this! Evaluation metrics recap I will start by introducing each of those classification metrics. Specifically: - What is the definition and intuition behind it - The non-technical explanation - How to calculate or plot it - When should you use it. Note: If you have read my previous blog post, "24 Evaluation Metrics for Binary Classification (And When to Use Them)", you may want to skip this section and scroll down to the evaluation metrics comparison. 1. Accuracy It measures how many observations, both positive and negative, were correctly classified. You shouldn't use accuracy on imbalanced problems: there, it is easy to get a high accuracy score by simply classifying all observations as the majority class. In Python you can calculate it with sklearn.metrics.accuracy_score. Since accuracy is calculated on the predicted classes (not prediction scores), we need to apply a certain threshold before computing it. The obvious choice is the threshold of 0.5, but it can be suboptimal. Let's see an example of how accuracy depends on the threshold choice: You can use charts like the one above to determine the optimal threshold.
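A threshold sweep like the one described above can be sketched in a few lines of plain Python (the labels and scores below are made up for illustration; on real data you would compute accuracy with sklearn.metrics.accuracy_score the same way):

```python
# Toy labels and prediction scores, invented for illustration.
y_true = [0, 0, 0, 0, 1, 1, 1, 0, 1, 0]
y_score = [0.1, 0.3, 0.45, 0.52, 0.55, 0.7, 0.8, 0.6, 0.9, 0.2]

def accuracy_at(threshold, y_true, y_score):
    """Threshold the scores into classes, then score the fraction correct."""
    y_pred = [1 if s > threshold else 0 for s in y_score]
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Sweep candidate thresholds and keep the one with the best accuracy.
thresholds = [i / 100 for i in range(1, 100)]
best = max(thresholds, key=lambda t: accuracy_at(t, y_true, y_score))
print(f"default 0.5 -> {accuracy_at(0.5, y_true, y_score):.2f}, "
      f"best {best:.2f} -> {accuracy_at(best, y_true, y_score):.2f}")
```

On this toy data the default threshold of 0.5 gives 0.8 accuracy while the swept optimum reaches 0.9, which is exactly the kind of bump a threshold chart helps you find.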
In this case, choosing something a bit over the standard 0.5 could bump the score by a tiny bit, 0.9686->0.9688, but in other cases the improvement can be more substantial. So, when does it make sense to use it? - When your problem is balanced, using accuracy is usually a good start. An additional benefit is that it is really easy to explain to non-technical stakeholders in your project. - When every class is equally important to you. 2. F1 score Simply put, it combines precision and recall into one metric by calculating the harmonic mean between those two. It is actually a special case of the more general F-beta function: When choosing beta in your F-beta score, the more you care about recall over precision, the higher the beta you should choose. For example, with F1 score we care equally about recall and precision, while with F2 score recall is twice as important to us. With 0<beta<1 we care more about precision, and so the higher the threshold the higher the F-beta score. When beta>1 our optimal threshold moves toward lower thresholds, and with beta=1 it is somewhere in the middle. It can be easily computed by running:

```python
from sklearn.metrics import f1_score
y_pred_class = y_pred_pos > threshold
f1_score(y_true, y_pred_class)
```

It is important to remember that the F1 score is calculated from Precision and Recall which, in turn, are calculated on the predicted classes (not prediction scores). How should we choose an optimal threshold? Let's plot the F1 score over all possible thresholds: We can adjust the threshold to optimize the F1 score. Notice that for precision or recall alone you could get a perfect score by pushing the threshold all the way up or down. The good thing is, you can find a sweet spot for F1 score. As you can see, getting the threshold just right can actually improve your score from 0.8077->0.8121. When should you use it?
- Pretty much in every binary classification problem where you care more about the positive class. It is my go-to metric when working on those problems. - It can be easily explained to business stakeholders, which in many cases can be a deciding factor. Always remember, machine learning is just a tool to solve a business problem. 3. ROC AUC AUC means Area Under the Curve, so to speak about the ROC AUC score we need to define the ROC curve first. It is a chart that visualizes the tradeoff between the true positive rate (TPR) and the false positive rate (FPR). Basically, for every threshold we calculate TPR and FPR and plot them on one chart. Of course, the higher the TPR and the lower the FPR for each threshold, the better, and so classifiers whose curves sit more toward the top-left are better. An extensive discussion of the ROC curve and the ROC AUC score can be found in this article by Tom Fawcett. We can see a healthy ROC curve, pushed towards the top-left side both for the positive and negative classes. It is not clear which one performs better across the board: with FPR < ~0.15 the positive class is higher, and starting from FPR ~0.15 the negative class is above. In order to get one number that tells us how good our curve is, we can calculate the Area Under the ROC Curve, or ROC AUC score. The more top-left your curve is, the higher the area and hence the higher the ROC AUC score. Alternatively, it can be shown that the ROC AUC score is equivalent to calculating the rank correlation between predictions and targets. From an interpretation standpoint, this is more useful because it tells us that this metric shows how good your model is at ranking predictions. It tells you the probability that a randomly chosen positive instance is ranked higher than a randomly chosen negative instance.
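The ranking interpretation can be made concrete in a few lines of Python. The scores below are made up; the function computes the probability that a random positive outscores a random negative (ties counted as half), which for data without ties matches what sklearn.metrics.roc_auc_score returns:

```python
# Made-up scores the model assigned to positive and negative instances.
pos_scores = [0.55, 0.7, 0.8, 0.9]
neg_scores = [0.1, 0.2, 0.3, 0.45, 0.52, 0.6]

def rank_auc(pos, neg):
    """Fraction of (positive, negative) pairs ranked correctly."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(rank_auc(pos_scores, neg_scores))  # 23 of 24 pairs ordered correctly
```

Here only the pair (0.55, 0.6) is mis-ordered, so the score is 23/24, about 0.958.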
```python
from sklearn.metrics import roc_auc_score
roc_auc = roc_auc_score(y_true, y_pred_pos)
```

- You should use it when you ultimately care about ranking predictions and not necessarily about outputting well-calibrated probabilities (read this article by Jason Brownlee if you want to learn about probability calibration). - You should not use it when your data is heavily imbalanced. This was discussed extensively in this article by Takaya Saito and Marc Rehmsmeier. The intuition is the following: the false positive rate for highly imbalanced datasets is pulled down due to a large number of true negatives. - You should use it when you care equally about positive and negative classes. It naturally extends the imbalanced data discussion from the last section. If we care about true negatives as much as we care about true positives, then it totally makes sense to use ROC AUC. 4. PR AUC | Average Precision Similarly to ROC AUC, in order to define PR AUC we need to define what the Precision-Recall curve is. It is a curve that combines precision (PPV) and recall (TPR) in a single visualization. For every threshold, you calculate PPV and TPR and plot them. The higher on the y-axis your curve is, the better your model performance. You can use this plot to make an educated decision when it comes to the classic precision/recall dilemma. Obviously, the higher the recall the lower the precision. Knowing at which recall your precision starts to fall fast can help you choose the threshold and deliver a better model. We can see that for the negative class we maintain high precision and high recall almost throughout the entire range of thresholds. For the positive class, precision starts to fall as soon as we are recalling 0.2 of true positives, and by the time we hit 0.8 it decreases to around 0.7.
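The construction described above (one precision/recall pair per threshold) can be sketched in plain Python; the labels and scores here are invented for illustration, and sklearn.metrics.precision_recall_curve does the same sweep for you on real data:

```python
# Invented labels and scores for illustration.
y_true = [0, 0, 1, 0, 1, 1, 0, 1]
y_score = [0.1, 0.4, 0.35, 0.8, 0.65, 0.9, 0.5, 0.7]

def precision_recall_at(threshold):
    """Compute one (precision, recall) point of the PR curve."""
    pred = [s >= threshold for s in y_score]
    tp = sum(p and t for p, t in zip(pred, y_true))
    fp = sum(p and not t for p, t in zip(pred, y_true))
    fn = sum((not p) and t for p, t in zip(pred, y_true))
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

for t in [0.3, 0.5, 0.7, 0.9]:
    p, r = precision_recall_at(t)
    print(f"threshold={t:.1f}  precision={p:.2f}  recall={r:.2f}")
```

Raising the threshold trades recall for precision, tracing out the curve from right to left.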
Similarly to the ROC AUC score, you can calculate the Area Under the Precision-Recall Curve to get one number that describes model performance. You can also think of PR AUC as the average of precision scores calculated for each recall threshold. You can also adjust this definition to suit your business needs by choosing/clipping recall thresholds if needed.

```python
from sklearn.metrics import average_precision_score
average_precision_score(y_true, y_pred_pos)
```

- when you want to communicate the precision/recall decision to other stakeholders - when you want to choose the threshold that fits the business problem - when your data is heavily imbalanced. As mentioned before, this was discussed extensively in this article by Takaya Saito and Marc Rehmsmeier. The intuition is the following: since PR AUC focuses mainly on the positive class (PPV and TPR), it cares less about the frequent negative class - when you care more about the positive than the negative class. If you care more about the positive class and hence PPV and TPR, you should go with the Precision-Recall curve and PR AUC (average precision). Evaluation metrics comparison We will compare those metrics on a real use case. Based on a recent Kaggle competition, I created an example fraud-detection problem: - I selected only 43 features - I sampled 66000 observations from the original dataset - I adjusted the fraction of the positive class to 0.09 - I trained a bunch of lightGBM classifiers with different hyperparameters. I wanted to have an intuition as to which models are "truly" better. Specifically, I suspect that the model with only 10 trees is worse than a model with 100 trees. Of course, with more trees and smaller learning rates it gets tricky, but I think it is a decent proxy.
So for combinations of learning_rate and n_estimators, I did the following: - defined hyperparameter values:

```python
MODEL_PARAMS = {'random_state': 1234, 'learning_rate': 0.1, 'n_estimators': 10}
```

- trained the model:

```python
model = lightgbm.LGBMClassifier(**MODEL_PARAMS)
model.fit(X_train, y_train)
```

- predicted on test data:

```python
y_test_pred = model.predict_proba(X_test)
```

- logged all the metrics for each run:

```python
log_binary_classification_metrics(y_test, y_test_pred)
```

For the full code base, go to this repository or inspect the code per experiment. You can also go here and explore experiment runs with: - evaluation metrics - performance charts - metric by threshold plots On this problem, all of those metrics rank models from best to worst very similarly, but there are slight differences. Also, the scores themselves can vary greatly. In the next sections, we will discuss this in more detail. 5. Accuracy vs ROC AUC The first big difference is that you calculate accuracy on the predicted classes while you calculate ROC AUC on predicted scores. That means you will have to find the optimal threshold for your problem. Moreover, accuracy looks at fractions of correctly assigned positive and negative classes. That means that if our problem is highly imbalanced, we get a really high accuracy score by simply predicting that all observations belong to the majority class. On the flip side, if your problem is balanced and you care about both positive and negative predictions, accuracy is a good choice because it is really simple and easy to interpret. Another thing to remember is that ROC AUC is especially good at ranking predictions.
Because of that, if you have a problem where sorting your observations is what you care about, ROC AUC is likely what you are looking for. Now, let's look at the results of our experiments: The first observation is that models rank almost exactly the same on ROC AUC and accuracy. Secondly, accuracy scores start at 0.93 for the very worst model and go up to 0.97 for the best one. Remember that predicting all observations as the majority class 0 would give 0.9 accuracy, so our worst experiment BIN-98 is only slightly better than that. Yet the score itself is quite high, and it shows that you should always take imbalance into consideration when looking at accuracy. Note: There is an interesting metric called Cohen Kappa that takes imbalance into consideration by calculating the improvement in accuracy over the "sample according to class imbalance" model. Read more about Cohen Kappa here. 6. F1 score vs Accuracy Both of those metrics take class predictions as input, so you will have to adjust the threshold regardless of which one you choose. Remember that the F1 score is balancing precision and recall on the positive class while accuracy looks at correctly classified observations, both positive and negative. That makes a big difference, especially for imbalanced problems where, by default, our model will be good at predicting true negatives and hence accuracy will be high. However, if you care equally about true negatives and true positives, then accuracy is the metric you should choose. If we look at our experiments below: In our example, both metrics are equally capable of helping us rank models and choose the best one. The class imbalance of 1:10 makes our accuracy really high by default. Because of that, even the worst model has very high accuracy, and the improvements as we go to the top of the table are not as clear on accuracy as they are on F1 score.
7. ROC AUC vs PR AUC What is common between ROC AUC and PR AUC is that they both look at prediction scores of classification models and not thresholded class assignments. What is different, however, is that ROC AUC looks at the true positive rate (TPR) and false positive rate (FPR) while PR AUC looks at the positive predictive value (PPV) and true positive rate (TPR). Because of that, if you care more about the positive class, then using PR AUC, which is more sensitive to improvements for the positive class, is a better choice. One common scenario is a highly imbalanced dataset where the fraction of the positive class, which we want to find (like in fraud detection), is small. I highly recommend taking a look at this Kaggle kernel for a longer discussion on the subject of ROC AUC vs PR AUC for imbalanced datasets. If you care equally about the positive and negative class or your dataset is quite balanced, then going with ROC AUC is a good idea. Let's compare our experiments on those two metrics: They rank models similarly, but there is a slight difference if you look at experiments BIN-100 and BIN-102. However, the improvements calculated in Average Precision (PR AUC) are larger and clearer. We get from 0.69 to 0.87 while at the same time ROC AUC goes from 0.92 to 0.97. Because of that, ROC AUC can give a false sense of very high performance when in fact your model may not be doing that well. 8. F1 score vs ROC AUC One big difference between the F1 score and ROC AUC is that the first one takes predicted classes and the second takes predicted scores as input. Because of that, with the F1 score you need to choose a threshold that assigns your observations to those classes. Often, you can improve your model performance a lot if you choose it well. So, if you care about ranking predictions, don't need them to be properly calibrated probabilities, and your dataset is not heavily imbalanced, then I would go with ROC AUC.
If your dataset is heavily imbalanced and/or you mostly care about the positive class, I'd consider using the F1 score, or the Precision-Recall curve and PR AUC. An additional reason to go with F1 (or F-beta) is that these metrics are easier to interpret and communicate to business stakeholders. Let's take a look at the experimental results for some more insights: Experiments rank identically on F1 score (threshold=0.5) and ROC AUC. However, the F1 score is lower in value, and the difference between the worst and the best model is larger. For the ROC AUC score, values are larger and the difference is smaller. Especially interesting is the experiment BIN-98, which has an F1 score of 0.45 and ROC AUC of 0.92. The reason for it is that the threshold of 0.5 is a really bad choice for a model that is not yet trained (only 10 trees). You could get an F1 score of 0.63 if you set it at 0.24, as presented below: Note: If you would like to easily log those plots for every experiment, I attach a logging helper at the end of this post. Final thoughts In this blog post, you've learned about a few common metrics used for evaluating binary classification models. We've discussed how they are defined, how to interpret and calculate them, and when you should consider using them. Finally, we compared those evaluation metrics on a real problem and discussed some typical decisions you may face. With all this knowledge, you have the equipment to choose a good evaluation metric for your next binary classification problem!
Bonus: To make things a little bit easier, I have prepared: - a logging helper function that calculates and logs all the metrics, performance charts and metric-by-threshold charts described in this post - a binary classification metrics cheatsheet which contains information about 24 evaluation metrics for binary classification problems. Check those out below! Logging helper function If you want to log all of those metrics and performance charts that we covered for your machine learning project with just one function call and explore them in Neptune: - install the package:

```shell
pip install neptune-contrib[all]
```

- import and run:

```python
import neptunecontrib.monitoring.metrics as npt_metrics
npt_metrics.log_binary_classification_metrics(y_true, y_pred)
```

- explore everything in the app. Binary classification metrics cheatsheet We've created a nice cheatsheet for you which takes all the content I went over in this blog post and puts it in a few-page, digestible document which you can print and use whenever you need anything binary classification metrics related. Get your binary classification metrics cheatsheet.
https://neptune.ai/blog/f1-score-accuracy-roc-auc-pr-auc
Hope this isn't too much of a noob question, but I can't seem to find good information on it: I'd like to add an animated sprite (i.e. with a proper Animation Controller) to a Unity 4.6 UI Canvas, but don't quite know what's the right way to do it. For static images I know to just add a UI->Image, but there's no UI->Sprite. Is the combination of Sprite Renderer and Canvas Renderer even supported? I tried adding an animation controller to a UI Image, but that didn't play the animation. Then I tried removing the 'Image' component and adding a Sprite Renderer instead, which sort of works but not really (animation plays, but is displayed in scene view only, and the sprite is tiny). The same happens if I do it the other way around, start with a sprite and then add the Canvas Renderer. Tips appreciated! :) Thanks Maybe there's a Unity script now, but we just had a script that iterated through a bunch of images, assigning them to the Image. It's probable SpriteRenderer is legacy, and doesn't work with the uGUI. Thanks for the answer. I know I could code something up in a script that does it manually, but I was hoping I could use the animation tools that Unity offers, just for convenience. It seems unlikely that the Sprite Renderer is legacy, since it hasn't been that long since all the 2D stuff was introduced, afaik.. But perhaps it was never meant to be used in combination with the UI system (?). Bump? This has been bugging me as well... Found an answer to this? I currently have a UI Image animating with an Animator, but I have another UI Image in the same canvas that just won't animate. It makes no sense. Edit: Just made the sprites smaller, and now it animates.
Answer by hersheys72 · Jan 06, 2016 at 05:21 AM To change from a sprite to an animation you have to follow the steps found at:

Answer by legendbone · Jan 05, 2015 at 10:16 AM You can set the canvas to screen space-camera, then set a UI camera and pick the UI layer. But that causes another problem: the canvas sort order no longer works.

Answer by HappySaila · Jan 28, 2016 at 01:27 PM What I basically did was change the sprites of an image on your GUI. This code just plays something like a 10-frame nuke bomb on the player's HUD:

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.UI;

public class NukeScript : MonoBehaviour
{
    public Image image;          // the UI Image whose sprite gets swapped
    public Sprite[] sprites;     // the animation frames
    public float animationSpeed; // seconds to wait between frames

    public IEnumerator nukeMethod()
    {
        // Step through the frames, swapping the Image's sprite each time
        for (int i = 0; i < sprites.Length; i++)
        {
            image.sprite = sprites[i];
            yield return new WaitForSeconds(animationSpeed);
        }
    }
}
```

Oh, and setting animationSpeed to totalTime/sprites.Length will end the animation at your desired time.

Answer by piyusnitnware · Dec 02, 2016 at 08:17 AM Create an empty UI game object and make your animated sprites children of the newly created UI gameObject. Now you can use it anywhere in your UI.
https://answers.unity.com/questions/828315/animated-sprite-in-ui-canvas.html
Scrollbar of Datagrid Flash as3 Joselyn6 Nov 10, 2011 4:39 PM Hello to all: Can somebody tell me how to remove the scrollbar from the datagrid so it displays a list that grows as large or as small as the list size? And how to make the scrollbar appear on the stage? Thanks in advance. Regards. Joselyn

1. Re: Scrollbar of Datagrid Flash as3 Ned Murphy Nov 10, 2011 5:50 PM (in response to Joselyn6) You should look thru the help documents and review all of the DataGrid's properties. You will find properties that help you determine the width and height values and others that allow you to set whether or not the scrollbars appear. For instance, the following could make the DataGrid adjust its size for the number of rows it holds...

dataGrid.height = dataGrid.headerHeight + dataGrid.length * dataGrid.rowHeight;

And the following will set the vertical scrollbar to not appear...

dataGrid.verticalScrollPolicy = ScrollPolicy.OFF;

2. Re: Scrollbar of Datagrid Flash as3 Joselyn6 Nov 14, 2011 7:11 PM (in response to Ned Murphy) Hello Ned, Thanks very much for answering me. As you say, I've viewed the help documents and reviewed all of the DataGrid's properties, and I found the verticalScrollPolicy property and I've set aDg.horizontalScrollPolicy = ScrollPolicy.OFF; but it shows me this error: "1119: Access to a possibly undefined ScrollPolicy property through a reference with static type fl.controls:DataGrid." I think the error should not occur because it is a property, isn't it?

3. Re: Scrollbar of Datagrid Flash as3 Joselyn6 Nov 15, 2011 12:53 PM (in response to Joselyn6) Hello Ned, Please excuse me because I made a mistake. I wanted to say "aDg.verticalScrollPolicy = ScrollPolicy.OFF" instead of "aDg.horizontalScrollPolicy = ScrollPolicy.OFF". Thanks in advance, Ned. Regards Joselyn

4. Re: Scrollbar of Datagrid Flash as3 Ned Murphy Nov 15, 2011 1:33 PM (in response to Joselyn6) You might need to import the ScrollPolicy class before you can use it...

import fl.controls.ScrollPolicy;
5. Re: Scrollbar of Datagrid Flash as3 Joselyn6 Nov 16, 2011 10:40 PM (in response to Ned Murphy) Hello Ned, I did it, I imported fl.controls.ScrollPolicy and it doesn't throw the error anymore, thank you, but the datagrid does not display the list. Do you have any idea why it doesn't work? Here is the code you advised:

aDg.height = aDg.headerHeight + aDg.length * aDg.rowHeight;
aDg.verticalScrollPolicy = ScrollPolicy.OFF;

Thanks for any help. Regards Joselyn
https://forums.adobe.com/thread/923776
How to Calculate Levenshtein Distance in Java? Last modified: November 3, 2018 1. Introduction In this article, we describe the Levenshtein distance, alternatively known as the Edit distance. The algorithm explained here was devised by a Russian scientist, Vladimir Levenshtein, in 1965. We'll provide an iterative and a recursive Java implementation of this algorithm. 2. What is the Levenshtein Distance? The Levenshtein distance is a measure of dissimilarity between two Strings. Mathematically, given two Strings x and y, the distance measures the minimum number of character edits required to transform x into y. Typically three types of edits are allowed: - Insertion of a character c - Deletion of a character c - Substitution of a character c with c' Example: If x = 'shot' and y = 'spot', the edit distance between the two is 1 because 'shot' can be converted to 'spot' by substituting 'h' with 'p'. In certain sub-classes of the problem, the cost associated with each type of edit may be different. For example, a lower cost for substitution with a character located nearby on the keyboard and a higher cost otherwise. For simplicity, we'll consider all costs to be equal in this article. Some of the applications of edit distance are: - Spell Checkers – detecting spelling errors in text and finding the closest correct spelling in a dictionary - Plagiarism Detection (refer – IEEE Paper) - DNA Analysis – finding similarity between two sequences - Speech Recognition (refer – Microsoft Research) 3. Algorithm Formulation Let's take two Strings x and y of lengths m and n respectively. We can denote each String as x[1:m] and y[1:n]. We know that at the end of the transformation, both Strings will be of equal length and have matching characters at each position. So, if we consider the first character of each String, we've got three options: - Substitution: - Determine the cost (D1) of substituting x[1] with y[1]. The cost of this step would be zero if both characters are the same.
If not, then the cost would be one. - After step 1.1, we know that both Strings start with the same character. Hence the total cost would now be the sum of the cost of step 1.1 and the cost of transforming the rest of the String x[2:m] into y[2:n]. - Insertion: - Insert a character in x to match the first character in y; the cost of this step would be one. - After 2.1, we have processed one character from y. Hence the total cost would now be the sum of the cost of step 2.1 (i.e., 1) and the cost of transforming the full x[1:m] to the remaining y (y[2:n]). - Deletion: - Delete the first character from x; the cost of this step would be one. - After 3.1, we have processed one character from x, but the full y remains to be processed. The total cost would be the sum of the cost of 3.1 (i.e., 1) and the cost of transforming the remaining x to the full y. The next part of the solution is to figure out which option to choose out of these three. Since we do not know which option would lead to the minimum cost at the end, we must try all options and choose the best one. 4. Naive Recursive Implementation We can see that the second step of each option in section #3 is mostly the same edit distance problem, but on sub-strings of the original Strings. This means after each iteration we end up with the same problem but with smaller Strings. This observation is the key to formulating a recursive algorithm.
The recurrence relation can be defined as:

D(x[1:m], y[1:n]) = min {
    D(x[2:m], y[2:n]) + Cost of Replacing x[1] with y[1],
    D(x[1:m], y[2:n]) + 1,
    D(x[2:m], y[1:n]) + 1
}

We must also define base cases for our recursive algorithm, which in our case is when one or both Strings become empty: - When both Strings are empty, the distance between them is zero - When one of the Strings is empty, the edit distance between them is the length of the other String, as we need that many insertions/deletions to transform one into the other: - Example: if one String is "dog" and the other String is "" (empty), we need either three insertions in the empty String to make it "dog", or three deletions in "dog" to make it empty. Hence the edit distance between them is 3. A naive recursive implementation of this algorithm:

```java
import java.util.Arrays;

public class EditDistanceRecursive {

    static int calculate(String x, String y) {
        if (x.isEmpty()) {
            return y.length();
        }
        if (y.isEmpty()) {
            return x.length();
        }

        int substitution = calculate(x.substring(1), y.substring(1))
          + costOfSubstitution(x.charAt(0), y.charAt(0));
        int insertion = calculate(x, y.substring(1)) + 1;
        int deletion = calculate(x.substring(1), y) + 1;

        return min(substitution, insertion, deletion);
    }

    public static int costOfSubstitution(char a, char b) {
        return a == b ? 0 : 1;
    }

    public static int min(int... numbers) {
        return Arrays.stream(numbers)
          .min().orElse(Integer.MAX_VALUE);
    }
}
```

This algorithm has exponential complexity. At each step, we branch off into three recursive calls, building an O(3^n) complexity. In the next section, we'll see how to improve upon this.
Let's look at some of the sub-problems (according to the recurrence relation defined in section #4):
- Sub-problems of D(x[1:m], y[1:n]) are D(x[2:m], y[2:n]), D(x[1:m], y[2:n]) and D(x[2:m], y[1:n])
- Sub-problems of D(x[1:m], y[2:n]) are D(x[2:m], y[3:n]), D(x[1:m], y[3:n]) and D(x[2:m], y[2:n])
- Sub-problems of D(x[2:m], y[1:n]) are D(x[3:m], y[2:n]), D(x[2:m], y[2:n]) and D(x[3:m], y[1:n])

In all three cases, one of the sub-problems is D(x[2:m], y[2:n]). Instead of calculating this three times as we do in the naive implementation, we can calculate it once and reuse the result whenever it is needed again.

This problem has a lot of overlapping sub-problems, but if we know the solution to the sub-problems, we can easily find the answer to the original problem. Therefore, we have both of the properties needed for formulating a dynamic programming solution: Overlapping Sub-Problems and Optimal Substructure.

We can optimize the naive implementation by introducing memoization, i.e., storing the results of the sub-problems in an array and reusing the cached results. Alternatively, we can also implement this iteratively by using a table-based approach:

static int calculate(String x, String y) {
    int[][] dp = new int[x.length() + 1][y.length() + 1];

    for (int i = 0; i <= x.length(); i++) {
        for (int j = 0; j <= y.length(); j++) {
            if (i == 0) {
                dp[i][j] = j;
            } else if (j == 0) {
                dp[i][j] = i;
            } else {
                dp[i][j] = min(
                    dp[i - 1][j - 1] + costOfSubstitution(x.charAt(i - 1), y.charAt(j - 1)),
                    dp[i - 1][j] + 1,
                    dp[i][j - 1] + 1);
            }
        }
    }

    return dp[x.length()][y.length()];
}

This algorithm performs significantly better than the recursive implementation. However, it involves significant memory consumption. This can be further optimized by observing that we only need the value of three adjacent cells in the table to find the value of the current cell.
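The article shows the table-based variant in Java; the memoized variant it mentions can be sketched briefly as well (Python here for compactness — the recursion mirrors the Java calculate method, with (i, j) indexing the current suffixes instead of building substrings):

```python
from functools import lru_cache

def edit_distance(x, y):
    """Memoized version of the recursive algorithm: same recurrence as the
    naive implementation, but each (suffix of x, suffix of y) pair is
    solved only once, giving O(m*n) time instead of O(3^n)."""
    @lru_cache(maxsize=None)
    def d(i, j):
        if i == len(x):
            return len(y) - j          # base case: insert the rest of y
        if j == len(y):
            return len(x) - i          # base case: delete the rest of x
        substitution = d(i + 1, j + 1) + (0 if x[i] == y[j] else 1)
        insertion = d(i, j + 1) + 1
        deletion = d(i + 1, j) + 1
        return min(substitution, insertion, deletion)

    return d(0, 0)

print(edit_distance("kitten", "sitting"))  # 3
print(edit_distance("dog", ""))            # 3
```

Indexing by position rather than slicing also avoids the repeated String copying that substring-based recursion performs.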
Conclusion

In this article, we described what the Levenshtein distance is and how it can be calculated using a recursive and a dynamic-programming-based approach.

Levenshtein distance is only one of the measures of string similarity; some of the other metrics are Cosine Similarity (which uses a token-based approach and considers the strings as vectors), the Dice Coefficient, etc.

As always, the full implementation of the examples can be found over on GitHub.
https://www.baeldung.com/java-levenshtein-distance
import sys    # import the system-defined sys module -- you can use the
              # functions in sys as long as you prefix them with sys.

import foo    # import foo.py. You can use the functions in foo as long as
              # you prefix them with foo.

from foo import min   # import the min function from foo into the current
                      # namespace. You can use min without qualification

from foo import *     # import all functions from foo into the current
                      # namespace. You can use all functions in foo without
                      # qualification

import sounds.effects.echo    # import echo.py from the sounds.effects package.
                              # You can use the functions in echo as long as
                              # you prefix them with the full package name
                              # sounds.effects.echo.

from sounds.effects import echo   # You can use the functions in echo as
                                  # long as you prefix them with echo (you no
                                  # longer need the fully qualified package name)

from sounds.effects import *  # assume that effects is a directory. This
                              # command does not work like you expect. See the
                              # packages section below for details

silhouette/
    __init__.py
    graphics/
        __init__.py
        rect.py
        circle.py
    events/
        __init__.py
        mouse.py
        keyboard.py

If you want the command from silhouette.events import * to import mouse.py and keyboard.py, then the __init__.py file for events should be:

__all__ = ["mouse", "keyboard"]

Even after importing mouse and keyboard, you will still have to access their functions by qualifying them with either mouse or keyboard.
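A quick way to see this behavior is to build the package from the notes on the fly. This is a sketch: the handle() function bodies are placeholders, not part of the original notes.

```python
# Build the silhouette/events package in a temp directory, then show that
# "from silhouette.events import *" binds exactly the submodules listed in
# __all__ -- and that their functions still need qualification.
import os
import sys
import tempfile

root = tempfile.mkdtemp()
events_dir = os.path.join(root, "silhouette", "events")
os.makedirs(events_dir)

open(os.path.join(root, "silhouette", "__init__.py"), "w").close()
with open(os.path.join(events_dir, "__init__.py"), "w") as f:
    f.write('__all__ = ["mouse", "keyboard"]\n')
for name in ("mouse", "keyboard"):
    with open(os.path.join(events_dir, name + ".py"), "w") as f:
        f.write("def handle():\n    return '%s event'\n" % name)

sys.path.insert(0, root)
ns = {}
exec("from silhouette.events import *", ns)

# Only the names in __all__ were bound (plus interpreter bookkeeping):
imported = sorted(n for n in ns if not n.startswith("__"))
print(imported)                      # ['keyboard', 'mouse']
# ...and handle() must still be qualified with its module name:
print(ns["mouse"].handle())          # mouse event
```

Without the __all__ line in events/__init__.py, the star import would bind no submodules at all.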
http://web.eecs.utk.edu/~bvz/teaching/cs365Sp16/notes/Python/PythonModules.html
How do I create a virtual balance model in Rails?

This is for a game application I'm creating. I have a user model, a stock model and a UserStock join table for the previous two models. User was created with Devise. I intend to attach a certain balance ($100,000) that can be used to buy and sell stocks. What's the most efficient way I can get this done? Any guides that can help me with this? Also, anything that can help me configure the buy and sell actions? (Simple action of incrementing the balance when a stock is sold by a user, or decrementing the balance when a stock is bought.)

I would advise a transactions (or similar) model. You can then create a record with a "type" of "credit" of 100,000. You can then create a transaction for each purchase and sell event to track how that balance moves. This will give you nice auditability into how the balance changed and why. It is also very easy to get the balance in one query: user.transactions.sum(:amount).

As far as performing the stock sale itself along with the balance update... it sounds like you have a classic service class situation. Example below.

class StockTransfer
  def initialize(user:, stock:, quantity:)
    @user = user
    @stock = stock
    @quantity = quantity
  end

  def call
    # insert logic here for the stock transfer

    # logic for the transaction
    @user.transactions.create!(
      type: (quantity > 0 ? 'credit' : 'debit'),
      event: (quantity > 0 ? 'purchase' : 'sale'),
      stock: @stock
    )
  end
end

Hey, thanks for the reply. Should I add a "balance" column to my Devise user model? Because I intend to show users their remaining balance on their profile.

Currently what I've done is: added a balance column to my user model, and generated a model called AccountTransactions (transaction_type, amount, user_id and transaction_number). I have also created an "operations" folder (the part that you've called a "service class") and have added your code. How do I proceed from here?
Also, could you elaborate on "events" and how to go about them?

EDIT: updating my code (this is from my operations/execute_transactions.rb service class file). Not sure why I have the @transaction_type instance variable specified; not sure what to do with it since it's already being specified within my execute method.

class StockTransfer
  def initialize(user:, amount:, transaction_type:, quantity:, transaction_number:, stock:)
    @user = user
    @amount = amount
    @transaction_type = transaction_type
    @quantity = quantity
    #@transaction_number = transaction_number
    @stock = stock
  end

  def execute
    # logic for the transaction
    @user.transactions.create!(
      # type: (quantity > 0 ? 'credit' : 'debit'),
      event: (quantity > 0 ? 'purchase' : 'sale'),
      stock: @stock,
      transaction_type: (quantity > 0 ? 'credit' : 'debit')
    )

    # insert logic here for the stock transfer
    if transaction_type == "credit"
      @user.update!(balance: @user.balance + @amount)
    elsif transaction_type == "debit"
      @user.update!(balance: @user.balance - @amount)
    end
  end
end

One more question: how do I default the credited balance amount to 100000 for all users initially?

You are real close. If you re-read my post I answer your last question ;) It is generally not recommended to store the balance as a single field on user because of read/write concurrency issues. The sum method I gave you would work perfectly for current balance purposes. Everything else for the most part looks fine, you just have some honing to do (adapting the cargo-culting). Feel free to hit me up in the GoRails Slack community, my username is caseyprovost. I would be happy to pair review with you via Screenhero or similar.

Hi. Unfortunately I have not subscribed to GoRails; how do I get access to the Slack forum? Is there any other way I can contact you? (Maybe Skype?)

I sent you a request.

Although it does seem correct, nothing seems to be happening when I try to update AccountTransaction.create("enter parameters here"). user balance still remains as "nil".
After creating some basic associations for my AccountTransaction model and moving into my console, I type:

User.first.account_transactions.create(amount: "1000", transaction_type: "credit", transaction_number: "1")

With my transaction logic, technically this should credit 1000 to my user balance, correct? But User.first.balance still returns nil.

You seem pretty close. balance should be a method on user. Something like this:

def balance
  account_transactions.sum(:amount)
end

You also want to make sure the record was created via User.first.account_transactions.count. If it is > 0 then you should be all set. I hit you up on Skype too so we can chat a bit faster there and iterate, if that works for you.

Hi everyone: I am having similar challenges regarding the creation of a payment system with a type of credits (simple virtual money). Regarding the balance, I am using this method in the User model and it is working fine — just in case Rohan has some issues with Casey's suggestion:

before_validation :load_defaults

def load_defaults
  if self.new_record?
    self.balance = 1000
  end
end

This said, I have a question: I am trying to have the amount of @order passed on to update the balance. In my models both are integers. I am trying this from my orders_controller but it doesn't work. Do you have any suggestions? Do you need to see more code? Any comment is welcome!!

class OrdersController < ApplicationController
  def create
    @order = current_user.orders.create(order_params)
    @user.balance = balance: (@user.balance += @order.price)
    redirect_to user_orders_path
  end

  private

  def order_params
    params.require(:order).permit(:price, :user_id)
  end
end

I was trying as well with this different approach, with code similar to the code Rohan used. But it's not working. Any suggestions?
class OrdersController < ApplicationController
  def create
    @order = current_user.orders.create(order_params)
    redirect_to user_orders_path
  end

  def balance
    if @order.save
      @user.balance_update!(balance: @user.balance + @order.price)
    end
  end

  private

  def order_params
    params.require(:order).permit(:price, :user_id)
  end
end

All of you are attempting to re-implement book-keeping from first principles. When what you need is Plutus.
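The ledger idea the first answer describes — never store the balance as a column, always derive it from the transaction history — is framework-independent. A minimal sketch of it in plain Python (class and method names are illustrative, not Rails or Plutus API):

```python
# Minimal ledger sketch: the balance is never a stored field; it is always
# the sum of the signed transaction amounts. This keeps the history
# auditable and avoids read/write races on a single balance column.
class Account:
    def __init__(self, opening_credit=100_000):
        # seed the account with the initial game credit as a transaction
        self.transactions = [("credit", "opening", opening_credit)]

    def record(self, event, amount):
        """amount > 0 is a credit (e.g. a sale), amount < 0 a debit (a purchase)."""
        kind = "credit" if amount > 0 else "debit"
        self.transactions.append((kind, event, amount))

    @property
    def balance(self):
        # the equivalent of user.transactions.sum(:amount)
        return sum(amount for _, _, amount in self.transactions)


acct = Account()
acct.record("purchase", -250)   # buy a stock for 250
acct.record("sale", 400)        # sell a stock for 400
print(acct.balance)             # 100150
```

Deriving the balance this way also answers the "default to 100000" question: the opening credit is just the first transaction, not a column default.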
https://gorails.com/forum/how-do-i-create-a-virtual-balance-model-in-rails
Versions of WMI in Windows 2000 and earlier are compliant with the CIM 2.0 specification. Qualifiers have the following limitations: The following table lists WMI standard qualifiers.

Applies to: classes, associations, indications
Indicates whether the class is abstract and serves only as a base for new classes. The default is FALSE. You cannot create instances of abstract classes. The absence of this qualifier indicates that the class is not abstract; therefore, this qualifier is required for all abstract classes.

Applies to: references
Indicates whether the reference is the parent component of an aggregation association. The default is FALSE.
Usage: The Aggregation and Aggregate qualifiers are used together – Aggregation qualifies the association, and Aggregate specifies the parent reference.

Applies to: associations
Indicates whether the association is an aggregation. The default is FALSE. Used with Aggregate. This qualifier is required for all aggregation associations.

Applies to: properties, references, methods
Alternate name for a property or method in the schema. The default is NULL.

Applies to: properties, parameters
Type of the qualified array. Valid values are:
Usage: Apply this type of qualifier only to properties and parameters that are arrays (defined by using bracket syntax).

Applies to: properties, methods, parameters
Map of significant bit positions where each significant position can be "on" or "off". Each "on" bit maps to a corresponding value in the BitValues array. By having multiple bits "on", multiple concurrent values in the BitValues array are indicated. The default is NULL. For more information, see BitMap and BitValues.

Translation of a bit position value into an associated string. The default is NULL.

Applies to: methods
Indicates whether the method creates instances. These methods are not constrained to acting on a single instance or a single class. For example, a constructor can create association instances as well as instances of the class that defines the constructor. The Constructor qualifier is intended for information only, and it is not expected that it is acted on by the object manager. The object manager does not have to call constructor methods when an object is created. Also, when a constructor is called, the object manager does not have to invoke any constructor methods defined for any parent class of the original class. The default is FALSE.

Applies to: classes
Name of the method by which instances of this class are created. The value is either "PutInstance" or the name of another method that creates the instances. The default is NULL.
Usage: This qualifier can only be used if the SupportsCreate qualifier is present.

Name of the method by which instances of this class are deleted. The value is either "DeleteInstance", or the name of another method that deletes the instances. The default is NULL.
Usage: This qualifier can only be used if the SupportsDelete qualifier is present.

Applies to: any
Description of a named element. The default is NULL.

Indicates whether the method deletes instances. Methods using the Destructor qualifier delete the instance(s) to which the destructor is applied, and are not constrained to acting on a single instance or class. For example, a destructor might delete association instances as well as instances of the class that defines the destructor. The Destructor qualifier is intended for information only, and it is not expected that it is acted on by the object manager. There is no obligation for the object manager to call a method that has the Destructor qualifier when an instance is deleted. Also, when a destructor is called, the object manager does not have to invoke any destructor methods defined for any parent class of the original class. The default is FALSE.

Name displayed in the UI instead of the actual name of the element. The default is NULL.
Indicates whether the property represents a non-negative integer, which can increase or decrease, but never exceed a maximum value. The default is FALSE. The maximum value of the property cannot be greater than 2^n - 1. N can be 8, 16, 32, or 64 depending on the data type of the property to which this qualifier is applied. The value of a gauge has its maximum value whenever the information being modeled is greater than or equal to that maximum value. If the information being modeled subsequently decreases below the maximum value, the gauge also decreases. This qualifier is applicable only to properties with an unsigned integer data type.

Applies to: parameters
Indicates whether the parameter is used to pass values to a method. The default is TRUE.

Indicates whether the parameter is both an input and output parameter.

Applies to: properties, references.

Applies to: properties
Indicates that the property is expensive to return and requires a lot of processor time and memory. WMI improves the performance of queries by not attempting to return the properties marked with the Lazy qualifier. Because WQL queries that contain lazy properties can fail, properties marked with the Lazy qualifier are not returned by SQLColumns.

Applies to: classes, properties, associations, indications, references
Set of values that indicate a path to a location where you can find more information about the origin of a property, class, association, indication, or reference. The mapping string can be a directory path, a URL, a registry key, an include file, a reference to a CIM class, or some other format. The default is NULL.

Maximum number of values a given reference can have for each set of other reference values in the association. The default is NULL. For example, if an association relates A instances to B instances and there must be at most one A instance for each B instance, then the reference to A should have a maximum of one qualifier.

Maximum length (in characters) of a string data item; also indicates support of fixed-length arrays. If a fixed-length array is encountered, the MaxLen qualifier contains the fixed length found during parsing. If a variable-length array is encountered, then this qualifier is not used. MaxLen is used to suggest the maximum number of elements that should be stored in an array. When overriding the default value, any unsigned integer value (uint32) can be specified. A value of NULL (default) implies unlimited length.

Maximum value of the object. The default is NULL.

Minimum cardinality of the reference (the minimum number of values a given reference can have for each set of other reference values in the association). The default is 0. For example, if an association relates A instances to B instances, and there must be at least one A instance for each B instance, then the reference to A should have a minimum of one qualifier.

Indicates the minimum value of the object. The default is NULL.

Set of values that indicate correspondence between an object's property and other properties in the CIM schema. The default is NULL. Object properties are identified using the following syntax:

<schema name> "_" <class or association name> "." <property name>

Location of an instance, the value of which is <namespacetype>://<namespacehandle>. The default is NULL.
Usage: This qualifier cannot be used with the NonlocalType qualifier.

Type of location of an instance. Its value is <namespacetype>. The default is NULL.
Usage: This qualifier cannot be used with the Nonlocal qualifier.

Value that indicates that the associated property is NULL (the property does not have a valid or meaningful value). The default is NULL. The conventions and restrictions used for defining NULL values are the same as those applicable to the ValueMap qualifier. Note this qualifier cannot be overridden. It is unreasonable to permit a subclass to return a different NULL value than that of the parent class.
Indicates whether the parameter returns values from a method. The default is FALSE.

Applies to: properties, methods, references
Parent class or subordinate construct (property, method, or reference) which is overridden by the property, method, or reference of the same name in the derived class. The default is NULL. The format is:

[<class>.]<subordinate construct>

If the class name is omitted, the override applies to the subordinate construct in the parent class in the inheritance tree.
Usage: The Override qualifier can refer to constructs based on the same meta model only. It is not allowed to change a construct name or signature during an override operation.

Indicates whether the property value on a subclass overrides the value in a parent class. The functional implication is that, if you perform a query against the parent class, and if your WHERE clause includes this property, the parent must return an instance with the overridden value. As a result, Windows Management adjusts the WHERE clause of the query sent to the parent class to exclude references to this property.

Name of the key being propagated. The default is NULL. Use of this qualifier assumes the existence of only one Weak qualifier on a reference that has the containing class as its target. The associated property must have the same value as the property named by the qualifier in the class on the other side of the weak association. The format is:
Usage: When the Propagated qualifier is used, the Key qualifier must be specified with a value of TRUE.

Indicates whether the property is readable. The default is TRUE.
Windows XP: Modifying a read-only property will not return an error and does not modify the property.

Indicates whether a non-null value is required for the property. The default is FALSE.

Applies to: classes, associations, indications, schemas
Minor revision number of the schema object. The default is NULL.
Usage: The Version qualifier must be present to supply the major version number when the Revision qualifier is used.

Applies to: properties, methods
Name of the schema in which the feature is defined. The default is NULL.

Applies to: classes, associations, indications, references
Location of an instance. The default is NULL. The qualifier's value is <namespacetype>://<namespacehandle>.
Usage: The Source qualifier cannot be used with the SourceType qualifier.

Type of location of an instance. The value of this qualifier is <namespacetype>. The default is NULL.
Usage: The SourceType qualifier cannot be used with the Source qualifier.

Indicates whether the class supports the creation of instances. The default is FALSE.

Indicates whether the class supports the deletion of instances. The default is FALSE.

Indicates whether the class supports the modification (updating) of instances. The default is FALSE.

Indicates whether the class can have subclasses. The default is FALSE. If a subclass is declared, the compiler generates an error.
Usage: This qualifier cannot coexist with the Abstract qualifier. If both the Terminal and Abstract qualifiers are specified, the compiler generates an error.

Type of unit in which the associated data item is expressed. The default is NULL. For example, a size data item might have a value of "bytes" for Units.

Set of permissible values for a property, method return type, or method parameter. The default is NULL.
Usage: This qualifier can be used alone or in combination with the Values qualifier. When used in combination with the Values qualifier, the location of the value in the ValueMap array provides the location of the corresponding entry in the Values array. Use the ValueMap qualifier only with string and integer values. The syntax for representing an integer value in the value map array is [+|-]digit[*digit]. The content, maximum number of digits, and represented value are constrained by the type of the associated property.
For example, uint8 may not be signed, must be less than four digits, and must represent a value less than 256.

Set of values translating an integer value into an associated string. The default is NULL. This property also specifies an array of string values to be mapped to an enumeration property. This qualifier can be applied to either an integer property or a string property, and the mapping can be implicit or explicit. If the mapping is implicit, integer or string property values represent ordinal positions in the Values array. If the mapping is explicit, the property must be an integer, and valid property values are listed in the array defined by the ValueMap qualifier. For more information, see Value Map. If a ValueMap qualifier is not present, the Values array is indexed (zero-relative) by using the value in the associated property, method return type, or method parameter. If a ValueMap qualifier is present, the values index is defined by the location of the property value in the value map.

Applies to: classes, schemas, associations, indications
Major version number of the schema object. The default is NULL. The version number is incremented when changes are made to the schema that alter the interface.

Indicates whether the keys of the referenced class include the keys of the other participants in the association. The default is FALSE. This qualifier is used when the identity of the referenced class depends on the identity of the other participants in the association. No more than one reference to any given class can be weak. The other classes in the association must define a key. The keys of the other classes in the association are repeated in the referenced class and are tagged with a Propagated qualifier.

Indicates that applications or scripts can change the property value. The account that runs the application must have access to the namespace that contains instances of the class. The provider implementation may also limit access to provider data. A value of TRUE indicates that the property is readable and writeable by consumers that are allowed access by WMI and the provider. The default is FALSE. A property that lacks the Write qualifier may still be writeable. The provider implementation may allow any properties in the provider classes to be changed, whether or not the Write qualifier is present.
Windows XP: Modifying a read-only property does not return an error and does not modify the property.

Indicates whether the property is writeable at instance creation. This qualifier may be used in conjunction with the WriteAtUpdate qualifier. The default is FALSE.

Indicates whether the property is writeable at instance update. This qualifier may be used in conjunction with the WriteAtCreate qualifier. The default is FALSE.

Build date: 6/15/2009
http://msdn.microsoft.com/en-us/library/aa393650(VS.85).aspx
Step 4: Add a Movie to the DynamoDB Table

In this step of the Microsoft .NET and DynamoDB Tutorial, you add a new movie record to the Movies table in Amazon DynamoDB.

The Main function in DynamoDB_intro starts by creating a DynamoDB document model Document and then waits on WritingNewMovie_async, which is implemented in the 04_WritingNewItem.cs file.

using System;
using System.Threading.Tasks;
using Amazon.DynamoDBv2.DocumentModel;

namespace DynamoDB_intro
{
  public static partial class Ddb_Intro
  {
    /*--------------------------------------------------------------------------
     *                            WritingNewMovie
     *--------------------------------------------------------------------------*/
    public static async Task WritingNewMovie_async( Document newItem )
    {
      operationSucceeded = false;
      operationFailed = false;

      int year = (int) newItem["year"];
      string name = newItem["title"];

      if( await ReadingMovie_async( year, name, false ) )
        Console.WriteLine( "  The {0} movie \"{1}\" is already in the Movies table...\n" +
                           "  -- No need to add it again... its info is as follows:\n{2}",
                           year, name, movie_record.ToJsonPretty( ) );
      else
      {
        try
        {
          Task<Document> writeNew = moviesTable.PutItemAsync( newItem, token );
          Console.WriteLine( "  -- Writing a new movie to the Movies table..." );
          await writeNew;
          Console.WriteLine( "  -- Wrote the item successfully!" );
          operationSucceeded = true;
        }
        catch( Exception ex )
        {
          Console.WriteLine( "  FAILED to write the new movie, because:\n       {0}.", ex.Message );
          operationFailed = true;
        }
      }
    }
  }
}

WritingNewMovie_async begins by checking whether the new movie has already been added to the Movies table. If it has not, it waits for the DynamoDB Table.PutItemAsync method to add the new movie record.

Next Step

Step 5: Read and Display a Record from the DynamoDB Table
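Stripped of the SDK plumbing, WritingNewMovie_async is a read-then-conditional-write pattern. A sketch of that control flow, with a plain dict standing in for the Movies table (this is an illustration of the logic, not the DynamoDB API):

```python
# Read-then-write sketch: only insert the movie if it is not already
# present. A dict keyed by (year, title) stands in for the Movies table;
# the function mirrors the control flow of WritingNewMovie_async.
def write_new_movie(table, new_item):
    key = (new_item["year"], new_item["title"])
    if key in table:                 # the ReadingMovie_async check
        print("already in the Movies table, not adding again")
        return False
    table[key] = new_item            # the PutItemAsync step
    print("wrote the item successfully")
    return True

movies = {}
item = {"year": 2015, "title": "The Big New Movie", "info": {"rating": 0}}
print(write_new_movie(movies, item))   # True  (first write succeeds)
print(write_new_movie(movies, item))   # False (duplicate is skipped)
```

The read-first step matters because, as I understand DynamoDB's semantics, a plain PutItem silently replaces any existing item with the same key; the tutorial's check is what prevents an accidental overwrite.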
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GettingStarted.NET.04.html
Widget and requests module

Hi, I was trying to check whether it is possible to use the requests module in a widget, but somehow the widget displays an error saying that there was an error loading the widget. I thought the problem was with my widget, but I get the same error if I add the requests module to the widget example "Launcher" that comes with Pythonista. Is this not supported?

To overcome this problem I was trying to run a script, called from the widget itself, that uses the requests module. Does anyone know how to do it without using webbrowser.open, which opens Pythonista in the foreground?

@djorge webbrowser is not allowed in appex mode. Try, instead of webbrowser.open:

def button_action(self, sender):
    #import webbrowser
    #webbrowser.open(sender.name)
    from objc_util import UIApplication, nsurl
    app = UIApplication.sharedApplication()
    urlapp = nsurl(sender.name)
    app.openURL_(urlapp)

where I've tried 'pythonista3://script_name?action=run' and it is ok, but it also starts Pythonista.

@cvp That's weird. I'm importing appex and using webbrowser.open with that URL and it doesn't complain.

@djorge Sorry, but found in Pythonista doc
https://forum.omz-software.com/topic/4879/widget-and-requests-module/1
john larson 16,593 Points

Number game extra credit. I think I got it. It works anyway.

import random

def game():
    def pc_guess(a, b):
        return random.randint(a, b)

    a = 1   # default value for a
    b = 10  # default value for b
    my_num = input("Enter a number 1 - 10: ")
    try:
        my_num = int(my_num)
    except ValueError:
        print("ERROR: enter a number 1 - 10, ie: 1,2,3...")
        game()  # restart game if error is raised
        return
    # if no error, run the following code
    times = 0  # just to track how many loops run
    while True:
        times += 1           # each iteration add 1
        rn = pc_guess(a, b)  # call the random number fn, default values
        print("try {}, pc guess is {}".format(times, rn))  # show iteration and pc guess
        print("a = {} and b = {}".format(a, b))  # show the current value of a and b
        if rn < my_num:
            a += 1  # raise lower param if the pc guess is low
        elif rn > my_num:
            b -= 1  # lower upper param if pc guess is high
        else:  # if it's not < or >
            print("{}, Everyones a winner!".format(rn))
            break
    replay = input("Play again? Y/n ")
    if replay != "n":
        game()
    else:
        print("Thank you for playing.")

game()
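The loop above only tightens the range by one step per miss (a += 1 / b -= 1), even though a wrong guess g rules out everything on its side of the target. Tightening to g + 1 / g - 1 uses all of that information. A standalone sketch of that variant (not part of the original post):

```python
import random

def count_guesses(target, low=1, high=10):
    """Randomly guess within [low, high], shrinking the range around the
    target after each miss; returns how many guesses were needed."""
    tries = 0
    while True:
        tries += 1
        g = random.randint(low, high)
        if g < target:
            low = g + 1      # everything up to g is ruled out
        elif g > target:
            high = g - 1     # everything from g upward is ruled out
        else:
            return tries

random.seed(1)
print([count_guesses(n) for n in range(1, 11)])
```

Since every miss removes at least one candidate and the target always stays inside [low, high], a range of 10 numbers is found in at most 10 guesses.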
https://teamtreehouse.com/community/number-game-extra-credit-i-think-i-got-it-it-works-anyway
Thank you Michael! With CMake and the instructions here I was able to build gr successfully :) Now to wait for my rtl-sdr stick.. hi! :-)

73 Mikko, OH2FXD

On Mar 31, 2012, at 4:57 PM, Michael Dickens wrote:

> Mikko - It looks like you're using the GNU Autotools for building, yes? If
> so, try using CMake instead; 'volk' in particular compiles much more robustly
> using CMake. I don't know that I've ever gotten volk to compile using GNU
> Autotools, but I've almost never had an issue when using CMake. Hope this
> helps! - MLD
>
> On Mar 31, 2012, at 9:36 AM, Mikko Lähteenmäki wrote:
>
>> Hello, I have the following environment:
>>
>> Newest Xcode + homebrew port for other needed software/libs.
>> gr is installed via SVN.
>>
>> When running make I get this problem which I can't get past:
>>
>> Making all in hid
>> Makefile:935: *** extraneous `else'. Stop.
>> make[5]: *** [all-recursive] Error 1
>> make[4]: *** [all] Error 2
>> make[3]: *** [all-recursive] Error 1
>> make[2]: *** [all] Error 2
>> make[1]: *** [all-recursive] Error 1
>> make: *** [all] Error 2
>>
>> and make check produces this error message:
>>
>> dyld: Symbol not found: _volk_machine_sse2_32
>> Referenced from: /Users/ifreq/gnuradio/volk/lib/.libs/libvolk.0.dylib
>> Expected in: flat namespace
>> in /Users/ifreq/gnuradio/volk/lib/.libs/libvolk.0.dylib
>> /bin/sh: line 1: 65660 Trace/BPT trap: 5 ${dir}$tst
>>
>>
>> Waiting for possible fix, thanks in advance! :-)
>
>
> _______________________________________________
> Discuss-gnuradio mailing list
> address@hidden
>
https://lists.gnu.org/archive/html/discuss-gnuradio/2012-03/msg00585.html
Posts by user mahesh.sayibabu:

Hey Guys, we understand your concerns and are trying to get a single document file. Patience please :) The purpose of the online doc was to reduce the size of the installer.

This works fine for me. I tested this with PyS60 1.9.7 and a 5800XM. Can you test this again with 1.9.7? Please log a bug here with screenshots and device info (sw version, locale etc.), if you still...

Can you try with another file? Just to rule out that the issue is with the file.

Try if this works:

s = audio.Sound.open(u"E:\\TEST.WAV")
s.record()
time.sleep(5)
s.stop()

Try with the latest PyS60 releases.

It is currently not in our product backlog. Please file an enhancement request for the same.

I don't see any reason for this behavior. Anyway, try the following (ensure the phone is set with the current date): try installing PyS60 1.9.6 and check if it installs [OR/AND] Update...

Did you mean "StringIO"?

The reason is that since ensymble is squeezed with Python 2.5.1, it works only with those Python versions that are byte-code compatible with it.

OpenC API's mktime is fine-tuned now, and hopefully you should see the difference with the next OpenC release.

Strange! 'import _example' should have worked. Hope the PYD is packaged and installed on the device. Try to compile and import the elemlist module, present under extras\elemlist of the PyS60 source...

Change this import statement in the code: replace import socket with import btsocket, and change all usages of the socket module to btsocket.

Check with this code. It is tested on an E71 with PyS60 1.9.6. The directory, file and the file contents are created as expected:

import codecs
import os, os.path
dir_path =...
What is the actual problem, are you not able to install the SIS file or not able to launch the installed application?

1. example.c, replace these lines:

DL_EXPORT(void) initexample(void)
{
    Py_InitModule("example", example_methods);
}

with,

Also, the PYD name should be in a specific format: kf_<module_name>.pyd. For example, if your module name is `sdk12graphics', the pyd name in the mmp file should be as follows: .MMP file ...

Are you sure that it has created the C:\MyApp folder instead of C:\Data\MyApp? You can check using:

import os
os.listdir('c:')
os.listdir('c:\\Data')

Some file browsers might show 'c:\Data'...

I did not find any issue in executing the code. Which PyS60 version did you try with and on which device? I tried with PyS60 1.9.6 on E71. Also, did you install the ssl SIS available with the...

Since you are developing an extension for PyS60 1.9.6, you should not link against `python222.lib`. Remove this entry from the MMP file. Also, wondering why you need to link against...

We will look into this to find the actual cause of this. Thanks for reporting the issue.

New 1.9.6 runtime available now. With this, it's possible to install the 1.9.6 runtime on S60 3rdEd FP2 devices also.

You can try "SISContents". It's a pretty cool tool!

You don't have to include the PIPS sis file. The latest runtime sis (1.9.6) has it embedded. I was able to successfully create a merged sis for Scriptshell. See the package file below. Just replace...

We have found the cause of this behavior and it will be fixed soon.
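One of the replies above includes a truncated codecs snippet for creating a directory and a file with Unicode contents. The original path and contents were elided from the thread, so the sketch below uses made-up names purely to illustrate the pattern (create the directory if needed, then write and read the file through an explicit codec):

```python
import codecs
import os
import os.path

# Hypothetical names; the poster's actual path was elided from the thread.
dir_path = u"demo_dir"
file_path = os.path.join(dir_path, u"greeting.txt")

# Create the directory if it does not exist yet
if not os.path.exists(dir_path):
    os.makedirs(dir_path)

# Write non-ASCII text through an explicit UTF-8 codec
with codecs.open(file_path, "w", encoding="utf-8") as f:
    f.write(u"h\u00e9llo")

# Read it back, decoding with the same codec
with codecs.open(file_path, "r", encoding="utf-8") as f:
    print(f.read())
```

On PyS60 the same `codecs.open` call applies; only the drive-letter paths (e.g. `u"e:\\..."`) differ.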
http://developer.nokia.com/Community/Discussion/search.php?s=773484663c48b0cc7d6278df8600de26&searchid=1843356
CC-MAIN-2013-48
refinedweb
647
79.06
#include <sys/stream.h>

int putq(queue_t *q, mblk_t *bp);

Interface level: Architecture independent level 1 (DDI/DKI).

Parameters:
q — Pointer to the queue to which the message is to be added.
bp — Message to be put on the queue.

Description: The putq() function is used to put messages on a driver's queue after the module's put routine has finished processing the message. The message is placed after any other messages of the same priority, and flow control parameters are updated. If QNOENB is not set, the service routine is enabled. If no other processing is done, putq() can be used as the module's put routine.

Return values: The putq() function returns 1 on success and 0 on failure. Upon failure, the caller should call freemsg(9F) to free the pointer to the message block.

Context: The putq() function can be called from user, interrupt, or kernel context.

Examples: See the datamsg(9F) function page for an example of putq().

See also: datamsg(9F), putbq(9F), qenable(9F), rmvq(9F); Writing Device Drivers for Oracle Solaris 11.2; STREAMS Programming Guide
http://docs.oracle.com/cd/E36784_01/html/E36886/putq-9f.html
CC-MAIN-2016-30
refinedweb
172
67.55
Introduction

The popular operating systems have all standardized on using standard input, standard output, and standard error, with file descriptors 0, 1, and 2 respectively. This allows you to pipe inputs and outputs to different locations. Let's look at how to utilize standard input, output, and error in Python. To learn more about piping, redirection, stdin, stdout, and stderr in general, see my tutorial STDIN, STDOUT, STDERR, Piping, and Redirecting.

Basic usage

In Python, sys.stdin, sys.stdout, and sys.stderr are file-like objects that can perform expected operations like read() and write(). Let's look at how to use these objects. Refer to the official sys package documentation for full information.

Standard output

print(x) is basically a shortcut for sys.stdout.write(x + '\n'):

import sys

# Standard output - sys.stdout
print(type(sys.stdout))
sys.stdout.write('Hello\n')

sys.stdout is an io.TextIOWrapper object, so you can read and write to it like a regular file. See the io module documentation for more details about the io.TextIOWrapper class.

To pipe the output of your Python program to a file, you can do it from the shell like this:

python myapp.py > output.txt

Standard error

Standard error works just like standard output and can be used the same way. Standard error has file descriptor 2 where standard output has file descriptor 1. This is beneficial if you want to separate warning and error messages from the actual output of your application. For example, if your program outputs an XML file, you don't want error strings injected in the middle of your XML file.
# Standard error - sys.stderr
print(type(sys.stderr))
sys.stderr.write("Error messages can go here\n")

To pipe standard error from the shell to a file while leaving standard output going to the terminal:

python myapp.py 2>errors.txt

To redirect standard error into standard output, you can do:

python myapp.py 2>&1

Standard input

Standard input defaults to your keyboard in the terminal, but you can also pipe in files or the output from a previous program to your standard input. Here is a basic example of reading one byte from standard input:

# Standard input - sys.stdin
print(type(sys.stdin))
letter = sys.stdin.read(1)  # Read 1 byte
print(letter)
# Can also do things like `sys.stdin.readlines()`

If you want interactive input from the user, it is better to use input() than sys.stdin.read() when asking for user input, but sys.stdin.readlines() can be useful for reading a file that was piped in from the shell like this:

# Feed `input_file.txt` to `sys.stdin` of the Python script
python my_script.py < input_file.txt

To pipe the standard output of one program to the standard input of your Python program, you can do it like this:

cat data.txt | python myapp.py

Dunder properties: sys.__stdin__, sys.__stdout__, sys.__stderr__

The dunder properties sys.__stdin__, sys.__stdout__ and sys.__stderr__ always contain references to the original streams. If you re-assign sys.stdout to point somewhere else, like a StringIO object, you can always assign it back to the original value with the following. Changing sys.stdout to a StringIO object can be useful, especially when unit testing. Check out my tutorial Python Use StringIO to Capture STDOUT and STDERR.
from io import StringIO
import sys

temp_output = StringIO()

# Replace stdout with the StringIO object
sys.stdout = temp_output

# Now, if you print() or use sys.stdout.write(),
# it goes to the StringIO object
print('This is going to the StringIO object.')
sys.stdout.write('This is not going to the "real" stdout, yet')

# Then we can restore the original stdout
sys.stdout = sys.__stdout__

print("Contents of the StringIO object")
print("===============================")
print(temp_output.getvalue())

fileinput.input() shortcut

This function will return standard input separated by line, or, if file names were provided as command-line arguments, it will provide all the lines from the files provided. It is similar to ARGF in Ruby. This gives you the option to pipe in a file from the shell or to provide a list of file paths for input. For example, you can either pipe in files via standard input or provide a list of filenames as arguments to the application:

python my_app.py < file1.txt
python my_app.py file1.txt file2.txt file3.txt

Here is an example of it in a script:

# fileinput_example.py
import fileinput

lines_of_data = fileinput.input()
print(type(lines_of_data))  # fileinput.FileInput

# One option: Join each line together into one long string
print(''.join(lines_of_data))

# Another option: Iterate through each line
# for line in lines_of_data:
#     print(line.strip())

Here is how you can run the program to pipe in files or provide file names:

# Pipe file in via stdin
python fileinput_example.py < file1.txt

# Provide list of files as arguments
python fileinput_example.py file1.txt file2.txt file3.txt

Conclusion

After reading this guide, you should know how to access and read/write from standard input, standard output, and standard error in Python. You should also know how to use a StringIO object to capture output, and use the fileinput.input() function to get data.
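The manual swap-and-restore of sys.stdout shown in the StringIO example can also be written with contextlib.redirect_stdout from the standard library, which restores the original stream automatically even if an exception is raised inside the block:

```python
import contextlib
import io

buffer = io.StringIO()

# Everything printed inside the with-block is captured by `buffer`;
# sys.stdout is restored automatically when the block exits.
with contextlib.redirect_stdout(buffer):
    print("captured line")

print("captured text was:", buffer.getvalue().strip())
```

A matching contextlib.redirect_stderr exists for capturing standard error the same way.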
https://www.devdungeon.com/content/using-stdin-stdout-and-stderr-python
CC-MAIN-2022-40
refinedweb
866
58.79
Prepare an income tax return (with all appropriate forms and schedules) for the Smiths for 2018 following these guidelines:

Alice J. and Bruce M. Smith are married taxpayers who file a joint return. Their Social Security numbers are 123-45-6789 and 111-11-1111, respectively. Alice's birthday is September 21, 1968, and Bruce's is June 27, 1967. They live at 473 Pierre Avenue, Anytown, CA 91850. Alice is the office manager for Dowell Dental Clinic, 733 Some Street, Anytown, CA 91850 (employer identification number 12-7654321). Bruce is the manager of a Super Burgers fast-food outlet owned and operated by Mercury Corporation, 1247 University Avenue, Hauppauge, CA 11788 (employer identification number 11-1111111). The following information is shown on their Wage and Tax Statements (Form W–2) for 2018.

The Smiths provide over half of the support of their two children, Cynthia (born January 25, 1993, Social Security number 123-45-6788) and John (born February 7, 1997, Social Security number 123-45-6786). Both children are full-time students and live with the Smiths except when they are away at college. Cynthia earned $4,200 from a summer internship in 2018, and John earned $3,800 from a part-time job. During 2018, the Smiths provided 60% of the total support of Bruce's widower father, Sam Smith (born March 6, 1939).

The Smiths had the following expenses relating to their personal residence during 2018:

The Smiths had the following medical expenses for 2018:

The medical expenses for Sam represent most of the 60% that Bruce contributed toward his father's support. Other relevant information follows:

• When they filed their 2017 state return in 2018, the Smiths paid additional state income tax of $900.

• During 2018, Alice and Bruce attended a dinner dance sponsored by the Anytown Police Disability Association (a qualified charitable organization). The Smiths paid $300 for the tickets. The cost of comparable entertainment would normally be $50.
• The Smiths contributed $6,000 to Anytown Presbyterian Church and gave used clothing (cost of $1,200 and fair market value of $350) to the Salvation Army. All donations are supported by receipts, and the clothing is in very good condition. • In 2018, the Smiths received interest income of $2,750, which was reported on a Form 1099–INT from Third National Bank. • In December 2018, the Smiths’ dog knocked over a lit candle which caused damage to their living room. The Smiths paid a contractor $15,000 to make repairs but were not reimbursed for any of the damage by their insurance company. • The Smiths do not keep the receipts for the sales taxes they paid and had no major purchases subject to sales tax. • All members of the Smith family had health insurance coverage for all of 2018. • Alice and Bruce paid no estimated Federal income tax. Neither Alice nor Bruce wants to designate $3 to the Presidential Election Campaign!
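The dinner-dance tickets turn on the quid pro quo contribution rule: when a payment to a charity buys something of value, only the excess over the value received is deductible. This is the rule as commonly stated, not tax advice; the sketch below just shows the arithmetic for the $300 tickets with $50 of entertainment value:

```python
def deductible_portion(amount_paid, fair_market_value_received):
    # Only the excess of the payment over the benefit received
    # counts as a charitable contribution; never negative.
    return max(amount_paid - fair_market_value_received, 0)

# Dinner dance: $300 paid, comparable entertainment normally worth $50
print(deductible_portion(300, 50))  # 300 - 50 = 250
```

The same logic gives a zero deduction whenever the benefit received equals or exceeds the payment.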
https://www.transtutors.com/questions/prepare-an-income-tax-return-with-all-appropriate-forms-and-schedules-for-the-smiths-5110811.htm
CC-MAIN-2020-29
refinedweb
488
53.41
Join devRant — search results for "ncurses":

- Cross platform terminal library is just about complete! Here's the same program running in both Windows and "Ubuntu" (WSL, but it's using the ncurses back-end nonetheless). What my library does:
  - Double-buffers the console for drawing (like curses does)
  - Translates input into a standard structure (Linux and Windows have different input systems, obviously)
  - Does the same thing for output
  - Even supports color!
- I love Windows Subsystem for Linux for one reason. #include <ncurses.h> I love making ncurses shit. Fuck. YES.
- I'm a happy programmer. My thing works. ASCII art studio. Running in Linux console using c++ and ncurses. Mouse compatible. CW tech demo 2 coming soon... Interactive tech demo this time!
- .NET Core broke cause of the recent ncurses update… And I just HAD to do a full system upgrade on my Arch laptop right… Guess I'll try to downgrade ncurses!
- Thought I would help the webdev find a memory leak so step one build a developer version of chromium. Problem one ncurses and libtinfo 😅 got to love the split! Problem two gpg keys on old ncurses compat libs 😅. Linux is not for the faint hearted 😎
- Finally managed to build a working Wordsearch system with OpenCV and curses. I curse ncurses till the end of days.
- I am doing a small application using C++ as a side project to learn C++ development. It would be better if I have a small UI. Will ncurses be okay? Does it have any utility now? Is there a better way? Please keep in mind I am a beginner in terms of application development.

Top Tags
https://devrant.com/search?term=ncurses
CC-MAIN-2020-34
refinedweb
277
68.16
Hi there! Well first of all I'd like to thank everyone from the list, it's been very helpful to read tips and ask stuff here. Second, I'm having a problem with the 2D graphics engine I'm developing. I'm doing the event management system right now and decided that I should create a class named EventHandler that has a pure virtual function named HandleEvent. This function is the one that, obviously, handles inputs and any other occurrences differently depending on the element of the game we are talking about, like a sprite, a tile, or, talking high-level, items, characters and so on. Many classes must inherit from EventHandler so they are obliged to implement HandleEvent() and can all be referenced in a single vector that I will use later to check some things out. BUT the problem is: I have every class in a header file, and its function code in a corresponding .cpp file. So each of these header files supposedly must include my 'eventHandler.h', causing multiple referencing. And I can't just use forward referencing to EventHandler, because of the inheritance. What do I do? I'm using Dev-C++, and all my header files have those #ifndef #define #endif directives. Thanks in advance, Carol
https://discourse.libsdl.org/t/class-inheritance-problem/13768
CC-MAIN-2022-21
refinedweb
234
66.64
PureScript compiles to JavaScript and looks really similar to Haskell. Working with it has been pretty fun due to the language and also the helpful documentation. One opinion is that type safety should extend to styling as well. The libraries purescript-css and purescript-pux-css (Pux is a library similar to Redux and Elm) introduce type-safe CSS to PureScript. From the Pux docs:

CSS can and should be composed in a type-safe manner by using purescript-css and purescript-pux-css. Purescript-css provides a monad for specifying styles, and purescript-pux-css provides the css method for rendering to tuples, and a style attribute that takes CSS directly and returns an Attribute.

While type-safety seems neat, the library is pretty sluggish and in my opinion doesn't offer more (but frequently less) than vanilla CSS. This is due to conforming to each attribute's naming (some follow React's camelCase over hyphens), and creating some hacky solutions that mess with typeclasses in order to compile. For example, the backgroundImage property has a different typeclass than the backgroundColor property, which leads to limited styling of elements. Still, once you get the hang of the syntax you can do some neat things, and have the comfort of type safety in the back of your mind. Below are a few components that show purescript-css syntax and what you can do with simple CSS.

Box-Shadow Spread Radius

The box-shadow property is commonly written with three parameters and a color. There is a little-known fourth length parameter for box-shadow that makes the shadow larger or smaller. Because the box-shadow property in purescript-css only takes three Size values and a Color, we need to make our own function that accepts a fourth Size value. This will let us do a few cool things that don't work out of the box with purescript-css or pux-css, while also learning PureScript and CSS.
To start, add the fourth length parameter by importing the necessary attributes, either from the PureScript CSS repo or with import CSS.Box at the top of your project file. Then, create a boxShadow function that takes a fourth parameter, rather than the existing three:

boxShadow :: forall a. Size a -> Size a -> Size a -> Size a -> Color -> CSS
boxShadow w x y z c = prefixed (browsers <> fromString "box-shadow") (w ! x ! y ! z ! c)

Below is markup for the Pux view and a component using our new function. Each element takes two arrays: attributes and children. Once the boxShadow function is in place, you can do several different things. For example, creating a shadow on a single side of a box:

resetButton =
  button
    [ style $ do
        backgroundColor (yellowgreen)
        boxShadow (0.0 # px) (5.0 # px) (4.0 # px) (-4.0 # px) (black)
    ]
    [ text "Reset" ]

Or creating shadows "normally" on two adjacent sides:

resetButton =
  button
    [ style $ do
        backgroundColor (yellowgreen)
        boxShadow (3.0 # px) (3.0 # px) (6.0 # px) (-3.0 # px) (black)
    ]
    [ text "Reset" ]

Next steps can include figuring out how to pass a list of attributes (comma separated in vanilla CSS) to boxShadow to allow for shadows on opposite sides, or multiple glows, such as below:

Text Glow

Another component is one that relies on the text-shadow property. This lets the user create a (somewhat corny) effect of glowing text due to a small shadow the same color as the text. Note that while in vanilla CSS the text-shadow property will inherit the color, with PureScript the color must be explicitly stated.

import CSS.Text.Shadow (textShadow)

resetButton =
  button
    [ style $ do
        fontSize (1.2 # em)
        fontWeight (weight 400.0)
        backgroundColor (rgb 34 0 51)
        textShadow (0.0 # px) (0.0 # px) (0.1 # em) (rgb 34 0 51)
    ]
    [ text "Reset" ]

Source

I originally learned this material from Lea Verou's CSS Secrets. It's great and my go-to for CSS.
https://jamesanaipakos.com/2017-02-24-CSS-Secrets-for-PureScript-CSS
CC-MAIN-2018-17
refinedweb
651
64.2
I can't reproduce this. I don't have libiconv installed. I'll try to look into the matter.

Package Details: pdftk 2.02-11

Dependencies (2)

Required by (6)
- gtg-git (requires pdftk) (optional)
- kde-servicemenus-pdf
- pdf-append
- pdf-reverse
- pdf-zip
- pdfchain

Sources (2)

Latest Comments

valandil commented on 2016-09-27 13:08

thx1138 commented on 2016-09-25 01:56
Installing libiconv 1.14-1 and adding "-liconv" to Makefile.Arch:
export LDLIBS= -lgcj -liconv
seems to fix the problem.

thx1138 commented on 2016-09-25 01:30
With:
gcc 6.2.1-1
gcc-gcj 6.2.1-1
gcc-gcj-ecj 4.9-1
pdftk build ends with:
g++ -O2 -fPIC attachments.o report.o passwords.o pdftk.o /var/build/pdftk/src/pdftk-2.02-dist/pdftk/../java/java_lib.o -lgcj -o pdftk
/usr/lib/gcc/x86_64-pc-linux-gnu/6.2.1/../../../../lib/libgcj.so: undefined reference to `libiconv_open'
/usr/lib/gcc/x86_64-pc-linux-gnu/6.2.1/../../../../lib/libgcj.so: undefined reference to `libiconv_close'
/usr/lib/gcc/x86_64-pc-linux-gnu/6.2.1/../../../../lib/libgcj.so: undefined reference to `libiconv'
collect2: error: ld returned 1 exit status
make: *** [Makefile.Base:49: pdftk] Error 1

valandil commented on 2016-09-09 15:06
@JanusDC, gcc was updated about 12 hours ago. I just haven't had time to update gcc-gcj.

JanusDC commented on 2016-09-09 15:03
I am experiencing the same issue as bakgwailo. I have gcc-gcj 6.1.1 and gcc 6.2.1. I do not see gcc 6.1.1 in the repo, nor AUR, and neither gcc-gcj 6.2.1.

valandil commented on 2016-09-01 11:22
@bakgwailo Make sure that your gcc and gcc-gcj versions match.

bakgwailo commented on 2016-09-01 03:30
Getting a compile error:
g++ -DPATH_DELIM=0x2f -DASK_ABOUT_WARNINGS=false -DUNBLOCK_SIGNALS -fdollars-in-identifiers -fPIC -DPDFTK_VER=\"2.02\" -O2 -fPIC -I/tmp/yaourt-tmp-joe/aur-pdftk/src/pdftk-2.02-dist/pdftk/../java pdftk.cc -c
pdftk.cc:33:21: fatal error: gcj/cni.h: No such file or directory
#include <gcj/cni.h>
^
compilation terminated.
make: *** [Makefile.Base:46: pdftk.o] Error 1
==> ERROR: A failure occurred in build(). Aborting...
==> ERROR: Makepkg was unable to build pdftk.
==> Restart building pdftk ? [y/N]

valandil commented on 2016-08-31 15:13
I can't think of an issue with adding -fPIC, so I added it.

valandil commented on 2016-08-24 06:51
I'll revisit the issue when I come back from vacation. For some reason, I have always been able to compile the package without needing the -fPIC flag.

alex.forencich commented on 2016-08-23 23:20
Is the -fPIC flag ever going to be added to this package?
https://aur.archlinux.org/packages/pdftk/
CC-MAIN-2016-44
refinedweb
471
54.08
Using session with reloader

Problem

There are some issues in using sessions when running the application in debug mode. Is there any work-around?

Solution

web.py runs the program in debug mode when run using the built-in webserver. The simplest fix for this is to disable debug mode, which can be done by setting web.config.debug = False.

import web
web.config.debug = False
# rest of your code

If you want to use sessions in debug mode, here is a work-around. Because debug mode enables module reloading and the reloader loads the main module twice (once as __main__ and once under its own name), two session objects will be created. This can be avoided by storing the session in some global place so the second one is never created. Here is sample code which saves the session in web.config.

import web

urls = ("/", "hello")
app = web.application(urls, globals())

if web.config.get('_session') is None:
    session = web.session.Session(app, web.session.DiskStore('sessions'), {'count': 0})
    web.config._session = session
else:
    session = web.config._session

class hello:
    def GET(self):
        print 'session', session
        session.count += 1
        return 'Hello, %s!' % session.count

if __name__ == "__main__":
    app.run()
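The guard above can be sketched outside web.py as a generic pattern: keep the expensive object in a store that survives the module being imported a second time, and only construct it when it is absent. The names below are illustrative, with a plain dict standing in for web.config:

```python
# Stand-in for a store that survives module reloading
# (web.config plays this role in web.py).
_store = {}

def get_or_create(key, factory):
    # Construct the object only on first request; later calls
    # (e.g. after the reloader imports the module again) reuse it.
    if key not in _store:
        _store[key] = factory()
    return _store[key]

class Session:
    """Dummy placeholder for an expensive, stateful object."""

first = get_or_create('_session', Session)
second = get_or_create('_session', Session)
print(first is second)  # True: only one Session was ever created
```

The key point is the same as in the cookbook recipe: the store must live somewhere the reloader does not recreate, otherwise the guard itself is reset.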
http://webpy.org/cookbook/session_with_reloader
CC-MAIN-2017-17
refinedweb
199
61.73
Streaming via a C API presents a conundrum in Haskell: on the one hand, the C code is side effecting, on the other, we would like to provide a lazy API. If you are not familiar with the problem of side effects and laziness, have a look at the paper by Wadler (in particular the array updates example) and perhaps the source for the IO monad.

Consider the problem of streaming compression via a C library. We wish to ensure that effects of calling these C functions take place in the correct order (like IO), and we wish to ensure that specific effects take place before values are read; unlike IO, we do not care if some effects are not performed when their results are not needed. A function of type decompress :: BSL.ByteString -> IO BSL.ByteString would force all side effects (i.e. allocations) to occur even if we only need part of the result. Morally, we would like the work to occur based on (lazy) evaluation rather than control flow.

Concretely, consider decompress from my lz4-hs package:

decompress :: BSL.ByteString -> BSL.ByteString
decompress bs = runST $ do
    let bss = BSL.toChunks bs
    (ctx, buf) <- LazyST.unsafeIOToST $ do
        (err, preCtx) <- lZ4FCreateDecompressionContext lZ4FGetVersion
        ctx <- castForeignPtr <$> newForeignPtr lZ4FFreeCompressionContext (castPtr preCtx)
        check err
        dstBuf <- mallocForeignPtrBytes bufSz
        pure (ctx, dstBuf)
    BSL.fromChunks <$> loop ctx buf bss

  where

    bufSz :: Integral a => a
    bufSz = 32 * 1024

    loop :: LzDecompressionCtxPtr -> ForeignPtr a -> [BS.ByteString] -> LazyST.ST s [BS.ByteString]
    loop _ _ [] = pure []
    loop ctx buf (b:bs') = do
        (nxt, res) <- stepChunk ctx buf b
        case nxt of
            Nothing   -> (res:) <$> loop ctx buf bs'
            Just next -> (res:) <$> loop ctx buf (next:bs')

    stepChunk :: LzDecompressionCtxPtr -> ForeignPtr a -> BS.ByteString -> LazyST.ST s (Maybe BS.ByteString, BS.ByteString)
    stepChunk !ctx !dst b = LazyST.unsafeIOToST $
        BS.unsafeUseAsCStringLen b $ \(buf, sz) ->
            withForeignPtr dst $ \d ->
                alloca $ \dSzPtr ->
                    alloca $ \szPtr -> do
                        poke dSzPtr (fromIntegral bufSz)
                        poke szPtr (fromIntegral sz)
                        res <- lZ4FDecompress ctx d dSzPtr buf szPtr nullPtr
                        check res
                        bRead <- peek szPtr
                        bWritten <- peek dSzPtr
                        outBs <- BS.packCStringLen (castPtr d, fromIntegral bWritten)
                        let remBs = if fromIntegral bRead == sz
                                        then Nothing
                                        else Just (BS.drop (fromIntegral bRead) b)
                        pure (remBs, outBs)

This relies on the following c2hs code:

{# fun pure LZ4F_getVersion as ^ {} -> `CUInt' #}

type LZ4FErrorCode = CSize
{# typedef LZ4F_errorCode_t LZ4FErrorCode #}

data LzDecompressionCtx
{# pointer *LZ4F_dctx as LzDecompressionCtxPtr foreign finalizer LZ4F_freeDecompressionContext as ^ -> LzDecompressionCtx #}

{# fun LZ4F_createDecompressionContext as ^ { alloca- `Ptr LzDecompressionCtx' peek*, `CUInt' } -> `LZ4FErrorCode' #}

data LzDecompressOptions
{# pointer *LZ4F_decompressOptions_t as LzDecompressOptionsPtr -> LzDecompressOptions #}

{# fun LZ4F_decompress as ^ { `LzDecompressionCtxPtr', castPtr `Ptr a', castPtr `Ptr CSize', castPtr `Ptr b', castPtr `Ptr CSize', `LzDecompressOptionsPtr' } -> `CSize' coerce #}

corresponding to the following C header:

typedef size_t LZ4F_errorCode_t;
LZ4FLIB_API unsigned LZ4F_getVersion(void);

typedef struct LZ4F_dctx_s LZ4F_dctx;
typedef LZ4F_dctx* LZ4F_decompressionContext_t;

typedef struct {
    unsigned stableDst;
    unsigned reserved[3];
} LZ4F_decompressOptions_t;

LZ4FLIB_API LZ4F_errorCode_t LZ4F_createDecompressionContext(LZ4F_dctx** dctxPtr, unsigned version);
LZ4FLIB_API LZ4F_errorCode_t LZ4F_freeDecompressionContext(LZ4F_dctx* dctx);
LZ4FLIB_API size_t LZ4F_decompress(LZ4F_dctx* dctx, void* dstBuffer, size_t* dstSizePtr, const void* srcBuffer, size_t* srcSizePtr, const LZ4F_decompressOptions_t* dOptPtr);

Such APIs are common in C: a stateful object LZ4F_decompressionContext_t is behind a pointer; we perform a series of steps that always has the same result, but we need to track side effects since the data pointed to by LzDecompressionCtxPtr is mutating throughout the computation (recall the example of
array updates in the Wadler paper). Each stepChunk needs to be bunched together - we must perform lZ4FDecompress before BS.packCStringLen, but at the same time IO [BS.ByteString] is not precisely what we want: it would mean lZ4FDecompress had to be called on each chunk of the input to read the first chunk of the output, failing to live up to the promise of laziness. We resolve this by calling unsafeIOToST on the result of stepChunk, that is, each step is lifted into the lazy ST monad; stepChunk only does work when a new chunk of the BSL.ByteString is needed. In fact, if decompress is not lazy, we get pathological memory overuse. Consider the following program:

module Main (main) where

import Codec.Lz4
import qualified Data.ByteString.Lazy as BSL
import System.FilePath ((</>))
import System.IO.Temp (withSystemTempDirectory)

main :: IO ()
main = sequence_
    [ compressDump
    , decompressDump
    ]

decompressDump :: IO ()
decompressDump = withSystemTempDirectory "lz4" $ \fp ->
    BSL.writeFile (fp </> "valgrind-3.15.0.tar")
        =<< (decompress <$> BSL.readFile "valgrind-3.15.0.tar.lz4")

compressDump :: IO ()
compressDump = withSystemTempDirectory "lz4" $ \fp ->
    BSL.writeFile (fp </> "valgrind-3.15.0.tar.lz4")
        =<< (compress <$> BSL.readFile "valgrind-3.15.0.tar")

With the lazy ST monad, we get the following heap profile:

[heap profile: flat, modest memory use]

Had we used the strict ST monad:

[heap profile: memory grows with the whole input]

This shows that laziness is necessary to have sensible memory use. Contrary to superstition, laziness is not synonymous with worse performance; some space leaks are strictness-induced. Moreover, I think this provides a superior API. C libraries handle streaming compression in various ways (a stateful decoder, callbacks); in Haskell, we use a familiar lazy linked list. The streaming and non-streaming cases are essentially the same in Haskell (have a look at lz4-hs), which is not the case in C (see lz4frame.h and lz4.h).
I haven't seen this technique put forward or evaluated anywhere before; in fact, I only figured the above out by reading Herbert Valerio Riedel's source code for lzma. So, for reference, if you want to use this technique:

- Use ForeignPtr over Ptr. Edward Z Yang has a series on c2hs, and you can look at the c2hs wiki as well. I have not had any success getting lazy streaming to work with ordinary Ptrs and explicit frees.
- Each step of the loop should return a LazyST.ST s a rather than an IO a; one gets an ST s a from an IO a via unsafeIOToST. There is no need to use unsafeIOToST more than once per iteration.

Also of note, I have not had any success using unsafeInterleaveIO between steps as zstd does.
http://blog.vmchale.com/article/lazy-io
CC-MAIN-2022-27
refinedweb
955
53.41
Java Notes by Fred Swartz

When you create a new instance (a new object) of a class using the new keyword, a constructor for that class is called. Constructors are used to initialize the instance variables (fields) of an object. Constructors are similar to methods, but with some important differences: a constructor has the same name as its class, declares no return type, and is invoked with new rather than called directly; one constructor can call another in the same class with this(), or a superclass constructor with super(). These differences in syntax between a constructor and a method are sometimes hard to see when looking at the source. It would have been better to have had a keyword to clearly mark constructors, as some languages do.

public class Point {
    int m_x;
    int m_y;

    //============ Constructor
    public Point(int x, int y) {
        m_x = x;
        m_y = y;
    }

    //============ Parameterless default constructor
    public Point() {
        this(0, 0);  // Calls the other constructor
    }
}

Normally, you won't need to call the constructor for your parent class because it's automatically generated, but there are two cases where this is necessary.
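As a cross-language aside (not part of the original notes): Python has a single __init__ method rather than overloaded constructors, so the two-constructor Point pattern above is usually expressed with default argument values instead of this()-style delegation:

```python
class Point:
    # Default arguments play the role of the parameterless Java
    # constructor that delegates via this(0, 0).
    def __init__(self, x=0, y=0):
        self.x = x
        self.y = y

origin = Point()   # like: new Point()
p = Point(3, 4)    # like: new Point(3, 4)
print(origin.x, origin.y, p.x, p.y)  # 0 0 3 4
```

Where genuinely distinct construction paths are needed, Python code typically adds classmethod factories (e.g. a from_polar constructor) rather than overloads.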
http://www.javafaq.nu/java-article751.html
CC-MAIN-2015-27
refinedweb
188
64.41
strlog, log - STREAMS log driver

#include <sys/strlog.h>

int strlog(
        short mid,
        short sid,
        char level,
        ushort flags,
        char *fmt
        [, value]...);

mid
    Specifies the STREAMS module ID number for the driver or module submitting the log message.
sid
    Specifies the sub-ID number of a minor device associated with the STREAMS module or driver identified by mid.
level
    Specifies a level for screening lower-level event messages from a tracer.
flags
    Contains several flags that can be set in various combinations. The flags are as follows:
    SL_ERROR
        The message is for the error logger.
    SL_TRACE
        The message is for the tracer.
    SL_CONSOLE
        The message is for the console logger.
    SL_FATAL
        Provides a notification of a fatal error.
    SL_NOTIFY
        Makes a request to mail a copy of a message to the system administrator.
    The following are additional flags. The strlog interface does not use these flags:
    SL_WARN
        The message is a warning.
    SL_NOTE
        The message is a note.
fmt
    A printf-style format string. This accepts the %x, %l, %o, %u, %d, %c, and %s conversion specifications.
value
    Numeric or character arguments for process-specific information. There is no maximum number of arguments that can be specified.

The STREAMS log driver allows user-level processes, and STREAMS drivers and modules, to perform error logging and event tracing. This is done via a user interface and a kernel interface. The interface that this driver presents to user-level processes is a subset of the ioctl() system calls and STREAMS message formats. These processes can be error loggers, trace loggers, or other user processes that generate error or event messages. The user interface collects log messages from the log driver, and also generates log messages from user processes. The driver also accepts log messages from STREAMS drivers and modules in the kernel via its function call interface. The kernel interface enters requests or calls from STREAMS drivers and modules into log messages.
A trace logger registers with the driver by issuing an I_STR ioctl() call whose command is I_TRCLOG and whose data part is a list of trace_ids structures, each naming a mid, sid, and level to match. If any of the fields of a trace_ids structure contains a value of -1, /dev/streams/log will accept whatever value it receives in that field. Otherwise, strlog accepts messages only if the values of mid and sid are the same as their counterparts in the trace_ids structure, and if the message's level is equal to or less than the level value in the trace_ids structure.

Once the logger process has sent the I_STR ioctl() call, the STREAMS log driver begins to send log messages matching the restrictions to the logger process. The logger process obtains the log messages via the getmsg(2) system call. The control part of the messages passed in this call includes a log_ctl structure, which indicates the mid, sid, and level; the time in ticks since boot time that the message was submitted; the corresponding time in seconds since January 1, 1970; and a sequence number. The time in seconds since January 1, 1970 is provided so that the date and time of the message can be easily computed. The time in ticks since boot time is provided so that the relative timing of log messages can be determined.

In addition to the information contained in the log_ctl structure, there is also a priority indication. The priority indication consists of a priority code and a facility code (found in <sys/syslog.h>). The valid values for priority codes are the following, based on the setting(s) in flags:

LOG_INFO
    If SL_CONSOLE is set in flags.
LOG_WARNING
    If SL_CONSOLE and SL_WARN are set in flags.
LOG_CRIT
    If SL_CONSOLE and SL_FATAL are set in flags.
LOG_ERR
    If SL_CONSOLE and SL_ERROR are set in flags.
LOG_NOTICE
    If SL_CONSOLE and SL_NOTE are set in flags.
LOG_DEBUG
    If SL_CONSOLE and SL_TRACE are set in flags.

The valid values for facility codes are the following:

LOG_KERN
    If the message originates from the kernel.
LOG_USER
    If the message originates from a user process. However, these processes may sometimes set another facility code value instead.

A user process, other than an error or trace logger, can send a log message to strlog().
The driver will accept only the flags and level fields of the log_ctl structure in the control part of the message, and a properly formatted data part of the message. The data part of the message is properly formatted if it contains a null-terminated format string, followed by any arguments packed one word each after the end of the string.

A different series of sequence numbers is provided for error and trace logging streams. These sequence numbers are intended to help track the delivery of the messages. A gap in a sequence of numbers indicates that the logger process did not successfully deliver them. This can happen if the logger process stops sending messages for one reason or another (see the strace and strerr command reference pages for more information).

The data part of messages contains the unexpanded text of the format string (null terminated), followed by any arguments packed one word each after the end of the string.

The following examples illustrate how to use the strlog interface for some basic uses.

This code example segment illustrates how a STREAMS module can generate a console log message:

strlog(TMUX, minor(mydev), 0, SL_CONSOLE | SL_FATAL,
    "TMUX driver (minor:%d) suffers resource shortage.",
    minor(mydev));

This code example illustrates how a user process can register itself with the STREAMS log driver using the ioctl() command I_ERRLOG:

struct strioctl iocerr;

iocerr.ic_cmd = I_ERRLOG;
iocerr.ic_timout = 0;
iocerr.ic_len = 0;
iocerr.ic_dp = NULL;

ioctl(logfd, I_STR, &iocerr);

Tru64 UNIX does not provide a console logger. Note, however, that other systems may provide console loggers.

clone
    Specifies the clone interface.
<sys/strlog.h>
    Specifies the header file for STREAMS logging.
<sys/stropts.h>
    Specifies the header file for STREAMS options and ioctl() commands.

If any of the following conditions occurs, the strlog() driver's ioctl() command sets errno to the corresponding value:

    The I_TRCLOG ioctl() call did not contain any trace_ids structures.
    The I_STR ioctl() call could not be recognized.
The driver does not return any errors for incorrectly formatted messages that user processes send.

Commands: strace(8), strerr(8).

Interfaces: clone(7), streamio(7).

Functions: getmsg(2), putmsg(2), write(2).
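The examples above cover a console message and error-logger registration with I_ERRLOG. A trace logger registers the same way, except that ic_cmd is I_TRCLOG and the data part carries one or more trace_ids structures as described earlier. The sketch below is illustrative only: the strioctl and trace_ids layouts are declared locally with conventional STREAMS field names (on a real system they come from <sys/stropts.h> and <sys/strlog.h>, where the names may differ), and the ioctl()/getmsg() calls appear only in comments because they require a STREAMS kernel.

```c
#include <assert.h>
#include <stdio.h>

/* Conventional STREAMS structure layouts, declared locally so this
 * sketch compiles anywhere; a real logger would use the definitions
 * from <sys/stropts.h> (strioctl) and <sys/strlog.h> (trace_ids). */
struct strioctl_s {
    int   ic_cmd;     /* ioctl command, e.g. I_TRCLOG */
    int   ic_timout;  /* ACK/NAK timeout, 0 = default */
    int   ic_len;     /* length of data pointed to by ic_dp */
    char *ic_dp;      /* payload: here, an array of trace_ids */
};

struct trace_ids_s {
    short ti_mid;     /* module ID to trace; -1 accepts any */
    short ti_sid;     /* sub-ID to trace;   -1 accepts any */
    char  ti_level;   /* accept messages at this level or below */
};

/* Fill one trace_ids entry: wildcard mid/sid, screen by level only. */
void trace_any_at_level(struct trace_ids_s *tid, char level)
{
    tid->ti_mid = -1;
    tid->ti_sid = -1;
    tid->ti_level = level;
}

/* Package an I_TRCLOG request for n trace_ids entries, mirroring the
 * I_ERRLOG example above.  cmd is the system's I_TRCLOG value. */
struct strioctl_s make_trclog_request(int cmd, struct trace_ids_s *tids, int n)
{
    struct strioctl_s ioc;
    ioc.ic_cmd = cmd;
    ioc.ic_timout = 0;
    ioc.ic_len = (int)(n * sizeof *tids);
    ioc.ic_dp = (char *)tids;
    /* On a STREAMS system the logger would now issue:
     *   ioctl(logfd, I_STR, &ioc);
     * and then loop on getmsg(logfd, &ctl, &dat, &flags), where the
     * control part carries the log_ctl header described above. */
    return ioc;
}
```

This is roughly the registration step that the strace(8) trace command performs before entering its message-printing loop.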
http://backdrift.org/man/tru64/man7/log.7.html
How can I count SPI clocks in Raspberry Pi? I am interfacing an MCP3208 with a Raspberry Pi 3 using SPI communication. This ADC completes one conversion in 24 clock cycles. In my data acquisition system, how can I count these 24 SPI clock pulses so as to generate a busy signal for that period? The ADC should not accept any incoming signal during that conversion period. Can anybody suggest an approach and the corresponding C code?

See also questions close to this topic:

- inet_ntoa gives the same result when called with two different addresses

char ip1[] = "127.0.0.1";
char ip2[] = "211.100.21.179";
printf("ip1: %s\nip2: %s\n", ip1, ip2);

long l1 = inet_addr(ip1);
long l2 = inet_addr(ip2);
printf("ip1: %ld\nip2: %ld\n", l1, l2);

struct in_addr addr1, addr2;
memcpy(&addr1, &l1, 4);
memcpy(&addr2, &l2, 4);
printf("%u\n", addr1.s_addr);
printf("%u\n", addr2.s_addr);

printf("%s\n", inet_ntoa(addr1));
printf("%s\n", inet_ntoa(addr2));

printf("%u,%s\n", addr1.s_addr, inet_ntoa(addr1));
printf("%u,%s\n", addr2.s_addr, inet_ntoa(addr2));

printf("%s <--> %s\n", inet_ntoa(addr1), inet_ntoa(addr2));

The output is:

ip1: 127.0.0.1
ip2: 211.100.21.179
ip1: 16777343
ip2: 3004523731
16777343
3004523731
127.0.0.1
211.100.21.179
16777343,127.0.0.1
3004523731,211.100.21.179
211.100.21.179 <--> 211.100.21.179   // why the same??

I know that whether printf evaluates its arguments from right to left or vice versa is platform-dependent, but why is the output the same value? Please help to explain.

- String to Dictionary or Array in C#

I have a string as follows:

Charlie Sheen is Cool
BBBB
He likes to run
BBB
When does he run?
BBBB
He lives here
BBBB
? [
I like Charlie Sheen
BBBB

I would like to separate the string into two different arrays: one for the phrases above the B's and one for the B's themselves. Or, even better, have a dictionary with the phrases as the keys and the B's as values. I would also like to ignore all empty lines and lines that do not start with a letter. How would I go about doing it?
string[] newText = file.Split(new string[] { "\n", "\r\n" }, StringSplitOptions.RemoveEmptyEntries);
int count = 0;
Regex symbol = new Regex("^[[]]$"); // these are the symbols I want to detect
Dictionary<string, string> patternedText = new Dictionary<string, string>();
foreach (string s in newText)
{
    int bCount = 0;
    count++;
    bCount++; // position of where it detects a match
    if (symbol.IsMatch(s))
    {
        newText[count - bCount] = s;
    }
    else
    {
        newText[count] = s;
        patternedText.Add(newText[count], newText[count + 1]);
    }
}

Basically I want the format of the array to be:

Phrase with symbols
BBB
Phrase with symbols
BBBB
Phrase BBB with symbols

and to filter out all empty lines, with the symbols on any line added only to the phrases.

- Simplistic ADT queue implementation using provided Queue.h

/****************************************************************************/
/* File: queue.h                                                            */
/****************************************************************************/
/*                                                                          */
/* Simple Queue ADT                                                         */
/*                                                                          */
/* Declaration                                                              */
/*                                                                          */
/****************************************************************************/

typedef struct {
    int key;
    int value;
} data_t;

typedef struct queueNode {
    struct queueNode *next, *prev;
    data_t *data;
} QueueNode;

typedef struct queueType {
    QueueNode *head;
    QueueNode *tail;
} Queue;

Alright, so above I have a file provided for me called Queue.h. This makes sense to me in some respect: it gives data_t a pair of values (key and value) and stores that in a QueueNode, which is placed at either the tail or the head of a queue.
The problem I'm having is that I DO NOT KNOW how to use this to create a file called Queue.c that implements the following functions: void initQueue(Queue *self){ } void enQueue(Queue *self, data_t *data){ } QueueNode *frontNode(Queue *self){ } data_t *frontValue(Queue *self){ } data_t *dequeue(Queue *self){ } void removeNode(queue *self, queueNode *p){ } QueueNode *findNode(queue *self, data_t *data){ } void printQ(queue *self, char *label){ } With this (not meant to be altered) main(): /* int main () { Queue myQueue; QueueNode *p; data_t data[10], d2; int i; initQueue (&myQueue); for (i = 0; i < 10; i++) { data[i].key = i; data[i].value = 10*i; enQueue (&myQueue, &data[i]); } printQ (&myQueue, "MyQueue:" ); } */ Now I'm not going to ask anyone to do this for me, that doesn't help me at all in the long run, but I have absolutely no idea what to do. I have a basic understanding of how a queue works; my problem is how to execute it. I'm also learning Java in another class right now and it's begun to confuse me what rules apply to which class (as in school class, not Java class). If someone could walk me through how one is meant to work with a few of these (or at least initQueue) I'd be really grateful. I've been working on this for 12 hours now and I'm getting desperate.... Edit: Update: This is what I have now: void initQueue(Queue *self) { self->head = NULL; self->tail = NULL; } void enQueue(Queue *self, data_t *data) { self->head = data; } - RPi SD Card copier error "could not set flags" I am trying to copy my SD card to an mSATA SSD attached by USB through an X850 addon board on my RPi3, to be used as a USB boot device. I open up the SD card copier and choose to copy from my SD card to my external SSD. When I press start it creates the partitions etc., but then I get the error "could not set flags". I have literally no idea what to do and searches didn't give me anything about anyone having the same problem. Both the SD card and mSATA SSD are 32 GB. Where should I start?
- Android ROM and Raspberry PI 3 plus NFC We are working with Android Things on Raspberry Pi 3 and everything works great. Recently we got an NXP board called OM5578/PN7150, basically to use NFC efficiently. Following the NXP guide (AN11690 NXPNCI Android Porting Guidelines) I need: - Add the driver in the kernel - Customize the AOSP (Android Open Source Project) I would like to know: Which version of the kernel is best for recompiling with the driver added, with Raspberry Pi 3 support Which version of AOSP is recommended for Raspberry Pi 3 in kiosk mode (similar to Android Things). Another board is not an option. Thanks. - Execute command in terminal on startup on Pi I have a simple card reader program on a Pi that reads input from a USB card reader (acts like a keyboard) and writes it to a text file. For this program to work, it needs to be run in the terminal so the raw input from the card reader will be detected. I want this program to run every time my Pi turns on, so I need a way to open the terminal and execute the code inside the terminal on startup. Can anyone help me out? import datetime import time card = raw_input() t = datetime.datetime.now() while True: f = open("Laptop Sign Out" + '.txt', 'a') f.write("Card Number: " + card[1:10] + " Time: " + t.strftime("%m-%d-%Y %H:%M:%S")) f.write('\n') f.write(';') f.write('\n') f.close() time.sleep(5) gpio.cleanup() - SPI configuration as master I need to configure SPI_1 as master on a Nucleo STM32F103RB. Here's what I did so far (Keil uVision 4). // Configuring GPIOs // SPI_SCK GPIOA_CRL |= 0x00A00000; //Alternate function push-pull // SPI_MOSI GPIOA_CRL |= 0xA0000000; //Alternate function push-pull // SPI_MISO GPIOA_CRL |= 0x04000000; //Input floating However I need to set nSS for multiple slaves. How do I do that? Which GPIO pins should I use to connect the nSS pins of multiple slaves? Also, what is the maximum number of slaves I can attach?
- How to read accelerometer readings on a Linux machine I am doing an experimental project where I need to connect my Linux machine to an accelerometer and read data from it. I have asked in many forums for help but without any success. Are there any APIs available in C/C++ to read accelerometer output? - How can I properly debug an invalid SPI read? I've been hitting my head against this one for a while and would really appreciate some input. I have two exactly similar pieces of hardware based on an Arietta G25 (AT91SAMA5x-based). On SPI0 CS0 sits a device for which I have a sample program, which uses the spidev kernel driver. One of the two units is running a stock Debian image, kernel version 4.9.60. The other is running a vanilla Linux kernel (exactly the same version) with busybox on top. Both are compiled with the same toolchain. On device A (the Debian one), the program manages to read SPI. On device B, the program reads 0 consistently off the SPI device. Tried the following: - Swapping kernels does not change the outcome. - Identical device trees do not change the outcome - Identical bootloader does not change the outcome - Swapping the filesystems entirely just changes which gateway fails I have never seen anything like this before, and am at a complete impasse as to what could be causing this. I am at the point where I'm starting to think Debian is doing some sort of magic outside of the kernel in order to make spidev behave differently, but I just cannot see it and would really appreciate some input as to what the hell is going on.
The code (from a third party, simplified) is as follows: void spi_read(void *spi_target, uint8_t address, uint8_t *data) { int spi_device; int a; uint8_t out_buf[3]; uint8_t command_size; uint8_t in_buf[ARRAY_SIZE(out_buf)]; struct spi_ioc_transfer k; spi_device = *(int *)spi_target; out_buf[0] = READ_ACCESS | (address & 0x7F); out_buf[1] = 0x00; command_size = 2; memset(&k, 0, sizeof(k)); /* clear k */ k.tx_buf = (unsigned long) out_buf; k.rx_buf = (unsigned long) in_buf; k.len = command_size; k.cs_change = 0; a = ioctl(spi_device, SPI_IOC_MESSAGE(1), &k); *data = in_buf[command_size - 1]; } SPI setup is identical: spi mode 0 spidev spi32766.0: setup: bpw 8 mode 0x0 -> csr0 00000002 spidev spi32766.0: setup mode 0, 8 bits/w, 8000000 Hz max -->: msb: 0 bits per word Output of a sample read (using full debug) on the working box: [ 3857.890625] spidev spi32766.0: activate 8, mr 000e0031 [ 3857.890625] atmel_spi f0004000.spi: start pio xfer c7226e00: len 2 tx c736d000 rx c739c000 bitpw 8 [ 3857.890625] atmel_spi f0004000.spi: start pio xfer c7226e00: len 2 tx c736d000 rx c739c000 bitpw 8 [ 3857.890625] spidev spi32766.0: xfer c7226e00: len 2 tx c736d000/0x00000000 rx c739c000/0x00000000 [ 3857.890625] spidev spi32766.0: DEactivate 8, mr 000f0031 And on the broken box: spidev spi32766.0: activate 8, mr 000e0031 spidev spi32766.0: xfer c7be7600: len 3 tx c7101000/0x00000000 rx c7141000/0x00000000 spidev spi32766.0: DEactivate 8, mr 000f0031 Thank you!
http://codegur.com/46685822/how-can-i-count-spi-clocks-in-raspberry-pi
Simple buy-and-hold strategy - backtrader administrators last edited by @tw00000 said in Simple buy-and-hold strategy: My only guess is that either self.buy() or self.close() is not working as intended here. Sorry, but with a statement like that I don't really feel like writing a proper answer. Please have a look at your own code! I think that position closing can also be simplified (didn't test it myself): if (len(self.datas) - len(self)) == 1: self.close() It seems to me that such a script should issue the closing order before the last bar. But, again, I didn't test it. As a wild guess about the several buy signals - it might be a case that the open price was so different from the close price that bt was not able to buy the number of stocks based on the AllInSizer calcs. Try with size=1. - abhishek.anand 1 last edited by @tw00000 said in Simple buy-and-hold strategy: if self.datetime.date().strftime("%Y-%m-%d") == pd.read_pickle('data').tail(1).date[0].strftime("%Y-%m-%d"): Hey! I am trying to implement something similar. By executing this same line I am getting the below error - File "C:\Users\40100147\AppData\Roaming\Python\Python37\site-packages\pandas\io\common.py", line 651, in get_handle handle = open(handle, ioargs.mode) PermissionError: [Errno 13] Permission denied: 'data' Can you explain how you are using pd.read_pickle here? I am using a single source of data. Below is the complete next() code def next(self): pos = self.getposition(self.data).size if not pos: print("buying") self.buy() if self.datetime.date().strftime("%Y-%m-%d") == pd.read_pickle('data').tail(1).date[0].strftime("%Y-%m-%d"): self.close() print("closing") else: trade_setup = None - abhishek.anand 1 last edited by @ab_trader Hey! Did this line work for you? I am getting the same value for len(self.data) and len(self) at every next step so the difference is never 1. @abhishek-anand-1 I never checked this myself.
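A side note on the snippet quoted in this thread: len(self.datas) is the number of data feeds, not the feed length, so with a single feed its difference with len(self) can never behave as intended — which matches the report that the values never differ by 1. The comparison wants the total bar count instead (in backtrader that would be something like self.data.buflen() for a preloaded feed; treat that as an assumption and check it against your version). The decision logic itself is easy to sketch and test outside backtrader:

```python
def next_action(bars_seen: int, total_bars: int, in_position: bool) -> str:
    """Buy-and-hold rule: buy on the first bar, close one bar before
    the feed ends.

    bars_seen  -- len(self) inside a strategy's next()
    total_bars -- total bars in the feed (e.g. self.data.buflen())
    """
    if not in_position:
        return "buy"
    if total_bars - bars_seen == 1:
        return "close"
    return "hold"

# Walk a 5-bar feed and collect the actions. (A real strategy would
# stop trading after the close; the toy loop keeps holding.)
actions = []
in_pos = False
for bar in range(1, 6):
    action = next_action(bar, 5, in_pos)
    if action == "buy":
        in_pos = True
    actions.append(action)

print(actions)  # ['buy', 'hold', 'hold', 'close', 'hold']
```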
https://community.backtrader.com/topic/1466/simple-buy-and-hold-strategy
Yeah, MDN’s “in the process of being dropped” boilerplate is a little ambitious. Even when Safari/iOS finally add support for href without xlink, browsers are required by the current spec to maintain backwards-compatibility support for the xlink version, and I wouldn’t expect that to change in a future spec, either. There’s no security reason to drop it; the change is all about making it nicer for authors. And there’s nothing nice to authors about breaking old websites! There have been other changes in the SVG spec where features have been dropped completely, but only in cases where those features never gained universal support and/or much use. xlink:href doesn’t meet that test. So xlink:href will likely stay “deprecated” forever (not the recommended solution, but browsers must maintain support), never shifting over to “obsolete” (doesn’t work any more). Surprised by the support from IE11. So, if I want to use just href, what should I do? Wait for Safari to get an update or maybe put both attributes? I feel the latter makes little sense, though… If you aren’t updating the articles, you’re part of the reason deprecated code never gets updated. And why browser developers get away with not prioritizing adding support for new standards. Browser developers hate when old sites break because of code support being pulled, but also hate when new sites break because their browser doesn’t support the new standards. Because in the latter case, that is a much bigger driver for users to switch browsers. If SVGs start looking weird on iOS, Apple is going to add support right away before users switch to that other browser their friend uses that never has that problem. Insightful post, btw. I removed ‘xlink:href’ from my SVGs and switched to using just ‘href’, then found that iOS browsers (Chrome, Safari) would no longer handle the links. My workaround was simply to use both variants: is there any reason (aside from the inelegance of repetition) not to do this?
xlink:href may be deprecated, but dropping it from your code while there are still widely-used browsers that depend on it seems like a bad idea. Don’t forget the namespace for xlink. Had this issue a while back and wrote about it. There doesn’t seem to be any reason not to just use both attributes, right? Using both attributes is even suggested in the SVG2 spec; I tried writing both and it seems to work. So my policy will be: – Keep the old code as it is. It will probably work forever and if, at some point in the future, the deprecation takes effect and browsers drop support, the code will be very old, and the icons will probably be just one concern :-). And nobody will reclaim those 50 bucks from you. – In new code, write both. It will work now and after deprecation. If you use macros, components, mixins… whatever, it will be trivial; otherwise just some more typing. – When iOS and Safari support it, drop the namespaced version. You should be able to make href="" on SVG work with JavaScript. Years back, Eric Meyer was supportive of the idea, or came up with the idea, of allowing href="" on more elements in the HTML5 spec. I was also supportive of this idea, but unfortunately, the idea was rejected and never made it into the HTML5 spec. I’m pretty sure Eric wrote a script to make the linking work for the purposes of a demonstration (see the second link). This, of course, could be used to enable the linking until browser support improves. Looking at browsers does not tell the full story. You need to consider support in editors and renderers for standalone SVGs. In the wild, the vast majority of existing SVG files are authored with Inkscape or Adobe Illustrator. Both do not support href without xlink:, reading or writing. There are a lot of legitimate reasons to do server-side rendering to a raster format. Chances are you will use ImageMagick, which in practice means the actual rendering is delegated either to librsvg or Inkscape. (The other widespread application is SVG-to-PDF conversion, but I cannot comment on that.)
librsvg (a tool developed for rendering icons on the Gnome desktop, and by far the fastest renderer available) needs the namespace, also. Don’t even think about older tools like the Java Batik library. Leaving out the namespace might sound “modern”, but it will not produce reusable code for the foreseeable future. Keeping it is nothing more than a few extra bytes that won’t bother anyone. Hi, if I use just ‘href’ then it’s not working on Chrome; xlink:href still works in my SVG. Not deprecated…. It’s still deprecated in the spec, but it’s up to browsers whether or not to continue supporting it. So, yeah, you’re correct that xlink:href is not dropped in Chrome. :) A heads up — the latest Safari Technology Preview release (v63) added href support yesterday (August 15). See:
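To make the “use both variants” workaround from these comments concrete, here is a sketch of an icon reference carrying both attributes (the icon id is made up):

```html
<!-- Standalone .svg files still need the xlink namespace declared;
     inline SVG in an HTML document does not. -->
<svg xmlns="http://www.w3.org/2000/svg"
     xmlns:xlink="http://www.w3.org/1999/xlink"
     width="24" height="24">
  <!-- href is the SVG2 attribute; xlink:href is the fallback for older
       Safari/iOS and for editors/renderers that require the namespace.
       Browsers that understand both prefer href. -->
  <use href="#icon-menu" xlink:href="#icon-menu"></use>
</svg>
```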
https://css-tricks.com/on-xlinkhref-being-deprecated-in-svg/
Introduction to Python Class Constants An unchangeable value, once assigned, is called a constant. In other words, constants are containers that hold some information and cannot be changed afterwards. Write-once, read-many-times constants can be created in a Python class. Usually, constants are defined at the module level. They are written in capital letters, with each word separated by an underscore. In practice, class-level constants are rarely used in Python, because constants are typically placed in modules or global variables used throughout a program. Example: TOTAL, GROSS_BUDGET, or assigning them like: PI = 3.14 So, for the examples, a module file – constant.py – is created. Syntax: As mentioned above, the syntax for Python class constants is something like this: Declaring a value to a constant PI = 3.14 A variable is marked as a constant by writing its name in capital letters. Objects such as dictionaries and lists are mutable, that is, changeable. If a constant name is bound to such an object, for example a list, the name stays bound to that list object, but the contents of the list may still change: items can be removed, and more items can be added through methods like append. How Do Python Class Constants Work? In a non-technical, human understanding, a constant can be thought of as something you buy that has no exchange policy. In other words: take a bag as the constant; once books are placed in that bag, they cannot be exchanged again. Now, getting into the actual working: In Python, variables are named locations that are basically used to store data within memory. A unique identifier name is given to each and every variable. Variables do not need a declaration prior to reserving memory. Variable initialization happens automatically when a value is assigned to a variable.
Example: count = 0 employee_name = 'hemanth' age = agee = ageee = 22 But then, what if you want to create a global variable that is unchangeable and unique? For that, you declare a constant, which is much the same as declaring a variable but with capital letters. Any variable declared with capital letters is considered a constant. Working: The Python constants are created first. They are identified as constants because they are written in capital letters, with underscores in between so that a multi-word name reads as a single variable. The variable is then assigned a value which stays globally consistent throughout the program. Once assigned, it is the same at all times. Declaring a Constant in Python: - constant.py is created first GOLDEN_RATIO = 1.62 PI = 3.14 And then, in the main module, you import the constant module. - Value assigning to a constant in Python main.py is created later: import constant print(constant.GOLDEN_RATIO) print(constant.PI) Examples of Python Class Constants - Always declare a constant by capitalizing the letters. - Nomenclature always serves a purpose. - CamelCase notation is used. - Do not start with a digit. - Python constants are put into Python modules and are meant not to be changed. - Use underscores when you need them. - Underscores make the code neat and readable. A word is clearer and more understandable than a single letter. Let's say you want to declare the number of case studies as a constant. You will realize that declaring it as CASE_STUDY is much more of a tip-off than calling it CS or C. - You cannot start a constant variable with a digit. - You cannot use special symbols like @, #, &, %, $ etc. - You can always access class constants from a superclass in a subclass.
Example #1 Code: NAME = 'hemanth' YOB = 1999 ID_NUM = 17783 print(NAME) Output: Example #2 Code: NUM1 = 50 num2 = 65 print(num2+NUM1) Output: Accessing class constants from a subclass in Python: self.a_constant is used when you need to access a class constant that was defined in the superclass. An example will make this clearer: Example #3 Code: #access class constants from superclass to subclass class C1: a_constant = 0.167355 class C2(C1): def this_constant(self): print(self.a_constant) an_object = C2() an_object.this_constant() Output: Conclusion Unlike other programming languages such as C++ and Java, Python does not provide a dedicated constant construct. Writing a variable name in upper-case letters, whatever the variable, is what marks it as a constant by convention. Strings, tuples, and numbers are immutable. When a constant name is bound to a tuple, string, or number, the name is always bound to that object, and the object's contents will also always stay the same because the object is immutable. Recommended Articles This is a guide to Python Class Constants. Here we discuss the introduction and how Python class constants work, along with different examples and their code implementation. You may also have a look at the following articles to learn more –
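A short addendum to the conclusion: the fact that capitalization is only a convention can be demonstrated directly, and if real enforcement is wanted, a frozen dataclass is one common pattern (a sketch, not the only option):

```python
from dataclasses import dataclass, FrozenInstanceError

PI = 3.14
PI = 3.15  # nothing stops this rebinding: UPPER_CASE is only a convention

@dataclass(frozen=True)
class Constants:
    PI: float = 3.14
    GOLDEN_RATIO: float = 1.62

C = Constants()
print(C.PI)  # 3.14

try:
    C.PI = 3.15  # a frozen dataclass raises instead of rebinding
except FrozenInstanceError:
    print("rebinding blocked")
```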
https://www.educba.com/python-class-constants/
This post continues the topic about Advanced Usage of Styled Components and covers more cool styling which you can do with styled components. So, let's start! In this post I'm going to use the same hamburger menu component which I was creating in the previous post. So let's refresh the code: // Menu.js import {MenuButton, Line, MenuNavigation, NavList, NavItem, NavLink} from "./Menu.styles"; import { useState } from "react"; export const Menu = () => { const [display, setDisplay] = useState(false); const handleClick = () => { setDisplay(!display); }; return ( <> <MenuButton onClick={handleClick}> <Line></Line> <Line></Line> <Line></Line> </MenuButton> <MenuNavigation displayIt={display}> <NavList> <NavItem> <NavLink href="/">About</NavLink> </NavItem> <NavItem> <NavLink primary href="/">Home</NavLink> </NavItem> </NavList> </MenuNavigation> </> ); }; //Menu.styles.js import styled from "styled-components"; export const MenuButton = styled.div` cursor: pointer; width: 3rem; height: 100%; display: flex; flex-direction: column; justify-content: space-around; align-items: center; `; export const Line = styled.div` width: 80%; height: 3px; background-color: black; margin: 0.2rem; `; export const MenuNavigation = styled.div` position: fixed; width: 200px; max-width: 70%; height: 100%; left: 0; margin-top: 1.4rem; z-index: 200; background-color: white; padding: 1rem 2rem; transition: all 0.7s ease; box-shadow: 0px 8px 30px rgba(0, 0, 0, 0.2); display: ${(props) => (props.displayIt ?
"block" : "none")}; `; export const NavList = styled.ul` margin: 0; padding: 0; list-style: none; display: flex; flex-direction: column; align-items: center; `; export const NavItem = styled.li` margin: 5px 0; box-sizing: border-box; width: 100%; display: block; `; export const NavLink = styled.a` color: #8f5c2c; text-decoration: none; width: 100%; box-sizing: border-box; display: block; padding: 0.5rem; ${(props) => props.primary && ` background: green; color: white; `} `; And this is the output - a nice-looking hamburger menu and when we toggle it, we can see the expanded menu sliding out from left side of the page like this: Hover Effect Lets add a hover effect to our menu links, so when we hover over them, the background colour will be different. As you know, we add hover effect by using :hover pseudo-class in CSS. You can use pseudo-classes in styled components right the same way: :hover { background-color: #f0e5d8; } Now our style for links look like this: export const NavLink = styled.a` color: #8f5c2c; text-decoration: none; width: 100%; box-sizing: border-box; display: block; padding: 0.5rem; ${(props) => props.primary && ` background: green; color: white; `}; :hover { background-color: #f0e5d8; } `; You can use any pseudo-classes like :active, :focus or :visited and many others with styled components (list of the most used pseudo-classes you can find here) Media Queries We probably want our Hamburger menu to be visible only on mobile devices. So we can add a media query to the MenuButton styles like this: @media screen and (min-width: 500px) { display: none; } So, as you can see, media queries are working as well in a usual way with styled components. Using Classes What if we want to style a particular element by using className attribute? We can do that! But here comes a tricky part :) Let's consider we want to style our menu links using className attribute. 
We have added blue and red classes to them: <NavList> <NavItem> <NavLink className="blue" href="/"> About </NavLink> </NavItem> <NavItem> <NavLink primary href="/"> Home </NavLink> </NavItem> <NavItem> <NavLink className="red" href="/"> Contact </NavLink> </NavItem> </NavList> We can access those classes from styled components in 2 ways: - Using .selector with a class name This approach can be used only on the parent element to target its child element. It refers to the child element of the component. So, to target one of our links - NavLink - we need to use a class selector in its parent - NavItem: export const NavItem = styled.li` margin: 5px 0; box-sizing: border-box; width: 100%; display: block; .blue {color: blue;} <- this is our NavLink with class Blue `; Now one of our links has blue colour: 2. Using & selector together with . and class name This approach lets us target the className of the main component itself, so we can target the Red class from NavLink: export const NavLink = styled.a` color: #8f5c2c; text-decoration: none; width: 100%; box-sizing: border-box; display: block; padding: 0.5rem; ${(props) => props.primary && ` background: green; color: white; `}; :hover { background-color: #f0e5d8; } &.red { <- this is our NavLink with class Red background: red; } `; And now our link has red background color: The ampersand & is a pretty important selector in styled components. It can be used to increase the specificity of rules on the component; this can be useful if you are dealing with a mixed styled-components and vanilla CSS environment where there might be conflicting styles. If you look now at the hover effect of NavLink with class red, you can see that it's gone. That is because & has higher specificity than the tag styles. To get the effect back, we need to add it inside the & block of code: &.red { background: red; :hover { background-color: #f0e5d8; } } Now we have the hover effect back: Phew, that was a lot we have explored today!
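The ${props => ...} interpolations used throughout this post are plain tagged-template machinery, and it is worth seeing how little magic is involved. The toy css tag below (a sketch only — the real styled-components library also generates class names, handles nesting, and much more) resolves function interpolations against a props object the same way:

```javascript
// Toy version of styled-components' prop interpolation: functions in
// the template literal are called with props, other values verbatim.
function css(strings, ...interpolations) {
  return (props) =>
    strings.reduce((out, str, i) => {
      const interp = interpolations[i];
      const value = typeof interp === "function" ? interp(props) : interp;
      // Falsy results (false, undefined) render as nothing, which is
      // exactly why `props.primary && "..."` works as a conditional.
      return out + str + (value || "");
    }, "");
}

const navLink = css`
  color: #8f5c2c;
  ${(props) => props.primary && "background: green; color: white;"}
`;

console.log(navLink({ primary: true }).includes("background: green"));  // true
console.log(navLink({ primary: false }).includes("background: green")); // false
```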
I hope you learned something new together with me :) But it's not all we can do with styled-components. To be continued... P.S. You can find the link to the project HERE if you need it. Thank you for reading my blog. Feel free to connect on LinkedIn or Twitter :)
https://dev.to/olenadrugalya/advanced-usage-of-styled-components-for-your-react-app-part-2-3p6f
Distributed Lock using Zookeeper This article is by Stephen Mouring, Jr. On my project we have a number of software components that run concurrently, some on a cron, and some as part of our build process. Many of these components need to mutate data in our data store and have the possibility of conflicting with one another. What is worse is that many of these processes run on separate machines making language level or even file system level synchronization impossible. Zookeeper is a natural solution to the problem. It is a distributed system for, among other things, managing coordination across a cluster of machines. Zookeeper manages information as a hierarchical system of "nodes" (much like a file system). Each node can contain data or can contain child nodes. Zookeeper supports several types of nodes. A node can be either "ephemeral" or "persistent" meaning it is either deleted when the process that created it ends or it remains until manually deleted. A node can also be "sequential" meaning each time a node is created with a given name, a sequence number is postfixed to that name. This allows you to create a series of nodes with the same name that are ordered in the same order they were created. To solved our problem we need to have a locking mechanism that works across processes and across machines that allows one holder of the lock to execute at a given time. Below is the Java code we wrote to solve the problem. I will go through it step by step. 
public class DistributedLock { private final ZooKeeper zk; private final String lockBasePath; private final String lockName; private String lockPath; public DistributedLock(ZooKeeper zk, String lockBasePath, String lockName) { this.zk = zk; this.lockBasePath = lockBasePath; this.lockName = lockName; } public void lock() throws IOException { try { // lockPath will be different than (lockBasePath + "/" + lockName) because of the sequence number ZooKeeper appends lockPath = zk.create(lockBasePath + "/" + lockName, null, Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL); final Object lock = new Object(); synchronized(lock) { while(true) { List<String> nodes = zk.getChildren(lockBasePath, new Watcher() { @Override public void process(WatchedEvent event) { synchronized (lock) { lock.notifyAll(); } } }); Collections.sort(nodes); // ZooKeeper node names can be sorted lexicographically if (lockPath.endsWith(nodes.get(0))) { return; } else { lock.wait(); } } } } catch (KeeperException e) { throw new IOException (e); } catch (InterruptedException e) { throw new IOException (e); } } public void unlock() throws IOException { try { zk.delete(lockPath, -1); lockPath = null; } catch (KeeperException e) { throw new IOException (e); } catch (InterruptedException e) { throw new IOException (e); } } }
Note you should use the same lock name for every process that you want to share the same lock. The lock name is the common reference that multiple processes lock on. Note: This class can support multiple locks if you use a different lock name for each lock you want to create. Say you have two data stores (A and B). You have several processes that need to mutate A and B. You could use two different lock names (say LockA and LockB) to represent the locks for each data store. Any process that needs to mutate data store A could create a DistributedLock with a lock name of LockA. Likewise, any process that needs to mutate data store B could create a DistributedLock with a lock name of LockB. A process that needs to mutate both data stores would create two DistributedLock objects (one with a lock name of LockA and one with a lock name of LockB). Once your process has created a DistributedLock object it can then call the lock() method to attempt to acquire the lock. The lock() method will block until the lock is acquired. // lockPath will be different than (lockBasePath + "/" + lockName) because of the sequence number ZooKeeper appends lockPath = zk.create(lockBasePath + "/" + lockName, null, Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL); First of all, the lock() method creates a node in ZooKeeper to represent its "position in line" waiting for the lock. The node created is EPHEMERAL which means if our process dies for some reason, its lock or request for the lock will automatically disappear thanks to ZooKeeper's node management, so we do not have to worry about timing out nodes or cleaning up stale nodes. final Object lock = new Object(); synchronized(lock) { while(true) { List<String> nodes = zk.getChildren(lockBasePath, new Watcher() { @Override public void process(WatchedEvent event) { synchronized (lock) { lock.notifyAll(); } } }); // Sequential ZooKeeper node names can be sorted lexicographically! Collections.sort(nodes); // Are we the "topmost" node?
(The node with the lowest sequence number, that is.) if (lockPath.endsWith(nodes.get(0))) { return; } else { lock.wait(); } } } To understand the code above you need to understand how ZooKeeper works. ZooKeeper operates through a system of callbacks. When you call getChildren() you can pass in a "watcher" that will get called anytime the list of children changes. The gist of what we are doing here is this. We are creating an ordered list of nodes (sharing the same name). Whenever the list changes, every process that has registered a node is notified. Since the nodes are ordered, one node will be "on top", or in other words have the lowest sequence number. That node is the node that owns the lock. When a process detects that its node is the topmost node, it proceeds to execute. When it is finished, it deletes its node, triggering a notification to all other processes, which then determine which node is next in line for the lock. The tricky part of the code from a Java perspective is the use of nested synchronized blocks. The nested synchronization structure is used to ensure that the DistributedLock is able to process every update it gets from ZooKeeper and does not "lose" an update if two or more updates come from ZooKeeper in quick succession. The inner synchronized block in the Watcher method is called from an outside thread whenever ZooKeeper reports a change to its children. Since the Watcher callback is in a synchronized block keyed to the same Java lock object as the outer synchronized block, it means that the update from ZooKeeper cannot be processed until the contents of the outer synchronized block are finished. In other words, when an update comes in from ZooKeeper, it fires a notifyAll() which wakes up the loop in the lock() method. That lock method gets the updated children and sets a new Watcher. (Watchers have to be reset once they fire as they are not a perpetual callback. They fire once and then disappear.)
If the newly reset Watcher fires before the rest of the loop executes, it will block because it is synchronized on the same Java lock object as the loop. The loop finishes its pass, and if it has not acquired the distributed lock, it waits on the Java lock object. This frees the Watcher to execute whenever a new update comes, repeating the cycle.

Once the lock() method returns, your process holds the distributed lock and can continue to execute its business logic. Once it is complete, it can release the lock by calling the unlock() method.

    public void unlock() throws IOException {
        try {
            zk.delete(lockPath, -1);
            lockPath = null;
        } catch (KeeperException e) {
            throw new IOException(e);
        } catch (InterruptedException e) {
            throw new IOException(e);
        }
    }

All unlock() does is explicitly delete this process's node, which notifies all the other waiting processes and allows the next one in line to go. Because the nodes are EPHEMERAL, the process can exit without unlocking and ZooKeeper will eventually reap its node, allowing the next process to execute. This is a good thing, because it means that if your process ends prematurely without you having a chance to call unlock(), it will not block the remaining processes. Note that it is best to explicitly call unlock() if you can, because it is much faster than waiting for ZooKeeper to reap your node: you will delay the other processes less if you explicitly unlock.
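The claim above that sequential ZooKeeper node names sort lexicographically is easy to check in plain Java. The node names below are made-up illustrations of the zero-padded sequence suffixes ZooKeeper appends to EPHEMERAL_SEQUENTIAL nodes, not real server output:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class SequentialSortDemo {
    public static void main(String[] args) {
        // ZooKeeper appends a zero-padded 10-digit sequence number to
        // EPHEMERAL_SEQUENTIAL nodes, so a plain string sort orders them
        // by creation order. These names are illustrative examples.
        List<String> nodes = new ArrayList<>(Arrays.asList(
                "LockA0000000012", "LockA0000000003", "LockA0000000007"));
        Collections.sort(nodes);
        // The head of the sorted list is the oldest node: it owns the lock.
        System.out.println(nodes.get(0));
    }
}
```

Because the sequence numbers are fixed-width, lexicographic and numeric order coincide, which is exactly why the lock() loop can get away with Collections.sort() on raw child names.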
https://dzone.com/articles/distributed-lock-using
NAME

Jifty::Web::Form::Element - Some item that can be rendered in a form

DESCRIPTION

The arguments passed to onclick (or any similar handler method) are a string, a hash reference, or a reference to an array of multiple hash references. Strings are inserted verbatim. Hash references can take a number of possible keys. The most important is the mode of the fragment replacement, if any; it is specified by providing at most one of the following keys:

- append => PATH Adds the given PATH as a new fragment, just before the close of the CSS selector given by "element", which defaults to the end of the current region.
- prepend => PATH Adds the given PATH as a new fragment, just after the start of the CSS selector given by "element", which defaults to the start of the current region.
- popout => PATH Displays the given PATH as a new fragment in a lightbox-style popout.
- replace_with => PATH Replaces the region specified by the region parameter (which defaults to the current region) with the fragment located at the given PATH. If undef is passed as the PATH, acts like a "delete".
- refresh => REGION Refreshes the given REGION, which should be a Jifty::Web::PageRegion object, or the fully qualified name of such.
- refresh_self => 1 Refreshes the current region; this is the default action if a non-empty args is supplied but no other mode is given.
- delete => REGION Removes the given REGION from the page, permanently.
The following options are also supported:

- toggle => BOOLEAN If set to true, the link will possibly toggle the region to empty, if the region's current path is the same as the path the region is trying to be set to.
- region => REGION The region that should be updated. This defaults to the current region.
- element => CSS SELECTOR A CSS selector specifying where the new region should be placed; used with "append" and "prepend", above. The "get_element" in Jifty::Web::PageRegion method may be useful in specifying elements of parent page regions.
- submit => MONIKER A Jifty::Action, a Jifty::Action moniker, a hashref of { action => Jifty::Action::Subclass, arguments => { argument => value, argument2 => value2 } }, or an arrayref of them. These actions are submitted when the event is fired. Any arguments specified will override arguments submitted by form fields. If you explicitly pass undef, then all actions will be submitted. This can be useful in conjunction with an onclick handler, since declaring an onclick handler intentionally turns off action submission.
- disable => BOOLEAN If true, disable all form fields associated with the actions in submit when this Element is clicked. This serves to give immediate visual indication that the request is being processed, as well as to prevent double-submits. Defaults to true.
- args => HASHREF Arguments to the region. These will override the arguments that the region was given when it was last rendered.
- effect => STRING The Scriptaculous or jQuery visual effect to use when updating or creating the fragment.
- effect_args => HASHREF A hashref of arguments to pass to the effect when it is created. These can be used to change the duration of the effect, for instance.
- remove_effect => STRING As effect, but for when the previous version of the region is removed.
- remove_effect_args => HASHREF As effect_args, but for remove_effect.
- beforeclick => STRING A string containing some JavaScript code to be used before a click.
- confirm => STRING Prompts the user with a JavaScript confirm dialog with the given text before carrying out the rest of the handlers. If the user cancels, do nothing; otherwise proceed as normal. TODO: This does not have a non-JavaScript fallback method yet.

handlers

The following handlers are supported: onclick onchange ondblclick onmousedown onmouseup onmouseover onmousemove onmouseout onfocus onblur onkeypress onkeydown onkeyup onselect

NOTE: onload, onunload, onsubmit and onreset are not yet supported.

WARNING: if you use the onclick handler, make sure that your JavaScript is "return (function name);", or you may well get a very strange-looking error from your browser.

accessors

Any descendant of Jifty::Web::Form::Element should be able to accept any of the event handlers (above) as one of the keys to its new parameter hash.

new PARAMHASH OVERRIDE

Creates a new Jifty::Web::Form::Element object blessed with PARAMHASH, and set with accessors for the hash values in OVERRIDE.

onclick

The onclick event occurs when the pointing device button is clicked over an element. This attribute may be used with most elements.

onchange

The onchange event occurs when a control loses the input focus and its value has been modified since gaining focus. This handler can be used with all form elements.

ondblclick

The ondblclick event occurs when the pointing device button is double clicked over an element. This handler can be used with all form elements.

onmousedown

The onmousedown event occurs when the pointing device button is pressed over an element. This handler can be used with all form elements.

onmouseup

The onmouseup event occurs when the pointing device button is released over an element. This handler can be used with all form elements.

onmouseover

The onmouseover event occurs when the pointing device is moved onto an element. This handler can be used with all form elements.
onmousemove

The onmousemove event occurs when the pointing device is moved while it is over an element. This handler can be used with all form elements.

onmouseout

The onmouseout event occurs when the pointing device is moved away from an element. This handler can be used with all form elements.

onfocus

The onfocus event occurs when an element receives focus either by the pointing device or by tabbing navigation. This handler can be used with all form elements.

onblur

The onblur event occurs when an element loses focus either by the pointing device or by tabbing navigation. This handler can be used with all form elements.

onkeypress

The onkeypress event occurs when a key is pressed and released over an element. This handler can be used with all form elements.

onkeydown

The onkeydown event occurs when a key is pressed down over an element. This handler can be used with all form elements.

onkeyup

The onkeyup event occurs when a key is released over an element. This handler can be used with all form elements.

onselect

The onselect event occurs when a user selects some text in a text field. This attribute may be used with the text and textarea fields.

_handler_setup

This method is used by all handlers to normalize all arguments.

handlers_used

Returns the names of JavaScript handlers which exist for this element.

javascript

Returns the JavaScript necessary to make the events happen, as a string of HTML attributes.

javascript_attrs

Returns the JavaScript necessary to make the events happen, as a hash of attribute-name and value.

javascript_preempt

Returns true if the JavaScript's handlers should prevent the web browser's standard effects from happening; that is, for onclick, it prevents buttons from submitting and the like. The default is to return true, but this can be overridden.

class

Sets the CSS class that the element will display as.

title

Sets the title that the element will display, e.g.
for tooltips.

key_binding

Sets the key binding associated with this element.

key_binding_label

Sets the key binding label associated with this element (if none is specified, the normal label is used instead).

id

Subclasses must override this to provide each element with a unique id.

label

Sets the label of the element. This will be used for the key binding legend if key_binding_label is not set.

key_binding_javascript

Returns the JavaScript fragment to add the key binding for this input, if one exists.

render_key_binding

Renders the JavaScript from "key_binding_javascript" in a <script> tag, if needed.

handler_allowed HANDLER_NAME

Returns 1 if the handler (e.g. onclick) is allowed, undef otherwise. The set defined here represents the typical handlers that are permitted. Derived classes should override if they stray from the norm. By default we allow: onchange onclick ondblclick onmousedown onmouseup onmouseover onmousemove onmouseout onfocus onblur onkeypress onkeydown onkeyup
https://metacpan.org/pod/Jifty::Web::Form::Element
So I want to make a file reader / buffered reader that reads a new line of the text file, let's say every 30 seconds. Like it reads the first line, waits 30 seconds, reads the next line, and so on. I'm thinking it might come in handy using Thread.sleep(). I've searched but can't seem to find an example. Hope you guys can help me.

    import java.io.*;

    class ReadFileWithTime {
        public static void main(String[] args) {
            try {
                FileReader fr = new FileReader("C:/data.txt");
                BufferedReader br = new BufferedReader(fr);
                String str;
                while ((str = br.readLine()) != null) {
                    System.out.println(str);
                    Thread.sleep(30000); // pause 30 seconds between lines
                }
                br.close();
            } catch (Exception e) {
                System.out.println(e);
            }
        }
    }
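An alternative to sleeping the main thread is to let a scheduler fire each read. This sketch is not from the original answer: it reads from an in-memory string instead of C:/data.txt so it is self-contained, and it uses a millisecond interval so the demo finishes quickly; swap in new FileReader("C:/data.txt") and 30, TimeUnit.SECONDS for the real use case.

```java
import java.io.BufferedReader;
import java.io.StringReader;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class TimedLineReader {
    public static void main(String[] args) throws Exception {
        // StringReader stands in for FileReader("C:/data.txt") so the
        // example runs anywhere without a fixture file on disk.
        BufferedReader br = new BufferedReader(new StringReader("one\ntwo\nthree"));
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            try {
                String line = br.readLine();
                if (line != null) {
                    System.out.println(line);
                } else {
                    scheduler.shutdown(); // no more lines: stop scheduling
                }
            } catch (Exception e) {
                scheduler.shutdown();
            }
        }, 0, 50, TimeUnit.MILLISECONDS); // use 30, TimeUnit.SECONDS for real
        scheduler.awaitTermination(10, TimeUnit.SECONDS);
        br.close();
    }
}
```

The upside over the Thread.sleep() loop is that the main thread stays free, and the interval does not drift by however long the read itself takes.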
http://roseindia.net/answers/viewqa/Java-Beginners/24145-Reading-string-from-file-timed.html
In the introduction to SwiftUI post I mentioned views. SwiftUI is all about views. Remember the Hello World app?

    import SwiftUI

    struct ContentView: View {
        var body: some View {
            Text("Hello World")
        }
    }

ContentView is the main view. Its job is to define which views compose our app. In here, we have a single view, Text. If you run this in Xcode, this is what the app will look like:

Notice the additional code after the ContentView struct: this is how we tell Xcode what to display in the preview panel on the right. It's not part of the app, but it's used in development.

A view can have modifiers. Here's an example of a modifier of the Text view, font():

    struct ContentView: View {
        var body: some View {
            Text("Hello World")
                .font(.largeTitle)
        }
    }

This modifier takes the Text view we created and makes the font larger:

Different views can have different modifiers. We've just seen the Text view so far, and that view has a number of modifiers you can use, including:

- font() sets the default font for text in the view
- background() sets the view background
- foregroundColor() sets the color of the foreground elements displayed by the view
- padding() pads the view along all edges

... and many more. In the case of Text you can check all the modifiers you can use on its documentation page.

It's important to note that a modifier does not modify the existing view. It actually takes an existing view and creates a new view. Why is this important? Because this fact causes the order of modifiers to matter.

Suppose you want to set the background of the Text view, and then add some padding to it.

    Text("Hello World")
        .padding()
        .background(Color.blue)

Here's the result:

but if you invert the 2 modifiers, you get this result:

This is the consequence of modifiers returning a new view once they are applied, rather than modifying the existing view.
https://flaviocopes.com/swiftui-views-modifiers/
I was creating a new notebook and wanted to pull in show_install to ensure that my environment looked correct, and I think it is kind of strange how you call it at the moment. In the docs, it says show_install is located here:

    from fastai.utils.collect_env import *

but it seems like this only needs to be:

    from fastai.utils import *

So I was curious what needed to happen to make this change (and whether people are interested in it). To me, it just seems to fit the fastai library better, having just the two levels. My suggestion is to add the following to the __init__.py file:

    from .collect_env import *
    __all__ = [*collect_env.__all__]

I tried this in my local environment and it worked as I expected it to. Here is the PR if there is interest:
https://forums.fast.ai/t/show-install-modification-suggestion/44214
This post assumes some basic C skills.

Linux puts you in full control. This is not always seen from everyone's perspective, but a power user loves to be in control. I'm going to show you a basic trick that lets you heavily influence the behavior of most applications, which is not only fun, but also, at times, useful.

Let us begin with a simple example. Fun first, science later.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(){
        srand(time(NULL));
        int i = 10;
        while(i--) printf("%d\n", rand() % 100);
        return 0;
    }

Simple enough, I believe. I compiled it with no special flags, just:

    gcc random_num.c -o random_num

I hope the resulting output is obvious: ten randomly selected numbers 0-99, hopefully different each time you run this program.

Now let's pretend we don't really have the source of this executable. Either delete the source file, or move it somewhere - we won't need it. We will significantly modify this program's behavior, yet without touching its source code nor recompiling it.

For this, let's create another simple C file:

    int rand(){
        return 42; //the most random number in the universe
    }

We'll compile it into a shared library.

    gcc -shared -fPIC unrandom.c -o unrandom.so
    export LD_PRELOAD=$PWD/unrandom.so

and then run the program normally. An unchanged app run in an apparently usual manner seems to be affected by what we did in our tiny library...

Yup, you are right: our program failed to generate random numbers, because it did not use the "real" rand(), but the one we provided - which returns 42 every time.

This is not entirely true. We did not choose which rand() we want our program to use. We told it just to use rand(). When our program is started, certain libraries (that provide functionality needed by the program) are loaded. We can learn which these are using ldd:

    $ ldd random_num

What you see as the output is the list of libs that are needed by random_num.
This list is built into the executable, and is determined at compile time. The exact output might slightly differ on your machine, but a libc.so must be there - this is the file which provides core C functionality. That includes the "real" rand().

We can have a peek at what functions libc provides. I used the following to get a full list:

    nm -D /lib/libc.so.6

The nm command lists symbols found in a binary file. The -D flag tells it to look for dynamic symbols, which makes sense, as libc.so.6 is a dynamic library. The output is very long, but it indeed lists rand() among many other standard functions.

Now what happens when we set the environment variable LD_PRELOAD? This variable forces some libraries to be loaded for a program. In our case, it loads unrandom.so for random_num, even though the program itself does not ask for it. The following command may be interesting:

    $ LD_PRELOAD=$PWD/unrandom.so ldd random_num

Note that it lists our custom library. And indeed this is the reason why its code gets executed: random_num calls rand(), but if unrandom.so is loaded, it is our library that provides the implementation for rand(). Neat, isn't it?

This is not enough. I'd like to be able to inject some code into an application in a similar manner, but in such a way that it will be able to function normally. It's clear that if we implemented open() with a simple "return 0;", the application we would like to hack would malfunction. The point is to be transparent, and to actually call the original open:

    int open(const char *pathname, int flags){
        /* Some evil injected code goes here. */
        return open(pathname, flags); // Here we call the "real" open function, that is provided to us by libc.so
    }

Hm. Not really. This won't call the "original" open(...). Obviously, this is an endless recursive call.

How do we access the "real" open function? It is needed to use the programming interface to the dynamic linker. It's simpler than it sounds.
Have a look at this complete example, and then I'll explain what happens:

    #define _GNU_SOURCE
    #include <dlfcn.h>

    typedef int (*orig_open_f_type)(const char *pathname, int flags);

    int open(const char *pathname, int flags, ...)
    {
        /* Some evil injected code goes here. */

        orig_open_f_type orig_open;
        orig_open = (orig_open_f_type)dlsym(RTLD_NEXT, "open");
        return orig_open(pathname, flags);
    }

The dlfcn.h is needed for the dlsym function we use later. That strange #define directive instructs the compiler to enable some non-standard stuff; we need it to enable RTLD_NEXT in dlfcn.h. That typedef is just creating an alias to a complicated pointer-to-function type, with arguments just as the original open - the alias name is orig_open_f_type, which we'll use later.

The body of our custom open(...) consists of some custom code. The last part of it creates a new function pointer orig_open which will point to the original open(...) function. In order to get the address of that function, we ask dlsym to find for us the next "open" function on the dynamic libraries stack. Finally, we call that function (passing the same arguments as were passed to our fake "open"), and return its return value as ours.

As the "evil injected code" I simply used:

    printf("The victim used open(...) to access '%s'!!!\n", pathname); //remember to include stdio.h!

To compile it, I needed to slightly adjust the compiler flags:

    gcc -shared -fPIC inspect_open.c -o inspect_open.so -ldl

I had to append -ldl, so that this shared library is linked to libdl, which provides the dlsym function. (Nah, I am not going to create a fake version of dlsym, though this might be fun.)

So what do I have as a result? A shared library, which implements the open(...) function so that it behaves exactly as the real open(...)... except it has a side effect of printfing the file path :-)

If you are not convinced this is a powerful trick, it's time you tried the following:

    LD_PRELOAD=$PWD/inspect_open.so gnome-calculator

I encourage you to see the result yourself, but basically it lists every file this application accesses. In real time. I believe it's not that hard to imagine why this might be useful for debugging or investigating unknown applications.
Please note, however, that this particular trick is not quite complete, because open() is not the only function that opens files... For example, there is also open64() in the standard library, and for a full investigation you would need to create a fake one too.

If you are still with me and enjoyed the above, let me suggest a bunch of ideas of what can be achieved using this trick. Keep in mind that you can do all of the above without the source of the affected app! These are only the ideas I came up with. I bet you can find some too - if you do, share them by commenting!
http://gnome-look.org/stories/Rafa%C5%82+Cie%C5%9Blak%3A+Dynamic+linker+tricks%3A+Using+LD_PRELOAD+to+cheat%2C+inject+features+and+investigate+programs?id=151238
- Rewrite from scratch? Sometimes I wish I could do that here...

Admin I want to find whoever wrote those bits and slap them around. Even though they've been seen before, those sorts of mistakes are indicative of a lack of even the most fundamental knowledge. Not knowing to use < 0 to determine if something is negative? You don't even need to have touched a computer before to realize that a negative number will always be less than zero. And don't get me started on the abuse of the modulo operator... I deeply pity whoever hired this genius, but at the same time feel that they're getting what they deserve by obviously not screening out candidates like this. The submitter, though, certainly does not deserve being subjected to this...

Admin value+=step ..... what if value at the beginning is not divisible by step? Maybe this is safer: value = (value / step + 1) * step;

Admin Only if you assume integer division.

Admin You are laughing at a perfectly working function (unless step is BIG - which slows down the function) and replacing it with buggy ones. Who should be laughed at?

Admin This convert-to-string-and-look-for-minus-sign seems to be a classic. Throughout all lines of code on earth that test for negative, how many times does it happen? 0.1%?

Admin You're right; anyway, the syntax being C/C#/Java-ish, it was natural for me to think of integer division as well as to think of value and step as integer variables.

Admin Flash offers such a rich and deep seam in the dirty coal-mine of WTFs that I'd always assumed it was too base to post any of the turd I deal with on a near daily basis. It provides a near perfect environment to nurture the most unintelligible, mangled and generally screwed up code. A surprisingly large proportion of flash programmers never went anywhere near a computing related education, and are mostly self taught "pros" whose attempts at anything beyond trivial toy applications will end up costing their employers very dearly indeed! Beware.
With that in mind, if you are any good at it, the pickings can be quite tasty, especially with the up and up of online advertising. Admin While you those tricks will work to determine the sign of a number, I'd image that if(x<0) uses the least amount of clock cycles. Admin isn't more something like: value = value + value%step + 1 ? Admin No: value = value - value%step + step Admin value%step is the distance to the next number divisable by step ... Admin [snipped many replacement solutions for the first "divisibility" snippet] So what's wrong with the solution the original author used? I see only the problem that the comment seems a but misleading (isn't "value" always divisible by "value"?). Apart from that, the original snippet apparently works as expected and requires virtually no thinking about - you can't really say the same about the other proposed solutions... Actually I'm using such heavily non-elegant solutions often if there's the danger that the elegant solution would also be error-prone and difficult to understand for others. Btw. who designed this strange forum posting usability where I have to decide whether I want to quote a previous posting _before_ getting an editor window? How about adding a "paste as quotation" button next to the formatting buttons (and maybe removing the unncessary HTML formatting buttons instead...) ? Just my 2 cents... Admin No, that's not equivalent. Take an example with value = 7 and step = 5, you want to find x s.t. x > value and 5|x and x is minimal. So x = value - (value % step) + step = 7 - 2 + 5 = 10 Whereas x = value + (value % step) = 7 + 2 = 9, which isn't what you want. Admin next_value_divisible_through_step_without_remainder = value + ((step - value) % step) obviously modulo is difficult for some people... Admin Hahahaha. I love this forum! Admin If value=7 and step=5, what is the value of (-2 % 5) ??? You might want to rethink that one... 
Admin The real wtf is noone can get it right If you want to round down to the nearest value that's a multiple of step: down_value=value-value%step; If you want to round up to the next value up_value=value+step-value%step; Its a really bad sign that its taken this many posts to get it right. Admin During my interview for a job, the guy gave me a quiz on modular arithmetic - basically write a formula that produces each of the following input-output tables. I got it all right (though it took me way too long to think about it), but he told me that a lot of people messed up, or wrote a lot of stuff down and crossed it out. I think I probably would have as well, as I'm pretty bad at that kind of stuff, but I'd been reading a number theory book at the time. On a related note, it would be nice if you could reliably take the modulus of negative numbers in a mathematically nice way while programming. This is possible in some programming languages but annoying in others. Admin It's worse than that, because if the example is in Java or C# or any other language which copied the deranged '%' behavior from C, then *all* of the replacements, including yours, break for negative numbers. (% isn't really modulus, it's remainder-after-integer-division-that-rounds-toward-zero.) So a fixed version is: down_value = value - ((value % step + step) % step) up_value = value - ((value % step + step) % step) + step Admin But which is actually the intended behavior? Was the original code correct to begin with, or is this a bug? It's bizarre enough that we can't easily determine their intent. In any case, the original code is WTFy enough that it should never escape without a comment explaining its intent. WTF was he TRYING to do, anyway?! Admin I'm curious about that real % operator you're talking about... Admin Once and for all: value += step - value%stepjeez. Admin I'm the submitter of this one... 
What the original author tried to do, was simply find out the next number that could be devided by "step". He wrote a function for this called "calculateDistance", which had this code in it. (makes no sense at all), and yes, "step" was BIG (so it was looping a lot). This application was one big wtf, rewriting it from scratch was really worth the effort. Admin Obviously, the enterprisey solution would be to store the next "step" for all possible "values" in a giant SQL table. Admin I've put way too much thought into this... Modulo operater gives you a remainder. I'm plugging arbitrary values into step and value in my head... since you guys said seven and five as values, step*value=35, and 35 is the first number you get to that fits the description. I would assume (just because i am too lazy to test it) that you wouldn't have a situation where step > value (value = 2, step = 100). correct me if i am wrong, but isn't the answer always going to be (or mostly always going to be) step*value? Or am i misreading the "code"? //find the next "value" divisible by "step" ? Isn't this just a "lowest common denominator" in disguise (or whatever its inverse is)? if step is always smaller than value.... Admin um... ok.. if step was constant... step += step. if step was NOT constant, then you STILL don't need to loop, you just need to store what it originally was, i.e. step_constant = 5, step = 15; step+step_constant is the next number that matches. regardless, even if step is not constant, step+=step is going to be the next number, isn't it? Or is it just waaaaay too late for me? Admin Euhmz... i guess its late for you :) What he was trying to do was this. Let's say he had 23 as the value. And he wanted to know what the next number was that could be devided by 500. What strikes me the most is this guy know how to convert a Number to String, even how to take the first character from that string... but still doesn't realize he can just check if its smaller then 0? 
Admin I believe that using the "on crack" C style % operator, true modulus would be calculated as follows: ((m+(v%m))%m) ... and prepares himself for the inevitable stoning... Admin > The real wtf is noone can get it right > If you want to round up to the next value > up_value=value+step-value%step Hah! You didn't get it either. Rounding up 4 on 4-grain will give you 8 with this equation. You have to account for the values that are already aligned. Hence: /// \brief Rounds \p n up to be divisible by \p grain template <typename T> inline T Align (T n, size_t grain) { T r = n % grain; T a = n + (grain - r); return (r ? a : n); } The a temporary should be used (instead of just folding the code into the conditional) because it allows the compiler to use a cmov and compile the whole thing with no jumps. Admin Value = 23, step = 500. increasing value 1 at a time until it was divisible by 500 = 977 iterations of that loop, giving you 1000.... how was i wrong? Like i said in my first post, step HAS to be smaller than value otherwise it's Step+=step. if it is smaller, then value+=step and THEN loop it. (because you're not going to find anything smaller than step+step ANYHOW... 2 is the smallest integer you can divide by and get a different result - barring 1 and 0, of course.) Admin it's supposed to give 8, hence the initial increment in the original code. Admin Value = 23, step = 500. increasing value 1 at a time until it was divisible by 500 = 977 iterations of that loop, giving you 1000.... how was i wrong? you're wrong because the correct next value is 500. Admin if that were the case why wouldn't the code be "VALUE = STEP"? For that matter, why even bother having a section of code? whenever you needed to know what the next divisible number was (with step being LARGER THAN VALUE) why not just use "step" in the code instead of even needing a stupid "value" variable? Admin This forum rules. Sadly, all of the above posts are WTFs. 
Here's the flash solution: value += to_int( (((step - value % step ) - step ) + ((step - value % step ) + step) ) / 2).to_string()); sheesh... ps. there is no integer division in flash, at least not in the context of the original post captcha: flash application? Admin Either this is all some kind of joke, or you mostly fail at solving trivial problems. To get the to the next value divisble by step you only need to add the difference in the modulus from step value += step - (value%step); There are probably faster and nicer ways. This is just the trivial solution. Some of you mentioned this already and others ignored it... which is even more of a WTF. How hard is reading a thread before posting your crap? Admin Heh... is it me, or wasn't the purpose of the value and step code just to increment value by the step, as the original author suggested? That is, the value is already divisible by step when it starts. But, like other people stated, the intent is a bit confusing. Admin Euh... becuase if value is 1023, and step is 500, the outcome should be 1500. And you aresaying 500 can't be devided by 500? It's probably time togo to sleep :) Admin I've said this at least four times now.... IF step is LARGER than value, the answer is either step, or multiples of step. If not, THEN you have to do other voodoo magic. see above post. I'm out of here, since this has been beaten to death and i hate having to repeat myself when the stuff is in the thread in black and white. Admin value is 23 increase value by 1 while value mod 500 is not equal to 0 --> increase value by 1 Now read this slowly. Admin Sorry, in the case of step being larger then value,you are right. Its late here :) But it could be smaller, so personally I wouldn't write an if, but I defenetly wouldn't loop! Admin OK. 
Imports system.IO
Imports System.Console
Imports System.Math

Module Module1
    Sub Main()
        Dim _step As Integer = 500
        Dim value As Integer = 23
        Dim counter As Integer = 0
        While ((value Mod _step) > 0)
            counter = counter + 1
            value = value + 1
        End While
        WriteLine(value)
        WriteLine(_step)
        WriteLine(counter)
        Read()
    End Sub
End Module

This outputs:

500
500
477

Read above... My statement was not incorrect, i just assumed that the original programmer couldn't have been so stupid as to have made the variable "step" larger than the variable "value". everything i said regarding the step>value was 100% correct, except that i assumed it wanted the NEXT larger value. which would mean 1000, in this case. 477 iterations.

Admin
hehe hehehe... kewl.

Admin
If value%step is zero then value+=step-(value%step); has the same effect as value+=step; hence your elseif is redundant. I will concede that your if(step > value) is more efficient in that it dodges the modulus operator, however I did point out that the code I (and others) provided is the trivial solution. By singling out a special case yours has become non-trivial. The point being that even a novice should be able to work out "value+=step-(value%step);" with little to no thought. For a paid programmer to implement a loop... he needs to have his hands removed so that he can never write code again! My code performs exactly the same function as yours otherwise. Just to stress the point, if step is greater than, less than or equal to value the single line I provided still works. Anyway... you don't really have to use capital letters that much, especially when there are buttons provided to help you format things in other, more attractive ways. Also, please try to spell my name correctly, there really is no excuse when it has been spelled for you at the top of the quote.

Admin
The quote seemed to not appear...
despite being prominently visible whilst writing the post :/

I was referring to GeneWitch and his code and comments posted above:

Quote:

Admin
This isn't really C's fault, if anyone is to blame it's the processor designers. Whether integer division and modulo round towards zero or negative infinity is implementation defined, the language simply mirrors whatever your processor actually does to avoid costly workarounds on "odd" processors. This is (at least some of the reason) why there's a div() function, it's guaranteed to round towards zero.

Admin
Okay, I got to about value == 27, then I got bored and wandered off. Now I'm trying to catch up with my computer, which got bored somewhat earlier, but I can still see it in the distance...

Admin
IMHO, there's not much consensus on what a "non-deranged" modulo behavior should be in the case of negative dividends and divisors. C99 defines it one way, ADA another. Although, if you pronounce "%" as "remainder" instead of "modulo" as Java would have you do, then C99, Java, and ADA are all in line (ADA having both mod and rem operators). In any case, I'd much rather do an if-branch on the negative value case than modulo twice. Wouldn't that always be both more efficient and less likely to give me a headache? (okay, don't answer the latter, I already know its answer).
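To put the thread's eventual consensus in runnable form, here is a minimal Python sketch of the two formulas discussed above: the alignment helper that handles already-aligned values (the case the naive `value += step - value % step` gets wrong), and the `((m+(v%m))%m)` normalization trick from the first post. Function names are mine; note that Python's `%` already floors toward negative infinity, so the normalization is a no-op here, whereas it is the needed fix for C's truncated `%` on negative operands.

```python
def align_up(n, grain):
    """Round n up to the next multiple of grain; values that are
    already aligned are returned unchanged."""
    r = n % grain
    return n if r == 0 else n + (grain - r)

def true_mod(v, m):
    """Normalize a remainder into a true modulus in [0, m), per the
    thread's ((m+(v%m))%m) suggestion for C-style truncated %."""
    return (m + (v % m)) % m

print(align_up(23, 500))   # 500, not 1000
print(align_up(4, 4))      # 4, already aligned
print(align_up(5, 4))      # 8
print(true_mod(-7, 3))     # 2
```

The one-branch structure of `align_up` mirrors the C++ `Align` template quoted above.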
https://thedailywtf.com/articles/comments/Fun_with_Maths/1
In the opening paragraphs of his article Venema describes one of his teaching methods. Venema apparently does not provide his students with any information regarding possible alternative hypotheses to explain that data. Indeed, he displays the usual smug self-satisfied complacency of the committed Darwinist who does not believe there is any other hypothesis. Venema concludes: “I feel learning about evolution in a Christian liberal arts university is one of the very best places to do so, providing the institution treats the topics fairly.” Does Venema treat the topic fairly?

37 Replies to “Spring it on ’em and Watch the Fur Fly”

Barry, what exactly are the possible alternative hypotheses that could explain the similarity with chimpanzee genes and the chromosome fusion?

JLAfan2001 asks: "Barry, what exactly are the possible alternative hypotheses that could explain the similarity with chimpanzee genes and the chromosome fusion?" The only other explanation is that the two species were zapped in by the designer who just happened to make them genetically similar and threw in a fusion event just for laughs

Mr. Arrington, Dr. Venema’s actions concern me simply from a scientific-logical point of view. It appears he is asking his students to make a prediction from the evidence and then celebrating the confirmation of that prediction as some sort of trustworthy knowledge in favor of common descent. But logically I think that fails. He seems to be arguing like this:

Premise 1: If common descent is the reason for the chromosome similarities AND we set the chromosome sequences side by side for visual inspection, then we would predict the similarities to show transition from one species to another.
Premise 2: The sequences show transition. (Confirmed prediction)
Conclusion: That tells us common descent is the explanation.

But the only thing you can conclude from a confirmed prediction is that maybe the speculative explanation (i.e. common descent in this case) is correct.
The reason for the maybe is because this form of argument is inductive, not deductive despite the deductive appearance (if it was deductive it would be affirming the consequent). I would expect a science professor to recognize that common descent at best would be a maybe in this case. That would naturally lead to the possibility of other explanations even if other possibilities have not been put forth. Of course in the case of ID another hypothesis has been put forth (I recognize ID is not necessarily against common descent). That would be common design. @smiddyone It’s a bit more complicated than that. Re-using a reply I sent to someone else, if you don’t mind: ———————— First, the ch2 fusion is cherry picking among numerous other genetic scars that show no common ancestry. Evolutionary biologist Richard Sternberg remarked: Likewise, creation geneticist Jeffrey Tomkins has written: Second, the signature of a fusion event is weak. In making the argument for fusion, Daniel Fairbanks wrote: But telomeric DNA normally consists of thousands of repeats of a 6-base-pair sequence TTAGGG. So if two chromosomes were fused end-to-end, a huge amount of alleged telomeric DNA is missing and/or garbled. Contrary to Fairbank’s “precisely what we would expect”, others have noted that the site appears far more degenerate than expected, which is especially odd, considering that meiotic recombination is suppressed in pericentric DNA, which should cause it to mutate more slowly; meaning that 6m years isn’t enough time to account for what we see. Despite this, we may very well have once had 48 chromosomes in our past. But if you’re going to make an argument from similarity, why not cite the markers we know we share with apes, such as the ALU elements or common genes, versus one that depends on so many unknowns? Of course, similarity is no more an argument for common design than common descent. 
Moreso, fusion events serve as poor taxonomic dividers and don’t necessitate a speciation event. Some species show a very diverse range of karyotypes (number of chromosomes), with little-to-no effect on phenotype (how an organism looks and behaves): I resent any such teaching. The problem is that he is presenting evidence for similar chromosomes and for a fusion event. Is this a scientific argument for “evolution” over “design”? No, as Cornelius would say, it is a “religious” argument for “evolution” over “design”. I would suspect that someone steeped in evolutionary thinking just can not see how he is making a religious, not scientific argument. I feel sorry for Dennis. JLAfan2001 and smiddyone: Steve_Gann has made an important point. I would state it this way: The thing we can conclusively conclude by observing that two sets of chromosomes share certain similarities is that the two sets of chromosomes share those similarities. That is it. As soon as we start attaching a historical narrative about how one set of chromosomes supposedly came from the other at some point in the past, or about the alleged mechanism(s) involved in getting from chromosome Set A to Set B, or attaching a further philosophical gloss that it all happened without plan or purpose, we have gone far off the path from the evidence. Our storytelling could be correct, but it is far from conclusive that it is. And our historical narrative has its own holes and weaknesses. I think Venema’s approach is OK in terms of letting the students examine chromosomal similarities and then telling them after the fact which species were involved. I even like that approach, as it will probably get a lot of interesting questions and discussion going. But he needs to allow the questions and discussion to really continue and dive deep, not just triumphantly proclaim that this comparative analysis somehow proves anything about human evolution. 
Eric I can see your point but are they really “adding” to the evidence or just drawing a conclusion from it? It’s the same as a crime scene. The investigators didn’t see what happened but they have to come to a conclusion based on the evidence they gathered (and macro evolution does have some compelling evidence). Otherwise we would never solve any crimes. I can see the problems with the way scientists are portraying it as fact when we really haven’t observed one life-form becoming another. They should be saying this is what we think happened based on what we know but we haven’t seen it happen so we could be wrong. JoeCoder, that was a good summary of why the chromosome 2 fusion argument does not wash: Moreover if similarities are held to be proof of common descent, why are not dissimilarities held as disproof??? This following study hinted at far greater differences than 6%: Further note: Moreover the ‘anomaly’ of unique ORFan genes is found in every new genome sequenced thus far: As well, completely contrary to evolutionary thought, these ‘new’ ORFan genes are found to be just as essential as ‘old’ genes for maintaining life: This following study, in which the functional role of unique ORFan genes was analyzed for humans, the (Darwinian) researchers were ‘very shocked’ and ‘taken aback’ by what they found; What I love about Venema’s teaching is that he uses the most basic,observable,common sense evidence to prove common descent,and common descent is really all that matters. Can anyone offer an example of two blood related beings that are alive today that are not genetically similar? Can anyone offer a different interpretation of genetic similarity that we could observe in real time, right now? Denying common descent is miracle begging since most new species arising in the historical record would have had to pop into existence in a cloud of white smoke. Can anyone offer evidence of, or reproduce that? 
I think we have a more old-fashioned explanation for how biology comes into existence: born of another animal that is strikingly genetically similar to its offspring.

JDH, how can you say it's a religious argument when it is only based on the most straightforward, obvious, logical, non-miracle-begging scientific evidence? Common descent comes from the most literal translation of the natural record.

But to expand a bit on what Eric said here: i.e. even if Darwinists had been able to prove that similarity was as close as it has falsely been portrayed to be to the general public (98.5%) in the past (approx. 45 million bp), the fact is that Darwinists have no demonstrated mechanism to account for any change at all, much less the massive differences now being found. To cut to the chase, Darwinists are approaching the 'problem' from the entirely wrong 'bottom up' conceptual level. Dr. Stephen Meyer comments at the end of the preceding video. In fact, to get a proper handle on this 'problem', a 'top down' information theoretic approach needs to be adopted that considers, correctly, that information is primary and material substrates are secondary in the hierarchy of the construction and operation of the cell, instead of insisting on the insane approach of maintaining that 'bottom up' material processes constructed this unfathomable complexity we find in biological life all on their own.

smiddyone, you state: Actually smiddyone, since the entire universe 'had to pop into existence in a cloud of white smoke' in the big bang, then there is now no 'scientific' reason to prevent the abrupt appearance of fossils in the fossil record.
In fact when this argument is hashed out in its entirety it renders your base materialistic/atheistic philosophy completely absurd: Materialism simply dissolves into absurdity when pushed to extremes and certainly offers no guarantee to us for believing our perceptions and reasoning within science are trustworthy in the first place: smiddlyone, you simply have no right, as a atheist/materialist, to insist that the laws of the universe are inflexible whenever you need them to prevent God from intervening in the universe when he sees fit to be flexible when you need them to be to allow ‘random’ universes to be created. correction: when he sees fit “And to insist they are” flexible when you need them to be to allow ‘random’ universes to be created. Well at least we now we are talking about the supernatural. The Big Bang has been recorded as a real event. The sudden materialization of new animals into existence a la from The Enterprise’s transporter has not. I forgot who’s quote this is but it’s something like:”Let’s not argue over what God could or couldn’t do, but what he has done.” Barry, The problem is, Venema is only looking at similarities, he is not acknowledging differences. Can’t tell from the post what species Venema used, but let’s say they are whales and dolphins. Okay, suppose it comes out that whales are basically blowing and breeching, and otherwise doing their thousand millennial thing, and dolphins are flippering around with keyboards, sending rockets to the Moon and probes to Mars, and playing the stock market. How important will the genetic similarities seem then? These are the kinds of questions Christian Darwinists never ask. And they are precisely the questions that threaten to upend their tricky little scheme. smiddyone, Logically, any confirmed prediction has to be a maybe. That means common descent is a maybe. 
What you are saying about it being the most straight forward interpretation (and thus the correct one) depends on philosophical and theological ideas that eliminate any other competing theory that does not involve reproductive continuity through time. This is the reason Dr. Venema’s technique bother’s me. He makes it look like it is just about evidence. But it is not. It is also about the uniformity of nature and the non-intervention of God. At least I think that is where he is coming from having read other things about him. If you assume the uniformity of nature through out time there can be no other alternative than reproductive continuity. Maybe that is the case. Maybe common descent is true. But let’s not fool ourselves that common descent is the only place the evidence can take this. Why must common design be rejected based on this evidence? Do you have a scientific reason for rejecting common design? I can give you two- Common Design and convergence. @smiddyone I don’t recall seeing you here before, so if you’re new, then welcome. I hope the others here don’t run you off with poor manners :P. We need a heterogeneous group in order to have good discussions. But I would like to respond to one of your other points. You wrote: Paleontologist Donald Prothero had an interesting piece about sudden appearances and stasis in the fossil record in Skeptic Magazine several months back. The whole thing is a great read, but if it’s too long, I’ll quote you this excerpt: Emphasis mine. There are some famous transition sequences (along with their own problems; each has a lively debate), but among 1.2m animal species existing at given time, there are wide ranges of morphologies between many species (e.g. the great variety found in dogs). Yet something like only 5% of an organisms phenotype preserved in the fossil record, even in the rare case for complete skeletons. 
As Gould remarked on the most well-studied sequence, (Panda’s Thumb, 1980, p.126), “Most hominid fossils, even though they serve as a basis for endless speculation and elaborate storytelling, are fragments of jaws and scraps of skulls.” Widespread morphological convergence makes a great mess of any remaining evidence. Take this diagram of convergence between marsupial and placental mammals. Given evolution, the mouse and mole are more closely related to the humpback whales and elephants than to marsupial mice and moles, and the wolf is closer to a human or a bat than to the tasmanian wolf. In these cases, fossil evidence would provide a gradual but grossly incorrect sequence. Nice post, JoeCoder Thanks, JLAfan2001. They are drawing a conclusion, but it is not just based on the evidence, that is the key point. The conclusion is also based upon a preferred paradigm with which they approach the data, coupled with an exclusion of any possibilities that don’t fall within that preferred paradigm. Also, some important questions that arise are assumed away or simply ignored (see below). I agree and I think we are largely on the same page here. It seems however, that perhaps the caveats should be even stronger and broader than simply saying the change of species hasn’t been observed. 
The caveats should also include:

(i) other possible explanations for the data exist (design or convergence, at the very least), and therefore, not only have we not observed our preferred conclusion, but it is not the only possible conclusion that could reasonably flow from the data;

(ii) under our preferred scenario, it is unclear exactly how most of the changes in the chromosomes could have come about in the short time available and been successfully incorporated into the DNA;

(iii) we have no idea what would drive the changes (other than pure chance, with the resulting issues of genetic entropy, etc.);

(iv) it is unclear whether the observed differences in the DNA in fact produce all the differences we observe in phenotype;

(v) it is unclear what other higher-level organismal changes would have to be incorporated to take advantage of the DNA (or conversely, on what basis can we possibly say which of the differences in the DNA actually produce the significant differences we see between humans and other primates?);

(vi) the comparisons of DNA typically rely solely on basic sequence and do not take into account three-dimensional structure, placement, concatenation, highly different expressions of particular genes, etc. — we know very little about these and have no idea whether tweaking particular gene sequences would result in anything like what is being claimed, namely a chimp-like creature turning into a human;

(vii) we have no idea whether there are intermediate species and, if so, whether a series of gradual changes would have a realistic chance of enhancing survival and becoming fixed in the population at each step along the way;

and so on. I'm just brainstorming these questions as I type. If we spent time on it, we could come up with another dozen related questions that would further call into question, or at least lessen, the certainty with which the proclamation of common descent is made.
To put a blunt point on it, assume for a moment that human DNA and chimp DNA were 100% identical. What would that tell us about our descent from chimps? Nothing. It would tell us that our DNA is identical. The story — the history — is still open for discussion. Further, identical DNA would underscore (as do the popular claims of ‘98% identical,’ ‘99% identical’ and so forth) that the key differences between the species might not be in the DNA sequence.

Nice find JoeCoder, I’m surprised that Shermer let it in ‘Skeptic’. Might as well repeat this here: Here are a few neat little videos that show the surprising ‘abrupt appearance’ facet of the fossil record, i.e. the ‘trade secret’ of paleontology:

I work in software. Two similar programs could have very different code. But usually the programs are similar, because the designer (me or one of my co-workers) cut and pasted the code from one program and made the modifications necessary for it to meet the requirements of the second program. Similarity does not disprove design. More importantly, similarity isn’t exclusive to common descent.

@EvilSnack – I’m also a developer. I find patterns of homology in my code all the time. Just today I cloned an entire web page to create another very similar one. Elsewhere I have a ~500 line database wrapper that I wrote in php and subsequently translated to java, c#, and jscript/asp classic as the projects I was working on required.

@ba Ha, “nice find” you say? I’m pretty sure I got that source from you. Methinks you don’t read what you cite 😛

News @ 14 “How important will the genetic similarities seem then?” If dolphins are much smarter than whales it still would not erase the fact that they are related. This seems to be the new strategy: If you can’t disprove common descent, then try to de-emphasize it. It’s like saying humans aren’t related to chimpanzees because chimps live at the zoo and can’t drive cars.
smiddyone- no need to disprove common descent as no one has proved it. And just because you want to be related to chimps doesn’t make it so. Of related interest, Casey Luskin has this just up on ENV: @smiddyone Sorry you’re being so outnumbered here–I often face the same fate on reddit and it can be a bit overwhelming. Please disregard Joe, sometimes he takes these debates too personally. I’d like to ask a question about similarity from a different angle–what percentage of DNA similarity is the magic number to confirm that two species do or don’t share a common ancestor? Say humans and chimps, 50%, 95%, 98.5% ? Rather, a better criteria is to gauge if evolution can span the difference, given estimated population sizes, mutation rates, timespans, and an idea of the amount of genetic novelty needed. I hold that evolution could not produce a human from a chimp-like ancestor in periods even much longer than 6m years. Take HIV and p. falciparum (common human malaria), two examples from Michael Behe’s book, Edge of Evolution. In the last several decades, each have had around a million times more selection and mutation events than humans would’ve had since a chimp divergence, yet they’ve each developed 1 and 0 new protein-protein binding sites, respectively, and I think HIV may have duplicated a gene. Many. I or ba77 can cite peer-reviewed sources for each of these, but I’m trying to avoid writing a bibliography again, in order to keep the thread readable. I wish there was a way to edit posts here, I had meant to add these links for point 1, since it’s oft-disputed: 1. Sexual recombination slows macroevolution, (summary) Here is the a brief sketch of the problems I have with the teaching of evolution as is done currently. Consider the following questions 1. Do you believe in crucifixion? 2. Do you believe in The Crucifixion? They are two vastly different questions. Of course everyone who is sane believes in crucifixion. The Romans accurately described it. 
Corpses have even been found from Roman times which show evidence of having been crucified. Not everyone believes in The Crucifixion. This implies a belief that the Son of God came to earth to die on a cross for all the sins of the world. His atoning death is available for all. Two very different beliefs. Now take two other questions: 1. Do you believe in evolution? 2. Do you believe in the Theory of Evolution? Any observant person believes in evolution – that process by which populations of organisms change over time. Its obvious that adaptability is built into life. But when one considers the “Theory of Evolution” one is professing belief in a particular type of evolution. A history of life that basically says neo-Darwinism has created all forms of life by chance mutation ( plus other processes ) and natural selection. This has occurred without any input by an intelligent designer. But I contend that most people do not believe in the whole “Theory of Evolution” and its a good thing they don’t. Because it is an absurd theory that the Bible says only a fool could believe. Saying “I believe in the theory of evolution” is a logical impossibility. Its logically impossible because the “Theory of Evolution” is not a physical thing. It is a thought. It is just a bunch of words, an immaterial, abstract, object that has no single representation. There is no possible way that an unknowing bunch of chemicals can have a predisposition for an immaterial thing, independent of its representation. Now I believe whole heartedly, that a bunch of chemicals ( genes and cells ) can have a personal predisposition to all kinds of physical things. Pepper, spices, carrots, blonds…. But if materialism is true, there is no possible way for a purely physical brain to be set up to have a predisposition for an abstract, immaterial concept. 
So to say, that “I am just a bunch of chemicals that came together by random chance,” precludes me from having a preference for one immaterial thing over another. In other words, belief in the “Theory of Evolution” precludes the ability to choose to adhere to an abstract belief. Things like meaning, purpose, nobility, can not evolve. Those who believe that they can are only fooling themselves. After many years, no one has a credible model of the evolution of consciousness. Heck we don’t even know what it is. So I would think that most people believe some combination of evolution mixed with intelligent intervention. Its the only thing that makes sense, given our ability to choose to believe, advocate for, and even choose to die for an abstract concept. Once we make this admission — that really the only thing that can account for man’s ability to have a preference for one abstract thought over another is the input of intelligent design — the specific amount of intelligent intervention mixed with just letting natural processes have their way becomes a religious argument. Not a scientific one. JoeCoder, your limits to genetic variation because of sexual recombination paper goes very well with this recent article: Duplicated code is bad design. 😉 @Mung Many critical systems designed by humans will have redundancy in case the primary path fails. We see the same thing in our own duplicate genes. From The regulatory utilization of genetic redundancy through responsive backup circuits, PNAS, 2006 A recently published paper hypothesized that the number of copies of a gene controls the speed of various processes. See figure 3 and the “Gene Balance Hypothesis” section from Gene balance hypothesis: Connecting issues of dosage sensitivity across biological disciplines, PNAS 2012 JoeCoder: Explain how that works with source code. You write segments of source code that are redundant in case one of the segments fails? 
    # Tongue-in-cheek "redundancy": two copies of the same failing
    # operation that rescue into each other (Ruby's actual exception
    # class is ZeroDivisionError, not DivideByZeroError)
    def happy_path n
      n / 0
    rescue ZeroDivisionError
      sad_path n
    end

    def sad_path n
      n / 0
    rescue ZeroDivisionError
      happy_path n
    end

More or less, except you don't write it twice; it's simply deployed to more than one piece of hardware in case one fails, or an extremely rare bug that makes it past testing shows up in one. Walter Bright, a former Boeing engineer and one of my favorite programmers to follow, speaks about this frequently:

Then it's not duplicated code as I meant the term.
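The deploy-it-twice idea in the last few comments can be sketched in Python. This is illustrative only (names are mine, and a real redundant system would run the copies on separate hardware rather than in one process): the same logical operation exists in a primary and a backup form, and the caller falls back when the primary path fails.

```python
def with_fallback(primary, backup, x):
    """Try the primary implementation; on any failure, answer from
    the backup. A toy stand-in for hardware-level redundancy."""
    try:
        return primary(x)
    except Exception:
        return backup(x)

def primary(n):
    return 100 // n          # fails when n == 0

def backup(n):
    # Independent implementation that tolerates the failing input
    return 0 if n == 0 else 100 // n

print(with_fallback(primary, backup, 5))  # 20, primary path
print(with_fallback(primary, backup, 0))  # 0, backup path
```

The point of the comments above is precisely that the two implementations are *not* textual duplicates: an independent second path can survive a bug that kills the first.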
https://uncommondescent.com/intelligent-design/spring-it-on-em-and-watch-the-fur-fly/
The problem is a typical knapsack problem. The idea is that each coin can be used infinitely many times, so the inner loop of the DP runs ascending. If each coin could only be used once, the inner loop would run descending.

    import java.util.Arrays;

    public class Solution {
        public int coinChange(int[] coins, int amount) {
            int[] dp = new int[amount + 1];
            Arrays.fill(dp, Integer.MAX_VALUE);
            dp[0] = 0;
            for (int coin : coins) {
                for (int j = coin; j <= amount; j++) {
                    if (dp[j - coin] != Integer.MAX_VALUE) {
                        dp[j] = Math.min(dp[j], dp[j - coin] + 1);
                    }
                }
            }
            return dp[amount] == Integer.MAX_VALUE ? -1 : dp[amount];
        }
    }

I can't understand the relationship between this question and the knapsack problem, can you give more details about this?

This is what I get from your code: after each iteration of the outer loop, dp[] holds the minimum number of coins for each amount, using the coins considered so far. Let's take 11 and [5, 2, 1] for example. After the first iteration, over coin 5, we know which amounts can be expressed by coin 5 alone and what the minimum number is. The second iteration uses the previous result for coin 5 to determine which amounts can be expressed by combinations of 5 and 2, and the minimum number of coins needed, by Math.min(dp[j], dp[j - coin] + 1). And so on.
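To make the loop-direction point concrete, here is a Python sketch of both variants: the unbounded version mirrors the Java above (ascending inner loop, so `dp[j - coin]` may already include the current coin), while the single-use variant is the 0/1-knapsack counterpart described in the first paragraph (descending, so `dp[j - coin]` still reflects the state before this coin was considered).

```python
def coin_change_unbounded(coins, amount):
    # Each coin may be reused: iterate j ascending.
    INF = float("inf")
    dp = [0] + [INF] * amount
    for coin in coins:
        for j in range(coin, amount + 1):
            dp[j] = min(dp[j], dp[j - coin] + 1)
    return -1 if dp[amount] == INF else dp[amount]

def coin_change_single_use(coins, amount):
    # Each coin used at most once: iterate j descending.
    INF = float("inf")
    dp = [0] + [INF] * amount
    for coin in coins:
        for j in range(amount, coin - 1, -1):
            dp[j] = min(dp[j], dp[j - coin] + 1)
    return -1 if dp[amount] == INF else dp[amount]

print(coin_change_unbounded([5, 2, 1], 11))    # 3  (5 + 5 + 1)
print(coin_change_single_use([5, 2, 1], 11))   # -1 (at most 5 + 2 + 1 = 8)
```

The only difference between the two functions is the direction of the inner loop, which is exactly the distinction the answer above draws.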
https://discuss.leetcode.com/topic/57101/simplest-java-solution-beat-89
ftw - traverse (walk) a file tree #include <ftw.h> int ftw(const char *path, int (*fn)(const char *, const struct stat *ptr, int flag), int ndirs); The ftw() function recursively descends the directory hierarchy rooted in path. For each object in the hierarchy, ftw() calls the function pointed to by fn, passing it a pointer to a null-terminated character string containing the name of the object, a pointer to a stat structure containing information about the object, and an integer. Possible values of the integer are FTW_F for a file, FTW_D for a directory, FTW_DNR for a directory that cannot be read, and FTW_NS for an object on which stat could not successfully be executed. If the integer is FTW_DNR, descendants of that directory will not be processed. If the integer is FTW_NS, the stat structure will contain undefined values. An example of an object that would cause FTW_NS to be passed to the function pointed to by fn would be a file in a directory with read but without execute (search) permission. The ftw() function visits a directory before visiting any of its descendants. The ftw() function uses at most one file descriptor for each level in the tree. The argument ndirs should be in the range of 1 to {OPEN_MAX}. The tree traversal continues until the tree is exhausted, an invocation of fn returns a non-zero value, or some error is detected within ftw(). If the tree is exhausted, ftw() returns 0; if fn returns a non-zero value, ftw() stops its traversal and returns that value. If ftw() detects an error other than [EACCES] (see FTW_DNR and FTW_NS above), it returns -1 and errno is set to indicate the error. The external variable errno may contain any error value that is possible when a directory is opened or when one of the stat functions is executed on a directory or file. The ftw() function will fail if: - [EACCES] - Search permission is denied for any component of path or read permission is denied for path. - [ELOOP] - Too many symbolic links were encountered. - [ENAMETOOLONG] - The length of the path exceeds {PATH_MAX}, or a pathname component is longer than {NAME_MAX}. - [ENOENT] - A component of path does not name an existing file or path is an empty string. - [ENOTDIR] - A component of path is not a directory. The ftw() function may fail if: - [EINVAL] - The value of the ndirs argument is invalid. - [ENAMETOOLONG] - Pathname resolution of a symbolic link produced an intermediate result whose length exceeds {PATH_MAX}. In addition, if the function pointed to by fn encounters system errors, errno may be set accordingly. None. The ftw() may allocate dynamic storage during its operation. 
If ftw() is forcibly terminated, such as by longjmp() or siglongjmp() being executed by the function pointed to by fn or an interrupt routine, ftw() will not have a chance to free that storage, so it will remain permanently allocated. A safe way to handle interrupts is to store the fact that an interrupt has occurred, and arrange to have the function pointed to by fn return a non-zero value at its next invocation. None. longjmp(), lstat(), malloc(), nftw(), opendir(), siglongjmp(), stat(), <ftw.h>, <sys/stat.h>. Derived from Issue 1 of the SVID.
http://pubs.opengroup.org/onlinepubs/7908799/xsh/ftw.html
We have two Server 2008 R2 servers running DFS-R (named dfs01 and dfs02) in a 2008 R2 domain. Today I found the files on server dfs01 can not be replicated to dfs02. So I used the command

    dfsrdiag backlog /rgname:<group> /rfname:<folder> /sendingmember:dfs01 /receivingmember:dfs02

to check the backlog. After executing the command, I get the following error:

    Failed to execute GetVersionVector Method. Err: -2147217406 <0x80041002> operation Failed.
    Failed to execute GetVersionVector Method. Err: -2147217406 <0x80041002> operation Failed.

How can I resolve this?

This happens after you install hotfix 2663685. It changes the behaviour after a dirty DFSR shutdown so that there is no longer an automatic restart; instead it stays down, allowing you to do whatever backups you may need to do, then you run a WMI command as per the article to restart it. Word of warning - applying this hotfix on a cluster means it effectively isn't highly available, as a failover will leave DFSR down on the node taking over. You can adjust this by a registry setting. Personally, I'm about to undo this hotfix across our estate as it's more trouble than it's worth: DFSR falls over and doesn't come back online til we arrive in on Monday, and the backlogs just grow and grow.

The easiest way to get this up and running again is to go to your Event Viewer, under Applications and Services Log > DFS Replication. Look for the event 2213: the exact command you need to run is in there. Additionally, to revert DFS-R back to its original settings, run this:

    wmic /namespace:\\root\microsoftdfs path dfsrmachineconfig set StopReplicationOnAutoRecovery=FALSE

Since it does work on a different volume than D, I would assume it's related to the D volume, and would reformat that volume (or just delete and recreate the volume) or permanently move the replica to a different volume.
If there was truly something wrong with DFSR itself, it wouldn't work regardless of the replica or volume.
http://serverfault.com/questions/431125/windows-2008-dfs-replication-issue/431755
19 September 2011 22:01 [Source: ICIS news] HOUSTON (ICIS)--US ethylene margins continued to fall in the second week of September, dropping by 15% from a week earlier on lower spot prices and higher production costs, the ICIS margin report showed on Monday. Ethylene margins were at 26.88 cents/lb ($593/tonne, €433/tonne) in the week ended 16 September, down from 31.61 cents/lb a week earlier, using ethane as a feedstock. The drop last week followed a nearly 4% average decline in ethylene spot prices, as the monomer traded at 57.25–60.00 cents/lb for September delivery, down from 60.00–62.00 cents/lb a week earlier. Market sources continued to point to looser supply, citing a delay in a Dow Chemical cracker turnaround. Dow was expected to shut down the 610,000 tonne/year cracker. A Dow spokesperson did not respond to requests for comment. On the feedstock side, ethane prices spiked last week, rising by nearly 20% from a week earlier to 85 cents/gal, on increased demand and talk that pipeline issues were affecting ethane and propane deliveries into the Mont Belvieu area. Ethane demand also strengthened because Dow was said to have been caught short on the feedstock after pushing back its turnaround. The pipeline issues and the turnaround delay may both have played a role in pushing up ethane, a market participant said, also citing weakening co-product values as a factor keeping ethane competitive as a cracker feedstock. Ethane lost some strength toward the end of the week, ending Friday at 79 cents/gal and dropping to 76.00–76.50 cents/gal on Monday. Ethylene also posted heavy losses on Monday, trading down to 55.50 cents/lb for September delivery.
http://www.icis.com/Articles/2011/09/19/9493470/us-ethylene-margins-fall-15-on-higher-feed-lower-spot.html
Previous posts for this project: There was some nice weather in Belgium the past week, and I took the opportunity to prepare the garden for summer. Mowing the lawn, planting some herbs, cleaning the terrace, etc ... This means that I didn't make a whole lot of progress on my project this week, but it doesn't mean I didn't do anything either. For this week's update, I've been combining some components I got up and running in the previous weeks, more specifically: the Touch Board and the Raspberry Pi with LED strip. I hooked up the Touch Board via USB to the Raspberry Pi and had it send PLAYX (where X is the number of the electrode pressed) messages to the Pi over serial. Listing the tty devices, I determined the Touch Board was the "ttyACM0" device.

pi@PiDesk ~ $ ls -l /dev/tty
tty tty17 tty26 tty35 tty44 tty53 tty62 tty0 tty18 tty27 tty36 tty45 tty54 tty63 tty1 tty19 tty28 tty37 tty46 tty55 tty7 tty10 tty2 tty29 tty38 tty47 tty56 tty8 tty11 tty20 tty3 tty39 tty48 tty57 tty9 tty12 tty21 tty30 tty4 tty49 tty58 ttyACM0 tty13 tty22 tty31 tty40 tty5 tty59 ttyAMA0 tty14 tty23 tty32 tty41 tty50 tty6 ttyprintk tty15 tty24 tty33 tty42 tty51 tty60

I installed "minicom" to verify the expected serial messages were being received by the Pi.

pi@PiDesk ~ $ sudo apt-get install minicom
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed: lrzsz
The following NEW packages will be installed: lrzsz minicom
0 upgraded, 2 newly installed, 0 to remove and 22 not upgraded.
Need to get 420 kB of archives.
After this operation, 1189 kB of additional disk space will be used.
Do you want to continue [Y/n]? y
Get:1 wheezy/main lrzsz armhf 0.12.21-5 [106 kB]
Get:2 wheezy/main minicom armhf 2.6.1-1 [314 kB]
Fetched 420 kB in 2s (173 kB/s)
Selecting previously unselected package lrzsz.
(Reading database ... 78547 ...
Processing triggers for menu ...
Setting up lrzsz (0.12.21-5) ...
Setting up minicom (2.6.1-1) ...
Processing triggers for menu ...

Using the minicom command with the "s" parameter, the serial port information can be configured. I specified the correct interface and baud rate and could see the messages coming in.

pi@PiDesk ~ $ sudo minicom -s

To be able to read from the serial interface from Python, I tried to install the "python-serial" module. It turned out to be pre-installed.

pi@PiDesk ~/rpi_ws281x/python/examples $ sudo apt-get install python-serial
Reading package lists... Done
Building dependency tree
Reading state information... Done
python-serial is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 22 not upgraded.

After taking the "strandtest.py" NeoPixel strip example and modifying it to react to serial input, the PLAY messages could trigger the LED strip. It does not yet make a distinction between the buttons being pressed, but the mechanism itself is working.

pi@PiDesk ~/rpi_ws281x/python/examples $ sudo python button.py
Press Ctrl-C to quit.
PLAY0
PLAY0
PLAY0
PLAY1
PLAY4
PLAY7
PLAY0

The Python script reading the serial input and triggering the LED strip can be found here: And finally, a short demo:

At the actual date the first part of the project is closed: the machines are assembled in the box, powered, and the entire architecture has been set up about 95% in accordance with the initial design. The only addition is a fourth Pi, as the Cirrus audio board is very difficult to match with any other hardware implementation on the same device, so the Pi hosting the Cirrus will only work in conjunction with the Bitscope module via USB, used in reverse mode: I mean as an acquisition and pre-processing probe instead of in its traditional role of analog and logic analyser.
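The serial-reading idea from the Touch Board section above can be sketched as follows. This is an illustration, not the actual button.py; the parsing helper is mine, and the 9600 baud rate and device path are assumptions based on the tty listing earlier:

```python
import re

def parse_play_message(line):
    """Return the electrode number from a Touch Board 'PLAYX' message, or None."""
    match = re.fullmatch(r"PLAY(\d+)", line.strip())
    return int(match.group(1)) if match else None

# Reading from the Touch Board would look roughly like this (requires pyserial):
# import serial
# with serial.Serial("/dev/ttyACM0", 9600, timeout=1) as port:
#     while True:
#         line = port.readline().decode("ascii", "ignore")
#         electrode = parse_play_message(line)
#         if electrode is not None:
#             print("electrode", electrode)  # trigger the LED strip here
```

Keeping the parsing in a small pure function like this makes it easy to test without the hardware attached.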
As mentioned before, in this first experimental device Meditech will be powered by an external ATX source providing 3.3V, 5V and 12V, distributed with a simple set of power replication units as shown in the images below: the first shows the ATX connected to the main logic power supply switch, while the second shows the simple circuit of the power replicators distributed along the center of the box to reach the devices easily.

TODO: Analyse in-depth alternative powering options. As the high voltage power supply is external to the device, this solution can be acceptable, but in high-mobility conditions a battery-operated Meditech system should be preferred. A detail that should be considered instead is the power consumption of the entire system, also after a further power usage optimisation. The Raspberry Pi units tend to consume at least 2mA/h each, or more depending on the extra boards attached. Another factor is the operating environment: in extremely hot conditions combined with a high humidity percentage, batteries are not a reliable solution if used as the main power source of the system.

The LCD display has been adapted to show the lowest possible profile, and the electronic parts have been reassembled with flexible lightweight acrylic plastic components and super-compact micro foam to reduce the weight of the unit as much as possible. The following image shows the final aspect of the 15-inch display, which now weighs about 1 kg less than the original.

TODO: A smaller device (e.g. 7 inches) may fit in the Meditech box; this aspect has to be evaluated at the end of the project to give more portability to the entire system without penalising the shown information.

The images below show some details of the assembled display in the actual experimental version. The following image shows the assembled Meditech box as it appears when closed for transportation and open, ready to be used.
Note that an extra 12V plug is provided (to the left of the second image) to power the printer. This device can work with or without connected power, as it has its own internal battery, and is accessed via a Bluetooth wireless connection. The center of the box includes the HD (1Tb, 5 inches; it will be replaced by a 2.5-inch SSD HD, 180 Gb) and the network hub, together with the power units. Note that the cables, especially the LAN connections, will be replaced by shorter ones to reduce the wire occupation. The image above shows the internals of the devices container. It hosts the service Pi (with GPS, accelerometer, Bluetooth for printing and other support features) in the center, and the main Pi (with database, collector features, WiFi bridge, data organisation and remote feeding, and real-time clock) on the right side. The left side hosts the Pi dedicated to audio acquisition and the analog/digital readings through the Bitscope. Note that on the bottom right side there is a mechanical switch to detect when the box section is opened. This information, and other data related to the health status of the system (i.e. the internal temperature, the fan speed, etc.), are monitored by the ChipKit board (right side), which sends critical conditions and changes in the system status to the main Pi governing the entire architecture. In response to the sensor feeds, the main Pi can show alarms and warning pop-up windows with high priority, stop some functionalities, or shut down the entire system. In the Meditech container there is a variable number of devices working at at least two different voltages: 5V and 12V. In this prototype version the definitive powering system has been moved to the bottom of the priorities, due to the restricted deadlines and the need to have an experimental unit before a final decision: battery type, charging type, power consumption, etc. Just for this reason the power will be granted by a common ATX switching power supply.
This implies a couple of conditions: the main power module simply acts, with a NE555, in a similar way to the logic power switch of PCs. This is a small independent circuit that should be replaced by the future battery power control. The following images show the circuit schematics and the relative layout. Every powering unit is a short module that will fit in a plastic rail; the circuit can be positioned along the wider side of the bottom of the Meditech box for easy reachability of the powered devices. Every group is connected with the previous one and exposes all the power supply voltages and ground. The following images show the circuit schematics and the relative layout.

Starting from the selected box that will host the Meditech prototype, the first step has been to prepare the container to host the prototype components. First of all, the setting of the cooling fan. The following image shows the general idea of how to use one of the two sides of the box. The other side will contain the default probes, accessories, etc. To keep the system totally modular, every specialised device (based on the Raspberry Pi) is hosted on one of the yellow frames. The central zone will host the cooling system only, because it will contain several control components, a status display, status LEDs and so on. Every RPi unit is simple to install and replace for any reason, and to keep things simple the standard connectors will be left untouched, so that any non-expert user can repair a unit by simply replacing the entire block. A mechanical microswitch should disable the entire powering unless the top lid is closed.
The air circulation is granted by a 12 cm PWM-controlled fan, and the frames will be adapted to permit the cooling system to work properly; the fan will start when a sensor detects a critical temperature. Temperature and switch will work autonomously, controlled by a program running on the ChipKit Pi board, which also controls the LED and other display feedback (besides the LCD monitor), interacting with the RPi master that controls the entire networked system. The storage and network hub should be fixed in the center area, together with the powering system, as they will cable the entire system. The following three images show the installation of the fan and the preparation of the forced air circulation.

The prototype container arrived today, just waiting to be adapted to host the Meditech components. It is fairly compact and has sufficient space to share all the components: the power unit (in the base), and one of the two sides will contain probes and accessories. For now the decision is to postpone the question of the battery powering and charger system. Just to remember, the project will include: solar cell, external domestic power charger and car power charger. For this first version I will adopt an external power unit (maybe an ATX switching supply); then the entire system should be tested for a correct calculation of the battery power, battery charger, etc. The only limit will be to avoid AC voltage directly connected to the box. The next images will show the box open.

Since my last post I've made some good progress on setting up communication between the various Raspberry Pi computers over MQTT. I got a case in the mail for my RPi 2 and that bugger is now sitting in the living room with a network cable plugged directly into the cable router. At my desk I have the other two RPis that make up the brains of the pizza carrier. PiFace CAD is running on my old RPi Model B, and the Xtrinsic board and GPS module are on the Model B+.
Both have their own wifi dongles, the B+ is getting internet via the WiPi dongle that came with the kit. The other is using the standard dongle available from Adafruit. So far, I can successfully publish messages from the B+ computer to the server (running on the RPi 2) and I can subscribe to those messages on the model B. This was a major hurdle for me because while the libraries that are available for communicating over MQTT are dead simple to use, getting a grip on how it all works is not. At least for me, anyway. As mentioned in my previous posts, the server is running lighttpd and mosquitto. Mosquitto is the MQTT broker, it acts as a middleman between the subscriber and publisher clients. Lighttpd is a lightweight web server. The next hurdle was to get websockets enabled and working with mosquitto. It turns out that this was not possible until mid-last year. I have managed to get mosquitto working with websockets enabled and I can send messages using MQTT but I'm having some trouble getting messages sent over websockets via JavaScript. I'll have to keep working on this, there must be something in the config file that needs adjusting. Right now, my browser can connect to the broker but the broker gives me an error message: 1432607712: Socket error on client lilxbuntu, disconnecting. Not sure where the socket error is coming from, so I'll have to dig deeper! Once I get that fixed, though, I'll be in business and I can start building out the various interfaces! I don't think I mentioned this before, but initially in my proposal I had planned to 3D print the case that would hold the pizza box. I've since given this a lot of thought and I think a better idea is to modify the existing pizza delivery bags. There are a number of reasons for this, but the obvious one is that it will save me money. Furthermore, I think there are some pluses to having a soft case over a hard one. 
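For anyone else wrestling with the MQTT concepts above: the broker delivers each published message to every subscription whose topic filter matches. The wildcard rules ('+' matches one level, '#' matches the rest) can be sketched in a few lines. This is my own illustration of the matching semantics, not code from mosquitto or paho:

```python
def topic_matches(topic_filter, topic):
    """Check an MQTT topic name against a subscription filter."""
    fparts = topic_filter.split("/")
    tparts = topic.split("/")
    for i, part in enumerate(fparts):
        if part == "#":                       # multi-level wildcard: matches the rest
            return True
        if i >= len(tparts):
            return False
        if part not in ("+", tparts[i]):      # '+' matches exactly one level
            return False
    return len(fparts) == len(tparts)

print(topic_matches("pizza/+/gps", "pizza/bag1/gps"))   # True
print(topic_matches("pizza/#", "pizza/bag1/gps/lat"))   # True
print(topic_matches("pizza/+", "pizza/bag1/gps"))       # False
```

The topic names here are hypothetical; the point is that a subscriber on "pizza/#" would receive everything the delivery bag publishes.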
I kind of like the idea of modifying the bag for another reason, which is that I can dust off my sewing machine skills (and my sewing machine). Other thoughts for the project before the deadline is up: 1) Handling multiple orders for one bag (it turns out that pizza delivery bags handle at minimum two pizzas at a time). 2) Figuring out how to help the driver make speedy deliveries with PizzaPi in tow and, 3) Is there an easy and cost effective way to keep the bag heated? I was supposed to be on vacation already, but I had to stay on at the lab a few extra weeks. I'm looking forward to spending endless days working on my projects at home and sleeping in a bit. Until my next update...

References: Make your Raspberry Pi into the Ultimate IoT Hub - ThingStudio Blog; Build your own Javascript MQTT Web Application; Paho-MQTT Open Source Messaging

Previously: Sci Fi Your Pi - Prince Dakkar's patent log taking chart compass; Sci Fi Your Pi - Prince Dakkar's patent log taking chart compass - Functional Design

Route Selection and Indication

One of the main elements that makes up the design is the indication of the route that will be taken from the current location to reach the destination. This works in two parts: the selection of the route and then the indication on the map. To select the route I wanted to use some suitably dramatic wheels that turn a location roll with a selection of locations and destinations listed. These wheels will not directly drive the rolls, as this would move them too quickly, so they will be connected via a small cog on the wheel and a large cog on the roll. This will give the large number of turns needed to move the roll using the wheel. This will also add to the steampunk aesthetic by adding functional cogs to the design. The selection of the destination and start point will be handled by a wheel for each.
The rotation of the wheel will connect different circuits, which consist of strings of LEDs that will sit underneath the map to show the route to be taken. Each tube will have a connector running from one side of the roll to the other. This will connect with some brushes or sprung contacts to complete the circuit with the correct string of lights. There will probably be some resistors required to ensure this does not fry the RPi; the diagram is to indicate how the idea works rather than being an exact schematic. Essentially, each of the circular rolls shown in section has a connection that links two contacts to select the correct string of LEDs. This selects the correct circuit; then the RPi program, when triggered (through suitably dramatic methodology), will light up the string of LEDs to show the route to be taken. The plan to make this work is to have the strings of LEDs mounted flush in a bed on which the map will sit. Thus, when the LEDs are illuminated, they will show the route on the map. To make this work, some suitably bright and small LEDs will be required to make the lights look right when under the map (which will diffuse the light slightly, depending on the paper used to print the map). The LEDs for the non-selected routes should not be visible, with only the selected route illuminated. For stylistic reasons the routes may be indicated on the maps (like the trade routes shown on old types of map). The map will also be a slightly (OK, a lot) more accurate rendering and will be a world map rather than just part of it.

May 23 2015 Day 30 Sound Test

I have been working diligently on some of the sound bytes I intend to use and the necessary scripting involved in triggering the sound bytes at the appropriate time during sensor reading activity. Scripts will be commented appropriately when modified specifically for the Picorder operation. Final code and documentation will be provided later as the project nears completion.
In the accompanying video sound test, I am using the previously disassembled stereo speakers normally used with an mp3 player or smartphone. I took the system apart, mounted the speaker housing on one side of a perf board and secured it with hot glue, and mounted the accompanying circuitry on the reverse side. The original unit was powered by a 1.5 VDC power source and will later be adapted to draw its power from the Raspberry Pi power source. I supplied power to the speakers with a 1.5 VDC battery and made temporary connections to the Pi sound output jack for the video demonstration. Within the script are 4 GPIO pins set as input triggers. Each pin is held high in its idle state. When a specific GPIO pin is pulled low, the appropriate sound byte will play through the speaker. I am using temporary sound bytes from Star Trek: The Original Series for testing, and these will change to be aligned with the sensors I use later on, such as the tricorder or alert sounds. As seen in the video, as I pulled each GPIO pin low (to ground potential), that action initiated the playing of the particular sound byte identified in the script and associated with the specific pin. The momentary triggering is essential to avoid the instant replaying of the sound byte over and over until it is unintelligible, as demonstrated in the video.
Here is the basic code used for this sound test:

#!/usr/bin/env python
import os
from time import sleep
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)
GPIO.setup(18, GPIO.IN)
GPIO.setup(23, GPIO.IN)
GPIO.setup(24, GPIO.IN)
GPIO.setup(25, GPIO.IN)

while True:
    if GPIO.input(18) == False:
        os.system('mpg123 -q twohours.mp3 &')
    if GPIO.input(23) == False:
        os.system('mpg123 -q access.mp3 &')
    if GPIO.input(24) == False:
        os.system('mpg123 -q defense.mp3 &')
    if GPIO.input(25) == False:
        os.system('mpg123 -q Destruct.mp3 &')
    sleep(0.5)

Photo captions: Speaker mount; Hot glued to secure to board; L/R channels and amplifier; Amplifier power connections; Speaker powered up.

The above assembly will be positioned inside the final casement along with the Pi and appropriate sensors. The next blog post is planned to be the multiple sensor testing and scripting for the various sensors. It may also include the readout display, either as a graphic or numerical display. Michael

The project proposed has a subsystem which can use the image processing capabilities of the RPi to get commands visually, which can be transmitted to various devices over the network. In the previous post () I covered how to install OpenCV and how to take a picture with the RPi Camera. In this tutorial, I will be going through the procedure of setting up video acquisition using Python, OpenCV and the RPi Camera. There are a number of tutorials on the subject of capturing video in Python; however, this series is focused on using OpenCV on the Raspberry Pi 2 and the RPi camera. For a beginner, it can be confusing to get existing example code to run using the RPi Camera, since the basic functionality is a bit different. The RPi Camera is far more capable than an ordinary USB camera, since we can control some functionality of the camera using some functions and code, as we will see in this tutorial. In the previous post, we discussed the installation of OpenCV and the PiCamera module.
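The "momentary triggering" requirement above can be captured with a tiny edge-detection helper: fire only on the high-to-low transition instead of on the level itself. This is my own illustrative sketch, not part of the original script:

```python
class EdgeTrigger:
    """Fires a callback once per high-to-low transition, so a held
    button does not restart the sound byte on every polling pass."""

    def __init__(self, callback):
        self.callback = callback
        self.last_level = True  # pins idle high

    def update(self, level):
        if self.last_level and not level:  # falling edge only
            self.callback()
        self.last_level = level

# In the polling loop this would replace the plain level check, e.g.:
# trigger = EdgeTrigger(lambda: os.system('mpg123 -q twohours.mp3 &'))
# while True:
#     trigger.update(GPIO.input(18))
#     sleep(0.05)
```

With this in place the polling interval can be much shorter than 0.5 s without a held button replaying the clip.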
I will list out the most useful functions in the module and how to use them as follows:

1. capture(output, format=None, use_video_port=False, resize=None, splitter_port=0, **options) - In the above, the filename, output format, and image size can be configured.
2. capture_continuous(output, format=None, use_video_port=False, resize=None, splitter_port=0, burst=False, **options) - Used to capture a video stream with a specific format and size.
3. capture_sequence(outputs, format='jpeg', use_video_port=False, resize=None, splitter_port=0, burst=False, **options) - Used to capture a sequence of images, for, say, a time-lapse video.
4. record_sequence(outputs, format='h264', resize=None, splitter_port=1, **options) - Used to record a sequence of video clips of predefined length.
5. awb_gains - Used to get or set the auto white balance gains. Useful when you are trying to control the white balance in a scene.
6. awb_mode - Used to get or set the auto white balance mode. You can set it to 'off' if you face problems with image processing in a place.
7. brightness - Used to set the brightness manually.
8. contrast - Used to set the contrast manually.
9. exposure_mode - Used to adjust the exposure mode of the camera.
10. meter_mode - Retrieves or sets the metering mode of the camera. Can be set to average, spot, matrix or backlight.

The complete list is available at () but these are the ones I commonly use. Let's start by making a template for our OpenCV projects. Since OpenCV allows the option for image compression out of the box, you may be tempted to use it. It will save space if you are saving the images; HOWEVER, JPEG is a lossy compression format AND it takes processing horsepower to compress and decompress it. Instead I would recommend capturing and processing raw images, which is better in live streams. Here is the code...
# import the necessary packages
from picamera.array import PiRGBArray
from picamera import PiCamera
import time
import cv2

# initialize the camera and grab a reference to the raw camera capture
camera = PiCamera()
camera.resolution = (640, 480)
camera.framerate = 30
rawCapture = PiRGBArray(camera, size=(640, 480))
time.sleep(0.1)

# capture frames from the camera
for frame in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
    image = frame.array

    # show the frame and do stuff to it here
    cv2.imshow("Frame", image)
    key = cv2.waitKey(1) & 0xFF

    # clear the stream in preparation for the next frame
    rawCapture.truncate(0)

    # if the `q` key was pressed, break from the loop
    if key == ord("q"):
        break

The code above is a good starting point for your OpenCV projects, and I would use try/except to allow cleanup. The alternative is to use the 'with' statement, which is explained here () and is given in the ()

import time
import picamera
import picamera.array
import cv2

with picamera.PiCamera() as camera:
    # camera.start_preview()
    camera.resolution = (640, 480)
    camera.framerate = 30
    time.sleep(2)
    with picamera.array.PiRGBArray(camera) as rawCapture:
        time.sleep(0.1)
        for frame in camera.capture_continuous(rawCapture, format='bgr', use_video_port=True):
            image = frame.array
            cv2.imshow("Video Feed", image)
            # Do stuff here
            key = cv2.waitKey(1) & 0xff
            rawCapture.truncate(0)
            if key == ord("q"):
                break

I was able to get a decent output with this, and if you know of a better method, please do let me know and I will update it.

Calling all 3D printer people. This is something that you will like, and I will be trying this out myself. The concept is to take a picture every few minutes, if not seconds, and then put them all together to form a video. This allows for a "fast-forward" view of the subject, which may be a plant growing, ants building, sunrise and sunset or my favourite... a model being 3D printed.
The script is as simple as:

import time
import picamera

with picamera.PiCamera() as camera:
    camera.start_preview()
    time.sleep(2)
    for filename in camera.capture_continuous('img{counter:03d}.jpg'):
        print('Captured %s' % filename)
        time.sleep(300)  # wait 5 minutes

That's it! You can set various parameters like AWB and the duration between the images. I recommend copying this script into a folder and running it from there, so that all the files are created in that folder only. There is other stuff that you can do, like streaming the video over a network, but my interest was only to capture and process. I have presented a small segment of code that I hope will be useful to those of you starting out. In the next episode, I will be showing you how to select objects in a live stream and then track them. See ya next time!

As I mentioned in a post before, the initial idea to use a separate tablet as the display unit has been expanded and simplified, using an LCD display integrated with the device, with the advantage of using the external smartphone or tablet as the access point for tethering only. The two images below show the prototype display. It was already available, so it is sufficient for tests, but it is 4:3. The monitor used is a non-standard HDMI one; I mean that it supports HDMI input but it is in fact a 4:3 screen, so you can see the image is stretched. If the monitor is not standard HDMI (frequency etc.), there is a complete wiki page explaining how to properly set the file /boot/config.txt in accordance with the monitor characteristics. Following the clearly explained settings, it is possible to calibrate the GPU settings at boot to manage the display according to the screen size, resolution, aspect ratio and scan frequency. It is not possible to write down a tutorial, due to the wide number of options and the fact that in most cases these settings strictly depend on a certain screen's characteristics.
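When planning the interval it helps to work out the speed-up you will get. With one frame every 300 seconds played back at 25 fps (the playback rate is my assumption; the post does not state one), each second of video covers 7500 seconds of real time. A quick sketch of the arithmetic:

```python
def speedup_factor(interval_s, playback_fps):
    """Seconds of real time represented by one second of final video."""
    return interval_s * playback_fps

def video_seconds(real_seconds, interval_s, playback_fps):
    """Length of the rendered time-lapse for a given shooting period."""
    frames = real_seconds // interval_s
    return frames / playback_fps

print(speedup_factor(300, 25))            # 7500 real seconds per video second
print(video_seconds(24 * 3600, 300, 25))  # a full day collapses to about 11.5 s
```

Running the numbers first avoids discovering after a day of shooting that the result is only a few seconds long.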
In fact, it is not too risky an operation to make tests if the boot is set not to start the graphical environment immediately. If something goes wrong it is always possible to step back with the settings without problems. Just following this empirical method I have set the display by changing the following parameters (the order they appear in the config.txt file does not matter):

# Custom resolution settings
hdmi_group=2
# Set to 1024x768 60Hz
hdmi_mode=16
# Set the screen to composite PAL (maybe meaningless for the HDMI output)
sdtv_mode=2
# Screen aspect ratio (maybe meaningless for the HDMI output)
sdtv_aspect=2

The following image shows the correct proportions on the screen after reboot. Now the proportions are correct, but the screen resolution was too low, as far as I know, for this LCD. Keeping the same aspect ratio and the same refresh rate, the screen settings have been changed again to:

# Custom resolution settings
hdmi_group=2
# 1280x960 60 Hz
hdmi_mode=32

As shown in the next image, now the aspect ratio and resolution are correct (note the different size and disposition of the icons on the desktop in these last two images). All the settings tables are included in the mentioned wiki page, which is attached to this post in PDF format for any use.

My kit has arrived (yes, almost 45 days; that's a con of living in Brasil), so I will keep this updated weekly; any news in the next posts... Thanks all, and good luck to all of us.

Hi everybody. It is time for a new step in Cybernetic Interface development: audio interface configuration. Because the Raspberry Pi lacks a microphone input, which I need for my project, I had to add and configure an external USB sound card. Due to space constraints I picked the smallest I could find, a Konig 3D Sound, based on the C-Media CM108 audio controller. It worked on the first try, plugged directly into an RPi USB port or into a USB hub.
Configuration and testing steps are listed below. Check the sound card configuration:

cat /proc/asound/cards
0 [ALSA ]: bcm2835 - bcm2835 ALSA
1 [Device ]: USB-Audio - USB PnP Sound Device, C-Media Electronics Inc. USB PnP Sound Device at usb-bcm2708_usb-1.2, full speed

0d8c:013c C-Media Electronics, Inc. CM108 Audio Controller

This shows that the USB sound card is visible. Next, edit alsa-base.conf to load snd-usb-audio as the first option:

sudo nano /etc/modprobe.d/alsa-base.conf

Change the configuration to make the USB sound card the default one, from:

options snd-usb-audio index=-2

to:

options snd-usb-audio index=0

and after a sudo reboot, cat /proc/asound/cards should look like this:

0 [Device ]: USB-Audio - USB PnP Sound Device, C-Media Electronics Inc. USB PnP Sound Device at usb-bcm2708_usb-1.2, full speed
1 [ALSA ]: bcm2835 - bcm2835 ALSA

If not already installed, install alsa-base, alsa-utils and mpg321 (or mpg123, mplayer, etc.):

sudo apt-get update
sudo apt-get upgrade
sudo apt-get install alsa-base alsa-utils mpg321
sudo reboot

Next, edit /etc/asound.conf and change the playback and capture devices from "internal" to "usb". Mine looks like this:

sudo nano /etc/asound.conf

pcm.usb {
    type hw
    card Device
}
pcm.internal {
    type hw
    card ALSA
}
pcm.!default {
    type asym
    playback.pcm {
        type plug
        slave.pcm "usb"
    }
    capture.pcm {
        type plug
        slave.pcm "usb"
    }
}
ctl.!default {
    type asym
    playback.pcm {
        type plug
        slave.pcm "usb"
    }
    capture.pcm {
        type plug
        slave.pcm "usb"
    }
}

To be sure, do another reboot and proceed to test the configuration. To check the configuration I used amixer -c 0, to display the current settings.
Mine looks like this:

pi@cyberpi ~$ amixer -c 0
Simple mixer control 'Speaker',0
  Capabilities: pvolume pswitch pswitch-joined penum
  Playback channels: Front Left - Front Right
  Limits: Playback 0 - 151
  Front Left: Playback 119 [79%] [-6.06dB] [on]
  Front Right: Playback 119 [79%] [-6.06dB] [on]
Simple mixer control 'Mic',0
  Capabilities: pvolume pvolume-joined cvolume cvolume-joined pswitch pswitch-joined cswitch cswitch-joined penum
  Playback channels: Mono
  Capture channels: Mono
  Limits: Playback 0 - 127 Capture 0 - 16
  Mono: Playback 96 [76%] [17.99dB] [off] Capture 0 [0%] [0.00dB] [on]
Simple mixer control 'Auto Gain Control',0
  Capabilities: pswitch pswitch-joined penum
  Playback channels: Mono
  Mono: Playback [on]

and alsamixer -c 0, to modify the speaker and microphone levels. Using mpg321 (mpg123/mplayer/aplay/other) and a favourite test sound file, plug headphones or speakers into the external sound card output and check whether the sound is correctly played. It did:

pi@cyberpi ~$ mpg321 /home/pi/test.mp3

OK, with playback working, let's check the recording side. Plug a microphone into the USB sound card input and launch:

pi@cyberpi ~$ arecord -D plughw:0,0 -f cd ./test.wav

Use Ctrl+C to stop recording. Check the result:

pi@cyberpi ~$ mpg321 test.wav

Success. If needed, use alsamixer -c 0 to adjust the sound levels to meet your requirements. That's it. Now I have both audio playback and recording on the Raspberry Pi. The next step is to do some speech recognition and implement the audio-aided menus. All the best -=Seba=-

This week has seen more Python coding to produce menus and test out the capabilities of the PiFaceCAD, but it is not elegant enough to share just yet. I need to cobble together the service element from the Sysinfo program and the Internet Radio program so that I can get the Pi to start up the generator as a service on start-up, so that users don't need a console or TTY connection to use it.
I do have some pictures to share, though, as I have been looking at the possibility of using one of the intelligent touch screen displays from 4D Systems in Oz. I will still produce a PiFaceCAD version, even if I do keep this interface as an option. The big issue with the 4D display is the parameter passing between the display module and the Pi, so the PiFaceCAD version is likely to be less of a challenge. Here are some screenshots of the basic pages from a uLCD-43PT, which has a 480x272 display with a resistive touch screen and a Picaso processor (the uLCD-43DT with the Diablo processor is recommended for new designs). Dave Hamblin of Element 14 contacted me regarding the bits that were not delivered. It seems that, apart from the Pi A+, which I did not need, I might be getting the remaining bits at some stage. I was intending to use the Wolfson Audio card to do the voice output for the interpretation. Also, the I Ching is supposed to be a random creation based on the state of the universe at the time it is cast, so the RTC shim and GPS/accelerometer bits would give the user options of what inputs to select as seeds for the random number generator. I will be off work next week with fewer other interruptions to get in the way, so expect more progress by this time next week. Application Information ChipKit Pi Vs Arduino Pro Mini Quadcopter Assembled (You call that a Quadcopter?) QuadCop - The Control Switch Quad Cop with ChipKit Pi - An "Experience" with Innovation Required The Raspberry Pi goes for a fly! With Pi cam For the Raspberry Pi Flight System (RPFS) I want the Pi to do all the GPS manipulation, parsing and calculations. Working with the GPS is only one of several tasks the RPFS will be doing. Because the Pi 2 now has multiple cores, multi-tasking is a really good option. The Microstack GPS is connected to the Pi's serial port and spits out NMEA strings. For more information see. NMEA is a common format for exchanging GPS data.
One way to do multitasking within a single application is to use threads. On Linux there is the POSIX thread library, pthreads, which is very easy to use. I decided I would write a multi-threaded object class that can easily be reused in any project with minimal effort. To scope out what it takes to make this possible, here is what needs to be accomplished: After doing some research, I found out how to read the serial port with an interrupt, as well as a small pre-written library for parsing NMEA data. The library is called TinyGPS++ and it is actually written for the Arduino. I ported it over to the Raspberry Pi for use with C and C++. You can find the TinyGPS++ library here: I then wrapped the serial code and the TinyGPS++ code into a nice multithreaded GPS class. Here is an example of how to use my new C++ class, which consists of gps.h and gps.cpp as well as the two TinyGPS++ code files:

#include <iostream>
using namespace std;
#include "gps.h"

int main(void)
{
    GPS *gps = new GPS();
    gps->Initialize();
    gps->Start();
    while (1)
    {
        sleep(5);
        cout << gps->GetAge() << endl;
        cout << gps->GetLat() << endl;
        cout << gps->GetLong() << endl;
        cout << gps->GetAlt() << endl << endl;
    }
    delete gps;
    return 0;
}

As you can see, it is as easy as creating the GPS object, initializing it and starting it. I have implemented a few functions for testing: GetAge - how old the GPS coordinates are, in seconds. The Microstack sends new information every second, so typically the age of the coordinates is around 0.3 to 0.5 seconds. This is a good thing to check, as if the coordinates are too old you may want to wait for an update.
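TinyGPS++ does all the NMEA parsing, but the format is simple enough that the core of it can be sketched in a few lines. The following stand-alone parser is my own illustration (not TinyGPS++ code): it validates the XOR checksum of an NMEA sentence and converts the latitude/longitude fields of a $GPGGA sentence into decimal degrees:

```python
from functools import reduce

def nmea_checksum_ok(sentence):
    """Verify the XOR checksum of an NMEA sentence like '$GPGGA,...*47'."""
    body, _, checksum = sentence.strip().lstrip("$").partition("*")
    calc = reduce(lambda acc, ch: acc ^ ord(ch), body, 0)
    return int(checksum, 16) == calc

def parse_gga(sentence):
    """Extract (lat, lon) in decimal degrees from a $GPGGA sentence."""
    fields = sentence.split(",")
    def to_deg(value, hemi, deg_digits):
        # NMEA packs coordinates as ddmm.mmmm (or dddmm.mmmm for longitude)
        d = float(value[:deg_digits]) + float(value[deg_digits:]) / 60.0
        return -d if hemi in ("S", "W") else d
    lat = to_deg(fields[2], fields[3], 2)
    lon = to_deg(fields[4], fields[5], 3)
    return lat, lon

s = "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"
lat, lon = parse_gga(s)
print(nmea_checksum_ok(s))          # True
print(round(lat, 4), round(lon, 4))  # 48.1173 11.5167
```

The sentence above is the widely used GGA example fix; a real application would of course feed in whatever the serial port delivers.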
GetLat - get the current latitude GetLong - get the current longitude GetAlt - get the current altitude The reason I use functions instead of directly accessing variables is because we need thread-safe reads:

double GPS::GetLat()
{
    double l;
    readBlock = true;
    l = currentLat;
    readBlock = false;
    return l;
}

The readBlock flag is checked in the threaded update code, and as long as the flag is set to true, it won't update the variables. If anyone is interested in using my GPS object above, please let me know. I can send the code out as well as how to compile it. It is designed for the RPi and Microstack. I have some cleanup to do in the code, but I am happy to pass it along. Once I get it completed I will be putting it up on my personal blog. But if you want to get started with the GPS now with C++, here is a way to get going. Another update is that I have the ChipKit Pi working completely as a control switch. Here is a demo flight I did with the control switch reading my radio inputs and sending them to the flight controller. A small step in the right direction. I am working hard to get some auto-flight code done, and having the GPS working is a HUGE step in that direction. It doesn't look like much, but a lot is going on. The ChipKit Pi is reading the PWM signals from the Rx and then passing them to the flight controller on the quad via the software PWM library. A switch on the radio will put the Raspberry Pi into control, and my radio inputs will be ignored except to put it back into manual control at my request. It's been very windy the last two weeks, so I had to settle for a simple hover inside, a bit nerve-wracking. This is in fact the first test of this. Did it turn out OK?
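One caveat on the readBlock flag shown earlier: a plain boolean is not a real mutex, since the update thread can test the flag just before the reader sets it. The same thread-safe-read idea with an actual lock can be sketched as follows; this is my own illustration in Python for brevity, and the GpsState class and its method names are hypothetical, not the author's API:

```python
import threading

class GpsState:
    """Holds the latest GPS fix; reads and writes are serialized by a mutex."""

    def __init__(self):
        self._lock = threading.Lock()
        self._lat = 0.0
        self._lon = 0.0

    def update(self, lat, lon):
        # called from the serial-reader thread
        with self._lock:
            self._lat, self._lon = lat, lon

    def position(self):
        # called from any other thread; returns a consistent (lat, lon) pair
        with self._lock:
            return self._lat, self._lon

state = GpsState()
t = threading.Thread(target=state.update, args=(48.1173, 11.5167))
t.start()
t.join()
print(state.position())  # (48.1173, 11.5167)
```

The lock guarantees a reader can never observe a latitude from one fix paired with a longitude from the next one.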
Here is a video of the short hover flight, plus some rambling about what is going on so far. Apologies for the bad video; I just felt I needed some "proof", since the flight looks like any other flight. The Meditech architecture is based on a number of Raspberry Pis (three in the current prototype), each dedicated to managing specific tasks. These tasks are strictly related and also need to be synchronised in time: for example, the database records on the RPImaster device should carry the right timestamp according to the network node the event is collected from. Moreover, some continuous data collections come from different devices and should report the same timing with at least 1/10th of a second of precision. To these functional aspects we should add that there are conditions that do not permit the devices to stay synchronised with Internet time: poor signal, Internet connection unavailability and so on. This is the reason I have adopted internal time synchronisation using the NTP protocol. A good, not too complex explanation of how the NTP protocol works and of the difference between an NTP client and server can be found in this article (also attached as a PDF to this post). The concept is clearly explained, but as our needs are slightly different it is not to be considered a cut-and-paste document and should be interpreted. First of all, the RPIslave1 device, which is the network NTP server, hosts the PiFace real-time clock board. The internal network is not directly accessible from outside because of the RPImaster bridge, but when needed every internal device is enabled to access the Internet, with limitation to some protocols only. RPIslave1 is the only device that accesses the Internet NTP servers, when a connection is available, to update its RTC.
When no connection is available, the RTC can anyway provide the right time to synchronise all the other devices, because RPIslave1 is configured as an NTP server. Meditech internally hosts two different networks: the configuration of the NTP protocol on the RPImaster device is a bit uncommon, because it is a client of the RPIslave1 device but also a secondary server, bridging the NTP protocol between the two networks. The RPIslave2 device, connected via WiFi, can't directly access the wired Ethernet network, so its NTP protocol is configured to use the RPImaster secondary NTP server. Except on the NTP server itself, access to the Internet NTP servers has been disabled on all the other devices, resulting in a time-synchronised network based on a single primary NTP server. The NTP configuration parameters are stored in the /etc/ntp.conf configuration file, which should be edited according to the network configuration. The three configuration files are attached to this post in the ntp_conf.zip file. The next paragraphs only show the ntp.conf file parts that are meaningful for this kind of configuration. As this is the main NTP server, the default configuration of the public servers remains untouched: if the Internet connection is available, the internal date and time is updated from one of the accessible public servers. # The NTP server is configured to grant access to the Meditech internal networks so that any client can ask for NTP updates. # Internal networks for both the ethernet and WiFi lans have full access # to this server (no cryptography needed). # If you want to grant access to encrypted clients only, add "notrust" at the end of every network definition restrict 192.168.1.0 mask 255.255.255.0 restrict 192.168.5.0 mask 255.255.255.0 # Authorised clients subnets broadcast 192.168.1.255 broadcast 192.168.5.255 RPImaster is configured as both an NTP client and a (secondary) NTP server.
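As an aside on how NTP represents time: timestamps on the wire are counted from 1 January 1900, while Unix time starts in 1970, so debugging tools constantly convert between the two epochs. A minimal illustration of that conversion (my own sketch, not part of the Meditech configuration files):

```python
NTP_UNIX_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01 (UTC)

def ntp_to_unix(seconds, fraction):
    """Convert an NTP timestamp (integer seconds plus a 32-bit binary
    fraction of a second) into a Unix timestamp as a float."""
    return seconds - NTP_UNIX_OFFSET + fraction / 2**32

# The NTP epoch itself maps to Unix time -2208988800; the Unix epoch maps to 0.
print(ntp_to_unix(NTP_UNIX_OFFSET, 0))           # 0.0
print(ntp_to_unix(NTP_UNIX_OFFSET + 60, 2**31))  # 60.5
```

The 32-bit fraction gives NTP its sub-second resolution, which is what makes the 1/10th-of-a-second requirement above easy to meet on a LAN.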
For testing purposes only (this device has a 1 TB hard disk for data storage), extensive NTP logging has been enabled. # Enable this if you want statistics to be logged. # For NTP internal server testing and data logging statsdir /var/log/ntpstats/ statistics loopstats peerstats clockstats filegen loopstats file loopstats type day enable filegen peerstats file peerstats type day enable filegen clockstats file clockstats type day enable Only the RPIslave1 NTP server is enabled, while the NTP Internet pool is disabled (left commented out): # You do need to talk to an NTP server or two (or three). # Internal server server 192.168.5.1 # Internet NTP servers are disabled to manage a unique synchronized # time between the entire network RPImaster is also configured as a (secondary) NTP server to grant access to the second, WiFi network through its bridging features. # Clients from this (example!) subnet have unlimited access # This device is the network bridge so relaunch the NTP server # to the wlan0 connected devices. restrict 192.168.1.0 mask 255.255.255.0 # If you want to provide time to your local subnet, change the next line. # (Again, the address is an example only.) broadcast 192.168.1.255 The client shown below accesses the Meditech network through WiFi, so its NTP server is RPImaster. Obviously the same configuration settings (with the correct IP address) are also valid for the RPIslave1 NTP server. # Access the secondary NTP server RPImaster via the WiFi connection server 192.168.1.99 After editing the /etc/ntp.conf file, restart the NTP service on every device: $> sudo /etc/init.d/ntp restart Previous posts for this project: I: Sci Fi Your Pi: PiDesk - Guide: Controlling NeoPixels with the Raspberry Pi A+/B+ I can now proceed by defining custom animations based on triggers such as received emails or hashtags being used on twitter, for example.
Addressable LEDs or NeoPixels are typically used in combination with an Arduino or similar microcontroller, due to the timing critical signal required to control them. An SBC such as the Raspberry Pi is not suited for such realtime GPIO activities, as the Linux operating system runs other tasks in parallel. Or at least that was the case until Jeremy Garff found a way to use the DMA (Direct Memory Access) module to transfer bytes of memory between different parts of the processor, without using the CPU and thus not being interrupted by the Pi's OS. This procedure works for all Raspberry Pi models except version 2! Jeremy Garff has written a library called "rpi_ws281x", which can be found on his GitHub page:. It makes use of the Pi's BCM2835's PWM module to drive the controllable WS281X LEDs found in NeoPixel strips and rings. The folks at Adafruit have created a Python wrapper for the library along with some Python example scripts, making it look and feel like the Arduino NeoPixels library. So if you're familiar with NeoPixels on the Arduino, you should be up and running with this version in no time. To compile and install the library, follow the steps below. First, install the dependencies required to download and install the library: sudo apt-get update sudo apt-get install build-essential python-dev git scons swig Next, download the files and build the library: git clone cd rpi_ws281x scons Finally, install the Python wrapper: cd python sudo python setup.py install As you can see, these steps are very straightforward. Hooking up the NeoPixels to the Raspberry Pi is extremely easy, just make sure the power supply used is properly rated for the number of NeoPixels you intend to use. For testing, I used a 5V/4A power supply to power the Pi and the NeoPixels (12 and 60 LEDs). Make sure the ground signals of the NeoPixel strip/ring and the Raspberry Pi are connected. If they are not, the LEDs won't function properly and will light up in unpredictable patterns. 
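With the wiring done, animations are just per-pixel colour computations. The classic building block in the Adafruit examples is a "wheel" helper that maps a byte position onto the colour circle; here is a self-contained variant of that idea (my own version rather than a verbatim copy of strandtest.py, and hardware-free, so it can be tried without a strip attached):

```python
def wheel(pos):
    """Map 0-255 onto a colour transitioning red -> green -> blue -> red."""
    pos %= 256
    if pos < 85:
        return (255 - pos * 3, pos * 3, 0)
    if pos < 170:
        pos -= 85
        return (0, 255 - pos * 3, pos * 3)
    pos -= 170
    return (pos * 3, 0, 255 - pos * 3)

print(wheel(0))    # (255, 0, 0) pure red
print(wheel(85))   # (0, 255, 0) pure green
print(wheel(170))  # (0, 0, 255) pure blue
```

In a rainbow animation each pixel i of an n-pixel strip is set to wheel((i * 256 // n + frame) & 255), which is essentially what the strandtest rainbowCycle effect does.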
Even though the LED strip/ring operates at 5 V and the Pi's GPIO at 3.3 V, it appears that it is possible to drive the LEDs without having to use logic level conversion. I tested two components: both performed as expected using the sample script (strandtest.py), which I edited to configure the correct number of LEDs: # LED strip configuration: LED_COUNT = 60 LED_BRIGHTNESS = 255 # Set to 0 for darkest and 255 for brightest LED_INVERT = False # True to invert the signal (when using NPN transistor level shift) As you can see, it is also possible to edit other parameters to match the LEDs used, such as the brightness. After a full discharge, the battery used for testing was recharged for a few hours with the simple circuit mentioned in the post "Powering the camera probe", to test whether the charging circuit, experimental and very simple, may have problems. The duration was measured from power-on until one of the components presented problems due to low power. The entire system worked continuously for 63 minutes; then the camera continued to stream data, but the LCD display backlight went off and the Raspberry PI power LED (the red one) turned off. The PI temperature shown on the display never rose above 43 °C. As mentioned in previous posts, the Camera Probe (that is, RPIslave2) will include a WiFi connection. To make the board more compact and use a cheap but reliable WiFi connection, the choice was the ZD1211. In the Linux environment it is a sort of "standard", as the firmware and kernel modules supporting this device are widely available in many Linux distributions, especially for embedded Linux devices. As a matter of fact, any WiFi dongle supporting this firmware may be considered working for our environment. I have tested in the past several WiFi dongles based on the ZD1211 under Ubuntu 8 and 10, OpenWRT, LTIB and other very custom Linux builds, including a small embedded Linux device developed for Nintendo a couple of years ago.
The other advantage is that this firmware is so widespread that it is probably one of the first to be supported by any new Linux distribution; and we can without doubt consider Raspbian a recent Debian distro. Attached to this post is the working procedure to install the kernel module (loaded automatically) to get WiFi working on the Raspberry PI. To date I have tested it on all the models of PI with no problems at all. As mentioned in previous posts, the RPIslave2 device is equipped with the PI camera to capture still frames for imaging diagnostics. This probe should be easily movable outside the Meditech unit; it remains connected to the rest of the system via WiFi and should be battery-powered. The powering characteristics should be the following: the power test circuit currently uses a 7.5 V Li-Ion rechargeable battery, 1500 mAh. As shown in the image there are two parts (they will become a single, smaller circuit). The regulated 5 Vcc power is provided by an LM7805, while the battery is also connected to a simple yet small recharging device. At present the recharging circuit fulfils its function of stopping the charge as the battery gets very near to full, thanks to the final resistor, which should be calculated based on the capacity of the battery. I have considered delivering a lower voltage from the charging unit with respect to the battery and, in the case of this 1500 mAh battery, limiting the current to about 1000 mA. The recharging circuit is done for now with a couple of LM350s. I am anyway considering making the recharging unit with an op-amp (an LM358 may be fine) and a feedback with three LEDs: under charge, charged, low battery. The system has been tested for many hours and responded well.
I think some alternative solution should be adopted for the power regulator, finding a way to replace the LM7805: this component grants a perfectly stable voltage but dissipates too much power, especially when the RPIslave is running with all the features (camera, WiFi unit, image streaming etc.). Inside the Meditech architecture, the RPImaster platform is the device that has the main role of server, coordinating the system of probes, storing data, streaming and more. In a few words, the RPImaster device will perform the following tasks: Based on the Qt 5.3 environment, the RPImaster unit will also include the custom User Interface, with the option of many simplifications and more ergonomic usage (more details on the UI design and features, based on Qt5, will be discussed in further documents). The successful adoption of the Qt 5.3 environment installed on the device made it possible to improve the original UI design strategy. As a matter of fact, this means that the external tablet originally dedicated to acquiring the visual information from the Meditech unit, showing it to the local operator and, when needed, sending it to a remote support unit, has been eliminated: at this point any mobile Android device will be useful for the remote connection, sharing its Internet connection. The choice of the Qt 5.3 version was conditioned by the availability of all the libraries needed to assemble a well-working environment, which has proven stable and very responsive. The relatively limited resources of the Raspberry PI, among other factors, conditioned the choice to use the bare Qt C++ environment also for the UI graphic design. This change also has a positive impact on the costs: the RPImaster device will communicate P2P with the remote assistance unit using a set of tools provided by another Meditech device, with some modifications.
The secured communication between the two units through the Internet gives a lot of options to better secure the communication protocol while exchanging sensitive data (patient information, data analysis, location information etc.). The following video shows some calculation and graphic tests of Qt 5.3 C++ sample applications running on the Raspberry PI 2. My initial design was to build a set of robots around the Raspberry Pi and use other components in the kit to build the control systems. Since I am yet to receive the kit, I have moved on to researching the image capture and recognition part of the project. OpenCV (Open Source Computer Vision) is a library of programming functions mainly aimed at real-time computer vision, developed by the Intel Russia research center in Nizhny Novgorod, and now supported by Willow Garage and Itseez. It is free for use under the open-source BSD license. The library is cross-platform. It focuses mainly on real-time image processing. If the library finds Intel's Integrated Performance Primitives on the system, it will use these proprietary optimized routines to accelerate itself. OpenCV has gained a lot of attention over the last few years; with the advent of the RPi and other single board computers, it is now possible to have simpler image manipulation tasks executed on DIY systems. One module in my sci-fi design is to use gesture recognition, and for that I intend to use an RPi 2 dedicated to image processing and sensing. I am going to start by installing OpenCV on the RPi and then using it to simply acquire an image. I will be using C++ for the more advanced stuff, since I am more comfortable with it compared to Python, but I will do the initial stuff using Python 2.7. Let's get started... Now there are a LOT of guides out there for this, BUT I am giving you the details of "my way" of doing the install. If I make a mistake and you have a better way, please leave a comment and I will update it here.
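Before diving into the OpenCV build itself, it is worth seeing the kind of per-pixel work such a library accelerates. As a plain-Python illustration (not OpenCV code), here is a grayscale conversion using the ITU-R BT.601 luma weights, the same weights OpenCV's RGB-to-gray conversion uses:

```python
def to_grayscale(pixels):
    """Convert a list of (R, G, B) tuples to 0-255 luma values using the
    BT.601 weights: Y = 0.299*R + 0.587*G + 0.114*B."""
    return [round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in pixels]

print(to_grayscale([(255, 255, 255), (0, 0, 0), (255, 0, 0)]))  # [255, 0, 76]
```

Doing this in pure Python over a full 640x480 frame is painfully slow, which is exactly why OpenCV implements such loops in optimized native code.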
The first thing is to download the OpenCV library for Raspberry Pi and to do that we go to I chose the last stable version which was 2.4.10 and I think that should work for the most part. We also need to install some dependencies run the following commands sudo apt-get -y install build-essential cmake cmake-curses libeigen3-dev and then swig libv4l-0 libv4l-dev python-numpy libpython2.6 python-dev python2.6-dev libgtk2.0-dev Next unzip it using unzip opencv-2.4.10.zip cd opencv-2.4.10 mkdir release cd release ccmake ../ press ‘c’ to configure and toggle the options you want. I did the following. ANT_EXECUTABLE ANT_EXECUTABLE-NOTFOUND BUILD_DOCS ON BUILD_EXAMPLES ON BUILD_JASPER ON BUILD_JPEG ON BUILD_OPENEXR ON BUILD_PACKAGE ON BUILD_PERF_TESTS ON BUILD_PNG ON BUILD_SHARED_LIBS ON BUILD_TBB OFF BUILD_TESTS ON BUILD_TIFF ON BUILD_WITH_DEBUG_INFO ON BUILD_ZLIB ON BUILD_opencv_apps ON BUILD_opencv_calib3d ON BUILD_opencv_contrib ON BUILD_opencv_core ON BUILD_opencv_features2d ON BUILD_opencv_flann ON BUILD_opencv_gpu ON BUILD_opencv_highgui ON BUILD_opencv_imgproc ON BUILD_opencv_legacy ON BUILD_opencv_ml ON BUILD_opencv_nonfree ON BUILD_opencv_objdetect ON BUILD_opencv_ocl ON BUILD_opencv_photo ON BUILD_opencv_python ON BUILD_opencv_stitching ON BUILD_opencv_superres ON BUILD_opencv_ts ON BUILD_opencv_video ON BUILD_opencv_videostab ON BUILD_opencv_world OFF CLAMDBLAS_INCLUDE_DIR CLAMDBLAS_INCLUDE_DIR-NOTFOUND CLAMDBLAS_ROOT_DIR CLAMDBLAS_ROOT_DIR-NOTFOUND CLAMDFFT_INCLUDE_DIR CLAMDFFT_INCLUDE_DIR-NOTFOUND CLAMDFFT_ROOT_DIR CLAMDFFT_ROOT_DIR-NOTFOUND CMAKE_BUILD_TYPE Release CMAKE_CONFIGURATION_TYPES Debug;Release CMAKE_INSTALL_PREFIX /usr/local CMAKE_VERBOSE OFF CUDA_BUILD_CUBIN OFF CUDA_BUILD_EMULATION OFF CUDA_HOST_COMPILER /usr/bin/gcc CUDA_SDK_ROOT_DIR CUDA_SDK_ROOT_DIR-NOTFOUND CUDA_SEPARABLE_COMPILATION OFF CUDA_TOOLKIT_ROOT_DIR CUDA_TOOLKIT_ROOT_DIR-NOTFOUND CUDA_VERBOSE_BUILD OFF EIGEN_INCLUDE_PATH /usr/include/eigen3 ENABLE_NEON OFF ENABLE_NOISY_WARNINGS 
OFF ENABLE_OMIT_FRAME_POINTER ON ENABLE_PRECOMPILED_HEADERS ON ENABLE_PROFILING OFF ENABLE_SOLUTION_FOLDERS OFF ENABLE_VFPV3 OFF EXECUTABLE_OUTPUT_PATH /home/pi/opencv-2.4.8/release/bin GIGEAPI_INCLUDE_PATH GIGEAPI_INCLUDE_PATH-NOTFOUND GIGEAPI_LIBRARIES GIGEAPI_LIBRARIES-NOTFOUND INSTALL_CREATE_DISTRIB OFF INSTALL_C_EXAMPLES OFF INSTALL_PYTHON_EXAMPLES OFF INSTALL_TO_MANGLED_PATHS OFF OPENCV_CONFIG_FILE_INCLUDE_DIR /home/pi/opencv/opencv-2.4.8/release OPENCV_EXTRA_MODULES_PATH OPENCV_WARNINGS_ARE_ERRORS OFF OPENEXR_INCLUDE_PATH OPENEXR_INCLUDE_PATH-NOTFOUND PVAPI_INCLUDE_PATH PVAPI_INCLUDE_PATH-NOTFOUND PYTHON_NUMPY_INCLUDE_DIR /usr/lib/pymodules/python2.7/numpy/core/include PYTHON_PACKAGES_PATH lib/python2.7/dist-packages SPHINX_BUILD SPHINX_BUILD-NOTFOUND WITH_1394 OFF WITH_CUBLAS OFF WITH_CUDA OFF WITH_CUFFT OFF WITH_EIGEN ON WITH_FFMPEG ON WITH_GIGEAPI OFF WITH_GSTREAMER ON WITH_GTK ON WITH_JASPER ON WITH_JPEG ON WITH_LIBV4L ON WITH_NVCUVID OFF WITH_OPENCL ON WITH_OPENCLAMDBLAS ON WITH_OPENCLAMDFFT ON WITH_OPENEXR ON WITH_OPENGL ON WITH_OPENMP OFF WITH_OPENNI OFF WITH_PNG ON WITH_PVAPI ON WITH_QT OFF WITH_TBB OFF WITH_TIFF ON WITH_UNICAP OFF WITH_V4L ON WITH_XIMEA OFF WITH_XINE OFF For the most part I just left things to the default except for enabling stuff with Jpeg, Png and TBB. press c again to configure and then g to generate the make file. This should drop you back to the command prompt. Next build with make I did this whole thing on a RPi 2 with a class 10 card. Yes it makes a difference since class 10 cards have faster access rates. It took around 3.5 hours and if you happen to do something wrong, do a make clean make Lastly do a make install and then reboot. This should have you up and running. I am assuming that you are using the RPi Camera and that you have enabled the RPi Camera using raspi-config. If not, then please refer- () You need to either attach a monitor to the RPi or access it via vnc as we did in the last post. 
Start the window manager by typing startx You should now have the window system running and should be able to see the desktop. Start a new terminal and create a new folder by typing mkdir opencv_tests Next, create a new file by typing cd opencv_tests leafpad test1.py I am using Leafpad as it's just simple when using the window environment. Type the following lines:

# import the necessary packages
from picamera.array import PiRGBArray
from picamera import PiCamera
import time
import cv2

# initialize the camera and grab a reference to the raw camera capture
camera = PiCamera()
rawCapture = PiRGBArray(camera)

# allow the camera to warm up, then grab an image from the camera
time.sleep(0.1)
camera.capture(rawCapture, format="bgr")
image = rawCapture.array

# display the image on screen and wait for a keypress
cv2.imshow("Image", image)
cv2.waitKey(0)

Save the file and close Leafpad, which will drop you back to the LXTerminal. Now just type python test1.py and that should display an image in a new window. Press any key and it will close the window and return you to the command prompt. This was a lot of fun, and I think to start with you should use Python even if you have used OpenCV in C in the past. It's much simpler, and with the power of the RPi 2 the lag is almost gone. I will be implementing some gesture recognition while I wait for the kit to be delivered. See you next time... The Machine-2-Machine protocol MQTT has really forced me to rethink how to organize the whole communication process of this project. MQTT stands for Message Queue Telemetry Transport and is touted as the protocol for the developing world of IoT (Internet of Things). Developed by Andy Stanford-Clark and Arlen Nipper, MQTT has been around since 1999. I heard about it while scouring the Raspberry Pi forum for information on communicating between two Pi's over TCP/IP. Here's what the official website says about it. The central point of MQTT is the broker. The broker is like a base station for communication between sensors and subscribers. From my cursory research I've concluded that the broker really ought to be located on a computer that is robust and, well, not running around inside a pizza box.
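A big part of what makes MQTT convenient here is topic-based routing with wildcards: '+' matches exactly one topic level and '#' matches the remainder of the topic. The matching rule is simple enough to sketch; the following is my own illustration of the idea (with hypothetical PizzaPi topic names), not Mosquitto code:

```python
def topic_matches(filter_, topic):
    """Return True if an MQTT topic matches a subscription filter.

    '+' matches exactly one level; '#' matches the rest of the topic.
    """
    f_parts = filter_.split("/")
    t_parts = topic.split("/")
    for i, f in enumerate(f_parts):
        if f == "#":
            return True          # multi-level wildcard swallows the rest
        if i >= len(t_parts):
            return False         # topic ran out of levels
        if f != "+" and f != t_parts[i]:
            return False         # literal level mismatch
    return len(f_parts) == len(t_parts)

print(topic_matches("pizzapi/+/gps", "pizzapi/box1/gps"))   # True
print(topic_matches("pizzapi/#", "pizzapi/box1/battery"))   # True
print(topic_matches("pizzapi/+/gps", "pizzapi/box1/temp"))  # False
```

A subscriber on pizzapi/# would therefore receive every message the delivery box publishes, while pizzapi/+/gps narrows it to one data stream per box.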
MQTT expects connections to be unreliable at the endpoints, not as a result of the broker being down. The broker will hang onto the sensor data until connections are re-established. What this means for PizzaPi is that I need a third dedicated Raspberry Pi server that can be kept safe and sound with a reliable internet connection. There are a number of brokers available, but Mosquitto is the only open source version that I've found, and it is the one I am running. Here's my updated hardware infrastructure: the diagram shows how the various devices/users/subscribers interact with the broker and web server that will run on the kit RPi 2. The customer and pizza store will really only directly interact with the web server, while the kit RPi B+ and my original RPi B will interact with the broker. Before, I had intended to have the web server run on RPi 2, which is currently hooked up to the PiFace Control & Display 2. The two RPi's would then communicate directly over TCP, and any end users like the pizza store and customer would get information from RPi 2. Clearly, this leads to reliability problems, and only running the PiFace CAD is a waste of the upgraded memory and speed of RPi 2, hence the Pi shuffle. I'll post again soon; the little GPS light is blinking, reminding me that I need to get down to business! This is the first preview of the microscope camera that will be one of the probes connectable to the Meditech unit for micro-surface investigation, e.g. skin, particulars and so on. The video shows an example at about 200X magnification. As the magnification increases, the DOF gets shorter. The device will be protected in front by a transparent circle, and thanks to the internal LED ring light it can be put directly on the subject surface. Next step: stream the real-time images over a reliable protocol to other devices. The following video shows the microscope in action. Previous posts for this project:
Instead of using the Samba protocol, which is more resource-consuming and is only needed when network files should be shared with Windows machines or other desktops, or anyway for some specific needs, a better solution to share folders between several Linux machines is the NFS file system. When installing the server side on Raspbian, one should take into account a couple of issues that affect this Debian distro. First of all we have two options to install the server: nfs-server and nfs-kernel-server, but only the second works well in this Debian distro without issues. The Debian documentation site only says that this one is strongly suggested on the most recent distributions. The problem is that this choice deserves a warning, because the other NFS server alternative also installs on Raspbian without problems. So we proceed by installing the NFS server, the NFS common utilities and the portmapper: $>sudo apt-get install nfs-kernel-server nfs-common portmap Note: the portmap package (port mapping service) must be installed, but it is probably already present on the system; it depends on which other packages you have previously installed on your Raspberry. So don't worry if after the installation you see that the package is already installed, or if you read that there is a problem starting the NFS server. After the package installation has finished, we should configure the NFS server, instructing it how to share the folders that will be mounted by the remote Linux machines, by editing the exports file: $>sudo nano /etc/exports Every folder that should be shared with the NFS clients should be listed in this configuration file. The following configuration line is related to the case we are managing; for further details on the exports configuration file syntax, read the exports NFS configuration documentation.
/home/stream xxx.xxx.xxx.xxx/0(rw,sync,no_subtree_check,no_root_squash) Despite what I have found on the Internet in many places, it is mandatory to specify the NFS server IP and the subnet /0 (for the case of a single device sharing a folder). At this point, we should restart the NFS service with the command $>sudo service nfs-kernel-server restart Surprisingly, the server seems unable to restart, showing a message like the response listed below: $>sudo service nfs-kernel-server restart [ ok ] Stopping NFS kernel daemon: mountd nfsd. [ ok ] Unexporting directories for NFS kernel daemon.... [ ok ] Exporting directories for NFS kernel daemon.... [....] Starting NFS kernel daemon: nfsd [warn] Not starting: portmapper is not running ... (warning). Showing the RPC call status with the command $> rpcinfo -p we see something like the following: rpcinfo: can't contact portmapper: RPC: Remote system error - No such file or directory This occurs because of the issue mentioned above. The port mapping service is correctly installed on the system, but it seems that Raspbian does not start it by default during boot. The workaround to solve this issue is to explicitly add the binding of the port mapper to the boot startup sequence via the rc.d configuration, with the following command: $>sudo update-rc.d rpcbind enable && sudo update-rc.d nfs-common enable At this point, after a reboot, the system works correctly. Test by restarting the NFS service after a reboot $>sudo /etc/init.d/nfs-kernel-server restart to see the correct restart sequence notifications on the terminal: pi@RPIslave2 ~ $ Take into account that the warnings related to tcp6 and udp6 depend on the use of the IPv4-only (not IPv6) protocol in the network, so they don't affect the server functionality. For permanent mounting, the mount parameters should be added to the /etc/fstab file of the client computer. Client installation is simple, as we only need to install the portmapper and the NFS client.
Install the packages

$>sudo apt-get install nfs-common

Then create the folder where the remote mount should be mapped

$>sudo mkdir -p <client mount folder>

Now mount the remote folder (with its full path on the server) on the client's local folder.

$>sudo mount xxx.xxx.xxx.xxx:/<server full path shared folder> <client mount folder>

That's all! With the mount command you will see an output like this (showing my specific case as an example)

$>mount
/dev/root on / type ext4 (rw,noatime,data=ordered)
devtmpfs on /dev type devtmpfs (rw,relatime,size=437856k,nr_inodes=109464,mode=755)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=88432=176860k)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
/dev/mmcblk0p5 on /boot type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,errors=remount-ro)
rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
/dev/mmcblk0p3 on /media/SETTINGS_ type ext4 (rw,nosuid,nodev,relatime,data=ordered,uhelper=udisks)
192.168.5.3:/home/pi/stream on /mnt/stream type nfs4 (rw,relatime,vers=4.0,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.5.2,local_lock=none,addr=192.168.5.3)

where the last line shows the remote /home/pi/stream folder mounted on the local /mnt/stream folder. You will also see the mounted folder, which behaves, as a matter of fact, very much like a removable media, with the disk free df command, as shown in the example below.

$> df -h
Filesystem                   Size  Used Avail Use% Mounted on
rootfs                       877G  4.3G  828G   1% /
/dev/root                    877G  4.3G  828G   1% /
devtmpfs                     428M     0  428M   0% /dev
tmpfs                         87M  304K   87M   1% /run
tmpfs                        5.0M     0  5.0M   0% /run/lock
tmpfs                        173M     0  173M   0% /run/shm
/dev/mmcblk0p5                60M   15M   45M  25% /boot
/dev/mmcblk0p3                27M  442K   25M   2% /media/SETTINGS_
192.168.5.3:/home/pi/stream  5.8G  2.6G  2.9G  48% /mnt/stream

A component that is very important for my project arrived: the projector!
The very generic-looking box

Projectors can get really expensive, but as you can see I went with a budget version. For around 50 bucks you can find yourself a Chinese-made projector that does 480x320 and has a brightness of 100 lumen. If you're thinking that's pretty pathetic, I agree. I decided to spend a little extra and got myself one that did 640x480 (at only 80 lumen) for 70 bucks. This way R2-D2's projector would at least meet the VGA standard. After unboxing and inspecting it at work, one of my colleagues remarked that it made a rattling noise. Some part was loose inside the projector. He handed me a screwdriver and convinced me to take apart the projector I'd owned for all of 10 minutes.

Don't worry, I put it back together again

The rattling sound came from one of the speakers. It was supposed to be hot-glued to the case but had come loose. Opening it up really drove home how cheaply it was made. The plastic lens is something you'd find in a pair of toy binoculars. After I took it home I hooked it up to a Pi to see if they would play nice. Initially, they didn't. I was using a bargain bin HDMI cable and I just couldn't get the Pi to detect anything. I swapped it out with an AmazonBasics cable that I had a bit more confidence in, and after that it worked fine.

Hooked up to HDMI and happy

The projected image itself is very blurry, even after fiddling with the lens. I could get it into focus, but even then it wasn't really... sharp. I logged in on the Pi with ssh and set up x11vnc. This allowed me to get a better look at what was going on. Compare a screenshot of my VNC client to the actual image being projected. It's hard to believe, but that's exactly the same image being shown. The projector presents itself with 720p as its only available resolution and then squishes that widescreen image down to 640x480. The result is predictably bad. I don't really need amazing image quality for this project, but this is just awful.
If you're at all interested in buying a projector, I really can't recommend a device like this. Save yourself the headache and get something decent.

One of the most versatile probes of Meditech is the camera, which covers a meaningfully wide range of applications, shortly described below. The camera probe should be relatively small and lightweight, battery operated and, unlike the other parts of the Meditech "block" (the main case), it needs to be managed independently, possibly near the patient. In accordance with these specifications, the "camera probe" (it is not yet sure what other associated probes it can contain) should have the following characteristics. To capture images in the different conditions that are needed, raspistill has all the necessary features: shooting time settings, a single repeated image or differently named frames, frequency, duration, resolution and a lot of other controls. So a couple of commands for starting and stopping the camera have been created; these will evolve into a more complex one, controlled by the button, managing all the camera features mentioned above.
#!/bin/sh
# Start still frame and stream to the server
echo "`date +%y/%m/%d_%H:%M:%S`: stream_start" 1>>/home/pi/stream.log
# Mount remote share
sudo mount 192.168.1.99:/home/pi/stream /home/pi/stream >>/home/pi/stream.log
# Image numbering
# for test only
#raspistill -w 640 -h 480 -q 10 -o /home/pi/stream/pic%04d.jpg -tl 200 -t 9999999 -th 0:0:0 -n >>/home/pi/stream.log 2>>/dev/null &
# Fixed name
raspistill -w 640 -h 480 -q 10 -o /home/pi/stream/pic.jpg -tl 50 -t 9999999 -th 0:0:0 -n >>/home/pi/stream.log 2>>/dev/null &

#!/bin/sh
# Stop still frame
echo "`date +%y/%m/%d_%H:%M:%S`: stream_stop" 1>>/home/pi/stream.log
sudo killall raspistill >>/home/pi/stream.log 2>>/home/pi/stream.log
sudo umount /home/pi/stream

These commands append a status log to the file stream.log. Notice that before starting the camera still sequence, a remote folder is mounted: this was the problem that required most of the time to test, due to the way the streamer interprets newly added files. For a continuous still capture, to save disk space and avoid problems, the command always uses the same name for every frame, so on every new shot the same file is rewritten. A more detailed description of the file sharing strategy over the network, for both the client and the server, is discussed in the next paragraph.

The question of sharing files over the network Meditech is based on (it uses three different Raspberry PIs) has been solved using the NFS file system. The entire installation procedure is discussed in a separate post. The principle is that the RPIMaster Raspi device includes a large storage (a 1 TB USB hard disk), so one of the roles of this unit is to collect and store all the data produced for any reason by the entire system. Adopting this kind of centralised architecture makes it possible to increase the number of future optional probes without altering the system in any way.
The approach is the following:

RPIMaster is the centralised device with the large storage system
RPISlave2 is the device dedicated to managing the camera features

A note is worth making on the mjpg-streamer application. It is a lightweight Linux utility that manages the whole streaming process with several options: it can send the files to a folder, to the HTTP output and so on. The problem is in the plugin of the application that detects the images: when a new frame is detected but is then, for some reason, deleted or changed, it stops the acquisition and the streaming port becomes unresponsive. This event occurs randomly, because the streamer on RPIMaster does not need to be synched with the remote device that shares the images. So in some cases, a few milliseconds after an image has been recognised by the streamer on RPIMaster, the same image is replaced by a new frame in the RPISlave2 sequence. To avoid this problem I had to modify the plugin so that (only for testing purposes) when this event occurs it is just noted in a log file. As soon as the streamer has been definitively tested, the updated version will be shared in a post, to be available to other users with the same problem. Also in this case a simplified bash command has been created to start the streaming process when needed.
#!/bin/bash
# Start streaming
echo "`date +%y/%m/%d_%H:%M:%S`: stream_start" 1>>/home/pi/stream.log
/home/pi/mjpg-streamer/mjpg_streamer -i "/home/pi/mjpg-streamer/input_file.so -f /home/pi/stream" -o "/home/pi/mjpg-streamer/output_http.so -p 8090 -w /home/pi/mjpg-streamer/www" >>/home/pi/stream.log 2>&1 &

The following video shows how this architecture works.

I have ordered a USB to RS232 cable (USB-RS232 WE) to be able to communicate with the AB SLC PLC, and a USB to RS422 cable (USB-RS422 WE) to be able to communicate with the Fenner M-Trim controller.

FTP Communication

While waiting for the above cables to arrive, I have been working on getting the FTP communications going.

OEE data: data will be read from the PLC every minute and stored to a local file. The local file will be sent to the server every hour.

Recipe data: recipe data will be stored on the same server. Each recipe will have its own file on the server. A list of those files will be stored in a separate file with a specific name. The plan is to retrieve that single file, then iterate through its lines, downloading the recipes to local storage on the Pi.

The Twisted framework is being used to accomplish the communication in this project. The FTPClient protocol has been adapted to allow the 'APPE' instruction. LoopingCall is used to schedule the sending of the OEE data to the server every hour (3600 seconds) and the retrieval of the recipe file from the server every 24 hours. One drawback that I did find with the Twisted framework: when I manually disconnected the network cable from the Pi, it took about 20 minutes for the code to realize the connection was lost. There wasn't any lost data as far as I could see; the data was transferred when the connection was reestablished. So week 1 was a success in establishing FTP communications.
Later in the project I will make the times selectable through the PIFace controller.

class FTPClientA(FTPClient):
    """
    **************************************************************
    Protocol subclass of FTPClient to add 'APPE' support, allowing
    the ability to append data to a file. Also uses the
    connectionMade method to start the data transfer loops.
    **************************************************************
    """
    def __init__(self, username='anonymous', password='anonymous@', passive=1):
        FTPClient.__init__(self, username, password, passive)
        self.OEELoop = LoopingCall(sendOEEData, self)
        self.RecipeDownload = LoopingCall(retrieveRecipes, self)

    def connectionMade(self):
        # Start the hourly OEE upload and the daily recipe download loops
        self.OEELoop.start(3600).addErrback(fail)
        self.RecipeDownload.start(86400).addErrback(fail)

    def connectionLost(self, reason):
        """
        ****************************************************************
        Called when the connection is shut down. Clear any circular
        references here and any external references to this protocol.
        The connection has been closed.
        @type reason: L{twisted.python.failure.Failure}
        ****************************************************************
        """
        print "connection lost"
        self.OEELoop.stop()
        self.RecipeDownload.stop()

    def appendFile(self, path):
        """
        ******************************************************************
        Append to a file at the given path.
        This method issues the 'APPE' FTP command.
        @return: A tuple of two L{Deferred}s:
            - L{Deferred} L{IFinishableConsumer}. You must call the
              C{finish} method on the IFinishableConsumer when the file
              is completely transferred.
            - L{Deferred} list of control-connection responses.
        ******************************************************************
        """
        cmds = ['APPE ' + self.escapePath(path)]
        return self.sendToConnection(cmds)

    appe = appendFile


class FTPClientAFactory(ReconnectingClientFactory):
    def buildProtocol(self, addr):
        self.resetDelay()
        p = FTPClientA(username='anonymous', password='anonymous@')
        p.factory = self
        return p

    def clientConnectionLost(self, connector, reason):
        """
        *************************************************************
        Called when a connection has been lost after it was connected.
        @type reason: L{twisted.python.failure.Failure}
        *************************************************************
        """
        print 'Lost connection. Reason:', reason
        ReconnectingClientFactory.clientConnectionLost(self, connector, reason)

    def clientConnectionFailed(self, connector, reason):
        """
        *************************************************************
        Called when a connection has failed to connect.
        @type reason: L{twisted.python.failure.Failure}
        *************************************************************
        """
        print 'Connection failed. Reason:', reason
        ReconnectingClientFactory.clientConnectionFailed(self, connector, reason)


def run():
    # Get config
    config = Options()
    config.parseOptions()
    config.opts['port'] = int(config.opts['port'])
    config.opts['passive'] = int(config.opts['passive'])
    config.opts['debug'] = int(config.opts['debug'])
    # Create the client
    connector = reactor.connectTCP(config.opts['host'], config.opts['port'],
                                   FTPClientAFactory())
    reactor.run()

Hello! The parts arrived a week ago and I was glad to find all the essential pieces inside. The first step was to check the RaspberryPi_V2 and install Raspbian, 2015-02-16 at that time (I see a newer one, 2015-05-05, has come out since), and do the usual configuration: expand the FS, enable the camera, and set up the network for remote access. This was my first contact with the RaspberryPi_V2 and I'm pleased with it. The improved CPU, the Micro-SD slot and the 4 USB ports are so far the most important points for me.
Wireless Network Adapter

I'll use the RaspberryPi headless, so I need a wireless network adapter connected via USB. For this purpose I'm using the included WiPi, which performed very well. I have tried a handful of wireless USB adapters so far and I can say the WiPi is the most reliable and trouble-free of them. It really worked out of the box, the connection is very stable, and it worked connected directly to the RaspberryPi_V2 and to external USB hubs, both powered and not powered.

Real time clock

Because the RaspberryPi doesn't have a Real Time Clock (RTC), and by design my project is not always connected to a network/internet, I need an external RTC. In this case I use the provided PiFace Real Time Clock. Installation and configuration were pretty easy. One thing I had to do before using the official PiFaceRTC instructions was to enable I2C from raspi-config. The configuration steps are below:

- install a CR1220 battery on the PiFace Real Time Clock
- install the PiFace Real Time Clock module on the RaspberryPi
- sudo raspi-config -> select "Advanced Options" -> Enable I2C -> Finish -> Reboot
- cd /home/pi
- with the RaspberryPi connected to the internet -> wget
- chmod +x installpifacerealtimeclock.sh
- sudo ./installpifacerealtimeclock.sh
- sudo reboot
- set the current date/time -> sudo date -s "1 MAY 2015 10:10:30"
- with the RaspberryPi disconnected from the network to avoid an NTP time update, leave the RaspberryPi powered off for a couple of minutes and then turn it back on. Now check if the date and time are correct; in my case they were.

That's it, I now have a running onboard RTC. One thing I don't like about this board is the way it is attached to the RaspberryPi. Sometimes the RaspberryPi pins did not make good contact with the metallized holes of the RTC board, and the date command then returned a wrong time. Bending the RaspberryPi pins here and there improved the connection, but I believe I'll have to change the way the boards are connected.
More about PiFaceRTC can be found on the following links:

Input and Display

One of the important ways I send commands to and receive feedback from the RPi is the PiFace Control & Display (PiFaceCAD) module. Installation and configuration were straightforward using the official instructions, no tricks involved. I planned from the beginning to use Python for this project, and PiFaceCAD has support for both Python 2.7 and Python 3. For testing purposes I performed the following steps:

- mounted the PiFaceCAD on the RPi, as described in the instructions
- enabled the SPI port -> sudo raspi-config -> select "Advanced Options" -> Enable SPI -> Finish -> Reboot
- sudo apt-get install python-pifacecad
- ran the test application -> python /usr/share/doc/python-pifacecad/examples/sysinfo.py -> Success. Information about IP, temperature and memory load is shown.

PiFaceCAD can be controlled with an infrared remote controller, but I will skip this step for now, as I don't need it for the moment. After successful installation and configuration of the PiFaceCAD module, I started to write the code for the control menu. I hope to have it ready in the next couple of weeks. So far I have no cons about this module; everything worked fine from the beginning, although it feels a little bulky as it is and will most likely undergo some surgery on the display side to lower its profile. More about PiFaceCAD can be found on the following links:

Well, so far I have the RaspberryPi_V2 + RTC + PiFaceCAD working well together. In the next post I'll add more components to this core. May the Force be with you.

Application Information
ChipKit Pi Vs Arduino Pro Mini
Quadcopter Assembled (You call that a Quadcopter?)
QuadCop - The Control Switch
Quad Cop with ChipKit Pi - An "Experience" with Innovation Required

I rigged up my Raspberry Pi to my quadcopter and wrote a quick script so I can push a button to start and stop the camera. The camera has an LED on it, so it works great to know it is recording.
I haven't flown anything RC for about 6 months, just coming out of winter, so it's not the most glamorous flight, but I wanted to see how well the camera works. There was a 30 mph gusting wind, and it's been like that all week, so the wind was a major problem during this flight. It did, however, make me realize I may need a few adjustments to my control protocol. There are some skips in the video; I am not sure if that happened during recording or during the conversion process. I'll figure that out later. The camera output is in raw H264 format and you can use MP4Box (install it with apt-get) to convert it to mp4 format for playback on a Windows machine and uploading to YouTube. I did the conversion on the Raspberry Pi itself. Oh, BTW, I got the GPS working too without the main board, which will be nice since I will have the ChipKit Pi installed. I would still like the main board for the accelerometer though. Pics and vid below!

Edit: For clarification, I am flying this quadcopter manually; it is not flying autonomously. The autopilot is still under development. I just wanted to try out the quadcopter and the camera to ensure everything is working.

Here is the script I used to turn on the camera. Right now it is connected to a button, but going forward it will be connected to the RPi2 via GPIO. The RPi2 will set a pin HIGH and that will turn the camera on; then it goes back to LOW. Setting it HIGH again will turn the camera off. The output filename is set with the date and time so each capture is in its own file.
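The MP4Box conversion mentioned above can be wrapped in a small helper. This is just a sketch (the function names and the fps default are my own; MP4Box itself comes from the gpac package):

```python
import subprocess

def h264_to_mp4_cmd(src, dst, fps=30):
    # Build the MP4Box command line: -fps declares the frame rate of the
    # raw H264 stream, -add imports it as a track into the MP4 container.
    return ["MP4Box", "-fps", str(fps), "-add", src, dst]

def h264_to_mp4(src, dst, fps=30):
    # Run the conversion (requires MP4Box: sudo apt-get install gpac)
    subprocess.check_call(h264_to_mp4_cmd(src, dst, fps))
```

The resulting mp4 plays back on Windows and uploads to YouTube without further processing.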
import RPi.GPIO as GPIO
import os
import time

GPIO.setwarnings(False)
GPIO.setmode(GPIO.BCM)
GPIO.setup(18, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)

try:
    while True:
        # First button press starts recording into a timestamped file
        GPIO.wait_for_edge(18, GPIO.RISING)
        print("Recording Video")
        filename1 = time.strftime("%Y%m%d-%H%M%S") + '.h264'
        os.system('raspivid -t 99999999 -o /home/pi/vids/' + filename1 + ' &')
        time.sleep(1)
        # Second press stops it
        GPIO.wait_for_edge(18, GPIO.RISING)
        print("Stopped")
        os.system('pkill raspivid')
        time.sleep(1)
finally:
    GPIO.cleanup()

Might want to have a puke bag ready. The flight starts at around 30 seconds into the video.

Previously: Sci Fi Your Pi - Prince Dakkar's patent log taking chart compass

To realise my initial ideas for the project I have started work on designing the functionality of the device. This really divides into three main function sets: indicating the user's current position, setting the route to be navigated, and indicating the route and direction of travel.

Indicating current position

The device will indicate the user's position on a map. The position will be indicated by the intersection of two bars that will move over the inbuilt map (probably with some sort of elaborate decoration at the intersection). To make this happen, GPS data will be taken from the GPS module and then converted to set the correct distance to move across the map. Each bar will move independently, with a chain or belt at one end and some form of free-running support at the other. There will be one bar for longitude and one for latitude, each having its position controlled by the Raspberry Pi driving stepper motors or servos. The main challenge is in ensuring the position shown on the map is the correct position. In my initial design I was planning to use a flat stylised map to fit in with the steampunk style. These maps are often non-linear in their projection, meaning that the bars would have to move different distances depending on the position on the earth.
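By contrast, with a linearly-projected (equirectangular) map the conversion from a GPS fix to bar positions is a direct proportion. A sketch of that maths in Python (the function and the whole-world map bounds are my own illustration, not part of the build):

```python
def gps_to_map(lat, lon, map_w_mm, map_h_mm,
               lat_top=90.0, lat_bottom=-90.0,
               lon_left=-180.0, lon_right=180.0):
    """Convert a GPS fix to (x, y) offsets in mm from the map's top-left
    corner, assuming an equirectangular (linear) projection."""
    x = (lon - lon_left) / (lon_right - lon_left) * map_w_mm
    y = (lat_top - lat) / (lat_top - lat_bottom) * map_h_mm
    return x, y

# Example: a 400 x 200 mm whole-world map, fix at Greenwich (51.48 N, 0 E)
x, y = gps_to_map(51.48, 0.0, 400, 200)
```

Each offset could then be turned into a stepper-motor step count by dividing by the bar's travel per step.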
Such stylised maps are also sometimes interrupted, which would add a further layer of complexity in managing the physical position of the bars to indicate the actual position on the map. Another solution is to use a linear projection, which means the movement of the bars will directly translate to changes in longitude/latitude. These maps may be less pretty but could make implementation easier. It would be possible to use a non-linearly projected map, but to make the position accurate I will need to 'do some maths' to convert the changes in GPS position to changes in position on the map. For this to work, the map will need to use a known projection and I will need to understand the conversion factor involved and how it changes as the position moves away from the equator. It could be possible to manage this by using a globe rather than a flat map, but that could prove rather difficult to use for actual navigation. This may not be a problem, as the device could be designed to be mounted permanently on an adventurer's craft rather than something that is carried around. This is not how I originally planned the device, but it could look really interesting implemented this way. Once this is overcome, it is then a relatively simple task to get the bars to intersect at that position by moving them a set distance from the initial reference point of the map. If map projection is of interest to you, then the video below has a comprehensive explanation of the idea:

Route Setting

Knowing one's present position is only part of the challenge. It is also important to know how to get to the next location for your quest or adventure. The first step of this process is creating the ability to set the location of the destination (and the current location). To fit with the overall style, this needs to be suitably dramatic, so there will be no screen or keyboard input to allow a postcode or address to be entered.
Most of the adventures seen in the books and films of this genre have a limited number of destinations (the centre of the earth could be a bit difficult to map, but most others should be OK). So at the moment I am thinking of selecting the destination from a list (using some form of scrolling wheel?). For the start location I could either use the current GPS position, or add to the theatrical nature of the device by having it selected in a similar manner as well. To input the information into the device, one option is to connect different pins for each destination (with the other pin requirements this could leave a very small list of destinations) and use the Raspberry Pi to store the information. The second option I am considering is to just use the destination and current-location selections to connect different circuits to select the routes. For this option there will need to be some way of passing the destination to the Raspberry Pi to enable some of the functionality described below.

Indicating the Route and Direction of Travel

Once the destination (and current location) has been selected, the gentleman (or lady) adventurer now needs to know which way to go to get there. I am intending to have two parts to this: an overall route indicator and a direction-of-travel arrow. The overall route indicator will most likely be a string of LEDs from the current location to the destination. This could be managed by using a set number of strings between the set destinations and locations, or could use a matrix of LEDs to allow more flexibility (it partly depends on how the location/destination is selected). This could either be controlled using the Raspberry Pi, or use the selections made for location/destination to connect different circuits of LEDs to create the route. The effect is intended to be similar to the cinematic "travel by map" sequences satirised in the below Muppets clip.
It will probably be yellow dots rather than a line, but having them flash in sequence is the sort of thing I have in mind. The direction-of-travel arrow will indicate which direction to move from the current location to reach the target destination. The intention is to use the current GPS position and the destination GPS co-ordinates to show which way to move. This is intended to be a group of LEDs under a decorative compass-style cutout. The Raspberry Pi will compare the co-ordinates and then light the correct LED (or LEDs) to indicate in which direction the required destination lies. I would like to have the main cardinal points (N, E, S, W) to start with, and add the ordinals (NE, NW, SE, SW) if possible. This will indicate a direction "as the crow (or airship) flies", which is entirely appropriate for an adventurer who will not be constrained by such boring constructs as roads or shipping lanes. I intend to test each of these as a separate function, then combine them all and construct a suitable enclosure. I will also be adding extra parts to enhance the theatrical nature of the device. These will most likely take the form of large handles or cog wheels that will initiate functions or control the way the device works.

Hello, everyone! Sorry I have not posted much recently. I would be remiss in my contestant responsibilities if I did not keep up with my updates and give a quick heads-up on how things are going with PizzaPi! Last week I started my 10-week full-time internship at the lab I work at, and it's been a bit of a challenge getting used to working all those hours! I'm actually spreading these 10 weeks over a 20-week period, so in a few weeks I'll be off again and I'll be more active around here. Today I tried to rebuild my setup and make sure everything is still in working order. All the parts are working, so that's definitely good.
I installed a web server on the B+ and I'm investigating the use of MQTT to handle the transfer of sensor data to the other Raspberry Pi. It's touted as the protocol for the Internet of Things. I've never used it before, but I'm going to use it for this project. Here are some links for more background:

MQTT 101 - How to get started with the lightweight IoT protocol
An Open Source MQTT v3.1 Broker

I'll write more about how I set it up between the two Pis later. Sorry I've been silent all week! I'm still in the game, just trying to do a good job at work, too! Oh, and what I do at the lab is mostly computer science. I'm developing a web database to house information and analysis of samples. I'd like to move more into the robotics side of the lab, but I'll have to do some intense schmoozing this summer to make that happen! More later!

Hello all! I've been busy getting everything connected and initialized for the build. I successfully connected the Gertbot, MEMS board, and Pi Camera, and I've got them communicating through a Pi Rack on the B+ (as the MEMS refuses to work with the Pi 2). I have noticed that whenever the Gertbot is driving my test motor, the MEMS readings really wig out. Ultimately I think I will have both Pis running in the project: a navigator and a driver. I'll have the B+ connected to the MEMS board, Pi Camera and GPS modules (the latter of which is still set to arrive) to work on navigation, while the 2 will be running the motors and mechanicals. I haven't figured out how I want to approach servo control yet - maybe the ChipKit? Anyway, here's a picture of my little leaning tower of testing. Ultimately I'll mount everything securely and use ribbon cables to connect.

Finally getting to grips with using Python and the PiFaceCAD and accessing the Pi using SSH/PuTTY.
But I'm having less success writing blog posts with all the fancy formatting that others are using, probably because I am writing them late at night as a summary of the previous day or so's activity. Also, this blog engine has not been programmed to autocorrect my dyslexic fingers. I'm on the road this weekend and most of next week, and I haven't quite got the project to the point where I can cart it around and remote into it from my laptop. So that means spending a bit of time writing the Python 3 modules that will do all the good stuff I need the Pi to do when I get back home. Here's a summary of what has happened so far. Key points for this week are:

No pix this time, sorry.

After probably having broken my Pi camera last week, I tried to revive it, but to no avail. At this point it's probably safe to call it a lost cause. I wasn't too worried about it. I have 2 USB webcams lying around and I figured I might be able to just use one of those instead. In fact, these USB webcams have built-in microphones. In the absence of the Cirrus Logic audio card, that would really come in handy!

Meet Laurel and Hardy

The left one is a Microsoft LifeCam VX-3000, the right one is an MSI StarCam 370i. They might look like a wacky duo, but both have served me well and are fully functional webcams. That being said, I can't seem to make them work on the Pi. They are both correctly detected when I connect them.
Here's dmesg and lsusb for the MSI camera: [ 252.049044] usb 1-1.2: new full-speed USB device number 6 using dwc_otg [ 252.151553] usb 1-1.2: New USB device found, idVendor=0c45, idProduct=60fc [ 252.151579] usb 1-1.2: New USB device strings: Mfr=0, Product=1, SerialNumber=0 [ 252.151597] usb 1-1.2: Product: USB camera [ 252.205066] gspca_main: v2.14.0 registered [ 252.211258] gspca_main: sonixj-2.14.0 probing 0c45:60fc [ 252.214266] input: sonixj as /devices/platform/bcm2708_usb/usb1/1-1/1-1.2/input/input2 [ 252.215734] usbcore: registered new interface driver sonixj [ 252.302998] usbcore: registered new interface driver snd-usb-audio pi@raspberrypi ~ $ lsusb Bus 001 Device 006: ID 0c45:60fc Microdia PC Camera with Mic (SN9C105) And Microsoft: [ 417.699011] usb 1-1.2: new full-speed USB device number 7 using dwc_otg [ 417.801644] usb 1-1.2: New USB device found, idVendor=045e, idProduct=00f5 [ 417.801672] usb 1-1.2: New USB device strings: Mfr=0, Product=1, SerialNumber=0 [ 417.801690] usb 1-1.2: Product: USB camera [ 417.803249] gspca_main: sonixj-2.14.0 probing 045e:00f5 [ 417.806077] input: sonixj as /devices/platform/bcm2708_usb/usb1/1-1/1-1.2/input/input3 pi@raspberrypi ~ $ lsusb Bus 001 Device 007: ID 045e:00f5 Microsoft Corp. LifeCam VX-3000 The drivers appear to be correctly loaded and Iceweasel shows that both the camera and the built-in microphone are available. 
why yes, I would like that

As soon as I try to actually use either camera though, no video or audio comes through and I see hundreds of these errors in dmesg:

[12380.380712] gspca_main: ISOC data error: [8] len=0, status=-71
[12380.380721] gspca_main: ISOC data error: [9] len=0, status=-71
[12380.380730] gspca_main: ISOC data error: [10] len=0, status=-71
[12380.380738] gspca_main: ISOC data error: [11] len=0, status=-71
[12380.380747] gspca_main: ISOC data error: [12] len=0, status=-71

...and these:

[12452.042127] sonixj 1-1.2:1.0: URB error -71, resubmitting
[12452.170150] sonixj 1-1.2:1.0: URB error -71, resubmitting
[12452.298154] sonixj 1-1.2:1.0: URB error -71, resubmitting
[12452.426173] sonixj 1-1.2:1.0: URB error -71, resubmitting
[12452.554196] sonixj 1-1.2:1.0: URB error -71, resubmitting
[12452.682204] sonixj 1-1.2:1.0: URB error -71, resubmitting
[12452.810219] sonixj 1-1.2:1.0: URB error -71, resubmitting
[12452.938235] sonixj 1-1.2:1.0: URB error -71, resubmitting

Searching the web for what that error -71 means led me to this Debian bug report. The issue appears to be very generic and I don't think I could get any webcam to work on Raspbian right now. It's really too bad, but for the time being I'm going to give up on getting a camera to work with the Raspberry Pi.
The software solution, creating a bridge in the main device able to manage the internal network and the external WiFi connection, convinced me that this was probably the best way to deliver all the needed features. The inspiration for this solution, after many tests and after discarding other more complex and less performant variants, comes from an article on the hackhappy.org site. Looking at it in detail, the procedure is fairly simple and is covered by a few steps:

apt-get -y install isc-dhcp-server iptables

In this case the DHCP server is not an essential element, as all the connected RPis have static IP addresses, but it will be useful for further - possible - external units connected to the system.
The reason for this documentation note is to underline the elements that should be taken into account in the project with respect to some connection strategies, i.e. the adoption of Bluetooth for printing and - where possible - for some probes, and WiFi connecting the diagnostic part with the display control. I have tried to keep this document as short as possible, covering the details with some attached PDFs and links to specialised sites. A general-interest document is Know your regulations before you design medical electronics (see the attached document). Bluetooth is well suited and, with a few precautions, is one of the better ways to connect small medical devices, especially probes, to the controlling parts. It is continuously growing and is well accepted as it involves very few risks for the patient: this last point deserves special attention when operating in conditions where the patient has an unknown anamnesis. For a general view of the current state of the art, see the attached document Medical device connectivity from Wikipedia. In detail, there is an interesting piece about the growing use of Bluetooth technology from the Bluetooth SIG (Special Interest Group), in the two-part article published by Bluetooth.com.
Links: Part one & Part two (documents are also attached to this post). In the second part of the article mentioned above there is also a note about the problems related to the security of the communication, involving potential risks for patient privacy. The attached document Securing Legacy Mobile Medical Devices analyses this aspect in depth. In Meditech I have treated this potential risk as something that must be taken into account in all the communication aspects adopted in the device components. While in some cases BLE, the Bluetooth Low Energy technology, is not yet reliable and can't be applied due to its reduced speed, there are a lot of cases where it can be the solution, as a really good alternative to the traditional, more power-hungry Bluetooth 4.0. In the initial project definitions I considered the adoption of BLE, which I had already used in a health-fitness development during the second half of 2014, as a real improvement option at least where possible. In fact it will not be present in the prototype, because of the short deadline, but it will be part of the production version of the same project. More about Bluetooth adoption in the health and wellness medical electronic devices market can be found in this other article from Bluetooth.com. Despite what is claimed on many social sites, the adoption of WiFi wireless communication, also in this case with some precautions, is one of the fastest growing technologies in medical structures. A good explanation of the low risk and of the advantages, pros and cons, can be found in the attached article Building a Reliable Wireless Medical Device Network.

In the last post, I described the hardware part for our robot, which uses the Seeed Studios GrovePi+ since I am yet to receive my kit. In this post, I talk about the software part and making the robot move. Let's see what we can do.
I have already mentioned that there are lots of posts on how to install Raspbian, and my tweets on the install are available at . Let's move on to the more difficult stuff. I did a related post on writing Python scripts at . Python usually comes preinstalled with Raspbian and you can verify it by typing python --version at the terminal. Unlike my previous tutorials, we will work in the GUI and create a little GUI of our own. Let's set up remote access to the RPi's windowing environment. VNC (Virtual Network Computing) is one way we can control and monitor a computer's desktop from another computer over a network, which in our case is going to be useful for wireless remote teleoperation of the robot and basic control of the Raspberry Pi. I am assuming that we have set up a static IP for the RPi and that it is connected to our local network. On the RPi we need to install tightvnc by running the command

sudo apt-get install tightvncserver

Once it is done, we move on to starting a server by issuing the command

vncserver :1 -geometry 1280x800 -depth 16 -pixelformat rgb565

It should ask you to enter a password which will be used for remote access. If not, then run the following command

vncpasswd

After the password is set it's time to log in to the server from another computer. If you are running Windows or Linux, download the appropriate version of UltraVNC. On Mac OS, use the Screen Sharing app. The next thing to do is connect to the RPi and start writing some Python scripts. There are a number of ways you can control motors from an RPi, but I have chosen to use the Seeed Studios Grove Pi+ for this particular bot. In order to use it, we need to download some ready-made scripts to test things out. Go to and download the zip file. There is a software folder which has not only Python but also Node.js, C and shell example code. You may choose instead to employ an Arduino, or even connect a motor driver directly, in which case you will have to write your own functions for movement.
In another post, I will be using the Gertboard to control some stepper/servo motors, but this time it's gonna be the Grove Pi+ and friends. I am not providing a tutorial but instead a step-by-step description of what I did and usually do when I come across a new platform. This should help you understand things a bit better. By default the I2C and SPI interfaces on the RPi are not enabled. We need to make some changes. First type the following command in your command prompt

sudo raspi-config

This will start the utility as shown below. Go to Advanced Options -> I2C -> Yes. The screen will ask if you want the interface to be enabled: select "Yes", then "Ok". The screen will ask if you want the module to be loaded by default: select "Yes". Repeat the same for the SPI module. In addition to this, we need to edit the modules file. Execute the command

sudo nano /etc/modules

and it will open the modules file in the editor. Add the following lines to the end:

i2c-bcm2708
i2c-dev

Use Ctrl-X, Y and Enter to save the file and exit. Reboot. You should now have the modules enabled; to check, run the following command

lsmod | grep i2c_

This should list the I2C modules, and the presence of i2c_bcm2708 in the list will indicate that all is as it should be. Great, so now we have I2C and SPI all set up and we can move on to testing the motors. For details on I2C refer to . The motor driver in question looks something like the image shown below.
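Before wiring anything up, it can also be handy to confirm from Python that the Grove board actually answers on the bus. The sketch below is only an illustration of the idea: the scan logic takes a read function as a parameter so it can be tried anywhere, and on the Pi you would pass in smbus.SMBus(1).read_byte (bus 0 on the very first board revision).

```python
def scan_i2c(read_byte, addresses=range(0x03, 0x78)):
    """Probe each address with the supplied read function.

    read_byte(addr) is expected to raise IOError/OSError when no
    device acknowledges at that address (as the smbus module does).
    Returns the list of addresses that responded.
    """
    found = []
    for addr in addresses:
        try:
            read_byte(addr)
            found.append(addr)
        except (IOError, OSError):
            pass
    return found

# On the Pi itself (assuming smbus is installed and I2C is enabled):
#   import smbus
#   bus = smbus.SMBus(1)
#   print([hex(a) for a in scan_i2c(bus.read_byte)])
```

The command-line equivalent is i2cdetect -y 1 from the i2c-tools package, which prints the same information as a grid.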
In this case, we want to test out the motors first, and for that the Python script is as follows. The I2C helper functions below follow the Seeed example code; the driver address 0x0f and the registers 0x82 (speed) and 0xaa (direction) should be checked against your own board:

#!/usr/bin/env python
# Description: Grove Motor Drive via I2C
# Author : Inderpreet Singh
import smbus
import time
import RPi.GPIO as GPIO

bus = smbus.SMBus(1)        # I2C bus 1 on current RPi models
MOTOR_ADDR = 0x0f           # default Grove I2C motor driver address

def MotorSpeedSetAB(speed_a, speed_b):
    # 0x82 is the speed register in the Seeed examples
    bus.write_i2c_block_data(MOTOR_ADDR, 0x82, [speed_a, speed_b])

def MotorDirectionSet(direction):
    # 0xaa is the direction register in the Seeed examples
    bus.write_i2c_block_data(MOTOR_ADDR, 0xaa, [direction, 0])

# Things to do once
time.sleep(1.0)
MotorSpeedSetAB(250, 250)
time.sleep(1)

# The looping code here
try:
    while True:
        MotorDirectionSet(0b00001010)   # both motors one way
        time.sleep(5)
        MotorDirectionSet(0b00000000)   # stop
        time.sleep(2)
        MotorDirectionSet(0b00000101)   # both motors the other way
        time.sleep(5)
        MotorDirectionSet(0b00000000)
        time.sleep(2)
except:
    print('Something went wrong or you pressed Ctrl+C')
finally:
    print('Cleaning up things...')
    GPIO.cleanup()

This works for me and the motors move forward and then backwards as they should. I need a better battery though. You can replace the functions with your own speed and direction control. I wanted to upload a video but have been unable to... yet. Let's have some fun with this one. The world of windows has shiny buttons and stuff, and you can use the mouse to interact with objects. We need a graphical user interface here as well, albeit a simple one. Hence we start with Tkinter. Tkinter is the standard GUI library for Python; Python combined with Tkinter provides a fast and easy way to create GUI applications. This link is a lot of help.
I created a new script as follows. Note that with a bare import Tkinter you would have to prefix every widget name, so the names are imported directly; the motor helpers are the same MotorSpeedSetAB() and MotorDirectionSet() used in the test script earlier:

# Description: Grove Motor Drive via I2C
# Author : Inderpreet Singh
import smbus
import time
import RPi.GPIO as GPIO
from Tkinter import *

# MotorSpeedSetAB() and MotorDirectionSet() as in the motor test script

def quit():
    GUI.destroy()

def Forward():
    MotorSpeedSetAB(250, 250)
    MotorDirectionSet(0b00000101)

def Backward():
    MotorSpeedSetAB(250, 250)
    MotorDirectionSet(0b00001010)

def Right():
    MotorSpeedSetAB(250, 250)
    MotorDirectionSet(0b00000110)

def Left():
    MotorSpeedSetAB(250, 250)
    MotorDirectionSet(0b00001001)

def Stop():
    MotorSpeedSetAB(0, 0)
    MotorDirectionSet(0b00000000)

# Things to do once
time.sleep(1.0)
MotorSpeedSetAB(250, 250)
time.sleep(1)

GUI = Tk()
GUI.geometry("250x250")
GUI.title("Robot Control")

B1 = Button(text="Forward", command=Forward)
B2 = Button(text="Backward", command=Backward)
B3 = Button(text="Right", command=Right)
B4 = Button(text="Left", command=Left)
B5 = Button(text="Stop", command=Stop)

B1.grid(row=0, column=1)
B4.grid(row=1, column=0)
B5.grid(row=1, column=1)
B3.grid(row=1, column=2)
B2.grid(row=2, column=1)

mainloop()

This code creates a window as shown below and allows easy control of our robot using a simple GUI. So now we have a little robot control and we should be able to use VNC to connect to it remotely. More on this next time... Stay tuned.

So, since my kit will take a month to arrive, I'm dealing with the things that I can... Shell - well, since I can't import one from anywhere (as much as I was tempted to, taxes here are robbery, around 100%), I have a buddy here in Brazil who makes those helmets in fiberglass, and I'm getting one from him. Threepio sound: gathering from my personal collection and cleaning it with Adobe Audition... In this first stage I will just add Threepio voices in English and Portuguese; later I will add more voices. Software - well, since most of it can be gathered from the web, I'm focusing on a tracking system that will be Arduino-based (sound tracking) (still waiting for the servos; I've fried the ones I had)...
so, for now, that's it. Wish me luck!

Just wondering - is anyone else still waiting on part of the kit? I haven't received the RPi A+, audio card, Microstack GPS, or Microstack baseboard. Did the kit get changed, does anyone know? Beggars can't be choosers - I was just hoping to utilize a couple of those other parts.

Previous Posts: Application Information, ChipKit Pi Vs Arduino Pro Mini, Quadcopter Assembled (You call that a Quadcopter?), QuadCop - The Control Switch

I continue to find issues with the ChipKit Pi and work around them as I can. Here is a brief list, followed by a more in-depth explanation for those who are interested. My goal is to show our hosts the amount of innovation it takes to use the ChipKit Pi and also to help others who may choose to use it.
1) Digital pins 4, 5, 6, 7 and analog pins A3, A4 and A5 are not present. While there are pinouts on the CKP for the digital pins, they are connected to nothing. SDA and SCL are present, but not in the form of A4 and A5.
2) There is a bug in I2C as a slave, where the callback routine only gets 1 byte at a time. This means if I send 3 bytes in a block, the callback will be called 3 times instead of 1 time. This isn't a show stopper but requires one to create a block protocol similar to a serial communication. See for the bug submission.
3) If your code uses Serial.println statements, you need to change to Serial1.println to see the output on the Pi.
4) The Servo library doesn't work for me; however, SoftwarePWMServo does, and work well it has. It's very similar, just different function calls. It's not object oriented though.
5) The serial pins for the Pi are covered up by the CKP. There is a small header on the top of the board next to the header that connects to the Pi. Some of the GPIO pins are brought here from the Pi, but they are NOT in the expected location. You need a pinout for this. However, the serial pins are not broken out.
So to get around this I soldered wires to the serial pins so I can connect my GPS directly to the Pi.
6) Number 6 isn't found yet, but I am sure it will be.
None of these bugs are show stoppers. However, they require innovation to work around. I need to solder on wires, develop a block protocol for I2C, and remap all my pins in software to work around the pins I used that are not present on the CKP. Now for the more in-depth explanations.

ChipKit Pi (CKP) notes
The CKP gives us a nice breakout for the PIC32MX250F128B in a format similar to the Arduino Uno. However, because the PIC32 included on the board does not have as many pins as the AVR 328p on an Uno, some of the pins are present on the breakout but not actually connected to anything! This had me stumped for a bit until I found the pinout located here: Once again I assumed this breakout was 100% Arduino compatible, and I assumed wrong. This meant I had to do some reading, and it serves me right for not doing so up front. That's ok, I still really like it. My original pinout mapping for the Arduino Pro Mini used most of the pins. The ChipKit Pi is missing digital pins 4-7 and A4, A5. On the Arduino, A4 and A5 are used for I2C SDA and SCL respectively. SDA and SCL are present on the breakout and can be used to bring the CKP onto the I2C bus. It's unclear to me whether these will actually function as analog pins as well. A3 is nowhere to be found. In my original code for the Arduino, I needed 7 digital pins to read PWM, 4 digital pins to write PWM (software based), SDA, SCL, and 2 digital pins for signaling the RPi. So 13 digital pins are needed in addition to the two for I2C. However, the CKP only has 12 digital pins if I also count A0 and A1 as digital pins. What to do?
Right next to the header where the Raspberry Pi connects (JP5) is J4:

+-----J4------+
NC            01  02  GPIO0/GPIO2
GPIO1/GPIO3   03  04  NC
GPIO4         05  06  GND
GPIO18        07  08  GPIO17
GND           09  10  GPIO21/GPIO27
GPIO23        11  12  GPIO22
GPIO24        13  14  3V3
GND           15  16  GPIO10
GPIO25        17  18  GPIO9
GPIO8         19  20  GPIO11
GPIO7         21  22  GND
+-------------+

You can see some of the GPIO pins covered up by the Raspberry Pi are available for use. I "assume", and there is that word again, that this is a straight passthrough from the RPi. Here are the items that need a digital pin:

Radio Channel 1 - Forward/Reverse
Radio Channel 2 - Left/Right
Radio Channel 3 - Climb/Dive
Radio Channel 4 - Rotate Left/Right
Radio Channel 5 - Auto or manual pilot mode
Radio Channel 6 - Macro mode start/stop
Radio Channel 7 - Perform sensor sweep indicator
Motor ESC* 1
Motor ESC* 2
Motor ESC* 3
Motor ESC* 4
Automode indicator - connection to the RPi to turn it to automode
Macromode indicator - connection to the RPi to tell it to start/stop recording
Sensor sweep indicator - connection to the RPi to tell it to note a sensor sweep at these coordinates

*ESC = electronic speed controller, which given a PWM signal sets the speed of a brushless motor. Technically these pins actually connect to the flight controller, not the ESCs themselves.

One solution is to read one of channels 5-7 through one of the GPIO pins on the RPFS. Given that Raspbian is not a real-time OS, this is generally not recommended. However, channels 5-7 are simply off/on, so they go from 1ms to 2ms and nothing in between. I feel that the variance the RPi will have is acceptable and will be less than 100 microseconds anyway. This would be an issue if I needed more precise readings, but on these pins I do not. If I go this route I will take the reading 3 times in a row and use the average, in case there is some 1ms delay for some reason. Another option is, instead of using 3 digital pins to signal the RPi, to write commands through I2C. I originally avoided this approach due to my original design using the Arduino Minis.
With all the reading and writing of PWM signals on top of some I2C work, I felt it was being overtaxed already. The CKP has so much power, though, that I think it will be strong enough to handle more parsing. With the bug in I2C it's going to be more work to implement additional I2C commands. In the GitHub bug submission at there is a workaround, but it requires blocking in the ISR. Given that I need to update servos and read PWM input, all ISRs must run fast. It would be better to work around the bug using non-blocking code. Blocking code: when an ISR is called, interrupts are disabled, so no other ISR can run. Servos are updated via an ISR, so if an ISR takes too long the servos will start to jitter.

The networking part has two main roles. The main issue arose while discussing with some Element14 community members the choice of adopting the wired Ethernet LAN as the standard method to connect the Meditech RPi units. In the discussion Epitaxial vs 1N4002 Diodes, clem57 suggested adopting the SPI protocol for the RPi internal connections as apparently more reliable. Frankly, as can be read in my responses to the discussion mentioned above, something sounded strange to me. But I am not a great expert in SPI protocol usage, so I had to read up on it. Until now I have used it intensively only to manage communication between microcontrollers and chips (e.g. analog potentiometers, shift registers and so on). Then I had the opportunity to have a deeper discussion about this aspect of the project with violet, who gave me a wider view of her personal point of view on the usage of the SPI protocol in this specific context: connecting and exchanging data with a variable number of separate Raspberry Pi devices doing specialised tasks. Following is just the essence of her idea of how to connect the devices (note that the Pi roles are only an example to explain the design): [...]
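Since the slave-side callback fires once per received byte, the single-byte stream has to be reassembled into blocks. A minimal framing scheme (my own illustration, not the actual QuadCop protocol) is a start byte, a length byte, the payload, and a checksum. The Python sketch below shows the reassembly logic; the same state machine translates almost line-for-line into non-blocking C for the CKP's I2C callback.

```python
START = 0xA5  # arbitrary frame marker (an assumption, not from the bug report)

class BlockAssembler:
    """Rebuild [START, length, payload..., checksum] frames from a
    stream delivered one byte at a time."""

    def __init__(self):
        self._buf = None  # None means: waiting for the start byte

    def feed(self, byte):
        """Feed one byte; return the payload when a frame completes."""
        if self._buf is None:
            if byte == START:
                self._buf = []          # start collecting
            return None
        self._buf.append(byte)
        length = self._buf[0]
        if len(self._buf) == length + 2:    # length byte + payload + checksum
            payload = self._buf[1:1 + length]
            ok = (sum(payload) & 0xFF) == self._buf[-1]
            self._buf = None                # ready for the next frame
            return payload if ok else None  # drop corrupted frames
        return None
```

Each call does a constant, tiny amount of work, so nothing blocks inside the interrupt context; a frame with a bad checksum is silently discarded.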
These considerations were accompanied by a clear schematic image, reported below. My intuition, probably helped by a decent knowledge of how networks work, that the adoption of the SPI protocol was not the correct solution was influenced by several aspects. After the discussion mentioned above I checked whether these concepts were confirmed by documentation more reliable than my personal view. [...] "The SPI Bus is usually used only on the PCB. There are many facts which prevent us from using it outside the PCB area. The SPI Bus was designed to transfer data between various IC chips at very high speeds. Due to this high-speed aspect, the bus lines cannot be too long, because their reactance increases too much and the bus becomes unusable. However, it is possible to use the SPI Bus outside the PCB at low speeds, but this is not quite practical." This is what I have found on almost all sites with tutorials explaining how the SPI protocol works and how it should be adopted. The text above is from this tutorial, which I attach as a PDF to this post for your convenience. The most important reason I finally decided to adopt the Ethernet connection for the internal data flow is that, as I mentioned in the previous posts, Meditech is not a fixed system: it is conceived to work with a minimal set of probes organised across multiple Pi boards, and it must be able to accept other specific probes without anything needing to be changed. The best solution I see for this kind of case is that every unit, when present, shares the same networked architecture. The network architecture has therefore been defined as follows: to make the architecture connect only to the desired access point, the AP connection information is defined statically in the network configuration files. This can be parametrised if a different WiFi AP connection approach is needed.
If you are interested, take a look at the next chronological post, where the command set used to enable the router-bridge on the RPi Master is explained.

The medical tricorder Meditech is a modular, specialized version of the standard Starfleet tricorder (it will not be in service on the starships before the 23rd century). It is equipped with sensors and analysis software tailored for medical diagnostic purposes. These sensors will collect, store and organise data to be used when assessing a patient's condition, especially in emergency and critical situations on the Earth and other exoplanets. The medical tricorders of future centuries will be more compact and better equipped than the Meditech unit under development in 2015, the actual century. The Meditech tricorder will be a fully working one, with some limitations in size and features imposed by our current technology. The Meditech device is a portable unit of about 25 cm per side, controlled by a separate handheld to display data and organise patient information. This information can be sent to a remote central unit to help with the intervention of the remote operator. At the present date our technology has not yet made very-long-distance instant communications available, so in today's scenario we assume it will be impossible to use this model to send data to any orbiting starship, if any. Instead, this device is able to use the Earth's Internet network to connect the on-field operating unit with a remote specialised site, e.g. a hospital, ambulatory, medical center and so on. One of the features of Meditech (the model is much smaller than the heavy-duty versions available on the first starship vessels) is the availability of a slide-up camera probe for fast high-resolution images and videos for any purpose. The same probe includes frame-comparison software to check detail changes over short times, e.g. catching the eye's reaction to a flash of light.
A series of standard medical probes, some coming from the past century but with improved functionality, can track most of the most important vital signs of the patient (heart rate, heart frequency, body temperature, eye reactivity, a glucose sensor, non-invasive probes like echography, ECG and others). The signals coming from the probes are collected and organised, then sent to a local display unit to facilitate a first diagnosis if the operator is qualified to make one. Alternatively, the visualised data can be sent wirelessly via 3G/4G or other remote networking technologies to a specialised unit able to give assistance to the medical operator acting on-site. Most of the signals can also be monitored continuously. The display unit consists of a 6-9 inch handheld acting also as a mobile access point to the Internet. Meditech is equipped with a small printing unit managed by both the main device and the display device, depending on what task should be done. The display unit's graphic interface is designed to create the best user experience. As there are many different situations where Meditech can be applied as a helpful diagnostic device, the main unit is completely modular: the system can host add-on probes to manage specific diagnostic information, and it is designed to be able to host at least three more extra probes (alternatively) that will be designed and created in the future. The Meditech mobile unit can be used everywhere, including in closed areas and any Earth sanitary structure. The Meditech unit can be paired with a remote unit installed in a medical center, or available to specialised medical personnel who can support the remote operator in the patient diagnosis. As a matter of fact, the remote assistance unit is a twin Meditech device that, depending on the conditions, can optionally host the probes too. The unit includes an HDMI 15 inch (or larger) flat screen, keyboard, mouse and audio features.
The remote assistance unit instead excludes the display unit handheld (replaced by the HID devices, i.e. keyboard, mouse and monitor). So Meditech is convertible depending on the usage, and no kind of special installation, change, upgrade etc. is needed. This feature grants the system better flexibility, allowing it to work in almost any environment, known or unknown. A further evolution of the actual design is to set up a unit protected against damaging environmental factors, e.g. humidity, direct sun exposure, unavailability of a stable AC recharging power supply, and more.
Note 1: Images are mostly from the Star Trek wikipedia Memory Alpha. The shown images are not only decorative, as they are also the source of inspiration for the user interface design of the device.
Note 2: The Meditech device, codename tricorder, will be available as the first prototype by the end of August 2015. It's a pleasure to see a future idea transformed into reality just now.

Hello all! I am behind everyone else, I realize - I will do my best to make up for this in the coming weeks! I'm a full-time student going into finals, so I've been quite busy! I also thought I'd share another side project which has robbed me of my Pi time. I'm involved in the Science Outreach Club at my college. We do hands-on activities for students in our (quite rural) area. It gives them the opportunity to experience STEM in a way they might otherwise not have been able to. I spearheaded an upcoming event in robotics, and have been diligently working to build a small fleet of 8 autonomous mobile robot kits. They use ServoCity's Sprout "Runt Rover" as a frame, as it allows them to be assembled without tools. I used Picaxe boards with L293D motor controllers to control the bots, and added SRF-04 ultrasonic sensors to allow them to sense and navigate. I've spent most of my free time this last week slaving over a soldering iron to that end.
On to the task at hand... I have set up my RPi and am getting acquainted with it - it's actually my first time using one, and it's been fascinating. I've gone through what I've received so far of the kit, anyway, and looked at what exactly I want to hook up. As the full details of my proposal are perhaps outside the scope of the timeframe and budget of this challenge (though I intend to continue to work on it after the challenge has closed), I have felt that I need to prioritize aspects of it. Specifically, I'm going to work on mobility and sensing first. That being said, I will be using the GertBot for motor control, and the Microstack GPS, Xtrinsic sensor board, and Picamera module for basic sensing. I'll be interfacing all three with the RPi 2 via a PiFace PiRack GPIO extender. I'm currently enrolled in a computer vision course taught by Prof. Peter Corke, which I believe will give me a definite foot forward in terms of image acquisition and interpretation from the Picamera. For the frame I am considering using a ServoCity "Bogie" rover, as it utilizes the rocker-bogie suspension that I want to implement. We'll see - it may be too small for this application. I really am looking forward to spending more time on this project, and I will be sure to keep everyone posted!

Previous Posts: Application Information, ChipKit Pi Vs Arduino Pro Mini, Quadcopter Assembled (You call that a Quadcopter?), QuadCop - The Control Switch

The Control Switch
I've been using the ChipKit Pi and have really enjoyed learning its abilities. I originally thought it was an Arduino clone of some sort, but I quickly realized this is not the case. It does have some Arduino-type libraries, but it quickly diverges into its own paradigm. Here is a video to show what I am doing, followed by a more in-depth explanation below. In this video: My original design called for Arduino Pro Minis; however, I have decided instead to use the ChipKit Pi to replace the functionality of the Minis.
One such piece of functionality is called the "Control Switch". From my application: "This is a custom configured Arduino Mini that is connected to both the radio receiver (Rx) and the Raspberry Pi Flight System (RPFS). It is used to generate PWM signals to the flight controller by reading either the RPFS or the Rx, depending on whether the flight mode is manual or auto. It is responsible for switching signals when it detects mode changes between auto and manual. All PWM input from the Rx and all PWM output to the flight controller goes through the control switch. Digital output is fed back to the RPFS to indicate the modes read off the Rx. This relieves the RPFS from having to detect pulse widths from the Rx, as well as from having the Rx connected to two different systems." So what this entails, at a high level, is that I can control the QuadCop manually, and then I can tell it to start flying in automode. This means that my manual control must be overridden. Further, I can take back control at any time. This requires the reading of PWM signals and the generation of PWM signals. Both of these are done differently on the ChipKit Pi than on the Arduino. However, because the PIC32 is so much faster, the interrupt service routines (ISRs) are much faster and have less overhead. This means a smoother signal can be generated when generating multiple signals. The Control Switch reads 7 PWM signals and writes 5 PWM signals to control the QuadCop when in manual flying mode. There are several pieces of code out there that do the above, but I wanted to write my own from scratch in C. I prefer to understand everything that is going on under the hood, with no black-box concepts. When in autoflight mode, the RPFS takes over. The RPFS will be described more later, but at a high level it reads the GPS and executes the waypoint macros. It does this by sending a "control byte" to the control switch via I2C.
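To make the control-byte idea concrete, here is a small Python sketch: one bit per movement, a validity check that rejects contradictory pairs (forward together with reverse, and so on), and a check value computed as the byte modulo 17, matching the Control Byte Check described below. The bit positions are my own illustrative choice; the actual firmware is written in C and may assign them differently.

```python
# One bit per movement (illustrative assignment, not the actual firmware's)
FORWARD, REVERSE    = 0x01, 0x02
LEFT, RIGHT         = 0x04, 0x08
CLIMB, DIVE         = 0x10, 0x20
ROT_LEFT, ROT_RIGHT = 0x40, 0x80

_CONFLICTS = [(FORWARD, REVERSE), (LEFT, RIGHT),
              (CLIMB, DIVE), (ROT_LEFT, ROT_RIGHT)]

def valid(control_byte):
    """A control byte is invalid if any opposing pair is set together."""
    return all(control_byte & (a | b) != (a | b) for a, b in _CONFLICTS)

def cbc(control_byte):
    """Control Byte Check: remainder when dividing the byte by 17."""
    return control_byte % 17

def make_command(control_byte):
    """Pack the byte plus its check value, as sent over the I2C bus."""
    if not valid(control_byte):
        raise ValueError("conflicting directions requested")
    return [control_byte, cbc(control_byte)]
```

Combined movements that don't conflict, such as FORWARD | CLIMB, pass the check; the receiving side recomputes the modulo and discards any command whose check byte doesn't match.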
The control switch reads the control byte, parses it out, and then moves in the desired directions. There are 8 movements a quadcopter can do: forward, reverse, left, right, climb, dive, rotate left, rotate right. These motions can be represented in 1 byte, each bit representing a direction. Of course we can't request both forward and backward motion at the same time, and such a condition is checked for in the validation of the control byte that is received. The RPFS sends a control byte, a register command, and a Control Byte Check (CBC) over the I2C bus. The CBC is simply the remainder when dividing the control byte by 17 (control byte mod 17). One thing that is missing is speed information. Per my initial design, speed will be ignored and the Quad will move at a constant slow speed. The RPFS has a command available to increase the speed in any direction, in case a strong wind doesn't allow the QuadCop to move in a certain direction at the set speed. The control switch will be powered by the RPFS, which is in turn powered by a 5V linear regulator attached to a 6.6V LiFe pack commonly used in radio control applications.

Previous posts for this project: Sci Fi Your Pi: PiDesk - Guide: Stepper Motors with Gertbot!

The output of the command should be similar to this:

pi@PiDesk ~ $ wget
--2015-05-04 19:28:43--
Resolving ()... 93.93.130.166
Connecting to ()|93.93.130.166|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 8537 (8.3K) [application/x-gzip]
Saving to: `gertbot_py.tgz'
100%[================================================================================>] 8,537 --.-K/s in 0.01s
2015-05-04 19:28:43 (752 KB/s) - `gertbot_py.tgz' saved [8537/8537]

The downloaded drivers are compressed. In order to be able to use them, the file must be extracted:

pi@PiDesk ~ $ tar -xvzf gertbot_py.tgz
gertbot.py

There you have it, the "gertbot.py" file.
By importing this file in other Python scripts, it is possible to call certain functions that facilitate the control of the motors. As for the hardware, I used two 200-step, bipolar, 12V NEMA17 stepper motors and connected them to the Gertbot as per the documentation. The underside of the Gertbot also indicates how to connect the motors. As a demo application, I've built a prototype of a lift to slide a screen in and out of place. The lift is controlled using the Python script documented earlier. View all my posts on this project here:

Got an update coming on some technical hardware; in the interim, I got my quadcopter assembled! The kids wanted to be in a video, so here it is, cheesy as it gets. This is the quadcopter I will be using for testing of the QuadCOP. I plan to get something sleeker later in the project.

The early stages of software development often take time to produce tangible results, particularly when you are climbing the learning curve of several technologies at once with crampons and ice-axes, but no rope. However, thanks to the documentation provided for the PiFaceCAD (Examples - PiFace Control and Display (CAD) 2.0.7 documentation) and the documentation that comes with Python, plus a small injection of brain power by yours truly, we have a first example of the Hexagram Display Output page on the PiFaceCAD screen. It might not look much, but the two hexagrams are custom bitmaps created using Python lists, and the output algorithm proves the conversion from the original (old) hexagram, as cast by the computing algorithm, to the 'new' hexagram created by the changing Yin and Yang lines. The next task is to sort out the Welcome Menu page and the control/parameter passing from the compute engine to the display module, to test the random number generation. It won't be much of an Oracle if it always gives the same answer.
As I have also got the IR module to recognise the remote from my Hauppauge PVR card, I intend to test both the IR remote and the PiFaceCAD buttons for menu navigation. The PiFaceCAD Internet Radio example software should help there. The need is to create a reliable, fast and simple-to-modify command set (which, as a matter of fact, should become a Python library) to control the printer features. The small thermal ESC/POS printer supports a well-known protocol able to generate all the printouts we need. Adopting a Bluetooth-enabled printer has the great advantage that the same peripheral can be used to print from inside the Meditech device and from the Mobile Display Unit with almost no changes. Searching the documentation available on the Internet, it seems that the most reliable way to manage the Bluetooth hardware layers with Python is the cross-platform lightblue library. I have tried it and it really supports any kind of Bluetooth layer. There are some issues that can be solved, but in the end I have excluded it. The reason is that the architecture should be as simple as possible. In our case we don't need the full set of Bluetooth features but only a serial printer, enabled and ready to work wirelessly, available at short and medium range. The simplest choice proved to be setting up the serial layer over the Bluetooth interface, with the advantage that from the point of view of the entire system the printer remains a serial device, as it should be. On the RPI side I have used a common Bluetooth dongle. As far as I can see, the same procedure will work with almost any other USB Bluetooth dongle. The entire setup procedure is quite simple and can be replicated in minutes on any Raspberry PI running Raspbian. I am working with Python 3.2, so it should be taken into account that some instructions in the standard libraries - i.e. the serial library - manage data in a slightly different way than in previous Python versions.
I suggest adopting this version anyway, because the printing data are sent through the serial port as bytes(...), including the encoding format. By specifying UTF-8 we are sure that every character is sent as an 8-bit packet to the peripheral. This aspect is important because of the printing protocol, to avoid unexpected results when sending data that includes control characters. If not yet present, it is necessary to install the components as shown below: sudo apt-get install bluetooth bluez-utils blueman sudo apt-get install python-serial After this operation the Linux Bluetooth components and the serial library for Python are installed on the system and we can proceed. The next steps are shown as single command lines, but it is not complex to create a bash script to run them all in a single shot. hciconfig Use this command to see how the Bluetooth device is recognised by the system. In my case it is handled as hci0. Now the printer should be powered and visible to the RPI Bluetooth; then launch the command hcitool scan After waiting a while the printer will appear among the identified devices. The result is something like xx:xx:xx:xx:xx:xx PRINTER_NAME, which are the device Bluetooth address and the Bluetooth printer name. At this point we should pair the printer with the RPI using the command sudo bluez-simple-agent hci0 xx:xx:xx:xx:xx:xx After a few seconds the command asks for the printer's Bluetooth PIN, and the device is then paired with the RPI. At this point we should make the binding between the two stable across reboots. We can do that by editing the rfcomm.conf file: sudo leafpad /etc/bluetooth/rfcomm.conf When the file opens in the editor (you can use the vi editor if there is no graphic desktop running) add - or uncomment and change if already present - the following lines: # Change rfcomm0 accordingly with your Bluetooth dongle setting rfcomm0 { bind yes; # Replace xx:xx:xx ...
with the printer Bluetooth address device xx:xx:xx:xx:xx:xx; channel 1; comment "Serial Bluetooth printer"; } Save the file and exit. Now the last command is to enable the binding with the peripheral immediately: sudo rfcomm bind all Now the printer is connected and ready to receive data from the applications. Open the Python 3 IDE and in the editor insert the following test script #! /usr/bin/python import serial from time import sleep bluetoothSerial = serial.Serial("/dev/rfcomm0", baudrate=9600) testS = "Raspberry-PI Project \nMeditech Printer Test" testV = input() bluetoothSerial.write(bytes(testS, 'UTF-8')) bluetoothSerial.write(bytes(testV, 'UTF-8')) Run the script and it should print a test. In this document I show the results; in a separate post I will publish the setup procedure. As mentioned in a previous post, a small thermal printer will change the rules of the game, tracking the history of an emergency intervention. In unconventional situations I have personally experienced that it may be useful to have some data on paper instead of in digital format only. The other point is that some health status reports are better printed, for a lot of reasons (to be discussed during the first tests). So Meditech will include a small thermal printer to produce - on demand in some situations and as a mandatory task in others - short documents. Below there is a video showing the RPI printing a long list (the /usr/bin folder file list, for the curious) as a reliability, responsiveness and speed test. As the printer, with the proper control codes, can also work in graphic mode, it will also be used to print graphic representations of the data: available in multiple copies, on-field and immediately available.
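The "proper control codes" mentioned above are ESC/POS sequences. The sketch below builds a small receipt payload using a few common commands (ESC @ to initialize, ESC E for bold, GS ! for character size); exact support varies by printer model, so check the printer's manual before relying on any of these, and the title and lines used here are made-up example data.

```python
# Common ESC/POS control sequences (support varies by printer model).
ESC = b"\x1b"
GS = b"\x1d"

INIT = ESC + b"@"            # reset the printer to its default state
BOLD_ON = ESC + b"E\x01"
BOLD_OFF = ESC + b"E\x00"
DOUBLE_SIZE = GS + b"!\x11"  # double width and double height
NORMAL_SIZE = GS + b"!\x00"

def build_receipt(title, lines):
    """Build the byte payload for a short report: bold double-size title,
    then normal-size body lines, then a few line feeds of paper."""
    data = INIT + BOLD_ON + DOUBLE_SIZE + title.encode("utf-8") + b"\n"
    data += NORMAL_SIZE + BOLD_OFF
    for line in lines:
        data += line.encode("utf-8") + b"\n"
    return data + b"\n\n\n"  # feed some paper past the tear bar

payload = build_receipt("Meditech", ["Intervention started", "12:41 GPS 45.07,7.68"])
# To print, write the payload to the Bluetooth serial port set up earlier:
# serial.Serial("/dev/rfcomm0", baudrate=9600).write(payload)
```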
In the previous post, Meditech: Powering the unit, michaelkellett raised the issue of electrical safety and general medical-device compliance regarding the possible risks of adopting some components, especially related to the power supply architecture. It is clear that, since Meditech will be a medical device, patient safety from any possible device-derived injury should be considered in depth. The current phase of the project, that is the definition of the parts and the general architecture, considers the bare technical aspects more than the specific medical safety compliance. This aspect will be reviewed later; the better choice is probably on the first fully working prototype. There are anyway some aspects that I always keep in mind. It is almost obvious that this will be the most critical aspect related to potential shock damage to the patient. In the power architecture design I have already taken some precautions, like making the charging unit removable and available only in some conditions, while the device is not working with the health probes. The prototype design is in any case highly modular, and the powering system can be redesigned independently of the rest of the electronics. This aspect can't be covered by a single general approach as the regulatory instructions will vary by country. The approach is that a generally "safe" prototype should be open to being localised following the specifications of every country. A good source on medical electronic device design can be found in the article by Jerry Tower on the Electronic Design blog. The article is also attached to this post. Well, this is the Meditech printer (or what it will become in very little time, hopefully). The next step is to make it print from the Raspberry PI; for now the RPI has been set up with Bluetooth support and is able to pair with the printer, seen as a serial device. As Meditech is fairly complex, several power levels should be provided.
To simplify the scenario, the elements that should be powered in the base architecture are listed below. The most reliable solution I can see is the adoption of a multiple power-source approach: until now it should be possible to provide only +12V and +5V to power the entire system. A dedicated power control unit - including a power level indicator, some logic, current regulators and a battery-to-mains power switch - should be designed. As mentioned in a previous post, the Meditech architecture can host add-on modules. To avoid limiting module usage due to power limitations, especially when the system is used outdoors and is battery-operated, every module should include an autonomous battery powering system that charges when the module is connected to the main device and an external power source is provided. Received most of the kit parts - really all of the parts that I need for my project. I need to order the following cables to communicate with peripherals: USB to RS232 (9-pin DIN connector); USB to RS422 with wire ends exposed. So to recap my project: Today's manufacturers are looking to continuously improve their processes to be able to increase quality and reduce waste and cost. Part of the continuous improvement initiatives includes Overall Equipment Effectiveness (OEE) and Total Productive Maintenance (TPM) programs. Many manufacturers look to process data to help with these continuous improvement efforts. While many new machines now have data collection capabilities, many older machines do not. One of the biggest issues limiting manufacturers from analyzing data from machines is the limitation of communication channels on older PLCs. These older PLCs typically have a limited number of slow serial communication channels that were originally meant for programmers to connect to the PLC to troubleshoot or make modifications to the code.
As more digital components started being introduced into the manufacturing area (digital drives, encoders, some transducers, etc.), the need for analogue operator input decreased. Dedicated HMI applications have replaced these analogue controls in favor of more precise digital signals. These HMIs communicate with the PLC through this limited number of serial channels. That does not leave any available communication channels for the transmission of data that would be helpful in continuous improvement efforts. For this project I will use the Raspberry Pi to act as a communication hub. It will: send and receive data to/from the PLC via an RS-232 communication channel with the DF-1 protocol; send data to the Fenner M-Trim via an RS-422 communication channel with straight ASCII commands/responses; send data (received from the PLC) to an OEE server via an Ethernet connection with FTP; receive data from a recipe database via an Ethernet connection via FTP; send email/text alerts when vibration data from the PLC reaches a certain threshold. The PiFace Control and Display will be used to set up the Raspberry Pi communication channels. It has been busy this week, although the blog has taken some time to put together. I've been gathering a few more supplies necessary for the construction of the container for the "Picorder". While that is in progress, I've set in motion some basic tests of the device. I've been working mainly on the display functions at the moment. I've attached some more photos and video of the results. If you'll notice, the display pin-outs are very different, so determining the correct pin-out to program the Pi correctly was 'interesting'. In the meantime, I've been disassembling some more devices for parts. The speaker assembly had a damaged plug, so I decided to use its components for the Picorder. The original Tricorder has sound features, so I intend to have them here also.
In my last blog I stated I would be doing some sketches for the housing unit; however, I have a couple of selections en route that I found online, so I may not need to completely construct the case. I'm tracking the code I'm using, so it'll be published at a later date in its entirety along with a completed parts list. Enjoy the image gallery and video! As the RPI master will act as a server and data collector, centered on a MySQL database, it has been clear from the start that a large storage solution has to be adopted. This can be done in two possible ways: Considering the pros and cons, the decision was to find a way to move the entire system onto an external USB hard disk. The most important factor conditioning this solution was the software to be installed on the system: even using a 32 GB microSD I can't be sure there will be sufficient space in the future to host all the packages and components needed, especially in this development and experimenting phase, where installed components are redundant until all the things are clearly defined. Then there is the aspect related to the development environment: in many cases - first of all the ChipKit Pi module - the possibility to develop some parts on the RPI platform proves to be a winning solution. Based on some information I got on the Internet, I have tried to identify all the issues and tricks needed to successfully create an RPI system with the Raspbian-wheezy OS running on a USB external hard disk. The details of the installation procedure, and where attention should be paid, are described in the document I have published under the group Embedded Linux (the link is here: Raspberry PI: USB hard disk boot). A copy of the document is attached to this post. The following image shows the actual - experimental - solution with a 1 TB HD. I have used a 3.5-inch SATA disk just because it was here, unused.
Important note on the kind of HD to use: despite the fact that the HD in the image was the only large HD available at the moment, I have done some other tries with other kinds of 2.5-inch HDs, discovering that these seem unreliable because the Raspberry PI USB can't power them properly. This aspect should be investigated further because - as explained on several sites - the USB ports of the RPI can supply up to 1.2 A if the device is powered with 2 A at 5 V. I have used a wall-mount power supply with a declared rating of 2 A, but nothing changed. Maybe that is not sufficient, and as soon as possible I will try a different powering system. Previous posts for this project: Last week I started experimenting with a component which isn't part of the kit. As described in my very first post, I plan to use the Touch Board for capacitive touch applications. Because the board also has an onboard mp3 player, it is perfect to play certain sound effects when triggering the sensors. First I set up the board and familiarised myself with how it works, and then I worked out a little demo for my futuristic desk. You can find more information and some thoughts on the Touch Board here: Sci Fi Your Pi: PiDesk - Review: Bare Conductive Touch Board For a little taste of how this will fit into my project, you can watch the video below. Who can guess where I got the (temporary) sound effects from? It is time to discuss the development strategy adopted for the entire project, whose main complexity IMHO is that it involves different technologies and needs quite different approaches, especially because the final point is to harmonise all the involved components. In this first period I took the time to explore as much as possible how every component will work and how it can be integrated into the ecosystem. As of today there is a certain number of fixed points that, until proven otherwise, I assume are at least among the better solutions to be adopted.
I have tried to write down a scheme using a single RPI board, trying to see it from many different points of view, but the final choice is that at least two units will be used. There are two important reasons: The immediate next problem has been to choose the best solution to create a collaborative network between the different units. Thanks to clem57, michaelkellett and others (see the discussion Epitaxial vs 1N4002 Diodes), the initial idea became the final choice. The RPI machines will be connected through a small low-power switch creating an internal network. There will be a PI master server collecting data and performing other tasks, and a PI slave connected to a set of probes. Note that the wired LAN will connect only the Meditech RPI devices, leaving some ports available for further modules and/or a connection to an external network. One of the roles of the master PI is acting as a router between the wired LAN and the WiFi access, enabling access by the interface unit. Below are all the RPI master roles identified until now: Meditech should include at least a set of probes for non-invasive health analysis considered essential for almost any intervention (more details on the biomedical aspects will be discussed in separate posts providing detailed documentation). The probes, for project convenience, have been grouped into three main classes. Heart-rate frequency, blood pressure and ECG will be grouped together. The probes should be interfaced to the slave RPI with the ChipKit Pi board for first-level data management and acquisition; the data are then sent to the RPI slave platform for the final processing and math management. Collected data are delivered on demand to the RPI master, which will store them in its local database. It is important to manage these data sets separately because in most cases the information should be acquired continuously to monitor the state of the patient.
In the hope of having time to develop the components, the echography and other ultrasonic-based probes should be managed by the RPI master. The use of the RPI master for this task should be experimentally verified and at the moment can be considered an 80% trustable choice. The reason for putting this task in charge of the already busy RPI master is that the data collection is not done continuously; the Bitscope device will be reverted to its common use (an oscilloscope), used for the pre-processing data acquisition; as a matter of fact, it is an independent, embedded, reliable device that can dramatically reduce the CPU work of the RPI master. If tests demonstrate that the process is too heavy for the RPI master, these probes will be managed separately by a third unit, also connected to the internal network. Essential for making a precise and fast check in case of suspected diabetic crisis or diabetes-induced coma, this component too is in charge of the RPI master. It is managed by dedicated hardware, and the signal evaluation and calculations don't need to be monitored continuously (not a very-high-priority process), so this task can be done on demand by the master server. The RPI master will collect the probe data, but not only that. It will act as a sort of black box where the history of every intervention is stored along a timeline (a set of records in the database). Some strategic setting information will be permanently stored in the database. This information is also printed in hard-copy format on a serial thermal printer (55 mm wide) integrated into the Meditech system. This feature and its usage approach are helpful for medical assistance personnel working in emergency conditions (or outside the traditional hospitalisation structure). The intervention process should follow a rigid protocol that will vary by country, so this part can be customised depending on the adopted procedure.
An example that will be preset in the prototype can be the following: there are two moments when the system will automatically print a status report: when the intervention call starts (time, current location, location to reach) and when the intervention is declared closed for any reason by the operator. During the intervention the operator, based on his own knowledge and experience, can decide to obtain a hard copy of additional information on the patient's health status, i.e. the response of a health check. Last year, I backed a project on Kickstarter by a company called Bare Conductive: the Touch Board. The Touch Board is an Arduino-compatible board, based on the Arduino Leonardo, which adds new features such as: The Touch Board can also be set up as a MIDI device. In the box of the Kickstarter reward (£45, early bird £40), there was: Optionally, a LiPo battery and/or microUSB cable could be added for £5 and £2 respectively, making the kit even more complete. The Bare Conductive website contains a lot of tutorials and projects using the Touch Board and/or electric paint. One of the tutorials is about setting up the Touch Board, starting with the Arduino IDE. The setting-up tutorial can be found here. The Touch Board comes pre-installed with an audio guide that can be accessed by touching the electrodes on the board. Here's a video of the audio guide. A total of three libraries need to be installed/updated: The MP3 chip and microSD card libraries are bundled together in a single zip file. Once the libraries are installed by extracting them in the Arduino IDE's libraries folder, the programming environment is ready. // this is the touch threshold - setting it low makes it more like a proximity trigger // default value is 40 for touch MPR121.setTouchThreshold(8); // this is the release threshold - must ALWAYS be smaller than the touch threshold // default value is 20 for touch MPR121.setReleaseThreshold(4); This changes the thresholds for all electrodes.
If this is only required for specific electrodes, this can be specified as well: // configure touch threshold for electrode 0 MPR121.setTouchThreshold(0, 8); // configure release threshold for electrode 5 MPR121.setReleaseThreshold(5, 4); I've been experimenting with materials such as copper tape, aluminium foil and electric paint. Alligator clips can be used to connect the sensors to the Touch Board. Alternatively, the connection could be made by applying electric paint to the electrode; this method is called "cold soldering". Hi all, this post is nothing but some of my thoughts on the ideas that I have proposed for this challenge. And it will be short too. By the way, this week I received the kit from element14, and like everyone I got very excited to see and work with it. I am not going to describe the kit contents as they are available on element14, and I don't want to duplicate them here. All I am going to give is just a link where you can find information about these kits: Sci Fi Your Pi Design Challenge - Kit List If you haven't read about my concepts, then here is a snapshot of the same. IRON MAN Computer Interactions - To start with, I am very excited to try the IRON MAN movie computer interactions. And I am sure it will be the coolest thing to see. In the kit we got two sensor boards: the XTRINSIC SENSE BOARD from Freescale and the MICROSTACK ACCELEROMETER board. The initial task is to recognise some gestures with these sensors. I will be posting the implementation details in the next few blogs. Then, once I am able to detect some natural gestures, the next part is to map them to some computer action. There would be wireless communication between the interactive computer and the module on the wrist or in the palm. The task described above will be my first task in this challenge. I will update as I keep accomplishing the tasks. If you are still confused about what I will really be doing in IRON MAN Computer Interaction, then wait for my video.
Regards, Shrenik I dreamt of using Windows 10 on the Raspberry Pi 2 for my project. There is more to go for this, but the possibilities are amazing. Microsoft announced Windows Core yesterday and have made available a preview download of the core. I was able to get it to boot on the RPi 2, write some code in Visual Studio and debug it on the board. The core still has a way to go for completion, but the version they have out is pretty good. Here is an image showing the default app. The core does not have the Windows Experience; the application is responsible for the UI. The default app has a few preferences and shows the IP address. Here is a second image running my test application. I will attempt GPIO next. Stay tuned. As described in my previous post, my project has sub-modules. Each has a sci-fi purpose, and it should all come together to complete project VIRUS. Or I may divert altogether and do what I see as more fun to do. In this post, I start with making a robot. I will try to write these posts in the form of tutorials so that you at home can follow along. Let's make robots! In the movie Real Steel, the robot was recovered from a junkyard, but unfortunately I am still waiting for the kit to arrive, and in the meantime I am using what I have! Other posts on my project: I am working on a real update for my project. I have several parts connected up and nearly 1000 lines of C code written for my Arduino and my Raspberry Pi 2. I have some things to demonstrate and some explanations to do. I decided to take a tangent and work with the ChipKit Pi. I remember a few years back I wanted a ChipKit UNO because it was a 32-bit processor that is much faster than the AVR used in the Arduino; however, I never got around to it. I didn't recognize what element14 had sent me until I looked at it closer! I am very excited to own one of these. I have been testing some code on an Arduino Nano 328 to read my radio's PWM signals and communicate via I2C with my Raspberry Pi 2.
Things are working great, so I spent some time seeing if I could get the same code to run on the ChipKit. The code compiled with minimal changes. I used MPIDE on the Raspberry Pi to compile it. As a quick test, I ran a simple speed test on all 3 parts involved, and here is a video outlining my results. This is the FIRST video blog I have ever done so bear with me; they will get more polished over time. Watch the video and read some more comments below. I kept wanting to call the ChipKit Pi the ChipKit UNO, so you will hear me pause every time I say it. To summarize: ChipKit Pi: 98ms Arduino Nano: 4350ms Raspberry Pi 2: 15ms Notes: Phantom I2C objects. Serial pins are different for uploading sketches and for viewing output from Serial.print statements. Wire.read() is Wire.receive() in the ChipKit version. I commented the Wire parts out in the ChipKit Pi code below. I am not certain the MPIDE is turning on compiler optimizations. Will research. Speed increase of the ChipKit Pi: 44.3 times faster. The Raspberry Pi 2 is 6.5 times faster than the ChipKit Pi, and 290 times faster than the Arduino. Keep in mind this is just a purely simple processing test that ignores I/O and other functionality. PS: QuadCOP parts arrive tomorrow, the FUN stuff including all the motors and the frame! Should be able to do a test flight (manual, no Pi etc.) this weekend. Raspberry Pi code (use the time command to get results): #include <stdio.h> int main() { int a; int b; b = 1; for(int i = 0; i < 100; i++) for(a = 0; a <= 3000; a++) { b = b + a; b = b * 2; b = b % 1657; } printf("Result: %d\n", b); return 0; } Arduino code (spits out information on serial): #include <Wire.h> ... ChipKit Pi code: ...
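The speed-up figures quoted in the summary follow directly from the three measured times; a quick check:

```python
# Reproducing the quoted speed-ups from the measured loop times (milliseconds).
times_ms = {"chipkit_pi": 98, "arduino_nano": 4350, "raspberry_pi2": 15}

chipkit_vs_arduino = times_ms["arduino_nano"] / times_ms["chipkit_pi"]   # ~44.4x
pi2_vs_chipkit = times_ms["chipkit_pi"] / times_ms["raspberry_pi2"]      # ~6.5x
pi2_vs_arduino = times_ms["arduino_nano"] / times_ms["raspberry_pi2"]    # 290x

print(f"{chipkit_vs_arduino:.1f} {pi2_vs_chipkit:.1f} {pi2_vs_arduino:.0f}")
```

Note that the exact ratios depend on the rounding of the measured times, so the post's "44.3 times faster" and this calculation's 44.4 differ only in the last digit.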
https://www.element14.com/community/community/design-challenges/sci-fi-your-pi/blog/2015/05
Wiki xUnit++ / SuitesAndAttributes Suites Suites are a broad method of grouping your tests. All tests within a SUITE block are contained within the same C++ namespace, and can be selected based on the name of the suite using the standard test runner. SUITE("Some Suite") { FACT("Some Fact") { } } Attributes Attributes are a much more fine-grained method of grouping tests. Every test (whether within a suite or not) can be decorated with up to eight key-value pair attributes, by wrapping the test(s) in an ATTRIBUTES block: ATTRIBUTES(("Category", "Server")) { FACT("Fact 1") { // test some work that apparently fits in a "Server" category } FACT("Fact 2") { // test some work that apparently fits in a "Server" category } } Skipping Tests There is one special attribute, "Skip" (capitalization required). Any tests with "Skip" as an attribute key will be ignored at run time and the attribute value will be printed as the reason for skipping the test. ATTRIBUTES(("Skip", "Takes way too long to complete. Need to refactor")) { TIMED_FACT("Test name", 0) { // do some long-running work that makes the test process take too long: std::this_thread::sleep_for(std::chrono::seconds(10)); } } // If the "Skip" attribute is the only attribute, then the SKIP macro can be used as a short-cut. SKIP("Another test to skip") { FACT("Do nothing") { } } Importantly, skipped tests will never be instantiated, so no long-running fixture or theory setup will be run.
https://bitbucket.org/moswald/xunit/wiki/SuitesAndAttributes
sentinel alternatives and similar packages Based on the "Framework Components" category. Alternatively, view sentinel alternatives based on common mentions on social networks and blogs. plug10.0 7.5 sentinel VS plugA specification and conveniences for composable modules between web applications surface9.8 9.2 sentinel VS surfaceA server-side rendering component library for Phoenix commanded9.8 7.5 sentinel VS commandedUse Commanded to build Elixir CQRS/ES applications ex_admin9.7 0.0 sentinel VS ex_adminExAdmin is an auto administration package for Elixir and the Phoenix Framework torch9.3 7.3 sentinel VS torchA rapid admin generator for Elixir & Phoenix addict9.2 0.0 sentinel VS addictUser management lib for Phoenix Framework phoenix_html9.1 7.5 sentinel VS phoenix_htmlPhoenix.HTML functions for working with HTML strings and templates scrivener9.0 3.9 sentinel VS scrivenerPagination for the Elixir ecosystem phoenix_ecto8.9 4.8 sentinel VS phoenix_ectoPhoenix and Ecto integration with support for concurrent acceptance testing react_phoenix8.8 0.1 sentinel VS react_phoenixMake rendering React.js components in Phoenix easy corsica8.7 4.1 sentinel VS corsicaElixir library for dealing with CORS requests. 🏖 cors_plug8.7 3.2 sentinel VS cors_plugAn Elixir Plug to add CORS. absinthe_plug8.7 5.8 sentinel VS absinthe_plugPlug support for Absinthe, the GraphQL toolkit for Elixir Raxx8.6 0.4 sentinel VS RaxxInterface for HTTP webservers, frameworks and clients scrivener_html8.5 0.0 sentinel VS scrivener_htmlHTML view helpers for Scrivener phoenix_slime8.4 1.9 sentinel VS phoenix_slimePhoenix Template Engine for Slime phoenix_live_reload8.4 4.3 sentinel VS phoenix_live_reloadProvides live-reload functionality for Phoenix params8.2 0.0 sentinel VS paramsEasy parameters validation/casting with Ecto.Schema, akin to Rails' strong parameters. kerosene7.9 0.0 sentinel VS kerosenePagination for Ecto and Pheonix. 
dayron7.9 0.0 sentinel VS dayronA repository `similar` to Ecto.Repo that maps to an underlying http client, sending requests to an external rest api instead of a database phoenix_pubsub_redis7.9 1.4 sentinel VS phoenix_pubsub_redisThe Redis PubSub adapter for the Phoenix framework rummage_ecto7.9 0.0 sentinel VS rummage_ectoSearch, Sort and Pagination for ecto queries passport7.8 0.0 sentinel VS passportProvides authentication for phoenix application phoenix_token_auth7.7 0.0 sentinel VS phoenix_token_authToken authentication solution for Phoenix. Useful for APIs for e.g. single page apps. phoenix_haml7.7 0.0 sentinel VS phoenix_hamlPhoenix Template Engine for Haml rummage_phoenix7.7 0.0 sentinel VS rummage_phoenixFull Phoenix Support for Rummage. It can be used for searching, sorting and paginating collections in phoenix. recaptcha7.2 3.4 sentinel VS recaptchaA simple reCaptcha 2 library for Elixir applications. plug_graphql7.0 0.0 sentinel VS plug_graphqlPlug (Phoenix) integration for GraphQL Elixir plugsnag6.9 2.5 sentinel VS plugsnagA Bugsnag notifier for Elixir's plug plug_rails_cookie_session_storeRails compatible Plug session store multiverse6.3 0.0 sentinel VS multiverseElixir package that allows to add compatibility layers via API gateways. access pass6.3 0.0 sentinel VS access passprovides a full user authentication experience for an API. Includes login,logout,register,forgot password, forgot username, confirmation email and all that other good stuff. Includes plug for checking for authenticated users and macro for generating the required routes. ashes6.1 0.0 sentinel VS ashesA code generation tool for the Phoenix web framework plug_auth6.0 0.0 sentinel VS plug_authA collection of authentication-related plugs webassembly6.0 0.0 sentinel VS webassemblyWeb DSL for Elixir filterable5.9 1.6 sentinel VS filterableFiltering from incoming params in Elixir/Ecto/Phoenix with easy to use DSL. 
- phoenix_pubsub_rabbitmq: RabbitMQ adapter for Phoenix's PubSub layer
- better_params (5.8/0.0): Cleaner request parameters in Elixir web applications 🙌
- scrivener_headers (5.8/3.8): Scrivener pagination with headers and web linking
- plug_checkup (5.6/1.2): PlugCheckup provides a Plug for adding simple health checks to your app
- Whatwasit (5.5/0.0): Track changes to your Ecto models
- plug_statsd (5.4/0.0): Send connection response time and count to statsd
- plug_rest (5.0/0.0): REST behaviour and Plug router for hypermedia web applications in Elixir
- trailing_format_plug (4.9/0.0): An elixir plug to support legacy APIs that use a rails-like trailing format
- raygun (4.8/0.0)
- Votex: Implements vote / like / follow functionality for Ecto models in Elixir. Inspired from Acts as Votable gem in Ruby on Rails
- plug_jwt (4.6/0.0): Plug for JWT authentication
- phoenix_pubsub_postgres: Postgresql PubSub adapter for Phoenix apps

README

Sentinel

Note: Currently master (this readme) and the latest hex release have diverged due to poor planning on my part while working on the next version of Sentinel. It also currently interacts poorly with the new directory structure of Phoenix 1.3. I'm currently working on an update to remedy this, but cannot promise it will be released soon. If you'd like to assist in developing the latest version of Sentinel, please reach out to me.

Sentinel provides the things I wish Guardian included out of the box, like Ueberauth integration, routing, invitation flow, confirmation emails, and password reset emails. It's just a thin wrapper on Guardian, but everybody shouldn't have to roll this themselves when they build stuff.
I do my best to follow semantic versioning with this repo. Suggestions? See the Contributing/Want something new? section. Want an example app? Check out Sentinel Example.

Installation

Here's how to add Sentinel to your Phoenix project, and the things you need to set up:

    # mix.exs
    # Requires Elixir ~> 1.3
    defp deps do
      # ...
      {:sentinel, "~> 2.0"},
      {:guardian_db, "~> 0.8.0"}, # If you'd like to database-back your tokens, and prevent replayability
      # ...
    end

Configure Guardian

Example config:

    # config/config.exs
    config :guardian, Guardian,
      allowed_algos: ["HS512"], # optional
      verify_module: Guardian.JWT, # optional
      issuer: "MyApp",
      ttl: { 30, :days },
      verify_issuer: true, # optional
      secret_key: "guardian_sekret",
      serializer: Sentinel.GuardianSerializer,
      hooks: GuardianDb # optional if using GuardianDb

Optionally configure GuardianDb:

    config :guardian_db, GuardianDb,
      repo: MyApp.Repo

The install task which ships with Sentinel, and which you will run later in this walkthrough, creates the migration for the GuardianDb tokens.
Configure Sentinel

    # config/config.exs
    config :sentinel,
      app_name: "Test App",
      user_model: Sentinel.User, # should be your generated model
      send_address: "[email protected]",
      crypto_provider: Comeonin.Bcrypt,
      repo: Sentinel.TestRepo,
      ecto_repos: [Sentinel.TestRepo],
      auth_handler: Sentinel.AuthHandler,
      layout_view: MyApp.Layout, # your layout
      layout: :app,
      views: %{
        email: Sentinel.EmailView, # your email view (optional)
        error: Sentinel.ErrorView, # your error view (optional)
        password: Sentinel.PasswordView, # your password view (optional)
        session: Sentinel.SessionView, # your session view (optional)
        shared: Sentinel.SharedView, # your shared view (optional)
        user: Sentinel.UserView # your user view (optional)
      },
      router: Sentinel.TestRouter, # your router
      endpoint: Sentinel.Endpoint, # your endpoint
      invitable: true,
      invitation_registration_url: "", # for api usage only
      confirmable: :optional,
      confirmable_redirect_url: "", # for api usage only
      password_reset_url: "", # for api usage only
      send_emails: true,
      user_model_validator: {MyApp.Accounts, :custom_changeset}, # your custom validator
      registrator_callback: {MyApp.Accounts, :setup} # your callback function (optional)

See config/test.exs for an example of configuring Sentinel.

invitation_registration_url, confirmable_redirect_url, and password_reset_url are three configuration settings that must be set if using the API routing, in order to have some place to be directed to after completing the relevant server action. In most cases I'd anticipate this being a page of a SPA, mobile app, or other client interface.

Configure Ueberauth

    # config/config.exs
    config :ueberauth, Ueberauth,
      providers: [
        identity: {
          Ueberauth.Strategy.Identity,
          [
            param_nesting: "user",
            callback_methods: ["POST"]
          ]
        },
      ]

Currently Sentinel is designed in such a way that the Identity strategy must set param_nesting to "user". This is something that I would like to modify in future versions.
You'd also want to add other Ueberauth provider configurations at this point, as described in the respective provider documentation.

Configure the Bamboo Mailer

    # config/config.exs
    config :sentinel, Sentinel.Mailer,
      adapter: Bamboo.TestAdapter

Run the install mix task

Create the database using Ecto if it doesn't yet exist, then run:

    mix sentinel.install

This will create a user model if it doesn't already exist, add a migration for the GuardianDb tokens, and add a migration for Ueberauth provider credentials. You will want to delete the GuardianDb migration if you're choosing not to use it.

Currently the install task outputs the following warning:

    warning: the :datetime type in migrations is deprecated, please use :utc_datetime or :naive_datetime instead

This is due to the fact that Phoenix's generators don't appear to support passing in utc_datetime. Please modify the generated migration accordingly. Phoenix's generators also appear not to support setting null: false with the migration generator, so you will want to set that in the migration for the user email as well.

Mount the desired routes

    defmodule MyApp.Router do
      use MyApp.Web, :router
      require Sentinel

      # ...

      scope "/" do
        # pipe_through :browser, :api, or your own pipeline depending on your needs
        # pipe_through :browser
        # pipe_through :api
        Sentinel.mount_ueberauth
      end

      scope "/" do
        pipe_through :browser
        Sentinel.mount_html
      end

      scope "/api", as: :api do
        pipe_through :api
        Sentinel.mount_api
      end
    end

Be aware that the routes mounted by the macro Sentinel.mount_ueberauth must be mounted at the root of your URL, due to the way Ueberauth matches against routes. To illustrate, the route for requesting a given provider must be example.com/auth/:provider. If it is example.com/api/auth/:provider, Ueberauth will not properly register requests.

NOTE: You will run into an issue here if you set the scope to scope "/", MyApp.Router do.
The generated routes are shown in /lib/sentinel.ex:

    Sentinel.mount_ueberauth
    Sentinel.mount_html
    Sentinel.mount_api

Overriding the Defaults

Confirmable

By default users are not required to confirm their account to log in. If you'd like to require confirmation, set the confirmable configuration field to :required. If you don't want confirmation emails sent, set the field to :false. The default is :optional.

Invitable

By default, users are required to have a password upon creation. If you'd like to enable users to create accounts on behalf of other users, without a password, you can set the invitable configuration field to true. This will result in the user being sent an email with a link to GET users/:id/invited, which you can complete by posting to the same URL with the following params:

    {
      "confirmation_token": "confirmation_token_from_email_provided_as_url_param",
      "password_reset_token": "password_reset_token_from_email_provided_as_url_param",
      "password": "newly_defined_user_password"
    }

Custom Routes

If you want to customize the routes, or use your own controller endpoints, you can do that by overriding the individual routes listed.

Generate custom views

If you want to use custom views, you'll need to copy over the views and templates to your application. Sentinel provides a mix task to make this a one-liner:

    mix sentinel.gen.views

This mix task accepts a single argument of the specific context. This value can be "email", "error", "password", "session", "shared", or "user". Once you copy over a context's view and templates, you must update the config to point to your application's local files:

    config :sentinel,
      views: %{user: MyApp.Web.UserView}

The keys for this views config map correspond with the list of contexts above.

Auth Error Handler

If you'd like to write your own custom authorization or authentication handler, change the auth_handler Sentinel configuration option to the module name of your handler.
It must define two functions, unauthorized/2 and unauthenticated/2, where the first parameter is the connection and the second is information about the session.

Custom model validator

If you want to add custom changeset validations to the user model, you can do that by specifying a user model validator:

    config :sentinel,
      user_model_validator: {MyApp.Accounts, :custom_changeset}

This function must accept two arguments, a changeset and a map of params, and must return a changeset. The params in the second argument will be the raw params from the original request (not the ueberauth callback params).

    def custom_changeset(changeset, attrs \\ %{}) do
      changeset
      |> cast(attrs, [:my_attr])
      |> validate_required([:my_attr])
      |> validate_inclusion(:my_attr, ["foo", "bar"])
    end

Contributing/Want something new?

Create an issue. Preferably with a PR. If you're super awesome, include tests.

As you recall from the license, this is provided as is. I don't make any money on this, so I do support when I feel like it. That said, I want to do my best to contribute to the Elixir/Phoenix community, so I'll do what I can. Having said that, if you bother to put up a PR I'll take a look, and either merge it or let you know what needs to change before I do. Having experienced sending in PRs and never hearing anything about them, I know it sucks.

*Note that all licence references and agreements mentioned in the sentinel README section above are relevant to that project's source code only.
https://elixir.libhunt.com/sentinel-alternatives
Opened 4 years ago
Closed 4 years ago
Last modified 4 years ago

#15306 closed (fixed)

In admin, filtering on some list_filter fields raises SuspiciousOperation

Description

I just upgraded from 1.1.2 to 1.1.4 because of the security fixes. Now, when I filter by some fields on some models, it raises a SuspiciousOperation exception. In the case I'm looking at now, the field is listed in the list_filter attribute of the model's admin. My understanding is that I should be able to filter on fields that are in this list. Thanks for your help!

Change History (6)

comment:1 Changed 4 years ago by ramiro

- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset

comment:2 Changed 4 years ago by russellm

- Resolution set to invalid
- Status changed from new to closed

Closing invalid -- without more information, it's impossible to tell if you're hitting the expected behavior covered by the security fix, or some other problem. Please reopen if you can provide a simple example demonstrating the problem.

comment:3 follow-up: ↓ 5 Changed 4 years ago by dbenamy@…

- Resolution invalid deleted
- Status changed from closed to reopened

models.py:

    from django.db import models

    class ManagedItem(models.Model):
        pass

    class Story(ManagedItem):
        pass

    class ArticleChannel(ManagedItem):
        pass

    class Article(Story):
        channel = models.ForeignKey(ArticleChannel)

admin.py:

    from django.contrib import admin
    from django import forms

    from models import (Article, ArticleChannel)

    class ManagedItemAdmin(admin.ModelAdmin):
        pass

    class ArticleChannelAdminForm(forms.ModelForm):
        class Meta:
            model = ArticleChannel

    class ArticleChannelAdmin(ManagedItemAdmin):
        form = ArticleChannelAdminForm

    class ArticleAdminForm(forms.ModelForm):
        class Meta:
            model = Article

    class ArticleAdmin(ManagedItemAdmin):
        form = ArticleAdminForm
        list_filter = ('channel',)

    admin.site.register(ArticleChannel, ArticleChannelAdmin)
    admin.site.register(Article, ArticleAdmin)

Create 2 ArticleChannels.
Then go to the Article admin and try to filter by channel.

comment:4 Changed 4 years ago by ramiro

- Resolution set to fixed
- Status changed from reopened to closed

comment:5 in reply to: ↑ 3 Changed 4 years ago by ramiro

After the security fix was applied, it was found that it had to be loosened for the 1.2.X branch because of the kind of problems you report. It was done correctly and in time for the 1.2.4 release, but I forgot to backport it to the old 1.1.X branch, and so releases 1.1.3 and 1.1.4 shipped with an admin filtering security check more strict than necessary. To get this change into your copy of Django, you will need to update it to a development checkout of the releases/1.1.X SVN branch at revision r15555 or newer, or apply the patch of that commit manually to your 1.1.4 installation.

comment:6 Changed 4 years ago by anonymous

There really should be a note added. I wasted a lot of time upgrading to a release, QA-ing it, and then pulling out a minimal test case, all for a known bug. Do you have any idea when 1.1.5 will be released with this fix? Or, if I have to upgrade to an SVN revision, does r15555 also include other things that aren't production ready?

Please post a reduced version of your model(s) and field(s), plus the respective ModelAdmin.
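For illustration only (this is a hypothetical stand-in, not Django's actual admin code): the security fix behaved like a whitelist over query-string lookups, and the 1.1.3/1.1.4 variant of the check was stricter than the `list_filter` declaration implied, so a legitimate filter like the `channel` one above could be rejected. A minimal sketch of the two behaviors:

```python
# Hypothetical stand-in for the admin's lookup validation (NOT Django's code).
# list_filter = ('channel',) implies lookups rooted at "channel" are legitimate.

class SuspiciousOperation(Exception):
    """Raised when a query-string lookup is not on the whitelist."""

ALLOWED = {"channel"}  # fields named in list_filter

def strict_check(param):
    # Over-strict variant (analogous to the 1.1.3/1.1.4 behavior):
    # only the exact field name is accepted.
    if param not in ALLOWED:
        raise SuspiciousOperation(param)

def loosened_check(param):
    # Loosened variant: any lookup path rooted at a whitelisted field passes.
    if param.split("__")[0] not in ALLOWED:
        raise SuspiciousOperation(param)

# The admin's filter sidebar emits lookups like "channel__id__exact", which
# the strict check rejects even though "channel" is in list_filter:
for check in (strict_check, loosened_check):
    try:
        check("channel__id__exact")
        print(check.__name__, "accepted")
    except SuspiciousOperation:
        print(check.__name__, "rejected")
```

The fix described in comment 5 corresponds to moving from the first behavior to the second.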
https://code.djangoproject.com/ticket/15306
Opened 10 years ago
Closed 5 years ago

#10906 closed defect (duplicate)

lazy import can break unique representation

Description

A fun experiment: edit sage/categories/semigroups, and add the following lines at the beginning:

    from sage.misc.lazy_import import lazy_import
    lazy_import('sage.rings.rational_field', 'QQ')

Then restart sage:

    sage: sage.categories.semigroups.QQ is QQ
    False

This bit me hard, because such a lazy_import indirectly changed the base ring of a matrix to be equal to QQ but not identical, which in turn broke all the linear algebra: I was getting a matrix space over QQ whose elements were generic matrices.

Change History (8)

comment:1 Changed 10 years ago by

comment:2 Changed 10 years ago by

lazy_import should only really be used on callables; even then there are possibilities to break stuff. I'm not convinced that lazy_import is necessarily that helpful, for these reasons.

comment:3 follow-up: ↓ 4 Changed 10 years ago by

Lazy imports are mostly used to avoid importing expensive modules when you might want to use their functionality. It would be both inefficient and messy to use at a fine-grained level. Whole modules are nice to lazily import, e.g.

    sage: lazy_import('sage.rings', 'all', 'rings')
    sage: rings
    <module 'sage.rings.all' from '/mnt/usb1/scratch/robertwb/sage-4.6.2.rc1/local/lib/python2.6/site-packages/sage/rings/all.pyc'>

then you can use rings.X and it will resolve lazily. This can be especially useful for heavy, external dependencies (e.g. matplotlib). As for the basic objects like ZZ, QQ, CC, etc., there's no reason to lazily import them, as those modules will always be already loaded. The safest are modules and callables; using lazy_import objects is just fine, passing them around is more dangerous.

comment:4 in reply to: ↑ 3 Changed 10 years ago by

> Lazy imports are mostly used to avoid importing expensive modules when you might want to use their functionality. It would be both inefficient and messy to use at a fine-grained level.
> Whole modules are nice to lazily import. As for the basic objects like ZZ, QQ, CC, etc., there's no reason to lazily import them, as those modules will always be already loaded.

Well, except for category code, especially in the basic categories! And that's precisely where lazy importation is a nice idiom, since this code is loaded very early and one does not want to cause loops there. But lazy importing the appropriate modules instead will indeed work too. Thanks for the tip.

Now, to avoid getting confused in case of misuse (which can lead to very tricky situations to debug), what about changing the repr of lazy-imported objects so that one would get something like:

    sage: lazy_import('sage.all', 'ZZ', 'my_ZZ')
    sage: def bla(x = my_ZZ):
    ...       return x
    sage: bla()
    Lazy import of Integer Ring
    sage: bla() is ZZ
    False
    sage: bla()(1)
    1
    sage: bla() is ZZ
    False

comment:5 Changed 10 years ago by

Good point about categories. As for printing them out like that, it would make them very unfriendly for top-level use. I could see this being a useful flag to set for debugging though (and for a slew of other objects, so we'd have "Gap(1)" and "Pari(1)" and "Maxima(1)" instead of just "1" for all of them). For our sanity, I could see rejecting lazy import objects for category bases. It may also make sense to not delegate equality (and hashcode) operations.

comment:6 Changed 5 years ago by

- Milestone set to sage-duplicate/invalid/wontfix
- Reviewers set to Jeroen Demeyer
- Status changed from new to needs_review

comment:7 Changed 5 years ago by

- Status changed from needs_review to positive_review

comment:8 Changed 5 years ago by

- Resolution set to duplicate
- Status changed from positive_review to closed

I am having a hard time inserting lazy_imports into some libraries. They break coercions and parents. In the example above, the problem cannot really be solved.
QQ and a lazy_QQ will always have different ids because they are different objects; "is" compares memory addresses, which are of course different. My advice: never use lazy_imports for QQ, ZZ, CC, RR, SR, all the symbolic constants, Infinity...
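Sage's sage.misc.lazy_import isn't needed to see why "is" fails here; a minimal proxy class (a sketch, not Sage's implementation) reproduces the identity problem described above, with math.pi standing in for QQ:

```python
import importlib

# Minimal sketch of a lazy-import proxy (not Sage's implementation).
class LazyImport:
    """Resolves module.name on first use instead of at import time."""
    def __init__(self, module, name):
        self._module, self._name = module, name
        self._obj = None

    def resolve(self):
        # Import the module and look up the attribute only on first access.
        if self._obj is None:
            self._obj = getattr(importlib.import_module(self._module), self._name)
        return self._obj

# Stands in for lazy_import('sage.all', 'QQ'); math.pi plays the role of QQ.
lazy_pi = LazyImport("math", "pi")
import math

# The proxy is a different object from the real one, so identity checks
# (and anything keyed on id(), like unique representation) break:
print(lazy_pi is math.pi)            # False
print(lazy_pi.resolve() is math.pi)  # True once resolved
```

This is exactly why wrapping an already-loaded global like QQ in a lazy import breaks code that relies on identity rather than equality.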
https://trac.sagemath.org/ticket/10906
I thought this bug would have been fixed, but in the ServiceReferences.ClientConfig file in the Silverlight app, make sure your project name is in the contract.

This is what is auto-generated:

    </client> </system.serviceModel></configuration>

But this is what is needed for it to work:

    </client> </system.serviceModel></configuration>

Silverlight-help
Vb Tips
Space Coast .Net User Group
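The endpoint elements of the two snippets above did not survive extraction, so the actual difference is lost. Purely as a generic illustration (the project and service names below are hypothetical placeholders, not the poster's), a ServiceReferences.ClientConfig endpoint has this shape, with the poster's point being that the contract attribute must carry the project's namespace:

```xml
<!-- Hypothetical ServiceReferences.ClientConfig sketch; "MyProject" and
     "Service1" are placeholder names, not taken from the original post. -->
<configuration>
  <system.serviceModel>
    <client>
      <!-- Auto-generated form may reference the contract without the
           project namespace, e.g. contract="ServiceReference1.Service1".
           The working form qualifies it with the project name: -->
      <endpoint address="http://localhost/Service1.svc"
                binding="basicHttpBinding"
                contract="MyProject.ServiceReference1.Service1"
                name="BasicHttpBinding_Service1" />
    </client>
  </system.serviceModel>
</configuration>
```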
If you are trying to add a service reference to a regular wcf service (one the use wsHttpBinding, not a Silverlight enabled wcf service(one the uses basicHttpBinding), then this is not supported, but you should have a compiler warning. Here are some sample steps to get a basic Silverlight app working (in vb) Create new vb Silverlight app Add Silverlight enabled wcf service to project Build Add service reference Add <TextBlock x: to page.xaml and the following code to the code behind Public Sub New() InitializeComponent() client = New ServiceReference1.Service1Client() client.DoWorkAsync() End Sub Dim WithEvents client As ServiceReference1.Service1Client Public Sub test(ByVal sender As Object, ByVal e As ComponentModel.AsyncCompletedEventArgs) Handles client.DoWorkCompleted t1.Text = "completed" Hope this helps, -Ed Smith Microsoft Visual Studio team Hi Ed, Im not able to generate the proxy anymore (event with an empty reference.cs file) because VS.net is crashing during the generation of the proxy before getting to the stage of creating the cs file. i reinstalled silverlight tools and the problem is still the same. The same problem is happening with a colleage of mine, we both have TFS installed, but im making sure that everything is checked out before creating the proxy, and im always providing a new name for the proxy. I'll try your solution in a separate solution and tell you about the result. Thank you,Fadi KHOURY Thanks Fadi, Try out the scenario I sent, and see if that works for you. I'm trying to figure out if somehow you got a bad install or if this is just a bug in the product code. How did you install VS, tfs, sp1, and silverlight (In what order). Did you uninstall anything including any prior versions. You can email me the dump for the crash and we can take a look. 
Can you try to run the web service in the browser(right click on the svc in solution explorer and choose view in browser, then when IE comes up, click on the link to the ?wsdl file) and include the wsdl file in the email. You can also try to generate the proxy using slsvcutil.exe. This is the command line tool that does what the add service reference dialog does in VS. I tried the scenario and it worked, althougth i got a message when creating the silverlight enabled WCF Service. it said "Object reference not set as an instance of an object", but then the proxy was genrated successfully and i could call it. I installed VS,TFS,SP1, Silverlight in this order. On one machine, i have uninstalled everything even VS, and then reinstalled it on another drive. but on the second machine i just unistalled the old version of silverlight. Both machines have the same problem. The Breaking News is that today i tried to generate the proxy from another silverlight project in the same solution and it worked! The first silverlight project still breaks on generating the proxy. Now, This looks more like a csproj problem rather than an installation problem. Regards,Fadi KHOURY Problem Resolved. After a lot of trial and errors, i found the cause of the crash. it is weird but true, once you add a reference to C1.Silverlight.RichTextBox, VS.net will crash on generating a WCF proxy. Everytime, i want to update the proxy, i remove the reference to this DLL and then add it again before compiling the project. you can find this DLL here Thanks. I got your repro and we are investigating it. The trouble happens when we try to load the types from the dll, we are doing this because you have type sharing turned on (we search your projects assemblies for types that match the types in your service). If you turn off type sharing (in the advanced dialog) or just uncheck this dll, then you will not run into this problem. Microsoft Visual Studio
http://silverlight.net/forums/p/31137/98741.aspx
crawl-002
refinedweb
986
72.87
THE MOUNTAIN SENTINEL.

"WE GO WHERE DEMOCRATIC PRINCIPLES POINT THE WAY; WHEN THEY CEASE TO LEAD, WE CEASE TO FOLLOW."

EBENSBURG, THURSDAY, OCTOBER 21, 1852. NUMBER 52.

TERMS.

The "MOUNTAIN SENTINEL" is published every Thursday morning, at One Dollar and Fifty Cents per annum, if paid in advance or within three months; after three months Two Dollars will be charged. No subscription taken for a shorter term than six months, and no paper discontinued until all arrearages are paid. A failure to notify a discontinuance at the expiration of the term subscribed for will be considered as a new engagement.

ADVERTISEMENTS will be inserted at the following rates: 50 cents per square for the first insertion; 75 cents for two insertions; for three insertions; and 25 cents per square for every subsequent insertion. A liberal reduction made to those who advertise by the year. All advertisements handed in must have the proper number of insertions marked thereon, or they will be published until forbidden, and charged in accordance with the above terms. All letters and communications, to insure attention, must be post paid.

A. J. RHEY

TWENTY YEARS AGO.

I've wandered in the village, Tom; I've sat beneath the tree
Upon the school-house play-ground, which sheltered you and me;
But none were there to greet me, Tom, and few were left to know,
That played with us upon the grass, some twenty years ago.

The grass is just as green, Tom; bare-footed boys at play,
Were sporting just as we did then, with spirits just as gay;
But the "Master" sleeps upon the hill, which, coated o'er with snow,
Afforded us a sliding place, just twenty years ago.

The old school-house is altered some; the benches are replaced
By new ones, very like the same our penknives had defaced;
But the same old bricks are in the wall; the bell swings to and fro,
Its music just the same, dear Tom, 'twas twenty years ago.

The boys were playing some old game, beneath that same old tree;
I have forgot the name just now, you've played the same with me,
On that same spot; 'twas played with knives, by throwing so and so;
The loser had a task to do there, twenty years ago.

The river's running just as still; the willows on its side
Are larger than they were, Tom; the stream appears less wide;
But the grape-vine swing is ruined now, where once we played the beau,
And swung our sweethearts, "pretty girls," just twenty years ago.

The spring that bubbled 'neath the hill, close by the spreading beech,
Is very low; 'twas once so high that we could almost reach;
And, kneeling down to get a drink, dear Tom, I started so,
To see how much that I have changed, since twenty years ago.

Near by the spring, upon an elm, you know I cut our name,
Your sweetheart's just beneath it, Tom, and you did mine the same;
Some heartless wretch had peeled the bark, 'twas dying sure but slow,
Just as that one, whose name was cut, died twenty years ago.

My lids have long been dry, Tom, but tears came in my eyes;
I thought of her I loved so well, those early broken ties;
I visited the old church-yard, and took some flowers to strew
Upon the graves of those we loved, some twenty years ago.

Some are in the church-yard laid, some sleep beneath the sea;
But few are left of our old class, excepting you and me;
And when our time shall come, Tom, and when we're called to go,
I hope they'll lay us where we played, just twenty years ago.

Grace Greenwood.

Grace Greenwood is having a delightful time in London. The Earl of Carlisle procured her admission into the House of Lords, to witness the prorogation by the Queen, who, she says, is more remarkable for 'rosy plumptitude than regal attitude.'
She styles Lablache 'a monster of melody, who spouts up columns of sound from the vasty deep of his immense lungs, and whelms you in the flood.' Tupper, with whom she spent a day, she speaks of as a man 'whose hospitality is as proverbial as his philosophy.' Miss Mitford is in a feeble state of health, yet resigned and cheerful. Sir Thomas Talfourd is a quiet, kindly, unpretending man, and converses agreeably, though with occasional wanderings of thought, and lapses into a sort of ejaculatory dreaminess.' Grace Greenwood dined with Mr. and Mrs. Charles Dickens, and a brilliant party, at the house of the novelist, in Tavistock Square. Mr. Dickens is slight in person, 'with a fine symmetrical head, and eyes beaming with genius and humor.' He is in 'admirable health and spirits, and good at least for twenty more charming serials.' His style of living is elegant and simple, and his servants wear no livery. 'Mrs. Dickens is a charming person; in character and manner truly a gentlewoman.' Walter Savage Landor is a 'glorious old man, full of fine poetic thought, and generous enthusiasm for liberty.' Charles Kemble is a grand looking old man, animated and agreeable in conversation, and preserving in a wonderful degree his enthusiasm for his profession. Carlyle said Margaret Fuller was a great creature; but 'you have no full biography of her yet; we want to know what time she got up, and what sort of shoes and stockings she wore.'

Letter From the Rev. Mr. M'Donald,
Catholic Pastor of Manchester and Concord, New Hampshire.

We publish below the reply to the infamous Roorback purporting to be a letter from the Roman Catholics of Manchester and Concord, New Hampshire, which has been going the rounds of the Whig press. The slanderous document, it will be seen, is branded with the infamy it deserves, by the Catholic clergyman of the places from whence it originated.
The incident of the names of the two documents, as they have accidentally occurred, is most significant. The true, unvarnished and honest tribute paid to Gen. Pierce will speak for itself, while the insidious and malicious slander, originating as it does with Cooney, who has for years been despised by the upright Democracy of the country, will fall to the ground by the weight of its own baseness.

To the Editors of the Boston Post.

Gentlemen: In the Manchester American, and in several other papers, have been published documents, or certificates, numerously signed, and intended as an answer to the letter which I, in conjunction with a few CATHOLICS OF CONCORD, addressed to MR. WHITE OF MILWAUKIE, exonerating General Pierce from the charge of inactivity or indifference in relation to the abrogation of the test.

I deem it a duty to myself and to the signers of that letter, to show how those counter statements were manufactured. Before doing so, I must premise:

1st. This is my fifth year in Manchester, Concord, &c., and during that time I have never, in any way, interfered in elections. Yet I attentively watched the movements of the political parties in this State, and particularly when the convention for revising the constitution was in session. As a Catholic, I was interested in at least one measure before that body. Hence I read and filed the daily reports of its proceedings. From these, it was evident that WOODBURY and PIERCE EXERTED THEMSELVES STRENUOUSLY FOR THE REMOVAL OF THE TEST.

2. When Gen. Pierce was so unexpectedly nominated as the candidate of one party, he was at once accused, by a certain notorious partisan, of being the principal, if not the sole cause of the failure, on the part of the people, to abolish the test. This accusation, so astounding to honest men in this section of the country, who knew Gen.
Pierce had labored zealously in behalf of the Catholics, led me to wish that some means might be adopted to disabuse the Catholics of the Union of the false impression which this most untrue charge was likely to create. It was nothing to me how Catholics voted, but I was not willing that party hacks should be permitted, with impunity, to trade upon what they called the Catholic vote.

3. Yet it seemed to me, that as the Catholic press throughout the country promptly exposed this unworthy artifice for entrapping the votes of Catholics, all had been done that was required under the circumstances. Yet, to the astonishment of all persons here in New Hampshire who are totally blinded by party ties, the same charge was repeated for the WESTERN market, I suppose, as FEW persons in THIS quarter would be deceived by the story, unless, perhaps, such persons as are not only willing, but determined, to be deceived.

Mr. White, of Milwaukie, wrote to some friend in New Hampshire, that this repetition of the charge against Gen. Pierce was industriously circulated at the west. A few Catholics in Concord, who were supposed to know something about the matter, were requested to state what they knew. Accordingly they affixed their names to the White letter. At this time I knew nothing of the letter. But it was brought to me, and I was requested to certify that the names appended to that document were names of CATHOLICS LIVING IN CONCORD. I not only complied, but AS I ALSO KNEW THAT GEN. PIERCE HAD BEEN SHAMEFULLY BELIED IN THIS MATTER, I ADDED MY TESTIMONY TO THEIRS. It is true that I do not fully agree, politically, with the party which has nominated Gen. Pierce, BUT I RESPECT THE MAN. I KNEW THAT HE HAD DONE ALL THAT MAN COULD DO IN OUR BEHALF, AND THAT HENCE HE WAS GROSSLY CALUMNIATED.
I saw that the votes of Catholics were made into stock in the vilest of markets, the political market; and apart from the indignation which a Catholic would naturally feel under these circumstances, I felt that COMMON GRATITUDE REQUIRED FROM US, THE CATHOLICS OF NEW HAMPSHIRE, A CLEAR RECOGNITION OF THE FACT THAT GEN. PIERCE HAD TRULY AND ZEALOUSLY LABORED IN OUR BEHALF, AND, IF HE FAILED [illegible] IN THIS MATTER, THE FAULT WAS NOT HIS. I certainly supposed that the matter, so very simple in itself, would rest here. No question would have been raised about it if Gen. Pierce had not been nominated; no question will be raised about it three months hence. It seems, however, that the document signed by me, and the unanimous testimony of the Catholic papers, well nigh destroyed what the Pilot calls the trap for Catholic votes. It was resolved to mend the trap. So Mr. Cooney, of Albany, an Irish Catholic, I believe, visited New Hampshire. His object was to get up a counter document, signed by Irish Catholics. Another partisan, whom I need not name, was also interested in this matter. The result was that documents, numerously signed, were obtained from Manchester, Dover and other towns. As Brownson, in his number for the present month, speaking of this very matter, says, the fools are not all dead yet, and a new brood is hatched every year. The persons who got up those counter certificates regard the Catholic voters as fools, I suppose. I had determined to bestow no attention upon the matter; but, upon reading the documents in question, I found, not only that they contained contradictory statements, a thing which did not concern me, but that they were intended as an impeachment of the truth of the certificate signed by me; nay, the documents insinuate that our signatures were not all genuine. Of course my honor was here concerned. I know something about the manner in which the signatures were obtained, in Manchester and Concord.
Few of the signers, I believe, are voters. When I state that Manchester is a Whig city, that it is supported by corporations, is alive with factories, foundries and machine establishments, I suppose that I state no new thing when I say that many operatives, in matters of this sort, act under a species of moral restraint. What inducement sufficed to bring Mr. Cooney all the way from Albany, to help the Catholics of this State to settle their own affairs, I will not stop to inquire. Perhaps he is personally interested in the defeat of Pierce. Perhaps Mr. Robinson is; for he, also, although he has no part or lot with us, kindly undertook a journey from New York to New Hampshire, that we, Catholics, might understand that we had been badly treated by Gen. Pierce, a thing we did not understand before, and which we cannot, with all the pains a Robinson has taken, understand now. The counter certificates were, I believe, written by Protestants. Perhaps one of them was not. Independently of external evidences, there are phrases and expressions in them which betray their Protestant origin. They may have been, and probably were, copied by Catholics. An Irish name, as the Pilot says, will command any price, certainly any promise, from politicians, until November next. Why Protestants betray such a tender interest in our welfare, and why those Protestants happen to be interested in the coming election, is a phenomenon which I do not profess to explain. The person who was employed in this place to obtain signatures from the operatives is a person of whose equivocal catholicity I will not now say a word. Few of the signers understood the import of the paper to which they affixed their names. Some were called upon to sign in the presence of their employers. Two persons, in one shop, to give only one instance, were morally forced to sign. Some were told that the document was a petition for the establishment of the ten hour system.
Others were told that the mills would be stopped, and they, in consequence, thrown out of work, if Pierce were elected. Some were told that it was a sort of naturalization paper. Others were told that it was a document to level at English influence. Some were told that it was a petition for equal rights. Others signed it simply because they were asked to do so. We need not wonder at all this, Messrs. Editors, for such things occur frequently, also, among persons who claim to be better informed. The monster petitions we so often hear of are, I am persuaded, got up in a similar way. Petition bearers commonly find that the general run of people will sign any paper. I am confident, after investigation of the matter, that not more than some half dozen persons knew just what they were doing when they signed the Cooney document. I know that some here refused to sign the paper, and yet found their names appended to it. Nevertheless, there were intelligent and respectable Catholics, whose names were required, but who would not sign the paper. Yet these names were requisite. So the concoctors, after most of the names had been obtained, changed the whole document, as the first was of an objectionable character. A few names were obtained to the second document, and then the bulk of the signatures were transferred by the concoctors of the paper, and without leave, to the new document. I call it a new document, because it differed, in several material respects, from the old one. It was a different document. Therefore, nearly all the signatures before the public were forged. Persons signed the second that would not sign the first. Most of the persons whose names appear never saw the second. Then, the concoctors, in company with the editor of the Whig paper, came to me to certify that all the signers were Catholics. I could not do it. True, I certified to the White signatures.
But they were few, and I knew all of them; whereas I do not know half the signers of this Manchester document, admitting that all the names appended to it are names of persons really existing. Secondly, because I would, in signing that paper, certify to an untruth, inasmuch as no true Catholic would be guilty of signing a paper publicly slandering any man, much more a man now circumstanced as Pierce is. I knew that, on reflection, the first two names on the first column, and the first on the second column of signers, will repent, if not before the election excitement is over, at least after, for having endorsed the public defamation of a man who has tried to befriend them and theirs; who has been acquitted, after a thorough examination, of the charges they bring against him, by the Catholic papers; by Mr. Brownson, who says, in the number for the present month, that Pierce is no bigot, and that he is well known to have exerted himself for the abolition of the test, and that he has the most boundless contempt for those who try to get votes for Scott by laying on the shoulders of Pierce the blame for the revised Constitution. Archbishop Hughes tells us that both candidates are worthy of equal support. Nay, the respectable Whig papers scorn to notice this new false issue presented to Catholic voters, and even the papers which started the story are beginning to own that it is not true. I have something to say of the Concord document. To be brief, they who signed it labored under the same misapprehension, and were imposed upon in the same way with those who signed the Manchester rehash. Nay, more, Mr. Cooney is also responsible for the Concord document. Mr. Cooney, all the way from Albany, finding that the Concord town records contained nothing that Robinson had not already twisted, and after consultation with certain free soil notables, it was decided that another document should be prepared.
By the aid of a partizan post-master and certain mill-agents at Fisherville, a few Irishmen were morally coerced to sign the paper. These persons assured me that they knew not what the paper contained. They would have caused the erasure of their signatures, but I did not think it worth while; convinced, as I am, that the document will do no harm. At West Concord, Mr. Cooney and his abolitionist friends found a few Irishmen. The names of John Gallagher and John Lynch were in the Concord certificate signed by me; and two men, also bearing these names, live at West Concord. These were asked whether they signed the White certificates, endorsed by me. No, said they. Go to Concord, and you will find another John Gallagher and John Lynch. The concoctors professed to know no such men in Concord, and that no such would be found. So that John Gallagher and John Lynch, of West Concord, were induced to give an affidavit that they had not signed the White certificate. Their oath was an honest one, of course. The John Gallagher and John Lynch who did sign this White document live in Concord. I know them, and I know they signed it. They are ready to make an affidavit to this effect. The Cooney certificate says that one Halpin [did not sign] the White document. His employer is a Whig, and he, together with Cooney, persuaded Halpin to swear that he did not sign the certificate. It will be recollected that I did not get up the document. I simply certified that the persons whose names were appended to it were Catholics, and that I believed the contents of the letter perfectly true. I find upon inquiry that Halpin, when asked to sign the paper, answered, I will, but put down my name yourself. This circumstance, he conceives, justified him in swearing that he did not sign it. These things prove that the triumph which the Cooneyites supposed they had gained, with reference to these three cases, is no triumph at all.
Cooney went to most of the Irishmen known to be in the employment of Whigs, and, as a matter of course, I fear, most of them signed the paper. So far as I know, only one man refused. His name is Connars. The concoctors of the document coaxed and flattered him without success. The amiable and accomplished daughter of the employer of Connars also endeavored to induce him to sign the paper. This was a hard trial; but Connars, who understood, it would seem, the contents of the documents, steadily refused. "I was brought up to be a Democrat," was his constant reply. With reference to the Dover and Nashua certificates, I cannot speak from personal knowledge, but, if I be correctly informed, their history is very similar to that of the Manchester and Concord documents. From the above facts, it will be evident to the public that the Cooneyite papers, professing to embody the Catholic sentiment of New Hampshire with reference to the test (the only matter I have at any time touched upon), must be regarded as the production of a few political enemies of Gen. Pierce. They do not, in the slightest degree, affect the truthfulness of my testimony, as heretofore published. In the language of Brownson, Pierce is well known to have exerted himself in advocating the abrogation of the test. If the Democrats wished to rest their case upon the number of signatures, they would, I doubt not, have procured an array of signatures that would overwhelm the Cooneyite documents. Perhaps they would now, if they thought it worth while.

Respectfully yours, W. M'Donald, Catholic Pastor of Manchester and Concord, New Hampshire.

Horrible Death from Hydrophobia.

We mentioned, a few days ago, that Capt. Williams, who had been bitten by a rabid dog, at Brandywine, Del., had subsequently died of hydrophobia at his residence near Cape May. Dr.
Wales, his attending physician, thus describes the condition of the unfortunate man after he became aware of the nature of his disease: "Fully now awake to the awful nature of his situation, his mind, too, but little disturbed, he continued from this time (about 6 o'clock, P. M.) in a wakeful state, now conversing with tolerable composure upon such topics as might be suggested, and anon thrown into the most painful muscular contortions, especially if any liquid were offered him, or even any allusion made to anything of the kind. The case, however, did not attain its worst phase until about 8 o'clock the following morning. He seemed, indeed, from early dawn to this hour, a little more composed; had even forced down a small quantity of milk, although not without an effort which was painful to behold. At or near 8 o'clock, however, the final struggle commenced. With a wild scream he besought the presence of his mother, his wife, and others of his relations and friends, and took a formal leave of them in a manner as rational as possible. This was at once succeeded by paroxysms so violent as to require his being firmly secured to the bed, in which situation the spasms increased fast in frequency and violence, with a rapid flow of saliva, at first frothy and viscid, but afterwards thinner, less tenacious, and made up of froth and a glairy fluid, which he sputtered forth while his strength continued, clutching at it with his hands, sometimes, as if to tear it away from his mouth. His countenance was now at times shockingly distorted, and his brain frenzied, his teeth grating and gnashing in a terrible manner. He continued in this awful state until about half past 10 o'clock, A. M., when, his power having so far failed as to prevent his dislodging the fast accumulating saliva, his throat gradually filled with it, and he expired.
"In the flitting and varied expressions which passed over the countenance of the unfortunate patient toward the close of life, there were exhibited such appearances as would lead the beholder (without much stretch of the imagination) to suppose that the rabid creature, whose deadly poison was circulating through his system by its bite, had worked out the fact of transforming his very nature into its own. There was as much of the rabid canine expression as the human features would in any way allow of. [illegible] which, in a practice of twenty-five years, I have seen nothing to compare with."

The "Test." A Significant Fact.

In the town of Newport, New Hampshire, says the Concord Patriot, the names of those who voted on both sides of the question of striking out the religious test were recorded. The Newport Argus gives the names and political character of all who voted on the question, from which it appears that [illegible] Democrats and only 11 Federalists and Free Soilers voted in favor of abolishing the test, while 70 Federalists and Free Soilers and only 4 Democrats voted against abolishing it. Among those who voted against abolishing the test was D. Tekbt, a delegate to the late Federal Convention, which unanimously adopted a resolution declaring that the Whigs of New Hampshire have always been in favor of abolishing that test! [illegible] a member of the convention that passed that [resolution], who avowed himself in favor of [illegible], and voted directly against abolishing it. We have no doubt that if the facts could be shown, it would appear that three-quarters of the votes against abolishing it were given by our opponents.

General Scott's Inconsistency, and his Hospitality to the Adopted Citizens.

We anticipate our intention of
giving the whole of the able memoir of General Scott, now being published in the New York Herald, to our readers, by printing, in advance, the chapter which the skilful writer devotes to the pretensions set up by General Scott as the friend of the adopted citizens, and especially of Irishmen. It is a production fortified by abundant authority, and, as the writer has history for proof, his positions cannot be controverted. Never have we read a more scathing and overwhelming exposure; and all will concur in this opinion who read the passage we print to-day. Before a disclosure like this the halo of victory grows dim. No bravery can excuse brutality. No cause is ever made invincible by the exhibition of revenge in its leader. Even the guilty should be punished without the exhibition of exultant and barbarous malevolence. General Scott's profession of liberality to Irishmen is a pretence, a sham, a most transparent huckstering for votes. His punishment of them became a favorite theme for native-American orators; and pictures of the fifty Irishmen hanging from a scaffold in Mexico were posted up in our great cities as an argument to prove the unworthiness of all the sons of Erin. The horrid scene and the savage sequel of branding, enacted by the express orders of General Scott, were called by the natives the justice of the General. If the Democracy spoke of it as a most severe retribution, as they did, they were reminded that it was General Scott who did the deed; and if the valor of adopted citizens in other conflicts was referred to as good ground against the condemnation of all for the crime of a few, the appeal was laughed to scorn.
General Scott came home a victor; and whenever he discovered that he had a chance to be nominated President, he praised those whom for years he had persecuted, and whose national courage, especially as exhibited in defence of our country's honor, he had attempted to stain by a single example of Irish treachery, that history might quote it (but most unjustly) as a proof of the baseness of the whole Irish character. Wash. Union.

TWO DAYS LATER FROM HAVANA. Arrival of the Black Warrior. An Editor Garrotted. Death of his Mother by Grief. The Crescent City Forbidden to Enter the Harbor.

New Orleans, Oct. 4. The steamship Black Warrior, with Havana dates to the 1st, arrived here at noon. Her advices are two days later than those received by the Empire City. The Picayune has received files of Havana papers and correspondence to the day of sailing. The disaffection on the Island was growing stronger and stronger every day, and the acts of the Captain General Canedo, and his Secretary, Don Marin Galeno, are openly denounced in various quarters of the Island. Arrests of suspected persons are daily made, and some are thrown into prison on the most frivolous pretext. The police force has been materially increased, and domiciliary visits at unseasonable hours are of frequent occurrence. So fearful are the authorities lest an outbreak should suddenly occur, scarcely a vessel reaches the port that is not immediately boarded by one or more of the officers of the government, and the most searching investigation made. The execution of Signor Faccioli, one of the proprietors of the Voice of the People, the secret issue of which created so much consternation among the government officials, took place on the 30th. He was publicly garotted. He died with much firmness, and manfully refused to the last to criminate any person with him, or divulge the least secret connected with the revolutionary movements. The execution caused considerable excitement.
So great was the shock experienced by his mother, who had been denied an interview with him, that she died of grief a short time after the execution had taken place. The arrests of Count de Dors Imlcos and Jose Farias had created much alarm. The edict issued by the Captain-General against the American steamship Crescent City, forbidding her to enter the harbor whilst Mr. Smith remained on board, was still in force.

Sea Monster.

The sloop Escort, of Edgartown, Captain Cleveland, arrived here this morning, with a specimen of the fish genus which we consider to be a great curiosity. The fish is of the whale species, generally known by whalemen as a right whale "Killer." It was caught on Monday afternoon, off the south side of the Vineyard, by a sword-fishing party. Its length is 15 feet, its thickness four feet, and its weight about 3000 pounds. It has been visited by large numbers, who have expressed great curiosity at its mammoth proportions. Some wags [call it] the same monster Capt. Seabury was in pursuit of, when last reported in the N. Y. Tribune. If it is not the far-famed sea-serpent, which is annually seen off Nahant during the watering seasons, it is a "distant connection of the family," at least. The creature has a set of teeth which for regularity and whiteness would excite the envy of our city belles, and cause a dentist to fall into raptures. In fact, this is a "fish as is a fish," and there is no fish story about it. New Bedford Standard.

The Female Race.

An old bachelor, whose advances towards matrimony had been repulsed by a beautiful angel in petticoats, thus delivered himself of his sentiments in relation to the whole tribe: "Well, I always knew women wern't worth thinking of; a set of deceitful little monkeys; changeable as a rainbow, superficial as parrots, as full of tricks as a conjurer, stubborn as a mule, vain as peacocks, noisy as magpies, and full of the "Old Harry" all the time!
There's Delilah, now, didn't she take the "strength" out of Sampson? and wern't Sisera and Judith born fiends? and didn't that little minx of an Herodias dance John the Baptist's head off? Didn't Sarah "Cain" with Abraham till he packed Hagar off? Then there was [illegible]; well, the least said about her the better! But didn't Eve, the foremother of the whole concern, have one talk too many with the "old Serpent"? Of course! she didn't do nothing else! Glad I never set my affection on any of 'em. [illegible] How tormented hot this room is!"

Descendants of Genius.

With the exception of the noble Surrey, we cannot point out a representative in the male line of any English poet. The blood of beings of that order can be seldom traced far down, even in the female line. There is no English poet prior to the middle of the eighteenth century, and [illegible] no eminent author, except Clarendon and Shaftesbury, of whom we have any inheritance among us. Chaucer's only son died childless; Shakspeare's line expired in his daughter's only daughter. None of the other dramatists of that age left any progeny; neither did Raleigh, nor Bacon, nor Cowley, nor Butler. The grand-daughter of Milton was the last of his family. Newton, Locke, Pope, Swift, Arbuthnot, Hume, Gibbon, Cowper, Grey, Walpole, Cavendish (and we might easily extend the list) never married. Neither Bolingbroke, nor Addison, nor Warburton, nor Burke transmitted descendants.

The exact age of the Duke of Wellington at the time of his death was eighty-three years and four months. The Marquis of Anglesey, who was with him at Waterloo, is a year older, and still survives in tolerable health. The new Duke, at the time of his father's death, was at Frankfort, whence he was immediately summoned. He has hitherto borne the title of Marquis of Douro, and until the last election he enjoyed a seat in the House of Commons as member for Norwich.
He is 45 years of age, and bears a considerable, although not a striking, resemblance to his father. He has never taken any prominent part in public affairs, nor is he likely to do so. The Duke's second son, Lord Charles Wellesley, was with him at the time of his death. He is 44 years of age, and has a seat in the House of Commons as member for South Hampshire. It is believed that the property of the Duke of Wellington will be found to have accumulated greatly in late years. His income was very large, not only from the various grants made to him by Parliament after his successive victories, but also from the numerous appointments he had long held. The principal of these was that of Commander-in-Chief, of which the salary is 3,000 per annum.

The Expected Flying Ship.

Mr. Rufus Porter, the proprietor of the proposed Flying Ship, reports progress. He says that the most essential part of the apparatus is ready for inflation with air; the longitudinal rods, rudder, pulleys, replenishing pipes and saloon wires will soon be adjusted. The engines are superior, both in construction and style. The floor of the saloon is twenty feet in length by six in breadth, and consists of a combination of upward of one hundred and forty pieces of spruce timber, and strong enough to sustain forty persons; yet its entire weight is only twenty-five pounds. The floor of the engine-room is arranged to be independent of the main floor; and the engine and boiler are so arranged as to be at any time disconnected from the wheels and detached from the saloon, should occasion so require, for the purpose of repair or otherwise. If the weather continues favorable, and no unforeseen misfortune prevents, Mr. P. expects to gratify the friends of the project in about two weeks' time by a successful demonstration. N. Y. Tribune.

The White House in 1784.

A Mr.
Wansey, whose published notes of a tour in this country in 1784 have recently been the subject of notice in the American papers, gives the following description of breakfast at the White House. Will the breakfast in these days bear a comparison with this? "Mrs. Washington herself made tea and coffee for us. On the table were two small plates of sliced tongue, dry toast, bread and butter, but no broiled fish, as is the general custom. Miss Custis, her grand daughter, a pleasing young lady of about sixteen, sat next her brother, George Washington Custis, about two years older than herself. There was but little appearance of form; no livery. A silver urn, for hot water, was the only expensive thing on the table. Mrs. Washington appears to be something older than the President, although born in the same year; in stature rather robust, very plain in her dress."

Sam Hyde was a tame Indian, and a most notorious liar. On one occasion he sold a man a deer he had shot and left on the spot where he had killed him, the purchaser to be at the trouble of sending for him. Sam described the locality where the deer was to be found: "In a certain field, near the creek, and under the big elm tree." The messenger returned without bringing the deer. There was no deer there! When Sam, who had been paid in advance, was overhauled for his fraud, he answered: "You found the field?" "Yes." "You found the creek?" "Yes." "You found the big elm tree?" "Yes." "You found no deer?" "No." "Well, three truths to one lie is pretty good for an Indian!"

"Pa, I've been seeing cook, and can you tell me how dough resembles the sun?" "The sun, Freddy?" "Yes, Pa." "No, I cannot." Freddy, with great glee: "Because when it rises it is light." Pa, soliloquising: "That child is too clever to live."

A Compromise. A New York paper says of the late Robert C. Sands, sued for damages in a case of breach of promise of marriage:
He was offered two hundred dollars to heal his broken heart. "Two hundred!" he exclaimed, "two hundred dollars for ruined hopes, a blasted life! two hundred dollars for all this! No, never! Make it three, and it's a bargain."

There is a paper in Boston called "To Day"; another has been commenced called "To Morrow." The "Day After To-Morrow" is expected to appear shortly; and some anti-progress people are meditating one to be called [illegible].
http://chroniclingamerica.loc.gov/lccn/sn86071377/1852-10-21/ed-1/seq-1/ocr/
Steven Bethard wrote: > > Everybody here agrees that this style makes the code much less legible. > > Partly because of the constant indirection. Also because it imposes > > learning all those two-letter abbreviations before reading a module, and > > the learning has to be redone on each visit, it just does not stick. > > Much less legible than without the namespace? Or much less legible > than with a non-abbreviated namespace. using abbreviations just for the sake of it may be a bad idea, but using it to be able to quickly switch between different drivers works really well. my code is full of stuff like: import sqlite2 as DB import wckTkinter as WCK # import cElementtree as ET import xml.etree.ElementTree as ET but you sure won't see import sys as SY import os.path as op or other gratuitous aliasing. </F>
http://mail.python.org/pipermail/python-dev/2005-December/058694.html
Details - Type: Improvement - Status: Resolved - Priority: Major - Resolution: Fixed - Affects Version/s: None - Fix Version/s: 2.8.0, 3.0.0-alpha1 - Component/s: None - Labels:

Description

HDFS-6673 added the Delimited format to the OIV tool for the protocol-buffer-based fsimage. However, if the fsimage contains an INodeReference, the tool fails because of:

    Preconditions.checkState(e.getRefChildrenCount() == 0);

This JIRA proposes to allow the tool to finish, so that the user can get the full metadata.

Issue Links - is duplicated by HADOOP-14147 Offline Image Viewer bug - Resolved

Activity

I think at least we need to separate the snapshots and regular paths in the listing. If there's a requirement for that, I'd prefer to address it in a separate JIRA. Since the XML OIV tool clearly lists snapshots along with INodes / INodeReferences, I doubt we really need the Delimited tool to do the duplicate work.

Missed 1 test case. Patch 3 fixed that; it passes locally now. Also added some comments, no real code change.

The patch LGTM in general. Will +1 after addressing the following comments:

    } catch (IOException ioe) {
      ++ignored;
      if (LOG.isDebugEnabled()) {
        LOG.debug("Exception caught, ignoring node:{}.", p.getId());
      }

Would you log the IOE itself as well? Also I feel that it should be at a higher log level, e.g., INFO. getSnapshotName() seems to just ignore a ref id; would you change the function name accordingly? Additionally, can you add some comments to the following code?

    if (parent == null) {
      return getSnapshotName(inode);
    }

Thanks!

Thanks very much for reviewing, Lei (Eddy) Xu! I'll address the last 2 comments shortly. I'm not sure about the log level, though. Logging it at INFO or above will flood the console, and may confuse the user.
For example, in a big cluster, currently the tool prints the following:

    16/01/29 11:07:01 INFO offlineImageViewer.PBImageTextWriter: Found 44801245 INodes in the INode section
    16/01/29 11:22:34 INFO offlineImageViewer.PBImageTextWriter: Ignored 2235 nodes.
    16/01/29 11:22:34 INFO offlineImageViewer.PBImageTextWriter: Outputted 44801245 INodes.

If we log every exception, there will be 2235 log entries, flooding out the summary info. I understand your concern. How about I extend the "Ignored 2235 nodes." message to say "please turn on debug log for details", and change this log level to WARN?

Hm, since it's called in a recursive method, I don't think a return value can distinguish it. How does a specific type of Exception (e.g. IgnoreSnapshotException) sound to you, Lei (Eddy) Xu? We can ignore that and log the IOE then.

Patch 4 addressed the comments above: better logging, and a WARN if an IOException is thrown. Added a new type IgnoreSnapshotException to handle the snapshots in the recursive call. Lei (Eddy) Xu, please take a look and let me know what you think. Thank you!

Hi, Xiao Chen. Thanks a lot for addressing the above comments. I feel that private String ignoreSnapshotName(long inode) throws IOException can be a static method of MetadataMap and return void; what do you think? Btw, it seems that we do not need to use pre-increment in the code, e.g., ++dirCount. Would you mind changing it to comply with the coding style used in the rest of this file? Would +1 after fixing these.

Thanks for the review, Eddy! Patch 5 addresses all comments above. MetadataMap is an interface, so I put the static method in PBImageTextWriter. Patch 5 also adds a header line to the Delimited OIV output. It also adds the d or - at the beginning of the permission string, to be consistent with the legacy OIV and -ls -R.
Allow Delimited PB OIV tool to run upon fsimage that contains (lei: rev 9d494f0c0eaa05417f3a3e88487d878d1731da36) - hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/PBImageTextWriter.java - hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/IgnoreSnapshotException.java - hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/PBImageDelimitedTextWriter.java - hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageLoader.java - hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt - hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewer.java I studied the legacy_oiv Delimited tool, XML format OIV tool, and have decided to just allow the Delimited OIV tool to finish loading normal metadata, without worrying about snapshots. The legacy_oiv tool seems to print out both the normal namespace and the all found paths for snapshots all together, plus the snapshot name itself... IMO this is more confusing than not printing at all. If snapshot info is needed, one can easily get it from XML OIV tool. Attached patch 1. Below is an example of the legacy OIV, and the Delimited OIV in patch 1: Metadata was simply constructed with: A (a little bit) more complex case that involves INodeReference is included in the unit test.
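The skip-via-exception pattern discussed in the review thread can be illustrated with a small sketch. This is not Hadoop's actual code: the names (resolve_path, dump) and data structures are simplified for illustration. The idea is that a recursive path resolver raises a dedicated exception type when it hits a node hanging off a snapshot reference, and the caller counts and skips that node instead of aborting the whole dump.

```python
class IgnoreSnapshotException(Exception):
    """Signals that a node lives under a snapshot and should be skipped."""

def resolve_path(node, parents):
    # Recursive parent lookup; a node missing from the parent map stands in
    # for an INodeReference/snapshot case that this tool does not follow.
    if node not in parents:
        raise IgnoreSnapshotException(node)
    parent = parents[node]
    if parent is None:          # reached the root
        return "/"
    prefix = resolve_path(parent, parents)
    return prefix.rstrip("/") + "/" + str(node)

def dump(nodes, parents):
    emitted, ignored = [], 0
    for n in nodes:
        try:
            emitted.append(resolve_path(n, parents))
        except IgnoreSnapshotException:
            ignored += 1        # count and keep going; log details at debug
    return emitted, ignored
```

Because the skip signal is a distinct exception type, a genuine IOException can still be logged loudly, which is exactly the distinction the reviewers wanted.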
https://issues.apache.org/jira/browse/HDFS-9721
Extreme ASP.NET Makeover - Mr. Escher, Your Software is Ready
By James Kovacs | 2009

Another Look at Host

Figure 1 IHostV30

The plug-ins, which implement IProviderV30, are initialized with a Host instance and a configuration string through their Init method. Looking at Host's constructor, no dependency problems are immediately evident. The plug-in system seems fairly innocuous, but the devil is in the details. Let's dig a little deeper and see what we find.

The First Step Is Realizing You Have a Problem

[TestFixture]
public class AfterTheApplicationBootstraperIsExecuted {
    [Test]
    public void IHostCanBeResolvedFromTheContainer() {
        IoC.Resolve<IHostV30>();
    }

    [Test]
    public void CacheInstanceIsInitialized() {
        Assert.That(Cache.Instance, Is.Not.Null);
    }
}

Going Around in Circles

Running the tests fails with a CircularDependencyException, as shown in Figure 3.

Figure 3 CircularDependencyException

Let's take a look at the constructors for Host and Users. The only reason for Users to reference Host is to raise events on behalf of the Users class. We can quickly and easily remove this dependency by allowing Users to raise its own events, as shown in Figure 4. With the dependency removed, the tests pass, as shown in Figure 6.

Figure 6 Unit Test Success

Don't You Forget About IoC

With apologies to Billy Idol, we cannot forget about the container. It is responsible for creating and wiring together our dependencies. Looking at ScrewTurnWikiInitializationTask, we see the following:

The Journey Continues

Time to pack up the notification-related methods on Users and move them to their new home on Pages.

And That's a Wrap

- Do these dependencies make sense?
- Should some of these dependencies be combined/split?
- How do the dependencies relate to one another, and can those relationships be improved?

We can reason about the overall structure of our software and improve it because that structure is more apparent. In the end, our software becomes more flexible, more testable, and more resilient in the face of change.

Acknowledgements
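The refactoring the article describes (letting Users raise its own events so it no longer needs a reference back to Host) is essentially the observer pattern. Here is a minimal Python sketch of that inversion; all class and method names are invented for illustration, and the article's actual code is C#:

```python
# Break the Users -> Host -> Users cycle: Users publishes its own events,
# and Host subscribes. The dependency now points in only one direction.

class Users:
    def __init__(self):
        self._handlers = []          # subscribers to user-account events

    def subscribe(self, handler):
        self._handlers.append(handler)

    def add_user(self, name):
        # ... persist the user here ...
        for handler in self._handlers:
            handler("UserAdded", name)   # raise the event ourselves

class Host:
    def __init__(self, users):
        self.events = []
        # Host depends on Users, not the other way around.
        users.subscribe(self.on_user_event)

    def on_user_event(self, kind, data):
        self.events.append((kind, data))
```

With the cycle gone, a container can construct Users first and Host second without any circular-dependency error.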
https://msdn.microsoft.com/en-ca/magazine/ee470637.aspx
#include <wx/richtext/richtextstyles.h>

This is a base class for paragraph and character styles.

- Constructor.
- Destructor.
- Returns the style on which this style is based.
- Returns the style's description.
- Returns the style name.
- Returns the definition's properties.
- Returns the definition's properties.
- Returns the attributes associated with this style.
- Returns the attributes associated with this style.
- Returns the style attributes combined with the attributes of the specified base style, if any. This function works recursively.
- Sets the name of the style that this style is based on.
- Sets the style description.
- Sets the name of the style.
- Sets the definition's properties.
- Sets the attributes for this style.
https://docs.wxwidgets.org/trunk/classwx_rich_text_style_definition.html
The aim of this tutorial is to create an IoT alarm which alerts your manager, or a person of your choice, by sending out an SMS text message if you haven't deactivated the alarm within X minutes.

List of items required to create this project:

- Wia Dot One (Buy Yours Here)
- Dot One TFT LCD Screen Module (Buy Yours Here)
- Wia Grove Module (Buy Yours Here)
- Grove Button Module (Buy Yours Here)
- Grove Speaker Module (Buy Yours Here)
- 2x Grove Cable (Buy Yours Here)
- MessageBird Account

Step 1: Create a Wia account with the Dot One connected. If you haven't done so yet, you can follow this tutorial over here.

Step 2: Create an account on MessageBird (LINK) by registering with a Google/GitHub account or an email address.

Step 3: Once you have verified your email address, a prompt asking you to pick an interface will show up. Select "Rest APIs".

Step 4: Click on the "SMS API" once prompted, as shown in the screenshot.

Step 5: Select the "SEND SMS" box.

Step 6: A window will show up asking you to send your first message with some sample code. Ignore this and press the "Skip" button.

Step 7: You will then be greeted with a screen similar to this one. On the right-hand side of the screen you will see an "API Key" tab. Click on the "Show" text beside the Live Key and copy this key, as we will be needing it later. The API key will consist of letters and numbers.

Step 8: Open dashboard.wia.io and select your space. Click on the flows icon. Create a new flow using the blue "Create a Flow" button. Give your flow a name and press the blue "Create flow" button.

Step 9: The final flow chart should look like this.

Drag the "Timer" node from the "Trigger" pane and set your desired activation time. I set it at 7am UTC time.

Drag the "Run Function" node from the "Logic" pane and paste this code into the code editor. This code will activate the alarm from Monday to Friday. You may change this by changing the if statement.
Note the value of n: 0 is Sunday, 1 is Monday, 2 is Tuesday, etc.

var d = new Date();
var n = d.getDay();
if (n < 6 && n > 0) {
  output.process = true;
} else {
  output.process = false;
}

Drag the "Update State" node from the "Logic" pane and select "Update devices from list". A list of connected devices should appear; select your device of choice. In the "Key" box type in "state" and in the "Value" box "alarm-on". Connect all these 3 nodes together.

Drag the "Event Created" node from the "Trigger" pane and type in "button-pressed". Select your device from the box below.

Drag the "Update State" node from the "Logic" pane and select "Update devices from list". A list of connected devices should appear; select your device of choice. In the "Key" box type in "state" and in the "Value" box "alarm-off". Connect these 2 nodes together.

Drag the "Event Created" node from the "Trigger" pane and type in "alarm-stop". Select your device from the box below.

Drag the MessageBird "SMS Message" node from the "Services" pane. A pop-up should appear asking you to configure the MessageBird integration. Click on "Click here to configure", paste in your access key from Step 7, and click the blue "Create Integration" button. Click on the "Settings" beside the "Send SMS Message" node and customise the Originator, Recipient and Body to your liking. Note: in the free version you can only send messages to your own phone.

Drag the "Update State" node from the "Logic" pane and select "Update devices from list". A list of connected devices should appear; select your device of choice. In the "Key" box type in "state" and in the "Value" box "alarm-off". Connect these 3 nodes in an upside-down "Y" shape as shown in the previous image.

Step 10: Click on the Code icon on the left side of the Wia dashboard and create a code project. Give your project a name, click on "Blocks Project" and lay it out as below.
Alternatively you can create a "Code Project" and paste the code found underneath into the code editor. Once completed, upload your code to the device using the rocket icon found on the top right-hand side of the screen.

#include <WiFi.h>
#include <Wia.h>

long startcountingtime;
Wia wiaClient = Wia();

void setup() {
  WiFi.begin();
  delay(2500);
  pinMode(26, OUTPUT);
  pinMode(19, INPUT);
}

void loop() {
  startcountingtime = millis();
  while ((wiaClient.getDeviceState("state")) == "alarm-on") {
    digitalWrite(26, HIGH);
    delay(50);
    digitalWrite(26, LOW);
    delay(15);
    if (digitalRead(19)) {
      wiaClient.createEvent("button-pressed");
      delay(1000);
    }
    if (millis() - startcountingtime > 60000) {
      wiaClient.createEvent("alarm-stop");
      delay(1000);
    }
  }
  delay(10000);
}

Step 11: Connect the Grove Button board to Grove connector "19 & 23" on the Wia Grove Module, and the Grove Buzzer board to Grove connector "26 & 18", as shown below.

Step 12: If done correctly, the alarm will activate at 7am UTC time Mon-Fri. The user has 60 seconds to deactivate the alarm. If the user does not deactivate the alarm within 60 seconds, an SMS text message will be sent to the person of choice and the alarm deactivates automatically.

Congratulations and well done on completing this tutorial; hope you enjoyed it. Here are other fun projects you can try out -
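The two decision points in this tutorial (the weekday gate in the Step 9 Run Function node, and the 60-second timeout in the Step 10 device sketch) can be re-created in plain Python for desk-testing without the hardware. This is only a sketch; the function names are mine, not part of the Wia flow or device API:

```python
from datetime import date

def js_getday(d):
    # JS getDay() numbers Sunday=0..Saturday=6, while Python's
    # date.weekday() numbers Monday=0..Sunday=6, so convert.
    return (d.weekday() + 1) % 7

def should_fire(js_day):
    # The flow's gate: fire Monday (1) through Friday (5) only.
    return 0 < js_day < 6

def alarm_outcome(button_press_at_ms, timeout_ms=60000):
    # 'button-pressed' deactivates the alarm; 'alarm-stop'
    # is the event that triggers the SMS branch of the flow.
    if button_press_at_ms is not None and button_press_at_ms <= timeout_ms:
        return "button-pressed"
    return "alarm-stop"
```

Checking the logic this way before flashing the device makes it easy to verify, for example, that Saturday (6) and Sunday (0) never trigger the alarm.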
https://community.wia.io/d/97-iot-morning-alarm-using-dot-one-and-grove-module
Developer 1: Hey, I want to make some changes to CommonLibrary1. Is that going to affect the 15 other projects people are working on?
Developer 2: I dunno, ask John.
Developer 1: He told me to ask you.

Sound familiar? I wanted a quick, easy, foolproof way to visualize project dependencies without depending on human intervention, i.e. some poor sap typing stuff into a Word document which would probably be wrong before he saved the document. This project attempts to solve that by taking a directory location and an output JPG location as parameters. It then scans the directories getting all the *.csproj files and determines their relationships. It then uses the graphviz package to visualize these dependencies quickly and easily with fancy schmancy arrows and circles.

I thought for sure that someone would have done this already, but I could only find something similar for C++ apps. Even if there was an existing thing that did this, chances are it wouldn't work for the code structure I like to use. Different companies like to do their references and whatnot differently, so it would be nice to own the code and modify it according to personal preference... hence this project was born.

I like to put in post-build events that copy the output DLLs to a directory called References. That way, whether it is a debug or release compile, the referencing projects all point to the same place and everyone has consistent references. This project assumes that structure, so if you have something different then you will need to make code changes. Don't worry, I have included 3 sample projects and full unit tests so you can figure out how it's supposed to work first, then change it to suit your needs. You'll need NUnit to run the tests. It's free, get it.
In the DependencyTracker/Launch directory, there is a *.bat file which will generate two JPG files: one for the library code projects only, and one for all projects. By changing the search directory and output JPG locations, you can customize it to your scenario and then schedule it to run on whatever schedule works best. Output to a network drive which happens to be on a Web server, and you have instant, up-to-date documentation which is easily accessible.

Most of the magic happens in Project.cs when it is trying to find the references:

/// <summary>
/// Load project file from disk and get list of project paths
/// </summary>
/// <returns></returns>
private string[] GetReferenceProjectPaths()
{
    XmlDocument project = new XmlDocument();
    project.Load(ProjectPath);
    XmlNamespaceManager namespaceManager = new XmlNamespaceManager(project.NameTable);
    namespaceManager.AddNamespace("ns", "");

Nothing earth shattering, just parsing the *.csproj file, which is XML, and looking for the references. The only complication is that the reference is stored as a relative file path, but by changing the current directory to where the project is and then just browsing to that path, we can get to the referenced DLL pretty easily. Then we just have to figure out where the source project is, given the compiled DLL's location. As mentioned previously, I like to keep a consistent "References" folder, so it isn't too difficult to find the referenced project file from there. If you have a different structure, you will need to make changes here.

After all of this is loaded in memory, we then call out to a command-line program to launch the graphviz image generation, based on a file that is output below.

We must recursively parse all the various projects and dependencies that have been loaded, and take care not to process the same one twice so we don't get phantom arrows on the graph. The basic format of the graphviz program is to take a *.dot file which has lines like "a -> b", which means a points to b.

/// <summary>
/// This will output all dependencies since that is what we care about.
/// Will need some additional work if we want to display
/// straggler projects with no dependencies also.
/// </summary>
private void AppendProjectLinks(StringBuilder sb, ProjectList projects)
{
    //loop through every project and output its dependencies to the
    //file in the format
    //parent -> child
    //order doesn't matter for dot files
    foreach (Project project in projects)
        foreach (Project reference in project.ReferencedProjects)
        {
            //if not processed output link and add to processed list
            if (!ProcessedLinks.Contains(project.Name + "-" + reference.Name))
            {
                sb.Append("\"" + project.Name + "\" -> \"" + reference.Name + "\"" + Environment.NewLine);
                ProcessedLinks.Add(project.Name + "-" + reference.Name);
            }
            AppendProjectLinks(sb, project.ReferencedProjects);
        }
}

All the tests use relative paths, so it shouldn't matter where you unzip the files to; they should work as long as you don't move any of the folders around in the zip file.

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL).

Rather than hard-coding @"C:\Program Files\ATT\Graphviz\bin\dot.exe", the path to dot.exe can be built from the environment:

string path = Path.Combine(Environment.ExpandEnvironmentVariables("%programfiles%"), @"ATT\Graphviz\bin\dot.exe");
System.Diagnostics.Process.Start(path, "-Tjpg \"" + DOTFile + "\" \"-o" + OutputJPGFilePath + "\"");
http://www.codeproject.com/Articles/20082/Visualizing-Project-Dependencies-Automatically?msg=2224037
Win32::MMF::Shareable - tied variable interface to MMF

    use Win32::MMF::Shareable;

    my $ns = tie my $s1, "Win32::MMF::Shareable", "varid";
    tie my @a1, "Win32::MMF::Shareable", "array";
    $s1 = 'Hello world';
    @a1 = ( A => 1, B => 2, C => 3 );

    tie my $s2, "Win32::MMF::Shareable", "varid";
    tie my @a1, "Win32::MMF::Shareable", "array";
    print "$s2\n";
    print "@a1\n";

This module provides a tied variable interface to the Win32::MMF module. It is part of the Win32::MMF package. The current version 0.09 of Win32::MMF is available on CPAN at:

There are two ways to initialize an MMF namespace to be used in tied mode:

    # Method 1 - when importing the module
    use Win32::MMF::Shareable {
        namespace => 'MyNamespace',
        size      => 1024 * 1024,
        swapfile  => 'C:\private.swp'
    };

    # Method 2 - initialization upon first use
    use Win32::MMF::Shareable;
    tie $scalar, "Win32::MMF::Shareable", "var_1", {
        namespace => 'MyNamespace',
        size      => 1024 * 1024,
        swapfile  => 'C:\private.swp'
    };

The options are exactly the same as those for the Win32::MMF constructor, although you can pass in IPC::Shareable options as well, making it easy to port IPC::Shareable applications.

All read and write accesses to a tied variable are locked by default. An additional level of locking can be performed to protect a critical part of the code:

    my $ns = tie $scalar, "Win32::MMF::Shareable", "var_1";
    ...
    $ns->lock();
    $scalar = 'some string';
    $ns->unlock();

There is a built-in method debug that will display as much information as possible for a given tied object:

    my $ns = tie $scalar, "Win32::MMF::Shareable", "var_1";
    ...
    $ns->debug();

Currently only scalars, lists and hashes can be tied and modified correctly. You can tie a scalar reference too, but the elements that the scalar reference points to can not be modified by a direct assignment. The way to get around it is to make a local copy of the tied reference, modify the local copy, and then assign the modified local copy back to the reference:

    tie $ref, "Win32::MMF::Shareable", "var_1";
    $ref = [ 'A', 'B', 'C' ];

    push @$ref, 'D';     # this does not work

    @list = @$ref;
    push @list, 'D';
    $ref = \@list;       # this works

Credits go to my wife Jenny and son Albert, and I love them forever.

Roger Lee <roger@cpan.org>

This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
http://search.cpan.org/~roger/Win32-MMF-0.09e/MMF/Shareable.pm
Can I use Xerces for parsing XPath

Discussion in 'XML' started by QQ, Mar 4, 2008.

Similar Threads:

- "Memory leak" in javax.xml.xpath.XPath (Marvin_123456, Jul 29, 2005, in forum: Java) - 4 replies, 2,298 views - last: jan V, Jul 29, 2005
- Xpath apache xerces/xalan dom3 (Volker Jordan, Jan 22, 2004, in forum: XML) - 0 replies, 700 views - last: Volker Jordan, Jan 22, 2004
- Run-Time Check Failure #2 using XPath with Xalan and Xerces (Francesc Guim Bernat, Apr 30, 2004, in forum: XML) - 0 replies
- Does the xerces for c++ parser support XPath? (honky, Aug 22, 2005, in forum: XML) - 1 reply, 3,579 views - last: Martin Honnen, Aug 22, 2005
- java xerces xpath fails with namespace (jacksu, Feb 10, 2006, in forum: XML) - 18 replies, 8,699 views - last: Greg, Feb 27, 2006
- What libraries should I use for MIME parsing, XML parsing, and MySQL? (John Levine, Feb 2, 2012, in forum: Ruby) - 0 replies, 927 views - last: John Levine, Feb 2, 2012
- Xerces only supports the simplest XPath queries? (Ramon F Herrera, Jun 1, 2012, in forum: XML) - 15 replies, 2,437 views - last: Peter Flynn, Jun 4, 2012
http://www.thecodingforums.com/threads/can-i-use-xerces-for-parsing-xpath.595899/
#include <knuminput.h>

Detailed Description

An input widget for integer numbers, consisting of a spinbox and a slider.

Deprecated: since 5.0, use QSpinBox instead.

KIntNumInput combines a QSpinBox and optionally a QSlider with a label to make an easy-to-use control for setting some integer parameter. This is especially nice for configuration dialogs, which can have many such combined controls.

The slider is created only when the user specifies a range for the control using the setRange function, or when the user calls setSliderEnabled.

A special feature of KIntNumInput, designed specifically for the situation when there are several KIntNumInputs in a column, is that you can specify what portion of the control is taken by the QSpinBox (the remaining portion is used by the slider). This makes it very simple to have all the sliders in a column be the same size.

It uses the KIntValidator validator class. KIntNumInput enforces the value to be in the given range, and can display it in any base between 2 and 36.

Definition at line 176 of file knuminput.h.

Constructor & Destructor Documentation

Constructs an input control for integer values with base 10 and initial value 0. Definition at line 349 of file knuminput.cpp.

Constructor. It constructs a QSpinBox that allows the input of integer numbers in the range of -INT_MAX to +INT_MAX. To set a descriptive label, use setLabel(). To enforce the value being in a range and optionally to attach a slider to it, use setRange(). Definition at line 356 of file knuminput.cpp.

Constructor; the difference to the one above is the "below" parameter. Definition at line 341 of file knuminput.cpp.

Destructor. Definition at line 627 of file knuminput.cpp.

Member Function Documentation

You need to overwrite this method and implement your layout calculations there. See the KIntNumInput::doLayout and KDoubleNumInput::doLayout implementations for details. Definition at line 577 of file knuminput.cpp.

Returns the maximum value.
Returns the minimum value.

This method returns the minimum size necessary to display the control. The minimum size is enough to show all the labels in the current font (a font change may invalidate the return value). Returns the minimum size necessary to show the control. Reimplemented from QWidget. Definition at line 548 of file knuminput.cpp.

Returns the prefix displayed in front of the value. See also: setPrefix().

Returns the current reference point.

Returns the current value in units of the referencePoint.

Emitted whenever valueChanged is. Contains the change relative to the referencePoint.

Sets focus to the edit widget, and marks all text in it if mark == true. Definition at line 543 of file knuminput.cpp.

Reimplemented from KNumInput. Definition at line 672 of file knuminput.cpp.

Sets the maximum value. Definition at line 493 of file knuminput.cpp.

Sets the minimum value. Definition at line 483 of file knuminput.cpp.

Sets the prefix to prefix. Use QString() to disable this feature. Formatting has to be provided (see above). See also: QSpinBox::setPrefix(), setSuffix(). Definition at line 531 of file knuminput.cpp.

Sets the allowed input range and the step size for the slider and the spin box. Definition at line 452 of file knuminput.cpp.

Deprecated: Use the other setRange function and setSliderEnabled instead. Definition at line 476 of file knuminput.cpp.

Sets the reference point for relativeValue. Definition at line 389 of file knuminput.cpp.

Sets the value in units of the referencePoint. Definition at line 638 of file knuminput.cpp.

Returns the step of the spin box. Definition at line 508 of file knuminput.cpp.

Definition at line 420 of file knuminput.cpp.

Sets the special value text. If set, the SpinBox will display this text instead of the numeric value whenever the current value is equal to minVal(). Typically this is used for indicating that the choice has a special (default) meaning. Definition at line 661 of file knuminput.cpp.
Sets the suffix to suffix. Use QString() to disable this feature. Formatting has to be provided (e.g. a space separator between the prepended value and the suffix's text has to be provided as the first character in the suffix). See also: QSpinBox::setSuffix(), setPrefix(). Definition at line 513 of file knuminput.cpp.

Sets the suffix to suffix. Use this to add a plural-aware suffix, e.g. by using ki18np("singular", "plural"). Since: 4.3. Definition at line 520 of file knuminput.cpp.

Sets the value of the control. Definition at line 632 of file knuminput.cpp.

Returns the step of the spin box.

Returns the string displayed for a special value. See also: setSpecialValueText().

Returns the spin box widget. Definition at line 363 of file knuminput.cpp.

Returns the suffix displayed behind the value. See also: setSuffix().

Returns the current value.

Emitted every time the value changes (by calling setValue() or by user interaction).
https://api.kde.org/frameworks/kdelibs4support/html/classKIntNumInput.html
I was thinking recently about how I might send a configured BlissFlixx SD card or complete system to a friend. How easy is it for someone to configure and use? This led to a bit of Python coding, as I attempted to make wifi configuration just a little bit easier.

Background

In an earlier post I described the media centre BlissFlixx, which runs on any model B variant of the Raspberry Pi and is remotely controlled via HTTP from a smart device like a tablet or phone. Once running, it is super-easy to use, so long as it is connected to your network and you know the IP address that your router has allocated to the Raspberry Pi. Although the recommendation is to run BlissFlixx via ethernet, I have been using it via wifi. My router is only a few metres away from the Pi, so I get a good 54Mb/s data rate.

Wifi configuration

Since the Pi used for BlissFlixx is configured to only boot to the command prompt, I decided to add some code to initially do two things:

- Display the current system IP address
- Allow the user to configure wifi

A last-minute addition was to also display the initial wifi connection rate. This is what the TV screen looks like when the Pi has finished booting:

Clearly, if the bit rate is low, you won't get a very good viewing experience.

The code

I'm [clearly] not a Python programmer. So once you have picked yourself up off the floor and dried your eyes, you may be able to help tidy this up. In outline: I'm using Python to run ifconfig & iwconfig so that I can extract the current IP address and (where applicable) the wifi connection bit rate. For wifi configuration, the program accepts the user-entered SSID and password, then runs wpa_passphrase to create a new wpa_supplicant.conf file.
#===============================================================
# Simple Python3 prog to display current IP Address and to allow
# users to enter wifi details and update the file:-
#   /etc/wpa_supplicant/wpa_supplicant.conf
# SteveDee
# Feb 2016
#----------------------------------------------------------------
import os
import time

# extract IP Address from ifconfig
TheIP = "blank"
os.system("ifconfig > /home/pi/blissflixx/ifconfig.txt")
f_ipaddr = open('/home/pi/blissflixx/ifconfig.txt', 'r')
inet_details = f_ipaddr.read()
index = inet_details.find("inet addr:")
offset = len("inet addr:")
# I assume there are no more than 3 "inet addr" entries in ifconfig
if index < 0:
    print("\n>>>>>>>>>>>>>> Hmmm, can't find an IP address!\n")
else:
    testip = inet_details[index+offset:index+25]
    index_sp = testip.find(" ")
    testip = testip[0:index_sp]
    if testip != "127.0.0.1":   # exclude loopback
        TheIP = testip
    else:
        testip = inet_details[index+offset:len(inet_details)]
        index = testip.find("inet addr:")
        if index < 0:
            print("\n>>>>>>>>>>>> Hmmm, can't find an IP address!\n")
        else:
            testip = testip[index+offset:index+25]
            index_sp = testip.find(" ")
            TheIP = testip[0:index_sp]

if TheIP != "blank":
    print("\n*****************************************************************************************************************\n")
    print("\n Point the browser on your remote tablet/laptop/phone to http://" + TheIP + "\n")
    print("\n*****************************************************************************************************************\n\n")

os.system("iwconfig > /home/pi/blissflixx/iwconfig.txt")
time.sleep(5)
f_wireless = open('/home/pi/blissflixx/iwconfig.txt', 'r')
wifi_details = f_wireless.read()
index = wifi_details.find("Bit Rate")
if index > -1:
    rate = wifi_details[index:index + 17]
    print("\n Wifi " + rate + "\n")

print("\n If you need to re-configure BlissFlixx to suit your wifi...\n")
print(" ... please attached a keyboard to your Pi & enter the wifi/router SSID\n")

# re-configure wifi
varSSID = input("SSID: ")
varPASS = input("Please enter wifi password: ")
print(varSSID, ' ', varPASS, '\n')
os.system('wpa_passphrase ' + '"' + varSSID + '"' + " " + '"' + varPASS + '"' + ' > /home/pi/a_pytest')
f_template = open('/home/pi/blissflixx/wpa_template', 'r')
f_wifi = open('/home/pi/a_pytest', 'r')
wpa_details = f_template.read() + '\n\n' + f_wifi.read()
print(wpa_details)
f_wpasupp = open('/home/pi/a_pytest', 'w')
f_wpasupp.write(wpa_details)
f_wpasupp.close()
f_wifi.close()
f_template.close()
os.system('sudo mv /home/pi/a_pytest /etc/wpa_supplicant/wpa_supplicant.conf')
print("...I just need to re-boot the Pi so that changes can take affect...this may take a minute or two to stop all processes...")
time.sleep(5)
os.system('sudo shutdown now -r')

I include a wpa_supplicant template file which looks like this:

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

The passphrase info is simply added to the template text and a new file created. Python wouldn't let me put this straight into its final resting place, so it is initially saved in the Pi home folder and then moved.

Changes to blissflixx.py & start.sh

I also made a couple of simple mods at the start of the main blissflixx.py file:

#!/usr/bin/python
from os import path
import sys, os

LIB_PATH = path.join(path.abspath(path.dirname(__file__)), "lib")
sys.path.append(LIB_PATH)

import locations, gitutils, cherrypy
import time
os.system('clear')
print "\n\n..........enforcing a 10 second delay.....\n"
time.sleep(10)

# Do not allow running as root
if os.geteuid() == 0:

The highlighted lines just clear the boot screen and add a 10 second delay. The delay appears to be necessary to ensure that the Pi is connected to the wifi before BlissFlixx starts.

In start.sh I've just added a line to start my Python code:

#!/bin/bash
python /home/pi/blissflixx/blissflixx.py --port 80 --daemon
python3 /home/pi/blissflixx/wificonf.py

All files live in /home/pi/blissflixx but, as already mentioned, a copy of the updated wpa_supplicant.conf is also copied to /etc/wpa_supplicant.
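Since the author invites tidying, here is one possible simplification of the ifconfig-parsing step, using a regular expression instead of index slicing. It assumes the older "inet addr:" output format that the original script also targets, and the function name is mine:

```python
import re

def first_nonloopback_ip(ifconfig_text):
    """Pull IPv4 addresses out of ifconfig output and return the first
    one that isn't the loopback address, or None if there isn't one."""
    for ip in re.findall(r"inet addr:(\d{1,3}(?:\.\d{1,3}){3})", ifconfig_text):
        if ip != "127.0.0.1":   # exclude loopback, as the original does
            return ip
    return None
```

This also removes the need for the temporary ifconfig.txt file: the output could be captured directly with `subprocess.check_output(["ifconfig"])` and decoded before being passed in.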
http://captainbodgit.blogspot.com/2016/03/blissflixx-wifi-config-user-information.html
Removing JavaScript from the document

Discussion in 'Javascript' started by Daz, Jan 14, 2007.

Similar Threads:

- removing namespaces from an XML document (Matt, Apr 7, 2004, in forum: XML) - 6 replies, 779 views - last: Alex Shirshov, Apr 13, 2004
- removing a namespace prefix and removing all attributes not in that same prefix (Chris Chiasson, Nov 12, 2006, in forum: XML) - 6 replies, 672 views - last: Richard Tobin, Nov 14, 2006
- Removing a tag from an xml document (Chris Gallagher, Nov 24, 2006, in forum: Ruby) - 2 replies, 143 views - last: Chris Gallagher, Nov 24, 2006
- Why focus moves to top of document when removing element? (Hvid Hat, Jan 23, 2008, in forum: Javascript) - 3 replies, 141 views - last: Thomas 'PointedEars' Lahn, Jan 24, 2008
http://www.thecodingforums.com/threads/removing-javascript-from-the-document.929219/
The Definitive Guide to Grails is a great book, but like every book it contains a few errata. It is possible to submit errata on the book's page, but these are not publicly available, and therefore it is not possible to know if the errata you've found are new or have already been submitted many times. This page should offer a better opportunity to submit errata, addenda and small simplifications.

Page 15 - The book says Grails created a test class called HelloTests.groovy for the HelloController, but it created a class HelloControllerTest.groovy.

Page 21 - Assertions are not introduced in Java 5; they are also part of Java 1.4.

Page 28 - The variable should be person and not fred. Also, 17.<66 should be 17..<66.

Page 29 -

for(i in 0..<text[0..4]) {
    println text[i]
}

should be

for(i in 0..<4) {
    println text[i]
}

Page 49 - Since version 1.2.12, log4j also has a TRACE level.

Page 64 - The version column is missing from Figure 4-2.

Page 65 -

static optionals = ['notes']

Support for the 'optionals' property will be completely removed in 0.6, i.e. this won't work in 1.0. Use:

static constraints = {
    notes(nullable:true)
    //..
}

Pages 66-67 - Given that add* is deprecated, the bottom of page 66 should read: "Not only that, GORM also automatically provides an implementation of an addToTags method that makes it easy to work with the association." The example at the top of page 67 should read in part:

//now add some tags
b.addToTags( new Tag(name: 'grails' ) )
b.addToTags( new Tag(name: 'web framework') )

Page 82 -

password(matches: /[\w\d]+/, length:6..12)

The \d is useless, as \w is already [a-zA-Z_0-9].

Page 83 - In Table 4-2, maxLength: login(maxLength:5) sets the maximum length of a string or array property. NOTE: maxLength has been deprecated (0.4) and then removed (0.5); use maxSize, e.g.
login(maxSize:5)

page 86
return ['passwordEqualsLastName', lastName]
should be
return ['EqualsLastName', obj.lastName]
The following script can be helpful to execute the code above and test for the last-name error in the Grails console:
def user = new User(login:'barry', password:'barry', firstName:'barry1', lastName:'barry', email:'barry@fake.com')
if(user.save()){
    println "User $user created"
} else {
    user.errors.allErrors.each {
        println ctx.getBean('messageSource').getMessage(it, Locale.getDefault())
    }
}
Outstanding issue: unable to get arguments to resolve using the above script. If you examine the contents of the FieldError object, the argument that is being passed from the constraint is not added to the FieldError arguments.

In listings 4-31 and 4-32 the "?." operator is used. This operator is presented on page 102 for those who are not familiar with it.

page 103
notes(maxLength:1000)
maxLength has been deprecated (0.4) then removed (0.5); use maxSize, e.g.
notes(maxSize:1000)

page 115
new Bookmark(title:"Canoo",url:"").save()
should be
new Bookmark(title:"Canoo",url:"").save()
Shouldn't it actually be
new Bookmark(title:"Canoo",url:new URL("")).save()
as it is in the rest of the book?

page 125,127
mock1.demand.render { Map params ->
should be
ctrlMock.demand.render { Map params ->

page 130
The main feature of the property webtest_showhtmlparseroutput is to control whether HTML parsing messages should be saved in the WebTest report or not.

page 144
It appears that as of Grails 1.0 the log4j configuration is now located at grails-app/conf/Config.groovy. Also, how logging is defined is quite a bit different than the book indicates, and you are encouraged to reference the Grails 1.0+ User Reference Guide under section 3.1.

page 162
Input fields should have an id, otherwise the <label for="...">...</label> elements are useless.
See GRAILS-540

page 165
<div class="errors">
should be
<div class="message">${flash.message}</div>
<div class="errors">
This allows the password mismatch error to actually appear.
<g:renderErrors should be <g:hasErrors
<g:renderErrors </g:hasErrors>
<input type="confirm" name="confirm" />
should be
<input type="password" name="confirm" />

page 166
if( user.save() ) { redirect( controller: 'bookmark', action: 'list' ) }
should be
if( ! user.hasErrors() && user.save() ) {
    session.user = user
    redirect( controller: 'bookmark', action: 'list' )
}
Otherwise, without setting session.user to the newly created user, we simply go back to the login page because of the security intercept. Also, the check for hasErrors() seems to be more in line with how Groovy 1.0 does things.

page 168
<form action="upload" enctype="multipart/form-data">
should be
<form action="upload" method="post" enctype="multipart/form-data">

page 174
<p>${bookmark.title}</p> is correct in JSP too since version 2.0. Note that this is not exactly the same as <p><c:out</p>, as <c:out.../> escapes XML special characters, which is not done by ${bookmark.title} (neither in GSP nor in JSP 2.0). This is important to avoid JavaScript cross-site scripting (XSS).

page 177
<g:each should be <g:each because
- the ? is useless, as null.each {} is a valid Groovy expression... that does nothing
- bookmarks.tags? is not a valid Groovy expression; therefore this example only works due to a current implementation detail of the <g:each ...> tag.

page 182
In the Linking Tags paragraph
... you are always linking to the write (sic) place in a consistent manner?
should be
... you are always linking to the right place in a consistent manner?
page 197
<g:form should be <g:form because the search exists on all pages, even the tag controller pages; therefore the controller needs to be specified, else you get an error attempting to use the search form from the controller-rendered pages.
Also, <g:submit should be <g:submitButton

page 198
12 I like("name", params.q)
should be
12 ilike("name", "%${params.q}%")
also
6 if(params.q && !params.q?.indexOf('%')) {
should be
6 if(params.q && !params.q?.contains('%')) {
if the intention was to only perform the search when the user did not place a % in the search string. The way it is written now, the user is forced to start all search strings with a %, though there is no indication that this must be done, and although the code appends a '%' character in front of the input string anyway. The bookmark controller in the current version of the example app (based on chapter 11) avoids this check altogether.

page 200
The new GSP should be placed in grails-app/views/bookmark/_bookmark.gsp
That's bookmark singular.

page 207
Listing 8-52 reads:
class BookmarkTagLib {
    def repeat = { attrs, body ->
        attrs.times?.toInteger().times { n ->
            body(n)
        }
    }
}
But it should read:
class BookmarkTagLib {
    def repeat = { attrs, body ->
        attrs.times?.toInteger().times { n ->
            out << body(n)
        }
    }
}

page 209
To make the custom editInPlace tag work, Scriptaculous needs to be added in the main layout grails-app/views/layouts/main.gsp:
<g:javascript
Also, listing 8-55 reads:
9 body()
10 out << "</span>"
11 out << "<script type='text/javascript'>"
12 out << "new Ajax.InPlaceEditor('${id}', '"
13 createLink(attrs)
But on Grails 0.5, that won't output the results of body() or createLink(attrs).
It should read:
9 out << body()
10 out << "</span>"
11 out << "<script type='text/javascript'>"
12 out << "new Ajax.InPlaceEditor('${id}', '"
13 out << createLink(attrs)

page 210
Listing 8-56 contains the following line:
url="[action:'updateNotes', id:id:bookmark.id]"
That should be:
url="[action:'updateNotes', id:bookmark.id]"
Listing 8-57 reads:
def updateNotes = {
    update.call()
    render( Bookmark.get(params.id)?.notes )
}
This depends on the update closure, which does more than just updating the record - it also redirects output to the show action. As a result, you'll end up with the Show Bookmark page nested where the notes should be. An alternative is the following:
def updateNotes = {
    def bookmark = Bookmark.get( params.id )
    if(bookmark) {
        bookmark.properties = params
        if(bookmark.save())
            render( Bookmark.get(params.id)?.notes )
        else
            render( "Error saving bookmark" )
    }
}

page 211
Listing 8-58 reads:
class BookmarkController {
    ...
    void testEditInPlace() throws Exception {
    ...
    }
}
That should be:
class BookmarkTests extends GroovyTestCase {
    ...
    void testEditInPlace() throws Exception {
    ...
    }
}

page 221
To continue on with the bookmark example, you will need to make other domain changes. Indeed, you will need to revisit many GSP pages, etc. In addition to adding the new domain class of TagReference, the author has also removed from the working Bookmark class two fields: rating and type. The current Bookmark domain class should look like the following:
class Bookmark {
    static belongsTo = User
    static hasMany = [tags:TagReference] // a bookmark has 1 to many tag references...
    User user
    URL url
    String title
    String notes
    Date dateCreated
    static constraints = {
        url(url:true)
        title(blank:false)
        notes( nullable:true, maxLength: 1000 )
    }
    String toString() { return "$title - $url" }
}
Likewise, although not mentioned, the Tag domain element needs to change as well, since it clearly no longer belongs to the Bookmark domain.
class Tag {
    String name
    String toString() { name }
}
Notice the removal of the belongsTo declarative.

page 223
Due to a probable bug in Grails, the code to use the remoteField, and specifically to generate the update= clause, needs to be changed.
<g:remoteField
Should be:
<g:remoteField
Why, you ask? Currently (Grails 1.0.1 in any event) the phrase $ returns the string "null". This is one of those times. Later in the code when we actually declare the <div id="suggestions$

page 226
As of Grails 1.0, the code to add the tag and create a TagReference is incorrect.
... b.addTagReference( ...
should be:
... b.addToTags( ...

page 228
I believe the following code for suggestTag is much better than the original for various reasons. First, it works. The original code had a boundary condition that caused a 404 error to appear when the URL was empty. At least, that is the way I interpreted the code from the book, which had a missing } somewhere (from the trim() I think). Secondly, the code should be a little bit more efficient by not trying to find suggestions when there is no url. Also note the use of toURL and not toUrl, which doesn't exist. Finally, an important note: it's key to ALWAYS render something from this routine, else you get the dreaded 404 due to Grails trying to default the action to finding a GSP page of the same name as the method being invoked; in this case it was looking for "suggestTag.gsp".
def suggestTag = {
    def tags
    def bookmark = params.id ? Bookmark.get(params.id) : new Bookmark()
    if ( ! bookmark.url ) {
        if ( params.value?.trim() ) {
            if ( ! params.value?.startsWith("http://") ) {
                bookmark.url = "{params.value}".toURL()
            }
        }
    }
    // If we have a url -- try to get the bookmarks
    if ( bookmark.url ) {
        tags = getSuggestions( bookmark )
    }
    // Must always render SOMETHING -- else the default action is to find a gsp page
    render( template: 'suggest', model: [tags: tags, bookmark: bookmark] )
}

page 232
...
<div id="editButtons">
<g:submit
<g:submitToRemote
</div>
...
should be:
...
<div id="editButtons">
<g:submitButton
<g:submitToRemote
</div>
...
Note the use of <g:submitButton> instead of <g:submit>, which doesn't appear to exist anymore (if it ever did) as of 1.0. Also note: do not use tabs (\t) to pretty up your GSP within at least a <g:render>; it will cause it not to find / parse the tag properly. I.e., in <g:render template="blah" ... />, using a tab between the <g:render and the "template" will not work. It must be a space (or, one assumes, multiple spaces).

page 236
The best solution for a real-world situation wouldn't be to perform caching but to avoid involving the server: the url is already available on the client side (in the bookmark link) and therefore the preview should be realised on the client side only.

page 245
The location to get the HTTP client jar files has changed within Apache. It appears the new home is.

page 250
A tip section would be great to explain the trick for string conversion in:
bookmarks << new Bookmark(title:"${p.@description}", url:new URL("${p.@href}"))

page 254
The use of the bookmark template to render the results from del.icio.us isn't optimal since, in its current version, it produces Edit, Delete, and Preview actions, none of which are valid for remote links. Either the bookmark template should be modified to optionally not render those actions, or a new smaller and shorter template should be created.

page 266
Listing 10-20 includes:
Add Tag: <g:textField <g:submitButton
Should be:
Add Tag: <g:textField <g:submitButton

page 267
The create-job script and associated cool Quartz stuff was moved to a plug-in as of Grails 1.0 and needs to be installed in the project via the grails install-plugin quartz command.

page 276
assert sw.toString().indexOf('<a href="">Grails Download Page</a>')
should be
assert sw.toString().contains('<a href="">Grails Download Page</a>')
because String.indexOf(...) returns -1 when nothing is found and -1 is not false according to the Groovy Truth.

page 278
contains(...) instead of indexOf(...)
like on page 276

page 288
boolean equals(obj) {
    if (this == obj) return true
should be
boolean equals(obj) {
    if (this.is(obj)) return true
otherwise a StackOverflowError will occur, as == is the Groovy equivalent of equals() in Java.
http://docs.codehaus.org/display/GRAILS/Errata+and+addendum+in+The+Definitive+Guide+to+Grails
Today’s tip is a continuation of yesterday’s tip, which talked about how to hide or show hidden members and types in the Object Browser. So today’s tip is how to actually make something hidden or hidable. In the System.ComponentModel namespace, there’s the EditorBrowsableAttribute class. Going back to yesterday’s foo() and bar() methods, you’ll see in the example below how foo() doesn’t appear in IntelliSense, just as it doesn’t appear in the Object Browser. Of course, you can still complete the line above with foo(), and everything will compile successfully.

Technorati Tags: VS2005Tip, VS2008Tip
http://blogs.msdn.com/b/saraford/archive/2008/05/15/did-you-know-you-can-mark-methods-and-types-as-hidden-so-they-don-t-appear-in-intellisense-or-in-the-object-browser-216.aspx
Oh how can I thank you enough, you make my day :)

According to what you said I finally figured it out, it is the same as:

b = 1
def a():
    b = b  # no good :)

So in everyday programming I should avoid using the same name for different objects because they will step on each other, right?

On Nov 11, 6:18 pm, Gabriel Genellina <gagsl... at yahoo.com.ar> wrote:
> At Saturday 11/11/2006 02:35, Camellia wrote:
>
> > But sorry I'm so dumb I can't say I really understand,
> > what do I actually do when I define a function with its name "number"?
>
> Don't apologize, Python is a lot more dumb than you. It obeys very
> simple rules (a good thing, so we can clearly understand them, once
> we know them).
> Recalling your previous example:
>
> >> def main():
> >>     number = number()
>
> Python first scans the source code looking for assigned-to names.
> That is, names to the left of equal signs. Those names, plus the
> function formal parameters, make the list of "local names". Any other
> names referenced are assumed to be globals, that is, living outside
> your function. (That's not entirely true but enough for now.)
> Notice that "number" is a local name because it's assigned to; it
> doesn't matter whether a global name "number" exists or not.
> When the code is executed, the "number" to the right references the
> local name, which has not been assigned to yet - that's why you get
> an UnboundLocalError. "number" can't be a global name when used on
> the right hand side, and a local name when used on the left hand
> side. It's simple: it is local, or not, but not both.
>
> > why does a name of a function has something to do with a variable?
>
> Notice also that it does not matter *what* kind of object a name
> refers to: it may be a function, a class, an object instance,
> whatever. Inside a function, by example, a local name "a" may be
> bound at most to a single object at a time, it doesn't matter its
> type.
> A local name "a" can't refer to an integer and a class at the same time.
> The left hand side of an assign statement *always* uses local names,
> except when you declare a name to be global by using the "global" keyword.
> And notice also that I've never used the word variable.
>
> > Oh wait can I do this in Python?:
> > def a():
> >     def b()
>
> > so the b() will appear to be a local function which is the possible
> > cause of my little own error because the compiler will interpret the
> > number() as a local function but a global one?
>
> Yes, you can. Python has "lexically nested scopes". The simple
> local/global rule of above is a bit more complicated: when a name is
> not local, it is searched inside the enclosing functions, then in the
> containing module's global namespace, and last in the builtin names.
> And yes, b is local to a, so it effectively hides any external b that
> could be in outer scopes.
>
> --
> Gabriel Genellina
> Softlab SRL
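Gabriel's local/global rule is easy to try out yourself. Here is a minimal sketch (the name `counter` is just for illustration, not from the thread):

```python
counter = 0  # a module-level (global) name

def broken():
    # "counter" is assigned inside this function, so Python treats it as
    # local throughout the whole function body; reading it before the
    # assignment therefore raises UnboundLocalError.
    counter = counter + 1
    return counter

def fixed():
    global counter  # explicitly bind the name to the module-level one
    counter = counter + 1
    return counter

try:
    broken()
except UnboundLocalError as e:
    print("broken():", e)

print("fixed():", fixed())  # → 1
```

Note that `global` is only needed when you assign to the name; merely reading a module-level name inside a function works without it.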
https://mail.python.org/pipermail/python-list/2006-November/393597.html
Arrays in C++

An array of five elements has its elements numbered from 0 to 4, with 0 the first and 4 the last; in C++, the first element in an array is always numbered zero (not one), no matter the array's length.

Declaration of Arrays
An array declaration is very similar to a variable declaration. First a type is given for the elements of the array, then an identifier for the array and, within square brackets, the number of elements in the array. The number of elements must be an integer. For example, data on the average temperature over the year in Britain for each of the last 100 years could be stored in an array declared as follows:

float annual_temp[100];

This declaration will cause the compiler to allocate space for 100 consecutive float variables in memory. The number of elements in an array must be fixed at compile time. It is best to make the array size a constant; then, if required, the program can be changed to handle a different size of array by changing the value of the constant:

const int NE = 100;
float annual_temp[NE];

Then if more records come to light it is easy to amend the program to cope with more values by changing the value of NE. This works because the compiler knows the value of the constant NE at compile time and can allocate an appropriate amount of space for the array. It would not work if an ordinary variable was used for the size in the array declaration, since at compile time the compiler would not know a value for it.

Accessing Array Elements
Given the declaration above of a 100-element array, the compiler reserves space for 100 consecutive floating point values and accesses these values using an index/subscript that takes values from 0 to 99. The first element in an array in C++ always has the index 0, and if the array has n elements the last element will have the index n-1. An array element is accessed by writing the identifier of the array followed by the subscript in square brackets.
Thus to set the 15th element of the array above to 1.5 the following assignment is used:

annual_temp[14] = 1.5;

Note that since the first element is at index 0, the ith element is at index i-1. Hence in the above the 15th element has index 14. An array element can be used anywhere an identifier may be used. Here are some examples assuming the following declarations:

const int NE = 100, N = 50;
int i, j, count[N];
float annual_temp[NE];
float sum, av1, av2;

A value can be read into an array element directly, using cin:

cin >> count[i];

The element can be increased by 5:

count[i] = count[i] + 5;

or, using the shorthand form of the assignment:

count[i] += 5;

Initialisation of Arrays
An array can be initialised in a similar manner to the initialisation of simple variables in their declaration. In this case the initial values are given as a list enclosed in curly brackets. For example, initialising an array to hold the first few prime numbers could be written as follows:

int primes[] = {1, 2, 3, 5, 7, 11, 13};

Note that the array has not been given a size; the compiler will make it large enough to hold the number of elements in the list. In this case primes would be allocated space for seven elements. If the array is given a size then this size must be greater than or equal to the number of elements in the initialisation list. For example:

int primes[10] = {1, 2, 3, 5, 7};

would reserve space for a ten-element array but would only initialise the first five elements. Here is a sample C++ program to store and calculate the sum of 5 numbers entered by the user using arrays.
#include <iostream>
using namespace std;

int main() {
    int numbers[5], sum = 0;
    cout << "Enter 5 numbers: ";

    // Store the 5 numbers entered by the user in an array
    // and add each one to the running sum
    for (int i = 0; i < 5; ++i) {
        cin >> numbers[i];
        sum += numbers[i];
    }

    cout << "Sum = " << sum << endl;
    return 0;
}

Output

Enter 5 numbers: 3 4 5 4 2
Sum = 18

Things to remember when working with arrays in C++
Suppose you declared an array of 10 elements, say int testArray[10];. You can use the array members from testArray[0] to testArray[9]. If you try to access array elements outside of their bounds, say testArray[14], the compiler may not show any error. However, this may cause unexpected output (undefined behavior).
https://tekslate.com/arrays-in-c-plus-plus/
SOAP server response is not coming back to client --- please help

May 4, 2004

Dear All,

I am working with an I830M4-chipset Intel motherboard with an x86 processor, and I am running Windows CE .NET 4.2 on it. The board is acting as my Windows CE device, and I am using a Windows XP machine with Platform Builder 4.2 to build Windows CE .NET 4.2 and download it to the above device.

Now, to test the kit I have a client Perl script using SOAP::Lite that will be running on Windows XP, and a server Perl script on Windows CE 4.2 that will call a Perl test module. The code is written below.

**** server side, running on Windows CE 4.2 ****

# server ..running on wince 4.2
use SOAP::Transport::HTTP;

# don't want to die on 'Broken pipe'
#$SIG{PIPE} = 'IGNORE';

getMethods(); # find all my external and internal classes and methods

my $daemon = SOAP::Transport::HTTP::Daemon # create the agent server
    ->new(LocalPort => 2020, Reuse => 1, Listen => 10) # allow LISTENQUE connections
    ->dispatch_to('.', 'agent', 'test_suite1.pm'); # set method list (test_suite1.pm contains all test scripts)

Log("Contact agent at " . $daemon->url);
print (@classList);
$daemon->handle;

**** client side, running on Windows XP ****

use SOAP::Transport::HTTP;
use SOAP::Lite;

if (!$ARGV[0]) {
    print "run_tests_remotely.pl - sample for remotely running tests\n";
    print "  usage: run_tests_remotely.pl <hostname>\n";
    exit 1;
} else {
    $host = $ARGV[0];
}

$Remote = SOAP::Lite->uri('/test_suite1')
    ->proxy("", timeout => 600);

# --------------
# getPropList ()
# --------------
@outNVL = $Remote->getPropList()->paramsall; # getPropList() defined in test_suite1.pm
print "\n GetPropList result:\n @outNVL\n";
exit 1;

**** test_suite1.pm ****

package test_suite1;
return 1;

sub getPropList {
    $file = 'ParamNVL.txt';
    open(INFO,"< $file") or die "Failed to open file $file : $!
\n";
    @params = <INFO>; # -- Reads content of file into an array --
    close(INFO);
    return (@params); # return what ParamNVL.txt contains
} # getPropList()

****

Here the daemon is running on Windows CE 4.2, and while I am trying to execute the test from the client, no response is coming back to the client. The subroutine is called (getPropList in the server reads all values from ParamNVL.txt), which I have checked in the server, but the return values are not coming back to the client.

In the Platform Builder workspace I have added SOAP (client & server). The SOAP::Lite module was downloaded from soaplite.com and copied to Windows CE 4.2. I have built the Perl binary distribution on Windows CE 4.2.

Can anyone please tell me where I am wrong? Is there anything else I need to do? Can anyone please help me?

Regards,
Subrata
https://groups.yahoo.com/neo/groups/soaplite/conversations/messages/3556
sentence the word "ever" should be replaced with the word "every".
ASCII is understood by virtually ever computer in use.
should be:
ASCII is understood by virtually every computer in use.

In the second sentence the word "on" should be replaced with the word "of".
...processing instruction is entirely a matter on convention; the...
should say:
...processing instruction is entirely a matter of convention; the...

The first sentence, which begins with "We can break up the declaration in particular systactic components..." should read: "We can break up the declaration into particular syntactic components..."

In the last line of the <body> element replace the word "exhibition" with the word "expedition."

------------------------------------------------------------------------
# - ArticleHandler (add to handlers.py file)
class ArticleHandler(ContentHandler):
    """ A handler to deal with articles in XML """
------------------------------------------------------------------------
to this:
------------------------------------------------------------------------
from xml.sax.handler import ContentHandler

class ArticleHandler(ContentHandler):
    """A handler to deal with articles in XML"""
------------------------------------------------------------------------

*** p. 54 -- Remove the following line of output from the top of the page:
Start element: title

*** p55, Example 3-3, first line: Change "XML.sax.handler" to "xml.sax.handler" (all lower case).

The body part of the output from the art.py program is too long. It should end after "completed " because self.body=self.body[:78]+ "...". If you also used the spacing indentation in the article.xml example, the body part text should stop at "NASA has com"

*** No change required for reprint. But for confirmed errata page say: The exact end of the text will depend on the specific indentation used in the article.xml file, as well as whether the indentation uses space or tab characters.

*** p.
57-58 -- All double-underscores "__" in code and prose should have a hair-space between the underscores. Would it be possible to search all of Chapter 3 for this and fix all of them?

*** p60, Example 3-4. 9th line on page: "def __init__(..." should be indented two spaces.

*** p68, Example 3-6, replace the first 11 lines:
------------------------------------------------------------------------.
------------------------------------------------------------------------
with these 11 lines:
------------------------------------------------------------------------
import os.

*** p68, Example 3-6, last 5 lines on page should be replaced with:
------------------------------------------------------------------------
fullImageFile = os.path.join(dir, localname[2:]) + ".html"
print "Will create:", fullImageFile
fullImageHTML = ('<html><body><img src="%s%s"></body></html> '
                 % (localname[2:], ext))
------------------------------------------------------------------------
Please keep all lines at current indentation.

*** p75, Example 3-9, startElement() method: The "print" statement should be commented out; change:
print "* Processing:", s
to:
# print "* Processing:", s
Please maintain same indentation of line as we have now.

*** p. 98 -- fix the 2 "__" on this page by inserting hair spaces.

*** p. 100 -- fix the 2 "__" on this page by inserting hair spaces.
*** p153, command line example at bottom of page, first line should be:
C:>python c:python20xmldocdemoxmlprocxvcmd.py order.xml
In other words, change "$ " at beginning of line to "C:>"

*** p155, new_element_type() method, replace:
------------------------------------------------------------------------
def new_element_type(self, elem_name, elem_cont):
    print "New Element Type Declaration: ", elem_name, "Content Model: ", elem_cont
------------------------------------------------------------------------
with:
------------------------------------------------------------------------
def new_element_type(self, elem_name, elem_cont):
    print "New Element Type Declaration: ", elem_name
    print "Content Model: ", elem_cont
------------------------------------------------------------------------

*** p168, command line example in the middle of the page should be:
C:9780596001285c7>python val.py BillSummary.xml
In other words change "c6" to "c7"

*** p173, Example 7-14, 3/4 of the way down the page, the line:
flatfile = query.getvalue("flatfile", "")[0]
should be:
flatfile = query.getvalue("flatfile", "")
In other words, drop the "[0]" at the end. Please maintain the same indentation.

*** p174, Example 7-14, near the end, the line:
if veh.errors:
should read:
if hasattr(veh, "errors") and veh.errors:

{sample code - retrive.py} the line
print " of " + totalsize
should be
print " of ", totalsize

U of UML is the initial of "Unified".

*** p193, Example 8-4, halfway down the page, change:
req = HTTP("192.168.1.23") # change to your IP or localhost
to:
req = HTTP("127.0.0.1") # change to a different IP if needed

Duplicate word "shared".

p. 233 -- paragraph under "Getting Profiles" -- change last sentence to read: In a simple test case (in the file runcp.py), you could use the methods as follows:

p. 233 -- command line at bottom of the page -- change from:
G:9780596001285c9>python runcp.py
to:
G:9780596001285c10> python runcp.py

Extra "you": "...the order you in which you..."

p.
239 -- first code example -- the line "else:" is out-dented by one space; it should be moved right one space so that "else:" lines up with "if e is not None:"

*** p244, Example 10-2, 1/3 of the way down the page, change:
raise exception.Exceptions("SQL Exec failed.")
to:
raise Exception("SQL Exec failed.")
Please maintain current indentation of line.

p. 261 -- code example at the top of the page (cont. from previous page) -- "if(dom):" should be changed to simply "if dom:" (no change to alignment)

*** p. 266-267 -- fix the various "__" on these pages by inserting hair spaces.

Typo in print statement: "Reponse: " should be: "Response: "

resonse should be response

*** p273, Example 10-12, echoResponse() method, near bottom of page, change:
self.wfile.write("</xml></font><font face="arial,verdana,helvetica" " size="4">Body:<br><xmp>")
to:
self.wfile.write('</xml></font><font face="arial,verdana,helvetica"' ' size="4">Body:<br><xmp>')
Please maintain current indentation of lines.

*** p285, Example 10-15, halfway down the page, change:
id = CustProfElement.getAttributeNS('', "id")
to:
id = CustProfElement.getAttributeNS(None, "id")
Please maintain current indentation of line.

their should be there

*** p. 294 -- fix the various "__" on this page by inserting hair spaces.

© 2017, O’Reilly Media, Inc. (707) 827-7019 (800) 889-8969 All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners.
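Several of the entries above touch the same SAX pattern from the book's early chapters. For context, here is a minimal, runnable version of the corrected ArticleHandler import shown in the first code erratum; the element names and sample XML are my own illustration, not the book's listing:

```python
from xml.sax import parseString
from xml.sax.handler import ContentHandler

class ArticleHandler(ContentHandler):
    """A handler to deal with articles in XML (minimal sketch)."""
    def __init__(self):
        self.title = ""
        self.in_title = False

    def startElement(self, name, attrs):
        # Track when we are inside a <title> element
        if name == "title":
            self.in_title = True

    def characters(self, content):
        # characters() may be called several times per element,
        # so accumulate rather than assign
        if self.in_title:
            self.title += content

    def endElement(self, name):
        if name == "title":
            self.in_title = False

handler = ArticleHandler()
parseString(b"<article><title>NASA goes south</title></article>", handler)
print(handler.title)  # → NASA goes south
```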
http://www.oreilly.com/catalog/errata.csp?isbn=9780596001285
Sorting Photos

We. Yes, same camera—this could probably be an ad for a Canon A20 which has been abused, dropped by me and others and used by tens of kids that have never used a camera before and some of them that have never even used a flush toilet. In any case, the photos from the camera are spread over a few CF cards, CDs, two different computers and who knows where else. Sometimes the same photos have been saved multiple times. The photo number sequence has been reset when I changed CF cards. Or, put in other terms, I have a disaster to clean up.

I regularly get asked for a particular photo and spend a bunch of time looking for it. This time, I decided to take the time to write a program to help solve the problem. Yes, there are lots of programs to sort and thumbnail photos but when you have 10,000 or so images to start with, some sort of pre-sorting makes sense. Here is what I want that presort to do.

- Read a list of possible photo files.
- Build a database with creation date, some source information and an organized place to store each photo.
- Be able to tag each photo. In this case, any of a number of letters will work.
- Optionally add a description.
- Allow me to say "forget it" for obviously bad photos.
- Allow me to incrementally add to this collection.

Yeah, that's just a start but it makes sense considering the magnitude of the problem. Source information, for example, would be which computer or CD the photos came from. For most photos, the EXIF information from the camera will give me the actual date and time the photo was taken. But, if that isn't available (edited photo, for example), I will settle for the Linux filesystem timestamp.

I see getting this stuff organized as a four-step process.

- Find all the photos—this is a combination of physical work and then building lists of filenames. A find command can do the dirty work. For example,

    find /home/tux/Pix -iname "*.jpg" > file.list

  can do most of the work.
  Multiple lists can be built on a directory or directory tree basis.
- fotosort (the program I am talking about here) and my time can be used to process each list. It will allow me to skip a photo or add some tagging information and save a copy. All the "processed" photos will end up in one big tree with the database pointing to them.
- Toss duplicates. This will be the next programming project. With an MD5 digest and the file size (in bytes) in the database, it will be easy to find files that are duplicates.
- Create photo galleries.

Whether I elect to do the final step—create the galleries—manually, using one of the many existing programs or write something to do it myself, I am already heading in the right direction. All the information I need is in a database and the photos are all in one place.

The Code

Let's look at what I have created. It is far from a work of art as it has experienced the typical evolution sequence that most programs go through. But, it works. If I was going to use it regularly, I would invest a bit of time to clean it up and add error handling but it is pretty much a one-shot for me.

Class Rec is not much more than a comment that shows what data I will need. When used to create an instance, it is passed the source_info string. It will be common for all the records created in a single run of fotosort.

The main program opens the filename list passed as a command line argument and opens the database (or creates it and the file tree if it doesn't exist). It then loops through those filenames displaying them using GraphicsMagick's display function and checks to see if you want to save each one. If you say skip, it moves on to the next file. If you elect to save the file it gets the file timestamp, bytecount and MD5 digest, prompts for the flags and description, inserts the information into the database and copies the image file over to the new tree.
No matter whether you picked save or not, the image display is terminated by calling kill with the pid returned when it was started. All the nitty-gritty is handled by functions. Here is a quick look at the important ones.

tree_setup() creates 100 sub-directories named 00 through 99. As I have 10,000 files to play with, I certainly don't want to put them all in one directory. They will be stored in the 100 different directories selected by the last two digits of their filename. For example, pictures z_000021, z_000121, z_099921, ... will all be stored in sub-directory 21.

store_open() checks to see if the data directory is accessible. If so, it opens the database and returns the sqlite3 connection id. Otherwise, with your permission, it creates a new file tree (using tree_setup()) and initializes the database.

store_add() adds a record to the database. It returns the last row id (the auto-increment id field), which is also the numeric part of the filename. We use this to copy the file to the data tree.

file_ts() is, well, ugly. The clean part is that stat is used to get the byte count. The ugly part is getting the picture creation time from the EXIF info if it exists. I found references to multiple EXIF packages in Python but each seemed to have a problem. I elected to use the exiv2 program which is included in Kubuntu. I read the results until I find the "Image timestamp" line and hackishly convert it into a real Linux-ish timestamp (seconds since the epoch). It was a pain but this is the best choice for later data comparisons. If there is no EXIF information or the timestamp is missing, I settle for the last modify time in the filesystem. stat easily supplies this information.

img_save() creates a filename consisting of z_ and a six-digit number. That number is the database record id with leading zeros added. It then computes the actual destination path with the same mod 100 trick for the directory name as tree_setup() used.
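The naming and bucketing scheme just described can be sketched in a few lines; the tree base path here is a stand-in for the article's dataloc:

```python
import os

DATALOC = "/tmp/pixtree"  # stand-in for the article's data tree base

def dest_path(record_id):
    # z_ plus the six-digit, zero-padded database row id; the last two
    # digits of the id select one of the 100 sub-directories (00-99)
    # that tree_setup() creates.
    fname = "z_%06d" % record_id
    subdir = "%02d" % (record_id % 100)
    return os.path.join(DATALOC, subdir, fname)

print(dest_path(21))     # /tmp/pixtree/21/z_000021
print(dest_path(99921))  # /tmp/pixtree/21/z_099921
```

Since the ids are sequential, the mod-100 bucketing also spreads files evenly across the directories.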
img_hash() uses hashlib to create an MD5 digest for the file. No magic, other than that hashlib is new and replaces the older digest-creation routines.

That's the end of the story. As I said, the program evolved and it shows it. It's actually a good example of why programs should be written twice.

One serious (ok, irritating) problem remains. When the image display is opened, the focus switches to it. Thus, you need a mouse click to get back to the console window to communicate with the main loop. There probably is a right way to fix this but, for now, just setting the Focus stealing prevention level in the KDE Control Module (click on the icon in the task bar, select Configure Window Behavior and then Advanced) to High solves the problem. Unfortunately, that isn't the general policy I want. I am sure it is easy to fix under program control—I just haven't figured out how yet.

Now, I guess I need to actually spend the next few days using the program. I do need a bunch of photos for the Geek Ranch web site.

Ed Note: The code below will NOT work if you copy and paste it. Get the code here.
# fotosort.py
# Takes a list of photo files and lets you play with them
# What it did goes in a database including user supplied flags and description
# Phil Hughes 25 Dec 2007@0643

import sys
import os
import time
import shutil
import hashlib
from pysqlite2 import dbapi2 as sqlite

dataloc = "/home/fyl/PIXTREE"   # where to build the tree
connection = 0                  # will be connection ID

class Rec():                    # what we will put in the db
    def __init__(self, source_info):
        self.source_info = source_info      # where it came from
    # id integer primary key    # will be filename
    # flags text                # letters used for selection
    # md5 text                  # MD5 hex digest
    # size integer              # file byte count
    # description text          # caption information
    # source_path text          # path we got it from
    # timestamp integer         # creation timestamp (from image or fs date)

def tree_setup():
    os.mkdir(dataloc, 0755)     # tree base
    for x in range(100):        # build 100 sub-directories
        os.mkdir("%s/%02i" % (dataloc, x), 0755)

def show_pix(path):
    # runs display, returns display_pid so kill can work
    return os.spawnv(os.P_NOWAIT, "/usr/bin/gm",
                     ["display", "-geometry", "240x240", path])

def store_open():               # opens, returns connection or -1 on error
    # create data store if it doesn't exist
    if not os.access(dataloc, os.R_OK | os.W_OK | os.X_OK):
        print "can't open %s\n" % dataloc
        if raw_input("Create data structures (y/n): ") == 'y':
            tree_setup()
            # initialize the database
            con = sqlite.connect(dataloc + "/pix.db")
            cur = con.cursor()
            cur.execute('''create table pix (id integer primary key,
                flags text, md5 text, size integer, description text,
                source_info text, source_path text, timestamp integer)''')
        else:                   # the boss said forget it
            exit(1)
    else:
        con = sqlite.connect(dataloc + "/pix.db")
    if con > 0:
        return con
    else:
        return -1

def store_close(con):
    con.close()

def store_add(data):            # assigns next id, saves, returns id
    cur = connection.cursor()
    cur.execute('''insert into pix (flags, md5, size, description,
        source_info, source_path, timestamp)
        values (?, ?, ?, ?, ?, ?, ?)''',
        (data.flags, data.md5, data.size, data.description,
         data.source_info, data.source_path, data.timestamp))
    connection.commit()
    return cur.lastrowid

def openfl(path):               # open a file list, returns file object
    return open(path, 'r')

def getfn(rec):                 # gets the next filename
    return lfo.readline()

def form_fill(rec):             # pass record to fill in
    rec.flags = raw_input("Flags: ")
    rec.description = raw_input("Desc.: ")

def file_ts(path):              # returns creation timestamp, file size in bytes
    size = os.stat(path).st_size
    # look for EXIF info but, if not found, use the filesystem timestamp
    exiv2fo = os.popen("/usr/bin/exiv2 %s" % path, 'r')
    for line in exiv2fo:
        if line[0:15] == "Image timestamp":
            cl = line.index(':')
            ts = time.mktime((int(line[cl+2:cl+6]), int(line[cl+7:cl+9]),
                              int(line[cl+10:cl+12]), int(line[cl+13:cl+15]),
                              int(line[cl+16:cl+18]), int(line[cl+19:cl+21]),
                              0, 0, 0))
            break
    else:                       # use filesystem timestamp
        ts = os.stat(path).st_mtime
    exiv2fo.close()
    return (long(ts), size)

def img_save(image_file, id):   # copy image file to store
    # store location is built from id and some other fun stuff
    fname = "z_%06d" % int(id)
    dest = dataloc + "/" + "%02d" % (int(id) % 100) + '/' + fname
    # print dest
    shutil.copyfile(image_file, dest)
    return dest

def img_hash(image_file):       # returns MD5 hash for a file
    fo = open(image_file, 'r')
    m = hashlib.md5()
    stuff = fo.read(8192)
    while len(stuff) > 0:
        m.update(stuff)
        stuff = fo.read(8192)
    fo.close()
    return m.hexdigest()

###
### This is where the action starts
###

if len(sys.argv) != 2:
    print "usage %s names_file\n" % sys.argv[0]
    exit(1)

lfo = openfl(sys.argv[1])       # filename list file
connection = store_open()
if connection < 0:
    print "%s: unable to initialize database" % sys.argv[0]
    exit(1)

# let's get the string to use for source info
rec = Rec(raw_input("Enter source info: "))

for f in lfo:
    f = f.strip()               # toss possible newline
    display_pid = show_pix(f)
    disp = raw_input("s[ave]/d[iscard]/q[uit]: ")
    if disp != 'q' and disp != 'd':
        rec.timestamp, rec.size = file_ts(f)
        rec.source_path = f
        rec.md5 = img_hash(f)   # hash
        form_fill(rec)          # get user input
        id = store_add(rec)     # insert in db
        savedloc = img_save(f, id)      # copy the image
        print "Photo saved as %s\n" % savedloc
    os.system("kill %s" % display_pid)
    if disp == 'q':
        break

Doesn't seem to work - get an error

I copied the script, saved it to a text file, and ran python fotosort.py. I'm getting the following error:

-------------------------------------------------------------
File "fotosort.py", line 17
    class Rec():   # what we will put in the db
    ^
SyntaxError: invalid syntax
-------------------------------------------------------------

What do I need to get the script to run?

Indenting

The most likely problem is indenting. I should have put the code in a separate file that could be downloaded. I have asked our Webmistress to do that. Python blocks are indicated by indentation levels. While the code may appear ok, spaces and tabs that make things look lined up are not necessarily treated the same.

Phil Hughes

Code

There is a link to the code in the article above. You can also get it right here.

Katherine Druckman is webmistress at LinuxJournal.com. You might find her on Twitter or at the Southwest Drupal Summit

have you tried kphotoalbum or f-spot or others?

does your script do anything that one of the existing photo-management applications can't do as well? i am quite happy managing my 12000 photos in kphotoalbum. it spreads them out on a scalable timeline and allows you to tag photos one by one or in bulk. it does not bother about the filenames or even the path as it tracks photos by checksum, so even if you move/rename them later, tags won't be lost. i started using kphotoalbum after i had about 10000 photos and just walked through the timeline spending a couple of hours each day tagging batches of photos for a few months. before using kphotoalbum i just sorted my photos by date, putting them in a year/month/day sort of path.
i still do that as kphotoalbum does not care. i don't import the original photos into kphotoalbum but only a smaller version (800x600) of them to make handling easier. (kphotoalbum does have a feature to handle offline storage though) as for backups, i don't delete a photo from the cf-card until it is copied to at least two locations: one on my notebook and one on an external usb disk. that external disk contains disk-images in dvd-sized files. photos get placed into those disk images, and disk-images are written to a dvd as they fill up. once a dvd is written, i delete the directory that corresponds to that dvd from my notebook, so that i end up with one copy of the photo on dvd and one on the external usb disk. i also keep a second set of dvd-disks at my grandmother's place. greetings, eMBee.

ooops... that should have been a reply to the main article, not to the first comment. greetings, eMBee.
http://www.linuxjournal.com/node/1005967?quicktabs_1=0
The challenge was: "We've found the source to the Arstotzka spies rendevous server, we must find out their new vault key." You are also provided with a slurp.py python script and the ip:port.

Trivia: Arstotzka is the setting of an indie game called "Papers, Please". The game has a very unusual gameplay: you play a border-control guard who checks the passports of persons wanting to enter Arstotzka. The game is set in a fictional Eastern Bloc state, but the setting could also portray the modern-day USA.

slurp.py is the server listening on 128.238.66.222:7788. The goal is to pass its authentication scheme to get the flag. The authentication protocol is as follows:

Phase 1, sha1 challenge

Server -> Client: base64_encoded_24bit_urandom

Client -> Server: The client has to find a value so that the hash $$sha1(base64\_encoded\_24bit\_urandom\ +\ client\_choosen\_chars)$$ ends with "\xFF\xFF\xFF". If the client is able to generate such a hash, the server continues to phase 2. We solved this phase with a simple brute force of the sha1 hash. After a few seconds you can find a sha1 hash which satisfies the server-supplied challenge.

Phase 2, authentication

Remark 1: The values sent by the client are transferred in a way that would potentially allow negative values. The formatting uses an unsigned short as packed length, concatenated with a string of the value in hexadecimal notation. However, we couldn't find any use for this during the exploitation.

Remark 2: The random number generation used by the server is not perfect. It generates a 2048-bit number from 320 bits of urandom output; however, it is hashed in such a way that a portion of the resulting number will contain zero bits. We couldn't find a way to use this during the exploitation.
def cryptrand(self, n=2048):
    p1 = self.hashToInt(os.urandom(40)) << 1600
    p1 += self.hashToInt(p1) << 1000
    p1 += self.hashToInt(p1) << 500
    p1 += self.hashToInt(p1)
    bitmask = ((2 << (n + 1)) - 1)
    p1 = (p1 & bitmask)
    return (p1 % self.N)

Server -> Client: "Welcome to Arstotzka's check in server, please provide the agent number"

Client -> Server: The client chooses the values of index and cEphemeral and sends them to the server. Index has to be at least 2, but not greater than N/2 (N is a constant, known prime used as the modulus in all operations). The only restriction on cEphemeral is that cEphemeral % N != 0.
Then the server calculates:

$$salt = sha512(index)$$

$$storedKey = index^{sha512(salt, password)}\ mod\ N$$

$$sEphemeralPriv = cryptrand()\ \ \#a\ 2048\ bit\ value,\ random\ in\ every\ connection$$

$$sEphemeral = index^{sEphemeralPriv} + 3 * storedKey\ mod\ N$$

$$sEphemeral = index^{sEphemeralPriv} + 3 * index^{sha512(salt + password)}\ mod\ N$$

$$\Longleftrightarrow$$

$$index^{sEphemeralPriv} = sEphemeral - 3 * index^{sha512(salt + password)}\ mod\ N$$

Note that sEphemeral and salt are now sent to the client.

$$slush = sha512(cEphemeral, sEphemeral)$$

$$agreedKey = sha512( (cEphemeral * storedKey^{slush})^{sEphemeralPriv}\ mod\ N )$$

$$gennedKey = sha512(sha512(N) \oplus sha512(index), sha512(index), salt, cEphemeral, sEphemeral, agreedKey)$$

The only unknown value is agreedKey. Then the comparison against the client-supplied gennedKey is done. So, the client needs to find a known agreedKey.

First attempt

Our first attempt was to calculate the agreedKey, which is actually possible without knowing the randomly generated sEphemeralPriv.
$$agreedKey = sha512( (cEphemeral * storedKey^{slush})^{sEphemeralPriv}\ mod\ N )$$

$$agreedKey = sha512( agreedKey\_withouthash )$$

$$agreedKey\_withouthash = (cEphemeral * storedKey^{slush})^{sEphemeralPriv}\ mod\ N$$

$$agreedKey\_withouthash = (cEphemeral * index^{sha512(salt, password) * slush})^{sEphemeralPriv}\ mod\ N$$

If we choose

$$cEphemeral = index^{xxx}$$

we get

$$agreedKey\_withouthash = (index^{xxx} * index^{sha512(salt, password) * slush})^{sEphemeralPriv}\ mod\ N$$

$$agreedKey\_withouthash = (index^{xxx + sha512(salt, password) * slush})^{sEphemeralPriv}\ mod\ N$$

$$agreedKey\_withouthash = (index^{sEphemeralPriv})^{xxx + sha512(salt, password) * slush}\ mod\ N$$

With the following value from above:

$$index^{sEphemeralPriv} = sEphemeral - 3 * index^{sha512(salt + password)}\ mod\ N$$

we get the following:

$$agreedKey\_withouthash = (sEphemeral - 3 * index^{sha512(salt + password)})^{xxx + sha512(salt, password) * slush}\ mod\ N$$

So, we had a formula for calculating agreedKey and knew all variables of the formula. By calculating agreedKey we were able to calculate gennedKey and send it to the server. This all worked well in our test environment. However, it did not work on the live flag server. Our guess was that the password is different on the live flag server, and thus we were not able to calculate the correct agreedKey.

Second attempt

Our second attempt, which was successful in the end, was to carefully choose the index. Let's look again at this equation:

$$agreedKey\_withouthash = (cEphemeral * index^{sha512(salt, password) * slush})^{sEphemeralPriv}\ mod\ N$$

We can set cEphemeral to 1 (the value is from the client), which simplifies the formula to:

$$agreedKey\_withouthash = index^{sha512(salt, password) * slush * sEphemeralPriv}\ mod\ N$$

Because the exponent changes randomly on each connection, we can assume that it is divisible by some small number, e.g. 3 or 4 (if not, we can retry until it is).
So, let's now find an index such that:

$$index^3 = 1\ (mod\ N)$$

(index is a cube root of 1 modulo the prime N)

If we manage to find such an index and (sha512(salt, password) * slush * sEphemeralPriv) is divisible by 3, agreedKey_withouthash will equal 1. How to find it? One line in Mathematica:

Reduce[x^3 == 1, x, Modulus->5924486056224…(the value of N)]

Result: There are 3 solutions, but only the second one satisfies the constraints on the index value. Now we have to set the index to this number, calculate agreedKey = SHA512(1) and send the data, until it succeeds ;)

Remarks: Instead of 3 you could use 4 and then find the root using the formula: $$a^{\frac{N-1}{4}}$$ (a can be, for example, 2). You couldn't use 2 instead of 3, because the only square roots of 1 are 1 and -1 (refused by the server's constraints).
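The root-finding step can also be done without Mathematica: for a prime N with 3 dividing N-1, pow(a, (N-1)//3, N) is a cube root of 1 mod N, and for most bases a it is a nontrivial one. The tiny prime below is just a stand-in for the server's 2048-bit N:

```python
# Find a nontrivial cube root of 1 modulo a prime N with 3 | N - 1.
# N = 7 is only for illustration; the challenge used the server's 2048-bit N.
def cube_root_of_unity(N):
    assert (N - 1) % 3 == 0
    for a in range(2, N):
        r = pow(a, (N - 1) // 3, N)   # r**3 == a**(N-1) == 1 (mod N) by Fermat
        if r != 1:                    # nontrivial root found
            return r
    return 1

r = cube_root_of_unity(7)
print(r, pow(r, 3, 7))   # r**3 % N == 1 while r != 1
```

By Fermat's little theorem the result always cubes to 1, so the loop only has to skip the bases whose (N-1)/3 power is the trivial root.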
https://blog.dragonsector.pl/2013/09/csaw-ctf-quals-2013-slurp-crypto-500.html
Loads a (directed, undirected, or multi) graph from a text file, InFNm, with 1 node and all its edges in a single line. InFNm is a whitespace-separated file of several columns: <source node name> <destination node name 1> <destination node name 2> ... The first column of each line contains a source node name followed by the names of the destination nodes. For example, 'A B C' encodes edges A-->B and A-->C. Note that this format allows for saving isolated nodes.

Parameters:

- Class of output graph – one of PNGraph, PNEANet, or PUNGraph.
- Filename with the description of the graph nodes and edges.
- Hash table that stores the mapping from node names to node ids (H in the example below). Note that this is a special hash table for holding strings and has a different interface than the standard SNAP hash table. Use GetDat(name) to obtain a mapping from a node name to its node id.

Return value: A Snap.py graph of the specified type GraphType.

The following example shows how to load each of the graph types from a file named test.dat:

import snap

H = snap.TStrIntSH()
Graph = snap.LoadConnListStr(snap.PNGraph, "test.dat", H)

H = snap.TStrIntSH()
UGraph = snap.LoadConnListStr(snap.PUNGraph, "test.dat", H)

H = snap.TStrIntSH()
Network = snap.LoadConnListStr(snap.PNEANet, "test.dat", H)

A sample of test.dat is:

A B C
B C D
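For reference, the connection-list format itself is easy to parse by hand. The sketch below mimics what LoadConnListStr and the name-to-id hash provide, without depending on the snap module; assigning ids in order of first appearance is my assumption about the hash's behavior, not documented here:

```python
def load_conn_list_str(lines):
    # First token on a line is the source node name; the remaining tokens
    # are destination node names. Ids are assigned in order of first
    # appearance (assumed), playing the role of the TStrIntSH hash.
    name_to_id = {}
    edges = []
    def node_id(name):
        return name_to_id.setdefault(name, len(name_to_id))
    for line in lines:
        tokens = line.split()
        if not tokens:
            continue
        src = node_id(tokens[0])     # a line with no destinations = isolated node
        for dst_name in tokens[1:]:
            edges.append((src, node_id(dst_name)))
    return name_to_id, edges

names, edges = load_conn_list_str(["A B C", "B C D"])
print(names)   # {'A': 0, 'B': 1, 'C': 2, 'D': 3}
print(edges)   # [(0, 1), (0, 2), (1, 2), (1, 3)]
```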
http://snap.stanford.edu/snappy/doc/reference/LoadConnListStr.html
#include <listmode.h>

The base class for list modes, should be inherited.

- Items stored in the channel's list.
- Constructor.
- Tell a user that a list contains no elements. Sends 'eolnum' numeric with text 'eolstr', unless overridden (see constructor). Reimplemented from ModeHandler.
- Display the list for this mode. See mode.h. Reimplemented from ModeHandler.
- Perform a rehash of this mode's configuration data.
- Get limit of this mode on a channel.
- Retrieves the list of all modes set on the given channel.
- Gets the lower list limit for this listmode. Determines whether some channels have longer lists than others.
- Handle the list mode. See mode.h. Reimplemented from ModeHandler.
- In the event that the mode should be given a parameter, and no parameter was provided, this method is called. This allows you to give special information to the user, or handle this any way you like. Reimplemented from ModeHandler.
- Remove all instances of the mode from a channel. Populates the given modestack with modes that remove every instance of this mode from the channel. See mode.h for more details. Reimplemented from ModeHandler.
- Tell the user an item is already on the list. Overridden by implementing module.
- Tell the user the list is too long. Overridden by implementing module.
- Tell the user that the parameter is not in the list. Overridden by implementing module.
- Validate parameters. Overridden by implementing module.
- Limits on a per-channel basis read from the <listmode> config tag.
- Numeric to indicate end of list.
- String to send for end of list.
- Storage key.
- Numeric to use when outputting the list.
- Automatically tidy up entries.
https://www.inspircd.org/api/3.0/class_list_mode_base.html
this is my first time ever on the site posting, and as such, if you are reading this, I need you to understand that I am really, really new to C++. I've only ever done Flash ActionScript, HTML and JavaScript in the past, but nothing in great detail, as they stopped as my school curriculum stopped teaching it. Hence I'm using this video series by DevHQLessons on YouTube to help me out with learning the basics. I just watched the video on Class Members, and I think I get the concept, and in an attempt to put my knowledge to a bit of fun I tried to expand on what he did in the video (if you're interested). Anywho, this is code for a Win32 Console Application, and the idea of my program at this state is to ask the user what they would like to order from a takeaway restaurant, by answering y/n to the prompts and then, if needed, specifying the quantity of each product, then listing them down at the end. My first idea for specifying the quantity was to use an integer, however this posed me some problems. The output at the end of the questions only ever lists how much chicken is ordered, nothing else. Additionally, the compiler freaks out if I don't initialize variables "chickens", or if I answer no to all the prompts. I hope I make sense, and if I don't, I'm incredibly sorry. Here is the code, if relevant:

#include <iostream>
using namespace std;

class takeAway{
public:
    void wantsChicken(){
        chicken = true;
    }
    void wantsNoodles(){
        noodles = true;
    }
    void wantsBurger(){
        burger = true;
    }
    bool chicken;
    bool noodles;
    bool burger;
};

int main(){
    char yn1;
    int qty1;
    char yn2;
    int qty2;
    char yn3;
    int qty3;

    takeAway Customer1;

    cout << "You want chicken? Y/N" << endl;
    cin >> yn1;
    if(yn1 == 'y' || yn1 == 'Y')
    {
        Customer1.wantsChicken();
        cout << "How many chickens?" << endl;
        cin >> qty1;
    }

    cout << "You want noodles? Y/N" << endl;
    cin >> yn1;
    if(yn1 == 'y' || yn1 == 'Y')
    {
        Customer1.wantsChicken();
        cout << "How many noodles?" << endl;
        cin >> qty2;
    }

    cout << "You want burgers? Y/N" << endl;
    cin >> yn1;
    if(yn1 == 'y' || yn1 == 'Y')
    {
        Customer1.wantsChicken();
        cout << "How many burgers?" << endl;
        cin >> qty3;
    }

    cout << "You have ordered:" << endl;
    if(Customer1.chicken == true){
        cout << "- " << qty1 << "x Chicken(s)" << endl;
    }
    if(Customer1.noodles == true){
        cout << "- " << qty1 << "x Noodle(s)" << endl;
    }
    if(Customer1.burger == true){
        cout << "- " << qty1 << "x Burger(s)" << endl;
    }

    system("PAUSE");
    return 0;
}

Hopefully this is all covered, and if you could help, that would be much appreciated. I'm really hoping I can get some skills together with this language to the point of one day learning how to make a basic game.
http://www.dreamincode.net/forums/topic/309257-misunderstanding-with-how-to-use-an-integer-w-a-very-simple-program/
Hey guys, now I have an assignment to create code that will ask for a number and then calculate the factorial of it with a for loop in a separate function and return the value to main. Here is my code; I think it looks pretty good, but when I run it, it just sends me the black box and it stays blank with the blinking _, so I'm not sure if it is trying to run it or something else is going on.

#include <iostream>
using namespace std;

double factorial(double num); //factorial prototype

void main ()
{
    int num;
    int answer;

    answer = factorial(num);

    cout << "Please enter a number: ";
    cin >> num;

    if (num < 0)
        cout << "Please enter a positive integer.\n";
    else
        cout << "Factorial of " << num << " is: " << answer;

    cout << "\n\n";
    system ("pause");
}

//compute factorial
double factorial (double num)
{
    double answer;
    int n;

    for (n = 1; n <= num; n++)
    {
        num *= n;
    }
    return answer;
}
https://www.daniweb.com/programming/software-development/threads/390061/factorial-with-a-for-loop
This post is a contribution from Adam Burns, an engineer with the SharePoint Developer Support team.

This post repackages examples that are already out there on MSDN, but brings two examples together and provides some implementation details that may not be obvious in the existing articles. I also tried to leave out confusing details that didn't bear directly on the subject of using custom claims to do Pre-Trimming of search results. The sample code includes two projects: the Custom Indexing Connector (XmlFileConnector), which sends claims to BCS, and the SearchPreTrimmer project, which actually implements the trimming. In addition you'll find:

We'll use the XmlFileConnector. One of the good things about this kind of connector is that you can use it for a database. For instance, it can be a product catalog, a navigation source, etc. To install the connector, do the steps below (all paths are examples and can easily be substituted for applicable paths):

1. Build the included XmlFileConnector project.
2. Open the "Developer Command Prompt for Visual Studio 2012" and type:
3. Merge the registry entries for the protocol handler by double-clicking on the registry file located at "C:\Users\administrator.CONTOSO\Documents\Visual Studio 2012\Projects\SearchSecuirtyTrimmerExample\XmlFileConnector\xmldoc.reg". That's the protocol handler part of all this. Unless SharePoint knows about the protocol you are registering, the connector won't work. Registering the custom protocol is a requirement for custom indexing connectors in SharePoint.
4. Open the "SharePoint 2013 Management Shell" and type:
5. You can check the success like this:

Get-SPEnterpriseSearchCrawlCustomConnector -SearchApplication $searchapp | ft -Wrap

You should get:

6. Stop and start the Search Service.
7. Create a Crawled Property Category.
When the custom indexing connector crawls content, the properties discovered and indexed will have to be assigned to a Crawled Property Category. You will associate the category with the Connector. Later on, if you want to have Navigation by Search or Content by Search, you will want to create a managed property and map it to one of the custom properties in this Crawled Property Category. This will allow you to present content using Managed Navigation and the Content Search Web Part (note that the Content Search Web Part is not present in SharePoint Online yet, but it is coming in the future). To create a new crawled property category, open the SharePoint 2013 Management Shell and type and run the following commands:

a. $searchapp = Get-SPEnterpriseSearchServiceApplication -Identity "<YourSearchServiceName>"
b. New-SPEnterpriseSearchMetadataCategory -Name "XmlFileConnector" -Propset "BCC9619B-BFBD-4BD6-8E51-466F9241A27A" -searchApplication $searchapp

NOTE: The Propset GUID, BCC9619B-BFBD-4BD6-8E51-466F9241A27A, is hardcoded in the file XmlDocumentNamingContainer.cs and should not be changed.

c. To specify that unknown properties in the newly created crawled property category should be discovered during crawl, type and run the following:

$c = Get-SPEnterpriseSearchMetadataCategory -SearchApplication $searchapp -Identity "<ConnectorName>"
$c.DiscoverNewProperties = $true
$c.Update()

8. Place a copy of the sample data from C:\Users\administrator.CONTOSO\Documents\Visual Studio 2012\Projects\SearchSecuirtyTrimmerExample\Product.xml in some other directory such as C:\XMLCATALOG. Make sure the Search Service account has read permissions to this folder.

9. Create a Content Source for your Custom Connector:

a.
On the home page of the SharePoint Central Administration website, in the Application Management section, choose Manage service applications.
b. On the Manage Service Applications page, choose Search service application.
c. On the Search Service Administration page, in the Crawling section, choose Content Sources.
d. On the Manage Content Sources page, choose New Content Source.
e. On the Add Content Source page, in the Name section, in the Name box, type a name for the new content source, for example XML Connector.
f. In the Content Source Type section, select Custom repository. In the Type of Repository section, select xmldoc.
g. In the Start Address section, in the Type start addresses below (one per line) box, type the address from where the crawler should begin crawling the XML content. The start address syntax is different depending on where the XML content is located. Following the example so far, you would put this value:

xmldoc://localhost/C$/xmlcatalog/#x=Product:ID;;titleelm=Title;;urlelm=Url#

The syntax is different if you have the data source directory on the local machine versus on a network share. If the data source directory is local, use the following syntax:

xmldoc://localhost/C$/<contentfolder>/#x=doc:id;;urielm=url;;titleelm=title#

If the data source directory is on a network drive, use the following syntax:

xmldoc://<SharedNetworkPath>/#x=doc:id;;urielm=url;;titleelm=title#

· "xmldoc" is the name of the protocol of this connector as registered in the Registry when installing this connector. See step 3 above.
· "//<localhost>/c$/<contentfolder>/" or "//<ShareName>/<contentfolder>/" is the full path to the directory holding our xml files that are to be crawled.
· "#x=:doc:url;;urielm=url;;titleelm=title#" is a special way to encode config parameters used by the connector:
  o "x=:doc:url" is the path of the element in the xml document that specifies the id of a single item
  o "urielm=url" is the name of the element in the xml document that will be set as the uri of the crawled item
  o "titleelm=title" is the name of the element in the xml document that will be set as the title of the crawled item
· Note that the separator is ";;" and that the set of properties is delineated with "#" at the beginning and "#" at the end.

If you've looked at the BCS model schema before, you know that in the Methods collection of the Entity element you usually specify a Finder and a "SpecificFinder" stereotyped method. In this example the Model is not used for External Lists or a Business Data List Web Part. It does not create or update items. Therefore, it specifies an AssociationNavigator method instead, which is easier to implement and is good in this example because it lets us focus more on the Security Trimming part. Also, this connector doesn't provide for much functionality at the folder level, so we'll concentrate primarily on the XmlDocument Entity and ignore the Folder entity.

In both the AssociationNavigator (or Finder) method definition and the SpecificFinder method definition we must specify a security descriptor, but in this case we need to specify two other Type Descriptors as well.
<TypeDescriptor Name="UsesPluggableAuth" TypeName="System.Boolean" />
<TypeDescriptor Name="SecurityDescriptor" TypeName="System.Byte[]" IsCollection="true">
  <TypeDescriptors>
    <TypeDescriptor Name="Item" TypeName="System.Byte" />
  </TypeDescriptors>
</TypeDescriptor>
<TypeDescriptor Name="docaclmeta" TypeName="System.String, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />

In the Method Instance definition we tell BCS what the names of these fields will be:

<Property Name="UsesPluggableAuthentication" Type="System.String">UsesPluggableAuth</Property>
<Property Name="WindowsSecurityDescriptorField" Type="System.String">SecurityDescriptor</Property>
<Property Name="DocaclmetaField" Type="System.String">docaclmeta</Property>

· UsesPluggableAuth, a boolean field type. If the value of this field is true, then we're telling BCS to use custom security claims instead of the Windows security descriptors.
· SecurityDescriptor, a byte array for the actual encoded claims data.
· docaclmeta, an optional string field, which will only be displayed in the search results if populated. This field is not queryable in the index.

In the connector code the XmlFileLoader utility class populates these values into the document that is returned to the index. In some cases we use logic to determine the value, as here:

UsesPluggableAuth = security != null

The XmlFileLoader class also encodes the claim. Claims are encoded as a binary byte stream. The data type in this example is always string, but this is not a requirement of the SharePoint backend. The claims are encoded in the AddClaimAcl method according to these rules:

· The first byte signals an allow or deny claim
· The second byte is always 1 to indicate that this is a non-NT security ACL (i.e. it is a claim ACL type)
· The next four bytes give the size of the following claim value array
· The claim value string follows as a Unicode byte array
· The next four bytes following the claim value array give the length of the claim type
· The claim type string follows as a Unicode byte array
· The next four bytes following the claim type array give the length of the claim data type
· The claim data type string follows as a Unicode byte array
· The next four bytes following the claim data type array give the length of the claim original issuer
· The claim issuer string finally follows as a Unicode byte array

Naturally, we have to include the three security-related fields in the Entity that is returned by the connector:

public class Document
{
    private DateTime lastModifiedTime = DateTime.Now;

    public string Title { get; set; }
    public string DocumentID { get; set; }
    public string Url { get; set; }

    public DateTime LastModifiedTime
    {
        get { return this.lastModifiedTime; }
        set { this.lastModifiedTime = value; }
    }

    public DocumentProperty[] DocumentProperties { get; set; }

    // Security Begin
    public Byte[] SecurityDescriptor { get; set; }
    public Boolean UsesPluggableAuth { get; set; }
    public string docaclmeta { get; set; }
    // Security End
}

In the example all the documents are returned from one XML file, but here is one Product from that file:

<Product>
  <ID>1</ID>
  <Url></Url>
  <Title>Adventure Works Laptop15.4W M1548 White</Title>
  <Item_x0020_Number>1010101</Item_x0020_Number>
  <Group_x0020_Number>10101</Group_x0020_Number>
  <ItemCategoryNumber>101</ItemCategoryNumber>
  <ItemCategoryText>Laptops</ItemCategoryText>
  <About>Laptop with 640 GB hard drive and 4 GB RAM. 15.4-inch widescreen TFT LED display. 3 high-speed USB ports. Built-in webcam and DVD/CD-RW drive.
  </About>
  <UnitPrice>$758,00</UnitPrice>
  <Brand>Adventure Works</Brand>
  <Color>White</Color>
  <Weight>3.2</Weight>
  <ScreenSize>15.4</ScreenSize>
  <Memory>1000</Memory>
  <HardDrive>160</HardDrive>
  <Campaign>0</Campaign>
  <OnSale>1</OnSale>
  <Discount>-0.2</Discount>
  <Language_x0020_Tag>en-US</Language_x0020_Tag>
  <!-- Security Begin -->
  <claimtype></claimtype>
  <claimvalue>user1</claimvalue>
  <claimissuer>customtrimmer</claimissuer>
  <claimaclmeta>access</claimaclmeta>
  <!-- Security End -->
</Product>

Note that the last 4 elements specify the claim value that will be evaluated against the SecurityDescriptor member that is returned in the Entity.

The new pre-trimmer type is where the trimmer logic is invoked before query evaluation. The search server backend rewrites the query, adding security information before the lookup in the search index. Because the data source contains information about the claims that are accepted, the lookup can return only the appropriate results that satisfy the claim demand.

Benefits of Pre-Trimming

Pre-trimming performs better, so it is the recommended method. Besides that, the new framework makes it easy to specify simple claims (in the form of strings) instead of having to encode ACLs as byte arrays in the data source (something which can be quite time consuming and difficult). To create a custom security pre-trimmer we must implement the two methods of ISecurityTrimmerPre:

IEnumerable<Tuple<Microsoft.IdentityModel.Claims.Claim, bool>> AddAccess(IDictionary<string, object> sessionProperties, IIdentity userIdentity);

void Initialize(NameValueCollection staticProperties, SearchServiceApplication searchApplication);

In the beginning of the SearchPreTrimmer class I set the path to the datafile.txt membership file to @"c:\membership\datafile.txt". You'll need to either change that or put your membership file there.
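The byte layout listed earlier for AddClaimAcl can be sketched in code. This is an illustrative re-implementation, not the article's actual C# source (it is shown in Java here), and it assumes little-endian length prefixes and UTF-16LE for the "Unicode byte array" parts, which are typical Windows conventions but are not stated explicitly in the article:

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.charset.StandardCharsets;

public class ClaimAclEncoder {

    // Encodes one claim ACL entry following the byte layout described above:
    // [allow/deny][1 = claim ACL] then four length-prefixed "Unicode" strings
    // for claim value, claim type, claim data type and claim issuer.
    public static byte[] encode(boolean allow, String value, String type,
                                String dataType, String issuer) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(allow ? 1 : 0); // first byte: allow (1) or deny (0)
        out.write(1);             // second byte: always 1 => non-NT (claim) ACL
        for (String s : new String[] { value, type, dataType, issuer }) {
            byte[] bytes = s.getBytes(StandardCharsets.UTF_16LE); // assumed encoding
            byte[] prefix = ByteBuffer.allocate(4)
                    .order(ByteOrder.LITTLE_ENDIAN)               // assumed byte order
                    .putInt(bytes.length).array();
            out.write(prefix, 0, prefix.length); // four-byte length prefix
            out.write(bytes, 0, bytes.length);   // then the string bytes
        }
        return out.toByteArray();
    }
}
```

With the sample Product's values (claim value "user1", empty claim type, data type "string", issuer "customtrimmer"), this yields a 66-byte stream: 2 header bytes plus four length-prefixed strings.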
The included example is basically blank, but here is what mine looks like:

domain\username:user1;user3;user2;
contoso\aaronp:user1;
contoso\adamb:user2;
contoso\adambu:user1;user2;user3
contoso\alanb:user3;

The identity of the user is the first thing on every line, followed by a colon. After that is a list of claims, separated by semicolons. I left the user1, user2, user3 claims, but you could make it "editor", "approver", "majordomo" or whatever group claims make sense to you. You don't have to think of it as groups; it's just a string claim. The data that is crawled by the connector will specify the list of claims that can view that item in the search results.

The main thing you'll be doing in Initialize is setting the values of the claim type, the claim issuer and the path to the datafile. The claim type and claim issuer are very important because the trimmer will use this information to determine whether and how to apply the demands for claims. The path to the data file is important because that is where the membership is specified. You can think of it as a membership provider just for the specific claims that you want to inject for the purposes of your custom trimming. The framework passes in a NameValueCollection, and it's possible these values could be sent in from there. You will probably want to set default values as we do in the example.

The AddAccess method of the trimmer is responsible for returning claims to be added to the query tree. The method has to return an IEnumerable<Tuple<Claim, bool>>, but a linked list is appropriate because it orders the claims in the same order they were added in the datafile. The group membership data is refreshed every 30 seconds, as specified in the RefreshDataFile method. RefreshDataFile uses a utility wrapper class (Lookup) to load the membership data into a simple-to-use collection. The framework calls AddAccess passing in the identity with the claims we attached in the datafile specified in Initialize.
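The datafile format above parses naturally into an identity-to-claims map. A minimal sketch (in Java for illustration; the article's Lookup wrapper class is the real loader, and this parser is a hypothetical stand-in for it):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class MembershipFile {

    // Parses lines like "contoso\adambu:user1;user2;user3" into a map from
    // identity to its list of claim strings. Trailing semicolons, blank claim
    // entries and malformed lines (no colon) are tolerated, matching the
    // slightly inconsistent sample datafile above.
    public static Map<String, List<String>> parse(List<String> lines) {
        Map<String, List<String>> membership = new LinkedHashMap<>();
        for (String line : lines) {
            int colon = line.indexOf(':');
            if (colon < 0) continue; // skip malformed lines
            String identity = line.substring(0, colon).trim();
            List<String> claims = new ArrayList<>();
            for (String claim : line.substring(colon + 1).split(";")) {
                if (!claim.trim().isEmpty()) claims.add(claim.trim());
            }
            membership.put(identity, claims);
        }
        return membership;
    }
}
```

A LinkedHashMap keeps the entries in file order, mirroring the article's point that the claims should be returned in the same order they were added in the datafile.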
Then we loop through those claims and add them to the IEnumerable of claims that AddAccess returns.

To register the custom trimmer:

New-SPEnterpriseSearchSecurityTrimmer -SearchApplication "Search Service Application" -typeName "CstSearchPreTrimmer.SearchPreTrimmer, CstSearchPreTrimmer, Version=1.0.0.0, Culture=neutral, PublicKeyToken=token" -id 1
net stop "SharePoint Search Host Controller"
net start "SharePoint Search Host Controller"

The -id parameter is just an index in the collection. I think it's arbitrary, but I had to supply a value.

To debug the pre-trimmer code: You may not need to do this, and you won't want to do it too often, but if you have complex logic in any of the code that retrieves or sets the static properties of the trimmer, you may want to debug the code. I had to do the following things. I'm really not sure if all of them are necessary, but I could not break in without doing all of them:

1) Build and GAC the debug version of the SearchPreTrimmer to make sure you have the same version installed.
2) Remove the security trimmer with:
   $searchapp = Get-SPEnterpriseSearchServiceApplication -Identity "Search Service Application"
   Remove-SPEnterpriseSearchSecurityTrimmer -SearchApplication $searchapp -Identity 1
3) Restart the Search Service Host:
   a. net stop "SharePoint Search Host Controller"
   b. net start "SharePoint Search Host Controller"
4) IIS reset.
5) Re-register the security trimmer using the register script above.
6) Restart the Search Service Host.
7) Use appcmd list wp to get the correct worker process ID.
8) Attach to the correct w3wp.exe and all noderunner.exe instances (not sure if there's a way to get the correct instance or not).
9) Start a search using an Enterprise Search Center site collection.

I got frequent problems like timeouts, and I always had to start the search under the same login account that Visual Studio was running in. Nevertheless it was possible, and I did it more than once with the same steps as above.
We've covered the basics of how claims are sent to SharePoint in the custom connector. We covered the basics of the custom security pre-trimmer in SharePoint 2013 and how it can use the claims sent by the custom connector to let the query engine apply claims-based ACLs so that only relevant results are returned to the results page.

Thanks for this very helpful article. Does a custom pre-trimmer have to be used with a custom search connector, or can you register a pre-trimmer to be invoked on top of content crawled from a standard SharePoint web application? The reason I ask is because we have a SharePoint repository being accessed by users with potentially hundreds of thousands of claims/roles. However, each document will only have a few claims/roles applied to it. So it would be ideal if I could let the search crawler crawl the repository in the normal fashion and just call out to a custom pre-trimmer for the ACLs.

Hi, thanks for the article with detailed steps :-) !!! I have a question specific to point 9 under the "To debug the pre-trimmer code" section. Can we achieve the same using the REST API (client) of FAST to consume search results and see if security trimming works?

Hi, I am not able to debug the security trimmer DLL. I added a few logs in the code initially, which I am able to see, but when I added a bit of extra logic and extra logging and deployed to the GAC, I still see the old log comments despite repeated deployment (around 20 times) :-( Please let me know how to refer to the latest DLL; sorry if this is too naive a question.

Thanks for the article. I have a more streamlined procedure for debugging the pre-trimmer:
------------------------------------
2) Restart the Search Service Host.
   a. net stop "SharePoint Search Host Controller"
   b. net start "SharePoint Search Host Controller"
3) Attach to the correct noderunner.exe and all noderunner.exe instances (not sure if there's a way to get the correct instance or not).
4) Start a search using an Enterprise Search Center site collection.

You don't need to remove and re-register the pre-trimmer. You don't need to reset IIS. You should attach the debugger to the noderunner processes; you don't need to attach to any w3wp processes.

Eugene - If I understand your question correctly, this type of security trimmer would not be possible. Or let me amend that and say it might be if you 1) had another claim provider that you could identify and reference in your trimmer code, 2) had a data source that had the security descriptor already in it, and 3) could somehow access all the claims information that is contained in the Model file from within SharePoint's built-in connector. I would have to do quite a bit of research to figure out if any of these elements were available, and then you'd have to figure out how your crawl sources would provide information to conform to those elements. So now you see why my first answer was "no." :)

rk-muddala - Are you clear that this new security trimmer framework is only available in SharePoint 2013? For 2013 we don't use FAST for SharePoint anymore. I don't know anything about using a separate product for crawling or indexing, but it seems to me the basic concepts would hold: 1) you need a claim provider of some kind (in this example we just use a text file), 2) you need to crawl and store the data in a SharePoint index, and 3) your data source needs to contain the elements necessary to control access - in this case it's the <claimaclmeta>access</claimaclmeta> in the Product.XML file. For debugging - did you follow my steps exactly? That was the specific reason I had to do all those steps and why I listed them. Also see Eugene's reply above. He seems to have narrowed it down a bit.

Thanks for your reply, Adam. You're right: we couldn't do everything using just a pre-trimmer. We found that it also suffers from limitations on the number of additional claims you can inject.
However, we were able to make our usage scenario work by combining the pre-trimmer with a post-trimmer. All of our data is stored in SharePoint, but some of the data requires the use of potentially hundreds of thousands of roles. Rather than adding the roles directly to the content or to the user (which breaks SharePoint), we use the pre-trimmer to add a single augmented claim to results coming from URL paths that would otherwise have required the large number of claims. We then use a post-trimmer to filter the search results based on authorization data coming from a custom database (which can handle the large number of claims). So far things are performing well in testing. Again, thanks for your post. It was very helpful.

Hi @Eugene... very curious to know how you implemented the scenario. We are facing the same issue, where we have multiple claims based on metadata that we would like to add on..
http://blogs.technet.com/b/sharepointdevelopersupport/archive/2013/05/28/implementing-a-custom-security-pre-trimmer-with-claims-in-sharepoint-2013.aspx
What's Hadoop Good For?

For those who missed the Big Data hype a few years ago, Hadoop is the poster-child project for organizations that are crunching big data. While not the only solution, it's certainly the best-known. Farrellee says, "Hadoop provides a powerful platform for computation and data management. That means when a Fedoran has log, media (identi.ca/twitter), click stream, geo-location, etc. data they can come to Fedora, yum install [Hadoop] and start processing it."

Typically, big data crunching implies having a fair number of machines or instances to throw at a problem. Is Hadoop useful for those who are just working on a single desktop or laptop? Farrellee says it can be. "Hadoop can be used on a single machine, which is how many of us do our development and basic testing."

"However," says Farrellee, "to really get benefits from Hadoop, Fedorans will need to tie together many more machines. To help with that, we're working on packaging the Apache Ambari project, which does multiple-system deployment and management of Hadoop."

Ambari isn't the only Hadoop tool that's coming to Fedora in the future. Farrellee says that Hadoop is "the foundation of an entire ecosystem of tools and applications. Having Apache Hadoop in Fedora means we can start bringing projects like Hive, HBase, Pig, Solr, Flume and Mahout to Fedora."

New In This Release

The version packaged for Fedora 20, Hadoop 2.2.0, is hot off the presses of the Apache Hadoop project (released on 15 October 2013). Farrellee says that this version has several interesting new features, in addition to the existing functionality of Hadoop that big data crunchers know and love.

The biggest change in this version, says Farrellee, is the general availability (GA) of Yet Another Resource Negotiator (YARN). "YARN gives Apache Hadoop the ability to concurrently manage multiple types of workloads on the same infrastructure.
That means you can have MapReduce workloads on your Hadoop infrastructure right next to Spark workloads (Spark is a BDAS (Berkeley Data Analytics Stack) project). And, it lets you consolidate your Hadoop ecosystem services to run on YARN instead of in parallel. The Hoya project is doing that for HBase."

Farrellee also says that the release includes many enhancements to the Hadoop Distributed File System, including high availability, namespace federation, and snapshots.

Dependencies, Dependencies, Dependencies!

The hardest part about getting Hadoop into Fedora? "Dependencies, dependencies, dependencies!" says Farrellee. In general, dependencies are often a sticking point, especially (as Farrellee points out) for those tools that depend on "languages other than C/C++, Perl or Python."

"When we did write patches we worked to get them upstream, but in at least one case, that of Jetty, it's complicated because the version Fedora has does not work with Java 6 and the upstream community isn't ready to give up on Java 6."

Just because Hadoop is in Fedora 20 doesn't mean the problem goes away. "Dependencies are, and will be, an ongoing effort, as Fedora rapidly consumes new upstream versions."

With all that work to be done, Farrellee was far from the only person working on the packaging effort for Hadoop. He says that the team "came together under the umbrella of the Big Data SIG that Robyn [Bergeron] kicked off near the beginning of 2013" and has been "awesome" in pulling together to get the job done. "We include people primarily interested in Hadoop, members from the Java SIG (which is key because the Hadoop ecosystem is mostly Java), random Fedorans who had an itch to scratch, and massively prolific packagers who were already looking at doing packages needed for Hadoop."

What's next in Fedora with the Hadoop ecosystem? Ambari, already mentioned, is a big one. "We're working with the upstream Ambari community to get it ready for Fedora," says Farrellee.
"It turns out to heavily use node.js, which does not have a strong presence in Fedora. HBase is on its way, along with Hive and a handful of others." The Big Data SIG also has a list of what's been done, what's in progress, and what's to come. In standard Fedora fashion, Farrellee adds that "anyone is welcome to add to the future list or take things off and start packaging them!"

Guest contributor Joe Brockmeier works on the Open Source & Standards team at Red Hat.
https://www.linux.com/news/featured-blogs/196-zonker/752637-focus-on-fedora-20-features-hadoop-in-heisenbug/
The time is finally here: the Windows Phone SDK 8.0 is out, and now we can finally talk about all the great new things for developers that this new SDK brings, and I'll start that right now. And I don't think it will surprise anyone who knows me that the first thing I did was look at the much-rumored new maps control. I was not disappointed. In general, just two words: super slick. The very embodiment of fast and fluid. What Windows Phone 8 brings to the mapping table is nothing short of dramatic. Think the Nokia Drive map on your Windows Phone 7. On steroids. As a control. For you to use in your apps. With downloadable maps for use offline. I have seen it run on actual hardware, and for me as a GIS buff, used to the fact that maps stuff usually takes a lot of processing power and doesn't necessarily always go super fast, this was a serious nerdgasm.

So I created a little app to show off the very basics of this map control. It allows you to select the various map modes, as well as to select the map heading and pitch. Heading is the direction the top of the map is pointing - in Bing Maps this was always "North" - and pitch is how you are looking at the map: if that's 0, you are looking straight down from above.

Now the important thing about Windows Phone 8 is that the team really went out of their way to make sure code is backwards compatible. That means that not only is the new maps control in the framework, but also Ye Olde Bing Maps control. This can lead to some confusing situations. The important things to remember when working with the new map control are:

- The (old) Bing Maps control stuff is in namespace Microsoft.Phone.Controls.Maps
- The new map control stuff is in namespace Microsoft.Phone.Maps.Controls

So "Controls" and "Maps" are swapped. I always keep "maps first" in mind as a mnemonic to remember which namespace to use.
Especially when you are using ReSharper, or a tool like it that most helpfully offers to add namespaces and references (again), this can really catch you on the wrong foot, so pay attention.

I started out creating a "New Windows Phone App", selected OS 8.0 (of course), fired up the Library Package Manager and downloaded my wp7nl library. This gives me MVVMLight and some other stuff I need for my sample. At the moment of this writing this is still Windows Phone 7 code, but it will run fine (of course this all will be updated shortly). The only thing you need to take care of is deleting the references to Windows Phone Controls.dll and Windows Phone Controls.Maps.dll that the package makes.

The first step is to make a simple view model describing the cartographic modes the new map control supports:

using GalaSoft.MvvmLight;
using Microsoft.Phone.Maps.Controls;

namespace MvvmMapDemo1.ViewModels
{
    public class MapMode : ViewModelBase
    {
        private string name;
        public string Name
        {
            get { return name; }
            set
            {
                if (name != value)
                {
                    name = value;
                    RaisePropertyChanged(() => Name);
                }
            }
        }

        private MapCartographicMode cartographicMode;
        public MapCartographicMode CartographicMode
        {
            get { return cartographicMode; }
            set
            {
                if (cartographicMode != value)
                {
                    cartographicMode = value;
                    RaisePropertyChanged(() => CartographicMode);
                }
            }
        }
    }
}

The main view model is basically some properties and a little bit of logic.
The first part handles the setup and the properties for displaying and selecting the cartographic map modes:

using System;
using System.Collections.ObjectModel;
using System.Device.Location;
using GalaSoft.MvvmLight;
using Microsoft.Phone.Maps.Controls;

namespace MvvmMapDemo1.ViewModels
{
    public class MapViewModel : ViewModelBase
    {
        public MapViewModel()
        {
            modes = new ObservableCollection<MapMode>
            {
                new MapMode {CartographicMode = MapCartographicMode.Road, Name = "Road"},
                new MapMode {CartographicMode = MapCartographicMode.Aerial, Name = "Aerial"},
                new MapMode {CartographicMode = MapCartographicMode.Hybrid, Name = "Hybrid"},
                new MapMode {CartographicMode = MapCartographicMode.Terrain, Name = "Terrain"}
            };
            selectedMode = modes[0];
        }

        private MapMode selectedMode;
        public MapMode SelectedMode
        {
            get { return selectedMode; }
            set
            {
                if (selectedMode != value)
                {
                    selectedMode = value;
                    RaisePropertyChanged(() => SelectedMode);
                }
            }
        }

        private ObservableCollection<MapMode> modes;
        public ObservableCollection<MapMode> Modes
        {
            get { return modes; }
            set
            {
                if (modes != value)
                {
                    modes = value;
                    RaisePropertyChanged(() => Modes);
                }
            }
        }
    }
}

The only important part about this is that there must be an initially selected mode, as the control does not take it very well if the mode is forcibly set to null by the data binding.
A little bit more interesting are the next two properties of the view model, which control heading and pitch:

private double pitch;
public double Pitch
{
    get { return pitch; }
    set
    {
        if (Math.Abs(pitch - value) > 0.05)
        {
            pitch = value;
            RaisePropertyChanged(() => Pitch);
        }
    }
}

private double heading;
public double Heading
{
    get { return heading; }
    set
    {
        if (value > 180) value -= 360;
        if (value < -180) value += 360;
        if (Math.Abs(heading - value) > 0.05)
        {
            heading = value;
            RaisePropertyChanged(() => Heading);
        }
    }
}

The map seems to try to keep its heading between 0 and 360 degrees, but I like to have the slider in the middle for the North position - that way you can use it to rotate the map left and right. So I want the heading to be between -180 and +180, which should be functionally equivalent to between 0 and 360 - and apparently I get away with it. Since both values are doubles, I don't do the standard equality check but use a threshold value before firing a PropertyChanged - courtesy of ReSharper suggesting this.

Then there's a MapCenter property, which doesn't do very much in this solution apart from setting the initial map center. I have discovered that the map does not like its center being set to null either - this beasty is a bit more picky than the Bing Maps control, it seems - so I take care to set an initial value:

private GeoCoordinate mapCenter = new GeoCoordinate(40.712923, -74.013292);

/// <summary>
/// Stores the map center
/// </summary>
public GeoCoordinate MapCenter
{
    get { return mapCenter; }
    set
    {
        if (mapCenter != value)
        {
            mapCenter = value;
            RaisePropertyChanged(() => MapCenter);
        }
    }
}

private double zoomLevel = 15;
public double ZoomLevel
{
    get { return zoomLevel; }
    set
    {
        if (zoomLevel != value)
        {
            zoomLevel = value;
            RaisePropertyChanged(() => ZoomLevel);
        }
    }
}

Together with the initial ZoomLevel set to 15, this will give you a nice view of Manhattan Island, New York City, USA.
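The heading wrap-around in the setter above can be isolated as a tiny helper. Here is a sketch of the same logic (in Java rather than the post's C#): it maps any heading within one revolution into (-180, 180], so the slider's center position corresponds to North.

```java
public class HeadingMath {

    // Same wrap-around as the Heading setter above: values just past 180
    // wrap to negative, values just below -180 wrap to positive. Note this
    // single-step wrap only handles inputs within one extra revolution,
    // which is all the bound slider can produce.
    static double normalizeHeading(double value) {
        if (value > 180) value -= 360;
        if (value < -180) value += 360;
        return value;
    }
}
```

For example, a heading of 270 degrees becomes -90 (a quarter turn to the left of North), while anything already in range passes through unchanged.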
There's also a boolean property "Landmarks" that will enable or disable landmarks - you can look that up in the sources if you like.

Then I opened up Blend, plonked a map, two sliders and a checkbox onto the screen, did some fiddling with grids and stuff, and deleted a lot of auto-generated comments. That made me end up with quite a bit of XAML. I won't show it all verbatim, but the most important thing is the map itself:

<maps:Map x:

Does not look exactly like rocket science, right? It is not, as long as you make sure the maps namespace is declared as:

xmlns:maps="clr-namespace:Microsoft.Phone.Maps.Controls;assembly=Microsoft.Phone.Maps"

The horizontal slider, controlling the heading, is defined as follows:

<Slider Minimum="-180" Maximum="180" Value="{Binding Heading, Mode=TwoWay}"/>

The vertical slider, controlling the pitch, looks like this:

<Slider Minimum="0" Maximum="75" Value="{Binding Pitch, Mode=TwoWay}" />

Sue me, I took the easy way out and declared the valid ranges in XAML instead of in my (view) model. But the point of the latter is this: apparently you can only supply pitch values between 0 and 75. And the pitch is only effective from about zoom level 7 and higher. If you zoom out further, the map just becomes orthogonal (i.e. viewed straight from above, as if the pitch were 0). This is actually animated if you zoom out using pinch zoom - a very nice visual effect.

Finally, the list picker controlling which kind of map you see, and the checkbox indicating if the map should show landmarks or not:

<toolkit:ListPicker

<CheckBox Content="Landmarks" IsChecked="{Binding Landmarks, Mode=TwoWay}"/>

... only I haven't been able to find any landmarks - not in my hometown Amersfoort, not in Berlin, nor in New York - so I suppose this is something that's not implemented yet ;-) If you fire up this application you will get an immediate error message indicating that you have asked for the map but have not selected the right capability for this in the manifest.
So double-click on Properties/WMAppManifest.xml and select ID_CAP_MAP (yeah, the manifest's got a nice editor now too), and fire up your app. And that's all there is to your first spin with the new Windows Phone map control. It supports data binding - to an extent - and it's easy to control and embed. Play with the sliders and scroll over the map - it's amazingly fast, and I can assure you it is even more so on actual hardware. Stay tuned: more is coming on this subject!

Note: I have not touched upon creating and adding map keys to the map in this solution. This procedure is described here. As usual: download the sample solution here.

3 comments:

I'm curious to see an offline-capabilities test with map tiles stored locally on the phone or custom-generated based on some locally stored data. Cheers, Claudiu

Hello, I am Madhuri. I want to make an application in Windows Phone 8 which shows traffic between two places on a map. I am not finding any solution. Can you please suggest something?

@Madhuri you have to elaborate a bit more on that, I think
http://dotnetbyexample.blogspot.com/2012/10/introducing-new-windows-8-map-control.html
Talk:DemonicRage

Contents

Questions & Comments

Very nice! Seems you've won the race to melee surfing? (EDIT: To melee surfing that collects stats, I meant. Since apparently Shadow does do non-learning wave surfing in melee, and while I believe Portia learns a little about enemy targeting, it's not technically a wave surfer.) I'm a little surprised by DemonicRage still performing behind Glacier though. Perhaps do some 1v1 tests of the surfing to see how well it's working in a more basic situation? --Rednaxela 15:16, 23 April 2010 (UTC)

Thanks, you're right... I haven't really tested the 1vrs1. The only difference between the melee and 1vrs1 is separate stats, which are both non-segmented. I haven't tried segmenting yet, but I'm sure the 1vrs1 needs it. -Jlm0924 17:35, 23 April 2010 (UTC)

Version History / Discussion

DRv3.0 - Lately I have completely rebuilt and improved Demonic's foundation (still based on Module). I implemented Rednaxela's tree into a bare-bones version of D's gun, which I plan to later expand after the new movement is fleshed out. The gun's new PIF I made should be quite quick. I'm not sure how it compares to displacement vectors or if its type has been done before, but I will post it and D's radar source code next chance I have.

DR3.03 - DR3.05 - Despite finding a small bug in DR3.03's 1vr1 movement, it's still performing sub-par. I will need some time to improve DR's 1vr1 wave surfing (I've never spent much time there before). Till then, I slapped the old random 1vrs1 movement back in to see what happens in the rumble. —Preceding unsigned comment added by Jlm0924 (talk • contribs)

- Well, personally, I'd put HawkOnFire vs DR3 in 1v1 and attempt to fix the remaining times DR gets hit, since HOF is just a "Head-On" targeter. In some tests I saw it dodge well enough that it's clearly surfing; however, it got hit often enough to indicate the surfing is flawed in some manner or another.
--Rednaxela 23:52, 25 April 2010 (UTC)

Yeah, the bug in 1vrs1 DR3.03 was that it was accidentally not just surfing the closest wave.. (It might have chosen a high-risk part of the closest wave if that position was low-risk on the waves behind it.) Without that flaw, it (unsegmented) works great for the first 10 rounds or so. Then the big guns start gaining ground back.. I tried simple segmenting, but it learns too slowly to compete with the better guns. I know nothing about the more advanced wave surfers. (What's going on in Druss/Diamond or Glacier?) ..but I'm going to avoid the segment setup, and try surfing with DC. `Jlm0924 06:42, 26 April 2010 (UTC)

Well, Glacier isn't a surfer (in fact, its 1v1 movement is terrible; just see its non-melee ranking). How to set up learning for surfing is basically the same as building a gun. I'm sure DC will work better than a single segmentation, though some like Druss have success with overlaying multiple segmentations. One note: usually one segments surfing less heavily, due to the reduced amount of data. The other note is that the particularly good guns start gaining back ground even with segmented/DC surfing, and what is usually done to counter this is to enable a "flattener" when the enemy hit rate is high enough. The "flattener" basically keeps stats of where you've been, instead of where you've been hit, and you dodge that as well. This works well because it avoids going to the same place twice even if you haven't been hit there yet. Note though, a flattener isn't really necessary. Midboss and RougeDC in 1v1 both lack a "flattener" yet still score high (though not so high against the strong bots, due to the lack of such). One thought: because melee gives much less time to learn than 1v1, perhaps it would make sense to switch to pure-flattener against the final remaining bot? I say such because I think chances are the final remaining bot is strong enough that a flattener could be best.
--Rednaxela 13:33, 26 April 2010 (UTC)

From Portia's stats that were posted some time ago (I can't remember where), there are only a few bullets that actually fell into your escape angle and actually hit (the ones you can save stats on), so you should combine it with your old MR movement to be competitive. My two cents anyway. --Nat Pavasant 14:00, 26 April 2010 (UTC)

- Thx for your wisdom, Rednaxela. DR has a flattener, but it worked better always off than always on :) so I will try enabling it as you suggest. When I get home, I will release DRv3.06 (no segmentation, no 1vr1 bug, and a properly activated flattener) into the (non-melee) rumble for the first time.. and go from there. :) -Jlm0924 00:35, 27 April 2010 (UTC)

DR3.06 - v3.06 released.. (Still no segmentation; removed the small 1vr1 bug and found a bigger melee stats bug; played with the flattener - the stats to enable it were swayed by melee, so I disabled it for now.) I also removed two gun distancing functions (distance and angle) and enabled painting waves. I've come to realise there is a lot I can still improve. Currently DR is selective on which waves it 'sees'. TODO list: allow DR to see more waves and weight them; precise escape angle. -Jlm0924 00:35, 27 April 2010 (UTC)

DR3.07 - Tuned the gun, added a 1vrs1 flattener (rarely on, if ever), reduced the health weight and territory weights (which I should look at again for further improvement), fixed a bug in the bin smoothener, and redesigned and tuned. Currently in the number 2 spot with 414 battles :) Hope it holds on :) Just in time, as I'll be back in school soon. -Jlm0924 19:50, 29 April 2010 (UTC)

- Cool, fingers crossed dude! --Voidious 19:44, 29 April 2010 (UTC)

- I'll cross'em for third :) -Jlm0924 19:50, 29 April 2010 (UTC)

- Very nice! I guess this means I'll need to get working on Glacier... (given how terrible its movement is in 1v1, perhaps I should just throw a quick random movement in to get an instant boost...) --Rednaxela 21:34, 29 April 2010 (UTC)

- You should ..
Your kick-butt gun is probably begging for it :) .. I'm waiting for 3.07 to get 1500 battles or so before releasing 3.08 ;) -Jlm0924 23:52, 29 April 2010 (UTC)

Doh! What happened with 3.09? Hope you kept your old source. =) By the way, did you see this bug we found in the Wave Surfing Tutorial? Discussion of it at Talk:Diamond/Version_History#1.5.5, code change you should make here. --Voidious 03:18, 2 May 2010 (UTC)

Yes, I did notice this issue, but I don't think I fully caught the bug... (I was aware DR had this issue and was going to look into it.) I've been using:
- && Math.abs(e.getHitBullet().getVelocity() - ew.velocity ) < 0.007
Thx for the heads up! :) I will try the suggested code and see if it helps DR catch the bullet hits it was missing :) I'm not sure what happened with 3.09 yet.. I suspect I introduced a new bug which I didn't catch before releasing. I haven't looked into it yet.. but I suspected earlier that he was getting kicked out of battles late in the rounds (but I guess not). No one has seen any error messages pop up, have they?? -Jlm0924 20:24, 2 May 2010 (UTC)

All versions of DR are now skipping big time!! -Jlm0924 15:21, 3 May 2010 (UTC)

I'm starting to get annoyed! I am using Robocode 1.6.1.4 for testing.. In future versions of Robocode, I think the time allotted to bots before they start skipping turns needs to be made more consistent.. The results of recalculating the CPU constant vary far too much, and don't accommodate a PC's current workload. I am releasing DR 3.1.0 as 3.11, which has been made light years faster. I believe it is one of the fastest DRs made, though there are still no guarantees it won't skip.. :( -Jlm0924 16:23, 3 May 2010 (UTC)

Oh, I also added a little dynamic anti-skip: if it skips a turn, it will start reducing iterations on the gun and movement.. which hopefully reduces performance less than the skipping of turns does... -Jlm0924 16:34, 3 May 2010 (UTC)

- I hear you with getting annoyed there.
Midboss in 1v1 is having skipped-turn problems itself. About making the time before bots start skipping turns more consistent: it's pretty difficult. See, currently Robocode uses System.nanoTime() to measure how long a robot is taking, which is subject to the current load of the machine and such. There are APIs in Java that allow getting the CPU time used by the robot thread instead; however, these have much worse resolution than System.nanoTime(). On Windows, those APIs are inaccurate enough that the only way used CPU time could be measured instead of System.nanoTime() would be if the average time over several turns was used to determine whether to skip or not. It's a bit of a dilemma, really. --Rednaxela 16:50, 3 May 2010 (UTC)

- From the sounds of it... averaging nano time may help smooth out spikes in machine load... not to mention a little forgiveness for guns targeting every 27th turn or so... -Jlm0924 23:59, 3 May 2010 (UTC)
- Despite this version of DR doing well, I still suspect he is running into a slew of skipped turns due to inconsistency. Out of curiosity: does Robocode recalculate the CPU constant before every battle, or only manually / during install?? I notice DR vs Portia battles are unstable, and wonder if it's a skipped-turns issue or a memory issue. I know DR is a memory hog (which I will try to address soon), and I suspect Portia is also... (I hope bots don't start fighting for resources.) It would be great if Robocode displayed a bot's allowable memory and CPU time; the head room remaining... -Jlm0924 00:17, 4 May 2010 (UTC)
- Robocode only recalculates the CPU constant when you tell it to (or edit it in robocode.properties). I agree that taking a wider view of CPU time used would probably be a good thing. DrussGT has similar problems, because its GoTo Surfing does most of its work in one tick, but it doesn't have to do much of anything most ticks. Does DR print when it skips turns? I can check on my currently running MeleeRumble client for you.
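(As a side note, the "dynamic anti-skip" idea Jlm0924 describes above — cut gun/movement iterations after a skipped turn, recover when turns complete on time — can be sketched roughly as below. The class name and the halve-then-creep-back policy are my own invention for illustration, not DemonicRage's actual code.)

```java
// Hypothetical sketch of a dynamic anti-skip governor: when the bot
// skips a turn, halve the iteration budget for gun/movement work;
// when turns complete on time, creep the budget back up slowly.
class SkipGovernor {
    private final int maxIterations;
    private final int minIterations;
    private int iterations;

    SkipGovernor(int maxIterations, int minIterations) {
        this.maxIterations = maxIterations;
        this.minIterations = minIterations;
        this.iterations = maxIterations;
    }

    // Call once per tick, passing whether the previous turn was skipped.
    void onTurn(boolean skippedLastTurn) {
        if (skippedLastTurn) {
            iterations = Math.max(minIterations, iterations / 2); // back off hard
        } else {
            iterations = Math.min(maxIterations, iterations + 1); // recover slowly
        }
    }

    int iterationBudget() {
        return iterations;
    }
}
```

A surfing or targeting loop would then evaluate at most `iterationBudget()` candidate points per tick, trading a little precision for staying under the CPU limit.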
It's a dual-core machine with only one client and nothing else running, so it should be close to "optimal". Oh, and big congrats on the score jump! =) --Voidious 01:03, 4 May 2010 (UTC)

- Thx Void. Yeah, DR prints in the normal fashion when it skips.. That'd be great if you could check :) I suspect it's only when it fights other resource-hungry bots.. I'm guessing you added some bells and whistles to your client :) I still need to take some time out and figure out how to run RoboResearch.. :) -Jlm0924 01:25, 4 May 2010 (UTC)
- Well, I'm not seeing too many skipped turns from DR here:
- First battle: DR, Diamond, 4x HawkOnFire, 4x Coriantumr. DR skipped 2 turns total, first time in round 21.
- Second battle: DR, Diamond, 4x Coriantumr, 4x Shadow (going for a "more resource hungry" field). DR skipped 2 total again, first time in round 12.
- --Voidious 19:30, 5 May 2010 (UTC)
- Thx Void... 2 turns is okay, I guess..

DR3.12 - DR3.13 Well, I was quite pleased with 3.12's performance and ranking jump (2nd place in the melee rumble). Though it managed to fare better against all bots, Diamond still held its ground in the rankings. After some tweaking and mods to enemy threat factors, DR3.13 lost its edge to Diamond and lost overall ground. -Jlm0924 03:43, 7 May 2010 (UTC)

DR3.12a I released 3.12a to test a mod to enemy threat (it was hard to tell from my testing). It dramatically increases DR's aggressiveness, gaining a good amount of bullet damage, though it gambles its survival and will likely get into trouble with tough bots. Hope the mod stays, as it adds some character :) ... It did better than expected :) -Jlm0924 20:38, 7 May 2010 (UTC)

- Very impressive gain with 3.12a there! So that was from making it more aggressive to enemies that are deemed not as strong? --Rednaxela 22:49, 7 May 2010 (UTC)
- No... It's getting closer to enemies that are not targeting him. For example, DR will consider 2 bots ramming each other as almost zero risk and at times get within a bot's width.
The obvious upside is greater bullet damage to unsuspecting enemies. A bigger downside than you might expect is attracting the attention of advanced bots who target everyone. Another is having your safer corner position taken while swooping in-field. Your gangbang turns into a drive-by with no safe place to go :) —Preceding unsigned comment added by Jlm0924 (talk • contribs)

- Ahhh, that technique. In Glacier's danger formula, what I do to account for that is factor in the ratio of "closest distance of anything to the enemy" versus "my distance to the enemy" (which is a value from some small number up to 1.0). Your approach of reducing that danger to zero is much more aggressive, though. Hmm, perhaps I should try something like squaring or cubing that ratio... since that would make it more aggressive while still not reducing danger all the way to zero. Interesting... --Rednaxela 19:40, 8 May 2010 (UTC)
- Yep, same sort of thing... yeah, try square or cube.. let me know what happens :) -Jlm0924 21:39, 8 May 2010 (UTC)

DR3.14 - DR3.15 I've replaced non-segmented surfing stats with DC-type surfing, with mixed results (as always)... later I'll merge with 3.12a: it may be improving 1v1 but hurting melee... -Jlm0924 19:54, 7 May 2010 (UTC)

- That would suggest to me that your DC-type surfing is having problems when the number of data points is too low. That's kind of interesting, because performing well with both high and low numbers of data points is generally a strength of the DC-type method. This would suggest to me that either the number of returned data points is too small, or something about the processing after the nearest-neighbor search has issues. Perhaps you are weighting the data based on how closely it matches, and that weighting is too harsh? Or perhaps the non-DC type had some kind of smoothing across bins, which is no longer the case?
I found back when I was working on RougeDC that how much one does the equivalent of bin smoothing has a huge impact in DC-type surfing. --Rednaxela 22:49, 7 May 2010 (UTC)

- As always, you're on the money... :) It has a separate 1v1 list and a separate melee list (both for DC/non-seg). In melee it can be selective about adding points. When it gets to the end game, the 1v1 DC stats return about 10-20 data points (in round 15) for Diamond when he is in the ring. Very low.. I'm going to release v3.15... which for now uses non-seg until 25 or more data points are returned. There are advantages to non-seg, so I can't switch over sooner.. You made a great point... As I do smooth out the DC wave after creating it, I never thought about playing with the smoothing.. I'll try that, thx -Jlm0924 00:14, 8 May 2010 (UTC)
- Well, part of my point was that there shouldn't be advantages to non-seg if the DC is set up ideally. If the number of points returned by the DC is at least 25, they're smoothed in the same way the non-seg is, and there is little weighting between them, the result should be *exactly* the same as the non-seg, in fact. --Rednaxela 19:40, 8 May 2010 (UTC)
- I hear you.. With my non-seg, I account for bullet misses, which decay, and smooth stats over time.. This naturally weights the waves (showing who more often or more recently hit DR). This isn't inherent to DC but could be added with weighting; I haven't tried yet. Till then, it seems my non-seg is working better for the first bit. As I rushed to post DR3.15 before taking off last night.. I accidentally had DR3.15 surfing DC first, then switching over to non-seg. Doh! (Performance went down instead of up.) I have to play with weighting/decay/smoothing and balance the wave with the risk functions that are always present. -Jlm0924 00:59, 9 May 2010 (UTC)
- Hmm, there's a lot to consider. First, since 1v1 is such a small part of melee, it's tough to test surfing changes' effectiveness. Does your surfing use data from melee in 1v1?
I would add some type of attribute to give strong preference to 1v1 data, like "number of bots alive" (I do this in Diamond's gun). Trying to learn from misses also seems really dangerous. I would test vs HOT bots like HawkOnFire with and without that and see if it's screwing you up.

- It seems very likely to me that there's something different about your smoothing in non-seg vs DC. Are you using VCS in your non-seg? I think Gaussian kernel density is really useful - a lot of DC kernel density stuff will just give a 0 density for anything outside of a bot width, while Gaussian will scale down and never hit zero. So being further away from a dangerous angle is better than being just beyond a bot width of it. Check Diamond::voidious.move.DiamondWhoosh.getDangerScore if you want to take a peek. =) Here's a nice list of kernel density estimators for comparison.
- Btw, if you want to make each version a sub-heading, you can do it with 3 ='s, like: === DR3.14 - UnReleased ===.
- Thx; yes, currently the smoothing is different, and missed bullets decay the non-seg stats. -Jlm0924 00:59, 9 May 2010 (UTC)

DR3.16 - DR3.19
- The DC wave creation is done slightly differently.
- I located a bug in bullet power of all places (it must have been there forever), causing DR to shoot higher power than intended (below 15 health).
- The damage/risk factor of surfing now incorporates both non-seg and DC waves simultaneously.
- An effort was made to normalize all data to prevent any possible unbalancing when testing risk/surfing factors.
- A bug was found in missed hit waves.

DR3.20 I posted my current version of Diamond to the MeleeRumble so we can get a fresh/accurate comparison. Great work you're doing with DR, good luck! What do you think is giving you the latest boost? --Voidious 15:46, 13 May 2010 (UTC)

- The main boost is from changing the gun (targeting all / enemy selection weighting). Previously DR would thrash between enemies.
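(Voidious's point above about Gaussian kernel density can be illustrated with a minimal sketch. The class and method names here are mine, and this is far simpler than Diamond's actual getDangerScore; it only shows why a Gaussian kernel never drops to exactly zero the way a hard bot-width cutoff does.)

```java
// Hypothetical sketch: danger at an angle, estimated from logged hit
// angles with an unnormalized Gaussian kernel. The kernel is 1.0 at a
// logged angle and decays smoothly, but never reaches exactly zero.
class GaussianDanger {
    static double kernel(double angle, double loggedAngle, double bandwidth) {
        double u = (angle - loggedAngle) / bandwidth;
        return Math.exp(-0.5 * u * u);
    }

    // Danger at an angle = sum of kernels over all logged hit angles.
    static double danger(double angle, double[] loggedAngles, double bandwidth) {
        double sum = 0;
        for (double logged : loggedAngles) {
            sum += kernel(angle, logged, bandwidth);
        }
        return sum;
    }
}
```

With a hard cutoff, every angle more than a bot width from a logged hit looks equally safe; with the Gaussian, an angle farther from a hit is always strictly safer, which is exactly the property Voidious is describing.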
Additional bugs in missed hit waves and bullet power selection. -Jlm0924

- Huge congrats tying it up at #1. =) Cool to see both the melee and 1v1 crowns so close at the same time. Scary to think you might still have more points in your gun... I figured you were already doing Shadow/Melee Gun when you got to #2. --Voidious 20:34, 16 May 2010 (UTC)
- Indeed scary. I'm currently thinking that Glacier has the strongest melee gun in current existence, and DemonicRage now has the strongest melee movement thanks to its melee wave surfing. I'd be curious to make a test melding of the two as 'wiki.DemonicIce' perhaps... --Rednaxela 04:17, 17 May 2010 (UTC)
- Thx guys :) -Jlm0924 14:50, 17 May 2010 (UTC)

DR3.22 - DR3.23
- I was previously thinking I had some timing errors dealing with incomplete and poorly interpolated data, but I was overtired and chasing my tail :P (The missing/inaccurate parts of the interpolated data were not being used anyway...)
- Though I did find a minor bug in interpolated data scanTimes - likely no performance impact.
- Currently I'm testing the balance of my use of unsegmented/DC wave data.

DR3.24 - DR3.26
- Testing a new gun mod, getting very mixed results :( -Jlm0924 06:18, 25 May 2010 (UTC)
- DR3.25 got kicked out of a battle, so evaluating was difficult.
- DR3.26 is likely my last attempt with the new gun.. I found a couple of small bugs and tuned it..
- The DC gun mod acts like a sonar, feeding in distances pinged off walls and bots.
- I haven't tested it 1v1 yet (but the improved wall distancing could possibly improve 1v1). -Jlm0924 19:04, 26 May 2010 (UTC)
- The last couple of days I've had 2 versions of DR in the rumble... (I'm removing one today.) I hope no one has minded.. -Jlm0924 19:12, 26 May 2010 (UTC)
- I minded a little, since one of them outranked me. :-) Just kidding. But seriously, it looks like you've got a 0.25% lead now. So I think the crown is yours - congrats dude! --Voidious 14:32, 27 May 2010 (UTC)
- Thank you, Voidious.
-Jlm0924 16:33, 27 May 2010 (UTC)

- Congrats from me too, great work. I can't believe Shadow is out of the top 3 already. If only I could find the time/energy to work on it... --ABC 17:30, 27 May 2010 (UTC)
- If Robocode had a "Hall of Fame", your two names would be at the top :) I'm honored, thx guys. -Jlm0924 23:43, 28 May 2010 (UTC)

Wow, nice work! I hope to get some coding done over the next few months, maybe get that melee surfing of mine finished/working =) Great to see what a learning version of melee surfing is capable of. Just a question: how did you decide which bot an enemy is targeting when determining guess factors? (You are using guess factors, right?) I thought of just choosing the closest, but that seemed like it might be a bit of a gamble. --Skilgannon 07:15, 30 May 2010 (UTC)

- From his debug graphics, I would say that he just aims every robot at himself. --Nat Pavasant 13:59, 30 May 2010 (UTC)
- I fear DR just opened the door. I hate to think what Druss might do if it were melee surfing in the MeleeRumble ;) Scroll way down the (1v1) rumble rankings and you'll know what I'm talking about :) With that said, I hope you get melee surfing up and running asap!! I think you'll like this: enemyScan.distance < enemyScan.closestBotDistance * 1.3
- I had thought of a name... I've got Wintermute in 1v1, so in melee I need Neuromancer to complete the pair =) I'm not sure if you've read the sci-fi series Neuromancer... it's pretty good. --Skilgannon 12:15, 31 May 2010 (UTC)
- Just read the Wikipedia article... sounds hard-core :) I'll keep an eye out for Neuromancer :) -Jlm0924 21:52, 31 May 2010 (UTC)

DR3.3
- I'm switching back to fewer numbers in the version number :P
- Tweaked DC waves and improved which locations to test and evaluate for risk (I hope).
- Corrected wave paint debug graphics.
- Added a little more bin smoothing and removed a huge memory-consuming array that wasn't being used.
(I abandoned an overly ambitious strategy creator/management system.) -Jlm0924 02:20, 8 June 2010 (UTC)

DR3.3b - DR3.3f
- Minor edits and tweaks, mostly to the gun.
- After several months' break from Robocode, I noticed that Diamond has taken the Melee Rumble throne. I attempted some minor tweaks to DR's gun, but Diamond holds strong. Congratulations, Voidious.
- I just noticed the radar in 3.3f is in constant spin (not locking in melee or 1v1).
- Other bugs already in 3.3f:
- - Some enemy bot data (totalTimeAlive) and ALL of my bot data was being cleared out between rounds. (This one has been broken for a long time.)
- - There were also some 1v1 gun attributes active in melee :( (Note: I would like to try closestCornerDistance. Thanks Void for the idea.. Does anyone use this?? I think DR will improve with this attribute.)
- - Wave surfing weights in "create wave" were completely messed up.
- - Radar not locking, as mentioned.
- - There were other issues, but in retrospect the above bugs were likely the cause.
- Unbelievable... I tried many new ideas in these versions. Most notable was a "non-pattern filter" for the DC gun, which should be re-evaluated. The name sounds weird.. What it did was take the movement pattern leading up to the enemy's current state and compare it to the movement patterns leading up to the enemy's similar states. The similar-states list was then filtered before being used to PIF. -Jlm0924

When reading your update, I found something in your code below that could also be buggy. You are comparing two strings with '=='; that should be 'equals': if (e instanceof RobotDeathEvent && ((RobotDeathEvent) e).getName() == lookingFor.name). I think you are now checking whether they are the same object, while you want to check whether the contents are equal. Good luck with your update and your new bot.
--GrubbmGait 10:10, 12 January 2011 (UTC)

DR3.3h I wasn't planning on releasing this, as I have started work on DEMONv0.01, but after finding so many bugs in 3.3f, I'll fix them next chance I have and give DR one more release before moving forward with DEMONv0.01. -Jlm0924 07:43, 12 January 2011 (UTC)
- That was a flop :P -Jlm0924

DR 3.4 This was a rollback to 3.3a. I didn't fix any known bugs.. (it even misses waves with bullet power x.x5).... But I did fix and add a few features:
- - Added precise intersection (from Demon).
- - Improved the safeBinsHit: if a wave an enemy shoots passes over another enemy first, those bin values are zeroed (precisely), marking it a safe position on the wave.. I think this feature is great for team battles (though DR is not set up for them).
- - Demon's when-to-fire logic and its method of dealing with low gun data were used (e.g. if data is low in 1v1, he will add a small portion of melee data). -Jlm0924 19:24, 11 February 2011 (UTC)

Source Code

package justin.radar;

import justin.Module;
import justin.Radar;
import justin.Enemy;
import robocode.DeathEvent;
import robocode.Event;
import robocode.HitRobotEvent;
import robocode.ScannedRobotEvent;
import robocode.RobotDeathEvent;
import robocode.WinEvent;
import robocode.util.Utils;
import java.util.Hashtable;
import java.util.Iterator;

/**
 * An efficient and robust radar system that I use with Module by jab.
 *
 * @author Justin Mallais
 */
public class DynamicLocking extends Radar {

    public DynamicLocking(Module bot) {
        super(bot);
    }

    static final double PI = Math.PI;
    static double radarDirection = 1;
    public Enemy lookingFor = new Enemy(); // Note: !! new Enemy must contain a null name !!!
    // List of enemy names that empties every new round (the boolean is not used)
    private Hashtable<String, Boolean> knownEnemiesList = new Hashtable<String, Boolean>();
    public boolean knownEnemiesListFull = false;

    public void scan() {
        // Only executed once at beginning of new round.
        if (bot.getRadarTurnRemaining() == 0) {
            // Initial radar direction is towards the center of the battlefield.
            radarDirection = (Utils.normalRelativeAngle(
                    absbearing(bot.getX(), bot.getY(),
                            bot.getBattleFieldWidth() / 2, bot.getBattleFieldHeight() / 2)
                    - bot.getRadarHeadingRadians()) > 0 ? 1 : -1);
            double radarTurn = Double.POSITIVE_INFINITY * radarDirection;
            bot.setTurnRadarRightRadians(radarTurn); // the scan
        }
    }

    public void listen(Event e) {
        // These aren't necessary, but the HitRobotEvent could help
        if (e instanceof WinEvent) cleanUpRound();
        if (e instanceof DeathEvent) cleanUpRound();
        if (e instanceof HitRobotEvent)
            lookingFor = Module.enemies.get(((HitRobotEvent) e).getName());

        // If who we are lookingFor has died
        if (e instanceof RobotDeathEvent && ((RobotDeathEvent) e).getName() == lookingFor.name) {
            lookingFor = new Enemy(); // Note: !! new Enemy must contain a null name !!!
        }

        // RADAR SCANNED ROBOT EVENT
        if (e instanceof ScannedRobotEvent) {
            // Check we've found all the enemies
            if (!knownEnemiesListFull) {
                // Ensure enemy is marked alive
                Module.enemies.get(((ScannedRobotEvent) e).getName()).alive = true;
                knownEnemiesList.put(((ScannedRobotEvent) e).getName(), true);
                knownEnemiesListFull = (knownEnemiesList.size() >= bot.getOthers());
                //bot.out.println(" found " + knownEnemiesList.size() + " bots, and found all is " + knownEnemiesListFull);
                if (!knownEnemiesListFull) return;
            }

            // We've found all the robots, now we can choose who to look for next
            if (lookingFor.name == null)
                lookingFor = Module.enemies.get(((ScannedRobotEvent) e).getName());

            if (((ScannedRobotEvent) e).getName() == lookingFor.name) {
                Iterator<Enemy> iterator = Module.enemies.values().iterator();
                double bestScore = Double.POSITIVE_INFINITY;
                while (iterator.hasNext()) {
                    Enemy tank = iterator.next();
                    if (tank.alive) {
                        double time = tank.scanTime;
                        double sweepSize = (Math.abs(Utils.normalRelativeAngle(
                                tank.absBearingRadians - bot.getRadarHeadingRadians())) / PI); // 1 needs the scan

                        // NOTE: The commented-out options below are for reference. (may be broken)
                        // Prioritise based on distance:
                        /*
                        int distance = (int) Math.round((Math.min(1000, tank.distance) / 1000 * 3)); // 1 needs the scan
                        distance = (distance == 3 && bot.getOthers() > 4) ? 10 : 0; // farthest bots not a concern
                        int priority = 0;
                        if ((tank.name == bot.enemy.name || tank.name == bot.myClosestBot.name
                                || bot.myLocation.distance(tank.location) < tank.cbD || bot.getOthers() < 4)) {
                            priority = -5;
                        }
                        */

                        double score = time - sweepSize; // + distance + priority;

                        // Scan target before he fires:
                        /*
                        double ang = lookingFor.absBearing + (lookingFor.deltaAbsBearing);
                        double turnsB4ScanBot = Utils.normalRelativeAngle(Math.abs((bot.getRadarHeadingRadians() - ang))) / .785;
                        double sS = bot.getTime() - lookingFor.timeScanned;
                        if (enemy.name == lookingFor.name
                                && (bot.getGunHeat() / bot.getGunCoolingRate()) - turnsB4ScanBot < 2
                                && sS > 1) score = score - 25;
                        */

                        // Scan target before we fire at him:
                        /*
                        if (tank.name == bot.enemy.name && ticksUntilGunCool() < bot.enemy.timeSinceLastScan + 2) {
                            score = score - 10; // score is based on time
                        }
                        */

                        if (score < bestScore) {
                            bestScore = score;
                            lookingFor = tank;
                        }
                    }
                }

                // New scan
                double angle = lookingFor.absBearingRadians - (lookingFor.deltaAbsBearingRadians * 2); // + lookingFor.deltaBearing; // should be much more accurate to use below
                radarDirection = (int) Math.signum(Utils.normalRelativeAngle(angle - bot.getRadarHeadingRadians()));
                double turnsTillScanBot = Utils.normalRelativeAngle(Math.abs((bot.getRadarHeadingRadians() - angle))) / .7;
                double radarTurn = Double.POSITIVE_INFINITY * radarDirection;

                // When to enable radar lock
                if (lookingFor.deltaScanTime < 1.1 && turnsTillScanBot < 1) {
                    // A small offset is needed. A few different types are here for bling bling :)
                    double offset = 0; // = radarsMaxEscapeAngle(lookingFor.distance, sinceScanned) * radarDirection; // a small offset based on escape angle
                    offset = offset + (Math.abs(lookingFor.deltaAbsBearingRadians * 3)); // greater offset for lateral speed
                    offset = offset + (20 * (lookingFor.deltaScanTime)) / (lookingFor.distance); // greater offset for smaller distance
                    offset = offset * radarDirection;
                    radarTurn = (Utils.normalRelativeAngle(angle - bot.getRadarHeadingRadians() + offset));
                }
                bot.setTurnRadarRightRadians(radarTurn); // set scan
            }
        }
    }

    // Fail safe
    public void cleanUpRound() {
        knownEnemiesList = new Hashtable<String, Boolean>(); // re-created (leaving this null would NPE on the next round's put)
        knownEnemiesListFull = false; // fail safe
        Iterator<Enemy> iterator = Module.enemies.values().iterator();
        while (iterator.hasNext()) {
            Enemy him = iterator.next();
            if (him.alive) {
                // clean-ups
            } else him.alive = true; // fail safe
        }
    }

    /*
    // PAINT lookingFor
    public void onPaint(Graphics2D g) {
        if (lookingFor != null && lookingFor.location != null) {
            g.setColor(new Color(0, 0, 255, 70));
            g.fillRect((int) lookingFor.location.x - 25, (int) lookingFor.location.y - 25, 50, 50);
        }
    }
    */

    // Utils
    // Gets the absolute bearing between two x,y coordinates (not sure of the author)
    public double absbearing(double x1, double y1, double x2, double y2) {
        double xo = x2 - x1;
        double yo = y2 - y1;
        double h = getRange(x1, y1, x2, y2);
        if (xo > 0 && yo > 0) { return Math.asin(xo / h); }
        if (xo > 0 && yo < 0) { return Math.PI - Math.asin(xo / h); }
        if (xo < 0 && yo < 0) { return Math.PI + Math.asin(-xo / h); }
        if (xo < 0 && yo > 0) { return 2.0 * Math.PI - Math.asin(-xo / h); }
        return 0;
    }

    public double getRange(double x1, double y1, double x2, double y2) {
        double xo = x2 - x1;
        double yo = y2 - y1;
        return Math.sqrt(xo * xo + yo * yo);
    }

    /*
    protected long ticksUntilGunCool() {
        return Math.round(Math.ceil(bot.getGunHeat() / bot.getGunCoolingRate()));
    }

    public static double radarsMaxEscapeAngle(double distance, double sinceScanned) {
        return Math.asin(8 / (distance - (8 * sinceScanned))) * sinceScanned;
    }
    */
}

- Note: HistoryLog is a linked list; Enemy is normal scan data.
- I also cleaned up PIF with comments. -Jlm0924 19:16, 28 April 2010 (UTC)
- Fixed myNextLocation. -Jlm0924 14:50, 17 May 2010 (UTC)
- Changed the variable name 'headingRadians' to prevent confusion. The name should have been 'effectiveHeadingRadians'. -Jlm0924 23:56, 28 May 2010 (UTC)

// Play It Forward
// author: Justin Mallais
public Angle getGunAngle(HistoryLog similar, Enemy e, double bulletSpeed, long time, double weight) {
    final HistoryLog similarInfo = similar;
    final HistoryLog currInfo = e.last;
    HistoryLog endInfo = similarInfo;
    double bulletTime;
    long timeDelta = (time - currInfo.scanTime);
    double predDist = 0, predAng;

    // My 'current' position transposed onto the 'similar' battlefield
    Point2D.Double myRelativePosition = project(similarInfo.location,
            Utils.normalRelativeAngle(currInfo.absBearingRadians + PI
                    - currInfo.effectiveHeadingRadians + similarInfo.effectiveHeadingRadians),
            currInfo.distance);

    while (endInfo.next != null && endInfo.round == similarInfo.round
            && endInfo.scanTime >= similarInfo.scanTime) {
        endInfo = endInfo.next;
        bulletTime = (myRelativePosition.distance(endInfo.location) / bulletSpeed) + 1;
        if (Math.abs(endInfo.scanTime - similarInfo.scanTime - timeDelta - bulletTime) <= 1) break;
    }
    if (endInfo.next == null || endInfo.round != similarInfo.round) return null;

    // Enemy's offset angle travelled
    predAng = Utils.normalRelativeAngle(DRUtils.absoluteBearing(similarInfo.location, endInfo.location)
            - similarInfo.effectiveHeadingRadians);
    // Enemy's distance travelled
    predDist = similarInfo.location.distance(endInfo.location);
    // Enemy's future location on the 'current' battlefield
    Point2D.Double predLocation = project(currInfo.location,
            Utils.normalRelativeAngle(predAng + currInfo.effectiveHeadingRadians), predDist);
    if (!Module.bf.contains(predLocation)) return null;

    // My absolute angle to his future position
    predAng = DRUtils.absoluteBearing(bot.myData.nextLocation, predLocation);
    // My distance to his future position
    predDist = bot.myData.nextLocation.distance(predLocation);

    // Returns Angle: ("predicted angle", "tolerance", and weight)
    Angle angle = new Angle(predAng, Math.atan(18 / predDist), 0, weight);
    return angle;
}
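(For reference, the string-comparison bug GrubbmGait points out above comes down to reference versus content equality in Java. A minimal illustration — the class name is mine:)

```java
// Java '==' on Strings compares references, not contents. Robocode can
// hand back names as separate String objects, so '==' may be false even
// when the names match. Use equals() instead.
class NameCheck {
    // Buggy pattern: true only if both variables point at the same object
    static boolean sameByReference(String a, String b) {
        return a == b;
    }

    // Correct pattern: true whenever the contents match
    static boolean sameByContent(String a, String b) {
        return a != null && a.equals(b);
    }
}
```

Applied to the radar above, `((RobotDeathEvent) e).getName() == lookingFor.name` would become `((RobotDeathEvent) e).getName().equals(lookingFor.name)` (guarding against a null name first).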
https://robowiki.net/wiki/Talk:DemonicRage
A Better Entity Framework Unit Of Work Pattern

The standard Unit of Work pattern has been around for the last 5 years. Today, I talk about a better way to implement the Unit of Work design pattern.

When Entity Framework entered the picture some years ago, I had almost completed an implementation of my own ORM (Object-Relational Mapper). The only thing missing in my ORM was a caching mechanism to speed it along. Once Entity Framework jumped from version 1 to 4, I decided to give it a try. After using EF4 for a while, I got into it and started using it more and more. At that point, I dropped work on my own ORM to use EF4 exclusively. I came to appreciate the framework and the EF team's efforts to make it better over the years.

However, I was always worried about all of these repositories scattered around my application, so I started using the Unit of Work design pattern. As with most developers, I wanted to improve and find the best practices for the Entity Framework Unit of Work design pattern, so I started looking around the big, bad web and found the Microsoft Unit of Work pattern for ASP.NET MVC. I also found a lot of developers using this particular method.

public class MyUnitOfWork
{
    private readonly EFContext _context;
    private PostRepository _postRepository;
    private TagRepository _tagRepository;

    public MyUnitOfWork() : this(new EFContext()) { }

    public MyUnitOfWork(EFContext context)
    {
        _context = context;
    }

    public PostRepository PostRepository
    {
        get { return _postRepository ?? (_postRepository = new PostRepository(_context)); }
    }

    public TagRepository TagRepository
    {
        get { return _tagRepository ?? (_tagRepository = new TagRepository(_context)); }
    }
}

On top of that, I saw an exceptional post from Derek Greer on Los Techies about him conducting a Survey of Entity Framework Unit of Work Patterns. Like Mr. Greer, this got me thinking about how to build a better Unit of Work.
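To see why this shape gets awkward, here is a typical consumer of that class. This is a sketch of my own, not code from the MSDN tutorial; it assumes the class also exposes a Save() method wrapping _context.SaveChanges(), as the MSDN version does.

```csharp
// Hypothetical controller code using the repository-property unit of work.
// Every new repository (CommentRepository, UserRepository, ...) forces an
// edit to MyUnitOfWork itself, which is the open/closed problem discussed
// in the points that follow.
public class PostController
{
    private readonly MyUnitOfWork _unitOfWork = new MyUnitOfWork();

    public void CreatePost(Post post, Tag tag)
    {
        // Both repositories share the same EFContext, so this commits
        // as a single transaction.
        _unitOfWork.PostRepository.Add(post);
        _unitOfWork.TagRepository.Add(tag);
        _unitOfWork.Save(); // assumed: wraps _context.SaveChanges()
    }
}
```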
Two SOLID Rules Broken

After going through his post, I started thinking of ways to improve the Unit of Work pattern using dependency injection. The points he mentioned in the post are perfectly valid reasons to avoid this particular pattern. I'm hoping I can address all of these points and come up with a better pattern by the end of this post.

Point 1

This approach leads to opaque dependencies.

How true! All dependencies are explicitly defined in the Unit of Work... hard-coded. Not a great pattern, and not easy to extend.

Point 2

This violates the Open/Closed principle.

Again, if you want to extend the Unit of Work class by adding another repository, you need to write it into the Unit of Work class. The open/closed principle says code should be "open for extension, closed for modification."

Point 3

This violates the Single Responsibility Principle.

I understand his concern, but the class does encapsulate the committing and rollback of transactions through the DbContext. Also, according to Martin Fowler, a unit of work "maintains a list of objects affected by a business transaction and coordinates the writing out of changes and the resolution of concurrency problems." The "list of objects" is the repositories, the "business transactions" are the repository-specific business rules to retrieve data, and the "coordination of the writing of changes and concurrency problems" happens through the DbContext.

Point 4

This approach uses nominal abstraction which is semantically coupled with Entity Framework.

When developers use Entity Framework, they create entities using a T4 template or the Entity Framework Reverse POCO Code First Extension. Once these entities are built, we have to inherit from the generated context to make sure any future changes aren't lost. I understand that defining a specific DbContext in the Unit of Work is required for it to work properly, but I think that's the nature of the beast.
If you want to use a blogging module/context, you have to access the Blog Module. Any thoughts on this?

Points Taken!

I've read this post over a number of times and think it's definitely worth examining the different approaches to see which one makes sense for you or your company. After some thought, my first attempt is listed below using dependency injection. I'm using Ninject for this particular example, but you can insert your favorite DI library here. The GetRepository<T> method is simple enough to adapt to your own DI library. Here's my first cut at this.

public class MyBlogUnitOfWork
{
    private readonly EFContext _context;

    public MyBlogUnitOfWork() : this(new EFContext()) { }

    public MyBlogUnitOfWork(EFContext context)
    {
        _context = context;
    }

    public T GetRepository<T>() where T : class
    {
        using (var kernel = new StandardKernel())
        {
            kernel.Load(Assembly.GetExecutingAssembly());
            var result = kernel.Get<T>(new ConstructorArgument("context", _context));

            // Requirements
            // - Must be in this assembly
            // - Must implement a specific interface (i.e. IBlogModule)
            if (result != null && result.GetType().GetInterfaces().Contains(typeof(IBlogModule)))
            {
                return result;
            }
        }

        // Optional: throw an error instead of returning null?
        // var msg = typeof(T).FullName + " doesn't implement the IBlogModule.";
        // throw new Exception(msg);
        return null;
    }

    public void Commit()
    {
        _context.SaveChanges();
    }

    public void Rollback()
    {
        _context
            .ChangeTracker
            .Entries()
            .ToList()
            .ForEach(x => x.Reload());
    }

    public void Dispose()
    {
        if (_context != null)
        {
            _context.Dispose();
        }
    }
}

The GetRepository<T> method is meant to return a repository that implements an IBlogModule in the current project/assembly. When I reflected on this approach, I came up with some thoughts on why this design makes sense:

- When using dependency injection, there may be an overlap of multiple unit of work patterns in other assemblies.
If I decided to use another method instead of GetExecutingAssembly() and grab assemblies by extension or directory, I need a way to receive the correct repository I requested. If you request a UserRepository and it's in two projects, you could receive a different repository than expected. So based on this particular unit of work, all repositories implement an IBlogModule in this assembly. That way, I know that I'm grabbing the right repository from the right assembly. If it implements an IBlogModule, then I can use it.

- Besides the method name, there is no mention of a repository in this Unit of Work pattern at all. This solves the Open/Closed principle. If I want to add another repository, my new repository would implement the IBlogModule interface and my unit of work would not require any changes.

- I have commit and rollback methods in my unit of work based on the context used, but we could add TransactionScope as well, as Mr. Greer mentioned in his post.

- Keep in mind the IBlogModule doesn't have ANY methods or properties attached to it. It's strictly a flag for making our DI library aware that we "new up" the right repository.

Unfortunately, it does use an Entity Framework-specific DbContext in this assembly (EFContext), so it's definitely coupled with this business assembly. Does that make it a bad design? Would I have any other use for my BlogContext in another unit of work somewhere in another assembly? I haven't found one yet.

How to use it

If you want to use this particular Unit of Work, an example is below:

    var unitOfWork = new MyBlogUnitOfWork();
    var repository = unitOfWork.GetRepository<PostRepository>();
    repository.Add(post);
    unitOfWork.Commit();

While you do need to be specific about which repository you want to use, it makes your Unit of Work a little smaller and more manageable.
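For completeness, here is a minimal sketch of the supporting pieces the pattern assumes. The IBlogModule marker interface is empty by design; the PostRepository body, the Post entity, and the Ninject module binding shown here are hypothetical illustrations and not code from the original post:

```csharp
// The marker interface: no members. It only "flags" repositories
// that this unit of work is allowed to resolve from this assembly.
public interface IBlogModule { }

// A hypothetical repository. The constructor parameter must be named
// "context" to match the ConstructorArgument passed in GetRepository<T>.
public class PostRepository : IBlogModule
{
    private readonly EFContext _context;

    public PostRepository(EFContext context)
    {
        _context = context;
    }

    public void Add(Post post)
    {
        // Assumes EFContext exposes a DbSet<Post> named Posts.
        _context.Posts.Add(post);
    }
}

// A Ninject module so kernel.Load(Assembly.GetExecutingAssembly())
// can discover the bindings in this assembly.
public class BlogBindings : Ninject.Modules.NinjectModule
{
    public override void Load()
    {
        Bind<PostRepository>().ToSelf();
    }
}
```

Because the marker interface carries no behavior, adding a new repository is just a matter of implementing IBlogModule and adding a binding; the unit of work itself never changes.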
Conclusion

This post was meant to examine the Entity Framework Unit of Work design pattern and come up with a better way to make the pattern easier to work with while adhering to some SOLID principles. I want to thank Derek Greer for writing such a great post. If it wasn't for that post, I wouldn't have given the Unit of Work a second glance. If you think I've violated some principles in my code, please let me know. I always want to make code better than it is. Post your comments and questions below.
https://www.danylkoweb.com/Blog/a-better-entity-framework-unit-of-work-pattern-DD
Laying A Foundation

I've recently had some conversations with other developers about setting up and configuring projects. I work 👷‍♂️ for a web development and marketing agency, WiredViews. Working for an agency involves a constant stream of greenfield projects, which means I've had lots of experience "starting" things. Below I'd like to cover some recommendations for setting up a Kentico 12 MVC project and what benefits these bring. This is going to be a 3-part post. In this post we're going to look at Developer Experience. The next 2 parts will cover Documentation and Configuration. Let's begin!

Developer Experience

Developer Experience is made up of all the little things in a code base that either make development a breeze 🏖 or wear us down over the course of a project 😩. If Developer Experience is bad, it's like walking on sharp stones, but if it's good, we are more efficient and focused on doing our best work 💪🏾. Improving Developer Experience can be owned by anyone on a team - it's usually too small to make a separate task, but big enough to annoy if it's not taken care of.

EditorConfig

I'm a big fan of not having to worry about formatting and syntactical conventions when coding. Languages like Go have code formatting and linting built in. JavaScript and TypeScript have Prettier, which I like to joke is the tool you can hate so you don't hate your teammates. They also have TSLint and ESLint, both of which have highly configurable rule systems. C# and .NET don't have anything quite as integrated or opinionated 😕, but we do have modern tooling to support formatting and conventions 👍🏽. These days, Visual Studio and VS Code both support formatting rules defined by EditorConfig (brought to us via Roslyn, the C# compiler-as-a-service). EditorConfig helps maintain consistent coding styles for multiple developers working on the same project across various editors and IDEs. We can find a full set of configurable rules in Microsoft's documentation.
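As a concrete starting point, a minimal `.editorconfig` might look like the sketch below. The specific values are illustrative choices, not the conventions from the runtime or Roslyn repositories:

```ini
# Top-most EditorConfig file for the repository
root = true

[*]
indent_style = space
insert_final_newline = true
trim_trailing_whitespace = true

[*.cs]
indent_size = 4
# Roslyn-supported C# style rules
csharp_new_line_before_open_brace = all
dotnet_sort_system_directives_first = true

[*.{js,ts,json}]
indent_size = 2
```

Dropping a file like this at the repository root means every editor that understands EditorConfig applies the same rules, regardless of each developer's personal settings.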
If you're unsure about what options to pick, it wouldn't be a bad choice to go with the configuration from the .NET Core runtime repository or the Roslyn repository. The best time to establish these types of conventions is at the beginning ⌚ of a project, both to help avoid massive changes in our Git history from re-formatting, and to not annoy other developers when they find out the way they've been formatting for 3 months is "the wrong way" 😒. Add an .editorconfig file to the root of your project on day 1 and use extensions like Code Cleanup On Save (for Visual Studio) and OmniSharp (for VS Code) to guarantee formatting is always applied 😎.

Solution Folders

When building a Kentico Portal Engine application, we could sometimes get away with the CMS being the only ASP.NET project in a solution. However, now with Kentico 12 MVC, we will have at least 2. If we want to share code between the Content Management (CMS Web Forms application) and Content Delivery (MVC 5 application) projects, we'll likely have several more 😮.

To learn more about structuring a Kentico 12 MVC code base, check out Kentico 12: Design Patterns Part 20 - Choosing a Solution Architecture.

As the number of projects and files in our repository grows, we'll want to come up with new organizational patterns. Solution folders can help us do just that. In Visual Studio, Solution folders aren't actual folders on the filesystem; instead, they are 'virtual' folders that appear in the Solution Explorer. A common pattern is to create src and tests folders to separate deployable code from tests. Maybe we have some console applications or other tools for a project - we could make a tools Solution folder and keep these projects and files there. I typically have a Solution Items folder where I put files at the root of the repository that I want to edit in Visual Studio, like documentation, a .gitignore, or other configuration files 🤓.
Solution and Project Names

When we use the Kentico Installation Manager (KIM) to create a new Kentico 12 MVC codebase, we'll get 2 solution files: WebApp.sln and one named after the project name we gave to the KIM. I dunno about you, but WebApp.sln is pretty bland 😝... who knows what could be in there!? In fact, my recommendation is to name Solutions and Projects to match the project being worked on. We likely have a client, department, or stakeholder this project is being built for. Figure out the official "business" project name and use that as a starting place for naming .NET solutions and projects. This will also impact .NET namespaces, and this is a good thing.

For solutions, this can be done either by renaming the .sln file directly or renaming the top level Solution node in the Solution Explorer from within Visual Studio. We can even rename the CMS project! It's traditionally been named CMSApp, which comes from the CMS\CMSApp.csproj file. Go ahead and change that to match the name of your project, like CMSStorefront.Web. I like using the "Content Management" and "Content Delivery" terminology for the CMS and MVC applications respectively 🧐, so I'll fit those into a naming scheme, like Sandbox.Management.Web and Sandbox.Delivery.Web. We'll want to update the <RootNamespace> and <AssemblyName> properties in the .csproj to match.

Note: While we can rename the CMS project, we cannot rename the \CMS folder... well, technically you can, but it will make applying hotfixes and patches a pain 😤.

Running and Debugging 2 Apps Simultaneously

As mentioned previously, the KIM (Kentico Installation Manager) provides us with 2 solution files upon the creation of a new Kentico 12 MVC application. I would recommend deleting 1 and adding the remaining ASP.NET project to the other solution. Why 🤔? Well, typically we won't be developing just the Content Delivery or Content Management applications.
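For illustration, here is roughly what the relevant fragment of the .csproj could look like after the rename. Sandbox.Management.Web is the hypothetical name from the example above, and the surrounding elements are abbreviated; a real MVC 5-era project file contains many more properties:

```xml
<!-- CMS\CMSApp.csproj (abbreviated, hypothetical rename) -->
<Project ToolsVersion="15.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <!-- Updated to match the renamed project so new classes
         get the right default namespace and output assembly name -->
    <RootNamespace>Sandbox.Management.Web</RootNamespace>
    <AssemblyName>Sandbox.Management.Web</AssemblyName>
  </PropertyGroup>
</Project>
```

Keeping `<RootNamespace>` and `<AssemblyName>` in sync with the project name means newly added files pick up the business-aligned namespace automatically.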
We'll likely be working on both - adding content to the document tree, creating new page types, and viewing the site to make sure we're querying and rendering data correctly. Do you want to always have 2 instances of Visual Studio open at the same time 🙄? Ya, neither do I 😉. Once both of the projects are in the same solution, we can start and run them simultaneously. Select the first project you want to start by right clicking on the project node in Visual Studio and selecting "Set as Startup Project": Then either use the keyboard shortcut ctrl+F5 or select the "Start Without Debugging" option from the "Debug" menu at the top of the screen: Now do the same two steps for the other project. The IIS Express tray icon should show 2 applications: What about debugging 🙂? We will "attach" to the running IIS Express processes. We need to get the process id from the IIS Express dialog (right click the tray icon and select "Show All Applications": Now, return to Visual Studio with this information and either use the keyboard shortcut ctrl+alt+p or select from the Debug menu "Attach to Process", after which the process list dialog will appear: Search for the correct IIS Express process (or processes, yes - you can attach to both applications at the same time 🤯!) and then click "Attach". We could have also assumed that all iisexpress.exeprocesses are the apps we just started and attached to those 🤷🏾♂️. The great thing about this setup is, we can launch one or both applications from the same Visual Studio instance and debug. If we want to stop debugging, we click the "stop" button, but this is much better than normal debugging, because the sites are still running 🥳! That's right, we can attach and detach as much as we want and we don't have to restart 😁 the sites every time. Also, there's a nice keyboard shortcut for re-attaching to the last attached process(es) - shift+alt+p, no need to open the process dialog again 😎. 
Conclusion

In this post we covered:

- Setting up EditorConfig for consistent linting and formatting
- Using Solution Folders to make organizing our work in Visual Studio easier
- Giving our Solutions and Projects better names
- Running both Content Delivery and Content Management applications out of the same solution

Each one of these leads to a better Developer Experience for ourselves and for our team members. There's lots of other ways we can improve Developer Experience on a project, and I'd love to hear your ideas, so leave a comment below if you have any!
https://practicaldev-herokuapp-com.global.ssl.fastly.net/seangwright/kentico-12-design-patterns-part-22-improving-our-projects-for-developer-experience-1jjk
class_track.h File Reference

Functions relative to tracks, vias and segments used to fill zones.

#include <pcbnew.h>
#include <class_board_item.h>
#include <class_board_connected_item.h>
#include <PolyLine.h>
#include <trigo.h>

Go to the source code of this file.

Detailed Description

Functions relative to tracks, vias and segments used to fill zones. Definitions for tracks, vias and zones.

Definition in file class_track.h.

Macro and constant documentation:

- Definition at line 60 of file class_track.h. Referenced by VIA::Draw().
- Definition at line 58 of file class_track.h. Referenced by PCB_IO::format(), DSN::SPECCTRA_DB::makeVIA(), and VIA::SetDrillDefault().
- Definition at line 49 of file class_track.h.

GetFirstVia()

Scan a track list for the first VIA; returns NULL if not found (or if NULL is passed).

Definition at line 490 of file class_track.h.

References TRACK::Next(), PCB_VIA_T, and EDA_ITEM::Type().

Referenced by GENDRILL_WRITER_BASE::buildHolesList(), TRACKS_CLEANER::cleanupVias(), CreatePadsShapesSection(), TRACK::GetVia(), BOARD::GetViaByPosition(), and TRACKS_CLEANER::removeDuplicatesOfVia().

GetTrack()

Function GetTrack is a helper function to locate a trace segment having an end point at aPosition on aLayerMask, starting at aStartTrace and ending at aEndTrace. The segments of track that are flagged as deleted or busy are ignored. Layer visibility is also ignored.

Definition at line 68 of file class_track.cpp.

References BUSY, IS_DELETED, and TRACK::Next().

Referenced by BOARD::chainMarkedSegments(), PCB_EDIT_FRAME::EraseRedundantTrack(), BOARD::GetLockPoint(), and BOARD::MarkTrace().
http://docs.kicad-pcb.org/doxygen/class__track_8h.html
WINTHROP UPDATE
Volume 9 Number 2 • Winter 2000-01

Ed Lewandowski subject of SCETV documentary

To say that Edmund Lewandowski had a big impact on the arts scene in Rock Hill is an understatement. A nationally recognized Precisionist, Lewandowski chaired Winthrop's art department from 1973 to 1984. Until his death on Sept. 7, 1998, he was a mentor, community activist and artist who was respected and loved by the Rock Hill community. As a tribute to Lewandowski for his contributions not only to South Carolina but to American art, Winthrop and WNSC-TV, a regional station of South Carolina ETV, have co-produced a documentary about the artist. The half-hour piece, entitled "Remembering Ed: The Last Precisionist," was broadcast statewide on SCETV in September.

[Photo: Ed Lewandowski]

Lewandowski's art was recognized nationally as early as the 1930s. He worked with the Works Progress Administration during the Depression and completed murals throughout the Midwest. He was also a mosaic artist, best known for the War Memorial in Milwaukee, the largest mosaic produced in America. He has often been considered the last artist of the Precisionist movement, a distinctly American style in which industrial scenes and architectural motifs, devoid of human reference, are de-

Hurley fellowships are English majors' ticket to new view of old world

English majors Tamzen Wagner and Claire Sullivan knew each other from class. What they didn't know was that they both would be chosen for a once-in-a-lifetime, all expense paid summer study trip to England. Wagner, a senior from Columbus, Ohio, already had made her summer plans when she received word she was a fellowship recipient. She had planned to use savings bonds to finance her tuition at Cambridge University's three-week international summer program in medieval studies. Sullivan, a junior from Rock Hill who would like to get her doctorate and teach on the college level, hadn't even considered a trip abroad to further her education.
She was still at loose ends when she received a call from Debra Boyd, English department chair. "Dr. Boyd is my advisor, and the last week at school she had asked me if I were interested in the Cambridge program," Sullivan remembered. "She kept in touch with me over the summer, then told me that based on my GPA and activities, I'd been chosen for one of two fellowships for study and travel." Sullivan counts being student marshal, Winthrop ambassador, vice president of the Literary Society and president of the English honor society, as well as holding a part-time job among her many activities. Wagner is equally involved. She is president of the Literary Society and vice president of the English honor society, is active in the campus ministry Reformed University Fellowship and tutors at the Writing Center.

(see Lewandowski on page 2)

INSIDE
- Around campus: Winthrop offers new master's degrees in middle level education and conducting, p. 2
- Faculty/staff notes: French professor learns secrets of French perfumeries, p. 4
- Student spotlights: Student hobnobs with international diplomats as assistant to ambassador Mark Erwin, p. 7
- Sports news: Freshmen men's golfers ranked top in nation, p. 9
- Alumni activities: Carrolls' grant lets students play the stock market, p. 10

[Photo caption: While studying at Cambridge, Claire Sullivan (center left) and Tamzen Wagner (center right) went on lecture-related excursions to attractions like the Globe theatre for an "awesome" production of "Hamlet." Sullivan says the experience has changed her perspective and the direction of her life.]

College of Ed auditorium named for Plowdens

Winthrop has named its College of Education Auditorium after Irvin and Jean Kirby Plowden '55. The Rock Hill couple donated $500,000 last summer for an endowed scholarship for education majors who would not be able to attend college without financial help.
Students will begin receiving the academic scholarships starting in the fall. "There are no finer examples of citizens involved in service to education than Irvin and Jean Plowden. The creation of the Jean Kirby Plowden Endowed Scholarship is a capstone example of that involvement," said President Anthony DiGiorgio during the dedication luncheon at McBryde Hall. The auditorium is located on the third floor of the Withers/W.T.S. Building which houses the Richard W. Riley College of Education, named last spring after U.S. Secretary of Education Richard Riley, a former South Carolina governor.

Irvin Plowden retired in June 1999 as chair of Amida Industries, the company he built from three employees and a single product (carnival lights) into the largest supplier of mobile lighting in the world. An education major, Jean Plowden taught third grade for two years in Beaufort, SC, before she and Irvin married. She continued her link to education by serving as Amida's company liaison with area schools, as well as serving on the board of Communities in Schools, a dropout prevention program for Rock Hill and Fort Mill students.

The two were chosen to receive the Geraldine Trammell Hurley Fellowships for Study and Travel, established by James and Geraldine Hurley. The fellowship provides funds for up to four students majoring in English to study at international institutions, such as the Oxford Summer School in England and the Yeats Summer School in Ireland. "Travel is such a great educator," Gerry Hurley had said last spring when she and her husband presented Winthrop with the $400,000 gift, $300,000 of which is a planned gift. The fellowship covered travel, food, lodging and tuition.

Both Wagner − who didn't have to cash in her savings bonds after all − and Sullivan flew to London a week early to explore on their own before classes began.
Wagner's mother used the money she had saved for Tamzen's airfare to finance her own plane ticket and accompanied her daughter on excursions outside London. Sullivan stayed in London and investigated the city on her own. The two got together when they reached Cambridge. "We stayed at Newnham College where the medieval studies program was based," Wagner said. "There were people (see English majors on page 7)

2 Winthrop Update • Winter 2000-01

AROUND CAMPUS

University adds master's in conducting, middle level education

Middle level degree fills specialized niche

No one would argue that middle school students have unique needs. Recognizing the special skills teachers need to work with these students, Winthrop is offering a master's degree in middle level education this fall. "Middle level education is unique because of the specialized developmental and curricular needs of the students," said Barbara Blackburn, assistant professor of curriculum and instruction and program coordinator. "We have built a program to prepare teachers to be leaders in responding to the specific needs of this age group. Additionally, we are being proactive in moving to meet South Carolina's need for middle level certification. We look at national research and link it to state-specific needs for both North and South Carolina and further link it to actual classroom practice." Designed for people currently holding teaching certificates, the program is

Degrees in choral and instrumental conducting meet community need

Winthrop's Department of Music routinely gets calls about its graduate programs. However, a few years ago, faculty began to notice that they were increasingly getting inquiries about additional studies in conducting rather than music education or performance. "That's how most middle school and high school choral and band directors spend their time," explained Don Rogers, music department chair.
"So, we put two and two together, created a faculty committee, talked to some active public school choral and band directors in the area with those interests, and put together a degree in conducting." Rogers notes that not only does the degree meet an established need, but because of the expertise and experience of the music faculty, the university is able to offer the program with existing faculty. The new degree features two tracks: one in choral conducting and one in wind instrumental conducting. The program is designed for students who hold music performance, music education or equivalent undergraduate degrees from an accredited institution. Additionally, they must have keyboard proficiency that would be required under such a degree as well as proficiency in a major instrument, which varies according to the track. They also must have at least one year of college-level study in French, German or Italian. Choral conducting students also should have diction proficiency in liturgical Latin and either French, German or Italian. Winthrop is one of only two schools in the state with a master's in conducting and the only school in the Charlotte-Metro area with graduate degrees in music.

Lewandowski (continued from page 1)

picted in a simple, clear, almost abstract manner. Winthrop University Galleries Director Tom Stanley, who worked on the production with WNSC's Steve Warren, said, "This is one of the most rewarding projects I have had the good fortune of being involved with. The people we interviewed created a compelling story that goes far beyond Ed's artistic achievements."

WINTHROP UPDATE
Editor: Gina Carroll Howard
Contributing Writers: Judy Longshaw, Ryan Shelley
Photographer: Joel Nichols

ADMINISTRATIVE OFFICERS
President: Anthony DiGiorgio
Vice President, University Advancement: Rebecca McMillan
Executive Director, Alumni Relations: Martie H.
Curran
Director of University Relations: Ellen Wilder-Byrd

Winthrop Update is published twice yearly by the Winthrop University Office of University Relations, 200 Tillman Hall, Rock Hill, SC 29733. It is printed on recycled paper using vegetable-based ink and is distributed to Winthrop alumni and friends. Visit Winthrop's Web site at.

Winthrop and DSS partner to better prepare social workers

Social workers have a tough job. However, a negotiated contract between the South Carolina Department of Social Services and Winthrop is helping to make their job a little easier by giving pre-service students hands-on experience and current DSS staff a chance to further their education. As a part of the contract, Winthrop will receive $237,123 the first year with a possibility of renewal for up to four years. Under the agreement, six selected DSS employees may take six credit hours spring semester at Winthrop. The classes may be applied towards a bachelor's degree in social work. Over the summer, 12 staffers may take classes. The program is in response to the high rate of DSS staff turnover. Its aim is to reduce this by offering promising current DSS staff further education and by better preparing students who are going into the often-demanding world of social work.

Beginning spring semester, as many as eight pre-service students will be selected to participate in the program. They will be required to take six to nine hours of course work in child welfare, child protection and substance abuse to acquire the specialized knowledge they will need to work in the county offices. Investigators are still working out the student selection criteria. A child welfare-oriented field unit of up to eight students will be created within a county DSS office this summer. A full-time field instructor will help the students adjust to the demands of social work. "These kinds of programs are going on all over the U.S. I'm glad to see this kind of professional incentive come into our community," said Ron Green, principal investigator and social work chair.
Contract funding also will provide support for converting three current social work courses onto the Web. The goal is to eventually convert all applicable social work classes into a Web-based format to make it more convenient for students and current workers to get the education they need. Once the program is implemented, Green and associate professor of social work Sue Lyman will assess the impact the project is having on the community.

tailored so that participants, who choose one or two content specialties, can enter in either the fall or spring semester. The disciplinary focus reflects the National Middle School Association/NCATE-approved guidelines. Winthrop is the first university in the state to offer this sort of program.

Renovations on baseball field nearly complete

Winthrop's baseball park is getting a new look. Work is almost finished to upgrade the facility to seat 1,900 spectators. The new park also will feature new dugouts, a press box, concession area and restrooms. Funds for the work will come partly from the proceeds of refinancing the original Coliseum construction bonds and partly from student athletics fees. Money from both sources is required to be devoted exclusively to intercollegiate athletic and recreational uses. The contract for the renovation totals $2,055,632. Private fundraising to support future upgrades, such as locker rooms, coaches' office space and an indoor hitting area, is underway. Contracts for planned upgrading of softball and soccer facilities have not yet been bid. The field, which is scheduled to have its grand opening on March 24, will be both a Winthrop and community resource, used by Winthrop teams during the season and by American Legion and Dixie baseball in the summer.

Enrollment tops 6,000, highest ever

Winthrop marks the 21st century with the largest student body − 6,062 students − in the institution's 114-year history. This is the first time the university has exceeded 6,000 undergraduate and graduate students.
In fall 1999, Winthrop enrolled 5,840 students. Other enrollment highlights include:

- A freshman class that scored the highest average SAT in Winthrop's history: 1056, a 14-point increase over last year's average of 1042. More than half − 57.7 percent − of the freshman class ranked in the top quarter of their high school senior class.
- A record for transfer students, who number 404 students, a 13.2 percent increase over last year's transfer enrollment of 357.
- The largest undergraduate enrollment in Winthrop's history of 4,650 students and largest fall graduate enrollment of 1,412 students. The number of freshman students is down slightly from 971 in fall 1999 to 906 students in fall 2000. Winthrop officials aim for freshman classes of between 900 and 950 students.
- The most diverse student body in Winthrop history. The university has 1,479 enrollees who are African-American, Hispanic and other populations, including 1,350 African-American students.
- The increasing prominence of Winthrop graduate programs, with 1,412 students − 23.3 percent of the student body − engaged in graduate studies.

The record number of transfer students, plus an increasing number of graduate students, helped the university reach this milestone.

U.S. News again ranks Winthrop among top 10 Southern public universities

Winthrop is once again listed among the U.S. News & World Report's annual top 10 regional public universities. The university tied with one other school − East Carolina University − for ninth place in the prestigious rankings. Among the 123 public and private regional universities rated, Winthrop again was in the top tier, tying at 27 with Bellarmine College in Kentucky, Christian Brothers University in Tennessee and East Carolina University. Regional universities specialize in undergraduate and master's level programs. U.S.
News rates schools for their academic reputation, graduation and retention rates, faculty resources, student selectivity, financial resources and alumni giving. The findings are in the year 2001 edition of America's Best Colleges which is available on the U.S. News Web site () and on newsstands. The guide rated more than 1,400 colleges. This is the ninth time Winthrop has received a top ranking from the magazine. President Anthony DiGiorgio said the accolade is one of many "third-party verifications" of Winthrop's overall quality. "As Winthrop helps define higher education for the 21st century, it is clear the Winthrop team is building a record of sustained high performance from a national perspective," he said. "This ranking is just one indicator. Our continuing 100-percent accreditation for all eligible programs is another."

Chemistry Dept. awarded contract with N.C. Division of Air Quality

Winthrop has signed a contract with the North Carolina Division of Air Quality (NC DAQ) to provide research analytical support for the hydrocarbon sampling and analysis program. In a report issued by the American Lung Association in June 2000, the Rock Hill-Charlotte-Gastonia metropolitan area was identified as the eighth worst U.S. urban region for summertime production of ground-level ozone. The work with the Division of Air Quality will support chemistry student internships to evaluate hydrocarbon fingerprints and trends from 10-20 sampling locations using data collected over the past five years. Between seven and 10 chemistry students will be involved in various aspects of the project. Chemistry professor Pat Owens will direct the project. Owens is vice chair of the Mecklenburg County Environmental Protection Commission and a member of the air quality team for the Voices and Choices regional smart growth initiative.

ABC receives $100K NEA grant

All children in South Carolina − regardless of where they live − should have a quality, comprehensive arts education.
That's the premise of the Arts in Basic Curriculum project. A two-year $100,000 National Endowment for the Arts grant for a new arts education outreach program brings that goal closer to reality. The new ABC Outreach Program is designed to make the arts a basic part of students' education in underserved areas. Expanding the project's reach to offer more assistance at the local level was the major recommendation of the ABC project's 10-year evaluation, according to Ray Dougherty, interim project director. The National Endowment for the Arts has a similar outreach goal. ABC staff will identify and recruit schools and districts limited by geographical and/or economic constraints,

Winthrop chosen as site for Center of Excellence in Early Childhood Professional Development

Winthrop's Richard W. Riley College of Education continues to be recognized for the quality of its faculty and curriculum. Over the summer, the college was chosen as the site of the state's newest Center of Excellence − a center to be devoted to identifying how South Carolina can best prepare children in their earliest years for success in the classroom. On Jan. 26, Gov. Jim Hodges and S.C. Secretary of Education Inez Tenenbaum are scheduled to cut the ribbon to open the new center. The Center of Excellence in Early Childhood Professional Development was chosen from a number of proposals submitted to the S.C. Commission on Higher Education by public colleges and universities competing for start-up funds. Winthrop partnered with the University of South Carolina's Institute for Families in Society in crafting the proposal, which is to be funded with $424,049 in state money over the next four years. The Good Building will serve as headquarters for the center. Also collaborating with Winthrop and USC on the initiative are York Technical College, area school districts, York County's Success by 6 organization and the state's First Steps program.
The center will work with York Tech to link childcare and early grade professional and paraprofessional training programs in ways that recognize families’ roles in early childhood preparation for the classroom. It also will be a resource center for Success by 6, First Steps and area school district programs. “It is critical that early childhood educators know how to work effectively with families. The more able teachers are to support and understand the diverse needs of families, the greater our children and society will benefit,” said Patricia Graham, dean of the Richard W. Riley College of Education. “The Center of Excellence in Early Childhood Professional Development will have this goal of family-centered curriculum and training as a primary focus.” with special emphasis on the state’s districts of greatest need. Staffers will work with the schools to promote significant curricular changes to broaden arts education. School and district outreach sites will be able to send their teachers to the Curriculum Leadership Institute in the Arts (CLIA), Spoleto Teachers Institute, and the Arts Education Leadership Institute for in-depth individual and team training. Registration goes online for spring The candy-chomping, soft-drink swilling, hours-long drop/add line snaking through Tillman to McBryde are a thing of the past. With the spring semester, students are able to register for classes via Wingspan, the Office of Records and Registration’s online program. After meeting with advisors – and the advisors log on to SIS to OK their course selection – students can check the course schedule, choose a class and section, and click to register. “The program tells if a course is open and how many seats are available. It also checks for prerequisites and grades, if specific grades are required,” explained Tim Drueke, registrar. 
“If a lab is required, or if there is a schedule conflict, the system will tell the student that as well.” Drueke said that in addition to registering – and dropping and adding classes – students also can use Wingspan to check grades, account balances, transcripts and credits transferred to Winthrop from other colleges. They also can check on any holds blocking their current registration. “Students can register from any computer connected to the Internet, whether across campus, across town or around the world,” Drueke said. Drueke said the new system will free records and registration staff to direct their efforts to students who are having major problems or need course overrides.

Music and the spoken word resonate in Frances May Barnes Recital Hall throughout the year. Among the guests appearing last October were (left to right) pianist Charles Hulin, a faculty jazz group and vice chair of First Union Corp. Ben Jenkins.

Winthrop Update • Winter 2000-01

FACULTY/STAFF NOTES

Tom Moore to serve as acting dean for Arts and Sciences

Tom Moore, former chair of the chemistry department and current director of the Master of Liberal Arts program, will take over as acting dean of the College of Arts and Sciences when Betsy Brown steps down on Feb. 1. Brown, dean of the College of Arts and Sciences since 1994 and a member of the administration since 1990, has accepted a position with the University of North Carolina. She will become an associate vice president for academic affairs in the Office of the President at the University of North Carolina on Feb. 1. The University of North Carolina comprises 16 public institutions, including UNC Chapel Hill, North Carolina School of the Arts, North Carolina State University and Appalachian State University. Moore will take over the leadership of faculty, students and staff in 14 departments.
Until his appointment as acting dean, he served as the president of the faculty conference and faculty representative to Winthrop’s board of trustees. Moore earned a B.A. in chemistry from Huntingdon College in Montgomery, AL, and a Ph.D. in inorganic chemistry from the University of South Carolina. There will be a national search for a new dean for the College of Arts and Sciences at the beginning of the 2001-2002 academic year.

“Perfume so powerful they seem to pass / Through all materials, perfusing even glass” – from “The Flask” by Baudelaire

The French take their perfumes very seriously. Donald Friedman, professor of modern languages, discovered just how seriously when he toured the factories over the summer, visited the archives and talked with a few of the “noses” of some of the best-known perfume houses in France. Jacques Polge, the nose of Chanel, oversees the quality of Chanel #5, Coco and Allure. “He works in the tradition of Chanel, which was founded in the 1920s,” Friedman said, explaining how the perfumery began. “Ernest Beaux, a perfumer at the court of Czar Nicholas II, had created a scent that was very rich in rose and jasmine. When he offered it to Coco Chanel, she said, ‘A woman should smell like a woman and not like a flower.’ In response, he added aldehyde, a starchy smell that gives Chanel #5 its distinctive fragrance of muted florals, creating a scent analogous to an abstract painting.” Friedman, who became curious about the modern French perfume industry after noticing repeated perfume imagery in turn-of-the-century French poetry, said many of the fragrance designers are third- or fourth-generation noses. Jacques Guerlain, the grandfather of the current Guerlain nose, Jean Paul Guerlain, created the house’s signature scent, Shalimar. Jean Paul Guerlain isn’t yet sure which – or if either – of his two grandchildren will be the nose of the next generation, continuing a 175-year-old tradition.
Prospective noses undergo extensive training. Some who show promise attend a class in the south of France, while others enroll in the Institute of Perfumery in Paris for university-level training. “Noses must be able to recognize up to 2,000 individual scents. They must be able to memorize fragrances, so that they are able to smell them mentally,” Friedman said. “Just as musicians compose in their heads, noses create fragrances in their minds. Perfumers are as important to the French popular culture as vintners are.” Just how important is obvious at the Osmotheque, the library of lost fragrances in Paris, which houses 2,000 lost perfumes that have been recreated and stored in ideal conditions. There visitors can sample the cologne made for Napoleon and scents the ancient Romans wore. Like the perfumers throughout the ages, today’s noses consider themselves olfactory poets. It is easy, they say, to create a pleasing scent, but great perfumes have the ability to recall memories and evoke dreams.

Board notes

Highlights of the Nov. 3 board of trustees meeting:
• The university will sell $3.75 million in bonds to pay for the renovation of the Sims Building. Another $1.25 million in bonds will be issued in July 2001.
• Winthrop also will sell $2.4 million in athletic facility bonds to renovate the baseball park, coliseum, softball complex and soccer fields. Money from the sale, along with athletic fees and funds from refinancing the coliseum, will pay for the improvements.
• A third bond package will provide for the lease/purchase of five police cars for $110,000 and 252 personal computers for student laboratories for $230,000.
• The board increased executive master of business administration fees for 2001-2002 by 8.57 percent for in-state residents to $19,000 and 9.27 percent for out-of-state residents to $21,200.
• The board approved a bachelor of arts degree in environmental studies and a B.S. degree in environmental sciences.
Both degrees still need approval from the Commission on Higher Education.

DiGiorgio’s contract extended

Winthrop’s board of trustees voted at its June 9 meeting to extend President Anthony DiGiorgio’s contract to June 30, 2003, adding one year to the president’s current contract. Board members also gave DiGiorgio a rating of “exceeds expectations” during his formal evaluation. The Agency Head Salary Commission recommended, and the state Budget and Control Board approved, a 5 percent raise to $127,947. DiGiorgio has been president of Winthrop for 11 years.

Faculty to delve into Twain and Trevelyan during sabbaticals

The nine faculty members who will be on sabbatical during the 2000-01 year will be researching areas as wide-ranging as labor market conditions and lies in dating relationships. John Bird, associate professor of English, will be on sabbatical for the entire academic year finishing his book, Mark Twain and Metaphor, a study of Twain’s use of figurative language. Bird joined the Winthrop faculty in 1993. Peg De Lamater, associate professor of art and design, spent her fall sabbatical working on an introductory art appreciation textbook, Visual Culture: A Global Perspective. She is writing the Asian section and is primarily responsible for the modern/contemporary section as well. De Lamater came to Winthrop in 1992. Bob Edgerton, music professor, will be taking spring semester to concentrate on choral music arranging, composing new settings of existing melodic material for combinations of singers with or without instrumentation. While on sabbatical, Edgerton plans on mastering computer music-writing technology, completing ongoing creative projects and creating new choral arrangements. Bill Naufftus, English professor, will be heading to libraries in the U.S. and England to research and begin writing a book on three related British historians: Lord Macaulay; his nephew, Sir George Otto Trevelyan; and Sir George’s son, George Macaulay Trevelyan.
Naufftus, who received both the Outstanding Junior Professor and Kinard awards, has been the Margaret Bryant Professor at Winthrop since 1997. During his 2000-01 sabbatical, he will be looking at the changes and continuity in their reports on 18th-century British history. Naufftus has been a faculty member since 1980. Terry Norton, education professor, spent fall semester expanding his booklet, “Literacy Strategies,” into a full-length book. Norton, who joined the Winthrop faculty in 1981, said he is designing his book to help teachers with the S.C. curriculum standards for reading/English language arts. Darrell Parker, economics professor and director of the Economic Development Center, will be interviewing and surveying S.C. employers during spring semester about how they are handling labor market conditions. Parker has received Phi Kappa Phi and First Union Bank excellence in teaching awards, was named Winthrop’s Distinguished Professor and was the university’s first Grier Professor of Business Administration. He joined the Winthrop faculty in 1985. Wilhelmenia Isaac Rembert ’72, associate professor of social work and associate vice president for graduate studies, will spend a year researching leadership, strategic decision-making, the management of complex change, and dealing with diverse populations in P-12 public school systems, higher education and business organizations to see what parallels exist in those sectors. She also will engage in a limited professional social work practice. Rembert, who was acting dean of the College of Education last year, is a member of the Charlotte-Mecklenburg School Board. Rembert says she plans to visit other communities to learn how they address public education reform and progress. She has been a Winthrop faculty member since 1979.
Marilyn Smith, management professor, worked for Solectron Corp., an international provider of electronics manufacturing services to original equipment manufacturers, during her fall sabbatical. She did process engineering and applied her experience as a Malcolm Baldrige National Quality Award examiner. Smith joined the Winthrop faculty in 1989. Jennifer Solomon, associate professor of sociology, will be doing research for several books during her academic-year sabbatical. She will be working with her graduate school mentor on a book about lying between couples in a dating relationship, work with a former student on a book about identity theory and self-esteem, and collaborate on a book about older people giving away valued possessions. Solomon, who joined the Winthrop faculty in 1990, has been named Phi Kappa Phi Faculty Mentor three times and is a Faculty Student Life Award winner.

Fulbright recipient Schweitzer to teach anthropology in Albania

International aid is normally thought of in terms of food, shelter, water or medicine. However, Mary Schweitzer is one of approximately 2,000 U.S. Fulbright grantees in the 2000-2001 year who will offer the gift of education to the international community. Schweitzer, a professor of anthropology, retired in December with more than 22 years of service at Winthrop. She will leave in February for a six-month term in Albania. There, she will teach anthropology courses on ethnicity and cross-cultural conflict resolution at the University of Tirana, located in the country’s capital. “I hope to enrich the students’ understanding of the dynamics of ethnicity and the sources of ethnic identity. This will encourage them to be able to understand and appreciate the diversity of other people,” said Schweitzer. Schweitzer was selected for the grant based on her academic and teaching qualifications, publications, experience in teaching in the international community and letters of recommendation. Fulbright grants are highly competitive. Schweitzer was one of only three U.S. professors chosen to receive an Albania grant. The Fulbright Program is committed to building a mutual understanding between people of the United States and the rest of the world.

Wilcox gets Governor’s Award in humanities for academic achievement

The South Carolina Humanities Council has presented Earl Wilcox with a Governor’s Award in the Humanities for Academic Achievement for 2000. Since 1991, 25 individuals and organizations have received the Governor’s Award, the highest honor awarded for public scholarship and service in the humanities. Wilcox is the first award winner from Winthrop. He was selected for his impressive scholarly productivity during a 30-year career at Winthrop.

Faculty member, art student work together to help Liberian museum get back on its feet

In 1990, the Africana Museum at Cuttington University College had a collection of African art that was the envy of the continent. The museum proudly displayed nearly 3,000 catalogued pieces showcasing craftsmanship from throughout Africa. Today, the museum is literally a shell of itself. Only 153 pieces remain, the rest having been looted or destroyed, part of the toll of Liberia’s civil war. The building itself escaped serious damage because for a time it had been transformed into a mosque. Assistant art and design professor Alice Burmeister and senior art history major Erin Demery have begun the painstaking process of rebuilding the Africana Museum. Burmeister and Demery, with a Research Council grant for faculty and student research, spent a month this summer at Cuttington University College in Suacoco, Liberia, a village about 2½ hours north of the capital city of Monrovia. There they photographed and videotaped the museum as they found it – windows broken out, debris on the floor. Then they began cleaning it up.
They documented the pieces that were left and wrote descriptions. For each, they assessed its ethnic origin, region where it was created and probable date it was made. They also described the size, condition and function of the piece, as well as how it was acquired. The last time such a thorough inventory was done was in 1976.

With the reopening last summer of the Africana Museum at Cuttington University College, assistant art and design professor Alice Burmeister (left) and senior art history major Erin Demery have begun the painstaking process of rebuilding the museum.

Burmeister thinks many of the missing pieces may have been purchased by unsuspecting tourists or diplomatic personnel. She thinks they may have thought they were buying crafts from local artists. To give the local people a sense of ownership in the museum, she and Demery left the doors open while they worked. “We wanted the people to feel accountable for the items and to feel a sense of cultural pride,” Burmeister said. She said the college hopes to use the museum as a teaching tool in anthropology, art and aesthetic appreciation courses. Burmeister said the Liberian government sees education as the key to the country’s recovery. However, first basic needs must be met. “Their first priorities are providing adequate food, clean water and electricity,” she said. The seven-year civil war devastated the country, which was founded by freed American slaves, many from York County. Yet, the country is making progress. “The people are hard workers and have a great deal of resiliency,” Burmeister said. “They are bouncing back.” Before they left Suacoco, Demery curated an opening exhibition of the wooden sculptures, handwoven baskets and intricately carved doors. “It was a great learning experience for me,” she said. “I’d never done a show before.” Demery plans to continue her work with the museum as part of her cooperative experience.
She is constructing a database for the Cuttington Web site of all the pieces currently in the museum as well as those that are missing. She wants people to know the museum has reopened and hopes someone will recognize a missing piece and return it.

Cited for his work on American writers, Wilcox is the director of the Robert Frost Society and founder of The Robert Frost Review. Now in its 10th year, the journal is considered to be the leading publication for scholarship on America’s beloved poet. Wilcox directed an international Frost conference at Winthrop in 1997. One of the attendees was Lesley Lee Francis, Frost’s granddaughter, who wrote a letter supporting Wilcox’s nomination. Wilcox also served as the second president of the International Jack London Society and co-founded the Philological Association of the Carolinas, now boasting more than 300 members. The association brings teachers and scholars together to read papers, conduct seminars and hold roundtable discussions on writers as diverse as Frost and Toni Morrison. For six years, Wilcox was executive director of the College English Association, a national organization, which was housed at Winthrop from 1994-1999. Wilcox retired in December after teaching at Winthrop for 30 years.

Obituaries

Glen Broach

Glen Broach, political science professor and chair of the Department of Political Science since 1984, died Nov. 14 at his home. Dr. Broach, 58, had since 1986 edited a monthly newsletter, “The Carolina Report,” which covered S.C. government and politics. He frequently was quoted in the media on state political issues. While at Winthrop, Dr. Broach won a Fulbright Lectureship Grant for Kiev, Ukraine, 1998, and a Group Fulbright Support Grant for research in Poland, 1995. He participated in a Summer Fulbright Study Tour of Poland in 1992. He also had directed Winthrop’s Junior Scholars Camp for Gifted High School Students since 1989. A Phi Kappa Phi mentor from 1991-96, Dr.
Broach won the honorary society’s Teaching Excellence Award for 1992-93. Dr. Broach taught political science courses on state and local government, public policy, public opinion, public administration and South Carolina government. He came to Winthrop in 1984 from East Tennessee State University, where he served as department chair for six years of his 12-year tenure. Dr. Broach earned a bachelor’s degree in political science from Spring Hill College in Alabama and a master’s and Ph.D. in political science from the University of Alabama. Memorials may be made to the Winthrop University Foundation in memory of Dr. Glen Broach.

STUDENT SPOTLIGHTS

Students of note

Three chemistry majors had summer internships at Atotech Corp., an international manufacturer of electroplating chemicals headquartered in Rock Hill. The students were: Derek Elgin, a junior from Hartsville, SC, and Heather Jessee, a sophomore from Winnsboro, SC, who both worked in the Research Chemistry Division, and Faith Roberts, a senior from Lugoff, SC, who worked in the Analytical Services Division.

Paul Yates bewitched by magic of Disney

When John Paul Yates talks about Disney, he uses the pronoun “we.” After two internships and a fellowship ambassador program with Disney World in Florida over the past two years, Yates joined the theme park conglomerate as entertainment manager with Walt Disney Resorts in January after his December graduation. Disney, with its cartoon characters and magical fantasies, has played a significant role in Yates’ life. “My parents did it to me. I went to Disney three times a year since I was age zero,” said the Goose Creek, SC, native, who also worked in Disney stores as a teen-ager to earn extra money.

Colombian trades order of the courts for the stolid order of the business world

Martha Rojas has a desire to instill order, previously in a life prosecuting money launderers, now in the staid world of business.
“I love to organize things,” said Rojas, who is one of five consultants in Winthrop’s Small Business Development Center. The graduate assistant helps the center’s clients prepare business plans, devise startups and coordinate services. “I like to pull different elements together.”

Prosecutor turned businesswoman Martha Rojas says she enjoys her assistantship with the Small Business Development Center because she likes to organize things.

Two years ago, Rojas left her job as a Colombian prosecutor to work on a business administration degree at Gadsden State Community College in Alabama. She wanted to expand her skills and learn English. Rojas found out about Winthrop from a Charlotte friend and enrolled this academic year to earn her M.B.A. “This is my first business experience,” she said of her work with the center. However, Rojas knows plenty about the illegal side of commerce. From 1996 to 1998, she worked as a prosecutor with the Office of the Prosecutor General handling international money laundering cases. Rojas can’t talk much about her former work except in general terms. She collected evidence, decided upon charges or plea-bargains, seized assets and – unlike American prosecutors – determined whether to grant bond to suspects before a trial. “It was a special unit created in my country in 1996,” Rojas said. Her country and the United States provided training at Quantico, VA, with American law enforcement, drug and customs agents and American prosecutors. The complex cases they handled involved criminals who hid illegal money, such as from sales of drugs or weapons. Her team traced complex international transactions through banks, the stock market, dummy companies and other economic avenues. Rojas’ team consisted of accountants, police officers and business experts who shared information with other, similar teams. “The transactions go from one country to another and from one place to another in different banks and financial institutions,” she said.
Rojas had worked three years earlier as a prosecutor in the anti-extortion and anti-kidnapping unit and before that, investigating illegal behavior of public employees, both in Armenia, Colombia. She fell into the administrative work while in law school, when her boss, the dean of the law school, took a high-ranking position similar to U.S. inspector general. She was one of several people from her law school, Andes University in Santafe de Bogota, to move with him. Rojas moved from the administrative and judicial environment to the prosecutor’s office because prosecutors were familiar with her work. Once in the prosecutor’s office working with money laundering, she found she wanted to learn more about the business world. “My business degree will complement my legal background, making it possible for me to work in international business,” Rojas said.

Yates arrived at Winthrop in fall 1996 intending to major in political science. His booming voice and outgoing personality attracted the attention of drama professors, and he quickly switched to theatre performance. By his sophomore year, he was off to Orlando as part of the opening team of Disney’s Animal Kingdom for a six-month internship. He asked to work in attractions, merchandise or transportation. Disney fulfilled two of his three requests, making him a driver on the Kilimanjaro Safaris attraction. “I drove an 8½-ton truck with no a.c. that carried 32 people,” Yates said. “They taught me how to drive it in three days.” After the driving lessons came script memorization and training on the headsets. Yates, who admits to being a little obsessive about his time at Disney, counted up his trips over the two-mile Safari course: 5,395. He also participated as part of the test team for night safaris. Yates returned to Florida in summer 1999 to work as a puppeteer intern in the “Legend of the Lion King” in Disney’s Magic Kingdom. He estimated he performed in more than 600 shows for more than 250,000 people.
Yates may talk about his internship fondly, but he says it was his time as a fellowship ambassador that cemented his relationship with Disney. “I sold my soul to those people willingly,” he said. During the yearlong international program, Yates lived in a housing complex with ambassadors representing more than 20 nations. He had roommates from Holland and Zimbabwe and neighbors from Croatia and Italy. Group members didn’t always agree on philosophies, but they got along exceedingly well through their work details, cultural presentations and yearlong seminars on such issues as leadership styles, cross-cultural understanding and hospitality industry dynamics. “We all just absolutely loved each other,” Yates said. “They’re the 80 best friends I have in this world.” For their cultural presentation on the United States, the American students found little-known history tidbits to present. “We showed them that during the great immigration waves of the late 1800s and early 1900s, ethnic names sometimes got ‘Americanized.’ We took their last names and did the same,” said Yates. He served as the emcee for the production dressed as Benjamin Franklin.

Students of note

Stephanie Koester, a senior chemistry major from Valparaiso, IN, worked as a summer chemistry intern with Sherwin-Williams in Cleveland, Ohio. The position was one of the nationally advertised experiential opportunities in chemistry published by the American Chemical Society.

Ben Franklin aka Paul Yates has been a fixture at Disney World since he was a kid. When he graduated in December, Yates joined the theme park conglomerate as entertainment manager with Walt Disney Resorts.

The yearlong experience certainly had its scary moments. “We had very intense bonding moments, such as the time when the Moroccan guy was bitten by a scorpion at 3 a.m., and I had to drive him to the hospital because he was allergic to it.” And there was also the anxiety when Hurricane Floyd threatened to hit Florida but veered away.
One of the highlights for the ambassadors was working at Disney during the millennium celebration on Dec. 31, 1999. Yates claims he didn’t sleep for 72 hours but made it through the hectic week of preparations and New Year’s Eve with a few naps and lots of coffee. The theme park opened on Dec. 31 at 7 a.m. and reached capacity within half an hour. “No one can do that like we can,” he boasted of the midnight celebration, complete with the Spaceship Earth countdown and 45 minutes of fireworks lighting up the sky. Another memorable experience took place five months later when Disney and McDonald’s brought 4,000 children from all over the world who had performed outstanding deeds. “We heard amazing stories from these kids,” Yates said. Along the way, Yates has met many celebrities, including actors Drew Carey, Christopher Reeve, Kevin Bacon, Courtney Cox, Jennifer Aniston and retired basketball player Michael Jordan. The end of the international ambassador program wasn’t the end of Disney’s association with many of the students. Yates estimated that half found permanent Disney positions in the United States and around the world. As for Yates, he wants to continue the Disney magic. He already has recruited some 10 students at Winthrop to work at the giant Florida theme park. “We’re just looking for good people,” he said with a smile.

when she returns from teaching abroad. This past year, Julie participated in a group exhibit and sold two drawings. Lisa Knisley has been appointed as director of community development for the Fort Mill Area Council. She will manage the chamber’s annual business awards and small business roundtable program. Lisa lives in Rock Hill. Alexa Gordon Roberts has been appointed downtown development director for Chester, SC. She and her family live in Chester.
1998

Jenniffer Austin served as the accompanist for Winthrop’s summer performance of “The Diary of Adam and Eve,” a one-act musical based on Mark Twain’s short story “The Diaries of Adam and Eve.” She lives in Pineville, NC. Jennifer Todd Kapp reports that she received her Master of Science degree in professional counseling last spring from Georgia State University. She and her husband live in Atlanta. Marcus Laster, a model for Barbizon of Charlotte, received an international award for placing second in the jeans competition at the annual International Model and Talent Association Convention held last summer in New York. More than 4,000 contestants were exposed to more than 150 international casting directors, personal managers and agents at the event. Marcus lives in Fort Mill, SC. Jennifer Little was featured in the Greenville News in a story about how she and her roommate decorate their house with flea market finds, treasures from family attics and thrift shop bargains to create a retro look. Jennifer lives in Greenville, SC. Mary Rolfes writes that after working in professional theatres across the U.S., she has settled in Charleston, SC, where she is working as the production stage manager with the Charleston Stage Company. She is in charge of stage management for their 10-show season and teaches acting classes to fourth- through sixth-graders. Mary lives in Mount Pleasant, SC. Wendi Turner has joined the American Heart Association, Lowcountry Division as area director for managing the annual Heart Ball, a social event held in February. She lives in Charleston, SC.

1999

Becky Tauss Adams has been hired as director of community development for the Rock Hill Area Council. She lives in Rock Hill. Dave Alsobrooks has joined Jennings/The Agency as graphic designer. He develops and produces new advertising concepts. Dave lives in Charlotte.
Derek Carlson and a friend have formed V, Inc., a company that will deliver online products that create value for both suppliers and customers of transportation services. The company’s first product, The Shipping Department, is an online marketplace of shipping information, tools, services and carriers specifically for small businesses. Jocquin Fuller, band director at Sullivan Middle School, was named Beginning Teacher of the Year after his second year of teaching. Jocquin is also assistant band director at Northwestern High School. He resides in Spartanburg, SC. Lara Beth Winburn Hardwick writes that she is pursuing a master of church music at the Southern Baptist Theological Seminary in Louisville, KY. She is active in the youth and music ministry at her church. Lara Beth and her husband live in Louisville, KY. Dan Kophazi was featured in a (Rock Hill) Herald story about how he and his wife traveled to Russia and adopted two boys after several months of paperwork and regulations. Dan and his family live in Rock Hill. Kyle Tillman has been promoted to senior tax accountant at Faulkner and Thompson, certified public accountants. Kyle, who lives in Rock Hill, joined the firm in 1999 as a staff accountant. Susan Tucker has been promoted to general ledger manager of the Finance and Accounting Department at Burroughs & Chapin. Susan, who has been with the company since 1995, lives in Myrtle Beach, SC.

2000

Rod Allan writes he is extremely happy in his job at Datastream as a Microsoft SQL technical analyst and that he is doing extremely well. Rod lives in Greer, SC. Jennifer Corbell presented a joint recital this summer at First Baptist Church in Statesville, NC. The program included operatic arias and art songs. Jennifer, a soprano, is pursuing her Master of Arts degree with a concentration in pedagogy at the University of North Carolina at Greensboro and lives in Statesville, NC.
Classroom lessons pay off for business entrepreneur Evidence of Bernie Brown’s Winthrop education adorns his family’s business, the venerable VisuLite Theatre in Charlotte. The jewel-toned curtains represent his interior design classes. Some of his business acumen, such as getting rights to use the name VisuLite, may be traced to time he spent taking Winthrop marketing classes in the early 1990s. Brown and his brother, Chris, bought the old theatre on Elizabeth Avenue in Charlotte almost two years ago. They renovated the inside, putting in new electrical, plumbing and mechanical systems and installing a new floor that would give every seat a good view of the stage. Charlotteans used to flock to the VisuLite Theatre in its heyday because it was the only East Coast theatre with a rear projection system. “It was easier on the eyes,” Brown explains of the system in the theatre that opened in 1938 as the Queen City’s first neighborhood cinema. Residents first saw Rhett and Scarlett in “Gone With the Wind” there, as well as Dorothy, Toto and friends in “The Wizard of Oz.” The VisuLite remained a theatre and art house until the early 1980s, when a series of retail shops occupied the space. The Browns pledged when they bought the theatre that it would return to its intended use, “a place for entertainment and assembly.” They want to help improve the city’s image regarding the arts. “Charlotte has gotten a bad name,” Brown said, because the city withdrew some funding for the arts a few years ago. “People think it is a banking town and is all about money. We want people to know there is support for this kind of thing.” The artsy Elizabeth neighborhood supports the family’s efforts to attract top national and regional musical talent, said the Winthrop alum. The VisuLite brings in jazz, bluegrass, folk and acoustic bands, and hosts amateur poetry readings once a month. “There’s a real need in Charlotte for mid-level places to perform,” Brown added.
There are nightclubs, such as the Double Door and Jack Straws, but after that, a group would have to play somewhere large such as the Blumenthal Performing Arts Center. The VisuLite fills that niche with its capacity for as many as 500 people. “This location is very key,” Brown said. Since the private club opened in November 1999, the owners have built its membership to 3,000 people, mostly in the 25-to-35 age category. Brown estimates he sends out 2,000 e-mail messages a week to let members know what’s happening. The theatre also has hosted several charity events, raising more than $150,000 for such causes as multiple sclerosis and Hands on Charlotte. Brown’s brother Chris handles the bar and personnel, while Brown lines up the entertainment. “We do probably 16 to 18 shows a month” featuring live bands, Brown said. He also opened the theatre this holiday season to host corporate Christmas parties. This winter, the VisuLite will begin showing art, cult and classic films on Sunday, Monday and Tuesday nights. Brown would love to see the place evolve into a stop for such national talent as Willie Nelson and sell tickets for $30 to $40. Until then, the former business major is building up his clientele and trying to make a name for the VisuLite. “I would love to show some of my business professors what we’re doing and talk to them about it,” Brown said, adding that he doesn’t always have much time for long-term planning. “I’m beating it down every day with real life.” − Judy Longshaw Darryl Gomez, who is playing soccer for the Richmond Kickers of the A-League, is a member of the Trinidad and Tobago Under-23 Olympic and National Team. He lives in Scarborough, Ontario, Canada. Ben Hough has joined Design Associates as a graphic designer. Ben lives in Pageland, SC. Adrienne Walters has joined the forestry consulting firm of Shaw McLeod Belser and Hurlbutt Inc. of Sumter as a development and communications assistant. Adrienne lives in Sumter, SC.
Using business skills he learned at Winthrop, Bernie Brown and his brother have bought and revitalized the VisuLite Theatre in Charlotte. Brown keeps in touch with his business professors and is proud to show them what he has accomplished. 22 Winthrop Update • Winter 2000-01 Mike Mullins has been recognized as the District 1 Adjuster of the Year for the Greer Claims Service Center of South Carolina Farm Bureau Insurance Co. He lives in Spartanburg, SC. Ronald Rossie has accepted a position with FedEx Custom Critical as the Mid-South regional recruiter. He is responsible for owner/operator recruiting, contractor relations and fleet expansion and management. Ronald and his wife live in Greer, SC. Phoebe Perry Sortet, a licensed social worker, has joined the staff of Hospice Community Care as a community development coordinator. The non-profit organization cares for terminally ill patients and their families in York, Chester, Cherokee, Fairfield, Lancaster and Union counties. Phoebe lives in Rock Hill. 1991 Theresa Bumstead Hanna reports that she and her husband have adopted a year-old boy from Novokuznetsk, Russia. She and her family live in Blythewood, SC. Vann Newkirk has joined North Carolina Wesleyan College as the director of institutional research. Vann is a doctoral candidate in history at Howard University. Daphine Glenn Robinson reports that she received her Master of Business Administration degree last spring from Charleston Southern University. Upon graduation, she and her husband relocated to Pittsburgh. Gary Simrill of Rock Hill has been selected for inclusion in the Millennium Edition of “Who’s Who in the South and Southeast.” Gary is a York County businessman who has served in the S.C. House of Representatives since 1993. Carol Stewart writes that she received her specialist degree in health and physical education last summer from Augusta State University. She and her daughter live in Augusta, GA.
1992 Leslie Gravett of Lexington, SC, writes that she received her interdisciplinary Master of Arts in theatre last summer from the University of South Carolina. She says she is doing well and working as a drama teacher at Lexington High School. Christina Skelton Hunt writes that she enjoys being a stay-at-home mom with her two children. Christina and her family live in Fernandina Beach, FL. Daryl Bowie has been promoted to branch operations manager at the Nine West office of Founders Federal Credit Union. Daryl, who has his doctorate in metaphysics from the University of Metaphysics, also is an advanced hypnotherapist and counselor through the International Association of Counselors and Therapists and the American Board of Hypnotherapy. He owns Bowie Center of Hypnotherapy. Candice Croxton reports that last spring she transferred to the accounting department of Kanawha Insurance Co. as a staff accountant. She previously served in the marketing department of Kanawha HealthCare Solutions, Inc. Candice lives in Lancaster, SC. Edie Turner Dillé was featured in The (Rock Hill) Herald as mentor of the week last fall. A teacher at York Technical College, she also is the college’s acting adviser for its Web master program and manager of the Education Technologies Center. Edie lives in Rock Hill. Kristi Lynne Herin wants her classmates to know that she is working as an investment specialist for Greenwood Bank and Trust. She lives in Greenwood, SC. Shelia Doyle Jiles writes that she is enjoying experiencing a new culture in her job as a speech language pathologist for the Department of Defense Dependents Schools on Osan Air Force Base in South Korea. Shelia says her role is a dual one. Since U.S. admissions officers can’t make it to Korea, the teachers speak to the students about their schools. So, Shelia will be making her pitch for Winthrop at a local college fair. Meredith Cornwell Nutter performed as “Eve” in the one-act musical “The Diary of Adam and Eve” last summer at Winthrop University.
The musical is based on the Mark Twain short story, “The Diaries of Adam and Eve.” Meredith lives in Rock Hill. 1994 Amy Taylor Amyette writes that she is working as a clinical counselor with the South Carolina Department of Mental Health in Kershaw County. She says she and her husband spend most of their spare time backpacking and on the water at Lake Wateree. They live in Camden, SC. Brad Bryant has been named managing editor of The Laurens County Advertiser, where he previously was sports editor. He and his wife live in Laurens, SC. Laney Vehorn Robinson reports that she loves her job as an executive associate for First Union National Bank in Charlotte and has been with the bank for six years. She and her husband live in Pineville, NC. Laurie Carpenter has a new job as coordinator of marketing and recruitment for Graduate Studies at Winthrop. Previously, Laurie was an administrative specialist in that office. She lives in Rock Hill. 1993 Paula Norris Lollis reports that she is working as a business development manager with Gates/Arrow Distributing, a division of Arrow Electronics, in Greenville, SC. She and her husband live in Greer, SC. Chad Smith has accepted a position as an attorney with the 16th Circuit Solicitor’s office in York, SC. He lives in Rock Hill. Mark Sweetman has joined Andrew Jackson High School as assistant principal. Mark, who has been in education for 21 years, served as guidance counselor at Andrew Jackson from 1988-91. He and his wife live in Sumter, SC. 1995 Greg Blackmon, a certified public accountant, has formed a partnership and begun Robinson-Blackmon Tax & Accounting Service. Greg lives in Lancaster, SC. Morri Creech has won Kent State University’s Stan and Tom Wick Poetry Prize for a first book of poetry, Paper Cathedrals, published by Kent State University Press. His poems have appeared in journals such as the Sewanee Review, Poetry and the New Criterion.
An assistant professor at McNeese State University, Morri teaches in the M.F.A. program. He and his wife live in Lake Charles, LA. Richard Jenkins completed his M.B.A. this fall from the University of Phoenix. He and his wife, Undrea Capers Jenkins ’92, reside in Winston-Salem, NC. Carla Simon, assistant principal at Main Street Elementary School in Lake City, SC, was featured in a summer issue of Today’s Woman for her dedication to children. In addition to her work at school, Carla is extremely active in community activities with young people, including a traveling mentor program and the local Girl Scout troop. She also is a part-time kick boxing instructor. Carla lives in Lake City. Lauri SoJourner Yeargin of Rock Hill has received the William Leftwich Award for Outstanding New Professional from the National Association of Student Personnel Administrators Region III. Student development coordinator for orientation and community service-learning at Winthrop University, Lauri was recognized for re-establishing a community voluntary service class for credit, creating a Service Learning Center in the Department of Student Development, managing the Close Scholars program and leading the annual Volunteer Fair. She also co-chaired the S.C. College Professional Association’s fall conference and was a volunteer with the Worthy Boys and Girls camp. 1996 Tabatha Barber-Crank has joined the staff of Hospice Community Care as a social worker and will handle admissions and provide bereavement services to patients and families. She lives in Great Falls, SC. Virginia Helms Bowman has joined the staff of Keystone Substance Abuse Services as an alcohol and drug safety action program counselor. She previously worked with the York County Department of Social Services. Virginia lives in Rock Hill. Paula Hobbs is working as a career counselor for Northwestern High School and the Applied Technology Center.
She helps students learn about their abilities and the jobs available to them in their areas of interest. Paula lives in Rock Hill. Blackmon Huckabee has joined Murdock Law Firm as a staff member. He lives in Rock Hill. Kimberly Porter McVay has been chosen as the 2000-2001 teacher of the year at Dacusville Elementary School. She also is serving as a member of the board of directors for the Pickens County First Steps for School Readiness. Kimberly and her husband live in Easley, SC. Robert Ouzts of Allen Tate Realtors received the Sertoma Club’s Sertoman of the Year award for outstanding service. Robert, who provided publicity for the club, lives in Rock Hill. Alicia Picaro reports that she is teaching eighth grade social studies in Hollywood, SC. Alicia lives in Charleston, SC. Dawn Pompeii writes that she is working as the director of the annual fund at the University of South Carolina. Her husband, Ron Pompeii, is enrolled in pharmacy school at USC. They live in Columbia, SC. Karen Onspaugh Pope writes that she has received her master’s in elementary school counseling from The Citadel and teaches at Fairfield Primary School. She and her husband live in Winnsboro, SC. Shari Schlicht Tanner writes that she graduated from the Medical University of South Carolina last spring with a master’s degree in rehabilitation sciences, physical therapy. She and her husband live in Bamberg, SC. Michael York writes he is working as assistant director of operations at Tropical in Charlotte. He lives in Rock Hill, SC. 1997 Kristen Cowen writes that she is working as a financial consultant with Salomon Smith Barney and lives in Charlotte, NC. Julie Goolsby writes that she is teaching English in Prague, Czech Republic. Julie has been accepted into the M.F.A. painting program at George Washington University in Washington, DC, but has deferred her enrollment until fall 2001.
Angela Denise Ghent writes that she has served as a missionary to Russia and Guatemala. Now a sixth-grade teacher at United Faith Christian Academy in Charlotte, she is recording a worship CD. Angela lives in Lancaster, SC. Anne Holladay has been named as the director of public information for the Chester County School District. She previously served as Great Falls Middle and High School’s drama instructor. Anne lives in Chester, SC. Mary Humbach McKelvey has been named program manager for the Center for Management & Leadership at York Technical College. She and her husband live in Rock Hill. David Raines has joined the First Reliance Bank as credit manager. He lives in Florence, SC. Mark Rodman writes that he works as chief operating officer for Preservation North Carolina, a statewide nonprofit organization that promotes the preservation of historic structures throughout North Carolina. He received a Master of Arts degree in historic preservation planning from Cornell University last summer. Mark lives in Raleigh, NC. Kelly Hoffman Weiss is working as a teacher at Oakbrook Elementary School. She and her husband, John Weiss ’88, have three children and live in Charleston, SC. 1988 Paige Lusk Cromer has been named Teacher of the Year for Whitmire Elementary School. She has taught at Whitmire schools for 11 years and has twice been recognized in Who’s Who Among America’s Teachers based on nominations by former students. Paige and her husband have two daughters and live in Newberry, SC. Peg Fetter reports that her home is featured in Metropolitan Home American Style by Dylan Landis, published by Clarkson Potter. The coffee table book, which includes designer Michael Graves, was released last fall. Peg lives in St. Louis, MO. Lynn Oshields LeGrand, a second-grade teacher at Rosewood Elementary School, has been named Teacher of the Year for 2000-2001 for the Rock Hill district. Lynn lives in Rock Hill.
Lori Edstrom Liedy writes that after living in Austria for six years, she and her husband have now been back in the United States for three. The Liedys have two children and live in Denver, CO, where Lori is a stay-at-home mom. John Weiss has been named advertising director of The Summerville Journal Scene. He and his wife, Kelly Hoffman Weiss ’87, have three children and live in Charleston, SC. 1989 Paula Morgan Doolittle reports that she has been named Teacher of the Year for 2000-2001 at Edwards Middle School where she is a guidance counselor. She says she truly believes that the education and training she gained at Winthrop shine through professionally and that Winthrop taught her leadership and determination. Paula lives in Central, SC. Susan Greene Fischer participated in the ribbon cutting service last summer at the opening of Greene’s Funeral Home Northwest Chapel, the Greene family’s second funeral home in Rock Hill. Susan, who lives in Rock Hill, is manager for the chapel. Mary Jane McGill has been named associate director of Keystone Substance Abuse Services. Mary Jane, who had previously worked at Keystone in part-time positions, has 19 years’ experience in the field. Bill Pfister has been appointed by the International Mission Board to fill an evangelism assignment in Buenos Aires, Argentina. He has been pastor of Swift Creek Baptist Church in Hartsville, SC, for the past two years. Angie Meetze Wilkerson reports that she is the owner of AMW Tutorial Service, which works with children in grades 1-8 in math, reading, language arts, writing and study skills. She and her husband have two sons and one daughter and live in Gaffney, SC. 1990 Rhonda Short Hackworth reports she is working on a Ph.D. at the University of Missouri-Kansas City Conservatory of Music. Rhonda and her husband live in Kansas City, MO. Mark McCall has been named a partner of C.C. McGregor & Company, L.L.P.
He received his master of taxation degree from the University of South Carolina in 1991 and has been a CPA since 1994. Mark, who lives in Columbia, SC, has extensive experience in the areas of state and federal taxation for individuals, corporations, estates, fiduciaries and partnerships.
Winthrop’s melting pot forges new life in new country
Winthrop was in Victoria Uricoechea’s destiny. Although she was stymied in her first attempt to study at the university, she’s back and going for her second Winthrop degree. A native of Bogota, Colombia, Uricoechea spent her senior year in high school as an exchange student in Plymouth, WI. She enjoyed her U.S. experience so much, she wanted to stay in this country to complete her college education. “I have an aunt who lives in Charlotte, so my parents said I could stay with her and go to Winthrop,” she remembered. Uricoechea happily settled in with her aunt and began taking classes at Central Piedmont Community College to prepare her for the TOEFL English language proficiency test, which is required of foreign students wishing to study in the U.S. That’s when she ran into her roadblock to Winthrop in the form of a young man she began dating. When Uricoechea’s parents learned 18-year-olds could get married in the United States without parental approval, they whisked her home. Five years and one undergraduate business degree later, Uricoechea decided to try again. At a more mature 23, she enrolled in Winthrop’s M.B.A. program. Once again, a young man caught her eye. This time, her parents approved. Patrice Mansueti ’88, M.B.A. ’90 was a Frenchman taking classes at Winthrop under the ESICAD program. “I got here in August and we started dating in September,” Uricoechea said. In December of 1988, she graduated with her master’s in business administration, and Mansueti graduated with his B.S. in finance. They flew to Bogota and were married over the holiday break before Mansueti entered Winthrop’s M.B.A. program. While she waited for him to graduate, Uricoechea taught Spanish in the Rock Hill schools and at Winthrop while she began work on her M.A. in Spanish. When Mansueti graduated, the couple moved to France. However, after a little over a year, they decided to relocate to Colombia where they felt there were better job opportunities. Shortly after arriving, Mansueti got a job with ING, a Dutch bank, and Uricoechea had their first child. Seven years went by. By then, Mansueti was working for Bank of America and Uricoechea had given birth to their second daughter. However, beneath their happy exterior, worries loomed. The economic situation in Colombia was deteriorating, and violence and corruption were commonplace. “The threat of kidnapping was part of daily life. Although we wouldn’t see it, we knew that the drug lords and the guerillas were powerful. With my husband being a foreigner working for an American company, we were worried about him and about our daughters,” Uricoechea said. For two years, Uricoechea and Mansueti thought about leaving their home. Uricoechea said they were not alone. She said that because of the violence, the country is losing the engineers, architects and other educated professionals not only to the U.S., but also to Costa Rica, Spain, Chile, Argentina and other Latin American countries. Last summer, Mansueti asked for a transfer to Charlotte. Uricoechea arrived in August and re-enrolled in Winthrop’s M.A. program in Spanish. She also is teaching Spanish classes at the university. The girls followed in September. Speaking French and Spanish, but no English, they enrolled at Ebinport Elementary in Rock Hill. “At first, the little one would cry every day,” Uricoechea said. “She said she would ask to go to the bathroom, and no one would understand her.” However, within two months, with the help of understanding teachers and a Dominican Republic classmate, they were speaking English and reporting to their mother that they are very happy. “Kids are like sponges. They learn languages so quickly. It is much easier to learn at 5 and 8 than at 18 and 20. My older daughter is making 90s on her spelling tests. A girl from Mexico has entered the school, and she wants to help her make the transition.” Finally, in November, Mansueti joined his wife and children. The multinational family is back at Winthrop where their story began. − Gina Carroll Howard
Victoria Uricoechea, Patrice Mansueti and their daughters Nathalie and Stephanie are truly an international family, melding their Colombian and French heritage in their new life in the United States.
Ruthie Ayers McCraw writes that she is a stay-at-home mom to her two children. She and her family live in Easley, SC. Christine Sherman reports that she is a major in the U.S. Army working at the Training and Doctrine Command (TRADOC) at Fort Monroe, VA. She has completed 18 years of active duty service and is currently involved in redesigning the Army for the 21st century. Christine says the Army will demonstrate the new Force XXI Division next spring at the National Training Center in Fort Irwin, CA. 1981 Tamah Hamlin Day reports that she is now serving as director of elementary education for the Pickens County School District. Tamah has two children and lives in Easley, SC. Frank Greene participated in the ribbon cutting service this summer at the opening of Greene’s Funeral Home Northwest Chapel, the Greene family’s second funeral home in Rock Hill. Celeste Herndon reports that she has retired as safety officer with the Public Library of Charlotte and Mecklenburg County after more than 10 years.
Celeste, who lives in Charlotte, says she would love to hear from classmates. Jeannie Harris Jamieson participated in the Santee Cooper Energy Education Seminar last summer. The seminar was held for teachers, principals and administrators. Jeannie lives in Fort Mill, SC. Bert Owens writes that he is working as the vice president of West End Retirement Center Inc. and the assistant football and baseball coach at Easley High School. He lives in Easley, SC. Raymond Tucker has been named as the director of the Piedmont Choral Ensemble. He also serves as a minister of music at First United Methodist Church in Charlotte. Raymond, who lives in Batesburg, SC, enjoys antiquing, cooking, historical research, entertaining and opera. Mary Mallette Jenkins Wood writes that she is working as an elementary guidance counselor at Lesslie Elementary School. She and her husband have two sons and live in Rock Hill. 1982 Katrina Greene received an M.A. and Ph.D. in developmental psychology from the University of Virginia. She is in a tenure-track position in the Department of Human Development at Cornell University and this year is a visiting scholar in the Department of Psychology at the University of Michigan-Ann Arbor. Shelia Ann James reports that she is working on her M.A. in conflict resolution at Columbia College, Columbia, SC, where she resides. 1984 Amanda Frick Maghsoud was promoted to associate vice president for finance and business at Winthrop University. In addition to her responsibilities as university treasurer and controller, she is responsible for the university’s budgets. She also serves as the university liaison with the offices of the state Budget and Control Board and state Treasurer. Amanda, her husband, son and daughter live in Rock Hill. Pattie Dove May participated in the Santee Cooper Energy Education Seminar last summer. The seminar was held for teachers, principals and administrators. Pattie and her husband live in Rock Hill. Brenda Thompson McCorkle, formerly director of membership for the York County Regional Chamber of Commerce, completed a week-long professional development program last summer with the Institute for Organizational Management at the University of Georgia. Upon completion of the program, Brenda was promoted to the position of vice president for membership and marketing. She lives in York, SC. 1983 Patsy Bowman, reading specialist at Fort Mill Elementary, will be traveling to China for three weeks in February as part of the People to People Ambassador Program. She and the other instructors on the trip will share their teaching methods with Chinese teachers and study the country’s culture. Patsy, who lives in Fort Mill, SC, traveled to Japan in 1998 as a Fulbright Memorial Scholar. She looks forward to sharing her experiences with her students when she returns. Meredith Smith Cadallader is very involved in her church activities in her hometown of Holiday, FL. Meredith teaches a home Bible study class and is a member of the church’s music ensemble. Lisa Campbell Carlton has been named placement director at Swofford Career Center where she served as the marketing instructor for 10 years. Lisa and her husband live in Greer, SC. Glinda Price-Coleman has resigned her position as executive director of the Chester Downtown Development Association. She lives in Chester, SC. Stephen Swan has joined the staff of Drayton Hall as the museum shop assistant. Stephen, who has more than 10 years of retail experience, lives in Charleston, SC. James Watts writes that he is working as a senior consultant at Siebel Systems. He and his wife have three children and live in Purcellville, VA. David Casey reports that he was named state manager for Cooperative Care Planning Services. He and his wife have twin boys and live in Rock Hill. Kay Roberts Cauthen, director of Cauthen Funeral Home in Lancaster, SC, was named vice president of the S.C. Funeral Directors Association for 2000-2001.
She and her family live in Lancaster, SC. Steve Clark reports that he is working as the southeast area manager for Kohler Co. in the generator division. His wife, Eileen McManus Clark, is a homemaker. They have three children and live in Columbia, SC. Scott L. Coleman, an amateur historian, found a Confederate cannon shell and Parrott shell in the muddy bottom of the Catawba River. His interest in the Catawba cannons began in the mid-’80s when he taught at what was then Castle Heights Junior High. Scott lives in Whitmire, SC. Larnie Lewers, who has been an educator for more than 25 years, has been appointed assistant principal at Southside Middle School. For the last three years, she served as principal at Johnakin Middle School in Marion. Larnie lives in Florence, SC. Theresa Monts Nelson writes that she is working as an occupational therapist. She and her husband have two sons and live in Mount Pleasant, SC. Susan Crowther O’Brien, Microsoft certified systems engineer, owns her own company, Alar Productions Inc., a Web and database design company. She resides in Crownsville, MD. Chris Rolph writes that he is living in Brisbane, Australia, where he teaches at a private high school. Wade H. Witherspoon III has been named assistant director of Communities in Schools. He works with the CIS daycare as well as serves in a disciplinary role. Wade and his wife have two children and live in Rock Hill. 1985 Cathie Cooper Gober writes that she was selected as Teacher of the Year for 1999-2000 at Sugar Hill Elementary in Buford, GA. She has taught for 14 years in the Gwinnett County Schools. She and her family live in Alpharetta, GA. Sara Edstrom Lang reports she and her husband, John Lang, returned from Russia where they finalized the adoption of their two daughters. She and her family live in Beaufort, SC. Cynthia Washington Williams has received her Master of Arts degree in counseling from Webster University’s Metropolitan Campus in North Charleston, SC.
Cynthia, who lives in Charleston, works as a clinical counselor at the Charleston County Detention Center. 1986 Stewart Berry writes that he is working in outside sales for Rental Service Corp. He and his family live in Greer, SC. Jackie Cooley-Finger has been named guidance counselor for Swofford Career Center where she had served as placement director for the past 12 years. Jackie and her husband live in Chesnee, SC. John Harp has been named as the dean of students at Cornell College. He previously served as the associate dean of students at Cornell and has been in the field of college administration for 13 years. John lives in Mount Vernon, IA. Robert McDonald has been named part-time assistant dean at the Virginia Military Institute. He also teaches part time in the Department of English and Fine Arts and continues as director of the Writing Across the Curriculum Program. Robert joined the VMI faculty in 1992. He and his wife live in Lexington, VA. Robin Smith writes that she is teaching in McClellanville, SC, and also has a business with a high school friend specializing in custom window treatments. Active in her church, Robin lives in Georgetown, SC. 1987 Derrick Alridge was featured in The (Rock Hill) Herald last fall for teaching teachers, administrators, policy makers and historians how the civil rights movement affects schooling and education. Derrick is an assistant professor at the University of Georgia. Judy Alston writes that she was recently appointed chair of the Department of Educational Administration and Supervision at Bowling Green State University. She lives in Toledo, OH. Tracy Holcombe Craven, a first-grade teacher at Oakdale Elementary School, has been awarded one of the Milken Family Foundation’s National Educator Awards. She received $25,000, which she says she will use to start a college fund for her infant son. Tracy also will be traveling to Hollywood, CA, where she will join the other winners for the year 2000.
The Cravens live in Rock Hill. Johnny Dewese has been named area supervisor II at the Lancaster Area Office and Training Center of S.C. Vocational Rehabilitation. Johnny and his wife live in Great Falls, SC. Chandra Dillard reports that she was elected to a four-year term on the Greenville City Council. She lives in Greenville, SC. 2000-2001. She and her husband own a Christmas tree farm and live in Pelzer, SC. Katherine McGinnis Sammons writes that she is working as the vice president of information services for Follman Properties. She says she would love to hear from fellow classmates. Katherine and her family live in Ballwin, MO. Martha Sentelle-Brown wants her friends to know that she is head of interior design and marketing at the Clark Group, Inc., an architectural, space planning and design/build firm in the Harbison area of Columbia. She and her husband have one son and live in Irmo, SC. They enjoy traveling and raising Labradors and going to Georgia football games. Sally Tyler-Shive reports that she has just completed her first year back in Chester, SC, where she serves as the curriculum coordinator for Chester Middle School. She and her daughter reside in Rock Hill. 1977 Susan Brunson Barrett of Charleston, SC, writes that she enjoyed having dinner and spending the evening with fellow class of ’77 members Joanne Baines Abernathy of Gaffney, SC; Ginger Barfield of Cayce, SC; Rhetta Moore of Mount Pleasant, SC; and Susan Whittier Vinson of Charleston, SC, at the home of Susan Clarkson in Charleston. She says it was great to get together with old friends. Pam Griffin Boiter reports that she is relocation director for Rigby Co. Realtors in Greenville, SC. She and her husband reside in Greenville. Michael Dearing was named information systems audit specialist for First Citizens Bank. He lives in Garner, NC. Doug Echols, mayor of Rock Hill, was elected to the Municipal Association of South Carolina’s board of directors. Doug and his wife reside in Rock Hill.
Marsha Broach Haselden reports that she is now employed by the Gaston County Schools as the media coordinator at Pinewood Elementary. She and her husband live in Stanley, NC.
Robert Roberts writes that he has retired from the Kershaw County Schools as a teacher and coach. Robert lives in Kershaw, SC.
Betty Phillips Robinson has been named the 2000 Teacher of the Year at Guinyard School in St. Matthews, SC, where she and her husband live.
Jerry Thomas has been named superintendent of Union County public schools. For the past six years, he has served as deputy superintendent. Jerry and his family live in Wingate, NC.

1978

Ben Johnson is working as an attorney with Robinson, Bradshaw and Hinson in Rock Hill.
and live in Mauldin, SC.
this year. She lives in Greenville, SC.
Kathryn Leonard Hamilton reports that she moved to a brand new school
Anne Ledford has been named a 2000 Educator of Distinction by the
James Lyon says he still is serving as a parish priest at the Church of the Good Shepherd in Columbia, SC. This year he was inducted into the Anglican Priests Eucharistic League and admitted as a priest associate of the Society of the Holy Cross. He and his wife, Sallie Leslie Lyon '79, live in Columbia, SC.
Anita Wilson writes that she was happy to see her former roommate, Rosey Fender Anderson of Barnwell, SC, when Rosey served on the Southern Regional Education Board site visit team to her school, Laurens District 55 High School, last spring. Anita, who lives in Simpsonville, SC, works as administrative assistant for curriculum and instruction at LDHS. She says they enjoyed visiting and getting caught up on family news.

1979

Deana Lemmons Blanton writes that she is a partner with her family in two businesses, Catalog Clothing and Decorative Fabrics. She and her husband live in Gaffney, SC.
Karen Greene Frazier has been chosen to receive a Canine Assistants service dog from Winn Dixie, Milk-Bone and the Canine Assistants Organization.
Karen, who has muscular dystrophy, will receive her own dog after she attends the Canine Assistants camp. Her dog will be trained to meet her individual needs. Karen and her husband live in Rock Hill.
Sallie Leslie Lyon writes she is a volunteer coordinator at St. Joseph's Catholic School in Columbia, SC. She and her husband, James Lyon '78, live in Columbia.
Sula Smith Pettibon has joined the staff of The (Rock Hill) Herald as the business editor; she writes business stories and oversees the content of the business section. Sula left The Herald in 1992 after serving as managing editor, city editor, assistant city editor and reporter. For the past six years, she has been a religion teacher at St. Anne Catholic School. Sula and her husband have two children and live in Rock Hill.
Pat Eller Richardson has been named Employee of the Year by the Richland Lexington State Employee Chapter and Female Employee of the Year by the S.C. State Employee Association. Pat is a graduate of the class of 2000 Leadership South Carolina and lives in Columbia, SC.

1980

Vicki Hawkins Corn reports that after coaching softball at Spartanburg Methodist College for 11 years, she now is serving as an academic advisor for SMC. She and her husband have two children

Donna Wooldridge working to ensure that children get successful start

Success By 6 director Donna Wooldridge says the children's initiative puts together specific annual goals to reach their objective of having all children in York County prepared to enter school.

Some people fear it, others can't live without it, but for Donna Wooldridge, director of Success By 6, change has made her more aware of the problems facing York County's children. A native of Lynchburg, VA, Wooldridge, '74, M.Ed. '79, had been working at York Technical College for 11 years as the director of the Women's Center, when she decided it was time for something new.
“As the director of the Women’s Center, I was responsible for promoting success in women’s lives through economic self-sufficiency. This sufficiency was encouraged by a focus on education and training. We made this happen by working with health and human services, educational, government and various other organizations. I was happy with the impact the center made on the lives of women, but it was time for a change,” said Wooldridge. Through her experience at York Tech, Wooldridge acquired the skills she needed to take on the task of directing Success By 6. “Collaboration with other organizations was an important part of my job at York Tech. This position at Success by 6 had a grant writing foundation and also required some of the same kind of management skills as my job at the Women’s Center,” said Wooldridge, who joined Success by 6 in August of ’99. York County’s Success By 6 is an initiative that emerged from a national United Way movement called Mobilization for America’s Children. York County is one of more than 200 existing Success By 6 initiatives nationwide. By working in coordination with literacy groups, health organizations and educators, the county’s initiative aims to ensure that all children in the area enter school prepared. “We focus primarily on children from birth to six because these first few years of a child’s life are critical. The quality of development in these years can often impact a child for life. What we do for our children now largely determines their future path as an adult and member of the community,” Wooldridge said. The need for attention to early childhood development issues, Wooldridge submits, is becoming critical. Children have special needs that people either don’t notice or don’t have enough time to address. “It is our job to facilitate a collaboration of services to make sure that these needs are being met,” she said. 
Realizing the immense scope of this project, Wooldridge points out, "Sadly, all children have the ability to be at risk. And I think that it is a huge undertaking for us to get 100 percent of York County students prepared for school, but that must be our ultimate goal."

Now that she has embraced the new and exciting challenges that accompany change, Wooldridge is settling in to her new position. "There is a lot to learn and do, but I am excited about the possibilities," she said. "In the future, I can see myself exactly where I am. This is an extremely rewarding position. Not only am I challenged, but I feel as if the things I am a part of will have a positive influence on the lives of so many children."

For Wooldridge, this new position has put her in touch with the reality of children in York County. "I had not worked in exactly this kind of environment before so, like many others, I didn't have a clear idea of what problems today's children face. Now that I am aware, I'm very glad that I am able to work with people who provide quality services for so important a need. Every child deserves a healthy start in life."

− Ryan Shelley

Sharon Atwood Welfare, an English teacher at Walhalla High School, writes that she is enjoying her new home in Tamassee, SC. Sharon says she would love to hear from her friends.

1969

Jenny Bowers Castro reports that she has resigned as treasurer/membership chair of the Carolinas' Association for Professional Researchers in Advancement, on which she had served since 1992. Jenny lives in Rock Hill.
Vicki Jean Phillips-Roach received her Ph.D. in curriculum and instruction from Clemson University last summer. She lives in Walhalla, SC.
Carol Bower Vagnini reports that she has taken over as head of school at Children's House Montessori in Wooster, Ohio, where she has taught for 10 years.
Carol says that three years ago, she spent eight weeks in Colorado to receive training for her AMS (American Montessori Society) credential in early childhood. Carol and her husband have three daughters and live in Wooster, OH.

1970

Patricia Whitaker Davis writes that she is working as a kindergarten teacher in the Dorchester District II School System. She has two daughters and lives in Summerville, SC.
Donnelle Eargle was honored this spring with the Leadership in Aging award from colleges of the Worcester Consortium Gerontology Studies Program (CGSP) at its 20th anniversary banquet. She served as CGSP director during the 1980s. Donnelle has held faculty and administrative positions at Harvard Medical School, University of Massachusetts Medical Center and Clark University, and has developed international collaborations between universities in the U.S. and Israel. Donnelle and her husband divide their time between Windham, ME, and Boston. She says she would love to hear from Winthrop friends.
Hilda Mangum Hopper has graduated from the S.C. Municipal Clerk and Treasurers Institute. Hilda lives in Clover, SC, where she serves as the town treasurer.
Kathy Hite James has become the 48th member of the S.C. Golf Hall of Fame. One of South Carolina's premier junior and amateur players, Kathy is a 14-year member of the LPGA. She conducts junior clinics and sells real estate in Palm Desert, CA, where she and her husband live.
Bob Jenkins, retired teacher and coach at Northwestern High School, was honored with the Bob Jenkins Cross Country Classic, which debuted in September. Bob and his family live in Rock Hill.
Judy Frank Langley, a special education teacher in Darlington County, has taken on a major role with the Pilot Club. She is governor of South Carolina District Pilot International. Judy previously served in numerous capacities with the Pilot Club. She and her husband live in Darlington, SC.
Loretta Christopher McAbee was named Teacher of the Year at Merriwether Middle School in Edgefield County. She and her husband live near Clarks Hill, SC.
Cheryl Harrington McBride writes that she has worked for the Cheraw Yarn Mills for 13 years. She says her daughter, Ashley McBride Holmes '99, teaches in Charlotte. Cheryl and her husband live in Cheraw, SC.
Celia Campbell Roberts reports that she has retired from the Kershaw County School District after 30 years of teaching. She and her husband live in Kershaw, SC.
Betsy Gibson Scarborough reports that she is teaching science and math at Crayton Middle School. She has one daughter and one son and lives in Columbia, SC.
Mary Reynolds Woldering of Euclid, Ohio, writes that she has finally received her licensure for art K-12. She says she serves as the scoutmaster for her son's Boy Scout troop.

1971

Myra Huffstetler Bonner reports that she has completed the coursework for her master of education degree in secondary education with an emphasis in French. She has two grown daughters and lives in Rock Hill.

1972

Brenda Russell Bonner of Rock Hill reports that she is working as a technical writer for Crowell Systems, which produces medical software systems. She says she has been in this field for more than 15 years.
James Gordon has been named as the 2000-2001 Teacher of the Year for the Newberry II Learning Center, where he is the special education instructor for the NEAR program. He and his wife have three children and live in Ballentine, SC.
Teresa Rutherford Justice has been promoted to director of Sponsored Programs and Research at Winthrop University, where she previously served as director of budgets. She lives in Rock Hill.
Evelyn Morris McLeish writes that she is teaching second grade at Mount Gallant Elementary School in Rock Hill. Evelyn and her husband, who live in Rock Hill, have three children and three grandchildren.
Bessie Moody-Lawrence, a member of the South Carolina House of Representatives, and a group of other state lawmakers have formed the S.C. Democratic Women's Legislative Caucus, an organization designed to increase the number of Democratic women involved in all levels of state government. Bessie serves as the caucus vice chair. She lives in Rock Hill.
Carolyn Law Price works as the director of counseling services for the Berkeley County School District and also serves as the chair of the Trident Technical College Area Commission. She has served as a commission member since 1995. Carolyn and her husband have three daughters and live in Pinopolis, SC.
Janet Rice Smalley writes that she is working as the curriculum coordinator for Walhalla High School, one of 46 identified "New American High Schools" honored for excellence and innovation by the U.S. Department of Education. She says that on behalf of WHS and the USDE, she has visited schools this year in Miami, Dallas, San Francisco and Washington, DC. Janet lives in Walhalla, SC.
Gail Smith Stephens reports that she is still teaching the emotionally and mentally disabled self-contained classes at Hughes Academy in Greenville, SC. She and her husband have become foster parents of two daughters.
Brenda Massey Swearingen reports that she has completed her 11th year as the volunteer coordinator at the Anderson Free Clinic. She also does animal rescue for the local Humane Society, volunteers for the local Habitat for Humanity, and enjoys contra dancing to unwind. Brenda and her husband celebrated their 15th wedding anniversary this year and live in Anderson, SC.

1973

Sylvia Echols, a child development specialist, has agreed to serve as honorary chair of the fund-raising campaign for Pilgrims' Inn. Sylvia and her husband live in Rock Hill.
Carolyn Dodds Nelson has been named Registered Nurse of the Year for Roper Hospital North. She lives in Mount Pleasant, SC.
Susan Evans Utsey writes that she is working as a teaching leader for Bible Study Fellowship International (BSF), an interdenominational Bible study with nearly 1,000 classes on six continents. She and her husband live in Advance, NC, near Winston-Salem.

1974

Marianne Mackey Nicholson reports that she has begun her fifth year as the assistant principal at Kernersville Middle School in the WSFC Schools, the largest middle school in her district with almost 1,200 students. Marianne and her husband, who have four daughters, live in Clemmons, NC.
Lou Funderburk Wylie was named as Fort Mill School District's first Support Staff Employee of the Year. Lou serves as the technology assistant at Gold Hill Middle School and lives in Fort Mill, SC.
Marvin Waldrep, a Chester Allstate Insurance agent, received fire safety educational materials from his company's corporate office to provide to the local fire department and area children. He and his wife live in Chester, SC.

1975

May Rogers Caesar has been named principal of Bishopville Intermediate School. She is a 24-year veteran educator actively involved in a variety of school, civic and community initiatives.
Marilyn Elder Cole reports that she received the Delta Sigma Theta Star Achievement Award for the Chesterfield, VA, chapter in 1999. Marilyn serves on the board of directors for Leadership Metro in Richmond, VA, and the Big Brothers/Big Sisters of Richmond. She also serves on the board of advisors for the First Tee Chesterfield and is vice president of the Top Lady clubbers (Richmond's golf club for black women). Marilyn lives in Doswell, VA.
Regenia Mitchum Rawlinson was profiled in The (Rock Hill) Herald last fall. She is head of the guidance department at Northwestern High and co-hosts a weekly television show, "Raising Parents," with her husband, David Rawlinson '85. She and her husband reside in Rock Hill.
Ann Shackleton Smith writes that she has been assigned to serve as principal of Ebenezer Avenue Elementary School. She has been principal of Sylvia Circle Elementary School for the past three years, during which time it was recognized as a School of Promise and was a S.C. School Incentive Award winner. Previously, Ann taught for 21 years. She reports that many Winthrop students with the Winthrop Friends program serve as mentors for Ebenezer Avenue students. Ann lives in Rock Hill.
Teresa Lowe Wiley participated in the Santee Cooper Energy Education Seminar over the summer. The seminar was held for teachers, principals and administrators. Teresa and her husband live in York, SC.

1976

Regina Davis Lambert has been named principal of Canton Middle School. She also has served as lead industry education coordinator for Buncombe County Schools, assistant principal of North Buncombe High School and marketing education teacher for two area schools. Regina lives in Asheville, NC.
Sandra Lancaster Mayberry reports that she has begun her 25th year at Belton Middle School, where she teaches seventh- and eighth-grade science and social studies for the gifted and talented and was named Teacher of the Year for
spending time with her five grandchildren. Diane − who is retired − lives with her husband in Hopkins, SC.
Mary Carter Hunter reports that she has been an English teacher for Sumter School District 2 for the past 20 years and has taught a total of 33 years in South Carolina public schools. She is teaching 12th-grade tech prep and honors English at Crestwood High School in Sumter. Mary says in addition to being a pastor's wife, she is also the church organist, church financial secretary and young adult Sunday school teacher. The Hunters live in Dalzell, SC, and have four children and four grandchildren.
Linda Dantzler LeMaster writes that she has served 15 years as the choral director at South Florence High School.
She says that her 1999-2000 show choir performed at Carnegie Hall in New York as the highlight of a good year of concerts, shows and competitions. Linda, who lives in Florence, SC, says she is also a new grandmother.
Brenda Thrailkill Leopard writes that she has retired after teaching for 30 years in South Carolina. For the last six years, she taught drivers education in Greenwood, SC. Brenda has two grandchildren and three children. She and her husband live in Hodges, SC.
Gena Perry Taylor reports that she is teaching in the Polk County Public Schools at Jewett Elementary School of the Arts in Winter Haven, FL. She and her husband have two grown children and two grandchildren and live in Lakeland, FL.

1966

Carole Bryant Coker reports that she has completed her 25th year in education. She is working as an elementary guidance counselor and enjoys the young children after working with teenagers for 23 years. Carole lives in Taylors, SC.
Deborah Saylor Jeffery writes that she and her husband traveled to Riggins, Idaho, where they got together with Judi Behre Sale and her husband of Lenoir, NC. The couples had previously gotten together in the summer of 1997 for a wonderful tour around Asheville, Boone and Lenoir. Deborah and her husband live in Anchorage, AK.
Harriett Foxworth Skinner, Rotary district governor, spoke at the annual meeting of the Rotary Club in Chester, SC. She spoke on this year's theme, "Create Awareness, Take Action," and her goals for the year. Harriett and her husband reside in Aiken, SC.
Patricia Jeanes Snyder writes that after teaching high school English for 15 years, she is now completing her 20th year as an office/human resources manager for Consolidated Systems, Inc. in Memphis, TN. She and her husband, who live in Germantown, TN, celebrated their 35th anniversary this year. They have one grandson.
Frances Jackson Willis reports that she retired in 1999 from Lexington School District 2 and is now working for Nationwide Insurance.
She lives in Lexington, SC.

1967

Susan Harris Kincaid writes that she is working as the executive director of the Sherwood Conservatory of Music in Chicago, where she completed a successful capital campaign and oversaw the construction of the conservatory's new state-of-the-art facility downtown. Susan and her husband live in Chicago.
Alice Wald reports that she is self-employed as a licensed independent social worker and is a member of the Religious Society of Friends (Quakers). She received her M.S. in social work at the University of Tennessee School of Social Work in 1969. Alice lives in Columbia, SC.

1968

Mary Shannon Boyd reports that she and her husband are temporarily moving back to the old homestead in Blackstock, SC. She says that after 30 years with the Syracuse Symphony and as a teacher, they plan to travel some and then get back to work. They have three children.
Barbara O'Neal Burgess writes that she is in her 30th year of teaching, 29 of which have been at McCants Middle School in Anderson, SC. She says she also works part time at Taylor Stockyards and Christian Auction and Equipment Co. Barbara, who lives in Anderson, has three grandchildren.
Linda Mixon Clary reports that she retired last year as the curriculum director for grades K-12 in Burke County, GA. She previously taught at Augusta State University and rose to the rank of professor. She says she is now consulting and walking on the beach. Linda lives in Aiken, SC.
Frances Platt Dantzler of Lexington, SC, writes that in 1998 she retired after 30 years of teaching middle school science and computer classes. However, after a year she was enticed back to teaching under the critical needs provision and says the 1999-2000 year was one of her best. Frances says she returned this year as a "double dipper" in her same position at Lexington District 1.
Judy Davis of McLean, VA, says it was great fun to join the Eagles fans at the NCAA Regional Tournament in the company of her aunt and 1968 classmates Helen Hancock Sablan of Tacoma, WA, Eleanor Dill Porter of Rock Hill, and Sue Alton O'Connor of Columbia, SC. She says they are all great basketball fans and it was special to cheer the Winthrop team.
Nancy Henderson Gordon writes that she has retired after 28 years in the classroom. She says she is enjoying tennis, cycling, snow skiing and hiking. Nancy lives in Central, SC.
Linda Holladay Harrelson reports that she has retired after 32 years as an elementary school educator, the last 14 as principal at Bonner Elementary School, where she was a teacher and administrator for 25 years. Linda and her husband live in Moncks Corner, SC.
Betty Anglin Smith reports that she has been painting professionally for more than 20 years. Her studio is in Charleston, SC, and she is represented by galleries in Charleston; San Francisco and Carmel, CA; New York; Martha's Vineyard; and Naples, FL. Betty, who has 28-year-old triplets, all of whom are also in the arts, lives in Mt. Pleasant, SC.
Jane Clinge Shuler writes that she is really enjoying serving as a Winthrop trustee. She says she has two more years to go and that Winthrop is in great shape. Jane lives in Orangeburg, SC.
Bobbitt Lyle Smith, a second-grade teacher in Columbia, SC, says that she was named Teacher of the Year for 2000-2001. She and her husband live in Elgin, SC.

Barbara Muller: building a better chip

In these computer-dominated days, with the constant drive to invent faster and more powerful silicon microchips, it is good to see that someone is still trying to perfect the original chip: the potato chip. As a project scientist in the New Products Division at Frito-Lay, Barbara Kirkpatrick Muller is continuing that quest.

In 1980, after having been out of the job market for more than 10 years to raise her children, Muller applied for a job at Frito-Lay. "I was first hired as part of a Basic Research team that focused on project troubleshooting. I stayed with that group for five years and spent the remainder of my 20 years at Frito-Lay in Product Development," said Muller '64.

In Product Development, Muller is responsible for creating new Frito-Lay products to put into the market. The potential product must undergo the scrutiny of many different levels of product-consumer evaluation before it is put on the shelves to represent the Frito-Lay name.

"New products start out like all other things in marketing, as an idea. We receive possible concepts from brainstorming sessions. This concept is then turned into a drawing, which we use to gauge whether consumers would buy the product based on looks alone," said Muller.

If it makes it this far, a three-dimensional representation is made. Using her chemistry background, Muller concocts the ingredients that will make up the distinct flavor of the conceptualized product.

"I had some experience using 'ingredients' since I had my B.S. in chemistry, but I felt that I might need more training in its application in the food industry. I took a few classes in food chemistry at the University of Texas in Dallas that really helped me further my skills, but I have to say I got my degree in food science at the 'University of Frito Lay'," said Muller.

In the Frito-Lay process lab, Muller creates small batches of the new products so that she can refine the recipe. "After we receive some general consumer feedback, we submit the product to a firm that sets up large-scale consumer tests in places such as schools and churches. At these tests, the consumer is asked more extensive and specific questions about the product. All this information is put into the computer and graphed to see if the consumer reaction falls within the statistical parameters that we have decided must be met in order for the concept to become an actual product."

Not only does Muller rely on the consumer to decide whether the product has what it takes to make it to the shelves, but also, to a certain degree, the consumer dictates the finished product by making suggestions for possible improvements. "Frito-Lay has always put a lot of emphasis on the consumer," she said.

After many fruitful years at Frito-Lay, Muller has decided to retire next year. "Frito-Lay has been a great company to work for. It is extremely family-value centered and is greatly concerned with the community. I have been very happy here these past years because my job allows me to be creative and to just simply have fun," she said.

Muller says she is looking forward to her coming retirement. She and her husband are planning on taking it easy at their new log home on Lake Cyprus Springs, enjoying their six grandchildren "and just continuing to have fun."

Barbara Muller proudly displays the 2000 Market Maker Award presented to Frito-Lay for the Fritos Sloppy Joe and Scoops Snack Kit. She was instrumental in the development of the Sloppy Joe.

− Ryan Shelley

1957

Mildred Crocker Bagnal writes that she and her husband are retired and enjoy spending time with their parents, children and three grandchildren. She would love to hear from any of her classmates. She and her husband live in Cayce, SC.
Beatrice Anna Bernstein wants her friends to know that she is living in a nursing home in Fayetteville, NC.
Carolyn Brunson Boudreaux reports that she and her husband are retired and spend their winters in Venice, FL, and their summers in Bethany Beach, DE. They enjoy traveling abroad and visiting their children in Delaware, Illinois and California.
Maggie Lunn Foss '41 (right) of Santa Ynez, CA, represented Winthrop during the inauguration of the president of Pepperdine University.
Nancy Dodson Christopher writes she and her husband are enjoying being grandparents.
They like serving in many church programs and traveling. Nancy and her husband live in Anderson, SC.
grandchildren and lives in Iva, SC.
Billie Hamilton Wilson writes that she and Rhoda Spears Rice of Columbia, SC, attended their 50th high school reunion in Conway, SC. She says they celebrated with a clubhouse barbecue, a cruise up the Waccamaw River and a dinner at Ripley's Aquarium at Myrtle Beach. Billie lives in Athens, GA.

1955

Kay Page Lumpkin reports that she has retired after working more than 36 years as the secretary at First Baptist Church in Dillon, SC. She has two "wonderful" children and four "super" grandsons and lives in Dillon.
Lois Bailes Simmons writes that she would love to hear from her friends who lived on the third floor in Roddey during their freshman year. Lois now lives in Laurens, SC.

1956

Ann Garrett Cason reports that she and her husband traveled to Spain and Portugal, then to Germany and Austria, where they enjoyed seeing the Oberammergau Passion Play. Ann is president of the Laurens County Arts Council and resides in Clinton, SC.
Emma Livingston Craig writes she and her husband are enjoying life in a small town on a bay after living in Atlanta. She travels, plays bridge, is a community volunteer and is involved with church activities. Emma and her husband live in Niceville, FL.
Ruth Lever Sample reports that she and her husband are enjoying retirement living on Lake Norman and spoiling their grandchildren. Over the summer, they had a wonderful experience traveling to Switzerland, Austria and Germany, where they saw the Passion Play in Oberammergau. The Samples live in Davidson, NC.
Geralene Norton Gardner writes that she has retired from teaching at Cocoa Beach High School. She says retirement is wonderful and that she and her husband enjoy traveling every month. They recently returned from a two-week trip to Alaska where they went "mushing" and took a helicopter ride over Mount McKinley and the glaciers in Juneau.
Geralene and her husband reside in Satellite Beach, FL.
Claire Simpson Godwin reports that she retired from Florence School District Three after 30 years in education. Last spring, the S.C. Association of Educational Office Professionals named her Administrator of the Year for South Carolina. Claire and her husband live in Lake City, SC.
Barbara Taylor Hall reports that she has retired from the Tennessee Department of Correction/Youth Development after working 30 years in probation, as superintendent of a female institution and assistant superintendent of a male institution. She says she enjoys traveling, working with the church choir and being lazy. Barbara and her husband like to visit their three sons and four grandchildren. The Halls live in Mason, TN.

1959

Dorothy Nixon Collins writes that the best thing about retirement after 29 years of teaching is being her own boss. She says life is wonderful and she relishes traveling. Dorothy lives in Garner, NC.
Joan Livingston St. Romain writes that she and her husband are enjoying retirement. They live in Tucker, GA.

1960

Mary Ann Neighbors Hoffman writes that she has retired from teaching after 36 years. She says she delights in seeing her new grandson, who lives next door. Mary Ann and her husband live in Rock Hill.
Duane Batson Staggs of Landrum, SC, wrote an article that appeared in the Tryon Daily Bulletin last summer. The article detailed her 40th class reunion last spring at Winthrop's Alumni Reunion Celebration. She reminisced about the fun, and the work involved, in the Junior Follies. She said the reunion was her first visit back to campus since her graduation, and she is glad that class chair Anne Dickert of Columbia, SC, chased her down on the Internet to invite her to join the fun. Duane says she also received phone calls from classmates Mary Ann Fulmer Hite of Atlanta and Marolyn Shaw Blanton of Orangeburg, SC, regarding the class reunion.
1961

Martha Grant Ruble reports that she is working as a speech-language pathologist in the public schools of Anderson, SC, District 3. She and her husband have three children and three grandchildren and live in Due West, SC.
Judith West Smith reports that she retired from the Gaston County Schools and is an instructor at Gaston College. Judy and her husband live in Gastonia, NC.

1962

Betty Moore Beverly writes that she is working as an employee assistance counselor for DuPont in Richmond, VA. She received her M.S. in rehab counseling in 1987 from Virginia Commonwealth University and is licensed in Virginia as a professional counselor, marriage and family therapist, and substance abuse treatment provider. She also is certified as an EAP. Betty lives in Chester, VA.
Amanda Belcher Sessions reports that she retired after 35 years as a teacher and administrator at both the elementary school and district level. She says she is enjoying being a "stay-at-home grandma." Amanda lives in Conway, SC.
Brenda Bailey Tyson writes that she is working part time as the director of Career Services at Belmont Abbey College. She says each summer she enjoys a two-month vacation with her 12 grandchildren. Brenda lives at Lake Wylie, SC, and says she would love to hear from classmates.

1963

Sarah Limestone Armstrong received the Distinguished Alumna in Physical Education Award at Winthrop University. During her career in education, she has received numerous awards and honors for her commitment to teaching. She and her husband reside in Gray Court, SC.
Carol Hardy Bryan reports that she likes showing visitors around her hometown of historic Edgefield, SC. She says she also finds rewarding the time she spends as the editor of Quill, a 20-page bimonthly publication of the Old Edgefield District Genealogical Society. Carol and her husband enjoy traveling and last summer visited Nebraska, South Dakota, Iowa and Missouri.
Janice Frady Henson writes that she has retired after 30 years of teaching social studies and English at the secondary level. She plans to devote more time to her family, reading and traveling. Janice lives in Canton, NC. 1964 Martha Ann Sutton Clamp writes that she is a retired CPA. She is the captain of the women’s senior 3.0 tennis team which represented South Carolina in the “sectionals” in Mobile, AL, this summer. Martha Ann lives in Anderson, SC. Gray Little Komich reports that she is still teaching elementary school physical education. She lives in Alexandria, VA. Betsy Wren Smith reports that she spent two months last summer studying and hiking in Peru/Ecuador. She says she is still teaching Spanish at Travelers Rest High School and lives in Saluda, NC. Sara McMahan Utsey writes that she has retired – for the second time. She has left the School District of Greenville County after a total of 36 years of service to the state. Sara says she is enjoying being at home and spending time with her husband. The Utseys live in Greer, SC. Katherine Wood Wallace reports that she retired from real estate five years ago to play golf. She works part time as a member of the clubhouse staff at a local golf club and plays golf at least three times a week. She plays in tournaments and Pro Ams in Alabama. Katherine says she is trying to convince her husband to retire and join her in playing. The Wallaces live in Deatsville, AL, and have five grandchildren. 1965 Catherine Sutton Bryant reports that the marketing research firm she founded in 1985 has grown to more than 40 employees and many part-timers. She has two sons and lives in Lewisville, NC. Catherine says life is good. Diane Harrison Harwell, an adjunct professor for the Department of Educational Leadership and Policy in the College of Education at the University of South Carolina, writes that she volunteers as the awards and scholarships coordinator at the S.C. Association of School Administrators. 
She also serves as research observer in two S.C. elementary schools for SERVE, the Southeastern educational laboratory. Diane has begun to take guitar lessons.

Winthrop Update • Winter 2000-01

Mary Moore Montgomery was honored for her dedication by the Williamsburg Presbyterian Church with a plaque identifying her as an Honorary Life Member of Presbyterian Women in the Presbyterian Church, USA. Mary lives at the Presbyterian Home in Florence, SC. 1943 Annie Bonnoitt Boatwright of Johnston, SC, writes that although she is retired from teaching, she keeps busy with church involvement and civic work. Her granddaughter, Sierra Boatwright Butler of Santa Barbara, CA, graduated from Winthrop in 1997. Annie also has two great-grandchildren. Sarah Sanders Williams of Gastonia, NC, reports that she has been retired from teaching in North and South Carolina schools for 25 years. Now she enjoys volunteering with civic and church organizations and spending time with her husband, family and five grandchildren. Sarah says she is proud of her Winthrop heritage and the opportunities given to her by her parents. She also has a sister, Betty Sanders Miley, who graduated from Winthrop in 1942, and another, Frances Sanders Brunson, who attended Winthrop for two years before getting married. Elsie Bennett Wilson has retired from Clemson University after more than 30 years. She served as administrative assistant to seven Clemson presidents. During a reception in her honor, she received resolutions from the S.C. House and Senate and was presented with the Order of the Silver Crescent by Gov. Jim Hodges. She also received a picture of Sikes Hall signed by all of the living Clemson presidents. Elsie lives in Seneca, SC. 1944 Sarah Horton Garvin wants her friends to know that she and her husband are moving to the Clemson Downs Retirement Center. They have three grown sons.
Louise Raley Scott writes that upon her retirement she received the Order of the Palmetto from Gov. Carroll Campbell for her work with individuals with disabilities and special needs at the Scott Center in Hartsville, SC. Nancy Cooper Walker writes that when she retired from teaching, she and her husband traveled in their RV for six-and-a-half years. However, she wants her friends to know that they’re back home in St. Simons Island, GA, now. 1945 Jewell Clark Edwards reports that since her retirement in 1986, she has spent a great deal of time traveling and visiting Fripp Island, SC. Jewell, who lives in Johnston, SC, says her five grandchildren bring her much joy.

Keep in touch with your classmates
Let your classmates know what you’ve been doing. Send information about yourself by mail to the Office of Alumni Relations, 304 Tillman, Winthrop University, Rock Hill, SC 29733; by e-mail to [email protected]; by phone to 803/323-2145 or 800/578-6545; or by fax to 803/323-2584. Because of the production schedule and space considerations for Winthrop Magazine and Winthrop Update, there may be a delay in reporting your activities.

Evelyn Allen Linder writes of her wonderful experience at Winthrop, recalling the times she wrote poetry with her roommate, Mary Edith Turner ’43. Evelyn says she continues to write poetry at her home in Columbia, SC. 1946 Helen Smoak Camak writes that she visited with Anne Pitts McCabe ’45 and her husband at their lovely home in Ft. Motte, SC. She says they attended their high school reunion at St. Matthews High School and had a marvelous time. Helen lives in Anderson, SC. Ruth Davis Knight wants her friends to know that she has moved to a beautiful new retirement area in Columbia, SC. She has lived in Columbia since 1964. Dona Ardrey Livingston and her family have put a conservation easement on their property along the Catawba River. This will preserve the natural environment and water quality along the river. Dona lives in Rock Hill.
Catherine Boone Shealy writes that she spent two weeks last summer volunteering in the laboratory of Sage Memorial Hospital which is run by the Navajo Nation Health Foundation at Ganado, AZ. This was Catherine’s ninth trip to Arizona and her 10th to a reservation. She and her husband live in Atlanta. Harritte Thomas Thompson is a globetrotter extraordinaire. During her 37 years with the CIA, she visited 33 countries, collecting artifacts and furniture for her home in Greer, SC. While Harritte’s home is a reminder of her adventures, it also serves as a location for extended family get-togethers, and her garden is the source for flowers she enjoys pressing to create one-of-a-kind sheets of stationery. 1947 Carolyn Pitts Reedy of Charleston, SC, reports that she and her husband retired in 1996 and have been busier than ever. Not only are they trying to catch up on all the things they wanted to do before they retired, but they are enjoying church activities, traveling, and visiting with friends and family in Florida, South Carolina and Indianapolis. 1948 Mary Leila Carwile Andrews writes that she and her husband enjoy traveling and being active in church and community activities, including the North Carolina state music and medical organizations. She says they also take pleasure in attending their grandchildren’s activities. The Andrews live in Wilmington, NC. Betty Ann Jordan Cone writes that she and her husband, Dallas, celebrated a unique 50th wedding anniversary last spring at the home where they were married. The wedding party, relatives and close friends were cooked for and waited on by the Cones’ four children and seven grandchildren, who related wedding and honeymoon stories. Guests played a wedding version of “Who Wants to Be a Millionaire?” Betty and her husband reside in Ridge Spring, SC. Betty Smith Dickson writes she is still enjoying retirement after 16 years. She and her husband are active in their church and live in York, SC. 
Polly Wylie Ford of Rock Hill was named a Winthrop Distinguished Alumna in Physical Education during Alumni Reunion Celebration this spring. She and other physical education alumnae were honored at a breakfast sponsored by the Department of Health and Physical Education. Polly was department chair from 1962-1992 and is a past president of the Southern District of the American Alliance for Health, Physical Education, Recreation and Dance. 1949 Mary Fitzgerald Cagle wants her friends to know that she has moved back to her hometown of Gaffney, SC. Frances Brown Gaffney reports she and her husband have moved to Blythewood, SC, to be near their children. Ruby McCullough Henry reports that she and her husband are retired and live in Rock Hill. She retired after 30 years of South Carolina state service. Mary Jane Curry McKinney reports that she is retired and enjoys traveling, performing church work, gardening, playing bridge and visiting with her grandsons. She and her husband live in Simpsonville, SC. Coy Ayer Patrick writes she was a delegate to a People to People ambassadorial program conference in Egypt on community mental health. People to People is sponsored by the U.S. State Department. Following the conference, she cruised from Venice, Italy, to Barcelona, Spain. Coy lives in Rockville, MD. 1950 Ann Coile Bland writes that she had a wonderful golden reunion and wants her friends to know that after 38 years, she has returned home to South Carolina and is living in Lexington. Shirley Sparnell Corn says she and her husband are delighting in their retirement. They have been traveling and enjoying their lake home and church activities. Last year, they also became great-grandparents. Shirley also serves on the alumni board of John de la Howe School. She and her husband live in Rock Hill. Rose Marie Neal Rieger writes that on their Alaskan cruise and visit to Vancouver, Canada, she and her husband were impressed with the country’s beauty and its friendly people.
Rose Marie and her husband live in Las Vegas, NV. Mattie Wallace Strickland says she had a wonderful 50th reunion in April and a mini-reunion at Lake Junaluska in June at the home of Mary Holler. Mattie lives in Dillon, SC. Betty Owen Williamson writes that she enjoyed her 50th class reunion. She thanks all those who made it happen. Betty lives in Nashville, TN. Colleen Holland Yates wants everyone to know how important it is to vote during elections. In a local election, she lost by the narrow margin of nine votes. Colleen, who lives in Sumter, SC, says that goes to show that “every vote counts.” 1951 Jeanne Rheney O’Shields of Spartanburg, SC, writes that she and her husband are retired and enjoy traveling and volunteering in church and community activities. They have two grown children and five grandchildren. 1953 Nell Whitmire Holtzclaw reports that she retired in 1996 from Western Carolina University’s English Department after 37 full- and part-time years and is now professor emerita. She lives in Cullowhee, NC. 1954 Betty Holmes Gray writes that she is fond of traveling and recently visited Eastern Europe. She also enjoyed spending time in New York City with her nephew. Betty has two children and two …

Class Notes

1929 Martha Benton Davenport writes that she celebrated her 92nd birthday in July. She lives with her daughter, Harriette Davenport Moultrie ’65, and her husband in Lexington, SC. Martha is retired, having taught for 33 years. Agnes Jeter has been inducted into the Union High School Athletic Hall of Fame. Agnes, who said she played every sport in high school and college except golf, taught in the North Carolina public schools as well as at Greensboro College. She also operated Yonahlossee, a summer camp for girls, for nearly 30 years. Agnes lives in Union, SC. 1931 she retired in 1975 after teaching for 31 years.
She lives in Aiken, SC. 1933 Helen Hutto and her best friend were featured in an article in the (Charleston) Post and Courier about growing up − and eloping − in Dorchester County. The marriages worked, though, as both couples have remained close friends and celebrated their golden anniversaries. Helen lives in St. George, SC. 1935 Juliet Woods Jenkins reports that she and her husband are living at Covenant Place retirement home in Sumter, SC. Mary Louise Myers writes that she celebrated her 90th birthday on Oct. 5 with family and friends. She enjoys her roses, church activities and time with family and friends. Mary Louise, a retired educator, lives in Oakway, SC, where she has a wonderful view of the Blue Ridge Mountains. Rebecca Barr Plexico of Barnwell, SC, writes that she is delighted that her granddaughter, Julie Barr Plexico, is a Winthrop student. Lottie Faye Barry Wade writes that … 1938 Louise Johnson Spencer writes she has moved into an apartment in a retirement home near her daughters. She lives in Fairport, NY. 1940 Nellie L. White of Brevard, NC, reports that she is living at a very nice retirement home. She says she would love to hear from her friends. 1941 Dot McCown Blackwell of Florence, SC, reports that she visited classmate Penny Kneece McKeown ’41 during a trip to Aiken, SC, last summer and was also able to see Sis Crouch Kennedy ’42 of Williston, SC, while she was there. Dot also had a “classmate get-together” in Florence, SC, at the home of Helen Watts Kirkley ’41. Jewel Carmichael ’42 of Florence also attended. Phyllis Fellers Hicklin writes that she has retired from teaching and is enjoying her grandchildren and flower and vegetable gardening. Phyllis lives in Richburg, SC. Nan Sturgis McRackan reports that she has been retired for 18 years but volunteers her time using music therapy at Tuomey Hospital. She says she and her husband enjoy visiting and spending time with their three children and their families. They have four grandchildren and live in Sumter, SC. Mary Claire Pinckney Seeger has published a novel, Ursa Major.
She also has written a family history, The Pinckneys of Ashepoo, and several booklets drawing on her knowledge of Gullah. Mary Claire and her husband live in Charleston, SC. 1942 Anna Belle Graham Gay writes that she is very grateful for her Winthrop education and is especially grateful for the late Dr. Keith who was over the Debaters’ League. She said the techniques she learned have come in very “handy” in her career of community and church involvement. Also, her math skills have enabled her to continue preparing her own income tax forms. Anna Belle lives in Aberdeen, MD. Mary Lou Brown Griffin reports that she retired in 1984 and now spends time volunteering for the Charleston Symphony League and other community organizations. She says she travels some and enjoys spending time with her six grandchildren and one great-grandchild. Mary Lou lives in Charleston, SC.

Susan Jones Connelly finds youth is simply a state of mind

Seventy-eight-year-old Susan Jones Connelly ’42 has always been a dancer at heart. She began taking dance at the age of three and soon proved to be a natural talent. At the age of 10, she began teaching others to dance.

“My dance teacher in Lancaster (SC) got married and could no longer teach her classes, so she turned them over to me. My dad built me a little stage in the backyard, and I gave lessons all that summer for 10 cents a lesson. After my dad began to see that I was making some real money, he built me an entire studio in the backyard,” she said.

She continued her small studio each summer throughout high school and has been teaching ever since.

Used to taking center stage, Connelly made a splash when she arrived at Winthrop at the age of 16.

“I remember receiving the honor of being thrown into the fountain in front of Tillman and breaking the overhead lights in Byrnes − which were brand new at the time − by whirling my baton a little too high. The Student Government Association met every Monday night and it seems like I was always there to be disciplined for something,” said Connelly.

Feeling the pressures from her family and World War II, Connelly left Winthrop her junior year and began working as choreographer for the Lancaster High School band dance team. Connelly is still choreographing at Lancaster 58 years later.

Today, her dance squad is made up of 36 girls from eighth to 12th grade. The team is a vital part of the Lancaster High School football season. Through song and dance, the girls bring much-needed relief to the nail-biting tension of the gridiron. Accompanied by the Lancaster High School Band, the dance squad performs Connelly’s meticulously choreographed dance routines, which incorporate elements of mostly jazz and hip-hop dance.

Connelly puts a lot of responsibility on the girls with two practices a week during the off-season and three during football season. They stick to a strict practice regimen in order to prepare themselves for dance competitions throughout the state. The team begins with local competitions and works their way up through lower state and all-state competitions. This year the squad received good ratings at the all-state level, but sadly didn’t make it to the finals.

“I try not to be too strict but I stress that the girls show up for practice to be able to take part in the concerts and competitions we do. They know that I wouldn’t think twice about pulling one of them off the lineup if they missed practice. I try to encourage them to be responsible and remind them that they are just one part of a team,” said Connelly.

The girls each have two costumes, one for winter and one for summer. They wear these to the football games and to competitions that the team enters during the course of the school year.

“The summer costume is two pieces and their bellybuttons do show, but I don’t go for belly rings. These are still young girls, and although I want them to look attractive, I’m shooting for ‘cute’ not ‘sexy.’ The best part about it is seeing the girls learn how to ‘sell the show,’” said Connelly. “They start out all shy and bashful, but with time, they learn how to do the job.”

Connelly, a self-described “free spirit,” still has the natural spunk and liveliness she exhibited as a young student. At the age when most people begin to at least consider retirement, the grandmother of four and great-grandmother of one also is a real estate agent and manager and owns her own CPA firm and restaurant. She also is secretary and treasurer of Lancaster City Educational Foundation, secretary for the city of Lancaster, field director of the Miss South Carolina Pageant and owner of Susan’s Dance Studio.

Dance is a big part of life in Lancaster, and Susan’s Dance Studio is at the center. With more than six studios packed into the small town, dance is a very competitive business. Yet, through the years, Connelly’s 15-year-old studio has managed to set itself apart from the crowd. On Main Street in Lancaster, the studio is adorned with pictures of students that span more than three generations of Lancaster girls, and Connelly has instructed them all. With the aid of student teachers, she gives seven hour-long lessons two days a week to more than 100 girls. This dedication to perfection gives the girls a competitive edge when Connelly enters them into local competitions and performances.

“The competitions are what I do this for. It’s a challenge and I am a fierce competitor. I do lose and when I do, I lose graciously,” she said.

Connelly can’t see herself retiring in the formal sense of the word. “I like to stay busy,” she said. “I’m not going to stop until I absolutely have to because I enjoy what I do and I’m good at it. My secret to staying young is thinking young, and my ability to think young comes from surrounding myself with the strength of youth.”

− Ryan Shelley

(photo caption) Susan Connelly began teaching dance when she was 10 years old and 68 years later, she’s still at it.
ALUMNI NEWS

Denise Nicole Bruner ’96 to Brian Clark Woods ’97 Raquel Benita Grant ’96 to Damon Lamont Bryant Karen Elizabeth Jackson ’96 to Bryan Keith Cress Dana Rae Lancaster ’96 to Daniel Thomas Aron Matthew Taylor Lindsay ’96 to Virginia Maria Garcia Karen Denise Onspaugh ’96 to Jason Pope Jr. Molly Christine Strasser ’96 to Marcus Christian Laster ’98 Kandise Paige Wyatt ’96 to Dick Butkus McDonald Garret Dailyn Zohner ’96 to Jennifer Margaret Coble ’97 Jessica Lea Alexander ’97 to Jonathan Sasser Kivett ’97 Jessica Lynn Barnes ’97 to Christopher T. Smith Melissa Ann Benge ’97 to Brian Philip Collins Andrea Bolt Brown ’97 to Charles Paul Stephens Natalie Janette Copeland ’97 to James Crosland Melissa Ann Corn ’97 to Ronald Trombley Lori Deanne Cox ’97 to Robert Benton Tidwell ’98 Carrie Lynn Ellison ’97 to Gary O’Neal Vaughn Shirley Nicole Griggs ’97 to Patrick Kevin Massey Andrea Leigh Harman ’97 to Mark Larry Duncan Kevin Jonathan Hoxit ’97 to Stephanie Dawn Wright Shannon Marie Murray ’97 to James Christopher Ingram Christopher Adrian Revels ’97 to Monica Charlene Vaughn ’97 Bradley Onassis Robinson ’97 to Winnical Moses Jonathan Herndon Veale ’97 to Melanie Stetar Phillip Gregory Williams ’97 to Joy Lynn Hudson John Silas Bailey Jr. ’98 to Diana Marie Hevia ’00 James Davis Barnes ’98 to Ann Timmerman Brandi Langley Creech ’98 to Jody Forrest Weigle Shane Neil Duncan ’98 to Brandy Ann Ray Kimberly Deanna Dunsmore ’98 to Michael Edmond Smith Jr.
Charlene Renee Garrick ’98 to Geoffrey Malone Gleaton Tracey Jean Glandon ’98 to Chad Edward Cannon Kristen Elise Glenn ’98 to Jeffry Lynn Chambers Marjorie Ann Grooms ’98 to Tim Anderson Latrinia Lanette Holmes ’98 to Duane Lamont Lucas Thomas Scott Jackson ’98 to Sarah Frances Watterson Patricia Ann Parker ’98 to Larry Hill Olivia Starr Praga ’98 to James Kevin Gray ’99 Amanda Celeste Shuler ’98 to Benjamin Ashley McCall ’99 Arclethia Shawntraya Thomas ’98 to Ritchie Parker Jenniffer Marie Todd ’98 to Jay Kapp Louis Edmond Venters III ’98 to Melissa Renee Smith Dama Leigh Black ’99 to Michael Alan Lake Jr. Kimberly Mischele Broach ’99 to Brian Dover Cynthia Diane Fultz ’99 to Mark Harry Berry Holly Ann Griffin ’99 to Donald Heath Gillespie Lori Jane Hudgens ’99 to Jason Travis Bayne Julie Ann McGee ’99 to Bradley Jonathan Nix Angela Dawn Rhynes ’99 to Jason Dean Tucker Melanie Lynne Short ’99 to James Wayne Edwards Jennifer Lynn Shugart ’99 to John Curtis Mitchell Christopher Michael Steed ’99 to Erin Rebecca Hunter Timothy Lamar Stiles ’99 to Holly Renee Summey

2000
Susan Maria Barber ’00 to Steven Wayne Hackett Stephanie Gail Bolden ’00 to Jonathan Blanton Williams Melissa Ann Christmas ’00 to James Edison Phipps Jr.
Amber Camille Covington ’00 to Jonathan Jason Melton Robert Eugene Fields ’00 to Tisha Nicole Adkins Jacqueline Suzanne Hayes ’00 to Charles Edward Williams Christina Marie Hipp ’00 to John Darby Adkins Benjamin Arthur Hough ’00 to Sharon Justina Page Heather Rae McCarley ’00 to Kevin Montgomery Matthew Blaine McIntosh ’00 to Jennifer Leigh Sullivan ’00 Amy Beth Ramsey ’00 to Daniel Nolan Sherer Amy Nicole Rogers ’00 to David Michael Koterba Natalie Ann Stephens ’00 to Andrew Newton Adams Kanisha Shani Willis ’00 to Carl Delain Goldson Melissa Lee Wilson ’00 to William Aaron Melton

Deaths

Mary Gaines Steer ’27 Agnes Gaston Wallace ’27 Sallie Campbell Daniels ’28 Ruth James Howle ’28 Eunice Robertson Stuart ’28 Madeline Littlefield Thompson ’28 Katharine Rebecca Adams ’29 Edna Hendricks Dahne ’29 Sidney McMillan Jamison ’29 Mary Fair Newton ’29 Eugelia Good Taylor ’29 Mellie Way Keller ’30 Dorothy Holler Marbut ’30 Mary Gooch Baker ’31 Lora Barwick Burch ’31 Fannie Miller Felder ’31 Aurelia Antley Smoke ’31 Lena-Miles Wever Todd ’31 Edith Smith Booher ’32 Dorothy Foxworth Teague ’32 Minnie Wilson Bonnette ’33 Sara Elizabeth Brown ’33 Willena Dickinson Gentry ’33 Alexa Ragin Good ’33 Mary Fant Gunter ’33 Janese Bushardt Hanna ’33 Roberta Ricaud Kendall ’33 Robbie Gooch Baker ’34 Mildred Alice Burdette ’34 Martha Hatton Dominick ’34 Mary Rawlinson McMeekin ’34 Theodosia Burriss Willis ’40 Harriet Culler Worley ’40 Ruth Simmons Fulmer ’41 Margaret Dew Love ’42 Ethel Coleman Maynard ’42 Helen Murray Salley ’42 Christine McNair Blonaisz ’43 Mary Johnson Farquhar ’43 Helen Sumner Goodson ’43 Evelyn Gause Gray ’43 Mary Williams Hill ’43 Virginia Carol Bedenbaugh ’44 Edith White Gamble ’44 Lucile Lucas ’44 Mary Calhoun Thompson ’44 Adele McKey Holleman ’45 Anna Margaret MacLauchlin ’45 Mary Edna Porter ’45 Louise Green Bailey ’46 Marjorie Callaham Littlefield ’46 Betty Kirkpatrick Lindler ’48 Jean Phillips Roddey ’48 Jean Martin Bouknight ’49 Charlotte Kathryn Boykin ’49
Dorothy Surasky Cohen ’20 Gertrude Haddon McRae ’21 Nancy Marie Goodson ’25 Margaret White Ready ’25 Katherine Williams Anthony ’26 Lucy Burns Harris ’26 Annie Rhoad Roberts ’26 Daisy Reaves Warren ’26 Laura Janette Falls ’27 Sara Crosson Bandel ’35 Sara Cooper Griffith ’35 Evelyn Shearer Huskey ’35 Helen Chastain Mann ’35 Mary Shirley Oakes ’35 Julia Rogers Cross ’36 Lillie Rogol Grablowsky ’36 Ellen Alderman Jones ’37 Louise Miley Hiers ’38 Catherine Watson Mitchell ’38 Freida Ogburn Quam ’38 Judith Rogers Beaty ’39 Carolyn Miller Red ’39 Gale Johnson Belser ’40 Liswa Ellerbe Hasty ’40 Lucille Gregory Williams ’40 Myra Adair Crocker ’50 Jewell Tuten Sandifer ’50 Ruth Parker Carroll ’51 Betty Jo Roberts Clark ’51 Shirley McCraw Gray ’56 Jane Thomson Holmes ’58 Sandra Hunter Lukevics ’60 Margaret Reamer Smith ’60 Charlotte Tillson Webb ’62 Claudia Huey Hughes ’67 Patricia Harris Wolfe ’68 Michele Jan Coury ’72 Linda Lee O’Kelley ’72 Nancy Ann Carino ’74 Mary Stokes Odom ’79 Arthur Von Settlemyre ’85 Belinda Frances Gaunce ’87 Antonio Brian Lyles ’88

CALL FOR NOMINATIONS

Know someone who exemplifies the best a Winthrop alum can be? If so, nominate that person for one of the following awards, presented each year during Homecoming or Reunion Celebration activities:

Alumni Distinguished Service Award
For significant contributions to his or her alma mater, to the quality of life in his or her community and to the development of values and morals within others

Professional Alumni Award
For significant contributions to his or her field of endeavor while exemplifying high moral standards and professional ethics

Outstanding Young Alumni Award
For his or her community service and professional achievements which have reflected well on all alumni and the university

Mary Mildred Sullivan Award
For selfless dedication of time, energy and talent in exceptional service to her …

For an official nomination form, please contact the Alumni Relations Office at 800/578-6545, or visit our
Web site.

Milestones

Births

Cathy Skala ’84, a son, Zachary Michael Hoover, Feb. 15, 2000 Teresa Waters Taylor ’84, a son, Matthew Lukas Taylor, Aug. 16, 1999 Julie Barker Turner ’87, a daughter, Ellis Summers Turner, Oct. 30, 2000 Jane Melvin Williamson ’87, a son, Daniel Melvin Williamson, May 30, 2000 Lisa Melton Farmer ’88, a son, Zachary Marshall Farmer, Sept. 1, 2000 Janet Blair Hurst ’88, a daughter, Georgia Marie Hurst, Sept. 11, 2000 Janet Palmer D’Agostino ’89, a son, Michael Dominic D’Agostino, June 14, 2000 Angela Meetze Wilkerson ’89, a son, Brayden Thomas Wilkerson, May 10, 2000 Patricia Burch Cannon ’90, a daughter, Jules Cannon Fisher, Sept. 7, 1999 Leah Noderer Damron ’90, a daughter, Adeline Peace Damron, Aug. 5, 2000 Marcie Wheeler Leaphart ’91, a daughter, Audrey Grace Leaphart, June 12, 2000 Tamara Pierce-Beall ’91, a daughter, Sophia Marie Beall, Nov. 25, 1999 Carol Stewart ’91, a daughter, Anna Chandler Stewart, July 9, 2000 Julie Oakley Fallat ’92 and Michael Fallat ’97, a son, Hunter Oakley Fallat, Sept. 9, 2000 Amy Najim Howe ’92, a daughter, Grace Najim Howe, June 13, 2000 Melinda Hendryx Mitchell ’92, a daughter, Megan Fuller Mitchell, June 7, 2000 Juliet Latham Nussman ’92 and Roger Nussman ’97, a son, William Banks Nussman, May 1, 2000 Traci Koch Sergent ’92, a daughter, Jenna Rheanne Sergent, Oct. 10, 2000 Amy White Condon ’93, a daughter, Catherine Michelle Condon, May 25, 2000 Lisa Fralick Gallagher ’93, a daughter, Grace Meghan Gallagher, June 12, 2000 Chalmers Johnson ’93, a son, Chalmers Davis Johnson, March 13, 2000 Clarice McManus Marinello ’93, a daughter, Katherine Marie Marinello, Jan. 21, 2000 Phyllis Hambright McGill ’93, a son, Devin Shemar McGill, Aug. 2, 1999 Stephanie Gangemi McKee ’93 and Brian McKee ’94, a son, Brian Anthony McKee II, Jan.
19, 2000 Melanie Fairbanks Hitt ’94, a son, Brandon Michael Hitt, March 26, 2000 Melinda Schneider Piper ’94, a son, James Kyle Piper, April 14, 2000 Johnathan West ’94, a daughter, Anna Legan West, June 7, 2000

Marriages

1960s
Brenda Dale Thrailkill ’65 to Robert Earl Leopard Sr.

1970s
Claudia Summers Jenkins ’74 to Frederick Stroman McKay Jr. Patricia Lee Pickett ’74 to Edgar S Wilbourn III Martha Jean Williams ’74 to Dennis Edward Hamric Sue Kendrick Love ’78 to Kevin Dale Boulware ’80

1980s
Rico Johnnie Craft ’80 to Beckie Thompson Traylor Junell Mayes ’87 to Darrin Proctor Lisa Joann Shepherd ’87 to Brian Hugh Smith Andrew Beason Dykes ’88 to Jimmie Lynn Harrison Sarah Fields Griffin ’88 to Gregory Howard Linke Henry Sanford Howie III ’88 to Margaret Ann Sullivan Kerri Ann Jarrard ’88 to Jimmy Shiflett Kimberley Allene Cooper ’89 to Ernest Ed Dotson Jr. Marianne Rogers ’89 to John Sutton Flythe

A way to realize your dreams
Share the Vision

As retirees, Hugh and Betty enjoy traveling. In the past, the pleasure of their trips was diminished by their fears about the safety of their investments. After considering various money management alternatives, they concluded that a charitable remainder unitrust was the best choice. By transferring a sizable share of their holdings to a standard unitrust, they obtained professional management of the investments and an income for life. The surviving spouse will continue to receive the same benefits for life. They also like the unitrust concept because, over time, the variable annual payments create a hedge against inflation if, as they expect, the assets continue to appreciate. They have averted the tax on highly appreciated securities used to fund the trust and secured an income tax deduction. After the couple’s lifetimes, the trust remainder will go to Winthrop for a purpose they designate.
If you would like to learn more about the unique benefits of a charitable remainder trust, please contact: L. Keith Williams ’79, Director of Planned Giving, Winthrop University, 126 Tillman Hall, Rock Hill, South Carolina 29733; 803-323-2150 or toll free 888-219-1791; fax 803-323-3796; e-mail: [email protected]. All inquiries held in strict confidence. This information is for illustrative purposes only and is not intended as legal advice. For legal advice, please consult an attorney.

Todd Shifflet ’96, twins, a daughter, Grace McKenzie Shifflet, and a son, Whitman Gossett Shifflet, Oct. 4, 2000

1990s
Kelly Elaine Ballentine ’90 to John Peter Colacioppo Yolanda Yvette Deas ’90 to Seimon Philip Johnson Tammie Mario Harrell ’90 to Andrew Maurice Holt Jeffrey Scott Helms ’90 to Robbye Ann Sutton ’97 Tracy Kim Jackson ’90 to Timothy Michael Pryor Stephen John Long ’90 to Samantha Franklin Anne Marie Mathis ’90 to Darryl Lyndell Maybin Sharon Yvonne Rushin ’90 to John W. Gilchrist Jr. John Alexander Black ’91 to Anna Laura Tucker Brian Bruce Adam ’92 to Louisa McMaster Burriss Noelle Lynn Henry ’92 to Dennison Parker Read Thomas Paul Turner ’92 to Mary Anise McDaniel Bradley Alan Armbruster ’93 to SiJeun Jane Wong ’97 Kimberly Leigh Deese ’93 to Mark E. Wilson Keva Angeline Diamond ’93 to Reco Romaine Miller Danny Kern Grigg Jr. ’93 to Susan Michele Williams Donna Elizabeth Locklair ’93 to Jeffrey Alan Mishoe Marie Christine Navello ’93 to Marion Randolph Hall Jr.
Catherine Christine Coleman ’94 to Matt Burdette
Claire Michelle Johnson ’94 to Timothy Francis Newport ’95
Robert Scott Stogner ’94 to Mira Ivy Burnett
Melvin Douglas Branham ’95 to Cerelia Stroud
Nanci Regan Cronin ’95 to Kenneth Lucas Price Jr.
Leslie Renee Felts ’95 to Gary Clayton Schwake
Jaime Jurado ’95 to Catherine Leslie Abernathy ’97
Tracey Elene Marshall ’95 to Scott Christopher Kirby
Meredith Page McDaniel ’95 to Charles Marion Martin
Michael Ervin Pearson ’95 to Karen Lee McGinnas
Cedron Stanley Swain ’95 to Elizabeth A. Jennings
Garnet Michael Welch ’95 to Amy Renee Walter
Buffy Rubylyn Britt ’96 to Woodrow Terry Fountain Jr.

Winthrop Update • Winter 2000-01 11

ALUMNI NEWS
Winthrop alumni activities

Student Alumni Council
Alumni, students, faculty and staff learned dining “do’s and don’ts” in October when the Student Alumni Council cosponsored an etiquette dinner for the campus community. The event was planned to assist current students as they prepare for life after graduation.

Rock Hill Alumni Club
Members of the Rock Hill Alumni Club Executive Committee met to make plans for the club’s 2000-2001 year. Fran Heitman Peeler ’66 and the committee enjoyed seeing York County alumni in McBryde Hall at the alumni dinner Jan. 25.

Black Alumni Advisory Council
Members of the Black Alumni Advisory Council came together in October for their annual fall meeting. Council members spent much time planning the “Party of the Millennium” that was held Jan. 20, 2001 at the Sheraton Airport Plaza in Charlotte.

(left to right) Steven Lewandowski, Suzanne Lipscomb ’98, Gale DiGiorgio, Kristen Gebhart Magee ’95 and Jim Magee enjoyed the festive holiday decorations that adorned the Poinsett Club during the Greenville County alumni dinner.

Washington, DC, Area Alumni
Alumni living in the Washington, DC, area enjoyed seeing old friends, meeting new ones and eating spectacular food at a luncheon at The Mark Restaurant in downtown Washington in November.
Rebecca McMillan, vice president for university advancement, gave an extensive report of all the latest Winthrop news. Judy Davis ’68 of McLean, VA, and Jean Appleby Jackson ’75 of Fairfax, VA, were instrumental in making sure alumni from the past five decades gathered together for a delightful afternoon!

Sumter Alumni
Sumter alumni met in the Alice Boyle Garden Center for dinner and had the opportunity to hear Betsy Brown, dean of the College of Arts and Sciences, provide an update on campus happenings. Colleen Holland Yates ’50 and Mary Faucett Nims ’49 hosted this special event in November. Johnny Deal ’84, president of the Alumni Association, was on hand to bring greetings from the Alumni Association.

Alumni from Sumter were all smiles after they finished a wonderful dinner at the Alice Boyle Garden Center.

Greenville Area Alumni
The beautiful Poinsett Club was the setting as alumni in Greenville County enjoyed a delicious dinner and a presentation made by President Anthony DiGiorgio. Johnny Deal ’84, president of the Alumni Association, brought greetings to alumni and friends in attendance. Thanks to Pat Plexico Boutwell ’84 and Kristen Gebhart Magee ’95 for helping to make the event a success.

(left to right) Meredith Byers Gergley ’93, Randy Cooke ’73, Lisa Ventimiglio ’98, Fran Heitman Peeler ’66, Sally Archer ’76, Shane Duncan ’98 and Derrick Gainey ’97, all members of the Rock Hill Alumni Club Executive Committee, enjoyed dining together as they made plans for the new year.

(above) Alumni, members of the Student Alumni Council and students smile as they learn the proper techniques of dining etiquette from a nationally known business etiquette consultant.
(right) Members of the Black Alumni Advisory Council (kneeling) Tracey Williams Pickard ’92, Trevor Beauford ’00, (standing, left to right) Finley O’Neal ’84, Abbigail Jefferson ’95, Katrina Davis O’Neal ’83, Leroy Thorn ’80 and Deidre Richburg ’85 worked throughout the fall planning for the “Party of the Millennium,” Jan. 20.

ALUMNI NEWS
Alumni Perspectives
Winthrop is kicking off a new tradition Saturday, Feb. 17 − the 2001 Homecoming Extravaganza. Plan to join alumni and friends at a “tent town” on the coliseum grounds, 11 a.m.-2:30 p.m., where your favorite local restaurants will offer delectable goodies. Visit with other alumni, student organizations and representatives from the academic areas. Stop by the Alumni Association tent for Homecoming commemorative souvenirs and children’s activities, and take time to mingle with old pals and make new friends. Plan to come for the entire Homecoming Weekend, Feb. 16-18, which will be packed full of athletic events and celebrations of every kind. Be a special part of this new tradition!

Also, classes from 1921 through 1976 ending in “1” and “6” need to be sure to put the Alumni Reunion Celebration on the calendar for April 20 and 21. Reunion classes soon will be receiving information about plans for their celebrations. Ruth Bundy Hallman and the 50th reunion committee have a jam-packed schedule awaiting their classmates, and Margaret Williamson and Pam Mungo have exciting activities planned for the 25th reunion class.

Finally, two alumni representatives to the Winthrop board of trustees will be elected during the winter of 2002. Nominations must be received in the Office of Alumni Relations by Aug. 15, 2001. Candidates must be both alumni of Winthrop and South Carolina residents. For information regarding the procedure for nominating board of trustees candidates, please contact the alumni office.
If you have questions about any of this information, please contact the alumni office on our toll free number, 800-578-6545, send us an e-mail or fax us at 803-323-2584. If you have access to a computer, please visit us online and check out all the information on the Alumni Association pages. Remember, we like hearing from you, so visit the campus − in person, by phone or on the Internet!

Martie H. Curran
Executive Director, Alumni Relations

Carrolls give business students the opportunity to play the market
Financial planners Vivian Moore Carroll ’73 and her husband Larry are giving five business students a little practical experience in the stock market – using the Carrolls’ money. The Carrolls agreed to donate $100,000 for the students – three undergraduates and two graduate students − to manage. The Carrolls would retain their original investment and absorb any losses. In addition, they agreed to annually donate any gains to the College of Business Administration for faculty development.

“Successful investing is not an academic exercise,” said Larry Carroll, founder and president of Carroll Financial Associates. “Investment education comes from making decisions, and mistakes − with real money. You cannot truly learn investing from a textbook. We hope the students involved will benefit from this for the remainder of their lives.”

Vivian Carroll, a financial consultant with Merrill Lynch, and her husband will periodically meet with the students and advise them on trading strategies. Before the students made their first buys, Larry Carroll met with associate professor of business administration Mike Evans, who has taught Winthrop’s investment class for six years.

A word of appreciation
Many thanks to the following alumnae for representing President DiGiorgio and Winthrop University at inaugurations across the country.
Maggie Lunn Foss ’41, Pepperdine University, Malibu, CA, Sept. 23, 2000, Andrew K. Benton, President
Carroll, Evans and one of the participating students discussed issues such as portfolio goals and risk tolerance. They decided that the portfolio would be allocated 100 percent to stocks. The goal would be to outperform the S&P 500 on an annual basis by 2 percent. With Evans’ guidance, the students researched and selected the stocks in which to invest. He hopes to develop the experience into a three-hour course.

Grace Pow Simpson ’53, Hampden-Sydney College, Hampden-Sydney, VA, Oct. 21, 2000, Walter Michael Bortz III, President
Ann Bass Upton ’49, University of North Carolina at Asheville, Asheville, NC, Oct. 6, 2000, James H. Mullen Jr., Chancellor

Alumni Association slate of officers for 2001-2003
The Executive Board of the Alumni Association of Winthrop University submits for your consideration the following slate of candidates to serve as officers of the Association. One person has been named to fill each office that will become vacant June 30, 2001. A majority of votes cast shall constitute an election.

Frankie Holley Cubbedge ’59, Belhaven, NC, is serving as president-elect of the Alumni Association. She was elected in 1999 to serve as the association’s president July 1, 2001 through June 30, 2003.

President-elect, to serve as president of the Alumni Association 2003-2005: Jolene Moss Stepp ’86, Rock Hill
Recipient of the 1999 Alumni Association Professional Award; former member of the Rock Hill Area Alumni Steering Committee; charter member of the Rock Hill Alumni Club, serving as member of the Executive Committee and as treasurer; member of the 1993-95 Executive Board Nominating Committee; founder and co-owner of Coldwell Banker Stepp Tuttle Realty of Rock Hill, Fort Mill, Lancaster and Chester, SC.
First vice-president: Deidre Toi Richburg ’85, Columbia, SC
Chair of the Black Alumni Advisory Council, having served as a member of the council since 1996; member of the Alumni Association Executive Board; member of the Columbia Alumni Club; employed with the USC School of Medicine in the Department of Pediatrics.

Second vice-president: Dolly Crouch Mitchell ’62, Prosperity, SC
Member of the Alumni Association Executive Board, chair of the Awards Committee; class of 1962 “permanent” reunion chair; retired from education administration.

Secretary: Timothy (Tim) Sease ’87, Mount Pleasant, SC
Member, Annual Fund Loyalty Council; 1987 class agent; former member of the Alumni Association Young Alumni Council; former reunion committee member; former treasurer of the Greater Charleston Area Alumni Club; vice president, First Federal of Charleston.

Treasurer: Linda Knox Warner ’80, Rock Hill
Treasurer and finance chair of the Alumni Association since 1987; member, University Foundation Board; active member of the Rock Hill Alumni Club; chair of the Affinity Card Task Force; accountant with Bernard N. Ackerman, CPA, P.A.

Ballot
President-elect − Jolene Moss Stepp
First vice president − Deidre Toi Richburg
Second vice president − Dolly Crouch Mitchell
Secretary − Timothy B. Sease
Treasurer − Linda Knox Warner
Please mark and return your ballot by April 1, 2001 to: Office of Alumni Relations, 304 Tillman, Winthrop University, Rock Hill, SC 29733

SPORTS NEWS
Sports update
By Jack Frost, sports information director

Young volleyball team tops 20 wins for second season
Coach Cathy Ivester’s squad, which did not have one senior on its roster, produced a 24-9 record. The team advanced to the championship game of the Big South Conference tournament for the first time, knocking off competitors until losing in the finals to Radford.
The 24 victories were a school record and give the Lady Eagles back-to-back 20-plus win seasons. Winthrop also notched 11 conference victories, another school record, and it captured impressive non-conference wins over Mississippi State, Davidson and Army. The win over MSU was played before the largest home crowd ever.

Junior outside hitter Erin Lehman placed her name in the Winthrop record book as she reached 1,000 kills and 1,000 digs for her career. The only other player to accomplish that feat was 1999 graduate Kara Galer. Lehman is also only the fourth player to reach either 1,000 kills or 1,000 digs. She was recognized for these achievements by being named a first-team All-Big South Conference selection. Sophomore Jennifer Pritchard and junior Sarah Brown were voted to the BSC All-Tournament team.

Individuals highlight soccer season
The 2000 season was a roller coaster ride for the Winthrop soccer team. The Eagles finished the season 8-9-1 overall and 3-3-1 in the Big South Conference to earn a fourth place seed in the conference tournament. Winthrop faced fifth place Charleston Southern in the first round and was knocked out 3-1. The parity of the league showed true this year as only one higher seeded team advanced to the semi-finals.

A bright spot for head coach Rich Posipanko’s team was the performance of freshman Thorvaldur Arnason from Iceland. Arnason scored 10 goals and had three assists for a total of 23 points. He finished the season with four game-winning goals, representing half of Winthrop’s win total. Arnason earned second-team All-Big South Conference honors and was joined on the honor squad by freshman teammate Donald MacGregor of Scotland and senior Brian Barrett. Barrett was also named to the Big South All-Academic team.

Winthrop’s 2000 fall sports season ranks as the university’s most outstanding for team and personal achievements since the school became an NCAA member in 1986.
Among the most notable accomplishments last fall were the women’s volleyball team school record for most victories, the first Big South Conference championship for the men’s cross country team, runner Adam Growley’s appearance in the NCAA cross country championships, four individual and two team golf tournament titles for the men’s and women’s golf teams, and a number one national ranking for the men’s golf team freshman class.

Growley leads men’s cross country to conference title, qualifies for NCAA championships
Head coach Ben Paxton entered the 2000 season with high expectations. He felt his men’s team had the talent to capture its first Big South Conference title. Paxton’s hopes proved true as Winthrop cruised to the championship behind the leadership of junior Adam Growley, who became the first Eagle to qualify for the NCAA championships.

Growley, along with teammates Justin Insco and Matt Kelleher, earned All-Big South Conference honors by finishing among the top 10 runners. Growley was third, Insco was fourth and Kelleher finished sixth. Growley, who captured the Winthrop Invitational and had six top 20 finishes, qualified for the NCAA meet in Ames, Iowa, by finishing fourth in the NCAA District Meet at Furman. Senior Ashley Ackerman was Winthrop’s representative on the Big South Conference All-Academic team.

Led by Adam Growley, the first Eagle to qualify for the NCAA championships, Winthrop ran off with the Big South Conference title.

Freshman men’s golfer ranked top in nation; women’s team wins individual medallist honors
If the fall season performance of Coach Eddie Weldon’s young men’s golf team is any indication, the Eagles should have a strong shot at capturing the 2001 Big South Championship this spring. Led by the freshman trio of Matt Johnson, Kyle Christman and Chuck Brueggeman, junior Kenny Doerrer and sophomore Matt Mondorff, Winthrop took two tournament titles at Chattanooga and Draper Valley and had a second place finish at Stetson.

Johnson, whom Weldon calls one of the most talented players he has ever recruited to Winthrop, had two individual titles as he took medallist honors at Chattanooga and tied for top honors at Draper Valley. As a team, the Winthrop men set a new school record for 18 holes with a score of 276 in the opening round of the Stetson Invitational.

Following the conclusion of the fall season, the Eagles learned that Golfstat ranked its freshman class No. 1 in the nation. Winthrop’s three freshmen compiled a stroke average of 73.80 and had a strength rating of 460.839 to finish ahead of second place Texas-Arlington. Among the top 25 teams in the rankings were perennial NCAA power Oklahoma State (4th), California (7th), North Carolina (8th), UCLA (11th), Purdue (12th), Auburn (13th), Missouri (19th) and Georgia (25th). Big South Conference member Coastal Carolina was ninth with a stroke average of 73.88 and a rating of 336.745 for its two freshmen.

On the women’s side, senior Janice Roberts and junior Katie Allison, two former Big South Rookies of the Year, finished the fall season strong by capturing back-to-back medallist honors. Roberts was the top finisher at Draper Valley near Radford while Allison captured the East Carolina Lady Pirate Invitational.

(photo: Jack Frost) A highlight of the 2000 volleyball season was a pre-game ceremony in October honoring the 1999-2000 Lady Eagle squad as the nation’s top Division I academic team. Members of last year’s squad compiled a 3.61 grade point average and were honored by the American Volleyball Coaches Association (AVCA) as the top team. Each player received a Certificate of Achievement from the AVCA and a Certificate of Honor from Winthrop President Anthony DiGiorgio, who took part in the presentation along with Athletic Director Tom Hickman, NCAA faculty representative Evelyne Weeks and senior women’s administrator Susan Anfin.
DeVaux earns All-BSC women’s cross country honors
While the men’s team reached the top of the Big South, the women’s team will set that goal for next year after finishing sixth in the league championship. One of the bright spots for the Lady Eagles throughout the year was the performance of sophomore Jenny DeVaux, who earned All-Big South Conference honors with a sixth place finish at the championship meet. DeVaux was Winthrop’s top finisher in each of its six meets. Senior Kathrin Milbury was named to the Big South Conference All-Academic team for the third consecutive year.

STUDENT SPOTLIGHTS

Roshanda Yearwood is the woman behind the news
Roshanda Yearwood says her whole life is television. As associate producer for the Charlotte NBC affiliate, she works in the wee hours of the morning compiling the news the anchors will read as viewers sip their morning coffee.

At 3:30 Saturday mornings, Roshanda Yearwood isn’t at home in bed. However, unlike many college students who are awake at that hour on a weekend, Yearwood isn’t partying. Instead, she’s hard at work writing the news for WCNC, the NBC affiliate in Charlotte. Yearwood, who is executive producer of Winthrop’s video magazine, “Winthrop Close-Up,” and edited the Roddey McMillan last year, has parlayed her experience into a part-time job – full-time after graduation in December − as an associate producer for the TV network.

In the wee hours of the morning, the senior from Columbia, SC, makes beat calls to ferret out the news, then writes stories the anchors read on “6 News Today.” She also produces the two and a half minutes of news read during the cutaways from NBC’s “Today” show. In addition to her weekend hours, Yearwood works 8 p.m. to midnight Monday-Wednesday writing stories and running the teleprompter for the 11 p.m.
“Nightcast News.” “Sometimes I look at the video to make sure reporters have a stand-up at the end of the tape or that the video is edited to match the story,” she said.

Yearwood got the job when the NBC6 news director visited her broadcast programming class last spring. “He learned I was producing ‘Winthrop Close-Up,’ and the professor told him lots of good things about me. So, he approached me about working at 6.” Yearwood reported to work in May.

Between WCNC, “Winthrop Close-Up” and classes, her time revolves around the camera. “Basically, my whole life is television,” she said. “I get up at 7 a.m., do class work until noon and when I get home on weekends. I’ve always been a good student and hard worker. Most of my classes are in the afternoon, and NBC6 is very understanding if I tell them I need time off. They realize I’m still in school.”

Although Yearwood admits she doesn’t have a social life, she feels the payoff is worth it. “I believe you have to sacrifice some things now to fulfill your goals.” Yearwood has had dreams of being in front of the camera since she was in fourth grade. The work she is doing now, she believes, is preparing her for that time.

Students of note
Kim Barroso, a sophomore piano performance major from the Philippines, won the Greater Spartanburg Philharmonic Concerto Competition. As the winner, he will appear as guest soloist with the Spartanburg Philharmonic Orchestra in a performance of the Saint-Saens Piano Concerto No. 2 in G minor, Jan. 27 at 8 p.m. at Twitchell Auditorium on the campus of Converse College in Spartanburg, SC.

“I always knew this is what I want to do,” Yearwood vows. “Initially, I wanted to be a writer. As I got older, I realized I really enjoy meeting and working with people – and I love to talk. Why not get paid for doing what I love to do? So I decided I wanted to be on television. Production is a stepping stone to where I’m going. Writing skills in this business are very important.
To be a good broadcaster or reporter, you have to have wonderful writing skills. What I’m doing is preparing me for that.”

Yearwood is excited about the opportunity she has and says she wants to stay at NBC6 for a few years and get as much training as possible. She also plans to take some more classes and get more personal training: “I want to take some voice and diction classes to perfect myself.” After that, you might hear: “And now let’s go to Roshanda Yearwood reporting from…”

Erin Morris finds strength in the power of music
When Erin Morris was born in the small town of Andrews, SC, she seemed to be a normally healthy baby. Then at three months, she began losing her sight in one eye. By 20 months, she was completely blind. “I had something called retinoblastoma. It seemed to pop up out of nowhere because there isn’t any history of it in my family,” said the junior music major. Retinoblastoma is a cancer of the eyes that occurs in about one of every 15,000 to 30,000 children. Tumors attack the retina and hinder it from receiving focused images.

Even though she was faced with such a life-altering tragedy so early in her life, Morris’ blindness didn’t affect her need to express herself in song. “I’ve been singing for as long as I can remember. As a child, I was always singing with the radio or something,” said Morris.

Although she had always enjoyed singing, Morris did not get up enough courage to perform until she was 12 years old. “Believe it or not, I first sang in front of people as a dare. One of my teachers at the time, who had always encouraged my singing, dared me to get up in front of a group of teachers for a talent show,” she said.

As she grew up, Morris became more and more confident in her talent. She sang in a gifted and talented chorus group in high school and in her church choir. She also played the flute in her high school band. “I always have had an interest in music, but I thought that if I majored in it, I would get behind,” she said.
Morris overcame her hesitation to become a music major; she is now a dedicated member of both Jazz Voices and the Chorale. Even though she has overcome her fear, the vivacious young woman faces obstacles in performing that most people don’t. Contrary to popular belief, singing is not entirely auditory, Morris submits. “Things are often more visual with music than one might expect. While the rest of the group is able to keep an even tempo by looking at the director, I have to feel when things sound right. I have to pay closer attention to what’s going on than most. But, after a while, I don’t even have to concentrate because it becomes automatic.”

Morris is so enamored with the level of expression that singing gives her that she admits she enjoys singing in the car as well as in the auditorium. “I love it all. I especially love giving my interpretation to solos, because solos show a lot about an individual and are very different from person to person.”

Although she does enjoy the occasional solo, Morris insists that she primarily likes to perform in small groups. “I like the smaller groups better because you are allowed to be more free and personal with less people to worry about. However, small groups also are more challenging because it is much easier to be heard if you mess up,” she said.

Although Morris enjoys her music, she isn’t sure what she would like to do with her degree. “I really don’t know what the future holds for me as far as music goes. I had given some thought to getting my master’s degree in social work because I know, no matter what I do, I would love to help people,” she said.

Morris has transcended the supposed limitations of her disability by a personal philosophy that stresses a concentration on strengths instead of weaknesses. “I think people should strive to find things that they are good at and that give them a sense of satisfaction. Being good at something builds character and strength.
Instead of focusing on the things you can’t do, focus on the things you can do because everyone, no matter if they have a disability or not, has a niche; it’s just our job to find it.”

Erin Morris (center) discovered a talent for singing and joins her voice with other students as a member of Winthrop’s Jazz Voices and Chorale.

STUDENT SPOTLIGHTS
English majors (continued from page 1)
of all ages in the program, from 18 to 80, just people interested in the topic.” Sullivan said she was surprised how many of her classmates were over 50 years old. “Many returned year after year. A lot of the people attending classes had Ph.D.s and most were highly educated,” Sullivan noted.

Both women chose the medieval studies program because it offered optional excursions and presented a part of English literature to which they had had little exposure. “The only medieval literature I had read was Chaucer,” Wagner explained. “I’d heard about morality plays, but I’d never read them.”

In addition to general lectures with guest speakers, Wagner took a week of study on medieval drama, a week on Richard II and one on medieval manuscripts, which was taught by a librarian at Corpus Christi College, a highly restricted manuscript library to which the students were granted access. Sullivan chose a week on Middle English literature, one on women in medieval society and a week on the history of stained glass art.

Their lecture-related excursions took them to the Globe theatre for a production of Hamlet (which Sullivan deemed “awesome”); the Swan in Stratford for a performance of Henry IV, Part I; Lincoln to see the historic cathedral and castle; historic properties in Orford, Framlingham and Southwold; the medieval town of Lavenham; and to the textile town of Norwich. “It was a great experience to study with such experts,” Sullivan commented.
“It will be very beneficial to me when I apply to graduate school to say I’ve studied at Cambridge.” Added Wagner, “It’s amazing to be able to do all this because of someone’s generosity.”

The formal studies were just one dimension of what Wagner and Sullivan gained from their fellowship experience. “Being exposed to a new place and new culture − even though it’s not as different as non-Western cultures – was incredible. Being able to actually walk into a place I’d read and heard about in other classes was wonderful,” Wagner said.

“It changed my perspective having gone to another country not knowing anyone,” agreed Sullivan, who e-mails newly made friends. “I now feel like I can do anything I want to do, and I want to either go to graduate school in London or take a year off and live there. I never would have thought about doing that before. This experience has changed the direction of my life.”

Although Wagner and Sullivan had corresponded with the Hurleys before their trip, the two English majors had the opportunity to let their benefactors know just how much their summer abroad meant to them when they met the Hurleys for the first time in December. Wagner presented them with a thank-you gift, a tapestry pillowcase from Westminster Abbey.

Student rubs shoulders with world leaders
Mauritius is an unassuming volcanic island off the southeast coast of Africa that barely covers 720 square miles in the Indian Ocean. The island is framed by a small coastal plain that rises into discontinuous mountains encircling a central plateau. Packed into this small piece of paradise is a level of cultural diversity and natural beauty that Marlborough, CT, native Kristen Planny never would have imagined.

A graduate student in counseling, Planny was selected to take part in a three-and-a-half-week internship as an assistant to former Winthrop Foundation board member U.S. Ambassador Mark Erwin, who underwrote the internship. When she first arrived, Planny had mixed emotions. “The U.S.
ambassador called Mauritius ‘the Hawaii of the Indian Ocean,’ yet some of the cities looked quite run down. However, the more I got to know the island, the more beautiful it became,” she said.

One way Planny got to know the country was in her work with the Self-Help Department, through which the U.S. provides financial support for the people of Mauritius. Planny visited some of the non-profit organizations that had requested funds from the embassy, including centers for the blind, the mentally handicapped, drug rehabilitation and preservation of wilderness. One of Planny’s most important jobs was making sure the money was being spent on the projects for which it had been requested. “The blind center requested the money to purchase a piece of equipment that translates print into Braille, and the mentally handicapped school requested money for some furniture for a new school they were building. I was responsible for making sure these organizations followed through with their use of the money.”

Students of note
Four art and design students won awards in the S.C. State Fair Juried Art Exhibition in Columbia. Chris Clamp, a senior from Leesville, SC, won a Merit and Purchase award for “To Think of You Is to Cherish a Stinging Memory.” Josh Drews, a senior from Columbia, was awarded third place in prints for his monoprint “One Over Her.” Frank McCauley, a junior from Summerville, SC, was awarded a Merit Award for “Place Head Here.” Drews’ and McCauley’s works were accepted into the traveling exhibition as well. Jeffrey Smith, a sophomore from Columbia, was awarded second place in the Painting on Canvas for Amateurs category for his piece “Untitled.” Smith also was awarded third place in Drawing for Amateurs for “Melody Maker.”

Kristen Planny never dreamed Mauritius’ then Prime Minister Navinchandra Ramgoolam himself would offer to show her how government business is conducted in a parliamentary system.
Planny spent 3½ weeks in the Indian Ocean nation as an assistant to former Winthrop Foundation board member U.S. Ambassador Mark Erwin. Planny said visiting these non-profits gave her a close look at a side of the country she would not otherwise have seen.

In addition to her watchdog role, Planny took part in embassy meetings to discuss issues such as the New Africa Bill. “The New Africa Bill is a trade agreement between the U.S. and Africa. Mauritius sees it as a way to accelerate its current import and export quota with the U.S. Although Mauritius is a country with a growing trade industry that exports such things as sugarcane and textiles like Gap clothing, it also has a thriving tourism business. The bill could help Mauritius remain the African country with the highest per capita income,” she said.

Planny also learned about her host country through the social scene. Not only did she help plan a huge July 4th celebration for the more than 1,200 people at the ambassador’s residence, but she was invited to parties that were much different from any she’d ever been to in the U.S.: ones for the independence of Russia, the Queen of England’s birthday and the independence of Madagascar. “I was so honored to be a part of these parties with foreign diplomats. They made me feel so welcome that I really felt a part of the celebration. It was one of the most incredible experiences I had in Mauritius,” she said.

Although she was surrounded by a virtual paradise, Planny said what she most enjoyed about the country was her co-workers at the U.S. Embassy and the people of Mauritius themselves. “The people were so hospitable. It was so nice to be in a place where Americans were not only welcome but actually sought after. The majority of the tourists are from France and England; they don’t get as many Americans,” she said. According to Planny, approximately 5,000 Americans visit the island each year.
One major factor contributing to this low number is that it takes 21 hours to get there from the U.S. “Including layovers, it took me around 36 hours to get to Mauritius,” she said. However, getting there was worth it. Planny experienced hospitality from both local people and foreign diplomats. Mauritius’ then-Prime Minister Navinchandra Ramgoolam invited Planny to his office to show her how government business is conducted in a parliamentary system. “The prime minister gave me a special invitation to meet with him. He let me sign a book that diplomats he had met with from all over the world had signed. I felt so honored and surprised to be listed among such great people.”

A member of the British Commonwealth, Mauritius boasts a multitude of languages and cultures. English is the official language, but Planny discovered almost everyone in the country also speaks French and Creole and, to a lesser degree, Hindi, Urdu, Hakka and Bojpoori. Fifty-two percent of the people are Hindu, while 28 percent are Christians and 17 percent Muslims. To add to the diversity, there are also a number of ethnicities on the island, including Indo-Mauritians, Creoles, Sino-Mauritians and Franco-Mauritians. “With all of these contrasting cultures living on one island, it would seem as if there would be some sort of cultural tension. But Mauritius is a very culturally tolerant country due to the unifying force of the Creole language and the fact that tolerance is taught from a very early age,” said Planny.

Because of her experiences in Mauritius, Planny learned more about both her own culture and herself. “It was the cultural differences that allowed me to learn about the U.S. Mauritius was an incredible learning experience for me and I couldn’t have dreamed anything to compare with it.”

Office of University Relations, Rock Hill, South Carolina 29733
“The Refinery,” an oil on canvas painted by Ed Lewandowski in 1949, exemplifies why the former Winthrop art department chair was honored in a documentary.

Winthrop Update, Volume 9, Number 2, Winter 2000-01
Application SDK Development Tutorial

This tutorial provides hands-on experience developing Gecko OS Native C Applications using the Gecko OS Application SDK. The tutorial covers the steps required to develop and debug a Gecko OS application, as well as the process to deploy and manage a Gecko OS application on the Zentri Device Management Service (DMS).

The application developed in this tutorial is a simple network-connected weather station app featuring the Si7021 Relative Temperature and Humidity (RH/T) Sensor available on the WGM160P Wireless Starter Kit (WSTK) mainboard. The application source code for the tutorial is also provided for reference. Additional Gecko OS Application Examples are available in the Gecko OS Application SDK.

Software and Hardware Requirements

This tutorial requires:
- A WGM160P Wireless Starter Kit and Gecko OS Studio (GSS). For details on installing and setting up GSS, see Getting Started with Gecko OS Studio.
- An account on the Zentri Device Management Service (DMS). Sign in to both the DMS and Gecko OS Studio using a registered Silicon Labs account.
- A UART terminal emulator application, e.g. Tera Term, minicom, CoolTerm, and so on.
Topics
- Creating a Project in GSS
- Setting up Your Device for Development
- Build and Debug the Application
- Output 'Hello World' via the Serial Port
- Networked Application with a Single Line of Code
- Using a Device Driver to Communicate with the Si7021 Sensor
- Introduction to the Project Makefile
- Creating Events to Read and Report the Weather
- Gecko OS Components
- Adding the Gecko OS Web Application
- Implementing Wi-Fi Setup via the Web App
- Preserve NVM Settings During Development
- Configuring the Default Gecko OS Settings for Your Application
- Adding Files to the File System
- Deploying Your Application to the DMS
- Performing Over the Air Firmware Updates via the DMS
- Sharing Your Development Product with Other Users or Other Devices
- Taking the Product to Production on the DMS

Advanced Topics

Creating a Project in GSS

Open GSS and navigate to the Projects page as shown below. Click Create New Project, then complete the project fields as shown below. A brief description of each field follows the image.

Note! Ensure that file and directory names do NOT contain spaces.

A description of the New Project page fields is provided below.
- Project Name. The project identifier used in the IDE and the filesystem.
- Target. The hardware target for the project. If a device is connected, this field will automatically be set to the connected device type.
- License. Currently the STANDARD Gecko OS license is the only license that supports SDK development, therefore it is the only option.
- Software. The Gecko OS version used for the project. This field should be set to the latest release unless a previous (older) release is required.
- DMS Product Code. The product code used to identify the project on the DMS. The product code must NOT contain spaces and is a maximum of 16 characters.
- Workspace Path. The local workspace directory on disk where the project will be created and where the Gecko OS ASDK source code will be extracted.
GSS will create a default folder; however, a different folder location may be entered.

Click the Create button to create the project. This initiates the process of downloading the SDK and platform files, creation of the project template and setup of the device for development. The project creation process can be run without a device connected (further information is available in the section Setting up Your Device for Development). During the project creation process, the following page is shown.

After the project is created, two options are presented: Open Project Folder and Open in IDE. If the IDE has not previously been installed, select the option that includes Install IDE as shown below.

Selecting Open in IDE launches the IDE, where the newly created weather project and the Gecko OS Application SDK can be explored. The project and Gecko OS ASDK are located adjacent to one another in the Project Explorer column on the left-hand side of the IDE.

Setting up Your Device for Development

A critical step for Gecko OS development is device setup. The device setup process performs the actions necessary to put the device in development mode and to enable the device to interact securely with the DMS. Device setup must be performed prior to using a device for development of a particular application. Device setup is required one time for a given application, although it must be repeated after switching from one project to another. Your computer must be connected to the Internet for device setup to be successful.

You can begin software development without performing device setup, but you will not be able to load and run the project on the device until this process is complete. If a device is connected to your computer during the GSS project creation process, the device will be automatically set up for development. Device setup can be performed at any time for a given project by double-clicking Setup Device under the Build Targets of your project as shown below.
The device setup process performs several actions outlined in the following list; DMS concepts referred to in the list below will be introduced later in this tutorial.
- Initialize the device security keys necessary to communicate with the DMS
- Claim the device into your DMS account (if not already claimed)
- Create a Product for your project named WEATHER
- Activate the device to the WEATHER Product
- Program the device with the default Gecko OS Product, which is SILABS-WGM160P for the WGM160P module.

View your Device and Product in the DMS

Using a web browser, go to the Zentri DMS and log in using your Silicon Labs account. The activity log, available in the bottom left corner of the DMS (example shown below), shows a log of recent activity in your account. Several entries are shown corresponding to the project creation and device setup process described in the tutorial so far.

Select the Devices tab of the DMS (see below); the device is claimed into your account (now owned by you) and activated to your new development product called WEATHER. Select the Products tab, select the blue Development Products filter, and your recently created development Product WEATHER appears. See the example below.

You may have noticed the DMS prepended a 6-character code to your Product code. The examples above show the product 03MWMC-WEATHER. The 6-character code is a unique DMS customer code and is used to identify your development products. Your customer code is automatically prepended when a development product is created.

Build and Debug the Application

Building the Application

Double-click one of the Build Targets of your project to build, download, and run the project on the device. Choose from three options:
- Download Application. Build the project and download only the executable portion of the project to the device. This excludes resources that would otherwise be programmed to the filesystem, such as config files, web server assets, and so on.
- Download Resources. Build the project and download only the resources to the device. This excludes the executable portion of the project.
- Download Application and Resources. Build the project and download the executable and all of the resources to the device. This target downloads everything in the project to the device.

The Download Application and Resources option is the fool-proof way to ensure the entire project is on the device. You should use this option the first time you build and download a project to a device. To save time during development, you can use Download Application if you have only modified your application code since the last download, or you can use Download Resources if you have only modified project resources since the last download.

After the build and download is complete, your application automatically runs on your device.

Debugging the Application

To start a debug session, click on the bug icon as shown below. The application must already be downloaded to the device. The debugger launches and the application stops execution at the beginning of the function gos_app_init(), the entry point to all Gecko OS applications.

The output of the project build is located in the SDK subfolder named output. The SDK and all subfolders are available in the IDE Project Explorer column.

Output 'Hello World' via the Serial Port

The entry point to any Gecko OS application is the function gos_app_init(); see Gecko OS Application Structure. Notice that when the project was created, GSS automatically inserted a GOS_LOG("Hello World!"); statement in the gos_app_init() function. The GOS_LOG() statement is a macro that outputs the specified text to the log bus.

With the application downloaded to the device, open a device console, press the Reset button, and you will see the following text appear on the console. If you need help getting the console working, see Gecko OS Console.

[Ready]
LOCAL>
Hello world!
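As the examples in this tutorial show, GOS_LOG() is used like printf and each call appears to emit one line on the console. A rough host-side stand-in for that behavior is sketched below (illustrative only — log_line is a hypothetical helper; the real GOS_LOG macro writes to the Gecko OS log bus, not a caller-supplied buffer):

```c
// Illustrative host-side stand-in for a printf-style line logger such as
// GOS_LOG. Formats a message into `dst` and appends "\r\n", as a serial
// logger would terminate each line.
#include <assert.h>
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

static int log_line(char *dst, size_t cap, const char *fmt, ...)
{
    va_list args;
    va_start(args, fmt);
    int n = vsnprintf(dst, cap, fmt, args);
    va_end(args);

    if (n < 0 || (size_t)n + 2 >= cap)
    {
        return -1; // formatting error, or not enough room for "\r\n"
    }
    strcpy(dst + n, "\r\n");
    return n + 2; // total length written, including the line terminator
}
```

The same call shape — format string first, arguments after — is used with GOS_LOG throughout the rest of this tutorial.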
Type the Gecko OS command version (or the equivalent shortcut ver) and text similar to the following appears.

LOCAL> ver
03MWMC-WEATHER-0.0.0-local, Gecko_OS-STANDARD-4.0.17-1380, WGM160P

Note that the first part of the version is the product code created in the DMS during project creation.

For practice, copy the following code and replace the "Hello World!" text by editing the gos_app_init() function in main.c.

void gos_app_init(void)
{
    // Print to the serial terminal
    GOS_LOG("Weather Demo ");
    GOS_LOG(" | ");
    GOS_LOG(" \\ | / ");
    GOS_LOG(" \\ / ");
    GOS_LOG(" ,d8b, ., ");
    GOS_LOG(" (')-\")_ 88888 --- ;';' ';'. ");
    GOS_LOG("('- (. ')98P' ';.,; ,; ");
    GOS_LOG(" '-.(PjP)' \\ '.';.' ");
    GOS_LOG(" | \\ ");
    GOS_LOG(" | ");
}

Save the changes and double-click the Download Application target. After the application builds and downloads, the following text appears on the device console.

[Ready]
LOCAL>
Weather Demo
 |
 \ | /
 \ /
 ,d8b, .,
 (')-")_ 88888 --- ;';' ';'.
('- (. ')98P' ';.,; ,;
 '-.(PjP)' \ '.';.'
 | \
 |

Networked Application with a Single Line of Code

Even though barely any code has been written, the project is already a complete, fully-functional network-ready application. You can test this in the device console by connecting to a network and downloading a portion of the Google website using the sequence of Gecko OS commands shown below. Note that in the example below, the yellow text is user input and the white text is device output.

LOCAL> network_up -s
! 41 networks found
!  # Ch RSSI MAC (BSSID)       Network (SSID)
#  0  1  -79 82:2A:A8:87:82:40 SiliconLabsGuest
#  1  1  -71 80:2A:A8:87:C2:8D SiliconLabs
#  2  2  -66 2C:30:33:51:36:57 NETGEAR61
#  3  6  -85 92:2A:A8:87:A3:39 <ssid hidden>
Type the number # that matches your Network: 0
Type the password for your Network : ********
[Associating to SiliconLabsGuest]
In progress
LOCAL>
Security type from probe: WPA2-AES
Obtaining IPv4 address via DHCP
IPv4 address: 10.1.54.38
[Associated]
LOCAL> http_get
[2019-02-27 | 15:35:34: Opening:]
Request GET /
Connecting (http):
[2019-02-27 | 15:35:34: Opened: 1]
Status: 200
1
LOCAL> stream_read 1 1000
<"><title>Google</title><script nonce="kR0INZ86xC/a7AUJZCrEjg==">(function(){window.google={kEI:'Ra52XNPRMY6asQW0o4yQCA',kEXPI:'0,1353747,57,1958,1016,1406,698,527,730,1799,30,1227,806,911,247,25,203,27,222,1,37,430,2334178,329533,1294,12383,4855,32692,15247,867,6057,4704,1402,6381,3335,2,2,6801,364,1172,2147,1263,4242,224,2218,260,5107,575,835,284,2,579,727,2431,59,2,1,3,1297,4323,3390,10,300,658,609,774,2250,1407,3337,1146,5,2,2,981,764,222,2591,1021,2580,669,1050,1808,1129,268,81,7,1,2,488,620,29,1395,978,2632,696,3

In addition to the above example, the entire Gecko OS Command API is available for use with the application. Also, at this early stage of development, the application may be deployed to the DMS with full over-the-air firmware update capabilities, as described later in this tutorial in the sections Deploying Your Application to the DMS and Performing Over the Air Firmware Updates via the DMS.

Using a Device Driver to Communicate with the Si7021 Sensor

Next, you will learn how to add the Si7021 temperature/humidity sensor device driver to the project and use the driver to read the temperature and humidity from the sensor.

STEP 1. Download the Si7013 device driver source. Note that the Si7013 driver also supports the Si7021 temperature/humidity sensor, as both parts are from the same device family.

STEP 2.
Add the downloaded files into the project folder, either by unzipping the file directly in the project folder or by unzipping it elsewhere and dragging and dropping it into the project in the IDE. You may have to refresh the IDE for the new files to appear. To refresh, right-click on the project name in the IDE, then select Refresh from the pop-up menu. After adding the driver, your project should look similar to the following screen capture.

STEP 3. Add code to detect the Si7021 using the newly added driver. The Si7021 driver uses the Gecko OS I2C Master peripheral API to communicate with the device. To use the driver, first include the driver header file with #include "si7013.h" and then declare a gos_i2c_device_t that is used as the handle to the driver. After those two are added, we can use the driver to communicate with the device. The code below adds the driver and calls two driver functions from within gos_app_init() to detect the device and read the device firmware version.

#include "gos.h"
#include "si7013.h"

static gos_i2c_device_t si7021_device = {
    .port         = PLATFORM_STD_I2C,
    .address      = SI7021_ADDR >> 1,
    .speed        = GOS_I2C_CLOCK_HIGH_SPEED,
    .read_timeout = 50, /* si7021 can take up to 25 ms to finish a conversion */
    .retries      = 0,
    .flags        = 0
};

// ------------------------------------------------------------------------------------------------
void gos_app_init(void)
{
    uint8_t part_id = 0;
    uint8_t part_rev = 0;

    // Print to the serial terminal
    GOS_LOG("Weather Demo");

    // Detect the Si7021 and read the firmware version
    Si7013_Detect(&si7021_device, 0, &part_id);
    Si7013_GetFirmwareRevision(&si7021_device, 0, &part_rev);

    if(part_id == SI7021_DEVICE_ID)
    {
        GOS_LOG("Detected Si7021 Version %d.%d", ((part_rev >> 4) & 0xF), (part_rev & 0xF));
    }
    else
    {
        GOS_LOG("Si7021 Not Detected");
    }
}

STEP 4. Add the include path for the si7013 header to the project include path. Right-click the project name then select Properties.
In the Properties dialog, under C/C++ Build -> Settings -> Include Paths, use the green + icon to add a new include path string si7013 as shown below.

STEP 5. Build and download the application by double-clicking on Download Application. After the download completes, the following text appears in the device console.

[Ready]
LOCAL>
Weather Demo
Detected Si7021 Version 2.0

Introduction to the Project Makefile

You may have noticed a file named weather.mk in your project. This is the makefile for the project, which instructs the build system how to build your project. Note that when using the GSS Eclipse IDE for development, the makefile is automatically managed and edited by the IDE; therefore, editing this file directly may not always produce the desired outcome. Also note that the IDE does not edit the makefile until a build is performed, so modified project settings will not be reflected in the makefile until a build is performed.

Creating Events to Read and Report the Weather

Next, add a couple of events to read the temperature and humidity from the Si7021 and report the results to the device console. Gecko OS applications follow an event-driven programming model. The details will not be discussed in this tutorial, but more information can be found in the Gecko OS documentation.

For this tutorial, you will learn how to add a periodic event that initiates a Si7021 measurement. This event will, in turn, register a timed event to read the results 50 milliseconds later, then report the results to the device console. The periodic event will run every 10 seconds.

STEP 1. Register the periodic event in gos_app_init() as shown below.

#define WEATHER_REPORT_PERIOD_MS 10000

void gos_app_init(void)
{
    ...
    gos_event_register_periodic(start_measurement_event_handler, 0, WEATHER_REPORT_PERIOD_MS, 0);
}

STEP 2. Add the event handler functions as shown below to the main.c file of the project.
The code registers a periodic handler start_measurement_event_handler() that runs every WEATHER_REPORT_PERIOD_MS milliseconds. The periodic handler starts a measurement, then registers a one-shot timer handler read_and_report_event_handler() that reads and reports the sensor value after SI7021_MEASUREMENT_TIME_MS (which is enough time for the sensor reading to complete). Existing code is highlighted in blue text, new code is in black.

#define SI7021_MEASUREMENT_TIME_MS 50 // the si7021 takes max 50ms to perform a measurement

void gos_app_init(void)
{
    ...
}

// ------------------------------------------------------------------------------------------------
// Start the measurement then schedule a timed event to read the measurement
// ------------------------------------------------------------------------------------------------
static void start_measurement_event_handler(void *arg)
{
    // Initiate the measurement
    Si7013_StartNoHoldMeasureRHAndTemp(&si7021_device, 0);

    // The Si7013 takes a maximum of 50 ms to complete a measurement,
    // therefore we register an event to read the results 50 ms later
    gos_event_register_timed(read_and_report_event_handler, arg, SI7021_MEASUREMENT_TIME_MS, 0);
}

// ------------------------------------------------------------------------------------------------
// Read the measurement then report the results
// ------------------------------------------------------------------------------------------------
static void read_and_report_event_handler(void *arg)
{
    uint32_t rh_data;
    int32_t temp_data;
    char temp_float_str[10];
    char rh_float_str[10];

    /* ... read rh_data and temp_data from the sensor, convert them to
       strings and log: GOS_LOG("Temp: %s C | RH: %s %%", temp_float_str, rh_float_str); ... */
}

STEP 3. After adding the code shown above, double-click on Download Application. Once the project is built and downloaded to the device, the following will appear in the device console, with a new report added every 10 seconds.

LOCAL>
Weather Demo
Detected Si7021 Version 2.0
Temp: 29.2 C | RH: 31.3 %
Temp: 29.3 C | RH: 31.2 %
Temp: 29.2 C | RH: 31.1 %
Temp: 29.3 C | RH: 31.1 %

STEP 4.
Place your finger on the Si7021 sensor or blow air over the top of it to see the results change as the sensor heats or cools. For your reference, the image below shows the location of the Si7021 sensor on the WSTK main board.

Gecko OS Components

The Gecko OS build system utilizes the concept of 'components' to allow easy modularization and reuse of source code and other resources. A component can be thought of as a code library, but it may also include other resources such as files to be stored on the filesystem. A component may also include sub-components. The component may contain source code or static libraries.

The primary element of a component is a makefile. When a component is included in your project, the Gecko OS build system executes the build of the component makefile and adds the component to your application. The Gecko OS Application SDK includes a variety of components that can be added to your project using the following procedure.
- Right-click on the project name then select Properties.
- In the Properties dialog navigate to C/C++ Build -> Settings -> Components.
- Add a component using the relative path in the SDK. Note that the component path is equivalent to the relative path of the component within the SDK, except with periods (.) instead of slashes (/) to separate sub-folders. For example: components.gecko_os.util.msgpack_file_reader

Adding the Gecko OS Web Application

The Gecko OS web application is technically a Gecko OS component. However, it is added and removed from a project differently in the IDE. By default, the web app component is added to a new project upon project creation because it is commonly used by many applications. To remove it, right-click on the project then select Properties. In the Properties dialog navigate to C/C++ Build -> Settings -> General. Use the Add Gecko OS Webapp check box as shown in the image below. Before continuing, select Apply and Close.
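The period-to-slash convention for component paths can be sketched as a tiny helper (component_path_to_folder is a hypothetical function for illustration only, not part of the SDK):

```c
// Illustrative sketch: converts a dot-separated Gecko OS component path
// into the equivalent SDK folder path by replacing each '.' with '/'.
#include <assert.h>
#include <stddef.h>
#include <string.h>

static void component_path_to_folder(const char *component, char *out, size_t out_len)
{
    size_t i;

    // Copy character by character, mapping '.' to '/'
    for (i = 0; component[i] != '\0' && i < out_len - 1; i++)
    {
        out[i] = (component[i] == '.') ? '/' : component[i];
    }
    out[i] = '\0';
}
```

For example, the path components.gecko_os.util.msgpack_file_reader maps to the SDK folder components/gecko_os/util/msgpack_file_reader.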
Go to the web app component folder found in the SDK sub-directory resources/gecko_os/webapp, and view the web app makefile webapp.mk as well as the associated manifest.cfg file. Notice that this particular web app component does not consist of any code; it is purely a set of files programmed to the file system on the device.

With the application running, open the device console and type the ls command. You will see the web app files matching those listed in the manifest.cfg file. This is shown below with the web app files highlighted in yellow.

LOCAL> ls
!  #   Size   Version  Filename
#  0  171792    0.0.1  Weather.app
#  1     297    3.1.3  favicon.ico.gz
#  2  294520   4.0.17  sys/kernel.bin
#  3  297040    2.1.0  sys/wfm_wf200.sec
#  4   23823    3.1.3  webapp/gecko-os.css.gz
#  5   72758    3.1.3  webapp/gecko-os.js.gz
#  6    1819    3.1.3  webapp/index.html
#  7    9530    3.1.3  webapp/unauthorized.html

For practice, remove the web app component from the project and view the changes to the application. To remove the web app, uncheck the Add Gecko OS Webapp check box in the project properties dialog as described previously. Click Apply and Close, then double-click on the build target Download Application and Resources.

Note! Since the web app component includes resources, you must use Download Application and Resources for the changes to be reflected on the device.

After the application is loaded, type the ls command in the device console again and observe that the web app files are no longer available on the device file system.

LOCAL> ls
!  #   Size   Version  Filename
#  0  171792    0.0.1  Weather.app
#  1  294520   4.0.17  sys/kernel.bin
#  2  297040    2.1.0  sys/wfm_wf200.sec

Re-add the web app by checking the Add Gecko OS Webapp check box in the project properties dialog. Then double-click the Download Application and Resources build target. Confirm the web app files have been restored to the device file system.
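The manifest.cfg format pairs a [section] per file with key: value attribute lines. A host-side sketch of a lookup over that layout is shown below (manifest_get is a hypothetical helper for illustration; the real parsing is done by the Gecko OS build system and firmware):

```c
// Illustrative sketch: looks up `key` inside `[section]` of
// manifest-style text ("[section]" headers, "key: value" lines,
// '#' comment lines). Returns 1 and copies the value on success.
#include <assert.h>
#include <stddef.h>
#include <string.h>

static int manifest_get(const char *text, const char *section,
                        const char *key, char *out, size_t out_len)
{
    char line[128];
    int in_section = 0;
    const char *p = text;

    while (*p != '\0')
    {
        // Copy one line into `line`
        size_t n = 0;
        while (*p != '\0' && *p != '\n' && n < sizeof(line) - 1)
        {
            line[n++] = *p++;
        }
        line[n] = '\0';
        if (*p == '\n') p++;

        if (line[0] == '#' || line[0] == '\0') continue;  // comment / blank

        if (line[0] == '[')                                // section header
        {
            char *close = strchr(line, ']');
            if (close != NULL)
            {
                *close = '\0';
                in_section = (strcmp(line + 1, section) == 0);
            }
            continue;
        }

        if (!in_section) continue;

        char *colon = strchr(line, ':');                   // "key: value"
        if (colon == NULL) continue;
        *colon = '\0';

        if (strcmp(line, key) == 0)
        {
            const char *value = colon + 1;
            while (*value == ' ') value++;                 // skip leading spaces
            strncpy(out, value, out_len - 1);
            out[out_len - 1] = '\0';
            return 1;
        }
    }
    return 0;
}
```

With the webapp's manifest, for instance, looking up "type" in the [webapp/index.html] section would return the entry's file type attribute.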
Wi-Fi Setup Using the Web App

The web app can be used as the primary interface to configure Wi-Fi network settings for your device. To enable use of the web app, we will change the application code to start Web Setup if a Wi-Fi connection is not successful. Web Setup initiates softAP mode, allowing the user to connect to the device via Wi-Fi as if the device itself were an access point. Once connected, the user can configure the Wi-Fi network.

Note! The Gecko OS SDK includes a device setup example application that can also be used to explore this functionality.

STEP 1. In the function gos_app_init(), add a call to a new function to start the network interface, and register two new event handlers to monitor and report the status of the setup events. As before, existing code is in blue and new code is in black.

void gos_app_init(void)
{
    ...
    gos_network_register_event_handler(GOS_INTERFACE_WLAN, wlan_network_event_handler);
    gos_setup_register_finished_event_handler(setup_finished_handler);
    start_network_interface();
}

STEP 2. Append the function start_network_interface() to main.c. This function attempts to connect to a Wi-Fi network and, if not successful, starts Web Setup.

// ------------------------------------------------------------------------------------------------
// Try to start WLAN interface
// On failure, start web setup
// NOTE: Web setup will idle timeout after setup.web.idle_timeout seconds
// Upon timeout, the system is rebooted.
// ------------------------------------------------------------------------------------------------
static void start_network_interface(void)
{
    gos_result_t result;

    GOS_LOG("\r\n\r\nChecking if device is able to connect to local network ...");

    if (GOS_FAILED(result, gos_network_up(GOS_INTERFACE_WLAN, true)))
    {
        GOS_LOG("Failed to join local network, starting Web Setup");

        if (GOS_FAILED(result, gos_setup_start()))
        {
            GOS_LOG("Failed to start Web Setup, rebooting");
            gos_system_reboot();
        }
        else
        {
            char buffer_url[128];
            char buffer_username[64];
            char buffer_passkey[64];
            uint32_t idle_timeout;

            gos_settings_get_uint32("setup.web.idle_timeout", &idle_timeout);
            get_setup_url("setup.web.url", buffer_url, sizeof(buffer_url));
            get_device_name("setup.web.ssid", buffer_username, sizeof(buffer_username));
            gos_settings_get_print_str("setup.web.passkey", buffer_passkey, sizeof(buffer_passkey));

            GOS_LOG("");
            GOS_LOG("Device Web Setup started");
            GOS_LOG(" 1. Connect to Wi-Fi network: %s with passkey: %s", buffer_username, buffer_passkey);
            GOS_LOG(" 2. Open browser to: %s", buffer_url);
            GOS_LOG(" 3. Setup the device\r\n");

            if(idle_timeout != 0)
                GOS_LOG("NOTE: Web setup will idle timeout after %d seconds\r\n", idle_timeout);
        }
    }
    else
    {
        GOS_LOG("Device WLAN started");
    }
}

STEP 3. Append the function wlan_network_event_handler() to main.c. This function handles network events on the WLAN interface.

// ------------------------------------------------------------------------------------------------
// Event handler for WLAN interface events
// ------------------------------------------------------------------------------------------------
static void wlan_network_event_handler(bool is_up)
{
    // If the WLAN interface has gone down AND the softAP interface is NOT up,
    // try to restart the WLAN interface
    if ((is_up == false) && (gos_network_is_up(GOS_INTERFACE_SOFTAP) == false))
    {
        // If the network goes down try to restart it, else start web setup
        start_network_interface();
    }
}

STEP 4.
Append the function setup_finished_handler() to main.c. This function is called when the user has set up the WLAN interface from the web app.

// ------------------------------------------------------------------------------------------------
// Just reboot the system when web setup finishes
// ------------------------------------------------------------------------------------------------
static void setup_finished_handler(void *unused)
{
    GOS_LOG("Web setup finished, rebooting system");
    gos_system_reboot();
}

STEP 5. Append the following helper functions to main.c. These functions are used to format the device name and URL for console output.

// ------------------------------------------------------------------------------------------------
// Convert '#' to last three chars of device's MAC address
// ------------------------------------------------------------------------------------------------
static const char *get_device_name(const char *setting, char *buffer, size_t buffer_length)
{
    int setting_len;
    char *setting_ptr;

    gos_settings_get_print_str(setting, buffer, buffer_length);

    setting_len = strlen(buffer);
    setting_ptr = &buffer[setting_len - 1];

    if (*setting_ptr == '#')
    {
        char mac_str[20];

        // Example output:
        // 4C:55:CC:10:78:9B
        gos_settings_get_print_str("softap.mac", mac_str, sizeof(mac_str));

        *setting_ptr++ = mac_str[13];
        *setting_ptr++ = mac_str[15];
        *setting_ptr++ = mac_str[16];
        *setting_ptr = '\0';
    }

    return buffer;
}

// ------------------------------------------------------------------------------------------------
// Get the first entry in the setup.web.url domain list
// ------------------------------------------------------------------------------------------------
static const char *get_setup_url(const char *setting, char *buffer, size_t buffer_length)
{
    gos_settings_get_print_str("setup.web.url", buffer, buffer_length);

    char *comma_ptr = strchr(buffer, ',');
    if (comma_ptr != NULL)
    {
        *comma_ptr = '\0';
    }

    return buffer;
}

STEP 6. Double-click on the Download Application and Resources build target.
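The '#' substitution performed by get_device_name() can be exercised on a host PC with a self-contained sketch (make_device_name is a hypothetical helper written for this illustration, not part of the Gecko OS API). It replaces a trailing '#' in a name template with the last three hex digits of a MAC address string:

```c
// Illustrative sketch of the '#' -> MAC-suffix substitution: replaces a
// trailing '#' in `template_name` with the last three hex digits of a
// MAC string such as "4C:55:CC:10:78:9B" (colons are skipped).
#include <assert.h>
#include <stddef.h>
#include <string.h>

static void make_device_name(const char *template_name, const char *mac,
                             char *out, size_t out_len)
{
    size_t len = strlen(template_name);

    // Copy the template into the output buffer
    strncpy(out, template_name, out_len - 1);
    out[out_len - 1] = '\0';

    // Nothing to substitute, or not enough room for three extra characters
    if (len == 0 || template_name[len - 1] != '#' || len + 3 > out_len)
    {
        return;
    }

    // Collect the last three hex digits of the MAC, skipping ':' separators
    char digits[3];
    int collected = 0;
    int i = (int)strlen(mac) - 1;
    while (i >= 0 && collected < 3)
    {
        if (mac[i] != ':')
        {
            digits[2 - collected] = mac[i]; // fill from the right
            collected++;
        }
        i--;
    }
    if (collected < 3)
    {
        return; // MAC string too short
    }

    // Overwrite the trailing '#' with the three digits and re-terminate
    char *dst = &out[len - 1];
    dst[0] = digits[0];
    dst[1] = digits[1];
    dst[2] = digits[2];
    dst[3] = '\0';
}
```

With the example MAC 4C:55:CC:10:78:9B, the template "Gecko_OS-#" becomes "Gecko_OS-89B", matching the softAP SSID pattern shown in the console output in the next step.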
After the application is loaded, you will see a message similar to the following in the device console.

Failed to join local network, starting Web Setup
Finding best SoftAP channel ...
IPv4 address: 10.10.10.1
HTTP and REST API server listening on port: 80
Wi-Fi softAP: Gecko_OS-1BF on channel 1
Web browser :

Device Web Setup started
 1. Connect to Wi-Fi network: Gecko_OS-1BF with passkey: password
 2. Open browser to
 3. Set up the device

NOTE: Web setup will idle timeout after 300 seconds

STEP 7. Using a mobile device or a computer, follow the instructions to connect to the SoftAP created by the device and use the web app to configure the Wi-Fi settings. After the Wi-Fi settings are configured, the device reboots. After reboot, the device automatically connects to the Wi-Fi network.

Preserve NVM Settings During Development

The Wi-Fi settings configured in the previous section are stored in the Gecko OS variables wlan.ssid and wlan.passkey and saved in Non-Volatile Memory (NVM) along with other settings. Typically, NVM is reset every time an application is downloaded to the device. This can become burdensome during development because it requires the developer to set up the Wi-Fi network and other settings again each time the code is modified. To remedy this, you can enable a project setting to preserve NVM settings through application programming.

Edit the project makefile weather.mk to specify the PRESERVE_NVM setting as shown below.

Note! Because GSS does not (yet) include a user interface control to enable the PRESERVE_NVM setting, you have to edit the makefile manually.

# Preserve NVM through application download
PRESERVE_NVM := 1

Configuring Gecko OS for your Application

Gecko OS Variables configure a Gecko OS device; they are essentially just configuration settings. The Gecko OS Settings API is used to programmatically access Gecko OS variables. In the context of the C API, the terms variables and settings may be used interchangeably.
For this tutorial, you will create a settings.ini file and use the API function gos_load_app_settings_once() to configure the default state of the HTTP server. Any other Gecko OS variable can be added to the settings.ini file to configure that variable's default value.

STEP 1. Create a new file in the project called settings.ini with the following content.

# Enable the HTTP Server
http.server.enabled 1

# Enable the mDNS daemon
mdns.enabled 1

# Configure the mDNS name.
mdns.name mydevice-#

STEP 2. Add a function call to gos_load_app_settings_once() to the gos_app_init() function as shown below. This causes the settings stored in the settings.ini file to be loaded when the application is initialized.

void gos_app_init(void)
{
    GOS_LOG("Weather Demo");
    ...
    gos_load_app_settings_once("settings.ini", 1);
    ...
}

STEP 3. Edit the project makefile weather.mk to specify the settings.ini file as shown below.

Note! Because GSS does not (yet) include a user interface control to add the settings.ini file, you must edit the makefile manually.

# Paths to app settings .ini files (paths are relative to project directory)
$(NAME)_SETTINGS_INCLUDES := settings.ini

STEP 4. Double click on the Download Application build target to rebuild and download the application. Note that the settings file is built into the application image and is NOT stored in the filesystem.

STEP 5. After the application is loaded on the device, verify the device settings are correct using the console as shown below.

LOCAL> get http.server.enabled
1
LOCAL> get mdns.enabled
1
LOCAL> get mdns.name
mydevice-#

STEP 6. You should also now be able to access the web app via the device's mDNS name. Use the Gecko OS command get wlan.mac to get the MAC address for your device.

Note! Not all clients support Network Discovery. In this case, connect to the web app using the IP address of the device. The IP address of the device can be read with the wlan.network.ip variable.
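The settings.ini format is line-oriented: blank lines and lines starting with '#' are ignored, and every remaining line is a variable name followed by its value. A minimal sketch of a parser for this shape is shown below; parse_setting_line() is a made-up helper for illustration, not part of the Gecko OS API.

```c
#include <ctype.h>
#include <string.h>

/* Split one settings.ini-style line into a name and a value.
 * Returns 1 if the line holds a setting, 0 for blank lines and comments.
 * Trailing whitespace in the value is kept as-is for simplicity. */
static int parse_setting_line(const char *line, char *name, size_t name_len,
                              char *value, size_t value_len)
{
    while (isspace((unsigned char)*line))
        ++line;
    if (*line == '\0' || *line == '#')
        return 0;                        /* blank line or comment */

    size_t n = strcspn(line, " \t");     /* name runs to first whitespace */
    if (n >= name_len)
        n = name_len - 1;
    memcpy(name, line, n);
    name[n] = '\0';

    line += strcspn(line, " \t");        /* skip over the name ...        */
    while (*line == ' ' || *line == '\t')
        ++line;                          /* ... and the separating spaces */

    size_t v = strcspn(line, "\r\n");    /* value is the rest of the line */
    if (v >= value_len)
        v = value_len - 1;
    memcpy(value, line, v);
    value[v] = '\0';
    return 1;
}
```

Feeding it the lines from the file above yields ("http.server.enabled", "1"), ("mdns.enabled", "1") and ("mdns.name", "mydevice-#"), while the comment lines are skipped.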
Adding Files to the Filesystem

Gecko OS provides a reliable read/write filesystem for use by the application. See Gecko OS File System.

Add files to the filesystem using several different methods, including:

- Use the IDE and a component makefile during development
- Manually add files to a bundle on the DMS. See the section Deploying your Application to the DMS for more information about working with bundles on the DMS.
- Manually add files to a specific device via the web app
- An application can write files directly to the filesystem using the filesystem API
- An application can download files from a web server using the http_download command

In this tutorial, you will focus on the first method: using the IDE and a component makefile. The steps below create and include the file about.html and illustrate how to view the contents of this file in both the command console and the Web app.

STEP 1. Add a new file in the root directory of the project named about.html with the following contents. This is a simple HTML file that describes the application.

<>

STEP 2. Add a new file to the root directory of the project named manifest.cfg with the following contents. The manifest file lists all the files to be added to the filesystem. For another example of a manifest file, view the manifest.cfg file of the webapp component.

###############################################
#
# Resource Manifest
#
###############################################

[about.html]
name:    about.html
type:    GENERAL
path:    about.html
version: 1.0.0

STEP 3. Edit the makefile weather.mk to specify the manifest.cfg file.

Note! Because GSS does not (yet) include a user interface control to add the manifest file, you must manually edit the makefile.

# Path to resource manifest (path is relative to project directory)
$(NAME)_RESOURCE_MANIFEST_PATH := manifest.cfg

STEP 4. Double-click on the Download Application and Resources build target to rebuild and download the application and resources to the device.

STEP 5.
After the application is loaded on the device, verify the about.html file is available on the filesystem using the ls command as shown below.

LOCAL> ls
!  #  Size    Version  Filename
#  0  453     1.0.0    about.html
#  1  297     3.1.3    favicon.ico.gz
#  2  294560  4.0.18   sys/kernel.bin
#  3  297040  2.1.0    sys/wfm_wf200.sec
#  4  178664  0.0.1    weather.app
#  5  23823   3.1.3    webapp/gecko-os.css.gz
#  6  72758   3.1.3    webapp/gecko-os.js.gz
#  7  1819    3.1.3    webapp/index.html
#  8  9530    3.1.3    webapp/unauthorized.html
LOCAL>

STEP 6. Read the file contents in the command prompt as shown below.

LOCAL> file_open about.html
[2019-06-04 | 20:28:31: Opened: 1]
1
LOCAL> read 1 500
<>
[2019-06-04 | 20:28:47: Closed: 1]

STEP 7. View the about.html file in the device web app.

Note! Not all clients support Network Discovery. In this case, connect to the web app using the IP address of the device. You can discover the IP address of the device by reading the wlan.network.ip Gecko OS variable.

Deploying your Application to the DMS

At any stage during the development process, your application can be deployed to the DMS as a development product. Once deployed, the application can be accessed by other devices and by other DMS users that have been given permission to access the product.

Note that your DMS account is limited to a default of 50 devices that can be activated to a Development Product. Development Products are intended for development and small pilot runs only, and do not have the security and role-based access restrictions necessary for large-scale production.

STEP 1. As discussed in Setting up your device for development, a development product for your device is automatically created on the DMS when you set up the device. Log into dms.zentri.com and view your development products.

STEP 2. Click on the <User Code>-WEATHER project and select the Bundles tab.
If you have not yet deployed the product to the DMS, you will find that no bundles exist for the project.

STEP 3. In the GSS IDE, double-click on the Release to DMS build target.

STEP 4. On the DMS, you will now see a new bundle added to your development product. Every time you release your project to the DMS, a new bundle is created and the bundle version is incremented.

STEP 5. Select the new bundle to view its details and contents.

The bundle is released in a Preview state, providing a way to add or remove files in the bundle. Notice the bundle includes the file weather.app. This is the application binary that was compiled and tested using the SDK in the previous steps. Also notice the bundle includes the files that are part of the web app.

A bundle can be in several states, including Published. Once the bundle is set to Published, the bundle can no longer be edited. The Gecko OS Application SDK automatically releases bundles to the DMS in a Preview state to enable developers to make final edits to the bundle before publishing.

STEP 6. If you do not have any changes for the bundle, set the bundle state to Published. Once set to Published, a new drop-down appears allowing you to select a Tag. The Alpha and Beta tags are used during product testing, but for release software the tag is set to Release. This makes the bundle the default bundle delivered when a device firmware update (DFU) is initiated.

For additional details on the operation of bundles, see the DMS tutorial. The notes below summarize the key aspects.

- Remember that a Published bundle can no longer be changed. If you made a mistake, try copying the bundle and editing the new bundle.
- The tag identifies which devices can gain access to the bundle. All devices are tagged as Release by default, including all standard evaluation boards. It is possible to change the tag of any device to Alpha, Beta or Release.
The device will then be able to access any bundle with a matching tag.

- A Preview bundle can still be used to update the device firmware, but it is better to use a Published bundle, since the contents of a Published bundle are immutable (they cannot be changed). This avoids confusion over which files are loaded onto a device if the bundle contents are changed multiple times.

Performing Over the Air Firmware Updates via the DMS

After a bundle is released to the DMS for your product, an over-the-air firmware update (OTA DFU) can be performed to update your device to the bundle.

First, ensure that your Wi-Fi settings are configured correctly to connect to a Wi-Fi access point, and that the access point has Internet access.

In the device console, issue the (optional) Gecko OS command dfu_query to check if a new update is available. Then, issue dfu_update to perform the firmware update. The dfu_update command automatically performs a query to determine if an update is available, and only performs the update if required. The console output below shows the entire update process.

LOCAL> dfu_query
Request POST /check
Connecting (https): dfu.zentri.com:443
Starting TLS
1,03MWMC-WEATHER-0.0.1, Gecko_OS-STANDARD-4.0.18, WGM160P
LOCAL> dfu_update
UUID:EADE2FF30138EF705FC7265FD0CF5EFFFE9101BE
DFU restarting ...
Rebooting
Starting network ...
Request POST /request
Connecting (https): dfu.zentri.com:443
Starting TLS
Bundle version: 03MWMC-WEATHER-0.0.1, Gecko_OS-STANDARD-4.0.18, WGM160P
Performing 'single-pass' update
Caching [size:   284] sys/device_credentials.bin
Caching [size:178016] weather.app
Copying sys/device_credentials.bin (0x02000085) to internal flash
Copying weather.app (0x020000A9) to internal flash
Updating device credentials
Starting network ...
Request POST /result
Connecting (https): dfu.zentri.com:443
Starting TLS
Exiting DFU mode, status: 1
[2019-05-04 | 05:50:30: Ready]
[2019-05-04 | 05:50:30: Associating to SiliconLabsGuest]
DFU completed successfully

To successfully update firmware, the device must be activated to the product. To activate a device to a development product, the device must be claimed by a user that has access to the product. The Device Setup process described earlier performs both of those steps. However, be careful when working with multiple projects to avoid unwittingly activating your device to a different product.

Sharing Your Development Product with Other Users or Other Devices

To share the product with other users, or to update other devices without using the Device Setup process, use the dms_activate command to activate the device to a product as shown below.

LOCAL> dms_activate 03MWMC-WEATHER
Request POST /activate
Connecting (https): dfu.zentri.com:443
Starting TLS
Device activated

The device must be claimed by a DMS user that has access to your development product. To give another user access to your product, click on the <User Code>-WEATHER product in the DMS, then select Users.

If the device is not already claimed, the user can claim their device using the dms_claim command. The dms_claim command requires an account token from the DMS. To obtain a token, log in to the DMS and, in the upper right-hand corner, select Profile followed by API Tokens.
Copy one of the valid user tokens and paste it into the dms_claim command as shown below:

LOCAL> dms_claim ****************
Request POST /claim
Connecting (https): dfu.zentri.com:443
Starting TLS
Device claimed

Once the device is claimed and activated, the dfu_update command can be used to update the device as described in the section Performing Over the Air Firmware Updates via the DMS.

Taking the Product to Production on the DMS

This tutorial has described the process of working with Development Products. A development product is unique to a developer and is only accessible by that developer, unless another developer has been given explicit access to the product. This enables product development and testing to be performed in isolation from devices deployed in the field: there is no chance that a device in the field can accidentally become activated and updated to a development version of the product.

Once the development process is complete, a development product may be transitioned to production, whereby the product can be deployed to a large fleet of devices in the field. Production Products are subject to a much higher level of security, and the DMS provides a more sophisticated role-based level of access to assist with fleet management.

The details of transitioning a product from development to production on the DMS are not covered further in this tutorial. Details can be found in the DMS tutorial.
So I've gotten reports that on some monitors my game, despite being in fullscreen, plays surrounded by black bars or leaks off the screen rather than filling it. Is there a way to force the engine to fill the screen regardless of aspect ratio?

force full screen

This is probably what you're looking for: It will open a new window which will be fullscreen, and adjust to your desktop's resolution.

If I change the resolution in my game, will that affect this?

Yeah, that does not work.

Try python:

from bge import render
render.setFullScreen(True)

see my game launcher (sig) for more options.

For whatever reason (and I have tried this before), using code to set fullscreen or windowed does not work at all.

Make sure you uncheck everything related to it in the render tab (no fullscreen, no desktop, no vsync, etc. selected). Then it should work. I have been setting resolution and fullscreen through python for quite some time now.

I have done that before. Is there anything I need to do in regards to logic bricks to get that working?

No. Well, yes: hook the script up to a Python controller, but other than that, no. You could put the controller at the top of the execution list by hitting a small icon on it (I'm not at home so I can't tell what it looks like), but there is a small button on the And/Python controllers that makes that controller execute before all others. Maybe that helps. Other than that, I would say try my game launcher in my sig and see if you can set those settings; if that works, then you are doing something wrong, I guess. At this point I have no other clues.

What you're looking for is at "Option Editor -> Render Tab -> Display Panel -> Framing -> Extend". It changes your FOV based on the aspect ratio. However, you may have problems with the aspect ratio itself on full-screen. Alternatively you can change the camera FOV manually, but in the end it is the same, so I don't think it's worth it.
The aspect ratio depends on the window resolution; on full-screen that should be the same as the desktop resolution. To indicate that you want to use the desktop resolution instead of a custom one, you must check "Render Tab -> Standalone Player Panel -> Desktop".

You may have further problems in some cases, for instance if there is a BGE/UPBGE bug that chooses the wrong resolution, or if you execute the game in compatibility mode with the option "run in 640x480 resolution mode". These are harder to fix, so it's best to give the player an option to set their own resolution inside the game.

When to use a launcher? Sometimes games crash on full-screen, or on a certain resolution. This means that even if you give the player the option to choose from inside the game, if he can't even start the game (because it crashes) he won't be able to play at all. Here is where a launcher comes in useful. With a launcher you can configure these settings BEFORE launching the game, and therefore fix the problem. Be wary, though, because most "launchers" you'll find in this forum are actually made with BGE and are therefore useless for this purpose.

In your case I don't think you need a launcher. If in desperate need, you can always give the user the proper command to launch the game with the needed options.

P.S: Make sure you have not messed with the camera settings. For example, make sure your camera's "Custom Viewport" panel is disabled.
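The trade-off discussed above can be made concrete. With the default letterbox behaviour, the engine scales the render to fit inside the screen and pads the rest with black bars; with Extend, it widens the field of view instead. The arithmetic behind the black bars can be sketched in plain Python (no bge module required; fit_viewport is a made-up helper for illustration, not a BGE API):

```python
def fit_viewport(screen_w, screen_h, aspect_w, aspect_h):
    """Return (x, y, w, h) of the largest centred viewport with the
    requested aspect ratio that fits a screen_w x screen_h display.
    Anything outside this rectangle shows up as black bars."""
    target = aspect_w / aspect_h
    screen = screen_w / screen_h
    if screen > target:
        # Screen is wider than the game: pillarbox (bars left/right).
        h = screen_h
        w = round(h * target)
    else:
        # Screen is taller than the game: letterbox (bars top/bottom).
        w = screen_w
        h = round(w / target)
    return ((screen_w - w) // 2, (screen_h - h) // 2, w, h)

# A 4:3 game on a 1920x1080 (16:9) desktop gets bars on the sides:
print(fit_viewport(1920, 1080, 4, 3))   # -> (240, 0, 1440, 1080)
```

When the screen and game aspect ratios match, the viewport covers the whole screen and no bars appear, which is exactly why the reports only come from monitors with unusual aspect ratios.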
NAME
FBB::A2x - Objects performing ascii-to-x (anything) conversions

SYNOPSIS
#include <bobcat/a2x>
Linking option: -lbobcat

DESCRIPTION
FBB::A2x objects offer the C++ equivalent of the standard C++ string conversion functions like stol, stoul, stod, etc. These standard C++ string functions are extremely useful and should probably be preferred over using the members of A2x objects, but A2x offers additional benefits in that it generalizes these functions to any type that can be extracted from istream objects.

NAMESPACE
FBB
All constructors, members, and operators mentioned in this man-page are defined in the namespace FBB.

INHERITS FROM
std::istringstream

CONSTRUCTORS
- o - A2x():
This constructor constructs an empty A2x object. No information can be converted from a thus constructed A2x object.
- o - A2x(char const *text):
This constructor stores text. If text represents a textual value of some type, the A2x object may be used to initialize or assign this value to a variable of that particular type. Extraction, however, is also still possible.
- o - A2x(std::string const &str):
This constructor stores the text contained in str. If this text represents a textual value of some type, the A2x object may be used to initialize or assign this value to a variable of that particular type. Extraction is also still possible.

The copy and move constructors are available.

STATIC MEMBER FUNCTION
- o - bool lastFail():
This member returns true if the last conversion failed (i.e., the object's fail() member returned true), and returns false otherwise. This member allows checks on the success of the extraction/conversion using anonymous A2x objects. The member also returns true when no conversions have as yet been performed. Note that this member returns the value of a thread_local static member: different threads cannot inspect other threads' lastFail status.

MEMBER FUNCTIONS
All members of the istringstream class are available.
- o - Type to():
This member returns any type Type supporting extractions from i[string]streams. If the extraction fails, the A2x object's good() member returns false, and Type's default value is returned. This member was implemented as a template member function. There is also a type conversion operator available (see below), but the member function variant may be preferred over the conversion operator in situations where explicit disambiguation is required (e.g., in cases where a conversion has no obvious type solution, such as direct insertions). An example is provided in the EXAMPLE section below.

OVERLOADED OPERATORS
- o - operator Type():
Conversion to any type Type supporting extractions from istreams. If the extraction fails, the A2x object's good() member will return false, and Type's default value is returned. This operator was implemented as a member template.
- o - istream &operator>>(istream &, Type &):
Extraction to any type Type supporting extractions from istreams. If the extraction fails, the A2x object's good() member returns false, and Type's default value is returned. (This facility is implied by the fact that this class inherits from istringstream, but it is probably useful to stress that the extraction operation is still available.)
- o - A2x &operator=(char const *):
Stores new text in the A2x object and resets the status flags to ios::good. If a 0-pointer is passed, an empty string is stored.
- o - A2x &operator=(std::string const &):
Stores the text contained in the std::string argument in the A2x object and resets the status flags to ios::good.

The overloaded assignment operator is available.

EXAMPLE

int x = A2x("12");

A2x a2x("12.50");
double d;
d = a2x;

a2x = "err";
d = a2x;            // d now 0

a2x = " a";
char c = a2x;       // c now 'a'

// explicit conversion to `double'
cout << A2x("12.4").to<double>() << endl;

FILES
bobcat/a2x - defines the class interface

SEE ALSO

BUGS
None Reported.
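The core of A2x, a template conversion on top of std::istringstream, is easy to sketch with the standard library alone. The MiniA2x class below is an illustrative reimplementation of the pattern the manpage describes (a to() member template plus an implicit conversion operator), not Bobcat's actual code:

```cpp
#include <sstream>
#include <string>

// Minimal A2x-style converter: stores text, extracts any streamable type.
class MiniA2x : public std::istringstream
{
    public:
        explicit MiniA2x(std::string const &text)
        :
            std::istringstream(text)
        {}

        template <typename Type>
        Type to()
        {
            Type value{};       // default value, returned if extraction fails
            *this >> value;     // on error the stream's fail bit is set
            return value;
        }

        template <typename Type>
        operator Type()         // implicit conversion, like A2x's operator Type()
        {
            return to<Type>();
        }
};
```

Usage mirrors the EXAMPLE section: `int x = MiniA2x("12");` deduces Type as int via the conversion operator, while `MiniA2x("12.4").to<double>()` disambiguates explicitly; after a failed conversion, fail() returns true and the default value is produced.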