Object Integrity & Security: Duplicating Objects

There is no actual object created here. Note that there is no instantiation; you only create a reference. To experienced developers this may seem obvious; however, when teaching objects to students for the first time, this is a major area of confusion. No actual spot object has been created. However, you can still use the spot reference; you just need to understand what you can use it for. For example, consider the following operation. You can assign the spot reference to the fido reference:

spot = fido;

Here is the important concept: although you can assign these references to the same object, you are not copying anything. In this case, there is still only a single object despite the fact that you have two references, even though this might not be obvious. Take a look at Listing 02.

// Class Dog
class Dog {

    String name;
    String breed;

    public Dog(String n, String b) {
        name = n;
        breed = b;
    }

    public String getName() {
        return name;
    }

    public String getBreed() {
        return breed;
    }
}

// Class Duplicate
public class Duplicate {

    public static void main(String[] args) {
        Dog fido = new Dog("fido", "retriever");
        Dog spot;
        spot = fido;
        System.out.println("name = " + fido.getName());
        System.out.println("name = " + spot.getName());
    }
}

Listing 02: The Duplicate Class with Two References

Code has been added to the application to illustrate the concepts just explained. In Listing 02, you create the spot reference and then include print statements to verify that both the fido and the spot references point to the same object. When you run the updated application, you can see in Figure 02 that both the fido and the spot references print out the same name, in this case that of the original fido object.

Figure 02: Simple Object Application With Two References

You can see the graphical representation in Diagram 02. Both of the references actually point to the fido object, despite the fact that one of the references is called spot.
Diagram 02: An Object with Two References

One of the issues here is that, if you change the name through one of the references, you change it for both. Add some code to change the name of spot. Take a look at the additional lines of code below.

// Class Dog
...
public void setName(String n) {
    name = n;
}
...

// Class Duplicate
...
spot.setName("rover");
...

When you run this code, you can see in Figure 03 that both of the object references are accessing the same object, whose name was changed to rover.

C:\column34>"C:\Program Files\Java\jdk1.5.0_07\bin\java" -classpath . Duplicate
name = rover
name = rover

C:\column34>
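If the goal is an actual duplicate rather than a second reference, a new Dog must be instantiated. A minimal sketch of one common approach, assuming a copy constructor — note that Dog(Dog other) and the class DuplicateCopy are illustrative additions, not part of the article's listing:

```java
// Sketch: a copy constructor gives each reference its own Dog object.
// Dog(Dog other) is a hypothetical addition, not part of Listing 02.
class Dog {
    String name;
    String breed;

    public Dog(String n, String b) {
        name = n;
        breed = b;
    }

    // Copy constructor: allocates a second object with the same field values.
    public Dog(Dog other) {
        this(other.name, other.breed);
    }

    public String getName() { return name; }
    public void setName(String n) { name = n; }
}

public class DuplicateCopy {
    public static void main(String[] args) {
        Dog fido = new Dog("fido", "retriever");
        Dog spot = new Dog(fido);   // a second object, not a second reference
        spot.setName("rover");      // only spot is renamed
        System.out.println("fido = " + fido.getName()); // prints: fido = fido
        System.out.println("spot = " + spot.getName()); // prints: spot = rover
    }
}
```

Because spot now refers to its own object, renaming it no longer affects fido — the behavior that the two-reference version of the Duplicate class cannot provide.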
https://www.developer.com/design/article.php/10925_3675326_2/Object-Integrity-amp-Security-Duplicating-Objectsop.htm
Sam’s post reminded me of an unpleasant bug that is related to this. If you run the following program, you will see that the CLR marshals the struct to the correct unmanaged layout. But examine the bits that are dumped for the struct in its managed form: the first character has been packed with the second character, and the layout is not sequential. If you only care about the layout after marshaling has occurred, this won’t affect you. But if you care about direct access to managed data via C#’s ‘unsafe’ feature or Managed C++’s ability to perform unsafe casting, this is a real problem.

You can avoid this problem by using ExplicitLayout rather than SequentialLayout. However, this is unattractive for a number of reasons. First, C# defaults to Sequential. Second, it’s much clumsier to specify. Third, your code must have elevated security. Fourth, it is difficult to describe structures in a manner that works correctly and automatically with both 32- and 64-bit systems. (SequentialLayout of IntPtr is the best way to achieve this.)

Another option is to manually pack chars c1 & c2 together in the managed layout. Most developers do this even without thinking, because it’s the efficient thing to do. This is probably the reason that the bug wasn’t noticed earlier. Until this bug is addressed, I would recommend that you manually pack small scalars in your Sequential classes and structs, so that the CLR’s layout engine doesn’t cause problems here.

using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential)]
struct B
{
    public char c1;
    public int i1;
    public char c2;
}

class c
{
    public static void Main(string[] args)
    {
        // The unmanaged (marshaled) offsets come out sequential, as requested;
        // it is the managed layout that repacks c1 next to c2.
        IntPtr ofs1 = Marshal.OffsetOf(typeof(B), "c1");
        Console.WriteLine("c1 unmanaged offset: " + ofs1);
        Console.WriteLine("i1 unmanaged offset: " + Marshal.OffsetOf(typeof(B), "i1"));
        Console.WriteLine("c2 unmanaged offset: " + Marshal.OffsetOf(typeof(B), "c2"));
    }
}

“We don’t expose the managed size of objects because we want to reserve the ability to change the way we lay these things out.” That is exactly why it would be great to have a "SizeOf" member for objects.
Not for the purpose of determining that the object is in memory at a given location and continues on for (x) number of bytes, but that the object consumes (x) amount of bytes. My goal is not moving, copying, or accessing an object, but just memory-consumption calculations. This allows a program to dynamically determine how much memory individual objects consume. Recently, I needed this functionality when I built my DAL layer. I wanted to cache (x) amount of Rows, Tables, and Datasets locally, based on their consumption against the available virtual memory or private configuration limits. Since it was impossible to know dynamically how much space each object consumed, I had to guess and hard-code that guess into the program. A dynamic SizeOf would have given the information required and made this calculation dynamic as the client is running. I guess the point is, it would be useful to have SizeOf functionality for the purposes of memory consumption, along with a possible parameter to specify whether it should also include memory consumed by ref values.

Rocky, what we’ve found with caching is that using a retained size is a poor heuristic. A better heuristic is to select a desired hit rate and then tune for that. Without considering hit rate, we find that some caches are too small to do any real good and others (all too often) cache the world. A really large cache might have a hit rate that is only incrementally better than a much smaller one. And when you factor in the secondary impact on the process from having all that extra memory tied up (like more expensive Generation 2 collections, possibly more write-barrier activity leading to Generation 0 costs, paging on the client, CPU cache pressure, etc.), the larger cache is often much worse than a smaller one.
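The hit-rate suggestion above can be made concrete: instead of sizing a cache by an estimated byte budget, instrument it and adjust capacity until the measured hit rate reaches a target. The idea is language-neutral; the sketch below uses Java's LinkedHashMap (the class name, capacity, and access pattern are illustrative, not from the original discussion):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: an LRU cache that reports its own hit rate, so capacity can be
// tuned against a target hit rate instead of a guessed byte budget.
class HitRateCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;
    private long hits, lookups;

    HitRateCache(int capacity) {
        super(16, 0.75f, true);          // access-order = LRU eviction
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;        // evict once over capacity
    }

    @Override
    public V get(Object key) {
        lookups++;
        V v = super.get(key);
        if (v != null) hits++;
        return v;
    }

    double hitRate() {
        return lookups == 0 ? 0.0 : (double) hits / lookups;
    }
}

public class CacheTuning {
    public static void main(String[] args) {
        HitRateCache<Integer, String> cache = new HitRateCache<>(8);
        for (int i = 0; i < 100; i++) {
            int key = i % 8;             // 8 distinct keys fit capacity 8:
            if (cache.get(key) == null) {
                cache.put(key, "row-" + key);
            }                            // only 8 cold misses, 92 hits
        }
        // If the measured rate falls below the target, grow the capacity;
        // if it is far above, a smaller cache may serve nearly as well.
        System.out.println("hit rate = " + cache.hitRate()); // prints: hit rate = 0.92
    }
}
```

Tuning on this measured number directly addresses the "cache the world" failure mode: a capacity increase that barely moves the hit rate is a capacity increase not worth its GC and paging cost.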
Dear All, I guess I am trying to dig a well in a desert to find water… just one more query. I tried the following:

<System.Runtime.InteropServices.DllImport("kernel32.dll")> _
Public Shared Function GlobalSize(ByVal hMem As Long) As Long
End Function

Dim FrameString As String = "Hello World"
Dim oHandle As GCHandle = GCHandle.Alloc(FrameString)
Dim oPtr As IntPtr = New IntPtr(4)
oPtr = GCHandle.op_Explicit(oHandle)
Dim oLong As Long = CType(oPtr.ToInt32, Long)
Dim oSize As Long = GlobalSize(oLong)

But every time I got the same value in oSize: 5636130963718144. Is this method wrong too… Please help… TALIA. Many Regards, Sunil

I’m in exactly the same situation as Rocky above, albeit a year later 🙂 That is, I’d like to determine the size of objects in order to write caching functionality. Sure, you can constrain by number of items, but it would be much better to be able to constrain by total size of cache. I fully intend to use number of hits to determine what cached objects to ‘drop off’ once the cache has reached its maximum storage size. Any chance of the CLR team reconsidering?

I think we’ve got past most of our philosophical objections to providing this information, since the ‘sizeof’ IL opcode provides the same information to early-bound callers. But it could still be a long time before you see this feature, because Whidbey is largely baked and Orcas is already looking tight. (In other words, it’s the usual situation for software.) If you are caching leaf objects, an instance sizeof API would be useful to you. But if you cache instance graphs, you would have to perform some sort of graph walk to determine the true cost of a cache entry, paying attention to instance identity to avoid over-counting or even infinite recursion.

A brief analysis of how the CLR lays out type fields in memory at run time [1]
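The graph-walk caveat in the last reply is the crux of any "deep size" calculation. The CLR discussion is about .NET, but the shape of the walk is language-neutral; the Java sketch below shows only the identity-tracking part, with a made-up fixed per-node cost standing in for the per-instance size API that does not exist:

```java
import java.util.ArrayList;
import java.util.IdentityHashMap;
import java.util.List;

// Sketch: summing a per-node cost over an object graph while tracking
// visited nodes BY IDENTITY, so shared nodes are counted once and cycles
// terminate instead of recursing forever. NODE_COST is a placeholder.
class Node {
    static final long NODE_COST = 16;   // made-up cost per node
    final List<Node> refs = new ArrayList<>();
}

public class DeepSize {
    static long deepCost(Node root, IdentityHashMap<Node, Boolean> seen) {
        // seen.put returns the previous mapping: non-null means already visited
        if (root == null || seen.put(root, Boolean.TRUE) != null) return 0;
        long total = Node.NODE_COST;
        for (Node child : root.refs) {
            total += deepCost(child, seen);
        }
        return total;
    }

    public static void main(String[] args) {
        Node a = new Node(), b = new Node(), c = new Node();
        a.refs.add(b);
        a.refs.add(c);
        b.refs.add(c);   // c is shared: counted once, not twice
        c.refs.add(a);   // cycle back to a: terminates, no infinite recursion
        System.out.println(deepCost(a, new IdentityHashMap<>())); // prints: 48
    }
}
```

An identity map (rather than an equality-based set) matters here: two distinct but equal-valued objects each consume memory and must both be counted, while one object reachable twice must not be.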
https://blogs.msdn.microsoft.com/cbrumme/2003/04/15/size-of-a-managed-object/
Guide: Google Maps V2 for Android: Creating your Google Map Application

If you already got your Google Maps Android API V2 key, then you are ready to create your map application. If you don’t, head to my Guide: Google Maps V2 for Android: Getting the API key post to read how to get it. So let's start:

1. Open Eclipse and create a new Android project. The first thing that we will handle is the import of the Google Maps classes. To get the Google Maps files we need to download the latest version of Google Play Services via the Android SDK Manager.

2. After you have downloaded the Google Play Services, restart Eclipse, and in the Package Explorer Right-Click –> Import…. In the opened window choose “Existing Android Code into Workspace” and click “Next”. Click the “Browse…” button and head to the location of your SDK folder. In it, find the following folder: \extras\google\google_play_services\libproject\google-play-services_lib and press “OK”, check the V next to it in the window, and press the “Finish” button.

3. Now that you have added Google Play Services to your workspace, we have to create a reference from our project to this library. Right-click your project and choose “Properties”, go to the Android section, and in the lower part press the “Add…” button and add a reference to that library. Your result should be as in the screenshot below.

Note: If you try to reference the google-play-services library and you receive a red X next to this reference, what you should do is move the .jar file to a location where its path will be shorter and then reference it again.

4. Another import we have to make, in order to make our application work on operating systems prior to API 11, is the support library. This can be done very easily using Eclipse: right-click your project, choose “Android Tools”, and then choose “Add Support Library…”. When you finish these imports you should have the following libraries (Red) in the Android Dependencies folder (Green) of your project:

5.
We are now ready for some coding. First of all, open the Android Manifest file and add the following permissions:

<permission android:
<uses-permission android:

Important: Replace your application package instead of the current “your.application.package” string. As mentioned in the comments by @Keilaron, it looks like since the Google Play Services 3.1.59 update the MAPS_RECEIVE permissions are completely unnecessary, and as a result they can be removed.

6. Next, Google Maps uses OpenGL, so we have to add OpenGL support to our application by adding this to the Manifest file:

<uses-feature android:

7. Finally, add your key to your application right before you close your “application” node in the Manifest file:

<meta-data android:

8. Now create an Activity that extends from “FragmentActivity”:

import android.os.Bundle;
import android.support.v4.app.FragmentActivity;

public class MapActivity extends FragmentActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.map_layout);
    }
}

9. Finally, for map_layout, the XML layout file that was set as the content view of the map activity, write the following:

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:
    <fragment android:
</LinearLayout>

10. UPDATE: The last update of the Google Play Services library, revision 13, introduced a new meta-data tag that should be added as well to your Android Manifest file. So go ahead and add this right next to your API key meta-data code:

<meta-data android:

And that’s it: run the application and you should see a full-screen map. Remember that if you want to run the application in the emulator, you should install Google Play Services first. Enjoy and stay tuned.
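The manifest entries in steps 5, 6, 7 and 10 are truncated in this copy of the guide. As a hedged reference sketch only, entries of that generation of the API generally followed the shape below; "your.application.package" and "YOUR_API_KEY_HERE" are placeholders you must replace, and (as the update in step 5 notes) the MAPS_RECEIVE permission pair became unnecessary with Google Play Services 3.1.59:

```xml
<!-- Sketch of era-typical manifest entries; all values are placeholders
     and should be checked against your own working project. -->
<permission
    android:name="your.application.package.permission.MAPS_RECEIVE"
    android:protectionLevel="signature" />
<uses-permission android:name="your.application.package.permission.MAPS_RECEIVE" />
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="com.google.android.providers.gsf.permission.READ_GSERVICES" />

<!-- Step 6: the Maps API requires OpenGL ES 2. -->
<uses-feature
    android:glEsVersion="0x00020000"
    android:required="true" />

<!-- Steps 7 and 10, inside the <application> element: your API key and
     the Play Services version meta-data (library revision 13+). -->
<meta-data
    android:name="com.google.android.maps.v2.API_KEY"
    android:value="YOUR_API_KEY_HERE" />
<meta-data
    android:name="com.google.android.gms.version"
    android:value="@integer/google_play_services_version" />
```

The step-9 layout was similarly a single map fragment inside a container; in the support-library setup described here, the fragment's class attribute would point at com.google.android.gms.maps.SupportMapFragment.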
FILED UNDER: Guide, Guide - Android Development, Guide - Google Maps for Android

Hey, it’s me, the guy from Stack Overflow who asked how to stop my Android Google Maps app from crashing, and you said that I should follow steps 2-4, which I had done before posting there. So I’m wondering, how do you suggest I stop it from crashing? I am using an Android phone to run the app, if you were wondering (which is needed, since Google Play doesn’t work on a normal emulator). (Posted on March 12th, 2013 at 9:55 pm)

@Basicorex actually there is a way to use the emulator with Google Maps; you can read about it here: Google Maps V2 in the Emulator. About your problem, it’s really hard to help someone without all the details. Please don’t describe the problem here, but rather post a link to the SO question. (Posted on March 13th, 2013 at 8:00 am)

Thanks for the help. But now the emulator says update your Google Play services! (Posted on March 13th, 2013 at 12:43 pm)

@Sameer Hussain read my next post to fix this: Google Map in Emulator (Posted on March 13th, 2013 at 7:42 pm)

Quick question: I have this working for a sample application to test a proof of concept (POC), and it worked great. Now I’m creating a new Android project (hopefully to be deployed to the public) and followed all the steps, but it’s not working; it fails somewhere in setContentView in the onCreate process of the MainActivity. I registered a key for the Android app on the Google APIs. Since both projects are using the same debug.keystore, I end up with something that looks like this on Google API Services: 38:C8:A9:92:8F:78:86:D3:0C:0E:29:01:CE:2B:D8:02:A5:66:54:3C;com.example.mapexample (this one works fine and was registered first for API access) and 38:C8:A9:92:8F:78:86:D3:0C:0E:29:01:CE:2B:D8:02:A5:66:54:3C;com.company.mapit (obviously the SHA1 is not my real number). Is it possible that using the same SHA1 for both projects is causing the problem?
I have checked and quadruple-checked that the code is consistent everywhere and the proper libraries are used (android-support-v4.jar, etc.). I can’t get far enough into the application to effectively debug it, because the code takes off outside my project. This is the run sequence: 1. White screen when the app is loading; the title of the app is showing; the map never shows. 2. “Sorry! The application mapit (process com.company.mapit) has stopped unexpectedly. Please try again.” A single button (Force Close). 3. Application ends. This happens in both Run and Debug mode. I am stuck. I have verified that everything is defined the same way as in the other project; it is, the libraries are mapped correctly, etc. The application is written for minimum API 10 (it needs to work on 2.3.6 phones) and maximum 17 (4.2.2). Thanks in advance. (Posted on March 14th, 2013 at 5:43 am)

@Dave my suggestion to you would be to go to your API Console and create a new project in the console, as described here: Google Map API V2 Key, just to make sure it is not a key/project problem. Next, embed the key in your project and run it. If you still have a problem, post a question on SO with your full stack trace, and post here a link; I will try to help you. I just have to say that I haven’t had the chance yet to publish a Google Maps application on the market, so I have no idea what should be changed regarding the key for the release version. (Posted on March 14th, 2013 at 7:44 am)

Hi Emil, thanks for this great tutorial!!! This project works well for me, but I want to use the GoogleMap object with this line of code: GoogleMap gMap = ((MapFragment)getFragmentManager().findFragmentById(R.id.map)).getMap(); but it didn’t work for me, for these two reasons: - I have a Galaxy S2 using Android 2.3.4, so in the manifest file I’m using minSdk=8, which gives “Call requires API level 11 (current min is 8): android.app.Activity#getFragmentManager”. - I am not sure if this is the right method to get the map. Do you have any comments or ideas?
and thanks again. (Posted on April 1st, 2013 at 6:34 pm)

To get the instance of the map in API levels below 11, use the SupportMapFragment, and this method: (Posted on April 1st, 2013 at 10:28 pm)

@Emil Adjiev thank you so much! It works! (Posted on April 2nd, 2013 at 2:09 pm)

You're welcome, @Seyf : ) (Posted on April 2nd, 2013 at 2:11 pm)

Thanks a lot. (Posted on April 8th, 2013 at 3:51 pm)

I’ve made these steps multiple times… and each time it stops at the same part… I’m trying to run Google Maps V2… (Posted on April 12th, 2013 at 2:29 pm) …but in the step of adding the google-play-service-lib as a reference for my projects: after adding and applying and pressing OK, the library doesn’t appear to me in either of the folders (Android 2.2) and (Android Dependencies). I don’t know where the problem is; could you please help me find it out?

@Nanees, first of all try to restart Eclipse and clean the project. (Posted on April 12th, 2013 at 4:04 pm) If this doesn’t help you, check in the properties of the project that you see a green V next to the google-play-services library reference; if not, you can try to move this jar file to a location where its path will be shorter and try to reference it again.

After cleaning the project and restarting Eclipse, there is no green V mark beside the referenced lib, but there is a red cross. (Posted on April 12th, 2013 at 4:11 pm)

So as I said: try to move this jar file to a location where its path will be shorter and try to reference it again. (Posted on April 12th, 2013 at 4:14 pm) You can try to update the google-play-services using the SDK Manager and do the referencing again as well.
I removed the Google Play services from the SDK Manager and re-installed it again; then I did those steps and restarted my Eclipse, and the same red cross appeared to me. This time I changed the path and put the folder (google-play-service-lib) directly on the C drive. (Posted on April 12th, 2013 at 4:24 pm)

Strange, usually those steps fixed it for me. (Posted on April 12th, 2013 at 5:22 pm) Maybe there is something wrong with your project. What happens when you try to reference this jar file from another project?

@Emil Adjiev I tried referencing the jar files in all the projects I have and it gives me the same red cross. (Posted on April 14th, 2013 at 3:55 pm)

@Emil Adjiev Now I’ve solved the issue of referencing by clicking the check box to copy the project to my workspace when importing the library. (Posted on April 14th, 2013 at 7:11 pm) The maps page opened, but it only loads a white blank page with zoom in and out. In the logcat, “Unable to load Google Maps” was written.

@Nanees usually when you receive a blank screen with zoom controls, you have some problem with your key or with the configuration you made in the API Console. (Posted on April 14th, 2013 at 7:36 pm) Try to go over this post: Google Maps API key, and make sure that you have completed all the steps. If you are positive that you did, you can try to remove the debug.keystore folder, compile some project in Eclipse (that will result in regenerating a new debug.keystore folder and a new SHA1 key) and do the registration with the new key.

Hi Emil, for steps 8 & 9, are these new files I need to generate, or do I use my previously coded files? I’m not sure where to put this information. Thanks. (Posted on April 15th, 2013 at 5:06 am) frank

@frank, for step 8 you need to create a new Java class file in the src folder of your project and paste the code there.
For step 9 you need to create a new layout XML file and put it inside the res/layout folder. (Posted on April 15th, 2013 at 9:30 am)

I use Eclipse ADT; is there a problem? (Posted on April 19th, 2013 at 3:45 pm) I create a new class but have an error. Which one is Main? Because the newly created class shows PUBLIC CLASS FRAGMENTACTIVITY.

Just gotta say, thank you for the concise write-up. I was following the Google Developer Guide to get a basic map to show up, and was ready to go to bed defeated! Knowing about the Android Dependencies will hopefully save me a lot of time in the future. Cheers! (Linked from Stack Overflow) (Posted on April 22nd, 2013 at 8:06 am)

@Ed you're welcome; it’s nice to see that it helps other people. You are welcome to send me questions if you encounter problems. (Posted on April 22nd, 2013 at 10:27 am)

@BRShooter I didn’t understand what problem you encountered. (Posted on April 22nd, 2013 at 11:37 am)

Emil, I followed all the steps using the Google Play services r#4 libraries, but there is an error in the .xml file at this line: xmlns:map="" which is “unexpected namespace prefix ‘xmlns’ found for tag fragment”. (Posted on April 25th, 2013 at 6:00 am) When I delete this line it works, with zoom levels only, but no map appears. Could you tell me what I can do?

@Mayada as it looks, you are describing two problems that are not related to one another: (Posted on April 25th, 2013 at 10:23 am) 1. The problem with the namespace can derive from two reasons: a. you didn’t perform the first 3 steps correctly and didn’t reference the google-play-services library as needed; b. there is a bug with the “map” prefix if you put your map fragment with other components on one screen. 2. The second problem, where you see only the zoom controls and not the map, is related to some sort of API key problem.
Either you didn’t produce it as you should, or you didn’t register it in the API Console the right way; take a look at this blog post and make sure you complete all the steps correctly: Google Maps API V2 Key

Thanks Emil, it works. (Posted on April 25th, 2013 at 9:37 pm) Could you tell me how to do navigation?

@Mayada if you want to implement navigation between two points, take a look at this answer I gave here: Draw driving route between 2 LatLng on GoogleMap SupportMapFragment (Posted on April 26th, 2013 at 12:40 pm)

Hi Emil, (Posted on April 28th, 2013 at 2:24 pm) I tried the code here, but with the error “No such file or directory”. Could you tell me what I can do?

@Mayada please open a new question on SO with the full stack trace and post here a link, and I will try to help you. It’s impossible to know what the problem is from a “No such file or directory” error. (Posted on April 28th, 2013 at 4:14 pm)

Hi Emil, (Posted on April 29th, 2013 at 7:42 am) there is an error in class GetDirectionsAsyncTask, in the function public void onPreExecute(): DialogUtils is not defined. And in the onPostExecute function there is an error at activity.handleGetDirectionsResult(result), where I defined the function handleGetDirectionsResult(result) in MainActivity. What can I do?

@Mayada as I already said, open a new question on SO, and I will help you there. (Posted on April 29th, 2013 at 11:48 am)

I have referenced the library and all went well, but I could not run the emulator, since this new error came up: [2013-05-10 04:15:24 - APPNAME] Could not find APPNAME.apk! (Posted on May 10th, 2013 at 2:40 pm)

@Kevin Kaburu it looks like Eclipse has some problem creating your .apk file.
You could try restarting Eclipse; this usually solves the problem for me. (Posted on May 10th, 2013 at 9:24 pm)

@Emil this line in the xml file: xmlns:map="" gives the error: “Unexpected namespace prefix ‘xmlns’ found for fragment tag”. Why is that? (Posted on May 12th, 2013 at 9:37 pm)

@shal you can remove this line if you are not using any of the map prefix properties. (Posted on May 13th, 2013 at 10:51 am) For your question, this could be caused by 2 reasons: 1. There is a bug with Google Maps where this “map” namespace can’t be found if the map fragment is not the only thing you have in your layout. 2. Another cause of this could be that there is some problem with the way you reference the google-play-services_lib.

I did all the things, but the map is not displayed; it shows a white screen with a zoom controller and gives me an error like: 05-16 12:26:00.351: E/Google Maps Android API(8366): Failed to load map. Error contacting Google servers. This is probably an authentication issue (but could be due to network errors). (Posted on May 16th, 2013 at 10:13 am)

@Suraj usually this problem happens when you have some problem with the way you generated your API key or the way you registered it in the Google API Console; please check this guide on the Google Maps API V2 Key and make sure you are doing all the steps correctly. (Posted on May 16th, 2013 at 10:50 am) Also make sure that you have added this permission in your Manifest file:

Hi Emil, this was really helpful in order to get started with a simple app from scratch. Would you happen to know how one would go about converting a more complex app that uses v1? Is it possible to use the v2 API key with v1 code?
The app I’m referring to is: It looks to me like everything needs to be changed, but there are no direct correlations between v1 and v2 functions. (Posted on May 29th, 2013 at 12:39 am)

“The public type MapActivity must be defined in its own file”… this is the error message after step 8. (Posted on May 29th, 2013 at 11:26 am)

@cc10 Hi Ryan, sorry to be the one that gives you the bad news, but no, there is no quick way to go from Google Maps API V1 to Google Maps API V2, nor can you use the new API key with the old code (the keys are even produced differently). Most of the objects have changed and you will have to replace them. So MapView becomes MapFragment or SupportMapFragment; MapActivity will be a simple Activity, or a FragmentActivity for backward compatibility with Fragments; Overlays should be recreated as Polylines or Polygons, depending on what you want to create. (Posted on May 29th, 2013 at 11:38 am) So no, there is no quick and simple way, but most of the functionality is really straightforward.

@Emil Adjiev thanks for the quick reply; I will see if it makes sense to convert the app, but I have a feeling it could be much quicker to simply write the map-related code from scratch. (Posted on May 29th, 2013 at 3:47 pm)

@cc10 I have the feeling that you are right about it. Take some time to learn the new API and start everything from scratch. (Posted on May 29th, 2013 at 5:20 pm)

@ranjan as the error clearly states, “public type MapActivity must be defined in its own file”: you probably did not define this class in its own java file. (Posted on May 29th, 2013 at 5:22 pm)

I’ve followed Google's instructions over and over and I couldn’t get it working, but I came across this site and it worked straight away… moments earlier I was screaming in relief. THANK YOU! (Posted on June 4th, 2013 at 2:59 am)

Hello Emil, Google Maps is now on API v3 and I cannot make the maps work on my device!
Could you look at my SO question that I have just posted? I am getting the error that I am pretty sure corresponds to having a faulty API key, but I am using the androiddebugkey. Thanks for the help! (Posted on June 13th, 2013 at 11:47 pm)

@Benzabill the latest native Google API version for Android is V2; V3 is for web development. So I guess that this mix-up is your mistake. If you need any other help, please post a link to your SO question. (Posted on June 16th, 2013 at 2:56 pm)

Hello @Emil, I am trying to add markers to the map that I created using your tutorial. However, it seems that I have to use a GoogleMap object to do something like this. How do I do that? NOTE: I am trying to support API 8 and above, so I am using SupportMapFragment. Thanks. P.S. I tried a different tutorial that created a GoogleMap object but got an error on the .getMap() section, so I am wondering if it is possible to do things like that with your map. (Posted on June 17th, 2013 at 8:34 pm)

I meant to say a SupportMapFragment object rather than a GoogleMap object; getMap() returns a GoogleMap object. (Posted on June 17th, 2013 at 8:37 pm) Thanks again.

@Benzabill to get the instance of the map in API levels below 11, use the SupportMapFragment, and this method: Then you can add a marker to it: (Posted on June 18th, 2013 at 11:52 am)

Hi sir, I have tried your post several times in my Eclipse to run the map, but every time the same problem: GoogleMap.apk not found. I am running my Eclipse on Windows XP 2002. Please tell me how to run Google Maps on the Android emulator. Thank you, jeetu dagar, jeetu.dagar029@gmail.com (Posted on June 24th, 2013 at 12:03 pm)

@jeetu first of all, it’s impossible to run a Google Maps API V2 application on the emulator out of the box.
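The two code snippets referred to in the reply above did not survive in this copy of the page. As a rough, untested sketch of the pattern being described (Android-framework code of that API generation, called from inside a FragmentActivity; R.id.map is assumed to be the id of the map fragment in your layout, and the coordinates are arbitrary examples):

```java
// Untested Android sketch: obtain the GoogleMap from a SupportMapFragment
// (works below API 11) and add a marker to it.
import com.google.android.gms.maps.GoogleMap;
import com.google.android.gms.maps.SupportMapFragment;
import com.google.android.gms.maps.model.LatLng;
import com.google.android.gms.maps.model.MarkerOptions;

// Inside a FragmentActivity, e.g. in onCreate() after setContentView():
GoogleMap map = ((SupportMapFragment) getSupportFragmentManager()
        .findFragmentById(R.id.map)).getMap();

// In this era of the API, getMap() returned null when the map was not
// yet available (e.g. Google Play Services missing), so check first.
if (map != null) {
    map.addMarker(new MarkerOptions()
            .position(new LatLng(32.0853, 34.7818)) // example coordinates
            .title("Hello map"));
}
```

Note that this uses getSupportFragmentManager() from the support library rather than getFragmentManager(), which is exactly what resolves the "Call requires API level 11" error quoted earlier in the thread.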
You can go over this blog post link to get an idea of how to run Google Maps API V2 on an emulator; go over the comments as well, as there are new links for the new set of files you will need to install. (Posted on June 24th, 2013 at 12:39 pm)

Thank you for the post! Now I finally got the map to work in my application. Cheers, (Posted on July 9th, 2013 at 1:34 am) Bianca

@Bianca you're welcome, have a nice coding : ) (Posted on July 9th, 2013 at 2:08 pm)

Hi Emil, I have been able to get the map working. However, regarding step 4, I only have “google-play-services_lib.jar” under my Android Dependencies. I don’t have “google-play-services.jar” or “android-support-v4.jar”. Since the map still works, I was wondering if these were supposed to be necessary, or if you could tell me why they are not there and how I could get them to appear. Thanks! (Posted on July 16th, 2013 at 1:36 am)

@Peter if your map is showing and everything is working, then you have nothing to worry about. Google has made some changes recently in Eclipse to the way your added libraries are shown in the Explorer, so you will not necessarily see all those files. (Posted on July 16th, 2013 at 3:40 am)

Hello sir, (Posted on July 20th, 2013 at 8:28 pm) I have followed all your steps… but it is still not working. Initially it was prompting to update your Google Play Services with an Update button, but after some corrections it has stopped working completely. It’s crashing even after many attempts… sir, please help me; it is my college project and it means a lot to me.

@Pankhuri I can’t help you without clearly understanding what your problem is. Please post a question on SO with all the needed information and code files, and post here a link. Then I could try to help you. (Posted on July 21st, 2013 at 2:33 am)

Hi Emil, I tried your tutorial for maps v2. Honestly, I am really a beginner and I have a problem. Poof!
“Unfortunately, DANARGPS has stopped.” (my app name is “DANARGPS”). I really need your help, because I am building this for my thesis. Thank you very much. (Posted on July 22nd, 2013 at 12:31 pm)

@Danssky there is no way for me to help you if you don’t provide me with your code and other related details. (Posted on July 22nd, 2013 at 5:57 pm) Please post a question on SO and post here a link so I can try to help you.

Great tutorial! I have an app up and running that displays the map and markers for various locations; however, when I double-tap to zoom in or pinch to zoom in, the map goes into a loop of trying to locate the markers that it had previously located. Any thoughts on why the zooming would cause a problem on the map? Thanks. (Posted on July 22nd, 2013 at 8:59 pm)

@Adam, I don’t understand what you mean by “the map goes into a loop of trying to locate the markers that it had previously located”, and I can’t really help you without seeing your code. (Posted on July 23rd, 2013 at 4:57 pm) Please post a question on SO and post here a link so I can try to help you.

I am trying to get android-support-v4.jar and google-play-services.jar into Android Dependencies, but I am not able to. I tried Android Tools > Add Support Library; it always asks me to install support library rev 18. I did it, but I am getting the warning “Warning: Ignoring sample ‘downloaded’: does not contain source.properties.” So the Android Tools method also fails. Please help me: is it necessary to have both jars in Android Dependencies, and is there any alternate way? (Posted on July 26th, 2013 at 5:09 pm)

Thanks for the tutorial. It helped a lot. Now… how do I make the map zoom to my current location? (Posted on July 31st, 2013 at 3:13 am)

@Robert you're welcome. For your location you can use the built-in setMyLocationEnabled(boolean) method, or implement your own LocationListener that will be invoked on location changes of the device.
You can find a lot of examples on the web, and I guess I will make a guide soon as well.Posted on July 31st, 2013 at 7:09 pm | Quote I was wondering why this tutorial differs from the documentation in terms of the permissions. Turns out:Posted on August 1st, 2013 at 10:37 pm | Quote “MAPS_RECEIVE is now completely unnecessary. The latest update of Google Play Services 3.1.59 made it useless. As a result, it can be removed.” Source: Thanks @Keilaron. I wasn’t aware of that and this tutorial was written before this update when this permission was necessary so this is the reason it’s there. Thanks again for the information, I will update this post with the information you have provided.Posted on August 1st, 2013 at 10:48 pm | Quote hi, i’m getting map but its blankPosted on August 2nd, 2013 at 5:11 pm | Quote I have trouble with Android Google map,Posted on August 3rd, 2013 at 5:48 pm | Quote Please Solve My Problem. Thank You. @issaac without the relevant information I can’t help you. Please post a question on Stackoverflow and post here a link, and I will try to help you.Posted on August 4th, 2013 at 10:59 am | Quote @Ravi Shanker Yadav Please don’t pollute the comment section with a huge amount of code, post all this information on Stackoverflow and post here a link and I will try to help you.Posted on August 4th, 2013 at 11:03 am | Quote I have followed the instructions but get a blank map view which only shows the zoom buttons in the bottom right hand corner. I have used a MapFragment instead of the SupportMapFragment (all references in your code have been replaced with a reference to MapFragment in my code).Posted on August 16th, 2013 at 12:12 pm | Quote After 3 days of pulling hair …thank you! Been a bytch getting v2 to work on API 9 through API 17.Posted on August 16th, 2013 at 11:58 pm | Quote @Franz Klein I need to see your code in-order to help you. 
In general with those symptoms it’s there is some problem with your defined permissions or the way you configured your API Console. Please post a question on Stackoverflow and post here a link and I will try to help you.Posted on August 18th, 2013 at 11:22 am | Quote xmlns:map=”” showing errorPosted on August 19th, 2013 at 9:36 am | Quote @sanna you can remove this line.Posted on August 19th, 2013 at 4:00 pm | Quote In this example it will work without it. @Emil Adjiev I have discovered what the problem was. In the Google APIs Console I had switched on the Google Maps API v2 and not the Google Maps Android API v2. With Google Maps Android API v2 switched on it works.Posted on August 21st, 2013 at 4:28 pm | Quote MAAAAAAN you are the best finally worked . Thanks a lotPosted on September 20th, 2013 at 2:36 pm | Quote If I run the simple google map app in the device with version 2.3.6.Posted on September 21st, 2013 at 11:57 am | Quote i got an message like this: This app won’t run unless you update google play services I’m not running in the emulator. Please help me to resolve this @Thayar Hi, maybe google doesn’t provide the up to date version of Google play services for your device or the android version in it. you can try to remove google play services from your and run your Google Maps application again. Now you should receive the option to update google play services from the Play Store. Good luck.Posted on September 22nd, 2013 at 12:45 pm | Quote Thanks a lot for the detailed information. 
You made my day !!!Posted on October 14th, 2013 at 11:56 am | Quote Greetings friend, I have a problem with this code, or Eclipse in general… 11-04 15:31:29.354: E/AndroidRuntime(21753): java.lang.RuntimeException: Unable to start activity ComponentInfo{com.example.wcftestapp/com.example.wcftestapp.MainActivity}: android.view.InflateException: Binary XML file line #7: Error inflating class fragment I copied your code entirely and still i get this Error inflating class fragment Please help.Posted on November 4th, 2013 at 4:33 pm | Quote @Srdjan are you sure you are referencing the google play services library?Posted on November 4th, 2013 at 11:25 pm | Quote Emil Adjiev, help me , it’s not workingPosted on November 5th, 2013 at 6:20 pm | Quote it’s show a black screen T_T don’t know where is the problem i do all the steps cerfully Hi Emil ,Posted on November 6th, 2013 at 8:54 am | Quote I am getting error 11-06 12:14:12.351: E/AndroidRuntime(4797): Caused by: android.view.InflateException: Binary XML file line #1: Error inflating class fragment . i have followed by all steps . But still getting same error. could you help me . @Emma Sorry, but I can’t help you without any additional information. Please open a question on StackOverflow and paste here a link and I will try to help you.Posted on November 6th, 2013 at 8:32 pm | Quote @kishore Hi, this problem is usually happens when there is a problem in the way you referencing google-play-service or the android-support library. Also it may happen if you are trying to integrate a SupportMapFragment in a simple Activity (and not in a FragmentActivity).Posted on November 6th, 2013 at 8:44 pm | Quote thx for sharing!! helped me alot!! encountered some small problems after I followed this post, had to import the Google Play Services lib project to me workspace before it was referred right (was referring to the android sdk pad first I think). 
And after importing encountered next problem cus none of the amd emulators where working, only got the white bg, got this error in me logcat “Google Maps Android API(1332): Google Maps Android API v2 only supports devices with OpenGL ES 2.0 and above”. So had to use the genymotion emulators. But after 2 days I finally see some mapsPosted on November 10th, 2013 at 2:04 am | Quote v1 was so much easier to implement. Oh and almost forgot to mention atm u also have to asign the versioncode in ur project, using the meta valuePosted on November 10th, 2013 at 2:40 am | Quote @Jeelof Hey, I guess you right, API V1 was a little more easier, but here you got a lot more futures, and the maps are looking great, about the new meta-data tag, yes you are right, I need to add it to the guide, as this was added in the last google-play-services update. Thank you for your comments and have a great coding.Posted on November 12th, 2013 at 7:30 pm | Quote Hey , I am the same person from Stackoverflow. I just updated my version for Google play services and I have started getting run time errors again ,Can you please help ? I have tried reloading and referencing! Are the updates stored separately?Posted on November 13th, 2013 at 3:32 am | Quote @Kelvin I finally made it. I added network permission then i can start the google app.Posted on November 14th, 2013 at 8:48 am | Quote Very good thank you! Specially the last including part.Posted on November 14th, 2013 at 9:23 am | Quote million thanks. @Surbhi Hey, for the last update of Google Play Service there is another change you have to do in the Manifest file, Check section 10 of this guide.Posted on November 14th, 2013 at 12:40 pm | Quote Hi Emil, Thanks for your wonderful tutorial. may I know is it possible yo initiate map fragment in a fragment? I am now using tab host to develop my apps and just wonder if it possible to use map fragment in one of my tab fragment. Thanks in advance! 
KelvinPosted on November 15th, 2013 at 5:05 am | Quote This totally helped thank you!Posted on November 25th, 2013 at 5:42 am | Quote Nyc tutorial…Posted on November 29th, 2013 at 1:40 pm | Quote I am creating an app and using map in that. Can u give me the code for routing direction? The current location will be starting point and destination will be given by user from edit text. when button is pressed, the direction is shown. Can u plz help? @Prushni Hey, you can check this post I wrote for showing the route on the map.Posted on November 29th, 2013 at 2:26 pm | Quote There is a project file also that you can download and use to understand how it done. nunca pense que me tomaria 3 dias para hacer trabajar este ejemplo de mostrar un simple mapa en mi dispositivo android , me fui por todos los foros de internet y hasta consulte en la misma pagina de android developer pero no encontre solucion.. hasta que llegue aquí a tu pagina y por estas dos lineas : recien trabajo!!! Gracias!!! Saludos desde Perú ( Lima)Posted on December 4th, 2013 at 6:13 am | Quote @Emil Adjiev Hi I tried this tutorial in elipse but when iam following step 9. Finally for map_layout, XML layout file that was set iam getting error atPosted on December 4th, 2013 at 8:25 am | Quote xmlns:map=”” i.e. Unexperted namespace prifix “xmlns” found for tag fragment can i know how to solve this error @Javier Hey, I’m not talking Spanish but someone translated it for me, You welcome and have a nice coding : )Posted on December 4th, 2013 at 12:53 pm | Quote @srihari Hey, If you are not using any of the “map:” properties in the XML layout file, then you simply can remove this line of code : )Posted on December 4th, 2013 at 12:54 pm | Quote
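Several recurring issues in this thread come down to manifest configuration: Keilaron's note that the MAPS_RECEIVE permission became unnecessary, Jeelof's note about assigning the Play services version "using the meta value", and the "check section 10" reply. As a sketch of what those comments are pointing at, a Maps V2 AndroidManifest.xml of that era carried two meta-data entries inside the application element. The meta-data names below match the Google Play services documentation of the time; the API key value is a placeholder, and you should confirm the exact requirements against the current Maps documentation:

```xml
<!-- Inside the <application> element of AndroidManifest.xml -->

<!-- Declares the bundled Play services version; required after the
     late-2013 google-play-services update the comments mention. -->
<meta-data
    android:name="com.google.android.gms.version"
    android:value="@integer/google_play_services_version" />

<!-- Your key from the Google APIs Console. Note Franz Klein's fix:
     enable "Google Maps Android API v2" there, not "Google Maps API v2". -->
<meta-data
    android:name="com.google.android.maps.v2.API_KEY"
    android:value="YOUR_API_KEY_HERE" />
```

A blank map with working zoom buttons, as several commenters describe, is the classic symptom of a key/console mismatch rather than a code bug.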
http://blog-emildesign.rhcloud.com/?p=435
I've been exploring Python quite a bit lately in the context of Damage. At Damage we use C++ and Python quite a bit, and I'm writing interfaces that work via different protocols (IRC, web, etc.), and I had an occasion to really explore the Python module reloading mechanism.

Python is a fascinating language in a lot of ways and, like all new languages, brings its share of adjustments to the table. One of the strongest aspects of Python is its ability to reload modules on the fly. One of the trickiest parts of this mechanism is how it relates to Python's scoping rules. For instance, suppose you import a module at one level and want to call a function that reloads the modules that have changed since last checked. You can do that, but if you do it at the called function's level, you are actually only changing it within that scope. So if you want to do a base reload of an entire system, you have to have one known-good base and have the submodules reload all of their own submodules in turn, within the proper scoping context (if I understand this correctly).

The outrageously cool thing about this is that you can wrap these import statements within try/except blocks and thus give your program an extra level of robustness. The irony here is that repeated reloads (I tested it to around 1 million reloads) present the same amount of memory leakage as redefining the function would (via executing a file within an exec block, for instance), so you don't end up realizing any memory savings, which appear to be additive, but you do get a kind of correctness of scoping.

Anyhow, I thought I'd put that and the following code on the table and see what O'Reilly Network readers thought of it.
Note: indentation is very important in Python and is mostly ignored here for readability's sake. The code to reload a module would look something like this:

    import sys
    import traceback
    import dispatch

and then later in the code (indented inside a test for the reloading command event):

    try:
        reload(dispatch)
    except:
        print '-' * 60
        traceback.print_exc(file=sys.stdout)
        print '-' * 60

The cool, slightly dangerous, and somewhat hard-to-debug thing is that if the reload fails, the old bot_dispatch code still exists and runs until you replace it with a good module that doesn't throw an exception. Of course this can lead to some very lazy programming because of its robustness, but it's a nice feature, sort of like an emergency backup.

I'd love to hear how other Python folks are doing this kind of reloading, and whether my solution is the truly Pythonic one or wildly off base and indicative of my ignorance of the language.

Comment: See the PyUnit GUI
The GUI for PyUnit, a.k.a. 'unittest', reloads modules automatically. A note about the problems presented by the task can be found at , though the current implementation has evolved slightly from the sample in that page.

Comment: reloading leaks?
I don't think repeatedly reloading a module should cause memory to leak, unless you're using some ancient version of Python that doesn't have the cycle collector...
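The pattern above translates directly to modern Python, where reload moved from a builtin to importlib (Python 3). The sketch below demonstrates both halves of the post's argument with a throwaway module written to a temporary directory (the module name demo_mod and the helper safe_reload are illustrative, not from the original post): a successful reload updates the existing module object in place, and a failed reload leaves the last known-good code running.

```python
import importlib
import pathlib
import sys
import tempfile
import traceback

sys.dont_write_bytecode = True  # always recompile from source on reload


def safe_reload(module):
    """Reload a module in place; on failure, keep the old code running."""
    try:
        importlib.reload(module)
        return True
    except Exception:
        print('-' * 60)
        traceback.print_exc(file=sys.stdout)
        print('-' * 60)
        return False


# Demonstration: write a tiny module to disk, import it, edit it, reload it.
tmp = pathlib.Path(tempfile.mkdtemp())
mod_path = tmp / "demo_mod.py"
mod_path.write_text("VALUE = 1\n")
sys.path.insert(0, str(tmp))

import demo_mod
print(demo_mod.VALUE)   # 1

mod_path.write_text("VALUE = 2\n")
safe_reload(demo_mod)
print(demo_mod.VALUE)   # 2: every holder of the module object sees the new code
```

This also illustrates the scoping point from the post: reload re-executes the source inside the existing module object, so any scope holding a reference to the module sees the new attributes, while references captured to the old functions themselves keep running the old code.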
http://www.oreillynet.com/onlamp/blog/2003/09/cascading_python_module_reload.html
Related questions and tutorials:

- Multicast in Java: how to deliver a message to a group of hosts; the three delivery methods are unicast, broadcast and multicast.
- Video file saving (SQL): storing advertisement videos in a database and streaming them, for a university project.
- UDP Client in Java: composing and sending messages to a UDP server.
- Image transfer using UDP: a file-transfer project (JFC, JDBC, UDP) where only text files transfer properly.
- .Net DLL to Java: calling a .NET DLL from Java; sample code requested.
- NetBeans exercises: read an NxN matrix and print its inverse; find the two nearest points in 2D space; determine whether two numbers are relatively prime (two numbers are relatively prime if their only common factor is 1).
- Java vs .Net: which language is more powerful.
- Doubt on unicast and multicast: multicast means one-to-many connections (Java event handling is an example of multicast); hosts receiving a multicast packet can number zero or more, unlike a broadcast packet.
- Multicasting in Java: in a datagram network, multicast transmission uses java.net.DatagramSocket; see the UDP multicast client and multicast server examples.
- Java file management: creating and updating data in a text file; see "Read file in Java".
- NetBeans and Struts resources, including the book "Java EE Development with NetBeans".
- Java binary file I/O example.
- Java FTP file upload example using the Apache library.
- Wicket on the NetBeans IDE: every component is created in Java and later rendered into HTML; Apache Wicket is similar to the JSF (Java Server Faces) and Tapestry frameworks.
- Java class exercise: add a weight (double) instance variable to the Dog class and a coatColor (String) instance variable to the Cat class.
- File handling in Java: creating a file, storing information in it, and reading it back by user name.
- Dot Net Architect: a job posting for candidates handling .NET projects.
- Counting white lines and comment lines in a .sql file with Java.
- Project on Java, NetBeans and MySQL, based on database connectivity.
- Directory and file listing example in Java.
- Writing to a file in Java (FileWriter, append mode), including writing without overwriting existing content.
- Reading a file in Java, including reading a character file and reading a binary file into a byte array.
- Compressing a file into GZIP format in Java.
- Setting up an echo server and echo client in Java.
- Converting a Java file to a byte array, including via ByteBuffer.
- Uploading and downloading files over FTP from Java, including a simple FTP client that downloads an image from the server.
- Deleting a file if it exists (File.exists), and deleting a file or directory.
- Java file download example using java.net.URL to reach a remote file.
- Creating a JAR file (JAR stands for Java Archive, a file format used to distribute a set of Java classes).
- Finding all files on the local computer from Java.
- Struts 2.2.1 file upload interceptor example, including allowed file extensions.
- Writing to an XML file in Java (building a document tree with text nodes).
- Calling one Java file from another.
- Reading the contents of a text file (for example, appliance records in Name.txt).
- File Reader example in JRuby: reading a text file and printing it.
- Creating a file in Java (the create method returns true if the file is created).
- Getting a file's type in Java.
- Creating and writing a file from a thread that listens on a port (for example, port 6001).
- Constructing a file path in Java, manually or from a Java program.
- Java MappedByteBuffer example: creating a large file.
http://www.roseindia.net/tutorialhelp/comment/84360
Visual C++ 2005 IDE Enhancements, Part 2

Class View Filtering

The Visual Studio 2005 Class View features two panes: one for types and namespaces, and the other for properties, members, and methods. The CommentedClass example from last month's article is shown in Figure 6 (displayed in the Class View window). Class View can now filter the types and namespaces that are displayed. In Figure 6, "Commented*" has been used to filter the view to display only those types whose names start with "Commented", which in this case is only a single class. As the number of types within a solution grows, the usefulness of this feature increases exponentially.

Class Diagram

The addition of a built-in, UML-like modeling tool to Visual C++ is one that many developers will welcome with open arms. This section does not deliberate on why the modeling is UML-like rather than pure UML, except to say that the most common answer from Microsoft seems to be that UML is not powerful enough to model the features of .NET in a neat and convenient form. For developers who have endured the pain of round-trip engineering through tools like Rational Rose, the fact that Visual Studio class diagrams offer trip-less modeling will be a welcome relief. The models are trip-less in the sense that code and diagrams are automatically kept up to date with each other, and there is no notion of one artifact being out of date with the other.

Class diagrams are actually separate project files that allow the classes relevant to a particular view of the system to be added to a diagram. New classes can be dragged onto a diagram from the toolbox shown on the left of Figure 7, and existing classes can be dragged onto the design surface from the Class View. A class can be shown in summary form, like frmStatus in Figure 7, or in an expanded form, like frmMain and frmAbout. A grid control called Class Details shows the methods, properties, and fields of a class in a hierarchical table.
The Class Details grid is the most convenient way to alter a method, field, or property, though this can also be accomplished directly in the diagram.

Figure 7: Class Diagram

As you can see from the context menu in Figure 7, the class diagram surface supports many of the features traditionally associated with the Class View and text editor windows, such as the ability to add various code elements to a type, refactor a class (the refactoring support in Visual Studio 2005 will be covered in an upcoming article), and override inherited methods.

More to Come

Next month, the tour of new IDE features will continue with coverage of refactoring support within the IDE and the new build system.
http://www.developer.com/net/cplus/article.php/10919_3488546_2/Visual-C-2005-IDE-Enhancements-Part-2.htm
Alright, so I made a program that counts the fractions in a file and then outputs the number of times each fraction appears. I got it to count the fractions and produce output. The problem is that I cannot figure out how to skip printing the same fraction's count more than once. For example, my code outputs:

    5/5 has a count of 1
    1/1 has a count of 2
    1/10 has a count of 1
    1/100 has a count of 1
    1/1 has a count of 2

    import java.util.Scanner;
    import java.io.FileNotFoundException;
    import java.io.File;
    import java.io.FileInputStream;

    public class Assignment1 {
        public static void main(String[] args) {
            // Reads file containing fractions
            Scanner inputFile = null;
            try {
                inputFile = new Scanner(new FileInputStream("fractions.txt"));
            } catch (FileNotFoundException e) {
                System.out.println("File not found or not opened.");
                System.exit(0);
            }

            // variables
            String[] fractions = new String[100]; // will take in the fractions
            String[] split = new String[2];       // used to split the fractions
            int[] numerator = new int[100];       // store numerators
            int[] denominator = new int[100];     // store denominators
            int count = 0;                        // number of lines in file
            int z = 0;
            int same = 0;                         // number of fractions that are the same

            // count the number of lines in the file,
            // put each line into the String[] fractions
            while (inputFile.hasNextLine()) {
                fractions[z] = inputFile.nextLine();
                count++;
                // System.out.println(fractions[z]);
                z++;
            }

            // split fractions[] into two arrays: numerator and denominator
            for (int i = 0; i < count; i++) {
                split = fractions[i].split("/");
                numerator[i] = Integer.valueOf(split[0]);
                denominator[i] = Integer.valueOf(split[1]);
            }

            // used to compare a specific numerator and denominator
            // to the rest of the numbers
            int num;
            int den;

            // start off by comparing denominators, and then compare the
            // numerators of like denominators
            for (int i = 0; i <= count; i++) {
                den = denominator[i];
                num = numerator[i];
                for (int a = 1; a < count; a++) {
                    // if (den == denominator[a]) { // compare denominators
                    if (num == numerator[a]) {      // compare numerators
                        same++;
                    }
                }
                // I am putting this in because I could not figure out why
                // the first and last fractions were printing a count of 0
                if (same <= 1) {
                    System.out.println(num + "/" + den + " has a count of 1");
                } else {
                    System.out.println(num + "/" + den + " has a count of " + same);
                }
                same = 0;
            }
            // output the totals
        }
    }
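The duplicate lines appear because the program prints once per input fraction rather than once per distinct fraction (and the outer loop's i <= count also walks one slot past the data). One conventional fix, offered here as a sketch rather than as the thread's accepted answer, is to tally occurrences in a map keyed by the fraction string and then print each key exactly once; a LinkedHashMap preserves first-seen order. Note this treats fractions as literal strings, as the original output does; grouping equivalent fractions like 1/2 and 2/4 would need the numeric parsing from the original post plus reduction.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class FractionCounter {

    /** Counts how often each fraction string (e.g. "1/2") appears. */
    public static Map<String, Integer> count(String[] fractions) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        for (String f : fractions) {
            // merge() increments an existing count, or starts a new one at 1.
            counts.merge(f.trim(), 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        String[] fractions = {"5/5", "1/1", "1/10", "1/100", "1/1"};
        for (Map.Entry<String, Integer> e : count(fractions).entrySet()) {
            System.out.println(e.getKey() + " has a count of " + e.getValue());
        }
    }
}
```

With the sample data from the question, each fraction is printed once: 1/1 gets a count of 2 and the rest a count of 1.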
http://www.javaprogrammingforums.com/loops-control-statements/27458-counting-loop-outputting.html
Pulse SDK Integration Tutorial for iOS: Network Logger Learn how to set up network logger for your app using Pulse SDK. Pulse framework provides you a UI to display the logs in your debug app and also persist the logs that can be exported anytime. Version - Swift 5, iOS 15, Xcode 13 Logging network requests is crucial when debugging and monitoring apps that rely heavily on remote data. In today’s API-driven world, this is true for almost all apps! You’ll come across two types of frameworks to interact with your network APIs while building an app: network proxies and network logger. Usually, network proxies are the tool you use for monitoring network traffic. Proxies sit between your app and the network, intercepting traffic on the way in and out. They log and present data in real time. Some good examples of proxy apps are Charles and Proxyman. Pulse Network Inspector takes a different approach than the previously mentioned proxy apps. Pulse is a network logger. It logs the network activity that an app makes and provides some intuitive UI for later viewing. Pulse is a framework that you can use to provide any of your app’s testers (QA, engineers, beta testers) access to a history of the network requests the app makes. It means the testers don’t need to know how to set up a proxy for themselves. Pulse lets you view network activity in supplementary apps or directly inside the app that triggered the network requests. It also provides access to its data store. This approach lets you build your UI around the log data if you choose. In this tutorial you will: - Learn how to integrate Pulse into your app. - Add simple data logging and visualise it in the Pulse UI. - Hook Pulse up to network requests and inspect all aspects of the request in the Pulse UI. - Implement an image downloader that adds Pulse logging to image network requests. Getting Started Download the project materials by clicking Download Materials at the top or bottom of this tutorial. 
In this tutorial, you’ll work on MovieSearch, an app that lets you search and view movies from The Movie Database (TMDB) via its API. You’ll add the Pulse framework to this app and learn how to use it to log and subsequently inspect the network traffic the app makes.

Before you start, you need to register for an API key at the TMDB website. If you don’t have an account, go to the website’s registration page and sign up. After you sign up and log in, go to the account settings page and select the API option from the left panel.

Next, you need to register an app to get an API key. While in the API settings’ Overview section, click the link under the Request an API Key subsection. It’ll take you to the Create tab. Select the Developer option. After reading through it, accept the terms on the next page. Next, fill out the form’s required information, including your app description and address details. You can give any URL, name and description you’d like for the app. Even though this app isn’t for public consumption, you need to get past this step to retrieve the API key on the next screen. After you complete the application form, the next page will display your API key. Voila! Copy your API key, as you’ll need it in a moment.

Open the starter app from the downloaded materials. Press Shift-Command-O, then search for and open APIKey.swift. Next, insert your API key into the value static constant under the APIKey enum. That’s it for setup! :]

Build and run the project to test the app. You’ll launch to a search screen. Type in any search term to get a list of results. Finally, tap any result to see a detail view.

Take a minute to review the code. Focus on the networking implementation in NetworkService.swift under the Source ▸ Network ▸ Service folder. This tutorial will add the Pulse SDK to the app and update the network stack to work with it. Pulse enables targeted logging of all network activity in the app and gives you a view into network performance and any errors.
Before you begin coding, it’s time to take a closer look at Pulse.

Introducing Pulse

Pulse is a persistent network logger and inspector. Although it can show you requests in real time via the Pulse Pro experience, its primary function is to log your app’s network activity into a persistent data store. You can then view the data in the custom native UI provided by the PulseUI framework, or create a custom UI to display the network data from the logs. Pulse uses Apple’s open-source SwiftLog framework as a dependency to log data.

Pulse is broken down into three main components: PulseCore, PulseUI and the document-based Pulse apps. Here’s what each of them does.

PulseCore

PulseCore is the base framework you use to log activity for introspection later. It also provides a network proxy for automatically capturing network requests. PulseCore is available on iOS, macOS, watchOS and tvOS. The PulseCore framework provides a PersistentLogHandler struct that logs to a persistent store. You can also use this struct for standard log messages from SwiftLog.

PulseUI

PulseUI provides a UI to parse and display network data from the Pulse logs. This framework offers a quick way to get up and running with Pulse data. Like PulseCore, it’s available on iOS, macOS, tvOS and watchOS.

Document-Based Pulse Apps

The document-based Pulse apps are separate from the Pulse framework inside your app. They let you open Pulse documents shared from other devices or apps to view network logs. These apps are available on iOS and macOS. Pulse documents contain the network traffic logged and saved by an app. You can use them to introspect traffic an app made, perhaps to debug something a QA analyst or a beta tester experienced.

Now that you’ve got an overview of what Pulse does and its core frameworks, you’ll integrate Pulse into the MovieSearch app.
Pulse Integration

Pulse is distributed via Swift Package Manager, making it easy to integrate into your project from Xcode. First, open the starter project. In the Project navigator on the left, select the MovieSearch project node at the top. Under Project, select the MovieSearch project item. Then, go to the Package Dependencies tab. Under Packages in the middle pane, click the + button. Next, enter the Pulse package URL in the Search or Enter Package URL text field.

Leave the defaults as they are and click Add Package. Then tick all three package products when requested. Finally, click Add Package. Here you go! :] At the bottom of Xcode’s Project navigator, you’ll see Pulse and swift-log under the Package Dependencies section. Build and run to ensure everything is working as desired.

Pulse is designed around networking data; however, you can also log any arbitrary data. To get started with the framework, you’ll first log some basic data.

Logging Basic Data

In addition to network logs, Pulse lets you take advantage of SwiftLog to log any data you need outside of networking-related concerns. You’ll view these logs alongside your network logs.

Setting Up Pulse Logs

First, initialize and configure the Pulse logging backend. Open AppMain.swift. At the top of the file, add an import for Pulse and Logging under the other imports:

import Pulse
import Logging

In init(), add the following code under setupNavbarAppearance():

LoggingSystem.bootstrap(PersistentLogHandler.init)

This code configures SwiftLog to use Pulse’s PersistentLogHandler. Next, you’ll log some information using SwiftLog, the underlying subsystem Pulse uses.

Introducing SwiftLog

SwiftLog is an open-source logging implementation by Apple. SwiftLog statements have three components: a log level, a message and a metadata value.

Log Levels

SwiftLog provides seven built-in log levels:
- trace
- debug
- info
- notice
- warning
- error
- critical

You’ll use these levels in each log statement based on severity.
Message

The message is the primary piece of information sent to a log statement. You can use interpolated strings in the message to add dynamic data to your logs at runtime.

Metadata Value

This optional parameter helps you attach more data to the log statement. It can be a String, Array or Dictionary. You won’t use this value in your log statements here, but if you ever need to attach more context to your logs, this is the place to do so.

Are you ready to add your first log using the SwiftLog API? Here you go! :]

Logging With SwiftLog

Open MovieListViewModel.swift. Then import Logging under the other imports:

import Logging

Under the declaration of the networkService property, add:

let logger = Logger(label: "com.razeware.moviesearch")

This line creates a SwiftLog Logger instance you can use to log events. You’ve given it a label that will separate these log messages from any others. Next, you’ll log an event when the user performs a search. Add the following line inside the search() function:

logger.info("Performing search with term: \(self.searchText)")

So you’ve added a log. But you have no way to view the logs! In the next section, you’ll remedy that.

Using PulseUI to View Logs

Pulse comes with a built-in viewer for logs, available on macOS, iOS, iPadOS and tvOS. You can use it anywhere in your UI where you want to see what’s happening in your network stack. In this tutorial, you’ll set up the viewer as a modal view displayed via a toolbar button. In a production app, you might want to make this view available only in debug builds or hide it deeper in the UI. Pulse is an excellent debugging tool, but you wouldn’t want it front and center in your app’s UI!

Press Shift-Command-O and open ContentView.swift.
At the top, right under the SwiftUI import, add an import for PulseUI:

import PulseUI

Next, under the declaration of var viewModel, add a state variable to control sheet visibility:

@State private var showingSheet = false

This state variable controls when the PulseUI log view renders as a modal. Next, right under the .searchable modifier, add a sheet modifier:

.sheet(isPresented: $showingSheet) {
  MainView()
}

PulseUI provides MainView. It’s an easy-to-use view that displays a lot of information about your network and SwiftLog data. You added the ability to display the log view in a sheet, but you haven’t created a way to trigger the display yet. You’ll handle that now via a toolbar button. Under the .onSubmit action, add a toolbar modifier:

// 1
.toolbar {
  // 2
  ToolbarItem(placement: .navigationBarTrailing) {
    // 3
    Button {
      // 4
      showingSheet = true
    } label: {
      // 5
      Image(systemName: "wifi")
    }
  }
}

This code:
- Adds a toolbar to the navigation bar.
- Creates a ToolbarItem aligned to the right.
- Adds a Button as content to the ToolbarItem.
- When a user taps the button, sets the showingSheet variable to true.
- Uses the wifi SFSymbol as an icon for the button.

That’s all you need to display your log UI. Build and run. When the app launches, you’ll see your new toolbar item on the right side of the navigation bar. Perform a couple of searches, then tap the new toolbar button. You’ll see your search logs. You won’t see much on the other tabs yet. You’ll explore them after you start capturing network traffic.

Capturing Network Traffic

Pulse has two methods for logging and capturing network traffic. You can use a combination of URLSessionTaskDelegate and URLSessionDataDelegate, or set up automated logging when your app launches. The recommended approach is to use delegation to log your network requests. The automated approach uses an Objective-C runtime feature called swizzling to inject Pulse into the depths of URLSession.
This allows Pulse to capture all requests regardless of origin. While that may be desirable in some instances, it can also be dangerous, so you’ll follow the recommended delegation approach here. Next, you’ll update the networking stack to use delegation as a way to insert Pulse into the mix.

Adding a Logger Instance

Open NetworkService.swift. Then add an import for PulseCore at the top of the file, under the other imports:

import PulseCore

Under the declaration of urlSession, create a logger:

private let logger = NetworkLogger()

NetworkLogger is the class Pulse provides to perform all your logging functions. You’ll use this instance in the next step to log network activity.

Implementing URLSession Delegates

First, you need to change the network interaction to enable Pulse logging. You need to hook Pulse up to various parts of the network request. This means you’ll switch from handling the search network request with an inline completion block to a stored block that executes as part of one of the URLSession delegate callbacks.

Update Search to Work with a Delegate

In NetworkService.swift, add the following property underneath the declaration of logger:

var searchCompletion: ((Result<[Movie], NetworkError>) -> Void)?

This property has the same signature as the completion block the search function previously took. Now, update the search(for:) function to remove parsing, error and completion handling. All of that responsibility will shift to the delegate. Replace the existing function with:

@discardableResult
// 1
func search(for searchTerm: String) -> URLSessionDataTask? {
  // 2
  guard let url = try? url(for: searchTerm) else {
    searchCompletion?(.failure(NetworkError.invalidURL))
    return nil
  }
  // 3
  let task = urlSession.dataTask(with: url)
  // 4
  task.delegate = self
  // 5
  task.resume()
  // 6
  return task
}

This code:
- Removes the completion argument and takes the search term entered in the search bar.
- Then checks for a valid URL. If the URL is not valid, it calls the completion handler, if one exists.
- Creates a data task from the URL.
- Then sets the task’s delegate to self.
- Calls resume() to start the request.
- Finally, returns the task so you can cancel it if needed.

You’ll see a warning, but don’t worry. You’ll fix it soon.

Update NetworkService with URLSessionTaskDelegate

While in NetworkService.swift, update NetworkService to make it a class instead of a struct and have it inherit from NSObject. Update the declaration as follows:

class NetworkService: NSObject {
  ...
}

This change is required so that the class can become a URLSession delegate. Add an extension to NetworkService, declare conformance to URLSessionTaskDelegate and URLSessionDataDelegate, and add the first delegate method by adding the following code at the bottom of the file:

extension NetworkService: URLSessionTaskDelegate, URLSessionDataDelegate {
  func urlSession(
    _ session: URLSession,
    dataTask: URLSessionDataTask,
    didReceive response: URLResponse,
    completionHandler: @escaping (URLSession.ResponseDisposition) -> Void
  ) {
    // 1
    logger.logDataTask(dataTask, didReceive: response)
    // 2
    if let response = response as? HTTPURLResponse,
      response.statusCode != 200 {
      searchCompletion?(.failure(.invalidResponseType))
    }
    // 3
    completionHandler(.allow)
  }
}

This delegate function fires when urlSession receives a response. Here’s a code breakdown:
- This is your first Pulse network logging statement! It tells Pulse to log that the data task received a response.
- If the response isn’t a 200 (success) status code, you call searchCompletion with a failure result.
- If the request was successful, you call the delegate’s completion handler to allow the data task to continue.

Next, add two more delegate methods:

// 1
func urlSession(
  _ session: URLSession,
  task: URLSessionTask,
  didCompleteWithError error: Error?
) {
  logger.logTask(task, didCompleteWithError: error)
}

// 2
func urlSession(
  _ session: URLSession,
  task: URLSessionTask,
  didFinishCollecting metrics: URLSessionTaskMetrics
) {
  logger.logTask(task, didFinishCollecting: metrics)
}

These functions add more Pulse log points for the data task at the following points:
- Successful completion of the task.
- When metrics have been collected.

You have one final delegate protocol to implement.

Update NetworkService with URLSessionDataDelegate

Because you’ve switched to using URLSession delegates, you still need a way to hook into the response when data is received, parse it and update the backing array with the results. At the bottom of the extension, add your final delegate function:

func urlSession(
  _ session: URLSession,
  dataTask: URLSessionDataTask,
  didReceive data: Data
) {
  // 1
  logger.logDataTask(dataTask, didReceive: data)
  do {
    // 2
    let decoder = JSONDecoder()
    decoder.keyDecodingStrategy = .convertFromSnakeCase
    let movieResponse = try decoder.decode(MovieResponse.self, from: data)
    // 3
    searchCompletion?(.success(movieResponse.list))
  } catch {
    // 4
    searchCompletion?(.failure(NetworkError.invalidParse))
  }
}

This function:
- Sends a log statement to Pulse for the receipt of data.
- Attempts to decode the response.
- If successful, calls searchCompletion with a success result.
- If parsing fails, calls searchCompletion with an appropriate error and a failure result.

Next, you need to update MovieListViewModel to incorporate the changes to the search request.

Update the List View Model

Open MovieListViewModel.swift. Add a function under search() to handle the search response:

private func processSearchResponse(result: Result<[Movie], NetworkError>) {
  // 1
  DispatchQueue.main.async {
    // 2
    self.loading = false
    // 3
    guard let list = try? result.get() else { return }
    // 4
    self.movieList = list
  }
}

The above code:
- Moves to the main queue because it’ll trigger a UI update.
- Sets the loading property to false.
Updating this property re-renders the UI and causes the progress spinner to disappear.
- If there are no results, returns early so as not to replace any existing list with an empty one.
- Sets the published property movieList to the search results. This triggers an update to the search results UI.

Now, update the search() function to:

func search() {
  // 1
  if let task = currentTask {
    task.cancel()
    currentTask = nil
  }
  // 2
  DispatchQueue.main.async {
    guard !self.searchText.isEmpty else {
      self.movieList = []
      return
    }
  }
  // 3
  logger.info("Performing search with term: \(self.searchText)")
  loading = true
  // 4
  networkService.searchCompletion = processSearchResponse(result:)
  // 5
  let task = networkService.search(for: searchText)
  // 6
  currentTask = task
}

Here’s a code breakdown:
- If currentTask isn’t nil, a request is already in progress. Cancel it to perform the new one.
- If searchText is empty, clear the list.
- Then, log the search and set the loading state to true.
- Set the new searchCompletion property to processSearchResponse(result:).
- Create the task and update the currentTask property.

That’s it! Build and run. You’ll see the app behave in the same way. But now, when you perform a few searches and tap the network button, you’ll get logs of the network activity! Make a few different searches and then go into the Pulse UI.

Take some time to inspect the Pulse UI now that you have some log data to explore. The first view you’ll see is the Console tab. This view mixes all log items: network entries and standard SwiftLog entries. There is a textual search and filter UI as well, and you can filter by date and log level. Tapping an item gives you a detail summary. Pin detail items by tapping the pin icon at the top. The Share button lets you share your Pulse document for others to view in one of the dedicated iOS or macOS apps. Pin an item now to save it.
Tapping View on the Response Body section shows you a formatted JSON response body. Back in the main request view, tap the Headers segment at the top and you’ll see a dedicated view of request and response headers. Select the Metrics segment and you’ll see network performance stats.

Back in the main Pulse UI console, if you select the Network tab, you’ll see only network logs, excluding the standard SwiftLog entries. Select the Pins tab to see the item you pinned as a favorite. Then select Settings to see the options available to you.

Browse Files lets you browse the Files app for Pulse logs stored outside this app. Unfortunately, an apparent bug in this UI doesn’t let the modal experience dismiss, so if you go into it you’ll need to kill the app and relaunch it to get back. Remote Logging pairs with Pulse Pro to allow livestreaming of network logs via the macOS app.

You have one more area to update for Pulse logging.

Capturing Image Requests

The list and detail views both use NetworkService to download poster images. You’ll add logging for image requests as well. It makes sense to split this functionality into a helper class to add the logging.

Set up ImageDownloader

In the Xcode Project navigator, open MovieSearch ▸ Source ▸ Network ▸ Service. Right-click the Service group and select New File…. Then select Swift File and click Next. Name the new file ImageDownloader.swift and click Create. Remove import Foundation and replace it with this code:

// 1
import UIKit
import PulseCore

// 2
class ImageDownloader: NSObject {
  private let imageBaseURLString = "
  let urlSession = URLSession(
    configuration: URLSessionConfiguration.default,
    delegate: nil,
    delegateQueue: nil)
  // 3
  let logger = NetworkLogger()
  // 4
  var imageDownloadCompletion: ((Result<UIImage, NetworkError>) -> Void)?
  // 5
  func downloadImage(for imageType: ImageType, at path: String) {
    guard let url = try?
url(for: imageType, at: path) else {
      return
    }
    let task = urlSession.dataTask(with: url)
    task.delegate = self
    task.resume()
  }
  // 6
  private func url(for imageType: ImageType, at path: String) throws -> URL {
    let imagePathParam = imageType.pathParameter()
    guard let baseURL = URL(string: imageBaseURLString),
      var urlComponents = URLComponents(url: baseURL, resolvingAgainstBaseURL: false) else {
      throw NetworkError.invalidURL
    }
    urlComponents.path = "/t/p/\(imagePathParam)\(path)"
    let queryItems: [URLQueryItem] = [
      URLQueryItem(name: "api_key", value: APIKey.value)
    ]
    urlComponents.queryItems = queryItems
    guard let url = urlComponents.url else {
      throw NetworkError.invalidURL
    }
    return url
  }
}

This code is an implementation of a simple download utility for images. As a quick summary, it:
- Imports UIKit instead of Foundation because you’ll work with UIImage. It also imports PulseCore for logging.
- Makes the implementation a class. You’ll implement the same delegation pattern as in NetworkService, so ImageDownloader needs to be a class that inherits from NSObject.
- Creates a NetworkLogger instance, just like before.
- Uses a property to hold a completion handler, as before.
- Creates a function to trigger image downloads.
- Generates an image URL from the image type (detail or list) and the poster path retrieved from the API response for each movie.

Next, implement Pulse logging in an extension in ImageDownloader.swift. Under the closing brace of ImageDownloader, add:

extension ImageDownloader: URLSessionTaskDelegate, URLSessionDataDelegate {
  func urlSession(
    _ session: URLSession,
    dataTask: URLSessionDataTask,
    didReceive response: URLResponse,
    completionHandler: @escaping (URLSession.ResponseDisposition) -> Void
  ) {
    logger.logDataTask(dataTask, didReceive: response)
    if let response = response as?
HTTPURLResponse,
      response.statusCode != 200 {
      imageDownloadCompletion?(.failure(.invalidResponseType))
    }
    completionHandler(.allow)
  }

  func urlSession(
    _ session: URLSession,
    task: URLSessionTask,
    didCompleteWithError error: Error?
  ) {
    logger.logTask(task, didCompleteWithError: error)
    imageDownloadCompletion?(.failure(NetworkError.invalidResponseType))
  }

  func urlSession(
    _ session: URLSession,
    task: URLSessionTask,
    didFinishCollecting metrics: URLSessionTaskMetrics
  ) {
    logger.logTask(task, didFinishCollecting: metrics)
  }

  func urlSession(
    _ session: URLSession,
    dataTask: URLSessionDataTask,
    didReceive data: Data
  ) {
    logger.logDataTask(dataTask, didReceive: data)
    guard let image = UIImage(data: data) else {
      imageDownloadCompletion?(.failure(.invalidParse))
      return
    }
    imageDownloadCompletion?(.success(image))
  }
}

Much of this is like the NetworkService extension. Note the transformation of Data to UIImage in the urlSession(_:dataTask:didReceive:) function. Aside from that key difference, all the patterns and log statements match NetworkService.

Update MovieDetailViewModel

Now, you’ll switch to ImageDownloader for all image downloads. Open MovieDetailViewModel.swift. At the top of the class declaration, replace the networkService property with an ImageDownloader:

private let imageDownloader = ImageDownloader()

Next, under fetchImage(for:), add this helper function:

private func processImageResponse(result: Result<UIImage, NetworkError>) {
  // 1
  guard let image = try? result.get() else { return }
  // 2
  DispatchQueue.main.async {
    self.posterImage = image
  }
}

This helper:
- Checks whether the network request returned an image; otherwise, it returns early.
- Sets the posterImage on the main queue, triggering a UI update.
Now, replace fetchImage(for:) with:

func fetchImage(for movie: Movie, imageType: ImageType) {
  guard let posterPath = movie.posterPath else { return }
  imageDownloader.imageDownloadCompletion = processImageResponse(result:)
  imageDownloader.downloadImage(for: .list, at: posterPath)
}

This code switches the image download implementation to the new class, following the delegation pattern required to use Pulse logging. Next, open NetworkService.swift and remove the downloadImage(for:at:completion:) and url(for:at:) functions, because you no longer need them. That completes your swap of the image download implementation.

Build and run to check your work. Perform a search and you’ll see image request results in your logs alongside the search results. Tap an image request, then tap into Response Body, and you’ll see the image as part of the response.

And that’s it! You updated your network implementation to add Pulse logging. Pulse gives you some excellent in-app debugging functionality out of the box. You’ll find the visually oriented interface, the ability to review results in-app and the ability to share Pulse files helpful when debugging tricky network conditions.

Where to Go From Here?

Download the completed project by clicking Download Materials at the top or bottom of the tutorial. Pulse has many more features you can explore. Pulse Pro adds the ability to livestream logs. If you have specific needs, you can develop your own viewer front end that works with the Pulse data store, which is implemented via a Core Data based API, so iOS and macOS developers should feel at home. Finally, if you want to explore more about network proxy tools, check out our tutorials on Charles and Proxyman, which offer a different take on viewing and debugging network conditions. If you have any comments or questions, please join the discussion below.
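As a closing aside, the core pattern this tutorial applied twice — store a completion handler on the service, start a task, and let delegate-style callbacks parse the data and invoke the stored completion — is language-neutral. Here is a minimal sketch of that shape in JavaScript (class and method names are illustrative only, not part of the tutorial's Swift code):

```javascript
// Illustrative sketch of the stored-completion + delegate-callback pattern.
// Mirrors searchCompletion in NetworkService: the request method takes no
// completion argument; results flow through a callback that parses the
// data and invokes the stored completion.
class FakeNetworkService {
  constructor() {
    this.searchCompletion = null; // stored completion, set by the caller
  }

  search(term) {
    // Starting the "request" only kicks off work; a real implementation
    // would hand the response to didReceiveData asynchronously.
    this.didReceiveData(JSON.stringify({ list: [term + " movie"] }));
  }

  didReceiveData(data) {
    // Delegate-style callback: parse, then call the stored completion
    // with either a success or a failure result.
    try {
      const response = JSON.parse(data);
      if (this.searchCompletion) {
        this.searchCompletion({ ok: true, value: response.list });
      }
    } catch (e) {
      if (this.searchCompletion) {
        this.searchCompletion({ ok: false, error: "invalidParse" });
      }
    }
  }
}

const service = new FakeNetworkService();
let received = null;
service.searchCompletion = (result) => { received = result; };
service.search("pulse");
console.log(received); // { ok: true, value: ["pulse movie"] }
```

The trade-off is the same as in the Swift code: the service holds a single stored completion at a time, which is why search() cancels any in-flight request before starting a new one.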
https://www.raywenderlich.com/30189310-pulse-sdk-integration-tutorial-for-ios-network-logger
Project B was compiled as A.dll (assembly name A).

Question: Is there any code like this in C# ASP.NET?

CrystalReport1 report = new CrystalReport1();

I get this error:

C:\Inetpub\wwwroot\proje\kalite\rapor\Equipment\Eqp_Status.aspx.cs(31): The type or namespace name 'CrystalReport1' could not be found (are you missing a using directive or an assembly reference?)

I also noticed that the CrystalDecisions objects appeared in the Object Browser -- a dead giveaway that something was amiss!

Solution 1: Look at this similar resolved issue. Based on it, earlier versions did not allow this.

Another answer: I solved mine because the other project was coded with .NET 4.5.
Another report: Getting the following error:

Error 1: The type or namespace name 'PrintPDF' could not be found (are you missing a using directive or an assembly reference?)

Answer: The Stack Overflow question "the-type-or-namespace-name-could-not-be-found" (stackoverflow.com/questions/4764978) has the solution.

On the Crystal Reports side: "For more information on the ReportClass class, please refer to the derived class ReportDocument." To check that the report class is generated as it should be, click on the report in Solution Explorer. It works. Regards, Nive.

I've also tested it on another machine, and it appears that having CR9 Advanced installed on the machine will cause this problem.

Answer: PrjForm was set to ".NET Framework 4 Client Profile". I changed it to ".NET Framework 4", and now I have a successful build.
The sample reports are in C:\Program Files\Microsoft Visual Studio .NET 2003\Crystal Reports\Samples.

Another question: I have tried to remove the CrystalReport.rpt file and add it again, but I couldn't find it in the Add --> Existing Item list! So I tried to add the missing assemblies via a NuGet package. It compiles...

Answer: The solution, after a painful day, was to make sure assemblies don't have the same name.
Answer: After performing this, the following references will be added automatically:

CrystalDecisions.CrystalReports.Engine
CrystalDecisions.ReportSource
CrystalDecisions.Shared

-- Leo Liu [MSFT], MSDN Community Support

This report class has ReportClass as its base class. It solved my problem.

Other notes: The WinForms designer gave a misleading error message, likely resulting from incompatible platform targets. You don't create an instance of the .rpt; you create a viewer object/form control and use it to show the report. There is also a third project referenced by PrjForm, which it is able to reference and use successfully.
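Several of the answers above trace the error to the .NET Framework 4 Client Profile target. One way to verify this is to inspect the project's .csproj file directly. The fragment below is a sketch only: the element names follow the standard MSBuild project schema, and a real project's property group will contain many other entries.

```xml
<!-- Inside the main <PropertyGroup> of the affected .csproj -->
<PropertyGroup>
  <TargetFrameworkVersion>v4.0</TargetFrameworkVersion>
  <!-- The Client Profile is a reduced subset of the framework; references
       to assemblies missing from it surface as "type or namespace could
       not be found" errors. Delete this element (or change Target framework
       under Project Properties > Application) to target the full
       .NET Framework 4. -->
  <TargetFrameworkProfile>Client</TargetFrameworkProfile>
</PropertyGroup>
```

After changing the target, clean and rebuild the solution so all projects recompile against the same framework.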
http://gsbook.org/not-be/name-crystalreport1-could-not-be-found-are.php
Day 1 📅 11-06-2019 🕐 1h 🏁 Initial setup and getting ready

Initial setup

I’m going to use the create-react-app tool to scaffold the project folder. It’s a tool provided by Facebook that lets you easily scaffold a pre-configured starter project.

npx create-react-app todo-app

The initial project consists of:
- node_modules: contains all necessary dependencies. It’s generated when scaffolding the app with the create-react-app tool (there’s an npm install in it)
- public: contains a few files like index.html, the application favicon and a manifest that contains a few basic application settings
- src: contains the code
- .gitignore
- package.json: holds all the project information like the version, the author and, mainly, the dependencies the application needs to work properly
- yarn.lock: contains all the dependencies Yarn needs, with their versions

To run the starter basic application it’s enough to do:

cd todo-app
npm start

npm start is one of several pre-configured commands I’m going to use to develop this application. Other commands are:
- npm test
- npm build
- npm eject (stay away from it for now)

Get ready for components

In order to work with a sustainable and scalable structure, I like to keep things separated. I’m going to create two folders for components. These two folders will contain (surprise) components! The only difference between them is that a container is a component that manages the application state, so it’s a stateful component. The other components are stateless components.

The main component <App />

Let’s create the first component. I’m going to move App.js, App.test.js and App.css into their own folder, ./containers/App/:

// App.js
import React, { Component } from 'react';
import './App.css';

class App extends Component {
  render() {
    return (
      <div className="App">
        Placeholder
      </div>
    );
  }
}

export default App;

/* App.css */
.App {
  text-align: center;
}

No changes to App.test.js at the moment.
Update index.js - importing the App component - because the file locations have changed, and delete useless files like logo.svg.

The <Todo /> component

Let's create the <Todo /> component in the ./components folder: create Todo.js, Todo.test.js and Todo.css.

// Todo.js
import React from 'react';
import './Todo.css';

const todo = () => (
  <div className="Todo">
    <p>Placeholder</p>
  </div>
);

export default todo;

/* Todo.css */
.Todo {} /* Empty for now */

Todo.test.js is similar to App.test.js:

import React from 'react';
import ReactDOM from 'react-dom';
import Todo from './Todo';

it('renders without crashing', () => {
  const div = document.createElement('div');
  ReactDOM.render(<Todo />, div);
  ReactDOM.unmountComponentAtNode(div);
});

Now I can use the <Todo /> component inside the <App /> component:

import React, { Component } from 'react';
import './App.css';
import Todo from '../../components/Todo/Todo';

class App extends Component {
  render() {
    return (
      <div className="App">
        <Todo />
      </div>
    );
  }
}

export default App;

The full project lives at rossanodan/todo-app (a simple to-do app built with React, bootstrapped with Create React App). How to run locally:

git clone
cd todo-app
npm install
npm start
https://practicaldev-herokuapp-com.global.ssl.fastly.net/rossanodan/how-to-build-a-todo-app-with-react-and-firebase-database-coh
4! = 4 * 3! = 4 * (3*2!) = 4 * (3*(2*1!)) = 4 * (3*(2*(1))) = 24

Obviously, the second approach is harder to think through than the first one. The second solution is called 'Recursion', and thinking about the solution of a problem in this way is termed 'recursive thinking'. In this method of simplification we use the 'Divide and Conquer' technique: we try to divide a problem (the main problem) into sub-problems of the same type. Remember our factorial example. The problem of finding the factorial of n can be formulated recursively as follows:

Main problem = fact(n) = (Preprocessing)(Sub-problem of the same type but for a simpler case)(Postprocessing) = n times fact(n-1) = n * fact(n-1)

Now we are going to understand the gist of the above concept. There are two steps we have to perform in designing any recursive solution.

1. Define what the base case(s) of this recursive solution could be. Examples:
1.1. Base case for the factorial problem: fact(n) = 1, if n = 0.
1.2. Base cases for the Fibonacci problem: fibonacci(n) = 0, if n = 0; fibonacci(n) = 1, if n = 1.
1.3. Base case for the palindrome problem: IsPalindrome(string) = true, if Length(string) = 1.

2. Think about and formulate the recursive case of the problem. Designing the recursive case involves three tasks:
2.1. Divide the problem into one or more simpler parts of the main problem. The end result of this exercise usually comes out as a mathematical formula expressed in the form of a function:

Figure 1

Where:
n1 = current input to the recursive function
n2 = reduced or simpler input passed in the next recursive call
g() = some pre/post-processing operations needed to get the f(n1) value from the reduced case f(n2). g() is optional.

Examples:
2.1.1. The problem of calculating a factorial can be divided into sub-problems of the same type as: fact(n) = n * fact(n-1), where n > 0.
2.1.2.
The problem of calculating the nth term of the Fibonacci series is divided as: fib(n) = fib(n-1) + fib(n-2).
2.1.3. The problem of checking whether a given string is a palindrome can be reduced to checking whether its substring is a palindrome. The substring is obtained from the original string by removing its first and last characters, provided that, before removing them, we have checked that the first and last characters are equal.

Figure 2

2.2. Call the function (recursively) on each of the subdivided parts.

Figure 3

2.3. Combine the solutions of the parts into a solution of the problem. This point needs some explanation. First of all, this step is not necessary in every recursive solution. We know from step 2.1 above that to create a recursive case we have to divide the main problem into sub-problems of the same type before making a recursive call. Whatever the type of recursion, every recursion should divide the problem in such a way that it approaches a base case in a finite number of steps. But ultimately the purpose of the whole exercise is to get our end result. So, depending upon when we get the end result in a recursive function, we have two types of recursive functions:
1. Tail recursive
2. Augmenting recursive

First we have to understand what a 'tail call' is. Let's understand it by example.

Figure 4

So, tail recursive functions are those whose tail call is a call to the same function. For example, in the code below we see two tail operations, and in one of the tail calls we see that the tail call foo(a-1) calls the same function foo. Hence, this is known as a tail recursive function.

Figure 5

Other characteristics of a tail recursive function are given below:
1. A tail recursive function in general looks as follows:

foo(n1)
{
    ...
    // No operation needs to be done after returning from foo(n2)
    return foo(n2);
    ...
}

2.
If 'tail recursion optimization' is done by the high-level language compiler, then the end result from the last recursive call is directly returned to the external calling function that first called the tail recursive function. Usually, for each function call, 'stack' space is allocated to store parameters and local variables, and the stack space of the function is reserved until its return statement is executed. In recursive calls, the function cannot execute its return statement until the base case is reached, since a recursive call is present before the return statement. Hence, the runtime has to reserve one stack frame after another until the base case is reached. With tail recursion optimization, the stack frame is not kept reserved across subsequent calls: it is reserved only for the currently executing call and is reused on the next recursive call, without waiting for the current function's return statement to execute.

3. There are no pending operations to be performed on return from a recursive call. Tail recursive functions are often said to "return the value of the last recursive call as the value of the function." Some modern computing systems will actually compute tail-recursive functions using an iterative process.

There are some functions that keep on simplifying the problem until they reach the base case, and only start building the end result as they return back from the base case. These are called 'augmenting recursive' functions. Let's understand the difference through examples. The palindrome and greatest common divisor (GCD) problems are implemented as tail recursive functions, as no operation is left pending at each return (the naive Fibonacci, as shown later, leaves an addition pending and is tree-recursive). Factorial calculation, on the other hand, is an augmenting recursive function, as the result only starts building when the function returns back from the base case (the last recursive call).
In fact, 'augmenting recursive' functions are those where we have some function g() which does some post/pre-processing on the recursive case f(n2). When g() does not exist, the function is tail recursive. Usually we find that there are more tail recursive cases than augmenting recursive cases.

Tail recursion example. The recursive case for the greatest common divisor problem is given in the program below:

First call  = gcd(24, 6)
Second call = gcd(6, 24 % 6)
Third call  = 6

Figure 6

Augmenting recursion example. The recursive case for the factorial problem is:

Figure 7

Let's calculate the factorial of 4, denoted 4!:

First call  = 4 * fact(3)
Second call = 4 * 3 * fact(2)
Third call  = 4 * 3 * 2 * fact(1)
Fourth call = 4 * 3 * 2 * 1 * fact(0)
Fifth call  = 4 * 3 * 2 * 1 * 1   [1st return, from the fifth call (1*1)]
            = 4 * 3 * 2 * 1       [2nd return, from the fourth call (2*1)]
            = 4 * 3 * 2           [3rd return, from the third call (3*2)]
            = 4 * 6               [4th return, from the second call (4 * 6)]
            = 24                  [5th (last) return, from the first call]

Given below is an illustration that gives a comparative analysis of three recursive problems (factorial, Fibonacci, palindrome) and helps in identifying the general pattern used in designing all types of recursion problems.

Figure 8 (To view a larger image, please download the attached 'Recurson Generic Steps.zip' file)

int GCD(int x, int y)
{
    if (y == 0)
        return x;
    else
        return GCD(y, x % y);
}

Whenever there is a pending operation to be performed on return from each recursive call, the recursion is known as augmenting recursion. The "infamous" factorial function fact is usually written in a non-tail-recursive manner:

int fact(int n)
{
    if (n == 0)
        return 1;
    return n * fact(n - 1);
}

A function is directly recursive if it contains an explicit call to itself:

int foo(int x)
{
    if (x <= 0)
        return x;
    return foo(x - 1);
}

A function foo is indirectly recursive if it contains a call to another function which ultimately calls foo.
int foo(int x)
{
    if (x <= 0)
        return x;
    return foo1(x);
}

int foo1(int y)
{
    return foo(y - 1);
}

When a pair of functions contains calls to each other, they are said to perform mutual recursion:

int foo(int x)
{
    if (x <= 0)
        return x;
    return foo1(x);
}

int foo1(int y)
{
    return foo(y - 1);
}

Notice that the pending operation for the recursive call is another call to fib. Therefore fib is tree-recursive:

int fib(int n)
{
    if (n == 0)
        return 0;
    if (n == 1)
        return 1;
    return fib(n - 1) + fib(n - 2);
}

Recursive algorithms are particularly appropriate when the underlying problem, or the data to be treated, is defined in recursive terms. Structural recursion refers to recursive procedures acting on data that is itself defined recursively: for example, XML file parsers, linked lists and binary trees. The following example shows structural recursion, in which the underlying problem is to read an XML data file with a recursive data structure; hence structural recursion is used to parse that XML file.

XML File

Figure 9

Figure 10

The order of recursion only matters when the recursive call is not the 'tail' call of the function. If the recursive call is associated with the return statement of the function, then all the operations must appear before the return statement. But if the recursive call is not associated with the return statement, and the return statement is defined after the recursive call, then the order of the statements before and after the recursive call affects the execution of the recursive function.

Figure 11

When to use recursion:
1. When the problem is complex and can be expressed in a more simplified form as a recursive case than as its iterative counterpart. (In the "factorial" example, the iterative implementation is likely to be slightly faster in practice than the recursive one.)
2. When the solution of the problem is inherently recursive, like structural recursion (tree traversal) and Quick Sort.

When not to use recursion:
1. When the problem is simple.
2.
When the solution of the problem is not inherently recursive, i.e. the main problem cannot easily be expressed as sub-problems of the same type.
3.

The figure below graphically illustrates this principle.

Figure 12: Problem-to-solution difficulty curve

It applies only to problems that can be solved recursively. As the problem becomes more difficult, the difficulty and CPU usage of the non-recursive solution begin to increase. On the other hand, the difficulty and CPU usage of the recursive solution -- simulated or otherwise -- begin to flatten. Notice that for very simple problems, the non-recursive approach is usually better.

Here I'm trying to give some real-life usages of recursion in the computer science field. From a beginner's perspective, I have tried to write C#.NET solutions to common recursive problems. Please find the source code and the executables as
http://www.codeproject.com/Articles/32873/Recursion-made-simple?fid=1534210&df=90&mpp=10&sort=Position&spc=None&tid=4321241
Embedding the db4o Object-Oriented Database Listing 2. TreeNode Class /* * TreeNode */ using System; namespace PersistentTrees { /// <summary> /// Description of TreeNode. /// </summary> public class TreeNode { public TreeNode() { } private TreeNode left; // Left child private TreeNode right; // Right child private string key; // Key for this node private Object[] data; // Data associated with key // Create a new TreeNode, loaded with // key and data. public TreeNode(string _key, Object _data) { left = null; right = null; key = _key; data = new Object[1]; data[0] = _data; } // addData // Adds new data item to an existing node. // The array is extended. public void addData(Object _data) { Object[] newdata = new Object[data.Length+1]; Array.Copy(data,0,newdata,0,data.Length); newdata[data.Length]=_data; data = newdata; } // Property access public TreeNode Left { get { return left; } set { left = value; } } public TreeNode Right { get { return right; } set { right = value; } } public string Key { get { return key; } set { key = value; } } public Object[] getData() { return data; } } } Next, I create a trie, an indexing data structure specialized for searching text words. It is built as a series of nodes arranged in levels—each level holds a set of characters and associated pointers such that the characters on the topmost (or, root) level correspond to letters in a word's first character position; characters in the second level correspond to letters in the second character position, and so on. References associated with each character serve to “string” characters like beads on a thread, so that following a thread from the root down into the tree spells out the word being searched for. If this is difficult to visualize, the illustration in Figure 1 should help. Figure 1. A trie. In a trie index, individual characters within a word are stored at different node levels. This particular trie holds three words: as, ask and bet. 
The data pointers are actually references to the DictEntry objects associated with the corresponding words. Inserting a new word into a trie is relatively simple. Starting with the first matching character, you examine the root node to see whether that character exists. If not, add it, and from that point on, the algorithm inserts new nodes (each initialized with a subsequent letter) as it works through the target word. If the character does exist, the algorithm follows the associated pointer to the next level, and the examination process repeats. Ultimately, you've accounted for each character in the word, and the node you're on is the node on which you attach the data reference. Searching a trie is equally simple. Start at the root, and look for the first character. If the character is found, follow the associated reference to the next node; else, return a “not found” error. Otherwise, move to the next character, repeat, and if you get through the whole word, the data node associated with the terminal character points to the DictEntry object. The code for the trie is shown in Listing 3. Listing 3. Trie /* * Trie */ using System; using com.db4o; namespace PersistentTrees { /// <summary> /// Description of Trie. /// </summary> /// trie class public class Trie { private TriePnode root; // Root of Trie // Constructor public Trie() { root = null; } // insert // Insert a key/data pair into the tree. // Allows duplicates public void insert(string key, // Key to insert Object data) // Data assoc. with key { TriePnode t = root; TriePnode parent = null; int index=0; int slen = key.Length; for(int i=0; i< slen; i++) { char c = key[i]; // If a node doesn't exist -- create it if(t == null) t = new TriePnode(); // If this is the first node of the tree, // it is the // root. 
Otherwise, it is stored in the // pnodes array // of the parent if(i==0) root = t; else parent.setPnodePointer(index, t); // If the character is not on the node, // add it if((index=t.isCharOnNode(c))==-1) index = t.addKeyChar(c); if(i == slen-1) break; parent = t; t = t.getPnodePointer(index); } // Finally, add the data item t.addData(index, data); } // search // Searches for a string in the trie. // If found, returns the Object[] data array associated. // Else, returns null // db is the ObjectContainer holding the trie public Object[] search(string _key, ObjectContainer db) { TriePnode t; char c; int index=0; // Empty trie? if((t=root)==null) return(null); int slen = _key.Length; for(int i=0; i<slen; i++) { c = _key[i]; if((index=t.isCharOnNode(c))==-1)return(null); if(i==slen-1) break; db.activate(t,2); t = t.getPnodePointer(index); } // Get the data db.activate(t,3); return(t.getDnodePointers(index).getData()); } } } As the code for inserting and searching both binary trees and tries illustrates, we can work with database objects almost as though they were purely in memory objects. Specifically, we can attach an object to an index simply by storing its object reference in the data reference element. In addition, because the database makes no distinction between index objects and data objects, we need not create a separate index and data files. This keeps everything in one place, which is actually more of an advantage than one might first suppose. 
Code for reading a text file with words and definitions, creating DictEntry objects and storing them in the database, and also building binary tree and trie indexes and attaching the DictEntry objects to them looks like this: string theword; string pronunciation; int numdefs; int partofspeech; string definition; DictEntry _dictEntry; // Open a streamreader for the text file FileInfo sourceFile = new FileInfo(textFilePath); reader = sourceFile.OpenText(); // Open/create the database file ObjectContainer db = Db4o.openFile(databaseFilePath); // Create an empty Binary tree, and an empty trie BinaryTree mybintree = new BinaryTree(); Trie mytrie = new Trie(); // Sit in an endless loop, reading text, // building objects, and putting those objects // in the database while(true) { // Read a word. // If we read a "#", then we're done. theword = ReadWord(); if(theword.Equals("#")) break; // Read the pronunciation and put // it in the object pronunciation = ReadPronunciation(); _dictEntry = new DictEntry(theword, pronunciation); // Read the number of definitions numdefs = ReadNumOfDefs(); // Loop through definitions. For each, // read the part of speech and the // definition, add it to the definition // array. for(int i=0; i<numdefs; i++) { partofspeech = ReadPartOfSpeech(); definition = ReadDef(); Defn def = new Defn(partofspeech, definition); _dictEntry.add(def); } // We've read all of the definitions. // Put the DictEntry object into the // database db.set(_dictEntry); // Now insert _dictEntry into the binary tree // and the trie mybintree.insert(_dictEntry.TheWord, _dictEntry); mytrie.insert(_dictEntry.TheWord, _dictEntry); } // All done. // Store the binary tree and the trie db.set(mybintree); db.set(mytrie); // Commit everything db.commit(); This, of course, presumes a number of helper methods for reading the source file, but the flow of logic is nonetheless apparent. 
Notice again that we were able to store each index—in its entirety—simply by storing the root with a single call to db.set(). Fetching something from the database is only somewhat trickier. As much as we'd like to treat persistent objects identically to transient objects, we cannot. Objects on disk must be read into memory, and this requires an explicit fetch. The initial fetch is, of course, a call to db.get() to locate the root of the index. So, code that allows us to search for a word using either the binary tree or the trie index would look like this: public static void Main(string[] args) { Object[] found; DictEntry _entry; // Verify proper number of arguments if(args.Length !=3) { Console.WriteLine("usage: SearchDictDatabase <database> B|T <word>"); Console.WriteLine("<database> = path to db4o database"); Console.WriteLine("B = use binary tree; T = use trie"); Console.WriteLine("<word> = word to search for"); return; } // Verify 2nd argument if("BT".IndexOf(args[1])==-1) { Console.WriteLine("2nd argument must be B or T"); return; } // Open the database file ObjectContainer db = Db4o.openFile(args[0]); if(db!=null) Console.WriteLine("Open OK"); // Switch on the 2nd argument (B or T) if("BT".IndexOf(args[1])==0) { // Search binary tree // Create an empty binary tree object for the // search template BinaryTree btt = new BinaryTree(); ObjectSet result = db.get(btt); BinaryTree bt = (BinaryTree) result.next(); // Now search for the results found = bt.search(args[2],db); } else { // Search trie // Create an empty trie object for the search // template Trie triet = new Trie(); ObjectSet result = db.get(triet); Trie mytrie = (Trie) result.next(); // Now search for the results found = mytrie.search(args[2],db); } // Close the database db.close(); // Was it in the database? if(found == null) { Console.WriteLine("Not found"); return; } // Fetch the DictEntry _entry = (DictEntry)found[0]; ... <Do stuff with _entry here> ...
And now we can explain the purpose of the calls to db.activate() in the search methods of both Listings 1 and 3. When you call the db.set() method, as we explained, the db4o engine spiders through the object tree, persisting all reachable objects. (This is known as persistence by reachability.) In the reverse direction—that is, calling db.get() to fetch an object—db4o does not pull the entire object tree out of the database. If it did that, then fetching the root of, for example, the binary index, would cause db4o to pull the entire index, plus all the dictionary entries, plus all the definitions into memory at once—not very efficient if we want only one word. Instead, db4o uses a concept called activation depth. Suppose I've fetched object A into memory from a db4o database using a db.get() call. If I then call db.activate(A,6), that tells db4o also to fetch into memory all objects referenced by A, up to a depth of 6. So, the db.activate() calls that are sprinkled throughout the search routines of the binary tree and the trie classes ensure that the search operation always pulls in enough of the index so that the search can proceed. (And, at the end of a successful search, the dictionary objects are fet
http://www.linuxjournal.com/article/8645?page=0,2
"There are more ways than one to skin a cat, so are there more ways than one of digging for money." - Seba Smith - The Money Diggers, 1840.

"There are more ways than one to dress a pug, so are there more ways than one of designing the app navigation" - Sebastian Witalec - NativeScript blogs, 2017.

Choosing the right navigation model for your app is really important. Depending on the number of screens and their relationship, you might choose either a tab navigation or a drawer navigation, or even a combination of the two. However, choosing what you want to do is just one side of the coin. The other side of the coin is the actual implementation, and the problem is that there are many different ways to do it. After looking at some of the projects we've created with NativeScript over the years, we realised that each developer had their own idea on how to dress their pug.

This is why we formed the Starter Kits Team, who were tasked with coming up with templates for NativeScript projects. They've spent some time researching various options and analysing common scenarios, weighing the pros and cons of different techniques. As a result they've created a set of NativeScript project templates. Each project follows a consistent structure, each adheres to industry best practices (like the Angular Style Guide) and each is a great place to start a brand new NativeScript project.

All you need to do is just run:

tns create app-name --template tns-template-name-here

Please note that this article is based on templates for NativeScript 3.1, which are likely to evolve over time, and some things might change. You cannot stop progress :)
The templates are divided into 3 categories (tab navigation, drawer navigation and master-detail), and each comes in 3 variations: {N} Core JavaScript (default), TypeScript (-ts) and Angular (-ng).

Even though each template is quite different, they still have some things in common. At the root of the project we have:
- _app-variables.scss
- app.scss
- platform.android.scss
- platform.ios.scss

The _app-variables.scss style sheet is exactly the same for each template. This is the place where you can define the branding colours of your application. And if you need to build multiple apps using different templates, but the same branding colours, then you can just sync _app-variables.scss across all your projects. Most UI elements use the $accent-dark and $accent-light variables, so this should probably be your stop number one to update the colour scheme of your apps.

To add beautiful font-based icons, each template comes with FontAwesome already configured. All you need to do is add an fa class to your component style and then provide the unicode value of the icon you want. For example:

<Label class="fa h1" text="I like  more than "></Label>

As you probably can guess  is the bath tub and  is the shower. Here is a cheatsheet of all Font Awesome icons.

This is one of the simplest templates, where each page is displayed in its own tab view. Out of the box we have 5 tabs: home, browse, search, featured and settings. Each tab is rather empty, but the templates are open for you to change them into something amazing.
To use the {N} Core JavaScript template, run:

tns create app-name --template tns-template-tab-navigation

To use the {N} Core TypeScript template, run:

tns create app-name --template tns-template-tab-navigation-ts

To use the Angular template, run:

tns create app-name --template tns-template-tab-navigation-ng

All of this template's magic is contained in the tabs folder, which contains the tabs page component and a folder for each of the tab views. The list of available tabs is located in the tabs view file; depending on the project type it is either tabs-page.xml or tabs.component.html (Angular). Basically, the tabs page contains a TabView component, which has a tab item for every page that we want to display.

To add content to any of the existing tab views, just go to their folder and make all your changes there. To add a new tab, you need to create a new component in the tabs folder (just like the other ones) and then add a tab item to the TabView.

If you are using NativeScript Core then there might be one thing that you are not aware of yet, which is important in understanding what is going on in tabs-page.xml. That is how to bring in external UI components. For example, let's have a look at the Browse component. We know that it is located in the /tab/browse folder and the file containing the component is called BrowseView. So to add it to the tabs view we need to declare an XML namespace on the <Page> element, xmlns:namespace="folderurl" (in this case xmlns:browse="/tabs/browse"), and then use the component as <namespace:filename />, i.e. <browse:BrowseView />.

The drawer template uses a side drawer for navigation. You can bring in the side drawer by either pressing the hamburger icon (in the top left corner) or by dragging the screen from the left edge to the right.
Just like with the tabs template, we have 5 pages we can navigate to: home, browse, search, featured and settings.

To use the {N} Core JavaScript template, run:

tns create app-name --template tns-template-drawer-navigation

To use the {N} Core TypeScript template, run:

tns create app-name --template tns-template-drawer-navigation-ts

To use the Angular template, run:

tns create app-name --template tns-template-drawer-navigation-ng

To add a new page to the navigation (Core), just go to shared/my-drawer/MyDrawer-view-model. The view model contains an array of navigation items. Here is an example:

{
  title: "Featured",
  name: "featured",
  route: "featured/featured-page",
  icon: "\uf005",
  isSelected: selectedPage === "Featured"
},

To add a new page to the navigation (Angular), just go to shared/my-drawer/my-drawer.component. The component contains an array of navigation items, with the routes defined in app-routing.module.ts:

{
  title: "Featured",
  name: "featured",
  route: "/featured",
  icon: "\uf005"
},

By default all items in the side bar are made to look the same, but the template provides a neat mechanism to style each item. It all starts with the name property you provided in the item template:

{
  title: "ABC",
  name: "abc",
  ...
}

We can take abc and add additional classes to our CSS that will only apply to that item (sidedrawer-list-item-abc). To use it, you just need to add the additional styling to the .sidedrawer-list-item class in the drawer's CSS file - either _MyDrawer.scss (Core) or _my-drawer.component.scss (Angular):

.sidedrawer-list-item {
  // current css
  ...
  .sidedrawer-list-item-abc {
    background-color: black;
    Label { color: yellow; }
    &.selected {
      background-color: gray;
      Label { color: red; }
    }
  }
}

Template sources: template-drawer-navigation, template-drawer-navigation-ts, template-drawer-navigation-ng.

The final template is focused on master-detail navigation, where the master page loads a list of cars from Firebase or Kinvey. When you tap on any of the items, the app navigates to the car-detail page. This template works offline, which is often a must-have requirement for a serious app.

To use the {N} Core JavaScript template, run:

tns create app-name --template tns-template-master-detail

To use the {N} Core TypeScript template, run:

tns create app-name --template tns-template-master-detail-ts

To use the Angular template, run:

tns create app-name --template tns-template-master-detail-ng

There are 3 major components worth having a look at:
- car-list (in the cars folder)
- car-detail (in cars/car-detail)
- car-detail-edit (in cars/car-detail-edit)

Additionally there are 2 helper components that are used to edit car details: list-selector and image-add-remove. Finally, we also have a car-service (in the cars/shared folder), which is the service responsible for loading the data from the backend and sending updates.

Template sources: template-master-detail, template-master-detail-ts, template-master-detail-ng, template-master-detail-kinvey-ng.

Please note that this is still a work in progress and we would like to hear any feedback you have to share with us. The best thing to do is to create a GitHub issue under the relevant project. We already had a few, which helped us make the templates better. These templates are a really easy way to get you on your way to building world-class apps.
Whether you are a seasoned {N} dev or new to it, the templates offer a lot of smart solutions and best practices. Plus you get a nicely styled app with consistent look and feel between Android and iOS out of the box. So ... Give them a go and build world class apps.
https://www.nativescript.org/blog/scaffold-your-next-mobile-app-with-a-nativescript-template
New Age C++ C++ pointers earned a bad reputation for causing memory leaks, but new smart pointers release memory as it stops being referenced. Recent announcements made by some big players in the industry about C++ enablement -- like Google's Android NDK, the Apple-sponsored Clang compiler or the inclusion of C++ as a first-class citizen in the Windows 8 application tooling -- have raised eyebrows. But despite these developments, C++ still isn't seen as a "hot" or "modern" language by many; by others it's viewed as too complex. The reality, though, is that C++ has been quietly evolving during the last decade, changing for the better. The biggest change is that many of its complexities have been overcome, turning it into a much simpler language. In addition, the traditional advantages of C++ are making it more appealing to developers. For example, the interest in tablets is providing a tailwind for C++, since they're more limited than traditional PCs: They run on batteries that need to last, making every CPU cycle matter; leveraging the GPU is critical; and getting highly responsive applications -- a main advantage of native code -- is also a must. This column's purpose is to show you how to use those advantages. I'll explain how coding in C++ looks today, and demystify old criticisms one at a time. Consider this your C++ classroom, where you'll learn about what this powerful language can do for your applications. This first installment addresses the most often-mentioned issue regarding C++: memory management. Fortunately, that is no longer the stumbling block it once was, through a technique known as smart pointers. Smart pointers are now part of C++11 (the latest ISO C++ standard ratified and published last year.) Let's see smart pointers in action: I'll define a class my_class, as in Listing 1. This class exposes a constructor and a destructor, both tracing messages to let you know each time an instance is created or dropped. 
Each instance gets a unique id during construction, which is retrieved by calling get_id(). Here's when the fun begins: I'll define a function foo() containing three pointers to my_class dynamic instances.

// include this header to use C++ smart pointers.
#include <memory>

void foo() {
    shared_ptr<my_class> p1 = make_shared<my_class>();
    shared_ptr<my_class> p2 = p1;
    shared_ptr<my_class> p3 = make_shared<my_class>();

    cout << "p1 points to instance #" << p1->get_id() << endl;
    cout << "p2 points to instance #" << p2->get_id() << endl;
    cout << "p3 points to instance #" << p3->get_id() << endl;
}

These pointers weren't declared as raw, C-styled ones, but as shared_ptr to my_class. Shared_ptr is a generic class (known as a template class in C++) which models a traditional pointer by exposing the same operators (*, -> and so on). Shared_ptr also offers other value-added services that raw pointers don't. For instance, my main method contains only a call to the function foo():

int main(int argc, char* argv[]) {
    foo();
}

I run it and get:

Instance 1 is being created.
Instance 2 is being created.
p1 points to instance #1
p2 points to instance #1
p3 points to instance #2
Instance 2 is being destroyed.
Instance 1 is being destroyed.

Note the last two traces: The destructor was called for the two created instances, despite the fact that I didn't have any delete command. This happened when foo() returned, as the pointer variables lost visibility. There are a few more details to note: The latest version of C++ repurposed the keyword auto to make it work like var in C#. Thus, I could have defined these smart pointers in foo() as:

void foo() {
    auto p1 = make_shared<my_class>();
    auto p2 = p1;
    auto p3 = make_shared<my_class>();
    // ...
}

The p1, p2 and p3 types are inferred at compile time, so the output is the same. While shared_ptr is useful in those cases when the ownership of a dynamic instance doesn't belong to a single pointer, there's another smart pointer type for cases when such ownership doesn't need to be shared. In our example, p3 is the only pointer pointing to the second instance of my_class. Therefore, I'll redefine foo() this way (it doesn't change the console output):

void foo() {
    auto p1 = make_shared<my_class>(), p2 = p1;
    // p3 is unique now (instead of shared).
    auto p3 = unique_ptr<my_class>(new my_class());

    cout << "p1 points to instance #" << p1->get_id() << endl;
    cout << "p2 points to instance #" << p2->get_id() << endl;
    cout << "p3 points to instance #" << (*p3).get_id() << endl;
}

A unique_ptr can't simply be copied; ownership has to be transferred explicitly with std::move. When a unique_ptr is move-assigned to another one, the pointed instance is "moved" from the pointer at the right side of the assignment to the pointer at the left. Thereafter, the pointer at the right side starts pointing to nullptr (a new reserved word that supersedes NULL usage). There's no reference counting with unique_ptrs. Therefore, they're lighter than shared ones. There's a special case where leaks can't be avoided, which is the case of circular references between shared pointers. Since each one has at least another one pointing to it, their counters never reach 0 and none of them are released when these pointer variables lose visibility; the leak just happens. Dealing with this requires a third kind of smart pointer, called weak_ptr<T>. A weak_ptr can point to a shared resource without incrementing its reference counter. I'll show an example in an upcoming installment. While C++ smart pointers seem to imitate the typical garbage collection mechanisms available in managed code, they release memory deterministically. In this column's example, as soon as foo() ends and control's handed back to main(), the destruction of the instances happens in between. In managed code, the garbage collection system determines when memory is released. Join the Adventure Whether you worked with C++ in the 90s and moved to a modern, managed language or you still code in C++ today but haven't hopped on the modern C++ train yet, my goal is to show you how much C++ has evolved in terms of elegance, expressive power and conciseness. This progress has closed the gap between C++ and modern languages without sacrificing either the kind of abstraction that object-oriented programming delivers, or its native ability to run directly on the physical machine.
I hope you'll join me on this adventure of discovery. About the Author Diego Dagum is a software architect and developer with more than 20 years of experience. He can be reached at email@diegodagum.com.
http://visualstudiomagazine.com/Articles/2012/05/30/pointers-get-smart.aspx?Page=1
How to find Percentage Change in pandas So you are interested to find the percentage change in your data. Well, it is a way to express the change in a variable over a period of time, and it is heavily used when you are analyzing or comparing data. In this post we will see how to calculate the percentage change using the pandas pct_change() API and how it can be used with different data sets using its various arguments. As per the documentation, the definition of the pandas pct_change method and its parameters is as shown:

pct_change(self, periods=1, fill_method='pad', limit=None, freq=None, **kwargs)

periods : int, default 1
    Periods to shift for forming percent change.
fill_method : str, default 'pad'
    How to handle NAs before computing percent changes.
limit : int, default None
    The number of consecutive NAs to fill before stopping.
freq : DateOffset, timedelta, or offset alias string, optional
    Increment to use from time series API (e.g. 'M' or BDay())

Before we dive deeper into using pct_change, let's understand how the percentage change is calculated across the rows and columns of a dataframe.

Create a Dataframe

import pandas as pd
import random
import numpy as np

df = pd.DataFrame({"A":[1,4,5,4,6,10,14,None,20,22],
                   "B":np.random.uniform(low=10.5, high=45.3, size=(10,)),
                   "C":np.random.uniform(low=70.5, high=85, size=(10,))})
df

Percentage change between columns

Here we will find out the percentage change between consecutive columns. For each index, we are interested in the change in value across the columns A, B and C. For example, the percentage change between column A and column B at index 0 is given by the formula (B0 - A0) / A0, where B0 is the value of column B at index 0 and A0 is the value of column A at index 0.

df.pct_change(axis=1)

Percentage change between rows

The first row will be NaN since that is the first value for columns A, B and C.
The percentage change between rows is calculated using the formula (A1 - A0) / A0, where A0 is the value of column A at index 0 and A1 is the value at index 1.

df.pct_change(axis=0, fill_method='bfill')

fill_method in pct_change

This is used to fill the NaN values in the data. There are two options, pad and bfill, that you can select to fill the NaN values in your data. By default it is pad, which means a NaN value will be filled with the value from the preceding row or column, whereas bfill, which stands for backfill, means a NaN value will be filled with the value from the succeeding row or column. There is another argument, limit, which is used to decide how many consecutive NaN values you want to fill using these methods.

Percentage Change for Time series data

In our time-series data we have a date index with a daily frequency.

import pandas as pd
import random
import numpy as np

# Creating the time-series index
n = 92
index = pd.date_range('01/01/2020', periods=n, freq='D')

# Creating the dataframe
df = pd.DataFrame({"A":np.random.uniform(low=0.5, high=13.3, size=(n,)),
                   "B":np.random.uniform(low=10.5, high=45.3, size=(n,)),
                   "C":np.random.uniform(low=70.5, high=85, size=(n,)),
                   "D":np.random.uniform(low=50.5, high=65.7, size=(n,))}, index=index)
df.head()

freq in pct_change()

Using the freq argument you can find the percentage change for any timedelta value. Suppose, using this dataframe, you want to find out the percentage change after every 5 days; then set freq to 5D. The first five rows are NaN since there is no data from 5 days back for these values to compare against. Starting with the 6th row, each row can be compared with the row 5 days earlier to find the percentage change, and similarly for the following rows.

df.pct_change(freq='5D')

Monthly pct_change() in time series data

With the same time-series, let's find out the monthly percentage change in these values. First we need to get the data for the last day of each month.
So we will resample the data for frequency conversion and set the rule as 'BM', i.e. Business Month (note: the how argument of resample used in the original, resample('BM', how=lambda x: x[-1]), is deprecated in recent pandas versions, so we call the aggregation on the resampler instead):

monthly = df.resample('BM').last()

Now apply pct_change() on this data to find the monthly percentage change:

monthly.pct_change()

If you want the monthly percentage change only for the months which have the last business day present in the index:

df.asfreq('BM').pct_change()

pct_change in groupby

You can also find the percentage change within each group by applying pct_change() on the groupby object. The first value under pct_change for each group is NaN since we are interested in the percentage change within each group only.

df = pd.DataFrame({'Name': ['Ali', 'Ali', 'Ali', 'Cala', 'Cala', 'Cala', 'Elena', 'Elena', 'Elena'],
                   'Time': [1, 2, 3, 1, 2, 3, 1, 2, 3],
                   'Amount': [24, 52, 34, 95, 98, 54, 32, 20, 16]})

df['pct_change'] = df.groupby(['Name'])['Amount'].pct_change()
df
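To make the underlying arithmetic concrete, here is a plain-Python sketch (no pandas; NaN handling and fill_method omitted) of what pct_change(periods=n) computes along the rows. The function name and the use of None for missing leading values are my own choices for illustration:

```python
def pct_change(values, periods=1):
    """Percent change between each element and the one `periods` positions back."""
    result = [None] * len(values)  # the first `periods` entries have nothing to compare to
    for i in range(periods, len(values)):
        prev = values[i - periods]
        result[i] = (values[i] - prev) / prev
    return result

# Mirrors df['A'].pct_change() on the start of column A used earlier:
print(pct_change([1, 4, 5, 4]))             # [None, 3.0, 0.25, -0.2]
print(pct_change([1, 4, 5, 4], periods=2))  # [None, None, 4.0, 0.0]
```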
https://kanoki.org/2019/09/29/how-to-find-percentage-change-in-pandas/
Introduction Angular is one of the most popular JavaScript frameworks. Indeed, Angular is already used by millions of developers and companies around the world, and its adoption is growing. In this article, we will learn about the latest features introduced in versions 8 and 9, in both Angular core as well as the Angular tooling: the new Ivy render, the Schematic and Builder APIs, the new build and test orchestration tool, Bazel, differential loading, etc. We will also discuss what the future of Angular has in store for us. Spoiler alert: it will be about a lightweight and faster runtime, performant, and faster apps and dev tools, and interoperability. Growing Adoption Rates In order to know how many developers are using Angular and understand the framework’s rate of adoption, the Angular team uses the number of daily active visitors to the official documentation website at angular.io. The rationale behind this metric is that many companies use Angular to build applications that are used internally or behind a firewall or to build applications that are not exposed directly to the open Internet (e.g. Dashboards for connected cars). The only viable metric left, in order to measure Angular adoption, is daily unique visitors. A few months back, the Angular team announced that the number of unique visitors to the official documentation website had exceeded 1.5 million and I am confident that this adoption will continue to grow with time. A Complete and Integrated Framework Why do developers love to use Angular? One of the reasons they keep coming back is that Angular is a framework — in contrast with a library — meaning that it offers a complete set of APIs and tools. 
Whether it's route management, form building and validation, internationalization (i18n), server-side rendering, unit testing, tools for developers via the Angular CLI, animations, or UI components, Angular integrates all the necessary building-block APIs for building modern web applications, from prototyping to deploying. Release Cycle Angular has adopted a 6 month release cycle, based on the Semantic Versioning standard. This helps bring new innovations and bug fixes as quickly as possible while giving users a clear agenda so they can plan their updates. This release cycle is as follows: a major release every six months, one to three minor releases for each major release, and a patch release almost every week. Please note that the team also offers a Long-Term Support (LTS) policy. If you would like to try out the next version at any given time, with its documentation, you can find it at next.angular.io Automated Updates Procedure You can easily use the Angular CLI to upgrade your application's dependencies to the latest version of Angular using the following command: ng update @angular/cli @angular/core You can also use the next tag to try the next (i.e. preview) version of Angular and provide feedback to the team: ng update @angular/cli@next @angular/core@next You can read more about these automated updates at update.angular.io. The New Features of Angular 8 Differential Loading This was one of the major features introduced in Angular 8. Differential loading is a technique where the browser decides which JavaScript version (ES5 or ES2015+) to load, parse, and execute based on its (the browser's) capabilities. In fact, most modern browsers (we call them evergreen browsers) natively support some of the newest ECMAScript features. One of the features that plays a major role in differential loading is ES Modules. Based on this simple observation, we then provide the browser with two sets of JavaScript files: A set of files transpiled to ES2015, for modern browsers. A set of files transpiled to ES5, for legacy browsers. How does differential loading work?
When we build our application using the Angular CLI, it inspects our tsconfig.json file to check which JavaScript version we're targeting. If our target is ES2015, the CLI runs two builds: one for ES2015 and one for ES5. Let's take the following tsconfig.json for example:

{
  "compilerOptions": {
    "module": "esnext",
    "moduleResolution": "node",
    "target": "es2015"
  }
}

Our target here is ES2015, thus the CLI will output the following index.html:

<!-- For modern browsers -->
<script type="module" src="polyfills-es2015.js"></script>
<script type="module" src="runtime-es2015.js"></script>
<script type="module" src="style-es2015.js"></script>
<script type="module" src="vendor-es2015.js"></script>
<script type="module" src="main-es2015.js"></script>

<!-- For legacy browsers -->
<script nomodule src="polyfills-es5.js"></script>
<script nomodule src="runtime-es5.js"></script>
<script nomodule src="style-es5.js"></script>
<script nomodule src="vendor-es5.js"></script>
<script nomodule src="main-es5.js"></script>

Please note that the differential loading feature is turned on by default in Angular 8 and you don't have to worry about it. However, if you encounter any issues related to this feature, you can still revert to the ES5 target. The Angular team has been using this technique on angular.io and managed to save up to 40KB of files being loaded by modern browsers. Also, according to feedback from community members, they noticed improvements varying from 7-20%. Dynamic Imports Lazy loading parts of applications has always been one of the recommendations for better loading time. This can be accomplished by configuring the Angular router using the following code:

{ path: 'home', loadChildren: 'home/home.module#HomeModule' }

This syntax was specific to Angular, however, and not part of any web standard.
Now, since most modern browsers do natively support ES Modules, the new lazy loading syntax uses the Dynamic Imports standard: { path: 'home', loadChildren: () => import('home/home.module') .then(m => m.HomeModule) } Code editors such as VSCode and WebStorm can now validate and provide you with code completion for your module paths. Please note that if you are updating via the ng update command the CLI will automatically update your old syntax to the new one. Builder API Similar to the Schematics API that allows us to customize the generated code when using the CLI, through commands such as ng new, ng generate, ng add, and ng update, and adding new commands, the new Builder API allows us to customize the behavior of the build commands such as ng build, ng test, and ng run. This way, we can execute tasks that can run before, during, or after the build process. Some cloud providers have already started creating their own builders to help their users to easily and seamlessly deploy their Angular app. For instance, you can now easily deploy to Firebase using @angular/fire or to Microsoft Azure using @azure/ng-deploy. Web Workers Support If you have some logic in your application that does heavy computations, you should already be running these workloads on web workers. With the Angular CLI, you can automatically add and configure web workers to your components: ng generate web-worker <existing-component-name> Please note that the code running inside the Web Worker is your application’s logic, not the Angular runtime! Learn more about Web Workers and Angular from the official guide. Telemetry Version 8 of the CLI adds a new telemetry feature that helps the Angular team understand how developers use the CLI and debug critical issues. This telemetry feature is disabled by default and users have to opt-in and enable it in order to share anonymous usage data with the Angular team. 
This new API allows users to also plug in their own analytics server and get the same collected data. Information such as RAM and CPU usage, the size of the bundles, and the CLI commands then gets logged. You can read about the full list of dimensions here. Angular Elements Angular is embracing the Web Components standard thanks to Elements. With this new feature, it will be possible to package any Angular component as a Custom Element. There are many use cases for this new approach to shipping applications. You could use Angular components within: Server-side technologies, e.g. ASP.NET. Static web pages and websites, e.g. a CMS. Other front-end technologies, e.g. Vue.js. Here is a quick example (the component exposed as a custom element has to be declared in the module, and a module without a bootstrap component needs an empty ngDoBootstrap):

import { BrowserModule } from '@angular/platform-browser';
import { NgModule, Injector } from '@angular/core';
import { createCustomElement } from '@angular/elements';
import { HelloComponent } from './hello.component';

@NgModule({
  declarations: [HelloComponent],
  imports: [BrowserModule],
  providers: [],
  entryComponents: [HelloComponent]
})
export class AppModule {
  constructor(injector: Injector) {
    const element = createCustomElement(
      HelloComponent, { injector });
    customElements.define('x-hello', element);
  }

  ngDoBootstrap() {}
}

Then simply use your Element as a Web Component:

<x-hello></x-hello>

Bazel: A New Orchestration Build and Testing Tool Currently, the CLI uses webpack and ng-packagr to build our apps and libraries (and this is not going to change soon). However, these tools may show some performance issues when trying to build very large applications. Since version 8, the CLI has integrated a new experimental build tool called Bazel. Briefly, Bazel is an orchestration build and test tool created by Google that had been used internally for more than a decade to build the 86+TB of data inside Google's gigantic mono repository, before they open-sourced it in 2015. Here are some of the benefits of Bazel: It supports backend, front-end, and mobile technologies.
It can run tasks in parallel, locally, and on distributed farm machines. It supports incremental builds and caching. In the context of Angular, the Bazel integration (under the ABC initiative) is meant to give the Angular build and test toolchain a massive boost in performance. Bazel has already helped the Angular team reduce the Angular framework build itself from 1 hour to nearly seven minutes! You can start experimenting today with Bazel in the CLI with the following command if you are starting a new application: npm install -g @angular/bazel ng new my-app --collection=@angular/bazel If you would like to add Bazel to an existing CLI application, use the following: ng add @angular/bazel Please note that as you are using the CLI, all the configurations will be automatically managed by the CLI. So, in theory, you wouldn’t even need to learn about Bazel. However, Bazel is a fascinating technology and I highly recommend investing some time and learning it, in case you need to use it with other technologies! Ivy: The New Renderer In the last year, the Angular framework team started working on a new implementation of the renderer and template compiler, code named Ivy. The main goals of this full rewrite are: Reduce the size of the generated bundles. Reduce the compilation time. Provide better debugging. Improve static type checking inside HTML templates (thanks to the TypeScript Language Service). In order to have the smallest size possible, all the public APIs can now be tree-shaked: lifecycle hooks, pipes, queries, DI, etc. Also, one thing to mention is that with Ivy everything will be compiled in AOT mode. In Angular version 9, Ivy will be enabled by default. However, if you are using Angular version 8, you can already experiment with Ivy using the following command: ng new shiny-ivy-app --enable-ivy For existing apps, you can manually update the following files: - Enable Ivy in your tsconfig.json file: { "compilerOptions": { ... 
}, "angularCompilerOptions": { "enableIvy": true } } - Enable AOT mode in your main angular.json file: { "projects": { "my-app": { "architect": { "build": { "options": { ... "aot": true, } } } } } } The Future of Angular Disclaimer: the following opinions are based on some hypothetical ideas and vision that Misko Hevery shared at NgConf 2019.* With the new Ivy renderer and how the internal core of Angular has been redesigned, this is going to open up a whole new world of techniques in order to optimize even more Angular applications. We can imagine having NgModule become optional, and directly loading components independently (assuming Zone.js will not be required anymore!). Another possible solution to explore is server-side rendering, allowing Angular applications to fully render server-side with state and have that state propagated and resumed from the browser instead of recreating the show state again on the client! With these techniques, it would be possible to drastically boost the Time-to-Interactive and thus make Angular applications load very fast. With all the new features (and there are some more), the future of Angular will be about a lightweight and faster runtime, performant and faster apps and dev tools, and interoperability. Stay tuned. This article was originally published in our JavaScript Frameworks Trend Report. To read more articles like this, download the report today!
https://graphicdon.com/2020/02/03/whats-up-angular-dzone-web-dev/
I am using TestNG together with Spring and JPA. So far, to test database code, my test class extends AbstractTransactionalTestNGSpringContextTests. With @TransactionConfiguration(defaultRollback = true) everything works fine and I do not need to care about cleanup. Spring creates a default transaction at the beginning of each of my test methods, which is then rolled back. This is a very neat trick to solve the famous "Transactional tests considered harmful" problem. Unfortunately, I need one method in this class (one test) to not have this default transaction. This is because the test method simulates batch processing, and I have multiple transactions inside it which are independent in production. The only way I was able to simulate and solve the problem is to configure those inner transactions with Propagation.REQUIRES_NEW, but this I do not want to have in production code. Is there some way to disable the Spring transaction for my particular test method (so that I can use Propagation.REQUIRED instead of Propagation.REQUIRES_NEW in my service methods)? I know you've already solved your problem, but for folks who will come here in the future... Unfortunately it seems that there is no way to disable an existing transaction inside a test that uses the @Transactional annotation. IMHO the Spring approach here is extremely inflexible. But there is a workaround for your problem. It is enough to encapsulate the needed logic inside Spring's TransactionTemplate class. This ensures that the code in your test case is launched inside a new transaction. Personal advice: the best and most flexible way from my point of view is to abandon @Transactional tests from the very beginning and set up the database into a known state before every test. This way, Spring will manage transactions exactly the same way as in production. No quirks, no hacks, no manual transaction management.
I know that using @Transactional with a "rollback" policy around unit tests is a tempting idea, but it has too many pitfalls. I recommend reading the article Spring Pitfalls: Transactional tests considered harmful. Of course I don't complain here about @Transactional itself, because it greatly simplifies transaction management in production code.

I have found that executing the body of my test in a separate thread prevents Spring from wrapping it in a transaction. So a workaround solution is something like:

@ContextConfiguration(classes = { test.SpringTestConfigurator.class })
@TransactionConfiguration(defaultRollback = false)
@Slf4j
@WebAppConfiguration
public class DBDataTest extends AbstractTransactionalTestNGSpringContextTests {

    /**
     * Variable to determine if some running thread has failed.
     */
    private volatile Exception threadException = null;

    @Test(enabled = true)
    public void myTest() {
        try {
            this.threadException = null;
            Runnable task = () -> {
                myTestBody();
            };
            ExecutorService executor = Executors.newFixedThreadPool(1);
            executor.submit(task);
            executor.shutdown();
            while (!executor.isTerminated()) {
                if (this.threadException != null) {
                    throw this.threadException;
                }
            }
            if (this.threadException != null) {
                throw this.threadException;
            }
        } catch (Exception e) {
            log.error("Test has failed.", e);
            Assert.fail();
        }
    }

    public void myTestBody() {
        try {
            // test body to do
        } catch (Exception e) {
            this.threadException = e;
        }
    }
}
http://m.dlxedu.com/m/askdetail/3/c3b793a54643c947696f3128e61efd4e.html
Abstract base class for a (toolbar) button or menu item. More... #include <Wt/Ext/AbstractButton> Abstract base class for a (toolbar) button or menu item. You may set the button text using setText(), set the icon using setIcon(), and configure whether the button/menu item can be checked or toggled using setCheckable(). To respond to a click, you can connect to the activated() signal, and for a checkable button/item you may listen to the toggled() signal. Signal emitted when an item gets activated. This signal is emitted for non-checkable items (for which isCheckable() == false), when the user activates the item (by clicking it). Configure the tool tip associated with this item. If the config has no parent, then ownership is transferred to this widget. Refresh the widget. The refresh method is invoked when the locale is changed using WApplication::setLocale() or when the user hits the refresh button. The widget must actualize its contents in response. Reimplemented from Wt::WWebWidget. Change the checked state. This is only used when isCheckable() == true. Signal emitted when an item gets toggled. This signal is emitted for checkable items (for which isCheckable() == true), when the user toggles the item state. The new state is passed as a parameter value.
http://webtoolkit.eu/wt/doc/reference/html/classWt_1_1Ext_1_1AbstractButton.html
trait Reader{
  def read(source:String):String
}

trait StringReader extends Reader {
  override def read(source:String) = {
    Source.fromString(source).mkString
  }
}

We have seen previously about adding a trait to the class declaration, but that's not the only way to add a trait. One can mix in a trait during object creation as well. But before that, let's modify the Reader trait to make the read method return some default string:

trait Reader{
  def read(source:String) = "DEFAULT"
}

Now let's look at an example of mixing in a trait while creating an object:

class Person(var name:String, var age:Int){
  def getDetails = name+" "+age
}

class Student(name:String, age:Int, var moreDetails:String) extends Person(name,age) with Reader{
  override def getDetails = {
    val details = read(moreDetails)
    "Student details\n"+name+" "+age+"\n"+"More: "+details
  }
}

Let's create an instance of the above Student class without including the StringReader trait:

object Main extends App{
  val student1 = new Student("Sana", 20, "About the student")
  println(student1.getDetails)
}

The output would be:

Student details
Sana 20
More: DEFAULT

Really not useful; the Reader trait alone is not enough, so we make use of adding a trait during object creation:

object Main extends App{
  val student1 = new Student("Sana", 20, "About the student") with StringReader
  println(student1.getDetails)
}

The output:

Student details
Sana 20
More: About the student

Now we have more meaningful information and not the default implementation. Traits can also have fields; fields can be concrete or abstract.
If an initial value is provided for a field in the trait, then it becomes a concrete field; otherwise it is an abstract field. Something like:

import scala.io.Source

trait Reader{
  var source = "DEFAULT"
  def read = source
}

trait StringReader extends Reader {
  override def read = {
    Source.fromString(source).mkString
  }
}

class Person(var name:String, var age:Int){
  def getDetails = name+" "+age
}

class Student(name:String, age:Int, moreDetails:String) extends Person(name,age) with Reader{
  source = moreDetails
  override def getDetails = {
    val details = read
    "Student details\n"+name+" "+age+"\n"+"More: "+details
  }
}

object Main extends App{
  val student1 = new Student("Sana", 20, "About the student") with StringReader
  println(student1.getDetails)
}

We just edited the trait and moved the parameter of the read method into a field of the trait, and in the Student class we assign a new value to the source field of the trait.

Layering Traits

One can chain traits such that one trait can invoke another version of the same method in a different trait. Let's add 2 more traits: FileReader and UrlReader. FileReader would read from a file and UrlReader would read content from a given URL.

trait FileReader extends Reader{
  override def read(source:String) = {
    Source.fromFile(source,"UTF-8").mkString
  }
}

trait UrlReader extends Reader{
  override def read(source:String) = {
    Source.fromURL(super.read(source),"UTF-8").mkString
  }
}

Interesting to see super.read(source) in the UrlReader trait. Does that mean it invokes the read(source) in the Reader trait? We wouldn't expect anything useful from the read(source) version of the Reader method. Instead, super.read(source) calls the next trait in the trait hierarchy, which depends on the order in which the traits are added. The traits are processed starting with the last one.
Let's see how this works:

object Main extends App{
  //case 1
  val student2 = new Student("Stud1",20, "/tmp/url") with FileReader with UrlReader
  println(student2.getDetails)

  //case 2
  val student3 = new Student("Stud2",20,"") with StringReader with UrlReader
  println(student3.getDetails)
}

In case 1 we add the FileReader and UrlReader traits. When UrlReader invokes super.read(source), the read(source) from FileReader is invoked, so the file /tmp/url is expected to contain a URL. In case 2 we add the StringReader and UrlReader traits. When UrlReader invokes super.read(source), the read(source) from StringReader is invoked.

The example above is admittedly naive and could be implemented in a more concise way; I haven't been able to come up with a better example, but I hope it conveys the concept.

Another interesting concept to explore is how traits are mapped to classes that the JVM can consume. A trait with only an abstract method:

trait Reader{
  def read(source:String):String
}

translates to a plain Java interface:

Compiled from "TraitTrans.scala"
public interface Reader {
  public abstract java.lang.String read(java.lang.String);
}

A trait with a method definition translates into an interface plus an abstract class in which the trait's implementations appear as static methods.
Something like

trait Reader{
  def read(source:String):String
}

trait StringReader extends Reader{
  import scala.io.Source
  def read(source:String) = Source.fromString(source).mkString
}

would create StringReader.class and StringReader$class.class files, where StringReader.class is the interface and StringReader$class.class is an abstract class whose static methods hold the implementations:

mohamed@mohamed-Aspire-4710:~/scalaP$ javap -c 'StringReader$class.class'
Compiled from "TraitTrans.scala"
public abstract class StringReader$class {
  public static java.lang.String read(StringReader, java.lang.String);
    Code:
       0: getstatic     #11  // Field scala/io/Source$.MODULE$:Lscala/io/Source$;
       3: aload_1
       4: invokevirtual #16  // Method scala/io/Source$.fromString:(Ljava/lang/String;)Lscala/io/Source;
       7: invokeinterface #22, 1  // InterfaceMethod scala/collection/TraversableOnce.mkString:()Ljava/lang/String;
      12: areturn

  public static void $init$(StringReader);
    Code:
       0: return
}

One can see that the generated companion class contains the method implementations. There is a superb description elsewhere of how traits, and classes extending traits, get translated into class files for the JVM.

These are a few concepts worth learning as part of traits. Other important concepts are the trait construction order and self types, which I might cover in future posts.
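The translation scheme described above can be mimicked by hand in plain Java, which makes it easier to see what the compiler is doing. This is only a sketch: the names below (Reader, StringReaderHelper, Student) are illustrative stand-ins, not the exact classes scalac emits, and the body of read is simplified so the example runs without the Scala library.

```java
// An interface plays the role of the trait's abstract view.
interface Reader {
    String read(String source);
}

// The trait's concrete method lands in a companion holder class as a
// static method whose first parameter is the instance ("this").
abstract class StringReaderHelper {
    static String read(Reader self, String source) {
        // Stands in for Source.fromString(source).mkString.
        return source;
    }
}

// A class mixing in the trait implements the interface by forwarding
// to the static method in the holder class.
class Student implements Reader {
    public String read(String source) {
        return StringReaderHelper.read(this, source);
    }

    public static void main(String[] args) {
        System.out.println(new Student().read("About the student"));
    }
}
```

Running main prints "About the student": the instance method is nothing more than a forwarder, exactly as the javap disassembly suggests.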
Tip #1: Copy and delete lines without selecting them

I always cringe whenever I see someone select an entire line of code in the Visual Studio code editor before copying the line or deleting the line (see Figure 1). You don't need to do this.

Figure 1

If you want to copy a line of code, simply place the cursor on it and press CTRL-c to copy the line, then press CTRL-v to paste it. If you want to delete a line, don't select it first; just press CTRL-x to cut it. You'll be surprised how much time this one tip will save you.

Tip #2: Import namespaces automatically

In the old days, before Visual Studio 2008, if you used a class in your code that was not a member of any of the existing namespaces imported in your code, then you had no choice but to look up the class in the documentation and enter a using statement to import the new namespace. Visual Studio 2008 is smart enough to import namespaces for you automatically. If you type the name of a class that lives in a namespace that has not been imported, Visual Studio displays a red bar beneath the class in the editor (see Figure 2). You can press CTRL-. to display a dialog box for picking the right namespace to import. Finally, just press the ENTER key to select a namespace (see Figure 3).

Figure 2

Figure 3

Tip #3: Use the prop snippet for property declarations

Never type property declarations by hand. It takes forever! Instead, just type prop + TAB + TAB. When you type prop + TAB + TAB, you get a code snippet (template) for entering a property. Use TAB to move between the template parameters. Press the ENTER key when you are finished creating the property (see Figure 4). This tip has saved me from many days of tedious property typing.

Figure 4

Tip #4: Remove and sort using statements

Whenever I finish creating a class, I always clean up the list of using statements that appear at the top of the class file. I like to remove any unused using statements to reduce the amount of visual clutter in my classes. You can remove using statements that are not required by your code by right-clicking the top of your code file, selecting the menu option Organize Usings, and selecting Remove and Sort.
Figure 5

Tip #5: Comment out code with CTRL-k+c

If you need to temporarily disable a block of code, or a section of an ASP.NET page, then you can comment out the region by pressing CTRL-k+c. I always do this when I want to rewrite an existing section of code but am afraid to delete the old code before writing the new code. For example, Figure 6 illustrates commented-out code in the code editor.

Figure 6

You can use the very same key combination to comment out code almost anywhere. For example, you can comment out code in ASP.NET pages, web.config files, and JavaScript files (see Figure 7).

Figure 7

You can perform the opposite operation, uncommenting code, by using the keyboard combination CTRL-k+u.

Tip #6: Switch between and close open documents

After working in Visual Studio for an extended period of time, I end up with a lot of open documents. I like to quickly switch between open documents by hitting the keyboard combination CTRL-TAB (you can also use CTRL-TAB to shift focus to different tool windows). If you have too many documents open, using CTRL-TAB becomes more difficult because you must hunt through the set of open documents. There are two ways that you can close open documents. You can use the menu option Window, Close All Documents. Better yet, you can right-click a tab that corresponds to an open document and select the menu option Close All But This. Selecting this latter option closes all open documents except the document corresponding to the tab (see Figure 8).

Figure 8

Tip #7: Double-click an .mdf file to connect to the database

After you add a user instance of a SQL Express database to a project (a RANU database), you can quickly connect to the database by double-clicking the .mdf file in the App_Data folder. Double-clicking the database opens the Server Explorer/Database Explorer window and expands the objects in the database automatically.

Tip #8: Drag files and folders into the Solution Explorer

I'm always composing new Visual Studio projects from bits and pieces of previous Visual Studio projects. For example, I might need to add a folder from a previous project or a set of files.
You can add existing files to Visual Studio by right-clicking in the Solution Explorer window and selecting the menu option Add, Existing Item. But this method of adding files is slow. Furthermore, you can't add folders using this method. The best method of adding files and folders to a Visual Studio project is to just drag the files or folder into the Solution Explorer window (or copy and paste the files or folder). For example, I am constantly adding my MoviesDB.mdf movies SQL Express database to new projects. I keep this file on my desktop and drag it into the App_Data folder whenever I need the database in a new Visual Studio project.

Tip #9: Disable automatic statement completion when doing test-driven development

This tip is particularly relevant for developers who build applications by using test-driven development. When doing test-driven development, you write a unit test first and then write code that satisfies the unit test. When writing the unit test, you often have to fight with statement auto-completion. There is an easy solution to this problem. Disable automatic statement completion by selecting Tools, Options, Text Editor, All Languages and unchecking the Auto list members checkbox (see Figure 9).

Figure 9

After you disable auto list members, you can still display suggestions for statement completion by using the keyboard combination CTRL-SPACE.

Tip #10: Add new items with CTRL-n or CTRL-SHIFT-A

In general, using the mouse to perform an action in Visual Studio is slower than entering a keyboard combination. The fastest way to add a new item to a Visual Studio project is to hit the keyboard combination CTRL-n or the keyboard combination CTRL-SHIFT-A. The first keyboard combination works in ASP.NET Website projects and the second works in both Website and ASP.NET MVC Web Application projects. This keyboard combination opens the Add New Item dialog box (see Figure 10).

Figure 10

You can use the TAB and arrow keys to select an item from the dialog box and navigate to the Open button. Press the ENTER key to invoke the Open button.

Tip #11: Don't type file extensions

This tip is related to the previous one.
After using the TAB key to navigate to the Name textbox in the Add New Item dialog box, you can enter the name of the new item. When typing the name of a new item, you don't need to include the file extension. Visual Studio can determine the file extension from the selected template. For example, when adding a new Web Form named MyPage.aspx, you can simply type MyPage. When adding a new Master Page named Site.master, you can simply type Site. You might think that avoiding typing a couple of characters would not matter. But if you add dozens of files to a Visual Studio project, the number of characters that you can avoid typing quickly adds up.

If you have a tip or trick for Visual Studio 2008, please share it in the comments. However, nothing obscure please. I want to focus on only those tips and tricks that matter on a daily basis.

From the comments:

- Regarding Tip #1, Shift + Delete will actually cut the line, not just delete it.
- To change indentation, select the text (one or more lines) and press Tab to increase the indent, or Shift + Tab to decrease it. For a single line you don't need to select it first.
- The Insert Snippet and Surround With editor commands are also worth a look.
- Most of these tips work in Visual Studio 2005 as well; the exception is Organize Usings.
- prop + TAB + TAB also works for other snippets such as for, if, and foreach.
- The PowerCommands add-in (code.msdn.microsoft.com/PowerCommands) lets you remove and sort usings across a whole project.
- Press Ctrl + Tab and then keep holding the Ctrl key to browse all active tool windows and open files.
- For Tip #9, consider Ctrl-J instead of Ctrl-Space: it recalculates the auto-completion list and gives you IntelliSense even after you've typed past the dot, whereas Ctrl-Space can insert an actual space in some places.
- When an IntelliSense popup is in the way of code you want to see, hold Ctrl to make it transparent.
- When lines get very long, switch to full screen with Alt + Shift + Enter; the same combination switches back.
- Tips #2 and #4 don't seem to work in VB; it's unclear whether there are other ways to get that functionality there.
- Also watch the Visual Studio tips video at channel9.msdn.com/.../TL46.
- Tip #8 (drag and drop) doesn't work when running Visual Studio as Administrator on Vista, apparently because Explorer is not running as Administrator; it works fine when Visual Studio is started normally.
- Extending Tip #1: to copy a whole procedure, collapse it with the outline plus sign and then press Ctrl + C.
- A few more favorites: Ctrl + F finds in the current document, Ctrl + Shift + F finds in the entire solution, and Ctrl + Shift + Space shows IntelliSense for method arguments.
SPICE Kernel Required Reading

Abstract
Document Outline
Introduction to Kernels
   Kernel Types
   Text Kernels and the Kernel Pool
   Binary Kernels
SPICE Kernel Type Identification and Kernel Naming
   SPICE Kernel Type Identification
   Recommendations on Kernel File Naming
Binary Kernel Specifications
Text Kernel Specifications and Interfaces
   Text Kernel Specifications
      Variable Name Rules
      Assignment Rules
      Variable Value Rules
      Additional Text Kernel Syntax Rules
      Maximum Numbers of Variables and Variable Values
      Treatment of Invalid Text Kernels
      Additional Meta-kernel Specifications
   Text Kernel Interfaces - Fetching Data from the Kernel Pool
   Informational Functions
Section 5 -- Kernel Management
   Kernel Priority
   Path Symbols in Meta-kernels
   Keeping Track of Loaded Kernels
      Reloading Kernels
      Changing Kernel Priority
      Load Limits
      Finding Out What's Loaded
      Unloading Kernels
   Manipulating Kernel Pool Contents
   Detecting Changes in the Kernel Pool Using Watchers
Appendix A -- Discussion of Competing Data
   Binary Kernels
      SPKs
      CKs
      Binary PCKs
   Text Kernels
Appendix B -- Glossary of Terms
   Agent
   Assignment
   Continued string
   Control words
   Direct assignment
   Element
   Incremental assignment
   Keeper (subsystem)
   Kernel pool (sometimes just called ``the pool'')
   Kernel variable
   Meta-kernel (also known as ``FURNSH kernel'')
   Operator
   Principal data
   Value
   Variable name
   Vector value
Appendix C -- Summary of Routines
Appendix D -- Summary of Key Text Kernel Parameter Values
Appendix E -- Revision History

Last revised on 2014 JUL 15 by N. J. Bachman.

Abstract

The kernel subsystem loads and unloads kernels, retrieves loaded data, and, for text kernels, inserts data into the kernel pool.

Document Outline

This document has five major sections. ``SPICE Kernel Type Identification and Kernel Naming'' contains specifications for kernel architecture and type identification, plus restrictions and recommendations concerning kernel file naming.
``Binary Kernel Specifications'' points the reader to other SPICE documents for most information on binary kernels. ``Text Kernel Specifications and Interfaces,'' which includes extra rules for meta-kernels, provides a good deal of technical detail for both producers and consumers (users) of text kernels. ``Kernel Management'' contains important information about managing and obtaining information about both text and binary kernels. Appendix A discusses the notion of ``competing data.'' Appendix B provides definitions of terms used in this document with SPICE-specific meaning. Appendix C provides a listing of kernel subsystem functions. Appendix D provides a summary of key text kernel parameter values. Appendix E provides the revision history of this document.

Introduction to Kernels

Kernel Types

Files containing the data used by SPICE are known as kernels (sometimes called ``kernel files''). Two kernel architectures exist, referred to as text kernels and binary kernels. Text kernels consist of human-readable ASCII text; binary kernels consist of mostly non-ASCII data. Within each architecture there are several kernel types. The SPICE text kernels are the LSK (leapseconds), text PCK (planetary constants), SCLK (spacecraft clock), IK (instrument), FK (frames), and MK (meta-kernel) types.

Text Kernels and the Kernel Pool

Text kernels are used where the amount of data being stored is relatively small, and where easy human readability and revision are important. Text kernels should contain descriptive information, provided by the kernel producer, describing the sources and intended uses of the kernel data.

Text kernels associate values with variables using a ``name = value(s)'' form of assignment. The kernel pool is the repository of the information provided in these assignments. Populating the kernel pool occurs in either or both of two ways: by loading text kernels (by far the most used method) or by using pool subsystem functions. Once ``name = value(s)'' assignments provided in a text kernel have been loaded into the kernel pool, the value(s) are said to be associated with the names.
You may access these data through kernel pool look-up functions using the names as keys to find the associated values. The kernel pool look-up functions are described in detail a bit later in this document. However, some higher-level and more often used functions also access data loaded into the kernel pool. Two tables in the tutorial named ``Summary of Key Points'' provide details.

Binary Kernels

Binary kernels store large data sets of primarily non-ASCII data, using either the DAF or DAS format (see the technical reference documents daf.req and das.req for details). For all but EK binary kernels, loading the binary kernel does not cause the subsystem associated with the kernel's type to read the principal kernel data; rather, only a small amount of descriptive data are read so the subsystem becomes aware of the existence of the kernel and the nature of the data contained therein. The subsystem physically reads principal binary kernel data only when a data request is made by a kernel reader function. For EK binary kernels, the descriptive data mentioned above, and some database schema information, are read in at kernel load time. Principal data are read only when an EK query is made by a kernel reader function.

Data from binary kernels do NOT get placed in the kernel pool; the pool is used only for text kernel data.

Binary kernels contain a ``comment area'' where important descriptive information in ASCII form should be provided by the kernel producer.

On occasion one may be given, or need to make, a ``transfer format'' file. This is an ASCII-format representation of a binary kernel, used in early versions of CSPICE to port binary kernels between dissimilar computers (e.g. IEEE little-endian to IEEE big-endian, or vice versa). For the most part these transfer format files are no longer needed, due to the addition of run-time translation capabilities in the binary kernel readers.
But there are some situations when transfer format binary kernels are still needed; refer to the tutorial named ``Porting Kernels'' for details.

SPICE Kernel Type Identification and Kernel Naming

SPICE Kernel Type Identification

Most SPICE users don't need to know about kernel type identification, but since this aspect of kernels is used later on in this document we explain the concept here.

The first 6 to 8 bytes of a SPICE kernel are used for file type identification. In binary and text kernels this identifier consists of two string IDs separated by the ``/'' character. The first ID, identifying the file architecture of the kernel file (``DAF'', ``DAS'', ``KPL''), is always three characters long. The second ID, identifying the file type of the kernel file (``SPK'', ``PCK'', ``IK'', ``SCLK'', etc.), is two to four characters long. In transfer format files this file type identifier consists of a single string ID. See the Convert User's Guide for details.

In binary kernels the kernel type identifier always occupies the first eight bytes. If the combined length of the kernel architecture ID, the ``/'' character, and the kernel type ID is less than 8 characters, the identifier is padded on the right to eight characters using blanks (e.g. ``DAF/SPK '', ``DAS/EK ''). The correct identifier is written to a binary kernel automatically when the kernel is created by calling the kernel-type-specific ``open new file'' function: spkopn_c for SPK files, ckopn_c for CK files, etc. If a binary kernel is created by calling an architecture-specific ``open new file'' function (dafonw_c for DAF files, dasonw_c for DAS files, etc.), it is the caller's responsibility to specify the correct kernel type in the corresponding input argument of these functions, to make sure the correct kernel type identifier is written into the kernel.
In text kernels the kernel type identifier occupies the first six to eight characters and is followed by optional trailing blanks and then by the end-of-line terminator character(s), resulting in the identifier appearing on a line by itself. If the combined length of the kernel architecture ID, the ``/'' character, and the kernel type ID is less than 8 characters, the identifier can, but does not have to, be padded on the right to eight characters using blanks (e.g. ``KPL/SCLK'', ``KPL/IK '', etc.). Since most text kernels are created manually using a text editor, it is the responsibility of the person making the kernel to put the correct identifier by itself on the first line of the kernel.

In transfer format files the SPICE kernel type identifier occupies the first six characters of the file and is followed by the expanded name of the format (e.g. ``DAFETF NAIF DAF ENCODED TRANSFER FILE''). The correct kernel type identifier is written to a transfer format file automatically when the file is created by the SPICE utility programs TOXFR or SPACIT. See their user guides, toxfr.ug and spacit.ug, for details.

The SPICE kernel type identifiers used in modern SPICE kernels are as follows.

Binary kernels:

   SPK     DAF/SPK
   CK      DAF/CK
   PCK     DAF/PCK
   EK      DAS/EK

Text kernels:

   FK      KPL/FK
   IK      KPL/IK
   LSK     KPL/LSK
   MK      KPL/MK
   PCK     KPL/PCK
   SCLK    KPL/SCLK

Transfer format files:

   DAF     DAFETF (older files use NAIF/DAF)
   DAS     DASETF (older files use NAIF/DAS)

A text kernel not having a kernel type identifier can, in fact, be processed by high-level functions, and by low-level functions other than getfat_c that use text kernel data. However, NAIF strongly recommends that kernel creators provide the identifier.

Recommendations on Kernel File Naming

CSPICE places a few restrictions on kernel file names beyond those imposed by your operating system. NAIF recommends kernel names use only lower case letters. NAIF further recommends that one follow the conventions established for kernel name extensions, shown below.
   .bc     binary CK
   .bes    binary Sequence Component EK
   .bpc    binary PCK
   .bsp    binary SPK
   .tf     text FK
   .ti     text IK
   .tls    text LSK
   .tm     text meta-kernel (FURNSH kernel)
   .tpc    text PCK
   .tsc    text SCLK

Binary Kernel Specifications

Other than the general specifications and recommendations in the section ``SPICE Kernel Type Identification and Kernel Naming'' that are applicable to binary kernels, specifications for the various binary kernels are provided in kernel-type-specific technical reference documents, such as ``SPK Required Reading'' and ``CK Required Reading.''

Text Kernel Specifications and Interfaces

Text Kernel Specifications

The specifications and restrictions discussed below apply to any text kernel. However, the special type of text kernel known as a meta-kernel (sometimes called a ``FURNSH kernel'') has additional restrictions; these are discussed later in a section on meta-kernels.

Often the easiest and best way to create a text kernel is to start with an existing text kernel, editing it to meet your needs. But knowing the text kernel rules is still important. Those rules are documented in the remainder of this section.

As the name implies, SPICE text kernels contain printable ASCII text (ASCII codes 32-126). Text kernels may not contain non-printing characters, excepting tab (ASCII code 9). However, NAIF recommends against the use of tabs in text kernels. NAIF also recommends that caution be exercised when cutting and pasting text from a formatted document into a text kernel; the text characters displayed in a document may not be in the accepted ASCII range, in which case the text kernel parser will fail when reading those characters.

Assignments in SPICE text kernels have a ``name = value(s)'' or ``name += value(s)'' format. We illustrate this format by way of an example using an excerpt from a SPICE text planetary constants kernel (PCK). The format description given below applies to all SPICE text kernels; the specific data names shown in this example apply only to text PCK kernels.
The example begins with a SPICE kernel type identifier and is then filled out with a combination of descriptive information, called comment blocks, and data blocks.

   KPL/PCK

   Planets first. Each has quadratic expressions for the direction
   (RA, Dec) of the north pole and the location and rotation state of
   the prime meridian. Planets with satellites (except Pluto) also have
   linear expressions for the auxiliary (phase) angles used in the
   nutation and libration expressions of their satellites.

   \begindata

   BODY399_POLE_RA        = (     0.       -0.64061614   -0.00008386 )
   BODY399_POLE_DEC       = (   +90.       -0.55675303   +0.00011851 )
   BODY399_PM             = (    10.21   +360.98562970   +0.         )
   BODY399_LONG_AXIS      = (     0.                                 )

   BODY3_NUT_PREC_ANGLES  = (  125.045      -1935.53
                               249.390      -3871.06
                               196.694    -475263.
                               176.630    +487269.65
                               358.219     -36000.    )

   \begintext

   Each satellite has similar quadratic expressions for the pole and
   prime meridian. In addition, some satellites have nonzero nutation
   and libration amplitudes. (The number of amplitudes matches the
   number of auxiliary phase angles of the primary.)

   \begindata

   BODY301_POLE_RA      = (  270.000   -0.64061614   -0.00008386 )
   BODY301_POLE_DEC     = (  +66.534   -0.55675303   +0.00011851 )
   BODY301_PM           = (   38.314  +13.1763581     0.         )
   BODY301_LONG_AXIS    = (    0.                                )

   BODY301_NUT_PREC_RA  = (  -3.878  -0.120  +0.070  -0.017   0.    )
   BODY301_NUT_PREC_DEC = (  +1.543  +0.024  -0.028  +0.007   0.    )
   BODY301_NUT_PREC_PM  = (  +3.558  +0.121  -0.064  +0.016  +0.025 )

   \begintext

   Here we include the radii of the satellites and planets.

   \begindata

   BODY399_RADII        = ( 6378.140  6378.140  6356.755 )
   BODY301_RADII        = ( 1738.     1738.     1738.    )

   \begintext

In this example there are several comment blocks providing information about the data. Except for the comments appearing just after the kernel type identifier and before the first data block, all comment blocks are introduced by the control word

   \begintext

The data blocks are introduced by the control word

   \begindata

Each of these control words must appear on a line by itself, and each may be preceded by white space.
Within each data block there are one or more variable assignments. Each variable assignment consists of three components: a variable name, an assignment operator (``='' or ``+=''), and one or more values.

Variable Name Rules

A variable name can include any printable character except characters that have special meaning in assignments, such as the equals sign and the parentheses and commas used to delimit vector values. Variable names are case-sensitive. Note that this behavior is different from that of most CSPICE high-level functions, which tend to ignore case in string inputs. Variable names that don't have the expected case will be invisible to CSPICE functions that try to fetch their values. Since high-level CSPICE functions that use kernel variables accept only upper case names, NAIF recommends that upper case always be used for variable names. NAIF recommends you do not use a variable name with ``+'' as the last character.

Assignment Rules

Direct assignments (``='') supersede previous assignments, whereas incremental assignments (``+='') append the specified values to the set created by previous assignments. For example, the series of assignments

   BODY301_NUT_PREC_RA  = -3.878
   BODY301_NUT_PREC_RA += -0.120
   BODY301_NUT_PREC_RA += +0.070
   BODY301_NUT_PREC_RA += -0.017
   BODY301_NUT_PREC_RA += 0.

is equivalent to the single direct assignment

   BODY301_NUT_PREC_RA  = ( -3.878 -0.120 +0.070 -0.017 0 )

Variable Value Rules

Values may be scalar (a single item) or vector (two or more items). A value may be a number, a string, or a special form of a date.

Numeric values may be provided in integer or floating point representation, with an optional sign. Engineering notation using an ``E'' or ``D'' is allowed. All numeric values, including integers, are stored as double precision numbers. Examples of assignments using valid numeric formats:

   BODY399_RADII = ( 6378.1366   6378.1366   6356.7519   )
   BODY399_RADII = ( 6.3781366D3 6.3781366D3 6.3567519D3 )
   BODY399_RADII = ( 6.3781366d3 6.3781366d3 6.3567519d3 )
   BODY399_RADII = ( 6.3781366E3 6.3781366E3 6.3567519E3 )
   BODY399_RADII = ( 6.3781366e3 6.3781366e3 6.3567519e3 )
   BODY399_RADII = ( 6378        6378        6357        )

String values are delimited by single quotes, for example:

   DISTANCE_UNITS = ( 'KILOMETERS' )
Creating a string value longer than 80 characters is possible through continuation of an assignment over multiple lines; this is described later. There is no practical limit on the length of a string value other than as mentioned in the section on String Continuation below. If you need to include a single quote in the string value, use the FORTRAN convention of ``doubling'' the quote. MESSAGE = ( 'You can''t always get what you want.' ) A second method for entering dates, unique to text kernels, uses an ``@'' syntax. Some examples: CALIBRATION_DATES = ( @31-JAN-1987, @feb/4/1987, @March-7-1987-3:10:39.221 ) Dates entered using the ``@'' syntax are converted to double precision seconds past the reference epoch J2000 as they are read into the kernel pool. Note that NO time system specification (e.g. UTC or TDB) is implied by dates using the ``@'' syntax. Association of a time system with such dates is performed by the software that uses them. For example, in SPICE leapseconds kernels, such dates represent UTC times; in frames kernels, they represent TDB times. You should refer to software user's guides or API documentation to understand the interpretation of these dates for your application. Vector values, whether of numeric, string or date types, are enclosed in parenthesis, and adjacent components are separated by either white space (blank or carriage return, but not TAB) or commas. MISSION_UNITS = ( 'KILOMETERS','SECONDS' 'KILOMETERS/SECOND' ) ERROR_EXAMPLE = ( 1, 2, 'THREE', 4, 'FIVE' ) Line Length All assignments, or portions of an assignment, occurring on a line must not exceed 132 characters, including the assignment operator and any leading or embedded white space. String Continuation It is possible to treat specified, consecutive elements of a string array as a single ``continued'' string. 
String continuation is indicated by placing a user-specified sequence of non-blank characters at the end (excluding trailing blanks) of each string value that is to be concatenated to its successor. The string continuation marker can be any positive number of printing characters that fit in a string value (meta-kernels are an exception; see below). For example, if the character sequence // is used as the continuation marker, then the assignment

   CONTINUED_STRINGS = ( 'This // ',
                         'is // ',
                         'just //',
                         'one long //',
                         'string.',
                         'Here''s a second //',
                         'continued //'
                         'string.' )

yields the two continued strings

   This is just one long string.
   Here's a second continued string.

The CSPICE function stpool_c, and ONLY that function, provides the capability of retrieving continued strings from the kernel pool. See the discussion below under ``Fetching Data from the Kernel Pool'' or the header of stpool_c for further information.

Maximum Numbers of Variables and Variable Values

All variable values from all text kernels loaded into your program are stored in the kernel pool. There are upper bounds on the total numbers of variables and variable values. See Appendix D for the numeric values of these limits.

Treatment of Invalid Text Kernels

If, during a call to furnsh_c, an error is detected in a text kernel, CSPICE will signal an error. By default, a diagnostic message will be displayed to standard output and the program will terminate. If the CSPICE error handling subsystem is in RETURN mode, furnsh_c will return control to the calling program. RETURN mode is typically used in interactive programs. In the latter case, all data loaded from the text kernel prior to discovery of the error will remain loaded. If, in RETURN mode, an error occurs while a meta-kernel is being loaded, all files listed in that meta-kernel that have already been loaded will remain loaded. Files listed in the meta-kernel later than the file for which the failure occurred will not be loaded.
Note that continuing program operation after a load failure could, due to changes in the availability of competing data, result in performing computations with data that were not planned to be used.

A meta-kernel (also known as a ``FURNSH kernel'') is a special instance of a text kernel. Its use has been discussed earlier in this document. In addition to satisfying the text kernel specifications above, a meta-kernel is subject to additional restrictions.

For most SPICE users the accessing of text kernel data occurs inside of high-level CSPICE functions, so you may choose to skip the rest of this section. But if you need to work with text kernel variables that are not present in traditional text kernels, and thus are not accessed by high-level SPICE functions, read on.

The values of variables stored in the kernel pool may be retrieved using the functions:

   gcpool_c ( name, first, room, lenout, nvalues, values, found );
   gdpool_c ( name, first, room,         nvalues, values, found );
   gipool_c ( name, first, room,         nvalues, values, found );
   stpool_c ( name, nth,   contin, lenout, string, size,  found );

Four routines are provided for retrieving general information about the contents of the kernel pool.

The kernel subsystem provides functions to load and unload SPICE files, known as kernels, and provides other kernel management and information functions. These functions are part of the ``KEEPER'' subsystem.

For the SPICE system to use kernels, they must be made known to the system and opened at run time. This activity is called ``loading'' kernels. SPICE provides a simple interface for this purpose. The principal kernel loading function is named furnsh_c (pronounced ``furnish''). A kernel database stores the existence information for any kernel (text or binary) loaded by furnsh_c. The subsystem provides a set of functions that enable an application to find the names and attributes of kernels stored in the database.

Early versions of CSPICE loaded kernels using functions specific to each kernel type.
Code written for the binary kernels also supported a kernel unload facility. CSPICE continues to support the original kernel loaders and unloaders, but anyone writing new code should use the furnsh_c function instead of the kernel-specific functions.

NAIF recommends loading multiple kernels using a ``meta-kernel'' rather than by executing multiple calls to furnsh_c. (``Meta-kernels'' are sometimes called ``furnsh kernels.'') A meta-kernel is a SPICE text kernel that lists the names of the kernels to load. At run time, the user's application supplies the name of the meta-kernel as an input argument to furnsh_c. For example, instead of loading kernels using the code fragment:

   #include "SpiceUsr.h"
        .
        .
        .
   furnsh_c ( "leapseconds.tls" );
   furnsh_c ( "mgs.tsc"         );
   furnsh_c ( "generic.bsp"     );
   furnsh_c ( "mgs.bc"          );
   furnsh_c ( "earth.bpc"       );
   furnsh_c ( "mgs.bes"         );

one can load a single meta-kernel:

   #include "SpiceUsr.h"
        .
        .
        .
   furnsh_c ( "kernels.tm" );

where the meta-kernel kernels.tm contains:

   KPL/MK

   \begindata

      KERNELS_TO_LOAD = ( 'leapseconds.tls',
                          'mgs.tsc',
                          'generic.bsp',
                          'mgs.bc',
                          'earth.bpc',
                          'mgs.bes' )

   \begintext

While far less robust, it is also possible to provide the names of kernels to be loaded as input arguments to furnsh_c. For example, one may write

   #include "SpiceUsr.h"
        .
        .
        .
   #define NKER 6

   char * kernels[NKER] = { "leapseconds.tls",
                            "mgs.tsc",
                            "generic.bsp",
                            "mgs.bc",
                            "earth.bpc",
                            "mgs.bes"         };

   for ( int i = 0;  i < NKER;  i++ )
   {
      furnsh_c ( kernels[i] );
   }

It is fairly common for two kernels of the same type - for example two SPKs - to have ``competing data.'' ``Competing'' means that both kernels could provide an answer to the user's request for data, even though the numeric results would likely be different. This usually occurs when the two kernels were produced using different input data and mostly contain non-competing data, but do have some overlap in time. When two or more kernels contain competing data, a kernel loaded later has higher priority than kernel(s) loaded earlier.
This is true whether using separate calls to furnsh_c for each kernel to be loaded, or a single call to furnsh_c with a list of kernels to be loaded, or a call to furnsh_c that loads a meta-kernel. See Appendix A for a more complete discussion of competing data.

If orientation data for a given body-fixed frame are provided in both a text PCK and a binary PCK, data from the binary PCK always have higher priority.

Inside a meta-kernel it is sometimes necessary to qualify kernel names with their path names. To reduce both typing and the need to continue kernel names over multiple lines, meta-kernels allow users to define symbols for paths. This is done using two kernel variables:

   PATH_VALUES
   PATH_SYMBOLS

Then you can prefix the kernel names specified in the KERNELS_TO_LOAD variable with path symbols. Each symbol is prefixed with a dollar sign to indicate that it is in fact a symbol. Suppose in our example above the MGS kernels reside in the path

   /flight_projects/mgs/SPICE_kernels

and the generic kernels reside in the path

   /generic/SPICE_kernels

Then the kernels could be listed as follows:

   \begindata

      PATH_VALUES  = ( '/flight_projects/mgs/SPICE_kernels',
                       '/generic/SPICE_kernels' )

      PATH_SYMBOLS = ( 'MGS',
                       'GEN' )

      KERNELS_TO_LOAD = ( '$GEN/leapseconds.tls',
                          '$MGS/mgs.tsc',
                          '$GEN/generic.bsp',
                          '$MGS/mgs.bc',
                          '$GEN/earth.bpc',
                          '$MGS/mgs.bes' )

   \begintext

Caution: the symbols defined using PATH_SYMBOLS are not related to the symbols supported by a host shell or any other operating system interface.

The KEEPER subsystem maintains a database of the load operations that furnsh_c has performed during a program run. This is implemented using data structures of fixed size, so there is a limit on the maximum number of loaded kernels that the KEEPER subsystem can accommodate. When a kernel is loaded using furnsh_c, a new entry is created in the database of loaded kernels, whether or not the kernel is already loaded.
All load and unload operations (see the discussion of unload_c below) affect the list of loaded kernels and therefore affect the results returned by the functions ktotal_c, kdata_c, and kinfo_c, all of which are discussed below under ``Finding Out What's Loaded.''

Reloading an already loaded kernel creates another (duplicate) entry in the database of loaded kernels, and thus decreases the available space in that list. furnsh_c's treatment of reloaded kernels is thus slightly different from that performed by the CSPICE low-level kernel loaders, which handle a reload operation by first unloading the kernel in question, then loading it.

The recommended method of increasing the priority of a loaded binary kernel, or of a meta-kernel containing binary kernels, is to unload it using unload_c (see below), then reload it using furnsh_c. This technique helps reduce clutter in furnsh_c's kernel list.

furnsh_c can currently keep track of up to 5000 kernels. The list of loaded kernels may contain multiple entries for a given kernel, so the number of distinct loaded kernels would be smaller if some have been reloaded. Unloading kernels using unload_c frees room in the kernel list, so there is no limit on the total number of load and corresponding unload operations performed in a program run.

The DAF/DAS handle manager system imposes its own limit on the number of DAF binary kernels that may be loaded simultaneously. This limit is currently set to a total of 5000 DAF kernels.

CSPICE-based applications may need to determine at run time which files have been loaded. Applications may need to find the DAF or DAS handles of loaded binary kernels so that the kernels may be searched. Some applications may need to unload kernels to make room for others, or change the priority of loaded kernels at run time. CSPICE provides kernel access functions to support these needs.
For every loaded kernel, an application can find the name of the kernel, the kernel type (text or one of SPK, CK, PCK, or EK), the kernel's DAF or DAS handle if applicable, and the name of the meta-kernel used to load the kernel, if applicable.

The function ktotal_c returns the count of loaded kernels of a given type. The function kdata_c returns information on the nth kernel of a given type. The two functions are normally used together. The following example shows how an application could retrieve summary information on the currently loaded SPK files:

   #include <stdio.h>
   #include "SpiceUsr.h"

   #define FILLEN 128
   #define TYPLEN  32
   #define SRCLEN 128

   SpiceInt     which;
   SpiceInt     count;
   SpiceInt     handle;

   SpiceChar    file  [FILLEN];
   SpiceChar    filtyp[TYPLEN];
   SpiceChar    source[SRCLEN];

   SpiceBoolean found;
        .
        .
        .
   ktotal_c ( "spk", &count );

   if ( count == 0 )
   {
      printf ( "No SPK files loaded at this time.\n" );
   }
   else
   {
      printf ( "The loaded SPK files are: \n\n" );
   }

   for ( which = 0;  which < count;  which++ )
   {
      kdata_c ( which,  "spk",   FILLEN,  TYPLEN,  SRCLEN,
                file,   filtyp,  source,  &handle, &found );

      printf ( "%s\n", file );
   }

In this example, "spk" is a kernel type specifier. The allowed set of values is shown below.

   SPK  --- Only SPK kernels are counted in the total.
   CK   --- Only CK kernels are counted in the total.
   PCK  --- Only binary PCK kernels are counted in the total.
   EK   --- Only EK kernels are counted in the total.
   TEXT --- Only text kernels that are not meta-kernels are
            counted in the total.
   META --- Only meta-kernels are counted in the total.
   ALL  --- Every type of kernel is counted in the total.

CSPICE also contains the function kinfo_c that returns summary information about a kernel whose name is already known. kinfo_c is called as follows:

   kinfo_c ( file, TYPLEN, SRCLEN, filtyp, source, &handle, &found );

CSPICE-based applications may need to remove loaded kernels.
Possible reasons for this include making room for other kernels or changing the priority of loaded kernels.

Text kernels are unloaded by clearing the kernel pool and then reloading the other text kernels not designated for removal. Note that unloading text kernels has the side effect of wiping out any kernel variables and associated values that had been entered in the kernel pool using any of the kernel pool assignment functions, such as pcpool_c. It is important to consider whether this side effect is acceptable when writing code that may unload text kernels or meta-kernels.

Call unload_c as follows:

   unload_c ( kernel );

The various platforms supported by CSPICE use different end-of-line (EOL) indicators in text files:

   Environment        Native End-Of-Line Indicator
   ___________        ____________________________

   PC DOS/Windows     <CR><LF>
   Unix               <LF>
   Linux              <LF>
   Mac OS X           <LF>

The CSPICE text file reader, rdtext_c, cannot read non-native text files.

Starting with the version N0052 release of the SPICE Toolkit (January, 2002), supported platforms are able to read DAF-based binary kernels (SPK, CK and binary PCK) that were written using a non-native binary representation. This access is read-only; any operations requiring writing to the file--for example, adding information to the comment area, or appending additional ephemeris data--require prior conversion of the kernel to the native binary file format. See the ``Convert User's Guide'' for details.

The main way one adds to or changes the contents of the kernel pool is by ``loading'' a SPICE text kernel using the function furnsh_c. However, the kernel subsystem also provides several other functions that allow one to change the contents of the kernel pool. The example below uses lmpool_c to load assignments directly from a buffer of text lines in memory:

   #include "SpiceUsr.h"
        .
        .
        .
   #define LNSIZE 81
   #define BUFSIZE 29

   static SpiceChar text [BUFSIZE][LNSIZE] =
      {
         "DELTET/DELTA_T_A = 32.184",
         "DELTET/K         = 1.657D-3",
         "DELTET/EB        = 1.671D-2",
         "DELTET/M         = ( 6.239996 1.99096871D-7 )",
         "DELTET/DELTA_AT  = ( 10, @1972-JAN-1",
         "                     11, @1972-JUL-1",
         "                     12, @1973-JAN-1",
         "                     13, @1974-JAN-1",
         "                     14, @1975-JAN-1",
         "                     15, @1976-JAN-1",
         "                     16, @1977-JAN-1",
         "                     17, @1978-JAN-1",
         "                     18, @1979-JAN-1",
         "                     19, @1980-JAN-1",
         "                     20, @1981-JUL-1",
         "                     21, @1982-JUL-1",
         "                     22, @1983-JUL-1",
         "                     23, @1985-JUL-1",
         "                     24, @1988-JAN-1",
         "                     25, @1990-JAN-1",
         "                     26, @1991-JAN-1",
         "                     27, @1992-JUL-1",
         "                     28, @1993-JUL-1",
         "                     29, @1994-JUL-1",
         "                     30, @1996-JAN-1",
         "                     31, @1997-JUL-1",
         "                     32, @1999-JAN-1",
         "                     33, @2006-JAN-1",
         "                     34, @2009-JAN-1 )"
      };

   /*
   Add the contents of the buffer to the kernel pool:
   */
   lmpool_c ( text, LNSIZE, BUFSIZE );

Since loading SPICE text kernels often happens only at program initialization, a function that relies on data in the kernel pool may run more efficiently if it can store a local copy of the values needed and update these only when a change occurs in the kernel pool. Two functions, swpool_c and cvpool_c, allow a quick test to see whether kernel pool variables have been updated.

For binary kernels, the conditions resulting in competing data depend on the kernel type.

For SPKs, a segment contains data of a single SPK type, providing ephemeris for a single target measured relative to a single center and given in a single reference frame, spanning between specified start and stop times. If ephemeris data from any two segments, whether found in a single SPK file or in two SPK files, are for the same target and have an overlap in the time spans covered, then the two kernels are said to have some competing data. Note that centers play no role in the competition: two segments with the same target and different centers may compete.

By definition, SPKs contain continuous data during the time interval covered by a segment, so there is no chance for a ``data gap'' in a segment within a higher priority file (later loaded file) leading to a state lookup coming from a segment in a lower priority file.

SPK segment chaining may lead to a problem. It may happen that you have loaded into your program sufficient SPK data to compute the desired state or position vector, but CSPICE nevertheless returns an error message saying insufficient ephemeris data have been loaded.
This can occur if a higher priority SPK segment, for which there are not sufficient additional SPK data to fully construct your requested state or position vector, is masking (blocking) a segment that is part of a viable (complete) chain. See the BACKUP section of the SPK tutorial for further discussion about this.

Having competition between two SPKs can be a relatively common occurrence when using mission operations kernels, but is far less likely when using PDS-archived SPICE data sets because of the clean-up and consolidation actions usually taken when an archive delivery is produced.

For CKs, a segment contains data of a single CK type providing the orientation of a reference frame associated with one object or structure, such as a spacecraft or instrument (sometimes called the ``to'' reference frame), relative to a second reference frame, generally referred to as the base reference frame (sometimes called the ``from'' reference frame), spanning between specified start and stop times. If transformation data from any two segments, whether found in a single CK file or in two CK files, are for the same object/structure (are for the same ``to'' frame) and have an overlap in the time span covered, then the two kernels may have competing data. But read on.

Unlike for SPKs, competition between CK files goes beyond segment-level considerations. The so-called ``continuous'' CK types (Types 2 through 5) do not necessarily provide orientation results for any epoch falling within a segment--there may be real data gaps. And the now little used Type 1 CK, containing discrete instances of orientation data, can be thought of as containing mostly data gaps. While some of the Toolkit software used to compute orientation obtained from CKs can provide an orientation result within a gap, this is usually not the case. See the CK tutorial and the ``CK Required Reading'' document for discussions on interpolation intervals, tolerance, and how the various CK readers work.
CK segment chaining may lead to a problem. It may happen that you have loaded into your program sufficient CK data to compute the desired rotation matrix, but CSPICE nevertheless returns an error message saying insufficient data have been loaded. This can occur if a higher priority CK segment, for which there are not sufficient additional CK data to fully construct your requested rotation matrix, is masking (blocking) a segment that is part of a viable (complete) chain.

Having competition between two CKs can be a relatively common occurrence when using mission operations kernels, but is far less likely when using PDS-archived SPICE data sets because of the clean-up and consolidation actions usually taken when an archive delivery is prepared.

For binary PCKs, a segment contains data of a single binary PCK type providing orientation of a reference frame associated with a single object (a body-fixed frame), relative to a second reference frame, which is always an inertial frame, spanning between specified start and stop times. If orientation data from any segment in one binary PCK and orientation data from any segment in a second binary PCK are for the same body-fixed frame and overlap in time, then the two kernels are said to have competing data.

At present binary PCKs produced by NAIF exist only for the earth and the moon. Having competition between the latest high precision, short term earth orientation binary PCK and the lower precision, long term predict earth orientation binary PCK is a clear possibility -- be sure to load the long term predict file first to ensure any higher precision files also loaded have higher priority.

Orientation data provided in any loaded binary PCK have priority over what would have otherwise been competing data provided in any loaded text PCK.
If a given variable name has two or more assignments, with the final assignment made using the ``='' operator, whether within a single loaded text kernel, or from multiple loaded text kernels, or achieved using CSPICE functions, the last such assignment supersedes all previous occurrences of the assignment. This superseding happens no matter how many values are contained in the last assignment. (It's as if all previous assignments for the subject name had never occurred.) It is generally best to unload a text kernel before loading another one containing competing data.

Agent

A string associated with a list of kernel variables to be watched for updates. The string can be passed to the update checking function cvpool_c to determine whether any of the variables on the list have been updated. Often the string is the name of a function that needs to be informed if any of a specified set of kernel variables has had a change made to its associated value(s).

Assignment

What appears inside data blocks of a text kernel. Each assignment consists of three parts: a variable (also called variable name), an operator, and a scalar or vector value. For example,

   BODY399_RADII = ( 6378.14 6378.14 6356.75 )

Once a text kernel is loaded, the value(s) on the right hand sides of the assignments become associated with the variable names on the corresponding left hand sides. See ``direct assignment'' and ``incremental assignment'' below.

Continued string

A string value composed of two or more pieces--called elements--each of which is no longer than 80 characters.

Data and text block markers

Markers indicating the start of data or comment blocks, specifically

   \begindata
   \begintext

Direct assignment

A text kernel assignment, made using the ``='' operator. When a direct assignment is processed during text kernel loading, it associates one or more values with a variable name, and in so doing, replaces any previous such associations.

Element

Within the kernel pool the length of a string value is limited to 80 characters.
A string value that is longer than 80 characters may be stored in and extracted from the pool by chunking it into pieces--called elements--each of which is no longer than 80 characters. Such a string is referred to as a ``continued string.''

Incremental assignment

A text kernel assignment made using the ``+='' operator. When an incremental assignment is processed during text kernel loading, it appends one or more values to the list of values already associated with a variable name. Any previous such associations are NOT replaced; rather they are supplemented with the new value(s). Incremental assignments may be made to variables that didn't previously exist in the kernel pool; in such cases incremental assignments are equivalent to direct assignments.

Keeper system

The SPICE subsystem used to keep track of (manage) loaded kernel files. In this sense it is also involved with the unloading of kernels.

Kernel pool

A specially managed area of program memory where data from text kernel assignment statements are stored.

Kernel variable

Often a synonym for ``variable name,'' but may refer to the combination of a variable name and its associated values.

Meta-kernel

A special kind of text kernel, used to name a collection of kernels that are to be loaded into a user's application at run-time. May include the path names for the kernels as well as the file names.

Operator

Within SPICE text kernels, an operator is either ``='' or the sequence of ``+'' and ``='', written as ``+=''. The former is used to make direct assignments, the latter is used to make incremental assignments.

Principal data

This term occurs only within this document. It is used to refer to the ``elemental'' data contained in a kernel, as opposed to meta-data or bookkeeping data. For instance, within an SPK the principal data are the polynomials or other numeric data providing ephemeris information.
Not part of the principal data are the descriptive information placed in the comment area, the file architecture IDs, and the indexes that help the subsystem quickly find the principal data needed to return a state vector.

Value(s)

That which appears on the right-hand side of an assignment. May be a single value or a vector of values.

   variable name = value(s)

Variable name

That which appears on the left-hand side of an assignment.

Vector

Two or more values associated with a single variable name.

Each of the function names below is a mnemonic that translates into a short description of the function's purpose.

   clpool_c   ( Clear the pool of kernel variables )
   cvpool_c   ( Check variable in the pool for update )
   dtpool_c   ( Return information about a kernel pool variable )
   dvpool_c   ( Delete a variable from the kernel pool )
   expool_c   ( Confirm the existence of a pool kernel variable )
   furnsh_c   ( Furnish a program with SPICE kernels )
   gcpool_c   ( Get character data from the kernel pool )
   gdpool_c   ( Get double precision values from the kernel pool )
   gipool_c   ( Get integers from the kernel pool )
   gnpool_c   ( Get names of kernel pool variables )
   kclear_c   ( Clear and re-initialize the kernel database )
   kdata_c    ( Return information about the nth loaded kernel )
   kinfo_c    ( Return information about a specific loaded kernel )
   ktotal_c   ( Return the number of kernels loaded using KEEPER )
   lmpool_c   ( Load variables from memory into the pool )
   pcpool_c   ( Put character strings into the kernel pool )
   pdpool_c   ( Put double precision values into the kernel pool )
   pipool_c   ( Put integers into the kernel pool )
   stpool_c   ( Return a string associated with a kernel variable )
   swpool_c   ( Set watch on a pool variable )
   szpool_c   ( Get size parameters of the kernel pool )
   unload_c   ( Unload a kernel )

Text kernel limits:

   Maximum variable name length:                       32
   Maximum length of any element of a string value:    80
   Maximum number of distinct variables:            26003
   Maximum number of numeric variable values:      400000
   Maximum number of character strings stored
   in the kernel pool as values:                    15000

   Maximum length of a file name, including any
   path specification, placed in a meta-kernel:       255

   Maximum total number of kernel files of any
   type that can be loaded simultaneously:           5000

2014 July 15, NJB (JPL)

   Updated numeric limits. Added discussion of kernel loading errors. Made small additions to discussion of file name restrictions. Added mention of treatment by GIPOOL of non-integer values. Made small addition to discussion of ``@'' time values in text kernels. Corrected a ``setparamsize'' setting that truncated function names. Changed quoting style to standard (`` '') for .ftm documents. Changed double quotes to single quotes in IDL code example. Made other miscellaneous, minor edits.

2011 October 24, CHA (JPL)

   Re-organization and added further clarifications. Also added Appendix A discussing competing data, Appendix B providing a glossary of terms, and Appendix C summarizing kernel subsystem functions. Includes much information provided by N. Bachman.

2011 APR 18, EDW (JPL)

   Edits for clarity and organization. Added description of the 32 character limit on user defined kernel pool variable names for furnsh_c, lmpool_c, pcpool_c, pdpool_c, and pipool_c. Added mention that tabs are now allowed in text kernels. kclear_c now included in routines list.

2009 APR 08, BVS (JPL)

   Previous edits.
not a tutorial, but a useful resource for people writing heavy applications requiring the fastest execution time: these will probably apply to AS2 for the most part as well (in terms of which is faster)

first one: int, uint, Number, a loop of 100 000 000, simply increment and assignment to another variable

   int:    563ms
   uint:   2504ms
   Number: 1234ms

now, the thing which occurred to me here, and i quickly changed in all my applications as of yet, is that uint is for some reason extremely slow! as expected, int is the fastest for an iteration loop like this, but it should be expected that uint would be the same as int; however for some reason it is very slow. doing more tests, it seems this is also the case elsewhere. in other words, DONT use uint unless you have to

now, another important thing to note when building heavy applications needing as fast an execution as possible: although using an int for an iteration loop is much faster than Number, if you are then using that int in a floating point calculation, it can become a lot slower, because it has to convert the int to a Number, then back to an int again, so think about this.

next one: strict data typing VS non strict data typing, and also declaration vs non declaration

again, an iteration loop, this time just int and Number (uint is kind of obsolete now) (non declared are obviously not typed either)

   int (strict):          562ms
   int (non strict):      6385ms
   int (not declared):    undefined
   Number (strict):       1128ms
   Number (non strict):   6385ms
   Number (not declared): undefined

as expected, non strict is the same for both, well, because its the same loop, but notice the VAST difference between strict data typing the iteration loop and non strict data typing: half a second and one second, compared to 6 and a half seconds!
for the declaration, lets put it this way, it crashed flash player when i didnt declare because it took too long!, so lets try a smaller iteration for the non declared comparison, only 2 000 000 this time!

   declared Number (strict):    24ms
   declared int (strict):       11ms
   declared variable (no type): 134ms
   undeclared variable:         10266ms

i think this is enough to change anyones views on not bothering to declare the variables, look at the difference! 10 seconds, compared to 0.1 seconds, and when strict data typing to Number, its 0.024 seconds, and to int, 0.011 seconds!!!

OK, next, lets see what the difference is in a for loop, for declaring the variable outside the for loop, or in the for loop, in the following way

   var a:int;
   for(a = 0; a<100000000; ++a) a = a;

   for(var a:int = 0; a<100000000; ++a) a = a;

   first:  562ms
   second: 581ms

so we can see, its very slightly faster to declare the variable outside the for loop, but this brings up the next one, how about ++a in comparison to a++ ?

   ++a : 553ms
   a++ : 573ms

so we see that ++a is very slightly faster.

now, lets have a look at some array loops, first of all: is it faster to do new Array(); or []; when creating a new array? (using a loop of 2 000 000)

   new Array(): 3141ms
   []:          1065ms

we can see that its a lot faster to create a new array via [] than new Array();

now lets try push() in comparison to just setting an undefined item in an iteration loop: (10 000 000 loop)

   var b:Array = [];
   for(var a:int = 0; a<10000000; ++a)
   {
       b.push(a);
   }

   for(var a:int = 0; a<10000000; ++a)
   {
       b[a] = a;
   }

   push: 2049ms
   []:   1562ms

we can see here, that using the [] syntax in this style of populating an array is faster than push, but what happens if we define the initial array as new Array(10000000); ?

   []: 1455ms

if we use new Array() to set the size of the array before populating it with [], its faster again.

now, how about declaring variables outside of a loop, to be used within, rather than inside the loop: i.e.
   var b:int;
   for(var a:int = 0; a<10000000; ++a){
       b = a<<1;
   }

   for(var a:int = 0; a<10000000; ++a){
       var b:int = a<<1;
   }

   out: 555ms
   in:  535ms

so it is slightly faster to declare the variable to be used inside the loop.

anymore speed tests you would like me to run? just ask

Very useful for maximising the speed for your stuff :P I still can't work out why uint is so slow, though...

Nice speed tests. It's good to know when coding bigger games. Btw, how do you test them?

At 7/28/06 07:23 AM, GuyWithHisComp wrote: Nice speed tests. It's good to know when coding bigger games. Btw, how do you test them?

using getTimer();, basicly

   import flash.utils.*;

   var t:int = getTimer();
   //whatever i want to test
   trace(getTimer()-t);

so it gets the amount of time it took to execute whatever is in between

At 7/28/06 07:24 AM, -dELta- wrote: using getTimer();, basicly

Ah, cool. Never thought of that. How is this AS3 then?

At 7/28/06 07:26 AM, ViktorHesselbom wrote: How is this AS3 then?

because im doing the speed tests in flash 9 alpha, using AS3, and im also using the 'int' datatype and 'uint' datatype which arent part of AS2, and because of the new virtual machine, the speed will be different than in AS2 for a lot of things (although like i said, which one is fastest should remain the same)

ok, this is a very strange one, comparing ++a, to a++, and to a+=1. a+=1, although its speed is more erratic and isnt so static, is faster than both ++a and a++. running the tests 10 or so times, i get averages around

   a++:  570ms
   ++a:  559ms
   a+=1: 499ms

At 7/28/06 07:31 AM, -dELta- wrote: comparing ++a, to a++, and to a+=1

Would that be any difference with a+=2?

Just something to add: if(), conditionals and binary for defining a variable:

   var xe:Boolean = true;
   var xd:Number = 0;

   //If
   if(xe){
       xd = 1.5;
   }
   else{
       xd = 0;
   }

   //Conditional
   xd = (xe) ?
   1.5 : 0;

   //Binary
   xd = xe * 1.5;

When true:

   If          - 99ms avg
   Conditional - 420ms avg
   Binary      - 109ms avg

When false:

   If          - 108ms avg
   Conditional - 426ms avg
   Binary      - 109ms avg

This is by no means a scientific study; I just thought I'd see if I could get a rough idea of which worked faster. Flash 9 Alpha.

At 7/28/06 07:39 AM, _Paranoia_ wrote: Just something to add:

So conditional suck at speed and looks weird. :S

At 7/28/06 07:38 AM, ViktorHesselbom wrote: At 7/28/06 07:31 AM, -dELta- wrote: comparing ++a, to a++, and to a+=1 Would that be any difference with a+=2?

delta: +=2 and +=1 cant really have a proper test, but they are roughly the same speed.

And booleans (rough):

   if(a==true):  1929ms
   if(a):        787ms
   if(a==false): 1914ms
   if(!a):       784ms

At 7/28/06 07:42 AM, ViktorHesselbom wrote: At 7/28/06 07:39 AM, _Paranoia_ wrote: Just something to add: So conditional suck at speed and looks weird. :S

Apparently slow; but we're still talking very fast here. Unless you're going to be using it in a loop with tens of thousands of passes you'll have a lot more to worry about than conditionals, and that one line can make your code so much more readable.

   int: <<2      : 552ms
   int: >>2      : 574ms
   int: *4       : 2551ms
   int: /4       : 2775ms
   int: *0.25    : 2010ms

   Number: <<2   : 2901ms
   Number: >>2   : 2942ms
   Number: *4    : 1320ms
   Number: /4    : 1452ms
   Number: *0.25 : 1224ms

so we can see, ints are faster with binary arithmetic and Numbers are faster with everything else, and obviously, multiplying by the inverse of a number is faster than dividing by it :)

delta try sins cosines tangents and sqrts now

At 7/28/06 09:54 AM, Glaiel_Gamer wrote: delta try sins cosines tangents and sqrts now

Aren't they kinda unrealistic to use integers for? :P

At 7/28/06 10:04 AM, _Paranoia_ wrote: At 7/28/06 09:54 AM, Glaiel_Gamer wrote: delta try sins cosines tangents and sqrts now Aren't they kinda unrealistic to use integers for? :P

you can have an integer input though
Are there speeds the same as ++ and +=, or is it different? yeh, like ++a, a++, a+=1 a-=1 is the fastest, followed by ++a which is very silghtly, often the same as a++ sorry, a-=1 is the fastest, followed by --a, closely, and often on par with a-- a-=1 around about 305ms a-- and --a around about 344ms At 7/28/06 11:21 AM, -dELta- wrote: sorry, a-=1 is the fastest, followed by --a, closely, and often on par with a-- a-=1 around about 305ms a-- and --a around about 344ms Thanks. I'll try to remeber to use a-=1 more often then :) its been established already that using an if...else is faster than a ?... : .... conditional, so i wont do that one, but heres a comparison for int and Number in a large loop for Math.abs(), and using an if...else instead. but also whether its faster to use the prefix '-' to negate a number, or to multiple by -1. It should be clear that the prefix should be faster... but since weve already seen that +=1 and -=1 are faster than the prefixes ++ and --, we cant be too sure the loop for both is from -10000000 to 10000000, with another variable of the same type being set to the modulus of the iterated variable Number (if else -) : 214ms Number (if else *) : 208ms Number (abs) : 3111ms int (if else -) : 327ms int (if else *) : 335ms int (abs) : 3157ms first of all. we see that in all tests, int was outperformed by number, but also that Math.abs is very slow in comparison to the other tests, and like i said before with ++ and --, we cant be too sure. for Number, multiplying by -1 outperformed the negating prefix, but for int, multiplying was slower. it also seems, that if i refer to using non negative numbers in the iteration, int outperforms Number, rather than the other way round. however one last test, what happens if i make my own modulus function? Will it still be faster than Math.abs(), or will the function call cause it to slow down? 
function modulus(a:int):int {
    if (a < 0) return -a;
    else return a;
}

function modulus(a:Number):Number {
    if (a < 0) return a * -1;
    else return a;
}

int (function): 2041ms
int (if else '-'): 327ms
int (abs): 3157ms
Number (function): 2212ms
Number (if else '*'): 208ms
Number (abs): 3111ms

Although the function is still quite a bit faster than the Math.abs() call, it's a lot slower - basically 10x slower - than finding the modulus on the spot. This seems to be one of the major downsides of functions in AS: they are very slow to call in comparison to rewriting the code where it's needed. I'm not recommending you have massive functions rewritten on the spot - that would be FAR too messy - but with simple, short functions like this, I would recommend writing them on the spot.

You sure have been doing some research :P I make flashes because I can. PM me for anything flash or web related or visit my blog!!

Do you think you'll ever be able to stop looking so smart, Mr Delta? Here's another one - modulo assignments versus if statements for resetting counters:

Modulo: j += 1; j %= 2;
If (equality): j += 1; if (j == 2) { j = 0; }
If (comparison): j += 1; if (j >= 2) { j = 0; }

Modulo assignment: 1082ms
Number equality check: 87ms
Number comparison: 91ms

If statements seem to be very fast overall compared to other operations, which isn't too surprising. They still look untidy, though :P

Does AS3 support inline functions?

At 7/31/06 09:06 AM, Glaiel_Gamer wrote: does AS3 support inline functions?

No.

Care to test Math.sqrt(x); vs Math.pow(x, 0.5); ?

At 9/8/06 11:45 AM, ViktorHesselbom wrote: Care to test Math.sqrt(x); vs Math.pow(x, 0.5); ?

Not bad :P It's all yours, anyone who wants to try it. I could probably test it myself if my comp worked...

Can you test the difference between "and" and "&&", and between "or" and "||"?
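The counter-reset comparison above carries over directly to other ECMAScript dialects. Here is a minimal sketch in JavaScript (a stand-in for the thread's AS3; the function names are mine, not from the thread) of the two shapes being timed:

```javascript
// Two ways to keep a counter j cycling through 0 .. limit-1.
// The thread's timings suggest the if-based reset is far cheaper in AS3
// than the modulo assignment, which always pays for a division.

function nextModulo(j, limit) {
  j += 1;
  j %= limit; // modulo assignment: an increment plus a division
  return j;
}

function nextIf(j, limit) {
  j += 1;
  if (j >= limit) { // comparison: an increment plus a branch, no division
    j = 0;
  }
  return j;
}

// Both produce the same cycle, e.g. 0, 1, 0, 1, ... for limit = 2:
console.log(nextModulo(0, 2), nextModulo(1, 2)); // 1 0
console.log(nextIf(0, 2), nextIf(1, 2));         // 1 0
```

Whichever is faster in a given VM, the two are interchangeable, so the choice is purely a readability-versus-speed trade-off.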
Using a loop of 10,000,000:

sqrt, int: 1902ms
pow, int: 4255ms
sqrt, Number: 1885ms
pow, Number: 4165ms

I can't say I'm surprised by this, mind; surely a specific function for square root is faster than an arbitrary power function. The loop itself is done using int for int and Number for Number, so it's harder to compare between int and Number directly. But since int loops themselves are a lot faster, and Number still came out ahead overall, the sqrt and pow calls themselves must be a lot faster with Numbers.
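All the numbers quoted in this thread come from simple loop-based timing runs. A rough harness in that style, sketched in JavaScript rather than the posters' actual AS3 (the function names and iteration count are illustrative, not theirs):

```javascript
// Rough micro-benchmark harness in the style used throughout the thread:
// run a candidate expression in a tight loop and compare wall-clock time.
function time(label, fn, iterations) {
  const start = Date.now();
  let x = 0;
  for (let i = 0; i < iterations; i++) {
    x = fn(i, x); // keep the result live so the work isn't optimized away
  }
  const ms = Date.now() - start;
  console.log(label + ": " + ms + "ms");
  return ms;
}

const N = 10000000; // 10,000,000 iterations, as in the sqrt/pow test above

time("sqrt",    (i) => Math.sqrt(i), N);
time("pow 0.5", (i) => Math.pow(i, 0.5), N);
time("<<2",     (i) => i << 2, N);
time("*4",      (i) => i * 4, N);
```

Note that a harness like this measures loop overhead along with the operation under test, which is exactly the int-versus-Number confound the last post points out.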
http://www.newgrounds.com/bbs/topic/535196
> From: Helge Hess <address@hidden> > Date: Wed, 27 Feb 2002 18:10:07 +0100 > > Nicola Pero wrote: > >>Exactly. Just as a side note, Lars Sonchocky-Helldorf filed the request > >>for "safe-guarding" OS X Foundation/Cocoa Headers against multiple > >>#includes as an Apple Bug with radar ID #2868753. Today, just one day > >>later (!), the ticket has been closed. > >> > >>State: Verify > >>Resolution: Behaves Correctly > > > > You can always file the bug again if you disagree with them. > > > > I always thought that a company (Apple) quarrelling with his customers > > (Lars, you etc) - like in this case - is a very silly company. > > There are some million more customers Apple needs to care about ... I > wonder whether Omni's frameworks have safeguarded headers and whether > they use #import (I'm pretty sure they do). > > > I frankly don't understand why they can't protect the headers against > > multiple #includes. That way, they would just make more users happy (both > > the ones who want to use #import, and the ones who want to use #include), > > and they would loose nothing. > > They would make the ten GNUstep developers more happy, their real > customers do not care at all and go on using #import as the > documentation suggests. > Further they would need to modify about half of their code files, which > from a managers point of view very likely breaks something. Not to > mention existing documentation materials. > > There is no point in stating "#import is deprecated" if the major ObjC > developer base uses (and always used) #import on a day-by-day basis. I > completly agree with you that it's a non-issue to safe guard header > files. I completly disagree with you that using #import is an issue. > > Personally I safeguard all my header files and I always use #import for > Foundation and AppKit. This way I'm safe in every direction. 
> I'm pretty sure that's the only option which makes sense for tools like
> gs-autodoc or gsweb or GNUmail which are supposed to work on MacOSX.

Right. GNUstep developers and users are stuck between the gcc developers, who have deprecated #import, and Apple (ex-NeXT), who will continue to use it for the foreseeable future. In these conditions, and given that #import still works with gcc, the only sensible thing we can do is:

- continue to use #import to import Objective-C interfaces;
- always add preventive #ifndef/#define/#endif guards in anticipation of the possible obsolescence of #import;
- keep at hand a simple script such as:

for f in *.[hm] ; do
    cp $f $f~ && sed -e 's/#import/#include/' <$f~ >$f
done

Consequently, I would suggest having the -Wno-import option by default in GNUstep makefiles.
http://lists.gnu.org/archive/html/discuss-gnustep/2002-02/msg00540.html
iWayneo - Personal blog of Wayne Douglas. I am a Brighton based developer. My predominant skillset is around the .NET space but I am quickly learning Go and Erlang as I am getting slight language fatigue. Main interests are around architecture (CQRS/ES), languages, DevOps and multi-OS environments.

Wayne Douglas - Authentication with Ember + SailsJs / Waterlock Part 2 - Google oAuth2

Leading on from part 1, <a href="" target="_blank">Jwt Authentication with Ember + SailsJs / Waterlock</a>, I decided to continue with the theme by adding the ability to authenticate with a 3rd-party authentication service - in this example I'm using Google oAuth2.<br /><br />Now this wasn't simple for me to figure out initially! Partly because of, to my mind, the confusing docs surrounding Google oAuth2 where they state:<br /><blockquote class="tr_bq".</blockquote).<br /><br /.<br /><br />The desired flow is:<br /><br /><ol><li>User clicks "authenticate with Google" button</li><li>Popup appears with Google's auth flow</li><li>User provides access to our app</li><li>Popup closes (main app never refreshes or redirects)</li><li>Our app receives the authorization code from Google</li><li>Sends that code to our API</li><li>Our API then validates that code by exchanging it for an access token</li><li>If the code is valid we find or create a user and set up the JWT for the client app</li><li>Our client app receives the JWT and uses that for future requests to the server</li></ol><div"):</div><blockquote class="tr_bq">ember server --proxy</blockquote>Simple!<br /><br />In terms of changes - I have included a modified version of <a href="" target="_blank">waterlock-google-auth</a> <a href="" target="_blank">CSRF</a> - I may add a step where the client can retrieve one before starting the auth flow - I may omit it by using a separate endpoint for SPA auth - not sure of the implications yet.
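Step 7 of the flow above (exchanging the authorization code for an access token) boils down to a single POST against Google's token endpoint. A hedged sketch of building that request in Node; the field names come from the OAuth2 spec, the endpoint URL is the one Google documented at the time, and the clientId/clientSecret/redirectUri values are placeholders, not values from this project:

```javascript
// Build the authorization-code exchange request (OAuth2 "token" request).
// All credential values here are placeholders for illustration only.
function buildTokenRequest(code, clientId, clientSecret, redirectUri) {
  return {
    url: "https://www.googleapis.com/oauth2/v3/token", // Google's token endpoint
    form: {
      grant_type: "authorization_code", // we are exchanging an auth code
      code: code,                       // the code the popup handed back
      client_id: clientId,
      client_secret: clientSecret,
      redirect_uri: redirectUri         // must match the one used in the popup
    }
  };
}

const req = buildTokenRequest(
  "code-from-the-popup", "CLIENT_ID", "CLIENT_SECRET", "http://localhost:4200"
);
console.log(req.form.grant_type); // "authorization_code"
```

If the POST succeeds, the JSON response carries the access token; a failure means the code was invalid or already consumed, and the API should refuse to issue a JWT.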
That's all for the server!<br /><br />The client has a few new pieces:<br /><br /><ul><li>I've included the <a href="" target="_blank">ember-cli-simple-auth-torii</a> package - this handles auth flows requiring a popup</li><li>I've also included the <a href="" target="_blank">ember-cli-simple-auth-oauth2</a> package</li><li>And finally the <a href="" target="_blank">torii</a> package.</li></ul><div>Because we are customizing the standard flow a tad (sending the auth code to the API) I had to implement a custom authenticator: <a href="" target="_blank"></a></div><div><br /></div><div>The interesting part <a href="" target="_blank">here</a> is where I exchange the auth code for a validated user with an access token provided by our server - this makes up our JWT! Nice.</div><div><br /></div><div>I'll likely move on to add Facebook authentication and possibly create our own Spotify add-on to Waterlock and wire that in. Finally I'm planning on adding web sockets to provide "live" UI updates.</div><br /><img src="" height="1" width="1" alt=""/>Wayne Douglas Authentication with Ember + SailsJs / WaterlockThought I'd knock together a little example of using Waterlock/SailsJs with EmberJs and using JWT as the mechanism.<br /><br /><blockquote class="tr_bq">JSON Web Token (JWT) is a JSON-based open standard (RFC 7519) for passing claims between parties in web application environment. The tokens are designed to be compact, URL-safe and usable especially in web browser single sign-on (SSO) context.</blockquote>JWT is a means of providing a simple claims based authentication between a client and a server. The token is an encrypted piece of data which can be sent as part of a payload to allow access to restricted services. 
The full RFC is here <a href=""></a><br /><br />The basic gist of JWT is this:<br /><br /><ol><li>User authenticates with server (identity/password, oAuth etc)</li><li>Server generates JWT using a secret key and some payload (usually a JSON string that provides some basic identification information, a subset of the User object for instance)</li><li>Server responds with the JWT access_token</li><li>Client stores access_token in a cookie or local storage (we will be using local storage)</li><li>Client provides access_token in future requests.</li></ol><div. </div><br /><br />Waterlock is a nifty little user authentication / JSON web token management tool built for Sails:<br /><ul><li>Waterlock - <a href=""></a></li><li>SailsJs - <a href=""></a></li></ul>Using these two projects in tandem makes it ridiculously easy to get up and running with JWT and a decent REST API. Couple that with EmberJs - "a framework for ambitious web applications" - and you have a pretty nifty set-up.<br /><br />The best way of handling auth in an Ember app is through the use of Ember Simple Auth:<br /><ul><li>Ember Js - <a href=""></a></li><li>Ember Simple Auth - <a href=""></a></li></ul><div.</div><br /.<br /><br />So down to the code! The repo is here: <a href=""></a> - go ahead and clone it if that's what flicks your switch - I expect it does, else you wouldn't have read this far.<br /><br />There are 2 parts to this repo:<br /><ul><li>The Server</li><li>The Client</li></ul>I'll walk us through the interesting parts of the server initially - there's not much to it!<br /><br />: <a href=""></a>. And to get SailsJs dancing nicely with this spec we need to update the blueprints. To do this I use the brilliant repo by mphasize: <a href=""></a>. What this does is turn the standard JSON output from Sails into nice, neat JSON API compatible JSON. Neat.<br /><br />I've also included waterlock and waterlock-local-auth.
To get all these you just need to cd into the /sonatribe-api directory and run<br /><blockquote class="tr_bq">npm install</blockquote>Now - that's pretty much it - if you're doing these from fresh you'll need to run the generate blueprints command but that's all documented on the repo's readme.md - RTFM ;)<br /><br />The noteworthy config changes include in the /config/models.js file:<br /><br /><a href=""></a><br /><br />You can see that I've added the "associations" and "validations" nodes to the config. It just defines explicitly how you want these types of nodes to appear in the JSON output.<br /><br />Next up is waterlock.js:<br /><br /><a href=""></a><br /><br /.<br /><br />That's it for the API! Simple yeah.<br /><br />So now onto the client - this is where things can get a little... involved. But considering what you're getting - I'd say not too involved. You'll want to cd into the /sonatribe-ui folder and do the usual npm install && bower install stuff.<br /><br />I've included ember-cli-simple-auth and ember-cli-simple-auth-token which are set up in the /config/environment.js file: <a href=""></a><br /><br />Go with my defaults for now - you can mess with these later if you want.<br /><br />Things to note:<br /><ul><li>I've used pod structure - it's great for organising an app small or large.</li><li>I've used the new computed decorators: <a href=""></a> meaning we get Java style attribute decoration for things like @observe and @property and even @whateverYouWant</li></ul>All this works out of the box when you npm/bower install my repo. 
The login and register components can be found here <a href="" target="_blank"></a> - we let the framework do the work, so these components are ridiculously simple:<br /><ul><li>Login: <a href="" target="_blank"></a></li><li>Register: <a href="" target="_blank"></a></li></ul>The only noteworthy thing is the slightly different flow for registering as opposed to logging in - waterlock (currently) uses the same workflow, and if an un-registered user attempts to log in and "createOnNotFound" (<a href="" target="_blank"></a>) is true, waterlock assumes you want to create a new user - and provided the dupe checking passes (no duplicate username/email validations) - a new user is created. In future versions of Waterlock there will be an explicit /register endpoint. For now it's best to query the /users endpoint for email/username availability. I'm doing this here: <a href="" target="_blank"></a> (not wired up atm - I'll likely do that in the week to finish up).<br /><br /").<br /><br /!<br /><br />Wayne Douglas - Threading Primers<br /><br />I've been refreshing my brain on threading in C# recently and have compiled a list of resources I found excellent on the subject.<br /><br />I'll leave them here for future me and also for anyone interested in the topic - save some google bandwidth :)<br /><br /><ul><li>Joe Albahari - Threading in C# <a href=""></a></li><li>Async Programming: Unit Testing Asynchronous Code: Three Solutions for Better Tests - <a href=""></a></li><li>Async/Await - Best Practices in Asynchronous Programming - <a href=""></a></li><li>Talk: Async best practices - <a href=""></a></li></ul>Wayne Douglas - with .NET on ubuntu<div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="" /></a></div><br /><br />).<br /><br />To get started you need <a href="">vagrant</a> installed. You can run this on any OS: OSX, Windows or Ubuntu.
You'll also need <a href="">VirtualBox</a> - both free and OS :) <br />Then just pull my vagrant config from the <a href="">repo</a>.<br /><br />All you need to do then is open a terminal in that directory and type:<br /><br /><span style="color: #999999;">vagrant up </span><br /><br />BOOM!<br /><br />It takes a while to get loaded 1st time as it builds MonoDevelop from source but it's all set up and ready to go when it's finished.<br /><br />Vagrant is a great tool, lots of NET developers might not have heard of it but it makes creating dev (and prod) environments super easy.<br /><br />Enjoy :)<img src="" height="1" width="1" alt=""/>Wayne Douglas an ember-cli app inside a ServiceStack (or any) MVC appThe resulting code for this post can be found at: <a href="" target="_blank"></a><br /><br />We are looking into refactoring sonatribe to be an ember-cli SPA. Using the Discourse code base as inspiration I've been spiking various parts to see exactly how feasible this would be.<br /><br />This all started with us wanting to mimic the discourse header (ignoring the messy sub header):<br /><br /><div class="separator" style="clear: both; text-align: center;"></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="" height="228" width="640" /></a></div><br /!<br /><br />Reverse engineering the code I managed to stub a lot of the code for things like preload store and bound-avatar and basically get a functioning version of the header isolated into it's own ember-cli project:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="" height="82" width="640" /></a></div>Not bad! Wiring this up to our own search API I actually managed to get the grouped search results working. 
You can see this working search API is a really poor knocked-together ServiceStack API which has zero optimizations so ignore the poor perf on the results!<br /><br /.<br /><br /.<br /><br />So, serving the index.html as a dynamic (is that still the right term? It sounds so ASP 3.0!!!) page is pretty much a must have.<br /><br /.<br /><br />The idea here is to have 2 projects:<br /><span id="goog_255874629"></span><span id="goog_255874630"></span><a href=""></a><br /><ol><li>ember-cli app where we can develop the JS and CSS</li><li>an integration project where we can host index.html and inject session state and preloaded data</li></ol><div>To keep the integration project up-to-date when we make changes in the ember-cli project we can use Bower link (<a href="" target="_blank"></a>) - this will replicate changes to the integration project live using symlinks. You can link the /dist folder to the target project - you can see this here: <a href="" target="_blank"></a></div><div><br /></div><div>Rebuilding the project on changes can be done using the <span style="color: #666666; font-family: Courier New, Courier, monospace;">ember build --watch</span> command. </div><div><br /></div><div. 
</div><div><br /></div><div>We can then host our own contents of index.html as a view inside our hosted environment: <a href="" target="_blank"></a> </div><div><br /></div><div>So now we get the mixture of serverside code inside the ember-cli index.html and we can inject in session state and preloaded data!</div><div><br /></div><div>This means I can go back to my header spike project and replace all of the stubbed code with real user data and start to really figure out the auth aspects of this refactoring.</div><div><br /></div><div>While this is a ridiculously simple example - the project in github does show this all working:</div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="" /></a></div><div><br /></div><div><br /></div><div><br /></div><br /.<br /><br />w://<br /><br /><img src="" height="1" width="1" alt=""/>Wayne Douglas ServiceStack / RavenDb API on UbuntuI.<br /><br />ServiceStack runs great self hosted on Mono but I have been getting issues with RavenDb: async calls seem to fail and things like bulk inserts seem to fail with this:<br /><br />If anyone has any help or pointers on these i'd be massively grateful!<br /><br /><br /><img src="" height="1" width="1" alt=""/>Wayne Douglas conversations platform - CQRS knockout - part 2: GetEventStore.comFollowing on from part 1 (<a href="">CQRS Journey</a>) this is my implementation spike using GetEventStore. Assesing which CQRS setup we should use at sonatribe for the new conversations feature.<br /><br />The code can be found <a href="">here</a>. <br /><br />This implementation is quite a bit different so I'll discuss where they branch off from each other and maybe try and figure out the pros and cons.<br /><br /.<br /><br /).<br /><br />Moving on from there I have a console app (as opposed to a worker role) running with TopShelf (for easy promotion to Windows Service when required). 
The code for this is here: <a href="" target="_blank"></a>. Really simple way of wiring up the command handlers using Castle Windsor.<br /><br />Looking at the one and only command handler: <a href="" target="_blank"></a> you can see how easy it is to save an aggregate root into the event store using the <a href="" target="_blank">GetEventStoreRepository</a>. This was one area that the developer is left to figure out - finding this repository was a great help.<br /><br />You can see in the AR that when I create a new conversation it fires a new ConversationStarted event. This event is listened to in the read model. The wiring up for the read model you can see here: <a href="" target="_blank"></a>. Again - there is some work to do here to turn the received events into something that can be worked with. There is also some (overly simple) code here to dispatch the events to the event handlers.<br /><br />Inside the event handlers I am denormalizing the data to a RavenDb read model.<br /><br /.<br /><br />Next up I'll be adding some real features using each of these infrastructures. I like to work from an emergent design BDD style approach so will be looking at which of these works best under a test 1st dev cycle.<img src="" height="1" width="1" alt=""/>Wayne Douglas conversations platform - CQRS knockout - part 1: CQRS JourneyI'm about to start hacking away at a new part of the sonatribe API - a feature we've been calling <a href="">Conversations</a>. I've been a fan of CQRS/ES for a while now and think that the natural way that a conversation is a stream makes it a very good fit for some event store goodness. 
<br /><br />Where!<br />.<br /><br />I guess the main concerns for me are:<br /><span id="goog_1338697404"></span><span id="goog_1338697405"></span><a href=""></a><br /><ul><li>Simple dev cycle- as I mentioned above - complexity seemed to be a major time burner for us before - I want this implementation to be drastically simpler.</li><li>Testable - this is a blatant must - previously this was quite hard - there was very little guidance here also.</li><li>It needs to be able to run on my laptop on a train! I do a lot of this work during my commute!</li></ul><div>So - taking the CQRS Journey code (<a href="" target="_blank"></a>). </div><div><br /></div><div>I only managed to port over the SQL bits - the Azure ServiceBus part is not working yet and probably won't - for reasons I'll specify in my next post. </div><div><br /></div><div>I also decided to use RavenDb as my read model - Table Storage is OK but RavenDb is absolutely nuts in terms of speed and simplicity. </div><div><br /></div><div>Anyways - the code is here: <a href="" target="_blank"></a></div><div><br /></div><div" </div><div><br /></div><div...</div><div><br /></div><div></div><div><br /></div><div><br /></div><div><br /></div><div><br /></div><div><br /></div><div><br /></div><img src="" height="1" width="1" alt=""/>Wayne Douglas Checking in the Settings!!Ideally configuration hangs off of the build setting or (better still) the deployment process takes care of the configuration. If you're using Puppet or Octopus to deploy your code then this is super easy. Having this sort of infrastructure isn't always feasible especially on smaller projects with smaller teams. The project I'm working on at the moment is one such codebase. We share settings.xml files, there are many of them and each developer needs to configure these files as per their own environment as well as the deployed environments - per feature. 
This is a problem we are very aware of and we have scheduled tickets to take care of it, but in the meantime I'm going to look into how we can use Git to limit the pain of this.<br /><br />I am a massive culprit for checking in settings files - everybody does it - especially the nub on the team (me)... It's really annoying to the poor person who does an update after that point and then has to rework the settings files. So here are a couple of ways we can fix this (this is very basic git stuff, so if you're well versed in Git you'll be well aware of the following!):<br /><br />Open up a command prompt and navigate to where you keep your git repos. Mine are normally in d:\git<br /><br />Let's start with a fresh plate; run the following:<br /><br /><script src=""></script> We've just created a brand new git repo for us to play with. Very straightforward. We don't want to work in master, so let's create a new branch:<br /><br /><script src=""></script><br />Now that we have the dev branch we can make our settings file.<br /><br /><script src=""></script> Add some dummy dev environment config to the settings.xml file. Now we can add that to the dev branch and commit it.<br /><br /><script src=""></script> Now that that's safe and sound we can create our feature branch where we will want to do some work and manage our own config:<br /><script src=""></script> Put some code in test.cs so we can commit that into the wayne branch.<br /><br /><script src=""></script> Now let's customize the settings.xml file:<br /><br /><script src=""></script> Make the dev-specific settings feature-specific so you know if this works or not when we come to doing the switcheroo.
Now we want to commit the feature-specific settings into the feature branch: <br /><script src=""></script> Now we'll continue to do some more work: <br /><script src=""></script> And commit that into the feature branch: <br /><script src=""></script> OK, so now we have a dev branch with some settings in; we branched from dev to create a feature branch, did some work in that branch, configured the settings file to make it specific to that branch and then did even more work. Big day in the office. What we want to do now is to merge the work in the feature branch to the dev branch - but we definitely do not want the settings from the feature branch to show in dev; that's just going to wind everyone up. <br />To do this we can revert the commit we made to push the settings into the git branch. We can check the log to see the commit and revert it. To do this run the following: <br /><script src=""></script> <br />This will yield something like the following (type q to exit the log):<br /><script src=""></script> As you can see, the commit with ID "965f41809225abbe2ed2b74d6e162e27c9c18a72" is the one we want, so to revert this commit we can issue the following command:<br /><script src=""></script> And then we can safely merge our feature branch into dev: <br /><script src=""></script> Open up settings.xml and verify it's as expected - dev settings, not feature settings. <br />So that was one way of making sure we don't merge stuff we don't want to merge. If we wanted to do some more work in the feature branch and get the settings back, we can use the same approach above to undo the commit made by git revert - git simply creates a compensating action to revert the commit, and this creates another commit - so we can just revert the commit created by the compensating action (which will create another compensating commit - git is just event sourcing after all!). Another approach is to cherry-pick commits from the feature branch. This will apply a single commit to the branch you are merging into.
Here is the general gist of it: <br /><br /><script src=""></script><br />Git is powerful stuff - cherry-picking is basically the converse of the 1st approach. It shows how useful doing small, discrete commits can be. In general there are only a small handful of git commands that I use frequently - you can do so much with a small number of git commands it's ridiculous. But there are some extremely useful, more "advanced" features which really bring Git into its own. I'm still learning Git, but one thing I have found is that using it properly really pays off and there is always a sane solution to whatever insane mess I find myself in! <br /><br />Wayne Douglas - Testing ServiceStack services with RavenDB embedded<br /><br />To be able to run tests like the following regression test:<br /><br /><br /><script src=""></script> <br /><br />I use the AppHostHttpListenerBase in my BaseTest class, which is roughly the same as the following:<br /><br /><br /><script src=""></script> <br /!<br /><br />Wayne Douglas - Without CQRS and RavenDB<br /><br />Following on from my previous post about sonatribe, one of the hard decisions I had to make on this iteration was refactoring out the CQRS code to simplify the dev cycle and improve velocity. I'm a big proponent of CQRS - it answers a lot of questions in traditional dev, and while there is some upfront complexity it is far outweighed by the lack of accidental complexity apparent in traditional software projects.<br /><br />One of my favorite features of a CQRS system is the denormalized read layer - where the data stored in the read store maps directly to the DTOs coming out of the UI. When I refactored the code, I really wanted to be able to still get this sort of functionality.
Denormalized read models have been around a long, long time - SQL Server has "Views" where the developer can specify some SQL - normally making use of JOIN and UNION to present a readily available denormalized view over the data.<br /><br />In RavenDb we can achieve this using TransformResults. In a document database you model your data differently - rather than joins and lookup tables you can store reference data inside the main document (aggregate root). A good rule of thumb is that if you ever need to load that part of the data alone, independent of other data, then that data is a document of its own, with its own ID. Documents can, however, reference other documents using the Denormalized Reference pattern (). Denormalized references can become a chore when the referenced data gets updated - in this case you will need to PATCH the documents to reflect any changes needed.<br /><br />One of the cases where we have used TransformResults is to present the event profile (a festival's main page on the website) - in order to cut down requests to gather the data to present on this view (tags, images, users who are going, campsites, lineup info etc) we can aggregate all of that information in one TransformResult:<br /><br /><br /><script src=""></script> <br />While this looks quite the beast, it's actually quite simple - we're pre-compiling all of the associated data for each event. This means we can pull all of the JSON needed for an event profile using a single call!<br /><br /><br /><script src=""></script> <br />It's ridiculously simple, and without cache turned on we can return an event profile from the REST API in less than 100 milliseconds - with cache turned on and tuned we can smash that right down - but that kind of optimization comes much later.<br /><br />Now this isn't _really_ anything like CQRS - but it gives me a good-enough-for-now alternative.
At some point I will be bringing CQRS back into sonatribe, this time round most likely using Greg Young's GetEventStore () - previously I used JOliver's EventStore and CommonDomain (which is used in Greg Young's implementation, to my understanding). But for now I'm happy with this setup and its simplicity.<br /><br /><br />Wayne Douglas - Technical bits and pieces<br /><br />The sonatribe project makes use of a broad range of technology from PHP to Microsoft Azure and a lot in between. We host the main site on Ubuntu, but the backend is hosted on Azure and makes use of ServiceStack to build the REST API. The main components of this project are:<br /><br /><ul><li>PHP UI - running on an Azure Ubuntu VM</li><li>ServiceStack REST API - running on an Azure Windows VM</li><li>RavenDB database</li><li>EXTJS admin application</li><li>2 x Azure worker roles used to process event and artist information</li><li>Azure ServiceBus</li><li>Azure Cache</li><li>Azure CDN</li><li>SignalR backed chat server</li><li>Xamarin iOS app</li><li>Xamarin Android app</li><li>Facebook canvas</li></ul><div>Sonatribe has always been developed in our (stu, chris and myself) spare time - so no wonder it's taken ~3 years to develop and get the alpha out the door! The initial build of sonatribe (the site is currently on its third rebuild!) used a CQRS backend, and although I maintain this is the best architecture for a project like sonatribe, the complexity in the development/test cycle slowed progress too much for such a small team trying to get a working "something" out the door, so I had to make the hard decision to refactor the code and simplify it. </div><div><br /></div><div>Both the main website (PHP UI) and the Android & iOS apps use the same API for authentication and data. This is a great design as it means all of the business logic stays in one place. 
And when you're building a system that relies on multi-axis permissions/roles with a range of business logic and views over the data, this is a big advantage. </div><div><br /></div><div>The Extjs admin app allows us to import data from clashfinder. The data coming from that JSON is very basic and only specifies the act name, stage and time of the set. In order to build on that data to be able to provide spotify playlists, artist bios, images etc we have an import pipeline which also allows us to update that data (quite complex, as the clashfinder data has no Id information). Importing information for each act is quite a long running process: we gather as much information as we can about each artist, download images relevant to each artist and generate renditions for use in the site. Because of this, the import follows this flow (ignoring clashfinder updates - that's another story!):</div><div><ul><li>Initial import fired from admin app</li><li>The REST service picks up the import ID and pulls the data from clashfinder</li><li>The JSON is deserialized into POCOs</li><li>A RavenDb bulk insert operation is started</li><li>The locations (stages) are pulled out of the data and are added to our database</li><li>For each of the artist sets we create a new listing event - a simple document which specifies the act name, stage, start and end</li><li>The bulk insert is committed to the DB - this process can take ~1 sec for a large festival with ~2000 acts and ~190 stages</li><li>For each listing event that was created we push a message onto the Azure ServiceBus</li><li>The worker process at the end of this queue picks up each message and processes it asynchronously.</li><li>We check to see if we can find any information about the act by using their name - we check spotify, last.fm, music brainz, wikipedia etc</li><li>If we can find the artist we download their spotify track Ids, artist bio, images etc and create a new artist in
our DB, linking all of these assets.</li><li>The artist is added to the listing event and saved</li><li>If we can't find the artist by the act name alone we try to break the name down - searching for substrings - perhaps the act is a collaboration.</li><li>For each artist we find in the act name we add them to the listing event and save it away</li></ul><div>The admin site has its own schedule designer using bryntum scheduler which we can use to create and edit lineups. The preferred method atm is clashfinder imports due to the communal collaboration that goes into creating them.</div></div><div><br /></div><div>Using Xamarin to develop the Android and iOS apps means we can share a lot of the code base between the apps as well as make use of SignalR to provide chat functionality. ServiceStack also provides a great PCL client - it's a no brainer. The apps are currently in the early stages of development - at the moment you can log in, view festivals and their lineups. We aim to have very basic implementations out in the app stores before Glasto. The apps will be free :)</div><div><br /></div><div>We've recently released the first alpha of sonatribe () and while there are a few rough edges we're very pleased with the result. The platform differs from anything out there at the moment because of the social aspect of the site as well as the fact that we will be offering native Android and iOS apps. While the alpha is a very bare-bones (and in some places quite clumsy) implementation, we have a great platform to build on. This year our aim is to stabilize the platform, introduce the mobile apps and react to feedback. Our next big undertaking is tackling conversations, to build on the social aspect of the site.
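Stepping back to the clashfinder import flow listed earlier: the bulk-insert-then-fan-out shape can be sketched roughly like this. Every type and member name here is hypothetical - the real code lives in the REST service and worker roles:

```csharp
using System;
using System.Collections.Generic;
using Raven.Client;

// Hypothetical listing-event document, as described in the import steps.
public class ListingEvent
{
    public string Id { get; set; }
    public string ActName { get; set; }
    public string Stage { get; set; }
    public DateTime Start { get; set; }
    public DateTime End { get; set; }
}

public class ClashfinderImporter
{
    private readonly IDocumentStore store;          // RavenDB
    private readonly Action<string> enqueueEnrich;  // pushes a message onto the ServiceBus (assumed wrapper)

    public ClashfinderImporter(IDocumentStore store, Action<string> enqueueEnrich)
    {
        this.store = store;
        this.enqueueEnrich = enqueueEnrich;
    }

    public void Import(IEnumerable<ListingEvent> sets)
    {
        var created = new List<ListingEvent>();

        // Bulk insert keeps the ~2000 act / ~190 stage import around a second.
        using (var bulk = store.BulkInsert())
        {
            foreach (var set in sets)
            {
                bulk.Store(set);
                created.Add(set);
            }
        } // disposing commits the batch

        // Only after the commit do we fan the slow artist-enrichment work
        // out to the worker role, one queue message per listing event.
        foreach (var listing in created)
            enqueueEnrich(listing.Id);
    }
}
```

The point of the split is that the cheap work (the skeleton listing documents) lands synchronously, while the expensive per-artist lookups happen asynchronously behind the queue.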
We're planning on swerving the traditional "forum software" implementation and intend to build a conversation platform from the ground up based on the usage of our demographic - something we can grow and improve upon that will be suitable for the next 10 years, not extinct 10 years ago. We've always found forums to be too 1990's and we've always said Facebook waters down the semantics too much for something like this (the whole reason sonatribe exists). </div><div><br /></div><div>We've got a great set of features and a lot of work has gone into getting to where we are. We're listening to user feedback and looking forward to the next set of great features. There's a hell of a lot left to do though!</div><div><br /></div><div>w://</div><div><br /></div>Wayne Douglas<br /><br />Alpha<br /><br />Alpha = release to friends and family!<br /><br />Sonatribe has been a long time coming! We started the project as FestivalStar around 2009 in a kind of accidental, off the cuff development. Back then it was an iPhone app that supported multiple festivals which did quite well on the AppStore. Since then we've scrapped the iPhone app (for now), had a change of direction, a complete redesign, and about 4 iterations (and 5 years!) later we're about ready for an alpha release! Sonatribe is built by Chris, Stu and myself. We've been hacking away at this in our spare time so it's been a long and winding road.<br /><br />The alpha that we're releasing is by no means the end product. We've got the project to a state where we have a small, manageable set of features we hope will be useful to festival goers. Sonatribe development is constant and ongoing, and we'd like to generate a feedback loop with our users whereby we shape the site and its features based on that feedback. At the moment we have the skeleton to be able to provide you a small set of tools to socialize and plan your festival experience.
By using the site we can get an idea of your likes in terms of music genres and can provide you recommendations for bands/DJs and events you might not have heard of before.<br /><br />We've teamed up with clashfinder.com to provide lineup data (we have our own lineup designer in the backend too, which we feed back to clashfinder). We've also integrated with Last.fm to provide artist information.<br /><br />So what do you get? Like I said, it's early days, but at the moment you can browse festivals, see their lineups, create your own schedule, view act information (where they've played, where they're playing, bio, images etc), create a "campsite" and invite friends to it. When you have a campsite you get a private place to chat and a group schedule. You also get a (at the moment very simple) profile page like any social network, with private messaging.<br /><br />We've built Sonatribe from scratch to be exactly what it is - a social network for festival lovers. We've not shoehorned any blogging platform to create it. We have complete flexibility in its future direction and a metric ton of ideas for features and goodies. There's no advertising and we have no intention of splattering the site with any. We're not sure how the site will pay for itself as yet but we want to keep it as clean and user-friendly as possible.<br /><br />At this stage sonatribe.com is an alpha release and so you shouldn't expect a complete and finished product. There will be some workflows that could be better, some of the site may seem a little slower than we'd like and some things might just fail! There might be some features you'd like to see and some you'd like to tweak. All of your feedback is valuable and will help shape the project as it matures.<br /><br />How to provide feedback: on the right hand side of the site there is a tab you can click to provide feedback.
If you want to be notified of progress via email you can join the sonatribe-users email list: <a href="" target="_blank"> </a><br />How to access the site: you can get to the site at: [You'll be provided the super secret URL!] - you will need a Twitter, Google or Facebook account to log in.<br /><br />Rules of the road:<br /><ul><li>You can invite some friends - we'd like to get some real world usage. </li><li>We might have to cap new members at some point while we work out the inevitable creases, but for now we'd just like to put the site through its paces. </li><li>If you're going to a festival that isn't listed just pop us an email - get in touch using the "Feedback and support" tab. </li><li>This is an alpha, so things might go wrong for you; if anything dodgy happens please let us know and we'll be able to make the site more stable using your feedback.</li></ul>And finally, we aren't affiliated with anyone, nor do we claim to be. If you're a festival organizer we'd love to hear from you. Our intention is to provide a means for users to discover your events and help build your community. We're building tools to help you manage your events and interact with your community, which we aim to provide for free.<br /><br />Looking forward to receiving your feedback!<br /><br />Wayne, Chris, Stu <br /><br />Wayne Douglas<br /><br />Android 3G helper<br /><br />Managing 3G connectivity in Xamarin Android is a nause - you have to mess about with JNI and a host of other gruff. I've created this little helper to make it easy: <br /><br /><br /><script src=""></script><br /><br />Wayne Douglas<br /><br />tip – deploying unstable builds regularly [REPOST]<br /><br />When significant updated builds of RavenDb are posted to the Hibernating Rhinos build page (which is fairly frequently – a good thing) it means I need to deploy the updated DLLs to the servers. We currently manage 8 instances of RavenDb – 1 for each service that supports sonatribe.
That can be an arduous task – especially as we host 4 instances on 1 server and 4 on another.<br /><br />I’ve been playing about with PowerShell recently and thought this would be a good opportunity to get my teeth into it. I was initially trying to get the WebDeploy stuff to work scripted but that was a massive nause.<br /><br />I ended up just going with a script I can run on each server. Here are the steps I needed to accomplish:<br /><br />stop IIS<br />remove the old files from the db folder, including the bin folder and all its contents, but excluding the Data directory and the web.config and log4net.config files<br />copy the contents of the deployment folder (which I set up to include the plugins needed) to the database directory<br />restart IIS<br />Simples.<br /><br />Here’s the script:<br /><br /><br /><script src=""></script><br />Nice and simple. Just create an array of strings containing the paths to the folders where RavenDb is hosted. Loop through them, deleting and copying. Restart IIS.<br /><br />This saves me about an hour a week and also guarantees that I don’t accidentally miss something. It’s a shame the web-deploy stuff in msdeploy didn’t work – we host the databases in IIS so it would have been a suitable solution and would have meant I could deploy from my dev machine without even needing to log onto the server. I expect I’ll try and tackle that at some other point.<br /><br />w://<br /><br />Wayne Douglas<br /><br />a stab at generating strongly typed Facebook Graph C# classes for .Net/Asp.Net (My attempt) [REPOST]<br /><br />The facebook API is flakey to work with at the best of times. However, as Deap mentioned on his blog post (facebook documentation requires login (showstopper)), it’s gotten even worse if you want to automate the process by scraping the docs.
I’ve managed to take a stab at this using a different method to retrieve the docs – this method allows us to first log in to the FB docs and then scrape away, building up the strongly typed objects on the way. Here’s my branch: FacebookSdk-contrib on github. This is using HtmlUnit – a Java project – so we’re also using a wrapper to run it under the IKVM runtime. It’s nearly there. Just some final bits to iron out. w:/<br /><br />Wayne Douglas<br /><br />predicates in RavenDb [REPOST]<br /><br />Replication in RavenDb is pretty fekin sweet. I helped Oren by submitting bugs and testing it out in sonatribe dev, pushing it in a real world usage scenario. One of the features I asked about was the ability to filter out parts of the namespace for replication – instead of the ‘all or nothing’ approach used by default. Oren (as you’d expect) simply replied with a request for a patch. So here is my patch.<br /> <script src=""></script> <br /><br />My fork is here: <a href=""></a><br />Basically, my aim is to allow the developer to set replication filters – so only a part of a namespace is replicated, rather than the whole db. To do this I’ve added a ReplicationPredicate class in the Raven.Bundles.Replication.Data namespace. This is used when initialising the db:<br /><pre class="brush: csharp; title: ;" title=""></pre>It’s a work in progress but I’m finding it hard to get the time to commit to it due to the new baby.
If anyone fancies lending a hand, there’s a test in the SimpleReplication.cs file: Can_filter_replication_using_replication_predicates.<br />w://<br /><br />Wayne Douglas<br /><br />Mvc action selector – with list support [REPOST]<br /><br />Here is my updated AcceptParameterAttribute class with support for data bound lists:<br /><br /> <script src=""></script> Some sample code to use this updated version:<br /><br />decorate the action with something like:<br /><br /><script src=""></script> <br />and then the buttons in the list can have the following format for the name:<br /><br /><script src=""></script> hth<br /><br />Wayne Douglas<br /><br />Solr in Apache on Windows<br /><br />I've been messing about with Solr a lot recently and although the finished setup is great at what it does, getting it set up in the first place is a complete nause - especially on Windows. Anything to do with Java is going to be an XML configuration nightmare so, to help myself in the future and whoever else stumbles into the same issues I had, I'll document the process here.<br /><br />First up, you want to ensure you have the latest Java SDK installed. Grab that from Oracle and do a standard install.<br /><br />Next up, you want to get Apache Tomcat installed as a Windows Service. I installed the latest to date (Tomcat 8.0.0-RC5 (alpha)) - grab that from here: <a href=""></a> and follow the instructions.<br /><br />I'm not going to go into the details of installing the packages above as they are standard installations and you should accept the defaults in most cases.<br /><br />Now that we have Tomcat installed and running we can use it as the container to host the Solr instance. Grab the latest build of Solr from their website (). This is where the install gets a little interesting. Getting Solr running for your requirements will mean altering a lot of config XML files.
It's obviously very handy to have those XML files under source control, so keeping the config files inside the default locations within Solr is not going to work. In this setup we'll want to point Solr to a home directory other than the default so we can keep the config files under Git's control.<br /><br />Once Solr has downloaded, unzip it and grab the solr***.war (where *** is the version number) file from the solr-4.5.0\solr-4.5.0\dist folder and place it inside the lib folder in Tomcat (C:\Program Files\Apache Software Foundation\Tomcat 8.0\lib) - rename it so that it is called Solr.war. Now grab all of the jar files from the solr-4.5.0\solr-4.5.0\dist\solrj-lib and solr-4.5.0\solr-4.5.0\dist folders and place them inside the same lib folder. It might not be necessary to install all of the jar files, but incrementally adding them by deciphering Solr's exceptions takes an age and I doubt it has any negative impact.<br /><br />As I mentioned before, we want to store the config outside of the normal Solr installation so that it can live in the main repo folder along with whatever other supporting code you may have. To do this we need to tell Tomcat where to find our Solr home by placing some XML inside Tomcat's conf directory (mine is at C:\Program Files\Apache Software Foundation\Tomcat 8.0\conf\Catalina\localhost). Here is an example which points Solr to c:\solr<br /><br /><script src=""></script><br /><br />Now restart the Tomcat service. If everything has gone well, navigating to should bring up the Tomcat homepage. Navigating to will bring up an error complaining about collection1 not being available. This is to be expected as we haven't provided the core config yet.<br /><br />At this point, if you're seeing other issues you're going to need to start using the logs directory to debug whatever problems you may have.
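For reference, the context descriptor that the (now-empty) embed above pointed at is a standard Tomcat per-application context file. Treat this as a sketch rather than the author's exact file - the paths are the ones from the walkthrough:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Saved into conf\Catalina\localhost - the filename (minus .xml) becomes the URL path -->
<Context docBase="C:\Program Files\Apache Software Foundation\Tomcat 8.0\lib\Solr.war"
         debug="0" crossContext="true">
  <!-- Tells Solr where its home (config + cores) lives -->
  <Environment name="solr/home" type="java.lang.String" value="c:\solr" override="true"/>
</Context>
```

The `solr/home` environment entry is what lets the config live outside the webapp, under source control.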
If you have reached the same point as me then you can carry on the installation as follows.<br /><br />Inside the Solr download folder there is an example installation. Grab the solr folder from inside the examples (\solr-4.5.0\solr-4.5.0\example\) and copy all of the files to c:\solr - or wherever you have decided your config home directory is. You'll need to open solr.xml and replace the contents with this:<br /><br /><script src=""></script> <br />Rename the collection1 folder to test-solr as that is what is specified in the solr.xml. Restart Tomcat and reload. You should now see Solr's homepage (albeit with a warning like: Error Instantiating SearchComponent, solr.clustering.ClusteringComponent failed to instantiate)!!<br /><br />That SearchComponent is a contrib to the Solr project. For this example I have no need to get it working so I'll just remove it from the config. Edit the solrconfig.xml file to look like this: <a href="" target="_blank"></a> (linked, not previewed, as it's huge).<br /><br />Now restart Tomcat and refresh the Solr homepage. You should now be able to select the new test-solr core in the UI!<br /><br />As a final note - unless you want to keep the Solr index (you don't) I would add the data folder to the gitignore file - or svn ignore it - whatever SCC you use.<br /><br />Wayne Douglas<br /><br />Android apps using Calabash<br /><br />BDD testing ASP.NET Mvc websites is a well beaten path with the likes of Nunit, Specflow, Selenium WebDriver etc, but testing the features of an iPhone or Android app with an automated UI test suite is not so well covered. Xamarin have recently bought out LessPainful (); the guys who make Calabash (). Calabash is an open source project which aims to deliver BDD driven UI tests to the mobile device. LessPainful allows developers to run those tests in the cloud on a huge number of devices. Xamarin are taking LessPainful and adding a C# API (currently Calabash is driven through Ruby).
Tests are written using the Cucumber DSL and custom steps are implemented using Ruby. With a C# API, developers will be able to define steps using Nunit, which means SpecFlow could be used to add Cucumber support and automatic step skeleton creation - potentially sharing feature files across Android / iPhone as well as HTML 5 mobile app tests. <br /><br />For now though there is no public access to Xamarin's TestCloud (), the new name for LessPainful. The Calabash source is still open and available on GitHub, so anyone can set up and run BDD style tests using Calabash - just no TestCloud access for a while. <br /><br />At Psonar we wanted to get end to end coverage on UI features and though a lot of the code is covered by unit tests, there's nothing like getting feedback from automated UI tests. Setting up Calabash was not a straightforward "install this, run this; BOOM" type of process, so I've written up my findings for future devs to get their heads around the process - it is a worthwhile investment of time. I'm no Ruby/Java dev, and the exceptions, error reporting and versioning hell that I've just been through haven't persuaded me to jump ship yet either! Some understanding of the two is very helpful though.<br /><br />First of all you need to make sure you have Ant () installed. I've not had any problems installing this; it's a very straightforward "unzip to c:\ant, add ANT_HOME env var". Simples. <br /><br />Next up you want to download Ruby 1.9 + DevKit (version 1.9 is important as 2.0 has no compatible Gherkin lex parser - this tripped me up big time): <br /><br /><a href=""></a><a href=""></a><br /><br />Installing Ruby is simple, just double click and run. Installing the DevKit is similar to Ant in that you need to unzip it to c:\DevKit and add that path to your PATH env var. Open a cmd, navigate to c:\DevKit and run the following two commands:<br /><br /><script src=""></script> <br />This configures the DevKit in your Ruby installations.
Next up you need to use the following command to install the calabash-android gem:<br /><br /><script src=""></script> <br />If you are getting errors at this point about not being able to compile native gems, you probably installed the wrong versions of Ruby and the DevKit.<br /><br />All being well, you should now be able to create a new folder for your Calabash tests, navigate to it and run:<br /><br /><script src=""></script> <br />This will install a default skeleton Calabash directory structure with some example cucumber code. Adding your app to the directory involves (in Visual Studio):<br />Select the Release build (Calabash only supports release build APKs atm). Ensure you have no SDK linking in the project's properties view. Select Tools/Deploy Android Project. You need to specify the Android debug key located in "AppData\Local\Xamarin\Mono for Android". Specify the following when prompted:<br /><br />password: android<br />alias: androiddebugkey<br />key password: android<br /><br />Open a new cmd and run:<br /><br /><script src=""></script><br /><br />This fires up Android ADB - the debug bridge that effectively outputs all of the banter Android has with itself; very handy for debugging problems. Once the app is published and you've followed these steps, copy the APK to the Calabash test folder and in another cmd window run:<br /><br /><script src=""></script> This tells Calabash to copy the app to the device or emulator, and the Calabash server (installed next to your app on the device) will start receiving delegated commands issued from the calabash process running on your computer (which interprets the Cucumber test scripts and turns them into commands sent to the device as JSON). To get you started there are a bunch of pre-canned test steps on the Calabash GitHub wiki:<br /><br /><b><a href=""></a> </b><br /><b><br /></b>My advice (if you are a Xamarin developer) would be to stick rigidly with the pre-canned steps for the moment.
Stepping outside of this safety wall will mean you will more than likely face porting them over to C# at some point in the future. While I am sure Xamarin will maintain the Ruby implementation, no one really wants 50/50 Ruby/C# definitions to maintain. The pre-canned steps seem to be fairly comprehensive and allow for a lot of expression, so only advanced scenarios would absolutely require custom step definitions. Although using only pre-canned steps is a lot more verbose (you can't aggregate steps for "Given" style steps, for instance), it's a price worth paying for now. <br /><br />I will try and add a follow up to this post explaining the process of adding iPhone tests - sharing the feature files across both OS's. Very tasty indeed!<br /><br />w<br /><br />wayne-o<br /><br />the website - 30/10/2012<br /><br /><p>This is a follow up to the 1st in this series: <a href=""></a></p> <p>And the second: <a href=""></a></p> <p>And the third: <a href=""></a></p> <p>And the fourth: <a href=""></a></p> <p>Battery died on the train to Nottingham yesterday so I shall be starting back on it tonight! Looking back at my final sentence before the battery karked it I see: "Let's go and add the IRepository<T>.GetAll() method then"... I best look at the tests and make sure that's where I left off...</p> <p>After having a ganders I can see that I have added a MongoInstaller and MongoCollectionComponentLoader (wholesale robbed, bar a couple of minor mods) which allow me to inject a lazy loaded MongoCollection<T> into my repositories. My repository implementation now looks like:</p> <script src=""></script> <p>And a test:</p> <script src=""></script> <p>So, now we have the repository working with MongoDB (!!) and it's being injected into the controller, so we can now mock and test and actually use MongoDb in our project.
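The repository gists above are no longer embedded; as a rough sketch, assuming the legacy MongoDB C# driver of the era, an injectable Mongo-backed repository along these lines would fit the description (interface and class names inferred from the post, everything else assumed):

```csharp
using System.Collections.Generic;
using MongoDB.Driver;

// The IRepository<T> abstraction the post talks the controller into accepting.
public interface IRepository<T>
{
    IEnumerable<T> GetAll();
    void Save(T entity);
}

// Windsor injects the lazily resolved MongoCollection<T>, so the repository
// itself stays trivially thin and easy to mock in the behaviors tests.
public class MongoRepository<T> : IRepository<T>
{
    private readonly MongoCollection<T> collection;

    public MongoRepository(MongoCollection<T> collection)
    {
        this.collection = collection;
    }

    public IEnumerable<T> GetAll()
    {
        return collection.FindAll();
    }

    public void Save(T entity)
    {
        collection.Save(entity); // insert-or-update by _id in the 1.x driver
    }
}
```

Keeping the interface this small is what makes the TDD loop in these posts cheap - a Moq mock of `IRepository<Category>` is one line in a test.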
Now we need to go and modify the AdminControllerBehaviors test which checks the result of calling the Categories action, so that we check to see if a view model is returned which contains a list of categories. That test now looks like:</p> <script src=""></script> <p>If we run the specflow file now we're moving ahead again, except we fail at:</p><script src=""></script> <p>This is because even though I know we are grabbing the categories from the DB, we never actually saved them - because we never had a test that demanded it. Now we do, so let's extend the repository to add some persistence:</p> <script src=""></script> <p>Now we just need to go back and change the CategoriesAdd POST test so that we make sure we save the category into the DB:</p> <script src=""></script> <p>I've also whizzed through and updated the test for the loading of the categories action so that we are asserting that we are returning some categories:</p> <script src=""></script> <p>And the updated CategoriesController now looks like this (I'm not suggesting this code is production ready but it passes the test!): <script src=""></script> <p>I also added the following to the Categories view:</p> <script src=""></script> <p>Now that we have persistence using MongoDb (which I had never used before), all of the IoC and MVC infrastructure stuff set up, and the ability to save categories, I feel pretty good about the progress. Picking off the categories feature was going easy on myself because it was the easiest feature to drop into. But I also knew that this feature would bear the brunt of the infrastructure bootstrapping. Although this feature isn't 100% complete and the code that is there isn't exactly tight, I am happy that it's a good grounding to build on. I'm going to leave it there tonight. Next session I need to complete the feature and make the specflow test pass.
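The controller gist is also missing, but a hedged guess at the shape of the categories GET action described above - a view model wrapping whatever `IRepository<T>.GetAll()` returns - would be (all names beyond those mentioned in the post are assumptions):

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Web.Mvc;

// Minimal stand-ins so the sketch is self-contained.
public class Category
{
    public string Id { get; set; }
    public string Name { get; set; }
    public bool Enabled { get; set; }
}

public interface IRepository<T>
{
    IEnumerable<T> GetAll();
    void Save(T entity);
}

public class CategoriesViewModel
{
    public List<Category> Categories { get; set; }
}

public class AdminController : Controller
{
    private readonly IRepository<Category> repository;

    public AdminController(IRepository<Category> repository)
    {
        this.repository = repository;
    }

    // GET /admin/categories - loads every saved category into the view model
    public ActionResult Categories()
    {
        return View(new CategoriesViewModel
        {
            Categories = repository.GetAll().ToList()
        });
    }
}
```

The behaviors test then only has to stub `GetAll()` and assert on the returned `ViewResult.Model`.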
I'll also set up a public deployment for the staging code - at some point I should hook this into TeamCity / Octopus Deploy but that's further down the line.</p><br /><br />wayne-o<br /><br />the website - 29/10/2012<br /><br /><p>This is a follow up to the 1st in this series: <a href=""></a></p> <p>And the second: <a href=""></a></p> <p>And the third: <a href=""></a></p> <p>Monday morning, 6:00am!! I left the site on Friday with a passing step, which basically meant that the add category form wouldn't b0rk on submission. I know there are some blank spots to fill here (persistence) but I don't have a failing step yet, so I'm staying ignorant of all these things - I'll tackle them when the tests require me to.</p> <p>From the feature file I can see that we are failing on the following step: "Then I should see "category1" show up in the list of existing categories", because when we click the Save button we get a 404 POSTing to /admin/categories/add. This was a simple coding error with inconsistent action names: the GET action was called CategoriesAdd and the POST was called Categories - so the routing wouldn't work POSTing to that URL - I need to make the routing tests more detailed.</p> <p>The good thing about allowing my development to be guided by small steps in a feature file is that I can easily regain continuity and remain focused during disparate, micro sprints: TBD (Train Based Development!). To get this step passing I just need to add a redirect in the POST action so that we're headed back to the main category page where we can add the list of existing categories. Before I can add the redirect to the action I need to make sure I have a failing test:</p> <script src=""></script> <p>Here is what the action now looks like in order to make that test pass:</p> <script src=""></script> <p>We're still not out of the woods however - we still have a failing step and in fact I think the woods just got darker!
The reason we're still failing is because we aren't showing a list of existing categories. Existing categories is the first mention of real persistence - we hinted at the fact that we were storing the categories somewhere when we submitted the form, but there was no concrete spec suggesting that was the case. Now we have the requirement to pluck categories out of somewhere. This means a decision on the storage mechanism must be made. Because I don't want the unnecessary complication of an RDBMS plus ORM, I have already opted out of SQL. This leaves me with the 3 good doc based storage engines that I would choose from:</p> <ul><li>RavenDb</li><li>Mongo</li><li>Riak</li></ul> <p>And without much adoooo I can tell you that I have opted for MongoDb on the basis that it is free and has a large community. [installs mongo + NuGet for driver in both MVC and Behaviors projects]. So I'm happy with the idea that I'm going to be fetching a set of Categories from MongoDb. To do this I need to pass a repository into the Admin controller which I can use to talk to the DB. This means going back to the first test and making the AdminController instantiation work with an IRepository<Category>:</p> <script src=""></script> <p>And the code for the AdminController now looks like:</p> <script src=""></script> <p>That gives us the ability but we're not yet using it, so let's sort that. To display the categories we need to be able to get all of them from the repository. Let's go and add the IRepository<T>.GetAll() method then.</p> <br /><br />wayne-o<br /><br />the website - 26/10/2012<br /><br /><p>This is a follow up to the 1st in this series: <a href=""></a></p> <p>And the second: <a href=""></a></p> Third day; third blast at the site and it's been 11 days since I last looked at the code. I've done a lot of client work and a lot of work in between then and now so I don't really have a clue where I left it!
Looking at the selenium tests I can see that the site spins up and tries to reach the admin/categories/add URL - there's no link, so I guess that's as good a place as any to start.<br />Without getting anal about testing the UI by precompiling it and blah, I'm going to dive right in, add the link to the /admin page and get that step passing. It's as simple as:<br /><script src=""></script> <br />One step further! Now the selenium test moves the browser to the admin/categories/add URL but again there's no code to handle this. Again - all URLs start with a route, so I look at the routing test and find there's no route test defined for this URL, so I go ahead and add that: <br /><script src=""></script> <br />This test obviously fails as there's no route actually defined - tests don't make code durrrrrr... I'll add that to the BrightonSausageCoEcomms.RouteConfig class:<br /><script src=""></script> <br />Chuck in a cheeky little view in /Views/Admin/CategoriesAdd.cshtml with its corresponding empty view method returning View() in the controller (test first, obviously) and we're done! Spinning up the selenium test again lets us go another step further. The next line in the feature is: "And I enter "category1" into the "CategoryName" textbox". This is again going to fail because we have no CategoryName text box, so I'll add that:<br /><script src=""></script> <br />Spinning up the selenium tests again I can see the browser is taken to the add category page and the CategoryName is populated! It's almost too much to handle... There's also a checkbox to add, so I'll rinse and repeat until I get past that line. Once that's all good I stumble on the next big line: "When I click the "Save" button". I'll add the save button then, shall I?! Now the problem isn't really here - now that I've added it, that line goes through ok... The problem is the next line: "Then I should see "category1" show up in the list of existing categories".<br />That's a pretty big statement!
First up I added the submit button but since there's no form nothing happens. Even if there were it's not going anywhere as the controller has no clue what to do with such a request. Even if it did - what the hell does it do to make that sort of magic actually happen? Curiouser and curiouser... I'll add the form tag and see wha gwarn... The code now looks like:<br /><script src=""></script> <br />Perfecto... except an HTTP POST to that URL goes nowhere - although we have the routing set up we only know how to accept the HTTP GET verb - let's make the controller able to deal with the POST verb by adding another action.<br />This action is going to be slightly different because we're going to be receiving data from the form's post. The name of the category and the enabled value will be sent to the action in the form of a view model. A view model is just that - it's the model specific to the view. A model is just a data structure. So our test will need to pass this view model into the action. And the action will need to be decorated with the HttpPost attribute and it will return a redirect to the admin/categories view but that's pretty much all we know for now. Here's the test (I'm leaving out the attribute part for now as I'm on the train and the reception is awful!):<br /><script src=""></script> <br />Notice we now have a CategoriesAddViewModel - I used resharper to drive the definition of that so by the time I had the test written I had the overload for Categories(CategoriesAddViewModel) created and the CategoriesAddViewModel inside the mvc project - albeit a blank class - it's enough to get this test passing for now. <br />That's it for now - I'm done for today.
Not a lot achieved but I'm fried!! the website - 15/10/2012<p>This is a follow up to the 1st in this series: <a href=""></a><p> <p>Second day; second blast on the site: Adding SpecFlow step definitions and driving the UI through Coypu / Selenium. I'll also take a look into the DB choice.</p> <p>I've wired up an MVC4 app with Castle Windsor IoC and added references to Nunit, FluentAssertions, Moq and Coypu to the behaviors project and made sure it's all ready to go.</p> <p>To start we want to drive the whole development from the UI down. To do this I want to have the story I created yesterday turned into steps I can use to drive a regression test through the browser - as it's a browser based app. To begin then, I copy that same story into specflow and generate the step definitions: <h2>SpecFlow:</h2> <a href="" imageanchor="1" style=""><img border="0" height="195" width="320" src="" /></a> <h2>Gist:</h2><script src=""></script> <p>The output from the generate steps command gives us the skeleton with which we can build out the tests for the UI. Let's go through each of the steps and populate them with the stuff we need:</p> <h3>[Given(@"I am an admin user and I have logged in")]</h3><p>Considering we aren't going to bother with security and logging in yet I can leave this one blank for now. There's something you can do to tell SpecFlow that it's not going to be implemented yet but I don't have internet here so can't google it. I'll come back to that when I'm on the train tomorrow morning. For now I can leave a //TODO: remark and it will be added to the task list in Visual Studio.</p> <h3>[Given(@"I have navigated to the manage categories screen")]</h3><p>This is where we can start adding actual code! To drive the browser we use a thing called BrowserSession - this is a Coypu thing. You tell it how to drive the browser and it takes care of pushing stuff to selenium.
To get one I add the following to the specflow steps file:</p> <script src=""></script> <p>There's some stuff going on here that works the way specflow expects it to work - notably the FeatureContext stuff. Maybe chat about that more later - if not - google it ;)</p> <p>With that we can start to drive the UI (note I have set up VS to fire the site up using a static port - I could have added a hosts file entry to resolve some DNS and set up IIS so that it could handle it):</p> <script src=""></script> <p>Now, if I go back to the specflow file and hit ctrl>t+r (resharper shortcut for run tests) I see that firefox is spun up to the URL we specified and gets a 404. We have a failing step! The reason is obvs because there is nothing in the site to resolve that URL. To make this pass we need to tell the server how to handle requests on that URL. The way MVC knows how to handle requests and forward them to controllers / actions is by its routing mechanism. So the very first step we need to make this step pass is to configure the routing. To do this in a TDD way we need to go down a level and add a behavior to check that the routing is configured.</p> <p>To do this I need a place to hold the routing behaviors. I'll add a new folder called Routing and put an AdminRoutingBehaviors.cs file into that folder.</p> <h2>Emergent design</h2><p>At this point I'll introduce a small helper which will take a URL and return the route data:</p> <script src=""></script> <p>So... emergent design: This is where we allow the flow of the test to really drive the development and design of the software. Here's some info on <a href="">AgileSherpa</a> on the topic. Here's some info on the IBM website about <a href="">evolving architecture and emergent design</a> </p> <p>I want to make sure the URL "admin/categories" can be handled by the routing system. A lame but sufficient name might be: When_navigating_to_manage_categories_url_result_is_correct.
It's not that great but it will do for now :)</p> <p>Starting with what we know about this behavior - we want to have some result - and it will be the result we expect. I.e. it will be from the admin controller and it will be the categories action. That's enough to get us started. The very first line we write might be (using fluent assertions):</p> <pre class='brush: csharp'><br />action.Should().Be(expectedAction);<br /></pre> <p>Now this is going to be glowing red all over VS as we have no action and what's more, trying to compare nothing to expectedAction which also doesn't exist is just never going to work.</p> <a href="" imageanchor="1" style=""><img border="0" height="87" width="320" src="" /></a> <p>Move the cursor to expectedAction and hit ctrl>space to get resharper to offer contextual help and select "Create local variable expectedAction". Make it a const string and give it a value (you should keep using resharper alt+enter to use contextual help to get to this - try not to use the mouse):</p> <script src=""></script> <p>Now hover over the result declaration and do the same. This time make it equal to the return value of GetRouteData:</p> <script src=""></script> <p>We now have another variable that needs declaring: url. Move the cursor over that and hit alt+enter. This time it's going to be a const string again:</p> <script src=""></script> <p>Add to that the test for checking the controller part of the URL is correct and you should have:</p> <script src=""></script> <p>The test is complete in that it will test what we want to test - but it still won't pass - hit ctrl>t+r and you'll see red. To get this to actually pass we need to implement the code in the RouteConfig class in the Mvc project. The route we want looks like: </p> <script src=""></script> <p>Heading back over to our specflow file and spinning up the tests still results in a 404! Obviously because sure as shit we've not got a controller or corresponding action to route the request to!
To get that done I'm going to need more tests. I'll add a Controllers folder to the behaviors project and inside that I'll add an AdminControllerBehaviors.cs file. I've come up with another snazzy method name for the first test in this class: When_requesting_the_categories_admin_page_result_is_correct</p> <p>Let's start with the result again and work out from there:</p> [emergent design video] <p>Heading back over to the specflow test we're going to find that it still doesn't pass because there's no view to return. We can add a blank view now. Now the specflow test should be showing yellow. This was only a [Given] part of the test - the test setup for the feature if you like. The next step is to implement the first part of the scenario: </p> <script src=""></script> <p>Maps to the following step:</p> <script src=""></script> <p>To speed this up I'm going to populate all of the steps in this feature and then I can go through and implement them. I end up with the following steps:</p> <script src=""></script> the website - 14/10/2012Before I start coding the site I thought it'd be a good idea to turn the project into a series of blog posts illustrating agile development using BDD / emergent design. This post assumes you know about things like TDD/BDD, Agile, emergent design, Specflow, Selenium/Coypu. At this point in time I have created a git repo for the project, added a website project and a twin behaviors project. I have made some decisions early on which you may or may not have decided on. Mainly my dev / test tools and the core stack:<br /><ul><li>Resharper 7</li><li>Asp.Net Mvc 4</li><li>Castle Windsor IoC</li><li>Nunit with Specflow and Coypu</li></ul>This is my stack of choice.
I haven't mentioned any sort of database tech above because I haven't decided on that one yet - it's likely to be some doc based db however.<br />The reason I am coding this from scratch is because from my brief investigation into the free / opensource e-coms world nothing ticked my boxes in terms of quality, simplicity or design. Having thought about some alternatives I came across google checkout which allows me to create a site with a catalog and they take care of the shopping basket and checkout process. Leaving me to worry about the site design and admin.<br />I'm no designer so I've farmed that out to a friend of a friend and it will be integrated later down the line. First I'm going to focus on features. In true agile / bdd style :)<br />This is a side project for a friend and I've spent all the money on the design so I am going to be getting the initial phase of work out in short sharp and neat sprints (train journey length sprints!). Which leads me to the first set of admin stories (I'm going to create the backend first) for sprint #1:<br /><ul><li>Manage categories</li><li>Manage products</li></ul>Let's not worry about the customer facing site, logging in to the admin area, viewing and managing orders and customers or any of that yet - that can all come in a later sprint. To start I'm going to pick the easiest to begin.<br /><br />Expanding Feature #1 'Managing Categories': (As an admin user) I would like to be able to add / remove / edit category names. I think the ability to enable and disable categories would be good. I also think that a product should be able to belong to more than one category so they act more like tags - predefined tags I guess. Categories are only there as meta - if there are no tags, users can still find products by standard search (which will come later), categories provide a nice visual means of sorting and filtering products. 
Categories also offer a means to provide faceted information where users can see how many products are in each category. So with that we have our first SpecFlow file:<br /><br /><br /><script src=" Manage Product Categories"></script> <br />In the next blog post I'll start adding the Coypu / SpecFlow step definitions which will drive the development of this site from the UI down. I'll show the use of emergent design to lead the development in an agile, lean and focused way. I'll also begin thinking about which database we'll be using and the thoughts / reasoning behind the choices and the eventual decision.
NOX/LOCA version 10.2 will be released as part of Trilinos 10.2 in March 2010. We have fixed a long standing issue with NOX regarding overloading of the ostream operator. If you are compiling NOX and get a compile/link error that fails to find the "operator<<" method on one of your classes, then you need to move your overloaded ostream function for that object into the correct namespace (the same namespace as the object it is operating on). The following article by Herb Sutter explains this issue: NOX/LOCA version 10.0 will be released as part of Trilinos 10.0 in September 2009. NOX/LOCA version 9.0 will be released as part of Trilinos 9.0 in September 2008. NOX/LOCA version 8.0 will be released as part of Trilinos 8.0 in early September 2007. Current changes/additions: Teuchos::ParameterList p; p.set("MyPID", myPID); p.set("Output Processor", printProc); p.set("Output Precision", 3); p.set("Output Stream", outputstream); p.set("Error Stream", outputstream); Teuchos::ParameterList& info = p.sublist("Output Information"); info.set("Error", true); info.set("Details", true); info.set("Outer Iteration StatusTest", true); info.set("Outer Iteration", true); NOX::Utils utils3(p); NOX/LOCA version 7.0 will be released as part of Trilinos 7.0 in early September 2006. NOX and LOCA have undergone many important changes between the 6.0 and 7.0 release: NOX/LOCA 4.1 is being released as part of Trilinos 6.0 on September 1st 2005. General release 4.1 notes: LOCA release 4.1 notes: LOCA portability notes:
Basic Arithmetic in C++ Introduction: Basic Arithmetic in C++ Computer programming is a relatively new field of study. One of the most common and basic coding languages is called C++. This is used by big companies such as Google, Apple, and Microsoft to create applications. One example is the Google Doodle that appears on the Google homepage. This animation is often based on programming code in C++. One thing you may not realize is how simple it is to get started learning computer programming. Throughout these instructions you may see an option below a step labeled "Advanced." This means that it is optional to read. You do not need to read this portion to complete the exercise. In this set of instructions you will learn to apply basic arithmetic functions and to output the results using C++. Step 1: This will list the materials you need. Step 2: You will be directed to a website that will run your code. Step 3: You will declare and initialize your variables (give numbers to variables x and y.) Step 4: You will perform simple addition and the sum will appear in the output box. Step 5: You will test other arithmetic functions. Step 6: You will combine arithmetic functions. Step 1: Materials You will need a computer, phone, tablet, or any other device that can access a webpage and a keyboard. You will also need Internet access at the time you are following the instructions. Advanced: If you have more experience or ambition to further your studies in computer science, you may use your own compiler to carry out these instructions. For example, Visual Studio is my preferred compiler and Code Blocks is another popular compiler. Step 2: Set Up Go to the Code Chef website () as it is an online program that will allow you to test your code from any device. Keep in mind, this website does not save your work! The window in the middle contains the default set up code. This is also displayed in the figure. 
The meaning of this code is not necessary to complete the instructions, however if you are interested, it is described below. Advanced: #include <iostream> using namespace std; These lines of code are necessary to import a library of information for the computer defined commands we will be using. Just like you would use a school library to locate and define unknown information, a compiler uses its libraries to figure out what each command you provide means. "iostream" in particular is the default library. It stands for "input/output stream." This tells the computer how to read in input from the user, and how to send output to the screen. int main( ) { //your code goes here return 0; } This part of the code sets up the main function. A function is a set of code that you, as the programmer, would create to do something useful. The main function is where you put all of the code you want to be tested. int - This stands for integer and tells the computer that you will be returning a whole number. main ( ) - The empty parenthesis implies that the function does not need extra information to run (these would be called parameters.) { } - Curly braces are used to group a function's code. The open bracket indicates the start of a group of code, and the close bracket indicates the end. return 0; - This tells the computer to output the integer 0, indicating that the main function had no issues. If something major went wrong with the code, it would output -1 instead. We will not encounter this issue. // your code goes here - Any line beginning with two forward slash marks is called a "comment." These lines are not seen by the computer at all. They are meant only for the programmer to use in explaining the functionality of the code.
You can test several numbers at the end of this Instructable (in fact I encourage that!) but for now I recommend simple numbers. For example, I used "4" and "8." The key is to make your screen look just like my image!! Don't forget the semicolons on the end! Advanced: The variable names can be any set of letters and/or numbers that are not recognized by the computer as keywords. Spaces are not permitted but underscores are often used to separate words instead. For example, "unit_cost" would be a good variable name. Step 4: Output the Sum The command used for printing something to the screen (output box) is cout << followed by what you want to print. First you will print the sum of your variables. This is displayed in the figure as cout << x + y; Now, press the "Run" button at the bottom right of your CodeChef screen. This should compile and run your code, displaying the sum of your numbers in the output box below your coding box. You may need to scroll down to see it. Mine displayed the number 12. Step 5: Other Arithmetic Operations Try adding other arithmetic functions such as subtraction, multiplication, and division. Use the same cout command but with operations - , * , and / in place of the + . I chose to do y / x because y holds the larger value and would therefore give a simpler answer than x / y. When you are finished editing your code, press "Run" again to view the new output. Advanced: If you were to do x / y instead of how I wrote it above, you would find the output as 0. Although you would expect one half or 0.5 , the integer type only allows whole numbers. Therefore, when the answer is 0.5 , it will drop all of the decimal values and give you 0. The same is true for if you perform 25 / 2. You will get 12 instead of 12.5 . Another interesting operation is % which returns the remainder in a division. So using the example above 25 % 2 would give you 1 because 25 divided by two is 12 with a remainder of 1. 
Step 6: Combining Operations C++ allows you to also combine arithmetic operations. The compiler follows the order of operations. If you do not know the order of operations, see this link: . In this example, you make another integer variable "z" and set it equal to "7." Next, you perform the operation x + y * z and obtain the result of "60." This shows that CodeChef performed the multiplication of "y" and "z" first and then the addition of "x." Advanced: You are welcome to include parentheses as well. For example, ( x + y ) * z would output 84 because it performed the addition of "x" and "y" first. Step 7: Conclusion Now you know how to output solutions to arithmetic operations using integer variables. You are able to create as many integer variables and operations as you like. If you would like to develop your C++ coding skills further, here is a helpful link: - There are 9 pages of information explaining the basics of coding in C++. If you make it through all 9 pages, you will have covered many if not all of the topics covered in an average Intro to Computer Science semester-long course.
#include <avr/io.h>
#include <avr/interrupt.h>

volatile uint16_t myMeasurement;

int main(void)
{
    DDRC &= ~(1<<DDC0);            // Pin A0 as input
    DDRD |= (1<<PIND7);            // Pin 6 as output
    ADCSRA |= 1<<ADPS2;            // Prescaler=16, i.e. 1MHz
    ADMUX |= 1<<REFS0 | 1<<REFS1;  // Internal 1.1V ref used
    ADCSRA |= 1<<ADIE;             // Enable the interrupt
    ADCSRA |= 1<<ADEN;             // Enable the ADC
    sei();                         // Enable interrupts (global)
    ADCSRA |= 1<<ADSC;             // Start first conversion
    while(1){
        // stay alive
    }
}

// The interrupt method
ISR(ADC_vect)
{
    uint8_t lowPart = ADCL;
    myMeasurement = ADCH<<8 | lowPart;
    ADCSRA |= 1<<ADSC;  // Trigger new conversion
    PIND ^= 1<<PIND6;   // Flip pin 6 on arduino
}

// The interrupt method
ISR(ADC_vect){
 154:   1f 92        push   r1
 156:   0f 92        push   r0
 158:   0f b6        in     r0, 0x3f   ; 63
 15a:   0f 92        push   r0
 15c:   11 24        eor    r1, r1
 15e:   2f 93        push   r18
 160:   3f 93        push   r19
 162:   4f 93        push   r20
 164:   8f 93        push   r24
 166:   9f 93        push   r25
 168:   ef 93        push   r30
 16a:   ff 93        push   r31
    uint8_t lowPart = ADCL;
 16c:   20 91 78 00  lds    r18, 0x0078
    myMeasurement = ADCH<<8 | lowPart;
 170:   40 91 79 00  lds    r20, 0x0079
 174:   94 2f        mov    r25, r20
 176:   80 e0        ldi    r24, 0x00  ; 0
 178:   30 e0        ldi    r19, 0x00  ; 0
 17a:   82 2b        or     r24, r18
 17c:   93 2b        or     r25, r19
 17e:   90 93 da 01  sts    0x01DA, r25
 182:   80 93 d9 01  sts    0x01D9, r24
    ADCSRA |= 1<<ADSC; //trigger new conversion
 186:   ea e7        ldi    r30, 0x7A  ; 122
 188:   f0 e0        ldi    r31, 0x00  ; 0
 18a:   80 81        ld     r24, Z
 18c:   80 64        ori    r24, 0x40  ; 64
 18e:   80 83        st     Z, r24
    PIND ^= 1<<PIND6; //Flip pin 6 on arduino
 190:   89 b1        in     r24, 0x09  ; 9
 192:   90 e4        ldi    r25, 0x40  ; 64
 194:   89 27        eor    r24, r25
 196:   89 b9        out    0x09, r24  ; 9
}
 198:   ff 91        pop    r31
 19a:   ef 91        pop    r30
 19c:   9f 91        pop    r25
 19e:   8f 91        pop    r24
 1a0:   4f 91        pop    r20
 1a2:   3f 91        pop    r19
 1a4:   2f 91        pop    r18
 1a6:   0f 90        pop    r0
 1a8:   0f be        out    0x3f, r0   ; 63
 1aa:   0f 90        pop    r0
 1ac:   1f 90        pop    r1
 1ae:   18 95        reti

[quote]This piece of code generates a pulse of 29.41kHz on pin 6 - far from the expected 0.5MHz ...
One thing I can comment on though - are you taking into account the number of instructions taken to process the interrupt? ...That's 26 instructions from the point the interrupt routine is triggered to when the next ADC sample is started. A normal conversion takes 13 ADC clock cycles. Looks like you are assuming that the ADC only takes one clock to do a conversion, whereas it actually takes more (around 12 AFAIR - check the datasheet). Quote from: majenko on Nov 11, 2012, 03:01 pm: One thing I can comment on though - are you taking into account the number of instructions taken to process the interrupt? ...That's 26 instructions from the point the interrupt routine is triggered to when the next ADC sample is started. You mean, I should write my code in such a way that the occurring number of instructions is lower? Unless I go down to the assembly level, I don't think this is possible. But, I have the feeling that there must be a way... I think I understood your confusion now. The ADC prescaler selects the ADC clock (to be 1/16 in your case) but each conversion may take multiple (around 15) ticks of the ADC clock to complete. Since one conversion takes 13 ADC clock cycles, a maximum ADC clock of 1MHz means approximately 77k samples per second. This limits the bandwidth in single-ended mode to 38.5 kHz, according to the Nyquist sampling theorem. it is 29.4kHz Yes (I got 31kHz) but that's for two samples (because you flip the pin for each sampling and two flips complete a period) -> sampling is done at close to 60kHz. That's very close to the 77kHz figure in the datasheet (without considering latency).
@LBerger it was answered in the github opencv issues. OpenCV forces a tcp connection. So I changed the server to an http stream. Thanks for the help @LBerger I've tried this solution, but it has the same behavior with the "udp://192.168.55.151:8554/" link. Is it possible that the problem is from the server? @LBerger but I get the same exact error from ffmpeg when I try to connect with tcp. If I change to udp in the ffmpeg command, the decoding is successful. So I guess that I need to tell opencv to use ffmpeg with udp instead of tcp, right? I want to process and display a network rtsp stream that is created from a raspberry camera. I have this code: #include <iostream> #include <functional> #include <opencv2/highgui/highgui.hpp> #include <opencv2/imgproc/imgproc.hpp> int main(int argc, char** argv) { cv::VideoCapture * stream = new cv::VideoCapture("rtsp://192.168.55.151:8554/"); if (!stream->isOpened()) return -1; cv::namedWindow("rtsp_stream", CV_WINDOW_AUTOSIZE); cv::Mat frame; while (true) { if (!stream->read(frame)) return -1; cv::imshow("rtsp_stream", frame); cv::waitKey(15); } return 1; } When the stream is not live, the execution of this results in: [tcp @ 0xa12480] Connection to tcp://192.168.55.151:8554?timeout=0 failed: Connection refused Which means that the stream tries to connect with tcp. When the stream is live, the execution results in: [rtsp @ 0xb07960] method SETUP failed: 461 Client error From internet research I found that the problem may be that the stream uses udp. If I change the URL to: "udp://192.168.55.151:8554/" Then the execution freezes in the call to cv::VideoCapture("udp://192.168.55.151:8554/"). How can I solve my problem? Edit 1: VLC is able to open the stream. Edit 2: As I am given to understand, ffmpeg is used to decode the stream.
When I run: ffmpeg -rtsp_transport udp -i rtsp://192.168.55.151:8554/ -t 5 test.mpg the stream decoding and saving is successful. So how can I specify the lower level protocol to be udp? Edit 3: If I use the ffmpeg command with tcp instead of udp, I get the same error as with the c++ code, 461 client error. I want to save Mat data to a float array, process them, and then save them back to Mat. I don't want to use the .at function, because the processing code is already written and uses some temp arrays. I have this piece of code:

cv::Mat test(cv::Mat original)
{
    float * image_data = new float[original.rows * original.cols];

    //Save to array
    init_data_image_32(original, image_data);

    //process

    //Restore from array
    cv::Mat res(original.rows, original.cols, CV_32F, image_data, original.cols);
    cv::namedWindow("b", CV_WINDOW_NORMAL);
    cv::imshow("b", res);
    return res;
}

bool doublesEqual(double a, double b)
{
    double margin_of_error = 0.0001;
    return abs(a - b) < margin_of_error;
}

void init_data_image_32(cv::Mat image, float * image_data)
{
    for (int i = 0; i < image.rows; i++)
        for (int j = 0; j < image.cols; j++)
        {
            image_data[i*image.cols + j] = image.at<float>(i, j);
            if (!doublesEqual(image_data[i*image.cols + j], image.at<float>(i, j)))
                std::cout << image_data[i*image.cols + j] << " " << image.at<float>(i, j) << std::endl;
        }
}

The imshow of the resulting image in window "b" has nothing to do with the original; it's this one: Furthermore, init_data_image_32 has some lines of output from the if clause: nan nan nan nan nan nan .. What am I doing wrong?
#include <app/cntfldst.h> Provides access to the text stored in a contact item field. An object of this class can be retrieved using CContactItemField::TextStorage(). Reimplemented from CContactFieldStorage::ExternalizeL(RWriteStream &)const Externalises the field data. Reimplemented from CContactFieldStorage::InternalizeL(RReadStream &) Internalises the field data. Reimplemented from CContactFieldStorage::IsFull()const Tests whether the field storage contains data. Reimplemented from CContactFieldStorage::RestoreL(CStreamStore &,RReadStream &) Restores the field data. Converts an array of text strings from plain text into Symbian editable text, appends them to a single descriptor, separating them with the new line character, and sets this as the text which is stored in the field. Any existing field text is replaced. The text is truncated to KCntMaxTextFieldLength characters if necessary. Converts a text string from plain text into Symbian editable text, and sets this as the text which is stored in the field. The text is truncated to KCntMaxTextFieldLength characters if necessary. Sets the field text. The text field object takes ownership of the specified descriptor. The function cannot leave. Sets the field text from a descriptor array. Each descriptor in the array is appended to the text field storage. They are separated by paragraph delimiters (CEditableText::EParagraphDelimiter). Any existing text is replaced. Sets the field text. This function allocates a new HBufC descriptor, deleting any existing one, and copies the new text into it. The function can leave. Converts a copy of the text stored in the field from Symbian editable text format into plain text and returns it as a pointer descriptor. Reimplemented from CContactFieldStorage::StoreL(CStreamStore &)const Stores the field data.
Whenever a class implements Serializable, it’s a candidate for object serialization. The serialization mechanism converts an object into bytes and then writes the object to the output stream. We use the class ObjectOutputStream to serialize an object to a file and then ObjectInputStream to restore it. import java.io.FileInputStream import java.io.FileOutputStream import java.io.ObjectInputStream import java.io.ObjectOutputStream fun main(args : Array<String>){ //Destination File val file = "belchers.burgers" //A map of family val family = mapOf( "Bob" to "Father", "Linda" to "Mother", "Tina" to "Oldest", "Gene" to "Middle", "Louise" to "Youngest") //Write the family map object to a file ObjectOutputStream(FileOutputStream(file)).use{ it -> it.writeObject(family)} println("Wrote $file") println() println("Time to read $file back") //Now time to read the family back into memory ObjectInputStream(FileInputStream(file)).use { it -> //Read the family back from the file val restedFamily = it.readObject() //Cast it back into a Map when (restedFamily) { //We can't use <String, String> because of type erasure is Map<*, *> -> println(restedFamily) else -> println("Deserialization failed") } } } The example program writes a map of strings to a file using object serialization. It begins by creating a map of test data on lines 11-16. Line 19 opens the file by creating a FileOutputStream object and passing in the file name to the constructor. The FileOutputStream object gets passed to the newly created ObjectOutputStream. We apply the use() function to make sure all resources are closed when finished. Writing the map to the file is painless. All we need to do is use the writeObject() method found on ObjectOutputStream, shown on line 19. The class does all of the work of flattening the family Map object into bytes and writing the bytes to the file. The use() function closes the file and the serialization process is complete. Reading the object back into memory is almost as simple.
We open the file by creating a new FileInputStream object and supplying the constructor with the file name. The FileInputStream object is supplied to the constructor of the ObjectInputStream and we chain it to the use() function to make sure the file gets closed when finished. The object is restored with the readObject() method, but there is a catch. The readObject() method returns Any. It’s our job to downcast to the proper type. On line 31, we use the when() function and on line 33, we check that it is a Map. Since map is a generic interface and serialization doesn’t save type, we use *, * for the type arguments. At this point, we can work on the restedFamily object normally.
https://stonesoupprogramming.com/2017/11/25/kotlin-object-serialization/
In this 360AnDev talk, Ana discusses what she’s learned about security and data privacy. Smartphones have become an extension of the user, allowing them to buy merchandise and more.

Introduction (0:00)

My name is Ana. I come from a company in Europe that’s called Infinum. We do design and development. We’re an agency. We work with a lot of clients. In my line of work, I take care of security for banks. Security in general is a vast topic. I will try to focus on basic things that we can do to improve security in our applications. Throughout the presentation you’ll see that adding them up results in a good product. Banks tend to have different approaches to security. Some focus on prevention, but most banks tend to focus on mitigation. That means that one day, when somebody makes a huge withdrawal from your account, they will call you up and ask you, “Did you pay a certain amount of money to that company?” This is great, but it’s something that should be an added layer of security. The first layer should be the one you put in the quality of your application and all the standards that you apply to the build itself.

Build integrity (1:46)

Let’s start with the basics: build integrity. What I mean by build integrity is all the little things you can do when creating a project. Creating your super secure application, you can do small things that aren’t very complicated but will have a huge impact later on. You need to add a release keystore to your application. We have a big team, and it often happens that a few of us have to work on the same application. While we’re in debug mode, I really hate it when I give my phone to another colleague and tell him to deploy his changes on my phone, and he has to reinstall it because the signatures don’t match. A good rule of thumb, which we have begun using, is that when you create a project you create a release keystore immediately. And you sign all your builds with that one.
The simple reason is that you will need that release keystore for publishing to your Google Play account. The other reason is that you don’t get irritating reinstalls between your team members as you develop. One release keystore can be used for all build types. I suggest you do use it for all build types. You really don’t want to lose that keystore. Ever. You learn from experience. Our experience was that a few years ago we had a colleague that worked on a project from scratch. He developed an application. He finished the application and pushed it to our Google Play account. A few months later he left the company. And a few months after that, the client said, “Oh, we would like an upgrade.” Fine. “What do you want us to do?” They give us the specification. We implemented. Then there comes the day when you have to push that update to your Google Play account. And we don’t have the keystore. Our colleague was super secure and placed the keystore outside of the git repository. Somewhere else. The problem is that we didn’t know where that somewhere else was. So you have to tell your client, “Whoops, we made a mistake. We cannot publish your application.” That’s really bad for you. You don’t want to end up looking like an amateur. Thankfully, the laptop that the colleague was using was still in our company. That problem was averted, but I cannot describe the shame that you feel when you tell the client that you have misplaced the only thing that’s needed for another publish or an update of the application. Your keystore needs to be safe. If somebody acquires your keystore, they hopefully won’t have access to your Google Play account, hopefully. If they do acquire your keystore, they can repackage your application, add some malicious code inside, and put it somewhere else. Publish it on a different site. Send emails to random people saying, “Hey, this is a new cool version of Facebook. 
Why don’t you download it from this link?” If they signed it with your keystore, or Facebook’s keystore, the users will hopefully not download it, but if they do, they will be able to upgrade the app. This is a huge issue. Keeping your keystore separately stored somewhere, not in your git repository, is mandatory.

signingConfigs {
    release {
        storeFile file("myapp.keystore")
        storePassword "password123"
        keyAlias "keyAlias"
        keyPassword "password789"
    }
}

You don’t keep your release keystores in your git repository because again, somebody could technically get to them, and if you have a large team and your team members leave, you don’t necessarily want them to have access to it. Another thing that people do is put keystore data directly in their build.gradle files. You don’t want to put this data directly in your build.gradle because, again, somebody can get to it.

local.properties:

KEYSTORE_PASSWORD=password123
KEY_PASSWORD=password789

This is just one alternative. It’s an easy and obvious one, but you can put key-value pairs in your local.properties or gradle.properties or somewhere else. You can use system environment variables and then reference them, or you can use properties and then parse them.

try {
    storeFile file("myapp.keystore")
    storePassword KEYSTORE_PASSWORD
    keyAlias "keyAlias"
    keyPassword KEY_PASSWORD
} catch (ex) {
    throw new InvalidUserDataException("…")
}

But the main reason is that you can reference something that doesn’t contain the actual data from your keystore. That’s one way to mitigate this issue. Another thing that people usually do by default, but don’t know the implications of, is to enable obfuscation. You usually obfuscate to minimize the app. To remove unnecessary resources. To shrink the build. But you also make your code unreadable. And it’s crucial because making your code unreadable for you will also make it unreadable for somebody else.
release {
    minifyEnabled true
    proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.txt'
    signingConfig signingConfigs.release
}

This is, again, from the generated build.gradle file. You see references to .txt files that contain your Proguard rules. People don’t like Proguard. Builds fail because when you add Proguard in the beginning, you develop, you add libraries, but you kind of forget to add rules for those libraries. And most self-respecting libraries have a little section in their readme that says, “For Proguard, please add these two lines.” But people don’t do that until they have to release their build. That’s the last bullet here, staging versus production. You wait until production to try out your build that has Proguard in it. You should add Proguard to all your builds, whether it’s debug or whether it’s staging or production. It will cause problems when you try to debug it, but at least you’ll know straight away that you have forgotten to add some kind of rule. Otherwise the build fails, and then you have to go through the libraries that you added, try to find the one that broke the build, and add rules for that. If you add them as you develop and add other libraries, then you should have fairly few problems with it. If you don’t like Proguard, and I think we agreed that you don’t, you have other options. These are just two: tools that you can add to your builds, and you can write rules for your application. You don’t just get minimization and obfuscation; you can also add some build tampering detection. They’re powerful. I think one of them has trial periods, but they are both commercial solutions. If you look at obfuscated code, it’s the code you see that first year in college: when you just started programming, and you’re so super hyped. Everything’s going to be awesome and short. I’m just going to use one-letter words for variables and it’s going to be really compact.
public abstract class e {
    private int a = -1;
    private String b = null;
    protected boolean k = false;

    public abstract void a(Intent var1);

    protected final void a(String var1) {
        this.b = var1;
    }

    public final void c() {
        this.a = -1;
        this.b = null;
    }

    public final boolean d() {
        return this.k;
    }
}

This is an example of that. It’s hard to read, but that’s the point behind it. The final question: we think that we have managed to secure our APK at this point, but the truth is you can reverse engineer Android applications very simply. There are tools, and by the half million results shown here, you can really start with anything. The problem is that your APK is a zip file. Unwrapping it and using some tool to get data out of it is not a big problem. Hence the tampering detection. When you’ve obfuscated your file, and you’ve added your keystore, you want to check whether somebody has downloaded it, maybe changed it a little, and whether there is something wrong. Potentially if it’s run on a rooted device or something else.

context.getPackageManager()
    .getInstallerPackageName(context.getPackageName())
    .startsWith("com.android.vending")

There are three simple things that you can do with your build. Verify the signing certificate at runtime: you can place your signature somewhere in the app, preferably split across several variables so that it’s not placed somewhere obvious. You can also verify the installer; the few lines of code above check whether the installer is Google Play. Another thing to check is whether the app is run on an emulator or if it’s debuggable. Again, this is very simple to do. Another problem we had is, at one point in time we started getting a lot of crash reports for an application, which was weird, because the frequency of the reports was very high. Upon inspecting with Crashlytics we found that someone was running the application on a rooted emulator device.
Having some kind of dialog that would come up when your app is run on an emulator or is debuggable, telling the user, “Hey, there is something wrong with this build, please don’t use it,” or maybe exiting the app, would be a much better solution than allowing the users to do whatever they want to the build.

Data privacy (12:43)

The next part that I want to address is data privacy. I think that most users are very sensitive about their data. I’m talking pictures. I’m talking conversations with your family or loved ones. You don’t want other people poking and prodding at it. Android says that there are three basic ways to store and retrieve data. We have internal storage. We have external storage. And you can use content providers for the same thing. We’re asking ourselves if the data is private. Internal storage, in general, is private. It belongs to your app. Only it can access it. It’s fine for most things you want to store for your app. The shared preferences are a subpart of the internal storage. Whenever you want to store some kind of user preference you use shared preferences. It’s not for big data, just simple information. It’s also stored in the private section of your app. The external storage, as it says by definition, is generally readable and writeable. That means that your app can change the data, other apps can change the data, and the user can come and delete whatever kind of configuration, picture, or file you placed in the external storage. That’s not private, by definition. And the third thing, content providers, are more of a storage mechanism used for sharing data between applications. One good use for this would be if you have an application that requires you to log in. Logging in to that application will also allow you access to some other application, so you don’t have to log in twice.

<provider
    android:exported="true"
    android:protectionLevel="signature"
    ... />

It’s safe, but you have to do a little writing to make it safe.
Those two things shown above are all you need to make your content providers secure. They have to be exported for other applications to use them, and the protection level for them has to be set to signature. This means that only an application that’s signed with the same keystore as your default application can use the content provider. We’re referring to the things we’ve already done with our build. This is an overview of those storage mechanisms. All we need to know in general is that internal storage and by definition share prefs, are private, while the other two options are not. Or it depends on how you configure them. But the question is whether it’s safe. Generally, yes. Until you do something with your device. And that’s rooting it. Once you root your device, everything you’ve done to ensure privacy is out the window. You can root your device for malicious reasons, or if you just want to remove the bloatware that comes with it. Either way, it’s fine, but it’s a reality and you cannot influence it. One solution is to encrypt stuff. The safest way to keep data on your device is to not keep it at all. But if you have to, encrypt it. Find some library or tool that suits your needs, and encrypt all the things that you need to have encrypted in your application. I will not go into all the options, but you don’t have to reinvent the wheel. Use whatever is available. When you encrypt stuff, you might still want to prompt your user to provide some kind of authentication method, some kind of PIN or password. That PIN or password should probably be the key that you use for the encryption. Wrong. If you use some kind of PIN to encrypt data, how many options for four digit PIN can you come up with? 10,000. 10,000 options, just for cracking the application. If you have time, I know you all here don’t, but if you have time and you’re really set on breaking something, you can go iterate through them. 
Try the basic things: birthdays, anniversary dates, all the things that people tend to use as passwords. Using some other method like a password is a better option, but it depends on its length and complexity. One other thing I wanted to mention here is that when you unlock your phone with your PIN, like I said, you have 10,000 options to try out, but you will start with the obvious ones. If you use a lock screen pattern, you have raised the bar about 40 times. There are about 389,000 unique, distinct patterns you can create on your device. That’s always a better option than your PIN. But again, human nature is the core of most problems in everything including security, so most people start at the top left point, and just draw three or four items. That number decreases drastically. If you use some kind of password or something, don’t use it directly. You can use something that will transform your PIN or password into something more complex. You can use Bcrypt. This is just one of the suggestions; it’s an algorithm that transforms your PIN into some kind of key that’s longer. It iterates through its magic, its function, and it’s much harder to crack, because the computational power that’s needed to crack that kind of password is far beyond what’s needed for MD5 and SHA1 or other hash functions that you can get to. Encrypt the data and don’t do it directly with the PIN. Transform it into something more complex. My question to you at the end of this section is, can your data remain private, in any way? No. Private, definitely not, because rooting your device allows access to everything, including the data. But if you don’t encrypt data and you keep it on the device, then you’re just asking for trouble. Don’t allow misuse of the user’s data. After we take care of our build integrity and implement some kind of privacy for the user’s data, the next thing we need to take care of is network security.
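The key-stretching idea described above, never using the raw PIN as the encryption key, can be sketched with Python's standard library. Bcrypt itself is not in the standard library, so PBKDF2 stands in for it here; the PIN, salt, and iteration count are illustrative values, not recommendations from the talk.

```python
import hashlib
import os

def stretch_pin(pin: str, salt: bytes, iterations: int = 200_000) -> bytes:
    """Derive a 32-byte key from a short PIN.

    The slow, iterated derivation is the point: checking one guess now
    costs hundreds of thousands of hash operations instead of one, which
    makes brute-forcing all 10,000 four-digit PINs far more expensive.
    """
    return hashlib.pbkdf2_hmac("sha256", pin.encode("utf-8"), salt, iterations)

salt = os.urandom(16)   # stored alongside the ciphertext; it is not a secret
key = stretch_pin("4921", salt)
print(len(key))         # 32-byte key, suitable for a symmetric cipher
```

The same shape applies whether the input is a PIN, a password, or a pattern: the derived key, not the user's secret itself, is what feeds the cipher.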
Network security (20:45)

If you want to make things safe you will use HTTPS. HTTP is a text protocol and the text is right there. If you want to compare HTTP to something, it would be like writing your credentials on a postcard and sending it through the mail to the intended recipient. Anybody can intercept it, anybody can read it, and of course, anybody can use it. HTTPS, as you all know, encrypts the communication channel, which makes the whole communication between the app and the servers more secure. I say more secure because this is not enough to prevent a man-in-the-middle attack. (If you’re really playful, you can use a Charles proxy and place it to intercept data between the app and the servers. If somebody gets hold of your device and installs the Charles proxy certificate on it, you will not get any kind of warning. Your device will think that it’s talking to the server, and the server will obviously think that it’s talking to your application, regardless of the HTTPS.) The solution for this problem is to pin stuff. Pinning certificates has its merits. We use it frequently because it adds that extra protection layer that is needed in some of the applications that we have. What it does is define which certificate authorities are trusted. Without certificate pinning your application will trust all the certificate authorities that are placed on your device, which is fine, but if your client wants to use a custom certificate, then you have to add it somehow. You can add it on your device, but having to ask 4,000 users to install an additional certificate just so they can use your application without warnings and problems is irritating. Nobody wants to do that. So, you pin the certificate in the application, and the effectiveness of the attack is reduced and the users can use your application to communicate with the servers.
okhttpbuilder
    .pinClientCertificate(resources, R.raw.client_cert,
        "pass".toCharArray(), "PKCS12")
    .pinServerCertificates(resources, R.raw.server_cert,
        "pass".toCharArray(), "BKS")
    .build();
return new OkClient(client);

This is for an older version of okhttp, which we use because who doesn’t love Square libraries? This is all that’s needed to pin certificates. This example also shows that we pinned two different certificates, client and server, which is not necessary, but pinning the server certificate is a must. What happens when the certificate changes and you have pinned it? Again, I think you learn from experience. You get a call Monday morning from your client saying that the app is broken, nothing works, the world is about to implode. You ask them what has changed and they say, “Nothing, everything’s the same. We just updated the certificates.” Then you ask yourself and the client, “Okay, do you remember the time when we pinned that certificate in the application?” They respond, “Sure.” And they’re still drawing a blank. This is a problem that we generally have with our clients. They don’t really understand the impact that the implementation has on the release cycle of the app and on the security features on the server. When you do pin certificates, you need to know how to inform your users and your clients that the change will break everything. Suppose you do need to change your production certificate, and you have some kind of mechanism where, when you log in to your application, the server tells you, “Okay, there’s a new version available. Please upgrade.” This doesn’t work, because the login call is probably under HTTPS itself, which means that there’s no communication at that point. You want some kind of outside mechanism that can notify users that there’s another version of the app available: “Please install it due to security reasons” or whatever you need to tell them.
Or you can even use Google Cloud Messaging or Firebase messaging to notify them of the change. Another fun thing is many users don’t have auto-update set in their devices. This varies from region to region, and it also varies between users and their age group. Older users tend to not update their apps, even if their phones are burning, because that’s evil: “I’m used to the version I had before.” But this also poses a problem for you and your development cycle because users will not be able to use the application. And most users tend to respond, “It’s not working.” There’s no context. There’s no way you can help them, unless they’re informed of the changes that will have to happen. Also, one thing to plan in advance for is the impact that server setup has on your devices. If you go to the Qualys site, and type in any kind of URL from your website or something else, you get the complete security overview of your site. If you’re an A or A plus or minus, good job. If you have a lower grade then you probably need to rethink your security strategy on the server. Where it impacts Android is that raising the security level on your site will also kill off some of the devices. As you know, TLS 1.0 is obsolete. It’s not to be used anymore. But dropping it will also eliminate the Android 2.3.7 devices that, believe me, are still used. Clients are reluctant to eliminate them. A good thing to do before you upgrade your security is to look at this site and see what kind of impact it will have on your users. It’s context aware, as most things are, so you need to find a compromise between the reach that you want to achieve and the security on the server. Another thing that you can do, and this is the moment where Android has begun producing serious and good things, is use the platform to your advantage.
android:usesCleartextTraffic="false"

StrictMode.setVmPolicy(
    new StrictMode.VmPolicy.Builder()
        .detectCleartextNetwork()
        .penaltyLog()
        .build());

Starting from Android M, you have a one liner that breaks all HTTP calls in your app. As simple as that, you can disable all HTTP calls. You go through your app. It breaks somewhere, it’s not working. No data. Good. You forgot to take care of that one call, now is the time to do it. You can also set the strict mode policy, but it’s not necessary since you have the parameter up there. Another thing that has got more traction is biometrics in the form of the fingerprint API. We finally have a standardized, unified API for fingerprint storage and reading. The most important part of it is that it’s sufficiently secure now. Samsung is not trying to do their own thing. HTC is not trying to do their own thing. We have the TEE definition that must be complied with and it’s okay now.

[{
    "relation": ["delegate_permission/common.handle_all_urls"],
    "target": {
        "namespace": "android_app",
        "package_name": "com.example",
        "sha256_cert_fingerprints": ["14:6D:E9:...44:E5"]
    }
}]

Starting from M you now have the power to configure, for a certain domain, what app will open it. There’s a unified link between the app and the server that will launch your app each time a link is opened. You plant a JSON file with this configuration on the server; the snippet above is what you place there, and that’s all that’s needed to link it to your application. Another thing is, with the coming Android N, we have a network security configuration feature. It covers all the stuff we did with certificate pinning, clear text data and other things, and you don’t need to do it in code anymore. You have one unified file which allows you to set all the security features, exclusions, and inclusions that you want working for your app.

<?xml version="1.0" encoding="utf-8"?>
<manifest ... >
    <application
        android:networkSecurityConfig="@xml/network_security_config"
        ... >
        ...
    </application>
</manifest>

The first thing you need to do with that is add it to the application tag.

<?xml version="1.0" encoding="utf-8"?>
<network-security-config>
    <domain-config>
        <domain includeSubdomains="true">example.com</domain>
        <trust-anchors>
            <certificates src="@raw/my_ca"/>
        </trust-anchors>
        <pin-set>
            <pin digest="SHA-256">7HIpa...BCoQYcRhJ3Y=</pin>
            <!-- backup pin -->
            <pin digest="SHA-256">fwza0...gO/04cDM1oE=</pin>
        </pin-set>
    </domain-config>
</network-security-config>

As far as configuration, here are a few things that can be done. You can determine when you want to use clear text or not. (You don’t.) You can define the trust anchors that you want in your apps. Again, use your certificate, place it in your app, but you don’t need to read it or handle the passwords and other stuff that you needed to do before; you just specify that this certificate is the one that’s going to be used for communication. And you can use another set of tags for pins and expiration, but that’s generally not a good idea. So you have a file, which you can configure, and it contains all the security information in one place. Another thing that’s important is you need to be an authority for your clients and you need to lead by example. That means it’s your job to tell them how to improve the security of the application and the whole process. You have to keep them up to date, which means if they tell you that they plan to update their certificates in six months, that’s a good thing because now you have six months to notify the users that their apps will stop working. If there are security issues, patches to be applied, it’s your job to notify them immediately, and to try to roll out a new version of the application with those things applied.

Conclusion (33:05)

Things to take away from this presentation include: if you need to use storage, use the internal storage. But encrypt data that you place in it.
Use HTTPS. Certificates. You can pin them. You can now use the configuration file that will hold all the features that you need for your application. And again, be aware of the update cycle because breaking stuff for your end users is not cool and it damages your ego and it carries a message that you’re not doing your job right. Generally, and specifically, Android is not secure. I’d say not as secure as the iOS platform. I have both phones, and I feel more confident when using applications on the iPhone. What’s important is that everything that you do in your application regarding security is just another deterrent for your malicious attacker. You’re not creating a safe application, you’re just adding bits and pieces of rules that will make it harder for somebody to break it. It will make it harder for them to change stuff in your build, to intercept data, to read the data, and sniff the communication between the server and the application. We can make it better and we can make it less easy to abuse.
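Stripped of platform details, the pinning check discussed in this talk reduces to comparing a digest of the certificate the server actually presented against a value shipped with the app. A minimal, platform-neutral sketch in Python follows; the certificate bytes here are placeholders, and on Android you would use okhttp's pinning support or the network security configuration instead.

```python
import hashlib

def cert_fingerprint(cert_der: bytes) -> str:
    """SHA-256 fingerprint of a certificate given in DER form."""
    return hashlib.sha256(cert_der).hexdigest()

def is_pinned(cert_der: bytes, pinned_fingerprint: str) -> bool:
    """Accept the connection only if the presented certificate matches the pin."""
    return cert_fingerprint(cert_der) == pinned_fingerprint

# In a real client the DER bytes would come from the TLS handshake,
# e.g. ssl.SSLSocket.getpeercert(binary_form=True).
fake_cert = b"---demo certificate bytes---"
pin = cert_fingerprint(fake_cert)   # the value you would ship inside the app
print(is_pinned(fake_cert, pin))    # True
print(is_pinned(b"tampered", pin))  # False
```

Shipping the pin inside the app is exactly why a certificate rotation breaks old installs: the comparison fails until users update to a build carrying the new pin.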
https://academy.realm.io/posts/360andev-ana-baotic-best-practices-app-security-android/
So I'm trying to find angles A, B, C using the law of cosines and I want the answers to be in degrees, and the formula to convert the answer to degrees is angle*PI/180. However the answer does not convert to degrees. What am I doing wrong? Code:

#include "stdafx.h"
#include <iostream>
#include <iomanip>
#include <cmath>
#define PI 3.14

using namespace std;

int main()
{
    double A, B, C, a, b, c, b1, b2, h, R, L, T, x;

    //This is for angle A
    a = 10.0;
    b = 7.0;
    c = 5.0;
    A = (b*b + c*c - a*a) / (2.0*b*c);
    A = acos(A)*PI/180.0;
    cout << "\n\n\t\tThe value of angle A =" << A;

    //This is for angle B
    a = 11.6;
    b = 15.2;
    c = 7.4;
    B = (a*a + c*c - b*b) / (2.0*a*c);
    B = acos(B)*PI/180.0;
    cout << "\n\n\t\tThe value of angle B =" << B;

    //This is for angle C
    a = 2.0;
    b = 3.0;
    c = 4.0;
    C = (a*a + b*b - c*c) / (2.0*a*b);
    C = acos(C)*PI/180.0;
    cout << "\n\n\t\tThe value of angle C =" << C;

    cout << "\n\n\n\n";
    return 0;
}
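The conversion in the question is inverted: acos() returns radians, and radians become degrees by multiplying by 180/PI, not PI/180 (and a full-precision pi beats a hand-rolled 3.14). A quick check of the same numbers for angle A, sketched in Python just to verify the arithmetic:

```python
import math

# Law of cosines for angle A, with the corrected radians-to-degrees factor.
a, b, c = 10.0, 7.0, 5.0
cos_A = (b*b + c*c - a*a) / (2.0 * b * c)
A_rad = math.acos(cos_A)
A_deg = A_rad * 180.0 / math.pi   # NOT A_rad * pi / 180
print(A_deg)                      # roughly 111.8 degrees

# math.degrees() performs the same conversion without the magic constant.
assert math.isclose(A_deg, math.degrees(A_rad))
```

In the C++ code the fix is the same one-line change, `A = acos(A)*180.0/PI;`, applied to all three angles.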
http://cboard.cprogramming.com/cplusplus-programming/154387-law-cosines-converting-degrees-error-printable-thread.html
SKOS/FAQs

From Semantic Web Standards

This page lists common questions and answers regarding SKOS aims and application. Your contribution is much appreciated here. Feel free to add your questions (and answers!) directly below, or send suggestions to public-esw-thes@w3.org, but please don't delete anything written by somebody else! Also, this FAQ should cover a relatively broad audience, that may know a lot about Knowledge Organization Systems but not much about the semantic web, or vice versa, so bear this in mind when suggesting questions and writing answers.

Q: Where can I find the RDF/OWL files for the SKOS and SKOS-XL ontologies?

- See the RDF vocabularies section of the W3C SKOS site. The RDF files for SKOS are also accessible using content negotiation from their respective namespaces. Please note that a normal HTML browser will get an HTML page when accessing these namespaces!

Q: What exactly is a Knowledge Organization System, and what is it good for?

- A Knowledge Organization System (KOS) is a set of elements, often structured and controlled, which can be used for describing (indexing) objects, browsing collections, etc. (see SKOS cases). Typical examples of KOS are thesauri, classification schemes, subject heading lists, taxonomies... KOSs are common in cultural heritage institutions (libraries, museums) or in any scientific discipline, such as biology, which has a specific interest in naming and classifying.

Q: How do I publish a KOS on the semantic web?

- [Adapted from the ESW wiki] To publish a KOS such as a thesaurus on the semantic web, follow these steps: (1) generate an RDF description of the thesaurus' content, (2) publish the RDF data on the web.

- The first step means creating a file or set of files that is an RDF-based serialization of the thesaurus itself. The SKOS RDF schema provides most, if not all, of the RDF classes and properties you will need for this task.
The SKOS Primer is the best place to start to learn more about this schema. For a more in-depth guide to generating RDF-based serializations of existing thesauri, for both standard and non-standard thesauri, and from a number of existing formats (including XML and relational tables), please see the tutorials, presentations and papers section of the W3C SKOS site, or the documentation page of this wiki. Re-using an existing schema such as SKOS, rather than creating your own, makes the published data easier for other semantic web applications to interpret. Note that a very effective way of publishing RDF KOS data over the web is to follow Linked Data principles. A very good tutorial can be found in the Best Practice Recipes for Publishing RDF Vocabularies.

Q: What's the difference between 'concept-oriented' and 'term-oriented' models of thesaurus structure?

- A 'term-oriented' thesaurus uses terms (words or phrases) as its primitive elements, and asserts relationships between those elements, such as a 'term equivalence' link between animal and creature. The main elements of a 'concept-oriented' thesaurus are more abstract concepts, which are aimed at capturing meanings beyond lexicalizations. In practice, while a term-based thesaurus would directly relate two terms with the same meaning using an equivalence link, a concept-based vocabulary would represent the same words or phrases as mere labels of a single concept. The SKOS model is clearly concept-oriented.

Q: Can I use SKOS to publish a glossary or other type of knowledge organization system?

- There is no theoretical objection to using SKOS in a much wider range of scenarios than the thesaurus case that originally motivated its design. Glossaries or folksonomies, for instance, may be ported to SKOS. Please bear in mind though that this might require some departure from the original knowledge structure to fit the SKOS model, or, alternatively, the creation of a specific 'profile' or extension of SKOS to fit the vocabulary at hand.

Q: Can I build a new knowledge organization system out of bits of someone else's?
- Re-using concepts from one vocabulary in another one perfectly fits the SKOS approach to representing and sharing knowledge organization systems over the web. In particular, a concept may be asserted to belong to several concept schemes, as explained in the SKOS Primer.

Q: Can I use SKOS properties for other purposes?

- SKOS properties were designed with re-usability in mind. This holds especially for the documentation properties (skos:note and its sub-properties like skos:definition, skos:example, etc.). The lexical labels skos:prefLabel, skos:altLabel and skos:hiddenLabel provide a means for labeling arbitrary resources, indicating that one single label per language should be used as the preferred one. Furthermore, preferred labels can be used to unambiguously represent an item within the scope of a given concept scheme, data set or application. The rdfs:domain of all these properties is not restricted to skos:Concept.
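To make the concept-oriented modelling discussed in the earlier answer concrete, the animal/creature example reduces to a single skos:Concept carrying both words as labels, rather than two terms linked by an equivalence relation. In Turtle syntax (the ex: URIs are made up for illustration):

```turtle
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix ex:   <http://example.org/animals/> .

# One concept, two lexicalizations: a term-oriented model would instead
# link the terms "animal" and "creature" directly to each other.
ex:animalConcept a skos:Concept ;
    skos:prefLabel "animal"@en ;
    skos:altLabel  "creature"@en ;
    skos:inScheme  ex:animalScheme .
```

Only the preferred label is shown to users by default; the alternative label remains available, for instance as a search entry point.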
http://www.w3.org/2001/sw/wiki/SKOS/FAQs
Recursion occurs when a function call causes that same function to be called again before the original function call terminates. For example, consider the well-known mathematical expression x! (i.e. the factorial operation). The factorial operation is defined for all nonnegative integers as follows: 0! = 1, and n! = n * (n - 1)! for n > 0. A direct Python translation is:

def factorial(n):
    if n == 0:
        return 1
    else:
        return n * factorial(n - 1)

Consider the expression factorial(3). This and all function calls create a new environment. An environment is basically just a table that maps identifiers (e.g. n and factorial) to their values; in Python, locals() returns this table for the current call. In the first function call, the only local variable that gets defined is n = 3. Therefore, printing locals() would show {'n': 3}. Since n == 3, the return value becomes n * factorial(n - 1). This next step is where things might get a little confusing. Looking at our new expression, we already know what n is. However, we don't yet know what factorial(n - 1) is. First, n - 1 evaluates to 2. Then, 2 is passed to factorial as the value for n. Since this is a new function call, a second environment is created to store this new n. Let A be the first environment and B be the second environment. A still exists and equals {'n': 3}; however, B (which equals {'n': 2}) is the current environment. Looking at the function body, the return value is, again, n * factorial(n - 1). Without evaluating this expression, let's substitute it into the original return expression. By doing this, we're mentally discarding B, so remember to substitute n accordingly (i.e. references to B's n are replaced with n - 1, which uses A's n). Now, the original return expression becomes n * ((n - 1) * factorial((n - 1) - 1)). Take a second to ensure that you understand why this is so. Now, let's evaluate the factorial((n - 1) - 1) portion of that. Since A's n == 3, we're passing 1 into factorial. Therefore, we are creating a new environment C which equals {'n': 1}. Again, the return value is n * factorial(n - 1).
So let's replace the factorial((n - 1) - 1) portion of the “original” return expression similarly to how we adjusted the original return expression earlier. The “original” expression is now n * ((n - 1) * ((n - 2) * factorial((n - 2) - 1))). Almost done. Now, we need to evaluate factorial((n - 2) - 1). This time, we're passing in 0. Therefore, this evaluates to 1. Now, let's perform our last substitution. The “original” return expression is now n * ((n - 1) * ((n - 2) * 1)). Recalling that the original return expression is evaluated under A, the expression becomes 3 * ((3 - 1) * ((3 - 2) * 1)). This, of course, evaluates to 6. To confirm that this is the correct answer, recall that 3! == 3 * 2 * 1 == 6. Before reading any further, be sure that you fully understand the concept of environments and how they apply to recursion. The statement if n == 0: return 1 is called a base case, because it exhibits no recursion. A base case is absolutely required. Without one, you'll run into infinite recursion. With that said, as long as you have at least one base case, you can have as many base cases as you want. For example, we could have equivalently written factorial as follows:

def factorial(n):
    if n == 0:
        return 1
    elif n == 1:
        return 1
    else:
        return n * factorial(n - 1)

You may also have multiple recursive cases, but we won't get into that since it's relatively uncommon and often difficult to mentally process. You can also have “parallel” recursive function calls.
For example, consider the Fibonacci sequence, which is defined as follows: fib(0) = 0, fib(1) = 1, and fib(n) = fib(n - 2) + fib(n - 1) for n > 1. We can define this as follows:

def fib(n):
    if n == 0 or n == 1:
        return n
    else:
        return fib(n - 2) + fib(n - 1)

I won't walk through this function as thoroughly as I did with factorial(3), but the final return value of fib(5) is equivalent to the following (syntactically invalid) expression:

( fib((n - 2) - 2) + ( fib(((n - 2) - 1) - 2) + fib(((n - 2) - 1) - 1) ) ) + ( ( fib(((n - 1) - 2) - 2) + fib(((n - 1) - 2) - 1) ) + ( fib(((n - 1) - 1) - 2) + ( fib((((n - 1) - 1) - 1) - 2) + fib((((n - 1) - 1) - 1) - 1) ) ) )

This becomes (1 + (0 + 1)) + ((0 + 1) + (1 + (0 + 1))), which of course evaluates to 5. Now, let's cover a few more vocabulary terms: a tail call is a function call that is the very last operation its function performs. return foo(n - 1) is a tail call, but return foo(n - 1) + 1 is not (since the addition, not the call, is the last operation). Tail call optimization (TCO) lets such calls reuse the caller's stack frame, which is helpful for a number of reasons. Python has no form of TCO implemented, for a number of reasons. Therefore, other techniques are required to skirt this limitation. The method of choice depends on the use case. With some intuition, the definitions of factorial and fib can relatively easily be converted to iterative code as follows:

def factorial(n):
    product = 1
    while n > 1:
        product *= n
        n -= 1
    return product

def fib(n):
    a, b = 0, 1
    while n > 0:
        a, b = b, a + b
        n -= 1
    return a

This is usually the most efficient way to manually eliminate recursion, but it can become rather difficult for more complex functions. Another useful tool is Python's lru_cache decorator, which can be used to reduce the number of redundant calculations. You now have an idea as to how to avoid recursion in Python, but when should you use recursion? The answer is “not often”. All recursive functions can be implemented iteratively; it's simply a matter of figuring out how to do so. However, there are rare cases in which recursion is okay.
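The iterative rewrites above can be sanity-checked against the recursive definitions; a quick sketch:

```python
# Recursive and iterative definitions from the text, side by side.
def factorial_rec(n):
    return 1 if n == 0 else n * factorial_rec(n - 1)

def factorial_iter(n):
    product = 1
    while n > 1:
        product *= n
        n -= 1
    return product

def fib_rec(n):
    return n if n in (0, 1) else fib_rec(n - 2) + fib_rec(n - 1)

def fib_iter(n):
    a, b = 0, 1
    while n > 0:
        a, b = b, a + b
        n -= 1
    return a

# The two styles agree on every small input.
assert all(factorial_rec(n) == factorial_iter(n) for n in range(10))
assert all(fib_rec(n) == fib_iter(n) for n in range(15))
assert fib_iter(5) == 5   # matches the hand-evaluated result above
```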
Recursion is common in Python when the expected inputs wouldn't cause a significant number of recursive function calls. If recursion is a topic that interests you, I implore you to study functional languages such as Scheme or Haskell. In such languages, recursion is much more useful. Please note that the above example for the Fibonacci sequence, although good at showing how to apply the definition in Python and the later use of lru_cache, has an inefficient running time, since it makes 2 recursive calls for each non-base case; the number of calls to the function grows exponentially with n. Rather non-intuitively, a more efficient implementation would use linear recursion:

def fib(n):
    if n <= 1:
        return (n, 0)
    else:
        (a, b) = fib(n - 1)
        return (a + b, a)

But that one has the issue of returning a pair of numbers. This emphasizes that some functions really do not gain much from recursion.
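The lru_cache decorator mentioned earlier is the other standard fix for the exponential blow-up: memoization collapses the call tree to one computation per distinct n, while keeping the recursive definition intact. A sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)   # cache the result for every distinct argument
def fib(n):
    if n <= 1:
        return n
    return fib(n - 2) + fib(n - 1)

# Without the cache, fib(35) makes tens of millions of calls;
# with it, each n from 0 to 35 is computed exactly once.
print(fib(35))  # → 9227465
```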
https://riptutorial.com/python/example/6269/the-what--how--and-when-of-recursion
in reply to Re: parsing XML fragments (xml log files) with XML::Parser in thread parsing XML fragments (xml log files) with XML::Parser. It parses numerical entities. Decoding them wasn't required and would only require one regex to be added. It handles namespace prefixes exactly as I wanted it to (they're included in the tag name). It is trivial to make it handle them differently (which is the point). I won't go into "validation" here, it being a subject worthy of a lengthy write-up. A pre/post processor could fix this... Wow. You are really stuck in thinking in terms of an XML-parsing module. There is no need to do anything in a pre-/post-processor -- which is part of the whole point of the exercise. For example, supporting comments is 2 minutes' work and easily fits into the existing structure. The few items that rise to the level of being interesting to implement are the things that I've never actually seen used in any XML. So it shouldn't be surprising that I didn't bother to implement them in the code that implemented just what I needed for one project. The line

my $data= '(?: [^<>&]+ | &\#?\w+; )+';

becomes

my $data= '(?: [^<&]+ | &\#?\w+; )+';

XML allows for unescaped ">".
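The difference between the two character classes is easy to demonstrate (shown here in Python for brevity; the Perl patterns behave the same way): XML only requires "<" and "&" to be escaped in character data, so the relaxed class accepts a bare ">".

```python
import re

# Stricter class rejects ">"; relaxed one accepts it, since XML
# character data only forbids unescaped "<" and "&".
strict  = re.compile(r'(?: [^<>&]+ | &\#?\w+; )+', re.VERBOSE)
relaxed = re.compile(r'(?: [^<&]+  | &\#?\w+; )+', re.VERBOSE)

text = 'a > b &amp; c'
print(strict.fullmatch(text) is not None)   # → False: ">" not in the class
print(relaxed.fullmatch(text) is not None)  # → True: unescaped ">" is legal
```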
http://www.perlmonks.org/?node_id=893908
Created attachment 517348 [details] [diff] [review] merged stack just for back-up purposes I have a stack of patches that speeds up scanning by over 10%. I'll put them up one by one with detailed measurements once bug 638034 is done. For the moment, I've attached the union of patches. (In reply to comment #0) > > I have a stack of patches that speeds up scanning by over 10%. Actually, it speeds up parse() -- which includes scanning and parsing -- by over 10%, so it really speeds up scanning itself by a lot more. Also see bug 637549. (In reply to comment #1) > > Actually, it speeds up parse() -- which includes scanning and parsing -- by > over 10%, so it really speeds up scanning itself by a lot more. woot! First, here are the parsemark-njn (which is parsemark + kraken-imaging data) results for all the patches. In each case I've given the kraken-imaging (which is number-heavy), zimbra-combined and parsemark-njn overall instruction counts. changeset imaging-kraken zimbra-combined parsemark-njn --------- -------------- --------------- ------------- 63054:ccc55e56efc9 788.5M 433.5M 2440.7M avoid-tokenbuf: 770.2M (1.024x) 433.0M (1.001x) 2420.1M (1.009x) split-dec-hex-oct 720.1M (1.095x) 433.3M (1.001x) 2370.3M (1.030x) hasFracOrExp 650.4M (1.212x) 432.3M (1.003x) 2296.0M (1.063x) getCharIgnoreEOL 645.3M (1.222x) 432.2M (1.003x) 2290.0M (1.066x) no-TOLOWER 636.6M (1.239x) 432.0M (1.004x) 2280.1M (1.070x) JS_ISSPACE2 633.6M (1.245x) 431.3M (1.005x) 2274.1M (1.073x) JS_ISIDSTART 613.0M (1.286x) 415.8M (1.043x) 2201.8M (1.108x) oneCharTokens 608.4M (1.296x) 417.7M (1.038x) 2200.4M (1.109x) avoid-tokenbuf2 606.2M (1.301x) 399.4M (1.099x) 2146.0M (1.146x) no-TSF_ERROR-check 594.2M (1.327x) 390.0M (1.111x) 2100.5M (1.162x) Timing-wise the gains are slightly smaller: I see 1.24x on imaging-kraken, 1.09x on zimbra-combined, and 1.12x overall. Here's the description of all the patches, coming shortly. 1. 
avoid-tokenbuf: when tokenizing a number, don't copy the number into tokenbuf; just do the conversion directly from userbuf. Safe because numbers never contain escape sequences.

2. split-dec-hex-oct: split up number parsing into three parts: decimal, hex, and octal. Make things faster by avoiding lots of repeated radix checks. Also easier to read IMO, though the code is slightly longer.

3. hasFracOrExp: use GetPrefixInteger() to convert decimals that don't have a fraction or exponent; it's much faster than js_strtod().

4. getCharIgnoreEOL: replace uses of getChar() in number scanning with getCharIgnoreEOL(), which is faster.

5. no-TOLOWER: don't use JS_TOLOWER for the exponent 'e' or 'E'; it's horribly slow.

6. JS_ISSPACE2: add a lookup table for 7-bit whitespace chars. Also introduces '____' as a local synonym for 'false' in char predicate lookup tables, which makes it easier to see if they are correct. (I did this in response to getting an entry wrong in one of these tables.)

7. JS_ISIDSTART: add lookup tables for 7-bit identifier chars.

8. oneCharTokens: add a lookup table for tokens that are always one character long, and try to match them early on because they're very common. [Nb: The numbers above for this patch look bad; I think it's due to noise (eg. variations in register spilling) because it's clearly a win when you look at the code. Also, if I undo it at the end of the patch perf gets worse by about 20M instructions.]

9. avoid-tokenbuf2: when scanning identifiers, make the case where the identifier doesn't have any escape sequences (which is very common) faster by atomizing directly from userbuf. If an escape sequence is found, rescan the string in order to copy it to tokenbuf. Also, use getCharIgnoreEOL() instead of getChar() in both cases. Also, check for JS_ISIDENT before '\\' inside identifiers.

10. no-TSF_ERROR-check: the TSF_ERROR check never succeeds, AFAICT. This patch changes it to an assertion, and passes jsregtests.
If that's not appropriate, moving the check into getToken() is almost as good -- the big wins here are from getToken() being inlined more often. Created attachment 518235 [details] [diff] [review] patch 1 (against TM 63063:87dcf64586a7) Created attachment 518237 [details] [diff] [review] patch 2 Created attachment 518238 [details] [diff] [review] patch 3 Created attachment 518239 [details] [diff] [review] patch 4 Created attachment 518240 [details] [diff] [review] patch 5 Created attachment 518241 [details] [diff] [review] patch 6 Created attachment 518242 [details] [diff] [review] patch 7 Created attachment 518243 [details] [diff] [review] patch 8 Created attachment 518244 [details] [diff] [review] patch 9 Created attachment 518245 [details] [diff] [review] patch 10 > 10. no-TSF_ERROR-check: the TSF_ERROR check never succeeds, AFAICT. This See bug 636224 comment 3. /be ? (In reply to comment #15) > ? Yes! The last two paragraphs are the conclusion. /be Comment on attachment 518235 [details] [diff] [review] patch 1 (against TM 63063:87dcf64586a7) >+ /* >+ * Unlike identifiers and strings, numbers cannot contain escaped >+ * chars, so we don't need to use tokenbuf. Instead we can just >+ * convert the jschars in userbuf directly to the numeric value. >+ */ > if (radix == 10) { >- if (!js_strtod(cx, tokenbuf.begin(), tokenbuf.end(), &dummy, &dval)) >+ if (!js_strtod(cx, numStart, userbuf.addressOfNextRawChar(), &dummy, &dval)) > goto error; > } else { >- if (!GetPrefixInteger(cx, tokenbuf.begin(), tokenbuf.end(), radix, &dummy, &dval)) ..1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890 >+ if (!GetPrefixInteger(cx, numStart, userbuf.addressOfNextRawChar(), radix, &dummy, >+ &dval)) > goto error; > } Total nit attack here: house style wants the goto error; braced because the condition is multiline, but that seems avoidable. 
Ideas: * Manually hoist and common userbuf.addressOfNextRawChar() into a local whose name is shorter than that pretty-long userbuf... expression. *. * Use jsdouble d; not jsdouble dval; for the local, it's the canonical name for that kind of variable. Or just brace. /be (In reply to comment #17) > > *. I originally didn't have "Raw" but then there was TokenBuf::{,un}getChar() and TokenStream::{,un}getChar() and they didn't feel sufficiently distinguished for my liking. I can shorten addressOfNextRawChar, though -- maybe just nextRawChar? (In reply to comment #18) > I originally didn't have "Raw" but then there was TokenBuf::{,un}getChar() and No longer in TokenBuf, right? I don't see them but I'm looking at tm tip, not all the patches here. > TokenStream::{,un}getChar() and they didn't feel sufficiently distinguished for > my liking. Different struct/class. > I can shorten addressOfNextRawChar, though -- maybe just nextRawChar? You've used addressOf elsewhere (grep shows all of these jsinterp.h jsobj.h jsobjinlines.h plus jsscan.h). So I'd rather keep consistency there. But since TokenBuf is not TokenStream, adding Raw to every method save init, atStart, poison, and findEOL seems unnecessary. There's no atRawStart :-P. /be TokenBuf now has getRawChar and ungetRawChar. I'll change it if it gets me the r+, but having both TokenBuf::getChar and TokenStream::getChar makes me nervous, because they have subtly different meanings (ie. the latter normalizes EOLs, the former doesn't.) And likewise for peekChar/matchChar/ungetChar. Different abstractions can have the same method name, but I take your point. The nesting makes a stronger case for encoding the difference in the struct or class into the method name. No worries, just do something about the nit. /be (You got the r+ already.) Comment on attachment 518237 [details] [diff] [review] patch 2 >+ const jschar *numStart; Could localize to the two major then-blocks that need it. 
Not sure what's best or if it matters with any compiler. >+ if (JS7_ISDNZ(c) || (c == '.' && JS7_ISDEC(peekChar()))) { Wow, DNZ -- ok, it matches (or sees and raises) the short JS7_ISDEC style names. Would JS7_ISDECNZ be better? It would break the 9-letter mold of all these JS7_* macros. >+ ReportCompileErrorNumber(cx, this, NULL, JSREPORT_ERROR, JSMSG_MISSING_HEXDIGITS); > goto error; >+ } >+ numStart = userbuf.addressOfNextRawChar() - 1; >+ while (JS7_ISHEX(c)) >+ c = getChar(); >+ Blank line here is unnecessary in prevailing style. >+ } else if (JS7_ISDEC(c)) { >+ radix = 8; >+ numStart = userbuf.addressOfNextRawChar() - 1; >+ while (JS7_ISDEC(c)) { >+ /* Octal integer literals are not permitted in strict mode code. */ >+ if (!ReportStrictModeError(cx, this, NULL, NULL, JSMSG_DEPRECATED_OCTAL)) >+ goto error; >+ ..12345678901234567890123456789012345678901234567890123456789012345678901234567890 >+ /* >+ * Outside strict mode, we permit 08 and 09 as decimal numbers, which >+ * makes our behaviour a superset of the ECMA numeric grammar. We >+ * might not always be so permissive, so we warn about it. >+ */ This comment somehow ended up overflowing the soft-ish comment margin of 79. How about rewrapping since you're moving it. >+ if (c >= '8') { >+ if (!ReportCompileErrorNumber(cx, this, NULL, JSREPORT_WARNING, >+ JSMSG_BAD_OCTAL, c == '8' ? "08" : "09")) { >+ goto error; >+ } >+ goto decimal; /* use the decimal scanner for the rest of the number */ >+ } >+ c = getChar(); >+ } > } else { >- if (!GetPrefixInteger(cx, numStart, userbuf.addressOfNextRawChar(), radix, &dummy, >- &dval)) >- goto error; Oh, this goes away, so nm my nit on patch 1! Looks great, a diff -w or -b would be even easier to read (I think -- try it and attach if true?). 
/be Comment on attachment 518238 [details] [diff] [review] patch 3 >diff --git a/js/src/jsscan.cpp b/js/src/jsscan.cpp >--- a/js/src/jsscan.cpp >+++ b/js/src/jsscan.cpp >@@ -1110,15 +1110,18 @@ TokenStream::getTokenInternal() > if (JS7_ISDNZ(c) || (c == '.' && JS7_ISDEC(peekChar()))) { > numStart = userbuf.addressOfNextRawChar() - 1; > decimal: Forgot to note that, as with comments not preceded by { taking up one or more lines, and other paragraph-breaking structures, a label usually merits a blank line before it. Pre-existing nit. /be Comment on attachment 518239 [details] [diff] [review] patch 4 Great to have the queued-up mini-patches to review. /be Comment on attachment 518240 [details] [diff] [review] patch 5 Why didn't I write a JS7_TOLOWER ages ago? Presumably it would be even faster: #define JS7_TOLOWER(c) ((c) | 0x20) But it's pretty low-level and not worth doing unless there's a huge win, which there can't be for this 'e'/'E' test. /be Comment on attachment 518241 [details] [diff] [review] patch 6 Ah, the old "-2" suffix. Could you use JS_ISSPACE_OR_BOM instead? I know it goes against the <ctypes.h> naming convention but something has got to give. /be Comment on attachment 518242 [details] [diff] [review] patch 7 >+extern const bool js_isidstart[]; >+extern const bool js_isident[]; >+ >+static inline bool >+JS_ISIDSTART(int32 c) The JS_ISSPACE{,2} last time too jschar c -- these JS_ISID* ones don't. Necessary difference? >+{ >+ unsigned w = c; >+ >+ return (w < 128) >+ ? js_isidstart[w] >+ : JS_ISLETTER(c); Fits on one line, arms of ?: are simple-enough expressions (member, call), why make it multiline? >+} >+ >+static inline bool >+JS_ISIDENT(int32 c) >+{ >+ unsigned w = c; >+ >+ return (w < 128) >+ ? js_isident[w] >+ : JS_ISIDPART(c); Ditto. 
/be Comment on attachment 518243 [details] [diff] [review] patch 8 >diff --git a/js/src/jsscan.cpp b/js/src/jsscan.cpp >--- a/js/src/jsscan.cpp >+++ b/js/src/jsscan.cpp >@@ -182,6 +182,27 @@ TokenStream::init(const jschar *base, si > listener = cx->debugHooks->sourceHandler; > listenerData = cx->debugHooks->sourceHandlerData; > >+ /* >+ * This table holds all the token kinds that satisfy these properties: >+ * - A single char long. >+ * - Cannot be a prefix of any longer token (eg. '+' is excluded because >+ * '+=' is a valid token). >+ * - Doesn't need tp->t_op set (eg. this excludes '~'). >+ * >+ * The few token kinds satisfying these properties cover roughly 35--45% >+ * of the tokens seen in practice. >+ */ >+ memset(oneCharTokens, 0, sizeof(oneCharTokens)); >+ oneCharTokens[';'] = TOK_SEMI; >+ oneCharTokens[','] = TOK_COMMA; >+ oneCharTokens['?'] = TOK_HOOK; >+ oneCharTokens['['] = TOK_LB; >+ oneCharTokens[']'] = TOK_RB; >+ oneCharTokens['{'] = TOK_LC; >+ oneCharTokens['}'] = TOK_RC; >+ oneCharTokens['('] = TOK_LP; >+ oneCharTokens[')'] = TOK_RP; Do this statically. We have lots of ways to automate (Python is one obvious choice). > /* >+ * Look for a one-char token; they're common and simple. >+ */ >+ >+ if (c < 128) { No blank line between major comment and statement it prefixes (could use minor comment style too but maybe this is part of a larger structure where majors in a row all want to look the same). >+ tt = oneCharTokens[c]; >+ if (tt != 0) >+ goto out; >+ } >+ >+ /* > * Look for an identifier. > */ > Ulp, pre-existing same nit. >+ TokenKind oneCharTokens[128]; /* table of one-char tokens */ So this would be a static const array, generated in a .h file. 
/be Comment on attachment 518244 [details] [diff] [review] patch 9 >+ c = getCharIgnoreEOL(); >+ if (JS_ISIDENT(c)) { >+ /* do nothing */ >+ } else if (c == '\\') { >+ if (!matchUnicodeEscapeIdent(&qc)) >+ break; >+ c = qc; >+ } else { >+ break; >+ } Here and below where the same if-else-if-else-break structure recurs, this is shorter and simpler: >+ if (!JS_ISIDENT(c)) { >+ if (c != '\\' || !matchUnicodeEscapeIdent(&qc)) >+ break; >+ c = qc; >+ } r=me with that in both places. /be Comment on attachment 518245 [details] [diff] [review] patch 10 I leave it to Luke to remove the TSF_ERROR setting TokenStream::getTokenInternal over in bug 636224. /be Comment on attachment 518243 [details] [diff] [review] patch 8 >+ TokenKind oneCharTokens[128]; /* table of one-char tokens */ Compress using uint8, per IRC discussion. A comment in TokenStream::init about how this re-init'ed, per-ts constant data is tolerable would be good for future readers. Cite this bug. >+ bool maybeEOL[256]; /* probabilistic EOL lookup table */ > bool maybeStrSpecial[256];/* speeds up string scanning */ Should these be JSPackedBool not bool? /be (In reply to comment #32) > > > >+ bool maybeEOL[256]; /* probabilistic EOL lookup table */ > > bool maybeStrSpecial[256];/* speeds up string scanning */ > > Should these be JSPackedBool not bool? Makes no difference with GCC, but I can do it to be safe. . (In reply to comment #34) > . D'oh -- noctal. I trust you have a plan to de-spaghetti-ize getTokenInternal at some point. /be (In reply to comment #35) > > I trust you have a plan to de-spaghetti-ize getTokenInternal at some point. Other than moving the two big XML-specific chunks out, no. Looking through the rest of it, the only other thing that bugs me is the handling of "//@line 123\n" lines, and maybe the handling of sharps, because they're obscure cases and relatively large. I'll add them to bug 636654. 
Comment on attachment 518242 [details] [diff] [review] patch 7 We talked on IRC about "int c" being best for stdio/ctypes and ultimate compiler happiness. r=me with that and the one-line ?: nits picked. /be Plus a breakage fix for good luck:
https://bugzilla.mozilla.org/show_bug.cgi?id=639420
This example illustrates the comparison of two $100,000 loans. The major difference between the two loans is that the nominal interest rate in the second loan is lower than the first, with the added expense of paying discount points at the time of initialization. Both alternatives are 30-year loans. The first loan is labeled “8.25% - no discount points” and the second one is labeled “8% - 1 discount point.” Assume that the interest paid qualifies for a tax deduction and you are in the 33% tax bracket. Also, your minimum attractive rate of return (MARR) for an alternative investment is 4% (adjusted for tax rate). You use the following statements to find the breakeven point in the life of the loan for your preference between the loans: proc loan start=1992:1 nosummaryprint amount=100000 life=360; fixed rate=8.25 label='8.25% - no discount points'; fixed rate=8 points=1000 label='8% - 1 discount point'; compare at=(48 54 60) all taxrate=33 marr=4; run; Output 17.1.1 shows the loan comparison reports as of January 1996 (48th period), July 1996 (54th period), and January 1997 (60th period). Output 17.1.1: Loan Comparison Reports for Discount Point Breakeven Notice that the breakeven point for present worth of cost and true rate both occur in July 1996. This indicates that if you intend to keep the loan for 4.5 years or more, it is better to pay the discount points for the lower rate. If your objective is to minimize the interest paid or the periodic payment, the “8% - 1 discount point” loan is the preferred choice.
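The trade-off PROC LOAN is evaluating can be approximated by hand. A rough sketch (ignoring taxes and discounting, which PROC LOAN does account for, so the numbers differ slightly from the reported breakeven):

```python
def payment(principal, annual_rate, months):
    """Standard fixed-rate amortization payment."""
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)

p1 = payment(100_000, 0.0825, 360)   # 8.25%, no points
p2 = payment(100_000, 0.08,   360)   # 8.00%, $1000 in points

saving_per_month = p1 - p2           # what the lower rate saves each month
breakeven_months = 1000 / saving_per_month
print(f"monthly saving: {saving_per_month:.2f}, "
      f"breakeven: {breakeven_months:.0f} months")
```

This undiscounted, pre-tax estimate lands in the high 50s of months, close to the ~54-month breakeven PROC LOAN reports once taxes and the 4% MARR are factored in.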
http://support.sas.com/documentation/cdl/en/etsug/65545/HTML/default/etsug_loan_examples01.htm
C functions for PSOC 1 srinath.vardae Jul 23, 2012 9:01 AM Is there an inbuilt function that converts ADC's output values to decimal values (in C) for display purposes? 1. Re: C functions for PSOC 1 user_1377889 Feb 1, 2012 4:46 AM (in response to srinath.vardae) Oh yes, there are. #include <stdio.h> and use the sprintf, csprintf functions. The latter is used when the format is a string constant (as usual). Works correctly for floats with Designer 5.2. Bob 2. Re: C functions for PSOC 1 srinath.vardae Feb 2, 2012 5:48 AM (in response to srinath.vardae) Thank you Bob, though I was not able to use it. I found a similar one, "char *itoa (char *string, int value, int base);", which converts integer values to a string and can be easily displayed on an LCD using "LCD_PrString(CHAR * sRamString);" 3. Re: C functions for PSOC 1 user_1377889 Feb 2, 2012 8:48 AM (in response to srinath.vardae) 1 of 1 people found this helpful. Try this:

#include <stdio.h>
char Buffer[17]; // 16 chars for LCD line + '\0' character

void main(void)
{
    csprintf(Buffer, "%s = %d", "Test", 32767); // Buffer now holds "Test = 32767"
    LCD_PrString(Buffer);
}

You are totally free in specifying the format; have a look at the specs. You can convert (and print) floats, longs, ints and chars. Bob 4. Re: C functions for PSOC 1 prasanna.gandhiraj Jul 20, 2012 9:00 AM (in response to srinath.vardae) Hi Bob, I am faced with a similar problem: how to convert a floating point value to a string. My basic requirement is to send a float value to the HyperTerminal display through UART. I am using PSoC Designer 5.1. When I use the sprintf function as shown below, the compiler shows an error saying it expects a pointer to flash char but found a pointer to char. sprintf(Buffer,"%s = %d","Test",32767); 5. Re: C functions for PSOC 1 user_1377889 Jul 20, 2012 9:32 AM (in response to srinath.vardae) The format is a const string, so you have to use csprintf(Buffer,"%s =%d","Test",32767) Bob 6.
Re: C functions for PSOC 1 prasanna.gandhiraj Jul 20, 2012 11:03 AM (in response to srinath.vardae) Hi Bob, I tested my program with the function csprintf(Buffer,"%s = %d","Test",32767); actually the function is written inside an infinite while loop along with many other functions. What I observe is that whenever this function is not active (commented out) the while loop works, but as soon as the function is made active and included, the while loop terminates after a single run. I don't know what could have gone wrong. 7. Re: C functions for PSOC 1 user_1377889 Jul 20, 2012 11:28 AM (in response to srinath.vardae) Is the size of the buffer greater than 12? Bob 8. Re: C functions for PSOC 1 user_1377889 Jul 20, 2012 11:40 AM (in response to srinath.vardae) And for the floats have a look here. Probably you have to declare an integer and assign 32767 to it; maybe it is interpreted as a 32-bit number which will not be properly handled by sprintf. Bob 9. Re: C functions for PSOC 1 user_14586677 Jul 20, 2012 12:59 PM (in response to srinath.vardae) Compilers handle printf()/sprintf() differently, e.g. in feature set, from one compiler to another. Typically if printf() handles longs and floats, so does sprintf(). You can look at the .h and .inc files for the definitions to confirm, or consult the compiler manufacturer. Regards, Dana. 10. Re: C functions for PSOC 1 arvi Jul 23, 2012 9:01 AM (in response to srinath.vardae) If you know the size of your final string beforehand, it is better to declare the string buffer as char Buffer[13] rather than char * Buffer[]; Here, the value is 13 assuming you have 12 characters ("Test = 32767") + 1 NULL terminator. You can then use UART_PutString(Buffer) which sends the NULL-terminated string out the TX. -Arvind
https://community.cypress.com/thread/23546
abstraction layer for saving binary data

I'd like an abstraction layer for saving binary data, or to construct stl-like iterators. This will only work for non-nested data structures, but maybe there is a way to do it for arbitrary data too. I am working on a solution for "list <struct whatever>" and will submit it when it works :-) If you have suggestions how to design the more general case, tell me! robert

Hi, That's a great idea. If I understand you correctly, then this shouldn't be too hard. Using the sizeof operator, you will know exactly how big a binary variable is, and therefore where in the file any particular item will be. Something like (sorry I'm used to the C style for files!):

template <class T>
class binaryio
{
    FILE* stream;
public:
    binaryio(const char* filename)
    {
        stream = fopen(filename, "r+b"); // binary read/update mode
    }
    T operator [](int i)
    {
        fseek(stream, i * sizeof(T), SEEK_SET); // jump to the i-th record
        T data;
        fread(&data, 1, sizeof(T), stream);
        return data;
    }
};

And something smarter to write back to random locations. Is that what you're thinking? Yes, that's pretty much the same as what I had in mind, but this will work only if the data in <class T> is stored sequentially. For simple types (int, double, char) this is definitely true, but what about complex classes, or classes with complex-type members? This approach will work the same as any C++ container, such as vector or list etc. Your concern shouldn't be about how the data inside a class is stored. In truth that's none of your business. What you're interested in doing is saving the contents of the class and retrieving it with (I suspect) a simple cast that puts it back into the same form as it was originally. What I think though you're getting at is the use of the new operator within that class (say)..
Thus two variables of class X could use substantially different amounts of dynamic memory, because variable 1 allocated 100k of memory while variable 2 allocated 200k of memory. This is a much more complex problem and one you can't solve unless you know exactly how that class is configured. The best you can do is insist that these 'complex' data types have a common method that ensures they can be forced to save data and reload it on cue. If you go down that line, then you probably want to structure your binary file in the following form: ------ [data for each data type] * N ------ [key to additional dynamic data] ------ [additional dynamic data] * N Hope that makes sense. But here's how I imagine it working, with a request for some data: 1) You extract the contents of the dynamic data type given the index, using the overloaded [] operator we discussed above. 2) You call the 'load' method within that newly reloaded class/struct etc. that reloads any additional dynamic data, given a file offset you provide. The offset you get from the key section (or if you prefer, you store it immediately after each individual data type). This simply means on saving you need to know how many data types you have, so you know how much space to leave for each section, with the additional dynamic bit on the end, since you've no idea how big this is until you get to saving. That's the only way I can see this will work. Hope that helps. hi, i thought about what you wrote and concluded that an efficient implementation must define at least two basic classes: one class that will be associated with the file and define a simple interface for reading and writing, and another class that will put some constraints on the type of element. Further on it might be possible to inherit the first class from "vector" and thus provide a much more powerful tool. Have a look at the pieces of code below.
robert

//that's the save-binary-file interface
template <class T>
class SaveBinary : public std::vector<T>
{
public:
    T read(int index);
    void write(int index, const T& value);
    //lots of member functions have to be overridden
};

class SaveableType
{
protected:
    //force the user to define that function
    virtual int load(int) = 0;
};

//that's what a user is going to do with the interface
class MyClass : public SaveableType
{
    //define the "load" function
    int load(int position)
    {
        //code
    }
    //more code
};

int main()
{
    MyClass mc1, mc2;
    SaveBinary<MyClass> sb;

    mc1.do_something();
    mc2.do_other();

    sb.push_back(mc1);
    sb.push_back(mc2);

    //probably you will NOT do this, but some other STL algorithm
    //might perform something really elegant on "sb"
    sort(sb.begin(), sb.end());

    return 0;
}

That seems the way to go. You might want to create a small abstraction to your class. Not sure if there is anything really neat you can do with templates to solve this, but you essentially want to be able to apply this approach to constant-sized variables (classes included) without being dependent on SaveableType, and also to those that dynamically allocate memory.

I also wondered if you could do something dangerously clever by redefining the new operator, so you could catch memory being allocated by a class and allocate disk space in your file accordingly. Given that every good class should have a copy constructor, which makes an exact copy of anything it finds, I wonder if it might be possible to hijack something in there. Mega long shot, and probably overly complex.

Good luck,
Will
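For what it's worth, the fixed-record trick discussed in this thread translates directly to other languages. Here is a rough Python sketch of the same idea (the record layout, field names, and function names are invented for illustration, not taken from the thread): struct.Struct's size attribute plays the role of sizeof(T), and seeking to index * size gives random access into a flat file of records.

```python
import io
import struct

# Hypothetical fixed-size record: a 32-byte name field plus two doubles.
# RECORD.size plays the role of sizeof(T) in the C++ version above.
RECORD = struct.Struct("<32sdd")

def write_record(stream, index, name, x, y):
    # Seek to the record slot and overwrite it in place.
    stream.seek(index * RECORD.size)
    stream.write(RECORD.pack(name.encode().ljust(32, b"\0"), x, y))

def read_record(stream, index):
    # The equivalent of the binaryio operator[] above.
    stream.seek(index * RECORD.size)
    name, x, y = RECORD.unpack(stream.read(RECORD.size))
    return name.rstrip(b"\0").decode(), x, y

f = io.BytesIO()  # stands in for open("data.bin", "r+b")
write_record(f, 0, "alpha", 1.0, 2.0)
write_record(f, 3, "delta", 4.0, 8.0)  # random-access write, gaps are zero-filled
```

As in the C++ discussion, this only works because every record has the same size; anything that allocates dynamic memory would need the separate "key to additional dynamic data" scheme described above.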
http://www.codecogs.com/pages/forums/pagegen.php?id=568
I am a long time Python developer. I was trying out Go, converting an existing Python app to Go. It is modular and works really well for me. Upon creating the same structure in Go, I seem to land in cyclic import errors, a lot more than I want to. Never had any import problems in Python. I never even had to use import aliases. So I may have had some cyclic imports which were not evident in Python. I actually find that strange. Anyways, I am lost, trying to fix these in Go. I have read that interfaces can be used to avoid cyclic dependencies. But I don't understand how. I didn't find any examples on this either. Can somebody help me on this?

EDIT: The current python application structure is as follows:

The short version: Write config functions for hooking packages up to each other at run time rather than compile time. Instead of routes importing all the packages that define routes, it can export routes.Register, which main (or code in each app) can call.

As a rule, split a package up when each piece could be useful on its own. If two pieces of functionality are really intimately related, you don't have to split them into packages at all; you can organize with multiple files or types instead. Go's net/http is a big package, for instance, and understandably so.

More specifically:

Move reusable code 'down' into lower-level packages untangled from your particular use case. If you have a package page containing both logic for your content management system and all-purpose HTML-manipulation code, consider moving the HTML stuff "down" to a package html so you can use it without importing unrelated content management stuff.

Break up grab-bag packages (utils, tools) by topic or dependency. Otherwise you can end up importing a huge utils package (and taking on all its dependencies) for one or two pieces of functionality (that wouldn't have so many dependencies if separated out).

Pass around basic types and interface values.
If you're depending on a package for just a type name, maybe you can avoid that. Maybe some code handling a []Page can instead use a []string of filenames, or a []int of IDs, or some more general interface (sql.Rows).

Related to these points, Ben Johnson gave a lightning talk at GopherCon 2016. He suggests breaking up packages by dependency, and defining one package that just has interfaces and data types, without any but the most trivial functionality (and as a result few to no dependencies); in his words it defines the "language" of your app.

Here, I'd rearrange things so the router doesn't need to include the routes: instead, each app package calls a router.Register() method. This is what the Gorilla web toolkit's mux package does. Your routes, database, and constants packages sound like low-level pieces that should be imported by your app code and not import it.

Generally, try to build your app in layers. Your higher-layer, use-case-specific app code should import lower-layer, more fundamental tools, and never the other way around. Here are some more thoughts:

Packages are for separating independently usable bits of functionality; you don't need to split one off whenever a source file gets large. Unlike in, say, Python or Java, in Go one can split and combine and rearrange files completely independently of the package structure, so you can break up huge files without breaking up packages. The standard library's net/http is about 7k lines (counting comments/blanks but not tests). Internally, it's split into many smaller files and types. But it's one package, I think because there was no reason users would want, say, just cookie handling on its own. On the other hand, net and net/url are separate because they have uses outside HTTP.
It's great if you can push "down" utilities into libraries that are independent and feel like their own polished products, or cleanly layer your application itself (e.g., the UI sits atop an API, which sits atop some core functionality and data models). Likewise, "horizontal" separation may help you hold the app in your head (e.g., the UI layer breaks up into user account management, the application core, and administrative tools, or something finer-grained than that). But, the core point is, you're free to split or not as works for you.

Use Register or other runtime config methods to keep your general tools (like URL routing or DB access code) from needing to import your app code. Instead of your router looking at app1.Routes, app2.Routes, etc., you have your app packages import router and register with it in their func init()s. Or, if you'd rather register routes from one package, you could make a myapp/routes package that imports router and all your views and calls router.Register. The point is, the router itself is all-purpose code that needn't import your application's views.

Some ways to put together config APIs:

Pass app behavior via interfaces or funcs: http can be passed custom implementations of Handler (of course) but also CookieJar or File. text/template and html/template can accept functions to be made accessible from templates (in a FuncMap).

Export shortcut functions from your package if appropriate: In http, callers can either make and separately configure some http.Server objects, or call http.ListenAndServe(...), which uses a global Server. That gives you a nice design--everything's in an object and callers can create multiple Servers in a process and such--but it also offers a lazy way to configure in the simple single-server case.

If you have to, just duct-tape it: You don't have to limit yourself to super-elegant config systems if you can't fit one to your app: should your app have a package "myapp/conf" with a global var Conf map[string]interface{}, I won't judge.
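The router.Register() idea above is language-agnostic. As a tiny illustration (in Python rather than Go, with invented module and function names), the low-level router code exposes a register function, and the app code calls it at import time, the way a Go package would call router.Register from func init(); the router never has to import the apps:

```python
# router module: low-level, knows nothing about the apps that use it.
_routes = {}

def register(path, handler):
    """Called by app packages at import time to hook themselves up."""
    _routes[path] = handler

def dispatch(path):
    """Look up and invoke the handler registered for a path."""
    return _routes[path]()

# app module: imports the router and registers itself, so the
# dependency arrow points one way only (app -> router).
def app1_home():
    return "app1 home"

register("/app1", app1_home)
```

Because the dependency now points in one direction only, the import cycle between the router and the apps disappears.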
My one warning would be that this ties every conf-importing package to your app: if some might otherwise be reusable outside your application, maybe you can find a better way to configure them.

Those two are maybe the key principles, but a couple of specific cases/tactical thoughts:

Separate fundamental tasks from app-dependent ones. One app I work on in another language has a "utils" module mixing general tasks (e.g., formatting datetimes or working with HTML) with app-specific stuff (that depends on the user schema, etc.). But the users package imports the utils, creating a cycle. If I were porting to Go, I'd move the user-dependent utils "up" out of the utils module, maybe to live with the user code or even above it.

Consider breaking up grab-bag packages. Slightly enlarging on the last point: if two pieces of functionality are independent (that is, things still work if you move some code to another package) and unrelated from the user's perspective, they're candidates to be separated into two packages. Sometimes the bundling is harmless, but other times it leads to extra dependencies, or a less generic package name would just make for clearer code. So my utils above might be broken up by topic or dependency (e.g., strutil, dbutil, etc.). If you wind up with lots of packages this way, we've got goimports to help manage them.

Replace import-requiring object types in APIs with basic types and interfaces. Say two entities in your app have a many-to-many relationship like Users and Groups. If they live in different packages (a big 'if'), you can't have both u.Groups() returning a []group.Group and g.Users() returning a []user.User, because that requires the packages to import each other. However, you could change one or both of them to return, say, a []uint of IDs or a sql.Rows or some other interface you can get to without importing a specific object type.
Depending on your use case, types like User and Group might be so intimately related that it's better just to put them in one package, but if you decide they should be distinct, this is a way. Thanks for the detailed question and followup.
https://codedump.io/share/lckXUqTvgspe/1/cyclic-dependencies-and-interfaces-in-golang
Logging is a pretty crucial part of any application. The Masonite Logging package allows you to see errors your application is throwing, as well as log your own messages at several different alert levels. Masonite Logging currently contains the ability to log to a file, syslog and Slack.

To get the Masonite Logging package up and running on your machine you must first install the package:

$ pip install masonite-logging

And then add the Service Provider to your application:

from masonite.logging.providers import LoggingProvider

# ...

PROVIDERS = [
    # ...

    # Third Party Providers
    LoggingProvider,

    # ...
]

You can then publish the provider, which will create your config/logging.py configuration file:

$ craft publish LoggingProvider

The Masonite Logging package will automatically register an exception listener with your Masonite application and log any exceptions your application encounters with the corresponding channel.

The Masonite Logging package uses the concept of channels and drivers. A channel is used internally to create and pass various information to instantiate the driver. At a higher level, you will mainly be working with channels. Out of the box there are several different channels: single, stack, daily, slack, syslog and terminal. Each channel will handle logging messages in its own way.

The single channel will put all information in a "single" file. This file can be specified in your config/logging.py file in the path option:

'single': {
    # ...
    'path': 'storage/logs/single.log'
},

The daily channel is similar to the single channel, except that it will create a new log file based on the current day. So this will create log files like 10-23-2019.log and 10-24-2019.log. The path to set here needs to be a directory instead of a path to a file:

'daily': {
    # ...
    'path': 'storage/logs'
},

The Slack channel will send messages directly to your Slack channel so you or your team can act quickly and be alerted directly of any messages logged.
You'll need to generate a Slack application token. You can add it to your .env file:

SLACK_TOKEN=xoxp-35992....

You then need to set a few options in your config file if you need to change any default settings like the user or the icon emoji:

'slack': {
    # ...
    'channel': '#bot',
    'emoji': ':warning:',
    'username': 'Logging Bot',
    'token': env('SLACK_TOKEN', None),
    # ...
},

These options are up to you.

The terminal channel will simply output errors to the terminal. This is handy for debugging, or in addition to other channels when using the stack channel.

The stack channel is useful when you need to combine several channels together. Maybe you want it to log to both a daily channel file and send a message to your Slack group. You can do this easily by specifying the channels within your stack channel options:

'stack': {
    # ...
    'channels': ['daily', 'slack']
},

You can have as many channels as you want here.

The syslog channel will tie directly into your system-level logging software. Each operating system has its own type of system monitoring. You'll need to tie into your system socket path. This can be different per operating system and machine, so find yours and put that socket path in the config options:

'syslog': {
    # ...
    'path': '/var/run/syslog',
},

Log levels form a hierarchy that you can specify on your channels. The hierarchy, in order of most important to least important, is:

emergency
alert
critical
error
warning
notice
info
debug

Each channel can have its own minimum log level. The log message will only continue if it is greater than or equal to the minimum log level on that channel. For example, if we have a configuration like this on the daily channel:

'daily': {
    'level': 'info',
    'path': 'storage/logs'
},

this will only send messages to the channel if the log level is info or above. It will ignore all debug level log messages.

You can of course write your own log messages.
You can resolve the logger class from the container and use a method matching the log level you want to write the message at:

from masonite.logging import Logger

def show(self, logger: Logger):
    logger.debug('message')
    logger.info('message')
    logger.notice('message')
    logger.warning('message')
    # ...

You can easily switch channels by using the channel method:

from masonite.logging import Logger

def show(self, logger: Logger):
    logger.channel('slack').debug('message')
    # ...

By default, Masonite Logging will record all times in the UTC timezone, but in the event you want to switch timezones, you can do so in your configuration file:

CHANNELS = {
    'timezone': 'America/New_York',
    'single': {
        # ...
    },
}

All timestamps associated with logging will now use the correct timezone.
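The minimum-level rule described above boils down to a comparison against the level hierarchy. This is only an illustrative sketch of that check, not Masonite's actual implementation:

```python
# Least to most important, mirroring the hierarchy from the docs above.
LEVELS = ["debug", "info", "notice", "warning", "error",
          "critical", "alert", "emergency"]

def should_log(message_level, channel_minimum):
    # A message passes only if it is at least as important
    # as the channel's configured minimum level.
    return LEVELS.index(message_level) >= LEVELS.index(channel_minimum)
```

So with a channel minimum of 'info', should_log('debug', 'info') is False while should_log('warning', 'info') is True, which matches the daily-channel example above.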
https://docs.masoniteproject.com/official-packages/masonite-logging
Advanced Java Interview Questions - 16

5. What is error?
A SAX parsing error is generally a validation error; in other words, it occurs when an XML document is not valid, although it can also occur if the declaration specifies an XML version that the parser cannot handle. See also fatal error, warning.

6. What is Extensible Markup Language?
XML.

7. What is external entity?
An entity that exists as an external XML file, which is included in the XML document using an entity reference.

8. What is external subset?
That part of a DTD that is defined by references to external DTD files.

9. What is fatal error?
A fatal error occurs in the SAX parser when a document is not well formed or otherwise cannot be processed. See also error, warning.

11. What is filter chain?
A concatenation of XSLT transformations in which the output of one transformation becomes the input of the next.

12. What is finder method?
A method defined in the home interface and invoked by a client to locate an entity bean.

13. What is form-based authentication?
An authentication mechanism in which a Web container provides an application-specific form for logging in. This form of authentication uses Base64 encoding and can expose user names and passwords.

14. What is general entity?
An entity that is referenced as part of an XML document's content, as distinct from a parameter entity, which is referenced in the DTD. A general entity can be a parsed entity or an unparsed entity.

15. What is group?
An authenticated set of users classified by common traits such as job title or customer profile. Groups are also associated with a set of roles, and every user that is a member of a group inherits all the roles assigned to that group.

16. What is handle?
An object that identifies an enterprise bean. A client can serialize the handle and then later deserialize it to obtain a reference to the enterprise bean.

17. What is home handle?
An object that can be used to obtain a reference to the home interface. A home handle can be serialized, written to stable storage, and deserialized to obtain the reference.

18. What is home interface?
One of two interfaces for an enterprise bean. The home interface defines zero or more methods for managing an enterprise bean. The home interface of a session bean defines create and remove methods, whereas the home interface of an entity bean defines create, finder, and remove methods.

19. What is Java 2 Platform, Micro Edition (J2ME)?
A highly optimized Java runtime environment targeting a wide range of consumer products, including pagers, cellular phones, screen phones, digital set-top boxes, and car navigation systems.

20. What is Java 2 Platform, Standard Edition (J2SE)?
The core Java technology platform.

22. What is Java API for XML Registries (JAXR)?
An API for accessing various kinds of XML registries.

23. What is Java API for XML-based RPC (JAX-RPC)?
An API for building Web services and clients that use remote procedure calls and XML.

24. What is J2SE?
Abbreviation of Java 2 Platform, Standard Edition.

25. What is JAR?
Java archive. A platform-independent file format that permits many files to be aggregated into one file.

26. What is Java 2 Platform, Enterprise Edition (J2EE)?
An environment for developing and deploying enterprise applications. The J2EE platform consists of a set of services, application programming interfaces (APIs), and protocols that provide the functionality for developing multitiered, Web-based applications.

27. What is Java IDL?
A technology that provides CORBA interoperability and connectivity capabilities for the J2EE platform. These capabilities enable J2EE applications to invoke operations on remote network services using the Object Management Group IDL and IIOP.
28. What is Java Message Service (JMS)?
An API for invoking operations on enterprise messaging systems.

29. What is Java Transaction Service (JTS)?
Specifies the implementation of a transaction manager that supports JTA and implements the Java mapping of the Object Management Group Object Transaction Service 1.1 specification at the level below the API.

30. What is JavaBeans component?
A Java class that can be manipulated by tools and composed into applications. A JavaBeans component must adhere to certain property and event interface conventions.

31. What is JavaMail?
An API for sending and receiving email.

32. What is JavaServer Faces Technology?
A framework for building server-side user interfaces for Web applications written in the Java programming language.

33. What is JavaServer Faces conversion model?
A mechanism for converting between string-based markup generated by JavaServer Faces UI components and server-side Java objects.

34. What is JavaServer Faces event and listener model?
A mechanism for determining how events emitted by JavaServer Faces UI components are handled. This model is based on the JavaBeans component event and listener model.

35. What is Java Transaction API (JTA)?
An API that allows applications and J2EE servers to access transactions.

36. What is JavaServer Faces UI component?
A user interface control that outputs data to a client or allows a user to input data to a JavaServer Faces application.

37. What is JavaServer Faces UI component class?
A JavaServer Faces class that defines the behavior and properties of a JavaServer Faces UI component.

38. What is Java Naming and Directory Interface (JNDI)?
An API that provides naming and directory functionality.

39. What is Java Secure Socket Extension (JSSE)?
A set of packages that enable secure Internet communications.

40. What is JAXR client?
A client program that uses the JAXR API to access a business registry via a JAXR provider.

43. What is JMS?
Java Message Service.
44. What is JMS administered object?
A preconfigured JMS object (a resource manager connection factory or a destination) created by an administrator for the use of JMS clients and placed in a JNDI namespace.

45. What is JMS application?
One or more JMS clients that exchange messages.

46. What is JAXR provider?
An implementation of the JAXR API that provides access to a specific registry provider or to a class of registry providers that are based on a common specification.

47. What is JDBC?
An API for database-independent connectivity between the J2EE platform and a wide range of data sources.

48. What is JavaServer Faces validation model?
A mechanism for validating the data a user inputs to a JavaServer Faces UI component.

49. What is JMS client?
A Java language program that sends or receives messages.

50. What is JMS provider?
A messaging system that implements the Java Message Service as well as other administrative and control functionality needed in a full-featured messaging product.

51. What is JSP expression?
A scripting element that contains a valid scripting language expression that is evaluated, converted to a String, and placed into the implicit out object.

52. What is JSP expression language?
A language used to write expressions that access the properties of JavaBeans components. EL expressions can be used in static text and in any standard or custom tag attribute that can accept an expression.

53. What is JSP standard action?
An action that is defined in the JSP specification and is always available to a JSP page.

55. What is JSP page?
A text-based document containing static text and JSP elements that describes how to process a request to create a response. A JSP page is translated into, and handles requests as, a servlet.

57. What is JSP scriptlet?
A JSP scripting element containing any code fragment that is valid in the scripting language used in the JSP page.
The JSP specification describes what a valid scriptlet is for the case where the language page attribute is "java".

58. What is local subset?
That part of the DTD that is defined within the current XML file.

59. What is managed bean creation facility?
A mechanism for defining the characteristics of JavaBeans components used in a JavaServer Faces application.

60. What is JTA?
Abbreviation of Java Transaction API.

61. What is JSP tag file?
A source file containing a reusable fragment of JSP code that is translated into a tag handler when a JSP page is translated into a servlet.

62. What is JSP tag handler?
A Java programming language object that implements the behavior of a custom tag.

63. What is JSP tag library?
A collection of custom tags described via a tag library descriptor and Java classes.

64. What is JSTL?
Abbreviation of JavaServer Pages Standard Tag Library.

65. What is JTS?
Abbreviation of Java Transaction Service.

66. What is keystore?
A file containing the keys and certificates used for authentication.

69. What is message consumer?
An object created by a JMS session that is used for receiving messages sent to a destination.

71. What is message producer?
An object created by a JMS session that is used for sending messages to a destination.

72. What is mixed-content model?
A DTD specification that defines an element as containing a mixture of text and one or more other elements. The specification must start with #PCDATA, followed by diverse elements, and must end with the "zero-or-more" asterisk symbol (*).

73. What is mutual authentication?
An authentication mechanism employed by two parties for the purpose of proving each other's identity to one another.

74. What is namespace?
A mechanism that ensures an element is interpreted according to your DTD rather than using the definition for an element in a different DTD.

75. What is naming context?
A set of associations between unique, atomic, people-friendly identifiers and objects.

76. What is parameter entity?
An entity that consists of DTD specifications, as distinct from a general entity. A parameter entity defined in the DTD can then be referenced at other points, thereby eliminating the need to recode the definition at each location it is used.

77. What is parsed entity?
A general entity that contains XML and therefore is parsed when inserted into the XML document, as opposed to an unparsed entity.

80. What is North American Industry Classification System (NAICS)?
A system for classifying business establishments based on the processes they use to produce goods or services.

81. What is notation?
A mechanism for defining a data format for a non-XML document referenced as an unparsed entity. This is a holdover from SGML. A newer standard is to use MIME data types and namespaces to prevent naming conflicts.

82. What is method-binding expression?
A JavaServer Faces EL expression that refers to a method of a backing bean. This method performs either event handling, validation, or navigation processing for the UI component whose tag uses the method-binding expression.

83. What is method permission?
An authorization rule that determines who is permitted to execute one or more enterprise bean methods.

84. What is OASIS?
Organization for the Advancement of Structured Information Standards. A consortium that drives the development, convergence, and adoption of e-business standards.

85. What is OMG?
Object Management Group. A consortium that produces and maintains computer industry specifications for interoperable enterprise applications.

86. What is one-way messaging?
A method of transmitting messages without having to block until a response is received.

87. What is ORB?
Object request broker. A library that enables CORBA objects to locate and communicate with one another.

88. What is OS principal?
A principal native to the operating system on which the J2EE platform is executing.

89. What is OTS?
Object Transaction Service.
A definition of the interfaces that permit CORBA objects to participate in transactions.

91. What is passivation?
The process of transferring an enterprise bean from memory to secondary storage. See activation.

92. What is persistence?
The protocol for transferring the state of an entity bean between its instance variables and an underlying database.

93. What is persistent field?
A virtual field of an entity bean that has container-managed persistence; it is stored in a database.

94. What is primary key?
An object that uniquely identifies an entity bean within a home.

95. What is principal?
The identity assigned to a user as a result of authentication.

96. What is privilege?
A security attribute that does not have the property of uniqueness and that can be shared by many principals.

97. What is POA?
Portable Object Adapter. A CORBA standard for building server-side applications that are portable across heterogeneous ORBs.

98. What is point-to-point messaging system?
A messaging system built on the concept of message queues. Each message is addressed to a specific queue; clients extract messages from the queues established to hold their messages.

99. What is processing instruction?
Information contained in an XML structure that is intended to be interpreted by a specific application.

100. What is programmatic security?
Security decisions that are made by security-aware applications. Programmatic security is useful when declarative security alone is not sufficient to express the security model of an application.

101. What is prolog?
The part of an XML document that precedes the XML data. The prolog includes the declaration and an optional DTD.

104. What is RAR?
Resource Adapter Archive. A JAR archive that contains a resource adapter module.

105. What is RDF?
Resource Description Framework. A standard for defining the kind of data that an XML file contains.
Such information can help ensure semantic integrity, for example, by helping to make sure that a date is treated as a date rather than simply as text.

106. What is RDF schema?
A standard for specifying consistency rules that apply to the specifications contained in an RDF.

108. What is reentrant entity bean?
An entity bean that can handle multiple simultaneous, interleaved, or nested invocations that will not interfere with each other.

110. What is query string?
A component of an HTTP request URL that contains a set of parameters and values that affect the handling of the request.

111. What is queue?
A messaging system built on the concept of message queues. Each message is addressed to a specific queue; clients extract messages from the queues established to hold their messages.

112. What is registry?
An infrastructure that enables the building, deployment, and discovery of Web services. It is a neutral third party that facilitates dynamic and loosely coupled business-to-business (B2B) interactions.

113. What is remove method?
A method defined in the home interface and invoked by a client to destroy an enterprise bean.

114. What is render kit?
A set of renderers that render output to a particular client. The JavaServer Faces implementation provides a standard HTML render kit, which is composed of renderers that can render HTML markup.

115. What is registry provider?
An implementation of a business registry that conforms to a specification for XML registries (for example, ebXML or UDDI).

116. What is relationship field?
A virtual field of an entity bean having container-managed persistence; it identifies a related entity bean.

117. What is remote interface?
One of two interfaces for an enterprise bean. The remote interface defines the business methods callable by a client.

118. What is renderer?
A Java class that can render the output for a set of JavaServer Faces UI components.
http://www.lessons99.com/advanced-java-interview-questions-16.html
in reply to .vimrc for perl programmers

I had a regex mapped to comment out lines or blocks of code and another to uncomment them. I always hated that this killed my "previous search expression", so that I could not go through a file looking for a pattern, comment out the line, and go to the next occurrence without re-issuing the search command. I always thought, "There must be a better way." After OSCON 2007, I found that better way:

function! Comment()
    let l:line = "#".getline(".")
    call setline(".", l:line)
endfunction
map ## :call Comment()<cr>

function! UnComment()
    let l:line = getline(".")
    let l:pos = stridx(l:line, "#")
    if l:pos > -1
        let l:line = strpart(l:line, 0, l:pos).strpart(l:line, l:pos + 1)
    endif
    call setline(".", l:line)
endfunction
map !# :call UnComment()<cr>

I also got tired of the fancy status-line function not working inside closures because sub was indented. So, I did some poking at the code (that a coworker had put together) and made some minor fixes:

if has( "folding" )
    set statusline=%f%{CurrSubName()}\ %m%h%r\ %=%25(%-17(%l\,%c%V%)\ %p+%%%)
    set laststatus=2
    set maxfuncdepth=1000
endif

=~ '^\s*sub\>'
let l:str = substitute( l:str, ' *{.*', '', '' )
let l:str = substitute( l:str, '^\s

Share and enjoy. Possibly.
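The toggle logic in Comment() and UnComment() is simple enough to express outside vimscript too. Here is a quick Python rendering of the two functions (purely illustrative, not part of the original post):

```python
def comment(line):
    # Prepend a '#', like the vim Comment() function above.
    return "#" + line

def uncomment(line):
    # Drop the first '#' if one is present, mirroring the
    # stridx/strpart dance in UnComment().
    pos = line.find("#")
    if pos > -1:
        return line[:pos] + line[pos + 1:]
    return line
```

Like the vim version, this never touches a search register, so it composes cleanly with whatever search you are in the middle of.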
http://www.perlmonks.org/?node_id=657871
marble

#include <ServerLayout.h>

Detailed Description

Definition at line 90 of file ServerLayout.h.

Constructor & Destructor Documentation

Definition at line 106 of file ServerLayout.cpp.

Member Function Documentation

Adds WMS query items to the prototypeUrl and returns the result. The following items are added: service, request, version, width, height, bbox. The following items are only added if they are not already specified in the dgml file: styles, format, srs, layers.

Implements Marble::ServerLayout.

Definition at line 111 of file ServerLayout.cpp.

Definition at line 156 of file ServerLayout.cpp.

Returns the name of the server layout to be used as the value in the mode attribute in the DGML file.

Implements Marble::ServerLayout.

Definition at line 151 of file ServerLayout.cpp.
https://api.kde.org/4.14-api/kdeedu-apidocs/marble/html/classMarble_1_1WmsServerLayout.html
How to Generate Word-Lists with Python for Dictionary Attacks

As per Alex's request, I am posting about generating word-lists in Python. However, this is my FIRST attempt with Python, so please provide me with critiques and any and all comments. I really want to know what you think, as there was a little bump here and there seeing as I am transitioning from C#.

Why the Program?

Well, let's just run through a simple scenario: you're about to hack a vulnerable login page, but you think that brute force is going to take ages (in fact, there's a decent chance it will), so why not try out a dictionary attack first? Because it's faster.

[Please check my math here. I have not slept in the last 30 hours. I am not responsible for nonsense hereafter!]

The English alphabet is 26 characters in length, so a 5 character password cracked by brute force means 26^5 combinations, assuming it is all lower case with no special characters. 26^5 = 11881376 combinations! And that's the easy tier. Try the full character set of upper case, lower case and numbers: 62^5 = 916132832 combinations. In these instances, you might want to try a dictionary attack.

Now assuming a user has a password such as "thistle", a normal dictionary will suffice, but what if a password is "xZya6"? Well, this is the program for you!

Requirements
- Python

Step 1 Beginning of Your Code

#! C:\python27
import string, random

The above two lines are the beginning of our code. Since I am working on windowzer, my first line points to where I installed my Python. For Linux users, change it to

#!/usr/bin/python

The import declaration just tells the program to import the string handling library and a library to handle random chars.

Step 2 The Meat & Bones

Now, if we think about it, we want to be able to do the following:
- Tell the program how short each word should be.
- Tell the program how long each word should be.
- Tell the program how many words to generate.
So enter these lines:

minimum=input('Please enter the minimum length of any given word to be generated: ')
maximum=input('Please enter the maximum length of any given word to be generated: ')
wmaximum=input('Please enter the max number of words to be generated in the dictionary: ')

Now decide on what kind of alphabet you will use. I chose the below:

alphabet = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'

Replace the above with

alphabet = string.letters[0:52] + string.digits + string.punctuation

..for a runtime-generated alphabet in full ascii (no special symbols such as ¶).

Next, declare the placeholder for our words. I named it word so that it does not shadow the string module we imported.

word=''

Now, we tell Python to open an empty text file in write mode ("w"). (Linux users, point it to your respective directory, or just write the file name if the file is next to your PY script.)

FILE = open("wl.txt","w")

Now we write a loop which will range from 0 to the maximum number of words you defined, and generate words that hold random characters from the alphabet we defined earlier, in random order at variable length (assuming your min/max values were not identical on input).

for count in xrange(0,wmaximum):
    for x in random.sample(alphabet,random.randint(minimum,maximum)):
        word+=x

Now, still inside the outer loop, we tell Python to write the strings (words) to the file we pointed the program to, using '\n' to tell Python to separate each word with a new line, and then clear the word buffer for the next pass.

    FILE.write(word+'\n')
    word=''

And the last functions are just: (1) Close the file after editing (very important, as changes might not register if it is not closed), and (2) print the word "Done!" after finishing.

FILE.close()
print 'DONE!'

And that's it! Give it a go!

13 Comments

nice :D is there a way to do the same thing, only with a dictionary? for example, create random english words?

A friend of mine coded one that got English words :). It's quite clever. She made it spider Wikipedia for words and just made it not spider dupes or shorter than X characters. It's quite cool :3.
I have an English dictionary word list and a few basic wordlist generators - the dictionary for the English language was actually found for the game Scrabble!!! Anyway here's my repository with a few wordlist generators...and wordlists. You will have to look through the words.txt files - these are dictionary words from a to z.....the generators however make all random combinations (non-words). The words.txt contents are organised in a list - however...sometimes they show as a huge unseparated line of text (one string) but when cloned/downloaded they appear as a list. My generators are very basic....but work!

Yeah you can just adjust the for loop to run through a dictionary and join words together randomly from a given English dictionary. But there will not be any logic to it. You will end up with words like - "daisychaincatspear" or something. Is that what you meant?

yup, pretty much :) i wonder, it would be pretty cool to have a visual-plot system that plots and connects random points on a given area.

Awesome little program, I think the next step would be to utilize Mechanize: to fill the site's form automatically.

Say no to mechanize! Haha I found mechanize to be very useful

First, thanks for the code. Really useful. I however get the following error message:

<-- Traceback (most recent call last): File "/home/arrayjumper/Desktop/Word-Lists for Dictionary attacks.py", line 10, in <module> alphabet = string.letters[0:52] + string.digits + string.punctuation AttributeError: 'module' object has no attribute 'letters' -->

Any idea what the problem might be?

how to do this generating non-repeated words? Like: alphabet is '123' minimum=2 maximum=2 wmaximum=9 so result should be 11,12,13,23,21,32,22,33,31 but this setup gives me repeated results like 11,12,31,11,22,31,12,31,11

HOW TO BEGIN WITH A DICTIONARY ATTACK TO HACK A WIFI SERVER ACCOUNT?
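For readers hitting the AttributeError above: string.letters was removed in Python 3. A minimal Python 3 rewrite of the generator (our sketch, not from the article, using string.ascii_letters and a variable name that does not shadow the string module) would look something like this:

```python
import random
import string

def make_wordlist(minimum, maximum, count,
                  alphabet=string.ascii_letters + string.digits):
    """Return `count` random words of `minimum` to `maximum` characters.

    Like the original, random.sample() draws without replacement, so a
    character never repeats within a single word.
    """
    words = []
    for _ in range(count):
        length = random.randint(minimum, maximum)
        words.append("".join(random.sample(alphabet, length)))
    return words

# Write one word per line, as the original script does.
with open("wl.txt", "w") as handle:
    handle.write("\n".join(make_wordlist(2, 5, 10)) + "\n")
print("DONE!")
```

The function name make_wordlist and the default alphabet are our own choices; swap in string.punctuation if you want special characters as well.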
so mine runs, but I don't get the output using this coding example. I know it's supposed to make an empty text file for the results, but I get no text file, no results. Searched through my drive and nothing - it says 'Done' as well when I run it. Isn't it supposed to create a txt document in the same dir as the python script? I found it worked for me by printing the output to the Shell Window....then copy/paste into a txt doc - more work I know but I couldn't figure out why it wouldn't make the txt file - no errors either?? weird lol

It should be writing to a file yes. Did you download the code from the link at the bottom of the article?

Share Your Thoughts
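For the record, the combination counts quoted at the top of the article check out; this quick snippet (ours, not part of the original post) verifies the arithmetic:

```python
# Verify the brute-force combination counts for a 5-character password.
lowercase_only = 26 ** 5   # a-z only
full_charset = 62 ** 5     # a-z, A-Z and 0-9 (26 + 26 + 10 = 62 characters)

print(lowercase_only)   # 11881376
print(full_charset)     # 916132832
```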
https://null-byte.wonderhowto.com/how-to/generate-word-lists-with-python-for-dictionary-attacks-0132761/
Odoo Help Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps: CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.

Get current active view_id

Is there a way to get the current active view_id? I need to refresh the page according to the active form that the user is viewing. Or is there a way for me to refresh 2 or more pages? (parent and wizard view)

I have tried:

@api.multi
def refresh_ot(self, activity):
    ir_model_data = self.env['ir.model.data']
    form_res = self.env.ref('mrp_employee_relation.mrp_employee_activity_form_view')
    tree_res = self.env.ref('mrp_employee_relation.mrp_employee_activity_tree_view')
    return {
        'view_mode': 'form',
        'view_type': 'form',
        'res_model': 'mrp.employee.activity',
        'res_id' : activity.id,
        'view_id' : False,
        'views' : [(form_res.id, 'form'), (tree_res.id, 'tree')],
        'type' : 'ir.actions.act_window',
    }

@api.multi
def refresh_plat(self, plating):
    ir_model_data = self.env['ir.model.data']
    form_res = self.env.ref('mrp_employee_relation.mrp_plating_report_form_view')
    tree_res = self.env.ref('mrp_employee_relation.mrp_plating_report_tree_view')
    return {
        'view_mode': 'form',
        'view_type': 'form',
        'res_model': 'mrp.plating.report',
        'res_id' : plating.id,
        'view_id' : False,
        'views' : [(form_res.id, 'form'), (tree_res.id, 'tree')],
        'type' : 'ir.actions.act_window',
    }

Then I call both functions on return. This method only works if I am in the wizard view, and it won't work if I am using the parent view while pressing an object button.

Hello Munesh Nandwani, Try the below code.
@api.multi
def refresh_plat(self, plating):
    model_obj = self.env['ir.model.data']
    data_id = model_obj._get_id('module_name', 'view_id_which_you_want_refresh')
    view_id = model_obj.browse(data_id).res_id
    return {
        'type': 'ir.actions.client',
        'tag': 'reload',
        'name': _('View Name'),
        'res_model': 'model_name',
        'view_type': 'view type which is kanban, form',
        'view_mode': 'view mode which is kanban, form',
        'view_id': view_id,
        'target': 'current',
        'nodestroy': True,
    }

Hope it works for you. Thanks,

Hi Jignesh Metha, Thank you for your reply. But it's not the solution that I am looking for. I wish I could upload a picture to give a clearer view of the situation. What happens is: in the Employee Activity form I have a one2many of Plating. In that one2many tree I have a button named "start". If you click the one2many it will pop up a wizard that has the same button (start). If I use the button inside the wizard and call both refresh functions, it works properly. But if I use the button inside the one2many (without having a pop-up wizard) it will not refresh properly (it only refreshes that one2many, not the other one2many that is created because of that start button). So if I can find out whether the user is currently pressing the start button from the wizard view or not, I can call the functions accordingly. I hope this clarifies my question. Thank you.
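One possible answer to the poster's last question (our sketch, not from the thread): Odoo exposes the button's context to the method through self.env.context, so the wizard's button could set a flag such as from_wizard (a name we made up, e.g. context="{'from_wizard': True}" on the wizard button in the XML view) and the Python side can branch on it. Reduced to plain Python, the dispatch logic would be:

```python
def pick_refresh(context):
    """Decide which refresh methods to call, based on a hypothetical
    'from_wizard' flag carried in the button's context dict."""
    if context.get("from_wizard"):
        # Wizard button: refresh both the wizard view and the parent form.
        return ["refresh_ot", "refresh_plat"]
    # One2many button on the parent form: only the parent needs refreshing.
    return ["refresh_plat"]
```

In an actual model method you would read the flag with self.env.context.get('from_wizard'); the flag name and the mapping to refresh methods here are illustrative assumptions.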
https://www.odoo.com/forum/help-1/question/get-current-active-view-id-111401
How To Use Grep Command Recursively Through Sub-Directories In Linux?

Recursive -r Option

We will start with a simple example, specifying only the recursive option -r, which is short for "recursive". In this example we will search for files that contain the string import. We will search the /home directory.

Specify File Name Pattern or Extension

We can specify a file pattern to search recursively. For example, if we want to search the content of Python script or code files, we can use the *.py file pattern to look only at those files recursively. In this example we will search for the term import. We will use the --include option.

Exclude Specified File Name Pattern or Extension

We can also specify the file name patterns or extensions we want to exclude. For example, if we only want to search Python scripts but not pyc or Python cache files, we can specify that pyc files be excluded. We will use the --exclude option for this.

Search Case-Insensitive

By default grep searches case sensitively, which means it looks for the exact term. If we want to search case-insensitively, we should provide the -i option. In this example we will search for test case-insensitively, which means alternatives like TEST, Test etc. will also be matched.

$ grep -r -i "test" /home/

Search Multiple Directories

In previous examples we have provided only a single directory to search. In some cases we may need to search multiple directories that reside in different paths. In this example we will search the directories /usr/share and /home in a single command by adding them both to the end of the command.
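The commands described above appeared as screenshots in the original page; based on the prose, they would look like the following. The throwaway test tree here is our own, so the examples can be run anywhere instead of against /home:

```shell
# Build a small throwaway tree to run each variant against.
tmp=$(mktemp -d)
mkdir -p "$tmp/project" "$tmp/other"
printf 'import os\n'        > "$tmp/project/app.py"
printf 'import-bytecode\n'  > "$tmp/project/app.pyc"
printf 'TEST data\n'        > "$tmp/project/notes.txt"
printf 'import sys\n'       > "$tmp/other/tool.py"

grep -r "import" "$tmp"                        # recursive search
grep -r --include="*.py" "import" "$tmp"       # only *.py files
grep -r --exclude="*.pyc" "import" "$tmp"      # skip *.pyc files
grep -r -i "test" "$tmp"                       # case-insensitive
grep -r "import" "$tmp/project" "$tmp/other"   # multiple directories

rm -rf "$tmp"
```

Substitute /home, /usr/share and your own pattern to reproduce the article's examples exactly.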
https://www.poftut.com/use-grep-command-recursively-sub-directories-linux/
Starting another RV plugin,...but this time we want to use some pre-existing python code. To make this work,...in the plugin file, using python, near the top, in the import statements, I need to add the old reliable statement we've all seen before,.. from shotgun_api3 import Shotgun But when I launch RV, it crashes. Rob@Shotgun told us (many months ago) that in order to prevent, do a Shotgun.NO_SSL_VALIDATION = True but unfortunately it still crashes at launch. To be fair, Rob's suggestion was in a slightly different context though, but it seemed like the logical thing to try. I'm surprised that importing the main Shotgun module would cause a crash, given all the cooperative efforts of Shotgun and Tweak to have their products co-mingle, so to speak. Can anybody shed any light on how to do this?...or even if it can be done?...and/or alternatives? We don't want to use ScreeningRoom for this...it's just a pretty simple python plugin for use in the RV player. Thanks,
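One hedged workaround for crash-at-import problems in embedded interpreters (our suggestion, not something Tweak or Shotgun document for RV specifically): defer the import until the plugin actually needs the API, so a Python-level failure becomes a reportable error instead of taking the host down at launch. The helper below is our own sketch; note it only guards against Python exceptions, so a hard crash inside a C extension would still need the SSL workaround.

```python
def lazy_import(module_name):
    """Import module_name on first use; return (module, error).

    In a plugin, call lazy_import("shotgun_api3") inside the menu or
    event handler rather than at the top of the file, and report
    `error` to the user instead of letting the host die at startup.
    """
    try:
        return __import__(module_name), None
    except Exception as exc:  # catches ImportError and init-time errors
        return None, exc
```

For example, mod, err = lazy_import("shotgun_api3"), then bail out with a message if err is set.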
https://support.shotgunsoftware.com/hc/ko/community/posts/209497628-Importing-shotgun-module-in-RV-plugin
This tutorial will teach you how to pass data to a Blazor component as a parameter. This is useful if you have a customizable component that you wish to use in multiple places across your page. Parent-Child Data Flow Suppose you have a custom UI component that you intend to use throughout a page. The page would be considered the parent component, and the UI component would be considered the child. In this tutorial, you will create a reusable child component, and then you will learn how to pass parameters to it from the parent. Create the Child Component For this example, we will create a simple Blazor component that generates a label and input. Start by creating a Blazor WebAssembly project called ComponentParameters. Then right-click the project and add a new folder called Components. Next, right-click the Components folder you just created, and add a new item, Add > New Item… Select Razor Component and name it CustomInput. Replace the content of CustomInput.razor with the following code. Here, you can use some Bootstrap classes to modernize the appearance of your UI controls. <div class="form-group row"> <label for="@ID" class="col-sm-8 col-form-label">@ChildContent</label> <input id="@ID" class="col-sm-4 form-control" /> </div> @code { } Notice that there are two parameters in this component that need to be defined, ID and ChildContent. The ChildContent is the part of the HTML code between the tags. In JavaScript, it is known as innerHTML. <div>This is child content</div> If your component accepts child content, you will need to define a parameter and identify a place to render it. In the above example, you will also need to define a parameter for ID.
<div class="form-group row"> <label for="@ID" class="col-sm-8 col-form-label">@ChildContent</label> <input id="@ID" class="col-sm-4 form-control" /> </div> @code { [Parameter] public string ID { get; set; } [Parameter] public RenderFragment ChildContent { get; set; } } As you can see, the ChildContent will be rendered as the text of the label for the input element. You could render the ChildContent anywhere in your custom component, something that makes Blazor very powerful! The ID parameter is used as the unique id for the input element. The label also references it so it knows which input it belongs to. Use Child Component with Parameters If you want to place your component on the project’s home page Index.razor , you will need to let the page know where to find the CustomInput component by issuing a @using statement. It should be in the form namespace.directory. In this case, the following line will work. @using ComponentParameters.Components To use the child component you just created, you will reference it by name inside angle brackets <>, just as you would any other html element tag. You will then provide values for any of the parameters you defined in the child component. You might use the component to add an input for collecting a person’s name as follows: <CustomInput ID="firstname">Enter your name</CustomInput> You placed the component by using the <CustomInput> tag. Then, you assigned a value to the ID parameter, just as you would an attribute of a regular html tag. Finally, the child content you provided will be used as the text of the label, exactly as defined in CustomInput.razor. You could place as many instances of the child component on the parent page as you wish, and you could provide different values for each of the parameters each time. You can think of it as a shortcode. This is the beauty of Blazor and Razor Components! 
@page "/" @using ComponentParameters.Components <div style="max-width:600px"> <CustomInput ID="firstname">First Name</CustomInput> <CustomInput ID="lastname">Last Name</CustomInput> <CustomInput ID="number">Phone Number</CustomInput> </div> The Bottom Line In this tutorial, you learned how to pass parameters from a parent component to a child component. Imagine using this trick while looping through the elements of a collection in order to populate a page? While this approach works for passing data to components on the same page, you may also be interested in how to pass parameters between components on different pages. Stay tuned, because that tutorial is coming next week!
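As a sketch of that looping idea (the field names here are hypothetical, not from the tutorial), a parent page could render one CustomInput per element of a collection:

```razor
@page "/people"
@using ComponentParameters.Components

<div style="max-width:600px">
    @foreach (var field in new[] { "firstname", "lastname", "phone" })
    {
        <CustomInput ID="@field">Please enter your @field</CustomInput>
    }
</div>
```

Each iteration supplies a different ID parameter and different child content, so one component definition produces the whole form.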
https://wellsb.com/csharp/aspnet/pass-data-to-blazor-component/
All C# classes, of any type, are treated as if they ultimately derive from System.Object. Interestingly, this includes value types. A base class is the immediate "parent" of a derived class. A derived class can be the base to further derived classes, creating an inheritance "tree" or hierarchy. A root class is the topmost class in an inheritance hierarchy. In C#, the root class is Object. The nomenclature is a bit confusing until you imagine an upside-down tree, with the root on top and the derived classes below. Thus, the base class is considered to be "above" the derived class. Object provides a number of methods that subclasses can and do override. These include Equals( ) to determine if two objects are the same; GetType( ), which returns the type of the object (discussed in Chapter 8); and ToString( ), which returns a string to represent the current object (discussed in Chapter 10). Table 5-1 summarizes the methods of Object. Example 5-4 illustrates the use of the ToString( ) method inherited from Object, as well as the fact that primitive datatypes such as int can be treated as if they inherit from Object. using System; public class SomeClass { private int val; public SomeClass(int someVal) { val = someVal; } public override string ToString( ) { return val.ToString( ); } } public class Tester { static void Main( ) { int i = 5; Console.WriteLine("The value of i is: {0}", i.ToString( )); SomeClass s = new SomeClass(7); Console.WriteLine("The value of s is {0}", s.ToString( )); } } Output: The value of i is: 5 The value of s is 7 The documentation for Object.ToString( ) reveals its signature: public virtual string ToString( ); It is a public virtual method that returns a string and that takes no parameters. All the built-in types, such as int, derive from Object and so can invoke Object's methods. Example 5-4 overrides the virtual function for SomeClass, which is the usual case, so that the class' ToString( ) method will return a meaningful value. 
If you comment out the overridden function, the base method will be invoked, which will change the output to: The value of s is SomeClass Thus, the default behavior is to return a string with the name of the class itself. Classes do not need to explicitly declare that they derive from Object; the inheritance is implicit.
https://etutorials.org/Programming/Programming+C.Sharp/Part+I+The+C+Language/Chapter+5.+Inheritance+and+Polymorphism/5.5+The+Root+of+All+Classes+Object/
NAME | SYNOPSIS | PARAMETERS | DESCRIPTION | EXAMPLES | ERRORS | ATTRIBUTES | SEE ALSO | NOTES

#include <sys/devpoll.h>
int fd = open("/dev/poll", O_RDWR);
ssize_t n = write(int fd, struct pollfd buf[], int bufsize);
int n = ioctl(int fd, DP_POLL, struct dvpoll* arg);
int n = ioctl(int fd, DP_ISPOLLED, struct pollfd* pfd);

fd
Open file descriptor that refers to the /dev/poll driver.
buf
Array of pollfd structures.
bufsize
Size of buf in bytes.
arg
Pointer to dvpoll structure.
pfd
Pointer to pollfd structure.

The /dev/poll driver is a special driver that enables you to monitor multiple sets of polled file descriptors. By using the /dev/poll driver, you can efficiently poll large numbers of file descriptors. Access to the /dev/poll driver is provided through open(2), write(2), and ioctl(2) system calls.

Writing an array of pollfd struct to the /dev/poll driver has the effect of adding these file descriptors to the monitored poll file descriptor set represented by the fd. To monitor multiple file descriptor sets, open the /dev/poll driver multiple times. Each fd corresponds to one set. For each pollfd struct entry (defined in sys/poll.h):

struct pollfd {
int fd;
short events;
short revents;
}

The fd field specifies the file descriptor being polled. The events field indicates the interested poll events on the file descriptor. If a pollfd array contains multiple pollfd entries with the same fd field, the "events" field in each pollfd entry is OR'ed. A special POLLREMOVE event in the events field of the pollfd structure removes the fd from the monitored set. The revents field is not used. Write returns the number of bytes written successfully or -1 when write fails.

The DP_POLL ioctl is used to retrieve returned poll events that occurred on the polled file descriptors in the monitored set represented by fd.
arg is a pointer to a dvpoll structure, which is defined as follows:

struct dvpoll {
struct pollfd* dp_fds;
int dp_nfds;
int dp_timeout;
}

The dp_fds field points to a buffer that holds an array of returned pollfd structures. The dp_nfds field specifies the size of the buffer in terms of the number of pollfd entries it contains. The dp_nfds field also indicates the maximum number of file descriptors from which poll information can be obtained. If there are no interested events on any of the polled file descriptors, the DP_POLL ioctl call will wait dp_timeout milliseconds before returning. If dp_timeout is 0, the ioctl call returns immediately. If dp_timeout is -1, the call blocks until an interested poll event is available or the call is interrupted. Upon return, if the ioctl call has failed, -1 is returned. The memory content pointed to by dp_fds is not modified. A return value of 0 means the ioctl timed out. In this case, the memory content pointed to by dp_fds is not modified. If the call is successful, it returns the number of valid pollfd entries in the array pointed to by dp_fds; the contents of the rest of the buffer is undefined. For each valid pollfd entry, the fd field indicates the file descriptor on which the polled events happened. The events field is the user-specified poll events. The revents field contains the events that occurred. -1 is returned if the call fails.

The DP_ISPOLLED ioctl allows you to query if a file descriptor is already in the monitored set represented by fd. The fd field of the pollfd structure indicates the file descriptor of interest. The DP_ISPOLLED ioctl returns 1 if the file descriptor is in the set. The events field contains the currently polled events. The revents field contains 0. The ioctl returns 0 if the file descriptor is not in the set. The pollfd structure pointed to by pfd is not modified. The ioctl returns -1 if the call fails.

The following example shows how /dev/poll may be used.

{ ...
/* * open the driver */ if ((wfd = open("/dev/poll", O_RDWR)) < 0) { exit(-1); } pollfd = (struct pollfd* )malloc(sizeof(struct pollfd) * MAXBUF); if (pollfd == NULL) { close(wfd); exit(-1); } /* * initialize buffer */ for (i = 0; i < MAXBUF; i++) { pollfd[i].fd = fds[i]; pollfd[i].events = POLLIN; pollfd[i].revents = 0; } if (write(wfd, &pollfd[0], sizeof(struct pollfd) * MAXBUF) != sizeof(struct pollfd) * MAXBUF) { perror("failed to write all pollfds"); close (wfd); free(pollfd); exit(-1); } /* * read from the devpoll driver */ dopoll.dp_timeout = -1; dopoll.dp_nfds = MAXBUF; dopoll.dp_fds = pollfd; result = ioctl(wfd, DP_POLL, &dopoll); if (result < 0) { perror("/dev/poll ioctl DP_POLL failed"); close (wfd); free(pollfd); exit(-1); } for (i = 0; i < result; i++) { read(dopoll.dp_fds[i].fd, rbuf, STRLEN); } ... } The following example is part of a test program which shows how DP_ISPOLLED() ioctl may be used. { ... loopcnt = 0; while (loopcnt < ITERATION) { rn = random(); rn %= RANGE; if (write(fds[rn], TESTSTRING, strlen(TESTSTRING)) != strlen(TESTSTRING)) { perror("write to fifo failed."); close (wfd); free(pollfd); error = 1; goto out1; } dpfd.fd = fds[rn]; dpfd.events = 0; dpfd.revents = 0; result = ioctl(wfd, DP_ISPOLLED, &dpfd); if (result < 0) { perror("/dev/poll ioctl DP_ISPOLLED failed"); printf("errno = %d\n", errno); close (wfd); free(pollfd); error = 1; goto out1; } if (result != 1) { printf("DP_ISPOLLED returned incorrect result: %d.\n", result); close (wfd); free(pollfd); error = 1; goto out1; } if (dpfd.fd != fds[rn]) { printf("DP_ISPOLLED returned wrong fd %d, expect %d\n", dpfd.fd, fds[rn]); close (wfd); free(pollfd); error = 1; goto out1; } if (dpfd.revents != POLLIN) { printf("DP_ISPOLLED returned unexpected revents %d\n", dpfd.revents); close (wfd); free(pollfd); error = 1; goto out1; } if (read(dpfd.fd, rbuf, strlen(TESTSTRING)) != strlen(TESTSTRING)) { perror("read from fifo failed"); close (wfd); free(pollfd); error = 1; goto out1; } 
loopcnt++;
}

EACCES
A process does not have permission to access the content cached in /dev/poll.
EINTR
A signal was caught during the execution of the ioctl(2) function.
EFAULT
The request argument requires a data transfer to or from a buffer pointed to by arg, but arg points to an illegal address.
EINVAL
The request or arg parameter is not valid for this device.

See attributes(5) for a description of the following attributes:

open(2), poll(2), write(2), attributes(5)

The /dev/poll API is particularly beneficial to applications that poll a large number of file descriptors repeatedly. Applications will exhibit the best performance gain if the polled file descriptor list rarely changes. When using the /dev/poll driver, you should remove a closed file descriptor from a monitored poll set. Failure to do so may result in a POLLNVAL revents being returned for the closed file descriptor. When a file descriptor is closed but not removed from the monitored set, and is reused in a subsequent open of a different device, you will be polling the device associated with the reused file descriptor. In a multithreaded application, careful coordination among threads doing close and DP_POLL ioctl is recommended for consistent results.

The /dev/poll driver caches a list of polled file descriptors, which are specific to a process. Therefore, the /dev/poll file descriptor of a process will be inherited by its child process, just like any other file descriptors. But the child process will have very limited access through this inherited /dev/poll file descriptor. Any attempt to write or do ioctl by the child process will result in an EACCES error. The child process should close the inherited /dev/poll file descriptor and open its own if desired.

The /dev/poll driver does not yet support polling. Polling on a /dev/poll file descriptor will result in POLLERR being returned in the revents field of the pollfd structure.
http://docs.oracle.com/cd/E19683-01/816-5223/6mbco0a3n/index.html
Official project repository for the Setuptools build system.

Hiya, I have a setup.py that is currently using setuptools to build a C++ python extension (which in turn is part of a conda recipe). I need to ship some C++ header files with this package, but I can't seem to get it to work. I have added the headers into my setup, like this:

from setuptools import setup
setup(..., headers=[str(fn) for fn in Path("include").glob("**/*") if fn.is_file()])

However, when I install the package using python setup.py install it creates an egg file without installing these headers. I've read on various forums how to install a package that includes headers but nothing that I have tried works. For example, I've tried adding recursive-include include *.hpp to MANIFEST.in but this makes no difference to what gets installed. I've also tried using distutils.core.setup instead of setuptools.setup but then my pybind11 extension does not build. I've tried using pip install . but then my data_files no longer get packaged and my unit tests fail. The setuptools documentation is a bit sparse when it comes to packaging header files and a lot of the information in online forums is very old. I can see that setuptools does have an install_headers command, can someone point me to an example of how this works? Thanks!
https://gitter.im/pypa/setuptools?at=5ff81fbacd31da3d9a9c1b2d
OpenGL Discussion and Help Forums > DEVELOPERS > OpenGL coding: advanced > jpeg files

reubenhawkins 08-11-2002, 12:03 PM
Does anybody know where I can find info on loading jpeg files and using them as textures? Thanks.

Zeno 08-11-2002, 12:22 PM
DevIL
[This message has been edited by Zeno (edited 08-11-2002).]

davepermen 08-11-2002, 12:24 PM
OpenIL/DevIL Nehe IPicture loading google google google what you want to do is build upon two parts: the first one, don't ask it here the second one is simple and if you don't know it, ask in the beginners forum. there is some free source to load jpg files, but i don't remember the name.. well, you'll find it by browsing and searching and googling and searching the forums (gamedev, here, flipcode, there, wherever, the topic is discussed and the solutions are out there..) good luck

tcobbs 08-11-2002, 04:08 PM
In addition to what's already been mentioned, be sure that JPEG really is an appropriate format. JPEG files are good for photographic images (which many textures are), but can be extremely poor for hand-drawn or computer-drawn images. (Many--if not most--webmasters are completely clueless of this fact). In addition, JPEG files don't support an alpha channel, so if you want 32-bit textures with alpha, you'll have to use a different format (such as PNG).

SirKnight 08-11-2002, 05:10 PM
I personally use tga files. They can be compressed or uncompressed if you want and they can support an alpha channel. So maybe check those out too. -SirKnight

Fredz 08-11-2002, 06:17 PM
JPEG's are Evil too :

Nutty 08-12-2002, 02:35 PM
jpegs aren't evil. jpegs rock! Use TGA's for alpha channels and normal maps. Use jpegs for everything else. But don't crank the compression too high, or they'll look crap. Oh, and according to Adobe Photoshop 6, PNG's don't support alpha channels. Nutty

Julien Cayzac 08-12-2002, 03:10 PM
Originally posted by Nutty: jpegs aren't evil. jpegs rock!
Use TGA's for alpha channels and normal maps. Use jpegs for everything else. ...Or simply use PNGs for everything down to multilayered heightmaps Oh, and according to Adobe Photoshop 6, PNG's don't support alpha channels. They must mean Adobe's Non-Compliant PNG's (TM) don't. Use The Gimp. Julien. SirKnight 08-12-2002, 05:03 PM Well I think JPEGs are still good. For some things you may not want them; I can think of a few things I would never use a jpg for. But anyway, look at Quake 3. If you take a peek into the pak files, you will notice that a lot of the textures are JPGs! And the graphics in quake 3 look pretty good to me. -SirKnight zed 08-12-2002, 10:24 PM i don't know about photoshop 6 but photoshop 5.5 has embarrassing? (for the programmeurs) support for png's. eg if you save a png with alpha channel and then reopen it, the alpha channel has corrupted some of the RGB channels. TreeMan 08-15-2002, 06:28 AM check out the Intel JPEG Library Developer's Guide in pdf format available at : this url might also be useful: for a C++ example Hope that could help DFrey 08-15-2002, 06:55 AM They must mean Adobe's Non-Compliant PNG's (TM) don't. Use The Gimp. Or Paint Shop Pro 7. It too can easily support alpha in png files. pleopard 08-15-2002, 10:31 AM Originally posted by Fredz: JPEG's are Evil too :. True, JPEG does not support an alpha channel but it does support greyscale luminance images. If I need an alpha mask for a JPEG image, all I have to do is save it as a greyscale JPEG and load it at the same time I load the RGB image. There are many tradeoffs you must consider when selecting an image format. I choose JPEG simply because it consistently yields the best compression ratios for the types of textures I use. That in turn cuts down on the loading time and the total amount of disk space I need for storage. Jeffry J Brickley 08-15-2002, 10:52 AM Jpeg/lossless jpeg/png/targa is all a matter of perspective and intent of use.
Jpeg can take a little more time to load/save due to the multi-pass compression. Targa/raw or Targa/compressed can make a decent screen save with minimal memory overhead and is reasonably fast. etc. etc. You can pretty much use almost anything for anything else, from saving all colors into their own greyscale map to then using a greyscale mapped GIF *shudder*. If someone wants to use jpeg, I tell them go for it. I do one step worse: I do almost everything now with wavelet compression. but then the size of images I use is just plain HUGE anymore, and parsing a targa/tiff/jpeg/png all took way too long. My point, if there is one, and even I am not sure anymore, is that he has a right to use jpeg without arguing over it. but if you have a better suggestion, say, "try this, you might like it better, i do."

Enjoy!
Jeff

P.S. Try parsing Nasa's Blue Marble 1Km for the world in almost any format. a programmer's nightmare. If anyone has anything faster than wavelet compression for that large of an image, I am definitely up for a change!

tcobbs 08-16-2002, 07:22 PM
Originally posted by Nutty: Oh, and according to Adobe Photoshop 6, PNG's dont support alpha channels.
That's partially because they finally woke up and smelled the burning coffee and implemented transparency support correctly. If you create a Photoshop image with a transparent background and save it using the PNG format, it automatically generates an appropriate alpha channel that represents the transparency of each pixel (at a full 8 bits, NOT on/off). Seeing as how that's almost certainly what you really wanted in the alpha channel anyway, this is a huge improvement over forcing you to draw into the alpha channel manually.

dorbie 08-16-2002, 09:38 PM
PNGs do support alpha channels. I've created PNG files with alpha using Adobe photoshop and it works just fine; it also seems to work in various browsers (although IE sucks at alpha blending). This image (DevIL logo) is a PNG and was created using photoshop.
[This message has been edited by dorbie (edited 08-16-2002).]

Wingman 08-17-2002, 01:15 AM
Here's the code for loading jpg/jpeg files (but it's not my work):
-----------------------------------------------------------------

    #include "jpeglib.h"
    #pragma comment(lib, "jpeg.lib")

    //***************** Prototypes ************************//

    // function for loading .jpg
    tImage *LoadJPG(const char *strFileName);

    // for .jpg decoding and decompression
    void DecodeJPG(jpeg_decompress_struct *cinfo, tImage *pImageData);

    ///////////////////////////////// DECODE JPG ///////////
    /* here we decode .jpg */
    void DecodeJPG(jpeg_decompress_struct *cinfo, tImage *pImageData)
    {
        // read the JPG's header
        jpeg_read_header(cinfo, TRUE);

        // start decompression
        jpeg_start_decompress(cinfo);

        pImageData->channels = cinfo->num_components;
        pImageData->sizeX = cinfo->image_width;
        pImageData->sizeY = cinfo->image_height;

        int rowSpan = cinfo->image_width * cinfo->num_components;
        pImageData->data = (unsigned char*)malloc(sizeof(unsigned char) * rowSpan * pImageData->sizeY);

        // point each row pointer at the start of its scanline
        unsigned char **rowPtr = new unsigned char*[pImageData->sizeY];
        for (int i = 0; i < pImageData->sizeY; i++)
            rowPtr[i] = &(pImageData->data[i * rowSpan]);

        int rowsRead = 0;
        while (cinfo->output_scanline < cinfo->output_height)
        {
            rowsRead += jpeg_read_scanlines(cinfo, &rowPtr[rowsRead],
                                            cinfo->output_height - rowsRead);
        }

        delete [] rowPtr;

        // finish decompression
        jpeg_finish_decompress(cinfo);
    }

    ///////////////////////////////// LOAD JPG /////////////
    /* here we load .jpg */
    tImage *LoadJPG(const char *strFileName)
    {
        struct jpeg_decompress_struct cinfo;
        tImage *pImageData = NULL;
        FILE *pFile;

        if ((pFile = fopen(strFileName, "rb")) == NULL)
        {
            MessageBox(g_hWnd, "Loading Failed!", "Error", MB_OK);
            return NULL;
        }

        jpeg_error_mgr jerr;

        // Have our compression info object point to the error handler address
        cinfo.err = jpeg_std_error(&jerr);

        jpeg_create_decompress(&cinfo);
        jpeg_stdio_src(&cinfo, pFile);

        pImageData = (tImage*)malloc(sizeof(tImage));

        DecodeJPG(&cinfo, pImageData);

        jpeg_destroy_decompress(&cinfo);
        fclose(pFile);

        return pImageData;
    }

jwatte 08-17-2002, 10:07 AM
I kind-of like .dds files. They come pre-compressed, so you don't have to spend time in the driver when uploading. The PhotoShop tools for saving .dds are pretty good (download from nVIDIAs site), and generate much better compressed images than the quick-and-dirty job the driver is doing. Now, if you're trying to minimize on-disk size, then something like JPEG2000 might be worth it. Else, just go with .dds. Except for normal maps -- s3tc doesn't do so well on those :-)

evanGLizr 08-17-2002, 04:28 PM
Originally posted by pleopard:.
Huffman and RLE encodings are applied to the coefficients of the DCT (after the lossy encoding). I think that although offtopic, this info would be useful:). The FAQ is dated 28 of March 1999, so maybe JPEG-LS has indeed become widespread since then :?

[EDIT] Added date info.

[This message has been edited by evanGLizr (edited 08-17-2002).]
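Since much of the thread turns on which formats support what, here is a small, stdlib-only Python aside (the thread itself is C++; this sketch is mine, not a poster's) that identifies a texture file by its magic bytes and checks a PNG header for an alpha channel. The byte offsets follow the published JPEG/PNG layouts:

```python
import struct

def sniff_format(data: bytes) -> str:
    """Identify a texture file by its leading magic bytes."""
    if data[:3] == b"\xff\xd8\xff":
        return "jpeg"
    if data[:8] == b"\x89PNG\r\n\x1a\n":
        return "png"
    if data[:6] in (b"GIF87a", b"GIF89a"):
        return "gif"
    return "unknown"  # note: TGA has no magic number at the start of the file

def png_has_alpha(data: bytes) -> bool:
    """PNG color types 4 (gray+alpha) and 6 (RGBA) carry an alpha channel."""
    if sniff_format(data) != "png":
        raise ValueError("not a PNG file")
    # Layout: 8-byte signature, 4-byte chunk length, b"IHDR", 4-byte width,
    # 4-byte height, 1-byte bit depth, then the color type at offset 25.
    return data[25] in (4, 6)

# Minimal synthetic RGBA header (bit depth 8, color type 6)
rgba = (b"\x89PNG\r\n\x1a\n" + struct.pack(">I", 13) + b"IHDR"
        + struct.pack(">II", 64, 64) + bytes([8, 6, 0, 0, 0]))
print(sniff_format(rgba), png_has_alpha(rgba))  # -> png True
```

This settles the "does this PNG actually have alpha" question from the thread without opening the file in any particular editor.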
http://www.opengl.org/discussion_boards/archive/index.php/t-151024.html
Java Quiz: Nested Classes and Constructors

DZone's latest quiz for intermediate Java developers! Learn how nested classes and constructors behave.

Purpose

This is a quiz aimed at intermediate developers to:

- Introduce you to how nested classes behave in Java.
- Improve your internal calculations and expectations regarding nested classes and constructors.

What is written to the standard output as a result of executing the following code?

    public class Outer {
        private int x;
        private int y;

        Outer() {
            x = 4;
            y = 3;
        }

        Outer(int z) {
            new Outer();
            x += z;
            y -= z;
        }

        class Inner {
            Inner() {
                x++;
                y += 2;
            }

            Inner(int i) {
                this();
                x -= i;
                y += i;
                System.out.print(x++ + " " + --y + " ");
            }
        }

        public static void main(String[] args) {
            Inner inner = new Outer(3).new Inner(2);
        }
    }

Answer Explanation

First, create the Inner object:

    Inner inner = new Outer(3).new Inner(2);

new Outer(3) calls the int-arg constructor of the outer class:

    new Outer(); // this new object doesn't affect the initial values of x & y
    x += z;      // z = 3, so x = 0 + 3 = 3
    y -= z;      // y = 0 - 3 = -3

Now Inner(2) calls the int-arg constructor of the Inner class:

    Inner(int i) {
        this(); // calls the no-arg constructor of the inner class:
                // x++ gives x = 3 + 1 = 4, and y += 2 gives y = -3 + 2 = -1
        // parameter i = 2
        x -= i; // x = 4 - 2 = 2
        y += i; // y = -1 + 2 = 1
        System.out.print(x++ + " " + --y + " ");
    }

x++ actually means x = x + 1 and adds one to the value after the expression is evaluated; --y is the predecrement and x++ is the postincrement. x is still 2 when printed, and --y decrements by one first, so y = 1 - 1 = 0. So "2 0" is written to the standard output, making B the correct answer.

You can learn more about Sar here at his website!

Hope you enjoyed this quiz. Let us know your thoughts in the comments!
https://dzone.com/articles/java-quiz-nested-classes-and-constructors?fromrel=true
In this tutorial we will learn about Python System Command. Previously we learned about Python Random Number.

    import os

    cmd = "git --version"
    returned_value = os.system(cmd)  # returns the exit code in unix
    print('returned value:', returned_value)

The following output was found on Ubuntu 16.04, where git is already installed.

    git version 2.14.2
    returned value: 0

    import subprocess

    cmd = "git --version"
    returned_value = subprocess.call(cmd, shell=True)  # returns the exit code in unix
    print('returned value:', returned_value)

    import subprocess
    # Command to execute
    cmd = "termux-location"
    # Execute Command
    ecmd = subprocess.check_output(cmd)
    # Save Output to Variable
    scmd = ecmd.decode('utf-8')
    # Access data
    # Adjust right and left value to narrow in on desired information
    print(scmd[16:27])

    a = os.popen("python manage.py test").read()
    print(a)
    a ---->> ' '

How can we access 3rd party cli commands like executing redis-cli commands or mongodb commands, in which a new shell is popped up?

The only way to achieve happiness is to cherish what you have and forget what you don't have

that's great. how to store os.system output to a variable and print later, is it possible?

I have the same question too. How?

The "subprocess" is returning NotADirectoryError: [Errno 20] Not a directory

Hey Guys, I am using python3.6.9, need some help here. I am coming from a perl background, trying to find a way to run a command (say for example the 'df' command) and store the output of columns 2 and 6 (size and name) in a key,value pair in a hash (a dict in the python world) and/or push these into a list which can be called from anywhere in the script. I have not seen very many examples of this kind of logic; any help is appreciated. Thanks Q.

    import subprocess
    cmd = "ps -ef | wc -l"
    returned_output = subprocess.check_output(cmd)
    print(returned_output.decode("utf-8"))

not working in case of a command using a pipe. How to solve this problem?

Have you found the solution for this?
It works fine in Python3. In case it's still not working for you, try passing the cmd string as a raw input like this:

    cmd = r'ps -ef|wc -l'

In that case, python will not treat the '|' as a special parameter (OR operator).

with python 2 no need to decode

I want to include a variable which stores the result of the previous command in subprocess.Popen({"command here"], subprocess.PIPE), example:

    state = "here"
    cmd = [""" grep -i $state /log/messages"""]

Hello, what if after executing the command, cmd prompts something like "Press any key to continue or ctrl-c to cancel" and I basically need to somehow send any key press. Thanks for the post.

someone can share me the python os system, need to try it; this my mail. marckeef@hotmail.com

If you just wanted to accomplish that then you could just do this:

    >>> text = 'abc\ncde'
    >>> print(text, file=open('file1', 'w'))

The file 'file1' will be saved to the current working directory. You can view this by:

    >>> import os
    >>> os.getcwd()

I'm pretty new to python scripting, I'm trying to achieve the python equivalent of the shell command

    echo -e "abc\ncde" >file1

The contents of file1 then look like this:

    abc
    cde

My python script has:

    cmd = "echo -e \"abc\ncde\" >file"
    os.system(cmd)

However, when executing this my file looks like this:

    -e abc
    cde

-e is an option for echo to recognise \n as a new line character and should not be written to the file. Is there a way around this? I'm using python 3.6.8.

Here's the fix: Just wrap it in triple quotes, remove the escapes on the quotes you had, and specify the path to echo, such as /bin/echo.
    cmd = """/bin/echo -e "abc\ncde" >file"""
    os.system(cmd)

I am trying to run the "netsh wlan start hostednetwork" command but it's telling me "You must run this command from a command prompt with administrative privilege", what can i do pls

I am trying to use the check_output command to call "git branch -r --merged" as follows:

    import subprocess
    import os
    import sys
    import csv

    cmd = "git branch -r --merged"
    listed_merged_branches = subprocess.check_output(cmd)
    print(listed_merged_branches.decode("utf-8"))

i am getting CalledProcessError. Could you help please?

Please try:

    listed_merged_branches = subprocess.check_output(['git', 'branch', '-r', '--merged'])
    print(listed_merged_branches.decode("utf-8"))
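On the piped-command question above: check_output runs a single program without a shell, so in "ps -ef | wc -l" the pipe is passed through as a literal argument. Two common fixes are shell=True or chaining two Popen objects. Here is a hedged sketch of the latter; Python one-liners stand in for ps and wc so it runs the same everywhere:

```python
import subprocess
import sys

# Stand-ins for `ps -ef` (producer) and `wc -l` (consumer)
producer = [sys.executable, "-c", "print('a'); print('b'); print('c')"]
consumer = [sys.executable, "-c", "import sys; print(len(sys.stdin.readlines()))"]

p1 = subprocess.Popen(producer, stdout=subprocess.PIPE)
p2 = subprocess.Popen(consumer, stdin=p1.stdout, stdout=subprocess.PIPE)
p1.stdout.close()  # so p2 sees EOF when p1 exits
line_count, _ = p2.communicate()
print(line_count.decode().strip())  # -> 3
```

On a Unix box the original command also works as subprocess.check_output("ps -ef | wc -l", shell=True), with the usual caution that shell=True must never see untrusted input.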
https://www.journaldev.com/16140/python-system-command-os-subprocess-call
Get Control Over [Embed]

First of all, let's talk about the basics of [Embed] in case it's a new concept to you. This metadata tag tells the compiler to embed a file into the SWF and bind it to whatever comes after it. A huge majority of the code you see using this, including every [Embed] on this site, looks like this:

The above works for images. This works for arbitrary data files:

The only difference is in adding the mimeType to the [Embed] tag to tell the compiler to embed the file as a ByteArray rather than a Bitmap. There are other types you can embed (e.g. Font), but we'll limit ourselves to these two types for the purposes of this article.

One of the disadvantages to the above approach is that the mx.core.BitmapAsset class will be included in your SWF and thus add to its file size, even though you almost certainly only care that it ultimately derives from Bitmap and therefore has all the useful properties, like having an accessible BitmapData for manipulation and uploading to Stage3D textures.

It turns out that if you're willing to use an alternative syntax for the [Embed] metadata tag, you can actually skip embedding these mostly-irrelevant classes. This isn't covered in Adobe's documentation. Here's how the alternate version looks:

Here you're explicitly providing a class to bind the file to rather than using a variable typed as a Class. So you don't need any casting and know at compile time exactly what you're instantiating. You can also add functionality to the class, like how SomeImage has a function to return a String description of its dimensions. Unfortunately, it requires that you add another file if you want to use the embedded resource file from more than one source code (i.e. .as) file.
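The article's inline code samples did not survive extraction. As a hedged reconstruction (the file names and the SomeImage/SomeData identifiers are illustrative, and the exact listings should be checked against the original post), the forms being contrasted look roughly like this in ActionScript 3:

```actionscript
// Classic form: bind the embedded file to a Class-typed variable.
// For an image, instantiating it yields an mx.core.BitmapAsset (a Bitmap).
[Embed(source="someimage.png")]
public var SomeImageClass:Class;

// With a mimeType, the file is embedded as a ByteArray instead.
[Embed(source="somedata.bin", mimeType="application/octet-stream")]
public var SomeDataClass:Class;

// Alternate form: bind the file directly to a class you define yourself,
// skipping BitmapAsset and letting you add functionality.
[Embed(source="someimage.png")]
public class SomeImage extends Bitmap
{
	// Example of added functionality: a String description of the dimensions
	public function describeSize(): String
	{
		return width + "x" + height;
	}
}
```

The alternate form is the one the article credits with avoiding the extra mx.core classes and the cast from a plain Class reference.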
Here's how this alternate version would look for arbitrary, binary data:

In summary, this alternate syntax allows you to avoid embedding extra classes like mx.core.BitmapAsset, derive classes from embedded files to add on functionality, and gain some compile-time confidence by skipping the need for casting from plain Object. Why not give it a try next time you need to [Embed]?

Spot a bug? Have a question or suggestion? Post a comment!

#1 by henke37 on August 12th, 2013:
Another advantage is that you get rid of the pointless variable/constant to hold the Class reference.

#2 by max troost on August 12th, 2013:
this is a great one, mx.core.BitmapAsset always seemed obsolete when building an AS3-only app. But how would you implement this for a Font? Would it be something like this?

    [Embed(source="somefont.ttf", ....)]
    public class SomeImage extends Font { }

#3 by jackson on August 12th, 2013:
That's the basic strategy, but there are a few quirks to work around. For TrueType fonts you'll need to add embedAsCFF="false" to get a font class like this:

Then you'll need to call Font.registerFont before you use the embedded font. For TextField, you also have to set the embedFonts field to true. Here's a little test app that uses the above font:

Here's a demo SWF showing off Trajan, which you probably don't have installed on your computer. If you do, uninstall it before running the test and re-install afterward so you can confirm that it's not a system font but actually embedded in the SWF.

#4 by Stephen Bailey on August 12th, 2013:
If you are using this to embed a swf, just remember that SWF's are treated slightly differently, in that the embedded object is not the SWF, but rather its Loader, so you may have to wait on its complete event before you can use it (although I do worry about the loader completing before your callback is in place, and you have little control over this since you are the one calling the load method in the first place?).
Credit to this link where I first read about this :

#5 by max troost on August 15th, 2013:
thanx a lot Jackson!

#6 by Florent Cailhol on August 20th, 2013:
You could also embed an image as BitmapData.

    [Embed(source="someimage.png")]
    public class SomeImage extends BitmapData
    {
        public function SomeImage()
        {
            super(0, 0);
        }
    }

#7 by test on August 22nd, 2013:
hello. why does the embed not work in flex 4.10?

#8 by jackson on August 22nd, 2013:
I'm not sure. Have you asked someone from the Apache Flex project?

#9 by erikdebruin on September 10th, 2013:
Hi,

You are always invited to report issues with the Flex SDK to the Apache Flex JIRA system [1]. Please provide a 'detailed' explanation and the smallest test project with which we can reproduce the issue (if possible; otherwise the report alone is fine).

Jackson: thanks, this is another amazing 'hack', I'm glad Flex can still surprise me ;-)

EdB

1:
http://jacksondunstan.com/articles/2343
If you view the source code of a typical web page, you are likely to see something like this near the top:

    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "">

and/or

    <html xmlns="" ...>

Why are these systems bothering to request these resources at all if they don't care about the response? (For repeat offenders we eventually block the IPs at the TCP level as well.)

We have identified some of the specific software causing this excessive traffic and have been in contact with the parties responsible to explain how their product or service is essentially creating a Distributed Denial of Service (DDoS) attack against W3C. Some have been very responsive, correcting the problem in a timely manner; unfortunately others have been dragging on for quite some time without resolution, and a number of sources remain unidentified.

We would like to see this issue resolved once and for all, not just for our own needs but also to improve the quality of software deployed on the Web at large. Therefore we have a number of suggestions for those writing and deploying such software:

- Pay attention to HTTP response codes
  This is basic good programming practice: check your return codes, otherwise you have no idea when something goes wrong.

- Honor HTTP caching/expiry information
  Resources on our site are served in a cache-friendly way: our DTDs and schemata generally have explicit expiry times of 90 days or more, so there's no reason to request these resources several times a day. (In one case we noticed, a number of IP addresses at one company were requesting DTDs from our site more than three hundred thousand times per day each, per IP address.) Mark Nottingham's caching tutorial is an excellent resource to learn more about HTTP caching.

- If you implement HTTP in a software library, allow for caching
  Any software that makes HTTP requests to other sites should make it straightforward to enable the use of a cache.
  Applications that use such libraries to contact other sites should clearly document how to enable caching, and preferably ship with caching enabled by default. Many XML utilities have the ability to use an XML catalog to map URIs for external resources to a locally-cached copy of the files. For information on configuring XML applications to use a catalog, see Norman Walsh's Caching in with Resolvers article or Catalog support in libxml.

- If the software doesn't make it straightforward to do so, file a bug report with the vendor, seek alternatives, or use an intercepting proxy server with a built-in cache.

- Don't fetch stuff unless you actually need it
  Judging from the response to our 503 errors, much of the software requesting DTDs and schemata from our site doesn't even need them in the first place, so requesting them just wastes bandwidth and slows down the application. If you don't need it, don't fetch it!

- Identify your user agents
  When deploying software that makes requests to other sites, you should set a custom User-Agent header to identify the software and provide a means to contact its maintainers. Many of the automated requests we receive have generic user-agent headers such as Java/1.6.0 or Python-urllib/2.1 which provide no information on the actual software responsible for making the requests. Some sites (e.g. Google, Wikipedia) block access to such generic user-agents. We have not done that yet but may consider doing so. It is generally quite easy to set a custom User-Agent with most HTTP software libraries; see for example How to change the User-Agent of Python's urllib.

We are interested in feedback from the community on what else we can do to address the issue of this excessive traffic. Specifically:

- Do we need to make our specifications clearer in terms of HTTP caching and best practices for software developers?
- Do you have any examples of specific applications that do things right/wrong by default, or pointers to documentation on how to enable caching in software packages that might be affecting us?

- What do other medium/large sites do to detect and prevent abuse? We are not alone in receiving excessive schema and namespace requests; take for example the stir when the DTD for RSS 0.91 disappeared. For other types of excessive traffic, we have looked at software to help block or rate-limit requests, e.g. mod_cband, mod_security, Fail2ban. Some of the community efforts in identifying abusive traffic are too aggressive for our needs. What do you use, and how do you use it?

- Should we just ignore the issue and serve all these requests? What if we start receiving 10 billion DTD requests/day instead of 100 million?

Authors: Gerald Oskoboiny and Ted Guild

I definitely like the idea of defining which DTD to use through something other than a straight HTTP URL. Another possible idea is that of decentralizing where the DTDs are actually stored, coupled with the above idea. A doctype declaration could then just be the name of the actual type of document. The browser (and supporting libraries) could keep track of "DTD servers", which could possibly be set up by anyone. This solves both problems, both of having a URI for the doctype and of eliminating the strain of rogue applications who suck up W3's bandwidth.

Instead of rejecting the requests with 503, it might be a much better strategy to serve them, but very very slowly. Failures and error codes are routinely ignored in code, but things which basically work will happen, and if things are suddenly incredibly slow, people tend to notice. Drip out the response at around 10 chars every other second, fast enough to keep the TCP connection open and the application running, but slow enough to totally wreck any hope of response time in the other end. One level nastier, and in general one level too nasty, is to return buggy contents for repeat offenders, hoping to make their applications fail with interesting diagnostics.
One level nastier, and in general one level too nasty, is to return buggy contents for repeat offenders, hoping to make their applications fail with interesting diagnostics. Poul-Henning (Who has far more experience with this problem in NTP context than he ever wanted) One current problem is that all the DTDs are under which makes it difficult to distribute them around the Internet. Perhaps if new DTDs were placed at then DNS could be used to return the nearest server to the requesting agent – a technique used on a number of other sites of course. In the short term, there may be little effect as most documents have in them, but over time people would start using the new hostname instead, and of course newer DTDs would not be available anywhere else. You guys are completely right about this – the number of badly behaved robots, spiders and other tools around the internet these days is starting to get silly. We block a few of the generic ones like libwww-perl and some others, and yet they keep comming back for more. We had another spider looking for RSS feeds that didn’t exist going round & round in circles and eventually putting enough load on our server for it to send me an alert at 4am so I had to get up and block it’s IP range (that’s another annoying trend – spiders over massive and disparate IP address ranges that won’t go way.. *glares at Slurp*). These aren’t exactly the same thing, but they are kinda of the same ilk – badly behaved robots doing what they shouldn’t. There’s one more thing that drives me nuts that’s kinda part of this family of annoyances – robots that pick out URLs that don’t exist and aren’t linked from anywhere. The number of 404 requests we get from robots is insane. What I really wish is that robot developers would use the referer header so we can figure out where they got the URL from. 
A little off topic I grant you but it’s kinda the same thing at the same time, it’s time robots were written properly so they don’t hit servers in a way that seems like an attack – then they might not be treated as such all the time?. Please accept my humble observations. While attempting to learn about W3 and the evolving language of the hyperweb, I had certain misconceptions that must be common. I thought you were in the business of providing certain basic reference schemas that were critical to writing well formed web pages int the time of XML and xhtml. My first few compliant web pages were cut and pasted versions of example pages from W3. Only now after reading about yor problem in /. do I even have a clue that it is not so. I see two ways out of this. The first is education. You must make it clear how important schemas need to live somewhere else, keeping in mind that existing ones may have to last decades for the sake of legacy documents. The other possibility is that you accept responsibility for hosting these keystone documents, but push them out onto akamai servers or some other way of not being the bottleneck of the hyperverse. Although this is a terrible hack, your most common existing DTDs could be cached in the browser’s themselves, not unlike the root certificates. It is a compliment that peope think you are the center of the Document universe, if only you can survive it. Thank you very much for your time. Doug Wholeheartedly agree with this. I considered downloading a copy of the schemas I’d wanted and storing them in the “resources” directory of the website I was currently building. This way I could change “” to “” but unfortunately, I received a 503. Go figure. Downloading the DTD may or may not be a working idea in practice – it is certainly not recommended, since the purpose of the URI, as the text above clearly says, is *not* to give a place to download some file, it is to uniquely identify the DTD. 
This becomes especially important once you start combining documents using different (or maybe not so different) DTDs, but even for single files, it is the best chance an application has to check whether it can handle this file.

What about getting the ISPs and major internet backbones involved? For example, an ISP could scan the URL traffic and return a locally cached version of the DTD. Also, what if DTD requests were handled more like DNS requests, where you have a handful of root servers on the backbones and each ISP has one or more DTD servers with cached copies of the DTD schemas? The various DTD servers would then only update their caches when the TTL settings of the DTD schema expire.

That would require deep packet inspection (or even transparent proxies), since the ISP usually does not see the URL. I'd rather my ISP didn't do that to my traffic.

I think that with a problem as massive as this, defeating it will require working from several different angles at once. For starters, I'd try working with the people who make the libraries that get used by the offending apps to try and convince them to change their APIs. If the python urllib API changed so that you had to use a custom user agent to initialize the library, it'd be easier to track down the rogue apps. (Or at least print a deprecation warning to stderr if you don't give a custom User-Agent string.) Once you find the bad apples, try to get in touch with the developers. Ask them what materials they learned from. Was there a specific online tutorial that taught them to do things badly? If so, try and submit some revised text, so that a second printing will lead fewer astray, or a tutorial webpage can update its offerings. Maybe try to dedicate a small group who spend at least one hour every week on giving the problem publicity.
Posting a blog about the problem, submitting an article to slashdot, writing a tutorial on the right way to do things, actively searching out tutorials that somebody might use if they were learning how to write an app that could have these problems, and asking the tutorial to include some discussion of the issue, etc. And, others have suggested making things painfully slow when abusive behavior is encountered. I agree with that. Break some apps, force the issue. I'd hate you if you broke one of my apps, but it still is probably the correct thing to do.

I think you should insist to library providers that a catalog is configured in the default settings. I remember the last time I had to set one up in Java; it was complex and time consuming. I would find it better if a standard catalog was configured by default and an empty catalog could be used only when specifically required. Most developers don't have the time to play with these things.

Tarpitting the response sounds like a good idea, but it would require the capacity to sustain hundreds of times more connections than the servers do presently. That's not free, either. Rate limiting the requests from individual IPs is more appealing: it would just require some decent packet filtering software in front of the servers, still not free but not a significant cost. This will reduce the bandwidth you're paying for, but it probably won't reduce the connection attempts significantly.
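The per-IP rate limiting discussed above is commonly implemented as a token bucket. Here is an illustrative sketch (my own, not anything W3C is known to have deployed); the clock is injectable so the behavior is deterministic:

```python
import time

class TokenBucket:
    """Allow `rate` requests per second per client, with bursts up to `capacity`."""

    def __init__(self, rate, capacity, clock=time.monotonic):
        self.rate = rate
        self.capacity = capacity
        self.clock = clock
        self.tokens = capacity
        self.last = clock()

    def allow(self):
        now = self.clock()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # candidate for a 503, a delay, or a tarpit

fake_now = [0.0]
bucket = TokenBucket(rate=1.0, capacity=2.0, clock=lambda: fake_now[0])
print([bucket.allow() for _ in range(3)])  # -> [True, True, False]
fake_now[0] = 1.0  # one second later, one token has refilled
print(bucket.allow())  # -> True
```

A front-end filter would keep one bucket per source IP; the comment's point about cost still stands, since that state has to live somewhere.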
Broken software that doesn’t cache results probably won’t follow the redirect (try it for a few hours to see). Humans that don’t follow the w3c news, but do care, will check in a regular browser and see what’s going on and update their software. If broken software follows the redirect, send them a special error page with instructions on how to clear their src address from the blacklist. Even if all of the above fails, you’ll still have gained by moving to a new host part of the URI — as mentioned before, you can distribute the load to other hosts in different places, and you can put them behind slow connections without affecting. 1. dtd.w3.org (or any three-character nodename to replace “www”) is important because the source to some of the software could be lost or otherwise unavailable. Replacing “www” with “dtd” in a binary editor is simple, but changing it to a string of a different length might not be. Obviously don’t change the request path portions of the URIs either. 🙂 PS: Preview appears to be broken — any changes I made, and then re-previewed, were lost in both the preview display and the textarea. Hopefully the edits will make it through when I ‘Send’ for real. If they do, this comment will be visible. I think the main source of the problem is that the XML spec (section 5.1) implies that a validating parser without any persistent state is supposed to do this, and so they mostly do. For example, the default Java SAX handler, if given an XHTML document, will fetch all of the DTD parts, extremely slowly. Of course, it’ll use a User-Agent like Java/1.6.0, because the application author probably doesn’t realize that the application will be making any network connections at all. Yes this is a huge problem. The applications that do these requests slow themselves down and the whole network. There is no need. For Java apps, see “Apache XML Commons resolver” which developed from Norm Walsh’s work. 
I think you have to combine solutions Poul-Henning Kamp and Ben Strawson wrote. I think the good solution is to use DNS load-balancing with a separated domain (e.g. schema.w3c.org or dtd.w3c.org) and find partners who can operate reliable mirrors (I think you can find easily such partners). To solve current problem you should permanently redirect (http 301) requests to the separated domain. You can slow down serving requests for biggest crawlers. Beside these solutions maybe you have to contact with major xml library vendors to ask them to disable validation or enable caching dtds by default and write best practices about validation in their documentation. I can’t understand why somebody use validation this way, it’s the slowest I can imagine. 🙂 Best regards, ivan I have to agree with a delay. After a user requests more than 100 in an hour, put them on a 2 second delay; after 1,000 a 10 second delay. That may alert them to the fact that something is wrong. And for those addresses that hit the 10 second delay and don’t seem to notice or slow their requests after 3 months, cut ’em off after the first 100 requests in an hour. Personally, I am also curious about all of the new developer tools for firefox, ie, etc. that perform validation on every page. I hope these are using caching mechanisms, but given the ease with which they can be implemented, they could quickly set up a distributed DOS. It’s good that this blog post is getting attention – I’m sure many of the developers of various libraries are starting to take notice. IMO, that’s the biggest problem right there – libraries, their documentation, and their default behavior. It’s a hard sell for a library developer to include a disk based cache mechanism as part of the parsing library when something like that really should be the responsibility of the program making use of the library. If every XML parsing library were to include file io as part of the package, that would be overkill. 
Certainly publishing tutorials about caching for performance reasons, including sample code, would be a good thing. Also, memory-based caching should really be included by default for those libraries that can. I think the best way to resolve the problem for the W3C would be to create some tutorials for using some of the more common parsing libraries that exhibit undesired behavior by default, and as part of those tutorials show performance benchmarks. As those kinds of things start hitting the top pages of Google searches, developers will take notice and start to build better solutions. I think short-term fixes like 503 or even 404 errors will end up doing very little to resolve your issues long term, and probably will have very little effect short term as well. I also think that setting up a subdomain for DTDs is a great idea (dtd.w3.org). It at least opens up some potential solutions down the road (geo-responsive DNS, etc.).

In some cases, like the "Java/1.6.0" user agent, it may be a Java applet or application using the standard Java library. Changing the user agent string here may not always be possible, and blocking it may cause harm to a lot of applications on the net. One way that was suggested is to limit the throughput to the resource, and that may be fine, but it should be done dynamically so as not to introduce problems for "normal" applications. To make things worse, in some cases a lot of requests may appear to come from a single IP address, but that address may be a NAT firewall for a large company. Of course, such companies should have a web cache. A limitation in throughput may not even be easy for the application developer to track down, since the delay may be masked in a library and the developer may end up hunting for problems in every place other than the DTD link. The use of a separate DTD serving address may actually be a good idea, since it will allow for a distributed load.
The downside is that it will take some time before it becomes effective and that it will require some resources. The good side is that the amount of data served is very small and also very predictable, so it's not very problematic to set up such a server.

Hi Folks, Sorry, but I think the W3C is at fault here. If a namespace should not be retrievable, then DON'T use an http URL to identify it. If you want to foster or require the use of caching, define some kind of optional cache identifier to define a namespace. In general, I think the identity of a resource being retrievable is a good thing. It promotes the idea that the "definition" of an identified thing can be discovered by retrieving it. That makes sense; the only disconnect here is that we are talking about a "well-known" thing like HTML, where the logic breaks down. However, for not-very-well-known tag sets, this makes a lot of sense. So, I recommend you make an alternative syntax to specify that a local cached copy exists (or must exist). Then you can switch the default to that "cached copy URI". If such a caching URI exists, my apologies for not researching it before posting. Best wishes, – Mike

The standards encourage this behavior. The HTML 4.01 spec says, "The URI in each document type declaration allows user agents to download the DTD and any entity sets that are needed." And the XHTML spec says, "If the user agent claims to be a validating user agent, it must also validate documents against their referenced DTDs". General-purpose SGML or XML parsers will not embed the HTML DTDs, and even if they have the luxury of a cache, it isn't likely to persist across process invocations. There is obviously a software component to this problem, and developers need to be aware. But as you point out, the problem is not limited to the W3C. The best way forward will be to improve infrastructure, and in particular, to find sustainable caching strategies.
I would like to think that the Scalability TAG will come up with solutions, but the emails are not encouraging.

Track the IP to a person (use a court order if necessary) and find out what the offending software is. Then sue them for abuse. One case like that and you would get much more publicity than via Slashdot. Track the IP to an ISP and ask them to install a transparent proxy for your site, or to contact their user and tell them to configure one. Convince Sun that the next update of Java 6 (and Apache Commons) should install a local cache. Same for Python's urllib2 and Perl's libwww. I like the idea of slowing down offensive connections, but since that may be hard at the server level, you can just return a wrong DTD. Make it have valid syntax, so the DTD parser would not fail, but contain no real elements. If that fails, use a completely invalid DTD.

If we ignore the intentional "dual purpose" of the URLs concerned (that they should be used as a unique identifier, yet can also be used to consult the DTD), probably the biggest reason why the URLs are getting so many hits is that many parsing toolkits have bad defaults: for many implementations of APIs like SAX, you have to go out of your way to override various "resolver" classes in order for your application not to go onto the network and retrieve stuff. So it's quite possible that most people don't even know that their code is causing network requests until their application seems to be freezing for an unforeseen reason, which they then discover to be related to slow network communications. My first experience with excessive network usage probably arose with various Java libraries, but it's true that Python's standard library has similar mechanisms, and if you look at tools like xsltproc there are actually command switches called --nonet and --novalid, implying that you'll probably fetch DTDs with that software unless you start using these switches. Who is responsible?
Well, I don't think you can put too much blame on the application authors. If using XML technologies has to involve a thorough perusal of the manual in order to know which switches, set in the "on" position by default, have to be turned off in order for the application to behave nicely, then the people writing the manual have to review their own decisions. Some clearer language in various specifications would help, rather than having to read around the committee hedging their bets on most of the issues the whole time.

Thanks to everyone for your comments. I'll try to reply in more detail later; just a quick followup for now: the tarpitting idea sounds worth trying. Does anyone have specific software to recommend that is able to keep tens of thousands of concurrent connections open on a typical cheap Linux box?

As Arman Sharif says, you would think that the W3C would make it easy to download the schema files they don't want you to access directly. But it's my experience that the W3C does not offer this properly. I actually wrote some software that reads XHTML documents into XML DOMs. As soon as the XML parser encounters an entity reference, the URL will be loaded. So I created a local resolving mechanism with an entity resolver to read the DTDs locally. However:

– I had to go to all the individual specifications and download the individual files there, and create my own full repository (I don't have the source code here, but I am quite sure I ended up with over 50 files for 5 or 6 specs).
– I had to create my own mapping file that goes from public (public ID handling is broken in .NET XML parsers) and system IDs to my local files.
– And then of course I had to implement entity resolving to actually pick up my local files.

Every time a developer implements an application that loads HTML documents using a standard XML parser (a quite common thing, I would say), they need to perform these steps to alleviate stress on the W3C servers.
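The mapping file described in those steps is essentially an OASIS XML catalog. A hypothetical fragment (the local paths are invented) mapping the XHTML 1.0 Strict identifiers to a local copy looks like this; resolvers such as Apache XML Commons can consume it directly:

```xml
<?xml version="1.0"?>
<catalog xmlns="urn:oasis:names:tc:entity:xmlns:xml:catalog">
  <!-- map by public identifier -->
  <public publicId="-//W3C//DTD XHTML 1.0 Strict//EN"
          uri="dtds/xhtml1-strict.dtd"/>
  <!-- and by system identifier, for parsers that only pass the URL -->
  <system systemId="http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"
          uri="dtds/xhtml1-strict.dtd"/>
</catalog>
```

The local DTD copy must still be obtained once, along with the entity files it references (xhtml-lat1.ent and friends), which is exactly the assembly problem complained about below.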
What I actually naively expected this article (found from Slashdot) to contain when I opened it was a link to an archive with the files for all your stable specifications in one place, with an id->path mapping, and some sample resolver code for common parser libraries in various languages. Does it exist? (And caching after one request is not usable for many situations, since my reason for caching was actually not to lessen the W3C's Internet bill but to allow the application to run without Internet access.)

130 million requests a day is about 1500 per second. On a few static files. A 3-year-old laptop running lighttpd can manage that easily (and actually a lot more). On my server, which is the cheapest Core 2, lighttpd handles 200 req/s and uses a few percent CPU. Just use lighttpd or nginx (obviously you should forget about Apache!). Now here is my suggestion to get rid of the spamming. You need two servers, a main server and a backup server. Both are machines suitable for serving a few thousand static requests per second, i.e. the cheapest Core 2 boxes you can get. You will need to adjust the allowed number of sockets on both, of course, to allow as many concurrent connections as you can. Don't forget to enable zlib compression. Now you implement some redirections:

– When the main site sees a request for a DTD, it redirects to dtd.w3.org, which runs on the "main" box.
– The "main" box has a good connection (like 100 Mbit/s).
– When the "main" box receives an HTTP request, it looks up the client IP address. If this address has submitted few requests, it serves the request. However, if this IP has submitted, say, more than 5 requests in a period of a few hours, it redirects it to the "backup" server, which is dtd2.w3.org, on a different IP. To implement this you will need to code a simple C module for lighttpd.
– The "backup" server is connected to the Internet via a completely separate, rather slow (10 Mbit/s) connection. It just serves static files.
So, the "main" server will always be fast and responsive, and the "backup" server will always have its connection horribly saturated. Therefore, any client will get a fast response on the first request from the "main" server. Well-behaving clients will cache it, and it ends there. Badly behaving clients will not cache it and will request again; they will get redirected to the "backup" server and feel the pain.

Thank you all for the comments. Scaling is not the long-term solution, as it does not address the cause; however, it is something we will have to continue to do, and we appreciate suggestions made in this area. By making this post we are trying to increase awareness so that ideally this gets resolved as far upstream as possible, at the library level, since that will have the broadest effect. Community involvement with their respective development platforms of choice will help, as we have had mixed success in identifying and contacting software and library maintainers. Some have been very responsive and are implementing caching catalogs or making static catalogs a default instead of an afterthought left to those installing and utilizing the library. Some developers have noticed our blocking scheme and have contacted us, letting us know they have taken corrective steps on their own.

For those who wondered why these schemata and namespace resources are made available via HTTP to begin with: we intend for them to be dereferenced as needed, but expected this to be more reasonable given the caching directives in the HTTP spec. The performance cost of going over the network repeatedly for the same thing should be reason enough for developers to cache. Since many of these systems ignore response codes, a tarpit solution might very well succeed in gaining their attention, plus it has some entertainment value. If their application performance suffers substantially enough, developers may take notice.
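The per-IP throttling in the two-server scheme above need not require a custom lighttpd C module; as a hedged alternative sketch, nginx's stock `limit_req` module can express roughly the same policy (the hostname, paths, and rates here are illustrative only):

```nginx
# Illustrative only: track clients by address, allow ~5 requests/minute.
limit_req_zone $binary_remote_addr zone=dtd:16m rate=5r/m;

server {
    listen 80;
    server_name dtd.w3.org;        # hypothetical DTD-only host

    location / {
        limit_req zone=dtd burst=10;   # small bursts OK, the rest are delayed
        root /var/www/dtd;             # static DTDs and entity files
        gzip on;
    }
}
```

This delays or rejects excess requests on one host rather than redirecting them to a saturated second host, but the effect on a badly behaving client is similar.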
As mentioned, many of these systems only understand a small handful of the various HTTP responses (200 OK, 302 Found, 401 Unauthorized, 403 Forbidden, 404 Not Found). We are more than slightly curious how the browser plugins would handle HTTP 401 "Authorization Required" responses to their requests; in one case, for instance, we suspect a particular large-scale ISP's webmail plugin is a traffic culprit. Inside the realm part of the WWW-Authenticate header it would be quite tempting to give the technical support phone number of the ISP, which has not listened to our repeated phone calls and emails on the subject. That would likely get their attention and potentially encourage them to correct the plugin.

Identifying the sources of W3C's abusive DTD traffic can be quite time-consuming and difficult, depending on the data in the HTTP request. One rather odd case we see often has the HTTP Referer as "" and we have so far not found the related software. For identifying some, we have found resources provided by various organizations (e.g. McAfee SiteAdvisor) that catalog browser plugins, software network interactions, and viruses quite helpful. We would very much like to collaborate with such organizations or similar community efforts to help us identify more software responsible for this traffic. We have made a couple of efforts to establish contacts within such organizations, but unfortunately emails and phone calls have not gone very far. Specific suggestions or contacts for us to follow up with would be appreciated.

Martin Nicholls, I cannot agree more with your sentiment towards poorly behaving bots/crawlers; they are getting out of hand. There has been talk around here at W3C and elsewhere about starting an activity for directives governing bot interactions with a website. There have been some scattered conventions which should be standardized and improved upon.
For instance, polite bots could:

– identify themselves properly, including a URI with various information: how to submit complaints, and a means to authenticate the crawler and the IP range it is coming from
– respect directives regarding frequency, concurrency, etc. given in the response headers of the site being crawled
– make use of advertised peak hours with higher thresholds if the crawler would like to schedule its return
– use data the server being crawled makes available on resources and last-modified dates, so more intelligent, minimal crawls can be made, saving resources on both sides

Those bots that do not abide by these conventions and overstep the boundaries spelled out can be spotted and blocked through automated means. Those that do could do their indexing in as efficient a manner as is comfortable for the website being crawled.

Why not change the doctype tag to something like:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "dtd://TR/xhtml1/DTD/xhtml1-strict.dtd">

Then it's known where the DTD is located in case it's actually needed, but it would only be used by applications and libraries that probably need it, since it would require special handling instead of blind handling. Slowing traffic for the http link wouldn't be great, because on a few occasions I've downloaded the DTDs for learning the document types (you do want valid HTML instead of what most tutorials provide, right?). Still unique, still locatable, and hard to misinterpret.

There is some good SMTP tarpit software in OpenBSD. It isn't hard to change the BSD-specific C calls to Linux-based calls using the same functions from rsync. I guess it would need to be modified for HTTP as well, however.

Serving requests very, very slowly will unduly penalize the innocent bystanders. Many of us have no idea where to look when an application has slow response. Our systems are running software from dozens of vendors, and we will have no idea which of the vendors is running slowly. So we will just suffer.
Or, we will decide our computer is old and needs to be replaced with a faster one: one which can hit your website even more frequently.

I've had similar problems with dumb crawlers that couldn't handle escaped '&' entities in URLs and would bombard the server with invalid requests for them. So I have sympathy. However, I don't know if the tarpit solution is a good idea. What's John Q. Public, who's running some misbehaving software, going to think? "Oh, this must be what they mean when they say XML is slow. This problem never happened before I put the DOCTYPE on all my files. That guy who pushed us to adopt XHTML is a moron." Fix the problem going forward by changing the scheme for identifying DTDs, but think carefully before spreading the pain just to save the W3C some inconvenience.

I would like to point out that, contrary to what Ted says, the system identifier *is* a downloadable resource, not just an identification (unlike the namespace URI, which is mere identification). Specifically, XML 1.0, section 4.4.3, says that a parser MUST "include" references to parsed entities if it is validating (so it is not an option to skip reading the DTD), and it MAY do so even if non-validating. In particular, reading the DTD is necessary even in non-validating mode in case the document contains entity references. Of course, to read the DTD, one might be able to use an alternate URL based on the public identifier. Unfortunately, catalogs are not in widespread use, and the W3C does nothing to promote them.

Martin, Where did I say system IDs are not downloadable resources? This post is about the frequency of the downloads, disregarding HTTP caching directives.

Ted, I think Martin was referring to the following excerpt, which does sound like "these URIs are identifiers, not for download": Note that these are not hyperlinks; these URIs are used for identification. This is a machine-readable way to say "this is HTML".
In particular, software does not usually need to fetch these resources, and certainly does not need to fetch the same one over and over.

As many have pointed out, the data downloaded is needed, so it'd be great if the W3C could provide basic catalogs/suggestions to be used as the sane default. You have to keep in mind that caching is a great solution for many given scenarios, but low-level tools/libraries cannot be expected to assume caching/catalogs are THE right thing to do when the spec includes checking against what the URI points to. You saying that developers were supposed to implement caching due to their own performance concerns makes me wonder: what if most already do that? What will change when they do? What if most hits you get are from software that scrapes "http://[…]" from data and follows that? What if library per-process/thread caching is already there but the system forks for each URL? How about distributing batches of URLs to visit? So IMO the W3C has a chance of simply postponing the issue if no steps are taken towards providing local, reliable catalogs to the community and changing the recommended http:// URIs to something else (like the dtd:// above).

Daniel, A misunderstanding then; my apologies for being ambiguous. We went with that wording to avoid going into the differences between DTDs and namespaces, which parsers have no need to dereference, as there may not be anything of use to them there, as is the case with xmlns="". DTDs are meant to be downloaded for machine processing, but reasonably, not incessantly by an application running on a machine. We are seeing XML processors grab these even when they are not using them. Making catalogs available has come up before, and we certainly will consider it. Catalogs still would need to make their way into the various tools and libraries, many of which do not come with any.
We are just one of the many organizations and individuals making these sorts of resources (namespaces, DTDs) available, so tool and library developers will still have to collect them. It is difficult for tool and library developers to know what markup will run through their utilities for validations or transformations, not to mention that new schemata are always being created. Because of this, the best solution is a caching XML Catalog resolver, which as I understand is part of GlassFish. The library will add DTDs to its cache as it needs them; caching is part of the HTTP protocol.

Ted, Thanks a lot for the discussion. I like the caching idea, but believe distributing the load extends it. The general issue with caching is related to where to cache. Local (or in-memory, per-process, etc.) caches will be less efficient than shared ones. Shared (system, library) caches will have their own load of issues. So you might end up with much nicer libraries and still be hammered by requests. Notice that scaling up and mirroring amounts to an extremely-shared cache. I believe having the machinery for mirroring in place (checksums, compressed snapshots, change notifications) could lead to lower-level mirroring (dtd-daemon, anyone?). The benefits would be:

– Network admins could save resources that lazy programmers forgot to (and legacy code would automagically stop being so nasty).
– Users could get performance boosts by installing software that tricks dumb apps into fetching DTDs from a local cache, regardless of upstream actions.
– Library developers (and even dumb programmers) would have a Darn Easy® recommended route to caching, as local-ish mirroring and checksums would be discussed all over the place (and faster, cheaper, tastier).

Also, maybe you should talk to the Coral Content Distribution Network regarding forwarding traffic. It might be interesting for them to have such a huge source of input to their research.
On a meta note, I think it could be very useful to have a central location (wiki?) to gather resources and discussions on this issue.

One of the arguments against having caching resolvers in XML libraries has been that this is attainable outside of the library, which it certainly is, with a caching proxy server for instance. It is a very worthwhile solution and why we give the caching directives in the first place. We have seen a number of corporate and large ISP HTTP proxies hammering us because of some XML application[s] running behind them. Sometimes the network admins would, if they were responsive to us at all, add caching to their proxy setup or, less often, track down the parties responsible for the software causing the traffic. More often they would refuse to add caching to their proxy or take any other action, citing cost or complexity. Bandwidth is cheaper than equipment and admin time, I guess.

It is strange (and probably an indication of the lack of XML knowledge of many posters here) that no one mentioned the best solution on the application side: catalogs. They have been part of SGML and XML for many years, so there is good support for them. Any XML parser should support catalogs, and then the DTD would be retrieved from the local disk and not over the network. (Of course, there are always broken programs and sites where catalogs will not be installed, so there is still a need for other measures.)

A few people mentioned testing whether the file has changed or not. The HTTP protocol has an If-Modified-Since header for precisely this purpose, and the W3C's server honors it.
You can set it, for instance, with curl:

% curl -v --time-cond 20080201 -o html.dtd …
< HTTP/1.0 304 Not Modified
[No download]

% curl -v --time-cond 20000201 -o html.dtd …
< HTTP/1.0 200 OK
[And the file is actually downloaded]

Of course, this requires a program that sends the header and has local storage to keep the DTD, but recommending this technique may help (among other techniques like HTTP caching, XML catalogs, terminating the offenders, etc.).

Ted, Adding to what Daniel already said: I just happen to be stacking together a new flavor of modular XHTML in the spirit of XHTML+RDFa for the backend of a new website I'm working on. I'm using libxml on Mac OS X via MacPorts. MacPorts has a package with HTML4 DTDs, but not XHTML, and it does not supply a catalog with the DTDs. I'll accept that I'll have to add a new entry to the catalog, but I'll still have to get the DTDs in the first place. I have three options at this point: download each DTD (module) manually through my web browser at /MarkUp/DTD, let wget crawl the DTD directory, or download Debian's w3c-dtd-xhtml package and rip the files from that package. Hardly convenient. I assume that a lot of developers will even ignore the speed problems whilst getting their new apps to work. I think it would really help if the W3C would package its DTDs in a tar.gz, and perhaps even proactively work with package maintainers to distribute these files. Obviously, this will not be a quick fix to your bandwidth problems, but I think it does address the core of the problem: too many developers are not aware of the inner workings of XML validation (or validating parsing) and assume it 'just works'. My 2 cents.

Well, this could be an example of rogue crawlers, bots, and spiders having an effect on the website: large numbers of crawlers that take on pages and catch everything that seems like a link, cannot analyse the HTML, and fail to ignore the links in the DOCTYPE and xmlns.
I agree we can't just depend on the W3C to persistently follow up, as everyone has the responsibility to help.

Can anyone comment on the combination of IE7, DocBook-generated XHTML, and the DTD that's embedded in such XHTML? We've generated a bunch of man pages built using DocBook 4.3 (see the link at top) which are now all failing in IE because (as best we can figure out) w3.org is rejecting requests from the IE user agent. While I sympathize with the bandwidth concerns discussed above, the question is what can we do about it? We're not the source of the offending app, Microsoft is, but we bear the consequences. Even if MS were to turn around a caching patch quickly, it probably wouldn't get widely deployed for years, and I imagine the W3C admins will not lift the IE ban until then. All I can think of to do on our end is to locally cache the DTD (and the entity files it references; IE also tries to fetch those) on our server and patch all of the documents to refer to those. BTW, I don't see any reason this isn't affecting the combination of IE with every DocBook-generated XHTML doc in the world, if they're built using the standard stylesheet distribution. Are there any other options within our control that don't require cooperation from the W3C or Microsoft?

Jon, So I notice in these man pages the main frame is already XHTML markup but served with the HTTP header Content-Type: application/xhtml+xml. If you serve this [X]HTML markup as HTML instead of XML, MSIE will not call its XML processor, which in turn tries to dereference the DTD from us. Try serving it with an .html extension and/or Content-Type: text/html and your problem should be resolved.

Hi Ted, If we serve our content as HTML instead of XML, as you suggest, then IE will not invoke the MathPlayer plugin to render MathML content and the pages aren't rendered correctly. Getting MathML displayed properly in the man pages is pretty much the whole point of this exercise, so I don't think that will work for us.
We've modified the man pages to refer to a local cache of the DTD, and that seems to work well enough.

Sure, they're ignoring the response status, but I'll betcha most of them are doing synchronous requests. If I were solving this problem for the W3C, I'd be delaying the abusers by 5 or 6 *minutes*. Maybe respond to the first request from a given IP/user agent with no or little delay, but each subsequent request within a certain timeframe incurs triple the previous delay, or the throughput gets progressively throttled down until you're drooling it out at 150 bps. That would render the really abusive applications immediately unusable, and with any luck, the hordes of angry customers would get the vendors to fix their broken software.

Microsoft has a blog article on how to more efficiently invoke MSXML in your applications.

It's not a complete or ideal solution, but have you considered in-place editing of the relevant DTDs to make them smaller while maintaining their semantics? It's unpleasant, but Process allows for it. Take for instance. It is 25kb. Instead, replace it with a DTD that only contains:

<!ENTITY % x SYSTEM "">%x

That's 56 bytes, 455 times smaller than the raw version you have to serve to those stupid libraries that often don't send Accept-Encoding: gzip (even if they support it), and still 120 times smaller than the gzip version. Now this assumes that an important subset of the requests that are made don't actually do anything useful with the content and so don't make a second request for the actual content. I suspect it's worth a shot, or at least worth testing. It has the additional advantage that using a different DNS name means that you might be able to use load distribution tricks not available to you for the general website. Anyway, just a thought!

Please tell the people from Saxon not to reload the xhtml.dtd every time you open an Internet document with the XPath document() function.

The source of the problem is that the URI actually exists.
If the URI did not exist, then everything would be forced to implement local caches of the required files and there would be little sustained traffic to w3.org. If the URI is really a name and not a resource, then it should not match a resource; e.g. xhtml1-strict could have been used instead of including the http://, and then this kind of issue would have been forced to resolve itself earlier in everyone's development and test cycles.

In case anyone experiences this issue when using Java Facelets(!), look at this bug report:

We are now seeing such extreme surges in traffic that our automatic and manual methods simply cannot keep up. Increases in serving capacity are readily consumed by this traffic, and our site becomes overwhelmed. As such, we are taking some more drastic temporary measures, which we hope to be able to back down shortly. We are sorry for the impact this is causing the community. We continue experimenting with various methods, including some of those suggested by posters here. If you are impacted, file a bug report with the developers of the library or utility you use, asking them to implement a [caching] catalog solution. You may also put a caching proxy in front of your application for an immediate remedy, populating the cache with a user agent we are not blocking DTD access to. Java-based applications and libraries are presently accounting for nearly a quarter of our DTD traffic (in the hundreds of millions a day). There is also another, more substantial source of traffic which the vendor is working to correct, hopefully in the near future.

To ensure that Saxon doesn't hit the W3C web site when transforming XHTML content, see:
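The escalating-delay tarpit proposed a few comments back (no delay on the first request, then triple the previous delay on each subsequent one) reduces to a few lines of bookkeeping. A hypothetical sketch (names invented; a real deployment would also need to expire entries after the timeframe passes):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical tarpit bookkeeping: the first request from an address is
// free, each subsequent request waits three times as long as the previous.
public class EscalatingDelay {
    private static final long FIRST_DELAY_MS = 1000;
    private final Map<String, Long> lastDelay = new HashMap<>();

    /** Milliseconds to stall before serving this client. */
    public long nextDelay(String clientIp) {
        return lastDelay.merge(clientIp, 0L,
                (prev, zero) -> prev == 0 ? FIRST_DELAY_MS : prev * 3);
    }
}
```

Because misbehaving clients mostly make synchronous requests, the growing stall lands directly on the offending application rather than on the server.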
"DSS/1.0 Java/1.6.0_13". I believe this is the correct format and intention for User-Agent, indicating the primary system and version followed by any subsystem. You're still denying this request. Are you searching for the Java identifier *anywhere* in the string? That precludes any Java-based system (at least, ones not controlling the headers all the way down, i.e. using most any libraries) from behaving properly and working. Caveat: my understanding is limited.

For those that (like me) run into this when using the Ant (Java build tool) xslt task: take a look at the xslt task manual and the section on xmlcatalog. That allowed me to keep the DTD files locally and use them from there. I was transforming XHTML, so that also required downloading several entity files as well.

Dan, Changing the user-agent is commendable, especially if you post it somewhere it can be indexed so people can contact you if there is an issue with it. Instead of writing something to maintain your own cache, look to Xerces' XML Catalog capabilities, which I wish were the default instead of an afterthought. For the time being I am also relaxing the filtering based on your suggestion. There is one particular Java UA that prepends a string which is causing 80 million or so hits/day at present. We contacted them after researching the user-agent used.

I use a variety of mostly Java libraries and tools. The authoring tools and IDEs are generally good about using local cached copies of the DTDs. The libraries and tools like Saxon are not. While there are many historical reasons for why we have the implementations and behavior we have today, the best fix is for the library maintainers to enable DTD caching by default. After all, it is the libraries that are fetching the documents in the first place.
Of course, for that to work, the W3 will still have to serve the documents so that they can be cached, but that will likely continue to cause the current problem given the long delay likely to occur between the time the libraries are changed and the time when they largely replace the currently deployed libraries. In the meantime, I am trying to resolve my own problems by implementing the necessary XML catalogs. However, I am now struggling with the problem of assembling all of the DTDs and related documents I need to cache locally. (My difficulty is not an isolated case. See Validating XHTML Basic 1.1 () for one other example.) My first attempt at a catalog simply included the XHTML DTDs, but then Saxon complained it could not find xhtml-lat1.ent. So then I needed to retrieve the referenced entities documents. Then I needed to do the same for each DTD I required. Facing the tedious prospect of pulling down each document individually, I went looking for an archive containing all current DTDs and related documents. After refining several searches and restricting them to the W3 site in the hope of finding an official—or, at least, semi-official—distribution, I finally located the DTD library () made available as part of the Markup Validation Service () distribution. That was too difficult and may not have been the best solution in any case. However, it does highlight the need to make available an official archive or library distribution and to make it clearly available from somewhere on the home page even if it is only listed on a page referenced from a “Downloads” link. If you want to encourage people to use local cached catalogs, help make it easier to assemble the necessary documents. I arrived at the page “” by following the link in the first sentence of section “A.1.2.
XHTML-1.0-Transitional” in the W3C document “XHTML™ 1.0 The Extensible HyperText Markup Language (Second Edition)“, where it states, “The file DTD/xhtml1-transitional.dtd is a normative part of this specification”. The phrase “DTD/xhtml1-transitional.dtd” links through to the page in question. An annotated version of a DTD is available by following the link in the following sentence in the same section, but, contrary to what is stated, this is clearly not an annotated version of the first file. The first file (the “normative” part of the specification) is either the phrase “see” or this page, neither of which, as displayed in my browser window, is well-formed XML (if I understand the W3C XML specification correctly.) Perhaps the W3C would suffer fewer problems of the kind discussed above if it maintained accurate online documentation. I am desperately sorry but I am to blame for some hundreds of those millions of requests. One of the projects I did a while ago used PHP, called “the best tool for the web” by some. And while I am aware of this problem I found no way to disable schema fetching in PHP without messing with PHP core itself. I had neither time to mess with it nor permissions to deploy those fixes. So, perhaps, you also need to contact the authors of PHP XML parsers to persuade them to fix it (because I already gave up). OK, so, how do I parse an XHTML file using only the JDK? This fails with 503: DocumentBuilderFactory.newInstance().newDocumentBuilder().parse (“”) And how do I parse it within less than 10 lines of code? As a developer, all I see is that code that used to work doesn’t work anymore, because the naughty w3c decided to break it. You’re a bad w3c. Yes you are! 🙂 Now, I read the whole thread, I understood it, I understand w3c’s position, but as an end user (or end programmer, whatever), I still have to wonder, why do hundreds of programmers have to implement complicated caching techniques because you didn’t see this coming and didn’t plan in advance?
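One hedged sketch of an answer to the "JDK only, under 10 lines" question — this is a common workaround, not an official recommendation: give the `DocumentBuilder` an `EntityResolver` that returns an empty stream for every external entity, so no request ever leaves the machine. (The trade-off, raised elsewhere in this thread, is that entity definitions and default attributes from the DTD are lost.)

```java
import java.io.ByteArrayInputStream;
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class OfflineXhtmlParse {
    public static Document parse(String xhtml) throws Exception {
        DocumentBuilder db =
            DocumentBuilderFactory.newInstance().newDocumentBuilder();
        // Short-circuit every external entity (DTDs included) with an
        // empty stream so the parser never opens a network connection.
        db.setEntityResolver((publicId, systemId) ->
            new InputSource(new ByteArrayInputStream(new byte[0])));
        return db.parse(new InputSource(new StringReader(xhtml)));
    }

    public static void main(String[] args) throws Exception {
        String doc = "<!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.0 Strict//EN\" "
            + "\"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd\">"
            + "<html><head><title>t</title></head><body/></html>";
        System.out.println(parse(doc).getDocumentElement().getTagName());
    }
}
```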
Just complain to Xerces and Sun, and maybe Operating System manufacturers, and ask Sun and/or Xerces to cache such resources, at the JRE installation level. Or even better, at the operating system level. As for me, I will try to stay as far as possible from w3c “standards” if at all possible. OK. Problem fixed. How: Installed Squid for Windows, from to C:\squid. Copied cachemgr.conf.default, mime.conf.default and squid.conf.default from C:\squid\etc to cachemgr.conf, mime.conf and squid.conf. Modified this line in squid.conf: http_access allow localnet to: http_access allow localhost Run these commands at a command prompt: c:\squid\sbin>.\squid.exe -i c:\squid\sbin>.\squid.exe -z c:\squid\sbin>net start squid Modified my Java Application, adding these lines: System.setProperty("http.proxyHost", "localhost"); System.setProperty("http.proxyPort", "3128"); System.setProperty("http.agent", "Mozilla/5.0 (Windows; U; Win98; en-US; rv:1.7.2) Gecko/20040803"); You can also set the properties from the command line, with java -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128 -Dhttp.agent="Mozilla/5.0 (Windows; U; Win98; en-US; rv:1.7.2) Gecko/20040803" yourClass After you successfully access the DTDs from your application and the cache is populated, you can remove the fake http.agent line, or replace it with something useful. I am just trying to make a simple application do an XPath on a valid HTML document and it automatically pings you for the dtd. Don’t block IPs. Get Sun to fix their lousy library. There is no way that an XPath should create a connection to the internet to fetch a W3C DTD. And they don’t give you any way to disable it. If they just had a nice IOC type framework it would be easy to fix, but they are just not that smart. Go back and reread your specification. The DTD declaration consists of two parts, the public identifier and the system identifier. The system identifier is supposed to point to where the local system can find the DTD.
It may be a bad idea using the w3.org address in all the examples and there may be misbehaving user agents out there, but it is correct behavior for user agents to look up the DTD. The conformance section for XHTML actually uses the w3.org addresses without indicating those can technically change. Learn from that mistake for future specifications. It looks like you’re now blocking attempts by Microsoft’s .Net XmlDocument object. To echo Jon Leech’s comment back in February, we are at the mercy of whatever is going on at Microsoft. Our use case is manipulating an html page using the .Net XML objects. Since the html page contains HTML entities, we need the DOCTYPE reference, but recently that has been generating the 503 error. Does anyone know of a work around? XmlReader and XmlDocument classes in Microsoft .Net Framework try to find the specified url, not only Java. Anyway, I don’t like the DTD identification takes ‘http://…’ form. It’s quite mistakable, isn’t it? Microsoft releases fix for Microsoft XML Core Services (MSXML). Full release information If you have a Windows platform being blocked access to W3C, ensure you have this upgrade installed. I was surprised recently that a valid XML DOCTYPE declaration required a URI in addition to an FPI in a public identifier (unlike SGML which I understand just needs an FPI). I too have been bitten by the fervent attempts of Web software to dereference DTDs (for instance WebKit ostensibly does not have the XHTML+RDFa DTD in its catalogue). I think, however, that it is entirely reasonable to expect to be able to dereference a URL (specifically), especially within a framework that affords the dereferencing of URIs. By my reading, §4.2.2 of the XML spec indeed discusses the dereferencing of URIs in system identifiers: I cannot speak for the HTML equivalent, except to assume it would ideally follow the same rules as SGML in which a URI may be omitted from the public identifier. 
XML Namespaces using HTTP URIs should ideally have something present on the other end at the very least as a courtesy, but a sane XML processor should not attempt to dereference them on sight. This problem was also endemic to RDF before Linked Data gathered steam—there would be HTTP URIs used as identifiers everywhere but relatively few would correspond to live HTTP resources. Do I think vendors of Web software would serve their customers better if they kept DTD catalogues up to date? Indubitably. Do I think they are doing a disservice to high-traffic targets like the W3C by not including their DTDs? Absolutely. Do I think that complying in this manner is the path of least resistance? Unfortunately not. I think if you mint an ostensibly dereferenceable URI, you should expect attempts to dereference it. If you are in the business of dereferencing HTTP URIs, however, you should make an attempt to comply with their cache directives. When using Internet Explorer 7 and browsing to an HTML document on a misbehaving server the following may happen… The server responds with: HTTP/1.0 200 OK … http headers …. Content-Type: text/xml;charset=utf-8 … http headers …. <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" ""> <html lang="nl" xmlns=""> …. the actual html content ….. </html> This is an HTML document encoded as XML and sent with a content-type of text/xml. Internet Explorer 7 interprets this as XML and tries to resolve the URI "".
Now see what happens when IE7 tries to resolve this: GET HTTP/1.1 Accept: */* Referer:.4506.2152; .NET CLR 3.5.30729) … other http headers … And the answer from W3C: HTTP/1.0 503 Service Unavailable Date: Thu, 07 Jan 2010 13:41:41 GMT Server: Apache/2 Content-Location: msie7.asis Vary: negotiate,User-Agent TCN: choice Retry-After: 864000 Cache-Control: max-age=21600 Expires: Thu, 07 Jan 2010 19:41:41 GMT P3P: policyref=”” Content-Length: 28 Content-Type: text/plain … other Http headers … see One solution: Tell Microsoft that this URI should be cached or tell Microsoft to create a catalog of DTDs inside its IE7 and IE8 browser. I think it should be possible using the standard windows updates to distribute a patch that solves a large part of this problem. Please tell Microsoft that they are part of this problem. I am afraid I am guilty of a few unnecessary requests to W3C DTDs. I just realized that a SaxParser in Java downloads the DTD even if the parser is non-validating, which is the default. I don’t think this is very well documented, so I post it here to warn others. This code will create a parser that doesn’t download the DTDs. SAXParserFactory factory = SAXParserFactory.newInstance(); factory.setFeature("", false); SAXParser parser = factory.newSAXParser(); I downloaded the Java JDK from Sun/Oracle website a couple days ago. Included in the JDK is a class called DocumentBuilder which provides access to the SAX xml parser (you see where this is going?). I compiled and ran a Java program which builds an object tree from the xml. The run failed in the SAX parser when processing the following xml: . That caused the parser to try to access this website, getting the 503 error. Can somebody please contact Sun about replacing the SAX parser with one which does not cause the website access? I don’t have the slightest idea who to talk to. And can I get a copy of the SAX package which doesn’t exhibit the problem, as discussed in the previous post by Henrik Solgaard? 
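Henrik's snippet above has the feature name stripped out. For reference — and treating this as Xerces-specific, not part of JAXP proper — the feature URI commonly used with Xerces-based parsers (including the parser bundled with the JDK) to stop external DTD fetching in a non-validating parse is `load-external-dtd`:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.helpers.DefaultHandler;

public class NoDtdDownload {
    public static SAXParser quietParser() throws Exception {
        SAXParserFactory factory = SAXParserFactory.newInstance();
        // Xerces feature: skip fetching external DTDs even when a
        // DOCTYPE names one (works for non-validating parses only).
        factory.setFeature(
            "http://apache.org/xml/features/nonvalidating/load-external-dtd",
            false);
        return factory.newSAXParser();
    }

    public static void main(String[] args) throws Exception {
        String doc = "<!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.0 Strict//EN\" "
            + "\"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd\">"
            + "<html/>";
        // Parses without any network access to w3.org.
        quietParser().parse(new ByteArrayInputStream(doc.getBytes("UTF-8")),
            new DefaultHandler());
        System.out.println("parsed offline");
    }
}
```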
Thanks. I filed a bug report with Oracle (sun) on this. I included the URL for this blog. Hopefully they will react. Workaround: factory.setFeature("", false); IE8 transforms the code of a page if an X-UA-Compatible HTTP header is served together with the page. The transformation has several effects: uppercasing tag names (always), and inserting the <META content="IE=8.0000" http-equiv="X-UA-Compatible"> element (always). And, if the page contains the HTML5 doctype – <!DOCTYPE html>, then IE8 even replaces it with a legacy, non-official HTML4 doctype: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" ""> I followed the URL of that doctype, only to reach a page which said , with a message to contact staff about how I reached that page. Some hunches, based on the fact that Ted said that the misuse of these URIs had only increased since he first posted this article: X-UA-Compatible has increased in popularity; the HTML5 doctype has also become more popular; IE8 has become more popular; Chrome Frame has become more popular. HTML5 forbids non-standard META elements, thus one must use the HTTP header to be valid as HTML5. All the above points to factors that could explain why the abuse has increased, provided that IE8 is involved in this. (Note that even when an X-UA-Compatible header is used to request Chrome Frame, any IE8 without Chrome Frame installed will be affected regardless.) This post seems to be giving some folks the wrong impression. It is not possible to correctly parse most XHTML documents using an XML parser without reading the DTD. You can cache the DTD so you don’t read it more than once. You can read a local copy instead. You can point the DOCTYPE to a different copy of the DTD. But if you leave out the DTD completely and use a generic XML parser, you will lose information such as default attribute values (including namespace declarations) and entity definitions. This is arguably a design flaw in XML, but it is one we have to deal with.
I can’t believe it, but Dreamweaver CS5 is causing this error. Here’s the error text: An exception occurred! Type:NetAccessorException, Message:Could not open file: So DW CS5 attempted to open this “URL”. Amazing that some developer at Adobe was so clueless (and I have a lot of respect for the quality of development that goes on at Adobe). I just found this thread. I read many of the comments but not all. Apologies if this duplicates what someone else has said. As some commenters have said the XML spec encourages validating parsers to download the DTD from the location given, and a document might not be identified correctly if the original DTD is not given with the original URI. Of course software can and should cache regularly accessed documents that are unlikely to change. But the other major error most software designers are making is not properly understanding or using the standalone declaration. If standalone=”yes” the DTD probably doesn’t need to be read in most cases. But the standalone declaration has to be set correctly in the document too. I only found this issue after several months of my XML schema validation working fine. Basically I was validating the schema in .NET and assumed the default settings would be set in an appropriate way. Its unfortunate as I think a lot of developers will get caught out by this the first time they use the validator. If the stick is to slow the servers, how about a carrot to go with it? DTDs easier to find and download in zip files, along with a catalog.xml file. OK, only half the issue, but it’d help. The other half is setting up a catalog resolver to read the catalog file. Ditto David’s comment. Is there a zip file I can get to ‘seed’ an off-line EntityResolver I am building? Something in the form of an XML Catalog would be great, but just a .zip of the actual dtd files from w3.org would be useful too. Various commenters ask for a ZIP of DTDs. 
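For the commenters asking for a catalog.xml plus local DTDs: since Java 9, the JDK itself ships an OASIS XML Catalog API (`javax.xml.catalog`) that covers exactly this setup. A minimal self-contained sketch — the catalog and DTD contents here are stand-ins written to a temp directory, not real W3C files:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import javax.xml.catalog.CatalogFeatures;
import javax.xml.catalog.CatalogManager;
import javax.xml.catalog.CatalogResolver;
import org.xml.sax.InputSource;

public class CatalogDemo {
    // Returns the local system id the catalog maps the public id to.
    public static String resolveLocally() throws Exception {
        Path dir = Files.createTempDirectory("dtds");
        // Stand-in for a real local copy of the DTD.
        Files.writeString(dir.resolve("xhtml1-strict.dtd"),
                "<!ELEMENT html ANY>");
        // Minimal OASIS catalog mapping the XHTML 1.0 Strict public id
        // to that local copy.
        Path catalog = dir.resolve("catalog.xml");
        Files.writeString(catalog,
            "<catalog xmlns=\"urn:oasis:names:tc:entity:xmlns:xml:catalog\">\n"
          + "  <public publicId=\"-//W3C//DTD XHTML 1.0 Strict//EN\"\n"
          + "          uri=\"xhtml1-strict.dtd\"/>\n"
          + "</catalog>\n");
        CatalogResolver resolver = CatalogManager.catalogResolver(
                CatalogFeatures.defaults(), catalog.toUri());
        InputSource src = resolver.resolveEntity(
                "-//W3C//DTD XHTML 1.0 Strict//EN",
                "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd");
        return src.getSystemId();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(resolveLocally()); // a file: URI, not w3.org
    }
}
```

The same `CatalogResolver` implements `org.xml.sax.EntityResolver`, so it can be installed directly on a `DocumentBuilder` or `XMLReader`; once the DTDs sit next to the catalog file, no lookup ever reaches w3.org.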
It took me a few minutes to figure out where to download them from so I thought it worth posting links: HTML 4: (note no top-level directory in ZIP) XHTML 1.0: We implemented an application to analyse a fairly large website’s XHTML pages for Accessibility. Part of this process involved reading the XHTML into an XmlDocument using standard .Net methods. Because the DTD is stated in the way that it is, these methods automatically looked up the web address. This may be a noob mistake but there will be lots and lots of noobs who won’t realise that by doing the above, they are going to be calling the web address. Our application stopped working and it was only after trawling through Wireshark analysis that we worked out what was going wrong. We had to report the reason for the failure to the customer (government). I’m sure you can imagine the surprise and subsequent guffaws that went around the office when a high level manager of the customer insisted that we get in touch with the head of the W3C to resolve this issue. Needless to say, he was ignored and we developed a workaround. Anyway, in my humble opinion, it is not always the fault of the people who make software that uses the specified location of the DTD to read the XHTML. Perhaps a more foolproof method is required. “dtd://TR/xhtml1/DTD/xhtml1-strict.dtd” to be used perhaps? Or, maybe getting Oracle and Microsoft to make the standard .Net and Java XML parsers create the cache for the DTD out of the box would be most efficient. Another idea, although maybe not workable, would be to return a page with a set of links to standard code that will cache the DTDs instead of returning 503 responses. I apologize in advance, but I have a somewhat dissenting opinion. If a schema on w3.org’s site references other schemas, which in turn reference other schemas that redefine still other schemas, it’s a little much to expect the coder or application to know which ones to download. 
I’ve tried downloading every reference I can find (in the VoiceXML 2.1 schema) and loading them for unit testing is still taking several minutes per file, which probably means that despite my best efforts, your site is still being hit. If you put your URL at the top, that URL is gonna get hit. Come up with another address, or an alternative attribute at the top of the schema that tells the processing file not to hit you. This is all very well and I can see why you would block IP addresses for badly written code. However, the authors of the article are making huge assumptions about the people who might be trying to download your resources. I am suffering the 503 response at present. I’m not a web site author or programmer, I’m a Unix systems administrator trying to implement a local copy of your validator with HTML5 support. To do this, I have to install the validator.nu application. This won’t build as it is attempting to download one of your DTDs. I work for a reasonably large University with several thousand users who mostly get onto the Web via a (very large) caching proxy. This effectively means that we will all appear to have the same IP address. I don’t have the skills to change the validator.nu application builder and I don’t have time to learn Python only to forget it when I never use it again. So what do you suggest? I can’t change my external IP address and can’t change the code of (probably several hundred) students who may very well be writing poor code/html. If I can successfully install the validator.nu engine then I may take away a considerable amount of traffic to both your and their website as all conformance checking will be done locally. @MATT: thank you for the links. Yes, I searched for a ZIP of the DTDs to download them from dedicated servers with Windows.
For those that don’t have the links – there is a patch to the validator.nu build script that downloads the DTDs as zip files – the patch is available at . Thanks to AlanJ for this information. Could someone post a full C# example on how to validate an XML file against an XSD (and/or DTD?) with local copies of all/some XSDs (including recursive considerations and possible security issues)? MSDN is a mess (as always…). There is a C# library but it doesn’t work either. Hi all. This particular issue continues to perplex me. There are a number of XML Schemas and DTD documents that are required for general purpose XML processing (not just HTML, xhtml, etc), and the ‘throttling’ has impacted me in a number of ways. Despite the availability of web catalog systems, and other resources, it is ‘painful’ and inconvenient to manage. I have decided to start up a new open-source project (in Java) specifically targeting this type of issue. It is intended to be a ‘simple’ EntityResolver. It respects the Cache-Control HTTP headers in the response headers in order to maintain an on-disk cache of web resources. Only if the cache-entry goes out of date will the source server be re-queried. The initial version is a functional resolver that is suitable for running in a multi-threaded environment, and in a way where multiple JVMs can share the same cache location. In other words, given that the w3c.org standard cache-control timeout is 90 days, it should be possible to set up a single cache folder for all your Java programs and have each document only pulled once every 90 days from w3c.org. Further, I anticipate that future versions will be expanded to allow a ‘chain’ of resolvers where ‘catalog’ type resolvers can be queried, and only if those fail, will the caching resolver be used. This should allow for a well-behaved, easy to use, efficient EntityResolver system.
If anyone is interested in playing with the code you can grab it from github at (Apache 2.0 license) , you can contact me through github, and feel free to offer suggestions, criticisms, etc. Rolf, A caching catalog is the ideal solution and you might want to look at this earlier start on the subject. Ideally such a resolver would make its way upstream into the JDK. Past lobbying efforts for that to happen have not been successful. I will gladly try to raise attention to your effort. Hi Ted. Thanks for the feedback, and encouragement. I don’t think this is the right place for a full discussion on the requirements, etc. I have set up an ‘issue’ and a wiki page for the proposal, and I am hoping to maybe have people comment on it there. It would be great to make it ‘official’ that way (part of the JRE). I guess I set my sights low by ‘hoping’ this could be a future apache-commons type tool…. ;-), and if not that, then I could possibly incorporate it into the JDOM project. Right now it is too early though. So, please head over to: It would be great if you could contact me directly on this too. I would love to pick your brains on some ideas. Thanks Rolf Pingback: URI casuistry | Messages in a Bottle Pingback: QAS XML fail | RMIS Studio Blog Pingback: DOMDocument appendXML with special characters - PHP Solutions - Developers Q & A Pingback: DOMDocument::validate() problem - PHP Solutions - Developers Q & A Pingback: Is Scala/Java not respecting w3 “excess dtd traffic” specs? | Ask Programming & Technology xhtml1-strict.dtd states “Copyright (c) 1998-2002 W3C (MIT, INRIA, Keio), All Rights Reserved.” Therefore, a local copy of the DTD can’t be obtained from another source than the W3C with legal permission. Even if the DTD URI were only used for identification, there would still be a download attempt to just another W3C URL.
For non-XHTML-specific, general XML tools, it is quite unlikely that they have local copies or know about URLs of all kinds of (even custom) DTDs, which might additionally be not that static as the XHTML DTDs, so they just attempt to download them from somewhere if the URI is an HTTP URL. Pingback: HTTP 503 While Parsing XML - Art of Coding I’ve written a Java wrapper in order to do XSLT on the command line. Since the tool is intended to be usable for all kinds of XML input, it doesn’t have special handling for XHTML. However, to avoid automatic download attempts, I’ve disabled URL resolving completely, because – as said – those URLs are intended as identifiers. As Elliotte Rusty Harold said here on 2010-05-08, it is impossible to successfully parse XHTML without the corresponding DTD and entity definitions. So I added a mechanism to resolve references locally based on a configuration file, where a user has to obtain the DTD and entity definition files from you manually (due to legal restrictions, they’re not redistributable, so I can’t pre-package them with my software). Therefore, I don’t do any caching of automatic download attempts, because other DTDs than the XHTML DTD may change more frequently or may become inaccessible, and, as said, the URL is for identification, not for download. This is how I’ve implemented it: xsltransformator1 (released under GNU AGPL 3 or any later version). If you would change your licensing of the DTD and entity files (at least permit redistribution, you may still prohibit modifications to the files), you would do both, increase user experience and lower the traffic to your site. Stephan, We would very much like to see our schemata included in library and tool catalogs and you certainly can include them. 
W3C document license [[Permission to copy, and distribute the contents of this document, or the W3C document from which this statement is linked, in any medium for any purpose and without fee or royalty is hereby granted, provided that you include the following on ALL copies of the document, or portions thereof, that you use:]] Thank you very much for your reply! I wasn’t aware of the W3C Document License and I really appreciate the open approach of the W3C regarding licensing. Would then this modified header comment make the XHTML 1.0 Strict DTD freely redistributable? If so, I would gladly pre-package it with my XHTML-processing tools, so that no user or setup would need to download the DTD from W3C servers. That is certainly fine and I’m trying to get clarification, hence the delay, whether a file in same directory instead of comment is sufficient in case you find that preferable. That DTD hasn’t changed since 2002 and is unlikely to change but you may want to check in the future. Ideal would be XML processors that require these to be stored in a caching catalog and go off caching directives given in HTTP. Packaging manually is certainly better than going over the net thousands of times a day/hour whatever for the same resource so thank you for doing so. Ted, thank you very much for your help! I won’t put a file with only the notification in the pre-packaged DTD directory, but indeed place a corresponding notice as header comment into every single W3C document. The linked one was just for demonstration purposes how it would look in the actual DTD. I got clarification and indeed as I suspected the license can be in a separate file instead of embedded as comment. Do whichever you prefer. I’ve just applied the license to the W3C documents I’m using with my tools, so hopefully this commit isn’t a license violation already, and if it is, please let me know how to comply. Maybe other people might use those license headers as well. 
In any case, thank you very much for your efforts, I really appreciate it! Reading even the most primitive XHTML 1.1 file with an XML processor requires no less than 38 files to be obtained either from W3C or from a local catalog due to modularization. It seems that none of the required W3C files complies with the W3C Document License in the header comment per default, so they’re not easy to redistribute and downloading them for each software installation might seem to be an option – if downloading is even possible, since some system IDs don’t provide a URL (and therefore won’t be processable at all without a redistributed local catalog). I’ve just committed the XHTML 1.1 files, all extended by the W3C Document License in their header comments. The way the W3C Document License works as a redistributable license for free software packages is that I’ve obtained those documents from the copyright holder (W3C), which granted me the right to distribute the documents for any purpose (including sublicensing) as long as I comply with the license, so I distribute them as part of a free software package while sublicensing the W3C documents to every recipient, who gets the right to distribute the package and the W3C documents from my sublicensing without the need to become a licensee from W3C directly. As the W3C Document License isn’t particularly freedom protective in itself (allows restrictive sublicensing), it’s on the other hand free enough (respecting the four essential freedoms for software), so I didn’t sublicense the W3C documents to my users under the GNU AGPL 3 or later (yet). I would really like to see the W3C or OASIS publish sets of catalogs for download. This would save hours of time trying to configure each tool. NetBeans allows developers to import an existing catalog. Sorry guys but I am a bit offended by ending up writing this. I have spent several days now trying to figure out how to write an XML Schema as it ought to be written.
I have been quite diligent, and have no idea what I have done wrong. It might be that the Java package I am using is broken (I doubt it). It might be that the schema I am trying to import () does something wrong (but I doubt it). In particular I have been working to the w3c document: Why is it so complex and why is it such that silly mistakes (obviously) can have catastrophic consequences? I can’t help feeling it is your own fault in some way. I am not doing anything weird; there are thousands trying to do the same and thus the problem I suspect. Please make it easier for us to use schemas the right way. I would talk to the people behind the Java package or the library it is based on. Quite a few development platforms understand the issue, how it is inefficient and poor design not to mention potentially abusive to incessantly request a remote resource ignoring caching directives. We see a high percentage of Java user-agent strings. I have filed bug reports with some prominent libraries that have gone unanswered. I suspect some of these are not actively maintained. Hello, I am totally confused! I have not visited the W3C site for more than 2 weeks, yet I receive an abuse message telling me that I have attempted to use the site more than 500 times in 10 minutes! I am trying to set up a WordPress site for a local voluntary group. I selected this theme fmedicine because it is the only one that states it is w3c validated! I believe in W3C goals and want to adhere to its high quality levels. Could it be possible that something in the theme is accessing your site? I believe in w3c!
A heads up: The internal XML validator of Java for Linux, used to validate XML against an XSD with this version of Java: $ java -version java version "1.7.0_121" OpenJDK Runtime Environment (IcedTea 2.6.8) (7u121-2.6.8-1ubuntu0.14.04.3) OpenJDK 64-Bit Server VM (build 24.121-b00, mixed mode) Using this script, (from this location:) #!/bin/bash # call xsdv #First find out where we are relative to the user dir callPath=${0%/*} if [[ -n "${callPath}" ]]; then callPath=${callPath}/ fi echo java -cp ${callPath}build:${callPath}lib/xsdv.jar xsdvalidator.validate "$@" java -cp ${callPath}build:${callPath}lib/xsdv.jar xsdvalidator.validate "$@" Always makes a request to: "" With the XSD I’m using (15118-2, I’m not at liberty to share this XSD) This script is very common, and although I’m not that familiar with XML, I wouldn’t be surprised if a lot of XSDs contain a reference to the XMLSchema.xsd. I have a workaround for this, I do NOT need (or want) to be unbanned either since being banned allows me to identify when I’m accessing an external website easily. This seems fundamental to the Java implementation on Linux – so if you can get that fixed, you might get a lot of requests eliminated. I have to generate millions of XML messages for the particular project I’m working on. I had no idea that Java was stupid enough to be accessing your site every time I validated a single XML message I generate. -Rich Pingback: Why are HTML character entities necessary? - QuestionFocus This article has some good info on how to disable external entities in various XML software: Pingback: Personal Hypertext Report #6 – jrnl
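On the "disable external entities" point: since JAXP 1.5 (bundled with Java 7u40+), the standard factories accept properties that forbid external DTD and schema access outright, which stops both the accidental w3.org traffic described in this thread and XXE-style attacks. A hedged sketch — this is one common hardening recipe, not the only one:

```java
import java.io.ByteArrayInputStream;
import javax.xml.XMLConstants;
import javax.xml.parsers.DocumentBuilderFactory;

public class LockedDownFactory {
    // JAXP 1.5: forbid the parser from using any protocol (http, file,
    // ...) to fetch external DTDs or schemas. An empty string means
    // "no protocols allowed".
    public static DocumentBuilderFactory newFactory() throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setAttribute(XMLConstants.ACCESS_EXTERNAL_DTD, "");
        dbf.setAttribute(XMLConstants.ACCESS_EXTERNAL_SCHEMA, "");
        return dbf;
    }

    public static void main(String[] args) throws Exception {
        String doc = "<!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.0 Strict//EN\" "
            + "\"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd\">"
            + "<html/>";
        try {
            newFactory().newDocumentBuilder()
                .parse(new ByteArrayInputStream(doc.getBytes("UTF-8")));
        } catch (org.xml.sax.SAXException e) {
            // The fetch was refused locally instead of hitting w3.org.
            System.out.println("external DTD access blocked");
        }
    }
}
```

Note the trade-off discussed earlier in the thread: refusing the DTD entirely means documents that rely on it (entity references, default attributes) will fail to parse, so for XHTML a local catalog remains the better-behaved fix.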
https://www.w3.org/blog/systeam/2008/02/08/w3c_s_excessive_dtd_traffic/
Accessing and Adding Styles to the Host Element in Stencil By Josh Morony In the previous tutorial, we rebuilt a component that was originally designed with Angular. However, instead of using Angular to build it, we used Stencil to create a generic web component that could work anywhere. Since we were no longer using a framework like Angular, we had to make a couple of changes to the component, but it still ended up functioning mostly the same. One feature we did leave out is the animated bar that indicates how long the flash message will be displayed for. We are going to add that now: In the Angular version, this bar was its own component that was then added inside of the flash message component. We could also do this inside of our own Stencil application – we could just have two separate components and use them together – but in the interest of creating a single self-contained component we are just going to extend the flash component we built with Stencil to include this functionality. In this tutorial, we will be walking through building this animated time bar in Stencil, and along the way we are going to learn how to use the @Element decorator to access the host element of the web component. Before We Get Started This tutorial assumes a basic level of knowledge about Stencil. You should already be reasonably comfortable with what the purpose of Stencil is and the basic syntax. If you are not yet familiar with Stencil, this is a good place to start. NOTE: This tutorial is a continuation of Using Stencil to Create a Custom Web Component with a Public API Method. If you want to follow along step-by-step you will need to have completed that tutorial first. Modify the Template In order to add the time bar to the component, we are going to have to make a couple of modifications to the template. We will add those and then talk through the changes.
Modify the render() function in src/components/my-flash/my-flash.tsx to reflect the following:

render() {
  return (
    <div onClick={() => this.dismiss()} class={'flash-container ' + (this.active ? 'show ' : 'hide ') + this.activeClass}>
      <div class="message">
        {this.message}
      </div>
      <p class="dismiss">tap to dismiss</p>
      <div class="time-bar-container">
        <div class="time-bar"></div>
      </div>
    </div>
  );
}

We have just added a time-bar-container and a time-bar. The concept of the time bar is reasonably simple: the time-bar element just animates between 100% and 0% width in order to achieve the effect. In order for it to display properly, we also need to add a couple of new styles.

Add the following styles to src/components/my-flash/my-flash.css:

.time-bar-container {
  position: absolute;
  top: 0;
  width: 100%;
  height: 8px;
}

.time-bar {
  height: 100%;
  width: 100%;
  background-color: #fff;
}

We will be using a CSS transition to animate the change, but since the time of the transition is going to depend on the duration that the user supplies for the flash message, we will need to add the transition programmatically.

Animating the Time Bar

As I just mentioned, in order to animate the time bar we need to make some style changes programmatically. This means we need to get a reference to the time bar element, and then add styles to it with JavaScript. In order to do this, we will be making use of the @Element decorator available in Stencil.
Modify src/components/my-flash/my-flash.tsx to reflect the following:

import { Component, State, Element, Method } from '@stencil/core';

@Component({
  tag: 'my-flash',
  styleUrl: 'my-flash.css'
})
export class MyFlash {

  @Element() flashElement: HTMLElement;

  @State() active: boolean = false;
  @State() message: string;
  @State() activeClass: string = 'primary';

  private timeout: any;
  private timeBar: HTMLElement;

  componentDidLoad(){
    this.timeBar = this.flashElement.querySelector('.time-bar');
  }

  @Method()
  show(message: string, activeClass: string, duration: number): void {
    this.message = message;
    this.activeClass = activeClass;
    this.active = true;

    this.timeBar.style.opacity = '0.3';
    this.timeBar.style.transition = 'width ' + duration + 'ms linear';
    this.timeBar.style.width = '0%';

    this.timeout = setTimeout(() => {
      this.dismiss();
    }, duration)
  }

  dismiss(){
    this.active = false;

    this.timeBar.style.opacity = '0';
    this.timeBar.style.transition = 'none';
    this.timeBar.style.width = '100%';

    clearTimeout(this.timeout);
  }

  render() {
    return (
      <div onClick={() => this.dismiss()} class={'flash-container ' + (this.active ? 'show ' : 'hide ') + this.activeClass}>
        <div class="message">
          {this.message}
        </div>
        <p class="dismiss">tap to dismiss</p>
        <div class="time-bar-container">
          <div class="time-bar"></div>
        </div>
      </div>
    );
  }
}

Most of this code is the same as before, but there are a few important changes. We are setting up a class member using the @Element decorator as follows:

@Element() flashElement: HTMLElement;

This will set up a class member so that we can access the host element using this.flashElement throughout the class. When using this component, we would add <my-flash></my-flash> somewhere in our code, and this is what the host element is (basically, the element that contains the rest of the web component). Then in the componentDidLoad lifecycle hook, we set up a reference to the time-bar element that is inside of our component’s template.
We do this by using a querySelector on the host element. Basically, we are saying “find an element with a class of time-bar inside of the host element”. The benefit of referencing the host element here, rather than using something like document.getElementById, is that everything remains scoped to our web component. If we do something like theHostElement.querySelector, we know that we are only going to get elements inside of our web component.

Inside of the show method, we modify some styles on the time bar. We set the transition duration to the duration that is supplied to the method, and we also modify the width and opacity. In the dismiss function we “reset” these values. It is important that we set the transition to none because otherwise the time bar has to “recharge” its way back to 100% width over the supplied duration. If you were to then trigger another message before that finished, the bar would not yet be at 100% width. It is also important that we set the opacity of the bar to 0 – since the time bar slowly fades out when it is dismissed, if the time bar remained visible you would see it go back to full width before the flash message is completely dismissed (which looks kind of silly).

Many of my readers will have a background in Angular, and the stuff we are doing here is exactly the kind of thing you should avoid when using Angular. In Angular, you should not modify DOM properties directly like we are doing here. Angular is supposed to be platform agnostic, and so it is best to let Angular decide how best to perform certain actions. However, we aren’t building an application to run on a particular framework, we are building web components for the browser, and so it makes sense to use methods that are specific to the browser.
If you test out the code now, you should have something that looks like this:

Summary

Again, most of the code is very similar to the code that we used for the Angular component; we just don’t have the Angular framework “stuff” to build with, so we use plain JavaScript methods instead.

The main benefit to having built this component as a generic web component is that we can now use it just about anywhere (in any framework, or without using a framework) just by dropping in <my-flash></my-flash>. However, there is still the matter of how to make the web component available to your project (rather than it just being available in the project you have just built it in). In a future tutorial, I will cover how we could package up this web component and make it installable as a standalone web component through something like npm.
https://www.joshmorony.com/accessing-and-adding-styles-to-the-host-element-in-stencil/
Installing Camel K on K3s

This guide assumes you’ve already deployed a K3s cluster and have installed and configured the kubectl command to manage the cluster.

You can create a namespace to install Camel K on:

kubectl create namespace camel-k-test || true

You have two options for the container registry:

- You can configure the Camel K installation to use the Docker registry, Quay.io, or a similar publicly available registry, or;
- You can deploy your own private registry in the cluster or on your network.

Using a Public Registry

Most of those registries require authentication to push images. Therefore, we have to create a secret in the namespace that will contain the credentials to access it. To do so, you can execute:

Note: before running the command below, please make sure that you are logged in to the registry you are planning to use.

kubectl -n camel-k-test create secret generic my-registry-secret --from-file=$HOME/.docker/config.json

Using a Private Registry

Although K3s does not come with a private registry, one can be installed by following the steps described in the K3s private registry documentation.

Note: installing your own registry gives you more flexibility to define how the registry should run, including the level of security required for it to run. More specifically, you can configure your registry to require credentials or not, to use HTTP instead of HTTPS, and so on. For the purpose of this guide, and to show how Camel K can be installed in a seamless way, the installation is demonstrated using an insecure registry (unencrypted and without authentication).

Installing Camel K on K3s with a Private Registry

With the secret created on the cluster, we can install Camel K and tell it to use those credentials when pushing the integrations.

You can now download the kamel CLI tool from the release page and put it on your system path.
After configuring the kamel CLI, you can execute the following command to install Camel K in the namespace, configured to use your private registry:

kamel install -n camel-k-test --force --olm=false --registry address-of-the-registry --organization your-user-id-or-org --registry-insecure true

After doing that, you’ll be ready to play with Camel K. Enjoy!
https://camel.apache.org/camel-k/1.7.x/installation/registry/k3s.html
Overview

- Introduction
- The Image Interface
- Parts of the Image Interface
- Generating an Image
- Writing Image to File
- Reading Image From File
- Base64 Encoding Image
- Conclusion

Introduction

The Image interface is at the core of image manipulation in Go. No matter what format you want to import or export from, it ultimately ends up as an Image. This is where the beauty of Go interfaces really shines. Go comes with support for the gif, jpeg, and png formats in the standard packages. These examples demonstrate how to programmatically generate, encode, decode, write to file, and base64 encode images. We will also cover a little bit about interfaces.

The Image Interface

The Image interface has three elements: the color model, the dimensions, and the color at each pixel. Below is the actual interface definition. The interface is only an abstract thing though. We can't create an Image itself, we can only create things that satisfy the Image interface. Go provides a type that does that. The actual struct that implements the Image interface is the RGBA type.

Now, to implement the interface we just have to have all of the data elements and functions that the interface expects. We don't have to specify that we are implementing the interface with any special keywords. Other languages might require something like class RGBA implements Image, but that is not so in Go.

So, the key thing to remember is that Image is an interface that defines the minimum requirements for pieces interacting together. The RGBA type is the type of image that we will use. We can use the RGBA type anywhere an Image interface is accepted.

// This is just an abstract interface. We can't actually instantiate an image
// directly. We can only use or implement types that implement these functions
// and contain these data elements. By doing so we satisfy the interface,
// and we can use that type anywhere an Image interface is passed.
type Image interface {
    ColorModel() color.Model
    Bounds() Rectangle
    At(x, y int) color.Color
}

Parts of the Image Interface

The Image interface itself is pretty simple since it only has three elements. It is important to understand what each of those three elements represents too. Here is the source for those types.

// The Image interface is simple with only 3 elements. Each one
// of those elements has its own details though. Fortunately,
// they aren't very complex either. Here they are.

type Model interface {
    // Model can convert any Color to one from its
    // own color model. The conversion may be lossy.
    Convert(c Color) Color
}

// Stores two (x,y) coordinates called Points.
// Those points mark the two corners of the rectangle.
type Rectangle struct {
    Min, Max Point
}

// Color stores an RGB value and an Alpha (transparency).
type Color interface {
    RGBA() (r, g, b, a uint32)
}

Generating an Image

package main

import (
    "fmt"
    "image"
)

func main() {
    // Create a blank image 10 pixels wide by 4 pixels tall
    myImage := image.NewRGBA(image.Rect(0, 0, 10, 4))

    // You can access the pixels through myImage.Pix[i]
    // One pixel takes up four bytes/uint8. One for each: RGBA
    // So the first pixel is controlled by the first 4 elements
    // Values for color are 0 black - 255 full color
    // Alpha value is 0 transparent - 255 opaque
    myImage.Pix[0] = 255 // 1st pixel red
    myImage.Pix[1] = 0   // 1st pixel green
    myImage.Pix[2] = 0   // 1st pixel blue
    myImage.Pix[3] = 255 // 1st pixel alpha

    // myImage.Pix contains all the pixels
    // in a one-dimensional slice
    fmt.Println(myImage.Pix)

    // Stride is how many bytes take up 1 row of the image
    // Since 4 bytes are used for each pixel, the stride is
    // equal to 4 times the width of the image
    // Since all the pixels are stored in a 1D slice,
    // we need this to calculate where pixels are on different rows.
    fmt.Println(myImage.Stride) // 40 for an image 10 pixels wide
}

Writing Image to File

The Encode() function accepts a writer, so you could write it to any writer interface. That includes files, stdout, tcp sockets, or any custom one. In this example we open a file writer.

package main

import (
    "image"
    "image/png"
    "os"
)

func main() {
    // Create a blank image 100x200 pixels
    myImage := image.NewRGBA(image.Rect(0, 0, 100, 200))

    // outputFile is a File type which satisfies Writer interface
    outputFile, err := os.Create("test.png")
    if err != nil {
        // Handle error
    }

    // Encode takes a writer interface and an image interface
    // We pass it the File and the RGBA
    png.Encode(outputFile, myImage)

    // Don't forget to close files
    outputFile.Close()
}

Reading Image From File

package main

import (
    "fmt"
    "image"
    "image/png"
    "os"
)

func main() {
    // Read image from file that already exists
    existingImageFile, err := os.Open("test.png")
    if err != nil {
        // Handle error
    }
    defer existingImageFile.Close()

    // Calling the generic image.Decode() will give us the data
    // and the type of image as a string. We expect "png"
    imageData, imageType, err := image.Decode(existingImageFile)
    if err != nil {
        // Handle error
    }
    fmt.Println(imageData)
    fmt.Println(imageType)

    // We only need this because we already read from the file
    // We have to reset the file pointer back to the beginning
    existingImageFile.Seek(0, 0)

    // Alternatively, since we know it is a png already
    // we can call png.Decode() directly
    loadedImage, err := png.Decode(existingImageFile)
    if err != nil {
        // Handle error
    }
    fmt.Println(loadedImage)
}

Base64 Encoding Image

Instead of writing the image data to a file, we could base64 encode it and store it as a string. This is useful if you want to generate an image and embed it directly in an HTML document. That is beneficial for one-time images that don't need to be stored on the file system and for creating stand-alone HTML documents that don't require a folder full of images to go with them.
package main

import (
    "bytes"
    "encoding/base64"
    "fmt"
    "image"
    "image/png"
)

func main() {
    // Create a blank image 10x20 pixels
    myImage := image.NewRGBA(image.Rect(0, 0, 10, 20))

    // In-memory buffer to store PNG image
    // before we base 64 encode it
    var buff bytes.Buffer

    // The Buffer satisfies the Writer interface so we can use it with Encode
    // In the previous example we encoded to a file, this time to a temp buffer
    png.Encode(&buff, myImage)

    // Encode the bytes in the buffer to a base64 string
    encodedString := base64.StdEncoding.EncodeToString(buff.Bytes())

    // You can embed it in an html doc with this string
    htmlImage := "<img src=\"data:image/png;base64," + encodedString + "\" />"

    fmt.Println(htmlImage)
}

Conclusion

With this knowledge, you should be able to handle basic image manipulation with Go. For further reading, refer to the official documentation on the image package at.
https://www.devdungeon.com/content/working-images-go
Episode 272 · November 21, 2018

Adding user avatars is pretty easy using Rails' ActiveStorage feature. We'll be using Devise in this example, but this applies to any user authentication system.

In this episode I'm going to show you a quick way to add avatars to your Rails application using ActiveStorage and Devise. We're going to start up a new Rails application using the Jumpstart template. This is going to take care of pre-installing Bootstrap and Devise for us, so we won't have to fiddle with those gems and can just focus on the avatars.

So let's start up a new rails app named, appropriately, avatars, and run the appropriate generator:

rails new -m template.rb avatars
cd avatars
rails active_storage:install
rails db:migrate

Running the ActiveStorage generator will provide the application with the database migration to create the necessary tables for ActiveStorage. If you aren't using the Jumpstart template, then at this point you would also need to install Devise, set up the User model, and install the Devise views so they can be customized:

rails g devise:install
rails g devise User
rails g devise:views

Let's start customizing those views now. In the registrations edit view, just before the Full Name field, we're going to add a new field:

app/views/devise/registrations/edit.html.erb

<div class="form-group">
  <div class="row">
    <div class="col-sm-4">
      <% if resource.avatar.attached? %>
        <%= image_tag resource.avatar.variant(resize: "100x100!"), class: "rounded-circle" %>
      <% else %>
        <%= image_tag gravatar_image_url(current_user.email, size: 100), height: 100, width: 100, class: "rounded-circle" %>
      <% end %>
    </div>
    <div class="col-sm-8">
      <%= f.file_field :avatar %>
    </div>
  </div>
</div>

You could wrap this in Bootstrap classes if you felt so inclined for additional styling.
But what we're really interested in doing is displaying either the User's avatar or a default from gravatar if the User doesn't have an avatar yet. (The Jumpstart template takes care of giving us the gravatar_image_tag gem. If you're not using the template, you'll need to install it.)

Make sure you add the avatar to the User model:

app/models/user.rb

has_one_attached :avatar

In the Application Controller, you'll want to add these lines if you don't have them. (The Jumpstart template takes care of giving them to us already.)

app/controllers/application_controller.rb

before_action :configure_permitted_parameters, if: :devise_controller?

protected

def configure_permitted_parameters
  devise_parameter_sanitizer.permit(:account_update, keys: [:name, :avatar])
end

What this is doing is making Devise accept the name and avatar fields as valid inputs, since they aren't fields that Devise accepts out of the box.

With all of this set up, start up the Rails application and visit the Sign Up page to create a new account to test this out. Assuming the email address you sign up with has a gravatar image associated with it, once you are logged in you should see that image displayed as your avatar. Visiting the account edit screen will also provide the file picker element that will allow you to upload a new avatar to use instead of the default from gravatar. If you don't have any avatar images handy for testing this out to see how it all works, you can visit UI Faces, which has a wide selection of avatar images available.

Now anywhere in the application where you are going to want to display the avatar is going to need code very similar to what we did in the view above. This calls for some refactoring, pulling that code out into a helper method.

app/helpers/application_helper.rb

def user_avatar(user, size=40)
  if user.avatar.attached?
    user.avatar.variant(resize: "#{size}x#{size}!")
  else
    gravatar_image_url(user.email, size: size)
  end
end

Now we have a generic helper method that handles all of the logic to figure out which avatar image to display. So we can go back to the view and update it to use the helper:

app/views/devise/registrations/edit.html.erb

<div class="form-group">
  <div class="row">
    <div class="col-sm-4">
      <%= image_tag user_avatar(resource, 100), class: "rounded-circle" %>
    </div>
    <div class="col-sm-8">
      <%= f.file_field :avatar %>
    </div>
  </div>
</div>

If you go back to the application and reload the page, we shouldn't see any changes to the application, which is a good thing. The last thing we want to do is update our nav bar to use this new helper to display the avatar of the logged in user. Open up the navbar partial, and replace the gravatar image_tag line with:

app/views/shared/_navbar.html.erb

<%= image_tag user_avatar(current_user, 20), class: "rounded-circle" %>

Return to the application and refresh the page, and the uploaded avatar should now display in the navbar instead of the gravatar image. Now we have working avatars that we can use anywhere in our application: gravatar by default, or the uploaded image provided by a User.

If you want to change where the images are stored, you can use any of the ActiveStorage functionality to upload images to Amazon S3, DigitalOcean Spaces, Google Cloud Storage, or Azure. By default these images will be stored to disk, so keep that in mind. Storing the images to disk is generally fine for development, but in production you will want to upgrade to one of the above mentioned options. To do that you'll need to obtain the requisite keys to configure your chosen storage option in config/storage.yml.
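For reference, the amazon entry in config/storage.yml looks roughly like the sketch below. The region and bucket name are placeholders you'd replace with your own, and the keys are read from Rails encrypted credentials:

```yaml
amazon:
  service: S3
  access_key_id: <%= Rails.application.credentials.dig(:aws, :access_key_id) %>
  secret_access_key: <%= Rails.application.credentials.dig(:aws, :secret_access_key) %>
  region: us-east-1
  bucket: your-bucket-name
```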
Then you need to configure the production environment to use them:

config/environments/production.rb

Replace this:

config.active_storage.service = :local

With this (assuming your chosen image storage solution is Amazon S3):

config.active_storage.service = :amazon

With ActiveStorage, adding images to our Rails application is pretty easy. We don't have to think about changes to our database in order to add images, because ActiveStorage handles all of that for us.
https://gorails.com/episodes/user-avatars-with-rails-active-storage?autoplay=1
Step end_train joins steps from unrelated splits. Ensure that there is a matching join for every split. I checked the code and the self.next looks correct — is there a way to check the DAG that is created from the code and see exactly where the issue is?

Hi, according to Netflix/metaflow#193, ThrottlingExceptions don't cause task failure; however, I am reliably getting a ThrottlingException followed by a task failure a second later, e.g.

2020-11-30 15:16:58.243 [659/link_finder/4638 (pid 1526921)] AWS Batch job error:
2020-11-30 15:16:58.243 [659/link_finder/4638 (pid 1526921)] ClientError('An error occurred (ThrottlingException) when calling the GetLogEvents operation (reached max retries: 4): Rate exceeded')
2020-11-30 15:16:58.539 [659/link_finder/4638 (pid 1526921)]
2020-11-30 15:16:59.030 [659/link_finder/4638 (pid 1526921)] Task failed.
2020-11-30 15:16:59.092 [659/link_finder/4638 (pid 1535437)] Task is starting (retry).

Perhaps a flurry of logging error events is being masked by the throttling exception, hiding the source of the failure?

Hi, I am trying to install metaflow for R following the doc, but ran into the following error when trying to test with metaflow::test():

Metaflow 2.2.0 executing HelloWorldFlow for user:ji.xu
Validating your flow...
The graph looks good!
2020-11-30 10:50:33.216 Workflow starting (run-id 1606762233207871):
2020-11-30 10:50:33.222 [1606762233207871/start/1 (pid 49822)] Task is starting.
2020-11-30 10:50:34.783 [1606762233207871/start/1 (pid 49822)] Fatal Python error: initsite: Failed to import the site module 2020-11-30 10:50:34.785 [1606762233207871/start/1 (pid 49822)] Traceback (most recent call last): 2020-11-30 10:50:34.786 [1606762233207871/start/1 (pid 49822)] File "/Users/ji.xu/anaconda3/envs/r-metaflow/lib/python3.6/site.py", line 550, in <module> 2020-11-30 10:50:34.786 [1606762233207871/start/1 (pid 49822)] main() 2020-11-30 10:50:34.786 [1606762233207871/start/1 (pid 49822)] File "/Users/ji.xu/anaconda3/envs/r-metaflow/lib/python3.6/site.py", line 531, in main 2020-11-30 10:50:34.786 [1606762233207871/start/1 (pid 49822)] known_paths = addusersitepackages(known_paths) 2020-11-30 10:50:34.786 [1606762233207871/start/1 (pid 49822)] File "/Users/ji.xu/anaconda3/envs/r-metaflow/lib/python3.6/site.py", line 282, in addusersitepackages 2020-11-30 10:50:34.786 [1606762233207871/start/1 (pid 49822)] user_site = getusersitepackages() 2020-11-30 10:50:34.786 [1606762233207871/start/1 (pid 49822)] File "/Users/ji.xu/anaconda3/envs/r-metaflow/lib/python3.6/site.py", line 258, in getusersitepackages 2020-11-30 10:50:34.787 [1606762233207871/start/1 (pid 49822)] user_base = getuserbase() # this will also set USER_BASE 2020-11-30 10:50:34.787 [1606762233207871/start/1 (pid 49822)] File "/Users/ji.xu/anaconda3/envs/r-metaflow/lib/python3.6/site.py", line 248, in getuserbase 2020-11-30 10:50:34.787 [1606762233207871/start/1 (pid 49822)] USER_BASE = get_config_var('userbase') 2020-11-30 10:50:34.787 [1606762233207871/start/1 (pid 49822)] File "/Users/ji.xu/anaconda3/envs/r-metaflow/lib/python3.6/sysconfig.py", line 609, in get_config_var 2020-11-30 10:50:34.787 [1606762233207871/start/1 (pid 49822)] return get_config_vars().get(name) 2020-11-30 10:50:34.787 [1606762233207871/start/1 (pid 49822)] File "/Users/ji.xu/anaconda3/envs/r-metaflow/lib/python3.6/sysconfig.py", line 588, in get_config_vars 2020-11-30 10:50:34.787 [1606762233207871/start/1 (pid 
49822)] import _osx_support 2020-11-30 10:50:34.787 [1606762233207871/start/1 (pid 49822)] File "/Users/ji.xu/anaconda3/envs/r-metaflow/lib/python3.6/_osx_support.py", line 4, in <module> 2020-11-30 10:50:34.787 [1606762233207871/start/1 (pid 49822)] import re 2020-11-30 10:50:34.787 [1606762233207871/start/1 (pid 49822)] File "/Users/ji.xu/anaconda3/envs/r-metaflow/lib/python3.6/re.py", line 123, in <module> 2020-11-30 10:50:34.788 [1606762233207871/start/1 (pid 49822)] import sre_compile 2020-11-30 10:50:34.788 [1606762233207871/start/1 (pid 49822)] File "/Users/ji.xu/anaconda3/envs/r-metaflow/lib/python3.6/sre_compile.py", line 17, in <module> 2020-11-30 10:50:34.788 [1606762233207871/start/1 (pid 49822)] assert _sre.MAGIC == MAGIC, "SRE module mismatch" 2020-11-30 10:50:34.788 [1606762233207871/start/1 (pid 49822)] AssertionError: SRE module mismatch 2020-11-30 10:50:34.788 [1606762233207871/start/1 (pid 49822)] Error: Error 1 occurred creating conda environment r-reticulate 2020-11-30 10:50:34.817 [1606762233207871/start/1 (pid 49822)] Execution halted 2020-11-30 10:50:34.820 [1606762233207871/start/1 (pid 49822)] Task failed. 2020-11-30 10:50:34.820 Workflow failed. 2020-11-30 10:50:34.820 Terminating 0 active tasks... 2020-11-30 10:50:34.820 Flushing logs... Step failure: Step start (task-id 1) failed. It seems the problem is on the python side. Has anyone seen the same issue and has a solution? I've been having loads of problems getting my AWS Batch job working through Metaflow. Since I'm not overly experienced with AWS CloudOps, it's hard for me to tell whether the issue is an AWS issue or a Metaflow limitation. Here is one thing I experienced which may help others: @batchdecorator, I noticed that GPU instances were getting launched even when I explicitly denoted gpu=0as a parameter in the decorator. 
This appears to be happening for a couple of reasons:

1. I maxed out my vCPU limit on my CPU ComputeEnvironment, which is forcing a job to launch on the GPU ComputeEnvironment. After talking with AWS support: if any of you here are really wanting to crank up the number of batch workers, make sure the MaxVCPUBatch parameter in the CloudFormation template is also adjusted upwards accordingly. For me, I'm running Dask parallelization within each Batch task, so I'm using up the MaxVCPUBatch pretty quickly and was only seeing one c5.18xlarge instance launch at any one time when I had a MaxVCPUBatch value of 96 in my CloudFormation template. So ... even though the Metaflow documentation lists a --max-workers parameter in the CLI, the maximum number of workers will also be throttled by MaxVCPUBatch in the CloudFormation template.

2. Explicitly denoting gpu=0 does nothing within the Metaflow @batch decorator (BatchJob class). I know there are a lot of ways to correct for this (separate job queues, the solution mentioned above, etc.) but was curious what the Metaflow devs on this forum think of possibly changing line 150 in batch_client.py to read if int(gpu) >= 0 to protect from GPU instances being launched "unnecessarily".

Metaflow 2.2.5 executing HelloAWSFlow for user:jpujari
Validating your flow...
The graph looks good!
Running pylint...
Pylint is happy!
2020-11-30 17:12:42.323 [58/hello/251 (pid 14705)] [c0cd1149-2c44-4178-bbfe-40179180c331] Setting up task environment.
2020-11-30 17:12:42.325 [58/hello/251 (pid 14705)] [c0cd1149-2c44-4178-bbfe-40179180c331] /bin/sh: 1: [: -le: unexpected operator 2020-11-30 17:12:44.974 [58/hello/251 (pid 14705)] [c0cd1149-2c44-4178-bbfe-40179180c331] /bin/sh: 1: [: -gt: unexpected operator 2020-11-30 17:12:44.975 [58/hello/251 (pid 14705)] [c0cd1149-2c44-4178-bbfe-40179180c331] tar: job.tar: Cannot open: No such file or directory 2020-11-30 17:12:44.977 [58/hello/251 (pid 14705)] [c0cd1149-2c44-4178-bbfe-40179180c331] tar: Error is not recoverable: exiting now 2020-11-30 17:12:44.977 [58/hello/251 (pid 14705)] AWS Batch error: 2020-11-30 17:12:45.225 [58/hello/251 (pid 14705)] Essential container in task exited This could be a transient error. Use @retry to retry. 2020-11-30 17:12:45.233 [58/hello/251 (pid 14705)] 2020-11-30 17:12:47.600 [58/hello/251 (pid 14705)] Task failed. 2020-11-30 17:12:47.878 [58/hello/251 (pid 28819)] Task is starting (retry). 2020-11-30 17:12:48.588 [58/hello/251 (pid 28819)] Sleeping 2 minutes before the next AWS Batch retry Task is starting. 
<flow UserProfileFlow step make_user_profile[14] (input: [UserList(user_id=18...)> failed: Internal error Traceback (most recent call last): File "/metaflow/metaflow/datatools/s3.py", line 588, in _read_many_files stdout, stderr = self._s3op_with_retries(op, File "/metaflow/metaflow/datatools/s3.py", line 658, in _s3op_with_retries time.sleep(2i + random.randint(0, 10))op.stderrn16glb60' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/metaflow/metaflow/cli.py", line 883, in main start(auto_envvar_prefix='METAFLOW', obj=state) File "/metaflow/metaflow_UserProfileFlow_linux-64_b2cb8ad829dfa351f545cf2d7738ca9eab794992/lib/python3.8/site-packages/click/core.py", line 829, in __call__ return self.main(args, kwargs) File "/metaflow/metaflow_UserProfileFlow_linux-64_b2cb8ad829dfa351f545cf2d7738ca9eab794992/lib/python3.8/site-packages/click/core.py", line 782, in main rv = self.invoke(ctx) File "/metaflow/metaflow_UserProfileFlow_linux-64_b2cb8ad829dfa351f545cf2d7738ca9eab794992/lib/python3.8/site-packages/click/core.py", line 1259, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "/metaflow/metaflow_UserProfileFlow_linux-64_b2cb8ad829dfa351f545cf2d7738ca9eab794992/lib/python3.8/site-packages/click/core.py", line 1066, in invoke return ctx.invoke(self.callback, ctx.params) File "/metaflow/metaflow_UserProfileFlow_linux-64_b2cb8ad829dfa351f545cf2d7738ca9eab794992/lib/python3.8/site-packages/click/core.py", line 610, in invoke return callback(args, kwargs) File "/metaflow/metaflow_UserProfileFlow_linux-64_b2cb8ad829dfa351f545cf2d7738ca9eab794992/lib/python3.8/site-packages/click/decorators.py", line 33, in new_func return f(get_current_context().obj, args, kwargs) File "/metaflow/metaflow/cli.py", line 437, in step task.run_step(step_name, File "/metaflow/metaflow/task.py", line 394, in run_step self._exec_step_function(step_func) File "/metaflow/metaflow/task.py", line 47, in 
_exec_step_function step_function() File "train.py", line 121, in make_user_profile files = s3.get_many(user_keys, return_missing=True) File "/metaflow/metaflow/datatools/s3.py", line 417, in get_many return list(starmap(S3Object, _get())) File "/metaflow/metaflow/datatools/s3.py", line 411, in _get for s3prefix, s3url, fname in res: File "/metaflow/metaflow/datatools/s3.py", line 597, in _read_many_files yield tuple(map(url_unquote, line.strip(b'\n').split(b' '))).inputs._ztikcnn' service@ Hey Metaflow, How can I disable the timestamp in the printout? I tried to monkeypatch the logger as follows but it messed all the parameter taking up and didn't work from metaflow import cli from functools import partial cli.logger = partial(cli.logger, timestamp=False) Is there any other way? from " if you have separate training and prediction flows in production, the prediction flow can access the previously built model as long as one exists in the same namespace" I have two such flows, but I can't figure out how to have them in the same namespace. I've tried --authorize but it seems it creates a unique production token (i.e. namespace("production:flow1-0-zjgv")) for every unique flow name. I'm able to get around this by changing namespaces inside the script, but it sounds from the documentation there should be a way to have them in the same namespace so that I can more easily access trained models from the traning flow when I run the prediction flow. Am I misunderstanding something here? Hi here, we're developing the plugin to run the Metaflow flows in the k8s using Argo Workflows. It's similar to the AWS Step Functions plugin but generates an Argo's WorkflowTemplate instead of the SFN's StateMachine. Also it adds an extra @argo decorator to specify k8s resources: @argo(image='tensorflow/tensorflow:2.2.1-gpu-py3', nodeSelector={'gpu': 'nvidia-tesla-v100'}) @resources(gpu=1, cpu=2, memory=6000) @step def training(self): ... 
Would you be interested in making such a plugin part of the Metaflow project?

I've been running into this issue quite a bit recently. AWS Batch Error: CannotCreateContainerError: Error response from daemon: devmapper: Thin Pool has 4115 free data blocks which is less than minimum required 4449 free data blocks. Create more free space in thin pool or use dm.min_free_space option to change behavior This could be a transient error. Use @retry to retry. Removing any EC2s controlled by Metaflow and letting the ASG create new ones seems to temporarily solve the problem, but it keeps reappearing. Any advice on how to mitigate?

python flow.py run causes the machine I'm running this on to go OOM after launching a Python process for each job. This is after setting --max-workers and --max-num-splits high enough to allow this many jobs.

I got this very weird issue running metaflow locally with a step that imports pandas... while running the code outside of metaflow it all works fine... but the same code in the same env with metaflow gives me ImportError: cannot import name 'Collection' from 'typing' (/home/hexa/.cache/pypoetry/virtualenvs/nima-images-H6fn72k1-py3.7/lib/python3.7/site-packages/typing.py) Ideas?

Hey Metaflow, one unexpected discovery I've stumbled upon is that the orchestration of the DAG is performed locally, even when running on AWS Batch. Additionally, I created an access list for our AWS API Gateway that our Metaflow API uses, as one layer of security. This means that we have to leave our data scientists' machines running and on the VPN during long-running training flows. Is there any configuration of Metaflow that allows the orchestration to be performed remotely, so our development machines can be disconnected from the VPN or even shut down during execution?

/migration_service/migration_server.py:17: DeprecationWarning: loop argument is deprecated
  app = web.Application(loop=loop)
/migration_service/migration_server.py:28: DeprecationWarning: Application.make_handler(...)
is deprecated, use AppRunner API instead

AttributeError: 'NoneType' object has no attribute 'cursor'

/bin/sh: 1: metadata_service: not found

@batch(image='custom-image'). What would be the best way to build on top of the default image to ensure requirements for Metaflow are preserved? How does Metaflow install necessary dependencies on the default image?

curl is not installed in the image. It seems to be used to look up the dynamodb host. FWIW not an unreasonable dependency, and I was surprised to find that curl wasn't already baked into the continuumio/miniconda3:latest image. Once I installed it in the docker image, the dynamodb host resolution worked as expected.

Hello all, I'm trying to take a look at the artifacts associated with one of our SFN executions, but when trying to call:

Step(...).task.data

we run into the following error:

ServiceException: Metadata request (/flows/{Flow}/runs/{Run}/steps/start/tasks/{Task}/artifacts) failed (code 500): 500 Internal Server Error

Looking at the logs from our metadata service, I see the following error:

Traceback (most recent call last):
  File "/opt/latest/lib/python3.7/site-packages/aiohttp/web_protocol.py", line 418, in start
    resp = await task
  File "/opt/latest/lib/python3.7/site-packages/aiohttp/web_app.py", line 458, in _handle
    resp = await handler(request)
  File "/opt/latest/lib/python3.7/site-packages/services/metadata_service/api/artifact.py", line 140, in get_artifacts_by_task
    artifacts.body)
  File "/opt/latest/lib/python3.7/site-packages/services/metadata_service/api/artifact.py", line 355, in _filter_artifacts_by_attempt_id
    attempt_id = ArtificatsApi._get_latest_attempt_id(artifacts)
  File "/opt/latest/lib/python3.7/site-packages/services/metadata_service/api/artifact.py", line 349, in _get_latest_attempt_id
    if artifact['attempt_id'] > attempt_id:
TypeError: string indices must be integers

Hello, I'm trying to run metaflow on a docker alpine image with python and node.
I've installed the metaflow required dependencies in the image along with other node requirements for my use case, and ran my script using --with batch:image=my-custom-image. It resulted in this error:

2020-12-23 00:14:06.109 [4747/start/30948 (pid 15530)] [a3c2939b-9256-41e5-8c7e-f076261e2739] Setting up task environment.
2020-12-23 00:14:06.109 [4747/start/30948 (pid 15530)] [a3c2939b-9256-41e5-8c7e-f076261e2739] /usr/bin/python: No module named pip
2020-12-23 00:14:06.109 ] tar: can't open 'job.tar': No such file or directory
2020-12-23 00:14:08.205 [4747/start/30948 (pid 15530)] AWS Batch error:
2020-12-23 00:14:08.439 [4747/start/30948 (pid 15530)] Essential container in task exited This could be a transient error. Use @retry to retry.
2020-12-23 00:14:08.440 [4747/start/30948 (pid 15530)]
2020-12-23 00:14:08.791 [4747/start/30948 (pid 15530)] Task failed.

My question is: pip and the required python dependencies are installed in the container, so what is causing the No module named pip error? Thanks

Has anyone had problems with multi-GPU on AWS Batch? I get errors like:

CannotStartContainerError: Error response from daemon: OCI runtime create failed: container_linux.go:370: starting container process caused: process_linux.go:459: container init caused: Running hook #0:: error running hook: exit status 1, stdout: , stderr This could be a transient error. Use @retry to retry.

1 GPU works fine

CONDA_CHANNELS but that doesn't work for me. METAFLOW_RUN_ to the desired variable but this doesn't seem to bring the variable into the AWS Batch environment for me. environment decorator but I'm getting a linting error when attempting to use that. Then, when I use --no-pylint to override, none of my Flow steps work. os.environ.get a custom environment variable from within one of my Metaflow steps that was created in my local environment and passed to the AWS Batch environment. I feel like I'm missing something rather obvious.
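The environment-variable question above boils down to two steps: capture values from the local shell at launch time, then re-export them inside the remote container before user code runs. A minimal pure-Python sketch of that idea (the helper name is illustrative; Metaflow's own mechanism for this is the @environment decorator, whose exact API should be checked against the docs):

```python
import os

def capture_env(names):
    """Snapshot selected variables from the local environment so they
    can be shipped along with a job and re-exported remotely."""
    return {n: os.environ[n] for n in names if n in os.environ}

# At launch time, on the developer's machine:
os.environ["MY_API_TOKEN"] = "s3cret"  # pretend this was set in the shell
snapshot = capture_env(["MY_API_TOKEN", "MISSING_VAR"])

# Inside the remote task, before user code runs:
os.environ.update(snapshot)
print(os.environ["MY_API_TOKEN"])
```

Note that variables absent locally are simply skipped, so the remote side only sees what actually existed at launch time.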
hey guys – curious if anyone else would find value in exposing the batch job parameter for sharedMemorySize? It looks like the AWS Batch team added the parameter towards the end of last year, and it's a passthrough to docker run --shm-size, which can really speed up the performance of pytorch parallel dataloaders (especially to saturate multiple GPUs) and some boosting libraries. ECS defaults the instance shm to 50% of memory allocation, but docker will only expose 64mb of that by default to running containers.

📣 Metaflow was just included in Netflix's security bug bounty program! Find vulnerabilities in the code and get paid for it 💰 (Or just enjoy Metaflow getting more secure over time)

Hey Metaflow! I have a pretty specific question: I find myself having trouble running a flow on AWS Batch that uses a container with pre-installed Python libraries. I happen to be using conda to install a few extra libraries in this step, but by doing so it seems I now have a fragmented environment. Any advice on how one can use a Docker container as a base environment and then add a few more packages in a specific step using conda? The success criteria here would be to successfully import a package installed by the Docker image as well as a different package installed by the conda decorator.

--max-workers between the local runtime and when deployed via SFN, specifically when having nested foreach fanouts. Locally, the runtime will enforce the parallelization at the task level so it will never go beyond that; however, the SFN concurrency limit is enforced per-split, so the nested fanout will result in an effective parallelism of max-workers^2. Similarly, normal fanouts in a SFN deployment are not rate limited. Not sure it's worth explicitly stating this in the docs, but thought I'd mention it just in case
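The nested-fanout point above is easy to quantify: if each level of a nested foreach is throttled independently rather than globally, the worst-case task parallelism multiplies. A small illustrative calculation (the value of max_workers is invented for the example):

```python
max_workers = 16  # illustrative value for --max-workers

# Local runtime: the limit applies across all tasks, so parallelism is capped flat.
local_parallelism = max_workers

# SFN deployment: the limit is enforced per split, so a foreach nested one
# level deep can run up to max_workers tasks for each of max_workers splits.
sfn_parallelism = max_workers ** 2

print(local_parallelism, sfn_parallelism)
```

With a limit of 16, the same flow that runs at most 16 tasks locally can run up to 256 tasks concurrently under a per-split limit, which is exactly why the difference is worth knowing before deploying.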
https://gitter.im/metaflow_org/community
Opened 10 years ago Closed 10 years ago #9934 closed (invalid) Django is generating requests that result in untyped binary data download in FF 3

Description

Hi, I am having a problem that seems like a ghost. I can't find where it's coming from. It doesn't happen all the time. But what happens is that sometimes when I go to a page, instead of loading the page, the browser asks me if I would like to download a file called "untyped binary data". I am hosting the project on Dreamhost. I am using fcgi. I am starting the requests here in a dispatch.fcgi:

#!/home/mysite/opt/bin/python
import sys, os
from flup.server.fcgi import WSGIServer
from django.core.handlers.wsgi import WSGIHandler

# Add a custom Python path.
sys.path.insert(0, "/home/mysite/site_code")

# Switch to the directory of your project. (Optional.)
os.chdir("/home/mysite/site_code")

# Set the DJANGO_SETTINGS_MODULE environment variable.
os.environ['DJANGO_SETTINGS_MODULE'] = "mysite.settings"

WSGIServer(WSGIHandler()).run()

This is what the file looks like that it asks me to download

Attachments (1)

Change History (4)

Changed 10 years ago by

comment:1 Changed 10 years ago by

I have been investigating this further, and if you look at the file I attached there is a character at the front of the HTTP/1.1. I think that might be throwing off the browser.

comment:2 Changed 10 years ago by

comment:3 Changed 10 years ago by

I can't reproduce this, nor have I ever seen this behavior before on any Django install. I have to assume it's a misconfigured web server on your end, but please feel free to reopen if you can indeed trace this back to a Django bug.

This is the body of the file it asks me to download.
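The "character at the front of HTTP/1.1" observation in comment:1 can be checked mechanically: if any byte (for example a UTF-8 BOM saved into dispatch.fcgi by an editor) precedes the status line, the browser no longer recognizes the response as HTTP. A hedged sketch of such a check; the sample bytes are invented for illustration:

```python
# Simulated first bytes of the downloaded "untyped binary data" file;
# here a UTF-8 BOM precedes the status line.
data = b"\xef\xbb\xbfHTTP/1.1 200 OK\r\nContent-Type: text/html\r\n"

if data.startswith(b"HTTP/"):
    print("response starts cleanly")
else:
    stray = data[:data.index(b"HTTP/")]
    print("stray bytes before status line:", stray)
```

Running this kind of check against the attached file would show exactly which bytes the server (or a script) emitted before the headers.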
https://code.djangoproject.com/ticket/9934
Sitrep is a source code analyzer for Swift projects, giving you a high-level overview of your code: Behind the scenes, Sitrep captures a lot more information that could be utilized – how many functions you have, how many comments (regular and documentation), how large your enums are, and more. These aren't currently reported, but could be in a future release. It's also written as both a library and an executable, so it can be integrated elsewhere as needed. Sitrep is built using Apple's SwiftSyntax, which means it parses Swift code accurately and efficiently.

Note: Please make sure that the SwiftSyntax version specified in Package.swift matches your current Swift tools version. For example, if you're using Swift tools 5.3 you need to change the spec from 0.50400.0 to 0.50300.0.

If you want to install the Sitrep command line tool, you have three options: Homebrew, Mint, or building it from the command line yourself.

Use this command for Homebrew:

brew install twostraws/brew/sitrep

Using Homebrew allows you to run sitrep directly from the command line. For Mint, install and run Sitrep with these commands:

mint install twostraws/Sitrep@main
mint run sitrep@main

And finally, to build and install the command line tool yourself, clone the repository and run make install:

git clone
cd Sitrep
make install

As with the Homebrew option, building the command line tool yourself allows you to use the sitrep command directly from the command line.

Sitrep is implemented as a library that does all the hard work of scanning and reporting, plus a small front end that handles reading and writing on the command line. As an alternative to using Sitrep from the command line, you can also use its library SitrepCore from inside your own Swift code.

First, add Sitrep as a dependency in your Package.swift file:

let package = Package(
    //...
    dependencies: [
        .package(url: "", .branch("master"))
    ],
    //...
)

Then import SitrepCore wherever you'd like to use it.
When run on the command line without any flags, Sitrep will automatically scan your current directory and print its findings as text. To control this behavior, Sitrep supports several command line flags:

-c lets you specify a path to your .sitrep.yml configuration file, if you have one.
-f sets the output format. For example, -f json enables JSON output. The default behavior is text output, which is equivalent to -f text.
-i will print debug information, showing the settings Sitrep would use if a real scan were requested, then exit.
-p sets the path Sitrep should scan. This defaults to your current working directory.
-h prints command line help.

You can customize the behavior of Sitrep by creating a .sitrep.yml file in the directory you wish to scan. This is a YAML file that allows you to provide permanent options for scanning this path, although right now this is limited to one thing: an array of directory names to exclude from the scan. For example, if you wanted to exclude the .build directory and your tests, you might create a .sitrep.yml file such as this one:

excluded:
  - .build
  - Tests

You can ask Sitrep to use a custom configuration file using the -c parameter, for example sitrep -c /path/to/.sitrep.yml -p /path/to/swift/project. Alternatively, you can use the -i parameter to have Sitrep tell you the configuration options it would use in a real analysis run. This will print the configuration information then exit.

Sitrep is written using Swift 5.3. You can either build and run the executable directly, or integrate the SitrepCore library into your own code. To build Sitrep, clone this repository and open Terminal in the repository root directory. Then run:

swift build
swift run sitrep -p ~/path/to/your/project/root

If you would like to keep a copy of the sitrep executable around, find it in the .debug directory after running swift build.
To run Sitrep from the command line just provide it with the name of a project directory to parse – it will locate all Swift files recursively from there. Alternatively, just using sitrep by itself will scan the current directory. Any help you can offer with this project is most welcome, and trust me: there are opportunities big and small, so that someone with only a small amount of Swift experience can help. Some suggestions you might want to explore: Please ensure you write tests to accompany any code you contribute, and that SwiftLint returns no errors or warnings. Sitrep was designed and built by Paul Hudson, and is copyright © Paul Hudson 2021. Sitrep is licensed under the Apache License v2.0 with Runtime Library Exception; for the full license please see the LICENSE file. Sitrep is built on top of Apple's SwiftSyntax library for parsing code, which is also available under the Apache License v2.0 with Runtime Library Exception. Swift, the Swift logo, and Xcode are trademarks of Apple Inc., registered in the U.S. and other countries. If you find Sitrep useful, you might find my website full of Swift tutorials equally useful: Hacking with Swift.
https://swiftpack.co/package/twostraws/Sitrep
18 April 2007 17:03 [Source: ICIS news] By Nigel Davis LONDON (ICIS news)--Polyolefins prices are expected to be under pressure later in 2008 and through to the end of the decade as new low-cost capacity comes on stream in the Middle East and China boosts production. As yet, however, producers have been able to pass on fluctuations in oil and gas prices to a greater rather than to a lesser extent and hold their own in the great profitability race. Companies may not be basking in a warm or even comfortable glow, but supply/demand fundamentals have helped keep them happy. Nova Chemicals CEO Jeff Lipton, for instance, was upbeat at his company's annual general meeting on 12 April. "Results in March of this year improved dramatically after we worked through inventory-related softness in January and February, and we expect the second quarter to be just as strong," he said. "With growing demand and continuing delays in new capacity additions, we believe ethylene and polyethylene [PE] market conditions can remain strong through 2011," he added. Lipton is ever the optimist and surprisingly bullish for the sector up to 2011. Most commentators forecast a drop in PE operating rates from around 2009, particularly as new capacities come on stream in Delays in bringing some plants on line elsewhere in the Polyolefins giant Basell suggests that global polypropylene (PP) operating rates will remain over 90% until 2010. That is the good news. Not so encouraging is its forecast of PE operating rates as low as 83% in 2010 from around 86% this year as the new In these circumstances, how For the much shorter term, however, market analysts are relatively bullish. With demand holding and not currently looking pressured, higher feedstock and energy prices are working to keep polyolefins prices high.
Naphtha prices have hit new peaks in Europe and Asia this year and risen sharply in the Demand in In Societe Generale analyst Sebastian Castelli is also relatively bullish on linear low density polyethylene (LLDPE) and PP in the short term. Energy prices are helping drive the markets for both polymers. The bank expects plastics prices to continue their upward trend and register another slight increase in the next few weeks. The London Metal Exchange plastics futures prices and the physical market are reacting to the upward momentum in energy prices. Polyolefins spot prices have risen in most markets since the beginning of the year as energy prices have climbed. The outlook is more neutral for SocGen is bullish on European North American PP, given strong demand. It is bullish on LLDPE in Europe for the next few weeks and sees LLDPE demand improving in Sources: LME, ICIS & SG Commodities research *First week in April; **Second week in Apr
http://www.icis.com/Articles/2007/04/18/9021776/insight-short-term-polyolefins-outlook-is-good.html
On 14/01/2006, at 2:52 PM, Fabiano Sidler wrote:
> Wow! Two answers within this short time! Are you competing yourself? ;)
>
> 2006/1/14, Graham Dumpleton <grahamd at dscpl.com.au>:
>> Hmmm, I looked at your code, its awfully complicated.
>
> Which one? The older is much more difficult than the one I have by
> now. Actually, the container now only set the name for invoking the
> application (handler in this case) and generates some static classes
> like Request.

In this last thread, you have only posted one bit of code. I used the term "complicated" because you use features like metaclasses, static method creation, overloading of __new__, etc. It just seems to fail the test of keeping it simple for me. I can't really see the reason for doing it all; it just seems to me that it is only going to result in something that will be hard to maintain by someone down the track.

> The problem now is: How can I get the name 'handler' to be set at the
> right place? I'll work on it after a few hours of sleep! ;)

Well, I had a snooze for a couple of hours and it didn't help me much. ;-)

In one of your emails you said you were using a configuration of:

SetHandler python-program
PythonHandler mymodule
PythonDebug On

This means mymodule::handler() needed to exist. Your quoted code did not even have a mymodule.py, and there was no handler() at global scope in the code; the only reference to handler() was nested in a function, i.e., a value containing the name of the module, not the actual module. All in all, it wasn't even clear where the entry point was going to be to trigger stuff to be created. It also worried me that you were making req.write() a static method.

Let me describe a few basic concepts about mod_python which may help you (or not). When you specify 'PythonHandler', the value can be the name of a module, in which case that module must provide a 'handler()' function as the entry point.
Or, it can specify 'module::name' where name is the name of a function in the module to use as an entry point. Whatever the entry point is, it needs to take a single argument which is the request object. What most people don't realise is that if 'name' identifies an unbound method of a class type, an instance of the class type will be created and then the method will be called. Thus you can have:

# .htaccess
SetHandler python-program
PythonHandler example::MyWebApp.__call__

# example.py
from mod_python import apache

class MyWebApp:
    def __init__(self, req):
        apache.log_error("__init__")
        pass

    def __call__(self, req):
        apache.log_error("__call__")
        req.content_type = 'text/plain'
        req.write('hello')
        return apache.OK

In terms of what you are doing, if you wanted an instance of MyWebApp to be created on each request, this is what you would be best off doing. The __init__() method could do any initialisation which might be specific to mod_python to create your server-independent interface. Because __init__() must accept a request object, derived classes couldn't override it for their own purposes, though.

Now I don't know if this helps in any way. I could ramble on a bit more about other things that may be of interest, but I have to run out the door right now.

Graham
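Graham's point, that naming an unbound method in the handler directive causes mod_python to instantiate the class before calling the method, can be mimicked in plain Python. This is an illustrative sketch of the dispatch idea only, not mod_python's actual implementation:

```python
class MyWebApp:
    def __init__(self, req):
        self.req = req

    def __call__(self, req):
        return "hello from " + req

# Sketch: resolve an entry point like 'MyWebApp.__call__' the way the
# email describes -- create an instance first, then invoke the named method.
def dispatch(entry, req, scope):
    if "." in entry:
        cls_name, meth_name = entry.split(".", 1)
        instance = scope[cls_name](req)      # instance created first
        return getattr(instance, meth_name)(req)
    return scope[entry](req)                 # plain function entry point

result = dispatch("MyWebApp.__call__", "request-1", globals())
```

The same resolver handles both forms the email mentions: a bare function name, or a Class.method pair that implies construction.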
http://modpython.org/pipermail/mod_python/2006-January/019963.html
Opened 7 years ago Closed 7 years ago Last modified 7 years ago #13784 closed (duplicate) pre_save is dispatched before executing upload_to attributes on FileFields when using ModelForm

Description

Suppose you have this code:

def callable_upload_to(instance, filename):
    full_path = os.path.join(this_year(), filename)
    print "PLACING", full_path
    return full_path

class Foo(models.Model):
    file = models.FileField(upload_to=callable_upload_to)
    size = models.IntegerField(null=True)

def update_foo(sender, instance, **kwargs):
    print "UPDATING", str(instance.file)

pre_save.connect(update_foo, sender=Foo)

Suppose you create one of these instances with a ModelForm; you'll get this output on stdout:

UPDATING sample.file
PLACING 2010/sample.file

That means that you can't rely on the file path as dictated by the upload_to callable in your pre_save signal. Especially important is that within your pre_save method you can't even get to the file, since it doesn't exist.

Change History (3)

comment:1 Changed 7 years ago by

comment:2 Changed 7 years ago by

comment:3 Changed 7 years ago by

Oh I see. I really did google the tickets but didn't find that one. Personally I think it sucks, but will keep my mouth shut till I provide a patch of my own :) I suspect that the culprit is Model.save_base in django/db/models/base.py where it dispatches the signal before it does whatever it does to those meta.local_files but I don't understand the code enough to attempt a patch at this point.
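The ordering the ticket describes can be illustrated with a plain-Python sketch of a save routine that fires a pre-save hook before the upload_to callable runs. The names mirror the ticket, but this is not Django's actual code:

```python
import os

events = []

def callable_upload_to(instance, filename):
    # Stand-in for the ticket's upload_to callable; "2010" plays the
    # role of this_year().
    full_path = os.path.join("2010", filename)
    events.append("PLACING " + full_path)
    return full_path

def update_foo(instance):
    # Stand-in for the pre_save receiver: it still sees the raw filename.
    events.append("UPDATING " + instance["file"])

def save(instance):
    update_foo(instance)  # signal receivers run first...
    instance["file"] = callable_upload_to(instance, instance["file"])  # ...then upload_to

save({"file": "sample.file"})
print("\n".join(events))
```

The recorded event order reproduces the stdout shown in the ticket, which is exactly why the receiver cannot rely on the final upload path.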
https://code.djangoproject.com/ticket/13784
Hi, I have installed Tomcat 5.5 and open... know how to set the path in Windows XP? Please write the proper command and path also; it's very urgent. Hi Soniya, I am sending you a link. I hope i...:// Thanks. Hi Soniya, We can use Oracle too in struts. hi hi i want to develop an online bit by bit examination process as part of my project. In this i am stuck at how to store multiple choice questions, options and the correct option for the question. This is the first project i am doing Hi... - Struts Hi... Hi, If i am using Hibernate with struts then require... of this installation Hi friend, Hibernate is Object-Oriented mapping tool... more information, tutorials and examples on Struts with Hibernate...struts hi Before asking question, i would like to thank you... into the database could you please give me one example on this where i i... { public ActionForward execute(ActionMapping am,ActionForm af...=st.executeQuery("select type from userdetails1 where username=\'"+uname Struts - Struts Struts How to display single validation error message, instead of ? Hi friend, I am sending you a link. This link will help you. Please visit for more information. Hi ..I am Sakthi.. - Java Beginners Hi ..I am Sakthi.. can u tell me Some of the packages n Sub...;Hi friend, package javacode; import javax.swing.*; import... Tutorial"); tabbedPane.addTab("One", icon, panel1, "Does nothing how to set value in i want to set Id in checkBox from the struts action.
Hi friend, For more information, Tutorials and Examples on Checkbox in struts visit to : struts i have no idea about struts. Please tell me briefly about struts? Hi Friend, You can learn struts from the given link: Struts Tutorials Thanks Struts Struts Am newly developed struts application, I want to know how to logout the page using struts Please visit the following link: Struts Login Logout, Here my question is can i have more than one validation-rules.xml file in a struts application Struts - Struts Struts Dear Sir, I am very new in Struts and want... validation and one of custom validation program, maybe i can understand. Plz provide that examples zip. Thanks and regards Sanjeev. Hi friend struts struts i have one textbox for a date field. When i selected a date from the date calendar then the corresponding date will appear in the textbox. i want code for this in struts. plz help me Struts Code - Struts Struts Code Hi I executed "select * from example" query and stored all the values using bean . I displayed all the records stored in the jsp using struts . I am placing two links Update and Delete beside each record . Now I what is struts? - Struts what is struts? What is struts????? how it is used n what.... For the View, Struts works well with JavaServer Pages, including JSTL and JSF, as well... by transforming the data from one representation to another. Yes, you can use Struts file downloading - Struts Struts file downloading how to download a file when i open a file from popup option like save, open and cancel file is opened but file content... for this response Hi friend, I am sending you a link. This link can anyone tell me how can i implement session tracking in struts? please it's urgent........... session tracking? you mean... one otherwise returns existing one. then u can put any object value in session Developing Simple Struts Tiles Application
You will learn how to setup the Struts Tiles and create example page with it. What is Struts... Developing Simple Struts Tiles Application   How Struts 2 Framework works? How Struts 2 Framework works? This tutorial explains you the working.... In this tutorial you will learn How Struts 2 works with the help of an easy... creation of web applications in Java quite simple and easy. Struts 2 is based I have no.of checkboxes in jsp.those checkboxes values came from the databases.we don't know howmany checkbox values are came from... checkboxes vales in the another jsp which one we are selected with checking Not sure what I am missing ? Any ideas? Not sure what I am missing ? Any ideas? import java.util.*; public...) { if(str.length()==0) return false; for(int i=0;i<str.length();i++) if (c==str.charAt(i)); return true Hi... - Struts Hi... Hello, I want to chat facility in roseindia java expert please tell me the process and when available experts please tell me Firstly you open the browser and type the following url in the address bar to those options through which i can save the data from 1st jsp in different databases like the above i.e friends,family,etc pls can any one tell how can i do...hi on clicking on button on 1st jsp page i want to display How Struts Works How Struts Works The basic purpose of the Java Servlets in struts is to handle requests made by the client or by web browsers. In struts JavaServerPages (JSP) are Articles how to develop a simple JSR 168 compliant Struts portlet. You discover how... Picture In this article, I will describe how to work with Struts, go... to learn a great deal from Struts projects. I see JSF becoming a dominant Struts - Framework Struts Good day to you Sir/madam, How can i start struts application ? Before that what kind of things necessary to learn and can u tell me clearly sir/madam? Hi friend struts struts Hi how struts flows from jsp page to databae and also using validation ? Thanks Kalins Naik hi .. 
need help ASAP ..i have a project buit in eclipse , i have installed jasper. i want the steps to work on it .. on it .. .. am not getting how it works ... from were to start .. thank you...hi .. need help ASAP ..i have a project buit in eclipse , i have installed jasper. i want the steps to work on it .. hi .. need help ASAP ..i struts - Struts struts hi, i have formbean class,action class,java classes and i configured all in struts-config.xml then i dont know how to deploy and test... and do run on server. whole project i have run?or any particular struts hi i would like to have a ready example of struts using "action class,DAO,and services" for understanding.so please guide for the same. thanks Please visit the following link: Struts Tutorials Struts Tag Lib - Struts Struts Tag Lib Hi i am a beginner to struts. i dont have.... the same. Regards, Arun Hi friend, Taglib Directive.... JSP Syntax Examples in Struts : Description The taglib Using radio button in struts - Struts that i am trying to solve. Here are the details : I have a list of TV's... - selection 2 I need to get two values from the selection of the user the first... , but the radio button has only just one value that i can pass.what can i do to solve Links - Links to Many Struts Resources Struts Tutorials One of the Best Jakarta Struts available on the web. Struts... you how to develop Struts applications using ant and deploy on the JBoss... input from HTML forms through Struts. Demystifying Jakarta Struts Also How to code in struts for 3 Failed Login attempts - Struts How to code in struts for 3 Failed Login attempts Hi, I require help. I am doing one application in struts where i have to incorporate...;Hi friend, Read for more information. Struts - Struts Struts for dummies pdf download I am looking for a PDF for struts beginners.Thanks Textarea - Struts Textarea Textarea Hi, this is ramprasad and i am using latest... characters.Can any one? Given examples of struts 2 will show how to validate... 
we have created five different files including three .jsp, one .java and one.xml Servlet - Struts Servlet Can I can my action class from servlet? If yes, then how? Hi friend, I am sending you a link. I hope that this link will help you please visit for more information: que - Struts que how can i run a simple strut programm? please explain with a proper example. reply soon. Hi Friend, Please visit the following link: Thanks
http://www.roseindia.net/tutorialhelp/comment/13640
On Sun, 29 Jun 2008 21:40:04 -0500 "Matthew D. Swank" <akopa@...> wrote:
> Attached is a first pass at adding AF-LOCAL sockets with abstract
> namespace addresses as a subclass of LOCAL-SOCKET.

Is this something that could make it into sbcl? I realize it needs test suite support, but I wanted to make sure there was at least some interest in merging it. If someone has a better idea, that's fine; I just would like it if sbcl supported UDS's with abstract namespace addresses in some fashion.

Thanks,
Matt

--
"You do not really understand something unless you can explain it to your grandmother." -- Albert Einstein.
http://sourceforge.net/p/sbcl/mailman/sbcl-devel/?viewmonth=200807&viewday=2
Class-based mail views for Django

Project description

Class-based email views for the Django framework, including a message previewer.

Introduction

Rendering and sending emails in Django can quickly become repetitive and error-prone. By encapsulating message rendering within view classes, you can easily compose messages in a structured and clear manner.

Basic Usage

from mailviews.messages import EmailMessageView

# Subclass the `EmailMessageView`, adding the templates you want to render.
class WelcomeMessageView(EmailMessageView):
    subject_template_name = 'emails/welcome/subject.txt'
    body_template_name = 'emails/welcome/body.txt'

# Instantiate and send a message.
message = WelcomeMessageView().send(extra_context={
    'user': user,
}, to=(user.email,))

This isn't actually the best pattern for sending messages to a user – read the notes under "Best Practices" for a better approach.

Using the Preview Site

Registering URLs and Enabling Discovery

- Add mailviews to your project's INSTALLED_APPS setting.
- Add the following somewhere within your project's ROOT_URLCONF:

from mailviews.previews import autodiscover, site

autodiscover()

urlpatterns = patterns('',
    url(regex=r'^emails/', view=site.urls),
)

The preview index will now be available at the emails/ URL.

Creating Preview Classes

To create a simple preview, add an emails.previews submodule within one of your INSTALLED_APPS, and create a new subclass of Preview.

from mailviews.previews import Preview, site
from example.emails.views import WelcomeMessageView

# Define a new preview class.
class BasicPreview(Preview):
    message_view = WelcomeMessageView

# Register the preview class with the preview index.
site.register(BasicPreview)

You can see more detailed examples within the test suite or in the code documentation for mailviews.previews.

Customizing Preview Behavior

You can also use Django forms to customize the creation of message previews by adding a form_class attribute to your Preview subclasses.
The form must provide a `get_message_view_kwargs` method that returns the keyword arguments to be used when constructing the message view instance.

## Best Practices

Try to avoid using the `extra_context` argument when sending emails. Instead, create an `EmailMessageView` subclass whose constructor accepts as arguments all of the objects that you require to generate the context and send the message. For example, the code shown in "Basic Usage" could be written instead as the following:

```python
from mailviews.messages import EmailMessageView

class WelcomeMessageView(EmailMessageView):
    subject_template_name = 'emails/welcome/subject.txt'
    body_template_name = 'emails/welcome/body.txt'

    def __init__(self, user, *args, **kwargs):
        super(WelcomeMessageView, self).__init__(*args, **kwargs)
        self.user = user

    def get_context_data(self, **kwargs):
        context = super(WelcomeMessageView, self).get_context_data(**kwargs)
        context['user'] = self.user
        return context

    def render_to_message(self, *args, **kwargs):
        assert 'to' not in kwargs  # this should only be sent to the user
        kwargs['to'] = (self.user.email,)
        return super(WelcomeMessageView, self).render_to_message(*args, **kwargs)

# Instantiate and send a message.
WelcomeMessageView(user).send()
```

In fact, you might find it helpful to encapsulate the above "message for a user" pattern into a mixin or subclass that provides a standard abstraction for all user-related emails. (This is left as an exercise for the reader.)

## Testing and Development

Tested on Python 2.6 and 2.7, as well as Django 1.2, 1.3 and 1.4.

To run the test suite against your installed Django version, run `python setup.py test`, or `make test`. (If Django isn't already installed, the latest stable version will be installed.)

All tests will automatically be run by the Django test runner when you run the tests for your own projects, if you use `python manage.py test` and `mailviews` is within your `settings.INSTALLED_APPS`.

To run tests against the entire build matrix, run `make test-matrix`.
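The "message for a user" pattern above does not actually depend on Django; the idea is simply that every object the message needs arrives through the constructor, and the view decides its own recipients and context. Here is a framework-free sketch of that pattern. `FakeMessageView`, `WelcomeView`, and `User` are hypothetical stand-ins invented for this illustration, not part of django-mailviews.

```python
# Framework-free sketch of the constructor-injection pattern above.
# All class names here are hypothetical, not part of django-mailviews.

class FakeMessageView:
    """Minimal stand-in mimicking EmailMessageView's context hook."""
    def get_context_data(self, **kwargs):
        return dict(kwargs)

class User:
    def __init__(self, email):
        self.email = email

class WelcomeView(FakeMessageView):
    def __init__(self, user):
        # Every object the message needs arrives via the constructor.
        self.user = user

    def get_context_data(self, **kwargs):
        context = super().get_context_data(**kwargs)
        context['user'] = self.user
        return context

    def recipients(self):
        # The view, not the caller, decides who receives the message.
        return (self.user.email,)

view = WelcomeView(User('alice@example.com'))
print(view.recipients())                   # → ('alice@example.com',)
print('user' in view.get_context_data())   # → True
```

The payoff is the same as in the Django version: callers never pass `extra_context` or `to`, so a message can never be sent with a half-built context or to the wrong recipient.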
To view an example preview site, you can start a test server by running `make test-server` and visiting.
**alibasha202@gmail.com** (Member, 16 Points) | Nov 15, 2017 07:48 AM

Hi,

I am new to Angular 2. There is a grid in a view with a button; on click of the button I need to download the data of the grid as an Excel sheet through Angular 2 in MVC.

Thanks in advance.

---

**zxj** | Nov 15, 2017 08:21 AM

Hi Alibasha202,

You can use filesaver.js. Here is a working sample for your reference.

Model:

```csharp
public class Employee
{
    public int Id { get; set; }
    public string Name { get; set; }
}
```

Controller:

```csharp
public class DemoController : Controller
{
    public ActionResult Index()
    {
        return View();
    }

    public JsonResult GetEmployee()
    {
        var emp = new List<Employee>
        {
            new Employee{ Id=1, Name="A"},
            new Employee{ Id=2, Name="B"},
        };
        return Json(emp, JsonRequestBehavior.AllowGet);
    }
}
```

View (the script `src` URLs were missing in the source and are left blank; the truncated `ng-` attributes are reconstructed from the script, which defines the `mvcapp` module, the `DemoController`, `$scope.exportData`, and the `employees` collection):

```html
@{
    Layout = null;
}
<!DOCTYPE html>
<html>
<head>
    <meta name="viewport" content="width=device-width" />
    <title>Index</title>
    <script src=""></script>
    <script src=""></script>
    <script>
        // Renamed from `var angular = angular.module(...)` to avoid
        // shadowing the global `angular` object.
        var app = angular.module('mvcapp', []);
        app.controller('DemoController', function ($scope, $http) {
            GetAllData();
            $scope.isDisabledupdate = true;
            // Get all employees
            function GetAllData() {
                $http.get('/Demo/GetEmployee').success(function (data) {
                    $scope.employees = data;
                });
            }
            $scope.exportData = function () {
                var blob = new Blob([document.getElementById('export').innerHTML], {
                    type: "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet;charset=utf-8"
                });
                saveAs(blob, "Employeereport.xls");
            };
        });
    </script>
</head>
<body>
    <div ng-app="mvcapp" ng-controller="DemoController">
        <input type="button" value="Export to Excel" id="btnexport" ng-click="exportData()" />
        <div id="export">
            <table>
                <tr>
                    <th>S.No</th>
                    <th>Name</th>
                </tr>
                <tr ng-repeat="empModel in employees">
                    <td>{{empModel.Id}}</td>
                    <td>{{empModel.Name}}</td>
                </tr>
            </table>
        </div>
    </div>
</body>
</html>
```

Regards,
zxj

---

**alibasha202@gmail.com** (Member, 16 Points) | Nov 15, 2017 02:15 PM

Hi,

Thanks for the quick help. I created an empty MVC project, took your piece of code, and it worked. But can you please explain it briefly, as I am new to this? I am a bit confused: without setting up Angular 2 in Visual Studio, how did it work? That is, is it not using Angular 2, and if it is, can you tell me where we are using it?

Thanks again.

---

**zxj** | Nov 16, 2017 02:10 AM

Hi alibasha202,

Sorry for my mistake. If you want to use Angular 2, you can start with the following tutorial. It will help you set up Angular 2 in ASP.NET MVC, create the RESTful APIs in ASP.NET MVC Web API, and build the front end in Angular 2.

Angular2 in ASP.NET MVC & Web API - Part 1

Regards,
zxj
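The export in the sample above works by wrapping the grid's HTML in a `Blob` and handing it to filesaver.js. The essential step is just serializing the grid data into text. The sketch below shows that step, framework-free, building CSV instead of an HTML table; `toCsv` is a hypothetical helper invented here, not part of filesaver.js or Angular.

```javascript
// Minimal sketch: turn an array of row objects into CSV text, the same
// data-serialization step the Blob/saveAs export above performs on HTML.
// `toCsv` is a hypothetical helper, not from the thread.
function toCsv(rows, columns) {
  const escape = (value) => {
    const s = String(value);
    // Quote fields containing commas, quotes, or newlines (RFC 4180 style).
    return /[",\n]/.test(s) ? '"' + s.replace(/"/g, '""') + '"' : s;
  };
  const header = columns.map(escape).join(',');
  const body = rows.map(row => columns.map(c => escape(row[c])).join(','));
  return [header, ...body].join('\n');
}

const employees = [{ Id: 1, Name: 'A' }, { Id: 2, Name: 'B' }];
console.log(toCsv(employees, ['Id', 'Name']));
// Id,Name
// 1,A
// 2,B
```

In the browser, the resulting string would then go into `new Blob([csv], {type: "text/csv"})` and `saveAs(...)` exactly as in zxj's sample; CSV also avoids the quirks of opening raw HTML in Excel.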
## I don't know the difference between these codes when I use return-by-reference

I'm learning object-oriented programming, and I don't know the difference between these two pieces of code. Both of them use return-by-reference, but Code #1 works well and Code #2 doesn't. My professor said that Code #2 has a problem when I store the return value of the swap function, but I don't know why. Please tell me why, and what the difference between the two codes is.

Code #1:

```cpp
#include <iostream>
using namespace std;

struct Pair {
    int first;
    int second;
};

Pair& swap(Pair&);

int main() {
    Pair p1 = {10, 20};
    Pair p2 = swap(p1);
    return 0;
}

Pair& swap(Pair& pair) {
    int temp;
    temp = pair.first;
    pair.first = pair.second;
    pair.second = temp;
    return pair;
}
```

Code #2:

```cpp
#include <iostream>
using namespace std;

struct Pair {
    int first;
    int second;
};

Pair& swap(int num1, int num2);

int main() {
    Pair p = swap(10, 20);
    return 0;
}

Pair& swap(int num1, int num2) {
    int temp;
    temp = num1;
    num1 = num2;
    num2 = temp;
    Pair pair = {num1, num2};
    return pair;
}
```

---

**Answer:** The difference is that in Code #1, the `main()` function declares the pair, so it exists in the scope of `main()`. In Code #2, the `swap()` function declares the variable `pair`, so it exists only in the scope of the `swap()` function; once you leave the function, the variable is destroyed.
**Answer:** First of all, you can avoid unnecessary code:

```cpp
int temp = num1;
num1 = num2;
num2 = temp;
Pair pair = {num1, num2};
```

Why swap the numbers first? Just create the pair with the numbers already swapped:

```cpp
Pair pair = {num2, num1};
//           ^     ^
```

But now let's consider the differences (I dropped all the parts irrelevant to the problem, i.e. the actual swapping):

```cpp
Pair& swap(Pair& pair) {
    return pair;
}
```

In the first variant, you get a pair by reference. The pair must be created outside and is passed into the function:

```cpp
Pair p;   // created outside
swap(p);  // swap uses a reference to the p outside
// p still exists
// swap returns just a reference to the same p it received
// -> you can use it for assignment:
Pair pp = swap(p);
```

Notice that your function swapped the original `p` it received by reference, so both `p` and `pp` contain the same content. These two snippets of code are therefore equivalent:

```cpp
Pair p;
Pair pp = swap(p);
```

```cpp
Pair p;
swap(p);  // ignoring the return value
Pair pp = p;
```

In the second variant, you create the pair *inside* the function!

```cpp
Pair& swap(int num1, int num2) {
    Pair pair = {num1, num2};
    return pair;
}
```

But the pair's lifetime ends when the function exits. So you return a reference to a pair that has actually already been destroyed, which yields undefined behaviour. Exactly the same happens if you accept the pair by value:

```cpp
Pair& swap(Pair pair)  // not a reference -> pair is copied into a local variable
{
    return pair;  // returning a reference to a local -> undefined behaviour!
}
```

In all cases where you want to return local variables, you need to return them by value:

```cpp
Pair swap(int, int) {
    Pair pair;
    return pair;  // return by value; the value will be copied into the target
                  // variable (actually, due to copy elision, written there directly)
}
```

Returning by value can be useful, too, if you don't want to modify the pair passed to the function (just for completeness; it's not that you need to change your function to do this). You'd then make sure, though, that you won't modify the pair passed in.
You could accept by const reference and create the copy inside; the simplest way, though, is accepting by value, which creates the copy directly when receiving the parameter:

```cpp
Pair swap(Pair pair)  // notice: both references dropped
{
    return pair;
}
```

Now, `p` and `pp` do differ (well, assuming you actually implemented the swapping, of course):

```cpp
Pair p;
Pair pp = swap(p);
```

---

**Answer:** In Code #2 the `Pair` exists only in the scope of the `swap` function. In Code #1 you need a `temp` because you change an existing pair and you want to save a value before you overwrite it. In Code #2 (let's ignore the main problem for a moment) you don't need to swap at all; you just form the pair from the values you got:

```cpp
Pair& swap(int num1, int num2) {
    Pair pair = {num1, num2};
    return pair;
}
```

And if you think about it, you don't swap a pair here, because you never had a pair to begin with.
---

- You may not return a reference to a function-local object, since it no longer exists after the function has returned.
- Code #1: the `swap` function takes its input argument as a `Pair&` and returns a `Pair&`. The `Pair` is created before calling `swap` and then passed to it as an argument, which works correctly. Code #2: `swap` takes two integers as input and tries to return a `Pair&` that is created inside the `swap` function itself, i.e. something that does not exist outside the `swap` function, so it's an error.
- When a function returns, all its local variables become invalid, so your reference is in fact pointing to a variable whose value is undefined, and your program runs on undefined behavior.
- GCC warns about your example #2 that what you do does not make sense: coliru.stacked-crooked.com/a/2cc8404c9a258fc1
My goal in building this project is to build a simple web control that will inherit the basic functionality of a standard web text box, but will also wrap the validation controls into the mix. So: no more dragging two controls onto the page. Now I just drag out one, set the properties, and away we go. This article owes a great debt to Patrick Meyer and his article on self-validating text boxes.

As a secondary goal, I wanted to finally get started on building my own web control library. As an aside, perhaps the easiest way to do this is to simply inherit from all the current web controls, then build a DLL. Voilà: your own extensible code library. Not very exciting, but it is a start.

So, back to basics. When you are building controls (web or not), you have a couple of options. You can open a web control project and start coding. This is great if you need to start from scratch and build some wild must-have control, but I am just trying to build a self-validating text box here. Building a free-form web control is a project for another day. Most of the functionality I will need is already in the Microsoft text box class. Let's not reinvent the wheel (see the title of my blog for more info). We are going to inherit from the standard text box control instead. So open a web control project, and give it an appropriate name. I chose:

```csharp
MyCompany.Web.UI.WebControls
```

This is how Microsoft sets up their controls, so I am going to run with it for now. Notice that I named the project MyCompany.Web.UI.WebControls. This will save you heartache later if you keep the namespace names the same as the project names.

Now we know we are going to inherit from TextBox, but the project template throws up a more generic WebControl class. Change this:

```csharp
public class WebControl1 : System.Web.UI.WebControls.WebControl
```

To this:

```csharp
public class ValidatingTextBox : System.Web.UI.WebControls.TextBox
```

While you are in there, you can remove the Text property.
Since we are inheriting from the text box, we don't need it anymore; the text box already has a perfectly serviceable Text property. For most inherited controls, I would also suggest removing the Render method, but not for this one. We are going to use it later to emulate some of the standard validators' functions.

Now here comes the tricky part. We want to add in validation capability. Ideally we would inherit from a validator as well, but .NET does not support multiple inheritance (mainly for multi-language compatibility reasons). We can use interfaces, though, and there is an IValidator interface that will work. So, add to the class declaration so it looks like this:

```csharp
public class ValidatingTextBox : System.Web.UI.WebControls.TextBox, IValidator
```

The addition of an interface is like signing a contract. It is an agreement that the class will implement, at the very least, all the methods and properties required by the interface. In the case of IValidator, the two properties are IsValid (surprise, surprise) and ErrorMessage. The method is (again, not very surprisingly) Validate(). So I will add these right away. (If you are working with a later version of Visual Studio, it will add the stubs for you.) Along with the properties, I will add the appropriate stubs:

```csharp
public bool IsValid
{
    get { return mIsValid; }
    set { mIsValid = value; }
}

public string ErrorMessage
{
    get { return mErrorMessage; }
    set { mErrorMessage = value; }
}

public virtual void Validate()
{
}
```

There are the stubs, but they aren't going to do much as-is. We need to add in the guts now. Specifically, I'd like to be able to control how the error message is rendered, I'd like to be able to turn entry-required on or off for this control, and I'd like to use a regular expression to check for validity.
(While this is a simplification, this is the behavior I will try to duplicate.)

Let's add an enum for these display options, set up a private variable of this enum type, and set up a property to get and set this private variable:

```csharp
public enum DisplayType { Static, Dynamic, None }

private DisplayType mDisplayType;  // backing field for the display property
```

We also override Render to emulate how the standard validators write out their error message:

```csharp
protected override void Render(HtmlTextWriter output)
{
    // Get the rendering from the base (text box) control,
    // then write out the formatted error message.
    base.Render(output);
    output.Write(formattedMessage);
}
```

Use a regular expression for validation: add in another property; this one will contain the regular expression used to evaluate the text box's input text.

```csharp
public string RegExpression
{
    get { return mRegExpression; }
    set { mRegExpression = value; }
}
```

Oh, and since we are using regular expressions, don't forget:

```csharp
using System.Text.RegularExpressions;
```

The control keeps a Regex field (validCheck) for the actual check, and it has to register itself with the page's validator collection via Page.Validators.Add(...) and unregister itself via Page.Validators.Remove(...).

So, all that being done, I want to point out a major shortcoming of this control: every validation attempt means a round trip to the server. Ideally the control would send some JavaScript to the client when the page is initialized. With some study, I think I can figure out how to avoid this shortcoming. The MSDN IValidator documentation has some tantalizing hints on how to go about this, but I need to learn much more about JavaScript and regular expressions, and about how the validator controls interrelate with JavaScript. Looks like I will get out the reflector tool and decompile those validator controls.

You now have your own control. The next article will be on how to put it into Visual Studio. After that, more controls; specifically, I want to build one that strips out cross-site scripting and/or SQL injection characters.

The complete code begins as follows:

```csharp
namespace MyCompany.Web.UI.WebControls

[DefaultProperty("ErrorMessage"), ToolboxData("<{0}:ValidatingTextBox runat=server>")]
```
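The article leaves client-side validation as future work, but the check the standard validators emit to the browser boils down to a small piece of JavaScript. Here is a hedged sketch of that check in plain JavaScript; `validateField` and its parameters are my invention for illustration, not code from the article or from ASP.NET.

```javascript
// Sketch of a client-side counterpart to the server-side Validate():
// test a text box's value against a regular expression and report validity.
// `validateField` is a hypothetical helper, not code from the article.
function validateField(value, pattern, required) {
  if (value === '') {
    // An empty value fails only when entry is required,
    // mirroring how RequiredFieldValidator and
    // RegularExpressionValidator divide the work.
    return !required;
  }
  // Anchor the pattern so the whole value must match.
  return new RegExp('^(?:' + pattern + ')$').test(value);
}

console.log(validateField('', '\\d+', false));     // true: empty and optional
console.log(validateField('', '\\d+', true));      // false: empty but required
console.log(validateField('1234', '\\d+', true));  // true: matches the pattern
console.log(validateField('12a4', '\\d+', true));  // false: fails the pattern
```

Wiring a function like this to the text box's change event would give the control the instant feedback the article says is missing, with the server-side Validate() kept as the authoritative check.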
# vuex-router

Install: Repository | CDNs: bundle.run, jsDelivr, unpkg

Moves app location state into a router module in the Vuex store. The store becomes the "source of truth" for location state within the app. Location state is changed by dispatching actions to the router module. The browser location history is kept in sync with the store by the router module.

## Why does that matter? Isn't the browser url the "source of truth" for location state?

The goal of Vuex is to move most/all of the non-trivial state of an app into the store, where it can be managed and safely shared across components. vuex-router provides a relatively painless way to safely manage location state inside the store, along with the rest of the application state, using standard Vuex state, getters, actions, and mutations.

This is convenient from an API perspective, and allows for some very clean code. The location state is read and updated just like the rest of the application state, instead of requiring a separate routing API.

## How is this different than vue-router with vuex-router-sync?

Even when used with vuex-router-sync, vue-router still uses the browser history as the "source of truth" for the location state. vuex-router-sync does not fully manage location state inside the Vuex store; it copies the browser's location state into the store for convenience. vue-router provides a separate API, outside the Vuex store, to manage location state.

## vuex-router Benefits

- Provides a simple page-based router by default. `pageRoutes` are easy to configure, and suitable for many apps.
- Fully functional page slide transitions are easily configured when using `pageRoutes`. (Other transitions are in the works.)
- `pageRoutes` are optional. vuex-router manages the browser location history separately from the default routes, so any router can be used.
- ViewModels only need access to the `$store`, as opposed to needing both a `$store` and a separate `$router`.
- The included `Pages` and `Page` components still allow for easy setup of fixed headers, nav-drawers, etc.
- The included `Link` component takes care of sending most user interactions to the browser history.

## Usage

### Installation and project configuration

Install vuex-router with yarn or npm.

The easiest way to configure vuex-router is with WebPack and babel-loader. In most situations, babel-loader will find the .babelrc file in the vuex-router folder and use it to build vuex-router. If there are questions about a specific configuration scenario, feel free to open an issue, or submit a PR with an example, even if it's only partly working.

A pre-transpiled version of vuex-router is included at dist/vuex-router-min.js in the npm package, but it has not been thoroughly tested. Please open an issue if there are any problems using it.

### Configuring the Vuex store

We add the router module from vuex-router to our store, like this:

```javascript
import { router } from "vuex-router"

const store = new Vuex.Store({
  modules: {
    router
  }
})
```

(Note: we can change the name of the `router` module to whatever we want, but then we will need to dispatch actions using that name. The rest of these examples assume the default `router` namespace for the module. See Vuex - Modules - Namespacing to learn more.)

### Initializing the router using pageRoutes

Before we can use the router, we need to initialize it by dispatching the `router/init` action on the store. The default page-router allows us to map our routes to simple page names. We supply our pageRoutes to the `router/init` action like this:

```javascript
const pageRoutes = [{
  page: "home",
  path: ["", "/", "/home"],
  transIndex: 0
},{
  page: "foo",
  path: "/foo",
  transIndex: 1
},{
  page: "bar",
  path: ["/bar", "/bar/:id"],
  transIndex: 2
}]

store.dispatch("router/init", {pageRoutes})
```

(Notice above that we pass an object to the action, with `pageRoutes` as the key.)

See the pageRoutes section below for more details on how to configure pageRoutes.
### Adding Page components to our app

Note: this guide assumes a basic understanding of how Vue components work. See Components Basics and Components In-Depth to learn more.

Once the router module is configured and initialized in the store, we can add vuex-router's `Pages` and `Page` components to our app, usually in our top-level app/root component. We use `Page` and `Pages` like any other Vue components: import them, and add them to the `components` hash of our main App component.

```javascript
import Bar from "./components/bar.vue"
import Foo from "./components/foo.vue"
import Home from "./components/home.vue"
import {Page, Pages} from "vuex-router"

export default {
  components: {
    Bar,
    Foo,
    Home,
    Page,
    Pages
  }
}
```

We assign a `name` attribute to each of our `Page` components. These names will match up with the `page` properties on our pageRoutes. We can then slot our top-level components inside the `Page` components.

```html
<div class="app">
  <Pages> <!-- Root component to manage Page components -->
    <Page name="home"> <!-- `vuex-router` Page component with `name` set -->
      <Home /> <!-- The page we want to render when the router page name is `home` -->
    </Page>
    <Page name="foo">
      <Foo />
    </Page>
    <Page name="bar">
      <Bar />
    </Page>
  </Pages>
</div>
```

Note: We may also "replace" the `Page` components with our components using the `:is` attribute, but this will result in page transitions being disabled. The wrapper `Page` components are necessary for transitions to work.

### Initializing the router without pageRoutes

We don't have to use pageRoutes to use vuex-router. We can wire up our own router in a root component, or add a custom router module to our store. We still need to initialize the router module before any history actions will work. We simply dispatch the `router/init` action without any parameters:

```javascript
store.dispatch("router/init")
```

## API

### router Vuex module

#### Actions

##### store.dispatch("router/init", options)

Initializes the router module that was added to the store, and tells it to start managing location state.
After this action runs, browser location change events will be handled by the router, and we should use actions on the router to modify the location state.

##### store.dispatch("router/go", {steps})

Navigates the specified number of steps through the browser location history. For example, to go back one page:

```javascript
store.dispatch("router/go", {steps: -1})
```

Or to go forward two pages:

```javascript
store.dispatch("router/go", {steps: 2})
```

`router/go` is analogous to `window.history.go`.

##### store.dispatch("router/goBack")

Goes back one step in the browser location history. This works just like when the user presses the "back" button in the browser toolbar. `router/goBack` is analogous to `window.history.back`.

##### store.dispatch("router/goForward")

Goes forward one step in the browser location history. This works just like when the user presses the "forward" button in the browser toolbar. `router/goForward` is analogous to `window.history.forward`.

##### router/locationChange

Used internally by the router. Invoking this action from outside the router could result in the location state falling out of sync with the browser history.

##### store.dispatch("router/push", {path})

Sets the current location state to the specified path, and adds the new location to the browser history. So, to go back to the root location of our app:

```javascript
store.dispatch("router/push", {path: "/"})
```

To go to our /blog page:

```javascript
store.dispatch("router/push", {path: "/blog"})
```

Note: Relative paths can be pushed, and they will work as expected, but this can be tricky when using pageRoutes. Most apps using pageRoutes will probably want to stick to a "flat"-ish route design.

`router/push` is roughly analogous to `window.history.pushState`, but uses only the URL/path argument.

##### store.dispatch("router/replace", {path})

Works like the `router/push` action, but replaces the current location with the new path.
To explain, suppose we run the following code:

```javascript
store.dispatch("router/push", {path: "/"})
store.dispatch("router/push", {path: "/home"})
store.dispatch("router/replace", {path: "/blog"})
store.dispatch("router/goBack")
```

The user's location will be "/", because when we added "/blog", it replaced "/home" in the browser's history. Learn more about browser history at MDN - Adding and modifying history entries.

##### router/transitionEnd

Called by `Page` components to let the router know when a page transition is complete. Should not need to be called by applications.

#### State and Getters

##### state shape

The state stored by the router looks like this:

```javascript
const state = {
  // The current page-based route.
  // A route includes a page name, and any URL parameters that are
  // defined as part of the path.
  currentRoute: false,

  // The previous route. Stored to help with transitions.
  lastRoute: false,

  // Transition data set while transitions are in progress.
  transition: false,

  // A map of pages and their last scroll position.
  // The scroll position is saved and restored as pages are hidden
  // and shown.
  scrollTops: {},

  // The current location of the page. Roughly analogous to the
  // `window.location` property in the browser.
  location: {
    pathname: "",
    search: "",
    hash: "",
    state: {},
    key: ""
  }
}
```

##### store.getters["router/currentPage"]

Returns the current page name as determined by the page-router. A pageRoute maps a browser location to a page name. If pageRoutes are not being used, this property returns false.

##### store.getters["router/lastPage"]

Returns the most recent page that was displayed before the current page. This is used mainly to track transitions away from the last page.

##### store.getters["router/scrollTopForPage"](pageName)

Returns any previously saved scroll information for the page with the given name. Used by the `Page` component to restore page scroll state when pages are re-shown after being hidden.
##### store.getters["router/transitionForPage"](pageName)

Returns the active transition that should be applied to a page. Used by the `Page` component to apply transition classes to pages.

#### Mutations

The mutations defined on the router module should be considered private for most implementations. Calling them from outside of the router actions could result in location state getting out of sync with the browser history.

### pageRoutes

Here is an example pageRoutes array:

```javascript
const pageRoutes = [{
  page: "home",
  path: ["", "/", "/index", "/home"],
  transIndex: 0
},{
  page: "about",
  path: "/about",
  transIndex: 1
},{
  page: "blog",
  path: ["/blog", "/blog/:post"],
  transIndex: 2
}]
```

A pageRoute is made of three properties:

- `page`: the name of the page
- `path`: a string, or array of strings, that route to this page
- `transIndex`: an integer used to determine the direction of page slide transitions

#### pageRoute.page

The page name can be any valid JS property string. This is the same name that we supply to the `Page` component, if we're using it.

#### pageRoute.path

Can be a string path, or an array of string paths. These paths will be matched against the browser's location to determine the current page and page parameters. Paths are passed to a specially configured universal-router instance, which handles matching and parses url parameters. Url parameters are available in the `currentRoute` property of the state, and via the `router/currentRoute` getter.

For example, if we define a pageRoute like:

```javascript
{
  page: "blog",
  path: "/blog/:postId"
}
```

And then go to the location /blog/14, then `store.state.router.currentRoute` will look like this:

```javascript
{
  page: "blog",
  params: {
    postId: "14"
  }
}
```

Note that the url/path parameter value will always be a string.
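To make the `:postId` example concrete, here is a simplified illustration of how a path pattern yields those params. This is not universal-router's actual implementation (which vuex-router uses internally); `matchPath` is a hypothetical helper written for explanation only.

```javascript
// Simplified illustration of how a pageRoute path like "/blog/:postId"
// yields the params object shown above. NOT universal-router's actual
// implementation; `matchPath` is a hypothetical helper.
function matchPath(pattern, pathname) {
  const patternParts = pattern.split('/').filter(Boolean);
  const pathParts = pathname.split('/').filter(Boolean);
  if (patternParts.length !== pathParts.length) return null;
  const params = {};
  for (let i = 0; i < patternParts.length; i++) {
    if (patternParts[i].startsWith(':')) {
      // Named segment: capture the value (always a string).
      params[patternParts[i].slice(1)] = pathParts[i];
    } else if (patternParts[i] !== pathParts[i]) {
      return null; // literal segment mismatch
    }
  }
  return params;
}

console.log(matchPath('/blog/:postId', '/blog/14')); // { postId: '14' }
console.log(matchPath('/blog/:postId', '/about'));   // null
```

The real matcher supports more (optional segments, arrays of paths, and so on), but the captured values are plain strings either way, which is why `postId` above is `"14"` rather than `14`.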
Because we can define multiple paths for a page, we can use the same page to handle both the parameterized and parameter-less versions of the path, like this:

```javascript
{
  page: "blog",
  path: ["/blog", "/blog/:postId"]
}
```

Then inside our blog page, we can check whether we have a postId, and display the relevant sub-components like this:

```html
<div class="blog">
  <Post v-if="postId" />
  <RecentPosts v-else />
</div>
```

```javascript
export default {
  computed: {
    postId () {
      return this.$store.getters["router/currentRoute"].params.postId
    }
  }
}
```

#### pageRoute.transIndex

The transIndex is optional. If supplied, it is used to determine the direction of the default page slide transitions. Pages will slide to the left when the router transitions to a lower transIndex, and to the right when moving to a higher transIndex.

### Pages and Page components

The `Pages` component provides basic default styling to handle full-page sliding page transitions. The `Page` component handles the details of showing, hiding, and transitioning pages based on the defined pageRoutes and router state. Apps should not need to use the APIs of these components directly; they can simply be added to application templates.
The filesystem is a database, but it has always been unsuitable as the computer's primary one. Programmers have to write specialized programs to get the functionality they need. Now, new advances in software like Plan 9, the Reiser 4 filesystem and Linux are making the improvements the filesystem needs to become viable. Plan 9 is using the filesystem as the integral system interface, and the Reiser filesystem is unifying pointlessly different but equivalent namespaces. For operating systems to improve for users (that always includes programmers), they need to incorporate these new ideas.

New Advances in the Filesystem Space | PDF File | 2003-07-15 | General Development | 23 Comments

---

I read this and then printed it out. Excellent research, very very cool. I can't wait to manage mp3s:

    $ cat /home/adam/music/A\ Day\ In\ the\ Life.ogg/artist
    The Beatles

(or something like that...)

---

hmm...

    charg -r --artist="The Beatles" /home/panzi/mp3 and ogg/beatles

THAT would be nice!

---

Panzi, with the system the author proposes, that would be a mere shell script, instead of a C app, which brings its problems (maintenance) with it. All in all I think it's a nice idea. The stacked mounting is a bit overdoing it, I think. It might be overly complex, or I just might not grasp it well enough.

---

I never thought that Unix could be made simpler than it already is. The main idea of Unix is that combining simple tools creates powerful solutions. Now the tools can become even simpler, so there is enough room for creative combinations of them. If these ideas become mainstream, this will cause a revolution in OS design. Finally I can get rid of set, setenv and similar ugly things, which I never really understood. Can anyone tell me the difference between set and setenv on AIX?
Anton …used to host all of the articles as files in the BFS, and query with attributes. I always thought that was pretty cool, and wondered why more people didn’t do that. Actually, more to the point, I wonder why it’s taken so long for even companies like micro$oft to realize the value in something of that nature. Hint, hint: Longhorn with the upcoming database-like filesystem. Somehow I don’t see them getting it to work anywhere near as nicely as BFS did, though. I saw a quote about Apple’s new iChat software, to the effect that while Microsoft focuses on new features, Apple focused on simplicity and elegance. Couldn’t be any truer. My 2 cents.

> Finally I can get rid of set, setenv and similar ugly
> things, which I never really understood.

Using the filesystem to store environment variables has been the standard on AmigaOS since it was born… also, many of the things explained in that paper are either already available (see Plan 9), or thought of by someone else already. Not trying to discredit the paper’s author, just stating that all that is not really “new”.

If you are to have these attribute minifiles store data for folders too, you could run into problems. For example:

/folder/uid
/folder/gid

means you can’t have files named uid or gid in folders (I guess that’s why they have double-dot prefixes in the document. But dotfiles seem a little hacky to me…).

I like the idea of a unified namespace, but not the way UNIX approaches it. Personally I think it would be better to have ‘volumes’ (which could be single or multiple partitions tied together as a single logical volume) in a ‘fs’ subdivision of the system namespace, which would have ‘dev’, ‘info’, ‘config’, etc. subdivisions. Or maybe I’m archaic in thinking that different system resources should be differentiated depending on the physical realities? Dunno. Interesting stuff though...

I couldn’t disagree more. Why should the user care what type a file is anyway?
Surely only programmers care about the specifics of data encoding formats. The user only wants to know it’s a music file, which you can show in an icon. Or with a program like ls, it could read the type file and display it on the screen next to the file name. Seriously, I met this guy once who thought you could change the format of a file by renaming the extension. Duh. I couldn’t believe my ears. In fact that influenced me a lot in believing that file extensions are an evil hack (well, that and those Ars Technica articles). Anyway, if you download foo.ogg it won’t have all those extra info files, unless the transfer protocol supports them. It’s the job of MIME types to tell the client the type of a file. It wouldn’t be much work to translate the MIME header into the FS-specific type name and create a type file for the file once you’ve finished downloading it.

What happens if I download a file foo.ogg with an attribute type=exe inside? Lots of trouble!

This is just a question of sane user interface design. In fact, most browsers should have fixed this by now, considering that HTTP servers can send mimetypes anyway. The solution *drumroll*: alert the user whenever an external application is about to be launched, and give them enough information to decide what to do. E.g. “Do you want to open foo.ogg in ABCPlayer or save it to disk?” as opposed to: “Warning: foo.ogg is an executable that could potentially harm your system. Do you want to execute it or save it to disk?”

Of course, this isn’t a real solution either. The real solution would be to implement a proper operating system design that inherently protects the user against malicious programs. It is quite unreasonable for _any_ program to have access to _all_ the files that the user has access to. A text editor, for example, need only have access to the files/directories containing its configuration, and to the file that is currently being edited.
Of course, telling the operating system which parts the text editor should be able to access is not that trivial. There have to be user-defined permissions per executable or type of executable, to tell the operating system what files said executable can always access. These permissions can obviously be backed by system-wide defaults. Now all the text editor still needs is permission to access the file that the user wants to access. This permission should be given by the Load/Save File dialogs when the user selects a filename. Obviously, the Load/Save dialogs need to run in a different process space (probably that of a user-trusted daemon). It gets more difficult (a _lot_ more difficult) when programs such as IDEs are involved. Obviously, the “Load Project” dialog can give the IDE process permission to open the project file, but what about permission to access the files that are associated with the project? We face a similar problem with CLI programs – we’d basically have to either trust the CLI program to manage permissions itself (which is a big no-no for programs downloaded from the internet), or the shell has to invoke some kind of subscript to parse the program parameters in order to figure out which files the program is allowed to access.

> /folder/uid
> /folder/gid
> Means you can’t have files named uid or gid in folders (I
> guess that’s why they have double-dot prefixes in the
> document. But dotfiles seem a little hacky to me…)

Indeed, that’s a problem of the proposed implementation… for AROS I’m designing a component system based upon many of those ideas; it’s called CABOOM. In CABOOM files are objects, everything is an object, and objects can have attributes. Every object is potentially a “context” (a term taken from Nemesis), and you access contexts by means of the ‘/’ character. Every context can contain 0 or more objects, and since contexts are objects themselves, they can contain other contexts, leading to the well-known hierarchy scheme.
Now, objects’ attributes live in a different namespace than the objects contained in the object itself; you access them by means of ‘\’ (backslash), rather than ‘/’, so there’s no way that conflicts can arise.

Hi folks, how does all of this stuff compare to Oracle’s OFS? Yours truly, Jeffrey Boulier

I heard of a filesystem for an operating system students at Oporto’s State University for Engineering are developing for the OS they’re also developing. It’s called JDynFS. I saw a screenshot of a test once, where they did something like

attrshow -rt[no-open,basic,type] file.movie

and it returned something like

<size>15666055</size>
<filename>file.movie</filename>
<title>Test stream</title>
<encoding>DivX-codec-type-nnn</encoding>
<author/>

but the data wasn’t being read from the file. Instead it was stored on the file’s inode. Apparently, the OS recognizes only one format per file type, and the FS works as if it was a big database. You can actually use some FS instructions that use XML data directly. For example, any jpeg, gif, png that you copy to the fs is automatically converted to their format (I believe they use png), and gets a name like filename.img or only filename. The inode indicates the type of document.

Plan 9, quite simply, rules the school. You can do stuff like union folders (ie binding the contents of two folders together), and you have absolute network transparency, so if you want to run someone else’s binaries, you bind their binary directory, say at /n/somefolder/bin, to your binary /bin and programs will be none the wiser… You could even do that with devices as well… Makes me wish this kinda stuff was in other OSes as well..

> Using the filesystem to store environment variables is the
> standard on AmigaOS since it was born… also, many of the
> things explained in that paper are either already available
> (see Plan9), or thought of by someone else already.

Also comments on files were/are possible on AmigaOS, and it could be used for mp3 artist too…..
But what I like the most is the URL of the downloaded file in the file comment.. that is SO useful….

By renoX (IP: —.w80-14.abo.wanadoo.fr) – Posted on 2003-07-15 21:21:30
If it is standardized so that all files have a type/ directory, then the ls command could simply be modified to display the type attribute. Same goes for any application such as email clients.

When I talk to people who don’t know a lot more about computers than what they read in PC Magazine, I ask them what big changes are in store in Longhorn (because PC Magazine devotes about one paragraph an issue to non-Windows platforms), and their eyes light up as they say, “the file system is going to be DATABASE driven!” I say, “Oh wow, that’s different,” and they go, “yeah!” Who cares about Linux file systems. I can’t wait for this latest innovation from Microsoft.

Melivula, you are a TROLL. Microsoft does NOT innovate, they stole. Period.

Eugenia: I HATE this shitty board. Could you please do some work here? Slashcode hmm?

Oh no, no Slashcode please! I hate the layout of Slashdot. It’s so messy and you gotta have an account or else you’re treated like dirt. And then people’s comments are just buried under “better” ones. Leave the board the way it is…

Interesting enough, if hardly new. To be honest, I miss devices in modern OSs; they gave AmigaOS much of its power. Guess you can do the same in POSIX, but somehow treating everything as a file or directory in a single rooted hierarchy doesn’t feel right to me: TCP over /dev/tcp, for example. And logical devices, particularly multi-homed ones, were massively useful. I never bothered with adding directories to the path; I just added directories to C: and they were in the default path anyway.
Being able to organise fonts into multiple directories and still have the OS able to find them all via FONTS:, and being able to boot from a floppy, mount a HD (home-brewed interface…) and then make the HD the boot device (re-assign SYS: etc)… But all that’s probably just me…

OS/400, the operating system of IBM’s AS/400 midrange line, has been treating everything as database objects since the 80s…. everything, meaning devices, user profiles, etc.

This idea with the dot files, isn’t that a bit like the Resource Fork we know from the Apple? If you would implement that, it would solve the problem too, without losing the ability to create a ..mounts file, for example. And still only one tool, something like “chres”, would need to be programmed in C [++/#]. Maybe it might then be used as something like this:

$ chres --tofile=/tmp/$$$.xxx --res="mounts" mydir
$ vi /tmp/$$$.xxx
$ chres --fromfile=/tmp/$$$.xxx mydir && rm /tmp/$$$.xxx

The replacement for /etc/passwd (half-replacement, I know) would still need to be implemented as suggested in the article, but that should be no problem.

Someone asked about set and setenv, so I’ll answer: in csh, set sets a variable for the current shell, and setenv sets a variable for the environment. The difference is that if I start another shell or program from the current shell, it will be able to see the setenv var, but not anything I set using set.
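The same distinction exists in Bourne-style shells, where a plain assignment stays local to the shell and export puts the variable into the environment of child processes. A minimal sketch (the variable names are made up for illustration):

```shell
# LOCAL_VAR behaves like csh's `set`: visible only in this shell.
LOCAL_VAR=hello
# EXPORTED_VAR behaves like csh's `setenv`: inherited by child processes.
export EXPORTED_VAR=world

# A child shell sees only the exported variable.
sh -c 'echo "local=${LOCAL_VAR:-unset} exported=${EXPORTED_VAR:-unset}"'
# prints: local=unset exported=world
```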
https://www.osnews.com/story/4039/new-advances-in-the-filesystem-space-pdf-file/
C# Program Structure

How do you create a program structure in C#? It is very simple.

A C# program hierarchy includes comments, which make the code more readable. There are three types of commenting syntax:

1. Inline comments
2. Multiline comments
3. XML documentation comments

Inline Comments:
- Indicated by two forward slashes (//)
- Considered a one-line comment
- Everything to the right of the slashes is ignored by the compiler
- A carriage return (Enter) ends the comment
- Example:
  // This is wikitechy first program written by venkat.

Multiline Comments:
- A forward slash followed by an asterisk (/*) marks the beginning
- The opposite pattern (*/) marks the end
- Also called block comments
- Example:
  /* This is the beginning of a block multiline comment. It can go on for several lines or just be on a single line. No additional symbols are needed after the beginning two characters. Notice there is no space placed between the two characters. To end the comment, use the following symbols.
*/

XML Documentation Comments:
- XML (Extensible Markup Language) is a markup language that provides a format for describing data using tags, similar to HTML tags
- Three forward slashes (///) mark the beginning of the comment
- An advanced documentation technique: the compiler generates XML documentation from these comments

C# namespaces:
- Namespaces provide scope for the names defined within the group
- They group semantically related types under a single umbrella
- System: the most important and frequently used namespace
- You can define your own namespaces
- Each namespace is enclosed in curly braces: { }

The C# Main( ) method:
- The "entry point" for all applications, where the program begins execution
- Execution ends after the last statement in Main( )
- Can be placed anywhere inside the class definition
- Applications must have one Main( ) method
- Begins with an uppercase character: static void Main( )
- Begins with the keyword static – a static member is a member of a class that isn't associated with an instance of a class. Instead, the member belongs to the class itself.
- The second keyword is the return type; void signifies that no value is returned
- Main is the name of the method
- Parentheses "( )" are used for arguments; Main( ) here takes none – empty parentheses

Body of a method:
- Enclosed in curly braces
- Example Main( ) method body:
  - line 7: static void Main( )
  - line 8: {
  - line 9: Console.WriteLine("Hello World!");
  - line 10: }
- Includes program statements and calls to other methods – here Main( ) calls the WriteLine( ) method

C# escape sequences:

C# – types of errors:
- Syntax errors
  - Typing errors, misspelled names, forgetting to end a statement with a semicolon
- Run-time errors
  - Failing to fully understand the problem
  - More difficult to detect

C# naming conventions:
- Pascal case
  - First letter of each word capitalized
  - Used for class, method, namespace, and property identifiers
- Camel case (Hungarian notation)
  - First letter of the identifier lowercase; first letter of subsequent concatenated words capitalized
  - Used for variables and objects
- Uppercase
  - Every character is uppercase
  - Used for constant literals and for identifiers that consist of two or fewer letters

C# invalid identifiers:

C# assemblies:
Code is contained in files called "assemblies" – code and metadata.
- An assembly can be a .exe or a .dll
- An executable needs a class with a "Main" method: public static void Main(string[] args)
- Various types of assemblies:
  - local: a local assembly, not accessible by others
  - shared: placed in a well-known location, which can be the GAC – Global Assembly Cache
  - strong names: use cryptographic signatures, which allow versioning and trust

The following programming example shows how to create and use this structure in C#:
- Go to Start and type Visual Studio.
- On the menu bar, choose File, New, and then click Project.
- In the center pane, expand Installed, expand Templates, expand Visual C#, and then choose Console Application.
- In the Name box, specify a name for your project, and then click the OK button. The new project appears in Solution Explorer.

C# Sample Code - C# Examples:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace wikitechy
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Welcome to wikitechy world - C#..!");
            Console.WriteLine("Press any key to exit.");
            Console.ReadKey();
        }
    }
}

By - c# tutorial - team

Code Explanation:
- using System; – the using keyword includes the System namespace in the program.
- The System.Collections.Generic namespace contains interfaces and classes that define generic collections, which allow users to create strongly typed collections that provide better type safety and performance than non-generic strongly typed collections.
- The System.Linq namespace provides classes and interfaces that support queries that use Language-Integrated Query (LINQ).
- The System.Text namespace contains classes that represent ASCII and Unicode character encodings.
- A namespace is a collection of classes. The wikitechy namespace contains the class Program.
- The class Program contains the data and method definitions that our program uses. Classes generally contain multiple methods, which define the behavior of the class. However, this class has only one method, Main.
- static void Main(string[] args) is the Main method, the entry point for all C# programs. The Main method states what the class does when executed.
- In Console.WriteLine, the Main method specifies its behavior with the statement "Welcome to wikitechy world - C#..!". WriteLine is a method of the Console class defined in the System namespace. This statement causes the message "Welcome to wikitechy world - C#..!" to be displayed on the screen.
- The second Console.WriteLine statement works the same way: it causes the message "Press any key to exit." to be displayed on the screen.
- Console.ReadKey(); is used for Visual Studio .NET users. It makes the program wait for a key press and prevents the console window from closing immediately when the program is launched from Visual Studio .NET.

Sample C# Output:
- The output first displays "Welcome to wikitechy world - C#..!", from the first Console.WriteLine statement.
- The output then displays "Press any key to exit.", from the second Console.WriteLine statement.
https://www.wikitechy.com/tutorials/csharp/csharp-program-structure
Note: This only describes how to build and compile using Visual C++ Express 2010. If you are using VS or a variant of VS then you should be fine.

irrKlang currently supports these audio formats: .wav .mp3 .ogg .flac .mod .it .s3m .xm

The first thing needed is the actual irrKlang library, which can be downloaded free (for non-commercial use) from the irrKlang website:

Once you have downloaded the library, unzip it to a place of your choosing. Next open up Visual Studio and create a new empty Win32 Console Application. If you are unsure how to do that then look here: Getting Started In Microsoft Visual Studio 2008

I named the project 'irrKlang_test'. The name doesn't matter. Go to File - Save All and save your project. (Currently the project is temporary and no folders have been created.)

Next, open up the irrKlang folder that you extracted before. Copy the include folder from the library to your current project. If you are unsure where to place include then look at my folder path. Next go into the bin/win32-visualStudio folder and grab

ikpMP3.dll
irrKlang.dll

Move those files to your project as well. Finally you need to copy the irrKlang.lib library from the irrKlang-1.3.0\lib\Win32-visualStudio folder to your project.

Go back to Visual Studio and add a .cpp file. I called mine "main.cpp" but it doesn't matter. At this point your project folder should look something like this, but without the .mp3 file.

Now open up the newly created cpp file and start coding.

===================================================
================== The Code =======================
===================================================

First we need the proper headers:

#include <iostream>
#include <string>
#include <windows.h>
#include "include/irrKlang.h"

The string and windows headers are used for the audio file name and the Sleep() function. In order to use anything in the irrKlang library you need to use its namespace, irrklang.
To get rid of a lot of typing just use the namespaces. (NOTE: LOWERCASE 'K')

using namespace std;
using namespace irrklang;

After all the headers and namespaces have been taken care of we need to reference irrKlang.lib so we can actually use the library. The easy way to do this is to include this line of code:

#pragma comment(lib, "irrKlang.lib")

Now define the main function and create the Sound Engine:

int main()
{
    // Creates the Sound Engine with default parameters
    ISoundEngine* se = createIrrKlangDevice();
    return 0;
}

What if the sound engine wasn't created properly? Well, use an if statement to check:

if(!se)
{
    cout << "Error: Sound Engine could not be created" << endl;
    return 0;
}

Your code should now look something similar to this:

int main()
{
    // Creates the Sound Engine with default parameters
    ISoundEngine* se = createIrrKlangDevice();

    if(!se)
    {
        cout << "Error: Sound Engine could not be created" << endl;
        return 0;
    }

    return 0;
}

If all you wanted to do was play a file then you would call

// Note the '->' syntax, se is a pointer
se->play2D("somefile.wav");
Sleep(1000);

That code would play somefile.wav for 1 second and then the program would exit. However, that method isn't very good. Who wants to hard-code sleep times in for everything? Nobody does, so I've created an easy way of playing the entire file without having to "sleep" for the exact number of milliseconds. This is where the <string> header comes in. Create a new string variable and initialize it to a media file name. (You could get input from the user here but I didn't.)

string soundFile = "motor_bikes.mp3";

Now, play the sound. Here is where another what-if comes into play. What if the file name is invalid? Well, we can check for that too:

// play2D returns 0 if it is successfully playing
if(se->play2D(soundFile.c_str()) != 0)
{
    cout << "Error: Could not play file" << endl;
    return 0;
}

Now the file is playing, but the program will exit in another millisecond. So let's check and see if the file is still playing:

while(se->isCurrentlyPlaying(soundFile.c_str()))
    Sleep(100);

isCurrentlyPlaying() returns true if the file passed in is currently playing and false otherwise. That's it, you're done.
Who would have thought all it takes to play music is a short 39 lines of code? If you want to play something other than motor_bikes.mp3 then substitute the file name in or get input from the end user. If you have any questions just ask.

The complete code:

#include <iostream>
#include <string>
#include <windows.h>
#include "include/irrKlang.h"

using namespace std;
using namespace irrklang;

#pragma comment(lib, "irrKlang.lib")

int main()
{
    // Creates the Sound Engine with default parameters
    ISoundEngine* se = createIrrKlangDevice();

    if(!se)
    {
        cout << "Error: Sound Engine could not be created" << endl;
        return 0;
    }

    string soundFile = "motor_bikes.mp3";

    // Play some sound
    if(se->play2D(soundFile.c_str()) != 0)
    {
        cout << "Error: Could not play file" << endl;
        return 0;
    }

    // Keep playing until song/sound has ended
    while(se->isCurrentlyPlaying(soundFile.c_str()))
        Sleep(100);

    se->drop();
    return 0;
}
http://www.dreamincode.net/forums/topic/185301-playing-audio-files-with-irrklang/
Qtile

Qtile (git version) is available in the AUR: qtile-git. A default configuration file is provided in the git repository. Copy it to ~/.config/qtile/config.py. An easy way to do this is:

$ mkdir -p ~/.config/qtile/
$ wget -O ~/.config/qtile/config.py

Starting Qtile

To start Qtile add exec qtile to your ~/.xinitrc and launch Xorg. The default configuration includes the shortcut Alt+Enter to open a new xterm terminal.

Configuration

Configuration is done entirely in Python, in the file ~/.config/qtile/config.py. Indentation is significant in Python, so preserve it. Before restarting Qtile you can test your config file for syntax errors with the command:

$ python2 ~/.config/qtile/config.py

Groups

In Qtile the workspaces (or views) are called Groups. They can be defined as follows, for instance:

from libqtile.manager import Group
groups = [
    Group("term"),
    Group("web"),
    Group("irc"),
]

Keys

You can configure your shortcuts with the Key function. Here is an example of the shortcut Alt+Shift+q to quit the window manager:

from libqtile.manager import Key
from libqtile.command import lazy
keys = [
    Key(
        ["shift", "mod1"], "q",
        lazy.shutdown()
    ),
]

You can find out which modX corresponds to which key with the command xmodmap.

Screens and Bars

Create one Screen function for every monitor you have. The bars of Qtile are configured in the Screen function, as in the following example:

from libqtile.manager import Screen
from libqtile import bar, widget
screens = [
    Screen(
        bottom=bar.Bar([
            widget.GroupBox(),
            widget.WindowName()
        ], 30))
]

Widgets

You can find information on the widgets in the documentation. Some of the widgets (such as BatteryIcon and Notify) are not included in the default git repository, but you can download them here and copy them into /usr/lib/python2.7/site-packages/libqtile/widget/.
Then modify /usr/lib/python2.7/site-packages/libqtile/widget/__init__.py to import the new widgets. Here is an example for the BatteryIcon and Notify widgets:

/usr/lib/python2.7/site-packages/libqtile/widget/__init__.py
[...]
from battery import Battery, BatteryIcon
from notify import Notify
[...]
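Putting the pieces above together, a minimal complete ~/.config/qtile/config.py might look like the following sketch. It only combines the groups, keys, and screens snippets already shown, plus an added Alt+Enter binding spawning xterm to match the default shortcut mentioned above; treat it as illustrative rather than a drop-in config.

```python
# Minimal sketch of ~/.config/qtile/config.py, assembled from the
# snippets on this page (APIs as of the Qtile version described here).
from libqtile.manager import Group, Key, Screen
from libqtile.command import lazy
from libqtile import bar, widget

groups = [
    Group("term"),
    Group("web"),
    Group("irc"),
]

keys = [
    # Alt+Shift+q quits the window manager
    Key(["shift", "mod1"], "q", lazy.shutdown()),
    # Alt+Enter opens a new xterm (the default shortcut mentioned above)
    Key(["mod1"], "Return", lazy.spawn("xterm")),
]

screens = [
    Screen(
        bottom=bar.Bar([
            widget.GroupBox(),
            widget.WindowName(),
        ], 30)),
]
```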
https://wiki.archlinux.org/index.php?title=Qtile&diff=207380&oldid=201013
What OS are you running on? Are the programs on the same machine? (I'm guessing yes.) A few alternatives I can think of are pipes, RPC, and shared memory. Simplest is probably sockets. Fastest is probably shared memory.

Regards, Mike.

OpenFileMapping
MapViewOfFile
UnmapViewOfFile

You then can use the OpenFileMapping and MapViewOfFile functions in your other thread to access the data you want to share or send. Threads within the same process share the same memory space and can share heap variables and pointers intimately. That's what makes them threads... It's only when you are trying to communicate across process boundaries that things get complicated.

Sockets are not the fastest way of doing IPC, but they work everywhere and even on different machines, so if speed is not really an issue then you can use sockets as you have said. Same process: message queue, shared memory, pipes. Using sockets will make your program more flexible as it will work on the same or two separate machines.

#include <conio.h>
#include <stdio.h>
#include <process.h>
#include <windows.h>

DWORD WINAPI message_eater(LPVOID dummy)
{
    MSG msg;
    while(::GetMessage(&msg, NULL, 0, 0))
    {
        printf("GOT Message %d:%d\n", msg.wParam, msg.lParam);
        Sleep(1000L);
    }
    printf("Pump Dead\n");
    return 1;
}

void main(int argc, char *argv[])
{
    DWORD thread_id = -1;
    HANDLE thread = CreateThread(NULL, 0, message_eater, NULL, 0, &thread_id);
    Sleep(1000L);
    /* message IDs and parameters reconstructed; the original post was truncated */
    if(!PostThreadMessage(thread_id, WM_USER, 1, 1))
    {
        printf("Exit %d\n", GetLastError());
    }
    if(!PostThreadMessage(thread_id, WM_USER, 2, 2))
    {
        printf("Exit %d\n", GetLastError());
    }
    if(!PostThreadMessage(thread_id, WM_QUIT, 0, 0))
    {
        printf("Exit %d\n", GetLastError());
    }
    printf("Post out\n");
    getch();
}

"Communication between programs."
I only posted that because he was chatting about threads.

I'll start from the top. MDarling: the OS is Windows, most probably some version of 2000, either Server or Professional. I'll add some more details to my problem at this point. The master program will be coded by me. I will then provide it a list of other programs to execute (either from a db or a file). These won't be programmed by me, but I want to provide a set of common function calls that can be made by them in their code (maybe using COM) to access information stored in the master program and to pass information back. Sockets sound like the best alternative as the programs may be held on another server (if it's a big program) or be a little exe on the same box. Does this make the decision easier? Thanks for all the help so far.

If you wish to use function calls then you can wrap them in a DLL which creates messages to send over a lower-level sockets layer. However you could achieve the same thing by using RPC if you wish. My own preference however would be to use sockets. You could always migrate your server (and/or clients) to Linux, HP-UX, whatever in the future if you needed to.

Regards, Mike.

Then I will store the information my child programs need in an XML file; once all the data has been stored, the parent program will launch the child programs. These will then access the data in the XML file via an API (some COM functions wrapped in a DLL). This way the parent program can write the data to the file as it wants and the child programs can take it at their pace. The only limitation I can see is that the child programs must be written in languages that support COM. I know C++ and VB do; are there any others?
although do not have one thread for each client because this will lead to a fundamentally un-scalable application architecture. BTW, matt, what on earth is it you are doing ? and hows BT If I move the client requests from threaded sockets to a DCOM API then my server program can be long gone while the local/remote client programs do there thing getting the data via my API. Plus it gets me to use XML, which the UNI would probably like :-) Yeah BT's not bad, only a couple of months left Paul my main concern with having direct contact between the parent and many child programs is the data flow control. and I think designing/developing and API to read form an xml file and convert to an array would be easier than a whole hoard of interprocess communication stuff (christ that could almost be a final year project on it's own) when you coming back to BT oh BTW matt i got a first in me degree, sweet :) I would develop in COM using DLL's, how hard is it to convert to DCOM? I converted a COM program to DCOM a few years back. If memory serves, it was mostly a matter of jumping through the various NT security hoops/features to allow DCOM clients to connect to the server process. DCOMCFG is your friend in this case. I also seem to recall that I needed to change the server code, which was self-registering, to register properly as a DCOM server. And some of the error handling changed, due to new failure points introduced by using DCOM. Oh, yeah, I also added a feature to the server to, upon installation, generate a .reg file for clients to run to use that server. This made administration a little easier.
https://www.experts-exchange.com/questions/20138773/communication-between-programs.html
Summary: This project builds a robot application that automates the collection of environmental data. The robot supports remote monitoring and maintenance of various environmental factors in any given area. The details of the design, setup and use of the robot in a Data Acquisition (DAQ) system are given here. The sensors provide the accurate and reliable real-time data needed for autonomous monitoring and control of any type of area or industry. Data acquired by the proposed system can be remotely accessed, plotted and analyzed. This provides a fully automated solution for the monitoring and control of remote locations.

Fig. 1: Prototype of Arduino based Land Rover Robot used for Sensor Data Acquisition

Description:

Prerequisites & Equipment: You are going to need the following:

- An Arduino board or Arduino clone (here is a guide if you need one)
- Two DC motors
- A 5V TTL-UART Bluetooth module
- A robot chassis and wheels which suit your chassis size and motor size
- The Arduino IDE for the programming

Block Diagram:

Fig. 2: Block Diagram of Arduino based Sensor Data Acquisition Robot

Our aim in this project is to collect the sensor data and store it for future analysis. There are many techniques used for data acquisition, such as EEPROM or an SD card. Here we are going to use internet-based storage, which is a reliable and efficient way to analyze any sensor data.

Hardware assembly: Make the robot connections as given by the circuit diagram. Build the robot assembly with your selected parts and connect the motors to the circuit. Optocouplers are used to safeguard the Arduino from high-voltage risks.

Note: RX of the Arduino should be connected to TX of the Bluetooth module, and TX of the Arduino should be connected to RX of the Bluetooth module.

Working: In this robot we add internet functionality by using a GSM modem which provides a GPRS connection. The section below will explain how to send the sensor readings via an HTTP command to a website.
We use the ThingSpeak website, which provides a simple and free API for logging data from a variety of sensors.

Fig. 3: Graph showing Light Intensity variations sensed by LDR on Data Acquisition Arduino Robot

Fig. 4: Graph showing Humidity variations recorded by DHT11 sensor on Data Acquisition Arduino Robot

Fig. 5: Graph showing Temperature variations recorded by DHT11 sensor on Data Acquisition Arduino Robot

These are some example charts generated from sensor data sent from the robot to a channel on the ThingSpeak website.

ThingSpeak Setup: Here are the steps required in order to get this example working with the ThingSpeak website:

- Create a new channel.

Fig. 6: Screenshot of Thingspeak Website showing Creation of Channel for Data Acquisition

- Copy the WRITE API KEY from the API Keys tab of your new channel.
- Configure your new channel (see the Channel Settings tab).
- Add three fields to your channel.
- Name the channel and each of the fields.
- Save the new channel settings.

Note: The channel and field names are only used for labelling the data in the charts; these names have no effect on the API and can be changed at any time. Here are the settings of the channel used for this robot:

Fig. 7: Screenshot of Thingspeak website showing addition of Light Intensity, Temperature and Humidity Fields in created channel for displaying sensor data

Library Includes: In addition to the existing libraries, we must now also include the TimerOne, SoftwareSerial and DHT libraries in the sketch using the #include compiler directive.

#include <TimerOne.h>
#include <SoftwareSerial.h>
#include <DHT.h>

The ThingSpeak API limits data uploads to a maximum of once every 15 seconds. It also takes some time to establish the GPRS connection before any data can be sent. For this reason, we set Timer1.initialize(4000000); // a 4-second timer, and keep a variable called tick_count that counts up to 5 timer ticks, so that the readings are sent once every 20 seconds.
(Note that Timer1.initialize takes its argument in microseconds.)

Setup: In addition to the existing setup code for the robotic controls, we need to make some initializations on the GSM module:

GPRS.write("AT+CGATT=1");  // Attach a GPRS service
GPRS.write("AT+CGDCONT=1,\"IP\",\"airtelgprs.com\"");  // Define the PDP context
GPRS.write("AT+CSTT=\"airtelgprs.com\",\"\",\"\"");  // Set access point, user ID and password
GPRS.write("AT+CIICR");  // Bring up the wireless connection with GPRS (time consuming)
GPRS.write("AT+CIFSR");  // Get the local IP address; not actually needed, though
GPRS.write("AT+CIPSTATUS");  // Get the connection status; it should return 'IP STATUS'. This can be used as a checkpoint.
GPRS.write("AT+CIPHEAD=1");  // Add headers to the HTTP request
GPRS.write("AT+CDNSORIP=1");  // Indicates whether the connection request will use an IP address (0) or a domain name (1)
GPRS.write("AT+CIPSTART=\"TCP\",\"api.thingspeak.com\",\"80\"");  // Start up a TCP connection (mode, IP address/name, port); if it returns 'CONNECT OK' you're in luck
GPRS.write("AT+CIPSEND");  // Tell the GSM module that we're going to send data

Collection of Data:

Light = analogRead(A0);  // Read the light intensity
// Reading temperature or humidity takes about 250 milliseconds!
// Sensor readings may also be up to 2 seconds 'old' (it's a very slow sensor).
h = dht.readHumidity();
t = dht.readTemperature();  // Read the temperature as Celsius (the default)

Sending Data to ThingSpeak:

// itoa() converts an integer to a string
itoa(Light, LIGHT_data, 10);
itoa(h, HUMID_data, 10);
itoa(t, TEMP_data, 10);
GPRS.write("AT+CIPSEND");  // Tell the GSM module that we're going to send data
GPRS.write("GET /update?key=XXXXXXXXXXXXXXXXXXXXXXX&field1=");  // Change to your API KEY
GPRS.write(LIGHT_data);
GPRS.write("&field2=");
GPRS.write(HUMID_data);
GPRS.write("&field3=");
GPRS.write(TEMP_data);
GPRS.write(" HTTP/1.1");  // And finally here comes the actual HTTP request
// The following are the headers that must be set.
GPRS.write("Host: api.thingspeak.com");
GPRS.write("Connection: keep-alive");
GPRS.write("Accept: */");  // together with the next write this sends "Accept: */*"
GPRS.write("*");
GPRS.write("Accept-Language: en-us");
GPRS.write(0x1A);  // Ctrl+Z (char ctrlZ = 0x1A) tells the GSM module that we're not going to send any more data

Circuit Diagrams

Project Video

Filed Under: Electronic Projects, More Editor's Picks
https://www.engineersgarage.com/data-acquisition-robot/
Re: The trouble with triggers

"What's in a namespace" wrote:
> Shakespeare
> (ns.oracle.com is still down)

So I've noticed. <g> I've asked Mark Townsend to look into it as it breaks demos in Oracle's production distribution. And if that doesn't work, the next step will be to open an SR, though I'd prefer not to use that system unless it is the last resort.

--
Daniel A. Morgan
University of Washington
damorgan_at_x.washington.edu (replace x with u to respond)
Puget Sound Oracle Users Group

Posted on Mon Nov 20 2006 - 10:54:08 CST
http://www.orafaq.com/usenet/comp.databases.oracle.misc/2006/11/20/0363.htm
React js Interview questions and answers.

1. What is ReactJS?

ReactJS is a JavaScript-based UI library developed at Facebook to create interactive, stateful and reusable UI components. It is one of the most popular JavaScript libraries for handling the presentation layer of web and mobile apps.

2. List some advantages of ReactJS?

React avoids many of the problems clients run into with other JavaScript frameworks: it updates a virtual DOM instead of re-rendering the whole page, its components are reusable and composable, and its one-way data flow keeps applications predictable and easy to maintain.

3. What are Components in ReactJS?

React components let you split the UI into independent, reusable pieces and think about each piece in isolation. Conceptually, components are like JavaScript functions: they accept arbitrary inputs (called "props") and return React elements describing what should appear on the screen. Below is a sample component written as an ES6 class that displays a welcome message on the screen.

class Welcome extends React.Component {
  render() {
    return <h1>Hello, {this.props.name}</h1>;
  }
}

const element = <Welcome name="Sara" />;
ReactDOM.render(
  element,
  document.getElementById('root')
);

4. What is JSX?

JSX is an XML/HTML-like syntax used by React that extends ECMAScript so that XML/HTML-like text can co-exist with JavaScript/React code. The syntax is intended to be used by preprocessors (i.e., transpilers like Babel) to transform the HTML-like text found in JavaScript files into standard JavaScript objects that a JavaScript engine will parse. Basically, by using JSX you can write concise HTML/XML-like structures (e.g., DOM-like tree structures) in the same file as you write JavaScript code, and Babel will transform these expressions into actual JavaScript code. Unlike in the past, instead of putting JavaScript into HTML, JSX allows us to put HTML into JavaScript.
By using JSX one can write the following JSX/JavaScript code:

var nav = (
  <ul id="nav">
    <li><a href="#">Home</a></li>
    <li><a href="#">About</a></li>
    <li><a href="#">Clients</a></li>
    <li><a href="#">Contact Us</a></li>
  </ul>
);

And Babel will transform it into this:

var nav = React.createElement(
  "ul",
  { id: "nav" },
  React.createElement("li", null, React.createElement("a", { href: "#" }, "Home")),
  React.createElement("li", null, React.createElement("a", { href: "#" }, "About")),
  React.createElement("li", null, React.createElement("a", { href: "#" }, "Clients")),
  React.createElement("li", null, React.createElement("a", { href: "#" }, "Contact Us"))
);

5. Explain the life cycle of a ReactJS component.

Each component has several "lifecycle methods" that you can override to run code at particular times in the process. Methods prefixed with will are called right before something happens, and methods prefixed with did are called right after something happens.

Class properties:
- defaultProps
- displayName

Instance properties:
- props
- state

6. List some features of ReactJS?

Undoubtedly, React is today among the best JavaScript UI libraries. It comes with a lot of features that help programmers create beautiful applications easily; we have listed some of them below.

- Adaptability
- Free and open source
- Decorators from ES7
- Server-side communication
- Asynchronous functions & generators
- The Flux architecture
- Destructuring assignments
- The usefulness of JSX

Also, read React Native interview questions.

7. How to use Events in ReactJS?

Handling events in React is very similar to handling events on DOM elements. The differences are only in syntax:

- Event names in React are always written in camelCase.
- With JSX you pass a function as the event handler, rather than a string.
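To see what shape Babel's output actually produces, here is a toy stand-in for React.createElement written in plain JavaScript. This only illustrates the element-tree structure; it is not React's real implementation:

```javascript
// A minimal stand-in for React.createElement: it just records the
// element type, props and children as a plain object tree.
function createElement(type, props, ...children) {
  return { type, props: props || {}, children };
}

// The transpiled <ul id="nav"> example from above, using the stand-in:
var nav = createElement(
  "ul",
  { id: "nav" },
  createElement("li", null, createElement("a", { href: "#" }, "Home")),
  createElement("li", null, createElement("a", { href: "#" }, "About"))
);

console.log(nav.type);                                // "ul"
console.log(nav.props.id);                            // "nav"
console.log(nav.children.length);                     // 2
console.log(nav.children[0].children[0].children[0]); // "Home"
```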
Let's understand this with an example:

// In HTML
<button onclick="activateAccount()"> Activate Account </button>

// In React
<button onClick={activateAccount}> Activate Account </button>

Another difference is that in React you cannot return false to prevent default behavior. You must call preventDefault explicitly.

8. What is Flux in JavaScript?

Flux is an application architecture for creating data layers in JavaScript applications. It was designed at Facebook along with the React view library. Flux is not a framework or a library; it is simply a kind of architecture that complements React and the concept of unidirectional data flow.

9. What are refs in React? When to use them?

In React, a ref is used to store a reference to the element or component returned by the component's render() function. Refs should be avoided in most cases; however, they can be useful when we need DOM measurements or want to add methods to components. Refs can be used in the following cases:

- Managing focus, text selection, or media playback.
- Triggering imperative animations.
- Integrating with third-party DOM libraries.

10. What are stateless components in React?

Stateless components are components that don't have any state of their own. When something is stateless, it calculates its internal state but never directly mutates it. No class and no this keyword are needed to create a stateless component; you can create one using a plain function or an ES6 arrow function. Below is an example of a stateless component in React (the JSX in the original examples was lost in extraction; the element markup here is a reconstruction):

const Pane = (props) => <div>{props.children}</div>;
const Username = ({ username }) => <p>The logged in user is: {username}</p>;

11. What is the difference between state and props in ReactJS?

Props is shorthand for properties. They are very similar to arguments passed to a pure JavaScript function. The props of a component are passed down from the parent component that renders it.
During a component's life cycle, props should not change; consider them immutable. In React, all props are accessible via this.props.

import React from 'react';

class Welcome extends React.Component {
  render() {
    return <h1>Hello {this.props.name}</h1>;
  }
}

const element = <Welcome name="Sara" />; // the original element expression was truncated; "Sara" is illustrative

State is used to create dynamic and interactive components in React. State is the heart of a React component: it is what makes the component feel alive and determines how it renders and behaves.

// simple state example
import React from 'react';

class Button extends React.Component {
  constructor() {
    super();
    this.state = {
      count: 0,
    };
  }

  updateCount() {
    this.setState((prevState, props) => {
      return { count: prevState.count + 1 };
    });
  }

  render() {
    return (
      <button onClick={() => this.updateCount()}>
        Clicked {this.state.count} times
      </button>
    );
  }
}

export default Button;

12. What are synthetic events?

SyntheticEvent is a cross-browser wrapper around the browser's native event. In React, all of your event handlers are passed instances of SyntheticEvent. The synthetic event system works the same way as the browsers' native event systems; the only difference is that the same code works across all browsers. Below is a simple example of how to listen for a click event in React.

import React, { Component } from 'react';

class ShowAlert extends Component {
  showAlert() {
    alert("I'm an alert");
  }

  render() {
    return (
      <button onClick={this.showAlert}>show alert</button>
    );
  }
}

export default ShowAlert;

13. What is the difference between the DOM and the virtual DOM in React js?

Ans- DOM is the acronym for Document Object Model. The DOM is also called the HTML DOM, as it is an abstraction of structured code (HTML) for web developers. The DOM and HTML code are interrelated, since the elements of HTML are known as nodes of the DOM. It defines a structure through which users can create, alter and modify documents and their content. So while HTML is text, the DOM is an in-memory representation of this text.
The virtual DOM is an abstraction of an abstraction, as it is derived from the HTML DOM. It is a representation of DOM objects, like a lightweight copy. The virtual DOM was not invented by React; React merely uses it and provides it for free.

14. Enlist the advantages and disadvantages of React js?

Ans- React.js is used by web developers for creating large web applications that can change data over time without reloading the entire page. The following are the advantages of using React.js:

1- React makes search engine optimization (SEO) easy.
2- It is very efficient, and it ensures readability and easy maintenance.
3- It gives extraordinary developer tools to web developers and makes JavaScript coding easier for them.
4- It makes UI test cases easier to write.

The following are the disadvantages of React:

1- Some major configuration is required for integrating React.js with a traditional MVC framework, such as substituting erb templates with React.js views.
2- It is a steep learning curve for people who are new to the web development world.

14. What are controlled components and uncontrolled components in React?

Ans- Controlled components are more advisable to use, as it is easier to implement forms with them. In a controlled component, form data is handled by React components: a controlled input accepts its value as a prop, together with a callback to change that value. The uncontrolled component is the alternative to controlled components. Here, form data is handled by the DOM itself; in uncontrolled components, a ref can be used to get the form values from the DOM.

15. Explain the difference between functional and class components.

Ans- Functional components are components that return React elements as their result. They are just plain old JavaScript functions. React 0.14 introduced a new shortcut for creating simple stateless components, which is why they are known as functional components. These components make use of plain JavaScript functions.
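The "lightweight copy" idea can be made concrete with a toy diff over plain objects. This is only an illustration of the virtual-DOM concept; React's actual reconciliation algorithm is far more sophisticated:

```javascript
// A toy "virtual node" is a plain { type, children } object.
// diff() returns a list of change descriptions instead of touching a
// real DOM -- computing minimal patches is the essence of the idea.
function diff(oldNode, newNode, path = "root") {
  if (oldNode === undefined) return [{ op: "create", path }];
  if (newNode === undefined) return [{ op: "remove", path }];
  if (typeof oldNode === "string" || typeof newNode === "string") {
    return oldNode === newNode ? [] : [{ op: "replace", path }];
  }
  if (oldNode.type !== newNode.type) return [{ op: "replace", path }];
  let patches = [];
  const len = Math.max(oldNode.children.length, newNode.children.length);
  for (let i = 0; i < len; i++) {
    patches = patches.concat(
      diff(oldNode.children[i], newNode.children[i], path + "/" + i)
    );
  }
  return patches;
}

const before = { type: "ul", children: [{ type: "li", children: ["Home"] }] };
const after = {
  type: "ul",
  children: [
    { type: "li", children: ["Home"] },
    { type: "li", children: ["About"] },
  ],
};

console.log(diff(before, after)); // [{ op: "create", path: "root/1" }]
```

Only the newly added list item produces a patch; the unchanged "Home" item generates no work at all.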
Class components – most tech-savvy people are more familiar with class components, as they have been around longer. These components make use of plain old JavaScript objects for creating pages, mixins, etc. in an identical way. Using React's createClass factory method, an object literal is passed in, defining the methods of the new component.

16. What do you understand by mixins or higher order components in React?

Ans- A higher-order component (HOC) is a function that takes a component and returns a component. It is a modern technique in React for reusing component logic. However, higher-order components are not part of the React API, per se. They are a pattern that emerges from React's compositional nature. In this sense, HOCs resemble higher-order functions in general, i.e. functions that take other functions as arguments or return them (for example, a function that applies another function to every element in an array).

17. How is flux different from redux?

Ans- In Flux, an application typically has multiple stores and a central dispatcher that routes actions to them. Redux has a single store and no dispatcher; state changes are computed by pure reducer functions that take the previous state and an action and return the next state.

18. How is React different from Angular and Vue js?

Ans- Angular – developed by Google, Angular is a TypeScript-based JavaScript application framework. It is also known as the Superheroic JavaScript MVW Framework. It was developed to meet the challenges of creating single-page applications. There are several versions of Angular, such as Angular 2+, Angular 2 or ng2. Angular is the rewritten, mostly incompatible successor to AngularJS, which means AngularJS is the older framework.

React – React was released by Facebook in March 2013. It is a JavaScript library that is used for building user interfaces. React creates large web applications and also provides speed, scalability, and simplicity.

Vue js – launched in February 2014, Vue is among the most famous and rapidly growing frameworks in the JavaScript field. Vue is an intuitive, fast and composable MVVM framework for building interactive interfaces. It is extremely adaptable, and several JavaScript libraries make use of it. Vue is also a web application framework that helps in making advanced single-page applications.

19.
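Treating components as plain functions from props to output (and ignoring JSX), the HOC pattern can be sketched in a few lines of ordinary JavaScript. The name withUppercase is made up for this example; it is not a React API:

```javascript
// A "component" here is just a function from props to a string of output.
const Greeting = (props) => "Hello, " + props.name;

// A higher-order component: takes a component and returns a new
// component that reuses the original's logic while adding behaviour.
const withUppercase = (Component) => (props) => Component(props).toUpperCase();

const LoudGreeting = withUppercase(Greeting);

console.log(Greeting({ name: "Sara" }));     // Hello, Sara
console.log(LoudGreeting({ name: "Sara" })); // HELLO, SARA
```

The key point is that withUppercase never modifies Greeting; it composes around it, which is exactly what React HOCs like those produced by libraries (e.g. connect-style wrappers) do with real components.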
What is the use of arrow functions in React?

Ans- Arrow functions are extremely useful in React because they make the behavior of this predictable when functions are passed as callbacks. An arrow function does not redefine the value of this within its function body, which prevents the class of bugs caused by using this inside callbacks.

20. What are refs in React?

Ans- Refs are used for managing focus, selecting text and triggering animations. They also help with integrating third-party libraries. Refs provide a reference to a DOM (Document Object Model) node: they return the node that we are referencing. Whenever we need DOM measurements, we can use refs.

Q21. What is the purpose of the render() function in React?

Ans- The render() function is used to update the UI. For this, you create a new element and pass it to ReactDOM.render(). React elements are immutable: once you create an element, you cannot change its attributes. An element is thus like a single frame; it depicts the UI at some point in time. ReactDOM.render() controls the content of the container node you pass in, and any DOM element already present in it is replaced on the first call.

Thus, these questions cover both basic and advanced-level topics on React js. Preparing for your interview with these questions will give you an edge over the others and will help you crack the interview.

Also Read: React Native Interview Questions
https://www.onlineinterviewquestions.com/react-js-interview-questions/
We recently went through a couple of optimisations on a Rails app that we're building. The application is hosted on Heroku, but most of the points here will get you a long way even if you're not using Heroku. We wanted to compile a generic list of optimisation points that we found ourselves doing over and over, but if you feel that we've missed something, please let us know.

Amazon CloudFront CDN

Amazon CloudFront is probably the most straightforward CDN to implement. The idea is simple: when someone visits a web page, the browser downloads the assets in parallel. To avoid overloading the server, it will only try to download a certain number of assets from the same origin; until those assets are retrieved, it will not load any others. Adding one (or multiple) CDNs to your app will allow you to serve more assets at the same time. Also, since CDNs are optimised for content delivery (caching, multiple-location redundancy and so on), they will probably be faster than you at serving those assets.

CloudFront is simple because you have almost nothing to do. The browser requests 'image-a.png' from your CloudFront distribution. If CloudFront has it cached, it will give it back; if not, it will fetch it from your server first and then cache it. Simple, easy and cheap.

Setting up CloudFront is also dead easy:

- Create a new CloudFront distribution
- Set up your origin as your website domain
- Wait for it to propagate

Once done, add the following to your environments/production.rb:

config.action_controller.asset_host = ENV["CLOUDFRONT_URL"]

You will now need to send this ENV variable to Heroku:

heroku config:add CLOUDFRONT_URL=

One thing to note is that the CloudFront URL won't match your domain. If you are not using HTTPS, you can create a CNAME and use your subdomain instead of the CloudFront garbage URL. If you are using HTTPS, you still can do it, but it will cost quite a lot of money; currently around $600 per month.
So unless you need your asset paths to match your domain name, I would recommend keeping the default CloudFront URL.

Paperclip, S3 and CloudFront

We are big fans of Paperclip and its simplicity (and of thoughtbot's gems in general). If you are using Heroku, you're probably already serving your uploads via Amazon S3. Even though this works, it's probably not a good idea, since S3, as great as it may be, is not optimised for asset delivery. Thankfully, we can easily serve our S3 assets through CloudFront. To do this, I would recommend creating a new Behaviour in your CloudFront distribution and setting it up as:

Path Pattern: uploads/*
Origin: Your S3 bucket

Once done, you can tell Paperclip that you are now using CloudFront. Create a config/initializers/paperclip.rb containing:

if Rails.env.production?
  Paperclip::Attachment.default_options[:storage] = :s3
  Paperclip::Attachment.default_options[:s3_credentials] = {
    bucket: ENV["S3_BUCKET"],
    access_key_id: ENV["S3_ACCESS_KEY_ID"],
    secret_access_key: ENV["S3_SECRET_ACCESS_KEY"]
  }
  Paperclip::Attachment.default_options[:s3_protocol] = "https"
  Paperclip::Attachment.default_options[:url] = ":s3_alias_url"
  Paperclip::Attachment.default_options[:s3_host_alias] = ENV["CLOUDFRONT_URL"]
  Paperclip::Attachment.default_options[:path] = "/uploads/:class/:attachment/:id_partition/:updated_at/:style/:filename"
end

Please note that we've changed the default Paperclip image path, both to match our CloudFront Behaviour and to ensure a new file name if the asset is updated. Now your Paperclip assets will load super fast, thanks to CloudFront. If you already have some existing Paperclip assets that need to be moved around or whose paths need to change, you can use this rake task, which should do the job for you.
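To make the :id_partition interpolation used in that path concrete, here is a simplified re-implementation in plain Ruby. It mirrors Paperclip's documented behaviour of zero-padding the record id to nine digits and splitting it into groups of three; this is a sketch, not the gem's actual code:

```ruby
# Simplified version of Paperclip's :id_partition interpolation:
# zero-pad the id to 9 digits, then split into three 3-digit groups.
def id_partition(id)
  format("%09d", id).scan(/\d{3}/).join("/")
end

puts id_partition(1)         # => 000/000/001
puts id_partition(1234)      # => 000/001/234
puts id_partition(987654321) # => 987/654/321
```

This is also why the migration rake task below can recover a record id from an old key: joining the three partition segments back together and calling to_i reverses the transformation.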
namespace :paperclip_assets do
  desc "Migrate S3 images to new filenames"
  task :migrate_images => :environment do
    s3 = AWS::S3.new(
      access_key_id: ENV["S3_ACCESS_KEY_ID"],
      secret_access_key: ENV["S3_SECRET_ACCESS_KEY"]
    )
    bucket = s3.buckets[ENV["S3_BUCKET"]]

    bucket.objects.each do |object|
      next unless object.key =~ /\.(jpg|jpeg|png|gif)$/

      path_parts = object.key.split("/")
      # Assumes that the old interpolation pattern was
      # `/:class/:attachment/:id_partition/:style/:filename`
      resource_id = path_parts[2..4].join.to_i
      resource_class_name = path_parts[0].singularize.classify
      attachment_name = path_parts[1].singularize

      begin
        resource_class = resource_class_name.constantize
        resource = resource_class.find(resource_id)
        new_path = resource.send(attachment_name).path
        Rails.logger.info "Renaming: #{object.key} -> #{new_path}"
        object.copy_to new_path, acl: :public_read
      rescue => e
        Rails.logger.error "Error renaming #{object.key}: #{e.inspect}"
      end
    end
  end
end

A word about web fonts

When you move your site to CloudFront, if you're hosting web fonts, you will eventually run into CORS trouble. Depending on your needs, this can be a real pain, so let's have a look.

If you're only using Font Awesome: if this is the case, and all your other fonts are loaded via Google Fonts or another service, I would recommend not including Font Awesome in your CSS files, and instead loading it from the MaxCDN version. You can find the links here.

If you are hosting fonts: given that your fonts are stored in app/assets/fonts, you will first need to install Rack CORS and follow their setup example. You will also need to create a new CloudFront Behaviour, with /fonts/* as the path pattern and your website as the origin. In this Behaviour, you will have to whitelist these four headers:

- Access-Control-Allow-Headers
- Access-Control-Allow-Methods
- Access-Control-Allow-Origin
- Origin

Then select Forward query string. Once your distribution has propagated, you should be sorted.
But make sure to clear your cache before testing again.

Other small, nice improvements

These are simple things to do, but if we're talking about serving a minimal page, every little bit helps.

Serve your CSS and JS gzipped: On Heroku, this is as easy as adding use Rack::Deflater to your config.ru. It might not be the most elegant solution, but it works nicely. Other solutions include the asset_sync gem or the rack-zippy gem.

Leverage browser caching: Since you've defined your Paperclip path to change when an asset is updated, you will want to leverage browser caching so the same asset is never served twice. Just add this line to your config/environments/production.rb:

config.static_cache_control = "public, max-age=31536000"

Optimise your assets: Your assets hold a lot of information, and most of it is useless to the browser. Optimising images will reduce the loading time for each of those assets. Have a look at image_optim; it will automatically shrink public assets on compilation. If you don't want that to happen, you can install the gem and run it manually from the console.

Optimise perceived load time (Turbolinks): I'm a big fan of Turbolinks. I know a lot of people dislike it, but I think it's a great idea. Since it only redraws parts of the page, navigating the app can sometimes feel strange: you click a button, and while the page loads, nothing appears to happen. If you are using Turbolinks, you can enable a nice progress bar, which shows at the top of the page.

Turbolinks.enableProgressBar();  // For the current Turbolinks version (< 3.0)
Turbolinks.ProgressBar.enable(); // For Turbolinks Edge (3.0+)

Then you can use the following CSS to control the progress bar style:

html.turbolinks-progress-bar::before {
  background-color: red !important;
  height: 5px !important;
}

For simplicity, the NProgress gem is a nice shortcut.

So, what's next? Well, implementing all this will certainly help your page load and improve your app's grades in testing tools.
While premature optimisation can be a bad idea, I think this list should be a minimum requirement for every app. Depending on your needs, there could still be a little more work to do. If you are using Heroku, remember that Puma is now the default web server. You can also use New Relic or Skylight.io to catch your app's slowest parts, and speed up your views using fragment caching; the easiest way is to follow the Rails docs and configure MemCachier on your Heroku instance. If you are using background jobs, I recommend having a look at HireFire.io. For $10/month, it will keep watching your dynos, increasing them when your site needs it and decreasing them when they are not in use. I hope this 'guide' has been useful, and if you have any more tips, please share them with us!
https://www.cookieshq.co.uk/posts/how-to-get-more-value-out-of-your-heroku-dynos-and-speed-up-your-rails-application
Why does the on_change method with a one2many parameter return ids instead of values?

I have two modules, one parent and one child. The parent module has a one2many field pointing to the child module. I put an on_change method on the one2many field, with the field itself as the parameter. In the on_change method (in the .py file), when creating a new line in the one2many field, the parameter looks like [(0, 0, {values})]. Then I save the record and try to edit the one2many field. It gives me [(6, 0, [xx, xy])], where xx and xy are the ids of the children. My question is: when I change some value in a child's field, how can the parent know the changed value? I already tried to browse the records xx and xy, but that gives me the old values stored in the db. Someone please enlighten me!

--------------------------------------------------------- EDIT (adding the code and some more explanation about the condition)

This is the parent XML. The field childList has an on_change method that takes the field itself as a parameter:

<record id="parent_form" model="ir.ui.view">
  <field name="name">belajar.parent.form</field>
  <field name="model">belajar.parent</field>
  <field name="arch" type="xml">
    <form string="Parent" version="7.0">
      <group>
        <field name="name"/>
        <field name="childList" on_change="onchange_childList(childList)"/>
      </group>
    </form>
  </field>
</record>

This is the parent class:

class parentA(osv.osv):
    _name = 'belajar.parent'
    _description = "Belajar one2many"
    _columns = {
        'name': fields.char("Parent"),
        'childList': fields.one2many("belajar.child", "parent", "ChildList"),
    }

    def onchange_childList(self, cr, uid, ids, cl, context=None):
        res = {}
        print cl
        return {'value': res}

This is the child XML:

<record id="child_form" model="ir.ui.view">
  <field name="name">belajar.child.form</field>
  <field name="model">belajar.child</field>
  <field name="arch" type="xml">
    <form string="child"
      version="7.0">
      <group>
        <field name="name"/>
      </group>
    </form>
  </field>
</record>

This is the child class:

class childA(osv.osv):
    _name = 'belajar.child'
    _description = "Belajar one2many"
    _columns = {
        'name': fields.char("Child"),
        'parent': fields.many2one("belajar.parent", "Parent"),
    }

When the user edits a child record in the one2many field on the parent form, it goes to onchange_childList and prints some value. That value differs depending on the situation:

- When creating a new row, or editing that new row, it returns [(0, 0, {values})].
- When editing an existing row (a row from a record that has already been saved/created), it returns [(6, 0, [xx, xy])].

How can I get the user's value from an existing row? It only gives me the ID.

Comments:

Hi, are you editing the values from the view, or by using code?

Hi, thanks for the reply. I am editing from the view, like a normal user would. Then I want the user's value to be passed to the parent in the on_change method.
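For context, the tuples Odoo passes around for a one2many field are "commands" whose first element encodes the operation: (0, 0, values) creates a line, (1, id, values) updates an existing line with the changed values, (2, id) deletes, (4, id) links, and (6, 0, ids) replaces the whole set with the given ids. A tiny parser in plain Python illustrates why a (6, 0, [...]) command carries only ids and no field values:

```python
# Interpret Odoo one2many "commands" as passed to on_change methods.
# Only (0, 0, vals) and (1, id, vals) actually carry field values;
# (6, 0, ids) just names existing records by id.
def describe_commands(commands):
    out = []
    for cmd in commands:
        op = cmd[0]
        if op == 0:
            out.append(("create", cmd[2]))
        elif op == 1:
            out.append(("update", cmd[1], cmd[2]))
        elif op == 2:
            out.append(("delete", cmd[1]))
        elif op == 4:
            out.append(("link", cmd[1]))
        elif op == 6:
            out.append(("replace_with_ids", cmd[2]))
        else:
            out.append(("unknown", cmd))
    return out

print(describe_commands([(0, 0, {"name": "new child"})]))
# [('create', {'name': 'new child'})]
print(describe_commands([(6, 0, [11, 12])]))
# [('replace_with_ids', [11, 12])]
```

So when the client sends a (6, 0, ids) replacement rather than per-row (1, id, vals) updates, the server-side on_change has no changed values to look at, only ids, which matches the behaviour described in the question.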
https://www.odoo.com/forum/help-1/question/why-on-change-method-with-one2many-parameter-return-ids-instead-of-value-81029
Ruby SDK travel-sample Application REST Backend

Clone the repository from GitHub and enter the directory:

git clone
cd try-cb-ruby

This is the Ruby SDK travel-sample app.

Using the Sample App

Give yourself a username and password and click Register. Now try out a few queries, and see Search in action with the hotel-finder feature.

Sample App Backend

The backend code shows the Couchbase Ruby SDK in action with Query and Search, and also how to plug all of the elements together to build an application with Couchbase Server and the Ruby SDK.

Here's the airport search code, which checks whether the search term in the query string is a three- or four-letter FAA or ICAO abbreviation and, if not, searches for it as an airport name:

def get_airports(search_param)
  query_type = 'N1QL query - scoped to inventory: '
  query_prep = 'SELECT airportname FROM `travel-sample`.inventory.airport WHERE '
  same_case = search_param == search_param.downcase || search_param == search_param.upcase
  if same_case && search_param.length == 3
    query_prep += "faa=?"
    query_args = [search_param.upcase]
  elsif same_case && search_param.length == 4
    query_prep += "icao=?"
    query_args = [search_param.upcase]
  else
    query_prep += "POSITION(LOWER(airportname), ?) = 0"
    query_args = [search_param.downcase]
  end

  airport_list = []
  options = Cluster::QueryOptions.new
  options.positional_parameters(query_args)
  res = @cluster.query(query_prep, options)
  res.rows.each do |row|
    airport_list.push('airportname' => row['airportname'])
  end

  { 'context' => ["#{query_type} #{query_prep}"], 'data' => airport_list }
end

Data Model

See the Travel App Data Model reference page for more information about the sample data set used.
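The branch that decides between FAA, ICAO and name search can be pulled out into a small standalone helper. The sketch below reproduces just that classification logic from get_airports; the method name classify_airport_search is made up for illustration:

```ruby
# Reproduces the classification branch of get_airports: a same-case
# 3-letter term is treated as an FAA code, a same-case 4-letter term
# as an ICAO code, and anything else as (a prefix of) an airport name.
def classify_airport_search(term)
  same_case = term == term.downcase || term == term.upcase
  if same_case && term.length == 3
    [:faa, term.upcase]
  elsif same_case && term.length == 4
    [:icao, term.upcase]
  else
    [:name, term.downcase]
  end
end

p classify_airport_search("sfo")      # => [:faa, "SFO"]
p classify_airport_search("KSFO")     # => [:icao, "KSFO"]
p classify_airport_search("Heathrow") # => [:name, "heathrow"]
```

Note that mixed-case input like "Heathrow" fails the same-case test, so a four-letter airport name is not mistaken for an ICAO code unless it is typed entirely in one case.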
https://docs.couchbase.com/ruby-sdk/current/hello-world/sample-application.html
When the SAP system landscape is ready, the Sana add-on can be loaded and installed using the SAP Add-On Installation Tool (SAINT). The Sana add-on installation in SAP includes ABAP objects, service definitions and tables developed under our own /SANAECOM/ namespace; therefore, the current code base in your system will not be overwritten. We recommend backing up your SAP system before installing or upgrading the Sana add-on. In most cases the installation will run smoothly, but sometimes, due to various system settings or errors, the installation can be aborted. It is therefore necessary to have a system-restore mechanism in place.

Please refer to SAP Note 2645739: Sana partners or customers who want to install the Sana add-on using one of SAP's installation tools (SAINT, SPAM, SUM) will see the tool raise a warning that the package is not digitally signed. To date, SAP has not provided any mechanism for digitally signing packages released by SAP partners like Sana. It is therefore necessary that you download or request the Sana add-on for SAP only from a valid source, and that you verify you have received a valid and unaltered package. Once you have this confirmation, you can continue despite the signature warning in the import tool.
You will get a notification screen asking: "Do you want to add Modification Adjustment Transports to the queue?", where you can click No (if none are applicable in your installation scenario). After that, a notification message will inform you that the SANAECOM add-on is being installed. Select the checkbox to proceed and wait until the import queue is finished. Click Finish to complete the installation. As a final step, you will be asked to run the SPDD check to verify that no objects in your system will be modified or overwritten by the current add-on installation. When it is checked, you can confirm the adjustments and proceed. The result of the installation procedure should be a notification message stating: "The Add-on Sana [version] was installed successfully". See also: Installing and Upgrading Add-Ons Using SAINT
https://help.sana-commerce.com/sana-commerce-93/installation/install-sana-add-on-in-sap-environment/sana-add-on-installation
From: Christoph Ludwig (ludwig_at_[hidden])
Date: 2006-07-12 06:27:42

On Tue, Jul 11, 2006 at 10:44:50AM -0700, Sean Parent wrote:
> What are the semantics of operator <?
>
> I'd state the following:
[...]
> 4. In the absence of established convention (such as from the domain
> of mathematics), lexicographical ordering by comparing the parts
> should be used.
> Examples from the standard: std::string (which does not sort by
> established linguistic rules), std::pair, std::vector, std::set,
> tr1::tuple.
> Counter examples (which should be fixed): std::complex (real,
> imaginary).

Do I understand correctly that you ask for

namespace std {
  template<class T>
  bool operator<(const complex<T>& lhs, const complex<T>& rhs);
}

in the standard? That makes me certainly cringe!

complex<T> models the _arithmetic_ type of complex numbers. If you order an arithmetic type A, then the ordering better respects the arithmetic. That is (among other conditions), if a, b, c in A, a < b, and 0 < c, then a * c < b * c. That implies, though, there cannot be an ordering that respects the complex arithmetic. [Let a = 0, b = 1i, c = 1i and assume you have an ordering with 0 < 1i. Then a < b, 0 = a * c > b * c = -1, i.e., your ordering does not respect the arithmetic. (If 0 > 1i, then consider b = c = -1i.)]

Sure, you can "forget" the arithmetic structure of the complex numbers and restrict them to their structure as a two-dimensional real vector space; I agree this is often useful and you can order finite dimensional vector spaces. But then either implement the ordering in a named function or define an appropriate type that models (2D) real vector spaces. Defining operator< for complex<T> for convenience's sake seems to me an abuse of operator overloading.

> Mathematicians do not have memory or complexity to deal with
> (mathematicians don't sort) - so arguing that the lack of a rule in
> mathematics implies that one shouldn't exist in computer science is
> vacuous.
You forgot the theory of Groebner bases. Mathematicians most certainly sort, sometimes even with non-standard orders. :-) Regards
https://lists.boost.org/Archives/boost/2006/07/107704.php
#include "apache_server_context.h"

Creates an Apache-specific ServerContext. This differs from the base class that it incorporates by adding per-VirtualHost configuration, including:

This should be called after all configuration parsing is done in order to collapse the configuration inside the config overlays into actual ApacheConfig objects. It will also compute signatures when done. Reimplemented from net_instaweb::SystemServerContext and from net_instaweb::ServerContext.

These return true if the given overlays were constructed (in response to having something in the config files to put in them).

Called on notification from Apache on child exit. Returns true if this is the last ServerContext that exists.

We only proxy external HTML from mod_pagespeed in Apache using the ProxyFetch flow if proxy_all_requests_mode() is on in config. In the usual case, we handle HTML as an Apache filter, letting something like mod_proxy (or one of our own test modes like slurp) do the fetching. Implements net_instaweb::ServerContext.

Reports an error status to the HTTP resource request, logs the error as a Warning to the log file, and bumps a stat as needed.

Reports an error status to the HTTP slurp request, logs the error as a Warning to the log file, and bumps a stat as needed.

Reports an error status to the HTTP statistics request, logs the error as a Warning to the log file, and bumps a stat as needed.

These return configuration objects that hold settings from <ModPagespeedIf spdy> and <ModPagespeedIf !spdy> sections of the configuration. They initialize lazily, so they are not thread-safe; however, they are only meant to be used during configuration parsing. These methods should be called only if there is actually a need to put something in them, since otherwise we may end up constructing separate SPDY vs. non-SPDY configurations needlessly.
https://www.modpagespeed.com/psol/classnet__instaweb_1_1ApacheServerContext.html
WS2812FX (community library)

Summary

A port of the WS2812FX library (by Harm Aldick) for the Particle platform.

Example Build Testing

Device OS Version: This table is generated from an automated build. Success only indicates that the code compiled successfully.

Library Read Me

This content is provided by the library maintainer and has not been validated or approved.

WS2812FX - More Blinken for your LEDs!

This is a Particle-compatible fork of the original WS2812FX library. This library features a variety of blinken effects for the WS2811/WS2812/NeoPixel LEDs. It is meant to be a drop-in replacement for the Particle NeoPixel library with additional features.

Features

- 53 different effects. And counting.
- Free of any delay()
- Tested on Particle Photon.
- All effects with printable names - easy to use in user interfaces.
- FX, speed and brightness controllable on the fly.
- Ready for sound-to-light (see external trigger example)

Download, Install and Example

- Install the Particle NeoPixel library
- Install this library (should show up as WS2812FX under the Particle library browser)

See examples for basic usage. In its simplest form, here's the code to get you started!

```cpp
#include <WS2812FX.h>

#define LED_COUNT 30
#define LED_PIN 12

WS2812FX ws2812fx = WS2812FX(LED_COUNT, LED_PIN, NEO_GRB + NEO_KHZ800);

void setup() {
  ws2812fx.init();
  ws2812fx.setBrightness(100);
  ws2812fx.setSpeed(200);
  ws2812fx.setMode(FX_MODE_RAINBOW_CYCLE);
  ws2812fx.start();
}

void loop() {
  ws2812fx.service();
}
```

More complex effects can be created by dividing your string of LEDs into segments (up to ten) and programming each segment independently.
Use the setSegment() function to program each segment's mode, color, speed and direction (normal or reverse):

- setSegment(segment index, start LED, stop LED, mode, color, speed, reverse);

Note, some effects make use of more than one color (up to three) and are programmed by specifying an array of colors:

- setSegment(segment index, start LED, stop LED, mode, colors[], speed, reverse);

```cpp
// divide the string of LEDs into two independent segments
uint32_t colors[] = {RED, GREEN};
ws2812fx.setSegment(0, 0, (LED_COUNT/2)-1, FX_MODE_BLINK, colors, 1000, false);
ws2812fx.setSegment(1, LED_COUNT/2, LED_COUNT-1, FX_MODE_BLINK, (const uint32_t[]) {ORANGE, PURPLE}, 1000, false);
```

Effects

- Static - No blinking. Just plain old static light.
- Blink - Normal blinking. 50% on/off time.
- Breath - Does the "standby-breathing" of well known i-Devices. Fixed Speed.
- Color Wipe - Lights all LEDs after each other up. Then turns them in that order off. Repeat.
- Color Wipe Inverse - Same as Color Wipe, except swaps on/off colors.
- Color Wipe Reverse - Lights all LEDs after each other up. Then turns them in reverse order off. Repeat.
- Color Wipe Reverse Inverse - Same as Color Wipe Reverse, except swaps on/off colors.
- Color Wipe Random - Turns all LEDs after each other to a random color. Then starts over with another color.
- Random Color - Lights all LEDs in one random color up. Then switches them to the next random color.
- Single Dynamic - Lights every LED in a random color. Changes one random LED after the other to another random color.
- Multi Dynamic - Lights every LED in a random color. Changes all LED at the same time to new random colors.
- Rainbow - Cycles all LEDs at once through a rainbow.
- Rainbow Cycle - Cycles a rainbow over the entire string of LEDs.
- Scan - Runs a single pixel back and forth.
- Dual Scan - Runs two pixels back and forth in opposite directions.
- Fade - Fades the LEDs on and (almost) off again.
- Theater Chase - Theatre-style crawling lights. Inspired by the Adafruit examples.
- Theater Chase Rainbow - Theatre-style crawling lights with rainbow effect. Inspired by the Adafruit examples.
- Running Lights - Running lights effect with smooth sine transition.
- Twinkle - Blink several LEDs on, reset, repeat.
- Twinkle Random - Blink several LEDs in random colors on, reset, repeat.
- Twinkle Fade - Blink several LEDs on, fading out.
- Twinkle Fade Random - Blink several LEDs in random colors on, fading out.
- Sparkle - Blinks one LED at a time.
- Flash Sparkle - Lights all LEDs in the selected color. Flashes single white pixels randomly.
- Hyper Sparkle - Like flash sparkle. With more flash.
- Strobe - Classic Strobe effect.
- Strobe Rainbow - Classic Strobe effect. Cycling through the rainbow.
- Multi Strobe - Strobe effect with different strobe count and pause, controlled by speed setting.
- Blink Rainbow - Classic Blink effect. Cycling through the rainbow.
- Chase White - Color running on white.
- Chase Color - White running on color.
- Chase Random - White running followed by random color.
- Chase Rainbow - White running on rainbow.
- Chase Flash - White flashes running on color.
- Chase Flash Random - White flashes running, followed by random color.
- Chase Rainbow White - Rainbow running on white.
- Chase Blackout - Black running on color.
- Chase Blackout Rainbow - Black running on rainbow.
- Color Sweep Random - Random color introduced alternating from start and end of strip.
- Running Color - Alternating color/white pixels running.
- Running Red Blue - Alternating red/blue pixels running.
- Running Random - Random colored pixels running.
- Larson Scanner - K.I.T.T.
- Comet - Firing comets from one end.
- Fireworks - Firework sparks.
- Fireworks Random - Random colored firework sparks.
- Merry Christmas - Alternating green/red pixels running.
- Fire Flicker - Fire flickering effect. Like in harsh wind.
- Fire Flicker (soft) - Fire flickering effect. Runs slower/softer.
- Fire Flicker (intense) - Fire flickering effect. More range of color.
- Circus Combustus - Alternating white/red/black pixels running.
- Halloween - Alternating orange/purple pixels running.
- Bicolor Chase - Two LEDs running on a background color (set three colors).
- Tricolor Chase - Alternating three color pixels running (set three colors).
- ICU - Two eyes looking around.
- Custom - User created custom effect.

Projects using WS2812FX

- Smart Home project by renat2985 using the ESP8266. Including a nice webinterface in the demo video!
- WiFi LED Star by kitesurfer1404
- McLighting by toblum is a multi-client lighting gadget for ESP8266

Browse Library Files
https://docs.particle.io/reference/device-os/libraries/w/WS2812FX/
Handling 404 Not Found in Asp.Net Core

The Problem

Without additional configuration, this is what a (Chrome) user will see if they visit a URL that does not exist:

Fortunately, it is very simple to handle error status codes. We'll cover three techniques below.

The Solution

In previous versions of Asp.Net MVC, the primary place for handling 404 errors was the web.config. You probably remember the <customErrors> section, which handled 404's from the ASP.NET pipeline, as well as <httpErrors>, which was lower level and handled IIS 404's. It was all a little confusing. In .Net Core, things are different and there is no need to play around with XML config (though you can still use httpErrors in web.config if you are proxying via IIS and you really want to :-)).

There are really two different situations that we need to handle when dealing with not-found errors. The first is the case where the URL doesn't match any route. In this situation, if we cannot ascertain what the user was looking for, we need to return a generic not found page. There are two common techniques for handling this, but first we'll talk about the second situation: where the URL matches a route but one or more parameters are invalid. We can address this with a custom view.

Custom Views

An example for this case would be a product page with an invalid or expired id. Here, we know the user was looking for a product, and instead of returning a generic error, we can be a bit more helpful and return a custom not found page for products. This still needs to return a 404 status code, but we can make the page less generic, perhaps pointing the user at similar or popular products. Handling these cases is trivial.
All we need to do is set the status code before returning our custom view:

```csharp
public async Task<IActionResult> GetProduct(int id)
{
    var viewModel = await _db.Get<Product, GetProductViewModel>(id);
    if (viewModel == null)
    {
        Response.StatusCode = 404;
        return View("ProductNotFound");
    }
    return View(viewModel);
}
```

Of course, you might prefer to wrap this up into a custom action result:

```csharp
public class NotFoundViewResult : ViewResult
{
    public NotFoundViewResult(string viewName)
    {
        ViewName = viewName;
        StatusCode = (int)HttpStatusCode.NotFound;
    }
}
```

This simplifies our action slightly:

```csharp
public async Task<IActionResult> GetProduct(int id)
{
    var viewModel = await _db.Get<Product, GetProductViewModel>(id);
    if (viewModel == null)
    {
        return new NotFoundViewResult("ProductNotFound");
    }
    return View(viewModel);
}
```

This easy technique covers specific 404 pages. Let's now look at generic 404 errors where we cannot work out what the user was intending to view.

Catch-all route

Creating a catch-all route was possible in previous versions of MVC, and in .Net Core it works in exactly the same way. The idea is that you have a wildcard route that will pick up any URL that has not been handled by any other route. Using attribute routing, this is written as:

```csharp
[Route("{*url}", Order = 999)]
public IActionResult CatchAll()
{
    Response.StatusCode = 404;
    return View();
}
```

It is important to specify the Order to ensure that the other routes take priority. A catch-all route works reasonably well, but it is not the preferred option in .Net Core. While a catch-all route will handle 404's, the next technique will handle any non-success status code, so you can do the following (probably in an action filter in production):

```csharp
public async Task<IActionResult> GetProduct(int id)
{
    ...
    if (RequiresThrottling())
    {
        return new StatusCodeResult(429);
    }
    if (!HasPermission(id))
    {
        return Forbid();
    }
    ...
}
```

Status Code Pages With Re-Execute

StatusCodePagesWithReExecute is a clever piece of middleware that handles non-success status codes where the response has not already started. This means that if you use the custom view technique detailed above, then the 404 status code will not be handled by the middleware (which is exactly what we want). When an error code such as a 404 is returned from an inner middleware component, StatusCodePagesWithReExecute allows you to execute a second controller action to handle the status code. You add it to the pipeline with a single command in startup.cs:

```csharp
app.UseStatusCodePagesWithReExecute("/error/{0}");
...
app.UseMvc();
```

The order of middleware definition is important and you need to ensure that StatusCodePagesWithReExecute is registered before any middleware that could return an error code (such as the MVC middleware). You can specify a fixed path to execute or use a placeholder for the status code value as we have done above. You can also point to both static pages (assuming that you have the StaticFiles middleware in place) and controller actions. In this example, we have a separate action for 404. Any other non-success status code hits the Error action.

```csharp
[Route("error/404")]
public IActionResult Error404()
{
    return View();
}

[Route("error/{code:int}")]
public IActionResult Error(int code)
{
    // handle different codes or just return the default error view
    return View();
}
```

Obviously, you can tailor this to your needs. For example, if you are using request throttling as we showed in the previous section, then you can return a 429-specific page explaining why the request failed.

Conclusion

Together, these two techniques are the preferred methods for handling non-success HTTP status codes in Asp.Net Core. By adding StatusCodePagesWithReExecute to the pipeline as we have done above, it will run for all requests, but this may not be what we want all of the time.
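The article suggests moving the throttling check into an action filter for production. Here is a minimal sketch of what that could look like; the filter name and the RequiresThrottling helper are illustrative assumptions, not from the article:

```csharp
// Hypothetical action filter sketch: short-circuits throttled requests
// with a 429 before the action runs. RequiresThrottling is a placeholder
// for whatever rate-limiting check your application uses.
public class ThrottlingFilter : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext context)
    {
        if (RequiresThrottling(context.HttpContext))
        {
            // Setting Result short-circuits the pipeline; the action never runs
            context.Result = new StatusCodeResult(429);
        }
    }

    private bool RequiresThrottling(HttpContext httpContext)
    {
        // placeholder: consult your rate-limiting logic/store here
        return false;
    }
}
```

With the check centralized in a filter, individual actions stay focused on their happy path, and the 429 flows into the StatusCodePagesWithReExecute handling described below just like any other non-success status code.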
In the next post, we will look at how to handle projects containing both MVC and API actions where we want to respond differently to 404's for each type.
https://www.devtrends.co.uk/blog/handling-404-not-found-in-asp.net-core
3 practical Python tools: magic methods, iterators and generators, and method magic

Elegant solutions for everyday Python problems: 3 tools to make your Python code more elegant, readable, intuitive, and easy to maintain.

Magic methods

Magic methods can be considered the plumbing of Python. They're the methods that are called "under the hood" for certain built-in methods, symbols, and operations. A common magic method you may be familiar with is __init__(), which is called when we want to initialize a new instance of a class. You may have seen other common magic methods, like __str__ and __repr__. There is a whole world of magic methods, and by implementing a few of them, we can greatly modify the behavior of an object or even make it behave like a built-in datatype, such as a number, list, or dictionary.

Let's take this Money class for example:

```python
class Money:

    currency_rates = {
        '$': 1,
        '€': 0.88,
    }

    def __init__(self, symbol, amount):
        self.symbol = symbol
        self.amount = amount

    def __repr__(self):
        return '%s%.2f' % (self.symbol, self.amount)

    def convert(self, other):
        """ Convert other amount to our currency """
        new_amount = (
            other.amount / self.currency_rates[other.symbol]
            * self.currency_rates[self.symbol])
        return Money(self.symbol, new_amount)
```

The class defines a currency rate for a given symbol and exchange rate, specifies an initializer (also known as a constructor), and implements __repr__, so when we print out the class, we see a nice representation such as $2.00 for an instance Money('$', 2.00) with the currency symbol and amount. Most importantly, it defines a method that allows you to convert between different currencies with different exchange rates.
Using a Python shell, let's say we've defined the costs for two food items in different currencies, like so:

```python
>>> soda_cost = Money('$', 5.25)
>>> soda_cost
$5.25
>>> pizza_cost = Money('€', 7.99)
>>> pizza_cost
€7.99
```

We could use magic methods to help instances of this class interact with each other. Let's say we wanted to be able to add two instances of this class together, even if they were in different currencies. To make that a reality, we could implement the __add__ magic method on our Money class:

```python
class Money:

    # ... previously defined methods ...

    def __add__(self, other):
        """ Add 2 Money instances using '+' """
        new_amount = self.amount + self.convert(other).amount
        return Money(self.symbol, new_amount)
```

Now we can use this class in a very intuitive way:

```python
>>> soda_cost = Money('$', 5.25)
>>> pizza_cost = Money('€', 7.99)
>>> soda_cost + pizza_cost
$14.33
>>> pizza_cost + soda_cost
€12.61
```

When we add two instances together, we get a result in the first defined currency. All the conversion is done seamlessly under the hood. If we wanted to, we could also implement __sub__ for subtraction, __mul__ for multiplication, and many more. Read about emulating numeric types, or read this guide to magic methods for others.

We learned that __add__ maps to the built-in operator +. Other magic methods can map to symbols like []. For example, to access an item by index or key (in the case of a dictionary), use the __getitem__ method:

```python
>>> d = {'one': 1, 'two': 2}
>>> d['two']
2
>>> d.__getitem__('two')
2
```

Some magic methods even map to built-in functions, such as __len__(), which maps to len().

```python
class Alphabet:
    letters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'

    def __len__(self):
        return len(self.letters)

>>> my_alphabet = Alphabet()
>>> len(my_alphabet)
26
```

Custom iterators

Custom iterators are an incredibly powerful but unfortunately confusing topic to new and seasoned Pythonistas alike.
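Before diving into iterators, here is a minimal sketch of the __sub__ method mentioned in the magic-methods section above. The Money class is re-declared in compact form so the snippet is self-contained; the subtraction semantics (result in the left operand's currency, mirroring __add__) are my assumption:

```python
class Money:
    """Compact, self-contained version of the Money class above."""
    currency_rates = {'$': 1, '€': 0.88}

    def __init__(self, symbol, amount):
        self.symbol = symbol
        self.amount = amount

    def __repr__(self):
        return '%s%.2f' % (self.symbol, self.amount)

    def convert(self, other):
        # Convert other's amount into our currency
        new_amount = (other.amount / self.currency_rates[other.symbol]
                      * self.currency_rates[self.symbol])
        return Money(self.symbol, new_amount)

    def __sub__(self, other):
        """Subtract two Money instances using '-'; result in self's currency."""
        new_amount = self.amount - self.convert(other).amount
        return Money(self.symbol, new_amount)

soda_cost = Money('$', 5.25)
pizza_cost = Money('€', 7.99)
print(soda_cost - pizza_cost)  # $-3.83 (the pizza costs more than the soda)
```

As with __add__, all the currency conversion happens under the hood, and the same pattern extends to __mul__, __truediv__, and the other numeric magic methods.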
Many built-in types, such as lists, sets, and dictionaries, already implement the protocol that allows them to be iterated over under the hood. This allows us to easily loop over them.

```python
>>> for food in ['Pizza', 'Fries']:
        print(food + '. Yum!')

Pizza. Yum!
Fries. Yum!
```

How can we iterate over our own custom classes? First, let's clear up some terminology.

- To be iterable, a class needs to implement __iter__()
- The __iter__() method needs to return an iterator
- To be an iterator, a class needs to implement __next__() (or next() in Python 2), which must raise a StopIteration exception when there are no more items to iterate over.

Whew! It sounds complicated, but once you remember these fundamental concepts, you'll be able to iterate in your sleep.

When might we want to use a custom iterator? Let's imagine a scenario where we have a Server instance running different services such as http and ssh on different ports. Some of these services have an active state while others are inactive.

```python
class Server:

    services = [
        {'active': False, 'protocol': 'ftp', 'port': 21},
        {'active': True, 'protocol': 'ssh', 'port': 22},
        {'active': True, 'protocol': 'http', 'port': 80},
    ]
```

When we loop over our Server instance, we only want to loop over active services. Let's create a new class, an IterableServer:

```python
class IterableServer:

    def __init__(self):
        self.current_pos = 0

    def __next__(self):
        pass  # TODO: Implement and remember to raise StopIteration
```

First, we initialize our current position to 0. Then, we define a __next__() method, which will return the next item. We'll also ensure that we raise StopIteration when there are no more items to return. So far so good! Now, let's implement this __next__() method.

```python
class IterableServer:

    def __init__(self):
        # we initialize our current position to zero
        self.current_pos = 0

    def __iter__(self):
        # we can return self here, because __next__ is implemented
        return self

    def __next__(self):
        while self.current_pos < len(self.services):
            service = self.services[self.current_pos]
            self.current_pos += 1
            if service['active']:
                return service['protocol'], service['port']
        raise StopIteration

    next = __next__  # optional python2 compatibility
```

We keep looping over the services in our list while our current position is less than the length of the services, but only returning if the service is active. Once we run out of services to iterate over, we raise a StopIteration exception.

Because we implement a __next__() method that raises StopIteration when it is exhausted, we can return self from __iter__() because the IterableServer class adheres to the iterable protocol.

Now we can loop over an instance of IterableServer, which will allow us to look at each active service, like so:

```python
>>> for protocol, port in IterableServer():
        print('service %s is running on port %d' % (protocol, port))

service ssh is running on port 22
service http is running on port 80
```

That's pretty great, but we can do better! In an instance like this, where our iterator doesn't need to maintain a lot of state, we can simplify our code and use a generator instead.

```python
class Server:

    services = [
        {'active': False, 'protocol': 'ftp', 'port': 21},
        {'active': True, 'protocol': 'ssh', 'port': 22},
        {'active': True, 'protocol': 'http', 'port': 80},
    ]

    def __iter__(self):
        for service in self.services:
            if service['active']:
                yield service['protocol'], service['port']
```

What exactly is the yield keyword? Yield is used when defining a generator function. It's sort of like a return. While a return exits the function after returning the value, yield suspends execution until the next time it's called. This allows your generator function to maintain state until it resumes. Check out yield's documentation to learn more.
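The suspend-and-resume behavior of yield can be seen by driving a generator manually with next(). This is a small illustrative sketch, separate from the Server class above, using a standalone generator function:

```python
def active_services():
    # Each yield suspends the function; its state (the loop position)
    # is preserved until next() is called again.
    services = [
        {'active': False, 'protocol': 'ftp', 'port': 21},
        {'active': True, 'protocol': 'ssh', 'port': 22},
        {'active': True, 'protocol': 'http', 'port': 80},
    ]
    for service in services:
        if service['active']:
            yield service['protocol'], service['port']

gen = active_services()
print(next(gen))   # ('ssh', 22)
print(next(gen))   # ('http', 80)
try:
    next(gen)      # exhausted: the generator raises StopIteration
except StopIteration:
    print('done')
```

This is exactly the iterator protocol from the bullet list earlier, implemented for us: the generator object supplies __iter__() and __next__(), and raises StopIteration when the function body runs out of yields.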
With a generator, we don't have to manually maintain state by remembering our position. A generator knows only two things: what it needs to do right now and what it needs to do to calculate the next item. Once we reach a point of execution where yield isn't called again, we know to stop iterating. This works because of some built-in Python magic. In the Python documentation for __iter__() we can see that if __iter__() is implemented as a generator, it will automatically return an iterator object that supplies the __iter__() and __next__() methods. Read this great article for a deeper dive of iterators, iterables, and generators.

Method magic

Due to its unique aspects, Python provides some interesting method magic as part of the language. One example of this is aliasing functions. Since functions are just objects, we can assign them to multiple variables. For example:

```python
>>> def foo():
        return 'foo'

>>> foo()
'foo'
>>> bar = foo
>>> bar()
'foo'
```

We'll see later on how this can be useful.

Python provides a handy built-in, called getattr(), that takes the object, name, default parameters and returns the attribute name on object. This programmatically allows us to access instance variables and methods. For example:

```python
>>> class Dog:
        sound = 'Bark'
        def speak(self):
            print(self.sound + '!', self.sound + '!')

>>> fido = Dog()
>>> fido.sound
'Bark'
>>> getattr(fido, 'sound')
'Bark'
>>> fido.speak
<bound method Dog.speak of <__main__.Dog object at 0x102db8828>>
>>> getattr(fido, 'speak')
<bound method Dog.speak of <__main__.Dog object at 0x102db8828>>
>>> fido.speak()
Bark! Bark!
>>> speak_method = getattr(fido, 'speak')
>>> speak_method()
Bark! Bark!
```

Cool trick, but how could we practically use getattr? Let's look at an example that allows us to write a tiny command-line tool to dynamically process commands.
```python
class Operations:
    def say_hi(self, name):
        print('Hello,', name)

    def say_bye(self, name):
        print('Goodbye,', name)

    def default(self, arg):
        print('This operation is not supported.')

if __name__ == '__main__':
    operations = Operations()
    # let's assume we do error handling
    command, argument = input('> ').split()
    func_to_call = getattr(operations, command, operations.default)
    func_to_call(argument)
```

The output of our script is:

```shell
$ python getattr.py
> say_hi Nina
Hello, Nina
> blah blah
This operation is not supported.
```

Next, we'll look at partial. For example, functools.partial(func, *args, **kwargs) allows you to return a new partial object that behaves like func called with args and kwargs. If more args are passed in, they're appended to args. If more kwargs are passed in, they extend and override kwargs. Let's see it in action with a brief example:

```python
>>> from functools import partial
>>> basetwo = partial(int, base=2)
>>> basetwo
<functools.partial object at 0x1085a09f0>
>>> basetwo('10010')
18

# This is the same as
>>> int('10010', base=2)
```

Let's see how this method magic ties together in some sample code from a library I enjoy using called agithub, which is a (poorly named) REST API client with transparent syntax that allows you to rapidly prototype any REST API (not just GitHub) with minimal configuration. I find this project interesting because it's incredibly powerful yet only about 400 lines of Python. You can add support for any REST API in about 30 lines of configuration code. agithub knows everything it needs to about protocol (REST, HTTP, TCP), but it assumes nothing about the upstream API. Let's dive into the implementation.

Here's a simplified version of how we'd define an endpoint URL for the GitHub API and any other relevant connection properties. View the full code instead.
```python
class GitHub(API):

    def __init__(self, token=None, *args, **kwargs):
        props = ConnectionProperties(api_url=kwargs.pop('api_url', 'api.github.com'))
        self.setClient(Client(*args, **kwargs))
        self.setConnectionProperties(props)
```

Then, once your access token is configured, you can start using the GitHub API.

```python
>>> gh = GitHub('token')
>>> status, data = gh.user.repos.get(visibility='public', sort='created')
>>> # ^ Maps to GET /user/repos
>>> data
... ['tweeter', 'snipey', '...']
```

Note that it's up to you to spell things correctly. There's no validation of the URL. If the URL doesn't exist or anything else goes wrong, the error thrown by the API will be returned. So, how does this all work? Let's figure it out.

First, we'll check out a simplified example of the API class:

```python
class API:

    # ... other methods ...

    def __getattr__(self, key):
        return IncompleteRequest(self.client).__getattr__(key)

    __getitem__ = __getattr__
```

Each call on the API class ferries the call to the IncompleteRequest class for the specified key.

```python
class IncompleteRequest:

    # ... other methods ...

    def __getattr__(self, key):
        if key in self.client.http_methods:
            htmlMethod = getattr(self.client, key)
            return partial(htmlMethod, url=self.url)
        else:
            self.url += '/' + str(key)
            return self

    __getitem__ = __getattr__


class Client:

    http_methods = ('get')  # ... and post, put, patch, etc.

    def get(self, url, headers={}, **params):
        return self.request('GET', url, None, headers)
```

If the last call is not an HTTP method (like 'get', 'post', etc.), it returns an IncompleteRequest with an appended path. Otherwise, it gets the right function for the specified HTTP method from the Client class and returns a partial.

What happens if we give a non-existent path?

```python
>>> status, data = this.path.doesnt.exist.get()
>>> status
... 404
```

And because __getitem__ is aliased to __getattr__:

```python
>>> owner, repo = 'nnja', 'tweeter'
>>> status, data = gh.repos[owner][repo].pulls.get()
>>> # ^ Maps to GET /repos/nnja/tweeter/pulls
>>> data
... # {....}
```

Now that's some serious method magic!

Learn more

Python provides plenty of tools that allow you to make your code more elegant and easier to read and understand. The challenge is finding the right tool for the job, but I hope this article added some new ones to your toolbox. And, if you'd like to take this a step further, you can read about decorators, context managers, context generators, and NamedTuples on my blog nnja.io. As you become a better Python developer, I encourage you to get out there and read some source code for well-architected projects. Requests and Flask are two great codebases to start with.

To learn more about these topics, as well as decorators, context managers, context decorators, and NamedTuples, attend Nina Zakharenko's talk, Elegant Solutions for Everyday Python Problems, at PyCon Cleveland 2018.
https://opensource.com/article/18/4/elegant-solutions-everyday-python-problems
After you've installed LeanTouch, there is a Lean > Touch entry under the GameObject menu.

2. An example of creating a prefab by clicking on the screen.
2.1 Build a scene and add LeanTouch.
2.2 Build a cube and drag it into Assets to turn it into a prefab.
2.3 On the LeanTouch GameObject, add a LeanSpawn component and drag the cube into its Prefab field.
2.4 Next, add a LeanFingerDown component; for the clicked event, select the object (the LeanTouch GameObject, i.e. the one holding LeanSpawn) and choose LeanSpawn.Spawn as the function.
2.5 Distance in LeanSpawn is the distance to the camera. Once this is done, clicking on the screen will spawn a cube.

3. A simple pattern for C#.
3.1 Add click events.
3.2 Get finger information in real time.
3.3 Lean.Touch.LeanGesture can implement multi-finger operations, such as rotation.
3.4 Leave the UI untouched (no action over the UI).
3.5 Use `using Lean.Touch;`.
3.6 Combinations provide multiple operations, e.g. LeanGesture.GetPinchScale(LeanTouch.GetFingers(true)).

4. Script execution order.
4.1 Edit it from Project Settings > Script Execution Order.
4.2 Set the order.

5. Camera scripts.
5.1 LeanCameraMove. Lets the camera be dragged by finger.
5.2 LeanCameraZoom. Lets the camera be zoomed.
5.3 LeanCameraMoveSmooth. Lets the camera be dragged by finger, with a smoother effect.
5.4 LeanCameraZoomSmooth. Lets the camera be zoomed, with a smoother effect.
5.5 LeanPitchYaw. Lets the camera orbit and look around the space.
5.6 LeanPitchYawSmooth. Lets the camera orbit around the space, with a smoother effect.

6. Components.
6.1 LeanFingerTap. Tap; for example, when RequiredTapInterval is 2 it responds to double taps. When IgnoreIfOverGui is set, taps over the UI are ignored.
6.2 LeanSpawn. Set a prefab and set the distance from the camera.
6.3 LeanDestroy. Set the time to destroy.
6.4 LeanFingerHeld. Long press.
6.5 LeanFingerLine. Drag to create a line.
6.6 LeanMultiTap. By connecting a script, it passes count and highest.
6.7 LeanFingerTrail. Keeps track of your fingers.
6.8 LeanOpenUrl. Links can be opened through a button.
6.9 LeanPressSelect. Sets objects to be clicked and dragged. After LeanPressSelect is set up, clicking multiple points saves them in CurrentSelectables.
6.10 LeanSelectableSpriteRendererColor. By adding a LeanSelectableSpriteRendererColor script on a clickable object, you can change its color on click. Note that the clickable object needs a CircleCollider2D (or a 3D collision component) and a SpriteRenderer, and LeanSelectable is also required.
6.11 LeanSelectable. By adding an action to select, LeanSelectable can be used with LeanDestroy.DestroyNow to remove an object.
6.12 LeanTranslate. Adding LeanTranslate to the object, together with the above components, implements object dragging; LeanTranslateSmooth smooths the movement.
6.13 Making 3D objects clickable and draggable: in LeanPressSelect, set SelectUsing to Raycast; a LeanSelectableRendererColor can set the color change of a 3D object.
6.14 LeanSelectableTranslateInertia3D. Can be combined with a rigidbody to set up dragging with inertia.
6.15 LeanTapSelect. Tap to select an object; the selected object is recorded in CurrentSelectable, and only one can be selected at a time.
6.16 LeanRotate. Rotate; set which of the x, y, z axes respond to rotation.
6.17 LeanScale. Zoom in and out.

7. Swipes.
7.1 LeanSwipeDirection4. Swipe in four directions.
7.2 LeanSwipeDirection8. Swipe in eight directions.
7.3 LeanSwipeRigidbody2D/3D and LeanSwipeRigidbody2D/3DNoRelease. Rigidbody swiping.

8. Events.
8.1 LeanTouchEvents.

9. How to make an object pushable by tap.
using UnityEngine;
using System.Collections.Generic;

namespace Lean.Touch
{
    // This script hooks into the LeanTouch events and spams the console with their information.
    public class LeanTouchEvents : MonoBehaviour
    {
        protected virtual void OnEnable()
        {
            // Hook into the events we need.
            LeanTouch.OnFingerDown += OnFingerDown;
            LeanTouch.OnFingerSet += OnFingerSet;
            LeanTouch.OnFingerUp += OnFingerUp;
            LeanTouch.OnFingerTap += OnFingerTap;
            LeanTouch.OnFingerSwipe += OnFingerSwipe;
            LeanTouch.OnGesture += OnGesture;
        }

        protected virtual void OnDisable()
        {
            // Unhook the events.
            LeanTouch.OnFingerDown -= OnFingerDown;
            LeanTouch.OnFingerSet -= OnFingerSet;
            LeanTouch.OnFingerUp -= OnFingerUp;
            LeanTouch.OnFingerTap -= OnFingerTap;
            LeanTouch.OnFingerSwipe -= OnFingerSwipe;
            LeanTouch.OnGesture -= OnGesture;
        }

        public void OnFingerDown(LeanFinger finger)
        {
            Debug.Log("Finger " + finger.Index + " began touching the screen");
        }

        public void OnFingerSet(LeanFinger finger)
        {
            Debug.Log("Finger " + finger.Index + " is still touching the screen");
        }

        public void OnFingerUp(LeanFinger finger)
        {
            Debug.Log("Finger " + finger.Index + " finished touching the screen");
        }

        public void OnFingerTap(LeanFinger finger)
        {
            Debug.Log("Finger " + finger.Index + " tapped the screen");
        }

        public void OnFingerSwipe(LeanFinger finger)
        {
            Debug.Log("Finger " + finger.Index + " swiped the screen");
        }

        public void OnGesture(List<LeanFinger> fingers)
        {
            Debug.Log("Gesture with " + fingers.Count + " finger(s)");
            Debug.Log("  pinch scale: " + LeanGesture.GetPinchScale(fingers));
            Debug.Log("  twist degrees: " + LeanGesture.GetTwistDegrees(fingers));
            Debug.Log("  twist radians: " + LeanGesture.GetTwistRadians(fingers));
            Debug.Log("  screen delta: " + LeanGesture.GetScreenDelta(fingers));
        }
    }
}

9.1 Add a LeanTouch to the scene.
9.2 Add a LeanTapSelect and set its Select Using option to 2D or 3D.
9.3 On the object, add LeanSelectable and make sure a collider is attached.
9.4 You can use some of LeanSelect's built-in functions, or add your own actions to its events: drag the object carrying the LeanSelectable into the event slot and select a handler for it.
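To answer note 9 concretely, here is a minimal sketch of a script-based tap-to-push component. It assumes LeanTouch's static OnFingerTap event and LeanFinger.GetRay from the LeanTouch API; the TapToPush class name and the pushForce value are illustrative, not part of the asset.

```csharp
using UnityEngine;
using Lean.Touch;

// Hypothetical component: any rigidbody hit by a tap is pushed away from the camera.
public class TapToPush : MonoBehaviour
{
    // Illustrative tuning value.
    public float pushForce = 5f;

    void OnEnable()  { LeanTouch.OnFingerTap += HandleFingerTap; }
    void OnDisable() { LeanTouch.OnFingerTap -= HandleFingerTap; }

    void HandleFingerTap(LeanFinger finger)
    {
        // Cast a ray from the tapped screen point into the scene.
        Ray ray = finger.GetRay(Camera.main);
        RaycastHit hit;

        if (Physics.Raycast(ray, out hit) && hit.rigidbody != null)
        {
            // Push the tapped object along the ray direction.
            hit.rigidbody.AddForce(ray.direction * pushForce, ForceMode.Impulse);
        }
    }
}
```

Attach this to any active GameObject; the tapped object itself only needs a collider and a Rigidbody. Notes 9.1 through 9.4 above describe the component-only (no-code) route using LeanTapSelect and LeanSelectable.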
https://www.dowemo.com/article/46327/unity-leantouch-notes
Developing for SharePoint 2010 Excel Services Using Web Services or the Excel Services REST API

Summary: Learn about the new client services features that are available in Microsoft SharePoint Server 2010, including Word Automation Services and Excel Services.

Applies to: Business Connectivity Services | Office 2007 | Office 2010 | Open XML | SharePoint Designer 2010 | SharePoint Foundation 2010 | SharePoint Online | SharePoint Server 2010 | Visual Studio | Word Automation Services

Provided by: John Peltonen, 3Sharp

Contents

- Microsoft Office 2010 Server Applications Overview
- Excel Web Access Web Part
- Unsupported Client Features
- New Features in Excel Web Services
- User-Defined Functions
- ECMAScript (JScript, JavaScript) Object Model
- REST API
- Setting Values

Download Excel Services Sample for Microsoft SharePoint Server 2010 (xlrestform.zip)

Microsoft Office 2010 Server Applications Overview

Users are spending more of their time in the Microsoft SharePoint Server environment, and with good reason. As the functionality of this environment expands to include features such as rich social networking capabilities, task and calendar management, workflow, and forms (both no-code and code), SharePoint Server 2010 becomes more of a destination instead of just a data repository. Of course, SharePoint Server has been an excellent tool to store and manage documents produced by the Microsoft Office client applications. SharePoint Server 2010 expands the realm of what users can do with these documents from within the server. Microsoft Word, Microsoft Excel, Microsoft OneNote, Microsoft Visio, Microsoft PowerPoint, Microsoft Access, Microsoft Project, and, of course, Microsoft InfoPath all have expanded functionality on the server. Word, Excel, OneNote, and PowerPoint have corresponding Web apps that enable users to open and edit documents directly within SharePoint Server without a client.
Furthermore, each supports a multiuser coauthoring scenario where more than one user can be editing the same document at the same time. Visio Services enables users to view Visio diagrams hosted in SharePoint 2010 within a browser. If Microsoft Silverlight is installed, the user will have an interactive click-through experience for panning, zooming, and following hyperlinks. If Silverlight is not installed, the diagram is rendered in PNG format. Furthermore, these diagrams can be data-driven, leading to very rich dashboard-style displays. This feature is used directly in SharePoint 2010 for visual reporting on Microsoft SharePoint Designer workflow status. Access has always been a powerful desktop application platform. In 2010, Access also becomes a server-side application development platform, as you can now publish Access databases as SharePoint sites. Data, forms, graphics, reports, and even simple logic can all be published into the site. From that point forward, users can access the same data and functionality, either from an Access client or from a browser. This can help to reduce the headache of multiple MDB format versions on desktops and email, as each MDB file references the data and logic hosted within SharePoint Server. Users can also manage the look, feel, and behavior of the published site directly through the Access client without involving the IT department. This article does not address any of these features. Instead, it focuses on very specific, mostly programmatic aspects of Excel Services. There are very powerful additions to Excel Services in SharePoint Server 2010, including two new programmatic models, the ECMAScript (JScript, JavaScript) object model and the Excel Services REST APIs. Word Automation Services is also a new feature that enables Word automation on the server without requiring the Word client application.
For information about developing using Word Automation Services, see Developing with SharePoint 2010 Word Automation Services.

Using Excel Services

From the perspective of each Office application, Excel Services provides by far the widest and most varied level of functionality through its services offering. You have three object models to choose from when you develop solutions (each of which is addressed in this article). In addition, great improvements have been made to the Excel Web Access Web Part, which enables much richer user interaction with an entire workbook or a named object. Finally, you can write custom server-side user-defined functions that can be called from a workbook that is hosted on the server. Each of the following sections addresses one of these features.

Excel Web Access Web Part

The Excel Web Access Web Part first appeared in Microsoft Office SharePoint Server 2007. Its goal was to render Excel worksheets with a high degree of fidelity. The Web Part could be pointed at a worksheet that was hosted inside SharePoint Server that was published through Excel Services. Users were presented with a fixed presentation of the targeted content in the worksheet. Excel Services in SharePoint Server 2010 has expanded on the base functionality that was provided in SharePoint Server 2007 to improve the user experience while interacting with the hosted workbook. First, more of the base-level Excel client features are supported in the Excel Web Access Web Part. The Web Part now supports scrolling. In SharePoint Server 2007, you could select a fixed grid size to expose in the Web Part, but users could not navigate to other areas. Now, if you decide to publish the entire workbook instead of a named object, the workbook is fully available to users. They can scroll within a worksheet or even navigate to other worksheets. Figure 2 shows a sample Excel Web Access Web Part in use.

Figure 2.
Sample Excel Web Access Web Part

As you probably noticed in Figure 2, graphics are also supported. Additionally, users can interact directly with the worksheet values, automatically triggering workbook recalculations. For example, a user can modify the values in the monthly projects table shown in Figure 2 to see how they would affect the chart (mostly off screen to the right). Performance has also improved because the Web Part no longer requires a page refresh to recalculate the workbook. As in SharePoint Server 2007, the Excel Web Access Web Part provides a read-only version of the worksheet, with changes stored only with the user's session.

Unsupported Client Features

In SharePoint Server 2007, if the Excel Web Access Web Part encountered any unsupported features, it would not load the workbook. This led to certain frustrating situations where, for example, a cell comment on Sheet3 prevented someone from publishing a pivot table on Sheet1 through Excel Web Access. The Excel Services team has made some solid progress in this area, by classifying the types of unsupported functionality. Now, cell comments, Microsoft Visual Basic for Applications (VBA) code, and Microsoft Office art (including SmartArt) are ignored when the workbook is open. In addition, QueryTable objects and external workbook references are ignored, but the values that were last rendered by the client application are displayed. Finally, there is still a set of unsupported client functionalities. The SharePoint Server 2007 list, provided in Differences Between Using a Workbook in Excel and Excel Services, describes the unsupported functionality, with the exception of the items mentioned earlier. Most of the new Microsoft Excel 2010 features are supported to some extent in the Excel Web Access Web Part.
This list from the Excel Services Team Blog includes many of the new Microsoft Excel 2010 client features and the level of Excel Web Access support for them:

- Icon Improvements in Excel 2010
- Data Bar Improvements in Excel 2010
- Function Improvements in Excel 2010
- Easy (and Even Fun!) Data Exploration: Introducing Excel 2010 Slicers
- PivotTable Named Sets in Excel 2010
- A Few More PivotTable Improvements in Excel 2010

The original list is available on the Microsoft Excel Team Blog.

For more information about Excel Web Access, see the following articles on the Microsoft Excel Team Blog:

- Excel Services 2010 Overview
- Excel Services in SharePoint 2010 Dashboard Improvements
- Excel Services in SharePoint 2010 Administration Improvements
- Excel Services in SharePoint 2010 Feature Support
- Uncovering Publish to Excel Services in Excel 2010

New Features in Excel Web Services

Many of the programmatic Excel Services features, such as the REST API and the ECMAScript (JavaScript, JScript) object model, are new to this release of Microsoft SharePoint Server. However, Excel Services Web services first appeared in Microsoft Office SharePoint Server 2007. In SharePoint Server 2007, you could open a worksheet and then make changes, calculate, and read specific values out of the worksheet. However, you could not save changes (all your changes were transient with your session). And, you could not load charts. These two items represent the two most-requested features of Excel Services Web services. These features and others are now available in SharePoint Server 2010. Table 1 contains the new methods that are added with this release and comes almost directly from the blog of a senior Excel Services developer, Shahar Prish.

Table 1. List of new methods

The example shown in Figure 3 uses the GetChartImageURL method and the OpenWorkbookForEditing method. At no point does the sample application launch an explicit Save command (SaveWorkbook).
When the workbook is open for editing, changes are written directly to the worksheet, so saves are not necessary. When the workbook is open for editing, the application's credentials appear as a co-author in Excel. This means that if other people are also editing the document, they see that this application is editing it, and they also see the application's changes saved in real time. The application requires a specific workbook that estimates factory line output based on modifications to the line speed. The application is a simple Windows Forms application that loads the worksheet in read-only mode or read/write mode, lets you adjust the modifier, and then shows the resulting chart of estimated output.

Figure 3. Sample Excel Web Services application

First, you must make a Web reference to the Excel Services Web service. To add the reference, in Microsoft Visual Studio, in Solution Explorer, right-click Service Reference, and then select Add Service Reference. This opens the Service Reference window. You cannot use this window, because the Excel Services service is not based on Windows Communication Foundation (WCF). Instead, click Advanced to open the Service Reference Settings window, and then click Add Web Reference to load the Add Web Reference window (see Figure 4). At this point, you can enter the Excel Services URL:

name/_vti_bin/ExcelService.asmx?wsdl

You probably also want to give your reference a friendly name. I chose XLSvc, as shown in Figure 4.

Figure 4. Creating the XLSvc Web reference

The following are the using statements to add. Both of these are for the bookkeeping work of retrieving an image from a URL, which is what GetChartImageURL provides. I first tried to load it directly in the PictureBox control (PictureBox.Load(url)), but I could not find a way to pass my credentials on with the internal URL request from the control.
So, instead, I had to do something different, which you will see in the GetImageFromURL helper function in the following example.

// For WebRequest to retrieve the Chart URL.
using System.Net;

// For streams (when converting the image on the end of the URL to an "Image").
using System.IO;

Use the following global variables to track the Excel Services session and, obviously, the Web service itself.

// Track the current session.
string sessionId = string.Empty;

// Excel Services reference.
XLSvc.ExcelService cli;

Open the workbook for editing (the read-only variant differs only in the method it calls, as shown later).

private void OpenWrite_Click(object sender, EventArgs e)
{
    XLSvc.Status[] status;

    // Close the open workbook.
    if (sessionId != string.Empty)
        cli.CloseWorkbook(sessionId);

    // Clear out the session ID.
    sessionId = string.Empty;

    // Open the workbook.
    sessionId = cli.OpenWorkbookForEditing(txtURL.Text, "", "", out status);

    // Initialize the form's fields.
    initFields();
}

Initialize the form fields.

private void initFields()
{
    XLSvc.Status[] status;

    // Exit the function if no workbook is open.
    if (sessionId == string.Empty)
        return;

    // Just like in the 2007 release, retrieve the range as an array of arrays.
    object[] rows = cli.GetRangeA1(sessionId, "", "Modifier", true, out status);
    foreach (var o in rows)
    {
        // This is a single cell, so we break after the first row.
        object[] row = (object[])o;
        txtModifierValue.Text = row[0].ToString();
        break;
    }

    // Refactored this out because it is also used by "refresh".
    LoadChart("Chart 2");
}

And finally, load the chart.

private void LoadChart(string chartName)
{
    XLSvc.Status[] status;

    if (sessionId == string.Empty)
        return;

    // This call will return a URL. At the other end of the URL is our image.
    string url = cli.GetChartImageUrl(sessionId, null, chartName, out status);

    // Unfortunately, we cannot just call picturebox.Load because credentials will not be passed.
    pictChart.Image = GetImageFromURL(url);
}

private Image GetImageFromURL(string url)
{
    WebRequest r = WebRequest.Create(url);
    r.Credentials = System.Net.CredentialCache.DefaultCredentials;
    WebResponse response = r.GetResponse();
    Stream s = response.GetResponseStream();
    Image img = Image.FromStream(s);
    return img;
}

If you download the code example (Excel Services Sample for SharePoint Server 2010), you will see that the function to open a file as read-only instead of read/write is almost identical. The only difference is the call to OpenWorkbook instead of OpenWorkbookForEditing. Download the rest of the code and worksheet to try this functionality (see Excel Services Sample for SharePoint Server 2010). The only piece of functionality that is left is to set the value of the Modifier named range and reload the chart.

cli.SetCellA1(sessionId, "", "Modifier", (object)txtModifierValue.Text);
LoadChart("Chart 2");

This code is exactly the same regardless of whether the workbook has been opened for editing. The only difference is that in the default read-only mode, the modification happens on a transient session-based copy of the workbook, whereas in edit mode, the change is written back to the main copy in SharePoint Server. As you can see, the Web services capabilities are expanded quite a bit to allow for even more powerful scenarios. For more information about this feature, read three of Shahar Prish's current blog posts:

- What's new in Excel Web Services in SharePoint 2010
- Using the Web Services APIs to open a workbook for editing and set calculation options
- Using SetParameters with the Excel Web Services APIs

In addition, read Excel Web Services in the SharePoint 2010 SDK.

User-Defined Functions

Occasionally, Excel users may have to use functions that are not native to Excel. The Excel client application has been extensible in this way for many releases, allowing developers to create an XLL add-in to surface custom functions in the Excel client.
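On the server, the equivalent extensibility point is a managed class marked with Excel Services attributes. The following is a hedged sketch, not a sample from this article: the class name, method, and doubling logic are illustrative, while the UdfClass and UdfMethod attributes come from the Microsoft.Office.Excel.Server.Udf namespace.

```csharp
using Microsoft.Office.Excel.Server.Udf;

// Marks the class as a container of Excel Services user-defined functions.
[UdfClass]
public class SampleUdfs
{
    // Callable from a server-hosted workbook, for example as =DoubleIt(A1).
    [UdfMethod]
    public double DoubleIt(double value)
    {
        return value * 2;
    }
}
```

After the assembly is deployed and registered with Excel Services, a workbook rendered on the server can call the function like any built-in one; workbooks opened in the Excel client would not resolve it.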
In SharePoint Server 2007, Excel Services also provided support for server-side user-defined functions. This support has continued in SharePoint Server 2010. Using managed code, you can build a server-side user-defined function to expose custom functions that users can use the same way that they can use built-in functions, such as =SUM() or RAND(). These functions are available only on the server, so workbooks that are edited in the client do not have access to them. They can be deployed through SharePoint solutions and can be quite powerful in that they can access third-party data sources and Web services. For a detailed walkthrough of creating and deploying a user-defined function, see Excel Services User-Defined Functions in the SharePoint 2010 SDK.

ECMAScript (JScript, JavaScript) Object Model

The JavaScript object model is another new feature of Excel Web Services for SharePoint Server 2010. It is meant to enable developers to programmatically interact with the Excel Web Access Web Parts that are hosted on a given SharePoint page. By writing JavaScript that is hosted on the SharePoint page (either directly or through a Content Editor Web Part), developers can interact with the Excel Web Access Web Parts and monitor the user's interaction with them. Using the JavaScript object model, you can set and retrieve values of cells, either through addressed or named ranges, and navigate users to different locations within the hosted workbook. You can also monitor events that occur when users add content to or edit cells. The following example walks you through a JavaScript file that sets the value of a cell (Sheet1!B3) with the value of the cell that the user selects. To begin, you must take care of some general Excel Web Access maintenance. Specifically, you must retrieve a reference to the Excel Web Access Web Parts on the page. Of course, first you must let the page load and the Web Parts initialize.
So, you capture the OnLoad event, initialize it, and watch for the Excel Web Access applicationReady event. When you know that Excel Web Access is ready, enumerate the controls collection for all control instances. In this case, you are finding only the first instance.

<script>
var ewa = null;

// Set page event handlers for onload and unload.
if (window.attachEvent) {
    window.attachEvent("onload", ewaOmPageLoad);
}
else {
    window.addEventListener("DOMContentLoaded", ewaOmPageLoad, false);
}

// Load map.
function ewaOmPageLoad() {
    if (typeof (Ewa) != "undefined") {
        Ewa.EwaControl.add_applicationReady(ewaApplicationReady);
    }
    else {
        // Error - the Excel Services Web Access ECMAScript is not loaded.
    }
}

Now, you can capture the various Excel Web Access Web Parts on the page. You can also set event handlers for whatever events you want to start watching on that particular control. In this case, you watch the activeCellChanged event.

function ewaApplicationReady() {
    // Find the first Excel Web Access control on the page.
    ewa = Ewa.EwaControl.getInstances().getItem(0);

    // Add a cell changed event handler to the script.
    ewa.add_activeCellChanged(cellChanged);
}

When the cell changes, you must first determine where its coordinates are (sheet, row, and cell). This can come in handy when you are trying to determine if the user has selected important data that you must respond to, or random cells that do not matter. In this case, you take the value of any cell that they select and insert it into Sheet1!B3. This code does not assume that the user has selected only a single value. Of course, the user could also select a range. When you have the details (coordinates and range), you call into the workbook to asynchronously set the value of the target range. The getRangeA1Async method expects the target range, the callback function, and the value that you want to put in the range.
When the callback is executed, it is passed an asyncResult variable that contains the target range and the value. This means that you can call this function many times without having to worry about tracking what is going where.

function cellChanged(rangeArgs) {
    // Find the sheet, column, and row the range starts in.
    var sheetName = rangeArgs.getRange().getSheet().getName();
    var col = rangeArgs.getRange().getColumn();
    var row = rangeArgs.getRange().getRow();

    // Making the assumption that this is a single cell, but it does not have to be.
    var value = rangeArgs.getFormattedValues();

    // Async call to set ranges. We pass the range, the call-back function, and the value we want to set.
    ewa.getActiveWorkbook().getRangeA1Async("Sheet1!B3", getRangeComplete, value);
}

The following is the asynchronous callback. You just pull out the range and value from asyncResult and convert the value into an array (remember, I am making an assumption that the user will only select a single cell and not a range of cells).

function getRangeComplete(asyncResult) {
    // Find the range we are going to set.
    var range = asyncResult.getReturnValue();

    // Find the value we will put in the range.
    var value = asyncResult.getUserContext();

    // Assuming it is a single item (convert to a double array).
    var values = [[value]];

    // Set the value (again, asynchronously).
    range.setValuesAsync(values, setValuesCallBack, null);
}

Finally, you catch the callback from the set values call. There are not many interesting things for you to do, but this can come in handy if you are chaining actions.

function setValuesCallBack(returnValues) {
    // Nothing really interesting here. Just notifying the user that something happened.
    window.status = 'Values Set';
}
</script>

That may have been a good walkthrough, but it still is not enough to get you going. First, you need to determine how to get the script inside the SharePoint page.
Here is one way to do it: Save the script to an HTML file (after you edit it in Microsoft Visual Studio), upload it to a document library (my very creative choice of Shared Documents should not surprise you), and then add a Content Editor Web Part to the page and point it to the HTML file in Shared Documents, as shown in Figure 5. You can also put the script directly in the Content Editor Web Part.

Figure 5. Content Editor Web Part properties

Also, remember that Windows Internet Explorer now includes a good developer tool that you can use to debug JavaScript on the page, including adding watches and putting in breakpoints (see Figure 6). To load the tool, in Internet Explorer, press F12, or select Developer Tools on the Tools menu.

Figure 6. Developer Tools script debugger

Although this code was added by using the Content Editor Web Part, it is run as part of the page. Here you can see that the user has selected the cell on Sheet1, row 8, and column 3.

REST API

The Excel SOAP-based Web services are a very powerful way to programmatically interact with Excel worksheets. However, they offer absolutely no way for power users to consume Excel objects. For example, it would be nice for a user to be able to embed a chart in an internal blog post or wiki and have that Web page automatically update whenever it is loaded or refreshed. Likewise, it would be even better if that power user could update worksheet parameters when referring to that chart, to have a specific what-if scenario generated and hosted directly in a Word document, to be refreshed whenever the document is opened or saved. From a developer's perspective, one can argue that the Excel Services REST API is not a true REST implementation. However, regardless of the true conformance of the protocol, it is a powerful implementation of an API that gives developers and power users access to the objects in the workbook through nothing more than a URL.
For example, the following URL returns all discoverable components of the Book1.xlsx workbook that is stored in the Shared Documents library of the top-level site:

_vti_bin/ExcelRest.aspx/Shared%20Documents/book1.xlsx/model?$format=atom

This URL may seem overwhelming, but when you break it into its composite parts, it is quite simple.

Table 2. URL components

In addition to just listing the items in a workbook, you can use the REST service to retrieve raw data or images of tables and charts. You can also interact with the data in a workbook, setting values of specific ranges that can affect the tables and charts that you are retrieving. Table 3 lists all the commands that you can send to the REST service and the formats that work with each.

Table 3. List of commands

Because the colon (":") character is a special character in a URL, the normal range declarations, such as Sheet2!A1:B3, are not valid. The Excel REST API expects a pipe symbol ("|") instead of the colon, such as "Sheet2!A1|B3".

Setting Values

Although the REST API does not let you modify the saved instance of the worksheet on the server, you can modify ranges in the in-memory instance that is being used by the query. For example, you can set the value of a range that specifically modifies a chart that you are querying. The following URL sets the value of a range and requests an image of a chart:

_vti_bin/ExcelRest.aspx/Shared%20Documents/book1.xlsx/model/Charts('sales')?Ranges('SalesYear')=2009&$format=image

Users can host this content in any environment that can show text or images from a Web source. Other than the obvious blogs and wikis, one very relevant source in the context of this article is Microsoft Word. Although you may not know what a Microsoft Word field code is, you have used one if you have ever inserted a table of contents or a hyperlink into a document. A field code is a way of representing any number of dynamic types of content in a document's body.
To see the entire list of field codes, on the Insert tab on the ribbon, select Quick Parts, and then select Field, as shown in Figure 7.

Figure 7. Accessing the entire list of field codes

There are two special field codes, IncludePicture and IncludeText, that embed text or images from a file or URL. Word keeps this content up to date by periodically refreshing it (on Open, Save, and so on).

To include an up-to-date image of a chart in a workbook:

1. Create the REST URL, such as the following:

   _vti_bin/ExcelRest.aspx/Shared%20Documents/book1.xlsx/model/Charts('Chart%205')?$format=image

2. On the Insert menu, select Quick Parts, and then select Field.

3. Select the IncludePicture field, paste your URL in the Filename or URL field, and then click OK (see Figure 8).

Figure 8. Adding the IncludePicture field

Inserting text is similar, but it is slightly more complicated because the data is not the raw data that you are probably looking for. It is either in an ATOM feed or formatted as HTML. Neither is a very good option for a Word document. Fortunately, this field code lets you include either an XSL transform or an XPath query as part of the field code. By writing either an XSLT transformation or an XPath query against the ATOM feed presented, you can select the relevant pieces of data and include them in the document.
This sample ATOM feed represents the results of the following query:

('Sheet1!F1')?$format=atom

<?xml version="1.0" encoding="utf-8"?>
<entry xml:
  <title type="text">Sheet1!F1</title>
  <id>('Sheet1!F1')</id>
  <updated>2009-12-12T22:23:39Z</updated>
  <author>
    <name />
  </author>
  <link rel="self" href="('Sheet1!F1')?$format=atom" title="Sheet1!F1" />
  <category term="ExcelServices.Range" scheme="" />
  <content type="application/xml">
    <x:range
      <x:row>
        <x:c>
          <x:v>0.05</x:v>
          <x:fv>5%</x:fv>
        </x:c>
      </x:row>
    </x:range>
  </content>
</entry>

In addition to the Filename or URL field property of the IncludeText field, you must update two field options:

Namespace Mappings. There are two namespaces used in the XML you will be querying: the ATOM feed namespace and the Excel Services REST namespace. The following string defines those and creates namespace mappings that the XPath query will reference.

xmlns:a="" xmlns:x=""

XPath. Fortunately, the XPath is relatively simple. The <fv> element contains the formatted value of the cell. The following XPath retrieves it.

/a:entry/a:content/x:range/x:row/x:c/x:fv

Figure 9. Adding the IncludeText field

Clearly this is not something that you would ever expect anyone but the most experienced users to do. However, it can be very powerful when you consider a server-side document-generation scenario with the Office Open XML formats (where you can manipulate these field codes programmatically), or a client-side add-in built with the Microsoft Office development tools in Microsoft Visual Studio, where you are programmatically adding the field code while the application is running. Naturally, you can also programmatically access and manipulate the ATOM feeds. The following code example is part of the sample application that is available to download (Excel Services Sample for SharePoint Server 2010). The sample application queries a worksheet that is located on a top-level site, enumerates its named ranges, and then shows the value of a selected range.

Figure 10.
Sample application to list named ranges in a worksheet

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;

// Need this to process the ATOM feed.
using System.Xml.Linq;

// Specifically for the XmlUrlResolver to pass credentials to the Web service.
using System.Xml;

// There are streams ahead.
using System.IO;

namespace WindowsFormsApplication1
{
    public partial class Form1 : Form
    {
        // Declare all my namespaces up top so that I can just reference a short variable name down below :)
        const string atomNameSpace = "";
        const string xlsvcNameSpace = "";

        private void button1_Click(object sender, EventArgs e)
        {
            // As long as we have a server name and relative path to the worksheet, call the Web service.
            if (txtSite.Text != string.Empty && txtSpreadsheet.Text != string.Empty)
                LoadRanges();
        }

        private void LoadRanges()
        {
            string relativeUri;
            XNamespace a = atomNameSpace;
            Stream s;

            // Build the relative URL for the Excel REST Web service.
            relativeUri = "/_vti_bin/ExcelRest.aspx/" + txtSpreadsheet.Text + "/model/Ranges?$format=atom";

            // Pass the server portion of the URL and the relative URL down
            // and receive a stream with the ATOM feed results.
            s = GetAtomResultsStream(txtSite.Text, relativeUri);

            // Load the stream into an XDocument.
            XDocument atomResults = XDocument.Load(s);

            // Query the XDocument for all title elements that are child elements of an entry element.
            IEnumerable<XElement> ranges =
                from t in atomResults.Descendants(a + "title")
                where t.Parent.Name == a + "entry"
                select t;

            // Add all the elements that we found to the listbox.
            foreach (XElement r in ranges)
            {
                listBox1.Items.Add((string)r);
            }
        }

        private Stream GetAtomResultsStream(string serverName, string relativeUri)
        {
            XNamespace a = atomNameSpace;

            // I use the XmlUrlResolver to pass credentials to the Web service.
            XmlUrlResolver resolver = new XmlUrlResolver();
            resolver.Credentials = System.Net.CredentialCache.DefaultCredentials;

            // Build the URI to pass the resolver.
            Uri baseUri = new Uri("http://" + serverName);
            Uri fullUri = resolver.ResolveUri(baseUri, relativeUri);

            // Mostly for debugging, display where we are going.
            lblUrl.Text = fullUri.ToString();

            // Call the resolver and receive the ATOM feed as a result.
            Stream s = (Stream)resolver.GetEntity(fullUri, null, typeof(Stream));
            return s;
        }
    }
}

The Excel REST Web service is deceptively powerful. It can span a large range of user needs and solution scenarios - from the very basic task of embedding a URL into a Web page that shows the most up-to-date data available, to a custom Microsoft .NET Framework solution. For more information about the Excel REST API, see the following resources:

- Shahar Prish's blog, Cum Grano Salis
- Welcome to the new Excel Services! (the first part of a multipart series about REST)
- Coding the Excel Services Windows 7 Gadget - Part 1 - Settings (the first part of a seven-part series about coding a desktop gadget that consumes data from Excel Services)
- Microsoft Excel Team Blog
- Excel Services REST API in the SharePoint 2010 SDK

Conclusion

This article presents an intense review of some very powerful new features of the client services provided in Microsoft SharePoint Server 2010. You have many options when thinking about the architecture of your next application - the ability to programmatically render, print, and convert Word documents without the Word client; the ability to easily embed live Excel charts in a Web page or document; and multiple ways of programmatically manipulating Excel workbooks. Hopefully, you will be ready to try them, and your choice of which to use will be easier. Of course, do not think that one feature must be used independently from another or from other client and server-side features.
Nothing is stopping you from creating a document-generation solution that programmatically inserts live Excel charts from their ATOM feeds, renders the document by using Word Automation Services, and then converts the document to a PDF. In fact, that sounds exciting to me. I think I know what my next project is going to be.

Additional Resources

For more information, see the following resources:

- Word Automation Services in SharePoint Server 2010
- Developing with SharePoint 2010 Word Automation Services
- Word Team Blog: Introducing Word Automation Services
- Excel Team Blog: Excel Services 2010 Overview
- Shahar Prish's Blog: What's new in Excel Web Services in SharePoint 2010
- SharePoint Developer Center
- Welcome to the Microsoft SharePoint 2010 SDK
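Looping back to the XPath discussion above, the same extraction can be sketched outside Word fields entirely. The following Python sketch parses the sample ATOM entry from the beginning of this section with namespace-aware paths. The namespace URIs are my assumptions (the xmlns attributes are blank in this copy of the article): the standard ATOM namespace, and what I believe to be the Excel Services REST namespace. The sample XML below embeds the same constants, so the lookup is self-consistent either way.

```python
import xml.etree.ElementTree as ET

# Assumed namespace URIs - the ATOM one is standard; the Excel Services
# REST one is a best guess.
NS = {
    "a": "http://www.w3.org/2005/Atom",
    "x": "http://schemas.microsoft.com/office/2008/07/excelservices/rest",
}

entry_xml = """<?xml version="1.0" encoding="utf-8"?>
<entry xmlns="http://www.w3.org/2005/Atom"
       xmlns:x="http://schemas.microsoft.com/office/2008/07/excelservices/rest">
  <title type="text">Sheet1!F1</title>
  <content type="application/xml">
    <x:range>
      <x:row>
        <x:c>
          <x:v>0.05</x:v>
          <x:fv>5%</x:fv>
        </x:c>
      </x:row>
    </x:range>
  </content>
</entry>"""

root = ET.fromstring(entry_xml)  # root is the <entry> element
# Same shape as the article's XPath: entry/content/range/row/c/fv.
fv = root.find("a:content/x:range/x:row/x:c/x:fv", NS)
print(fv.text)  # 5%
```

The <fv> (formatted value) element yields "5%", while <v> would yield the raw 0.05 - the same distinction the field-code approach relies on.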
https://docs.microsoft.com/en-us/previous-versions/office/developer/sharepoint-2010/ff640648(v=office.14)?redirectedfrom=MSDN
Creating 2D graphics programs under DOS is easy if you're using Turbo C. There is a library file called graphics.h that does the tiresome work for you. Unfortunately, this library is Borland-specific, so you can't use it on other compilers. Even so, some people have managed to port it outside Turbo C, and some have hacked together their own version of graphics.h. One such person is Michael Main, who ported some of the Borland graphics functions and library. Michael Main modified the BGI library for Windows applications to be used under MinGW. This BGI library is renamed WinBGIm. Now you can use all the Borland-specific functions under Dev-C++.

Installation

Download the WinBGIm files and place them under your Dev-C++ installation. Move the files under the respective include and lib folders, e.g. D:\Dev-cpp\include and D:\Dev-cpp\lib.

Configuration

At this last step you've downloaded and installed WinBGIm; now you have to configure it for use under Dev-C++. You have to set some project options in Dev-C++ in order to resolve the WinBGIm references properly. Follow the steps below to set the proper project options for WinBGIm.

1. Go to the "File" menu and select "New", "Project". Choose "Empty Project" and make sure "C++ project" is selected. Give your project a suitable name and click "Ok".

OR

You can create an individual C++ source file instead of a project: go to the "File" menu and select "New Source File", or go to the "Project" menu and select "New File".

2. Go to the "Project" menu and choose "Project Options".
3. Go to the "Parameters" tab.
4. In the "Linker" field, enter the following text:
- -lbgi
- -lgdi32
- -lcomdlg32
- -luuid
- -loleaut32
- -lole32
5. Click "Ok" to save the settings.

Now you're done with the configuration for WinBGIm. Please make sure you've done this step properly, otherwise the compiler will flag errors.

Testing & Debugging

Now let's write a small program to test how WinBGIm works. Here is the source code for the program. Type it in, save it with a .cpp extension, and compile and run it to see the results.
#include <graphics.h>
#include <iostream>
using namespace std;

int main()
{
    initwindow(800, 600);
    circle(200, 300, 600);
    while (!kbhit());
    closegraph();
    return 0;
}

This program displays a circle with the given parameters in a window of size 800×600. The window closes when you press any key. If you've made the settings correctly, you can view the graphics without any problem.

What's included?

All the Borland graphics batteries are included, plus some additions written by other contributors to WinBGIm. With WinBGIm you can use most of Borland's graphics functions and RGB colors. You can also use detectgraph() and initgraph(), or you can use the new function initwindow(). You can even use some of the old mouse functions such as int mousex() and int mousey(), along with getmouseclick() and clearmouseclick(). For keyboard functions, you don't have to include conio.h; some functions are supported without it, such as void delay(int millisec), int getch() and int kbhit(). If you want to capture the screen where you've created your graphics, you can do it with the help of these functions: getimage(), imagesize(), printimage(), putimage(), readimagefile(), writeimagefile().

Help & Support

If you run into trouble with installation and configuration, please post your questions here. But please don't post homework problems or your custom projects; Google Groups is the right place to get answers in such cases. You can also get a lot of support with WinBGIm and Dev-C++ at Google Groups, and you can read the WinBGIm documentation and FAQ there as well. If you have any question or suggestion, don't hesitate to post it here. If you know any alternative to WinBGIm, please post about it here.
http://onecore.net/dev-c-graphics.htm
If your Silverlight project compiles correctly and appears to run, but it displays only a blank page in the web browser, the first place to check is the "Startup object" in the Silverlight project's properties: If the startup object is not set as shown in the screen cap above, click the "Startup object" drop-down and select the desired startup object from the list. But what happens if there are no objects in the drop-down and the only option is "(not set)"?

First, some quick background. When a Silverlight application launches, it constructs the specified "Startup object," which must inherit from the System.Windows.Application class. By default, the startup object is defined in the App.xaml.cs file, which is automatically generated when you create a new Silverlight application project in Visual Studio. The startup object setting may be lost if you rename the startup object or change its namespace. But if no startup objects appear in the drop-down list, make sure you have defined a class in your Silverlight project that inherits from System.Windows.Application.
https://www.csharp411.com/cannot-set-startup-object-in-silverlight-project/
Execution of Python code with -m option or not

The python interpreter has a -m module option that "Runs library module module as a script". With this python code a.py:

if __name__ == "__main__":
    print __package__
    print __name__

I tested python -m a to get

"" <-- Empty String
__main__

whereas python a.py returns

None <-- None
__main__

To me, those two invocations seem to be the same except that __package__ is not None when invoked with the -m option. Interestingly, with python -m runpy a, I get the same as python -m a with the python module compiled to get a.pyc. What's the (practical) difference between these invocations? Any pros and cons between them? Also, David Beazley's Python Essential Reference explains it as "The -m option runs a library module as a script which executes inside the main module prior to the execution of the main script". What does it mean?

When you use the -m command-line flag, Python will import a module or package for you, then run it as a script. When you don't use the -m flag, the file you named is run as just a script. The distinction is important when you try to run a package. There is a big difference between:

python foo/bar/baz.py

and

python -m foo.bar.baz

as in the latter case, foo.bar is imported and relative imports will work correctly with foo.bar as the starting point. Demo:

$ mkdir -p test/foo/bar
$ touch test/foo/__init__.py
$ touch test/foo/bar/__init__.py
$ cat << EOF > test/foo/bar/baz.py
> if __name__ == "__main__":
>     print __package__
>     print __name__
>
> EOF
$ PYTHONPATH=test python test/foo/bar/baz.py
None
__main__
$ PYTHONPATH=test bin/python -m foo.bar.baz
foo.bar
__main__

As a result, Python has to actually care about packages when using the -m switch. A normal script can never be a package, so __package__ is set to None.
But run a package or module inside a package with -m and now there is at least the possibility of a package, so the __package__ variable is set to a string value; in the above demonstration it is set to foo.bar; for plain modules not inside a package, it is set to an empty string.

As for the __main__ module: Python imports scripts being run as it would a regular module. A new module object is created to hold the global namespace, stored in sys.modules['__main__']. This is what the __name__ variable refers to; it is a key in that structure.

For packages, you can create a __main__.py module and have that run when running python -m package_name; in fact that's the only way you can run a package as a script:

$ PYTHONPATH=test python -m foo.bar
python: No module named foo.bar.__main__; 'foo.bar' is a package and cannot be directly executed
$ cp test/foo/bar/baz.py test/foo/bar/__main__.py
$ PYTHONPATH=test python -m foo.bar
foo.bar
__main__

So, when naming a package for running with -m, Python looks for a __main__ module contained in that package and executes that as a script. Its name is then still set to __main__, and the module object is still stored in sys.modules['__main__'].

From: stackoverflow.com/q/22241420
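The two invocation styles can also be reproduced programmatically with runpy, the module that implements -m. The sketch below rebuilds the demo package above on disk and runs it both ways; the exact __package__ value for the script-style run is version-dependent, so treat that half as illustrative.

```python
import os
import runpy
import sys
import tempfile

# Recreate the foo/bar/baz.py package from the demo on disk.
tmp = tempfile.mkdtemp()
os.makedirs(os.path.join(tmp, "foo", "bar"))
for sub in ("foo", os.path.join("foo", "bar")):
    open(os.path.join(tmp, sub, "__init__.py"), "w").close()
with open(os.path.join(tmp, "foo", "bar", "baz.py"), "w") as f:
    f.write("RESULT = (__package__, __name__)\n")

sys.path.insert(0, tmp)

# Like `python -m foo.bar.baz`: the module is located via the import
# system, so __package__ is set to the containing package.
as_module = runpy.run_module("foo.bar.baz", run_name="__main__")
print(as_module["RESULT"])  # ('foo.bar', '__main__')

# Like `python foo/bar/baz.py`: run as a plain script by path,
# with no real package context established.
as_script = runpy.run_path(os.path.join(tmp, "foo", "bar", "baz.py"))
print(as_script["RESULT"])
```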
https://python-decompiler.com/article/2014-03/execution-of-python-code-with-m-option-or-not
Two sets program in C++

Today, we will see a basic problem in C++ and try to find an efficient algorithm for it. The problem goes like this: we are given an integer n and we have to divide the numbers 1,2,…,n into two sets such that both sets have an equal sum. Of course, it is not necessary that both sets have an equal number of elements. Let us understand the problem more deeply and then try to come up with an algorithm and implement it in C++.

Mathematically dividing the two sets

First of all, not every given n can give rise to two such sets, because all the elements here are integers. We will see why when we look at some examples. Let us say an integer n is given as the input, and let s be the sum of all the integers 1,2,…,n. According to the problem, the two sets need to have an equal sum. But since the elements are taken from 1,2,…,n exhaustively, the sums of the two sets together must again equal s. In other words, if the two sets are possible, each of them will have a sum of s/2. So from here we can deduce that if s is an odd number, then s/2 will not be an integer and hence the division will not be possible using integers. Note that for a given n, there can exist multiple ways of dividing the elements into two groups, and here we just need to find one such arrangement. Let us see a few examples.

Two Sets Examples

Say n = 7. Here we have 1,2,3,4,5,6,7, which have to be segregated into two groups with the same sum. Let us first evaluate what that sum should be. The sum of the first 7 numbers can be calculated as 7*8/2 = 28. Since 28 is even, it can be split into two equal halves of 14 each. So the split is possible and each group should have a sum of 14. We have to find one set of numbers from the given integers which adds up to 14, and the rest will obviously add up to 14 as well. One such pair of sets is 6, 5, 3 and 1, 2, 4, 7. We can see that each group adds up to 14. However, this is not the only possible pair: 7, 6, 1 and 2, 3, 4, 5 also counts as a solution.
Let us take n = 6 now. Here s = 6*7/2 = 21, and we cannot split 21 into two equal integers, so the split is not possible.

Algorithm

To write an efficient algorithm for this, we can start with the first step as discussed in the last section. First we take the input n and calculate the sum using the formula. Then we check whether that sum is odd; if so, we say it is not possible and terminate. Otherwise, we continue with sum set to half its value. We now need to find the numbers which add up to this sum. One method is to start at n and go downwards: as long as the current number is not greater than the remaining sum, we add it to the first group and reduce the sum by that number; otherwise we skip it. At the point where the remaining sum becomes smaller than the current integer, that remaining sum is itself a number we have not used yet, and it goes to the first group. The remaining elements, as we discussed, will go to group 2. Let us see the C++ code to understand this in a better way.

C++ Code: two sets program

I have used a vector for each of the two groups, to which we can add the elements as we pick them. As discussed, the first part of the code finds the sum of the n numbers and checks whether it is even. The code prints NO if it is odd; otherwise it prints YES and continues. We start a loop with n as the starting point and keep decrementing by one, segregating the numbers as we check and update the sum. Finally, we display the two groups on two separate lines.
Take a look at the C++ code below:

#include<iostream>
#include<vector>
using namespace std;

int main()
{
    long n, i, n1 = 0, n2 = 0;
    long sum;
    vector<int> g1, g2;
    cin >> n;
    sum = n * (n + 1) / 2;
    if (sum % 2 == 1)
    {
        cout << "NO";  // The sum of the given n integers is odd
        return 0;
    }
    cout << "YES\n";
    sum = sum / 2;  // The sum required for each group
    for (i = n; i > 0; i--)
        if (i <= sum)
        {
            sum = sum - i;
            g1.push_back(i);  // Adding to the first group
        }
        else
        {
            if (i != sum)
                g2.push_back(i);  // Adding to the second group
        }
    if (sum != 0)
        g1.push_back(sum);  // The remaining integer is added to the first group
    n1 = g1.size();  // Length of the first group
    n2 = g2.size();  // Length of the second group
    for (i = 0; i < n1; i++)
        cout << g1[i] << " ";
    cout << "\n";
    for (i = 0; i < n2; i++)
        cout << g2[i] << " ";
    return 0;
}

Example output (for input 15):

YES
15 14 13 12 6
11 10 9 8 7 5 4 3 2 1

This problem is a simpler version of the well-known knapsack problem, which is also used in cryptography.
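As a cross-check of the greedy strategy (a hypothetical Python re-implementation of the program above, not part of the original article), one can verify that the two groups always balance for every feasible n:

```python
def two_sets(n):
    """Greedy split of 1..n, mirroring the C++ program above."""
    s = n * (n + 1) // 2
    if s % 2 == 1:
        return None  # odd total: no split is possible
    remaining = s // 2
    g1, g2 = [], []
    for i in range(n, 0, -1):
        if i <= remaining:
            remaining -= i
            g1.append(i)  # fits into the first group
        else:
            g2.append(i)  # too big for what is left; second group
    return g1, g2

# Every feasible n up to 100 yields two balanced groups covering 1..n.
for n in range(1, 101):
    result = two_sets(n)
    if result is not None:
        g1, g2 = result
        assert sum(g1) == sum(g2)
        assert sorted(g1 + g2) == list(range(1, n + 1))

print(two_sets(7))  # ([7, 6, 1], [5, 4, 3, 2]) - the article's second pair
print(two_sets(6))  # None - sum 21 is odd
```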
https://www.codespeedy.com/two-sets-program-implemented-using-cpp/
import "github.com/elves/elvish/pkg/util"

Package util contains utility functions.

camel_to_dashed.go ceildiv.go claim.go deepprint.go feed.go gethome.go getwd.go limits.go log.go multierror.go pprinter.go search.go strings.go subseq.go temp_env.go test_utils.go testdir.go util.go wcwidth.go

Limit values for uint and int. NOTE: The math package contains similar constants for explicitly sized integer types, but lacks those for uint and int.

ErrClaimFileBadPattern is thrown when the pattern argument passed to ClaimFile does not contain exactly one asterisk.

ErrIndexOutOfRange is returned when out-of-range errors occur.

ApplyDir creates the files specified by the given directory layout in the current directory.

CamelToDashed converts a CamelCaseIdentifier to a dash-separated-identifier, or a camelCaseIdentifier to a -dash-separated-identifier.

CeilDiv computes ceil(float(a)/b) without using float arithmetic.

ClaimFile takes a directory and a pattern string containing exactly one asterisk (e.g. "a*.log"). It opens a file in that directory, with a filename matching the template, with "*" replaced by a number. That number is one plus the largest of all existing files matching the template. If no such file exists, "*" is replaced by 1. The file is opened for read and write, with permission 0666 (before umask). For example, if the directory /tmp/elvish contains a1.log, a2.log and a9.log, calling ClaimFile("/tmp/elvish", "a*.log") will open a10.log. If the directory has no files matching the pattern, this same call will open a1.log. This function is useful for automatically determining unique names for log files. Unique filenames can also be derived by embedding the PID, but using this function preserves the chronological order of the files. This function is concurrency-safe: it always opens a new, unclaimed file and is not subject to race conditions.

DeepPrint is like printing with the %#v formatter of fmt, but it prints pointer fields recursively.
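The claiming scheme ClaimFile describes (one plus the largest existing number, opened race-free) can be sketched in a few lines. The following hypothetical Python version is illustrative, not the elvish implementation:

```python
import os
import re
import tempfile

def claim_file(directory, pattern):
    # The pattern must contain exactly one asterisk, e.g. "a*.log".
    if pattern.count("*") != 1:
        raise ValueError("pattern must contain exactly one asterisk")
    prefix, suffix = pattern.split("*")
    rx = re.compile(re.escape(prefix) + r"(\d+)" + re.escape(suffix) + r"$")
    largest = 0
    for name in os.listdir(directory):
        m = rx.match(name)
        if m:
            largest = max(largest, int(m.group(1)))
    while True:
        candidate = os.path.join(directory, prefix + str(largest + 1) + suffix)
        try:
            # O_EXCL makes the claim race-free: only one opener can win.
            fd = os.open(candidate, os.O_RDWR | os.O_CREAT | os.O_EXCL, 0o666)
            return candidate, fd
        except FileExistsError:
            largest += 1  # lost the race; try the next number

# Reproduce the example from the documentation above.
d = tempfile.mkdtemp()
for existing in ("a1.log", "a2.log", "a9.log"):
    open(os.path.join(d, existing), "w").close()
path, fd = claim_file(d, "a*.log")
os.close(fd)
print(os.path.basename(path))  # a10.log
```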
DontSearch determines whether the path to an external command should be taken literally and not searched.

Errors concatenates multiple errors into one. If all errors are nil, it returns nil. If there is one non-nil error, it is returned. Otherwise the return value is a MultiError containing all the non-nil arguments. Arguments of the type MultiError are flattened.

Feed calls the function with the given values, breaking early if the function returns false.

FindContext takes a position in a text and finds its line number and the corresponding line and column numbers. Line and column numbers are counted from 0. Used in diagnostic messages.

FindFirstEOL returns the index of the first '\n'. When there is no '\n', the length of s is returned.

FindLastSOL returns an index just after the last '\n'.

ForceWcwidth forces the string s to the given display width by trimming and padding.

GetHome finds the home directory of a specified user. When given an empty string, it finds the home directory of the current user.

GetLogger gets a logger with a prefix.

Getwd returns the path of the working directory in a format suitable for use in the prompt.

HasSubseq determines whether s has t as its subsequence. A string t is a subsequence of a string s if and only if there is a possible sequence of steps of deleting characters from s that results in t.

InTestDir is like TestDir, but also changes into the test directory, and the cleanup function also changes back to the original working directory. It panics if it could not get the working directory or change directory. It is only suitable for use in tests.

InTestDirWithSetup sets up a temporary directory using the given cli. If wd is not empty, it also changes into the given subdirectory. It returns a cleanup function to remove the temporary directory and restore the working directory. It panics if there are any errors.

IsExecutable determines whether path refers to an executable file.

MatchSubseq returns whether pattern is a subsequence of s.
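HasSubseq and MatchSubseq are plain subsequence checks; the two-pointer idea behind them can be sketched like this (an illustrative Python version, not the elvish code):

```python
def has_subseq(s, t):
    # t is a subsequence of s if characters can be deleted from s to get t.
    # Membership tests on an iterator consume it, so each character of t
    # must be found strictly after the previous match.
    it = iter(s)
    return all(ch in it for ch in t)

print(has_subseq("elvish", "lvh"))  # True
print(has_subseq("elvish", "hs"))   # False - 's' precedes 'h' in "elvish"
```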
NthRune returns the n-th rune of s.

OverrideWcwidth overrides the wcwidth of a rune to be a specific non-negative value. OverrideWcwidth panics if w < 0.

SetOutput redirects the output of all loggers obtained with GetLogger to the new io.Writer. If the old output was a file opened by SetOutputFile, it is closed.

SetOutputFile redirects the output of all loggers obtained with GetLogger to the named file. If the old output was a file opened by SetOutputFile, it is closed. The new file is truncated. SetOutFile("") is equivalent to SetOutput(ioutil.Discard).

SubstringByRune returns the range of the i-th rune (inclusive) through the j-th rune (exclusive) in s.

TestDir creates a temporary directory for testing. It returns the path of the temporary directory and a cleanup function to remove the temporary directory. The path has symlinks resolved with filepath.EvalSymlinks. It panics if the test directory cannot be created or symlinks cannot be resolved. It is only suitable for use in tests.

TildeAbbr abbreviates the user's home directory to ~.

TrimEachLineWcwidth trims each line of s so that it is no wider than the specified width.

TrimWcwidth trims the string s so that it has a width of at most wmax.

UnoverrideWcwidth removes the override of a rune.

Wcswidth returns the width of a string when displayed on the terminal, assuming no soft line breaks.

Wcwidth returns the width of a rune when displayed on the terminal.

WithTempEnv sets an environment variable to a temporary value, and returns a function for restoring the old value.

Dir describes the layout of a directory. The keys of the map represent filenames. Each value is either a string (for the content of a regular file with permission 0644), a File, or a Dir.

File describes a file to create.

MultiError packs multiple errors into one error.

func (es MultiError) Error() string

type PPrinter interface {
    // PPrint takes an indentation string and pretty-prints.
    PPrint(indent string) string
}

PPrinter wraps the PPrint function.

Package util imports 15 packages and is imported by 14 packages. Updated 2020-02-22.
https://godoc.org/github.com/elves/elvish/pkg/util
The logging system in NetBeans is based on the standard JDK's java.util.logging and complies with it as much as possible. This document summarizes the basic use cases and shall be treated as a guide for writing good NetBeans-ready logging code. The information given here is valid for the default configuration of the logger as it is used in NetBeans. However, it is possible to fully replace the system by providing your own logging properties, as in any other JDK application; then, of course, the behaviour may be completely different.

Rather than printing raw exceptions to the console or implementing custom debug or logging facilities, code should use the Logger to access logging in a higher-level fashion. This way the logging messages can be dynamically turned on and off by a single switch on the command line, or even during runtime. Another important thing is to chain stack traces to exceptions using Throwable.initCause(Throwable), permitting you to throw an exception of a type permitted by your API signature while safely encapsulating the root cause of the problem (in terms of other nested exceptions). Code should use Logger.log(Level.SEVERE, msg, exception) rather than directly printing caught exceptions, to make sure nested annotations are not lost and to allow plugged-in logging handlers to process the exceptions.

It is possible to use the global logger, but it is preferred to create named and shared instances of loggers. The latter has the advantage of finer control over what is going to be logged, as each named instance can be turned on/off individually using a command-line property. As the logging system is completely JDK based, one can use the traditional properties of LogManager and customize logging completely by oneself. However, there is a simpler way to enable logging for a named logger: just start NetBeans with -J-Dname.of.the.Logger.level=100 (or any other number) and all the log Levels with a higher or equal value will immediately be enabled and handled by the system.
It is possible to turn logging on dynamically while the application is running. It is enough to just:

System.setProperty("name.of.the.Logger.level", "100");
LogManager.getLogManager().readConfiguration();

and the logging state for "name.of.the.Logger" is changed. The first line in the above code snippet changes the global properties, and the second one asks the system to refresh the configuration of all loggers in the system. Of course, this only works if the default NetBeans logger is in place. Sometimes, however, it may make sense to provide a completely different logger. This can be done by one of two JDK standard properties: java.util.logging.config.file or java.util.logging.config.class, as described in LogManager's javadoc. If these properties are provided during the startup of the system, then logging is fully dedicated to the configured custom loggers, and of course none of the NetBeans standard configuration properties work.

To handle an exception and send it to the log file (and possibly show a blinking icon to the user in the bottom right corner of the main window):

private static final Logger logger = Logger.getLogger(ThisClass.class.getName());

try {
    foo.doSomething();
} catch (IOException ioe) {
    logger.log(Level.SEVERE, null, ioe);
}

WARNING behaves the same way by default. If the exception is not important, and by default shall not be shown or logged at all, one can use Level.FINE, Level.FINER or Level.FINEST:

try {
    foo.doSomething();
} catch (IOException ioe) {
    logger.log(Level.FINE, "msg", ioe);
}

The easiest way to make sure an exception is reported to the user is to use the dialog API with code like this:

try {
    // some operations
} catch (Exception ex) {
    NotifyDescriptor.Exception e = new NotifyDescriptor.Exception(ex);
    DialogDisplayer.getDefault().notifyLater(e);
}

This code will present a dialog box with a warning message extracted from the exception ex sometime in the "future" - e.g. when the AWT event queue is empty and can show the dialog.
Use of notifyLater instead of plain notify is recommended in order to prevent deadlocks and starvation.

To rethrow an exception, use the standard JDK's Throwable.initCause(Throwable) method. It is going to be properly annotated and printed when sent to the logger:

} catch (IOException ioe) {
    logger.log(Level.WARNING, null, ioe);
}

Logging shall usually be done with named loggers, as that allows proper turning on and off from the command line. To log something into the log file one should use Level.INFO or higher:

private static final Logger LOG = Logger.getLogger("org.netbeans.modules.foo");

public void doSomething(String arg) {
    if (arg.length() == 0) {
        LOG.warning("doSomething called on empty string");
        return;
    }
    // ...
}

For writing debugging messages it is also better to have a named logger, but the important difference is to use Level.FINE and lower severity levels:

package org.netbeans.modules.foo;

class FooModule {
    public static final Logger LOG = Logger.getLogger("org.netbeans.modules.foo");
}

// ...

class Something {
    public void doSomething(String arg) {
        FooModule.LOG.log(Level.FINER, "Called doSomething with arg {0}", arg);
    }
}

There is an easy way to annotate exceptions with localized and non-localized messages in NetBeans: one can use Exceptions.attachMessage or Exceptions.attachLocalizedMessage. The non-localized messages are guaranteed to be printed when one does ex.printStackTrace(); to extract the associated localized message one can use Exceptions.findLocalizedMessage.

In spite of what one might think, the JDK logging API is not just about sending textual messages to log files; it can also be used as a communication channel between two pieces of the application that need to exchange structured data. What is even more interesting is that this kind of extended usage can coexist very well with the good old writing of messages to log files. This is all possible due to the very nice design of the single "logging record" - the LogRecord.
Well-written structured logging shall use the "localized" message approach and thus assign to all its LogRecords a ResourceBundle, using just a key into the bundle as the actually logged message. This is a good idea anyway, as it speeds up logging: if the message is not going to be needed, the final string is not concatenated at all. However, this alone would not be very powerful logging, so another important thing is to provide parameters to the LogRecord via its setParameters method. This, in combination with the MessageFormat used when the final logger is composing the logged message, further delays the concatenation of strings. Moreover, it allows the advanced communication described above - e.g. there can be another module consuming the messages which gets direct access to live objects and processes them in any way. Here is an example of a program that uses such structured logging:

public static void main(String[] args) {
    ResourceBundle rb = ResourceBundle.getBundle("your.package.Log");
    int sum = 0;
    for (int i = 0; i < 10; i++) {
        LogRecord r = new LogRecord(Level.INFO, "MSG_Add");
        r.setResourceBundle(rb);
        r.setParameters(new Object[] { sum, i });
        Logger.global.log(r);
        sum += i;
    }
    LogRecord r = new LogRecord(Level.INFO, "MSG_Result");
    r.setResourceBundle(rb);
    r.setParameters(new Object[] { sum });
    Logger.global.log(r);
}

Of course the two keys have to be reasonably defined in the Log.properties bundle:

# {0} - current sum
# {1} - add
MSG_Add=Going to add {1} to {0}
# {0} - final sum
MSG_Result=The sum is {0}

When executed with logging on, this example is going to print the expected output with the right messages and well-substituted values:

INFO: Going to add 0 to 0
INFO: Going to add 1 to 0
INFO: Going to add 2 to 1
INFO: Going to add 3 to 3
INFO: Going to add 4 to 6
INFO: Going to add 5 to 10
INFO: Going to add 6 to 15
INFO: Going to add 7 to 21
INFO: Going to add 8 to 28
INFO: Going to add 9 to 36
INFO: The sum is 45

This is not surprising behaviour,
still, it is one of the most efficient approaches, because the text Going to add X to Y is not constructed by the code itself, but by the logger, and only if really needed. So the described logging style is useful on its own; the interesting part, however, is that one can now write the following code and intercept the behaviour of one independent part of the code from another one:

public class Test extends Handler {
    private int add;
    private int sum;
    private int allAdd;

    public void publish(LogRecord record) {
        if ("MSG_Add".equals(record.getMessage())) {
            add++;
            allAdd += ((Integer) record.getParameters()[1]).intValue();
        }
        if ("MSG_Result".equals(record.getMessage())) {
            sum++;
        }
    }

    public void flush() {
        Logger.global.info("There was " + add + " of adds and " + sum
            + " of sum outputs, all adding: " + allAdd);
    }

    public void close() {
        flush();
    }

    static {
        Logger.global.addHandler(new Test());
    }
}

The basic trick is to register your own Handler and thus get access to the provided LogRecords and process them in any custom way, possibly quite different from just printing the strings to log files. Of course, this is only possible because the handler understands the generic names of the logged messages - e.g. MSG_Add and MSG_Result - and knows the format of their arguments; it can do the analysis and output:

INFO: There was 10 of adds and 1 of sum outputs, all adding: 45

Indeed, structured logging can achieve much more than is shown in this simplistic example. Moreover, it seems to be one of the most effective ways of logging, so it is highly recommended to use it where possible.

From: Logging in NetBeans.
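For comparison only (a hypothetical analogue, not a NetBeans API): Python's logging module supports the same record-with-parameters pattern, and a custom Handler can aggregate the live argument values without any message string ever being formatted:

```python
import logging

class Summarizer(logging.Handler):
    """Counts 'Going to add' records and sums their first parameter."""
    def __init__(self):
        super().__init__()
        self.adds = 0
        self.all_add = 0

    def emit(self, record):
        # record.msg is the untouched template and record.args holds the
        # raw parameters, so aggregation needs no string concatenation.
        if record.msg == "Going to add %d to %d":
            self.adds += 1
            self.all_add += record.args[0]

log = logging.getLogger("demo")
handler = Summarizer()
log.addHandler(handler)
log.setLevel(logging.INFO)

total = 0
for i in range(10):
    log.info("Going to add %d to %d", i, total)
    total += i

print(handler.adds, handler.all_add)  # 10 45
```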
http://wiki.netbeans.org/wiki/index.php?title=DevFaqLogging&oldid=51734&printable=yes
#include <itkJPEGImageIO.h>

ImageIO object for reading and writing JPEG images.

Definition at line 35 of file itkJPEGImageIO.h.
Definition at line 41 of file itkJPEGImageIO.h.

Standard class typedefs.
Definition at line 39 of file itkJPEGImageIO.h.
Definition at line 40 of file itkJPEG.

Set/Get the level of quality for the output images.

Sets the spacing and dimension information for the set filename. Implements itk::ImageIOBase.

Reads 3D data from multiple files assuming one slice per file.

Set/Get the level of quality for the output images. Default = true.
Definition at line 100 of file itkJPEGImageIO.h.

Determines the quality of compression for written files. Default = 95.
Definition at line 97 of file itkJPEGImageIO.h.
https://itk.org/Doxygen48/html/classitk_1_1JPEGImageIO.html