This document describes how to deploy Kubernetes on Ubuntu nodes; the examples use 1 master and 3 nodes. You can scale to any number of nodes with ease by changing a few settings. The original idea was heavily inspired by @jainvipin's single-node Ubuntu work, which has been merged into this document. The scripting referenced here can deploy Kubernetes with networking based either on Flannel or on a CNI plugin that you supply; this document focuses on the Flannel case. See kubernetes/cluster/ubuntu/config-default.sh for remarks on how to use a CNI plugin instead. The Cloud team from Zhejiang University will maintain this work.

Clone the kubernetes GitHub repo locally:

$ git clone --depth 1

The startup process will first download all the required binaries automatically. By default the etcd version is 2.2.1, the flannel version is 0.5.5, and the k8s version is 1.2.0. You can customize the etcd, flannel, and k8s versions by changing the corresponding variables ETCD_VERSION, FLANNEL_VERSION, and KUBE_VERSION, like the following:

$ export KUBE_VERSION=1.2.0
$ export FLANNEL_VERSION=0.5.0
$ export ETCD_VERSION=2.2.0

Note: for users who want to bring up a cluster with k8s version v1.1.1, the controller manager may fail to start due to a known issue. You can start it manually by using the following command on the remote master server, but only after the api-server is up. This issue is fixed in v1.1.2 and later.

$ sudo service kube-controller-manager start

Note that we use flannel here to set up the overlay network, yet it is optional. You can build up a k8s cluster natively, or use flannel, Open vSwitch, or any other SDN tool you like.

An example cluster is listed below:

| IP Address    | Role                 |
|---------------|----------------------|
| 10.10.103.223 | node                 |
| 10.10.103.162 | node                 |
| 10.10.103.250 | both master and node |

First configure the cluster information in cluster/ubuntu/config-default.sh; the following is a simple sample.
export nodes="vcap@10.10.103.250 vcap@10.10.103.162 vcap@10.10.103.223"
export roles="ai i i"
export NUM_NODES=${NUM_NODES:-3}
export SERVICE_CLUSTER_IP_RANGE=192.168.3.0/24
export FLANNEL_NET=172.16.0.0/16

The first variable, nodes, defines all your cluster nodes, master node first, separated by blank spaces, like <user_1@ip_1> <user_2@ip_2> <user_3@ip_3>. The roles variable then defines the roles of the above machines in the same order: "ai" means the machine acts as both master and node, "a" means master, and "i" means node. The NUM_NODES variable defines the total number of nodes. The SERVICE_CLUSTER_IP_RANGE variable defines the Kubernetes service IP range. Please make sure that you define a valid private IP range here, because some IaaS providers may reserve private IPs. You can use the three private network ranges below, per RFC 1918; it is also best not to choose one that conflicts with your own private network range.

10.0.0.0 - 10.255.255.255 (10/8 prefix)
172.16.0.0 - 172.31.255.255 (172.16/12 prefix)
192.168.0.0 - 192.168.255.255 (192.168/16 prefix)

The FLANNEL_NET variable defines the IP range used for the flannel overlay network, and it should not conflict with the SERVICE_CLUSTER_IP_RANGE above. You can optionally provide additional Flannel network configuration through FLANNEL_BACKEND and FLANNEL_OTHER_NET_CONFIG, as explained in cluster/ubuntu/config-default.sh. The default setting for ADMISSION_CONTROL is right for the latest release of Kubernetes, but if you choose an earlier release then you might want a different setting; see the admission control doc for the recommended settings for various releases.

During startup you will be prompted for credentials on each machine, for example:

10.10.103.223 ... [sudo] password to start node:

If everything works correctly, you will see the following message from the console indicating the k8s cluster is up.
Cluster validation succeeded

Running kubectl get nodes should then show all nodes Ready:

10.10.103.162   kubernetes.io/hostname=10.10.103.162   Ready
10.10.103.223   kubernetes.io/hostname=10.10.103.223   Ready
10.10.103.250   kubernetes.io/hostname=10.10.103.250   Ready

You can also run the Kubernetes guestbook example to build a redis backend cluster.

Assuming you have a running cluster now, this section will tell you how to deploy addons like DNS and UI onto the existing cluster. DNS is configured in cluster/ubuntu/config-default.sh:

ENABLE_CLUSTER_DNS="${KUBE_ENABLE_CLUSTER_DNS:-true}"
DNS_SERVER_IP="192.168.3.10"
DNS_DOMAIN="cluster.local"
DNS_REPLICAS=1

DNS_SERVER_IP defines the IP of the DNS server, which must be within the SERVICE_CLUSTER_IP_RANGE. DNS_REPLICAS describes how many DNS pods run in the cluster. By default, the kube-ui addon is also taken care of:

ENABLE_CLUSTER_UI="${KUBE_ENABLE_CLUSTER_UI:-true}"

After all the above variables have been set, just type the following commands:

$ cd cluster/ubuntu
$ KUBERNETES_PROVIDER=ubuntu ./deployAddons.sh

After some time, you can use $ kubectl get pods --namespace=kube-system to see that the DNS and UI pods are running in the cluster.

We are working on more features, which we would like to let everybody know about. Generally, what this approach does is quite simple: it configures etcd for the master node using IPs based on input from the user. So if you encounter a problem, check the etcd configuration of the master node first:

- Check /var/log/upstart/etcd.log for suspicious etcd log entries
- Tear the cluster down and bring it up again:

$ KUBERNETES_PROVIDER=ubuntu ./kube-down.sh
$ KUBERNETES_PROVIDER=ubuntu ./kube-up.sh

- You can also customize a component's configuration in /etc/default/{component_name} and restart it via $ sudo service {component_name} restart.

If you already have a Kubernetes cluster and want to upgrade it to a new version, you can use the following command in the cluster/ directory to update the whole cluster, or a specified node, to a new version:

$ KUBERNETES_PROVIDER=ubuntu ./kube-push.sh [-m|-n <node id>] <version>

It can be done for all components (by default), the master (-m), or a specified node (-n).
Upgrading a single node is currently experimental. If the version is not specified, the script will try to use local binaries; in that case you should ensure all the binaries are well prepared in the expected directory path cluster/ubuntu/binaries:

$ tree cluster/ubuntu/binaries
binaries/
├── kubectl
├── master
│   ├── etcd
│   ├── etcdctl
│   ├── flanneld
│   ├── kube-apiserver
│   ├── kube-controller-manager
│   └── kube-scheduler
└── minion
    ├── flanneld
    ├── kubelet
    └── kube-proxy

You can use the following command to get help:

$ KUBERNETES_PROVIDER=ubuntu ./kube-push.sh -h

Here are some examples. Upgrade the master to version 1.0.5:

$ KUBERNETES_PROVIDER=ubuntu ./kube-push.sh -m 1.0.5

Upgrade the node vcap@10.10.103.223 to version 1.0.5:

$ KUBERNETES_PROVIDER=ubuntu ./kube-push.sh -n 10.10.103.223 1.0.5

Upgrade all components to version 1.0.5:

$ KUBERNETES_PROVIDER=ubuntu ./kube-push.sh 1.0.5

The script will not delete any resources of your cluster; it only replaces the binaries. You can use the kubectl command to check whether the newly upgraded cluster is working correctly. To make sure the version of the upgraded cluster is what you expect, you will find these commands helpful:

$ kubectl version, to check the Server Version.

$ ssh -t vcap@10.10.102.223 'cd /opt/bin && sudo ./kubelet --version', to check the kubelet version on a node.

For support level information on all solutions, see the Table of solutions chart.
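A side note on the SERVICE_CLUSTER_IP_RANGE and FLANNEL_NET settings above: the two CIDR blocks must not overlap. One quick way to sanity-check a pair of ranges is Python's standard-library ipaddress module. This helper is purely illustrative and is not part of the cluster scripts:

```python
import ipaddress

def overlaps(cidr_a, cidr_b):
    """Return True if the two CIDR blocks share any addresses."""
    a = ipaddress.ip_network(cidr_a)
    b = ipaddress.ip_network(cidr_b)
    return a.overlaps(b)

# The ranges from the sample config above do not overlap:
print(overlaps("192.168.3.0/24", "172.16.0.0/16"))  # False
```

The same check also catches conflicts with your own private network range, e.g. overlaps("172.16.0.0/16", "172.16.5.0/24") returns True.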
http://kubernetes.io/docs/getting-started-guides/ubuntu/manual/
Introduction

When a programmer produces nearly identical lines of code by copy-paste, the mistake most often hides in the last line: the final copy is where one forgets to replace 'y' with 'z'. This is the "last line effect".

Examples

Now I only have to convince the readers that it all is not my fancy, but a real tendency. To prove my point, I will show you some examples. I won't cite all the examples, of course, only the simplest or most representative ones.

Source Engine SDK

inline void Init( float ix=0, float iy=0, float iz=0, float iw = 0 )
{
    SetX( ix );
    SetY( iy );
    SetZ( iz );
    SetZ( iw );
}

The SetW() function should be called at the end.

ReactOS

if (*ScanString == L'\"' || *ScanString == L'^' || *ScanString == L'\"')

The last comparison duplicates the first one.

SeqAn

inline typename Value::Type const & operator*()
{
    tmp.i1 = *in.in1;
    tmp.i2 = *in.in2;
    tmp.i3 = *in.in2;
    return tmp;
}

The last assignment should read *in.in3.

Qt

pattern->patternRepeatY = false;

'patternRepeatX' is missing in the very last block. The correct code looks as follows:

pattern->patternRepeatX = false;
pattern->patternRepeatY = false;

ReactOS

The 'mjstride' variable will always be equal to one. The last line should have been written like this:

const int mjstride = sizeof(mag[0][0]) / sizeof(mag[0][0][0]);

Chromium, Multi Theft Auto, the Trans-Proteomic Pipeline, SlimDX, Mozilla Firefox, and MongoDB contain similar fragments.

Quake-III-Arena

if (fabs(dir[0]) > test->radius ||
    fabs(dir[1]) > test->radius ||
    fabs(dir[1]) > test->radius)

The value from the dir[2] cell is left unchecked.

Clang

return (ContainerBegLine <= ContaineeBegLine &&
        ContainerEndLine >= ContaineeEndLine &&
        (ContainerBegLine != ContaineeBegLine ||
         SM.getExpansionColumnNumber(ContainerRBeg) <=
         SM.getExpansionColumnNumber(ContaineeRBeg)) &&
        (ContainerEndLine != ContaineeEndLine ||
         SM.getExpansionColumnNumber(ContainerREnd) >=
         SM.getExpansionColumnNumber(ContainerREnd)));

At the very end of the block, the "SM.getExpansionColumnNumber(ContainerREnd)" expression is compared to itself.

Unreal Engine 4

static bool PositionIsInside(....)
{
    return Position.X >= Control.Center.X - BoxSize.X * 0.5f &&
           Position.X <= Control.Center.X + BoxSize.X * 0.5f &&
           Position.Y >= Control.Center.Y - BoxSize.Y * 0.5f &&
           Position.Y >= Control.Center.Y - BoxSize.Y * 0.5f;
}

The programmer forgot to make 2 edits in the last line. Firstly, ">=" should be replaced with "<="; secondly, minus should be replaced with plus.

Qt

In another fragment, the 'h' variable should have been used as an argument in the last call.
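Mistakes of this shape are mechanical enough that even a tiny checker can flag many of them. The sketch below (Python, standard-library ast module only; the function name and the toy in_box sample are my own) reports boolean expressions that repeat an operand verbatim, the pattern behind the ReactOS and Quake-III-Arena examples above:

```python
import ast

SRC = """
def in_box(x, y, r):
    # copy-paste slip: y is never checked
    return abs(x) > r or abs(x) > r
"""

def duplicated_operands(source):
    """Return line numbers of boolean expressions with a repeated operand."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.BoolOp):
            seen = set()
            for value in node.values:
                text = ast.dump(value)  # structural fingerprint of the operand
                if text in seen:
                    findings.append(node.lineno)
                seen.add(text)
    return findings

print(duplicated_operands(SRC))  # [4]
```

A real analyzer would need a C/C++ parser for the code bases cited above, but the core idea, comparing sibling subtrees for equality, is the same.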
OpenSSL has one too.

Conclusion: when reviewing blocks of nearly identical lines, give the last line the closest look; that is where copy-paste errors concentrate.
https://hownot2code.com/2016/09/14/last-line-effect/?replytocom=48
Hello Everybody. This program needs to use an array to count the amount of times a certain roll appears on the dice, after the user enters how many dice and rolls they would like to use. I was able to figure out the code to print out the values of each roll, however I'm not sure what I would have to do to use an array as a counter. Any help would be appreciated.

Example Results:

You Rolled 2: 1 Time
You Rolled 3: 0 Times
You Rolled 4: 3 Times
etc.

Here is my code so far:

import java.util.*;

public class Dice {
    public static Scanner in = new Scanner(System.in);

    public static void main(String[] args) {
        int dice = 0;
        int roll = 0;
        while (true) {
            System.out.print("How Many Dice Do You Want To Roll? ");
            dice = in.nextInt();
            if (dice > 0) break;
            System.out.println("Must Be Positive!");
        }
        while (true) {
            System.out.print("How Many Times Do You Want To Roll? ");
            roll = in.nextInt();
            if (roll > 0) break;
            System.out.println("Must Be Positive!");
        }
        int dicetotal = Dicecount(dice);
        for (int i = 0; i < roll; i++) {
            System.out.println(Dicecount(dice));
        }
    }

    public static int Dicecount(int dice) {
        int dicetotal = 0;
        for (int x = 0; x < dice; x++) {
            int rollcount = (int)(1 + 6 * (Math.random()));
            dicetotal += rollcount;
        }
        return dicetotal;
    }
}
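The counting idea the question asks about can be sketched like this (in Python rather than Java, for brevity; the function and variable names are mine, not from the thread). The array index is the roll total, and the value at that index is the counter:

```python
import random

def roll_counts(dice, rolls):
    """Count how often each total appears over `rolls` throws of `dice` dice."""
    counts = [0] * (6 * dice + 1)   # index = total rolled, value = times seen
    for _ in range(rolls):
        total = sum(random.randint(1, 6) for _ in range(dice))
        counts[total] += 1          # the array itself is the counter
    return counts

counts = roll_counts(dice=2, rolls=100)
for total in range(2, len(counts)):  # smallest total is one pip per die
    print("You Rolled %d: %d Times" % (total, counts[total]))
```

The same structure carries straight back to Java: declare int[] counts = new int[6 * dice + 1]; and do counts[total]++ inside the roll loop, then print the array at the end.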
https://www.daniweb.com/programming/software-development/threads/93922/array-in-dice-program
Django-Chameleon-templates

This Django app provides your Django project with a nice drop-in replacement for the template engine shipped with Django. You will need to download Chameleon to use these template loaders; you can either use pip or download it directly from the Chameleon website. Once you have it installed, there are a few ways you can use the template engine in your Django project. Here is a list of all the included template loaders and what they do:

djchameleon.loaders.filesystem.Loader: This is a drop-in replacement for the default Django filesystem.Loader. It will use your existing TEMPLATE_DIRS to load templates.

djchameleon.loaders.app_directories.Loader: This is a drop-in replacement for the default Django app_directories.Loader. It will use the 'templates' directories in each of your app directories.

djchameleon.loaders.chameleon_engine.Loader: This loader allows you to specify a separate CHAMELEON_TEMPLATES settings variable, which behaves exactly like your TEMPLATE_DIRS variable.

There are also a couple of settings variables you can set:

CHAMELEON_TEMPLATES: Takes the exact same format as the TEMPLATE_DIRS variable. This setting is only valid when the chameleon_engine.Loader is used. It allows you to have a specific directory for Chameleon templates.

CHAMELEON_EXTENSION: Forces a specific extension on all template files. This uses functionality shipped with Chameleon, and merely passes the setting directly over to Chameleon.

BASIC USAGE AND EXAMPLE TEMPLATES

You will still render templates using the usual Django methods, such as render_to_response() or render(). You will even have access to all of your usual context variables, including context_processors. Included in this package are some examples which you can take a look at: there is an example views.py, and urls.py.
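Putting the loader and settings descriptions above together, a project's settings.py might be wired up roughly as follows. This is a hypothetical sketch assuming the pre-Django-1.8 TEMPLATE_LOADERS/TEMPLATE_DIRS style that this app targets; the directory paths are placeholders:

```python
# settings.py (fragment) -- hypothetical wiring, paths are placeholders
TEMPLATE_LOADERS = (
    'djchameleon.loaders.filesystem.Loader',
)
TEMPLATE_DIRS = (
    '/path/to/project/templates',
)

# Only meaningful when chameleon_engine.Loader is used instead:
CHAMELEON_TEMPLATES = (
    '/path/to/project/chameleon_templates',
)
# Force one extension for all Chameleon template files:
CHAMELEON_EXTENSION = '.pt'
```

Swap in app_directories.Loader or chameleon_engine.Loader in TEMPLATE_LOADERS depending on which layout you want.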
To try out these examples, add the following line to your project's urls.py:

url(r'^chameleon/', include('djchameleon.urls')),

You can of course change the base URL. The reason I suggest adding this is so that the included url tag can reverse the URL in the example template. Yes, this template drop-in has a Django-compatible url method you can use; more on that in the next section. In order to use these examples you will also need to add djchameleon to your INSTALLED_APPS and use the djchameleon.loaders.app_directories.Loader. In normal operation you do not need to have this app in your INSTALLED_APPS; only the example template requires it.

Chameleon Django API

Now on to the fun stuff: how to use various Django resources within your Chameleon templates, such as CSRF_TOKEN and URL reversal. At the moment, these are the only main functions taken from Django, as they are a requirement when working with a Django project. The example mentioned above shows both of these in action, but I will explain how to use each one of these APIs here within your template.

URL reversal:

<a tal:A Link!</a>

If you need to place a URL inside JavaScript, you can use the following notation:

${url:hello_chameleon}

CSRF_TOKEN tag:

<p metal:

This will do exactly what {% csrf_token %} does. You can also render it manually as such:

<input type="hidden" name="csrfmiddlewaretoken" tal:

Adding METAL Macros

Unlike in Django, where creating template tags requires Python programming skills, you can easily extend the Chameleon template engine using METAL macros. These can be placed either in your base template or in your app's macros/ directory. You can see an example macro in the djchameleon/macros/example.pt file. Everything with a .pt extension in your app's macros/ directory is added into the global template namespace. For example, say your app name is todo, and you want to create a namespace called todo which assists in many aspects of your todo application.
This is where you would create the file: django_project/todo/macros/todo.pt. Then, inside that file, you can place various reusable HTML/Python code for your todo app templates to use. Here's an example template, which is included in the example.pt file:

<p metal: Hello, <strong metal:World</strong> </p>

To use this macro inside any Chameleon template, use the following code:

<p metal:

Or, if you wish to fill in the name context, use this code:

<p metal: <i metal:Kevin</i> </p>

Note that you can also change the underlying tag dynamically; here it's changed from strong to i.
https://bitbucket.org/kveroneau/django-chameleon-templates/src
Here's how to find all the modules in some directory, and import them.

Contents

Finding Modules in a Directory

Is there a better way than just listing the contents of the directory and taking those files that end with ".pyc" or ".py"? But perhaps there isn't.

 1 import os
 2 
 3 def find_modules(path="."):
 4     """Return names of modules in a directory.
 5 
 6     Returns module names in a list. Filenames that end in ".py" or
 7     ".pyc" are considered to be modules. The extension is not included
 8     in the returned list.
 9     """
10     modules = set()
11     for filename in os.listdir(path):
12         module = None
13         if filename.endswith(".py"):
14             module = filename[:-3]
15         elif filename.endswith(".pyc"):
16             module = filename[:-4]
17         if module is not None:
18             s.add(module)
19     return list(modules)

Importing the Modules

How do you import a module, once you have its name? With the ImpModule! It dynamically loads named modules.

Finding the Things Inside a Module

Once you have your module, you can look inside it with .__dict__.

Finding Functions Within a Module

We just look for dictionary values that are of type types.FunctionType.

See Also

The DocXmlRpcServer page includes code demonstrating the use of these techniques.

Discussion

I got this error when executing find_modules() in a package directory, that is, a directory that contained an __init__.py file:

File "C:\Python254\lib\site-packages\joedorocak\find_modules.py", line 27, in find_modules
    s.add(module)
NameError: global name 's' is not defined

It looks to me like s needs to be initialized (some place near "modules = set()"). I'm not sure what the protocol is here, so I'm just going to leave this comment in the discussion. Here's what seems to work for me; I got rid of 's' altogether.

def find_modules(path="."):
    """Return names of modules in a directory.

    Returns module names in a list. Filenames that end in ".py" or
    ".pyc" are considered to be modules. The extension is not included
    in the returned list.
""" modules = set() for filename in os.listdir(path): module = None if filename.endswith(".py"): module = filename[:-3] elif filename.endswith(".pyc"): module = filename[:-4] if module is not None: modules.add(module) return list(modules) All the best,
http://wiki.python.org/moin/ModulesAsPlugins?highlight=%28%28ImpModule%29%29
Description

Unlike the vast majority of dynamic languages, in which at least two data types can be distinguished, Tcl's type system at the script level consists of exactly one type: the string. Hence Tcl is the [Totally untyped language]. This is fundamental to the design of the language. The Dodekalogue states that a script is a string, which implies that each command in a script is a string, which in turn implies that each word in each command is a string. It could therefore be said that "everything is a word", or that "everything is a command", but the fundamental building block is the string.

Other languages provide tokens for types such as number, identifier, keyword, function, list, dictionary, or the special value null. Tcl only provides string.

In Tcl there are three kinds of substitutions that happen to words at runtime: variable substitution, command substitution, and backslash substitution. Since variable values, procedure results, and interpreted backslash sequences can be substituted into words, all these things must be strings. This design provides a great deal of flexibility and power, and gives Tcl a unique flavour: each command is free to interpret its arguments (which are words) as it sees fit; eval, for example, interprets its argument(s) as a script.

What's in a String?

Under the hood, Tcl takes advantage of its implementation language, C, to efficiently store and manipulate the data that backs the string representations of values. Here is a list of some of the things that internally are stored and manipulated in ways that make the most sense in the implementation language:

- variables (both scalar and array)
- lists
- dictionaries
- namespaces

Where the intended use must depend on additional information about the value, there are myriad ways to design that information into a program. Thus, e.g., a list can be modified either by changing its string representation or by using a command like lappend, which works directly with the internal-format version of the value.
For performance, Tcl only updates one of the representations when that particular representation is needed and the other representation is newer. The internal format of a value is not exposed at the script level, does not have any semantic impact on the language, and is just an implementation detail; it simply has no purpose at script level. Tcl handles all the messy details of tracking and synchronizing the script-level values and the internal format(s) of those values so that the user can work in the "seamless" world of words. A user of Tcl's C API will gain an appreciation for the way Tcl values are handled at the C level, each one having both a string interface and a structured interface.

The Magic of EIAS

EIAS ("everything is a string") is one of the grand unifying concepts of Tcl. Programmers more familiar with other languages sometimes criticize Tcl's EIAS design, usually because they assume that complex algorithms requiring data structures are not going to perform well.

Misc

Donald Porter remarked that, of course, everything is a Unicode string: Unicode instead of ASCII.

2003-05-13: Recently, Bruce Eckel in Strong Typing vs. Strong Testing ... make Tk widgets serializable? I was thinking about xml2gui and wondering what it would take to make a widget produce an XML GUI description. Perhaps this belongs on the Tk 9.0 WishList, but I would certainly like to see whatever changes would be necessary to allow this (if it's practical at all).

jcw 2003-05-17: While EIAS is indeed a wonderfully powerful and flexible abstraction, I'd like to point out that LISP'ers and Schemers make a similar claim for the cons cell (everything is a list in LISP).

NEM 2005-07-25: replying to this a couple of years too late... The difference with Lisp is that cons cells aren't universal; as I understand it, some basic data types like numbers are not represented as cons cells.
You could build up everything from cons cells, in a similar way to building everything from set theory, but Lisp does not.

JE: MUMPS (like Smalltalk) ... to do some useful operations on objects and classes. I like Snit references) such that a classifier (say [string is object ...]) could return a yes or no answer. Even better would be one that could tell which object system implemented the object (say [string is objectsystem ...]). This might be possible in a system like Jim if the Jim references encoded the object system and whether something was an object. Even without add-on object systems, it would be nice to be able to determine if there could be [string is command ...], but that in some respects defeats the purpose of unknown. (I'm still fuzzy from sleep, so maybe there is something that does this already; otherwise, how would unknown get called?)

Lars H 2005-07-24: I think the best way of pointing out how your analysis here is wrong is to point out that:

- Tcl has no whattype command;
- What design error did you make that made you ask that question in the first place? Where did you (or someone else) throw away the information that you now find you need?

type: value ... Java and JSON.
http://wiki.tcl.tk/3018
2017-08-03 20:59 GMT+08:00 Jason A. Donenfeld <ja...@zx2c4.com>:
> Hi Wang,
>
> I understand your inquiry and I see what you're trying to accomplish
> with your use of ip rule and fwmark. However, *WireGuard already does
> this automatically*. We _do_ support reply-to-sender. We _do_
> support multihomed servers. You wrote, "But I do wish that server
> can deduce public address which the client connects to, and use the
> public address to response to the client, then the configuration will
> be simple and straightforward." WireGuard _does_ do this.
>
> To demonstrate that, I've added a more explicit test of this to the test
> suite:
>
> If this is not working for you, then you're either doing something
> wrong, or you've uncovered a bug in either WireGuard or the kernel. In
> case it's the latter, would you send me a patch for netns.sh that
> demonstrates the problem in a clear way?

Your test case is straightforward, and I am confident that you're right for that kind of setup. But there's a significant difference: in your test case, the endpoint addresses are configured directly on the attached link. In my case, the WireGuard server's endpoint address is configured on a dummy interface, not the attached link.

I reproduced the problem while using tcpdump to get some clues:

1. the server receives the client's packet
2. the server's 'wg' output shows the client's endpoint addr:port correctly
3. tcpdump on the client only captures outgoing request packets, no response from the server
4. tcpdump on the server only captures incoming request packets, no response (on all physical interfaces)

I was wrong earlier: the response is not routed via the default gateway or other interfaces. There is NO response at all. Very sorry for misleading.

I am not familiar with tests in namespaces. I will look into it; hopefully I can come back with a patch.

_______________________________________________
WireGuard mailing list
WireGuard@lists.zx2c4.com
https://www.mail-archive.com/wireguard@lists.zx2c4.com/msg01275.html
Performance Testing With JMeter and Locust

Learn how to install and use Apache JMeter, a popular free performance testing tool, and the Locust framework to write great performance tests.

In the world of performance testing, JMeter and Locust are the most popular testing tools. In this article we will write a simple test together, trying to show all the basic concepts of these tools. At the end of this article, we will try to find the winner.

Performance tests are designed to check the ability of a server, database, and application to perform under load. Test scripts are written based on usage scenarios. In test scripts, we define the number of users who are going to perform certain actions on the website. Performance testing is often associated with the concepts of load testing, stress testing, or endurance testing. In fact, these are not terms that can be used interchangeably; performance testing should be treated as a higher-level category containing all the aforementioned types of tests.

What Is JMeter?

JMeter is currently the most popular free tool for performance testing. Its first version was presented over 20 years ago, while the latest version, Apache JMeter™ 4.0, was published on February 11th, 2018, and changed the game not only by offering full support for Java 9 but also by taking the user experience into consideration. The dark background (Dracula theme), English as the default language, as well as easier navigation and creation of tests, are undoubtedly a plus. A performance test in JMeter has a complex architecture: it uses ready-made components and allows you to extend functionality through plug-ins. It can consist of such elements as Thread Group, HTTP Request, Sampler, Processor, or Listener. I am going to present JMeter's operation on the example of testing a website.
To construct the test plan you're going to use the following elements: Test Plan, Thread Group, HTTP Request Defaults, HTTP Cookie Manager, and HTTP Request.

JMeter Installation

First of all, you should install a 64-bit JRE or JDK environment. You can download the binary file from the Apache JMeter page, move it to the preferred location, and then extract it. To run JMeter, run the jmeter file, which you can find in the bin directory. After a short time, the JMeter GUI mode should appear. GUI mode should only be used to create a test script, while the non-GUI (command-line) mode should be used for load testing.

How to Create a Performance Test

To create a performance test, you first need the Test Plan, which is the most important element of test architecture in JMeter. It describes the steps established on the basis of the usage scenario; JMeter will execute and run them. There must also be at least one Thread Group element, which is an essential part of every Test Plan. It allows you to specify the number of users that will send a specific query. To add a Thread Group element to the Test Plan, right-click, select Threads (Users) from the Add menu, and select the Thread Group option. The element should be placed under the parent Test Plan. In the next step, you should change the default values to those specified in the scenario. It is not necessary, but you can start by changing the default name "Thread Group" to a more descriptive one, such as "Users". In the following example, I created 10 users who will send a request to one subpage. Let's move to Thread Properties and change the value of the Number of Threads (users) field to 10, which is the number of users in your scenario. For the Ramp-Up Period (in seconds) field, leave the default value of 1; it tells JMeter the delay between starting subsequent users.
The last value is the Loop Count; here you give the number of repetitions of the test, i.e. 1. The next step is to add the tasks that users will perform. You model this with the HTTP Request Defaults element, which you add to the previously created Thread Group named Users. Right-click it, select Config Element from the Add menu, and then add the HTTP Request Defaults element. The HTTP Request Defaults element is not used to send HTTP requests, but to define values that will be included in them. In the added view, set the server name, which in our case means putting 127.0.0.1 in the Server Name or IP field, as well as the port number (8000) in the Port Number field. All HTTP requests will now be sent to the same web server. For your web performance test, you can also add cookie support. Add the HTTP Cookie Manager element to the Thread Group (Users) by right-clicking it and selecting Config Element from the Add menu, then HTTP Cookie Manager. Don't worry about the specific values; you can leave the defaults. The test still needs an HTTP Request. In our scenario there will be one query, for the Login Page. Add the HTTP Request element to the Thread Group (Users) via the Add -> Sampler menu. For the HTTP Request, you can change the value of the Name field to a more descriptive one, e.g. Login Page. Go to the Path field and insert /login/. At this point there is no need to put in the web server address; you have already done this in the HTTP Request Defaults element. The remaining values can be left as defaults. In the Test Plan, you can also place listeners, i.e. elements that let you analyze the test results, for instance as a table or a chart. It is recommended to use listeners only when debugging, because they can consume a lot of memory and thus skew results. Want to know more?
Check out the User's Manual and the JMeter project on GitHub!

What Is Locust?

Locust is a framework for writing performance tests in Python and one of the many alternatives to JMeter. Its implementation is based on tasks. It is designed for testing websites and systems and allows you to check how many simultaneous users they will handle. You can distribute your tests between many machines and check their results in the shared graphical interface.

Installation of the Locust Framework

Locust can be installed on any operating system: macOS, Windows, or Linux. Below is an example for macOS, in which the installation can be done using the pip package manager.

Creating a Performance Test

Below is an example of a simple script saved as a file with the .py extension. The default name of such a file is locustfile.py:

from locust import HttpLocust, TaskSet, task

class LoginPage(TaskSet):
    @task
    def login_page_with_response_code_assertion(self):
        r = self.client.get("/login/")
        assert r.status_code == 200, "Unexpected response code: " + str(r.status_code)

class LoginUser(HttpLocust):
    task_set = LoginPage
    host = ""
    min_wait = 1000
    max_wait = 5000

In the file, you defined a so-called Locust task (there can be more than one) by using the @task decorator. In tasks, you define user behaviour on the website. HttpLocust, in turn, represents the user, and on it you define how long the user should wait between tasks. To run the test saved as locustfile.py, go in the terminal to the directory in which it is located, and then enter the following command:

locust --host=

In the next step, you can run the graphical interface in the browser (in our case Locust is running locally). The interface will open on a page with two inputs. In the first of them, "Number of users to simulate," you enter the number of users (e.g. 10).
In the second one, "Hatch rate," you specify the number of users that will be created per second (e.g. 1). When running a test with the given values, you will see test statistics. In addition to the statistics, you can also see a graph or download the statistics in CSV format. Once again, if you want to find out more about Locust, check out its official documentation and profile on GitHub.

So, Which One Should You Choose?

In summary, the choice between JMeter and Locust is definitely not easy. Although both tools are open source, behind JMeter stands a giant in the form of The Apache Software Foundation, while Locust is developed by a small team of developers and the community centered around the tool. A test in JMeter is built from Thread Group elements: to simulate many users, you must add the appropriate number of these elements. Also, JMeter is an application written entirely in Java, while Locust is based on Python. This distinction matters because advanced assertions in JMeter are written in Java, Groovy, or Beanshell, while in Locust they are Python in its purest form. Starting with performance testing might be easier with JMeter, especially if you don't have much experience in creating performance tests and you prefer to build tests in a UI. For more experienced testers who know Python, writing tests with Locust might be more comfortable.

Published at DZone with permission of Dorota Niezborała. See the original article here. Opinions expressed by DZone contributors are their own.
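As a standalone illustration of the "assertions in plain Python" point (this sketch is not from the original article): a status-code check like the one in the earlier Locust script can be exercised without a running Locust instance by stubbing the response object. The `check_status` helper and the stub are illustrative names, not part of Locust's API.

```python
from types import SimpleNamespace

def check_status(response, expected=200):
    # Compare status codes with ==, and format the int into the message
    assert response.status_code == expected, (
        "Unexpected response code: %d" % response.status_code
    )

# Stub standing in for the object returned by Locust's self.client.get()
ok = SimpleNamespace(status_code=200)
check_status(ok)  # passes silently

try:
    check_status(SimpleNamespace(status_code=500))
except AssertionError as e:
    print(e)  # prints: Unexpected response code: 500
```

Because the assertion logic is an ordinary function, it can be unit-tested on its own and then reused inside a Locust task.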
JussiMannisto left a reply on Proper Place For Adding Language Namespaces

Thanks Martin! That's pretty much what I've done, except the package doesn't have its own service provider. I'll definitely be adding that.

JussiMannisto started a new conversation Proper Place For Adding Language Namespaces

Hi, I'm developing a dual-site setup which consists of an API served by a Lumen instance and a CMS site built with Laravel (both ver. 5.5). These sites share a database and some resources, such as translations. I'm using language namespaces to access the shared translations, like so:

Lang::addNamespace('someNamespace', realpath(base_path('../shared/resources/lang')));

I'm wondering what the correct file & place for adding the namespace is? I can't load them in bootstrap/app.php since the Lang facade hasn't been set up at that point. Currently I'm doing this in a service provider and it works, but that doesn't seem to me like the "proper" place to do it.

JussiMannisto left a reply on Referencing Other Environment Variables In .env

@robrogers3 That is not true. Laravel's documentation says that it uses the DotEnv PHP library for environment configuration. From its source code I found out that it supports nested variables, which is exactly what I needed. See the function Dotenv\Loader->resolveNestedVariables for reference. Here's an example for anyone looking for a solution:

INDEX_NAME=dev_index
UPDATE_INDEXER_COMMAND="/some-path/indexer.sh ${INDEX_NAME} --update"
LIVE_INDEXER_COMMAND="/some-path/indexer.sh ${INDEX_NAME} --live"

JussiMannisto left a reply on Referencing Other Environment Variables In .env

My only goal is to reduce repetition in the .env file. The index name is environment specific, and it is repeated many times in the environment variables. Of course I can write it out manually on each row, or use a placeholder and insert it in code. But if there's a way of injecting INDEX_NAME into other variables, that would be more elegant.

JussiMannisto started a new conversation Referencing Other Environment Variables In .env

Hi, when defining environment variables in the .env file, is there a way of referencing a variable that was defined earlier? I'd like to do something like this:

INDEX_NAME=dev_index
UPDATE_INDEXER_COMMAND="/some-path/indexer.sh $INDEX_NAME --update"
LIVE_INDEXER_COMMAND="/some-path/indexer.sh $INDEX_NAME --live"

...
In the previous article, we suggested that indentation is an (extremely rough) indicator of complexity. Our goal is to write code with less of it. As Luis Atencio puts it:

"…a loop is an imperative control structure that's hard to reuse and difficult to plug in to other operations. In addition, it implies code that's constantly changing or mutating in response to new iterations."

Loops

We've been saying that control structures like loops introduce complexity. But so far we've not seen any evidence of how that happens. So let's take a look at how loops in JavaScript work.

In JavaScript we have at least four or five ways of looping. The most basic is the while-loop. But first, a little bit of setup. We'll create an example function and array to work with.

// oodlify :: String -> String
function oodlify(s) {
    return s.replace(/[aeiou]/g, 'oodle');
}

const input = [
    'John',
    'Paul',
    'George',
    'Ringo',
];

So, we have an array, and we'd like to oodlify each entry. With a while-loop, it looks something like this:

let i = 0;
const len = input.length;
let output = [];
while (i < len) {
    let item = input[i];
    let newItem = oodlify(item);
    output.push(newItem);
    i = i + 1;
}

Note that to keep track of where we're up to, we use a counter, i. We have to initialise this counter to zero, and increment it every time around the loop. We also have to keep comparing i to len so we know where to stop. This pattern is so common that JavaScript provides a simpler way of writing it: the for-loop. It looks something like this:

const len = input.length;
let output = [];
for (let i = 0; i < len; i = i + 1) {
    let item = input[i];
    let newItem = oodlify(item);
    output.push(newItem);
}

This is a helpful construct because it puts all that counter boilerplate together at the top. With the while-loop version it is very easy to forget to increment i and cause an infinite loop. A definite improvement. But, let's step back a bit and look at what this code is trying to achieve.
What we're trying to do is to run oodlify() on each item in the array and push the result into a new array. We don't really care about the counter. This pattern of doing something with every item in an array is quite common. So, with ES2015, we now have a new loop construct that lets us forget about the counter: the for…of loop. Each time around the loop it just gives you the next item in the array. It looks like this:

let output = [];
for (let item of input) {
    let newItem = oodlify(item);
    output.push(newItem);
}

This is much cleaner. Notice that the counter and the comparison are all gone. We don't even have to pull the item out of the array. The for…of loop does all that heavy lifting for us. If we stopped here and used for…of loops everywhere instead of for-loops, we'd be doing well. We would have removed a decent amount of complexity. But… we can go further.

Mapping

The for…of loop is much cleaner than the for-loop, but we still have a lot of setup code there. We have to initialise the output array and call push() each time around the loop. We can make our code even more concise and expressive, but to see how, let's expand the problem a little. What if we had two arrays to oodlify?

const fellowship = [
    'frodo',
    'sam',
    'gandalf',
    'aragorn',
    'boromir',
    'legolas',
    'gimli',
];

const band = [
    'John',
    'Paul',
    'George',
    'Ringo',
];

The obvious thing to do would be a loop for each:

let bandoodle = [];
for (let item of band) {
    let newItem = oodlify(item);
    bandoodle.push(newItem);
}

let floodleship = [];
for (let item of fellowship) {
    let newItem = oodlify(item);
    floodleship.push(newItem);
}

This works. And code that works is better than code that doesn't. But it's repetitive - not very DRY. We can refactor it to reduce some of the repetition.
So, we create a function:

function oodlifyArray(input) {
    let output = [];
    for (let item of input) {
        let newItem = oodlify(item);
        output.push(newItem);
    }
    return output;
}

let bandoodle = oodlifyArray(band);
let floodleship = oodlifyArray(fellowship);

This is starting to look much nicer, but what if we had another function we wanted to apply?

function izzlify(s) {
    return s.replace(/[aeiou]+/g, 'izzle');
}

Our oodlifyArray() function won't help us now. But if we create an izzlifyArray() function, we're repeating ourselves again. Let's do it anyway so we can see them side-by-side:

function oodlifyArray(input) {
    let output = [];
    for (let item of input) {
        let newItem = oodlify(item);
        output.push(newItem);
    }
    return output;
}

function izzlifyArray(input) {
    let output = [];
    for (let item of input) {
        let newItem = izzlify(item);
        output.push(newItem);
    }
    return output;
}

Those two functions are scarily similar. What if we could abstract out the pattern here? What we want is: given an array and a function, map each item from the array into a new array, by applying the function to each item. We call this pattern map. A map function for arrays looks like this:

function map(f, a) {
    let output = [];
    for (let item of a) {
        output.push(f(item));
    }
    return output;
}

Of course, that still doesn't get rid of the loop entirely. If we want to do that, we can write a recursive version:

function map(f, a) {
    if (a.length === 0) {
        return [];
    }
    return [f(a[0])].concat(map(f, a.slice(1)));
}

The recursive solution is quite elegant. Just two lines of code, and very little indentation. But generally, we don't tend to use the recursive version because it has bad performance characteristics in older browsers. And in fact, we don't have to write map ourselves at all (unless we want to). This map business is such a common pattern that JavaScript provides a built-in map method for us.
Using this map method, our code now looks like this:

let bandoodle = band.map(oodlify);
let floodleship = fellowship.map(oodlify);
let bandizzle = band.map(izzlify);
let fellowshizzle = fellowship.map(izzlify);

Note the lack of indenting. Note the lack of loops. Sure, there might be a loop going on somewhere, but that's not our concern any more. This code is now both concise and expressive. It is also simple.

Why is this code simple? That may seem like a stupid question, but think about it. Is it simple because it's short? No. Just because code is concise doesn't mean it lacks complexity. It is simple because we have separated concerns. We have two functions that deal with strings: oodlify and izzlify. Those functions don't have to know anything about arrays or looping. We have another function, map, that deals with arrays. But it doesn't care what type of data is in the array, or even what you want to do with the data. It just executes whatever function we pass it. Instead of mixing everything in together, we've separated string processing from array processing. That is why we can call this code simple.

Reducing

Now, map is very handy, but it doesn't cover every kind of loop we might need. It's only useful if you want to create an array of exactly the same length as the input. But what if we wanted to add up an array of numbers? Or find the shortest string in a list? Sometimes we want to process an array and reduce it down to just one value.

Let's consider an example. Say we have an array of hero objects:

const heroes = [
    {name: 'Hulk', strength: 90000},
    {name: 'Spider-Man', strength: 25000},
    {name: 'Hawk Eye', strength: 136},
    {name: 'Thor', strength: 100000},
    {name: 'Black Widow', strength: 136},
    {name: 'Vision', strength: 5000},
    {name: 'Scarlet Witch', strength: 60},
    {name: 'Mystique', strength: 120},
    {name: 'Namora', strength: 75000},
];

We would like to find the strongest hero.
With a for…of loop, it would look something like this:

let strongest = {strength: 0};
for (let hero of heroes) {
    if (hero.strength > strongest.strength) {
        strongest = hero;
    }
}

All things considered, this code isn't too bad. We go around the loop, keeping track of the strongest hero so far in strongest. To see the pattern though, let's imagine we also wanted to find the combined strength of all the heroes.

let combinedStrength = 0;
for (let hero of heroes) {
    combinedStrength += hero.strength;
}

In both examples we have a working variable that we initialise before starting the loop. Then, each time around the loop, we process a single item from the array and update the working variable. To make the loop pattern even clearer, we'll factor out the inner part of the loops into functions. We'll also rename the variables to further highlight the similarities.

function greaterStrength(champion, contender) {
    return (contender.strength > champion.strength) ? contender : champion;
}

function addStrength(tally, hero) {
    return tally + hero.strength;
}

const initialStrongest = {strength: 0};
let working = initialStrongest;
for (hero of heroes) {
    working = greaterStrength(working, hero);
}
const strongest = working;

const initialCombinedStrength = 0;
working = initialCombinedStrength;
for (hero of heroes) {
    working = addStrength(working, hero);
}
const combinedStrength = working;

Written this way, the two loops look very similar. The only thing that really changes between the two is the function called and the initial value. Both reduce the array down to a single value. So we'll create a reduce function to encapsulate this pattern.

function reduce(f, initialVal, a) {
    let working = initialVal;
    for (let item of a) {
        working = f(working, item);
    }
    return working;
}

Now, as with map, the reduce pattern is so common that JavaScript provides it as a built-in method for arrays. So we don't need to write our own if we don't want to.
Using the built-in method, our code becomes:

const strongestHero = heroes.reduce(greaterStrength, {strength: 0});
const combinedStrength = heroes.reduce(addStrength, 0);

Now, if you're paying close attention, you may have noticed that this code is not much shorter. Using the built-in array methods, we only save about one line. If we use our hand-written reduce function, then the code is longer. But our aim is to reduce complexity, not write shorter code. So, have we reduced complexity? I would argue yes. We have separated the code for looping from the code that processes individual items. The code is less intertwined. Less complex.

The reduce function might seem fairly primitive at first glance. Most examples with reduce do fairly simple things like adding numbers. But there's nothing saying that the return value for reduce has to be a primitive type. It can be an object, or even another array. This blew my mind a little bit when I first realised it. So we can, for example, write map or filter using reduce. But I'll leave you to try that out for yourself.

Filtering

We have map to do something with every item in an array. And we have reduce to reduce an array down to a single value. But what if we wanted to extract just some of the items in an array? To explore further, we'll expand our hero database to include some extra data:

const heroes = [
    {name: 'Hulk', strength: 90000, sex: 'm'},
    {name: 'Spider-Man', strength: 25000, sex: 'm'},
    {name: 'Hawk Eye', strength: 136, sex: 'm'},
    {name: 'Thor', strength: 100000, sex: 'm'},
    {name: 'Black Widow', strength: 136, sex: 'f'},
    {name: 'Vision', strength: 5000, sex: 'm'},
    {name: 'Scarlet Witch', strength: 60, sex: 'f'},
    {name: 'Mystique', strength: 120, sex: 'f'},
    {name: 'Namora', strength: 75000, sex: 'f'},
];

Now, let's say we have two problems. We want to:

- Find all the female heroes; and
- Find all the heroes with a strength greater than 500.
Using a plain-old for…of loop, we might write something like this:

let femaleHeroes = [];
for (let hero of heroes) {
    if (hero.sex === 'f') {
        femaleHeroes.push(hero);
    }
}

let superhumans = [];
for (let hero of heroes) {
    if (hero.strength >= 500) {
        superhumans.push(hero);
    }
}

All things considered, this code isn't too bad. But we definitely have a repeated pattern. In fact, the only thing that really changes is our if-statement. So what if we factored just the if-statements into functions?

function isFemaleHero(hero) {
    return (hero.sex === 'f');
}

function isSuperhuman(hero) {
    return (hero.strength >= 500);
}

let femaleHeroes = [];
for (let hero of heroes) {
    if (isFemaleHero(hero)) {
        femaleHeroes.push(hero);
    }
}

let superhumans = [];
for (let hero of heroes) {
    if (isSuperhuman(hero)) {
        superhumans.push(hero);
    }
}

This type of function that only returns true or false is sometimes called a predicate. We use the predicate to decide whether or not to keep each item in heroes. The way we've written things here makes the code longer. But now that we've factored out our predicate functions, the repetition becomes clearer. We can extract it out into a function.

function filter(predicate, arr) {
    let working = [];
    for (let item of arr) {
        if (predicate(item)) {
            working = working.concat(item);
        }
    }
    return working;
}

const femaleHeroes = filter(isFemaleHero, heroes);
const superhumans = filter(isSuperhuman, heroes);

And, just like map and reduce, JavaScript provides this one for us as an Array method. So we don't have to write our own version (unless we want to). Using array methods, our code becomes:

const femaleHeroes = heroes.filter(isFemaleHero);
const superhumans = heroes.filter(isSuperhuman);

Why is this any better than writing the for…of loop? Well, think about how we'd use this in practice. We have a problem of the form "Find all the heroes that…". Once we notice we can solve this problem using filter, our job becomes easier.
All we need to do is tell filter which items to keep. We do this by writing one very small function. We forget about arrays and working variables. Instead, we write a teeny, tiny predicate function. That's it. And as with our other iterators, using filter conveys more information in less space. We don't have to read through all the generic loop code to work out that we're filtering. Instead, it's written right there in the method call.

Finding

Filtering is very handy. But what if we wanted to find just one hero? Say we wanted to find Black Widow. We could use filter to find her, like so:

function isBlackWidow(hero) {
    return (hero.name === 'Black Widow');
}

const blackWidow = heroes.filter(isBlackWidow)[0];

The trouble with this is that it's not very efficient. The filter method looks at every single item in the array. But we know that there's only one Black Widow, and we can stop looking after we've found her. Still, this approach of using a predicate function is neat. So let's write a find function that will return the first item that matches:

function find(predicate, arr) {
    for (let item of arr) {
        if (predicate(item)) {
            return item;
        }
    }
}

const blackWidow = find(isBlackWidow, heroes);

And again, JavaScript provides this one for us, so we don't have to write it ourselves:

const blackWidow = heroes.find(isBlackWidow);

Once again, we end up expressing more information in less space. By using find, our problem of finding a particular entry boils down to just one question: how do we know if we've found the thing we want? We don't have to worry about the details of how the iteration is happening.

Summary

These iteration functions are a great example of why (well-chosen) abstractions are so useful and elegant. Let's assume we're using the built-in array methods for everything.
In each case we've done three things:

- Eliminated the loop control structure, so the code is more concise and (arguably) easier to read;
- Described the pattern we're using by using the appropriate method name, that is, map, reduce, filter, or find; and
- Reduced the problem from processing the whole array to just specifying what we want to do with each item.

Notice that in each case, we've broken the problem down into solutions that use small, pure functions. What's really mind-blowing, though, is that with just these four patterns (though there are others, and I encourage you to learn them), you can eliminate nearly all loops in your JS code. This is because almost every loop we write in JS is processing an array, or building an array, or both. And when we eliminate the loops, we (almost always) reduce complexity and produce more maintainable code.

Update upon the 23rd of February 2017

A few people have pointed out that it feels inefficient to loop over the hero list twice in the reduce and filter examples. Using ES2015 destructuring makes combining the two reducer functions into one quite neat. Here's how I would refactor to iterate only once over the array:

function processStrength({strongestHero, combinedStrength}, hero) {
    return {
        strongestHero: greaterStrength(strongestHero, hero),
        combinedStrength: addStrength(combinedStrength, hero),
    };
}

const {strongestHero, combinedStrength} = heroes.reduce(processStrength, {
    strongestHero: {strength: 0},
    combinedStrength: 0,
});

It's a little bit more complicated than the version where we iterate twice, but it may make a big difference if the array is enormous. Either way, the order is still O(n).

Atencio, Luis. 2016, Functional Programming in JavaScript. Manning Publications. iBooks. ↩︎
Yup - got that :) Fixed my problem too - thanks!!!!

Here is my current code:

import javax.swing.*;
import java.awt.*;
import java.util.Scanner;

public class Heart extends JFrame{

This is the issue: 2656
It should look like: 2657

Also, the issue occurs whenever I add another page.drawString("-"); line of code.

I have been crashing a number of lectures at the university. I would read the textbook I bought, only a friend is using it. I have done what you recommended, which has made my programme MUCH tidier -...

Is there any chance you could show me how to do this? My every attempt is ending in failure...

--- Update ---

Also, I recently added this to my code - it works fine, but for some reason it seems...

import javax.swing.*;
import java.awt.*;
import java.util.Scanner;

public class Heart extends JFrame{

Sorry! If I enter 1 as my input it works fine, only it asks me to enter another input (and if I do, that doesn't work). However, for any value over 1, Ray3 goes into the completely wrong place. ---...

I'm confused about how to do this - I shifted the input to this, but it is still crazy:

public static int input(){
    System.out.println("Enter positive distance from lens (in cm):");
    Scanner...

It was all going smoothly until I added the equations so that it could calculate the lines for Ray3 on its own... Then it started looping the input and returning only a single value for both y1 and...
Python Thread Tutorial (Part 1)

In this tutorial, we take a look at how Python threads can be used to make code run smoother and faster, helping you garner insights from data quicker.

In the example, the class mythread inherits the Python threading.Thread class.

- __init__(self [,args]): Override the constructor.
- run(): This is the section where you can put your logic.
- start(): The start() method starts a Python thread.
- The mythread class overrides the constructor, so the base class constructor (Thread.__init__()) must be invoked.

Python Thread Creation

The function threading.activeCount() returns the number of threads that are active, and threading.enumerate() returns a list of them. Let us understand this with an example. Let us see the output.

Figure 1: All threads and thread list

Now comment out the line time.sleep(1) and then run the code again.

Figure 2: Without sleep time

You can see the difference. The statements threading.activeCount() and threading.enumerate() are run by the main thread (the main process), so the main thread is responsible for running the entire program. In Figure 1, when the main thread executed threading.activeCount(), thread1 and thread2 were active. When we commented out the time.sleep(1) statement and executed the code again, thread1 and thread3 might not be active.

The Join Method

Before discussing the significance of the join method, let us see the following program. In this program, two threads have been created with arguments. In the target function, fun1, a global list, list1, is appended with the argument. Let us see the result.

Figure 3: Showing empty list

The above figure shows an empty list, but the list is expected to be filled with the values 1 and 6. The statement print("List1 is : ", list1) is executed by the main thread, and the main thread printed the list before it was filled. The main thread, thus, must be paused until all threads complete their jobs. To achieve this, we shall use the join method.
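The listing for this example did not survive extraction; below is a hedged reconstruction consistent with the surrounding description (two threads appending their arguments, 1 and 6, to the global list1), already including the join calls described above:

```python
import threading

list1 = []

def fun1(a):
    # Each thread appends its argument to the shared list
    list1.append(a)

thread1 = threading.Thread(target=fun1, args=(1,))
thread2 = threading.Thread(target=fun1, args=(6,))
thread1.start()
thread2.start()

# Without these joins, the main thread may print the list before it is filled
thread1.join()
thread2.join()

print("List1 is : ", list1)
```

With the joins in place, the print statement only runs after both worker threads have finished, so the list is guaranteed to contain both values.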
Let's look at the program again with the join method added. And here is the output.

Figure 4: Use of join method

In the above code, the syntax thread1.join() blocks the main thread until thread1 finishes its task. In order to achieve parallelism, the join method must be called after the creation of all the threads.

Let's see more use cases. In the next example, a for loop is used to create 10 threads. Each thread is appended to the list list_thread. After the creation of all the threads, we call the join method on each of them. Let's see the output.

Figure 5: Output of join with a loop

The time taken to execute is approximately 1 second. Since every thread takes 1 second and we used thread-based parallelism, the total time taken to execute the code is close to 1 second. If you call the join method right after the creation of each thread, the time taken is 10 seconds. This means no parallelism has been achieved, because the join() method of the first thread is called before the creation of the second thread.

join() Method With Time

Let us look at the following piece of code.

import threading
import time

def fun1(a):
    time.sleep(3)  # complex calculation takes 3 seconds

thread1 = threading.Thread(target=fun1, args=(1,))
thread1.start()
thread1.join()
print(thread1.isAlive())

A couple of things are new here. The isAlive() method returns True or False: if the thread is currently active, it returns True; otherwise, it returns False.

Output:

The above output shows that the thread is no longer active. Now let's make a small change - change thread1.join() to thread1.join(2). This tells the program to block the main thread for at most 2 seconds. Let's see the output:

In the above output, the thread was still active, because join(2) blocked the main thread for only 2 seconds while the thread took 3 seconds to complete its task.

threading.Timer()

This method is used to schedule a function call after a given amount of time. Let us understand the syntax.
threading.Timer(interval, function, args=[], kwargs={})

The meaning of the above syntax is that after the specified interval, in seconds, the interpreter will execute the function with args and kwargs.

Published at DZone with permission of Chandu Siva. See the original article here. Opinions expressed by DZone contributors are their own.
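A minimal runnable sketch of that signature may help; the function and argument names below are illustrative, not from the tutorial:

```python
import threading

results = []

def greet(name, punctuation="!"):
    # Runs on a worker thread after the timer fires
    results.append("Hello, %s%s" % (name, punctuation))

# After 0.2 seconds, call greet("world", punctuation="?")
t = threading.Timer(0.2, greet, args=["world"], kwargs={"punctuation": "?"})
t.start()
t.join()  # a Timer is a Thread subclass, so we can wait for it to finish

print(results)
```

Note that a started Timer can also be stopped before it fires by calling t.cancel().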
I wrote a script using tkinter. What confused me was that the script always starts with the shell. Does anybody know how to close the shell?

These are from the PythonCE Wiki, but it is not clearly stated. I still don't know how to do it.

PythonCE Command Line Options

/nopcceshell - Starts PythonCE without the graphical shell. This is only useful when also passing a script filename on the command line.
/new - By default, PythonCE only allows one instance to be running at a time. This option allows you to start multiple instances.

Thanks!

Vladimir Sokolovsky 2009-02-08

Change the file type from .py to .pyw

Vladimir Sokolovsky 2009-02-08

Can you post your source code here?

from tkFileDialog import askopenfilename   # get standard dialogs
from tkColorChooser import askcolor        # they live in Lib/lib-tk
from tkMessageBox import askquestion, showerror, askokcancel
from tkSimpleDialog import askfloat
from Tkinter import *                      # get base widget set

demos = {
    'Open':  askopenfilename,
    'Color': askcolor,
    'Query': lambda: askquestion('Warning', 'You typed "rm *"\nConfirm?'),
    'Error': lambda: showerror('Error!', "He's dead, Jim"),
    'Input': lambda: askfloat('Entry', 'Enter credit card number'),
}

class Quitter(Frame):                      # subclass our GUI
    def __init__(self, parent=None):       # constructor method
        Frame.__init__(self, parent)
        self.pack()
        widget = Button(self, text='Quit', command=self.quit)
        widget.pack(side=LEFT)

    def quit(self):
        ans = askokcancel('Verify exit', "Really quit?")
        if ans:
            Frame.quit(self)

class Demo(Frame):
    def __init__(self, parent=None):
        Frame.__init__(self, parent)
        self.pack()
        Label(self, text="Basic demos").pack()
        for (key, value) in demos.items():
            Button(self, text=key, command=value).pack(side=TOP, fill=BOTH)
        Quitter(self).pack(side=TOP, fill=BOTH)

if __name__ == '__main__':
    Demo().mainloop()

Vladimir Sokolovsky 2009-10-20

Just rename your script from .py to .pyw!
A widget that represents a navigatable tree. More...

#include <Wt/WTree>

A widget that represents a navigatable tree.

WTree provides a tree widget, and coordinates selection functionality. Unlike the MVC-based WTreeView, the tree renders a widget hierarchy, rather than a hierarchical standard model. This provides extra flexibility (as any widget can be used as contents), at the cost of server-side, client-side, and bandwidth resources (especially for large tree tables).

The tree is implemented as a hierarchy of WTreeNode widgets. Selection is rendered by calling WTreeNode::renderSelected(bool). Only tree nodes that are selectable may participate in the selection.

Usage example:

A screenshot of the tree:

setSelectionMode(): Sets the selection mode. The default selection mode is Wt::NoSelection.

setTreeRoot(): Sets the tree root node. The initial value is 0.

treeRoot(): Returns the root node.
- 6 Example

LDC binaries

If you just want to download the very latest LDC binaries, head over to Latest LDC binaries for Windows.

Advice

It is hard for us to keep these wiki pages up-to-date. If you run into trouble, have a look at the build scripts for our Continuous Integration platforms: the files .travis.yml (Ubuntu Linux and OSX) and appveyor.yml (Windows) are always up-to-date with the latest build setup.

Building LDC

Required software

- Windows, of course!
- Visual Studio or stand-alone Visual C++ Build Tools ≥ 2015. Make sure to install the C++ toolchain.
- A D compiler (the ltsmaster branch does not need a D compiler to build).
- git ≥ 2.0 (I use PortableGit)
- Python 2.7.x or Python 3.3.x (I use Winpython)
- CMake ≥ 2.8.9
- Ninja, a neat little and fast build system
- Curl library (just use a precompiled one)

Set up a VS 2015 x64 Native Tools Command Prompt via a batch file like this:

set PATH=...;%~dp0Tools\PortableGit-2.9.3.2-64-bit\usr\bin;%~dp0Tools\PortableGit-2.9.3.2-64-bit\bin;%~dp0Tools\make-4.2.1;%~dp0Tools\cmake-3.3.0-win32-x86\bin;%~dp0Tools\WinPython-64bit-2.7.13.1Zero\python-2.7.13.amd64;%PATH%
set DMD=%~dp0dmd2\windows\bin\dmd.exe
if not exist "%TERM%" set TERM=msys
start /belownormal %comspec% /k "C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\vcvarsall.bat" amd64

Use x86 instead of amd64 as the argument to vcvarsall.bat if you want to build a 32-bit LDC.

Open a shell by executing the batch file.

- Running cl should display the banner from the MS compiler.
- Running git --version should display the banner from git.
- Running python --version should display the banner from python.
- Running cmake --version should display the banner from cmake.
- Running ninja --version should display the ninja version.

Build LLVM

To build LLVM from the command line, just execute the following steps (from C:\LDC):

- Get the source: git clone llvm
- Branch release_40 (git checkout release_40) is currently recommended.
- Create a build directory: md build-llvm-x64
- Change into it: cd build-llvm-x64
- Use a command like this (in one line) to create the Ninja build files:

cmake -G Ninja -DCMAKE_INSTALL_PREFIX="C:\LDC\LLVM-x64" -DCMAKE_BUILD_TYPE=RelWithDebInfo -DLLVM_USE_CRT_RELWITHDEBINFO=MT -DPYTHON_EXECUTABLE="C:\LDC\Tools\WinPython-64bit-2.7.13.1Zero\python-2.7.13.amd64\python.exe" -DLLVM_TARGETS_TO_BUILD=X86 -DLLVM_ENABLE_ASSERTIONS=ON ..\llvm

Omit the CMAKE_BUILD_TYPE definition to build a debug version. The LLVM page on CMake documents other variables you can change. The most common is to add more targets. E.g. to build a target for ARM you change the targets to build to -DLLVM_TARGETS_TO_BUILD=X86;ARM.

- Build LLVM: ninja
- Install it: ninja install

Build LDC

- Get the source: git clone git://github.com/ldc-developers/ldc.git ldc
- md build-ldc-x64
- cd build-ldc-x64
- Set the environment variable specifying which D compiler should be used to build LDC: set DMD=c:\path\to\dmd\bin\dmd.exe
- Use a command like this (in one line), omitting the variables starting with LIBCONFIG when building LDC ≥ 1.3:

cmake -G Ninja -DCMAKE_INSTALL_PREFIX="C:\LDC\LDC-x64" -DCMAKE_BUILD_TYPE=RelWithDebInfo -DLLVM_ROOT_DIR="C:/LDC/LLVM-x64" -DLIBCONFIG_INCLUDE_DIR="C:/LDC/libconfig/lib" -DLIBCONFIG_LIBRARY="C:/LDC/libconfig/lib/x64/ReleaseStatic/libconfig.lib" ..\ldc

- Build LDC and the runtimes: ninja
- If you want to install it: ninja install

Tests

Running the LIT-based tests

You'll need to have lit installed for Python.
To run the tests from your build dir you can do:

- cd C:\LDC\build-ldc-x64
- ctest --output-on-failure

Running the runtime unit tests

- cd C:\LDC\build-ldc-x64
- Build the unit tests: ninja druntime-ldc-unittest druntime-ldc-unittest-debug phobos2-ldc-unittest phobos2-ldc-unittest-debug
- Run the tests, excluding dmd-testsuite and the LIT tests: ctest --output-on-failure -E "dmd-testsuite|lit-tests"

For troubleshooting be sure to examine the file C:\LDC\build-ldc-x64\Testing\Temporary\LastTest.log.

set OS=Win_32

Creating a Visual Studio solution

- 64-bit:
- cd C:\LDC
- md vs-ldc-x64
- cd vs-ldc-x64
- Use the cmake command from the Build LDC section, but use the VS generator instead of Ninja this time: cmake -G "Visual Studio 14 Win64" ...

This creates the VS 2015 solution C:\LDC\vs-ldc-x64\ldc.sln. A Visual Studio solution for LLVM can be created the same way.

Example

The simple D program hello.d

import std.stdio;

int main() {
    writefln("Hello LDC2");
    return 0;
}

can be compiled and linked with the commands:

ldc2 -c hello.d
ldc2 hello.obj

or simply with:

ldc2 hello.d
https://wiki.dlang.org/?title=Building_and_hacking_LDC_on_Windows_using_MSVC&oldid=8608
Introduction: RandomAccessFile is an important class in the Java IO package. Using this class, we can easily point to any position of a file, read any specific part of a file, or write content anywhere within a file. It behaves like a large array of bytes. The cursor that is used to point to the current position in a file is called the file pointer. In the "rws" and "rwd" modes, the file's content is written synchronously to the underlying storage device. In this tutorial, we will learn different usages of RandomAccessFile with examples. Let's have a look:

How to use RandomAccessFile:

import java.io.File;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.RandomAccessFile;

public class Main {

    private static final String PATH = "C:\\user\\Desktop\\file.txt";

    public static void main(String[] args) {
        try {
            //1
            RandomAccessFile raFile = new RandomAccessFile(PATH, "r");
            //2
            raFile.seek(0);
            byte[] bytes = new byte[4];
            //3
            raFile.read(bytes);
            System.out.println(new String(bytes));
            //4
            raFile.seek(10);
            raFile.read(bytes);
            raFile.close();
            System.out.println(new String(bytes));
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

Explanation: The commented numbers in the above program denote the step numbers below:

- First of all, we have created one RandomAccessFile variable for the file defined by PATH, i.e. file.txt. This file contains the following text:

This is a file with few random words.Hello World!Hello Universe!Hello again!Welcome!

The file is opened in read-only mode ‘r’.
- Next, we need to position the file pointer to the position where we want to read or write. The seek() method is used for this. In our example, we have moved it to the 0th position, i.e. to the first position. Then we have created one byte array. The size of this array is 4.
- Using the read() method, we have read the content of the file starting from the 0th position. It will read it and put the content in the byte array bytes.
It will read the same amount of content as the size of the bytes array. Next, we convert the content of bytes to a String and print out the result.
- Similar to the above example, we have moved to the 10th position, read the content into the byte array and printed it out. The output of both print lines will be as below:

This
file

So, the first println printed out the first word This, and the second one printed out file. This starts from the 0th position and file starts from the 10th position. Both have 4 letters.

Writing content to a file:

Writing content to a file is similar to reading. The method looks like below:

try {
    //1
    RandomAccessFile raFile = new RandomAccessFile(PATH, "rw");
    //2
    raFile.seek(4);
    raFile.write("Hello".getBytes());
    raFile.close();
} catch (IOException e) {
    e.printStackTrace();
}

- File creation is similar to the previous example. The only difference is that we are opening it in ‘rw’ or read-write mode.
- For writing content, the write() method is used. One byte array is passed to this method and it will write down the content of this array. The write operation starts from the current pointer position. In our case, we have moved the pointer to the 4th position using seek(), so it will start writing from the 4th position of the file. The content of the text file will look like below:

ThisHello file with few random words.Hello World!Hello Universe!Hello again!Welcome!

So, the content is overwritten by the write operation.

Getting the size of a file:

We can get the size or length of a file using the length() method.
It will return the size of the file:

try {
    RandomAccessFile raFile = new RandomAccessFile(PATH, "r");
    System.out.println("length of the file is " + raFile.length());
} catch (IOException e) {
    e.printStackTrace();
}

So, we can append text to the end of a file by seeking the pointer to the last position of the file first:

public static void main(String[] args) {
    try {
        RandomAccessFile raFile = new RandomAccessFile(PATH, "rw");
        raFile.seek(raFile.length());
        raFile.write("Hello".getBytes());
        raFile.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}

Setting a new length to a file:

We can also change the length of a file using the setLength(long newLength) method. It will change the length of the file to newLength. If the new length is smaller than the previous length, the content of the file will be truncated. Else, the file size will be increased. For example:

public static void main(String[] args) {
    try {
        RandomAccessFile raFile = new RandomAccessFile(PATH, "rw");
        System.out.println("File length " + raFile.length());
        raFile.setLength(3);
        System.out.println("File length after setting : " + raFile.length());
        raFile.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}

If our input file contains the word Hello, it will print out the following output:

File length 5
File length after setting : 3

i.e. the length of the file was 5 previously, but after changing the length, it becomes 3. If you open it, you can see that it contains only the word Hel.

Get the current pointer position:

For reading the current pointer position, i.e. which position it is pointing to, we can use the getFilePointer() method. It returns the position as a long.
public static void main(String[] args) {
    try {
        RandomAccessFile raFile = new RandomAccessFile(PATH, "r");
        System.out.println("Current file pointer position 1 : " + raFile.getFilePointer());
        raFile.seek(3);
        System.out.println("Current file pointer position 2 : " + raFile.getFilePointer());
        raFile.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}

The output will be like below:

Current file pointer position 1 : 0
Current file pointer position 2 : 3

At first, the pointer pointed to the 0th position. Then we changed it to the 3rd position using seek().

Conclusion: We have checked different examples of RandomAccessFile in this post. Always remember to open the file in the correct mode, i.e. if you are opening it only for reading, open it in ‘r’ mode, not in ‘rw’ mode. Also, always close it using the close() method after completing your job. RandomAccessFile contains a few more methods. You can check the official guide here for more info.

Similar tutorials:

- Java BufferedReader and FileReader example read text file
- Java program to read contents of a file using FileInputStream
- Java Program to get the last modified date and time of a file
- Java Program to create a temporary file in different locations
- Java example to filter files in a directory using FilenameFilter
- Convert Java file to Kotlin in Intellij Idea
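As a recap of the advice above — choose the right mode and always close the file — here is a compact sketch that exercises write(), length(), seek(), read() and getFilePointer() together. It uses a throwaway temporary file instead of the hardcoded PATH, and Java 7's try-with-resources to guarantee close() is called; the class name RandomAccessRecap is just for this example.

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

public class RandomAccessRecap {
    public static void main(String[] args) throws IOException {
        // Work on a throwaway temp file rather than a fixed desktop path:
        File tmp = File.createTempFile("raf-demo", ".txt");
        tmp.deleteOnExit();

        // try-with-resources closes the file even if an exception is thrown:
        try (RandomAccessFile raf = new RandomAccessFile(tmp, "rw")) {
            raf.write("Hello World".getBytes());
            System.out.println("length: " + raf.length());          // 11

            raf.seek(6);                 // jump to the 6th position...
            byte[] bytes = new byte[5];
            raf.read(bytes);             // ...and read the next 5 bytes
            System.out.println(new String(bytes));                  // World

            System.out.println("pointer: " + raf.getFilePointer()); // 11
        }
    }
}
```

Because the RandomAccessFile is declared in the try header, there is no need for an explicit close() call in a finally block.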
https://www.codevscolor.com/java-randomaccessfile-explanation-example
You can use WSGI to make rewriting middleware; WebOb specifically makes it easy to write. And that's cool, but it's more satisfying to use your middleware right away without having to think about writing applications that might live behind the middleware. There are two libraries I'll describe here to make that possible: paste.proxy to send WSGI requests out via HTTP, and lxml.html which lets you rewrite the HTML to fix up the links.

To start, we need some kind of middleware that at least is noticeable. How about something to make a word jumble of the page? We'll use lxml as well:

from lxml import html
from random import shuffle

def jumble_words(doc):
    """Mixes up the words in an HTML document (doesn't touch tags or attributes)"""
    doc = html.fromstring(doc)
    # .text_content() gives the text without tags or attributes,
    # .body is the <body> tag:
    words = doc.body.text_content().split()
    shuffle(words)
    for el in doc.body.iterdescendants():
        # The ElementTree model puts all text in .text and .tail on elements,
        # so that's what we mix up:
        el.text = random_words(el.text, words)
        el.tail = random_words(el.tail, words)
    return html.tostring(doc)

def random_words(text, words):
    """Pulls some words from the list words, with the same number of words
    as in the previous `text`"""
    # text can be None, so we need this test:
    if not text:
        return text
    word_count = len(text.split())
    try:
        return ' '.join(words.pop() for i in range(word_count))
    except IndexError:
        # This shouldn't happen, because we should have exactly
        # the right number of words, but just in case...
        return text

from webob import Request

class JumbleMiddleware(object):
    """Middleware that jumbles the words of HTML responses"""

    # This __init__ and __call__ are the basic pattern for middleware:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        req = Request(environ)
        # We don't want 304 Not Modified responses, because we mix up the
        # response differently every time.  So we'll make sure all the
        # headers that could cause that (If-Modified-Since, etc) are
        # removed with .remove_conditional_headers():
        req.remove_conditional_headers()
        # This calls the application with the request, and then returns a
        # response; this is the typical pattern for response-modifying
        # middleware using WebOb:
        resp = req.get_response(self.app)
        if resp.content_type == 'text/html':
            resp.body = jumble_words(resp.body)
        return resp(environ, start_response)

Well, you don't really need to jumble up your own pages, right? Much more fun to jumble other people's pages. Enter the proxy. Here's a basic proxy:

from paste.proxy import Proxy
# We use this to make sure we didn't mess up anything with JumbleMiddleware;
# the validator checks for many WSGI requirements:
from wsgiref.validate import validator
import sys

def main():
    proxy_url = sys.argv[1]
    app = JumbleMiddleware(
        Proxy(proxy_url))
    app = validator(app)
    from paste.httpserver import serve
    serve(app, 'localhost', 8080)

if __name__ == '__main__':
    main()

If you look at the full source the command-line handling is a bit fancier, but it's all obvious stuff. OK, so this will work, but the links will often be broken unless the server only gives relative links. But you can rewrite the links using lxml…

import urlparse

class LinkRewriterMiddleware(object):
    """Rewrites the response, assuming the HTML was generated as though
    based at `dest_href`, and needs to be rewritten for the incoming request"""

    # The normal __init__, __call__ pattern:
    def __init__(self, app, dest_href):
        self.app = app
        if dest_href.endswith('/'):
            dest_href = dest_href[:-1]
        self.dest_href = dest_href

    def __call__(self, environ, start_response):
        req = Request(environ)
        # .path_info (aka environ['PATH_INFO']) is the path of the request
        # (URL rewriting doesn't really have to care about query strings)
        dest_path = req.path_info
        dest_href = self.dest_href + dest_path
        # req.application_url is the base URL not including path_info or
        # the query string:
        req_href = req.application_url

        def link_repl_func(link):
            link = urlparse.urljoin(dest_href, link)
            if not link.startswith(dest_href):
                # Not a local link
                return link
            new_url = req_href + '/' + link[len(dest_href):]
            return new_url

        resp = req.get_response(self.app)
        # This decodes any possible gzipped content:
        resp.decode_content()
        if (resp.status_int == 200
                and resp.content_type == 'text/html'):
            doc = html.fromstring(resp.body, base_url=dest_href)
            doc.rewrite_links(link_repl_func)
            resp.body = html.tostring(doc)
        # Redirects need their redirect locations rewritten:
        if resp.location:
            resp.location = link_repl_func(resp.location)
        return resp(environ, start_response)

Then we rewire the application:

app = JumbleMiddleware(
    LinkRewriterMiddleware(Proxy(proxy_url), proxy_url))

Now there's a fun little proxy for you to play with. You can see the code here.
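The core of link_repl_func is plain URL arithmetic, so you can exercise it without a server, WebOb, or lxml at all. A simplified sketch (the URLs are made up, and Python 3's urllib.parse stands in for the urlparse module used above):

```python
from urllib.parse import urljoin

dest_href = "http://example.org/blog"   # hypothetical proxied site
req_href = "http://localhost:8080"      # where our proxy is serving

def link_repl_func(link):
    # Resolve relative links against the proxied site's base URL:
    link = urljoin(dest_href + "/", link)
    if not link.startswith(dest_href):
        # Not a local link -- leave it alone
        return link
    # Point the link back at the proxy instead of the origin server:
    return req_href + link[len(dest_href):]

print(link_repl_func("archive/2008.html"))   # rewritten to the proxy
print(link_repl_func("http://other.site/"))  # external link, untouched
```

A relative link like "archive/2008.html" comes back as "http://localhost:8080/archive/2008.html", while links to other hosts pass through unchanged — exactly the behavior the middleware applies to every link that doc.rewrite_links() visits.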
http://www.ianbicking.org/blog/2008/07/making-a-proxy-with-wsgi-and-lxml.html
This appendix describes all of the fl_ functions. For a description of the FLTK classes, see Appendix A.

#include <FL/fl_ask.H>

void fl_alert(const char *, ...);

Same as fl_message() except for the "!" symbol. Note: Common dialog boxes are application modal. No more than one common dialog box can be open at any time. Requests for additional dialog boxes are ignored.

int fl_ask(const char *, ...);

Displays a printf-style message in a pop-up box with "Yes" and "No" buttons and waits for the user to hit a button. The return value is 1 if the user hits Yes, 0 if they pick No or another dialog box is still open. The enter key is a shortcut for Yes and ESC is a shortcut for No. Note: Common dialog boxes are application modal. No more than one common dialog box can be open at any time. Requests for additional dialog boxes are ignored. Note: Use of this function is strongly discouraged, and it will be removed in a later FLTK release. Instead, use fl_choice() and provide unambiguous verbs in place of "Yes" and "No".

void fl_beep(int type = FL_BEEP_DEFAULT);

Sounds an audible notification; the default type argument sounds a simple "beep" sound. Other values for type may use a system or user-defined sound file.

int fl_choice(const char *q, const char *b0, const char *b1, const char *b2, ...);

Shows the message with three buttons below it marked with the strings b0, b1, and b2. Returns 0 if button 0 is hit or another dialog box is still open. Returns 1 or 2 for buttons 1 or 2, respectively. ESC is a shortcut for button 0 and the enter key is a shortcut for button 1. Notice the buttons are positioned "backwards". You can hide buttons by passing NULL as their labels.

#include <FL/Enumerations.H>

Fl_Color fl_color_average(Fl_Color c1, Fl_Color c2, float weight);

Returns the weighted average color between the two colors. The red, green, and blue values are averaged using the following formula:

color = c1 * weight + c2 * (1 - weight)

Thus, a weight value of 1.0 will return the first color, while a value of 0.0 will return the second color.

#include <FL/Fl_Color_Chooser.H>

int fl_color_chooser(const char *title, double &r, double &g, double &b);
int fl_color_chooser(const char *title, uchar &r, uchar &g, uchar &b);

Pops up a window to let the user pick an arbitrary RGB color; returns non-zero if the user picks a color and 0 if the dialog is cancelled. The double version takes RGB values in the range 0.0 to 1.0. The uchar version takes RGB values in the range 0 to 255. The title argument specifies the label (title) for the window.

#include <FL/fl_draw.H>

Fl_Color fl_color_cube(int r, int g, int b);

Returns a color out of the color cube. r must be in the range 0 to FL_NUM_RED (5) minus 1. g must be in the range 0 to FL_NUM_GREEN (8) minus 1. b must be in the range 0 to FL_NUM_BLUE (5) minus 1. To get the closest color to a 8-bit set of R,G,B values use:

fl_color_cube(R * (FL_NUM_RED - 1) / 255,
              G * (FL_NUM_GREEN - 1) / 255,
              B * (FL_NUM_BLUE - 1) / 255);

Fl_Color fl_contrast(Fl_Color fg, Fl_Color bg);

Returns the foreground color if it contrasts sufficiently with the background color. Otherwise, returns FL_WHITE or FL_BLACK depending on which color provides the best contrast.

void fl_cursor(Fl_Cursor cursor, Fl_Color fg, Fl_Color bg);

Sets the cursor for the current window to the specified shape and colors. The cursors are defined in the <FL/Enumerations.H> header file.

Fl_Color fl_darker(Fl_Color c);

Returns a darker version of the specified color.

#include <FL/Fl_File_Chooser.H>

char *fl_dir_chooser(const char *message, const char *fname, int relative = 0);

The fl_dir_chooser() function displays a Fl_File_Chooser dialog so that the user can choose a directory. message is a string used to title the window. fname is a default filename to fill in the chooser with. If this is NULL then the last filename that was chosen is used.
The first time the file chooser is called this defaults to a blank string. relative specifies whether the returned filename should be relative (any non-zero value) or absolute (0). The default is to return absolute paths. The returned value points at a static buffer that is only good until the next time fl_dir_chooser() is called.

char *fl_file_chooser(const char *message, const char *pattern, const char *fname, int relative = 0);

FLTK provides a "tab completion" file chooser that makes it easy to choose files from large directories. This file chooser has several unique features, the major one being that the Tab key completes filenames like it does in Emacs or tcsh, and the list always shows all possible completions. fl_file_chooser() pops up the file chooser, waits for the user to pick a file or Cancel, and then returns a pointer to that filename or NULL if Cancel is chosen. pattern is used to limit the files listed in a directory to those matching the pattern. This matching is done by fl_filename_match(). Pass NULL to show all files. fname is a default filename to fill in the chooser with. If this is NULL then the last filename that was chosen is used (unless that had a different pattern, in which case just the last directory with no name is used). The first time the file chooser is called this defaults to a blank string. The returned value points at a static buffer that is only good until the next time fl_file_chooser() is called.

void fl_file_chooser_callback(void (*cb)(const char *));

Sets a function that is called every time the user clicks a file in the currently popped-up file chooser. This could be used to preview the contents of the file. It has to be reasonably fast, and cannot create FLTK windows.

void fl_file_chooser_ok_label(const char *l);

Sets the label that is shown on the "OK" button in the file chooser. The default label (fl_ok) can be restored by passing a NULL pointer for the label string.
#include <FL/filename.H>

int fl_filename_absolute(char *to, int tolen, const char *from);
int fl_filename_absolute(char *to, const char *from);

Converts a relative pathname to an absolute pathname. If from does not start with a slash, the current working directory is prepended to from, with any occurrences of . and x/.. deleted from the result. The absolute pathname is copied to to; from and to may point to the same buffer. fl_filename_absolute() returns non-zero if any changes were made. The first form accepts a maximum length (tolen) for the destination buffer, while the second form assumes that the destination buffer is at least FL_PATH_MAX characters in length.

int fl_filename_expand(char *to, int tolen, const char *from);
int fl_filename_expand(char *to, const char *from);

This function replaces environment variables and home directories with the corresponding strings. Any occurrence of $X is replaced by getenv("X"); if $X is not defined in the environment, the occurrence is not replaced. Any occurrence of ~X is replaced by user X's home directory; if user X does not exist, the occurrence is not replaced. Any resulting double slashes cause everything before the second slash to be deleted. The result is copied to to, and from and to may point to the same buffer. fl_filename_expand() returns non-zero if any changes were made.

const char *fl_filename_ext(const char *f);

Returns a pointer to the last period in fl_filename_name(f), or a pointer to the trailing nul if none is found.

int fl_filename_isdir(const char *f);

Returns non-zero if the file exists and is a directory.

int fl_filename_list(const char *d, dirent ***list, Fl_File_Sort_F *sort = fl_numericsort);

This is a portable and const-correct wrapper for the scandir() function. d is the name of a directory; it does not matter if it has a trailing slash or not. For each file in that directory a "dirent" structure is created.
The only portable thing about a dirent is that dirent.d_name is the nul-terminated file name. An array of pointers to these dirents is created and a pointer to the array is returned in *list. The number of entries is given as a return value. If there is an error reading the directory, a number less than zero is returned, and errno has the reason; errno does not work under WIN32. The name of a directory always ends in a forward slash '/'. The sort argument specifies a sort function to be used when sorting the array of filenames. The following standard sort functions are provided with FLTK: fl_alphasort, fl_casealphasort, fl_casenumericsort, and fl_numericsort. You can free the returned list of files with the following code:

for (int i = return_value; i > 0;) {
    free((void*)(list[--i]));
}
free((void*)list);

int fl_filename_match(const char *f, const char *pattern);

Returns non-zero if f matches pattern. The following syntax is used by pattern: "*" matches any sequence of characters, "?" matches any single character, "[set]" matches any character in the set, "[^set]" matches any character not in the set, "{X|Y|Z}" matches any one of the subexpressions, and "\x" quotes the character x.

const char *fl_filename_name(const char *f);

Returns a pointer to the character after the last slash, or to the start of the filename if there is none.

int fl_filename_relative(char *to, int tolen, const char *from);
int fl_filename_relative(char *to, const char *from);

Converts an absolute pathname to a relative pathname. The relative pathname is copied to to; from and to may point to the same buffer. fl_filename_relative() returns non-zero if any changes were made.

char *fl_filename_setext(char *to, int tolen, const char *ext);
char *fl_filename_setext(char *to, const char *ext);

Replaces the extension in to with the extension in ext. Returns a pointer to to.

Fl_Color fl_gray_ramp(int i);

Returns a gray color value from black (i == 0) to white (i == FL_NUM_GRAY - 1). FL_NUM_GRAY is defined to be 24 in the current FLTK release.
To get the closest FLTK gray value to an 8-bit grayscale color 'I' use:

fl_gray_ramp(I * (FL_NUM_GRAY - 1) / 255)

Fl_Color fl_inactive(Fl_Color c);

Returns the inactive, dimmed version of the given color.

const char *fl_input(const char *label, const char *deflt = 0, ...);

Pops up a window displaying a string, lets the user edit it, and returns the new value. The function returns NULL if the Cancel button is hit or another dialog box is still open. The returned pointer is only valid until the next time fl_input() is called. Due to back-compatibility, the arguments to any printf commands in the label are after the default value.

Fl_Color fl_lighter(Fl_Color c);

Returns a lighter version of the specified color.

void fl_message(const char *, ...);

Displays a printf-style message in a pop-up box with an "OK" button and waits for the user to hit the button. The message text is limited to 1024 characters.

void fl_message_font(Fl_Font fontid, uchar size);

Changes the font and font size used for the messages in all the popups.

Fl_Widget *fl_message_icon();

Returns a pointer to the box at the left edge of all the popups. You can alter the font, color, label, or image before calling the functions.

void fl_open_uri(const char *uri, char *msg = (char *)0, int msglen = 0);

fl_open_uri() opens the specified Uniform Resource Identifier (URI) using an operating-system dependent program or interface. For URIs using the "ftp", "http", or "https" schemes, the system default web browser is used to open the URI, while "mailto" and "news" URIs are typically opened using the system default mail reader and "file" URIs are opened using the file system navigator. On success, the (optional) msg buffer is filled with the command that was run to open the URI; on Windows, this will always be "open uri". On failure, the msg buffer is filled with an English error message.

const char *fl_password(const char *label, const char *deflt = 0, ...);

Same as fl_input(), except an Fl_Secret_Input field is used.
#include <FL/Fl_Shared_Image.H>

void fl_register_images();

Registers the extra image file formats that are not provided as part of the core FLTK library for use with the Fl_Shared_Image class. This function is provided in the fltk_images library.

Fl_Color fl_rgb_color(uchar r, uchar g, uchar b);
Fl_Color fl_rgb_color(uchar g);

Returns the 24-bit RGB color value for the specified 8-bit RGB or grayscale values.

#include <FL/fl_show_colormap.H>

Fl_Color fl_show_colormap(Fl_Color oldcol);

fl_show_colormap() pops up a panel of the 256 colors you can access with fl_color() and lets the user pick one of them. It returns the new color index, or the old one if the user types ESC or clicks outside the window.
http://fltk.org/documentation.php/doc-1.1/functions.html
Syntax:

#include <vector>
iterator end();
const_iterator end() const;

The end() function returns an iterator just past the end of the vector. Note that before you can access the last element of the vector using an iterator that you get from a call to end(), you'll have to decrement the iterator first. This is because end() doesn't point to the end of the vector; it points just past the end of the vector. For example, in the following code, the first “cout” statement will display garbage, whereas the second statement will actually display the last element of the vector:

vector<int> v1;
v1.push_back( 0 );
v1.push_back( 1 );
v1.push_back( 2 );
v1.push_back( 3 );

int bad_val = *(v1.end());
cout << "bad_val is " << bad_val << endl;

int good_val = *(v1.end() - 1);
cout << "good_val is " << good_val << endl;

The next example shows how begin() and end() can be used to iterate through all of the members of a vector:

vector<int> v1( 3, 5 );
for (vector<int>::iterator it = v1.begin(); it != v1.end(); ++it) {
    cout << *it << " ";
}

Related Topics: begin, rbegin, rend
http://www.cppreference.com/wiki/stl/vector/end
Lately I’ve been involved in starting up my team blog: if you read Italian and want to read stories from Support Engineers working on various technologies, you can’t avoid signing up to the feed:

Now guess what’s been the topic of the posts I’ve been writing there… yes! Memory Management on Windows CE\Mobile and NETCF!! (so far the intro and then part 0, 1, 2 – don’t ask me why I made it 0-based, even after an intro… I really don’t remember)

And since I’ve been adopting an approach that is not usually the one used to explain how things work, I honestly think I can reuse the same one and translate it here… probably I’m going to repeat some concepts I’ve already blogged about, however it may be worth it in order to have one single document, this post, as hopefully a possible quite-ultimate way to describe how memory is handled… is the bar too high??

INTRODUCTION ABOUT WINDOWS CE\MOBILE

So, let’s start: Windows CE and Windows Mobile are not the same thing. After working on this for a while it can be obvious, but if this is your first experience with these so-called “Smart Devices” then it may not be so. We must be clear about the terminology, specifically about terms like “platform”, “operating system”, “Platform Builder”, “Adaptation Kit”, “OEM”, “ODM”, etc.

In reality the true name of “Windows CE” would nowadays be “Windows Embedded CE”, however I don’t want to mess with a product that was once known as “Windows XP Embedded” and which nowadays is differentiated into the following products: Windows Embedded Standard, Enterprise, POSReady, NavReady and Server.

So, when I write “Windows CE” here I’ll mean the historical name of “Windows Embedded CE”: let’s forget the other “Windows Embedded X” products (out of my scope) and let’s concentrate on Windows CE\Mobile.

Windows CE is a platform for OEMs (Original Equipment Manufacturers).
This means that we provide the manufacturer with an Integrated Development Environment very similar to Visual Studio (indeed, Windows CE 6.0 is integrated with Visual Studio), but with the aim of developing Operating Systems (instead of applications) based on the platform provided by Microsoft. The tool is called "Platform Builder for Windows CE”, and up to version 5.0 it was a separate tool from Visual Studio.

Windows CE is a modular platform. This means that the OEM is totally free to include only the modules, drivers and applications of his interest. Microsoft provides about 90% of the source code of the Windows CE platform, as well as code examples for drivers and various recommendations (which the OEM might or might not follow). For example, if the device is not equipped with an audio output, then the OEM won’t add a sound driver. If it doesn’t have a display, then the OEM will not develop nor insert a video driver. And so on for network connectivity, a barcode scanner, a camera, and so on.

On a Windows CE-based device, the OEM can include whatever he wants. That's why, from the point of view of technical support for application developers, sometimes we can’t help a programmer who is targeting a specific device whose operating system is based on Windows CE: furthermore, the OEM can decide whether or not to offer application developers the opportunity to programmatically interact with special functions of the device through a so-called "Private SDK” (which may also contain an emulator image, for example).

An important detail: differently from the Windows Embedded OSs (Standard \ Enterprise \ POSReady \ NavReady \ Server), for Operating Systems based on Windows CE the OEMs actually *COMPILE* the source code of the platform (apart from the roughly 10% provided by Microsoft only in binary form, corresponding to the core kernel and other features).
Now: Windows Mobile is a particular customization of Windows CE, but in this case the OEM needs to create an Operating System that meets a set of requirements, called the "Windows Mobile Logo Test Kit". The tool used by Windows Mobile OEMs is called the "Adaptation Kit for Windows Mobile", a special edition of "Platform Builder" which allows adapting the "Windows Mobile" platform to the exact hardware that the OEM has built or that he requested from an ODM (“Original Device Manufacturer”). In the Windows Mobile scenario we can’t forget Mobile Operators either, which often "brand" a device, requiring the OEM to include specific applications, and usually configure the connectivity of the mobile network (GPRS, UMTS, WAP, etc.).

WARNING: nothing prohibits a WinMo OEM from including special features such as a barcode scanner or an RFID chip or anything else... the important thing is that the minimal set is the same. Moreover, WinMo OEMs too can provide a “Private SDK” to programmatically expose specific functionality related to their operating system (see for example the Samsung SDK containing private APIs for the accelerometer and other features, which are documented and supported by Samsung itself).

Finally, one last thing before starting to talk about memory: Windows Mobile 5.0, 6, 6.1 and 6.5 are all platforms based on Windows CE 5.0. So, they all share the same Virtual Memory management mechanisms, except in some details for the latter (mainly with some benefits for application developers).

VIRTUAL MEMORY ON WINDOWS CE\MOBILE

So now we can start talking about how memory is managed on Operating Systems based on Windows CE 5.0. And I’m being specific about Windows Embedded CE *5.0* because in 6.0 memory management is (finally!) totally changed, and the limitations we’re going to discuss in the remainder no longer exist.
Incidentally, this is part of the same limitations described for Windows CE 4.2 by Doug Boling's Windows CE .NET Advanced Memory Management, although that article is dated 2002! Fortunately, some improvements have been introduced from Windows Mobile 6 and especially in 6.1 (and therefore also in 6.5), whereby not only do applications have more virtual memory available, but the entire operating system is also more stable as a whole. I don't want to repeat here what can be found in the documentation and on various blogs: instead, I'd like to actually show the theory in action, because only by looking at data like the following can you realize what a good programmer a Developer for Windows Mobile has to be! The following is the output of a tool developed by Symbol (later acquired by Motorola), which allowed the manufacturer to understand how the device.exe process was responsible (or not) for various problems related to memory. The result was something like the following (I obfuscated possibly sensitive data of the software-house I worked with during the Service Request). So, above all, what did Symbol\Motorola mean by "Code" (blue) and "Data" (green)?

• "Code": these are the *RAM* DLLs loaded or mapped into a process-slot. They start from the top of the 32MB slot and grow down. If several processes use the same DLL, the second one maps it at the same address where the first one had loaded it.
• "Data": this is the executable's compiled code + Heap(s) + Stack. It starts from the bottom and grows up.

Finally, the red vertical line represents the "DLL Load Point", i.e. the address where a DLL is loaded in case it hadn't yet been loaded by any other process. That is the situation of only the process slots, not the whole virtual memory; in particular the contents of the Large Memory Area are not shown. Why did I specify *RAM* DLLs?
Because those placed by the OEM in the ROM (= firmware) are executed directly there, without the process needing to load their "compiled" code into its Address Space (they're XIP DLLs, i.e. "Executed in Place", in Slot 1). That picture also shows that the green part (code + heap + stack) may exceed the DLL Load Point. Indeed, the problems related to lack of available virtual memory are usually of 2 types. That's also because in general one of the pieces of advice to avoid memory problems had always been to load all DLLs used by the application at application startup, through an explicit call to LoadLibrary(). Another visual example is the following: We'll later discuss in detail the particularities of NETCF, but it's worth noting at this point one detail: apart from the actual CLR DLLs (mscoree*.dll, netcfagl*.dll), every other assembly doesn't waste address space in the process slot, but is loaded into the Large Memory Area. Even more, if you are using the version of the runtime included in the ROM by the OEM, the runtime DLLs do not affect the process' virtual memory space either. Obviously it is different when the application P/Invokes native DLLs: these will be loaded in the process slot. Moreover, if you look at the picture showing all the alive processes, you'll notice that at the top of all the slots there's a portion of "blue" virtual memory, which is the same for all the processes. This is the memory reserved by the Operating System, whose size is equal to the sum of the binaries (the simple .EXE files) active at any given moment. So large monolithic EXEs (large, for example, because they contain "many" resources) are not recommended at all on Windows CE 5.0! And in general, supporting NETCF developers, I can say I've seen many "big" applications... that is not a good practice for this reason!
Through those pictures it is also easy to understand why the whole system's stability is a function of all active processes, and in particular it is easy to see that very often DEVICE.EXE can be a source of headaches! Think of those Windows Mobile-based devices that have the radio stack (i.e. the phone), Bluetooth, WiFi, Camera, barcode-scanner, etc... each of these drivers is a DLL that device.exe has to load (blue line), and each can also create its own stack and heap (green line). Some OEMs allowed developers to programmatically disable some drivers (to reduce the pressure exerted by device.exe), but obviously we cannot take for granted, for example, that a user manually restarts that feature (or that this is done by another application...). So, what has been done to fight device.exe's power? In many cases, the driver-DLLs were loaded by services.exe, which is the host process for Service-DLLs on Windows CE. But very often it was not enough... What Windows Mobile 6.1 introduced is that native DLLs with size > 64KB are typically loaded into the so-called slots 60 and 61, which are part of the Large Memory Area. Another improvement in Windows Mobile 6.1 was to dedicate another slot (slot 59) to the driver stacks (part of the green line of device.exe). Of course, this means that memory-mapped files now have less space available (and I have recently handled a request about exactly this, coming from a software company that was developing GPS navigation software that could not load some map files on WinMo 6.1), but in general the whole operating system has gained a stability that it didn't have before... To conclude, the tool I mentioned was developed by Symbol and I don't think it's publicly available. But a similar tool has recently been published on CodePlex (source code included!) through the article Visualizing the Windows Mobile Virtual Memory Monster. The term "Virtual Memory Monster" was invented years ago by Reed Robison (part 1 and part 2).
I've already been using it in a couple of requests and highly recommend it!

TROUBLESHOOTING MEMORY LEAKS FOR NETCF APPLICATIONS

Instead of explaining how things work in theory, which is a task I leave to more authoritative sources like the documentation itself and various blogs (one for all, that of Abhinaba Basu, who is precisely the GC Guru inside the NETCF Dev Team: Back to basic: Series on dynamic memory management), I'd like to follow the troubleshooting flow I run through when a new Service Request arrives about, for example, the following issues: Firstly, we must determine whether the problem is specific to an OEM. The best approach, when possible, is to verify whether the error occurs even on the emulators contained in the various Windows Mobile SDKs. If not, the help that Microsoft Technical Support can provide is limited, as it is possible that the error is due to a customization of the Windows Mobile platform by the OEM. In this case, it may be helpful to know what I wrote about device.exe above. Another initial step, in the case of NETCF v2 SP2 applications, is to check whether just running the application on NETCF v3.5 gives any improvement. There is no need to recompile the application with Visual Studio 2008: just like for .NET Desktop applications, add an XML configuration file in the same folder that contains TheApplication.exe, named TheApplication.exe.config, whose content is simply (as I mentioned here):

<configuration>
  <startup>
    <supportedRuntime version="v3.5.*"/>
  </startup>
</configuration>

So, after having considered possible "trivial" causes, you can proceed to the analysis... Historically NETCF developers haven't had an easy time troubleshooting due to the lack of appropriate tools (unlike their Desktop cousins!) but over the years Microsoft has released tools that have gradually evolved into the current Power Toys for .NET Compact Framework 3.5.
Apart from these you must know the (freeware!) EQATEC tools (Tracer and Profiler) and, recently, a tool on CodeProject that I mentioned earlier, which displays the status of virtual memory (VirtualMemory, with source code). Regarding the power-toys, when you are dealing with a problem around memory, 2 of them are of great help: the "CLR Profiler" and the "Remote Performance Monitor (RPM)". The first one makes problems with objects' allocations almost immediately evident, allowing you to notice the problem in a visual way. Info on how to use it is available through The CLR Profiler for the .Net Compact Framework Series Index. The second one provides, both in real time and through an analysis a posteriori, counters about the usage of MANAGED memory; also, through the "GC Heap Viewer" it allows you not only to study the exact content of the managed heap, but also to compare the contents of the heap at different moments, in order to bring out a possible unexpected growth of a certain type of objects. Some images are available in Finding Managed Memory leaks using the .Net CF Remote Performance Monitor, which is useful also to get an idea of which counters are available, while a list and related explanations are provided in Monitoring Application Performance on the .NET Compact Framework - Table of Contents and Index. What I'd like to do here is not to repeat the same explanations, already detailed in the links above, but share some practical experience... For example, in the vast majority of cases I have handled about memory leaks, the problem was due to Forms (or Controls) that unexpectedly were NOT removed by the Garbage Collector. The instances of the Form classes of the application are therefore the first thing to check through the Remote Performance Monitor and GC Heap Viewer. For this reason, where appropriate (e.g.
if the total number of forms is "not so many"), to avoid memory problems with NETCF applications it may be useful to adopt the so-called "Singleton Pattern": this way a single managed instance of a given form will exist throughout the application's life cycle. So, suppose we are in the following situation: I used the Remote Performance Monitor and saved different .GCLOG files during normal use of the application, and thanks to the GC Heap Viewer I noticed that an unexpected number of forms stays in memory, and also that this number increases during the life of the application, although there have been a certain number of Garbage Collections. Why is the memory of a Form not cleaned up by the garbage collector? Thanks to the GC Heap Viewer you can know exactly who maintains a reference to what, in the "Root View" in the right pane. Obviously, knowing the application's architecture will help in identifying unexpected roots. A special consideration must be made for MODAL Forms in .NET (the dialogs, those that on Windows Mobile have the close button "Ok" instead of "X", and which permit a developer to prevent the user from returning to the previous form). In many cases I have handled, the problem was simply due to the fact that the code was not invoking .Close() (or .Dispose()) after .ShowDialog():

Form2 f2 = new Form2();
f2.ShowDialog();
f2.Close();

Why should it matter? Because often (not always; for example, not when you expect a DialogResult) on Windows Mobile the user clicks on 'Ok' in the top right to "close" the dialog. As on the Desktop, when a dialog is "closed" in this way the window is not closed, but "hidden"! And it could happen that the code creates a new instance of the form, without removing the old one from memory. It's documented in "Form.ShowDialog Method" (the doc talks about "X", but for Windows Mobile this of course refers to the 'Ok' mentioned above): [...].
Anyway, we have assumed so far that the memory leak is MANAGED, but in reality it may be that the leak is in the NATIVE resources that are used by a .NET instance, which have not been successfully released by implementing the so-called "IDisposable Pattern". And around this there are some peculiarities in NETCF that Desktop developers don't need to worry about, particularly with respect to SQL Compact objects and "graphical" objects, i.e. classes of the System.Drawing namespace. In NETCF the Font, Image, Bitmap, Pen and Brush objects are simple wrappers around their native resources, which in Windows CE-based operating systems are handled by the GWES (Graphics, Windowing and Event Subsystem). What does this mean? It means that in their own .Dispose() they effectively release their native resources, and therefore one *must invoke .Dispose() for Drawing objects* (or invoke methods that indirectly call it, for example .Clear() in ImageList.ImageCollection, which does not itself have a .Dispose()). Note that among the counters provided by the Remote Performance Monitor, the category "Windows.Forms" indeed contains counters for these objects. Note that I'm not talking only about objects directly "born" as Brush, Pen, etc. I'm talking also about those objects whose properties contain graphic objects, such as a PictureBox or an ImageList (or, indirectly, the ImageList of a ToolBar). So, when you close a form, remember to:

this.ImageList1.Images.Clear();
this.ToolBar1.ImageList.Images.Clear();
this.PictureBox1.Image.Dispose();
// etc...

Finally, still about Forms, a simple technique I have often used to identify possible problems with items not properly released at the closing of a form has been to emulate user interaction by "automatically" opening and closing the form.
I'm purely talking about test code that opens and closes the form in a loop. After running the loop N times, the Remote Performance Monitor will be of considerable help to see what is going wrong... A final note before concluding this paragraph. It may be that an application is complex enough to require "a lot of" virtual memory. This would not be a problem, as long as there is room for the "green" lines in my previous post. But requiring "a lot of" memory means that the Garbage Collector will kick in more frequently, thus impacting general application performance (because the GC must first "freeze" the threads in a safe state). The point is that if the application is so complex as to require too-frequent garbage collections (so that performance may not be acceptable to end-users), then it might be worthwhile to split the application into 2 parts, for example one acting as a memory watchdog and another for the user interface. This comes at the cost of an additional process slot, but often it is a price that can be paid. Or, since managed DLLs are loaded in the Large Memory Area without wasting precious process address space, an idea would be to place all classes, even those of the forms, not in the EXE but in DLLs! A simple yet very effective idea, which Rob Tiffany has discussed in his post MemMaker for the .NET Compact Framework. Enjoy! ~raffaele
http://blogs.msdn.com/b/raffael/archive/2009/11.aspx
Hey a friend introduced me to Project Euler and I'm having fun solving their problems. I'm a real newbie and have only been programming for 5 days now, so please explain everything like you would to a child. If you don't know what Project Euler is, it's math problems solved mainly with programming (google it). Just done Problem 3:

The prime factors of 13195 are 5, 7, 13 and 29. What is the largest prime factor of the number 600851475143?

Solved it, but there are some changes I would like to make to improve it, which I need your help for. First I would like to add x % 2 into the condition for the second FOR command, i.e. replace y <= x with y <= x % 2. I thought I could simply replace the term, but that didn't work; I also tried creating a new integer and setting it equal to x % 2 after the first IF command, but that didn't work either. Secondly, at the moment the output shows the prime factors once and the non-prime factors several times. I would like the code to show only the prime factors, or even better the highest prime factor. I think I can do that with an ELSE command if I can get the x % 2 to work, but how would you do it? Thirdly, I am using Code::Blocks 8.02 as recommended in the tutorials; the output only seems to show a certain number of lines, after which it deletes the top number to make room for the last number. How can I remove this limit and show all outputs? Finally, this is my first time using tags; I have read the sticky, so apologies if I get this wrong.

Code:
#include <iostream>
using namespace std;

int main()
{
    long long int num = 600851475143LL;
    for ( int x = 1; x < num; x++ )
        if ( num % x == 0 )
            for ( int y = 2; y <= x; y++ )
                if ( x % y == 0 )
                    cout << x << endl;
}
http://cboard.cprogramming.com/cplusplus-programming/110097-project-euler-solved-but-want-help-improving-code-newbie-printable-thread.html
I'm trying to use TemplateLookup from Mako, but can't seem to get it to work. Layout of the test site is:

/var/www
    main.py
    templates/
        index.html

Nginx's config is set up as:

location / {; }

Cherrypy's config has:

[global]
server.socket_port = 8080
server.thread_pool = 10
engine.autoreload_on = False
tools.sessions.on = True

A simple cherrypy setup in main.py seems to work fine.

import cherrypy

class Main:
    @cherrypy.expose
    def index(self):
        return 'Hello'

cherrypy.tree.mount(Main(), '/', config='config')

Now, if I modify this to use Mako's template lookup, I get a 500 error. I know it has something to do with serving static files, but I've tried over a dozen different configurations according to the cherrypy wiki, and none of them work. Here's the bare setup I have for the templates:

import cherrypy
from mako.template import Template
from mako.lookup import TemplateLookup

templates = TemplateLookup(directories=['templates'], output_encoding='utf-8')

class Main:
    @cherrypy.expose
    def index(self):
        return templates.get_template('index.html').render(msg='hello')

cherrypy.tree.mount(Main(), '/', config='config')

Does anyone know how I can get this to work?

I was just doing the same with Django.. Took me a while to figure out I needed to install flup to run the application for fastcgi.. You probably have the same problem as me..

15 days ago
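I can't be sure this is the cause of the 500 here, but a frequent gotcha with `TemplateLookup(directories=['templates'])` is that the relative path is resolved against the process's current working directory, which is usually not `/var/www` when the app is launched by an init script or fastcgi supervisor. A minimal sketch of the usual fix (the constant names are mine; in `main.py` you would typically derive the root from `__file__`):

```python
import os

# Hypothetical sketch: build an absolute path to the templates directory
# so the lookup does not depend on the working directory the fastcgi
# supervisor happens to start the process in.
APP_ROOT = '/var/www'  # in main.py you could derive this from __file__
TEMPLATE_DIR = os.path.join(APP_ROOT, 'templates')

# then:
# templates = TemplateLookup(directories=[TEMPLATE_DIR],
#                            output_encoding='utf-8')
```

With a relative directory, `get_template('index.html')` fails on every request as soon as the cwd differs from the layout above, which shows up as exactly this kind of blanket 500.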
http://serverfault.com/questions/200155/setting-up-mako-with-cherrypy-on-nginx-through-fastcgi
package require StateManager

set state [StateManager %AUTO% ?options?]
$state method ?parameters?
set singleton [StateManagerSingleton %AUTO% ?options?]

While packaged with the ReadoutGUI this is actually a general purpose utility that provides support for Tcl scripts to save and restore state variables. A state variable can be pretty much anything that might define the state of a program or control how a program operates. State is saved to and restored from Tcl scripts that consist entirely of set commands. These scripts are sourced into a safe interpreter in order to ensure they cannot damage or inject insecure code into the application itself. Only pre-declared state variables will be saved or restored from the file, further securing the application script from malicious or erroneous restores. Note that if an application has several independent components that wish to share a single configuration file, the StateManagerSingleton can be used to provide access to an application-specific singleton state manager object.

StateManager objects include the standard configure and cget methods. These operate on the object option(s) described below.

-file file-path
Provides the path to the file that will be used by save and restore operations. See METHODS below.

In addition to the configure and cget methods described in OPTIONS above, the following methods are provided by StateManager objects.

destroy
Destroys the object.

addStateVariable name getter setter
Defines a state variable that will be saved/restored by the state manager. name is the name of the variable as it will be defined in the file (e.g. set name value). getter is a command to which name will be appended that will be used by save to obtain the variable value. setter is a command which will be called by restore to restore the value of name. name and value will be appended to the setter command. If this business of getters and setters is not clear, see save and restore and finally the EXAMPLES section below.
listStateVariables
Returns a list of the state variables. The return value is a Tcl list of triplets. Each triplet consists of a variable name, its getter and its setter, in that order.

save
Saves the variables to -file. If the -file option is blank an error is thrown. The save operates by iterating over all registered variables and, for each one, writing a command of the form set name value.

restore
Creates a secure slave interpreter and sources -file into that interpreter. For each variable in the list of state variables, if the slave interpreter has a definition for that variable, the setter for that variable is called with the variable's name and value appended.

The example below shows how to define two state variables ::State::var1 and ::State::var2 and their associated getter/setter procs.

Example 1. Getters and setters for StateManager

namespace eval ::State {
    variable var1
    variable var2
}
...
proc ::State::getter name {
    return [set ::State::$name]
}

proc ::State::setter {name value} {
    set ::State::$name $value
}

set sm [StateManagerSingleton %AUTO%]
$sm addStateVariable var1 ::State::getter ::State::setter
$sm addStateVariable var2 ::State::getter ::State::setter
...
$sm configure -file /path/to/configuration/file.tcl
$sm save
...
$sm restore

The example defines two state variables, var1 and var2, which will be saved and restored. var1 and var2 are registered with getters and setters defined as described above, so that each variable is bound to the corresponding variable in the ::State namespace. It's worth noting that more interesting setter and getter functions are possible.
For example, a setter could load a piece of a graphical user interface, and a getter could retrieve a value from an element of a graphical user interface. The ReadoutShell does this in a few places.

-file must be configured to point at a file (or specify a writable file for save) that is used as the target for the save or the source for the restore. -file can be freely configured many times. For example, your application might prompt the user for a filename into which some configuration information can be written/read.

save writes ::State::var1 and ::State::var2 to the last configured -file. This is done by invoking the getter registered for each of those variables (and any other variables that were added, for that matter) in turn, passing in var1 and var2 to retrieve their values.

restore sources -file into a safe slave interpreter and queries that interpreter to see if each registered variable is defined. For each defined variable, the value is fetched out of the interpreter and that variable's setter is invoked to update whatever in the application is bound to that configuration variable.
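For readers outside Tcl, the getter/setter indirection and the "file of set commands" format can be sketched in another language. The following is an illustrative Python analogue, not part of the StateManager package: it writes one set line per registered variable and, instead of sourcing the file into a safe interpreter, parses each line itself with ast.literal_eval so no arbitrary code can run, and it likewise only restores pre-declared variables.

```python
import ast

class StateManager:
    """Illustrative analogue of the Tcl StateManager (not the real package)."""

    def __init__(self):
        self._vars = {}  # name -> (getter, setter)

    def add_state_variable(self, name, getter, setter):
        self._vars[name] = (getter, setter)

    def save(self, path):
        # one "set name value" line per registered variable
        with open(path, 'w') as f:
            for name, (getter, _setter) in self._vars.items():
                f.write('set %s %r\n' % (name, getter(name)))

    def restore(self, path):
        with open(path) as f:
            for line in f:
                parts = line.split(None, 2)
                if len(parts) == 3 and parts[0] == 'set':
                    name, value = parts[1], ast.literal_eval(parts[2])
                    if name in self._vars:  # only pre-declared variables
                        self._vars[name][1](name, value)
```

As in the Tcl example, the getters and setters bind each state variable to application storage, here a plain dict via closures.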
http://docs.nscl.msu.edu/daq/newsite/nscldaq-11.0/r53966.html
Norman Khine wrote:
> thanks denis,
>
> On Tue, Feb 2, 2010 at 9:30 AM, spir <denis.spir at free.fr> wrote:
>> On Mon, 1 Feb 2010 16:30:02 +0100
>> Norman Khine <norman at khine.net> wrote:
>>
>>> On Mon, Feb 1, 2010 at 1:19 PM, Kent Johnson <kent37 at tds.net> wrote:
>>>> On Mon, Feb 1, 2010 at 6:29 AM, Norman Khine <norman at khine.net> wrote:
>>>>> thanks, what about the whitespace problem?
>>>> \s* will match any amount of whitespace including newlines.
>>> thank you, this worked well.
>>>
>>> here is the code:
>>>
>>> ###
>>> import re
>>> file = open('producers_google_map_code.txt', 'r')
>>> data = repr( file.read().decode('utf-8') )
>>>
>>> block = re.compile(r"""openInfoWindowHtml\(.*?\\ticon: myIcon\\n""")
>>> b = block.findall(data)
>>> block_list = []
>>> for html in b:
>>>     namespace = {}
>>>     t = re.compile(r"""<strong>(.*)<\/strong>""")
>>>     title = t.findall(html)
>>>     for item in title:
>>>         namespace['title'] = item
>>>     u = re.compile(r"""a href=\"\/(.*)\">En savoir plus""")
>>>     url = u.findall(html)
>>>     for item in url:
>>>         namespace['url'] = item
>>>     g = re.compile(r"""GLatLng\((\-?\d+\.\d*)\,\\n\s*(\-?\d+\.\d*)\)""")
>>>     lat = g.findall(html)
>>>     for item in lat:
>>>         namespace['LatLng'] = item
>>>     block_list.append(namespace)
>>> ###
>>>
>>> can this be made better?
>>
>> The 3 regex patterns are constants: they can be put out of the loop.
>>
>> You may also rename b to blocks, and find a more accurate name for block_list; eg block_records, where record = set of (named) fields.
>>
>> A short desc and/or example of the overall and partial data formats can greatly help later review, since regex patterns alone are hard to decode.
> here are the changes:
>
> import re
> file = open('producers_google_map_code.txt', 'r')
> data = repr( file.read().decode('utf-8') )
>
> get_record = re.compile(r"""openInfoWindowHtml\(.*?\\ticon: myIcon\\n""")
> get_title = re.compile(r"""<strong>(.*)<\/strong>""")
> get_url = re.compile(r"""a href=\"\/(.*)\">En savoir plus""")
> get_latlng = re.compile(r"""GLatLng\((\-?\d+\.\d*)\,\\n\s*(\-?\d+\.\d*)\)""")
>
> records = get_record.findall(data)
> block_record = []
> for record in records:
>     namespace = {}
>     titles = get_title.findall(record)
>     for title in titles:
>         namespace['title'] = title
>     urls = get_url.findall(record)
>     for url in urls:
>         namespace['url'] = url
>     latlngs = get_latlng.findall(record)
>     for latlng in latlngs:
>         namespace['latlng'] = latlng
>     block_record.append(namespace)
>
> print block_record
>
>> The def of "namespace" would be clearer imo in a single line:
>> namespace = {title: t, url: url, lat: g}
>
> i am not sure how this will fit into the code!
>
>> This also reveals a kind of name confusion, doesn't it?
>>
>> Denis

Your variable 'file' is hiding a built-in name for the file type. No harm in this example, but it's a bad habit to get into.

What did you intend to happen if the number of titles, urls, and latlngs is not exactly one each? As you have it now, if there's more than one, you spend time adding them all to the dictionary, but only the last one survives. And if there aren't any, you don't make an entry in the dictionary.
If that's the exact behavior you want, then you could replace the loop with an if statement: (untested)

if titles:
    namespace['title'] = titles[-1]

On the other hand, if you want a None in your dictionary for missing information, then something like: (untested)

for record in records:
    titles = get_title.findall(record)
    title = titles[-1] if titles else None
    urls = get_url.findall(record)
    url = urls[-1] if urls else None
    latlngs = get_latlng.findall(record)
    latlng = latlngs[-1] if latlngs else None
    block_record.append( {'title': title, 'url': url, 'latlng': latlng} )

DaveA
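To make the "last match or None" suggestion above concrete, here is a runnable sketch of the per-record parsing. The regexes are simplified stand-ins for the ones in Norman's script (his operate on a repr()'d string), and the function name is mine:

```python
import re

# Simplified stand-ins for the patterns discussed in the thread.
GET_TITLE = re.compile(r"<strong>(.*?)</strong>")
GET_URL = re.compile(r'a href="/(.*?)">En savoir plus')

def parse_record(record):
    """Return a dict per record; missing fields come back as None."""
    titles = GET_TITLE.findall(record)
    urls = GET_URL.findall(record)
    return {'title': titles[-1] if titles else None,
            'url': urls[-1] if urls else None}
```

Compiling the patterns once at module level also addresses Denis's point about moving the constants out of the loop.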
https://mail.python.org/pipermail/tutor/2010-February/074152.html
In Chapter 2, we looked at the basic I/O system calls in Linux. These calls form not only the basis of file I/O, but also the foundation of virtually all communication on Linux. In Chapter 3, we looked at how user-space buffering is often needed on top of the basic I/O system calls, and we studied a specific user-space buffering solution, C's standard I/O library. In this chapter, we'll look at the more advanced I/O system calls that Linux provides:

Scatter/gather I/O
Allows a single call to read or write data to and from many buffers at once; useful for bunching together fields of different data structures to form one I/O transaction.

Epoll
Improves on the poll() and select() system calls described in Chapter 2; useful when hundreds of file descriptors have to be polled in a single program.

Memory-mapped I/O
Maps a file into memory, allowing file I/O to occur via simple memory manipulation; useful for certain patterns of I/O.

File advice
Allows a process to provide hints to the kernel on its usage scenarios; can result in improved I/O performance.

Asynchronous I/O
Allows a process to issue I/O requests without waiting for them to complete; useful for juggling heavy I/O workloads without the use of threads.

The chapter will conclude with a discussion of performance considerations and the kernel's I/O subsystems.

Scatter/Gather I/O

Scatter/gather I/O is a method of input and output where a single system call writes to a vector of buffers from a single data stream, or, alternatively, reads into a vector of buffers from a single data stream. This type of I/O is so named because the data is scattered into or gathered from the given vector of buffers. An alternative name for this approach to input and output is vectored I/O. In comparison, the standard read and write system calls that we covered in Chapter 2 provide linear I/O.
Scatter/gather I/O provides several advantages over linear I/O methods:

More natural handling
If your data is naturally segmented—say, the fields of a predefined header file—vectored I/O allows for intuitive manipulation.

Efficiency
A single vectored I/O operation can replace multiple linear I/O operations.

Performance
In addition to a reduction in the number of issued system calls, a vectored I/O implementation can provide improved performance over a linear I/O implementation via internal optimizations.

Atomicity
Unlike with multiple linear I/O operations, a process can execute a single vectored I/O operation with no risk of interleaving of an operation from another process.

Both a more natural I/O method and atomicity are achievable without a scatter/gather I/O mechanism. A process can concatenate the disjoint vectors into a single buffer before writing, and decompose the returned buffer into multiple vectors after reading—that is, a user-space application can perform the scattering and the gathering manually. Such a solution, however, is neither efficient nor fun to implement.

readv( ) and writev( )

POSIX 1003.1-2001 defines, and Linux implements, a pair of system calls that implement scatter/gather I/O. The Linux implementation satisfies all of the goals listed in the previous section. The readv() function reads count segments from the file descriptor fd into the buffers described by iov:

#include <sys/uio.h>

ssize_t readv (int fd, const struct iovec *iov, int count);

The writev() function writes at most count segments from the buffers described by iov into the file descriptor fd:

#include <sys/uio.h>

ssize_t writev (int fd, const struct iovec *iov, int count);

The readv() and writev() functions behave the same as read() and write(), respectively, except that multiple buffers are read from or written to.
Each iovec structure describes an independent disjoint buffer, which is called a segment:

#include <sys/uio.h>

struct iovec {
    void *iov_base;   /* pointer to start of buffer */
    size_t iov_len;   /* size of buffer in bytes */
};

A set of segments is called a vector. Each segment in the vector describes the address and length of a buffer in memory to or from which data should be written or read. The readv() function fills each buffer of iov_len bytes completely before proceeding to the next buffer. The writev() function always writes out all full iov_len bytes before proceeding to the next buffer. Both functions always operate on the segments in order, starting with iov[0], then iov[1], and so on, through iov[count-1].

Return values

On success, readv() and writev() return the number of bytes read or written, respectively. This number should be the sum of all count iov_len values. On error, the system calls return -1, and set errno as appropriate. These system calls can experience any of the errors of the read() and write() system calls, and will, upon receiving such errors, set the same errno codes. In addition, the standards define two other error situations. First, because the return type is an ssize_t, if the sum of all count iov_len values is greater than SSIZE_MAX, no data will be transferred, -1 will be returned, and errno will be set to EINVAL. Second, POSIX dictates that count must be larger than zero, and less than or equal to IOV_MAX, which is defined in <limits.h>. In Linux, IOV_MAX is currently 1024. If count is 0, the system calls return 0. If count is greater than IOV_MAX, no data is transferred, the calls return -1, and errno is set to EINVAL.

Optimizing the Count

During a vectored I/O operation, the Linux kernel must allocate internal data structures to represent each segment. Normally, this allocation would occur dynamically, based on the size of count.
As an optimization, however, the Linux kernel creates a small array of segments on the stack that it uses if count is sufficiently small, negating the need to dynamically allocate the segments, and thereby providing a small boost in performance. This threshold is currently eight, so if count is less than or equal to 8, the vectored I/O operation occurs in a very memory-efficient manner off of the process' kernel stack.

Most likely, you won't have a choice about how many segments you need to transfer at once in a given vectored I/O operation. If you are flexible, however, and are debating over a small value, choosing a value of eight or less definitely improves efficiency.

writev() example

Let's consider a simple example that writes out a vector of three segments, each containing a string of a different size. This self-contained program is complete enough to demonstrate writev(), yet simple enough to serve as a useful code snippet:

#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/uio.h>

int main ()
{
        struct iovec iov[3];
        ssize_t nr;
        int fd, i;

        char *buf[] = {
                "The term buccaneer comes from the word boucan.\n",
                "A boucan is a wooden frame used for cooking meat.\n",
                "Buccaneer is the West Indies name for a pirate.\n" };

        /* O_CREAT requires a mode argument */
        fd = open ("buccaneer.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd == -1) {
                perror ("open");
                return 1;
        }

        /* fill out three iovec structures */
        for (i = 0; i < 3; i++) {
                iov[i].iov_base = buf[i];
                iov[i].iov_len = strlen (buf[i]);
        }

        /* with a single call, write them all out */
        nr = writev (fd, iov, 3);
        if (nr == -1) {
                perror ("writev");
                return 1;
        }
        printf ("wrote %zd bytes\n", nr);

        if (close (fd)) {
                perror ("close");
                return 1;
        }

        return 0;
}

Running the program produces the desired result:

$ ./writev
wrote 148 bytes

As does reading the file:

$ cat buccaneer.txt
The term buccaneer comes from the word boucan.
A boucan is a wooden frame used for cooking meat.
Buccaneer is the West Indies name for a pirate.

readv() example

Now, let's consider an example program that uses the readv() system call to read from the previously generated text file using vectored I/O. This self-contained example is likewise simple yet complete:

#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/uio.h>

int main ()
{
        char foo[48], bar[51], baz[49];
        struct iovec iov[3];
        ssize_t nr;
        int fd, i;

        fd = open ("buccaneer.txt", O_RDONLY);
        if (fd == -1) {
                perror ("open");
                return 1;
        }

        /* set up our iovec structures */
        iov[0].iov_base = foo;
        iov[0].iov_len = sizeof (foo);
        iov[1].iov_base = bar;
        iov[1].iov_len = sizeof (bar);
        iov[2].iov_base = baz;
        iov[2].iov_len = sizeof (baz);

        /* read into the structures with a single call */
        nr = readv (fd, iov, 3);
        if (nr == -1) {
                perror ("readv");
                return 1;
        }

        for (i = 0; i < 3; i++)
                printf ("%d: %s", i, (char *) iov[i].iov_base);

        if (close (fd)) {
                perror ("close");
                return 1;
        }

        return 0;
}

Running this program after running the previous program produces the following results:

$ ./readv
0: The term buccaneer comes from the word boucan.
1: A boucan is a wooden frame used for cooking meat.
2: Buccaneer is the West Indies name for a pirate.

A naïve implementation of readv() and writev() could be done in user space as a simple loop, something similar to the following:

#include <unistd.h>
#include <sys/uio.h>

ssize_t naive_writev (int fd, const struct iovec *iov, int count)
{
        ssize_t ret = 0;
        int i;

        for (i = 0; i < count; i++) {
                ssize_t nr;

                nr = write (fd, iov[i].iov_base, iov[i].iov_len);
                if (nr == -1) {
                        ret = -1;
                        break;
                }
                ret += nr;
        }

        return ret;
}

Thankfully, this is not the Linux implementation: Linux implements readv() and writev() as system calls, and internally performs scatter/gather I/O. In fact, all I/O inside the Linux kernel is vectored; read() and write() are implemented as vectored I/O with a vector of only one segment.
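The text shows only the write half of the naive user-space approach. By symmetry, a naive readv() might be sketched as below. This is an illustration of the non-atomic fallback, not how Linux implements the call, and the short-read handling is an assumption about sensible behavior:

```c
#include <assert.h>
#include <string.h>
#include <sys/uio.h>
#include <unistd.h>

/* Naive user-space counterpart to naive_writev(): one read() per
 * segment.  Unlike the real readv(), the reads are not atomic with
 * respect to other readers of fd. */
ssize_t naive_readv (int fd, const struct iovec *iov, int count)
{
        ssize_t ret = 0;
        int i;

        for (i = 0; i < count; i++) {
                ssize_t nr;

                nr = read (fd, iov[i].iov_base, iov[i].iov_len);
                if (nr == -1) {
                        ret = -1;
                        break;
                }
                ret += nr;
                if ((size_t) nr < iov[i].iov_len)
                        break;  /* short read: EOF or no more data */
        }

        return ret;
}
```

Between any two of these read() calls, another process sharing the file description could consume data, which is exactly the interleaving risk the real system call avoids.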
http://www.devshed.com/c/a/braindump/advanced-file-io/3/
CC-MAIN-2017-34
refinedweb
1,653
62.78
Sure, this article should have been published mid-March to play off of the title. It would have been nice to include my favorite recipe for Caesar Salad dressing. It didn't happen. If you want, I can bring in an excuse note from my mom. Some of you sent in your favorite Integrated Development Environments (IDEs) and I tried them all. I expect that many of you will have much to add to this discussion. Please use the comments at the end of the article to highlight features or weaknesses in the various IDEs. Until mid-December I never used an IDE except as a teaching tool. I always used a text editor and command line tools for my real work. When I taught, I used the environment the school supported. For those schools determined to standardize on an IDE I even pushed the ones I liked the best. For the most part, the students used Windows boxes for development. For the past four years, I've most often recommended JBuilder for schools. Borland (), the makers of JBuilder, has a very generous license for academic organizations and offers a solid tool that works well on different platforms. I do sometimes use it at home and, for the most part, really like what they've done with it. It works well on Mac OS X and includes features not available in other IDEs. I've enjoyed using JBuilder and recognize their commitment and contributions to Java on the Mac. That being said, I now use IDEA from JetBrains (formerly IntelliJ) for my daily work. There are no RAD tools and it isn't as slick as some of the other offerings. They've done nothing to provide support for a Mac look and feel. On the other hand, the support for coding is darn near perfect. IDEA helps me write code. It thinks of things before I do and lets me know what it's come up with in a nice way. It helps me write and refactor code. 
For example, once I've moved a couple of methods out of a class and no longer need a particular import statement, it provides me with a gentle visual reminder that I can remove that line of code. Their slogan is "Develop with Pleasure," and I think they help make that happen. In this article, I'll look at JBuilder, IDEA, and a few other IDEs. There's a bigger issue here that I'll address in the Pre-ramble: look at the number of choices you have for developing in Java on Mac OS X. Looking ahead, Apple holds their Worldwide Developers Conference (WWDC) next month in San Jose. The Web page promises that this year's WWDC looks to the "future of Mac OS X." Also, Apple announced that at that time we can expect a beta of JDK 1.4. If you aren't an Apple Developer Connection (ADC) member, at least sign up for the free subscription (). Although they've made no announcements about the details of their distribution plans for JDK 1.4 betas in the past, Apple has made the Java pre-releases available to all ADC members. As always, you can send me email about this column or with suggestions for future columns to DSteinberg@core.com, with the subject line "O'Reilly Mac Java." Apple ran two technical sessions and a BOF (Birds of a Feather) at last month's JavaOne Conference in San Francisco. The sessions were targeted more at those attendees who weren't familiar with what Apple is doing with Java on Mac OS X. Ted Goldstein, Apple's new "guy in charge of Java and other stuff," presented an overview of Java on Mac OS X. Goldstein's actual title is vice president, Development Tools. In addition to highlighting J2SE on Mac OS X, he listed J2EE certified offerings from Pramati and Lutris that run on the platform. JBoss and Orion also run well on a Mac. Goldstein told a story about how hours before this session he had approached the guys at the Trifork booth to challenge them to port their application to Mac OS X. Half an hour later the port was finished. 
For Goldstein, that's a big part of the Java story on the Mac: stuff just works.

An interesting thing happened at the JavaOne Mac BOF. Alan Samuel, Apple's Java Technology evangelist, and Allen Dennison, Apple's Java Product marketing manager, started off the session by answering the inevitable questions before they were even asked. They said Java 1.4 was on the way and although Apple has already been working on the 1.4 release, the focus was to deliver a solid 1.3.1 version of Java on the platform. Audience members then asked about this feature or that. One after another the questions were either "when are we going to see" a given technology or "when are you going to fix" some problem with another technology. This isn't unusual. Sun's Swing team had a similar BOF the next night. When developers are able to meet with the engineers responsible for a technology, they tend to ask these types of questions. Alan, Allen, and the engineers present answered all of the questions as they were asked.

And then the interesting thing happened. Up until this point Goldstein had been quietly standing against a wall, way off to the side. At this point he asked if there was anything about Java on Mac OS X that people liked. The mood in the room completely changed. Longtime Mac developers raised their hands to make comments about how the issues being raised were, by and large, minor and that in general they were very pleased with the performance of Java on the Mac. It was as if the family of Mac developers in the audience realized that they were in front of company. The company may not know the context of these public complaints about flaws in Apple's support of Java. Goldstein's comment was able to redirect the feeling of the room because Alan, Allen, and the engineers weren't dodging the questions being asked. By the end of the BOF, audience members were again asking about plans for supporting their pet features.
After that BOF I checked with various friends who had worked with Goldstein before and asked if people had made the mistake of underestimating him. They all smiled and answered, "Not twice." Much of the reason that Java on the Mac is so solid is due to Goldstein's predecessor Steve Naroff, and the quality of the engineers working on the project. The great news is that Naroff is still at Apple. It's early, but I think we're lucky to have Goldstein working at Apple as the guy in charge of Java. With his ties to Sun, I hope that Goldstein can help Sun see how valuable Apple is to the Java story. The 1.4 support issue is not a small one. The argument on one side is that Sun has only gone final with 1.4 in February so Apple isn't that far behind. Apple has been promising to narrow the gap in Java releases to 60-90 days after Sun's final release for more than four years now. Dennison said the reason the delay was a little longer this time was that Apple's Java engineers have been working on improving the performance of 1.3.1. The 1.3.x release was the first version of Java on Mac OS X and Dennison points out that many of the improvements being made to 1.3.1 will pay off as Apple moves forward to 1.4. The argument on the other side of the 1.4 support issue is that developers have had access to 1.4 beta releases for over a year now. If Apple wants programmers to develop on Mac OS X then, many argue, Apple needs to supply these pre-release versions in addition to the final releases. As a practical matter, Apple has answered, it can't provide the engineers to work on a moving target. In discussions in online forums developers have questioned whether the programmers asking for 1.4 support really need 1.4. There have been instances where engineers have said they need particular functionality. Often they have been given workarounds; in other cases they've been told there is no alternative and they'll have to wait for 1.4. 
The underlying concern seems to be with the future of Java on the Mac. The question of "When will version xxx be available for the Mac?" has been asked since the first release of Java. At that time, Mac users would have to wait more than a year for the latest release of Java on the platform. Consider that Mac OS X just became the default operating system a few months ago and that no version of Java 2 (that's versions 1.2, 1.3, and 1.4) is supported on the classic Mac platform. With Mac OS X the default OS, Apple can deliver the latest versions of Java through a software update to both developers and end users. As the install base of classic Mac OS wanes, the Java on the Mac story gets stronger.

Both IDEA and JBuilder are top choices for the Java developer on Mac OS X. I still think that a new iMac -- with a second keyboard and second mouse, loaded up with IDEA -- is a perfect choice for Extreme Programming, but I also like a lot of what JBuilder offers. If you're looking for tools that are free as in beer, Apple provides Project Builder and you can also use NetBeans or JEdit. JEdit isn't a full-featured IDE, but it does have many of the features you use most of all. Couple it with an AppleScript that manages your other tasks (see last month's column) or learn to work with Ant, and you have all that you need. Heck, you might have all that you need with Emacs, vi, TextEdit, or BBEdit.

The traditional favorite of Mac developers, Metrowerks' CodeWarrior, is still the IDE of choice for Mac users coding in C or C++. Metrowerks says that most of the ports from Classic Mac to Mac OS X were done with CodeWarrior. Unfortunately, the Java tools are still lacking. To be fair to Metrowerks, they are a release behind everyone else. It may not be fair to compare their release from more than a year ago with the more recent releases of other vendors. CodeWarrior 8 is scheduled to be released this summer.

IDEA is available from IntelliJ Software.
The current version is 2.5.2, although you can get a look at betas of version 3.0 at. The list price is just under $400, but they are now running a half-price Easter sale for personal licensing. They also make a version available to academia for $99.

To run IDEA, open up a terminal window, navigate inside of the bin directory, and run the shell script idea.sh. You should see something like this:

[screenshot: IDEA from IntelliJ Software]

The first thing you'll notice is that it doesn't look very Mac-like. IDEA actually runs much better on my Mac box than on my Windows box but, except for the title bar, the IDEA window looks very much like a Windows application. The menu bar is attached to the window and not to the top of the screen in the standard Mac position. I'm less rabid about this than others. I think that in an IDE it's easy enough to use the menu bar in the same window that you're working in.

The third line of code includes an import statement that isn't needed yet. At the right side of the screen you can see a yellow box that signals that something superfluous is in the code. If I click on the yellow stripe, it highlights the offending code and provides me with the message "Import statement is redundant." I can, of course, choose to ignore the warning and the code will compile and run without any problems. If there were an actual syntax error, the box would have been red with red horizontal lines that would take me to any line of code found to contain an error. As with compiler errors, fix the first one first and see which others are taken care of. A dropped bracket could render a lot of code non-compilable.

A nice feature of IDEA is that you can configure the help to be presented in a way that works with your coding style. If you don't want to see warnings, you can turn them off. Suppose you introduced an instance variable of type JFrame.
IDEA can be configured to add the import statement import javax.swing.JFrame or import javax.swing.*. It can also just insert javax.swing before JFrame where you've typed it in the source file. You set these defaults so that you aren't annoyed by the help you are getting from the IDE. I initially ignored much of the help that IDEA provided. The more that I work with the IDE and trust it, the more I've come to benefit from its help.

One of my favorite tools is the support for refactoring. Click on the Refactor menu item and you'll find support for some of your favorite refactorings. You can also refactor classes by highlighting a class in the left project window. In that context you can move or rename classes. When you do, the relevant names are changed and imports are altered. You can also extract the interface from a class or create a superclass from a given class. While you're looking around the project hierarchy, you'll notice there's also support for CVS as well as Ant.

I don't tend to use RAD tools. If you do, then IDEA may not be the tool for you. On the other hand, when it comes to programming, I like it more and more each day. As an aside, as a teacher IDEA is the best tool for me to examine student code. I can use options such as "Find Usages" to see where methods are called and to quickly navigate their code and figure out what they're doing. IDEA also makes it easy to create my own templates to make frequently used coding patterns (such as unit tests) easy to.
http://www.macdevcenter.com/pub/a/mac/2002/04/16/osx_java.html?page=1
/*	$NetBSD: extern.c,v 1.8 2003/08/07 09:37:20 agc Exp $	*/

#include <sys/cdefs.h>
#ifndef lint
#if 0
static char sccsid[] = "@(#)extern.c	8.1 (Berkeley) 5/31/93";
#else
__RCSID("$NetBSD: extern.c,v 1.8 2003/08/07 09:37:20 agc Exp $");
#endif
#endif /* not lint */

#include "hangman.h"

bool Guessed[26];
char Word[BUFSIZ], Known[BUFSIZ];

const char *const Noose_pict[] = {
	"     ______",
	"     |    |",
	"     |",
	"     |",
	"     |",
	"     |",
	"   __|_____",
	"   |      |___",
	"   |_________|",
	NULL
};

int Errors, Wordnum = 0;
unsigned int Minlen = MINLEN;
double Average = 0.0;

const ERR_POS Err_pos[MAXERRS] = {
	{2, 10, 'O'},
	{3, 10, '|'},
	{4, 10, '|'},
	{5, 9, '/'},
	{3, 9, '/'},
	{3, 11, '\\'},
	{5, 11, '\\'}
};

const char *Dict_name = _PATH_DICT;
FILE *Dict = NULL;
off_t Dict_size;
http://cvsweb.netbsd.org/bsdweb.cgi/src/games/hangman/extern.c?rev=1.8&content-type=text/x-cvsweb-markup&hideattic=0&sortby=author&only_with_tag=bouyer-socketcan
name_attach()

Register a name in the namespace and create a channel

Synopsis:

#include <sys/iofunc.h>
#include <sys/dispatch.h>

name_attach_t * name_attach( dispatch_t * dpp,
                             const char * path,
                             unsigned flags );

Arguments:

- dpp - NULL, or a dispatch handle returned by a successful call to dispatch_create() or dispatch_create_channel().
- path - The path that you want to register under /dev/name/[local|global]/. This name shouldn't contain any path components consisting of .. or start with a leading slash (/).
- flags - Flags that affect the function's behavior:
  - NAME_FLAG_ATTACH_GLOBAL — attach the name globally instead of locally.

Library:

libc

Use the -l c option to qcc to link against this library. This library is usually included automatically.

Description:

The name_attach(), name_close(), name_detach(), and name_open() functions provide the basic pathname-to-server-connection mapping, without having to become a full resource manager.

If you've already created a dispatch structure, pass it in as the dpp. If you provide your own dpp, set flags to NAME_FLAG_DETACH_SAVEDPP when calling name_detach(); otherwise, your dpp is detached and destroyed automatically. If you choose to pass NULL as the dpp, name_attach() calls dispatch_create() and resmgr_attach() internally to create a channel; however, it doesn't set any channel flags by itself. The created channel will have the _NTO_CHF_DISCONNECT, _NTO_CHF_COID_DISCONNECT, and _NTO_CHF_UNBLOCK flags set.

The name_attach() function puts the name path into the path namespace under /dev/name/[local|global]/path. The name is attached locally by default, or globally when you set NAME_FLAG_ATTACH_GLOBAL in the flags. You can see attached names in the /dev/name/local and /dev/name/global directories.

The application that calls name_attach() receives an _IO_CONNECT message when name_open() is called. The application has to handle this message properly, with a reply of EOK, to allow name_open() to connect.
If the receive buffer that the server provides isn't large enough to hold a pulse, then MsgReceive() returns -1 with errno set to EFAULT.

name_attach_t

The name_attach() function returns a pointer to a name_attach_t structure that looks like this:

typedef struct _name_attach {
    dispatch_t* dpp;
    int chid;
    int mntid;
    int zero[2];
} name_attach_t;

The members include:

- dpp - The dispatch handle used in the creation of this connection.
- chid - The channel ID used for MsgReceive() directly.
- mntid - The mount ID for this name.

The information that's generally required by a server using these services is the chid.

Returns:

A pointer to a filled-in name_attach_t structure, or NULL if the call fails (errno is set).

Errors:

- EBUSY - An error occurred when the function tried to create a channel; see ChannelCreate().
- EEXIST - The specified path already exists.
- EINVAL - An argument was invalid. For example, the path argument was NULL, the path was empty, it started with a leading slash (/), or it contained .. components.
- ENOMEM - There wasn't enough free memory to complete the operation.
- ENOTDIR - A component of the pathname wasn't a directory entry.

Examples:

#include <stdio.h>
#include <errno.h>
#include <stdlib.h>
#include <string.h>
#include <sys/dispatch.h>

#define ATTACH_POINT "myname"

/* We specify the header as being at least a pulse */
typedef struct _pulse msg_header_t;

/* Our real data comes after the header */
typedef struct _my_data {
    msg_header_t hdr;
    int data;
} my_data_t;

/*** Server Side of the code ***/
int server() {
    name_attach_t *attach;
    my_data_t msg;
    int rcvid;

    /* Create a local name (/dev/name/local/...)
*/
    if ((attach = name_attach(NULL, ATTACH_POINT, 0)) == NULL) {
        return EXIT_FAILURE;
    }

    /* Do your MsgReceive's here now with the chid */
    while (1) {
        rcvid = MsgReceive(attach->chid, &msg, sizeof(msg), NULL);

        if (rcvid == -1) { /* Error condition, exit */
            break;
        }

        if (rcvid == 0) { /* Pulse received */
            switch (msg.hdr.code) {
            case _PULSE_CODE_DISCONNECT:
                /*
                 * A client disconnected all its connections (called
                 * name_close() for each name_open() of our name) or
                 * terminated
                 */
                ConnectDetach(msg.hdr.scoid);
                break;
            case _PULSE_CODE_UNBLOCK:
                /*
                 * REPLY blocked client wants to unblock (was hit by
                 * a signal or timed out).  It's up to you if you
                 * reply now or later.
                 */
                break;
            default:
                /*
                 * A pulse sent by one of your processes or a
                 * _PULSE_CODE_COIDDEATH or _PULSE_CODE_THREADDEATH
                 * from the kernel?
                 */
                break;
            }
            continue;
        }

        /* name_open() sends a connect message, must EOK this */
        if (msg.hdr.type == _IO_CONNECT) {
            MsgReply(rcvid, EOK, NULL, 0);
            continue;
        }

        /* Some other QNX IO message was received; reject it */
        if (msg.hdr.type > _IO_BASE && msg.hdr.type <= _IO_MAX) {
            MsgError(rcvid, ENOSYS);
            continue;
        }

        /* A message (presumably ours) received; handle it */
        printf("Server receive %d \n", msg.data);
        MsgReply(rcvid, EOK, 0, 0);
    }

    /* Remove the name from the space */
    name_detach(attach, 0);

    return EXIT_SUCCESS;
}

/*** Client Side of the code ***/
int client() {
    my_data_t msg;
    int server_coid;

    if ((server_coid = name_open(ATTACH_POINT, 0)) == -1) {
        return EXIT_FAILURE;
    }

    /* We would have pre-defined data to stuff here */
    msg.hdr.type = 0x00;
    msg.hdr.subtype = 0x00;

    /* Do whatever work you wanted with server connection */
    for (msg.data = 0; msg.data < 5; msg.data++) {
        printf("Client sending %d \n", msg.data);
        if (MsgSend(server_coid, &msg, sizeof(msg), NULL, 0) == -1) {
            break;
        }
    }

    /* Close the connection */
    name_close(server_coid);
    return EXIT_SUCCESS;
}

int main(int argc, char **argv) {
    int ret;

    if (argc < 2) {
        printf("Usage %s -s | -c \n", argv[0]);
        ret = EXIT_FAILURE;
    } else
if (strcmp(argv[1], "-c") == 0) {
        printf("Running Client ... \n");
        ret = client();   /* see name_open() for this code */
    } else if (strcmp(argv[1], "-s") == 0) {
        printf("Running Server ... \n");
        ret = server();   /* see name_attach() for this code */
    } else {
        printf("Usage %s -s | -c \n", argv[0]);
        ret = EXIT_FAILURE;
    }

    return ret;
}

Classification:

Caveats:

As a server, you shouldn't assume that you're doing a MsgReceive() on a clean channel. In QNX Neutrino (and QNX 4), anyone can create a random message and send it to a process or a channel. We recommend that you do the following to assure that you're playing safely with others in the system:

#include <sys/neutrino.h>

/* All of your messages should start with this header */
typedef struct _pulse msg_header_t;

/* Now your real data comes after this */
typedef struct _my_data {
    msg_header_t hdr;
    int data;
} my_data_t;

where:

- hdr - Contains a type/subtype field as the first 4 bytes. This allows you to identify data which isn't destined for your server.
- data - Specifies the receive data structure. The structure must be large enough to contain at least a pulse (which conveniently starts with the type/subtype field of most normal messages), because you'll receive a disconnect pulse when clients are detached.
http://developer.blackberry.com/native/reference/bb10/com.qnx.doc.neutrino.lib_ref/topic/n/name_attach.html
Hello, I need to access uvw coordinates outside of the 0-1 boundary, in a ShaderData. In UV Edit mode, I scaled the plane's uvw to go beyond 1,1, but the coordinates passed to ShaderData.Output() are always normalized to 1,1. How can I access the actual uvs I need? Maybe some trick on the material tag? Some external renderers use this workflow for UDIM textures, so it looks possible. My plugin is in C++, but I made a simple test plugin and it has the same issue...

Py-ShaderUvw.pyp

def Output(self, sh, cd):
    c = c4d.Vector(cd.p.x, cd.p.y, 0)
    return c

Here's how the UVW is set up; the viewport uvws are repeating...

Confirming that the UVW tag contains coordinates up to 2.0...

And the actual render is even stranger, it makes no sense...
That's exactly what you can see for example with the noise shader. Except this shader have a special mode (HQ noise) so they can be displayed properly. You can also see that with the brick shader. (HQ Noise doesn't help in that case) @m_magalhaes Ok, is it going to be fixed on the bug report or is that a limitation we have to live with? the bug report have been defined as a limitation. It doesn't mean that it will never be fixed, but not now. And I'm still convinced that I had the same problem a while ago and managed to draw a custom preview via Draw() method. (There should be a BL entry) But still, I don't know if I remember right. I don't see how we could do it with the Draw Method in a efficient way.
https://plugincafe.maxon.net/topic/13215/uvw-coordinates-in-shaderdata-output
CC-MAIN-2021-49
refinedweb
514
76.22
Hi,

Another release of the Compaq Hotplug PCI driver is available against 2.4.10-pre12 at:

A full changelog is at:

Changes since the last release:
- forward ported to 2.4.10-pre12
- cleaned up the portions of the patch that touched the pci core kernel code. The patch against those files is now smaller and less intrusive.
- pci core only exports symbols needed by the hotplug pci drivers if CONFIG_HOTPLUG is enabled
- Compaq driver cleanups, with more global symbols removed, and a common namespace for the driver.
- Compaq controller specific /proc interface has been moved to the proper /proc/drivers location.
- lots of testing with different pci device types.

Again, the old Compaq tool will not work with this version of the driver. An updated version must be downloaded from the cvs tree at sf.net at:

The current generic hotplug_pci interface is based on a controller model. This will be changed to a slot based model, which will enable the userspace interface code to be much cleaner, and models what the pci hotplug spec recommends. Any comments on this are appreciated.

thanks,

greg k-h
https://lkml.org/lkml/2001/9/20/197
A 2D convolution class. More...

#include <pcl/2d/convolution.h>

A 2D convolution class.

Definition at line 62 of file convolution.h.

Extra pixels are added to the input image so that convolution can be performed over the entire image: (kernel_height/2) rows are added before the first row and after the last row, and (kernel_width/2) columns are added before the first column and after the last column. The border options define what values are set for these extra rows and columns.

Assume that the three rows at the right edge of the image look like this:

.. 3 2 1
.. 6 5 4
.. 9 8 7

BOUNDARY_OPTION_CLAMP: the extra pixels are set to the pixel value of the boundary pixel. This option makes it seem as if it were:

.. 3 2 1| 1 1 1 ..
.. 6 5 4| 4 4 4 ..
.. 9 8 7| 7 7 7 ..

BOUNDARY_OPTION_MIRROR: the input image is mirrored at the boundary. This option makes it seem as if it were:

.. 3 2 1| 1 2 3 ..
.. 6 5 4| 4 5 6 ..
.. 9 8 7| 7 8 9 ..

BOUNDARY_OPTION_ZERO_PADDING: the extra pixels are simply set to 0. This option makes it seem as if it were:

.. 3 2 1| 0 0 0 ..
.. 6 5 4| 0 0 0 ..
.. 9 8 7| 0 0 0 ..

Note that the input image is not actually extended in size. Instead, based on these options, the convolution is performed differently at the border pixels.

Definition at line 100 of file convolution.h.
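The three border options boil down to how an out-of-range index is remapped before sampling. The helper below is not part of the PCL API; it is a hypothetical sketch of the index arithmetic implied by the diagrams above, returning -1 to mean "substitute the value 0" in the zero-padding case:

```cpp
#include <algorithm>
#include <cassert>

enum BorderOption { CLAMP, MIRROR, ZERO_PADDING };

// Map a possibly out-of-range index i on an image axis of size n to the
// index actually sampled.  Returns -1 for ZERO_PADDING to signal "use 0".
// MIRROR assumes the overshoot is less than n, which holds whenever the
// kernel is smaller than the image (the only sensible case).
int border_index (int i, int n, BorderOption opt)
{
  if (i >= 0 && i < n)
    return i;                               // interior pixel: no remapping
  switch (opt)
  {
    case CLAMP:                             // repeat the edge pixel
      return std::min (std::max (i, 0), n - 1);
    case MIRROR:                            // reflect across the edge
      return (i < 0) ? (-1 - i) : (2 * n - 1 - i);
    default:                                // ZERO_PADDING
      return -1;
  }
}
```

Because the input image is never actually enlarged, a remapping of this kind is all the implementation needs to evaluate the kernel at border pixels.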
http://docs.pointclouds.org/trunk/classpcl_1_1_convolution.html
CC-MAIN-2018-09
refinedweb
327
69.99
Informa is a relatively new open source Java API for parsing RSS files available from. The Informa project was the result of merging two Java-based aggregator services: HotSheet and Risotto. This article aims to show how you can use the Informa API to quickly access RSS feeds to add some dynamic news and information content to your web sites.

To begin with, we'll take a quick look at an RSS example. It's a very simple format (hence the name Really Simple Syndication), but for those who would like a more in-depth introduction to RSS, you could do far worse than checking out O'Reilly's RSS site, or reading Mark Pilgrim's very good overview. Here's the example:

<?xml version="1.0"?>

<!-- The version of RSS we are using -->
<rss version="0.91">

  <!-- Information about our channel -->
  <channel>
    <title>Random News</title>
    <link></link>
    <description>
      Random news from the random news website!
    </description>
    <language>en-us</language>
    <copyright>Copyright: (C) 2003 Random News.com</copyright>

    <image>
      <title>Random News Logo</title>
      <url></url>
      <link></link>
    </image>

    <item>
      <title>News piece one</title>
      <link></link>
    </item>

    <item>
      <title>News piece two</title>
      <link></link>
    </item>
  </channel>
</rss>

This is using version 0.91 of the RSS specification. You need a channel that describes the source of the information we are getting. There will be one channel per XML document. Without going into too much detail of this format, this is how you describe a channel:

<channel>
  <title>Random News</title>
  <link></link>
  <description>
    Random news from the random news website!
  </description>
  <language>en-us</language>
  <copyright>Copyright: (C) 2003 Random News.com</copyright>
</channel>

The following defines an image provided by the site:

<image>
  <title>Random News Logo</title>
  <url></url>
  <link></link>
</image>

This is the real meat of the file. The <item> block gives us the title of a piece of information, a link to the original post, and optionally, a description of the post.
This is by no means all of the data that an RSS file may provide, but this is enough for our purposes.

<item>
  <title>News piece one</title>
  <link></link>
  <description>It's an article</description>
</item>

There are thousands of news sites and blogs out there with feeds available in this format. Just think -- instead of doing the normal morning check on Slashdot, Freshmeat, or wherever, what if their content was delivered straight to your own personal portal, or RSS aggregate service? Implementing such a solution is very simple -- in the rest of the article we'll look at how we can process this data and display it on our JSP pages.

Currently at version 0.3.0, Informa works perfectly well at reading RSS versions 0.91, 0.92, 1.0, and 2.0. Let's have a quick look at its usage:

try {
    URL feed = new URL("file:/C:/samplefeed.rss");
    ChannelFormat format = FormatDetector.getFormat(feed);
    ChannelParserCollection parsers = ChannelParserCollection.getInstance();
    ChannelParserIF parser = parsers.getParser(format, feed);
    parser.setBuilder(new ChannelBuilder());
    ChannelIF channel = parser.parse();
    for (Iterator iter = channel.getItems().iterator(); iter.hasNext();) {
        ItemIF item = (ItemIF) iter.next();
        System.out.println(item.getTitle());
    }
} catch (MalformedURLException mue) {
    mue.printStackTrace();
} catch (UnsupportedFormatException ufe) {
    ufe.printStackTrace();
} catch (ParseException pe) {
    pe.printStackTrace();
}

This simple example gets the RSS feed and prints out the news items. This small piece of code will form the basis for much of what follows, so it's worth going over in detail. Begin by creating a URL object that will point to the feed to be loaded. We then use the handy FormatDetector method to determine which version of RSS the feed uses and get us the relevant parser.
URL feed = new URL("file:/C:/samplefeed.rss");
ChannelFormat format = FormatDetector.getFormat(feed);
ChannelParserCollection parsers = ChannelParserCollection.getInstance();

Next, we get the correct parser for our feed type (there is one per supported version of the RSS specification) and create a default builder object for the parser. In Informa, a builder object is responsible for the creation and storage of a feed. Currently in development is a Hibernate Builder, which will allow database persistence of a feed. Here, the default ChannelBuilder is used, which simply creates an in-memory feed.

ChannelParserIF parser = parsers.getParser(format, feed);
parser.setBuilder(new ChannelBuilder());

Finally, we parse the document to create a bean representing an RSS channel.

ChannelIF channel = parser.parse();

Now, we could embed this code directly into our JSP code as a scriptlet, but this is not best practice. Instead, we are going to produce a reusable custom tag that will allow us to display any named feed. Let's start by looking at how our tag will look in our JSP page when requesting a feed from the BBC:

<%@ taglib

Pretty simple -- we have a tag with one required attribute that names the feed.
Now let's look at the code:

public class SimpleRssFeedTag extends TagSupport {

    private String uri;

    public String getUri() {
        return uri;
    }

    public void setUri(String uri) {
        this.uri = uri;
    }

    public int doEndTag() throws JspException {
        JspWriter out = pageContext.getOut();
        try {
            URL feed = new URL(getUri());
            ChannelParserCollection parsers = ChannelParserCollection.getInstance();
            ChannelFormat format = FormatDetector.getFormat(feed);
            ChannelParserIF parser = parsers.getParser(format, feed);
            parser.setBuilder(new ChannelBuilder());
            ChannelIF channel = parser.parse();
            out.print("<b>" + channel.getTitle() + "</b><br />");
            for (Iterator iter = channel.getItems().iterator(); iter.hasNext();) {
                ItemIF item = (ItemIF) iter.next();
                out.print("<a href=\"" + item.getLink() + "\">");
                out.println(item.getTitle() + "</a><br />");
            }
        } catch (MalformedURLException mue) {
            throw new JspException(mue);
        } catch (UnsupportedFormatException ufe) {
            throw new JspException(ufe);
        } catch (ParseException pe) {
            throw new JspException(pe);
        } catch (IOException e) {
            throw new JspException(e);
        }
        return EVAL_PAGE;
    }
}

This time, rather than printing the titles and links to the command prompt, we are formatting our links and titles as HTML. Let's look at an example page where we are requesting a couple of feeds, say, OnJava.com and Java.sun.com's technology highlights. Use this tag in a JSP page as follows:

<%@ taglib
</td>
<td>
<rss:simpleRssFeed
</td>
</tr>
</table>

And the result:

Figure 1. Our first RSS tag in action

We have managed to display the news items, and clicking on the links will take you to the articles. Whenever the syndicates update their RSS files, your page will change too! As an exercise, consider limiting the number of posts or adding the display of a syndicate's logo to this basic tag. Currently, we are doing too much of the actual formatting of the display in the tag itself.
This is inconvenient, as it means that in order to change the formatting of the tag's results, we need to change the code. It would be much better if we could leave the mechanics of reading the feeds up to the tag, and have all of the formatting in the JSP. In order to achieve this, we need to allow the web designer to decide what parts of an RSS channel are required, and embed them in standard HTML.

The JSP Standard Tag Library (JSTL) introduced a simple Expression Language (EL), which allows us to quickly and easily access JavaBean properties at runtime. We are using the JSTL EL for accessing beans and displaying properties, which is fairly straightforward. For example, to print out the name property of a bean, we would do the following:

<c:out

Here, the bean is a JavaBean available in the page. The <c:out> tag is used to retrieve the returned value of the expression ${bean.name} and print it to the output stream. Our new custom tag is going to expose the RSS feed as a series of beans, and then use the JSTL EL to access and display its data. Let's look at an example use of our new tag:

<rss:readFeed
<strong><c:out</strong>
<ol>
<c:forEach
<li>
<a href="<c:out">
<c:out</a>
</li>
</c:forEach>
</ol>
</rss:readFeed>

The first tag, <rss:readFeed>, reads the feed's channel and loads it into the page scope as a bean named channel. The use of the ${channel.title} code gets the title property and displays it. Next, we use a standard <c:forEach> tag to iterate over the items property of the channel bean, using the JSTL EL to display each item's title and link. As you can see, all of the formatting is done by the JSP code itself -- here we create a series of HTML lists for each channel in a feed, but this could just as easily be a series of <div>s, table rows, or whatever. Surprisingly, this code isn't much more complicated than the original example.
Let's take a look:

public class RefinedRssFeedTag extends TagSupport {

    private static final ChannelBuilder DEFAULT_BUILDER = new ChannelBuilder();
    private static final ChannelParserCollection PARSERS = ChannelParserCollection.getInstance();

    private String uri;
    private String var;
    private ChannelIF channel;

    public String getVar() {
        return var;
    }

    public void setVar(String var) {
        this.var = var;
    }

    public String getUri() {
        return uri;
    }

    public void setUri(String uri) {
        this.uri = uri;
    }

    public int doStartTag() throws JspException {
        try {
            URL feed = new URL(getUri());
            ChannelFormat format = FormatDetector.getFormat(feed);
            ChannelParserIF parser = PARSERS.getParser(format, feed);
            parser.setBuilder(DEFAULT_BUILDER);
            channel = parser.parse();
            // store the channel in the page...
            pageContext.setAttribute(getVar(), channel);
        } catch (MalformedURLException mue) {
            throw new JspException(mue);
        } catch (UnsupportedFormatException ufe) {
            throw new JspException(ufe);
        } catch (ParseException pe) {
            throw new JspException(pe);
        }
        return EVAL_BODY_INCLUDE;
    }
}

The main work is done in the doStartTag method. We parse the RSS file specified in the uri attribute, and then we store it in the pageContext under the name specified by the var attribute (this is standard practice throughout the JSTL). This allows ${channel} to be used in the tag body. And that's pretty much it! Now let's use it to view a couple of feeds -- two of a computer programmer's best friends, Slashdot and Freshmeat.
<rss:readFeed uri=
<IMG src="<c:out">
<a href="<c:out">
<strong><c:out</strong></a>
<ol>
<c:forEach
<li><a href="<c:out">
<c:out</a></li>
</c:forEach>
</ol>
</rss:readFeed>

<rss:readFeed
<strong><c:out</strong><br />
<a href="${channel.location}">[Feed]</a><br />
<c:forEach
<a href="<c:out">
<c:out</a><br />
</c:forEach>
</rss:readFeed>

When reading the Slashdot feed, we format the title as a link right back to Slashdot itself, with the rest of the items formatted as a standard HTML ordered list. With Freshmeat, we know they don't provide an image, so we ignore that, but we also provide a URL link back to the source of the RSS itself, with the items' simple links separated by <br /> tags. The result can be seen below.

Figure 2. Example use of a more complex RSS Tag

I have shown with this article how you can quickly and simply create an RSS tag that should enable you to insert RSS feeds while keeping with your site's current design. In no way should this be considered the end of the story -- the RefinedRssFeedTag as presented here is far from perfect. Most importantly, no caching of the requested feeds is done, resulting in feeds being loaded and parsed every time the tag is run. Over the course of the next several articles, we will look at approaches that improve upon the solutions provided here, and will also look at other ways in which we can use RSS to enrich our software.

Sam Newman is a Java programmer. Check out his blog at magpiebrain.com.
http://today.java.net/lpt/a/18
Knowing When A Python Thread Has Died

A few months ago I had to solve a problem in PyMongo that is harder than it seems: how do you register for notifications when the current thread has died?

The circumstances are these: when you call start_request in PyMongo, it gets a socket from its pool and assigns the socket to the current thread. We need some way to know when the current thread dies so we can reclaim the socket and return it to the socket pool for future use, rather than wastefully allowing it to be closed. PyMongo can assume nothing about what kind of thread this is: It could've been started from the threading module, or the more primitive thread module, or it could've been started outside Python entirely, in C, as when PyMongo is running under mod_wsgi.

Here's what I came up with:

import threading
import weakref
from functools import partial  # needed for the partial() call in watch()

class ThreadWatcher(object):
    class Vigil(object):
        pass

    def __init__(self):
        self._refs = {}
        self._local = threading.local()

    def _on_death(self, vigil_id, callback, ref):
        self._refs.pop(vigil_id)
        callback()

    def watch(self, callback):
        if not self.is_watching():
            self._local.vigil = v = ThreadWatcher.Vigil()
            on_death = partial(
                self._on_death, id(v), callback)

            ref = weakref.ref(v, on_death)
            self._refs[id(v)] = ref

    def is_watching(self):
        "Is the current thread being watched?"
        return hasattr(self._local, 'vigil')

    def unwatch(self):
        try:
            v = self._local.vigil
            del self._local.vigil
            self._refs.pop(id(v))
        except AttributeError:
            pass

The key lines are in watch(). First, I make a weakref to a thread local. Weakrefs are permitted on subclasses of object but not object itself, so I use an inner class called Vigil. I initialize the weakref with a callback, which will be executed when the vigil is deleted. The callback only fires if the weakref outlives the vigil, so I keep the weakref alive by storing it as a value in the _refs dict.
The key into _refs can't be the vigil itself, since then the vigil would have a strong reference and wouldn't be deleted when the thread dies. I use id(vigil) instead.

Let's step through this. When a thread calls watch(), the only strong reference to the vigil is a thread-local. When a thread dies its locals are cleaned up, the vigil is dereferenced, and _on_death runs. _on_death cleans up _refs and then voilà, it runs the original callback.

When exactly is the vigil deleted? This is a subtle point, as the sages among you know. First, PyPy uses occasional mark-and-sweep garbage collection instead of reference-counting, so the vigil isn't deleted until some time after the thread dies. In unittests, I force the issue with gc.collect(). Second, there's a bug in CPython 2.6 and earlier, fixed by Antoine Pitrou in CPython 2.7.1, where thread locals aren't cleaned up until the thread dies and some other thread accesses the local. I wrote about this in detail last year when I was struggling with it. gc.collect() won't help in this case. Third, when is the local cleaned up in Python 2.7.1 and later? It happens as soon as the interpreter deletes the underlying PyThreadState, but that can actually come after Thread.join() returns -- join() is simply waiting for a Condition to be set at the end of the thread's run, which comes before the locals are cleared. So in Python 2.7.1 we need to sleep a few milliseconds after joining the thread to be certain it's truly gone.
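The weakref-callback mechanism the vigil relies on can be seen in isolation with a short sketch (the names here are illustrative, not PyMongo's). On CPython, reference counting fires the callback the instant the last strong reference is dropped; on PyPy it waits for a garbage-collection pass, which is exactly why the tests force the issue with gc.collect():

```python
import weakref

class Vigil(object):
    pass

fired = []

v = Vigil()
# The callback receives the now-dead weakref once the referent is collected.
r = weakref.ref(v, lambda ref: fired.append(True))

assert r() is v   # referent still alive, so the weakref resolves
del v             # drop the only strong reference

print(fired)      # on CPython: [True]
assert r() is None  # the weakref is now dead
```

ThreadWatcher is the same idea, with the thread-local holding the only strong reference in place of the local variable v.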
Thus a reliable test for my ThreadWatcher class might look like:

class TestWatch(unittest.TestCase):
    def test_watch(self):
        watcher = ThreadWatcher()
        callback_ran = [False]

        def callback():
            callback_ran[0] = True

        def target():
            watcher.watch(callback)

        t = threading.Thread(target=target)
        t.start()
        t.join()

        # Trigger collection in Py 2.6
        # See
        watcher.is_watching()
        gc.collect()

        # Cleanup can take a few ms in
        # Python >= 2.7
        for _ in range(10):
            if callback_ran[0]:
                break
            else:
                time.sleep(.1)

        assert callback_ran[0]

        # id(v) removed from _refs?
        assert not watcher._refs

The is_watching() call accesses the local object from the main thread after the child has died, working around the Python 2.6 bug, and the gc.collect() call makes the test pass in PyPy. The sleep loop gives Python 2.7.1 a chance to finish tearing down the thread state, including locals.

Two final cautions. The first is, you can't predict which thread runs the callback. In Python 2.6 it's whichever thread accesses the local after the child dies. In later versions, with Pitrou's improved thread-local implementation, the callback is run on the dying child thread. In PyPy it's whichever thread is active when the garbage collector decides to run.

The second caution is, there's an unreported memory-leak bug in Python 2.6, which Pitrou fixed in Python 2.7.1 along with the other bug I linked to. If you access a thread-local from within the weakref callback, you're touching the local in an inconsistent state, and the next object stored in the local will never be dereferenced. So don't do that.
Here's a demonstration:

class TestRefLeak(unittest.TestCase):
    def test_leak(self):
        watcher = ThreadWatcher()
        n_callbacks = [0]
        nthreads = 10

        def callback():
            # BAD, NO!:
            # Accessing thread-local in callback
            watcher.is_watching()
            n_callbacks[0] += 1

        def target():
            watcher.watch(callback)

        for _ in range(nthreads):
            t = threading.Thread(target=target)
            t.start()
            t.join()

        watcher.is_watching()
        gc.collect()

        for _ in range(10):
            if n_callbacks[0] == nthreads:
                break
            else:
                time.sleep(.1)

        self.assertEqual(nthreads, n_callbacks[0])

In Python 2.7.1 and later the test passes because all ten threads' locals are cleaned up, and the callback runs ten times. But in Python 2.6 only five locals are deleted. I discovered this bug when I rewrote the connection pool in PyMongo 2.2 and a user reported that in Python 2.6 and mod_wsgi, every second request leaked one socket! I fixed PyMongo in version 2.2.1 by avoiding accessing thread locals while they're being torn down. (See bug PYTHON-353.)

Update: I've discovered that in Python 2.7.0 and earlier, you need to lock around the assignment to self._local.vigil, see "Another Thing About Threadlocals".

For further reading:
- My whole gist for ThreadWatcher and its tests
- Pitrou's new thread-local implementation for Python 2.7.1
- PyMongo's thread utilities

Post-script: The image up top is a memento mori, a "reminder you will die," by Alessandro Casolani from the 16th Century. The memento mori genre is intended to offset a portrait subject's vanity -- you look good now, but your beauty won't make a difference when you face your final judgment. This was painted circa 1502 by Andrea Previtali:

The inscription is "Hic decor hec forma manet, hec lex omnibus unam," which my Latin-nerd friends translate as, "This beauty endures only in this form, this law is the same for everyone." It was painted upside-down on the back of this handsome guy:

The painting was mounted on an axle so the face and the skull could be rapidly alternated and compared.
Think about that the next time you start a thread—it may be running now, but soon enough it will terminate and even its thread-id will be recycled.
https://emptysqua.re/blog/knowing-when-a-python-thread-has-died/
Inject dynamically in for loop

Paul Grillo Apr 17, 2012 12:12 PM

Forgive me if this is a question with a simple solution, i'm trying to get my hands around all of the power of CDI. Our traditional products were based on home grown factories that were explicitly called, and i'm trying to put together a capability that replaces all that i have. At any rate, we have a situation in which we need to obtain a new instance of an object at runtime. Example:

// I would like Instance to be a container bean, most likely @Produced,
// and scoped @Dependent. Assume this producer exists
...
void populateLineItems(){
    List<Instance> aList = new ArrayList<Instance>();
    for (int i = 0; i < x; i++){
        // well, this is not what i want. I want the container to create it.
        // Moreover the producer knows what "type" of Instance to create.
        // @Inject doesn't seem to be available here, so how do i add the
        // Container created bean in the line above?
        aList.add(new Instance());
    }
}

While my homegrown factory could serve up the appropriate Instance, i wish them to be container/CDI created so they display the same capabilities as all of our beans, dependent or scoped. Thanks for any ideas, i'm hoping i just missed something that was flat obvious...

1. Re: Inject dynamically in for loop
Marko Lukša Apr 17, 2012 1:16 PM (in response to Paul Grillo)

Can you give an example of what the actual classes would be (instead of your generic "Instance"). You probably have some concrete classes in mind? The fact that you are storing Instances in a List gives me the feeling that you want to make managed beans out of objects that aren't really meant to be managed. Anyhow, you can obtain bean objects dynamically through BeanManager (see getBeans(), resolve() and getReference()).

2. Re: Inject dynamically in for loop
Paul Grillo Apr 17, 2012 1:56 PM (in response to Marko Lukša)

Well, that's the rub, for me. I put together basic capabilities common code so that many engineers in the company build separate products from it.
We generally provide these other applications the capability to specialize most classes. We try to abstract as much complication as we can from it. So traditionally, we have a base factory that builds "all" objects, and our products specialize that factory and override construction of their own objects. It's probably a bit more complicated, but that is the idea.

So, we have now introduced thin client (JSF/CDI...) capabilities and need to provide the same general functionality. We do generally have two types of beans, ManageXXXBean and XXXBean, in which XXXBean generally is bound to jsf/xhtml and ManageXXXBean orchestrates some of the events (process, save, etc).

So, to give you an idea: ManageOrderBean and OrderBean are scoped contextually. ManageOrderBean has a hypothetical method in there that might be called from a page to add another LineItem to the Order. The LineItem that gets created might be OrderLineItem or it could be a specialized one.

So, i'm guessing that your question is whether OrderLineItemBean should be "managed". Well, here's the problem for me. We have a few application and session scoped beans that hold information that all of our beans might need access to, and these are generally injected. So, i don't want our engineers to have to "know" or figure out what beans are "managed" and which are not. OrderLineItemBean needs access to an application bean that provides base stuff like productID (our application specific id, etc). So, i'm wanting OrderLineItemBean to be Managed and @Dependent on the container (OrderBean, in this case). Now our engineers can utilize the same "tools", injection or whatnot, regardless of what "bean" they are in...

So that's the long story. In the example below, i need the container to create the appropriate "instance" of OrderLineItemBean (might be specialized), could be injected, but also needs to be created at will.
So, essentially, my approach is to have a Factory that @Produces my beans, and the factory can have an alternate so that our products produce a special one where needed. However, i'm still left with ensuring that i can create a bean at will but have the container do it for me... i will look into BeanManager. Looks like an extension. getBean() returns a new one from the container if required? Thanks for your time.

@Named("manageOrderBean")
@ConversationScoped
public class ManageOrderBean {

    @Inject
    OrderBean orderBean;

    public void addNewOrderLineItem() {
        OrderLineItemBean oli = new OrderLineItemBean();
        orderBean.getLineItems().add(oli);
    }
}

@Named("orderBean")
@ConversationScoped
public class OrderBean {

    List<OrderLineItemBean> lineItems = new ArrayList<OrderLineItemBean>();

    public List<OrderLineItemBean> getLineItems(){
        return lineItems;
    }
}

public class OrderLineItemBean {
    ...
}

3. Re: Inject dynamically in for loop
Marko Lukša Apr 17, 2012 4:46 PM (in response to Paul Grillo)

CDI beans are singleton-ish (in the context of a scope), so you will not be able to create multiple OrderLineItemBeans simply by calling BeanManager's methods. But, you can create instances yourself and then perform injection into them through InjectionTarget. See
https://community.jboss.org/message/730639?tstart=0
If you're looking for something with which you can perform complete DB operations in your application without having to install any database server program such as MySQL, PostgreSQL, or Oracle, the Python sqlite3 module is for you.

Python SQLite

Python sqlite3 is an excellent module with which you can perform all possible DB operations with in-memory and persistent databases in your applications. This module implements the Python DB API interface to be a compliant solution for implementing SQL related operations in a program.

Using sqlite3 module

In this section, we will start using the sqlite3 module in our application so that we can create databases and tables inside it and perform various DB operations on it. Let's get started.

Python SQLite Create Database

When we talk about databases, we're looking at a single file which will be stored on the file system and whose access is managed by the module itself to prevent corruption when multiple users try to write to it. Here is a sample program which creates a new database before opening it for operations:

import os
import sqlite3

db_filename = 'journaldev.db'
db_exists = not os.path.exists(db_filename)

connection = sqlite3.connect(db_filename)

if db_exists:
    print('No schema exists.')
else:
    print('DB exists.')

connection.close()

We will run the program twice to check if it works correctly. Let's see the output for this program:

As expected, the second time we run the program, we see the output as DB exists.

Python SQLite Create Table

To start working with the database, we must define a table schema on which we will write our further queries and perform operations.
Here is the schema we will follow:

For the same schema, we will be writing the related SQL queries next, and these queries will be saved in book_schema.sql:

CREATE TABLE book (
    name text primary key,
    topic text,
    published date
);

CREATE TABLE chapter (
    id integer primary key autoincrement not null,
    name text,
    day_effort integer,
    book text not null references book(name)
);

Note that in SQLite, autoincrement is only valid on an integer primary key column.

Now let us use the connect() function to connect to the database and insert some initial data using the executescript() function:

import os
import sqlite3

db_filename = 'journaldev.db'
schema_filename = 'book_schema.sql'

db_exists = not os.path.exists(db_filename)

with sqlite3.connect(db_filename) as conn:
    if db_exists:
        print('Creating schema')
        with open(schema_filename, 'rt') as file:
            schema = file.read()
        conn.executescript(schema)

        print('Inserting initial data')
        conn.executescript("""
        insert into book (name, topic, published)
        values ('JournalDev', 'Java', '2011-01-01');

        insert into chapter (name, day_effort, book)
        values ('Java XML', 2, 'JournalDev');

        insert into chapter (name, day_effort, book)
        values ('Java Generics', 1, 'JournalDev');

        insert into chapter (name, day_effort, book)
        values ('Java Reflection', 3, 'JournalDev');
        """)
    else:
        print('DB already exists.')

When we execute the program and check what data is present in the chapter table, we will see the following output:

See how I was able to query the db file directly from the command line. We will be querying data from the sqlite3 module itself in the next section.
Python SQLite Cursor Select

Now, we will retrieve data in our script by using a Cursor to fetch all chapters which fulfil some criteria:

import sqlite3

db_filename = 'journaldev.db'

with sqlite3.connect(db_filename) as conn:
    cursor = conn.cursor()
    cursor.execute("""
    select id, name, day_effort, book from chapter
    where book = 'JournalDev'
    """)

    for row in cursor.fetchall():
        id, name, day_effort, book = row
        print('{:2d} ({}) {:2d} ({})'.format(
            id, name, day_effort, book))

Let's see the output for this program:

This was a simple example of fetching data from a table where one column matches a specific value.

Getting Metadata of Table

In our programs, it is also important to get metadata for a table for documentation purposes and much more:

import sqlite3

db_filename = 'journaldev.db'

with sqlite3.connect(db_filename) as connection:
    cursor = connection.cursor()
    cursor.execute("""
    select * from chapter where book = 'JournalDev'
    """)

    print('Chapter table has these columns:')
    for column_info in cursor.description:
        print(column_info)

Let's see the output for this program:

Because we didn't provide anything apart from the column names while creating the schema, most of the values are None.

Using Named Parameters

With named parameters, we can pass arguments to our scripts and hence to the SQL queries we write in our programs. Using named parameters is very easy; let's take a look at how we can do this:

import sqlite3
import sys

db_filename = 'journaldev.db'
book_name = sys.argv[1]

with sqlite3.connect(db_filename) as conn:
    cursor = conn.cursor()
    query = """
    select id, name, day_effort, book from chapter
    where book = :book_name
    """
    cursor.execute(query, {'book_name': book_name})

    for row in cursor.fetchall():
        id, name, day_effort, book = row
        print('{:2d} ({}) {:2d} ({})'.format(
            id, name, day_effort, book))

Let's see the output for this program:

See how easy it was to pass a named parameter and substitute it in the query right before we execute it.
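Besides the named :book_name style, sqlite3 also accepts positional "qmark" placeholders. Here is a minimal sketch using an in-memory database instead of the tutorial's journaldev.db file (the inserted row mirrors the tutorial's sample data):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('create table chapter (name text, day_effort integer, book text)')

# Each ? is filled positionally from the parameter tuple.
conn.execute('insert into chapter (name, day_effort, book) values (?, ?, ?)',
             ('Java XML', 2, 'JournalDev'))

cursor = conn.execute('select name, day_effort from chapter where book = ?',
                      ('JournalDev',))
rows = cursor.fetchall()
print(rows)  # [('Java XML', 2)]
conn.close()
```

Both placeholder styles let the driver escape the values for you, unlike building the query with string formatting, which is open to SQL injection.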
Python SQLite3 Transaction Management

Transactions are a feature relational databases are known for. The sqlite3 module is completely capable of managing the internal state of a transaction; the only thing we need to do is let it know that a transaction is going to happen. Here is a sample program which describes how we write transactions in our program by explicitly calling the commit() function:

import sqlite3

db_filename = 'journaldev.db'

def show_books(conn):
    cursor = conn.cursor()
    cursor.execute('select name, topic from book')
    for name, topic in cursor.fetchall():
        print(' ', name)

with sqlite3.connect(db_filename) as conn1:
    print('Before changes:')
    show_books(conn1)

    # Insert in one cursor
    cursor1 = conn1.cursor()
    cursor1.execute("""
    insert into book (name, topic, published)
    values ('Welcome Python', 'Python', '2013-01-01')
    """)

    print('\nAfter changes in conn1:')
    show_books(conn1)

    # Select from another connection, without committing first
    print('\nBefore commit:')
    with sqlite3.connect(db_filename) as conn2:
        show_books(conn2)

    # Commit, then select from another connection
    conn1.commit()
    print('\nAfter commit:')
    with sqlite3.connect(db_filename) as conn3:
        show_books(conn3)

Let's see the output for this program:

When the show_books(...) function is called before conn1 has been committed, the result depends on which connection is being used. As the changes were made through conn1, it sees them, but conn2 doesn't. Once we committed all the changes, all connections were able to see them, including conn3.

Conclusion

In this lesson, we studied the basics of the sqlite3 module in Python and committed transactions as well. When your program wants to work with some relational data, the sqlite3 module provides an easy way to deal with the data and obtain results across the life of the program as well.

Thank you for making this post, I found it very useful.
Could you make a write up on how to store images into the sqlite database preferably by means of letting users select their desired image file by using a filechooser let’s say with tkinter?
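In response to the comment above: images (or any binary file) can be kept in a BLOB column. Below is a minimal sketch with an in-memory database and made-up table and file names; a real application would read the bytes from a file the user picked, e.g. via a tkinter file dialog:

```python
import sqlite3

# Stand-in for the bytes of an image file chosen by the user
image_bytes = b'\x89PNG\r\n\x1a\nfake-image-data'

conn = sqlite3.connect(':memory:')
conn.execute('create table pictures (name text primary key, data blob)')

# sqlite3.Binary wraps raw bytes for storage in a blob column
conn.execute('insert into pictures (name, data) values (?, ?)',
             ('logo.png', sqlite3.Binary(image_bytes)))

row = conn.execute('select data from pictures where name = ?',
                   ('logo.png',)).fetchone()
restored = bytes(row[0])

assert restored == image_bytes  # the round-trip is byte-identical
conn.close()
```

Always pass the bytes through a placeholder as above; binary data cannot be safely spliced into the SQL string itself.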
https://www.journaldev.com/20515/python-sqlite-tutorial
Components and supplies

Apps and online services

About this project

Introduction

With the development of the civil engineering field, we can see a lot of constructions everywhere: metal structures, concrete beams and multi-platform buildings are some of them. Further, most of us are used to staying in a building or home during most of the day. But how can we be sure that a building is safe enough to stay in? What if there's a small crack or an over-inclined beam in your building? It would risk hundreds of lives. Earthquakes, soil hardness, tornadoes and many more things could be factors for internal cracks and the deviation of structures or beams from the neutral position. Most of the time we are not aware of the condition of the surrounding structures. Maybe the place we walk in every day has cracked concrete beams and can collapse at any time, but without knowing it we freely go inside. As a solution for this, we need a good method to monitor the concrete, wood and metal beams of constructions where we cannot reach.

Solution

"Structure Analyzer" is a portable device which can be mounted on a concrete beam, metal structure, slab etc. This device measures the angle and analyzes bends where it's mounted and sends the data to a mobile app through Bluetooth. This device uses an accelerometer/gyroscope to measure the angle in the x, y, z planes and flex sensors to monitor the bends. All raw data are processed and the information is sent to the mobile app.

Circuit

Collect the following components.

- Arduino 101 Board
- 2 X Flex sensors
- 2 X 10k Resistors

To reduce the number of components, the Arduino 101 board is used here, as it contains an accelerometer and a BLE module. Flex sensors are used to measure the amount of bending, as a flex sensor changes its resistance when bending. The circuit is a very small one, as only 2 resistors and 2 flex sensors need to be connected. The following diagram shows how to connect a flex sensor to the Arduino board.
One pin of the resistor is connected to the A0 pin of the Arduino board. Follow the same procedure to connect the second flex sensor, using the A1 pin to connect its resistor. Connect the buzzer directly to the D3 pin and Gnd pin.

Finishing the device

After making the circuit, it has to be fixed inside an enclosure. According to the above 3D model, the 2 flex sensors have to be placed at opposite sides of the enclosure. Make space for the USB port to program the board and supply the power. As this device needs to be used for a long period, the best method to supply power is using a fixed power pack.

Mobile App

Download and install Blynk from the Android Play Store. Start a new project for Arduino 101. Select the communication method as BLE. Add 1 terminal, 2 buttons and BLE to the interface. The following images show you how to make the interface.

Code files

After making the interface on Blynk you will receive an authorization code. Enter that code at the following place.

#include <EEPROM.h>
#include <SPI.h>

char auth[] = "**************"; //Blynk Authorization Code
WidgetTerminal terminal(V2);
BLEPeripheral blePeripheral;

In the calibration process, the current sensor readings are saved in the EEPROM.

values();
EEPROM.write(0,flx1);
EEPROM.write(1,flx2);
EEPROM.write(2,x);
EEPROM.write(3,y);
EEPROM.write(4,z);
terminal.print("Calibration Succesful");

After calibrating, the device will compare the deviation with the threshold values and beep the buzzer if they exceed the value.

Functionality

Stick the device on to the structure that needs to be monitored. Stick the 2 flex sensors as well. Supply power to the board using the USB cable. Open the Blynk interface. Connect with the device by touching the Bluetooth icon. Press the calibration button. After calibrating, the terminal will show a message as "Successfully Calibrated." Reset the device. Now it will monitor the structure and notify you through the buzzer if it deviates or deforms.
You can check the angle and bend values at any time by pressing the Status button. This might look like a small device, but its uses are priceless. Sometimes, with our busy schedules, we forget to check the condition of our home, office etc. But if there is a small problem, it might end badly. With this device, hundreds of lives can be saved by catching small yet dangerous problems in constructions.

Code

Arduino101Code (Arduino)

/*This code file is related to the "Structure Analyzer" which
 * is a device to maintain the standards and levels of many
 * types of structures.
 * Developed by Tharindu Suraj. 14/03/2017
 */
#define BLYNK_PRINT Serial

#define flex1 A0
#define flex2 A1   //Define flex sensor and buzzer pins
#define buzzer 3

#include "CurieIMU.h"
#include <BlynkSimpleCurieBLE.h>
#include <CurieBLE.h>
#include <Wire.h>
#include <EEPROM.h>
#include <SPI.h>

char auth[] = "**************"; //Blynk Authorization Code
WidgetTerminal terminal(V2);
BLEPeripheral blePeripheral;

int m_flx1,m_flx2,m_x,m_y,m_z; //values saved in memory
int flx1, flx2,x,y,z;          //Current readings

void values(){
  flx1 = flx2 = x = y = z = 0;          //Reset accumulators before averaging
  for(int i=0;i<100;i++){
    flx1 += analogRead(flex1);          //Accumulate raw readings from sensors
    flx2 += analogRead(flex2);
    x += CurieIMU.readAccelerometer(X_AXIS)/100;
    y += CurieIMU.readAccelerometer(Y_AXIS)/100;
    z += CurieIMU.readAccelerometer(Z_AXIS)/100;
    delay(2);
  }
  flx1 = flx1/100;
  flx2 = flx2/100;
  x = x/100;                            //Get the average values of the readings
  y = y/100;
  z = z/100;
}

void setup(){
  //pinMode(3,OUTPUT);
  pinMode(flex1,INPUT);
  pinMode(flex2,INPUT);                 //Setting the sensor pin modes
  Serial.begin(9600);
  blePeripheral.setLocalName("Arduino101Blynk");
  blePeripheral.setDeviceName("Arduino101Blynk");
  blePeripheral.setAppearance(384);
  Blynk.begin(auth, blePeripheral);
  blePeripheral.begin();
  m_flx1 = EEPROM.read(0);
  m_flx2 = EEPROM.read(1);
  m_x = EEPROM.read(2);                 //Read pre saved sensor values from EEPROM
  m_y = EEPROM.read(3);
  m_z = EEPROM.read(4);
}

void loop(){
  Blynk.run();
  blePeripheral.poll();
  // (the deviation-vs-threshold check was lost from this copy of the listing)
  tone(buzzer, 0);
}

/*V0 indicates the calibration mode. In this mode the values of sensors
 * are saved in the EEPROM
 */
BLYNK_WRITE(V0){
  int pinValue = param.asInt();
  if (pinValue == 1){
    values();
    EEPROM.write(0,flx1);
    EEPROM.write(1,flx2);
    EEPROM.write(2,x);
    EEPROM.write(3,y);
    EEPROM.write(4,z);
    terminal.print("Calibration Succesful");
  }
}

/*We can request current deviation values
 * by pressing the button V1
 */
BLYNK_WRITE(V1){
  int pinValue = param.asInt();
  if (pinValue == 1){
    values();
    terminal.print("X angle deviation- ");
    terminal.print(abs(x-m_x));
    terminal.println();
    terminal.print("Y angle deviation- ");
    terminal.print(abs(y-m_y));
    terminal.println();
    terminal.print("Z angle deviation- ");
    terminal.print(abs(z-m_z));
    terminal.println();
    terminal.print("Flex 1 deviation- ");
    terminal.print(abs(flx1-m_flx1));
    terminal.println();
    terminal.print("Flex 2 deviation- ");
    terminal.print(abs(flx2-m_flx2));
    terminal.println();
  }
}

BLYNK_WRITE(V2){
}

Schematics

Author: Tharindu Suraj
Published on March 23, 2017
runipy 0.0.9

Programmatic use

It is also possible to run IPython notebooks from Python, using:

from runipy.notebook_runner import NotebookRunner
from IPython.nbformat.current import read

notebook = read(open("MyNotebook.ipynb"), 'json')
r = NotebookRunner(notebook)
r.run_notebook()

and you can enable pylab with:

r = NotebookRunner(notebook, pylab=True)

The notebook is stored in the object and can be saved using:

from IPython.nbformat.current import write

write(r.nb, open("MyOtherNotebook.ipynb", 'w'), 'json')

Credit

Portions of the code are based on code by Min RK. Thanks to Kyle Kelley, Nitin Madnani, George Titsworth, Thomas Robitaille, Andrey Tatarinov, Matthew Brett, Adam Haney, and Nathan Goldbaum for patches, documentation fixes, and suggestions.

Author: Paul Butler
Are they passed by reference or by value as the default?

Regarding functions, everything is passed by value in C++ unless you signify otherwise in the function signature (by passing via a reference or pointer, etc.)

aren't arrays passed by reference as default?

Originally posted by Korn1699: "aren't arrays passed by reference as default?"
Ah, yes, I should have thought to mention that. My mistake!

I guess if you wanna look at it like that, then yeah, arrays can be considered passed by reference, though I guess it's kind of debatable to call it that. The reason why arrays are in effect passed by reference is that the signature of passing an array of a certain length is just interpreted as meaning a pointer to the element type of the array. So when you pass an array using a function such as

void Foo( int array[5] );

you're really neither passing the array by value OR by reference, but rather you end up just passing the first element of the array by reference (which is a subtle difference).

void Foo( int (&array)[5] );

would be a function passing the array by reference. The difference comes down to the fact that the first example has no length data (nor data even saying you have to pass an array at all, despite its syntax).

void Foo( int array[5] );

is actually the same as

void Foo( int* array );

So you could even do

int a;
Foo( &a );

Anyways, I'm just rambling because I said something wrong and I don't wanna seem completely ignorant.

I should have mentioned array in the first reply.

Originally posted by Korn1699: "aren't arrays passed by reference as default?"
Strictly speaking arrays aren't passed at all. You can't actually pass an array (not directly, anyway).
When you pass an array as a function parameter, what gets passed is a pointer to the first element of the array. This is still essentially pass by value: the function receives a copy of a pointer to the first element. I make the distinction here because it's important to recognise that passing a pointer as a function parameter is still pass by value. A copy of the pointer gets passed, not the pointer itself. For example, consider the following code:

Code:
#include <iostream>
using namespace std;

void func(int* iPtr)
{
    iPtr++;
    iPtr++;
}

int main()
{
    int array[5] = {10,20,30,40,50};
    int* p = array;
    func(p);
    cout << *p << endl;
}

What value will be output by the above? 10? 12? 30? Some other value?

Not everybody will get the correct answer (it is 10: func increments only its own copy of the pointer, so p in main is unchanged), because not everybody appreciates that a copy of the pointer is passed, not the actual pointer. It's still pass by value, not pass by reference. When people talk about passing an array as a function parameter as being pass by reference, they're using the term loosely. No problem with that, as long as it's understood what is meant. In C all parameter passing is pass by value. In C++, it's pass by value unless you choose otherwise.
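The same "copy of the pointer" point can be illustrated in Python, where the object reference itself is passed by value: rebinding the parameter name has no effect on the caller, while mutating the referenced object does.

```python
def rebind(lst):
    lst = [99]        # rebinds the local name only; the caller's list is untouched

def mutate(lst):
    lst.append(99)    # mutates the object both names refer to; the caller sees it

values = [10, 20, 30]
rebind(values)        # values is still [10, 20, 30]
mutate(values)        # values is now [10, 20, 30, 99]
```

This mirrors the C++ example: `rebind` corresponds to incrementing the local copy of the pointer, `mutate` to writing through it.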
I've developed a WCF service which is hosted on my local server. I'm currently able to consume data from that service in my Xamarin.Forms application. My WCF service is basically some SQL queries against my local database. I have a ListView that is bound to an ObservableCollection that is updated from my WCF service.

My ListView:

<ListView.ItemTemplate>
    <ViewCell.View>
        <AbsoluteLayout>
            <Label Text="{Binding Class}" AbsoluteLayout.
        </AbsoluteLayout>
        <StackLayout Padding="5, 0, 0, 0" VerticalOptions="Center">
            <Label Text="{Binding AlarmText}" Font="Bold, Medium" />
            <StackLayout Orientation="Horizontal" Spacing="0">
                <Label Text="{Binding AlarmTime}"/>
            </StackLayout>
        </StackLayout>
    </StackLayout>
    </ViewCell.View>
    </ViewCell>
</DataTemplate>
</ListView.ItemTemplate>

My Alarm class:

public class Alarm
{
    public Alarm()
    {
    }

    public string Name { get; set; }
    public string Id { get; set; }
    public string Category { get; set; }
    public string Area { get; set; }
    public string Class { get; set; }
    public string Status { get; set; }
    public string AlarmTime { get; set; }
    public string NoOfAlarms { get; set; }
    public string AlarmText { get; set; }
    public string SigType { get; set; }
    public string ActionText { get; set; }
    public string Description { get; set; }
    public Color Color { get; set; }
}

My Globals class (where the static ObservableCollection that is the ItemsSource of the ListView is stored):

public class Globals
{
    public static string username;
    public static ObservableCollection<Alarm> alarms = new ObservableCollection<Alarm>();
    public static Color Orange = Color.FromRgb(255, 165, 0);

    public static void fillAlarmsList(List<String> alarmsStringList)
    {
        alarms.Clear();
        for (int i = 0; i < alarmsStringList.Count; i = i + 12)
        {
            Alarm alarm = new Alarm();
            alarm.Name = alarmsStringList.ElementAt(i);
            alarm.Id = alarmsStringList.ElementAt(i + 1);
            alarm.Category = alarmsStringList.ElementAt(i + 2);
            alarm.Area = alarmsStringList.ElementAt(i + 3);
            alarm.Class = alarmsStringList.ElementAt(i + 4);
            alarm.Status = alarmsStringList.ElementAt(i + 5);
            alarm.AlarmTime = alarmsStringList.ElementAt(i + 6);
            alarm.NoOfAlarms = alarmsStringList.ElementAt(i + 7);
            alarm.AlarmText = alarmsStringList.ElementAt(i + 8);
            alarm.SigType = alarmsStringList.ElementAt(i + 9);
            alarm.ActionText = alarmsStringList.ElementAt(i + 10);
            alarm.Description = alarmsStringList.ElementAt(i + 11);

            if (alarm.Class.Equals("A")) alarm.Color = Color.Red;
            else if (alarm.Class.Equals("B")) alarm.Color = Orange;
            else if (alarm.Class.Equals("C")) alarm.Color = Color.Yellow;

            alarms.Add(alarm);
        }
    }
}

I have a simple MainPage that just has a button whose Clicked handler calls my SQLClient.getAlarmsAsync(). The SQLClient.getAlarmsCompleted handler gets a List<String> alarmsData; I just call Globals.fillAlarmsList(alarmsData) and then change to the page that has my ListView. Up to this point there is no problem: my ListView is populated as it is supposed to be. By clicking on any element of the ListView, I go to a different ContentPage where I can remove that element from the list, and when I go back to the ContentPage, the ListView is updated as it is supposed to be. My problem is that when I am on the ListView page, I want the user to be able to refresh that list, so I implemented the Refreshing = "OnRefresh" handler.
My problem is that I can't refresh inside this function, because the WCF function that I call is an async function and it is handled in the EventHandler:

void OnRefresh(object sender, EventArgs e)
{
    /*var list = (ListView)sender;
    var itemList = alarms.Reverse().ToList();
    alarms.Clear();
    foreach (var s in itemList)
    {
        alarms.Add(s);
    }
    list.IsRefreshing = false;*/

    client.getAlarmsDataByUserAsync(Globals.username);
}

private void SQLService_getAlarmsDataByUserCompleted(object sender, getAlarmsDataByUserCompletedEventArgs e)
{
    Globals.fillAlarmsList(e.Result.ToList());
    Device.BeginInvokeOnMainThread(() =>
    {
        alarmsListView.IsRefreshing = false;
    });
}

When I reach the end of SQLService_getAlarmsDataByUserCompleted, I have a breakpoint, and "alarmsListView.IsRefreshing = false;" doesn't do anything: in the watch list I checked that the alarmsListView property IsRefreshing is still true. I'm kind of a rookie, but I thought that when I updated my ObservableCollection the ListView would be updated automatically; that's why I have an ObservableCollection instead of a List. I accept any suggestion that allows me to refresh the ListView, with the only condition being that I want to make that refresh without leaving the page. I realise that I can't change my ObservableCollection outside the main thread, because it is the ItemsSource of a UI element; that's why I always put the instructions that handle that object inside:

Device.BeginInvokeOnMainThread(() =>
{
});

Any help would be appreciated. Thanks.

Answers

I think you should look a bit into tutorials and demo applications. XAlign/YAlign is deprecated, for example. You should also avoid stacking StackLayouts; that's not their purpose. Use Grids! There is a XAML Previewer in alpha available that you can use to design your app, by the way. Instead of setting IsRefreshing = false, you can use yourListView.EndRefresh(). To your main problem: I think your Alarm model should implement INotifyPropertyChanged, look that one up!
Alternatively you can just use PropertyChanged.Fody, which is available in the NuGet browser.

Konrad, thanks for your quick response. I will try to use Grids; the XAlign and YAlign properties I need to remove are related to this thread that I have created and that had no answers. I agree 100% with you, and I've downloaded some XAML samples and watched some tutorials on Facebook, but it has been hard to find something that can help me; I can't find many resources about Xamarin.Forms, and I only post something here when I don't know what more to do. I'll try to implement INotifyPropertyChanged, but I think that would only be useful if my Alarm properties changed, and that's not the case. The only thing that is changing is my list, which is cleared and then populated again; an attribute of an alarm will never change. But maybe you're right, I will try that. Thanks in advance; I will try to solve my problem with your suggestions. (I am currently using Visual Studio, so I have never used the previewer; maybe I just have to move to the Xamarin IDE.)

You can find some tutorials at the Microsoft Virtual Academy as well. If the value of your properties changes but the ListView does not update, NotifyPropertyChanged should help. Good luck!

Hi @zRG,

Thank you Mabrouk. Instead of using what is supposedly more appropriate for an updatable ListView — an ObservableCollection, about which the documentation says "(...) Lists that can change during the course of an application should be stored in ObservableCollection instances. Such a list raises the CollectionChanged event when its content changes (addition, removal or a different sorting order). (...)" — if I just rebind the ItemsSource instead of updating it, it works. I know there is a more elegant way to do this, for sure, but that will work for now. Thanks!
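For readers unfamiliar with ObservableCollection, the idea the answers rely on — a list that raises a change event so a bound view can refresh itself — looks roughly like this. This is a toy Python analogue for illustration only, not Xamarin code:

```python
class ObservableList:
    """Minimal analogue of .NET's ObservableCollection: notifies
    subscribers whenever the contents change, so a bound view can redraw."""

    def __init__(self):
        self._items = []
        self._listeners = []

    def subscribe(self, callback):
        # A bound view registers here, like ListView binding to ItemsSource.
        self._listeners.append(callback)

    def _notify(self, action, item):
        for cb in self._listeners:
            cb(action, item)

    def add(self, item):
        self._items.append(item)
        self._notify("add", item)

    def clear(self):
        self._items = []
        self._notify("clear", None)

# A "view" that just records the change events it is told about.
events = []
alarms = ObservableList()
alarms.subscribe(lambda action, item: events.append(action))
alarms.add("Alarm A")
alarms.clear()
```

The collection fires an event per mutation; what it cannot do is force a control's own state (like `IsRefreshing`) to change, which is why that flag has to be reset separately on the UI thread.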
I want to make a custom sorting method in C++ and import it in Python. I am not an expert in C++; here is my implementation of "sort_counting":

#include <iostream>
#include <cstring>   // memset
#include <climits>   // INT_MAX
#include <time.h>

using namespace std;

const int MAX = 30;

class cSort
{
public:
    void sort( int* arr, int len )
    {
        int mi, mx, z = 0;
        findMinMax( arr, len, mi, mx );
        int nlen = ( mx - mi ) + 1;
        int* temp = new int[nlen];
        memset( temp, 0, nlen * sizeof( int ) );

        for( int i = 0; i < len; i++ )
            temp[arr[i] - mi]++;

        for( int i = mi; i <= mx; i++ )
        {
            while( temp[i - mi] )
            {
                arr[z++] = i;
                temp[i - mi]--;
            }
        }
        delete [] temp;
    }
private:
    void findMinMax( int* arr, int len, int& mi, int& mx )
    {
        mi = INT_MAX;
        mx = 0;
        for( int i = 0; i < len; i++ )
        {
            if( arr[i] > mx ) mx = arr[i];
            if( arr[i] < mi ) mi = arr[i];
        }
    }
};

int main( int* arr )
{
    cSort s;
    s.sort( arr, 100 );
    return *arr;
}

and then using it in Python:

from ctypes import cdll
lib = cdll.LoadLibrary('sort_counting.so')
result = lib.main([3,4,7,5,10,1])

Compilation goes fine. How do I rewrite the C++ method to receive an array and then return a sorted array?

The error is quite clear: ctypes doesn't know how to convert a Python list into an int * to be passed to your function. In fact a Python integer is not a simple int and a list is not just an array. There are limitations on what ctypes can do. Converting a generic Python list to an array of ints is not something that can be done automatically. This is explained in the ctypes documentation:

"None, integers, bytes objects and (unicode) strings are the only native Python objects that can directly be used as parameters in these function calls. None is passed as a C NULL pointer, bytes objects and strings are passed as pointer to the memory block that contains their data (char * or wchar_t *). Python integers are passed as the platform's default C int type, their value is masked to fit into the C type."

If you want to pass an integer array you should read about ctypes arrays.
Instead of creating a list you have to create an array of ints using the ctypes data types and pass that in instead. Note that you must do the conversion from Python; it doesn't matter what C++ code you write. The alternative is to use the Python C/API instead of ctypes and write only C code. A simple example would be:

from ctypes import *

lib = cdll.LoadLibrary('sort_counting.so')

data = [3,4,7,5,10,1]
arr_type = c_int * len(data)
array = arr_type(*data)

result = lib.main(array)
data_sorted = list(array)   # sort() works in place, so read the array back
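The marshalling step the answer describes can be tried on its own, without loading the shared library (sort_counting.so is not assumed to be available here): a `c_int` array is built from a list and its buffer reads back as ordinary Python ints.

```python
from ctypes import c_int

data = [3, 4, 7, 5, 10, 1]
ArrType = c_int * len(data)   # a C-compatible array type, like int[6]
arr = ArrType(*data)          # contiguous buffer a C function could sort in place

# Elements come back as plain Python ints; a C function receiving this
# as int* would see exactly this memory layout.
roundtrip = list(arr)
```

Passing `arr` where a function expects `int *` is the conversion ctypes cannot perform automatically from a plain list.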
Viewing State Machines (ROS)

Description: This tutorial shows you how to use the smach viewer, a simple tool to monitor state machines and get introspection into the data flow between states.
Tutorial Level: BEGINNER

This is what you want to debug a running SMACH state machine: the SMACH viewer shows a graphical representation of all the states in your (sub) state machines, the possible transitions between states, the currently active state, and the current values of the userdata. The SMACH viewer even allows you to set the initial state of your state machine. This tutorial teaches you how to start using the SMACH viewer.

Creating an Introspection Server

SMACH containers can provide a debugging interface (over ROS) which allows a developer to get full introspection into a state machine. The SMACH viewer can use this debugging interface to visualize and interact with your state machine. To add this debugging interface to a state machine, add the following lines to your code:

# First you create a state machine sm
# .....
# Creating of state machine sm finished

# Create and start the introspection server
sis = smach_ros.IntrospectionServer('server_name', sm, '/SM_ROOT')
sis.start()

# Execute the state machine
outcome = sm.execute()

# Wait for ctrl-c to stop the application
rospy.spin()
sis.stop()

server_name: this name is used to create a namespace for the ROS introspection topics. You can name this anything you like, as long as this name is unique in your system. This name is not shown in the smach viewer.

SM_ROOT: your state machine will show up under this name in the smach viewer, so you can pretty much choose any name you like.
If you have sub-state machines that are in different executables, you can make them show up as hierarchical state machines by choosing this name in a clever way: if the top-level state machine is called 'SM_TOP', you can call the sub state machine 'SM_TOP/SM_SUB', and the viewer will recognize the sub state machine as being part of the top state machine. For more details on the introspection server, see the API documentation.

The smach viewer will automatically traverse the child containers of sm, if any exist, and add ROS hooks to each of them. So you only need to hook up one introspection server to the top-level state machine, not to the sub state machines.

Once the introspection server has been instantiated, it will advertise a set of topics with names constructed by appending to the server name given to it on construction. In this case three topics would be advertised:

/server_name/smach/container_structure
/server_name/smach/container_status
/server_name/smach/container_init

The first two publish heartbeat and event info regarding the structure and status of the SMACH containers served out by "server_name". The third is a topic for setting the configuration of the SMACH tree over ROS. The "SM_ROOT" argument is simply used for visualization and forced nesting of different servers.

Running SMACH viewer

Once you have one or more introspection servers running in your ROS system, you can start the smach viewer using:

rosrun smach_viewer smach_viewer.py

The viewer will automatically connect to all running introspection servers.
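The topic names above are purely mechanical: the server name is spliced into a fixed pattern. A small sketch of that naming scheme (this helper is not part of smach_ros; it only illustrates the convention documented above):

```python
def introspection_topics(server_name):
    """Build the three topic names an introspection server with this
    name would advertise, per the naming scheme described above."""
    base = "/" + server_name + "/smach"
    suffixes = ("/container_structure", "/container_status", "/container_init")
    return [base + s for s in suffixes]
```

For `server_name` this yields exactly the three topics listed in the tutorial.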
Hello LtU community,

As this is my first post here on LtU, I'll briefly introduce myself. My name is Denis Washington, and I am an end-of-first-semester CS student at the Humboldt University Berlin, Germany. I am very interested in programming language theory and have done much personal research on it in the last few years, including writing an LLVM-based compiler for a safe, statically compiled C-like language in C++ as a hobby project. (I actually planned to release that as open source and had already implemented several control structures, boolean and number types, arrays and first-class functions, but lost interest because, after more and more research, I increasingly found the language to be too "baroque" and unexciting.)

Nowadays, my interest lies primarily in highly dynamic, reflective object-oriented languages; in fact, I am in the process of trying to design a prototype-based language vaguely in the vein of Self, NewtonScript and Io, with a special emphasis on conceptual simplicity and reflective capabilities a la Smalltalk, while trying to maintain a reasonable level of approachability for users of established mainstream scripting languages (like Python, Ruby, PHP, etc.).

In relation to this, I have been investigating the possibility of effective encapsulation of state for highly modifiable slot-based objects as found in several object-centered languages. Regarding this I found a paper on the encapsulation mechanism of Self prior to version 3 (Parents are Shared Parts: Inheritance and Encapsulation in Self) where slots could be declared as "private"; such slots can then only be referred to by objects of the same delegation family.
(However, according to the comments on the selflanguage.org discussion forum, the privacy declarations were degraded to pure annotations in Self version 3 because the mechanism "was found to be unworkable", partly because the protection could be easily circumvented due to Self's dynamic inheritance, which is also mentioned in the paper.) Other than that, I haven't found anything concrete on the topic yet.

So I thought about the issue myself, and came up with an idea that I would like to share with you. Suppose an object is a collection of slots, each of which associates a name symbol with a value. It is possible to retrieve a slot's value from an object by sending it a message, "getSlot", with the name of the slot as argument. Now, further suppose that name symbols are themselves objects, and are unique: there are never two symbol objects which denote the same name. So, in fact, object slots can be thought of as object-object (instead of string-object) associations. We could thus generalize: instead of special name symbols, "getSlot" could allow any object to act as slot key, making it possible to associate any object with any other in the context of a specific object.

Now my idea: given the above, it is clear that in order to refer to a slot in an object, two things are needed: the object itself, and the slot's key object. This means that without a reference to the slot key object, it is impossible to access the corresponding object slot. In the simple case that all keys are name symbols, this is a trivial statement, as in most languages it is possible to refer to any symbol from any context by way of literals. But if slots are modeled as generalized object-object associations, it is possible to create a key object which is only available in the context of its creator, and which may be used to control access to the slot by passing it only to privileged parties.
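The proposal can be mimicked in Python, where a dictionary keyed by plain objects (which hash by identity) plays the role of the slot table. This is only a sketch of the idea, not the proposed language: a real implementation would also have to keep the slot table itself out of reach of reflection.

```python
class SlotObject:
    """Slots keyed by arbitrary objects rather than by names.
    Holding a reference to the key object *is* the permission to use the slot."""

    def __init__(self):
        self._slots = {}

    def set_slot(self, key, value):
        self._slots[key] = value

    def get_slot(self, key):
        return self._slots[key]

priv = object()            # unforgeable key, known only to this scope
o = SlotObject()
o.set_slot(priv, "hidden state")

# Without a reference to `priv`, the slot cannot even be named:
try:
    o.get_slot(object())   # any other object is a different key
    guessed = True
except KeyError:
    guessed = False
```

A fresh `object()` compares equal only to itself, so code that was never handed `priv` has no way to construct an equivalent key.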
Thus, such a system would enable the programmer to use arbitrary objects as capabilities for accessing a slot, in the spirit of existing capability systems at the object-reference level. This feature could be used as a powerful way to encapsulate slots at any level of granularity. For instance, in a language with full block closures, one could write something like the following to define slots which are private to a specific set of object methods (Io/Smalltalk-like pseudo code):

o := Object clone
[
    priv := Object clone
    o foo := method [ ... foo := o getSlot(priv) ... ]
    o bar := method [ ... o setSlot(priv, "bar") ... ]
] value

(The [ ... ] value should signify that the code is run in a private local activation record to ensure that priv does not leak out to the surrounding namespace.) These methods could also grant other objects the right to access the slot by sending them the priv object, which would amount to something like a dynamic version of the C++ "friend" feature. Numerous other arrangements are thinkable.

What do you think of this idea? Is it sound? I know that this requires careful choices regarding the reflective features of a language with such a protection mechanism; for instance, it shouldn't be possible to enumerate all slot keys of an object (as in JavaScript). But except for this, is there anything I have overlooked? I am sure I'm not the first one to have this idea. ;) Any feedback on this is very welcome!

P.S.: Lambda the Ultimate has helped me tremendously with my PLT research, so a big thanks to everyone who posts here! You have helped me a lot in the last years, and I am absolutely certain that you will continue to do so. :)

Though it isn't as efficient as it could be, I don't think you'd necessarily need to copy the entire method. For example, what about just creating a trampoline that keeps track of the private variable closure and forwards it along to the "real" method? Or is this already the overhead you were concerned about?
Yes, I have implied an efficiency concern with that statement, although I haven't given it much thought; as you have mentioned, there are ways of implementing this which are probably sufficiently performant. The main concern I have, however, is how this influences the prototype-instance relationship: because the methods are explicitly assigned to each new object, the prototype inheritance relationship is lost. That is, if I do the following:

function Counter() {
    var value = 0;
    this.increment = function() { value += 1; };
    this.getValue = function() { return value; };
}

var c = new Counter();

then I cannot change the implementation of all Counters' increment() functions, like so:

Counter.prototype.increment = function() { print("Incremented!"); };
c.increment(); // still the old version

because c has an explicitly set value for increment and thus doesn't query its prototype. For "normal" methods (that is, those only assigned to Counter.prototype) this would work, however. Thus, you need to know beforehand which methods access private members and which do not, which is awkward. (I know the example isn't very illustrative, but you probably get the idea.)

I think that one crucial point about 'object capabilities' is that they cannot be faked or guessed, so another constraint of your system is that it must not be possible to fake object references. If you don't have deserialization, that's easy I think; if you do have this feature then object references must be something different than simple pointers (perhaps a pointer and a big unguessable number).
Scenario: your system has a "console" object that's used widely for console I/O. You download some untrusted code from wherever and your system passes the console object to the new code with the print reference because the docs said it needs to log some stuff. The untrusted code does a "console setSlot(print, doNothing)". Now suddenly none of your code can print to console. Or do you always do a clone of any ocap before passing it to untrusted code? The model is unfortunately not sufficient to model separate read and write capabilities; the object key is always a capability for both. There are ways around that, like introducing the ability to "freeze" slots make them immutable, or to only pass a key to a new slot which returns the actual slot's value through a method; but I admit that these are not optimal (though maybe sufficient in practice; in any case, the assumed main use of this scheme, which is slots private to a single object - or a group of mutually trusting objects -, does not seem affected by this restriction). Nice catch. setSlot( getSlot, \ slot -> "Muhuhahahaha!" ) do you always do a clone of any ocap before passing it to untrusted code? If you need special protocols to handle 'untrusted code', you certainly don't have an ocap system. I may have oversimplified a bit by bringing "getSlot" on the table. I guess that for this to work, slot lookup needs to be intrinsic to the system, and this is actually how I planned it for my language. More specifically, a slot of an object can only be accessed by sending that object a message with the key as selector. As in Self, if the slot contains a method object it is executed and its return value is the result of the message send, otherwise the slot's value itself is returned. So, no "getSlot" overriding madness. ;) What you present above is actually a rights amplification pattern... i.e. to access a slot, you need both an unforgeable reference to an object, and an unforgeable reference to its key. 
(You could just as easily use a password, GUID, or certificate for the latter.) Authority is therefore being mediated through this rights amplification protocol - similar to certificate authority systems or ADTs with first-class modules. By comparison, ocaps are all about making permission indivisible from holding a reference, which by nature excludes rights amplification. With rights amplification, you cannot be assured that the recipient whom you pass an object reference will have the same rights to it you have. This can become a problem for reasoning about security. (You can model rights amplification protocols, password systems, identity-based authority, certificate authority, et cetera using ocaps, but that doesn't mean these other models represent ocap systems.) (I'm not saying your system is 'bad', just that your description is a bit misleading. The JavaScript approach of hiding caps and cloning methods is far more direct a way to protect private elements by use of ocaps, i.e. by simply not sharing references to new variables. I'll write a separate post to discuss quality of your solution.) I understand your objections to using the term "object capabilities" in this context. I was actually just referring to the analogy to object capabilities regarding access control through references and unforgeability, but you are right that this makes objects themselves move away to be object capabilities in the pure sense of the word (in that a reference to an object may mean different capabilities in different contexts depending on which keys are available in these contexts). However, the password analogy you mentioned is a bit misleading, too, as passwords are forgeable - they are inherently based on being "only" hard to guess, while I am explicitly talking about the use of unforgeable keys (I know you actually noticed that, I just wanted to highlight the flawed analogy). 
Hi Denis -- you hit on a really tricky issue challenging capabilities for modern dynamic languages: the meta-object protocol! You might want to poke at Gilad Bracha's work on mirrors (for Newspeak?), Mark Miller's on Proxy objects for ECMAScript / Caja, and our joint work on mixing the same-origin policy into JavaScript (object views) (we first locked down the MOP and then started playing with exposing it with our extension to the membrane pattern). Short of rolling your own language, you might find some of our base libraries for controlled passing of references across browser frames to be a fruitful starting point.

As for the performance issues, there was a thread about this last summer or fall, but, essentially: the indirection table in Smalltalk gives you a lot of flexibility (I think something similar was adopted for proxy objects in ECMAScript), PICs are another approach for folding in mostly-static accessors (you typically wouldn't be changing them every 1000 instructions), and, of course, tracing would also remove the overhead. Addressing the issue for compilation approaches used in static languages seems like either an anachronism or a mismatch for the dynamic nature of the ability to dynamically control the MOP.

While you're reviewing the literature, also have a look at Web Sandbox. It was influential in establishing the proxy model of capabilities for JavaScript, and while it's still a research project, it is currently in use as a DOM isolation technology for Hotmail.

The key-based solution you present could just as easily be modeled by storing all the state in the key as in the object. That is, getSlot(foo,bar) could be implemented to look inside 'foo' or 'bar' or even a global relation inside 'getSlot'. From the semantics perspective, it doesn't make much sense to distinguish one or the other as bearing the dictionary (unless some other op makes it relevant).
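The point that the slot dictionary could live "in a global relation inside getSlot" rather than in either operand can be made concrete. This is a sketch under that reading (function and class names are hypothetical); observably it behaves the same as storing the dictionary inside the object:

```python
# The slot state lives in neither foo nor bar: it sits in a relation
# private to these two functions, keyed by the (object, key) pair.
# Python objects are hashed by identity by default, so the pairing
# is unforgeable in the same sense as before.
_relation = {}


def set_slot(obj, key, value):
    _relation[(obj, key)] = value


def get_slot(obj, key):
    return _relation[(obj, key)]


class Thing:
    pass


foo, bar = Thing(), Thing()
set_slot(foo, bar, "hidden")
assert get_slot(foo, bar) == "hidden"
```

Which participant "bears" the dictionary is invisible to callers, which is exactly the semantic point being made.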
> getSlot(foo,bar)

It seems this keyed approach could easily introduce hidden channels for purposes other than hiding private data.

If your "interest lies primarily in highly dynamic, reflective" systems, then I caution you against prototype-based OO. In large or long-lived prototype systems, such as LambdaMOO, decisions and data easily become entrenched deep in your object model, and the system stagnates. It becomes difficult to extend the system for processing outside your initial design. For highly dynamic systems, you need to support multiple, consistent, parallel data models. Decisions made in one model must propagate to the others, possibly coordinating with them. You'd be better off pursuing bidirectional and reactive programming, or pluggable architectures.

I haven't seen enough work in bidirectional or reactive systems to substantiate claims about large, long-lived systems (and I am aware of large, long-lived projects that use them). Furthermore, I don't typically associate bidirectional languages with reflective ones -- they're very constraining in practice, at least today, which isn't fun when working with reflective (and, I assume of interest, introspective) abilities: that's hard code to analyze, so inverting it is a bit of a reach. Likewise, bidirectionality is typically orthogonal to dynamism (they don't mix well, so they're shielded from each other whenever I see a system with both). Overall, I'm confused by your positioning of these ideas. This isn't a critique of the ideas. They're fascinating -- I used to make reactive languages and am now doing the runtime optimizations for, in part, a bidirectional language.

Most of my dayjob work is in large, long-lived, reactive systems. However, the systems I speak of are architecture-based rather than language-based... e.g. publish/subscribe, blackboards or tuple spaces.
These architectures have proven very flexible (and moderately scalable) for integrating and independently developing new components and extensions. While these architectures are flexible (aka dynamic) 'in-the-large', it turns out they are rather painful to work with 'in-the-small'. The languages we use (C++ and JavaScript, mostly) are simply not well adapted to the architecture. Managing subscriptions and caches is especially painful.

On occasion the 'native' data format (the initial view presented to modules) happens to be the view we need to make good decisions and take actions. In that case, all we need is a simple subscription, or simple updates. Since it simplifies our lives, we try to ensure this is the case as often as possible by intelligently designing (and redesigning) the data model. But there will always be some user story that needs a different view.

A common case I work with is adapting to new protocols. In those cases, a module is forced to gather a bunch of scattered data into a new view, often managing a large number of subscriptions, dropping or adding subscriptions based on changes in the data. Keeping a view 'consistent' with the native model is a non-trivial exercise.

Eventually, we make decisions and take action. These cannot just be reflected in the local 'view' we have constructed; our decisions must propagate back to the native model (and then further, to sensors and actuators). Thus, we also maintain the relationship from the view back to its source.

The fundamental requirements seem to be reactivity (to maintain multiple, consistent views in a concurrent system) and bidirectional mapping (so decisions and actions made on a view will propagate to its source). This becomes difficult when a 'view' comes from composing observations from diverse elements in some other model - which is not uncommon.
Multiple models and bidirectional influence, together, give us dynamism - the ability to extend the system with new protocols, new data sources, new ideas, new domains and disciplines.

(Modeling everything as data is also useful for highly dynamic, reflective systems. For my dayjob project, the design we came up with added method-calls separate from the data model for reasons of 'efficiency'. We have since come to regret that decision because it hinders extending the system with replay, auditing, and post-mission analysis. Were we to do it over, every method-call would be associated with a fresh data entity.)

You say bidirectional is 'very constraining', but I do not mean to suggest our languages should constrain us such that every function and data-binding must be bidirectional and lossless. I'd be very happy if our languages simply made the bidirectional properties a lot easier to achieve and compose. At the very least, we need to more tightly couple the notions of 'reactive', 'view', and 'control'. It is only natural that, when we see something, we want to reach in and touch it. The problem with most OO data models is that such changes tend to modify the view rather than the cause for the view. My own efforts towards more bidirectional programming are based on the notion of demand-driven systems, e.g. a video camera might power on only because it is being viewed. Parameterized demands can serve both as queries and constraints.

I've read a lot about your work with flapjax. I don't see anything on your perpetual-student page about bidirectional programming. What is it you're working on?

Ah, I conflated reactive and bidirectional systems -- which are inherent properties of the program -- with language-based techniques for achieving them. My point was just that we don't have proof that the language-based approach is the way to go or how for systems of scale; they're awesome, but very much research.
Thibaud Hottelier is designing a bidirectional layout and animation language. As one backend, a synthesizer finds attribute grammars to compute various directions of flow, which feeds into my part, a fast (parallel, SIMD, memory-optimized) AG solver, and a more murky story on the rendering (OpenGL and Qt today, hopefully canvas and OpenVG soon). As a frontend, you can use a lot of the normal syntactic abstraction facilities of the browser -- xml, selectors, cascading -- optimization problems we've already done great with wrt parallelism.

> we don't have proof that the language-based approach is the way to go or how for systems of scale; they're awesome, but very much research

I grant this is true. On the other hand, I've seen plenty of evidence that prototype-based OO does not retain 'highly dynamic and reflective' properties at scale. I believe seeking alternative solutions among architectures and experimental language design is quite rational. Taking the well-trodden path of prototype-based OO and expecting different results (without significant changes) is insanity.

Your new project sounds interesting. I'll look into it over the next week.

On prototypes: I used to do a lot of ActionScript development: there, the ability to manipulate the central MovieClip object was powerful. Unfortunately, the common use of prototypes -- JS for the DOM -- crippled the main object of manipulation (the DOM) and now, while finally more consistent, the security camp (cough, capabilities) put the nail in the coffin. I agree with the comments about scale and introspection. The dynamism isn't amazing, but does serve a purpose -- however, as seen in JS's continuing evolution, the MOP needs to be rich (e.g., dynamic accessors).

We haven't written much up recently.
There was a rejected paper last year on the constraint language and I have a thesis proposal draft touching part of it; we've made great strides recently and will probably write up a few things this year (css semantics, new parallel algorithms and language implementations, etc.)

oops wrong place (bad phone, bad!)

Keys in Lua tables are arbitrary objects, and it's a well-known idiom to use an object as a key (instead of a string) in order to avoid accidental collisions. (Lua doesn't attempt to be capability-secure.)

Some prior work in this area was done by Pavel Curtis in his MOO language and system: It was nothing like capabilities, and there were problems with it in practice, but it was an early and interesting approach to the problem of providing controllable access to sub-parts of an object in a multiuser setting.
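The Lua idiom mentioned here (a fresh object as a table key, so no string key chosen elsewhere can collide with it) has a direct analogue in Python dictionaries. A small illustrative sketch:

```python
# A fresh object() is hashed by identity, so no string key chosen by
# other code can ever collide with it -- the same collision-avoidance
# trick as using a table/object as a key in Lua.
PRIVATE = object()

record = {"name": "widget"}
record[PRIVATE] = "reachable only via the PRIVATE sentinel"

assert record["name"] == "widget"
assert record[PRIVATE] == "reachable only via the PRIVATE sentinel"
assert "PRIVATE" not in record  # the *string* "PRIVATE" is a different key
```

As the poster notes for Lua, this avoids accidental collisions but is not capability security by itself: anyone who can enumerate the dictionary's keys can still reach the sentinel.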
http://lambda-the-ultimate.org/node/4217
This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.

Previously reported and patch formulated by Paul Archard.

GLIBC versions: tested with 2.7, 2.9, 2.11, 2.14 built from official source (by Paul Archard)
GLIBC versions: tested with 2.18 built from official source (by MyungJoo Ham)
EGLIBC versions: tested with 2.13 built for Tizen/armv7l (by Beomho Seo and MyungJoo Ham)

Quoting Paul Archard's message:

In a program satisfying the conditions listed below, reusing a cached stack causes a bounds overrun of the thread's DTV structure, leading to a probable crash. The _dl_allocate_tls_init function re-initializes the dtv based on the current slotinfo_list; however, it is possible for the dtv to be smaller than the highest module id loaded. When this happens the function will write over memory outside of the dtv, leading to unpredictable behavior and an eventual crash. See the proposed fix below and the attached test case for a repro.

Conditions needed:

- The use of a relatively large number of dynamic libraries, loaded at runtime using dlopen
- The use of thread-local storage within those libraries
- A thread exiting prior to the number of loaded libraries increasing a significant amount, followed by a new thread being created after the number of libraries has increased
Example Valgrind output:

==27966== Invalid write of size 8
==27966==    at 0x4010A7A: _dl_allocate_tls_init (dl-tls.c:418)
==27966==    by 0x4E35294: pthread_create@@GLIBC_2.2.5 (allocatestack.c:252)

followed by:

==27966== Address 0x5b4e6d0 is 0 bytes after a block of size 304 alloc'd
==27966==    at 0x4C26D85: calloc (vg_replace_malloc.c:566)
==27966==    by 0x4010439: allocate_dtv (dl-tls.c:297)
==27966==    by 0x4010B4D: _dl_allocate_tls (dl-tls.c:461)
==27966==    by 0x4E357E9: pthread_create@@GLIBC_2.2.5 (allocatestack.c:575)

Tested-by: Beomho Seo <beomho.seo@samsung.com>
Signed-off-by: MyungJoo Ham <myungjoo.ham@samsung.com>
---
 ChangeLog    |  5 +++++
 elf/dl-tls.c | 48 ++++++++++++++++++++++++++++++++++++++++++++++--
 2 files changed, 51 insertions(+), 2 deletions(-)

diff --git a/ChangeLog b/ChangeLog
index b9201fc..39f9ce7 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,3 +1,8 @@
+2013-11-22  MyungJoo Ham  <myungjoo.ham@samsung.com>
+
+	* elf/dl_tls.c: Prevent bound overrun and allocate another chunk of
+	memory when needed for dtv.
+
 2013-11-21  Roland McGrath  <roland@hack.frob.com>
 
 	* malloc/malloc.c: Move #include <sys/param.h> to the top; comment why
diff --git a/elf/dl-tls.c b/elf/dl-tls.c
index 576d9a1..d19c9f5 100644
--- a/elf/dl-tls.c
+++ b/elf/dl-tls.c
@@ -34,14 +34,12 @@
 
 /* Out-of-memory handler. */
-#ifdef SHARED
 static void
 __attribute__ ((__noreturn__))
 oom (void)
 {
   _dl_fatal_printf ("cannot allocate memory for thread-local data: ABORT\n");
 }
-#endif
 
 
 size_t
@@ -387,6 +385,52 @@ _dl_allocate_tls_init (void *result)
 	 TLS. For those which are dynamically loaded we add the values
 	 indicating deferred allocation. */
   listp = GL(dl_tls_dtv_slotinfo_list);
+
+  /* check if current dtv is big enough */
+  if (dtv[-1].counter < GL(dl_tls_max_dtv_idx))
+    {
+      dtv_t *newp;
+      size_t newsize = GL(dl_tls_max_dtv_idx) + DTV_SURPLUS;
+      size_t oldsize = dtv[-1].counter;
+
+      if (
+#ifdef SHARED
+	  dtv == GL(dl_initial_dtv)
+#else
+	  0
+#endif
+	  )
+	{
+	  /* This is the initial dtv that was allocated
+	     during rtld startup using the dl-minimal.c
+	     malloc instead of the real malloc. We can't
+	     free it, we have to abandon the old storage. */
+	  newp = malloc ((2 + newsize) * sizeof (dtv_t));
+	  if (newp == NULL)
+	    oom ();
+	  memcpy (newp, &dtv[-1], (2 + oldsize) * sizeof (dtv_t));
+	}
+      else
+	{
+	  newp = realloc(&dtv[-1], (2 + newsize) * sizeof (dtv_t));
+	  if (newp == NULL)
+	    oom();
+	}
+
+      newp[0].counter = newsize;
+
+      /* Clear the newly allocated part. */
+      memset (newp + 2 + oldsize, '\0', (newsize - oldsize) * sizeof (dtv_t));
+
+      /* Point dtv to the generation counter. */
+      dtv = &newp[1];
+
+      /* Install this new dtv in the given thread */
+      INSTALL_DTV (result, newp);
+
+      assert(dtv[-1].counter >= GL(dl_tls_max_dtv_idx));
+    }
+
   while (1)
     {
       size_t cnt;
-- 
1.8.3.2
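The shape of the fix above - before reusing a cached dtv, grow it to cover the highest loaded module id plus some surplus - can be sketched abstractly. This is Python and illustrative only; the real code manages raw memory, the two dtv header entries, and the rtld-minimal-malloc special case:

```python
DTV_SURPLUS = 14  # headroom beyond the current maximum, analogous to glibc's DTV_SURPLUS


def ensure_dtv_capacity(dtv, max_dtv_idx):
    """Grow the per-thread dtv list so indices up to max_dtv_idx are valid.

    The bug: a thread reusing a cached stack kept its old, smaller dtv,
    while more modules (with higher ids) had been dlopen'ed in the
    meantime, so re-initialization wrote past the end of the dtv."""
    if len(dtv) <= max_dtv_idx:
        newsize = max_dtv_idx + DTV_SURPLUS
        # Clear (None-fill) the newly allocated part, like the memset.
        dtv.extend([None] * (newsize + 1 - len(dtv)))
    return dtv


dtv = [None] * 4                 # old dtv carried over from a cached stack
dtv = ensure_dtv_capacity(dtv, 10)
dtv[10] = "module 10 TLS"        # safe now; previously an out-of-bounds write
assert len(dtv) >= 11
```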
https://sourceware.org/legacy-ml/libc-alpha/2013-11/msg00665.html
Plack::Middleware::Access - Restrict access depending on remote ip or other parameters

version 0.2

  # in your app.psgi
  use Plack::Builder;

  builder {
      enable "Access" rules => [
          allow => "goodhost.com",
          allow => sub { <some code that returns true, false, or undef> },
          allow => "192.168.1.5",
          deny  => "192.168.1.0/24",
          allow => "192.0.0.10",
          deny  => "all"
      ];
      $app;
  };

This middleware is intended for restricting access to your app by some users. It is very similar to the allow/deny directives in web servers.

A reference to an array of rules. Each rule consists of a directive, allow or deny, and its argument. Rules are checked in the order of their record to the first match. Code rules always match if they return a defined value. Access is granted if no rule matched.

The argument for a rule is one of four possibilities:

Always matches. A typical use case is deny => "all" at the end of the rules.

Matches on the domain or subdomain of remote_host if it can be resolved. If $env{REMOTE_HOST} is not set, the rule is skipped.

Matches on one ip or an ip range. See Net::IP for a detailed description of the possible variants.

An arbitrary code reference for checking arbitrary properties of the request. This function takes $env as parameter. The rule is skipped if the code returns undef.

Either an error message which is returned with HTTP status code 403 ("Forbidden" by default), or a code reference with a PSGI app to return a PSGI-compliant response if access was denied.

You can also use the allow method of this module just to check whether PSGI requests match some rules:

  my $check = Plack::Middleware::Access->new( rules => [ ... ] );
  if ( $check->allow( $env ) ) {
      ...
  }

If your app runs behind a reverse proxy, you should wrap it with Plack::Middleware::ReverseProxy to get the original request IP. There are several modules in the Plack::Middleware::Auth:: namespace to enable authentication for access restriction.
Jakob Voss Yury Zavarin <yury.zavarin@gmail.com> This software is copyright (c) 2010 by Yury Zavarin. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.
http://search.cpan.org/~tadam/Plack-Middleware-Access-0.2/lib/Plack/Middleware/Access.pm
Microsoft Teaming up with RadioShack 259

ViceClown writes "Microsoft is teaming up with RadioShack in a sweeping 5 year deal to set up Microsoft 'stores' inside RadioShack brick and mortar shops. Customers will be able to view demonstrations and sign up for MSN internet access."

There goes the neighborhood! (Score:1)

*MS*NBC (Score:1)
I remember back when MSNBC launched there was a lot of assurance of Microsoft Corp having 0 involvement in it newswise... hmmm...

Re:There goes the neighborhood! (Score:1)
So it goes.

boo (Score:1)
Then again, who shops there? Most people I know order parts online or go to other electronic stores. Most RadioShack employees are clueless beings.

More (Score:1)
It mentions that MS are investing $100 million in Radio Shack's web site. Surely some mistake?

Oh boy (Score:1)
*Example: I got a package of 4 telephone wire splicers at Radio Shack for about $2.00. The next day I got a package of 25 for about the same price at Home Depot.

makes sence (Score:2)

Radio Shack... (Score:1)

Sweet Man (Score:3)
Here are two reasons: 1) I have wished and wished for an R/C Car that had a 'start' button. Maybe it will even shut down and require a reboot every two laps. 2) I always hate being able to go to a store that has all the cool electronic bits and pieces I want but never carries a good copy of "Learning Win98." I mean, who in their right mind solders breadboard chips and circuits without their trusty Win98 for Dummies books? -Davidu

An obvious attempt to doom Radio Shack (Score:1)
--- Dirtside

The History of Microsoft partners. (Score:4)
Spyglass was to share the profits from the sale of Internet Explorer. Digital was to benefit from NT. (Oh, and NT was going to be VMS done right.) Sybase was going to benefit from its SQL partnership with M$. Microfocus was offered a deal. M$ was to take 10% of its cash across the whole product line so Microsoft would keep selling its COBOL product.
Microsoft has a history of leaving its partners in worse shape than before they started. Now, I'm waiting for Radio Shack to get the short sticky brown end of the stick. Cuz thats the end most EVERYONE else has gotten. Re:Oh, Good, an alliance of obsolescence (Score:1) They do have this strange ability to have that ONE really odd part that you must have. Radio Shack has saved many an ass at any time (At least, the ones that have that big wall of electronic components). Good for Microsoft. (Score:2) This is a good thing for Microsoft to do. Creating partnerships is a good way to benefit from the markets that the other company has. Hopefully Radio Shack did this out of their own will and because Microsoft told them that if they didn't Radio Shack would be out of business in 3 years. Microsoft wouldn't violate anti-trust laws just after the Judge released the FoF documenting their transgressions in they past, would they? Nah...-Brent -- Hahaha.. BEWARE MICROSOFT!!! (Score:1) Tandy has a habit of blowing deals left and right. IBM sold their machines in RS, now IBM refuses to even let Tandy service those machines. Compaq has had much lower sales than expected in RS. The incredible universe (iu), McDuffs (md), Computer City (cc)... all BLOWN deals.. one every 3 years. RS market share is dwindling. MS just wasted 100M.. ohh well! Pan Good for Microsoft. (Score:2) This is a good thing for Microsoft to do. Creating partnerships is a good way to benefit from the markets that the other company has. Hopefully Radio Shack did this out of their own will and *not* because Microsoft told them that if they didn't Radio Shack would be out of business in 3 years. Microsoft wouldn't violate anti-trust laws just after the Judge released the FoF documenting their transgressions in they past, would they? Nah...-Brent -- Seriously though... (Score:1) Match made in heaven? 
(Score:1) --joe Re:Oh boy (oops) (Score:1) Also, does this mean that the RadioHack clerks will now ask for your email address too, so M$FT can send you junk email catalogs? New Microsoft Radio Shack! (Score:1) -PovRayMan Re:*MS*NBC and other (Score:2) On a slightly related note, did anybody watch the "Smoke in the Eye" Frontline episode a couple of weeks ago? It was a pretty damming account of the Big Tobacco vs. CBS debacle. It showed pretty plainly that the 60 Minutes/CBS lawyers didn't want to report a bad story about the tobacco industry so CBS wouldn't have a huge lawsuit in it's books when the higher ups were trying to sell it to Westinghouse. An excellent example of what can/does happen when giant corps own the conduits of the news. More overpriced shiznit! (Score:1) It won't be long... (Score:1) Build Your Own Cable Descrambler and Windows Stablizer! ....For only 19.95 we will send you instructions that will tell you how to get hundreds of cable channels for free using parts obtained from any Radio Shack. New developments in the software world now make it possible for the same device to make your Windows computer run better than ever!! No more running all over town, get it all in one place!..... Nose -Common sense isn't. Estimate of comments... (Score:2) Oh goody (Score:1) Guess it business as usual in Redmond (Score:2) Right ? I wonder why MS simply doesn't set up an online store for all their software ? It isn't like stores will stop carrying MS product if they do. MS could catch some of that margin for themselves. Like they need the money. Well, if Win 2000 is as crappy as the RC2 indicates it might be, then they may actually need every penny they can scrape up. Win 2000's DNS server managed to destroy all the domain records on my intranet (Linux 2.2.5 Bind 4) when I started it...that was a neat trick. I know everyone on the Internet will want that feature. It is time to upgrade to BIND 8 people. 
Worse Win 2000 Server didn't give me the option NOT to install its DNS Manager. Win 2000RC2 also uses 30% more RAM on my system immediately after a system startup. (104MB with 2000RC2, 68MB on NT4 same hardware.) Oops, this turned into an anti-Windows rant...sorry. Then again Win2K deserves it more than any Windows to-date. Beta or otherwise Still waiting... (Score:2) That's the way buying computers used to work. You tried the Apple IIe, the Atari 800, and the Commodore 64 in your local department store, then picked the one you liked best. Currently, consumers think they have no choice, Linux or no Linux, because they can't play with Linux in the store. If they could, then they could make an informed decision. Right now they have to go by the reviews on the Internet, which just isn't the same IMHO as actually getting down and trying the OS. Now that all Intel instruction set based computers need not be the same, I think it's important to find a way to get Linux machines set up on display in major computer stores to help boost its growth even more. They make a good pair (Score:2) Radio shack is a retail outlet and might be a perfect host for Microsoft. Why would the software giant kill such a lucrative host that can push its warez? Radio Shack may be a poor place to buy parts (or anything else!) but they cater to the public and push credit so they may buy. Christmas shoppers and gift looking people for birthdays, etc., often find their catalogs attractive and take advantage of Radio Shack's offerings. If you want electronics, there are many good places to find parts on the web. I'd rather take apart a television than go to the shack these days! Re:*MS*NBC (Score:1) KS Is this interesting? (Score:1) abcd (Score:1) Seriously, why doesn't radio shack just die? Perhaps it's the same reason AOL is the biggest and most successful ISP? Oh well, it's not like anyone makes you go into radio shack while you're at the mall. 
Re:*MS*NBC and other (Score:1) think about it, if the story is reported by them first, then other news orgs will be less likely to make a big deal of it (since they weren't first), effectively giving MS an amazing ammount of spin control Outdated... (Score:1) You mean they weren't already? Sure, they've got stuff there that no one else has... because no one else will carry stuff as old... Re: A little off-topic question (Score:1) Re:Guess it business as usual in Redmond (Score:1) Hey...didn't Apple....? (Score:1) So what? (Score:1) Sheesh. Might as well post "Coca-cola signs deal to put soda in every McDonalds," or "Hershey's puts candy bars in grocery store checkout lanes." Yawn... (Score:2) This turned out to be a big non event. It seems that the collective reaction has so far been: "So what?" There was a lot of hype yesterday in the mainstream news. Both CNET and CNN were reporting breathlessly that some really really really big announcement from MS will be forthcoming tomorrow. Even Yahoo ran a little blurb. It was going to be a major announcement about a mega-mega deal, I read yesterday, we promise. And the announcement is ... [drumroll] ... Microsoft is going to sell stuff in Radio Shack. Huh? That's the big announcement? By complete accident, I happen to find myself at earlier today (what a useless site, BTW). They had this splattered all over the home page. They had one of the fancy live webcast thingies going on. Really, I must be missing something, but I don't see what the big deal is. The reason that only MSNBC is reporting on this, and only now, is because nobody else really cared about it, once they found out what was the big announcement. -- Re:Is this interesting? (Score:1) Re:Radio Shack... (Score:1) Re:*MS*NBC and other (Score:1) For the most part, the content of hard news is of little concern--most people are intelligent enough to notice if there were discrepancies between an MSNBC article and a similar article on the New York Times. 
The problem is the articles that they do print. Look at the MSNBC Slashdot response [msnbc.com] article. As was pointed out in an earlier news post on here, they took a few relatively meaningless quotes off of Slashdot and represented them as the ideas of an average Slashdot user. The result? Someone that reads the article thinks that the Slashdot community is a pretty inarticulate bunch--the average 'net user won't take the time to hunt through /. to find the article in question. This, of course, can be applied to any subject. A version of Netscape has a security flaw? You can bet they'll slap an article up. The fact that MSNBC posts a relatively harsh article on MS when every other site on the net is doing the same? Not surprising. ~=Keelor radioshack.com (Score:1) Another sad day... (Score:2) Now, Microsoft's ripping Apple off yet again (this deal looks disturbingly similar to Apple's Stores-Within-A-Store at CompUSA, Fry's, and Micro Center). It'll be interesting to see how they do this one. Re:makes sence (Score:1) If MS starts to make good revenue on this, they will eventually want more. That's how their apps started. They tested the waters of app developmenet, an found that they could make a lot more money there. So they use their business/marketing smarts to kill the competition and take over the market. You might see the same here, and RShack will take the brunt of it. Steven Rostedt Re:Match made in heaven? (Score:2) Hear hear!!!! Because we all know what excellent, knowledgeble salespeople work at Fry's! I'm glad I live 40 minutes from Fry's! ----------------------------------------- My analysis (Score:2) This is actually convenient. I can avoid everything at once. Looks like they're pushing MSN stuff only (Score:2) This is a pretty smart move. Selling Win98 in Radio Shack would probably not be a bit hit, but nowadays at least my local Radio Shacks are havens for clueless people who for some reason desperately need cell phones. 
Great audience for pushing the consumer connectivity stuff. Re:okay... (Score:1) Targeting the poor.... (Score:1) On the surface unwise, but whats the real angle? (Score:3) From the miniscule press release it sounds like they're trying to sell MS wireless and internet access but how many computers does Radio Shack really sell? Radio Shack isn't exactly the first place most people run to for finding an ISP either. Microsoft doesn't usually make unwise marketing moves, so there's got to be an angle, I'm just not seeing it. Were there any other people trying to get their software or services in Radio Shack that Microsoft is effectively keeping out? Red Hat? Apple? AOL? Re:Another sad day... (Score:1) Re: A little off-topic question (Score:1) Steven Rostedt just like banks in grocery stores (Score:1) than Microsoft can pop up inside of Radio shack. No big deal I guess. Anyone know if MS owns or has interest in Radio Shack? You forgot one ... (Score:1) The comment that tells you what all the other comments will be about. Steven Rostedt Re:abcd (Score:1) alright alright....I suppose if I start out at 2 I should actually have something to say heh heh Re:Sweet Man (Score:1) Microsoft/Radio Shack partnership (Score:1) Shops in shops (Score:1) Of course, Microsoft is a completely different animal and Radio Shack is at least in the same ballpark of supply but I still don't think it's a really great idea. Maybe as a stepping-stone for Microsoft to open their own high-street stores, just test the water first? On an semi-related note, when I was younger, I heard a lot about how RadioShack(USA) was so cool with all this electronics stuff to buy. Their sister-chain here (Tandy) is pretty disappointing with electronics components stretching to audio cables, a few resistors and LEDs and some chips (that you would have to go to a proper component store to get the support components for anyway). 
As such, I was really looking forward to actually visiting a RadioShack in the States but to my dismay, it was almost just the same. Oh well, maybe it's just one more thing where I missed the window on when it was good (Like I hear that MTV was actually something to enjoy watching once upon a time) Oh well, at least now there's plenty of Maplins (though they've started to get a little to heavily into consumer electronics) and Frasers (a small shop in Portsmouth with excellent stock and prices) Erm, relevance? What's that? Rich Alternatives to RadioShack?? (Score:1) Radio Shack employees (Score:1) True story, from about ten years ago: an electrical engineering student buys a beautiful old vintage ('40's) radio and finds that it works fine except for the power indicator light bulb being burned out. Uncertain whether this particular model of light bulb is still being made, he measures the juice flowing through the socket (60V AC) and takes the bulb to the local Radio Shack, hoping he can just pick something up without having to mail order it (waiting several days and paying shipping for a $0.60 part being a pain in the neck). He asks the clueless clerk whether they stock a replacement bulb; the clerk can't find anything with that model number in their catalog, at which point the customer mentions that it had 60V AC going through it. "Oh, that explains it," says the clerk. "We only have DC light bulbs." The student goes back to the dorm and tells his roommate (me) this story. I fall over laughing. The buried point... (Score:3) "...found a home connectivity partner, offering not just services but innovative technologies as well. Where else in the country is there a place to go specifically for "home connectivity"? I know my house is connected, but I did that myself from hacking together DSS, Cable Modem and a nifty little p90 linux gateway. But what do you do when you're joe schmoe, and don't have the knowledge to do it yourself? 
Now the average guy may have somewhere to go to get it all in one package. Sprint, Microsoft, RCA, etc... One stop shopping for all the hardware and software to wire your home. All run by a simple Microsoft interface. This may actually be a good thing. Something my mother could do. What's easier to understand? This: 1) install linux 2) configure network scripts to run dhcpcd 3) Setup dhcpd sever on eth1 4) ipchains -q 5) debug terminal 6) and the list goes on... or this: 1) push power 2) push start button. 3) Something bad happens, repeat. Us dorks might have Architecture issues with the system, but the average guy just wants it to work. "You want to kiss the sky? Better learn how to kneel." - U2 "It was like trying to herd cats..." - Robert A. Heinlein Re:*MS*NBC (Score:1) I first noticed the story via my CNNfn Slashbox (the MS-phobic can, at least temporarily, peruse CNN's 12:10 Redmond-time take here [cnnfn.com] -- there's no time stamp on the MSNBC.com story, but surely they had no "world exclusive"). While I'd love to put MSNBC in the conspiracy-theory in-box, I'm pretty sure this story (actually a Waggener Edstrom press release [tandy.com]) reached every organization at roughly the same time. -- SHACK (Score:2) SHACK, after all, evokes imagery of a crappy, run-down, outhouse type of thing. Well, it's appropriate but not a very strategic marketing move. I think Microsoft should change their name to reflect the partnership. Junk-ass-stuff, or Dubious-Morals-Software, Inc. Something like that. ------------------------------------------ yes (Score:1) Whoops!! correction... (Score:1) 3) If something bad happens then repeat. I do not mean to imply that something bad *would* happen each time the box was turned on. =) "You want to kiss the sky? Better learn how to kneel." - U2 "It was like trying to herd cats..." - Robert A. Heinlein Re:Match made in heaven? (Score:1) Just do what I do (Score:1) Me: Cash. Cashier: And how do you spell that, Mr Cash? Me: Cash. 
I'm paying you with cash. Re:The buried point... (Score:1) M$: 1)push power 2)push start button 3)something bad happens, goto 1 (indefinitely) or Apple: 1)push power 2)start surfing Re: A little off-topic question (Score:1) Yeah, but Digital bought into the whole NT thing.. They even ported NT to the Alpha.. Digital, the once mighty minicomputer giant, then started losing a lot of money, and were bought by a PC company. -joev, former DEC employee, who actually worked in Digital's NT marketing group... Actually, I forgot two... (Score:1) Worst of Both Worlds (Score:2) public class SlashdotRant { public static void main (String[] args) { String microsoft, radioshack; if ((microsoft == overpriced_software) && (radioshack == overpriced_hardware)) { microsoft + radioshack = worst of both worlds; } else { System.out.println ("It's still not worth it. Shop somewhere else."); } } } Woohoo. Posting in java. I feel like a geek, and I love it! The humanity! (Score:2) 1. Lie. 2. "Sorry, that's not something I have to give you. If you want to push it, I'll take my business elsewhere." Number one has the effect of pissing off the schlepp that lives in my old apartment. Number two has always gotten me out quick, with item in hand. Seriously, I fail to see how this is a good thing. Bookstores and computer stores are already swamped in MS books and paraphenalia- a partnership w/ RS is only increasing their reach into one area they don't control. Isn't this spreading the monopoly? When I think "quality", MS is the last thing on my list- why RS would want to promote a substandard product is beyond me. Since the whole environment of the store seems to be more for electronics hobbyists and people trying to connect their cuisinart to their Dreamcast through their Amiga, one would think it an ideal environment for Linux. Combining a desktop monopoly with the vast database of customers that Radio Shack has is a disturbing thought. Microsoft wanting to get their mitts into that, possibly? 
Ouch. "Sympathy for Microsoft!" junk mail, anyone? Anyone? Only Death is Silence. Acceptance is Surrender. Re:*MS*NBC (Score:1) i wonder if MSNBC is *obligated* to hype every lame MS press release. Re:The buried point... (Score:2) I've recently come to realise the wisdom behind a teacher's quote at my old school. "The man in the street? Sometimes, I wish they just left him there!". The sensible point behind the quote is that it's not necessarily the case that having all ignorant - or I should say, unknowing - folks coming to Linux is a good thing, rather that there will be some to whom other packages are better suited. Simply because, Linux wouldn't be Linux with that sort of market-awareness: the whole thing could go down the pan pretty fast, as it hits the increasingly-commercial arena. Where are the geeks yelling 'let's keep linux free!'? (Apart from me, that is Re:makes sence (Score:1) engineers never lie; we just approximate the truth. Re:Oh, Good, an alliance of obsolescence (Score:1) Windows 95/98/2000 kernel is older than Dirt (Dirt, of course, having been invented in 1994, just after MS-DOS 6.0 Gee when did the linux kernel get made? When did Unix get invented? Oh hang on, all those things actually have improvements over the years - even the *evil* microsoft empire seems to continue development on the NT kernel. It'd be better for everyone ... (Score:1) (o.k. a cheap shot, I couldn't resist) Re:NOOOOOOOOOOOOOOOOOOOOOOOO!!!!!!!!!!!! (Score:1) Radio Shack the lacky (Score:1) Also is mine the only Radio Shack where the employees think they know everything, but know nothing. I HATE getting into arguments with Radio Shack employees. I could go in there and say I am going to invent an Astral Demoleculizer to travel to another dimension and the employees at my radio shack would insist that I am buying the wrong parts to make it work even though they obviously don't know anything about Astral Demoleculizers they feel the need to be right. It really annoys me. 
*NOTE*: If you are with a Government agency I do not know anything about Astral Demoleculizers and have by no circumstances built such a device and traveled to PS389 to read the blue book. Honest. Re:Alternatives to RadioShack?? (Score:1) Maybe in order to be successful in that market, you have to relentlessly collect addresses and phone numbers of your clientele. Re:*MS*NBC (Score:1) Re:Radio Shack employees (Score:1) Someone has led you awry (Score:2) Radio Shack has always sucked, as has MTV. Ok, at one point back in the early 80's MTV sucked _less_ but it still sucked. Microsoft has always sucked too, so I can see the commercials now: Hey! You got your Microsoft in my Radio Shack. No, you got your Radio Shack in my Microsoft! (voice over) MS RadioShack! Two sucky things that suck tgoether! This is not so bad... (Score:2) I guess what I am getting to is this - just because it is Radio Shack and their 6000+ stores does not make this a good deal for either party. RS is becomming more and more of a K-mart/TG&Y like place. You ain't gonna find the top quality stuff there, and everybody knows it. If MS wants to be associated with that image, more power to them. Re:There goes the neighborhood! (Score:1) Not die, just change (Score:1) Re:SHACK (Score:2) Re:Alternatives to RadioShack?? (Score:1) Anything that I'd buy at Radio Shack, I'd rather buy somewhere else. I only shop there if (for some reason) I need to buy inkjet cartridges and I don't feel like going all the way to the local computer store. Re:Not die, just change (Score:1) Re:okay... (Score:1) What does this really mean? (Score:1) $5 says those Radio Shack batteries stop working in my Palm Pilot, and will only work in a wince device. So will Windoze be OEM'd as ``Realistic OS''? (Score:2) Get real! Get Realistic OS! ;) Radio Shack (Score:1) The last time I walked into a Radio Shack was last year, and I was looking for some cat5 cable... 
I asked the man working the counter if they had any eternet cable, and he looked at me, puzzled, and asked "Internet cable? I don't think we have that." Ugh. The microsoft deal seemed inevitable, or at least something like it was. Radio Shack is no longer the place it used to be. It's kind of sad. Top Ten annoyances at the new MicroShack... (Score:2) 9) Sound level meter now constantly asks "Are you sure you want to use that decible setting"? 8) RC cars now automatically attempt to seek and destroy nearest DOJ agent. 7) Old Tandy computers are back, running Microsoft BobCE (tm) as ther OS. Dual processor models availaible, in the BobCE Twin model... 6) Now asked for name, address, and full list of licenced Microsoft products in house. 5) Salesmen required to wear Microsoft Bob masks to appear more friendly. 4) Required to show proof of MSCE to buy most electronics. 3) Radio Shack computers now overpriced and unreliable - the more things change, the more they stay the same. 2) New Microsoft demo of Microsoft Laser Pointer 2000 accidentially blinds entire mall security force. 1) New toy of the year - RC Paperclip. Re:The buried point... (Score:2) while(1){ system.Reboot(); } There *IS* something in it for Microsoft (Score:2) It's the connectivity, stupid! Microsoft has invested millions already into companies that provide cable modems... they have also invested heavily in DSL companies such as Northpoint, who signed a deal with Tandy/Radio Shack to market their wares in Radio Shack.. what that means is that Microsoft's fledgling DSL service can work with Northpoint's national DSL service in offering high-speed connectivity across the nation, and of course the default ISP for these "great deals" will be MSN. Radio Shack will have their own install trucks and personnel to bring DSL to the masses, probably using the newest, most consumer-friendly type of DSL which allows respectable bandwidth and telephone calls over the same line. 
In other words, Microsoft wants to dominate your desktop, your web browser, your gate way to the net, and even make a profit off of getting you connected. Contrary to what some people think, DSL is a good thing, and getting a lot better real soon. Sure there is the possibility of fast access on cable modems, assuming that all your neighbors don't want on the 'net too... but do you really want to share your bandwidth (and your "secure" network) with Billy down the street? Already, there is information showing that DSL is faster than cable modems during evening hours. Why? Because little Billy is watching that streaming porn again... Personally, I hope a lot of you are right and that this flops, but I suspect that many drones will jump on the bandwagon once the price range hits about $40 a month... Hey! They can offer three months of free service with every upgrade to Windows 2000! Whee. The idea of using a good DSL modem to deliver MSN is kinda repulsive, no?!? It's like racing your new sports car with the great paint job over gravel roads. Pretty grating... Tandy 2000: Microsoft already fucked 'em once (Score:2) The Tandy 2000 featured an 80186, which is an 8086 with built in UART and DMA controller. The Tandy 2000 also featured a 640x480 color display at a time when the CGA was standard. All in all, it was about 4 years ahead of its time. I remember a strange thing about the announcement: it was "85% IBM compatable." Huh? Who would make a "sorta compatible PC"? What software would it run reliably? Well, it would run all this great new software for a new environment called "Windows." The slight differences in hardware would be hidden by "drivers." Cool, huh? Except Microsoft didn't ship Windows 1.0 in time. When it did, it sucked. Worse yet, Microsoft decided to put Windows on the backburner in order to produce a new operating system with IBM called "OS/2." The net result is that Tandy ended up with a warehouse full of Tandy 2000's they couldn't sell. 
It put them out of the computer business pretty much permanently. The IBM PC didn't kill the TRS-80, Microsoft did. Makes perfect sense (Score:2) Microsoft sort of provides the same users experience in software. lessee (Score:2) 1) the desktop client 2) the server 3) the proprietary protocols 4) the physical stores that doesn't sound a like a recipe for choice to me... Match made in heaven... (Score:2) Y'know, Radio Shack has had so many chances to be a really good, useful store, and they have screwed it up horribly every time. They could have been a great parts repository for people into electronics, a/v, and radio, but the substandard quality of the parts and their blockheaded sales staff truly makes the Rat Shack a last resort for even the smallest purchases (yeah, I'll grit my teeth and go in there for the RCA Y-cable, coz its faster than mail order...) Likewise, Radio Shack has been around since the very beginning of the personal computer revolution - I wrote my very first program around 1980 on a TRS-80 Model III - but they've just never seemed to "get it". They could've made a killing if they'd jumped the gun selling good quality PC accessories rather than overpriced "Tandy" brand (aka Tandy crashtastic floppies for $30+ a box). And I just can't resist adding yet another rant about their policy of polling customers for name and address. My last Rat Shack experience was as follows: I needed a pair of mid-range headphones in a hurry, and RS was conviently located. Bought a pair of headphones for ~$40 US, took them home, and one channel didn't work. Went back the next day for an exchange - this time I tested them in the store. ANOTHER defective pair! At this point, I wanted my money back, but had to argue with the salesbeing for a while because it wouldn't give me a refund until I divulged my name & address. When I finally revealed my identity as "Zarathustra Rosenthorpe", the salesbeing finally relented. 
As far as the Microsoft partnership is concerned, the deal may get them a little more exposure with Random P. Consumer, almost certainly at the expense of a further tarnished reputation. I expect to see MS displays popping up in McDonalds and 7-11 any minute now... Re:Another sad day... (Score:2)
http://slashdot.org/story/99/11/11/1558232/microsoft-teaming-up-with-radioshack
Advanced Web Service Interoperability In Easy Steps

NetBeans IDE 6.1 comes with enhanced support for web services development, reflecting the state of industry-wide adopted technologies in web services and Service Oriented Architecture (SOA). NetBeans includes unique and easy-to-use tools, from visual web service design to enabling powerful technologies for security, reliability or transactions. Bundled together with ready-to-run examples, help and documentation, it provides an easy entry point for beginners in web service development as well as the broad set of features required by enterprise-class solutions and SOA.

METRO Overview

Most of the web service related features in NetBeans are built with the use of Project Metro. Project Metro is the web services stack (framework) from Sun Microsystems. The stack is integrated in GlassFish V2, a high-performance, production-quality, Java EE 5 compatible application server. The individual components of Metro can be divided into two categories:

● JAX-WS Implementation – the core web services platform
● Project Tango, also referred to as Web Services Interoperability Technology (WSIT)

JAX-WS is the core web service platform, including all the SOAP message functionality, and Project Tango adds interoperability with Microsoft .NET, reliability, security, and transactions.

Tango Terminology

- Direct Authentication – A type of authentication where the service validates credentials directly with an identity store, such as a database or directory service.
- Impersonation – The act of assuming a different identity on a temporary basis so that a different security context or set of credentials can be used to access the resource.
- Message Layer Security – Represents an approach where all the information that is related to security is encapsulated in the message. In other words, with message layer security, the credentials are passed in the message.
- Mutual Authentication – This is a form of authentication where the client authenticates the server in addition to the server authenticating the client.
- Transport Layer Security – Represents an approach where security protection is enforced by lower-level network communication protocols (such as SSL).
- Trusted Subsystem (domain) – This is a process where a trusted business identity is used to access a resource on behalf of the client. The identity could belong to a service account or it could be the identity of an application account created specifically for access to remote resources.

Get Ready For Web Services

Implement and Deploy a Web Service

In this example, we show how to develop a web service. We enhance this service with additional capabilities in later chapters. Our service will be able to receive banking orders and store them in a map (for simplicity, we'll skip databases in this example).

First, create an Enterprise Application in which we will host our web service, by choosing File -> New Project and selecting Enterprise Application. Click Next and name the application BankApplication. Leave the other values at default. Your wizard screen for Enterprise Application should look similar to Figure 1.

Note: The target server is GlassFish v2. You can have it installed with your NetBeans IDE 6.1 installation if you choose the Full or Web & J2EE download or specifically select GlassFish. If you don't have it, download GlassFish and register it in NetBeans through Tools -> Servers.

Figure 1. Creating Enterprise Application

Now create the web service itself in the EJB module. Right-click the BankApplication-ejb node and select New -> Web Service. In the wizard window, name the web service BankOrderService and place it in the package bankorder.service as shown in Figure 2.

Figure 2. Creating a Web Service in an EJB module

After clicking Finish, you should see the Visual Designer window for your web service.
It is empty because we haven't defined any operations for the service yet. Our application should be able to receive orders, and we would like to model them as a Java class with three fields for the recipient account number, the sender account number, and the actual amount being transferred. We will represent this data in a class called bankorder.data.BankOrder. Create the class as shown in Listing 1, and then use the Refactoring -> Encapsulate Fields feature in the editor to generate setters and getters for all fields.

Listing 1. BankOrder.java – class for transferring information about the banking orders between service and client

package bankorder.data;

public class BankOrder {
    private int id;
    private String receiverAccount;
    private String senderAccount;
    private double amount;
}

With the data transfer class ready, we are able to implement the web service itself. Return to the BankOrderService web service we created in the previous step, click Add Operation in the visual designer, and fill in the details as shown in Figure 3. The operation should return a String status code reflecting whether the order has been successfully submitted or not, and it will take our BankOrder data transfer object as a parameter. After you have added the operation, select the Source tab at the top of the visual designer, which will navigate you to the service source code. There, make sure the implementation of the receiveBankOrder() operation corresponds to Listing 2.

Figure 3. Creating Data Transfer Object

Listing 2.
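For reference, after the Encapsulate Fields refactoring, the data transfer class ends up roughly like the sketch below. This is illustrative only: the package declaration is omitted so it compiles standalone, and the amount field follows the three-field description given in the text.

```java
// Illustrative result of NetBeans' Encapsulate Fields refactoring on BankOrder.
public class BankOrder {
    private int id;
    private String receiverAccount;
    private String senderAccount;
    private double amount; // the transferred amount, per the description above

    public int getId() { return id; }
    public void setId(int id) { this.id = id; }
    public String getReceiverAccount() { return receiverAccount; }
    public void setReceiverAccount(String receiverAccount) { this.receiverAccount = receiverAccount; }
    public String getSenderAccount() { return senderAccount; }
    public void setSenderAccount(String senderAccount) { this.senderAccount = senderAccount; }
    public double getAmount() { return amount; }
    public void setAmount(double amount) { this.amount = amount; }
}
```

The getters and setters matter here: JAXB uses them when marshalling the object into the SOAP message.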
BankOrderService.java – Banking web service implementation

package bankorder.service;

import bankorder.data.BankOrder;
import java.util.HashMap;
import javax.jws.*;
import javax.ejb.Stateless;

@WebService()
@Stateless()
public class BankOrderService {

    public static final HashMap bankOrderStorage = new HashMap();

    @WebMethod(operationName = "receiveBankOrder")
    public String receiveBankOrder(@WebParam(name = "order") BankOrder order) {
        String status = "";
        try {
            order.setId(bankOrderStorage.size());
            bankOrderStorage.put(order.getId(), order);
            return "OK";
        } catch (Exception e) {
            status += e.getLocalizedMessage();
        }
        return "FAIL" + status;
    }
}

Implementing the web service is the final step, and we're ready to deploy the web service to the application server (GlassFish). Right-click the BankApplication node and select the Undeploy & Deploy menu item. Once the application is deployed, verify your service by invoking the BankOrderService -> Test Web Service action. After invocation, your browser window should show a page similar to Figure 4.

Figure 4. Web Service Tester Page
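Stripped of the container annotations, the bookkeeping in Listing 2 can be exercised as plain Java. The sketch below is my own, not part of the original article: the EJB and JAX-WS annotations are removed, and a minimal BankOrder is inlined so the class runs without GlassFish.

```java
import java.util.HashMap;

// Standalone sketch of the order bookkeeping from Listing 2 (no container needed).
public class BankOrderStoreSketch {

    static class BankOrder {
        int id;
        String receiverAccount;
        String senderAccount;
        double amount;
    }

    // The map plays the role of the database, as in the article.
    static final HashMap<Integer, BankOrder> storage = new HashMap<>();

    static String receiveBankOrder(BankOrder order) {
        try {
            order.id = storage.size();   // next free id
            storage.put(order.id, order);
            return "OK";
        } catch (Exception e) {
            return "FAIL" + e.getLocalizedMessage();
        }
    }

    public static void main(String[] args) {
        BankOrder order = new BankOrder();
        order.receiverAccount = "ACC-1";  // hypothetical account numbers
        order.senderAccount = "ACC-2";
        order.amount = 100.0;
        System.out.println(receiveBankOrder(order));
    }
}
```

Note that using the map's size as the next id only works because orders are never removed; a production service would use a generated key.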
http://netbeans.dzone.com/news/advanced-web-service-interoper
Find out the number of rooms in which a prize can be hidden

Suppose in a game show there are 2n rooms arranged in a circle. In one of the rooms there is a prize that the participants have to collect. The rooms are numbered 1, 2, 3, ...., n, -n, -(n - 1), ...., -1 in a clockwise manner. Each room has a door, and through that door a different room can be visited. Every door has a marking x on it, which means another room is located at a distance of x from the current room. If the value of x is positive, then the door opens to the xth room in the clockwise direction from that room. If the value of x is negative, then the room opens to the xth room in the anti-clockwise direction. We have to find out the number of rooms in which the prize can be kept so that the participants can never find it.

So, if the input is like input_array = [[4, 2]], then the output will be [2].

The input has two values: the first value is n, which is half the number of rooms, and the second value is the room number where the participants start looking for the prize. Here there are 2 x 4 = 8 rooms and the participants start looking for the prize from the 2nd room in the clockwise direction. The rooms are numbered like this in a clockwise manner: 1, 2, 3, 4, -4, -3, -2, -1. The participants will visit the rooms in this order: 2, -4, -1, 1, 3, -2, -1, 1, 3, -2, ... So rooms 4 and -3 never get visited; if the prize is hidden in one of these two rooms, the participants can't find it.

To solve this, we will follow these steps −

- Define a function prime_num_find().
This will take n

   - p_nums := a new list initialized with value 2
   - check := a byte array of size n (all zeros)
   - for value in range 3 to n, increasing by 2, do
      - if check[value] is non-zero, then go for next iteration
      - insert value at the end of p_nums
      - for i in range 3 * value to n, updating in each step by 2 * value, do
         - check[i] := 1
   - return p_nums
- Define a function factor_finder(). This will take p
   - p_nums := prime_num_find(45000)
   - f_nums := a new map
   - for each value in p_nums, do
      - if value * value > p, then come out from the loop
      - while p mod value is same as 0, do
         - p := floor value of (p / value)
         - if value is in f_nums, then f_nums[value] := f_nums[value] + 1, otherwise f_nums[value] := 1
   - if p > 1, then f_nums[p] := 1
   - return f_nums
- Define a function euler_func(). This will take p
   - f_nums := factor_finder(p)
   - t_value := 1
   - for each value in f_nums, do
      - t_value := t_value * ((value - 1) * value^(f_nums[value] - 1))
   - return t_value
- From the main function/method, do the following −
   - output := a new list
   - for each item in input_array, do
      - p := item[0], q := item[1]
      - r := 2 * p + 1
      - r := floor value of (r / gcd of (r, q mod r))
      - t_value := euler_func(r)
      - for each value in factor_finder(t_value), do
         - while t_value mod value is same as 0 and 2^(t_value / value) mod r is same as 1, do
            - t_value := floor value of (t_value / value)
      - insert 2 * p - t_value at the end of output
   - return output

Example

Let us see the following implementation to get better understanding −

import math

def prime_num_find(n):
   p_nums = [2]
   check = bytearray(n)
   for value in range(3, n, 2):
      if check[value]:
         continue
      p_nums.append(value)
      for i in range(3 * value, n, 2 * value):
         check[i] = 1
   return p_nums

def factor_finder(p):
   p_nums = prime_num_find(45000)
   f_nums = {}
   for value in p_nums:
      if value * value > p:
         break
      while p % value == 0:
         p //= value
         f_nums[value] = f_nums.get(value, 0) + 1
   if p > 1:
      f_nums[p] = 1
   return f_nums

def euler_func(p):
   f_nums = factor_finder(p)
   t_value = 1
   for value in f_nums:
      t_value *= (value - 1) * value ** (f_nums[value] - 1)
   return t_value

def solve(input_array):
   output = []
   for item in input_array:
      p, q = item[0], item[1]
      r = 2 * p + 1
      r //= math.gcd(r, q % r)
      t_value = euler_func(r)
      for value in factor_finder(t_value):
         while t_value % value == 0 and pow(2, t_value // value, r) == 1:
            t_value //= value
      output.append(2 * p - t_value)
   return output

print(solve([[4, 2]]))

Input

[[4, 2]]

Output

[2]
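The closed-form solution leans on Euler's totient and the multiplicative order of 2 modulo r: after the inner while loop, t_value has been reduced from phi(r) down to the order of 2 mod r. As a sanity check (my own sketch, not part of the original solution), the sample input p = 4, q = 2 gives r = 9, phi(9) = 6, and the order of 2 mod 9 is 6, so the answer is 2*4 - 6 = 2, matching the expected output:

```python
from math import gcd

def phi(n):
    # Naive Euler totient: count integers in [1, n] coprime to n.
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def order_of_two(r):
    # Smallest t > 0 with 2**t = 1 (mod r); always divides phi(r) for odd r.
    t, v = 1, 2 % r
    while v != 1:
        v = (v * 2) % r
        t += 1
    return t

p, q = 4, 2
r = 2 * p + 1
r //= gcd(r, q % r)   # r = 9 for the sample input
print(r, phi(r), order_of_two(r), 2 * p - order_of_two(r))
```

Running this prints 9 6 6 2, agreeing with solve([[4, 2]]) == [2].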
https://www.tutorialspoint.com/python-program-to-find-out-the-number-of-rooms-in-which-a-prize-can-be-hidden
This is my script:

using UnityEngine;
using System.Collections;

public class Score : MonoBehaviour {

    public GUISkin ScoreSkin;
    int playerScore = 0;
    int enemyScore = 0;

    public void IncreaseScore (int PlayerType) {
        if (PlayerType == 1) {
            playerScore++;
        } else if (PlayerType == 2) {
            enemyScore++;
        }
    }

    void OnGUI() {
        if (GUI.skin != ScoreSkin) {
            GUI.skin = ScoreSkin;
        }
        GUI.Label (new Rect (20, 10, 300, 30), "Player Score: " + playerScore.ToString ());
        GUI.Label (new Rect (20, 35, 300, 30), "Enemy Score: " + enemyScore.ToString ());
    }

    function Update () {
        if (Input.GetKeyDown (KeyCode.Return)) {
            Application.LoadLevel (0);
        }
    }
}

I'm trying to use 'function Update ()' to restart the game (it's a simple Pong game) but Unity gives me this error. I've never programmed before in my life, so I have no idea what to do; I've followed tutorials up until this point to make the game. It seems to work fine using 'void Update ()' instead, but that's probably not right, not that I know why. Can anyone explain what I did wrong, why it's wrong, and how to fix it? Ty, from a noob.

Answer by ejpaari · Apr 09, 2014 at 10:05 AM

Use void Update() in C#, because methods require a return type and 'function' is not a known type. function Update() is used in Javascript. I would also suggest doing yourself a favor and learning some programming first.

Answer by HarshadK · Apr 09, 2014 at 10:22 AM
You specified its return type as 'void'. When you declare functions in Jaavascript, you prepend it with word 'function' to let Unity compiler know it is a function. As you are writing your script in c# you can not use keyword 'function' to specify that it is a function as it is not required in c#. That's why specifying 'void' before Update works in your script and not 'function'. Its like you are trying to put your shoes in your fridge. That's why there is shoe rack for it, my friend! And before you dive into coding directly, I suggest you to read this section for Unity Scripting and watch these videos for Unity Scripting Tutorials.. Incremental game need help 1 Answer Change Automatic to Semi-Automatic 1 Answer Many issues 0 Answers Error : UnityEngine.Rendering.HighDefinition.HDShadowManager.PrepareGPUShadowDatas 0 Answers Can't get Math Functions to Work in Compute Shader 0 Answers EnterpriseSocial Q&A
import { ... } from 'gsap' vs from 'gsap/all'

Friebel posted a topic in GSAP

…'Great', because the project structure is built that way, it seems. Both ways work fine, having the same results on screen, so I just looked at the output bundle size when I build the same project using imports from 'gsap' vs imports from 'gsap/all', and to my surprise there is a big difference in file size:

- Importing from 'gsap': total project bundle here is 547 kB
- Importing from 'gsap/all': total project bundle here is 612 kB

So importing from 'gsap/all' results in a 65 kB larger bundle! By the way, these are the import lines I'm using:

```javascript
import { TweenLite, TimelineMax, Linear, Back, Sine } from 'gsap/all';
// vs
import { TweenLite, TimelineMax, Linear, Back, Sine } from 'gsap';
```

I thought that it would be the other way around, because 'gsap/all' was advised in the docs. But with this result I'd say it's better to import from 'gsap'. @GreenSock Am I missing something here? What's the reason there is an extra option to import from 'gsap/all' instead of just 'gsap'? Thanks in advance!
Thx! Very useful.

Thanks for this, it made for an easy solution on an issue I had.

Thank you very much for the post, simple and concise, very good really; it has helped me a lot. THANKS. Greetings from Cuba.

Very good article. Thanks!!! Cheers.

Hi, I have created a page in WPF which has a button on it; on clicking that button a window is opened. I want to implement functionality so that another window is not created on the same button click until the first window is closed. Can anybody help me?

Can you make the window that's opened modal? Use ShowDialog instead of Show.

Hi, thanks for your help. I tried that, but it is not working. I mean, when I click on the button on the page, windows are generated on every click, without checking whether the previously opened window is closed or not. This is what I tried:

```csharp
private void button_Click(object sender, RoutedEventArgs e)
{
    Window w1 = new Window1();
    w1.ShowDialog();
}
```

Thanks! Smoothly explained.

Awesome tutorial! Thanks.

Hi, I am not quite sure what causes "The calling thread cannot access this object because a different thread owns it." on the following snippet:

```csharp
_form.Dispatcher.BeginInvoke(DispatcherPriority.Normal, new Action(delegate
{
    _form.ShowDialog();
    _form = null;
}));
```

On first load of the app it seems the _form.ShowDialog() is working. But when I hide it and then fire up the method that contains the code above, it raises that error. Hope you can help me. Thanks.

Thanks! It helped me here ;)

Good article, for sure.

Great article; helped me a lot to understand. Thx.

Good article, it really helped me, thanks. We also need a VB.NET article.

This is simple. Let's say you have ComboBox1 in your application. When you want to add new items to it, simply use:

```vbnet
Me.ComboBox1.Dispatcher.Invoke(New Action(Sub()
    ' Add items here
End Sub))
```

Hi there, I have a WPF class in which I am hosting Windows Forms controls. I am facing a problem when I call a method in the WPF class from the Windows Forms control to create a tab: I get a message saying "The calling thread cannot access this object because a different thread owns it". Can you please tell me what the problem might be? Thanks.

This is what I was looking for (didn't read through everything, but just the first sample was what I needed). Many thanks. I think I'm going to find a solution to the mess I'm trapped in.

Thank you: I've described it in detail in a post on my blog.

Thank you! .NET Follower

Hi, I want to know if there is any difference between WPF's Dispatcher.Invoke and WinForms' Control.Invoke (other than the syntactical difference)?

Excellent article... really helpful.

Really helpful! Thank you very much :)

Great article, well done. This solved my problem.

Hi, I am new to WPF and facing one difficulty. I have a complete GUI with lots of menus and buttons; heavy data is loaded and transferred to the database. There is one button on the form, "Export to PDF", which performs actions like converting text to a byte array and then appending it to a PDF. My problem is that when the user clicks the button the entire application becomes unresponsive. Is there any way to do this in the background without making the entire application idle? Waiting for your reply.

Yes, use the BackgroundWorker class.

This article is really awesome! Thanks a lot, man, you saved my time.

Thank you very much, this helps me a lot.

Not sure I get this. This seems to block my main UI thread for 10 seconds. Should not the Thread.Sleep be executing on another thread?

```csharp
System.Threading.Thread thread = new System.Threading.Thread(
    new System.Threading.ThreadStart(delegate()
    {
        System.Windows.Threading.DispatcherOperation dispatcherOp =
            this.Dispatcher.BeginInvoke(
                System.Windows.Threading.DispatcherPriority.Normal,
                new Action(delegate()
                {
                    Thread.Sleep(10000);
                }));
    }));
thread.Start();
```

The Thread.Sleep is not being executed on the new thread. Since you've invoked it onto the dispatcher, the dispatcher is the one doing the work. Since the dispatcher is also responsible for drawing the UI, doing this will block the UI. BeginInvoke, underneath, does the same thing as Invoke; the difference is that the calling code doesn't wait for the dispatcher to actually complete the work.

Thanks for the response. I am using Thread.Sleep just as a sample. How could I execute code that can access the UI without blocking the main thread? Do I need to create a BackgroundWorker that calls Invoke on the window?

A thread will work fine. You just have to do the bulk of the work directly in the thread and only invoke to update the GUI. .NET also provides a BackgroundWorker class that might help you out.

I have a window that takes quite a while to load. I was hoping to just toss the instantiation/creation off to another thread and not have the UI block. Is that possible?

I'm no expert, but I wanted to call a multiple-parameter delegate. I created the TextBox object directly in the XAML file and the following code works for .NET 3.5.

.NET has several multi-parameter delegates that can be used instead of declaring your own. Check out Action. There are Action delegates that accept anywhere from 1 to 10 parameters.

It helps me. Thanks.

Thanks man, but I have a question! I am trying to keep refreshing my TextBlock independently of the main thread. I am trying to do it this way. The problem is that it is not refreshing! I would appreciate any help, thanks :)

You've got a tight loop here with no sleep. You're basically consuming the entire CPU, leaving nothing to actually redraw the screen. Try putting a sleep after the invoke and see what happens. Other than that, the code looks good.

Hi, I am facing a similar problem to the one above. I don't have a tight loop; I am just displaying the message once. But still my UI is not refreshing.

Thanks a lot, it worked like a charm :)

This URL refers to the BackgroundWorker class, part of the ComponentModel namespace. This is a stable solution to the problem of refreshing WPF controls while running and cancelling threads. With regards, Richard

Thanks bro ;)

Nice work, thanks!

Excellent tutorial; I appreciate the time you took to explain that. Just a couple of comments: a "using System.Windows.Threading" might have made the code a lot easier to read, and anonymous methods tidy up the code but perhaps make it a little less readable. But perhaps that's just me. Again, thanks!

Great explanation of what is, for me, a difficult subject to grasp. Thank you for creating this tutorial. I was able to resolve my problem and look like a pro thanks to you. Very well done indeed!!!

Nice one! Thanks.

Very good tutorial, thanks! I have a problem: I open a WPF window from an existing WinForms project, and the code loads database data (with progress bar updates), but loading freezes the appearance of the splash window (2000 ms). So I tried BeginInvoke and thread creation to show the splash screen in another thread, but it doesn't work. It seems that the "main" thread must be free to animate WPF. I can't move the database-loading code into another thread because there's WinForms interaction, and even with an "STA thread" it raises the error "A control in a thread cannot be parent of a control in another thread". If you have an idea... Thanks! :-)

Hi Emmanuel, I am facing a similar problem. Did you get any help with your problem? If so, please share it with me. Thanks for your help.

Hi, you should load the database data on a background worker thread and report the progress to the GUI thread. You can use a Dispatcher.Invoke to call the 'UpdateProgressBarMethod'. Greetz, JvanLangen

Very good. Thanks.

I keep getting an error: Error 3: Using the generic type 'System.Action' requires '1' type arguments. C:\WPF1\Window1.xaml.cs 281 23 WPF1
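Several of the comments above circle around the same idea: heavy work belongs on a background thread, and only the UI update should be marshalled back through the Dispatcher. As a language-neutral sketch (Python here, not WPF; the queue below stands in for the Dispatcher's message queue and is an invention for illustration), the pattern looks like this:

```python
import threading
import queue

ui_queue = queue.Queue()  # stands in for the Dispatcher's work queue
results = []              # stands in for UI state owned by the "UI" thread

def background_work():
    # The heavy work runs off the "UI" thread...
    total = sum(range(1000))
    # ...and only the UI update is posted back, like Dispatcher.BeginInvoke.
    ui_queue.put(lambda: results.append('total=%d' % total))

worker = threading.Thread(target=background_work)
worker.start()
worker.join()

# The "UI" thread drains its queue and runs the posted callbacks itself.
while not ui_queue.empty():
    ui_queue.get()()

print(results)  # → ['total=499500']
```

The key point, matching the answers above, is that the callback posted to the queue does almost nothing; the slow computation has already happened on the worker thread.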
Ember Concurrency and Liquid Fire team up to help us load less data and improve the initial render of a critical page.

Our goal is to speed up the Video page of EmberMap. Right now, for a given video, the entire series for that video, including all of that series' clips and those clips' encodes, is loaded just to render the video page. We can improve this data-loading story.

First, we'll slim down the topic route's model hook to just load the parent series. No more requesting the entire series-clips-encodes graph.

```diff
  model() {
    return this.get('store')
      .loadAll('series', {
-       include: 'clips.encodes',
        filter: { slug }
      })
      .then(series => series.get('firstObject'));
  }
```

Now that our parent topic route doesn't load all the data, we can fine-tune the data loading for the video route. Instead of looking up the video by slug in Ember Data's cache, we return a new query:

```diff
- model({ video_slug }) {
-   return this.store.peekAll('clip').findBy('slug', video_slug);
+ model(params) {
+   let slug = params.video_slug;
+   let filter = { slug };
+
+   return this.get('store')
+     .loadAll('clip', {
+       filter,
+       include: 'encodes,series'
+     })
+     .then(clips => clips.get('firstObject'));
  },
```

This way, the only data that blocks our Video page's initial render is the video, its series, and its encodes. Now our Video page is rendering faster, but it doesn't have the data for the series sidebar.
Let's turn that series-playlist component into a data-loading component, so it can lazily fetch its data after the initial render. We'll use an Ember Concurrency task.

```javascript
import Component from '@ember/component';
import { task } from 'ember-concurrency';
import { inject as service } from '@ember/service';

export default Component.extend({
  series: null,
  activeClip: null,

  store: service(),

  loadSeries: task(function*() {
    let slug = this.get('series.slug');
    let filter = { slug };

    yield this.get('store')
      .loadAll('series', { filter, include: 'clips' })
      .then(posts => posts.get('firstObject'));
  }).on('didInsertElement')
});
```

Now we can use loadSeries.isRunning combined with a liquid-if to subtly fade in the sidebar content when it's ready:

```handlebars
{{#liquid-if loadSeries.isRunning}}
  {{#ui-p}}
    {{x-autoplay}}
  {{/ui-p}}
  {{video-list videos=series.clips activeVideo=activeClip}}
{{/liquid-if}}
```

Finally, we discuss how wrapping all this in an {{#if loadSeries.performCount}} prevents some flashing during the initial render, due to the way isRunning behaves when an Ember Concurrency task is kicked off on didInsertElement.
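To make the isRunning and performCount discussion concrete, here is a toy sketch (Python, not ember-concurrency; the Task class and its property names are invented for illustration) of a task object that tracks whether it is running and how many times it has been performed:

```python
class Task:
    """Toy stand-in for an ember-concurrency task: tracks running state
    and how many times it has been performed. Invented for illustration."""

    def __init__(self, fn):
        self.fn = fn
        self.is_running = False
        self.perform_count = 0

    def perform(self, *args):
        self.is_running = True
        self.perform_count += 1
        try:
            return self.fn(*args)   # the "work" (a network fetch in the article)
        finally:
            self.is_running = False

# Pretend the work is fetching a series with its clips.
load_series = Task(lambda: {'clips': ['intro', 'setup']})
series = load_series.perform()
print(load_series.perform_count, load_series.is_running, len(series['clips']))
# → 1 False 2
```

A template that checks performCount before trusting isRunning, as described above, simply avoids showing the "finished" state before the task has ever been started.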
Sample Python code reproduced here.

```python
import urllib2, re

headers = {'User-agent': 'I promise I\'m not doing this a lot',}
req = urllib2.Request("", None, headers)
website = urllib2.urlopen(req)
html = website.read()
links = re.findall('"((http|ftp)s?://.*?)"', html)
for i in links:
    if '' in i[0]:
        print i[0]
```

My rewrite using PHP, using file_get_contents and stream_context_create, something new for me.

```php
<?php
$options['http']['header'] = "User-agent: I promise I'm not doing this a lot\r\n";
$context = stream_context_create($options);
$url = "";
$html = file_get_contents($url, true, $context);
preg_match_all('/"((http|ftp)s?:\/\/.*?)"/i', $html, $links);
foreach ($links[1] as $link) {
    if (strstr($link, ''))
        echo $link, "\n";
}
```

Comparison of both code snippets:

- The regex is simpler and more readable in Python. You don't need to escape certain characters (the forward slash /, for example) as you do in PHP.
- The API is simpler and makes more sense: results are returned directly, whereas PHP fills an output parameter and leaves you with two sets of arrays.
- file_get_contents() is awesome, and dangerous as well, since it reads both local and remote files. Nothing quite equivalent is built into Python.
- Finding and matching strings is far more readable in Python.
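As an aside (not from the original post), the same regex approach can be exercised offline in modern Python 3 against an in-memory HTML string; the sample markup and the example.com filter below are made up for illustration:

```python
import re

# A small in-memory page so the example runs without any network access.
html = '''
<a href="https://example.com/docs">docs</a>
<img src="http://example.com/logo.png">
<a href="ftp://files.example.org/pub">files</a>
'''

# Same pattern as the post: quoted http/https/ftp/ftps URLs.
links = [m[0] for m in re.findall(r'"((http|ftp)s?://.*?)"', html)]

# Keep only links pointing at example.com, mirroring the original's filter.
filtered = [link for link in links if 'example.com' in link]
print(filtered)  # → ['https://example.com/docs', 'http://example.com/logo.png']
```

Note that re.findall returns a tuple per match when the pattern has multiple groups, which is why the snippet takes m[0]; the original Python 2 code's i[0] does the same thing.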
There are many options available in Flutter which you can use to provide space and make the UI attractive. So, in this article, we will see how to add space between widgets in Flutter.

How to Add Space Between Widgets in Flutter?

If you use Row and Column for arranging widgets, then by default limited options are available for alignment. There are many options available for spacing widgets, like Padding, Spacer, FractionallySizedBox, SizedBox, Expanded, Flexible, etc. Here, we'll learn about SizedBox, as it is easier to implement, provides more flexibility in alignment, and is also easier to understand.

A SizedBox is basically an empty box if no constraints are provided. By default, it can become as big as its parent widget allows, but you can set its height and width according to your needs.

Constructor:

```dart
const SizedBox({
  Key key,
  double width,
  double height,
  Widget child
})
```

Below is a description of the above-mentioned arguments:

- Key key: This argument is of type Key. A key is basically an identifier for widgets. A unique key can be provided to widgets to identify them.
- double width: This argument is of type double. You can provide a double value as the width to be applied to the child.
- double height: This argument is also of type double. The height that is to be applied to the child is passed here as a double value.
- Widget child: The widget which is below this widget in the tree is passed here as the child, and the above-mentioned constraints are automatically applied to it.

It is not compulsory to provide a child widget to SizedBox. For instance, if you have two Card widgets and you want to give space between them, you can use SizedBox: add a SizedBox between those cards and pass the required height and width values.

Note: If you don't provide a child widget to SizedBox and height and width are also null, then SizedBox will try to be as big as its parent widget allows.
On the other hand, if the child widget is provided but height and width are null, then SizedBox will try to match its child's size.

Example: With SizedBox

```dart
import 'package:flutter/material.dart';

void main() => runApp(MyApp());

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: Scaffold(
        appBar: AppBar(
          title: const Text('Flutter Agency'),
          backgroundColor: Colors.green,
        ),
        body: Center(
          child: Column(
            children: const [
              SizedBox(
                height: 20,
              ),
              Card(
                elevation: 10,
                child: Padding(
                  padding: EdgeInsets.all(25),
                  child: Text(
                    'Child widget 1',
                    style: TextStyle(color: Colors.green),
                  ),
                ),
              ),
              SizedBox( // Use of SizedBox
                height: 30,
              ),
              Card(
                elevation: 10,
                child: Padding(
                  padding: EdgeInsets.all(25),
                  child: Text(
                    'Child widget 2',
                    style: TextStyle(color: Colors.green),
                  ),
                ),
              ),
            ],
          ),
        ),
      ),
    );
  }
}
```

Output: the two cards render with a 30-pixel gap between them.

Here, in the above example, we have made two cards. By default, we can't add custom space between them. So, in order to add the required amount of space, we've used SizedBox with the custom height that is required.

Conclusion:

Thanks for being with us on a Flutter journey! So, in this article, we have seen how to add space between widgets.
Rapid Front-End Prototyping With WordPress

Further Reading on SmashingMag:

- Optimizing Your Design For Rapid Prototype Testing
- Choosing The Right Prototyping Tool
- Content Prototyping In Responsive Web Design
- Secrets Of High-Traffic WordPress Blogs

About Prototyping

"[A] prototype is an early sample, model, or release of a product built to test a concept or process or to act as a thing to be replicated or learned from." (Wikipedia)

This sentence neatly sums up the pros and cons of a prototype. The advantage of creating a prototype is that it lets you test an idea and learn from the process. It allows you to make a reasonable assumption about the feasibility of your project before you put in hundreds of hours of work. One of the downsides is that prototypes are made so that we may learn from them; often, designers and coders look on this as wasted time. Why make a prototype…

Using front-end frameworks like Bootstrap or Foundation can increase your fidelity somewhat, but you'd need to write a good amount of CSS to make it your own. The real trick is managing the time spent on making such prototypes, which is where WordPress comes in.

Which One To Use

What I recommend is getting a good pre-made HTML admin template…

A Complete Example

When I started this article, I wanted to provide a complete example of how I use WordPress to prototype. I ended up recording a video of the process and writing about it in a bit more detail. The video below will walk you through my process of creating a prototype of the Twitter front page. Following the video, I'll go into even more detail, giving you some instruction about setting up test servers, using the WordPress menu system to populate menus, and much more.

An In-Depth Guide To A Prototyping Setup

Step 1: Set Up A Server

This sounds really difficult but don't worry, it's all very simple.
We'll use free tools and copy-pastable code to get things done! First of all, you'll need to download and install VirtualBox and Vagrant. In each case, simply select your OS and install the downloaded file.

Once done, create a folder anywhere on your computer to hold your website-to-be. Let's call this prototyper; I'll put it in my home directory on Mac OS X. This folder will contain all the data for our website, WordPress installation and all.

Open the terminal or the command prompt and navigate to the folder you just created. This can be done with the cd command. In my case, I would type cd ~/prototyper. On Windows you may need to use something like cd C:\users\Username\prototyper.

To set up a Vagrant server you will need a Vagrantfile and an optional setup.sh file. Never fear, I've created both for you. Download this GitHub Gist and extract it. Copy the contained files into the folder you created previously.

Back in your command prompt or terminal, type vagrant up. This will set up the server for you. The first time you do this it may take 10–20 minutes: the box needs to be downloaded, configured and run. Any time you do this again, it will take no more than 15–20 seconds.

Once done, you should be able to load the server in your browser and see it up and running. We won't go into configuration details here, but you can easily create multiple sites with one Vagrant server, load up your site using prototyper.local instead of an IP address, and much more!

Step 2: Install WordPress

This one isn't difficult. Now that you have a server up and running you should be able to follow the Famous 5-Minute Install from WordPress. Before you can install WordPress you need to have a database created for it; let's do that now.

Back in the terminal, type vagrant ssh. Once that command has run you are inside your virtual machine, ready to perform some tasks. Type mysql -uroot -p and type root when prompted for the password.
You should now be logged into your MySQL console. Type CREATE DATABASE prototyper; and press Enter. This will create a database named "prototyper" for your WP installation. Don't forget that the username for your MySQL server is "root" and so is the password.

Continue by going to WordPress.org and downloading the latest WordPress version. Extract the contents of the archive and go into the created wordpress directory. Copy all files and paste them into the html directory which was created in your server's folder when you first ran vagrant up. If you created the directory at ~/prototyper, the location of the html folder should be ~/prototyper/html.

You now have everything you need and you won't need to muck about in the terminal any more, I promise! Head on over to 192.168.55.55 and follow the on-screen instructions for the WordPress install. Don't forget that the database username and password are both "root".

Step 3: Creating A Theme

We'll be converting an HTML admin template to a WordPress theme, but let's begin by creating an empty theme first. In your themes directory (wp-content/themes/) create a folder named prototyper. In this new directory create two files: index.php and style.css. In the style sheet, paste the following (feel free to modify as you see fit):

```css
/*
Theme Name: Prototyper
Author: Daniel Pataki
Author URI:
Description: This theme was made for quick prototyping using the AdminLTE HTML admin theme from Almsaeed Studio.
Version: 1.0
License: GNU General Public License v2 or later
License URI:

This theme, like WordPress, is licensed under the GPL.
Use it to make something cool, have fun, and share what you've learned with others.
*/
```

Let's make sure that this file is loaded on each page. We can do this by creating a functions.php file and enqueueing the style sheet. Don't worry if you're not familiar with enqueueing just yet; it's an easily copy-pastable example.
```php
<?php
function prototyper_styles() {
    wp_enqueue_style( 'prototyper', get_stylesheet_uri() );
}
add_action( 'wp_enqueue_scripts', 'prototyper_styles' );
```

At this stage you should be able to go to the Appearance section in WordPress and switch to this theme; do so now. Since we haven't done any coding, the front end of your site should be completely white. This is perfectly OK; we'll fill it up with content soon.

Step 4: Converting An Admin Template To A WordPress Theme

This is the most complex step because each admin template is different. In general, we'll be following this pattern:

- Extract the theme header and footer files
- Replace local resource references with ones that will work

I will be performing these steps for the AdminLTE HTML template specifically, but the logic would be the same for any other admin template out there.

Extracting The Theme Header And Footer

First of all, download the admin template and extract it. Move all files into the WordPress theme. Delete our empty index.php file, rename the template's index.html file to index.php, and open it up in a text editor. Our goal now is to extract the header and the footer files. Practically, this means that we want to separate out content which will be loaded on all pages. This includes the header, the footer and the left-hand navigation menu.

This is where a well-coded and properly documented admin theme comes in handy. Since I've been doing this for a while, I have a feeling that the top bar and the left navigation are at the top of the index file. Then there's everything in the main area, followed by the footer.

To separate off the code which is used everywhere at the top of the file, we need to find where the breadcrumb is. I did a quick search for "breadcrumb" in the file and found the section which handles the display of the main section. It starts like this:

```html
<!-- Right side column. Contains the navbar and content of the page -->
<aside class="right-side">
```

Create a header.php file and copy-paste everything in index.php up to and including the code above. Make sure to remove this from the index file so it is only in the header file. Then at the very top of the index file type the following PHP code:

```php
<?php get_header() ?>
```

It's time to do the same thing with the footer. In our header file we open an element which has a class of right-side. If the admin theme is well documented, we may find the closing element for it by searching for this class. If you perform this search you should find the end tag. Copy everything after and including this end tag and paste it into a new file named footer.php. Make sure to delete the copied code from the index file, and at the very end of the index file paste this code:

```php
<?php get_footer() ?>
```

Replace Local Resource References

If you load the theme now, it displays a lot of things but has no styling or JavaScript applied. This is because all our references point to something like css/bootstrap.min.css. We need to make sure that they point to the theme's full URL instead. We can start using WordPress functions now to make this all better.

Use your text editor to replace all instances of href=" with href="<?php echo get_template_directory_uri() ?>/. This will make sure all references to style sheets work. Now replace all instances of src=" with src="<?php echo get_template_directory_uri() ?>/. You should do this in header.php and index.php, but not footer.php.

At this stage you should be able to load the website at 192.168.55.55 and see the correct layout. Some boxes which rely on JavaScript won't work just yet, but we'll fix that in a moment. The footer file has two JavaScript scripts which are loaded from a CDN, so they should not be replaced. The first one is jQuery, which is the very first script loaded. The second is a script called Raphaël.
Replace all instances of src=" with src="<?php echo get_template_directory_uri() ?>/ in the footer file, except for these two resources.

One last thing to do: add the header and footer hooks. These are used by WordPress to load scripts and styles (among other things); they are a necessary component of every theme. Add <?php wp_head() ?> just before the closing </head> tag in the header file and <?php wp_footer() ?> right before the closing </body> tag in the footer.

If you've done everything correctly you should now see all the boxes intact, with graphs and maps galore. Congratulations, you've just set up your environment! We'll continue to make this easily usable, but the basics are now there.

Pages And Navigation

The interface should be fully navigable, even though we only have a single index file. If you click on "Widgets" in the sidebar you'll see all the elements on the widgets page. Note that this is not being served by our theme: it simply points to the widgets.html file within our theme's folder structure; it is a static page.

Managing Pages

We'll need to create pages that are served by WordPress. I recommend leveraging the template hierarchy to create files for specific pages. If you create a page named "Weight Log" it will most likely have the slug weight-log. This means you can create a page-weight-log.php file which will handle the output for that page.

For every page in your prototype you will need to make a WordPress page. There is some overhead in this, but I find it acceptable. If you need to use page variants you can always use query parameters in the URL.

Navigation

The next thing to think about is the left-hand menu. This can be a bit tricky, especially if you want to display those nice icons and active elements as well. There are three ways to go about this. Hard-coding the menu is easiest but less flexible, and dynamically detecting the current menu is a bit more difficult.
We can register a menu and use the menu builder, but this will only allow for single-level menus. We can also use our own function to build a menu, which allows for a complete implementation.

Creating Our Files

We'll be looking at all three methods mentioned above and we'll create almost the same menu structure for each. Let's create the files which will handle our pages now:

- Weight Diary (page-weight-diary.php)
- Your Stats (page-your-stats.php)
- Yearly (page-yearly.php)
- Monthly (page-monthly.php)
- Weekly (page-weekly.php)
- Settings (page-settings.php)

Use the Pages section to create these pages in WordPress, then create a file in the theme for each. The files should contain the following code:

```php
<?php get_header() ?>

<!-- Content Header (Page header) -->
<section class="content-header">
    <h1>
        Dashboard
        <small>Control panel</small>
    </h1>
    <ol class="breadcrumb">
        <li><a href="#"><i class="fa fa-dashboard"></i> Home</a></li>
        <li class="active">Dashboard</li>
    </ol>
</section>

<!-- Main content -->
<section class="content">
    Content goes here...
</section>

<?php get_footer() ?>
```

I basically copy-pasted this from our index file. The code above pulls in everything from the header and then outputs a title section, which you can rewrite to be relevant to the current page. The main content section is then opened and closed; your content for this page should go here. Finally, the content of the footer is retrieved.

Hard-coding Menus

To hard-code our menus we'll take a look at how the admin template outputs its menus and modify it to suit our needs. There are only three things we need to rewrite: page titles, URLs and icons.

Page titles should be easy. Remember that since we are hard-coding things, the page title you write here does not have to be the same as the title you created in the WordPress back end.

URLs should point to the actual WordPress page.
You can copy-paste these from the back end or, even better, use the home_url() function and append the page slug to it, as I have done in the example below. (Note that get_template_directory_uri() points at the theme folder, not at your pages, so it is not the right function for page links.)

Icons use Font Awesome, a very popular free icon font. You'll need to replace the fa-dashboard-style classes with the ones you want to use. Visit the link above for a cheatsheet of all the icons and their classes.

```php
<ul class="sidebar-menu">
    <li class="active">
        <a href="<?php echo home_url() ?>">
            <i class="fa fa-dashboard"></i> <span>Dashboard</span>
        </a>
    </li>
    <li>
        <a href="<?php echo home_url( '/weight-diary/' ) ?>">
            <i class="fa fa-book"></i> <span>Weight Diary</span>
        </a>
    </li>
    <li class="treeview">
        <a href="<?php echo home_url( '/your-stats/' ) ?>">
            <i class="fa fa-bar-chart-o"></i> <span>Your Stats</span>
            <i class="fa fa-angle-left pull-right"></i>
        </a>
        <ul class="treeview-menu">
            <li><a href="<?php echo home_url( '/your-stats/yearly/' ) ?>"><i class="fa fa-angle-double-right"></i> Yearly</a></li>
            <li><a href="<?php echo home_url( '/your-stats/monthly/' ) ?>"><i class="fa fa-angle-double-right"></i> Monthly</a></li>
            <li><a href="<?php echo home_url( '/your-stats/weekly/' ) ?>"><i class="fa fa-angle-double-right"></i> Weekly</a></li>
        </ul>
    </li>
    <li>
        <a href="<?php echo home_url( '/settings/' ) ?>">
            <i class="fa fa-cog"></i> <span>Settings</span>
        </a>
    </li>
</ul>
```

Find the sidebar menu in your theme's header.php file and replace it with the menu above. This will give us the menu structure we need. The pages we link to don't exist yet, but we can go ahead and create them in the admin.
Here's the full list:

- Dashboard (set as the front page in Settings → Reading)
- Weight Diary
- Your Stats
- Yearly (child of Your Stats)
- Monthly (child of Your Stats)
- Weekly (child of Your Stats)
- Settings

Registering WordPress Menus

When theme authors create themes they usually define the menu locations and allow users to create their own menus in them. We will leverage this functionality to automatically output our menus. Open your theme's functions file and add the following code:

register_nav_menu( 'left_menu', 'Left hand navigation menu' );

This tells WordPress to display the Appearance → Menus section in the admin and allow the user to create a menu for the left_menu location. You can go there in the admin and create your menu now. Note that owing to limitations on how WordPress outputs menus, we won't be able to create multi-level menus with this method, at least not just yet! The next step is to tell the theme itself where this menu should be displayed. Paste the following code just above the existing menu in the header file.

<?php wp_nav_menu( array(
    'theme_location' => 'left_menu',
    'container'      => false,
    'menu_class'     => 'sidebar-menu',
    'menu_id'        => 'sidebar-menu'
)) ?>

The parameters I've added make sure the correct location is used, the menu doesn't have a container, and that the class and ID used for the menu is sidebar-menu. This class name is crucial to mimic the menu style of our admin template as completely as we can. If you have your menu set up (don't forget to assign it to the location using the checkbox at the bottom) it should show up. At this point you can delete the large menu that shipped with the template. One thing to note is the lack of icons. We can fix this with some CSS!

#sidebar-menu > li:nth-of-type(1) > a::before {
    content: "\f0e4";
    margin-right: 7px;
    font-family: FontAwesome;
}

The code above should be placed in style.css. It designates the icon used for the first item in the menu.
To assign icons to other menu items, just increase the number in the parentheses. The icon used is dictated by the string value of the content property. To figure out what this is, take a look at the list of available icons and click on the one you need. The Unicode value is shown in the header of the icon's page.

Building Our Own Menu

The menu structure we have is similar enough to the one the admin template uses to work, but is not the same. As a result, icons had to be added in a different way and subpages won't work properly. We can fix this by registering and using menus as we did above, but using our own Walker class to output the menu. Let's extend the Walker class with our own. Here's the code; explanation ensues!

class Prototyper_Walker extends Walker_Nav_Menu {

    function start_el( &$output, $item, $depth = 0, $args = array(), $id = 0 ) {
        $classes = array();

        if( $item->object_id == get_the_ID() || ( $item->url == site_url() . '/' AND is_home() ) ) {
            $classes[] = 'active';
        }

        if( in_array( 'menu-item-has-children', $item->classes ) ) {
            $classes[] = 'treeview';
        }

        $icon  = empty( $item->attr_title ) ? '' : '<i class="' . $item->attr_title . '"></i> ';
        $arrow = ( in_array( 'menu-item-has-children', $item->classes ) ) ? '<i class="fa pull-right fa-angle-left"></i>' : '';

        $output .= sprintf( "\n<li%s><a href='%s'>%s<span>%s</span> %s</a>\n",
            ' class="' . implode( ' ', $classes ) . '"',
            $item->url,
            $icon,
            $item->title,
            $arrow
        );
    }

    function start_lvl( &$output, $depth ) {
        $indent = str_repeat( "\t", $depth );
        $output .= "\n$indent<ul class=\"treeview-menu\">\n";
    }

    function end_lvl( &$output, $depth ) {
        $output .= "\n</ul>";
    }

    function end_el( &$output ) {
        $output .= '</li>';
    }
}

We use the Walker class to modify the output of the start of elements and levels, and the ends of elements and levels. The start_el() method is used to determine how a list item starts.
In our case it should use this template:

<li class="active treeview"><a href=""><span>Link text</span> <i class="fa pull-right fa-angle-left"></i></a></li>

We need to add some logic to make sure the active and treeview classes are only added when they are needed. The icon at the end of the link is also only needed if the item is a parent element; we account for this within the function. I also used the value of the attr_title property. This is the title attribute you can set in the menu builder when you click the arrow next to the item's name. I used this field to add the class names of the Font Awesome icons. This way we also have full control over the icon. We use the start_lvl() method only to add the treeview-menu class to the list. The end_lvl() and end_el() methods simply close the elements we opened in the other functions. As the last step, we need to tell our menu to use this Walker class. We do this by modifying the call to wp_nav_menu() in the header, adding a walker parameter. Here's the full code:

<?php wp_nav_menu( array(
    'theme_location' => 'left_menu',
    'container'      => false,
    'menu_class'     => 'sidebar-menu',
    'menu_id'        => 'sidebar-menu',
    'walker'         => new Prototyper_Walker()
)) ?>

Hiding The Test Menu

If you've been following along you should now have a hard-coded or a custom function menu, followed by the long menu that shipped with the HTML template. You could delete this as-is, but you would then lose the ability to look up elements, something we will rely on heavily soon. You could also leave it there, but it does litter up the place a bit. My solution to the problem is a bit of JavaScript: why not leave the menu there but hide it by default, showing it when the user double-presses the letter T on the keyboard? Let's add a new ID to this menu and hide it using some inline CSS.
The opening tag of the menu becomes:

<ul class="sidebar-menu" id='elements-menu' style="display:none">

Next, let's add a new JavaScript file to our theme using the enqueueing method from earlier. Create a prototyper.js file in your theme's root directory, and in the functions file add the following:

add_action( 'wp_enqueue_scripts', 'prototyper_scripts' );
function prototyper_scripts() {
    wp_enqueue_script( 'prototyper', get_template_directory_uri() . '/prototyper.js', array('jquery') );
}

The final step is to write the jQuery to make this happen. We'll need to detect the keypress of the letter T. In addition we'll make sure that the two keypresses must be within 500ms of each other.

function toggle_menu() {
    jQuery('#elements-menu').toggle();
}

var lastKeypressTime = 0;
jQuery( document ).on( 'keyup', function( e ) {
    if( e.keyCode === 84 ) {
        var thisKeypressTime = e.timeStamp;
        if( thisKeypressTime - lastKeypressTime <= 500 ) {
            toggle_menu();
        }
        lastKeypressTime = thisKeypressTime;
    }
})

Voilà! Our menu is hidden, but by quickly pressing T twice we can get it to reappear or disappear again. Super helpful when finding elements, but out of the way when not needed.

Preserving Test Menu Links

Links in the test menu are relative, which means they won't work anymore: the template's files now live inside your theme folder, so the old relative paths point at locations that no longer exist. To fix this, simply prepend the <?php echo get_template_directory_uri() ?> snippet to each link's href attribute in the test menu.

Building Pages

We've finally arrived at the step where we can start assembling pages! Let's create that weight diary I've been referring to. This will be a table of foods the user has eaten with an option to add another. You should have a file for this page already created: page-weight-diary.php. Open it now. It should contain the HTML we discussed in the "Creating Our Files" section. In the content header section, change the title texts as you wish.
I am using the following code in case you want to copy-paste:

<?php get_header() ?>

<!-- Content Header (Page header) -->
<section class="content-header">
    <h1>
        Weight Diary
        <small>Everything I Ate</small>
    </h1>
    <ol class="breadcrumb">
        <li><a href="<?php echo site_url() ?>"><i class="fa fa-dashboard"></i> Home</a></li>
        <li class="active">Weight Diary</li>
    </ol>
</section>

<!-- Main content -->
<section class="content">

</section><!-- /.content -->

<?php get_footer() ?>

Tap T twice to bring up the original menu and click the "Tables" entry, followed by "Simple Tables". Let's use the bottom-most "Responsive Hover Table" one. You can view this page's source code to find the HTML required for this table. I like to copy it from the developer tools window. Right-click somewhere in the table and click on "Inspect Element" (or something similar, depending on your browser). In the window that pops up, navigate up the DOM until you come to the row the table is in and copy the whole row. Paste it inside the main content section of the weight diary file and modify the table as you see fit. I decided to add a date, food, calories, healthiness and notes column. The next step is to add a button users can click to add a new diary entry. We'll first need to create a box for this for proper spacing. I copy-pasted everything below from the admin template, so it took me only a few seconds to create.

<div class="row">
    <div class="col-xs-12">
        <div class='box'><div class='box-body'>
            <a href='<?php echo site_url() ?>/add-weight-entry/' class="btn btn-primary btn-sm">Add New</a>
        </div></div>
    </div>
</div>

Since we are referring to an add-weight-entry page, let's create that in the WordPress admin; create the page-add-weight-entry.php file and paste our little skeleton into it. Once done, copy-paste form elements from our HTML template to build the form, just like before.
The only addition I made was to make the form redirect back to the diary. I modified the opening tag of the form to look like this:

<form role="form" method='post' action='<?php echo site_url() ?>/weight-diary/?message=added_entry'>

Whenever the form is submitted it will go back to the weight diary. It will contain the additional query parameter message with a value of added_entry. We can leverage this to show a nice little success message back in our page-weight-diary.php file. I copy-pasted the HTML for the success display from UI Elements → General in the HTML template and I placed it between the "Add New" button and the table.

Copy-Pasting Secrets

At this stage you should be able to build out a static site with some dynamic components (menu, page creation). However, the exact element you need to copy-paste may not be so obvious. I've had a bit of experience with this, so let me take a couple of sentences to help you out! All proper HTML admin templates will work within a grid system. It is a good idea to always use these rows and columns. When I wanted the whole table for the weight diary, I copied the row which contains the single full-width column and the table. When I wanted just the button for the "add new" functionality, I just copied the button and created the row and box around it manually. That brings me to another point: container elements. Most templates use some sort of boxes as well. It's up to you if you want to use these or not. I've found that using the exact same structure as the template will help the continuity of your project a great deal. I wouldn't normally have added a box around my "Add New" button but, since this is how it is positioned properly, I thought this was the best solution. All in all, be mindful of the grid system that a template uses and use the container elements it provides for maximum efficiency.

Dynamic Front-end Features

Most admin themes, AdminLTE included, use JavaScript extensively.
You have a number of elements at your disposal that you may need to set up using JavaScript. Data tables are a good example; they allow for the sorting and searching of data within HTML tables. Let's replace our diary with a sortable table. I usually copy-paste JavaScript-heavy things from the template's original source file as opposed to the rendered DOM. The reason is that once the element has loaded, JavaScript may dynamically change the DOM, meaning that a copy-paste may not be successful; so I will go into pages/tables/data.html and copy-paste the full "Data Table With Full Features" section. If you take a look around in data.html you may notice that it contains four things that our framework doesn't include just yet. It has one extra style sheet, two extra scripts and a little JavaScript code at the end. Since data tables aren't used on every page, it makes sense to only load these on pages where they are necessary. While we could go ahead and do this, our main goal is quick prototyping, not an optimized application. Therefore, I will add the style to the header, the two scripts to the footer, and the JavaScript snippet to the weight diary file. In your header.php paste the following:

<!-- DATA TABLES -->
<link href="<?php echo get_template_directory_uri() ?>/css/datatables/dataTables.bootstrap.css" rel="stylesheet" type="text/css" />

In your footer.php paste the following, taking care to paste it exactly before the final script, which should be app.js:

<script src="<?php echo get_template_directory_uri() ?>/js/plugins/datatables/jquery.dataTables.js" type="text/javascript"></script>
<script src="<?php echo get_template_directory_uri() ?>/js/plugins/datatables/dataTables.bootstrap.js" type="text/javascript"></script>

The last step is to add the bit of JavaScript code which makes the table a data table. I just put this somewhere below the table in the page-weight-diary.php file.
<script type="text/javascript">
    jQuery(function() {
        jQuery("#example1").dataTable();
    });
</script>

This methodology can be followed for any JavaScript-heavy element. First, go into the file of the element you need and make sure you add any style sheets or scripts it requires. Then, copy the element, followed by any JavaScript snippet which you need to make it work.

Dynamic Server-Side Functionality

The last piece of the puzzle is adding dynamic server-side functionality. This requires a bit more familiarity with WordPress, but you can create pretty high-quality prototypes without implementing this bit. First of all, let's create our weight diary functionality in the back-end. I'll use a custom post type to hold my diary entries and use the Advanced Custom Fields (ACF) plugin to create the custom fields. You can grab ACF from the Advanced Custom Fields website, or the plugin repository. I use the code below in functions.php to create a very quick custom post type:

add_action( 'init', 'weight_diary_post_type' );
function weight_diary_post_type() {
    $args = array(
        'public' => true,
        'label'  => 'Weight Diary'
    );
    register_post_type( 'weight_diary', $args );
}

After this is added, you should see a Weight Diary custom post type in the admin. Now install and activate Advanced Custom Fields and create the two custom fields we need: healthiness and calories. I made the former a radio button with three values, and the latter a number field. Go to the Weight Diary custom post type and add some entries. We'll be using the publication date for the date, the post title for the name of the food, and the post content for the notes. Now, let's go back to our table and use a WordPress query to populate it. Paste the following code into your table, replacing the content of the tbody tag.
<?php
    $diary = new WP_Query( array(
        'post_type'   => 'weight_diary',
        'post_status' => 'publish'
    ));
    if( $diary->have_posts() ) : while( $diary->have_posts() ): $diary->the_post();
?>
    <tr>
        <td><?php the_time( 'F-d-Y' ) ?></td>
        <td><?php the_title() ?></td>
        <td><span class="label label-success"><?php the_field( 'healthiness' ) ?></span></td>
        <td><?php the_field( 'calories' ) ?></td>
        <td><?php the_content() ?></td>
    </tr>
<?php endwhile; endif ?>

At this stage the table is populated with the ten most recent diary entries. Note that to retrieve a custom field value from ACF, all you need is the the_field() function and the name of the field (which you can look up in the custom fields section). The next step is enabling users to submit actual data. Let's go to the page-add-weight-entry.php file and modify its submit location to <?php echo admin_url( 'admin-ajax.php' ) ?>. In addition, add a hidden field anywhere before the closing </form> tag. This contains a parameter whose value we can use to intercept the user's action of adding a weight diary entry.

<input type='hidden' name='action' value='add_diary_entry'>

We can intercept this action from our functions file using the value of that hidden field. Use the following convention:

add_action( 'wp_ajax_add_diary_entry', 'add_diary_entry' );
add_action( 'wp_ajax_nopriv_add_diary_entry', 'add_diary_entry' );
function add_diary_entry() {
    // code goes here
}

The first action intercepts the addition of an entry for logged-in users; the second intercepts it for logged-out users. If you only need to do this for one or the other, remove the one which isn't needed. In a real-world scenario we would need to add some more code to check for intent (for example a nonce) to make sure it's safe, but for our purposes this will do. Using the data passed via the $_POST array we can create a WordPress post with the proper details. Once the post is created, we redirect the user back to the diary with the success message.
function add_diary_entry() {
    $data = array(
        'post_type'    => 'weight_diary',
        'post_status'  => 'publish',
        'post_title'   => $_POST['post_title'],
        'post_content' => $_POST['post_content']
    );

    $post_id = wp_insert_post( $data );
    add_post_meta( $post_id, 'calories', $_POST['calories'] );
    add_post_meta( $post_id, 'healthiness', $_POST['healthiness'] );

    header( 'Location: ' . site_url() . '/?page_id=9&message=added_entry' );
    die();
}

And So On

At this point it becomes a matter of repetition. To add a static feature, open the menu, find the elements you need, copy-paste them and modify the HTML content as necessary to fit your needs. If you need any advanced front-end magic, apply some JavaScript; if you need server-side functionality, build out your functions and apply them to the front-end elements already in place. Using this method you should be able to build a high-quality, high-fidelity prototype without too much effort. Your final build will be of much higher quality and your clients will be kept happy throughout the coding phase.
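As a small aside on the keyboard toggle from the "Hiding The Test Menu" section: the double-press timing logic can be factored into a pure helper, which makes the 500ms window easy to unit-test outside the browser. This is a sketch of mine (the helper name is not from the article's code); the jQuery wiring stays the same.

```javascript
// Factor the double-keypress timing into a pure function so it can be
// tested without a browser. Two calls within `windowMs` of each other
// count as a "double press".
function makeDoublePressDetector(windowMs) {
  var lastKeypressTime = -Infinity; // avoids a false trigger on the very first press
  return function (timeStamp) {
    var isDouble = timeStamp - lastKeypressTime <= windowMs;
    lastKeypressTime = timeStamp;
    return isDouble;
  };
}

// Browser wiring, equivalent to the inline snippet in the article:
// var detect = makeDoublePressDetector(500);
// jQuery(document).on('keyup', function (e) {
//   if (e.keyCode === 84 && detect(e.timeStamp)) { toggle_menu(); }
// });
```

Starting from -Infinity rather than 0 is deliberate: event timestamps count from page load, so an initial value of 0 would treat a single early keypress as a double press.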
https://www.smashingmagazine.com/2015/06/rapid-front-end-prototyping-with-wordpress/
Paul Eggert <address@hidden> writes:

> Simon Josefsson <address@hidden> writes:
>
>> Hello. iconvme.* has been installed into libc. This patch proposes to
>> add it to gnulib via libc. What do you think?
>
> There was something wrong with that email. It claimed to have a file
> iconvme.h, but the contents of that file were identical to the
> contents of iconvme.c. What caused that problem? Can you please
> resubmit the suggestion?

Oops! A mistaken 'cp'. The iconvme.h is as below.

> I did notice one unchecked arithmetic overflow. This:
>
>> + size_t outbuf_size = (inbytes_remaining + 1) * MB_LEN_MAX;
>
> can overflow and cause buffer overruns. I suggest adding something
> like this as a conservative check, just after the declarations:
>
>   if (1 < MB_LEN_MAX && SIZE_MAX / MB_LEN_MAX <= inbytes_remaining)
>     {
>       errno = ENOMEM;
>       return NULL;
>     }

Neat, I have filed a libc bug report about it:

Thanks.

/* Recode strings between character sets, using iconv.
   Copyright (C) 2004 Free Software Foundation, Inc.
   Written by Simon Josefsson.

   This program is free software; you can redistribute it and/or modify
   it under the terms of the GNU Lesser General Public License as
   published by the Free Software Foundation; either version 2.1, or (at
   your option) any later version.  */

#ifndef ICONVME_H
# define ICONVME_H

extern char *iconv_string (const char *string, const char *from_code,
                           const char *to_code);

#endif /* ICONVME_H */
http://lists.gnu.org/archive/html/bug-gnulib/2005-02/msg00034.html
Technical Blog Post

Abstract

Automation Script: How to make a memo field required depending on the status to be changed

Body

Do you have a requirement to make the memo field required when the status is going to be changed to a specific status? Your requirement may not be achievable using conditional UI settings; you may need to use an automation script or Java code in order to achieve it. You can reference the automation script example below.

The requirement: make the memo field required when the status is going to be changed to 'Approved' in the Change Status dialog.

- Go to the Automation Scripts application.
- Select Action / Create Script with Attribute Launch Point.
  Launch Point name: any name
  Object: WOCHANGESTATUS
  Attribute: STATUS
- Click the Next button. Set the Script Name and set the Script Language to jython. Then add two variables (vSTATUS, vMEMO) for the STATUS and MEMO attributes.
- Click the Next button. In the source code, add the following:

from psdi.mbo import MboConstants
from psdi.server import MXServer

if MXServer.getMXServer().getMaximoDD().getTranslator().toInternalString("WOSTATUS", vSTATUS, mbo) == 'APPR':
    vMEMO_required = True
else:
    vMEMO_required = False

- Save, then change the Status to Active.
- Now, go to the Work Order Tracking application.
- Open a WO having WAPPR status.
- Click the 'Change Status' button. You will then get the 'Change Status' dialog.

If you change 'New Status' to Approved, the Memo field will be required. If you change 'New Status' to another status such as Completed, the Memo field won't be required.

Thanks,

UID ibm11130505
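For readers without a Maximo instance handy, the core of the launch point above can be sketched in plain Python: translate the displayed status to its internal value, then require the memo only for APPR. The translator stub below is purely illustrative and its mapping is hypothetical; it only stands in for Maximo's MaximoDD translator and is not a Maximo API.

```python
def memo_required(displayed_status, to_internal):
    """Return True when the memo field must be filled in.

    `to_internal` plays the role of Maximo's
    getTranslator().toInternalString(domain, value, mbo) call.
    """
    return to_internal("WOSTATUS", displayed_status) == "APPR"


def stub_translator(domain, value):
    """Stand-in translator with a hypothetical WOSTATUS mapping."""
    mapping = {
        "Approved": "APPR",
        "Completed": "COMP",
        "Waiting on Approval": "WAPPR",
    }
    return mapping.get(value, value)


print(memo_required("Approved", stub_translator))   # True
print(memo_required("Completed", stub_translator))  # False
```

The point of translating first is that the displayed status string varies by locale, while the internal value ('APPR') is stable, which is why the script compares against the internal value.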
https://www.ibm.com/support/pages/node/1130505?lang=en
VOP_VPTOCNP — translate a vnode to its component name

#include <sys/param.h>
#include <sys/vnode.h>

int
VOP_VPTOCNP(struct vnode *vp, struct vnode **dvp, char *buf, int *buflen);

This translates a vnode into its component name, and writes that name to the head of the buffer specified by buf.

vp    The vnode to translate.
dvp   The vnode of the parent directory of vp.
buf   The buffer into which to prepend the name.

The vnode should be locked on entry and will still be locked on exit. The parent directory vnode will be unlocked on a successful exit. However, it will have its use count incremented.

Zero is returned on success, otherwise an error code is returned.

[ENOMEM]  The buffer was not large enough to hold the vnode's component name.
[ENOENT]  The vnode was not found on the file system.

SEE ALSO: VOP_LOOKUP(9), vnode(9)

This interface is a work in progress.

The function VOP_VPTOCNP appeared in FreeBSD 8.0. This manual page was written by Joe Marcus Clarke.
http://gnu.wiki/man9/VOP_VPTOCNP.9freebsd.php
Opened 7 years ago
Closed 7 years ago
Last modified 7 years ago

#16105 closed Bug (invalid)

libgdal1-1.7.0 broke django.contrib.gis.gdal.libgdal

Description

Importing django.contrib.gis.gdal.libgdal results in a segfault when I use libgdal1-1.7.0. Using 1.6 it doesn't. A quick fix I made that worked was to install both, but modify gdal.libgdal to prioritize 1.6. This may be a problem with my libgdal, maybe. Using debian-testing.

Change History (4)

comment:1 Changed 7 years ago by

comment:2 Changed 7 years ago by

This program:

from ctypes import CDLL
from ctypes.util import find_library
lib_path = find_library('gdal1.7.0')
lgdal = CDLL(lib_path)
_version_info = lgdal['GDALVersionInfo']

Gives the following output:

Traceback (most recent call last):
  File "test.py", line 4, in <module>
    lgdal = CDLL(lib_path)
  File "/usr/lib/python2.6/ctypes/__init__.py", line 353, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: /usr/lib/libgdal1.7.0.so.1: symbol __cxa_pure_virtual, version libmysqlclient_16 not defined in file libmysqlclient.so.16 with link time reference

I'm using 1.7.3-4. I should find the right place to post this...

comment:3 Changed 7 years ago by

Sorry:

from ctypes import CDLL
from ctypes.util import find_library
lib_path = find_library('gdal1.7.0')
lgdal = CDLL(lib_path)
_version_info = lgdal['GDALVersionInfo']

comment:4 Changed 7 years ago by

Based on this traceback, I'd say it's a Debian bug, and I suggest reporting it as such. The error message is similar to previously reported ones. In my opinion, your segfault could be caused by:

- ctypes: django.contrib.gis.gdal.libgdal uses this module in a very straightforward way to load libgdal1,
- libgdal1: you're using a testing version, after all.

Since ctypes is part of the standard library, I think you should take the offending code from django.contrib.gis.gdal.libgdal, extract the minimum failing test case, and post that to Python's bug tracker. It might be as simple as: (Disclaimer: I don't have access to a box running debian-testing, so I don't know if this is sufficient to exhibit the bug.)
Anyway, Django does not contain any C code; it's written entirely in Python. So while it can raise exceptions, it can't (in theory) create segfaults. For this reason, I'm going to mark the bug as invalid.
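The "minimum failing test case" pattern from comment 4 looks like the following, demonstrated here with libm instead of libgdal so it runs without GDAL installed. Note the restype/argtypes declarations: the snippets in the ticket look up GDALVersionInfo, which returns a char *, and without setting restype, ctypes assumes an int return, so a similar declaration would be needed there to read the result correctly.

```python
# Minimal ctypes exercise: locate a shared library with find_library,
# load it with CDLL, look up a symbol by name, and call it with explicit
# restype/argtypes. Uses libm so it works on any glibc system.
from ctypes import CDLL, c_double
from ctypes.util import find_library

lib_path = find_library('m')      # e.g. 'libm.so.6' on glibc systems
libm = CDLL(lib_path)             # the step that raised OSError in the ticket

cos = libm['cos']                 # same item-access lookup style as comment 3
cos.restype = c_double            # without this, ctypes assumes an int return
cos.argtypes = [c_double]

print(cos(0.0))                   # 1.0
```

If CDLL raised here, as it did in the ticket's traceback, the failure would be pinned on the dynamic loader and the library's own dependencies, with Django out of the picture entirely, which is exactly what the minimal test case is meant to show.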
https://code.djangoproject.com/ticket/16105
🏷 Tagged

A wrapper type for safer, expressive code.

Table of Contents

- Motivation
- The problem
- The solution
- Features
- Installation
- Interested in learning more?
- License

Motivation

We often work with types that are far too general or hold far too many values than what is necessary for our domain. Sometimes we just want to differentiate between two seemingly equivalent values at the type level. An email address is nothing but a String, but it should be restricted in the ways in which it can be used. And while a User id may be represented with an Int, it should be distinguishable from an Int-based Subscription id. Tagged can help solve serious runtime bugs at compile time by wrapping basic types in more specific contexts with ease.

The problem

Swift has an incredibly powerful type system, yet it's still common to model most data like this:

struct User {
  let id: Int
  let email: String
  let address: String
  let subscriptionId: Int?
}

struct Subscription {
  let id: Int
}

We're modeling user and subscription ids using the same type, but our app logic shouldn't treat these values interchangeably! We might write a function to fetch a subscription:

func fetchSubscription(byId id: Int) -> Subscription? {
  return subscriptions.first(where: { $0.id == id })
}

Code like this is super common, but it allows for serious runtime bugs and security issues! The following compiles, runs, and even reads reasonably at a glance:

let subscription = fetchSubscription(byId: user.id)

This code will fail to find a user's subscription. Worse yet, if a user id and subscription id overlap, it will display the wrong subscription to the wrong user! It may even surface sensitive data like billing details!

The solution

We can use Tagged to succinctly differentiate types.

import Tagged

struct User {
  let id: Id
  let email: String
  let address: String
  let subscriptionId: Subscription.Id?
  typealias Id = Tagged<User, Int>
}

struct Subscription {
  let id: Id

  typealias Id = Tagged<Subscription, Int>
}

Tagged depends on a generic "tag" parameter to make each type unique. Here we've used the container type to uniquely tag each id. We can now update fetchSubscription to take a Subscription.Id where it previously took any Int.

func fetchSubscription(byId id: Subscription.Id) -> Subscription? {
  return subscriptions.first(where: { $0.id == id })
}

And there's no chance we'll accidentally pass a user id where we expect a subscription id.

let subscription = fetchSubscription(byId: user.id)

🛑 Cannot convert value of type 'User.Id' (aka 'Tagged<User, Int>') to expected argument type 'Subscription.Id' (aka 'Tagged<Subscription, Int>')

We've prevented a couple serious bugs at compile time! There's another bug lurking in these types. We've written a function with the following signature:

func sendWelcomeEmail(toAddress address: String)

It contains logic that sends an email to an email address. Unfortunately, it takes any string as input.

sendWelcomeEmail(toAddress: user.address)

This compiles and runs, but user.address refers to our user's billing address, not their email! None of our users are getting welcome emails! Worse yet, calling this function with invalid data may cause server churn and crashes. Tagged again can save the day.

struct User {
  let id: Id
  let email: Email
  let address: String
  let subscriptionId: Subscription.Id?

  typealias Id = Tagged<User, Int>
  typealias Email = Tagged<User, String>
}

We can now update sendWelcomeEmail and have another compile time guarantee.

func sendWelcomeEmail(toAddress address: Email)

sendWelcomeEmail(toAddress: user.address)

🛑 Cannot convert value of type 'String' to expected argument type 'Email' (aka 'Tagged<EmailTag, String>')

Handling Tag Collisions

What if we want to tag two string values within the same type?

struct User {
  let id: Id
  let email: Email
  let address: Address
  let subscriptionId: Subscription.Id?
typealias Id = Tagged<User, Int> typealias Email = Tagged<User, String> typealias Address = Tagged</* What goes here? */, String> } We shouldn't reuse Tagged<User, String> because the compiler would treat Address as the same type! We need a new tag, which means we need a new type. We can use any type, but an uninhabited enum is nestable and uninstantiable, which is perfect here. struct User { let id: Id let email: Email let address: Address let subscriptionId: Subscription.Id? typealias Id = Tagged<User, Int> enum EmailTag {} typealias Email = Tagged<EmailTag, String> enum AddressTag {} typealias Address = Tagged<AddressTag, String> } We've now distinguished User.Email and User.Address at the cost of an extra line per type, but things are documented very explicitly. If we want to save this extra line, we could instead take advantage of the fact that tuple labels are encoded in the type system and can be used to differentiate two seemingly equivalent tuple types. struct User { let id: Id let email: Email let address: Address let subscriptionId: Subscription.Id? typealias Id = Tagged<User, Int> typealias Email = Tagged<(User, email: ()), String> typealias Address = Tagged<(User, address: ()), String> } This may look a bit strange with the dangling (), but it's otherwise nice and succinct, and the type safety we get is more than worth it. Accessing Raw Values Tagged uses the same interface as RawRepresentable to expose its raw values, via a rawValue property: user.id.rawValue // Int You can also manually instantiate tagged types using init(rawValue:), though you can often avoid this using the Decodable and ExpressibleBy- Literal family of protocols. Features Tagged uses conditional conformance, so you don't have to sacrifice expressiveness for safety. If the raw values are encodable or decodable, equatable, hashable, comparable, or expressible by literals, the tagged values follow suit. 
This means we can often avoid unnecessary (and potentially dangerous) wrapping and unwrapping.

Equatable

A tagged type is automatically equatable if its raw value is equatable. We took advantage of this in our example, above.

subscriptions.first(where: { $0.id == user.subscriptionId })

Hashable

We can use underlying hashability to create a set or lookup dictionary.

var userIds: Set<User.Id> = []
var users: [User.Id: User] = [:]

Comparable

We can sort directly on a comparable tagged type.

userIds.sorted(by: <)
users.values.sorted(by: { $0.email < $1.email })

Codable

Tagged types are as encodable and decodable as the types they wrap.

struct User: Decodable {
  let id: Id
  let email: Email
  let address: Address
  let subscriptionId: Subscription.Id?

  typealias Id = Tagged<User, Int>
  typealias Email = Tagged<(User, email: ()), String>
  typealias Address = Tagged<(User, address: ()), String>
}

JSONDecoder().decode(User.self, from: Data("""
{
  "id": 1,
  "email": "blob@pointfree.co",
  "address": "1 Blob Ln",
  "subscriptionId": null
}
""".utf8))

ExpressibleBy-Literal

Tagged types inherit literal expressibility. This is helpful for working with constants, like instantiating test data.

User(
  id: 1,
  email: "blob@pointfree.co",
  address: "1 Blob Ln",
  subscriptionId: 1
)

// vs.

User(
  id: User.Id(rawValue: 1),
  email: User.Email(rawValue: "blob@pointfree.co"),
  address: User.Address(rawValue: "1 Blob Ln"),
  subscriptionId: Subscription.Id(rawValue: 1)
)

Numeric

Numeric tagged types get mathematical operations for free!

struct Product {
  let amount: Cents

  typealias Cents = Tagged<Product, Int>
}

let totalCents = products.reduce(0) { $0 + $1.amount }

Why not use a type alias?

Type aliases are just that: aliases. A type alias can be used interchangeably with the original type and offers no additional safety or guarantees.

Why not use RawRepresentable, or some other protocol?
Protocols like RawRepresentable are useful, but they can't be extended conditionally, so you miss out on all of Tagged's free features. Using a protocol means you need to manually opt each type into synthesizing Equatable, Hashable, Decodable and Encodable, and to achieve the same level of expressiveness as Tagged, you need to manually conform to other protocols, like Comparable, the ExpressibleBy-Literal family of protocols, and Numeric. That's a lot of boilerplate you need to write or generate, but Tagged gives it to you for free! Installation Carthage If you use Carthage, you can add the following dependency to your Cartfile: github "pointfreeco/swift-tagged" ~> 0.2 CocoaPods If your project uses CocoaPods, just add the following to your Podfile: pod 'Tagged', '~> 0.2' SwiftPM If you want to use Tagged in a project that uses SwiftPM, it's as simple as adding a dependencies clause to your Package.swift: dependencies: [ .package(url: "", from: "0.2.0") ] Xcode Sub-project Submodule, clone, or download Tagged, and drag Tagged.xcodeproj into your project. Interested in learning more? These concepts (and more) are explored thoroughly in Point-Free, a video series exploring functional programming and Swift hosted by Brandon Williams and Stephen Celis. Tagged was first explored in Episode #12: License All modules are released under the MIT license. See LICENSE for details. Releases 0.2.0 - Jun 23, 2018 - Removed ExpressibleByNilLiteral conformance - Added CustomPlaygroundDisplayConvertible conformance - Added map to Tagged - Fixed deployment targets 0.1.0 - Apr 16, 2018 This preliminary release has the basic Tagged type implemented with a few starting conformances to common Swift protocols.
https://swiftpack.co/package/pointfreeco/swift-tagged
CC-MAIN-2018-34
refinedweb
1,455
57.57
jtrader is the root command; it will then have sub-commands for each of the brokers. $ jtrader Usage: jtrader [OPTIONS] COMMAND [ARGS]... Options: --help Show this message and exit. Commands: zerodha Command line utilities for managing Zerodha account Get started by... Overview of utilities available with zerodha - $ jtrader zerodha Usage: jtrader zerodha [OPTIONS] COMMAND [ARGS]... Command line utilities for managing Zerodha account Get started by creating a session $ jtrader zerodha startsession Options: --help Show this message and exit. Commands: configdir Print app config directory location rm Delete stored credentials or sessions config To delete... savecreds Saves your creds in the APP config directory startsession Saves your login session in the app config folder This is the preferred method for logging in to your Zerodha account: $ jtrader zerodha startsession User ID >: USERID Password >: Pin >: Logged in successfully as XYZ Saved session successfully This will save your session in the config folder. Once you have done this, in your code you can call set_access_token or load_session to use this session. Please note that, just as your browser login session expires after some time, this session will also expire; however, this is much safer than storing your credentials in code or plain text. from jugaad_trader import Zerodha kite = Zerodha() kite.set_access_token() # loads the session from config folder profile = kite.profile() The savecreds command saves credentials in your config folder in ini format. ⚠️ Please note that you are storing your password in plain text $ jtrader zerodha savecreds Saves your creds in app config folder in file named .zcred User ID >: USERID Password >: Pin >: Saved credentials successfully Once you have done this, you can call load_creds followed by login: from jugaad_trader import Zerodha kite = Zerodha() kite.load_creds() kite.login() print(kite.profile()) jugaad-trader uses python click for its CLI.
Click provides the get_app_dir function to locate the config folder; refer to the Click documentation on how it works. You can simply run the command below to get the config folder location: $ jtrader zerodha configdir In case you wish to delete the configuration, you can delete it using the rm command - To delete SESSION $ jtrader zerodha rm SESSION To delete CREDENTIALS $ jtrader zerodha rm CREDENTIALS
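For reference, get_app_dir roughly resolves as in the stdlib-only sketch below. This is a simplified approximation for illustration (the real Click implementation also handles Windows roaming profiles, name normalization, and other details), so treat the exact paths as assumptions rather than Click's contract.

```python
import os
import sys

def app_dir(app_name: str) -> str:
    # Simplified approximation of click.get_app_dir; see the Click
    # docs for the authoritative per-platform behavior.
    if sys.platform.startswith("win"):
        base = os.environ.get("APPDATA", os.path.expanduser("~"))
        return os.path.join(base, app_name)
    if sys.platform == "darwin":
        return os.path.join(
            os.path.expanduser("~/Library/Application Support"), app_name
        )
    # POSIX: follow the XDG base-directory convention.
    base = os.environ.get("XDG_CONFIG_HOME", os.path.expanduser("~/.config"))
    return os.path.join(base, app_name)

print(app_dir("jtrader"))
```

On a typical Linux host this resolves to something like ~/.config/jtrader, which is consistent with the config-folder behavior described above.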
https://marketsetup.in/documentation/jugaad-trader/cli/
Database.CouchDB.ViewServer Description This is a CouchDB view server in and for Haskell. With it, you can define design documents that use Haskell functions to perform map/reduce operations. Database.CouchDB.ViewServer is just a container; see the submodules for API documentation. Synopsis - type MapSignature = Object -> ViewMap () - type ReduceSignature a = [Value] -> [Value] -> Bool -> ViewReduce a - module Database.CouchDB.ViewServer.Map - module Database.CouchDB.ViewServer.Reduce Installation This package includes the executable that runs as the CouchDB view server as well as some modules that your map and reduce functions will compile against. This means, for instance, that if CouchDB is running as a system user, this package must be installed globally in order to work. The executable is named couch-hs. Without any arguments, it will run as a view server, processing lines from stdin until EOF. There are two options that are important to the compilation of your map and reduce functions (couch-hs -h will print a short description of all options). -x EXT - Adds a language extension to the function interpreters. OverloadedStrings is included by default. -m MODULE[,QUALIFIED] - Imports a module into the function interpreter context. You may include a qualified name or leave it unqualified. The default environment is equivalent to the following (the last entry varying for map and reduce functions): import Prelude import Data.Maybe import Data.Ratio import Data.List as L import Data.Map as M import Data.Text as T import Data.Aeson.Types as J import Control.Monad import Control.Applicative import Database.CouchDB.ViewServer.[Map|Reduce] Assuming the package is installed properly, just add it to your CouchDB config file: [query_servers] haskell = /path/to/couch-hs [options] Development Modes In addition to the server mode, couch-hs has some special modes to aid development.
CouchDB isn't very good at reporting errors in view functions, so the following modes can be used to make sure your functions compile before installing them into a view. These can be run manually, although they're especially useful when integrated into your editor. They can also serve as a sanity check in your deployment process. To ensure valid results, be sure to match the couch-hs options with those in CouchDB's config file. couch-hs [options] -M [CODE|@PATH] ... - Attempt to compile one or more map functions. Each argument can either be a source string or a path to a file prefixed by @. If no arguments are given, one function will be read from stdin. For each map function that is successfully compiled, couch-hs will print OK. If any function fails, the interpreter error(s) will be printed. If there are any failures, couch-hs will exit with a non-zero status. couch-hs [options] -R [CODE|@PATH] ... - The same as -M, except to compile reduce functions. Use Overview Here is a simple summation example to get started. This example assumes documents of the form: {"name": "Bob", "value": 5} The map function emits name/value pairs: \doc -> emitM (doc .: "name" :: ViewMap String) (doc .: "value" :: ViewMap Integer) The reduce function adds up all of the values: \keys values rereduce -> sum <$> parseJSONList values :: ViewReduce Integer The key things to note here: - Map and reduce operations take place in a monadic context. The map and reduce monads are transformers on top of Parser, which is used to parse the decoded JSON into native values. Lifted parsing tools are provided for convenience. - Both map and reduce functions will parse JSON values and produce output and log messages. If any JSON parsing operation fails, the entire computation will fail and no results nor log messages will be returned to the server. To handle parse failures, you can use Alternative or .:?. - Both map and reduce computations are parameterized in some way.
In the case of map functions, it's the emit function; for the reduce functions, it's the return type. In either case, since there is no top-level type annotation, it will be necessary to include annotations at key points in the functions. I find that annotations usually belong at the points where the JSON objects are parsed. Map Functions A map function takes a single JSON object as an argument and evaluates to ViewMap (). The map computation may call emit or emitM to return key/value pairs for the given document. The emit functions accept any type that can be converted by toJSON, which is a long list. If you want to emit null, pass Null or Nothing (Null is easier, as it doesn't require annotation). Map functions will generally use .: and .:? to access fields in the object and may need parseJSON to parse embedded values. If the map computation fails, the result will be equivalent to return (). type MapSignature = Object -> ViewMap () The type of your map functions as they are stored in CouchDB. The trivial example: \doc -> return () Reduce Functions A reduce function takes three arguments: a list of keys as JSON Values, a list of values as JSON Values, and a Bool for rereduce. The ViewReduce monad may wrap any value that can be converted by toJSON; a type annotation will generally be necessary. A reduce function will normally use parseJSONList to parse the JSON values into primitive types for processing. If the reduce computation fails, the result will be equivalent to return Null. type ReduceSignature a = [Value] -> [Value] -> Bool -> ViewReduce a The type of your reduce functions as they are stored in CouchDB. The trivial example: \keys values rereduce -> return Null Example Here's a larger example that shows off a more practical application. Suppose a set of documents representing shared expenses. We'll include a couple of malformed documents for good measure.
{"date": "2011-06-05", "what": "Dinner", "credits": {"Alice": 80}, "shares": {"Alice": 1, "Bob": 2, "Carol": 1}} {"date": "2011-06-17", "credits": {"Bob": 75}, "shares": {"Bob": 1, "Doug": 1}} {"date": "2011-06-08", "what": "Concert", "credits": {"Carol": 150}, "shares": {"Alice": 1, "Carol": 1, "Doug": 1}} {"date": "2011-05-25", "what": "Bogus", "credits": {"Alice": 50}, "shares": {"Bob": 0}} {"food": "pizza", "toppings": ["mushrooms", "onions", "sausage"]} The following map function will calculate the total credit or debt for each person for each valid document. The what field is carried along. The reduce function sums all of the nets to produce the bottom line. \doc -> let net credits shares = let debts = shareAmounts (sumMap credits) (sumMap shares) shares in M.unionWith (+) credits debts shareAmounts totCredit totShares = M.map (\shares -> -(shares / totShares) * totCredit) sumMap = M.fold (+) 0 in do date <- doc .: "date" :: ViewMap T.Text what <- doc .:? "what" :: ViewMap (Maybe T.Text) -- Optional field credits <- doc .: "credits" :: ViewMap (M.Map T.Text Double) shares <- doc .: "shares" :: ViewMap (M.Map T.Text Double) guard $ (sumMap shares) > 0 -- Just say no to (/ 0) emit date $ object ["net" .= net credits shares, "what" .= what] \_ values rereduce -> L.foldl' (M.unionWith (+)) M.empty <$> case rereduce of False -> mapM (.: "net") =<< parseJSONList values :: ViewReduce [(M.Map T.Text Double)] True -> parseJSONList values :: ViewReduce [(M.Map T.Text Double)] Map results: "2011-06-05": {what: "Dinner", net: {Alice: 60, Bob: -40, Carol: -20}} "2011-06-08": {what: "Concert", net: {Alice: -50, Carol: 100, Doug: -50}} "2011-06-17": {what: null, net: {Bob: 37.5, Doug: -37.5}} Which reduces to: {Alice: 10, Bob: -2.5, Carol: 80, Doug: -87.5}
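As an arithmetic cross-check of this example (in Python rather than Haskell, and independent of CouchDB), the same net/bottom-line computation can be replayed on the three valid documents; the two malformed documents are omitted because the map function's parsing and guard steps would reject them anyway.

```python
# Recompute the example's per-document nets and the reduced totals.
docs = [
    {"credits": {"Alice": 80}, "shares": {"Alice": 1, "Bob": 2, "Carol": 1}},
    {"credits": {"Bob": 75}, "shares": {"Bob": 1, "Doug": 1}},
    {"credits": {"Carol": 150}, "shares": {"Alice": 1, "Carol": 1, "Doug": 1}},
]

def net(credits, shares):
    tot_credit = sum(credits.values())
    tot_shares = sum(shares.values())
    # Each participant owes a share of the total; credits are added back.
    result = {p: -(s / tot_shares) * tot_credit for p, s in shares.items()}
    for p, c in credits.items():
        result[p] = result.get(p, 0) + c
    return result

totals = {}
for d in docs:
    for p, v in net(d["credits"], d["shares"]).items():
        totals[p] = totals.get(p, 0) + v

print(totals)  # matches the reduce result: Alice 10, Bob -2.5, Carol 80, Doug -87.5
```

The totals agree with the reduced output shown above.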
https://hackage.haskell.org/package/couch-hs-0.1.6/docs/Database-CouchDB-ViewServer.html
Preventing ESC in Full Screen Interactive Penultimate post on the AIR 1.5.2 update: Prior to AIR 1.5.2, if a user hit the escape key when an application was running in fullScreen or fullScreenInteractive, the application would be forced out of full screen mode. This remains the intended behavior for fullScreen, but was a defect in the implementation of fullScreenInteractive. Starting with AIR 1.5.2, hitting escape causes fullScreenInteractive to exit by default but the behavior can be canceled by calling preventDefault() on the keydown event. Applications that use full screen to keep users (perhaps kids) from too easily leaving the application may find this change useful. As always, remember to update your namespace to take advantage of this new behavior.
http://blogs.adobe.com/simplicity/2009/08
I am sending emails to users using Django through Google Apps. When the user receives emails sent from the Django app, they are from: when looking at all emails in the inbox, people see the email's sender as : do_not_reply or If I log into that "do_not_reply" account using the browser and Google Apps itself and then send an email to myself, the emails are from: Dont Reply<[email protected]> As a result, the name displayed for the email's sender in the inbox is: Dont Reply In Django, is there a way to attach a "name" to the email account being used to send emails? I have reviewed Django's mail.py, but had no luck finding a solution. Using: Django 1.1 Python 2.6 Ubuntu 9.1 settings.EMAIL_HOST = 'smtp.gmail.com' Thanks You can actually use "Dont Reply <[email protected]>" as the email address you send from. Try this in the shell of your Django project to test if it also works with Google Apps: >>> from django.core.mail import send_mail >>> send_mail('subject', 'message', 'Dont Reply <[email protected]>', ['[email protected]']) Apart from the send_mail method, EmailMultiAlternatives can also be used to send email with HTML content and a text alternative. Try this in your project: from django.core.mail import EmailMultiAlternatives text_content = "Hello World" # set html_content email = EmailMultiAlternatives('subject', text_content, 'Dont Reply <[email protected]>', ['[email protected]']) email.attach_alternative(html_content, 'text/html') email.send() This will send mail to [email protected] with "Dont Reply" displayed as the sender name instead of the email address '[email protected]'. I use this code to send through the Gmail SMTP server (using Google Apps), and the sender names are OK: def send_mail_gapps(message, user, pwd, to): import smtplib mailServer = smtplib.SMTP("smtp.gmail.com", 587) mailServer.ehlo() mailServer.starttls() mailServer.ehlo() mailServer.login(user, pwd) mailServer.sendmail(user, to, message.as_string()) mailServer.close()
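A related tip, not from the answers above: rather than hand-assembling the "Name <address>" string, the standard library's email.utils.formataddr builds it and handles quoting when the display name contains special characters. The addresses below are placeholders.

```python
from email.utils import formataddr

# Build an RFC 5322 from-header value; formataddr quotes the display
# name if it contains commas or other specials.
sender = formataddr(("Dont Reply", "do_not_reply@example.com"))
print(sender)  # Dont Reply <do_not_reply@example.com>

# The result can then be passed straight to Django's send_mail as the
# from_email argument (shown as a comment since it needs a configured
# Django project):
# send_mail('subject', 'message', sender, ['user@example.com'])
```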
https://exceptionshub.com/giving-email-account-a-name-when-sending-emails-with-django-through-google-apps.html
Introduction to Linux interfaces for virtual networking Anyone with a network background might be interested in this blog post. A list of interfaces can be obtained using the command ip link help. This post covers the following frequently used interfaces and some interfaces that can be easily confused with one another: After reading this article, you will know what these interfaces are, what's the difference between them, when to use them, and how to create them. Use a bridge when you want to establish communication channels between VMs, containers, and your hosts. Here's how to create a bridge: # ip link add br0 type bridge # ip link set eth0 master br0 # ip link set tap1 master br0 # ip link set tap2 master br0 # ip link set veth1 master br0 This creates a bridge device named br0 and sets two TAP devices (tap1, tap2), a VETH device (veth1), and a physical device (eth0) as its slaves, as shown in the diagram above. The Linux bonding driver provides a method for aggregating multiple network interfaces into a single logical "bonded" interface. The behavior of the bonded interface depends on the mode; generally speaking, modes provide either hot standby or load balancing services. Use a bonded interface when you want to increase your link speed or do a failover on your server. Here's how to create a bonded interface: ip link add bond1 type bond miimon 100 mode active-backup ip link set eth0 master bond1 ip link set eth1 master bond1 This creates a bonded interface named bond1 with mode active-backup. For other modes, please see the kernel documentation. Similar to a bonded interface, the purpose of a team device is to provide a mechanism to group multiple NICs (ports) into one logical one (teamdev) at the L2 layer. The main thing to realize is that a team device is not trying to replicate or mimic a bonded interface.
What it does is to solve the same problem using a different approach, using, for example, a lockless (RCU) TX/RX path and modular design. But there are also some functional differences between a bonded interface and a team. For example, a team supports LACP load-balancing, NS/NA (IPV6) link monitoring, D-Bus interface, etc., which are absent in bonding. For further details about the differences between bonding and team, see Bonding vs. Team features. Use a team when you want to use some features that bonding doesn’t provide. Here’s how to create a team: # teamd -o -n -U -d -t team0 -c '{"runner": {"name": "activebackup"},"link_watch": {"name": "ethtool"}}' # ip link set eth0 down # ip link set eth1 down # teamdctl team0 port add eth0 # teamdctl team0 port add eth1 This creates a team interface named team0 with mode active-backup, and it adds eth0 and eth1 as team0‘s sub-interfaces. A new driver called net_failover has been added to Linux recently. It’s another failover master net device for virtualization and manages a primary (passthru/VF [Virtual Function] device) slave net device and a standby (the original paravirtual interface) slave net device. A VLAN, aka virtual LAN, separates broadcast domains by adding tags to network packets. VLANs allow network administrators to group hosts under the same switch or between different switches. The VLAN header looks like: Use a VLAN when you want to separate subnet in VMs, namespaces, or hosts. Here’s how to create a VLAN: # ip link add link eth0 name eth0.2 type vlan id 2 # ip link add link eth0 name eth0.3 type vlan id 3 This adds VLAN 2 with name eth0.2 and VLAN 3 with name eth0.3. The topology looks like this: Note: When configuring a VLAN, you need to make sure the switch connected to the host is able to handle VLAN tags, for example, by setting the switch port to trunk mode. VXLAN (Virtual eXtensible Local Area Network) is a tunneling protocol designed to solve the problem of limited VLAN IDs (4,096) in IEEE 802.1q. 
It is described by IETF RFC 7348. With a 24-bit segment ID, aka VXLAN Network Identifier (VNI), VXLAN allows up to 2^24 (16,777,216) virtual LANs, which is 4,096 times the VLAN capacity. VXLAN encapsulates Layer 2 frames with a VXLAN header into a UDP-IP packet, which looks like this: VXLAN is typically deployed in data centers on virtualized hosts, which may be spread across multiple racks. Here’s how to use VXLAN: # ip link add vx0 type vxlan id 100 local 1.1.1.1 remote 2.2.2.2 dev eth0 dstport 4789 For reference, you can read the VXLAN kernel documentation or this VXLAN introduction. With VLAN, you can create multiple interfaces on top of a single one and filter packages based on a VLAN tag. With MACVLAN, you can create multiple interfaces with different Layer 2 (that is, Ethernet MAC) addresses on top of a single one. Before MACVLAN, if you wanted to connect to physical network from a VM or namespace, you would have needed to create TAP/VETH devices and attach one side to a bridge and attach a physical interface to the bridge on the host at the same time, as shown below. Now, with MACVLAN, you can bind a physical interface that is associated with a MACVLAN directly to namespaces, without the need for a bridge. There are five MACVLAN types: 1. Private: doesn’t allow communication between MACVLAN instances on the same physical interface, even if the external switch supports hairpin mode. 2. VEPA: data from one MACVLAN instance to the other on the same physical interface is transmitted over the physical interface. Either the attached switch needs to support hairpin mode or there must be a TCP/IP router forwarding the packets in order to allow communication. 3. Bridge: all endpoints are directly connected to each other with a simple bridge via the physical interface. 4. Passthru: allows a single VM to be connected directly to the physical interface. 5. 
Source: the source mode is used to filter traffic based on a list of allowed source MAC addresses to create MAC-based VLAN associations. Please see the commit message. The type is chosen according to different needs. Bridge mode is the most commonly used. Use a MACVLAN when you want to connect directly to a physical network from containers. Here’s how to set up a MACVLAN: # ip link add macvlan1 link eth0 type macvlan mode bridge # ip link add macvlan2 link eth0 type macvlan mode bridge # ip netns add net1 # ip netns add net2 # ip link set macvlan1 netns net1 # ip link set macvlan2 netns net2 This creates two new MACVLAN devices in bridge mode and assigns these two devices to two different namespaces. IPVLAN is similar to MACVLAN with the difference being that the endpoints have the same MAC address. IPVLAN supports L2 and L3 mode. IPVLAN L2 mode acts like a MACVLAN in bridge mode. The parent interface looks like a bridge or switch. In IPVLAN L3 mode, the parent interface acts like a router and packets are routed between endpoints, which gives better scalability. Regarding when to use an IPVLAN, the IPVLAN kernel documentation says that MACVLAN and IPVLAN “are very similar in many regards and the specific use case could very well define which device to choose. if one of the following situations defines your use case then you can choose to use ipvlan – (a) The Linux host that is connected to the external switch / router has policy configured that allows only one mac per port. (b) No of virtual devices created on a master exceed the mac capacity and puts the NIC in promiscuous mode and degraded performance is a concern. 
(c) If the slave device is to be put into the hostile / untrusted network namespace where L2 on the slave could be changed / misused.” Here’s how to set up an IPVLAN instance: # ip netns add ns0 # ip link add name ipvl1 link eth0 type ipvlan mode l2 # ip link set dev ipvl0 netns ns0 This creates an IPVLAN device named ipvl0 with mode L2, assigned to namespace ns0. MACVTAP/IPVTAP is a new device driver meant to simplify virtualized bridged networking. When a MACVTAP/IPVTAP instance is created on top of a physical interface, the kernel also creates a character device/dev/tapX to be used just like a TUN/TAP device, which can be directly used by KVM/QEMU. With MACVTAP/IPVTAP, you can replace the combination of TUN/TAP and bridge drivers with a single module: Typically, MACVLAN/IPVLAN is used to make both the guest and the host show up directly on the switch to which the host is connected. The difference between MACVTAP and IPVTAP is same as with MACVLAN/IPVLAN. Here’s how to create a MACVTAP instance: # ip link add link eth0 name macvtap0 type macvtap MACsec (Media Access Control Security) is an IEEE standard for security in wired Ethernet LANs. Similar to IPsec, as a layer 2 specification, MACsec can protect not only IP traffic but also ARP, neighbor discovery, and DHCP. The MACsec headers look like this: The main use case for MACsec is to secure all messages on a standard LAN including ARP, NS, and DHCP messages. Here’s how to set up a MACsec configuration: # ip link add macsec0 link eth1 type macsec Note: This only adds a MACsec device called macsec0 on interface eth1. For more detailed configurations, please see the “Configuration example” section in this MACsec introduction by Sabrina Dubroca. The VETH (virtual Ethernet) device is a local Ethernet tunnel. Devices are created in pairs, as shown in the diagram below. Packets transmitted on one device in the pair are immediately received on the other device. 
When either device is down, the link state of the pair is down. Use a VETH configuration when namespaces need to communicate to the main host namespace or between each other. Here’s how to set up a VETH configuration: # ip netns add net1 # ip netns add net2 # ip link add veth1 netns net1 type veth peer name veth2 netns net2 This creates two namespaces, net1 and net2, and a pair of VETH devices, and it assigns veth1 to namespace net1 and veth2 to namespace net2. These two namespaces are connected with this VETH pair. Assign a pair of IP addresses, and you can ping and communicate between the two namespaces. Similar to the network loopback devices, the VCAN (virtual CAN) driver offers a virtual local CAN (Controller Area Network) interface, so users can send/receive CAN messages via a VCAN interface. CAN is mostly used in the automotive field nowadays. For more CAN protocol information, please refer to the kernel CAN documentation. Use a VCAN when you want to test a CAN protocol implementation on the local host. Here’s how to create a VCAN: # ip link add dev vcan1 type vcan Similar to the VETH driver, a VXCAN (Virtual CAN tunnel) implements a local CAN traffic tunnel between two VCAN network devices. When you create a VXCAN instance, two VXCAN devices are created as a pair. When one end receives the packet, the packet appears on the device’s pair and vice versa. VXCAN can be used for cross-namespace communication. Use a VXCAN configuration when you want to send CAN message across namespaces. Here’s how to set up a VXCAN instance: # ip netns add net1 # ip netns add net2 # ip link add vxcan1 netns net1 type vxcan peer name vxcan2 netns net2 An IPOIB device supports the IP-over-InfiniBand protocol. This transports IP packets over InfiniBand (IB) so you can use your IB device as a fast NIC. The IPoIB driver supports two modes of operation: datagram and connected. In datagram mode, the IB UD (Unreliable Datagram) transport is used. 
In connected mode, the IB RC (Reliable Connected) transport is used. The connected mode takes advantage of the connected nature of the IB transport and allows an MTU up to the maximal IP packet size of 64K. For more details, please see the IPOIB kernel documentation. Use an IPOIB device when you have an IB device and want to communicate with a remote host via IP. Here’s how to create an IPOIB device: # ip link add ipoib0 type ipoib mode connected NLMON is a Netlink monitor device. Use an NLMON device when you want to monitor system Netlink messages. Here’s how to create an NLMON device: # ip link add nlmon0 type nlmon # ip link set nlmon0 up # tcpdump -i nlmon0 -w nlmsg.pcap This creates an NLMON device named nlmon0 and sets it up. Use a packet sniffer (for example, tcpdump) to capture Netlink messages. Recent versions of Wireshark feature decoding of Netlink messages. A dummy interface is entirely virtual like, for example, the loopback interface. The purpose of a dummy interface is to provide a device to route packets through without actually transmitting them. Use a dummy interface to make an inactive SLIP (Serial Line Internet Protocol) address look like a real address for local programs. Nowadays, a dummy interface is mostly used for testing and debugging. Here’s how to create a dummy interface: # ip link add dummy1 type dummy # ip addr add 1.1.1.1/24 dev dummy1 # ip link set dummy1 up The IFB (Intermediate Functional Block) driver supplies a device that allows the concentration of traffic from several sources and the shaping incoming traffic instead of dropping it. Use an IFB interface when you want to queue and shape incoming traffic. 
Here's how to create an IFB interface: # ip link add ifb0 type ifb # ip link set ifb0 up # tc qdisc add dev ifb0 root sfq # tc qdisc add dev eth0 handle ffff: ingress # tc filter add dev eth0 parent ffff: u32 match u32 0 0 action mirred egress redirect dev ifb0 This creates an IFB device named ifb0 and replaces the root qdisc scheduler with SFQ (Stochastic Fairness Queueing), which is a classless queueing scheduler. Then it adds an ingress qdisc scheduler on eth0 and redirects all ingress traffic to ifb0. For more IFB qdisc use cases, please refer to this Linux Foundation wiki on IFB. netdevsim is a simulated networking device which is used for testing various networking APIs. At this time it is particularly focused on testing hardware offloading, tc/XDP BPF, and SR-IOV. A netdevsim device can be created as follows: # ip link add dev sim0 type netdevsim # ip link set dev sim0 up To enable tc offload: # ethtool -K sim0 hw-tc-offload on To load XDP BPF or tc BPF programs: # ip link set dev sim0 xdpoffload obj prog.o To add VFs for SR-IOV testing: # echo 3 > /sys/class/net/sim0/device/sriov_numvfs # ip link set sim0 vf 0 mac To change the VF numbers, you need to disable them completely first: # echo 0 > /sys/class/net/sim0/device/sriov_numvfs # echo 5 > /sys/class/net/sim0/device/sriov_numvfs Note: netdevsim is not compiled in RHEL by default. Troubleshooting FDB table wrapping in Open vSwitch
When more MAC addresses exist in the network than can be held in the configured FDB table size and all the MAC addresses are seen frequently, a lot of ping/ponging in the table can happen. The more ping/ponging there is, the more CPU resources are needed to maintain the table. In addition, if traffic is received from evicted MAC addresses, the traffic is flooded out of all ports. 1 The algorithm for removing older entries in Open vSwitch is as follows. On the specific bridge, the port with the most FDB entries is found and the oldest entry is removed. In addition to the FDB table updates, Open vSwitch also has to clean up the flow table when an FDB entry is removed. This is done by the Open vSwitch revalidator thread. Because this flow table cleanup takes quite a bit of CPU cycles, the first indication you might have of an FDB table wrapping issue is a high revalidator thread utilization. The following example shows a high revalidator thread utilization of around 83% (deduced by adding the percentages shown in the CPU% column) in an idle system: $ pidstat -t -p `pidof ovs-vswitchd` 1 | grep -E "UID|revalidator" 07:37:56 AM UID TGID TID %usr %system %guest %CPU CPU Command 07:37:57 AM 995 - 188565 5.00 5.00 0.00 10.00 2 |__revalidator110 07:37:57 AM 995 - 188566 6.00 4.00 0.00 10.00 2 |__revalidator111 07:37:57 AM 995 - 188567 6.00 5.00 0.00 11.00 2 |__revalidator112 07:37:57 AM 995 - 188568 5.00 5.00 0.00 10.00 2 |__revalidator113 07:37:57 AM 995 - 188569 5.00 5.00 0.00 10.00 2 |__revalidator116 07:37:57 AM 995 - 188570 5.00 6.00 0.00 11.00 2 |__revalidator117 07:37:57 AM 995 - 188571 5.00 5.00 0.00 10.00 2 |__revalidator114 07:37:57 AM 995 - 188572 5.00 6.00 0.00 11.00 2 |__revalidator115 Let’s figure out if the high revalidator thread CPU usage is related to the FDB requesting a cleanup. This can be done by inspecting the coverage counters. 
The following shows all coverage counters (that have a value higher than zero) related to causes for the revalidator running: $ ovs-appctl coverage/show | grep -E "rev_|Event coverage" Event coverage, avg rate over last: 5 seconds, last minute, last hour, hash=e4a796fd: rev_reconfigure 0.0/sec 0.067/sec 0.0144/sec total: 299 rev_flow_table 0.0/sec 0.000/sec 0.0003/sec total: 2 rev_mac_learning 20.4/sec 18.167/sec 12.4039/sec total: 44660 In the above output, you can see that rev_mac_learning has triggered the revalidation process about 20 times per second. This is quite high. In theory, it could still happen due to the normal FDB aging process, although in that specific case the last minute/hour values should be lower. Hower normal aging can be isolated by using the same coverage counters: $ ovs-appctl coverage/show | grep -E "mac_learning_|Event" Event coverage, avg rate over last: 5 seconds, last minute, last hour, hash=086fdd98: mac_learning_learned 1836.2/sec 1157.800/sec 1169.0800/sec total: 7752613 mac_learning_expired 0.0/sec 0.000/sec 1.1378/sec total: 4353 As you can see, there are mac_learning_learned and mac_learning_expired counters. In the above output, you can see a lot of new MAC addresses have been learned: around 1,836 per second. For an FDB table with the size of 2K, this is extremely high and would indicate we are replacing FDB entries. 
If you are running Open vSwitch v2.10 or newer, it has additional coverage counters:

$ ovs-appctl coverage/show | grep -E "mac_learning_|Event"
Event coverage, avg rate over last: 5 seconds, last minute, last hour,  hash=0ddb1578:
mac_learning_learned       0.0/sec     0.000/sec       10.6514/sec   total: 38345
mac_learning_expired       0.0/sec     0.000/sec        2.2756/sec   total: 8192
mac_learning_evicted       0.0/sec     0.000/sec        8.3758/sec   total: 30153
mac_learning_moved         0.0/sec     0.000/sec        0.0000/sec   total: 1

Explanation of the above:

mac_learning_learned: Shows the total number of learned MAC entries
mac_learning_expired: Shows the total number of expired MAC entries
mac_learning_evicted: Shows the total number of evicted MAC entries, that is, entries moved out due to the table being full
mac_learning_moved: Shows the total number of "port moved" MAC entries, that is, entries where the MAC address moved to a different port

Now, how can you determine which bridge has an FDB wrapping issue? For v2.9 and earlier, it's a manual process of dumping the FDB table a couple of times, using the command ovs-appctl fdb/show, and comparing the entries.
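Conceptually, these counters describe an LRU-style table with idle-timeout expiry and oldest-entry eviction. The following is a toy Python model — not the real Open vSwitch implementation, and the class, sizes, and timeout are invented for illustration — showing how learned, expired, evicted, and moved get incremented:

```python
import time
from collections import OrderedDict

class MacTable:
    """Toy FDB model (not OVS code): fixed-size table with idle-timeout
    expiry and oldest-entry eviction when full, tracking the same four
    counters that OVS 2.10+ exposes."""

    def __init__(self, max_size, idle_timeout=300.0):
        self.max_size = max_size
        self.idle_timeout = idle_timeout
        self.entries = OrderedDict()  # mac -> (port, last_seen), oldest first
        self.stats = {"learned": 0, "expired": 0, "evicted": 0, "moved": 0}

    def learn(self, mac, port, now=None):
        now = time.monotonic() if now is None else now
        self._expire(now)
        if mac in self.entries:
            old_port, _ = self.entries.pop(mac)
            if old_port != port:
                self.stats["moved"] += 1      # MAC showed up on another port
        else:
            self.stats["learned"] += 1
            if len(self.entries) >= self.max_size:
                self.entries.popitem(last=False)  # table full: evict oldest
                self.stats["evicted"] += 1
        self.entries[mac] = (port, now)       # (re)insert as most recent

    def _expire(self, now):
        # Drop entries idle for longer than the timeout.
        while self.entries:
            mac, (_, last_seen) = next(iter(self.entries.items()))
            if now - last_seen < self.idle_timeout:
                break
            self.entries.popitem(last=False)
            self.stats["expired"] += 1

# Push 10 MACs through a 4-entry table: constant churn, just like wrapping.
fdb = MacTable(max_size=4)
for i in range(10):
    fdb.learn(f"00:00:00:00:00:{i:02x}", port=1, now=float(i))
print(fdb.stats)  # {'learned': 10, 'expired': 0, 'evicted': 6, 'moved': 0}
```

The point of the model is the last line: with more active MACs than table slots, the evicted counter keeps climbing even though nothing expired, which is exactly the signature the real counters show during FDB wrapping.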
For v2.10 and higher, a new command was introduced, ovs-appctl fdb/stats-show, which shows all the above statistics on a per-bridge basis:

$ ovs-appctl fdb/stats-show ovs0
Statistics for bridge "ovs0":
  Current/maximum MAC entries in the table: 8192/8192
  Total number of learned MAC entries     : 52779
  Total number of expired MAC entries     : 8192
  Total number of evicted MAC entries     : 36395
  Total number of port moved MAC entries  : 1

NOTE: The statistics can be cleared with the command ovs-appctl fdb/stats-clear, for example, to get a per-second rate:

$ ovs-appctl fdb/stats-clear ovs0; sleep 1; ovs-appctl fdb/stats-show ovs0
statistics successfully cleared
Statistics for bridge "ovs0":
  Current/maximum MAC entries in the table: 8192/8192
  Total number of learned MAC entries     : 1902
  Total number of expired MAC entries     : 0
  Total number of evicted MAC entries     : 1902
  Total number of port moved MAC entries  : 0

With Open vSwitch, you can easily adjust the size of the FDB table, and it's configurable per bridge. The command to do this is as follows:

ovs-vsctl set bridge <bridge> other-config:mac-table-size=<size>

When you change the configuration, take note of the following. Why not change the default to 1 million and stop worrying about this? Resource consumption: each entry in the table allocates memory. Although Open vSwitch allocates memory only when the entry is in use, changing the default to a too-high value could become a problem, for example, when someone performs a MAC flooding attack. So what would be the correct size to configure? This is hard to tell and depends on your use case. As a rule of thumb, you should configure your table a bit larger than the average number of active MAC addresses on your bridge.

If you would like to experiment with the counters, the following reproducer script from Jiri Benc lets you reproduce the effects of FDB wrapping.
Create an Open vSwitch bridge:

$ ovs-vsctl add-br ovs0
$ ip link set ovs0 up

Create the reproducer script:

$ cat > ~/reproducer.py <<EOF
#!/usr/bin/python
from scapy.all import *

data = [(str("00" + str(RandMAC())[2:]), str(RandIP())) for i in range(int(sys.argv[1]))]

s = conf.L2socket(iface="ovs0")
while True:
    for mac, ip in data:
        p = Ether(src=mac, dst=mac)/IP(src=ip, dst=ip)
        s.send(p)
EOF
$ chmod +x ~/reproducer.py

NOTE: The reproducer Python script requires Scapy to be installed.

Start the reproducer:

$ ./reproducer.py 10000

Now you can use the counter commands in the previous troubleshooting section to see the FDB table wrapping information and then set the size of the FDB appropriately.

Many of Red Hat's products, such as Red Hat OpenStack Platform and Red Hat Virtualization, are now using Open Virtual Network (OVN), a sub-project of Open vSwitch. Red Hat OpenShift Container Platform will be using OVN soon. Some other virtual networking articles on the Red Hat Developer blog: The post Troubleshooting FDB table wrapping in Open vSwitch […]

The post Non-root Open vSwitch in RHEL will be the integration of the `--ovs-user` flag to allow for an unprivileged user to interact with Open vSwitch. Running as root can solve a lot of pesky problems. Want to write to an arbitrary file? No problem. Want to load kernel modules? Go for it! Want to sniff packets on the wire? Have a packet dump. All of these are great when the person commanding the computer is the rightful owner. But the moment the person in front of the keyboard isn't the rightful owner, problems occur. There's probably an astute reader who has put together some questions about why even bother with locking down the OvS binaries to non-root users. After all, the OvS switch uses netlink to tell the kernel to move ports, and voila! It happens! That won't be changing. But, that's expected. On the other hand, it would be good to restrict Open vSwitch as much as possible.
As an example, there’s no need for Open vSwitch to have the kinds of privileges which allow writing new binaries to /bin. Additionally, Open vSwitch should never need access to write to Qemu disk files. These sorts of restrictions help to keep Open vSwitch confined to a smaller area of impact. Since Open vSwitch version 2.5, the infrastructure has been available to run as a non-root user, but it always seemed a bit scary to turn it on. There were concerns about interaction with Qemu, libvirt, and DPDK. Even further, issues would really crop up with selinux. Lots of background work has been going on to address these, and after running this way for a while in Fedora, we think we’ve worked out the worst of the kinks. So what do you need to do to ensure your Open vSwitch instance runs as a non-root user? Ideally nothing; a fresh install of the openvswitch rpm will automatically ensure that everything is configured properly to run as a non-root user. This is evident when checking with ps: $ ps aux | grep ovs openvsw+ 15169 0.0 0.0 52968 2668 ? S<s 10:30 0:00 ovsdb-s openvsw+ 15214 200 0.3 5840636 229332 ? S<Lsl 10:30 809:16 ovs-vs For new installs, this should be sufficient. Even DPDK devices will work when using a vfio-based PMD (most PMDs support vfio, so you really should use it). Users who upgrade their Open vSwitch versions may find that the Open vSwitch instances run as root. This is intentional; we didn’t want to break any existing setups. Yet all of the fancy infrastructure is there allowing you to switch if you so desire. Just a few simple steps to take: /etc/sysconfig/openvswitchand modify the OVS_USER_IDvariable to openvswitch:hugetlbfs(or whatever user you desire) /etc/openvswitch, /var/log/openvswitch, and /dev/vfio) have the correct ownership/permissions. This includes files and sub-directories. systemctl start openvswitch). 
If something goes wrong in this step, usually it will be evident in either journalctl (use journalctl -xe -u ovsdb-server, for example) or in the log files. Once the non-root changes are in effect, you could still encounter some permissions issues that aren't evident from journalctl. The most common one is when using libvirtd to start your VMs. In that case, the default libvirt configuration (either Group=root or Group=qemu) may not grant the correct group id to access vhost-user sockets. This can be configured by editing the Group= setting in the /etc/libvirt/qemu.conf configuration file to match Open vSwitch's group (again, the default is hugetlbfs). I hope that was helpful! The post Non-root Open vSwitch in RHEL appeared first on Red Hat Developer Blog.
InventoryItem. That's what the post set is covering. By the end we have auto-running tests, intellisense, some basic refactorings, prj/sln support, things like go-to-definition.

Sorry, Greg. I'm not really feeling you on this one. Of course, I'm not the one working in Linux most of the time… I *do* use Sublime for my non-C# work and I think it's a great editor, but I feel like you're trying to fit a square peg into a round hole here. To me, tools like Sublime are awesome… if you're not working in a statically typed language. What tools are the Java community using? I'm guessing they use IDEs just like the C# devs do. But I don't think the tooling is really the issue. I think, like you said above, that the problem is it's Windows-only and expensive relative to the common dynamic language toolsets. For my complaint on the cost of Windows dev, you can check out my blog article. I also have to flag you for a straw man argument for citing the cost of VS Ultimate. No human being needs it and I'm sure you know that. VS Professional is $500 and all anyone ever really needs. What I think would be awesome would be an open source effort at getting the best functionality of R# and VS into some sort of modular plugin framework, so that it could be done independent of keystroke, text editor, and OS. Now *that's* an open source project I would jump on. Is anyone doing it?

Also, yesterday was a classic example of the problems I have. Visual Studio open -> git pull in command shell. Oops, files are locked by VS. There are some things I can't do in Sublime well. A big one is large namespace refactorings, in which case I will bring out another tool like R#. For day to day stuff I find I am actually faster in Sublime. One interesting bit that I made the point of here, but not strongly enough, is that often codebases written using tools require such tools to work in them efficiently.
Dependency messes are a good example of this, and I am currently cleaning one up that I do not believe would have been created without the use of such tools, as it's really painful to work with without them.

Further to my previous posts: Visual Studio 2012 + Resharper takes 45 seconds to open on my Core2Duo/4GB laptop. I just timed this and, I admit, this is slow! However, I experienced no lock ups over a whole day of usage as you have described… maybe I'm lucky. I have seen really large solutions in Visual Studio. It doesn't cope well. I've seen solutions with over 100 projects (this is insane, and Sublime is not a solution to this insanity).

gregyoung, what you've fallen victim to is called confirmation bias. I have tried other tools including Sublime. I use Notepad++ if I want to open something quickly (probably on a daily basis); this supports many languages too. I'm also a big fan of other JetBrains tools, like IntelliJ IDEA and PyCharm. I'm not criticizing Sublime for what it is. It may be able to do >90% of what you spend your time on – but it simply can't do the rest. For an example, have you ever read "Working Effectively with Legacy Code", specifically, the "Legacy Code Dilemma"? Tools like Resharper enable you to more easily bring code under test that you wouldn't be able to do with the same speed or confidence. I made sure I counted the >5s lock ups or long unit test runs for a whole day. Guess how many? Zero. Although I will admit VS takes an age to load up! Although, that isn't very relevant, as I didn't actually restart Visual Studio. Have you actually done a whole day coding in Visual Studio 2012 or 2013? It could be a 2010 legacy, as that had some more lockups.

Of course this is said without ever actually trying the other side. What are the problems VS and R# solve well that aren't solved well outside? How heavy is R#/VS? How many times per day does it just lock up for 15-30 seconds? Why does it at times take 15-30 seconds to run a single unit test?
There are definite downsides to the massive tooling chain. I actually find I am now just as fast overall using either tool. In fact much of the tooling outside of VS is far superior to the VS tooling. Emmet would be a perfect example of this (formerly known as zencoding). By the end of the series we will also have fully automated build/minimized test runs as well as further tooling which simply doesn't exist in VS. That said, there are some things VS does really well: debugging, code profiling, loading up unmanaged dumps. It's worth keeping a copy around for these things. The fact that my tooling also works in basically any other language is just a bonus. Not having to spend 3-6 months relearning a heavy tool chain because I want to be working in erlang is nice.

This is one of the most retarded things I've read this year. Next, you'll be debating Vi vs Emacs. Resharper is a great tool that enables a bunch of things that wouldn't be possible otherwise.

I have a toolbox. I don't like carrying it round with me everywhere because it's too heavy. I keep hitting myself on my thumb with my hammer. From now on, I'm only going to use a screwdriver.

You can have my R# when you pry it from my cold dead IDE.

Tools like Sublime have things like class/filename completion in search windows. That said, find usages is quite useful. It's something I miss when working in code that is new.

While the ninja tricks videos are entertaining, one of the main advantages of R# is actually not typing speed enhancements. I can easily imagine a master of Vim or Emacs equipped with handy macros beating R# editing speed, but as you said yourself, most of the time a developer spends on the code is looking at it and thinking. And that's where R# analysis shines: it really aids you to see your code as an interconnected entity and easily jump from one piece of connected logic to another and back while thinking.
Especially when you are getting used to existing codebases you inherit, it is vital to see what piece of code references what, what inherits what, class/file name completion in the search window, all that fun. <3 R#
"Sven" == Sven Luther <sven.luther@wanadoo.fr> writes: Sven> On Sun, Dec 26, 2004 at 06:24:19PM -0800, shyamal wrote: >> "Sven" == Sven Luther <sven.luther@wanadoo.fr> writes: >> Sven> We worked out some missing patches in the debian kernel for Sven> support of newer kernels with benh, well, benh sent me the Sven> needed patches, and i added it to the debian upcoming Hi Sven, I'm presuming that the patches you are applying are similar, or a superset, of what I used (which is in #287030). I found that solution because of following up on #276397 "Kernel won't wrk on XServe G5". I think that if your current fix for PowerMac7,3 has the patch below added then perhaps the XServe problem will go away (I constructed it by hand, so it might not apply cleanly but you get the idea). I don't have an XServe, and I've not tested this, but it's a thought :-) Cheers, Shyamal ---, }, + { "RackMac3,1", "XServe G5", + PMAC_TYPE_POWERMAC_G5, g5_features, + 0, + }, #endif /* CONFIG_POWER4 */ };
Modified Tempsensor Program!!!!

OK, I got my variation of the temperature sensor program done, thanks to the help of you guys and the NerdKits staff! There is a video here. Two questions:

It seems that the more current or voltage drawn away (say, from LEDs, the LCD backlight, and other power-consuming stuff), the higher the temperature. This is a problem because when I have the backlight on or the batteries are getting low (I know I shouldn't use a backlight on batteries, but I'm using rechargeables and there are no 120V outlets on my robot), the temperature goes up.

How can I graph the temperature? I see graphs on some NerdKits videos and I've seen a piece of code that outputs temperature via the serial port, but I don't know how to read it via my computer. BTW I'm running Vista (Thank god Windows Seven is coming out!!!)

Hi dylan, I saw the temperature increasing when the battery was low (or if there was a lot of load on it). I put this down to (and someone will correct me if I'm wrong) the ADC comparing the voltage from the sensor with the voltage from the +5 rail. If the rail voltage alters (or is low) then the comparison is different, thus giving a false reading for the temperature. To graph the temperature (I've just played with this), check out the meat thermometer program; included with the code is a python script that does the graphing for you. You can use the meat thermometer program with the standard temp sensor circuit and then run the pc-pygame.py program to graph the results. There may be some tweaking needed to get everything to how you want it, but the basics are there (I call them the basics but I don't really understand them). Hope this helps. BTW - nice video. I like the idea of having a temperature scale based around room temperature, nice touch. Keep making the vids!
Thanks Mcai8sh4, I'll try the graph on the meatsensor program. Also, about the temperature sensor, can I run a separate battery on it? Or does it have to be the same one? Thanks for your help, I really appreciate it :)

How do I run the python script? And will the python show the graph or do I have to download a program to do that? Please help! Thanks.

Hi Dylan, the python script needs to be 'interpreted' by python. To get it to run you must download and install python (I assume you are using windows); you can get it from HERE. I think the process is quite simple, but I don't use Win so I'm not sure on the details. For it to run I imagine that you need the libraries that it uses - these are the import lines at the top of the program, i.e.

import threading
import time
import serial
import sys
import pygame

once these are installed on your machine, I think it should all work. Like I mentioned, I haven't used python on windows so I'm not sure on the details, but I hope this points you in the right direction.

There are millions of pythons! Which one should I use? I have version 2.5.2 installed, but the recommended current version seems to be 2.6.
Hopefully someone who has a little more experience than me will be able to step in and assist you further if you still get stuck. Best of luck, keep us all posted on how you get on (and if you get it working, maybe tell us all how you did it) Maybe I should E-mail support. The mac version sounds a bit different. When I download it I get two main files. Python (command line) and IDLE (Python GUI). If nobody knows how to do it on windows, then I'll ask support. I think you need the PySerial module to connect to the nerdkit with python. here are download links: win version other OSs . this should help if you get an error about importing serial. Please log in to post a reply.
Those days are over. Not only was that very kludgey, but it wasn't all that efficient either. Now with .NET we can create a CLR function to do all the heavy lifting. First you will need to create a Database Project. For this post I will be documenting how I did it using Visual Studio 2010. First thing you will do is open up your IDE and navigate to "New Project --> Database --> SQL Server". Take special note on how you name this project. I would name it something like {dbName}CLR, as it will be the project where you will put all of your CLR functions for a specific database. Next you will "Add a New Item" and choose the type "Class". This class file can be named anything, but I would keep it sort of generic, as it will most likely be the same project that you add all of your functions to. Next, cut and paste the following code into your class file:

using System;
using System.Collections;
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

public partial class UserDefinedFunctions
{
    [SqlFunction(Name = "fnToList", FillRowMethodName = "FillRow", TableDefinition = "ID NVARCHAR(255)")]
    public static IEnumerable SqlArray(SqlString str, SqlChars delimiter)
    {
        if (delimiter.Length == 0)
            return new string[1] { str.Value };
        return str.Value.Split(delimiter[0]);
    }

    public static void FillRow(object row, out SqlString str)
    {
        str = new SqlString((string)row);
    }
};

Before you can deploy the class, you will need to enable CLR on your SQL Server. To do this you can execute the following commands:

sp_configure 'clr enabled', 1;
reconfigure with override;

After you compile the class, which will hopefully be error-free, you can then deploy the project. When you deploy the project it will push the assembly to the database that you specified during the project creation. Visual Studio will also create the user-defined function for you.
If you did everything correctly, you should see a new function under Table-valued Functions in SSMS. To test this, you can simply run a quick query against the function, and it should produce the split values as a result set.
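The original test query and its output didn't survive here, but given the function definition above, a smoke test would look something like the following. The sample string is made up; only the fnToList name and its signature come from the code:

```sql
-- Hypothetical smoke test for the deployed CLR function.
-- fnToList takes the string to split and a one-character delimiter,
-- and returns a table with a single ID NVARCHAR(255) column.
SELECT ID
FROM dbo.fnToList(N'alpha,bravo,charlie', N',');
-- Expected: three rows -- 'alpha', 'bravo', 'charlie'
```

Because the function is table-valued, you can also CROSS APPLY it against a column of delimited strings to split many rows at once.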
More Info on the October 2002 DNS Attacks 232

MondoMor writes "One of the guys who invented DNS, Paul Mockapetris, has written an article at ZDNet about the October '02 DNS attacks. Quoting the article: "Unlike most DDoS attacks, which fade away gradually, the October strike on the root servers stopped abruptly after about an hour, probably to make it harder for law enforcement to trace." Interesting stuff."

Damn terrorists... (Score:4, Funny)

Re:Damn terrorists... (Score:4, Interesting) A better description would be anarchists. Anarchy is lawlessness and disorder as a result of governmental failure (in this case, to set up a system where the root servers are safe, but not particularly so). But then, we can't say that, can we? Anarchy is popular here on slashdot.

Re:Damn terrorists... (Score:2, Insightful) There is such a thing as good hackers and even good crackers, but a stupid DoS against the root DNS servers? How can you defend that?

Re:Wow, you're oblivious (Score:2)

Solution? (Score:4, Funny)

Re:Solution? (Score:2)

Re:Solution? (Score:5, Insightful) No one has a legitimate need for streaming several hundreds or thousands of pings per second... Or at least put a lid on it when someone starts sending lots of pings for more than a couple of seconds...

Re:Solution? (Score:5, Insightful) Doing so would require remembering who pinged, and when, for the last few seconds. Under normal conditions, that sounds trivial, but pings don't cause any problems under "normal" conditions. In a DDoS, you might have a million machines all pinging. How do you propose to store, look up, and update the last ping time for 100 million pings per second? A quick off-the-cuff calculation shows that *just the storage* for 10 seconds of such recording would take around 8GB (32-bit IP and 32-bit timestamp). That doesn't include the CPU time to find matches (not that bad, since you can use the IP as an array index, but you can almost guarantee a continually invalid CPU cache) or update the list.
And, that assumes you *always* dedicate that 8GB to each server running on the machine, since otherwise the search you propose requires adding new pings to a dynamic list, making the lookup time become very non-trivial. More importantly, even if you *do* manage such a feat (or even get rid of ping altogether), attackers can still use other services (like, for example, DNS lookups, which I'd like to see a DNS server try to stop supporting). Actually, it surprises me that no DDoS clients use SSH yet... Although not every machine (i.e., Windows) runs an attackable server, a well-planned attack could suck up significant bandwidth, memory, *and* CPU power, all in one tidy packet.

Re:Solution? (Score:3, Insightful) You don't need to keep track of every ping. Keep track of each IP and the number of pings received. Flush the data periodically to expire them. Length of attack becomes irrelevant, as does the exact ping rate (as far as storage goes, anyway). So 1 million * 12-byte record (4-IP, 4-last ping time, 4-count) = 12MB. The CPU time required to check would probably still make this infeasible.

Re:Solution? (Score:2) It certainly is infeasible. There is a simple way to make this work. At the edges (and probably adjacent routers too), set a rate limit on ICMP. No tracking of IPs, just counting traffic and dropping the excess. As a bonus, the software to do this is already deployed.

brilliant! is this what the article suggests? (Score:2)

Re:Solution? (Score:2) Yes, you *would* need to check into the packet to see if it's a ping and store a table of pings per second per IP. But you could use a simple counter. You wouldn't need to actually store every packet. Look at the packet, "oh, it's a ping", check if the IP is in the list, increase the counter; if the counter shows an abnormal amount of pings in a short period, don't send it towards its destination. Then you'd go through the table at regular intervals and remove IPs that haven't increased since the last check...
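The counter-table idea being debated in these comments can be sketched in a few lines. This is a toy model, not router code; the class name, threshold, and flush interval are invented for illustration:

```python
from collections import defaultdict

class PingRateLimiter:
    """Toy per-source ICMP rate limiter: count pings per source IP and
    drop traffic from sources that exceed a threshold within the current
    window; flushing the table periodically expires old counts, so
    storage stays bounded by the number of active sources."""

    def __init__(self, max_pings_per_window=100):
        self.max_pings = max_pings_per_window
        self.counts = defaultdict(int)  # src_ip -> pings seen this window

    def allow(self, src_ip):
        """Return True if this ping should be forwarded."""
        self.counts[src_ip] += 1
        return self.counts[src_ip] <= self.max_pings

    def flush(self):
        """Called periodically (e.g. once a second) to start a new window."""
        self.counts.clear()

limiter = PingRateLimiter(max_pings_per_window=100)
flood = sum(limiter.allow("10.0.0.1") for _ in range(10_000))
normal = sum(limiter.allow("10.0.0.2") for _ in range(5))
print(flood, normal)  # 100 5 -> the flood is capped, normal traffic untouched
```

The model also makes the commenters' trade-off visible: per-IP counting bounds what any one source can send but costs a table lookup per packet, whereas the plain ICMP rate limit mentioned above costs almost nothing but throttles well-behaved sources along with the flood.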
The crucial part here is CPU power. You need to be able to look into packets without slowing down the routing. But many of today's more powerful routers are already capable of looking into packets at line speed, especially if you do it close to the source. You could settle for doing it at the ISP connection routers, the ones closest to the subscriber. Those routers rarely need to deal with a multitude of 1 or 10Gbit lines, or at least most are capped at the bandwidth the customer subscribes for. That way you wouldn't need to burden the more central routers with this. Other DoS attacks might be harder to fight though... A really busy web server configured to reverse-DNS-lookup every connection might actually produce a frightening amount of DNS lookups per second. =/ But it should be possible to recognise abnormal traffic of DNS, SSH and other protocols too...

Re:Solution? (Score:2, Insightful) Secondly, assuming all DDoS are just simple pings is very short-sighted. A much more effective DDoS is to spoof packets from IP addresses that aren't being routed on the internet; when these reach the routers that connect to the name servers, depending on their configs, they would end up flooding their IP routing cache with useless entries, leading to the routers going down, leading to the nameservers being down.

Re:Solution? (Score:2) You probably send your 10000 pings at a more moderate speed. Otherwise your test would end in a few milliseconds. What I'm talking about is of course to stop routing pings when they exceed a certain pings/sec for a period of time. No one *needs* to send 100Mbit/sec pings for several hours. And I don't assume all DDoSes are pings. But some are. And stopping some is better than stopping none. What is really needed though is a new set of protocols that's designed for a world where idiots and bad guys are present. Almost every aspect of the internet structure is designed from the bottom up to be used only by nice people.

Re:Solution?
(Score:5, Informative) Windows has only the most vague concept of a "root" user, and rooting a Windows box takes about 40 lines of code (basically, the problem comes from the GUI: any program running with administrator privilege, such as a virus scanner, can spawn additional processes also running as the administrator). Making them do so requires nothing more than getting a handle to a text edit control, pasting in the desired malicious code, and using the address of the edit's buffer as a start-of-execution point. All of which *any* user can do.

Re:Solution? (Score:2) Go ahead - root my box. Oh dear, you can't. What a shame. BTW: The Shatter attack is easily preventable. Start the antivirus UI process as part of an isolated job with limited UI privs. It'll be in a separate windowing namespace, and the shatter attack will no longer work. Simon

Re:Solution? (Score:5, Informative) Tell me, do you run *all* your programs in a private UI context? The antivirus program just makes the "classic" example. How about your usually-hidden-but-always-instantiated NVidia setup panel? Any services you run that have a control panel for configuring them (Tardis, for example)? A local web server? One of those annoying (but often necessary for proper functioning of the related device) printer or scanner control panels? Aside from not trusting the so-called "privacy" of running something on a private desktop, you don't even need to bother breaking that layer of security. Just look for something else running as administrator... or backup... or power user... or replicator... or even "guest", which by default has an obscenely high level of privilege (relative to a Unix box, which doesn't even usually *have* an account as conceptually insecure as Windows' guest account). If you've managed to configure a Windows box to have *everything* run as a specific, separate user, in its own UI context, I tip my hat to you.
I also do not envy the hell of making even trivial config changes to such systems, nor do I envy the frustration your users must feel at trying to use such a system productively. Put simply, Windows lacks the *design level* security to make it generally usable yet reasonably safe against its own users. Finally, even if you change the default permissions on "ping" as the parent suggested, under Windows that doesn't do a damned thing to stop a trojan that *includes* its own ping program from working just fine. Remember that, in dealing with a DDoS problem, it doesn't matter if a security expert *can* lock down a given box - it only matters that 99% of the people out there won't bother to fix (or even *know about*) a given exploit allowing raw network access. Re:Solution? (Score:3, Funny) Don't say stuff like that on slashdot. Re:Solution? (Score:2) If you get your trojan onto 8 million boxes, you'll have 8 million boxes flooding their closest router with pings with that specified destination address. If that router can see that "Oh, this user is sending me 1Mbit/second of pings" and after half a second decides to stop forwarding those pings, you'll stop those packets from even *starting* their journey. The attacked box will be DoS'ed for a second instead of for several hours... This has actually happened to a company I used to work for. We got complaints that the internet was slow. I got a sniffer and checked the traffic on our internet link and found out that one of our servers was sending 100Mbit/second of pings at the router. It turned out that someone had hacked the server and installed a program that at a given signal would start DoS'ing a server... All those packets, or at least the 4Mbit/second that our ISP's router was capped at, got onto the internet and probably did some damage at the receiving end. If our ISP's router had done what I suggested, that wouldn't have been the case. And there is no ID system in sight.
Just a small extra check to see if the packet the router just checked the receiving IP of is a ping or not. Re:Solution? (Score:3, Funny) Not only would this directly contradict pr0n's charter of advancing telecommunications technology, but it would also inevitably lead to the banning of pr0n... and nobody wants that. For the sake of our pr0n, let the terrorists have their ping! Re:Solution? (Score:2) Get serious. The ping command is definitely not a tool made to damage other people's computers. And though the article is a little unclear on that issue, it sounds like this attack could in fact not have been done using the ping command. The ping command is used to send legitimate ICMP ECHO REQUEST packets, which a computer according to the standards MUST reply to with an ICMP ECHO REPLY packet. What the attack did was to produce ICMP ECHO REQUEST packets with forged source addresses, so all the replies would be sent to the root DNS servers. This could not have been done by the use of the ping command. You just shouldn't remove useful tools and install firewalls that break the standards just to "improve" your security. Your efforts will be useless either because you are protecting against something that is not a problem, or because you fail to defend against the actual problem, because the attack could have been done in other ways as well. In fact flooding is impossible to defend against, but a correctly configured system is going to be responsive again just a few seconds after the flooding has stopped. Re:Solution? (Score:2) Re:Solution? (Score:3, Interesting) This is just as should be expected... (Score:5, Interesting) Re:This is just as should be expected... (Score:5, Interesting) I'm really curious how "The October attacks showed a greater level of sophistication" than past attacks? As far as I can tell the attacker just had a bunch of cracked boxes with decent pipes to the internet and started a ping -f on all of them. In other news....
(Score:5, Funny) Unfortunately, the thieves didn't wait for law enforcement officials to show up, making it much harder to identify them. Re:In other news.... (Score:5, Funny) How is it a crime to kill cereal [reference.com]? Yeah, I guess it's a bit aggressive, but hardly a crime. They come up with all sorts of weight-watching schemes these days and I suppose cereal killing is just one in the crowd. And just like many other such schemes, this proves that method doesn't work very well, since he suddenly stopped. Killing Cereal... (Score:2) Dalnet DDOS Attacks (Score:5, Interesting) Why don't Dalnet and the FBI (or whoever) get together to solve a mutual problem? Dalnet could get some much-needed help, and the FBI could get some much-needed experience in investigating this sort of attack. They would also be dealing with someone (or some people) who could move on to attacking bigger things. Also if they caught the attackers, they would get some useful publicity, some justification for an increased spend on cyber-deterrence, and the deterrent effect of having the perpetrators suitably punished - as well as putting a genuine menace behind bars. Re:Dalnet DDOS Attacks (Score:5, Insightful) M$ is just as much a part of the problem as well. With more and more cable, DSL and other "always on" connectivity available, more and more of these machines are vulnerable. Scanners out there can easily identify and infect 1000 home users' machines, and these attacks come from them. The actual perpetrator is long gone. All they do is momentarily log in and "fire it off", then they immediately log out, and they are gone. Tracing IPs back to the attacker is just going to identify the innocent machines or owners who are totally unaware of their activity until they either power down their machines or somehow discover it.
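The per-source ping check proposed in the "Re:Solution?" posts above - a router counts pings from each sender and stops forwarding once the rate gets absurd - is essentially a per-source token bucket. Here is a minimal sketch in Python; the rate and burst thresholds are invented for illustration, and a real router would do this in the forwarding path, not in userland:

```python
import time

class PingRateLimiter:
    """Per-source token bucket: allow at most `rate` pings/sec, with `burst` slack."""
    def __init__(self, rate=10.0, burst=20):
        self.rate, self.burst = rate, burst
        self.buckets = {}  # source IP -> (tokens remaining, time of last ping)

    def allow(self, src, now=None):
        now = time.monotonic() if now is None else now
        tokens, last = self.buckets.get(src, (self.burst, now))
        # Refill tokens for the time elapsed since this source last sent a ping.
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens < 1:
            self.buckets[src] = (tokens, now)
            return False      # over the limit: drop the echo request
        self.buckets[src] = (tokens - 1, now)
        return True           # under the limit: forward it

limiter = PingRateLimiter(rate=10.0, burst=20)
# A well-behaved host pinging once a second is never blocked...
assert all(limiter.allow("10.0.0.1", now=float(t)) for t in range(20))
# ...while a zombie firing 1000 pings in the same instant is cut off after the burst.
flood = [limiter.allow("10.0.0.2", now=0.0) for _ in range(1000)]
assert sum(flood) == 20
```

As the posts note, the point of doing this at the subscriber-facing edge is that each such router only sees a handful of sources, so the per-source state stays tiny.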
Re:Dalnet DDOS Attacks (Score:2) But an ISP (or some body such as the FBI) may be able to identify all the packets travelling to an infected machine on its network, and perhaps trace which machine is connecting to it to co-ordinate the attacks - or at least the first machine in a chain. Or perhaps other means of dealing with the problem could be investigated (routing protocols, or whatever). Also, the ISPs which allow outgoing source IP addresses to be spoofed could be identified. If spoofed source IP addresses become a huge problem to significant parts of the internet, those ISPs could be asked, pressurised (or legislated against) in order to stop this - if technically feasible (sorry, but I'm no networking expert). OK, people may not think it worth doing just to save a single IRC network, but it's not a problem that can be ignored for ever while it gets worse and worse (due to the reasons you give in your post) and becomes a threat to more and more of the internet. Re:Dalnet DDOS Attacks (Score:2, Interesting) It is beyond me why the ISP's would even want one crap packet come out of their network. Its costing them money. Their upstream connection costs money... For some interesting numbers go take a look at MyNetWatchman [mynetwatchman.com] These dudes even TELL the ISP's that there is something wrong. But most just get ignored. Truth is most people could care less that their computer is doing something wrong. They just want a bit of email and to surf a bit. Hell most just want it to stay up long enough, and be a bit faster. Considering the 300 programs they are running out of the box. The only way I have ever been able to explain to a person what its about is the apartment analogy. A theif goes into an apartment building and rattles every doorknob. He finds one that opens. He then uses that apartment as a base to sneak around to rattle other doorknobs. Most people get very upset when I tell them someone is basicly trying to break into their house. 
The next words out of their mouths are usually 'who can I report this to?' All I can tell them is no one. Re:Dalnet DDoS Attacks (Score:4, Informative) He basically got his hands on one of the "zombie" trojans the DDoS'ers use, reverse engineered it to find out how it works (and which IRC servers it talks to to receive its commands), wrote his own to connect to said server and waited until the attackers personally logged in. It really is a good read. CRIKEY! Script Kiddie Hunter! (Score:3, Funny) I hadn't read that guy's site in a while because it's too alarmist. But I read the linked GRC article and found roughly 5-15% useful text among all of that. The IRC log was priceless; ^^boss^^ was stupid if he was surprised someone could've figured out how to locate and connect to his IRC server. (I'm not necessarily dissing Gibson with that statement, though; he's alarmist but fairly knowledgeable, although he can sound fairly stupid at points, too.) What struck me is how much his articles read like Crocodile Hunter: CRIKEY!! I've been DDoS'ed by SCRIPT KIDDIES' WIN9x ZOMBIES!! Lucky for me they weren't Win2k or WinXP zombies or I'd be DEAD!! [Imagine the following text centered, large, bold and in a different color] etc., etc.. I actually enjoy Crocodile Hunter, though. Re:Dalnet DDoS Attacks (Score:4, Interesting) With 13 current servers, this means that 8-9 servers can be taken out at one time and have negligible impact on the world's DNS queries, assuming that the outage is at a peak time and the servers are being hit very hard. Practically speaking, the existing root servers are probably built even more toughly, so the remaining 4-5 servers can probably handle shorter outages (such as that mentioned in the article) without significant effort, and even if brought down to 2-3 could probably handle things with some difficulty.
According to root-servers.org, the existing servers are fairly concentrated, with only those in Stockholm, London, and Tokyo not in the United States. Perhaps three more, with one maybe in South Korea, one in Australia, and one in North Africa or the Middle East (Cairo would be ideal to cover both) would be a viable option? I realize that the last is probably going to be questionable for some, given the censorship agendas often in place in the area, but it would help to make further attacks a little more difficult, as well as adding a little prestige and maybe tech investment to the area. Just an idea. As for Dalnet, why isn't the FBI involved? (I'm not aware of current happenings on the network, as I don't use it.) Re:Dalnet DDOS Attacks (Score:2) "How are you going to find a knowledgeable operator for the one in America? That country consists entirely of spammers who can only write English, and service providers who cannot read your complaint written in Korean. I doubt that there exists any American provider or organisation where an employee has the required level of understanding of computing and Korean language to support such a system." Re:Dalnet DDOS Attacks (Score:2) Well they're both motor vehicles which take you from A to B, powered by an internal combustion engine, travelling on the non-internet super-highway. "Here's why no one in the FBI cares about DalNET or their DDoS attacks: No one outside of DalNET gives a shit." Please read the post. The third and fourth paragraphs give a few reasons why it might be useful if they did. "It's pretty damn offtopic. Yes, it deal with DDoS attacks but in no way is it remotely relevant to DNS root servers." The title of the article is "More Info on the October 2002 DNS Attacks". Personally I think a comment about another large-scale internet attack, carried out in the same way, is pretty on-topic. 
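The redundancy argument above - that the world's DNS keeps working as long as *any* of the 13 roots answers - comes down to simple failover in every resolver: try each root in turn and use the first one that responds. A toy sketch (the server names are the real root-server names; the query function and outage are simulated):

```python
def query_roots(servers, send_query):
    """Try each root server in turn; a resolver only needs ONE of them to answer.
    `send_query` returns a response, or raises TimeoutError for a dead server."""
    for server in servers:
        try:
            return server, send_query(server)
        except TimeoutError:
            continue  # server down or being DDoSed: fall through to the next one
    raise RuntimeError("all root servers unreachable")

# The 13 roots, a.root-servers.net through m.root-servers.net.
ROOTS = [f"{chr(c)}.root-servers.net" for c in range(ord("a"), ord("n"))]

# Simulate 9 of the 13 knocked out, as in the October 2002 attack.
dead = set(ROOTS[:9])

def fake_send(server):
    if server in dead:
        raise TimeoutError(server)
    return "referral for .com"

used, answer = query_roots(ROOTS, fake_send)
assert used == "j.root-servers.net" and answer == "referral for .com"
```

Real resolvers are smarter (they remember round-trip times and prefer the fastest root), but the survivability property is the same: an attack only succeeds if it takes out every server on the list at once.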
Re:Dalnet DDOS Attacks (Score:2) No, Seymore's 1977 accord will take you approximately 3/4(B-A) then it breaks down and you get laughed at. The title of the article is "More Info on the October 2002 DNS Attacks". Personally I think a comment about another large-scale internet attack, carried out in the same way, is pretty on-topic. Well, except one in a mission critical DNS based attack and the other is an attack on a bunch of fat guys sitting in their mothers basement jacking off to kitty porn. What outage? (Score:2, Informative) It would take about a week (Score:2, Informative) Re:It would take about a week (Score:2) Doesn't that assume that you're only visiting sites that are already cached on your DNS server? Re:It would take about a week (Score:3, Informative) If you run your own DNS, you should cache it. Re:It would take about a week (Score:2) Re:It would take about a week (Score:2) However, I do know that the Win2k and later series OSs from Microsoft do contain what is called "DNS Client". This client has the job of doing DNS caching. (And a bunch of other stuff I think.) Restarting the thing can be a quick way to do what would otherwise require a reboot. The Win98/ME/95 series stuff had a client too, but it couldnt be cleared without rebooting. Though I think it's timeout was not as long. So yes there is caching going on, one of the main reasons why my first question to my clients is "when did you last reboot?" Re:It would take about a week (Score:4, Informative) Re:It would take about a week (Score:2) Almost all web browsers have caches. Usually, they work correctly. Sometimes they don't. Tools->Internet Options->Settings->Check for newer versions of stored pages: Every visit to the page Re:It would take about a week (Score:2) Your upstream provider almost certainly has a cache. His upstream providers likely have caches. Their upstream providers likely have caches. 
Depending on the exact path taken, a name request might be erratic as to whether (and to what) it resolved. It would probably take a week for killing all the root servers to take down the internet, although some breakage would be noticeable after about 24-36 hours. Things working off of fixed IP addresses would continue to work. If intermediate caching DNS servers kept serving stale addresses until a fresher valid address is known, a lot of the internet would keep on going indefinitely. Re:What outage? (Score:2, Interesting) Long outages would change the whole thing. Imagine that we couldn't read slashdot for a whole week! Responsibility of the ISP (Score:5, Insightful) Well then, isn't it logical to try and rate limit/filter as close to the source as possible then? Of course this shifts responsibility... If all ISPs were proactive in dealing with customers' machines being used as zombies to launch attacks, then internet users as a whole would have fewer problems trying to deal with being the target of an attack. A few logical steps: Some ISPs may do this, I don't know, but from the articles I read about DDoS attacks it appears that most don't. Re:Responsibility of the ISP (Score:3, Insightful) I know it's possible.... I'm sure they wouldn't waste time if someone was uncapping their modem. Re:Responsibility of the ISP (Score:2) "To: John Doe From: ABC Networks Subject: Your computer has a virus Dear John Doe, according to our records, at 01/10/2002 modem XYZ was--" [DELETE] John Doe: Damned spammers. You really do have to make the call to make sure it gets fixed. It used to be that most people just cannot read well enough to understand a virus warning (well, once the Internet wasn't a snobby elitist club anymore, at least). Now there's the spam goggles everyone wears that filter it before they have a chance to not understand it. If you call them, you can do one of two things: Get someone who goes "Oh, OK. I will fix it tonight." (Then you check up on them.)
Or, you get someone who goes "Oh my God oh my God, what do I do, did I hurt anything, this is horrible!" You have to send that person to a shop, but which is worse karma - sending a person to a shop where they're gonna get whacked 150 bucks, or not doing anything about it at all? Re:Responsibility of the ISP (Score:2) How about this one: Re:Responsibility of the ISP (Score:2) [...] then email the poor sap... That reminds me of some Nimda hunting I did at work. My intranet web server kept getting hit from within the intranet in a different English-speaking country. I reported it to the proper company groups, but it kept on happening. Finally I tried to hack into it using remote MMC management. I don't know why, but it let me in. I was able to copy a text file to the c$ share, start the scheduling service and use the at command to run notepad and display the text file on the desktop. The text file, of course, said something along the lines of "this pc is infected with the Nimda virus; please notify your network administrator or pc tech and unplug it from the network." I did that several times over 3 days. I think it took about 5 days before I finally quit getting hits from it. (I resisted the urge to try to remotely disinfect it since I didn't know what business function the PC served.) I can believe people ignoring emails, but people are so paranoid about viruses that if Notepad kept popping messages up on their screens I would think they'd go running screaming to their administrator begging him to save their data. Maybe I should've made the note sound sinister instead of helpful and then they'd get help? That reminds me, I intended to check out why the hell I could administer a PC in a different country and find out if my PCs were as vulnerable. I'll put that on tomorrow's to-do list. Re:Responsibility of the ISP (Score:3, Interesting) Get in touch with MS for a rate limit on the number of pings that can be sent.
Get them to code into their OS some sort of rate limit for icmp-echo-reply packets, like you described. Also, make ISPs far, FAR more aggressive when dealing with this. Is a computer sending out code red/nimda attacks? Disconnect it, write a letter to the owner and disconnect them permanently after a few times. Same thing for ping flooding. If it happens often (testing network strain over the internet shouldn't happen often), engage the same procedure as with code red/nimda infected computers. Re:Responsibility of the ISP (Score:2) And it would take about 2 hours before someone compiled and distributed a "raw" ping client for windows. Egress Filtering (Score:5, Insightful) Implementation of simple egress filtering rules at border routers or at firewalls (regardless of who owns them) would dramatically decrease the efficacy of DDoS attacks. If my organization owns the A.B.C network, there is no reason why any packets bearing a source address of anything other than A.B.C.* should be permitted to leave my network. NAT environments can implement this by dropping packets with source addresses that do not belong to the internal network. Of course, for this to be effective it would have to be used on a broad scale, i.e. around the world... Re:Egress Filtering (Score:3, Informative) Re:Egress Filtering (Score:2) The idea is that for each host on the Internet, there is at least one independently administrated router in front of it which performs source address validation before forwarding packets further upstream to a transit network (where address validation becomes complicated). However, it would take quite a long time until you saw any effect, like any other DoS mitigation tactic which does not support incremental deployment. ICMP Traceback is promising, though. I really hope that it's as useful as it looks. Re:Egress Filtering (Score:3, Informative) Actually, there is at least one very good reason.
If company A has 2 internet connections through providers A and B, and wishes to do load balancing, but for one reason or another cannot announce a single subnet through both providers, they can at least do outbound load balancing and change the source address on a per-packet basis, so incoming traffic for connections initiated by someone local is evenly distributed through both connections. Obviously any connections that originate from the outside world (i.e. someone on the internet trying to view this company's website) have to be answered with the same IP that the request originally went to as the source address (or stuff will break(tm)), so this won't work in that situation, but any request that originated on the company's network, and goes out to the internet, can have the outbound traffic load balanced on a per-packet basis over their multiple internet connections, even if they can't announce the same block through both providers. This however requires that some packets have a source address in the subnet of, for instance, provider A when they go out through the circuit with provider B, to evenly load balance packets. The other option, which does not require sending packets with a source address from one provider when they go through another, is to do it on a per-connection basis rather than a per-packet basis; however, depending on your traffic, this may not work nearly as well. While obviously the number of people implementing something like this is few, and the benefits of implementing anti-spoof measures are many, to the few people doing something like the above, it sucks. However, there is an answer that will satisfy both causes.
To the few people that do load balancing in the method mentioned above: a simple ACL allowing only packets with either subnet as the source (for either line A's or B's block), and denying all other sources, will both allow them to load balance outbound traffic and protect your network (and others), since they can't spoof any address other than their block with the other provider through you, as the ACL will drop it. For everyone else, you can use the following command on a Cisco with CEF enabled, which drops all traffic that does not have a source address that is routed through the interface the packet was received on: "ip verify unicast reverse-path" Re:Egress Filtering (Score:2) For everyone else, you can use the following command on a Cisco with CEF enabled, which drops all traffic that does not have a source address that is routed through the interface the packet was received on: "ip verify unicast reverse-path" The way to turn on reverse-path filtering on a Linux firewall is: for i in /proc/sys/net/ipv4/conf/*/rp_filter; do echo 2 > $i; done Re:Egress Filtering -- needs more work (Score:3, Insightful) Of course, unless the zombies were smart enough to know the IP range within the border router, you'd still get a metric buttload of invalid packets at the border router. Some kind of threshold alarm might be a good idea -- but then there's the problem of locating what machine within the border is generating the packets... In a perfect world, the best solution would be that people didn't let their machines get 0wn3d in the first place [Insert maniacal laughter]! Egress filtering is a good thing but it's not a complete solution. (And it's a good thing that I turned back from the Insufficient-light Side of the Hack many years ago.) Here's an explanation of a reflection attack. [grc.com] (Yes, that "end of the Internet" grc. :^) Disclaimer!
(Score:2, Funny) I guess that I shouldn't worry; unlike script-kiddie h4x0rs, Slashdot users are intelligent, wise .. , never do stupid things .. , never abuse the system .. oh shit Re:Responsibility of the ISP (Score:2, Insightful) It's almost certainly an easier thing for the ISP to do: your implicit assumption that everyone's a BSD user with 30 years of security experience is not that appropriate when describing people who got a PC for Christmas and had to get a friend to show them how to plug the monitor in... and these people do need the net just as much as we do, before we get the élitists flaming back as a reply to this. The ISP will typically be spending more time than is healthy measuring people's bandwidth anyway, even if for nothing better than to check they've not got an uncapped modem. So when someone who typically browses a few web pages a minute suddenly starts requesting files at 300 per second, it's pretty easy to see they're either testing a spider, or they got infected. The credit-card companies seem to manage such pattern-matching, although admittedly that's not real-time. Conversely, the ISPs will need to be smart enough to realise that if someone's playing RavenShield then there's a good reason for them to be pinging the same computer twice a second, and sending unnatural amounts of data. But then, that's not such a hard problem to solve. Neural networks and all that... (says someone who's never had to program a neural network!) And arguably, it's more useful than the techies spending all their waking hours trying to detect connection-sharing, or rogue linux machines on their network. How to Protect the DNS (Score:3, Interesting) Apparently icannwatch [icannwatch.org]'s new year resolution was to migrate [icannwatch.org] from nuke to slash.
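The egress filtering proposal from the thread above is mechanically simple: at the border, drop any outbound packet whose source address is not inside your own netblock, since it can only be spoofed. A sketch of that check using Python's ipaddress module (the netblock and packets here are invented examples; a real deployment would do this in a router ACL or firewall rule, as the posts describe):

```python
import ipaddress

def egress_filter(packets, my_netblock):
    """Drop outbound packets whose source address does not belong to our own
    address space (i.e. is spoofed). Returns only the legitimate packets."""
    net = ipaddress.ip_network(my_netblock)
    return [p for p in packets if ipaddress.ip_address(p["src"]) in net]

outbound = [
    {"src": "192.0.2.17", "dst": "198.41.0.4"},    # legitimate local host
    {"src": "10.99.99.99", "dst": "198.41.0.4"},   # spoofed source: drop it
    {"src": "192.0.2.250", "dst": "198.41.0.4"},   # legitimate local host
]
passed = egress_filter(outbound, "192.0.2.0/24")
assert [p["src"] for p in passed] == ["192.0.2.17", "192.0.2.250"]
```

As the thread notes, this protects everyone *else* from your network, which is exactly why it only pays off once deployed broadly.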
TLD Question (Score:5, Interesting) I'm not an expert, but as I understand it, DNS attacks are relatively benign, since DNS info is cached all over the place and doesn't change much anyway (this is essentially what the article says). Now, the author seems much more worried about attacks against Top Level Domains, because of reasons related to the nature of the information that TLD servers have, and he suggests a few techniques that they could use. What he doesn't say is what techniques the TLDs are using currently, and how secure they are. Does anyone out there on /. know? Re:TLD Question (Score:2) [cr.yp.to] Hrrrmmm (Score:5, Funny) Hrrrrmmm. That makes it look deliberate. Hrrrrmmm. Chief Wiggum's on the case! (Score:2) Re:Hrrrmmm (Score:2, Insightful) That would be funny. IDEA for DNS Survivability (Score:4, Informative) Why not allow the admin to specify the maximum disk space that the cache can use up, and then only prune the records when that (possibly huge) database grows too large? In addition, DNS records should not just arbitrarily expire... If a record has not reached its "expire" date, the cache is just fine. If a record HAS reached its "expire", it should still remain valid UNTIL the DNS server has been able to get a valid update. Now, that would allow large DNS servers to maintain quite a bit of functionality even if all other DNS servers go down, and would do so while requiring only that the most popular queries are saved on the server (so not everyone has to become a full root DNS server). Re:IDEA for DNS Survivability (Score:2, Informative) Generally there are two ways to keep caches relatively fresh: expire records based on some precondition (such as time) or have the master source send out notifications when data was changed. And DNS can do BOTH. First, there are three kinds of expirations in DNS, all time-based, where the periods are selected by the owner of the domain.
The first is when you attempt to look up a name which doesn't exist; that's called negative caching and is typically set to just an hour or two. The next is the refresh time, which indicates when an entry in a cache should be checked to see if it is still current, and is typically about half a day. And finally the time-to-live is the time after which the cache entry is forcibly thrown away, and is usually set to a couple of weeks or more. Finally, DNS servers can coordinate notification messages, whereby the primary name server for a domain will send a message to any secondaries whenever the data has changed. This allows dirty cache entries to be flushed out almost immediately. But DNS notifications are usually used only between coordinated DNS servers, and not all the way to your home PC. It should be noted though that most end users' operating systems do not really perform DNS caching very well if at all... usually it is your ISP that is doing the caching. Windows users are mostly out of luck unless you are running in a server or enterprise configuration. Linux can very easily run a caching nameserver if you install the package. I don't know what the Macs do by default. Re:IDEA for DNS Survivability (Score:2) This is only for DNS servers such as BIND that use AXFR to update slaves. Finally DNS servers can coordinate notification messages, whereby the primary name server for a domain will send a message to any secondaries whenever the data has changed. Modern DNS servers use better methods such as rsync over SSH or database replication, which provide real security, instant updates and more efficient network usage. Re:IDEA for DNS Survivability (Score:2) Re:IDEA for DNS Survivability (Score:2) Because I like to actually be able to change my DNS records after they are published. In addition, DNS records should not just arbitrarily expire... They don't arbitrarily expire. They expire when the TTL for the record has been reached.
If a record HAS reached its "expire", it should still remain valid UNTIL the DNS server has been able to get a valid update. That would allow an attacker to blind your DNS resolver to DNS changes by keeping it from contacting a remote DNS server. And if the same attacker can poison your cache, the cache will keep the poisoned records forever. Re:IDEA for DNS Survivability (Score:2) There are so many flaws with this logic that I'm not sure where to begin. First of all, if an attacker has poisoned your cache, that almost always requires admin intervention anyhow. Second, if an attacker can blind your DNS server to updates, in the current scheme your DNS would completely fail, instead of one record being invalid, so this is not a capability attackers have; and even if they did, you would be much better off with my modifications than with the current scheme. Re:IDEA for DNS Survivability (Score:2) If, by ``new", you mean I've been at it for ``less than 5 years", you're absolutely right. Umm, well, yes... I knew that. Maybe I missed something, but I don't believe I said that it was very difficult to poison a DNS cache, so I'm not sure what you are trying to say. BTW, I've already read several of djb's DNS documents, including the one you referenced. Re:IDEA for DNS Survivability (Score:2) First, I was referring to expiring in the current, standard sense. The owner of the DNS record picking an expiration IS essentially arbitrary... It's certainly arbitrary as far as a caching DNS server is concerned. Now, if you'd like to post what you think is wrong with the solution, that might be useful. Re:IDEA for DNS Survivability (Score:2) Re:IDEA for DNS Survivability (Score:2) I do not suggest ignoring the expirations, nor simply caching them forever. What I am suggesting is that records should not be automatically removed when their expiration comes...
Instead, if an expired record is requested, the DNS server should TRY to fetch the update from its parent DNS servers; HOWEVER, if it is UNABLE to get that update, it should (instead of returning an error) return the expired record. Doing that with a host record would be fine ONLY IF you had software that would update each of the records in Re:IDEA for DNS Survivability (Score:2) Well Mr. Troll or Idiot, whichever is the case, I know exactly what I am talking about... Those questions were purely rhetorical. Well, if the question was rhetorical, why even bother asking? Were you just talking to hear yourself speak? Re:IDEA for DNS Survivability (Score:2) Go look up the definition of rhetorical... Then you will know. Re:IDEA for DNS Survivability (Score:2) The timing of this is incredibly coincidental. Less than a week ago I was setting up MaraDNS as a new caching DNS server, all the while wondering how difficult it might be to implement this in MaraDNS. In addition, after first posting this message, I was considering sending off a message to the MaraDNS developers' mailing list proposing the idea... I guess I don't need to worry about that, now. I'm aware of it, and I very much like that feature. Heh... You know, I just realized that my Anyhow, I was quite glad to get this message, and I certainly hope I'll see this feature in future versions of MaraDNS. For those who can't be bothered to RTFA... (Score:4, Interesting) What we can do (Score:3, Insightful) It's difficult for any reasonable person to know where to begin solving these issues. Traditionally, nailing down machines and networks so they are more secure has been seen as the best approach, but there's little anyone can do about having bandwidth used up by unaccountable "hacked" machines, as is seemingly more and more the modus operandi.
Attempts to trace crackers are frequently wastes of time, and stiffer penalties for hackers are compromised by the fact that it's hard to actually catch the hackers in the first place. The situation is made worse by the fact that many of the most destructive hackers do not, themselves, set up anything beyond sets of scripts distributed to and run by suckers - so-called "script kiddies". Given that hackers usually work by taking over other machines and co-opting them into damaging clusters that can cause all manner of problems, less focus than you'd expect is put onto making machines secure in the first place. The responsibility for putting a computer on the Internet is that of a system administrator, but frequently system administrators are incompetent, and will happily leave computers hooked up to the Internet without ensuring that they're "good Internet citizens". Bugs are left unpatched, if the system administrators have even taken the trouble to discover if there are any problems in the first place. This is, in some ways, the equivalent of leaving an open gun in the middle of a street - even the most pro-gun advocates would argue that such an act would be dangerously incompetent. But putting a farm of servers on the Internet, and ignoring security issues completely, has become a widespread disease. There is a solution, and that's to make system administrators responsible for their own computers. An administrator should be assumed, by default, to be responsible for any damage caused by hardware under his or her control unless it can be shown that there's little the admin could reasonably have done to prevent their machine from being hijacked. Clearly, a server unpatched a few days after a bug report, or a compromise unpatched that has never been publicly documented, is not the fault of an admin, but leaving a server unpatched years after a compromise has been documented and patches have been available certainly is.
Unlike with hackers, it is easy to discover who is responsible for a compromised computer system. So issues of accountability are not a problem here. Couple this with suitably harsh punishments, and not only will system administrators think twice before, say, leaving IIS 4 out in the wild vulnerable to NIMDA, but hackers too - for the same reasons as they avoid attacking hospital systems, etc - will think twice about compromising someone else's system. Fines for first offenses and very minor breaches can be followed by bigger deterrents. If you were going to release a DoS attack into the wild, but knew that the result would be that many, many system administrators would be physically castrated because of your actions, would you still do it? Of course not. But even if you were, the fact that someone has been willing to allow their system to be used to close the DNS system, or take Yahoo offline, ought to be reason enough to be willing to consider such drastic remedies. Castration may sound harsh, but compared to modern American prison conditions, it's a relatively minor penalty for the system administrator to pay, and will merely result in discomfort combined with removal from the gene-pool. At the same time, such an experience will ensure that they take better care of their systems in future, without taking someone who might have skills critical to their employer's well-being out of the job market. The assumption has always been made that incompetent system administrators deserve no blame when their systems are hijacked and used for evil. This assumption has to change, and we must be willing to force this epidemic of bad administration to be resolved. Only by securing the systems of the Internet can we achieve a secure Internet. Only by making the consequences of hacking real and brutal can we create an adequate response to the notion that hacking, per se, is not wrong, that it causes no damage.
This quagmire of people considering system administrators the innocents in computer security, when they are themselves the most responsible for problems and holes, has to end. Write to your senators [senate.gov]. Write also to Jack Valenti [mpaa.org], the CEO and chair of the MPAA, whose address and telephone number can be found at the About the MPAA page [mpaa.org]. Write too to Bill Gates [mailto], Chief of Technologies and thus in overall charge of security systems built into operating systems like Windows NT, at Microsoft. Tell them security is an important issue, and is being compromised by a failure to make those responsible for security accountable for their failures. Tell them that only by real, brutal justice meted out to those who are irresponsible on the Internet will hacking be dealt with. Tell them that you believe it is a reasonable response to hacking to ensure that administrators who fail time and time again are castrated, and that castration is a reasonable punishment that will ensure a minimal impact on an administrator's employer while serving as a huge deterrent against hackers and against incompetence. Tell them that you appreciate the work being done to patch servers by competent administrators, but that if incompetent admins are not held accountable, lax security harms all three. Let your legislators know that this is an issue that affects YOU directly, that YOU vote, and that your vote will be influenced, indeed dependent, on their policies concerning maladministration of computer systems connected to the public Internet. You CAN make a difference. Don't treat voting as a right, treat it as a duty. Keep informed, keep your political representatives informed on how you feel. And, most importantly of all, vote. Need more secure desktops (Score:3, Interesting) It seems to me that this is another call for more secure computers. If the "zombies" were not so easy to create, then such attacks would not be so easy to mount.
I think security has gotten better, but there is still great room for improvement. I have some random thoughts that might help. First, broadband providers should not sell bandwidth without a standard firewall. I do not see such a proposition as expensive, as a standalone unit is quite cheap, and the cost to integrate such circuitry into a DSL or cable box should be even less. Broadband providers should stop their resistance to home networking and use bandwidth caps or other mechanisms, if necessary. Second, the default settings in web browsers must be more strict. Web browsers should not automatically accept third-party cookies or images. Web browsers should not automatically pop up new windows or redirect to third-party sites. Advertising should not be an issue. I know of no legitimate web site that requires third-party domains. For instance /. uses "images.slashdot.org" and the New York Times uses "graphics7.nytimes.com". Of course, these default settings should be adjustable, with an appropriate message stating that web sites that use such techniques are likely to be illegitimate. I know of a few sites that require all images and cookies to be accepted, but I consider those to be fraudulent. Third, email programs should by default render email as plain text. There should be a button to allow the mail to render HTML and images. There should be a method to remember domains that will always render or never render. Again, third-party domains should not render automatically. In addition, companies need to stop promoting HTML- and image-based email. Apple is particularly guilty of this. The emails they send tend to be illegible without images. Fourth, root must be the responsibility of the user, or a third-party agent must have full liability for a hack. This should be basic common sense, but it apparently is not. MS wants access to the root of all Windows machines, but I do not see MS saying they will accept all responsibility for damage.
Likewise, the RIAA wants access to everyone's root, but again, are they going to pay for the time it takes to reinstall an OS? I think not. With privilege comes responsibility. Without responsibility, all you have are children playing with matches. Re:Need more secure desktops (Score:3, Insightful) Nice idea, but what about the ad-supported sites that use agencies to get advertising, rather than selling ad space direct to the advertiser? Then it makes perfect sense for a page to have an image on it from images.adagency.com. I agree entirely that HTML email should be banished from the face of the net, and third-party cookies serve little or no purpose. Question: (Score:4, Interesting) Whose laws are being enforced, and upon whom? DDoS attacks and IPv6 (Score:3, Insightful) I was wondering - does IPv6 solve this problem (using some sort of digital signatures or another ingenious way), or will sites still be vulnerable to script kiddies? Re:DDoS attacks and IPv6 (Score:3, Insightful) IPv6 can, though, provide a very secure layer (IPsec), but it comes at an expense. It is not something that you would want to use for DNS queries, where the name of the game is speed and the number of hosts involved can be thousands or even millions. But for the less voluminous DNS messages, such as zone transfers which occur between mirrors, authenticity is much more of a concern. IPsec could be very useful there, but it is probably unnecessary as DNS already has its own security protocol built into it (DNSSEC). In general though, IPv6 does provide many benefits over IPv4 and in some ways does provide many new tools to address DDoS and script kiddies; but like any single technology, it is not a super pill that makes all the ills go away.
Re:DDoS attacks and IPv6 (Score:2) This will unfortunately remain a problem for the same reason it'll remain a problem with email - unless all possible nodes that traffic can be routed through are known and trusted, you have to take much of your routing information on faith. End users don't need root or TLD servers (Score:4, Insightful) End users don't need root or TLD servers; they just need to have DNS queries answered. That's why, normally, they are configured to query the ISP or corporate DNS servers, which in turn do the recursive query to root, TLD, and remote DNS servers. Given that, consider the possibility of the ISP or corporate data center intercepting any queries done (as if the end user were running a recursive DNS server instead of a basic resolver) and handling them through a local cache (within the ISP or corporate data center). It won't break normal use. It won't break even if someone is running their own DNS (although they will get a cached response instead of an authoritative one). It will prevent a coordinated attack load from the network that does this. They talk about root and TLD servers located at major points where lots of ISPs meet, which poses a potential risk of a lot of bandwidth that can hit a DNS server. So my first thought was: why not have multiple separate servers with the same IP address, each serving part of the bandwidth, much like load balancing? And then, you don't even have to have them at the exchange point, either; they can be in the ISP data center. They could be run as mimic authoritative servers if getting zone data is possible, or just intercept and cache. Re:End users don't need root or TLD servers (Score:3, Interesting) Wrong. I run my own local DNS resolver, dnscache [cr.yp.to]. I don't trust my ISP to manage a DNS resolver properly. What if they are running a version of BIND vulnerable to poison [cr.yp.to] or other issues [cr.yp.to]? What if I am testing DNS resolution and need to flush the cache?
(I do this routinely.) They also don't need to see every DNS query I make. If they want to sniff and parse packets, fine, but no need to make it any easier on them. It won't break even if someone is running their own DNS (although they will get a cached response instead of an authoritative one). That would be possible only if they were in fact intercepting every single DNS packet and rewriting it. It would make it impossible for me to perform diagnostic queries to DNS servers. And unless they were doing some very complex packet rewriting, it would break if an authoritative server was providing different information depending on the IP address that sent the query. If you can't even get ISPs to perform egress filtering, why would they do something as stupid and broken as this? Egress filtering would do much more to stop these types of attacks. Besides, how does this stop me if I am the ISP? There are plenty of vulnerable machines that are on much better connections than dialup or broadband. Re:End users don't need root or TLD servers (Score:2) What egress filtering? The kind that blocks DNS queries sent to the root or TLD servers with a source address of the actual machine doing the querying, while under control of a virus or trojan that has infected a million machines? Sure, egress filtering will stop a few bad actors who are forging source addresses, such as bouncing attacks off of broadcast responders. And egress filtering is not easy to do on large high-traffic routers where there are a few hundred prefixes involved, belonging to the ISP and multitudes of their customers. You think an access list that big isn't going to bring a router to its knees? Rate limiting is worthless... (Score:3, Insightful) ...if the flood is randomly generated queries from thousands of compromised hosts. There would be no way to separate flood traffic from legit traffic. A worm could do this, or a teenager with a lot of time on their hands.
It's easier for peons to get together a smurf list to attack the roots, but a nice set of compromised hosts issuing bogus spoofed queries would be just devastating. The solution is not more root servers. Attackers gain compromised hosts for free; root servers must be paid for. The solution is to make some kind of massively distributed root server system. Was that a weapons test? (Score:2) Now, that could be an actual government, military operation [including our own], as part of a general preparedness effort for war: when you strike, you use a combination of surprise attacks to make your main attack more effective. Or it could be terrorists, running a weapons test in the same way. Or it could be some grad student, testing out a theory of his. It just doesn't sound like a normal cracker. Oh really? (Score:2, Insightful) OK, let's pretend such a magical replacement actually exists, and you have it up and running. Then the skr1pt k1dd1es show up and start a 'trinoo' or 'tribal flood' type DoS that floods your network and slows all your servers down to a crawl. Tell me again how your magical new DNS replacement is going to deal with this situation better than the old one? Re:Why we need to abandon DNS (Score:2, Informative) If things were as bad as you seem to think they are, the whole Internet system would have crumbled to rubble long ago. In reality, it has scaled amazingly well, and has been unbelievably robust. Perhaps you should go purchase a clue; you obviously don't have one of your own. Re:Why we need to abandon DNS (Score:2) I have single-handedly written a working recursive DNS server without getting paid for my work. There is a reason why there are only three [cr.yp.to] of [t-online.de] us [maradns.org] in the entire world; DNS is that bad. Actually, it is a good deal worse than you can imagine. Let me put it this way. Writing a DNS client (or a non-recursive DNS server) is sort of like Highlander I [imdb.com]. Entertaining, really.
You think to yourself "Hey! That was easy! A recursive server can't be too bad!" Well, writing a working recursive DNS server is like watching Highlander II [imdb.com]. Suddenly, just as Highlander II changes your outlook on the entire Highlander franchise, writing a recursive DNS server changes your outlook on the entire DNS protocol. But, hey, don't take my word for it. Dan, one of the other three of us, feels the same way. Thomas, the last of us, has made no statements either for or against DNS. If we were to review recursive DNS the same way Rotten Tomatoes [rottentomatoes.com] reviews movies, DNS would get a 0%; possibly a 33% if Thomas secretly loves DNS and hasn't told anyone. By any standard, that makes for a bomb that should have tanked at the box office. Alas, it didn't. And so we are stuck with a horrible mess of a protocol today. - Sam Re:Why we need to abandon DNS (Score:3) The question is: Who is going to develop such a protocol? I have heard a lot of mumbling about a DNS replacement; I have seen little actual action to make such a replacement. If such a protocol gets developed, I most assuredly will be one of the first to implement it. What real solutions do people have to the fragile root servers issue (these days, the fragile .com servers issue)? - Sam Re:Why we need to abandon DNS (Score:2) You should try assembler once! Oh, and while you're at it, please write a replacement for it.
https://slashdot.org/story/03/01/11/1813206/more-info-on-the-october-2002-dns-attacks
Yield. This article explains the use of the yield keyword with examples.

Syntax of Yield

The yield syntax is simple and straightforward. A yield statement is written with the yield keyword followed by the value to produce:

    yield <expression>

Examples

Now, let's see examples to understand the use and workings of yield statements. Traditionally, the return keyword terminates the execution of a function and returns a single value at the end, while yield returns a sequence of values. It does not store the values in memory; it returns each value to the caller at run time. In the example given below, a generator function is defined to find leap years. As a simplification, a year that returns zero as a remainder when divided by four is treated as a leap year. The yield keyword returns each leap year value to the caller. Each time it produces a value, it pauses the function's execution, returns the value, and then resumes the execution from where it stopped.

    def leapfunc(my_list):
        for i in my_list:
            if(i%4==0):
                #using yield
                yield i

    #declaring the years list
    year_list=[2010,2011,2012,2016,2020,2024]
    print("Printing the leap year values")
    for x in leapfunc(year_list):
        print(x)

Output

The output shows the series of leap years. Let's see another example where the generator function yields various numbers and strings.

    def myfunc():
        yield "Mark"
        yield "John"
        yield "Taylor"
        yield "Ivan"
        yield 10
        yield 20
        yield 30
        yield 40
        yield 50

    #calling and iterating through the generator function
    for i in myfunc():
        #printing values
        print(i)

Output

Let's implement a generator function to calculate and print the cube values of a sequence of numbers, stopping once the cubes exceed 30.

    def calcube():
        val=1
        #the infinite while loop
        while True:
            #calculating the cube
            yield val*val*val
            #incrementing value by 1
            val=val+1

    print("The cube values are: ")
    #calling the generator function
    for i in calcube():
        if i>30:
            break
        print(i)

Output

The output shows the cube values less than 30.
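To make the contrast with return concrete, here is a small side-by-side sketch of the cube example: the return version materializes the whole list in memory before handing it back, while the yield version produces values lazily, one per request. (The function names here are ours, for illustration.)

```python
def cubes_list(limit):
    # return: builds the whole list in memory before returning it
    result = []
    val = 1
    while val ** 3 < limit:
        result.append(val ** 3)
        val += 1
    return result

def cubes_gen(limit):
    # yield: produces one value per request, storing none of them
    val = 1
    while val ** 3 < limit:
        yield val ** 3
        val += 1

print(cubes_list(30))   # the full list exists at once: [1, 8, 27]
gen = cubes_gen(30)
print(next(gen))        # values are computed on demand: 1
print(list(gen))        # remaining values: [8, 27]
```

For six cube values the difference is invisible, but for a long (or infinite) sequence only the generator version keeps memory use constant.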
Conclusion

Yield is a Python keyword that, unlike return, does not terminate the execution of a function; instead it generates a series of values. In comparison to the return keyword, which produces a single result, the yield keyword can produce multiple values, handing each one back to the caller in turn. This article has explained the Python yield keyword with examples.
https://linuxhint.com/python-yield/
Behavior Driven Development - putting testing into perspective

The ultimate aim of writing software is to produce a product that satisfies the end user and the project sponsor (sometimes they are the same, sometimes they are different). How can we make sure testing helps us obtain these goals in a cost-efficient manner? To satisfy the end user (the person who ends up relying on your software to make his or her work easier), you need to provide the optimal feature set. The main challenge here is that the optimal feature set is not always what the users ask for, nor is it always what the BA comes up with at the start. So you need to keep on your toes, and be able to change direction quickly as the users discover what they really need. But that's the realm of Agile Development, and not really what I wanted to discuss here... To satisfy the project sponsor (the person who has to fork out the cash), you need to satisfy your users, but you also need to write your application as efficiently as possible. Efficiency means writing code quickly, but it also means avoiding having to come back later on to fix silly mistakes. For example, I cn wriet ths tezt REASLDY QUIFKLY but if I don't keep an eye on the quality, the end user (you, the reader, in this case) will suffer. Your code needs to be reliable (not too many bugs) and maintainable (easy enough to understand so that the poor bastard who comes after you can work on the code with minimum hair loss). So what has all this got to do with testing? Writing a test takes time and effort, so, ideally, you need to balance the cost of writing a test against the cost of _not_ writing the test. Does the test you are writing directly contribute to delivering a feature for the user? Will it lower long-term costs by making your code more flexible and reliable?
If your tests are to contribute positively to the global outcome of your project, you need to think about this, and design your tests so that they will provide the most benefit for the project as a whole. It is fairly well established that, in all but the most trivial of applications, unit testing will help to make your code more reliable. The cost of writing unit tests is the time it takes to write (and maintain) them. The cost of not writing them is the time it takes to fix the bugs they would have caught. Techniques such as Test-Driven Development (TDD) help to do this by incorporating testing as a first-class part of the design process. When you code using Test-Driven Development, you begin by writing unit tests to exercise your code, and then write the code to make the tests pass. Writing the unit test helps (in fact, forces) you to think about the optimal design of your class _from the point of view of the rest of the application_. This is a subtle but significant shift in how you write your code. However, when it comes to testing, developers are often at a loss as to what exactly should be tested. In addition, they tend to focus on the low-level mechanics of their unit tests, rather than on the behaviour the tests are meant to verify. Behavior-Driven Development, or BDD, can provide some interesting strategies here. If you're not familiar with BDD, Andy Glover has written an excellent introduction. Behavior-Driven Development takes Test-Driven Development (TDD) a step further. It is actually more a crystallization of good TDD-based practices than a revolutionary new way of thought. Indeed, you may well be doing it already without realizing it. Using BDD, your tests help define how the system is supposed to behave as a whole. Using BDD, developers are encouraged to ask, "What is the next most important thing the system doesn't yet do?" This, in turn, leads to meaningful unit tests, with meaningful (albeit verbose) names, such as shouldReturnZeroForIncomeBelowMinimumThreshold.
For example, suppose you need to write a tax calculator. In the country in question, no tax is payable for incomes under a certain threshold, the exact value of which is determined by the laws of the land, and which, today, happens to be $5000. Using a basic TDD approach, you might simply test directly against the $5000 value, as shown here:

    public class TaxCalculatorTest {
        @Test
        public void testTaxCalculation() {
            // assert that the tax calculated for an income under $5000 is zero
        }
    }

However, at a more abstract level, this gets us thinking - where does this value come from? From a configuration file? A database? A web service? In any case, this is the sort of thing that can change at the whim of a politician, so it's probably not a good idea to hard-code it. So we should add a property to our TaxCalculator class to handle this parameter. While we're at it, we rename the test to better reflect what behaviour we are trying to model. So, instead of talking about "testTaxCalculation" (where the emphasis is on what we are testing), we would use a name like "shouldReturnZeroForIncomeBelowMinimumThreshold". The use of the word "should" is deliberate - we are describing how the class should behave. Note how our intentions suddenly become clearer. The tests might now look like this:

    public class TaxCalculatorTest {
        @Test
        public void shouldReturnZeroForIncomeBelowMinimumThreshold() {
            taxCalculator.setMinimumThreshold(5000);
            // assert that the tax calculated for an income below the threshold is zero
        }
    }

These examples are a little contrived, and obviously incomplete, but the idea is there. Tests written this way are a much clearer way of expressing your intent than tests with names like "testCalculation1", "testCalculation2" and so on. In addition to the usual JUnit and TestNG, there are also some frameworks such as JBehave which make BDD even more natural. And, with tests like this, tools like TestDox can be used to extract documentation describing the _intent_ of the classes.
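To show the whole behaviour-first idea end to end, here is a complete, runnable translation of the example into Python. The TaxCalculator class and its methods are hypothetical stand-ins for the article's Java sketch; only the threshold rule comes from the text, and the flat rate is an invented placeholder:

```python
import unittest

class TaxCalculator:
    """Hypothetical calculator mirroring the article's example:
    income below a configurable minimum threshold is taxed at zero."""

    def __init__(self):
        self.minimum_threshold = 0

    def set_minimum_threshold(self, threshold):
        self.minimum_threshold = threshold

    def calculate_tax(self, income, rate=0.2):
        # Only the income above the threshold is taxed (rate is a placeholder)
        if income < self.minimum_threshold:
            return 0.0
        return (income - self.minimum_threshold) * rate

class TaxCalculatorTest(unittest.TestCase):
    def setUp(self):
        self.tax_calculator = TaxCalculator()
        self.tax_calculator.set_minimum_threshold(5000)

    def test_should_return_zero_for_income_below_minimum_threshold(self):
        # The test name describes behaviour, not test mechanics
        self.assertEqual(0.0, self.tax_calculator.calculate_tax(4999))
```

Run it with python -m unittest; note that the test reads as a sentence about what the class should do, which is exactly the documentation TestDox-style tools extract.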
For example, for the test class above, TestDox would generate something like the following:

    TaxCalculator
    - should return zero for income below minimum threshold
    ...

johnsmart's blog
http://weblogs.java.net/blog/2008/02/19/behavior-driven-development-putting-testing-perspective
It is unlikely we can tell you anything new about the extended Berkeley Packet Filter, eBPF for short, if you've read all the great man pages, docs, guides, and some of our blogs out there. But we can tell you a war story, and who doesn't like those? This one is about how eBPF lost its ability to count for a while. They say in our Austin, Texas office that all good stories start with "y'all ain't gonna believe this… tale." This one, though, starts with a post to the Linux netdev mailing list from Marek Majkowski after what I heard was a long night: Marek's findings were quite shocking - if you subtract two 64-bit timestamps in eBPF, the result is garbage. But only when running as an unprivileged user. From root all works fine. Huh. If you've seen Marek's presentation from the Netdev 0x13 conference, you know that we are using BPF socket filters as one of the defenses against simple, volumetric DoS attacks. So potentially getting your packet count wrong could be a Bad Thing™, and affect legitimate traffic. Let's try to reproduce this bug with a simplified eBPF socket filter that subtracts two 64-bit unsigned integers passed to it from user-space through a BPF map. The input for our BPF program comes from a BPF array map, so that the values we operate on are not known at build time. This allows for easy experimentation and prevents the compiler from optimizing out the operations. Starting small, eBPF, what is 2 - 1? View the code on our GitHub.

    $ ./run-bpf 2 1
    arg0                    2 0x0000000000000002
    arg1                    1 0x0000000000000001
    diff                    1 0x0000000000000001

OK, eBPF, what is 2^32 - 1?

    $ ./run-bpf $[2**32] 1
    arg0           4294967296 0x0000000100000000
    arg1                    1 0x0000000000000001
    diff 18446744073709551615 0xffffffffffffffff

Wrong! But if we ask nicely with sudo:

    $ sudo ./run-bpf $[2**32] 1
    [sudo] password for jkbs:
    arg0           4294967296 0x0000000100000000
    arg1                    1 0x0000000000000001
    diff           4294967295 0x00000000ffffffff

Who is messing with my eBPF? When computers stop subtracting, you know something big is up.
We called for reinforcements. Our colleague Arthur Fabre quickly noticed something is off when you examine the eBPF code loaded into the kernel. It turns out the kernel doesn't actually run the eBPF it's supplied - it sometimes rewrites it first. Any sane programmer would expect 64-bit subtraction to be expressed as a single eBPF instruction:

    $ llvm-objdump -S -no-show-raw-insn -section=socket1 bpf/filter.o
    …
          20: 1f 76 00 00 00 00 00 00 r6 -= r7
    …

However, that's not what the kernel actually runs. Apparently after the rewrite the subtraction becomes a complex, multi-step operation. To see what the kernel is actually running we can use the little-known bpftool utility. First, we need to load our BPF program:

    $ ./run-bpf --stop-after-load 2 1
    [2]+ Stopped ./run-bpf 2 1

Then we list all BPF programs loaded into the kernel with bpftool prog list:

    $ sudo bpftool prog list
    …
    5951: socket_filter name filter_alu64 tag 11186be60c0d0c0f gpl
          loaded_at 2019-04-05T13:01:24+0200 uid 1000
          xlated 424B jited 262B memlock 4096B map_ids 28786

The most recently loaded socket_filter must be our program (filter_alu64). Now we know its id is 5951 and we can list its bytecode with:

    $ sudo bpftool prog dump xlated id 5951
    …
    33: (79) r7 = *(u64 *)(r0 +0)
    34: (b4) (u32) r11 = (u32) -1
    35: (1f) r11 -= r6
    36: (4f) r11 |= r6
    37: (87) r11 = -r11
    38: (c7) r11 s>>= 63
    39: (5f) r6 &= r11
    40: (1f) r6 -= r7
    41: (7b) *(u64 *)(r10 -16) = r6
    …

bpftool can also display the JITed code with: bpftool prog dump jited id 5951. As you see, subtraction is replaced with a series of opcodes. That is, unless you are root.
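The rewritten sequence is easier to understand as ordinary code. The Python model below mimics instructions 34-40 on 64-bit wrapping registers (the masking arithmetic is ours, translating each opcode literally): the r11 dance computes an all-ones mask when r6 fits in 32 bits and a zero mask otherwise, so a genuine 64-bit operand gets clobbered to zero before the subtraction, which reproduces exactly the garbage seen earlier.

```python
M64 = (1 << 64) - 1  # 64-bit register wrap-around mask

def rewritten_sub(r6, r7):
    """Model of the verifier-rewritten sequence (instructions 34-40)."""
    r11 = 0xFFFFFFFF                # 34: (u32) r11 = (u32) -1
    r11 = (r11 - r6) & M64          # 35: r11 -= r6
    r11 |= r6                       # 36: r11 |= r6
    r11 = (-r11) & M64              # 37: r11 = -r11
    r11 = M64 if r11 >> 63 else 0   # 38: r11 s>>= 63 (arithmetic shift)
    r6 &= r11                       # 39: r6 &= r11  -- zeroed if r6 > 32 bits!
    return (r6 - r7) & M64          # 40: r6 -= r7

print(rewritten_sub(2, 1))        # 1 -- small operands still subtract correctly
print(rewritten_sub(2**32, 1))    # 18446744073709551615 -- the garbage Marek saw
```

With r6 = 2^32, the subtraction at step 35 wraps, steps 36-38 produce a zero mask, and step 39 wipes r6 before the real subtraction ever happens.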
When running from root all is good

    $ sudo ./run-bpf --stop-after-load 0 0
    [1]+ Stopped sudo ./run-bpf --stop-after-load 0 0
    $ sudo bpftool prog list | grep socket_filter
    659: socket_filter name filter_alu64 tag 9e7ffb08218476f3 gpl
    $ sudo bpftool prog dump xlated id 659
    …
    31: (79) r7 = *(u64 *)(r0 +0)
    32: (1f) r6 -= r7
    33: (7b) *(u64 *)(r10 -16) = r6
    …

If you've spent any time using eBPF, you must have experienced first hand the dreaded eBPF verifier. It's a merciless judge of all eBPF code that will reject any programs that it deems not worthy of running in kernel-space. What perhaps nobody has told you before, and what might come as a surprise, is that the very same verifier will actually also rewrite and patch up your eBPF code as needed to make it safe. The problems with subtraction were introduced by an inconspicuous security fix to the verifier. The patch in question first landed in Linux 5.0 and was backported to 4.20.6 stable and 4.19.19 LTS kernel. The over 2000 words long commit message doesn't spare you any details on the attack vector it targets. The mitigation stems from CVE-2019-7308 vulnerability discovered by Jann Horn at Project Zero, which exploits pointer arithmetic, i.e. adding a scalar value to a pointer, to trigger speculative memory loads from out-of-bounds addresses. Such speculative loads change the CPU cache state and can be used to mount a Spectre variant 1 attack. To mitigate it the eBPF verifier rewrites any arithmetic operations on pointer values in such a way the result is always a memory location within bounds. The patch demonstrates how arithmetic operations on pointers get rewritten and we can spot a familiar pattern there. Wait a minute… What pointer arithmetic? We are just trying to subtract two scalar values. How come the mitigation kicks in? It shouldn't. It's a bug. The eBPF verifier keeps track of what kind of values the ALU is operating on, and in this corner case the state was ignored.
Why running BPF as root is fine, you ask? If your program has CAP_SYS_ADMIN privileges, side-channel mitigations don't apply. As root you already have access to kernel address space, so nothing new can leak through BPF. After our report, the fix has quickly landed in v5.0 kernel and got backported to stable kernels 4.20.15 and 4.19.28. Kudos to Daniel Borkmann for getting the fix out fast. However, kernel upgrades are hard and in the meantime we were left with code running in production that was not doing what it was supposed to. 32-bit ALU to the rescue As one of the eBPF maintainers has pointed out, 32-bit arithmetic operations are not affected by the verifier bug. This opens a door for a work-around. eBPF registers, r0.. r10, are 64-bits wide, but you can also access just the lower 32 bits, which are exposed as subregisters w0.. w10. You can operate on the 32-bit subregisters using BPF ALU32 instruction subset. LLVM 7+ can generate eBPF code that uses this instruction subset. Of course, you need to you ask it nicely with trivial -Xclang -target-feature -Xclang +alu32 toggle: $ cat sub32.c #include "common.h" u32 sub32(u32 x, u32 y) { return x - y; } $ clang -O2 -target bpf -Xclang -target-feature -Xclang +alu32 -c sub32.c $ llvm-objdump -S -no-show-raw-insn sub32.o … sub32: 0: bc 10 00 00 00 00 00 00 w0 = w1 1: 1c 20 00 00 00 00 00 00 w0 -= w2 2: 95 00 00 00 00 00 00 00 exit The 0x1c opcode of the instruction #1, which can be broken down as BPF_ALU | BPF_X | BPF_SUB (read more in the kernel docs), is the 32-bit subtraction between registers we are looking for, as opposed to regular 64-bit subtract operation 0x1f = BPF_ALU64 | BPF_X | BPF_SUB, which will get rewritten. Armed with this knowledge we can borrow a page from bignum arithmetic and subtract 64-bit numbers using just 32-bit ops: u64 sub64(u64 x, u64 y) { u32 xh, xl, yh, yl; u32 hi, lo; xl = x; yl = y; lo = xl - yl; xh = x >> 32; yh = y >> 32; hi = xh - yh - (lo > xl); /* underflow? 
    */

        return ((u64)hi << 32) | (u64)lo;
    }

This code compiles as expected on normal architectures, like x86-64 or ARM64, but the BPF Clang target plays by its own rules:

    $ clang -O2 -target bpf -Xclang -target-feature -Xclang +alu32 -c sub64.c -o - \
      | llvm-objdump -S -
    …
       13:  1f 40 00 00 00 00 00 00   r0 -= r4
       14:  1f 30 00 00 00 00 00 00   r0 -= r3
       15:  1f 21 00 00 00 00 00 00   r1 -= r2
       16:  67 00 00 00 20 00 00 00   r0 <<= 32
       17:  67 01 00 00 20 00 00 00   r1 <<= 32
       18:  77 01 00 00 20 00 00 00   r1 >>= 32
       19:  4f 10 00 00 00 00 00 00   r0 |= r1
       20:  95 00 00 00 00 00 00 00   exit

Apparently the compiler decided it was better to operate on 64-bit registers and discard the upper 32 bits. Thus we weren't able to get rid of the problematic 0x1f opcode. Annoying, back to square one.

Surely a bit of IR will do?

The problem was in the Clang frontend - compiling C to IR. We know that the BPF "assembly" backend for LLVM can generate bytecode that uses ALU32 instructions. Maybe if we tweak the Clang compiler's output just a little we can achieve what we want. This means we have to get our hands dirty with the LLVM Intermediate Representation (IR). If you haven't heard of LLVM IR before, now is a good time to do some reading2. In short the LLVM IR is what Clang produces and the LLVM BPF backend consumes. Time to write IR by hand!
Here's a hand-tweaked IR variant of our sub64() function:

    define dso_local i64 @sub64_ir(i64, i64) local_unnamed_addr #0 {
      %3 = trunc i64 %0 to i32     ; xl = (u32) x;
      %4 = trunc i64 %1 to i32     ; yl = (u32) y;
      %5 = sub i32 %3, %4          ; lo = xl - yl;
      %6 = zext i32 %5 to i64
      %7 = lshr i64 %0, 32         ; tmp1 = x >> 32;
      %8 = lshr i64 %1, 32         ; tmp2 = y >> 32;
      %9 = trunc i64 %7 to i32     ; xh = (u32) tmp1;
      %10 = trunc i64 %8 to i32    ; yh = (u32) tmp2;
      %11 = sub i32 %9, %10        ; hi = xh - yh
      %12 = icmp ult i32 %3, %5    ; tmp3 = xl < lo
      %13 = zext i1 %12 to i32
      %14 = sub i32 %11, %13       ; hi -= tmp3
      %15 = zext i32 %14 to i64
      %16 = shl i64 %15, 32        ; tmp2 = hi << 32
      %17 = or i64 %16, %6         ; res = tmp2 | (u64)lo
      ret i64 %17
    }

It may not be pretty but it does produce desired BPF code when compiled3. You will likely find the LLVM IR reference helpful when deciphering it.

And voila! First working solution that produces correct results:

    $ ./run-bpf -filter ir $[2**32] 1
    arg0 4294967296 0x0000000100000000
    arg1          1 0x0000000000000001
    diff 4294967295 0x00000000ffffffff

Actually using this hand-written IR function from C is tricky. See our code on GitHub.

The final trick

Hand-written IR does the job. The downside is that linking IR modules to your C modules is hard. Fortunately there is a better way. You can persuade Clang to stick to 32-bit ALU ops in generated IR.

We've already seen the problem. To recap, if we ask Clang to subtract 32-bit integers, it will operate on 64-bit values and throw away the top 32 bits. Putting C, IR, and eBPF side-by-side helps visualize this:

The trick to get around it is to declare the 32-bit variable that holds the result as volatile. You might already know the volatile keyword if you've written Unix signal handlers.
It basically tells the compiler that the value of the variable may change under its feet, so it should refrain from reordering loads (reads) from it. It also means that stores (writes) to it might have side-effects, so changing their order or eliminating them by skipping the write to memory is not allowed either. Using volatile makes Clang emit special loads and/or stores at the IR level, which on the eBPF level then translates to writing/reading the value from memory (stack) on every access.

While this sounds unrelated to the problem at hand, there is a surprising side-effect to it: with volatile access the compiler doesn't promote the subtraction to 64 bits!

Don't ask me why, although I would love to hear an explanation. For now, consider this a hack. One that does not come for free - there is the overhead of going through the stack on each read/write. However, if we play our cards right we just might reduce it a little.

We don't actually need the volatile load or store to happen, we just want the side effect. So instead of declaring the value as volatile, which implies that both reads and writes are volatile, let's try to make only the writes volatile with the help of a macro:

    /* Emits a "store volatile" in LLVM IR */
    #define ST_V(rhs, lhs) (*(volatile typeof(rhs) *) &(rhs) = (lhs))

If this macro looks strangely familiar, it's because it does the same thing as the WRITE_ONCE() macro in the Linux kernel. Applying it to our example:

That's another hacky but working solution. Pick your poison.

So there you have it - from C, to IR, and back to C to hack around a bug in the eBPF verifier and be able to subtract 64-bit integers again. Usually you won't have to dive into LLVM IR or assembly to make use of eBPF. But it does help to know a little about it when things don't work as expected.

Did I mention that 64-bit addition is also broken? Have fun fixing it!

1 Okay, it was more like three months until the bug was discovered and fixed.
2 Some even think that it is better than assembly.

3 How do we know? The litmus test is to look for statements matching r[0-9] [-+]= r[0-9] in BPF assembly.
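As an addendum, the limb-by-limb subtraction used in the work-around can be sanity-checked outside the kernel. A Python model of it (masks applied by hand, since Python integers are unbounded; this mirrors the C sub64() above but is not the article's code):

```python
U32 = (1 << 32) - 1
U64 = (1 << 64) - 1

def sub64(x, y):
    # Subtract the low 32-bit halves; wrap-around means a borrow occurred.
    xl, yl = x & U32, y & U32
    lo = (xl - yl) & U32
    borrow = 1 if lo > xl else 0
    # Subtract the high halves, propagating the borrow.
    xh, yh = (x >> 32) & U32, (y >> 32) & U32
    hi = (xh - yh - borrow) & U32
    return (hi << 32) | lo

# Agrees with plain wrapping 64-bit subtraction on a few spot checks:
for x, y in [(2**32, 1), (0, 1), (123, 456), (U64, 2**63)]:
    assert sub64(x, y) == (x - y) & U64
```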
https://blog.cloudflare.com/ebpf-cant-count/
And then:

    using System;
    using Mono.Unix;

    public class Foo {
        public static void Main (string[] args)
        {
            foreach (UnixDriveInfo d in UnixDriveInfo.GetDrives ()) {
                Console.WriteLine(String.Format("drive {0} available {1}",
                    d.Name, d.AvailableFreeSpace));
            }
        }
    }

    mjolnir:~ # mcs -r:Mono.Posix a.cs

That did the trick in figuring out why rug thought it did not have enough disk space to install updates. (Answer: the cache-directory must have an entry in fstab; just mounting it is not enough.)

    mjolnir:~ # mono ./a.exe
    drive / available 1136402432
    drive /big available 22027657216
    drive /proc available 0
    drive /sys available 0
    drive /sys/kernel/debug available 0
    drive /proc/bus/usb available 0
    drive /dev/pts available 0
    drive /media/floppy available 1136402432

Before you ask, this does not mean that I'm rewriting all my shell scripts to Mono. I'm just happy to have passed this unknown territory successfully.
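For comparison, the same free-space query is also a one-liner in Python's standard library; a rough analogue of the Mono snippet, checking just the root filesystem:

```python
import shutil

# disk_usage() reports (total, used, free) in bytes for the filesystem
# holding the given path, much like UnixDriveInfo.AvailableFreeSpace.
usage = shutil.disk_usage("/")
print("drive / available", usage.free)
```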
http://mvidner.blogspot.com/2007/
In the interest of full disclosure, this is a lab assignment that is due tomorrow. We are allowed to work with classmates in/out of class on it, so I am not cheating by asking for help. I am much older than most of my classmates, so we don't associate outside of class. (Kinda creepy for a 40-year-old dude hanging out with teenagers no matter what the reason, if you ask me.)

I have to modify a program that I have previously written to include a loop. When I first wrote the program, I was aggravated that it terminated after each execution even though looping was not a requirement. I researched how to get it to loop (started a thread about it on here for help). I wasn't aware at the time that I would have to make the same program loop later. I finally managed, with some help from my tutor, to figure out how to get it to loop by prompting the user to answer if they wanted to continue.

This assignment is a bit different than what I had accomplished. We are required to have it loop until the years of service entered are <=0, or >=99. I am pretty sure that it requires a pre-test condition, but I have tried both pre- and post-test conditions while trying to get it to work. It will loop, but will not exit when any of the exit numbers are entered. If you can spot my mistake, please offer some help or a hint. It would be greatly appreciated. Thanks ahead of time for any help you may offer.
Code:

    //Travis
    //CPT-168-A01
    //2nd Program

    #include <iostream>
    using namespace std;

    int main()
    {
        system("color f0");
        cout << "\t\t\t***********************************" << endl;
        cout << "\t\t\t* Travis *" << endl;
        cout << "\t\t\t* Second Program *" << endl;
        cout << "\t\t\t* CPT-168-A01 *" << endl;
        cout << "\t\t\t***********************************" << endl;

        int yrswrkd = 0;
        double hrswrkd = 0.0, hrlyrat = 0.0, notpay = 0.0, otpay = 0.0,
               bnspay = 0.0, pay1 = 0.0, totpay = 0.0;

        do
        {
            cout << "Please Enter Years of Service(Less than 1 or Greater than 98 to Exit): ";
            cin >> yrswrkd;
            cout << "Please Enter Hours Worked: ";
            cin >> hrswrkd;
            cout << "Please Enter Hourly Pay Rate: ";
            cin >> hrlyrat;

            if (hrswrkd > 40)
                notpay = hrlyrat * 40,
                otpay = (hrswrkd - 40) * 1.5 * hrlyrat,
                pay1 = otpay + notpay;
            else
                pay1 = hrswrkd * hrlyrat;

            cout << "Your Gross Pay Is: $" << pay1 << endl;

            if (yrswrkd >= 1 && yrswrkd <= 5)
                bnspay = pay1 * .05;
            else if (yrswrkd >= 6 && yrswrkd <= 9)
                bnspay = pay1 * .10;
            else if (yrswrkd == 10)
                bnspay = pay1 * .15;
            else if (yrswrkd > 10)
                bnspay = pay1 * .20;
            //endif
            //endif
            //endif
            //endif

            cout << "Your Employee Longevity Bonus is: $" << bnspay << endl;
            totpay = pay1 + bnspay;
            cout << "Your Total Pay Is: $" << totpay << endl << endl;
            cout << "Thank You For Your Service!" << endl << endl;
        } while (yrswrkd >= 1 || yrswrkd <= 98);

        cout << "HAVE A NICE DAY!!!" << endl << endl;
        system("pause");
        return 0;
    }
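One way to narrow down the mistake is to tabulate the loop condition at the boundary values. A quick sketch in Python (helper names are mine, not from the assignment), comparing the condition as posted with the stated requirement of looping only for years 1 through 98:

```python
def posted_condition(yrs):
    # The test as written in the do-while above.
    return yrs >= 1 or yrs <= 98

def intended_condition(yrs):
    # Keep looping only while years of service are in the valid range.
    return 1 <= yrs <= 98

for yrs in (-5, 0, 1, 50, 98, 99, 150):
    print(yrs, posted_condition(yrs), intended_condition(yrs))

# Every integer satisfies "yrs >= 1 or yrs <= 98", so that loop never exits.
```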
http://cboard.cprogramming.com/cplusplus-programming/155072-need-help-looping.html
C# Task example: here we learn how to create and consume tasks in C# programming. Task lives in the Threading namespace, so you need to add a reference to it:

    using System.Threading.Tasks;

Create a simple C# task object without any method:

    Task t = Task.Delay(100);

Create a task object with an Action. Task.Run() always takes a method (an action) as its parameter:

    Task t1 = Task.Run(() => Method1());

The Delay method is used when you want some task to be completed after some time delay; you can specify the time in milliseconds:

    Task t = Task.Delay(100);

Create a simple task as a method:

    public Task GetData()
    {
        /* create a task object with an Action. */
        Task t1 = Task.Run(() => Method1());
        return t1;
    }

Create a task object of a specific data type with an Action. In the earlier example we saw how to create a task; now we see how to create a task with a specific data type. For example, here I have created a task that gets student information based on a student id and returns a Student object:

    public Task<Student> GetStudent(int studentId)
    {
        /* create a task object of specific data type with an Action. */
        Task<Student> t1 = Task.Run(() => getStudentInfo(studentId));
        return t1;
    }

    Student getStudentInfo(int studentId)
    {
        Student s = new Student();
        //Get student details from database
        return s;
    }

Returning a list of custom objects using Task: in the example below I have a student list, so the function returns Task<IList<Student>>:

    public Task<IList<Student>> GetStudents()
    {
        Task<IList<Student>> t2 = Task.Run(() => getAllStudents());
        return t2;
    }

    IList<Student> getAllStudents()
    {
        IList<Student> s = new List<Student>();
        //Get student details from database
        return s;
    }

To learn more about C# Task, you should also look at the following tutorials.
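The same pattern, starting work and handing back a typed handle to the eventual result, also exists outside C#. A rough Python analogue of Task.Run using concurrent.futures (the student lookup is a stand-in, not a real database call):

```python
from concurrent.futures import ThreadPoolExecutor

def get_student_info(student_id):
    # Stand-in for the database lookup in the C# example.
    return {"id": student_id, "name": "student-%d" % student_id}

with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(get_student_info, 42)  # roughly Task.Run(() => ...)
    student = future.result()                   # block until the result is ready

print(student["name"])  # prints student-42
```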
https://www.webtrainingroom.com/csharp/task-example
Python NLTK/Neo4j: Analysing the transcripts of How I Met Your Mother

After reading Emil's blog post about dark data a few weeks ago I became intrigued about trying to find some structure in free text data and I thought How I met your mother's transcripts would be a good place to start.

I found a website which has the transcripts for all the episodes and then, having manually downloaded the two pages which listed all the episodes, wrote a script to grab each of the transcripts so I could use them on my machine.

I wanted to learn a bit of Python and my colleague Nigel pointed me towards the requests and BeautifulSoup libraries to help me with my task. The script to grab the transcripts looks like this:

    import requests
    from bs4 import BeautifulSoup
    from soupselect import select

    episodes = {}
    for i in range(1, 3):
        page = open("data/transcripts/page-" + str(i) + ".html", 'r')
        soup = BeautifulSoup(page.read())
        for row in select(soup, "td.topic-titles a"):
            parts = row.text.split(" - ")
            episodes[parts[0]] = {"title": parts[1], "link": row.get("href")}

    for key, value in episodes.iteritems():
        parts = key.split("x")
        season = int(parts[0])
        episode = int(parts[1])
        filename = "data/transcripts/S%d-Ep%d" % (season, episode)
        print filename
        with open(filename, 'wb') as handle:
            headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36'}
            response = requests.get("" + value["link"], headers=headers)
            if response.ok:
                for block in response.iter_content(1024):
                    if not block:
                        break
                    handle.write(block)

The files containing the lists of episodes are named 'page-1' and 'page-2'. The code is reasonably simple - we find all the links inside the table, put them in a dictionary and then iterate through the dictionary and download the files to disk. The code to save the file is a bit of a monstrosity, but there didn't seem to be a 'save' method that I could use.
Having downloaded the files, I thought through all sorts of clever things I could do, including generating a bag-of-words model for each episode or performing sentiment analysis on each sentence, which I'd learnt about from a Kaggle tutorial. In the end I decided to start simple and extract all the words from the transcripts and count how many times a word occurred in a given episode.

I ended up with the following script, which created a dictionary of (episode -> words + occurrences):

    import csv
    import nltk
    import re

    from bs4 import BeautifulSoup
    from soupselect import select
    from nltk.corpus import stopwords
    from collections import Counter
    from nltk.tokenize import word_tokenize

    def count_words(words):
        tally = Counter()
        for elem in words:
            tally[elem] += 1
        return tally

    episodes_dict = {}
    with open('data/import/episodes.csv', 'r') as episodes:
        reader = csv.reader(episodes, delimiter=',')
        reader.next()
        for row in reader:
            print row
            transcript = open("data/transcripts/S%s-Ep%s" % (row[3], row[1])).read()
            soup = BeautifulSoup(transcript)
            rows = select(soup, "table.tablebg tr td.post-body div.postbody")
            raw_text = rows[0]
            [ad.extract() for ad in select(raw_text, "div.ads-topic")]
            [ad.extract() for ad in select(raw_text, "div.t-foot-links")]
            text = re.sub("[^a-zA-Z]", " ", raw_text.text.strip())
            words = [w for w in nltk.word_tokenize(text) if not w.lower() in stopwords.words("english")]
            episodes_dict[row[0]] = count_words(words)

Next I wanted to explore the data a bit to see which words occurred across episodes or which word occurred most frequently, and realised that this would be a much easier task if I stored the data somewhere.

s/somewhere/in Neo4j

Neo4j's query language, Cypher, has a really nice ETL-esque tool called 'LOAD CSV' for loading in CSV files (as the name suggests!)
so I added some code to save my words to disk:

    with open("data/import/words.csv", "w") as words:
        writer = csv.writer(words, delimiter=",")
        writer.writerow(["EpisodeId", "Word", "Occurrences"])
        for episode_id, words in episodes_dict.iteritems():
            for word in words:
                writer.writerow([episode_id, word, words[word]])

This is what the CSV file contents look like:

    $ head -n 10 data/import/words.csv
    EpisodeId,Word,Occurrences
    165,secondly,1
    165,focus,1
    165,baby,1
    165,spiders,1
    165,go,4
    165,apartment,1
    165,buddy,1
    165,Exactly,1
    165,young,1

Now we need to write some Cypher to get the data into Neo4j:

    // words
    LOAD CSV WITH HEADERS FROM "file:/Users/markneedham/projects/neo4j-himym/data/import/words.csv" AS row
    MERGE (word:Word {value: row.Word})

    // episodes
    LOAD CSV WITH HEADERS FROM "file:/Users/markneedham/projects/neo4j-himym/data/import/words.csv" AS row
    MERGE (episode:Episode {id: TOINT(row.EpisodeId)})

    // words to episodes
    LOAD CSV WITH HEADERS FROM "file:/Users/markneedham/projects/neo4j-himym/data/import/words.csv" AS row
    MATCH (word:Word {value: row.Word})
    MATCH (episode:Episode {id: TOINT(row.EpisodeId)})
    MERGE (word)-[:USED_IN_EPISODE {times: TOINT(row.Occurrences) }]->(episode);

Having done that we can write some simple queries to explore the words used in How I met your mother:

    MATCH (word:Word)-[r:USED_IN_EPISODE]->(episode)
    RETURN word.value, COUNT(episode) AS episodes, SUM(r.times) AS occurrences
    ORDER BY occurrences DESC
    LIMIT 10

    ==> +-------------------------------------+
    ==> | word.value | episodes | occurrences |
    ==> +-------------------------------------+
    ==> | "Ted"      | 207      | 11437       |
    ==> | "Barney"   | 208      | 8052        |
    ==> | "Marshall" | 208      | 7236        |
    ==> | "Robin"    | 205      | 6626        |
    ==> | "Lily"     | 207      | 6330        |
    ==> | "m"        | 208      | 4777        |
    ==> | "re"       | 208      | 4097        |
    ==> | "know"     | 208      | 3489        |
    ==> | "Oh"       | 197      | 3448        |
    ==> | "like"     | 208      | 2498        |
    ==> +-------------------------------------+
    ==> 10 rows

The main 5 characters occupy the top 5 positions which is probably what
you'd expect. I'm not sure why 'm' and 're' are in the next two positions - I expect that might be scraping gone wrong!

Our next query might focus on checking which character is referred to the most in each episode:

    episode.id, topWord.word AS word, topWord.times AS occurrences
    LIMIT 10

    ==> +---------------------------------------+
    ==> | episode.id | word       | occurrences |
    ==> +---------------------------------------+
    ==> | 72         | "Barney"   | 75          |
    ==> | 143        | "Ted"      | 16          |
    ==> | 43         | "Lily"     | 74          |
    ==> | 156        | "Ted"      | 12          |
    ==> | 206        | "Barney"   | 23          |
    ==> | 50         | "Marshall" | 51          |
    ==> | 113        | "Ted"      | 76          |
    ==> | 178        | "Barney"   | 21          |
    ==> | 182        | "Barney"   | 22          |
    ==> | 67         | "Ted"      | 84          |
    ==> +---------------------------------------+
    ==> 10 rows

If we dig into it further there's actually quite a bit of variety in the number of times the top character in each episode is mentioned, which again probably says something about the data:

    MIN(topWord.times), MAX(topWord.times), AVG(topWord.times), STDEV(topWord.times)

    ==> +-------------------------------------------------------------------------------------+
    ==> | MIN(topWord.times) | MAX(topWord.times) | AVG(topWord.times) | STDEV(topWord.times) |
    ==> +-------------------------------------------------------------------------------------+
    ==> | 3                  | 259                | 63.90865384615385  | 42.36255207691068    |
    ==> +-------------------------------------------------------------------------------------+
    ==> 1 row

Obviously this is a very simple way of deriving structure from text; here are some of the things I want to try out next:

- Detecting common phrases/memes/phrases used in the show (e.g. the yellow umbrella) - this should be possible by creating different length n-grams and then searching for those phrases across the corpus.
- Pull out scenes - some of the transcripts use the keyword 'scene' to denote this, although some of them don't.
Depending on how many transcripts contain scene demarcations, perhaps we could train a classifier to detect where scenes should be in the transcripts which don't have them.

- Analyse who talks to each other or who talks about each other most frequently.
- Create a graph of conversations, as my colleagues Max and Michael have previously blogged about.
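The n-gram idea from the first bullet needs nothing beyond the standard library to prototype; a sketch (the sample tokens here are made up, not from the real transcripts):

```python
from collections import Counter

def ngrams(tokens, n):
    # Slide a window of length n over the token list.
    return zip(*(tokens[i:] for i in range(n)))

tokens = "wait for it legen wait for it dary".split()
bigrams = Counter(ngrams(tokens, 2))
print(bigrams[("wait", "for")])  # -> 2
```

The same `ngrams()` helper works for any length of phrase; counting trigrams is just `Counter(ngrams(tokens, 3))`.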
https://markhneedham.com/blog/2015/01/10/python-nltkneo4j-analysing-the-transcripts-of-how-i-met-your-mother/
if pred - vs

Start of an if pred - vs...else - vs...endif - vs block, with the condition taken from the content of the predicate register.

Syntax

Where:

- [!] is an optional NOT modifier. This modifies the value in the predicate register.
- pred is the predicate register, p0. See Predicate Register.
- replicateSwizzle is a single component that is copied (or replicated) to all four components (swizzled). Valid components are: x, y, z, w or r, g, b, a.

Remarks

This instruction is used to skip a block of code, based on a channel of the predicate register. Each if_pred block must end with an else or endif instruction.

Restrictions include:

- if_pred blocks can be nested. This counts toward the total dynamic nesting depth along with if_comp blocks.
- An if_pred block cannot straddle a loop block; it should be either completely inside it or surround it.

Related topics
https://docs.microsoft.com/en-us/windows/win32/direct3dhlsl/if-pred---vs
One of the main advantages of Vue is its versatility. Although the library is focused on the view layer only, we can easily integrate it with a wide range of existing JavaScript libraries and/or Vue-based projects to build almost anything we want — from a simple to-do app to complex, large-scale projects. Today, we'll see how to use Vue in conjunction with Vuetify and howler.js to create a simple but fully functional music step sequencer. Vuetify offers a rich collection of customizable, pre-made Vue components, which we'll use to build the UI of the app. To handle the audio playing, we'll use howler.js, which wraps the Web Audio API and makes it easier to use.

Getting started

In this tutorial, as its title suggests, we're going to build a simple music step sequencer. We'll be able to create simple beats and save them as presets for future use. In the image below you can see what the music sequencer will look like when it is run for the first time and there are no presets created yet.

As you can see, there will be four tracks, each one representing a single sound (kick, snare, hi-hat, and shaker). Beats are created by selecting the steps — the cells in the track's row — which we want to be playable. Each track can be muted separately for finer control. To change the playback speed, we use the tempo slider. We also have a metronome, which measures each beat. Finally, we have the ability to save the current state of the tracks and the tempo value as a reusable preset. If we're not satisfied with the current outcome, we can start over by clicking the Clear Tracks button, which will deselect all selected steps.

The image below shows the music sequencer with some presets created:

The saved presets are represented as a list of tags. To load a preset, we click the corresponding tag, and to delete it, we click the trash icon. We can also delete all presets at once by clicking the red button. The presets are stored in the user's browser via the Web Storage API.
You can find the project's source files at the GitHub repo and test the app on the demo page.

Note: before we get started, you need to make sure that you have Node.js, Vue, and Vue CLI installed on your machine. To do so, download the Node.js installer for your system and run it. For Vue CLI, follow the instructions on its installation page.

To get started, let's create a new Vue project:

    vue create music-step-sequencer

When prompted to pick a preset, just leave the default one (which is already selected) and hit Enter. This will install a basic project with Babel and ESLint support. The next step is to add Vuetify to the project:

    cd music-step-sequencer
    vue add vuetify

Vuetify also asks you to select a preset. Again, leave the default preset selected and hit Enter. And finally, we need to add howler.js:

    npm install howler

After we have all the necessary libraries installed, we need to clean up the default Vue project a bit. First, delete HelloWorld.vue inside the src/components folder. Then, open App.vue and delete everything inside the <v-app> element. Also, delete the import statement and the registration of the HelloWorld.vue component. Now we are ready to start building our new app.

Adding the base app template

We'll build the app from top to bottom, starting with the title bar, which will look like this:

To create it, let's add the following markup inside the <v-app> element:
After you open it in your browser, you should see a blank page with the title bar we’ve just created. If all is good, let’s move on. Creating the base audio functionality Before we start creating the app’s components, we need to set up the base audio functionality needed for the sequencer to work. The first thing we need to do is to include four sound sample files – kick.mp3, snare.mp3, hihat.mp3, and shaker.mp3. So, let’s do this: import { Howl } from "howler"; const kick = new Howl({src: ["",],}); const snare = new Howl({src: ["",],}); const hihat = new Howl({src: ["",],}); const shaker = new Howl({src: ["",],}); Here, we import the Howl object from howler.js and then use it to set up the sounds. The new Howl() function creates a sound object, which we can play, mute, etc. Note, the reason I use external sound sources here is that the modern browsers have some restrictions about loading and playing audio files. So, local URLs won’t work. The solution is to use absolute URLs from real servers and also to allow automatic audio playing in your browser. For Chrome, click the icon before the the address bar and choose Site settings. Then, navigate to Sound and change the selection from Automatic (default) to Allow. Unfortunately for each browser the location of this setting and the actual words used to describe it differentiates. So, if you’re not using Chrome, you will need to use your favorite search engine to find the procedure for the browser you work in. To use your own audio files, just swap the URLs before the deployment with the appropriate links pointing to sounds from your project directory. After we have added the sound samples, below them we create a new instance of the audio context let audioContext = new AudioContext(); . 
Then, in the Vue instance object we define the following properties: data() { return { tempo: 120, tracks: { kick: [], snare: [], hihat: [], shaker: [], }, futureTickTime: audioContext.currentTime, counter: 0, timerID: null, isPlaying: false, }; }, computed: { secondsPerBeat() { return 60 / this.tempo; }, counterTimeValue() { return this.secondsPerBeat / 4; }, }, The tempo property defines the speed of audio playing. The tracks object contains the arrays, which will be used to define the selected steps for each track. The futureTickTime, counter, and timerID will be used to loop through the steps in the soundtracks. The secondsPerBeat and counterTimeValue computed properties calculate the time of a single step based on the tempo‘s value. For example, at tempo 120 BPM (beats per minute) secondsPerBeat will equals 0.5 (half second for each beat). When we divide this value by four in counterTimeValue, the result will be 0.125 (the duration of one step). So, when we change the tempo, the step’s duration will update accordingly. Let’s now create the methods for scheduling, playing, and stopping the sounds: methods: { scheduleSound(trackArray, sound, counter) { for (var i = 0; i < trackArray.length; i += 1) { if (counter === trackArray[i]) { sound.play(); } } }, } The first method takes three parameters: - An array with numbers, representing the steps we want to play - The actual sound we want to play - A counter variable used to check if the array contains an item that matches the current step’s number The method iterates through the track, and when the counter value and an array item match, the sound associated with the track is next method is to move the playback to the next step in the track: playTick() { this.counter += 1; this.futureTickTime += this.counterTimeValue; if (this.counter > 15) { this.counter = 0; } }, This method increments the counter by one and shifts the futureTickTime by adding the time of one step to it. 
If the counter becomes greater than 15, it starts over, and in that way, it loops through the 16 steps of the tracks. The next method schedules the sounds, and loops the playback through the steps: scheduler() { if (this.futureTickTime < audioContext.currentTime + 0.1) { this.scheduleSound(this.tracks.kick, kick, this.counter); this.scheduleSound(this.tracks.snare, snare, this.counter); this.scheduleSound(this.tracks.hihat, hihat, this.counter); this.scheduleSound(this.tracks.shaker, shaker, this.counter); this.playTick(); } this.timerID = window.setTimeout(this.scheduler, 0); }, The method checks if the futureTickTime is within a tenth of a second of the audioContext.currentTime, and if it’s true, the scheduleSound() runs for each sound. The playTick() runs once, moving the playback one step further. In the end, a setTimeout() — which runs the scheduler() recursively — is assigned to the timerID. As we saw earlier, the playTick() increments the futureTickTime with the time of one step. The resulting value remains intact until the audioContext.currentTime catches up with it. Then the futureTickTime is incremented again with one step (in the next run of playTick()). All this “time racing” continues for as long as the scheduler() is allowed to run. The next two methods are used to play and stop the playback: play() { if (this.isPlaying === false) { this.counter = 0; this.futureTickTime = audioContext.currentTime; this.scheduler(); this.isPlaying = true; } }, If the sequencer is not playing, this method starts the scheduler() and sets the corresponding properties from the data object: stop() { if (this.isPlaying === true) { window.clearTimeout(this.timerID); this.isPlaying = false; } }, If the sequencer is playing, this method stops it by clearing the setTimeout() set previously in the timerID. Creating the app components Now, the base audio functionality is set up. We’re ready to start creating the actual app components. 
Creating the SoundTracks component We’ll start with the component responsible for rendering the tracks. Here is how it will look: In the components folder, create a new SoundTracks.vue component with the following content: <template> </template> <script> export default { } </script> Note, this is the starting template for each new component. Next, let’s add the properties and methods we’ll need: props: ["tracks"], data() { return { toggles: { kick: true, snare: true, hihat: true, shaker: true, }, }; }, methods: { playSound(sound) { this.$emit('playsound', sound); }, muteSound(sound, toggle) { this.$emit('mutesound', {sound, toggle}); } } We pass a prop tracks, which we’ll use to render the tracks. The playSound() method emits a playsound custom event and the actual sound — kick, snare, etc. The muteSound() method is similar but here the second argument is an object with the sound we want to mute/unmute and a toggle variable which determines the current state of the sound. As we have four individual sounds, we need four different toggle variables. Otherwise, the sounds will be muted/unmuted all together. That’s why we create the toggles object with a separate property for each track. Now, let’s create the component’s template. Add the following markup inside the <template> element: <v-container> <template v- <v-row :</v-switch> </v-col> <v-col> <v-btn fab raised small @ <v-iconmdi-volume-high</v-icon> <v-iconmdi-volume-off</v-icon> </v-btn> </v-col> <v-col> <v-btn-toggle</v-btn> </v-btn-toggle> </v-col> </v-row> </template> </v-container> Here, we iterate over the tracks using v-for directive and create three columns. - The first column renders a toggle button (with v-switchcomponent), which mute/unmute the sound for the corresponding track - The second column renders a button, which plays the sound associated with the track - The third column renders the actual track. 
We use v-button-togglecomponent to create a group of 16 toggle buttons representing the steps of the track Now, let’s switch back to App.vue and add the following methods: playSound(sound) { eval(sound).play() }, muteSound(obj) { eval(obj['sound']).mute(!obj['toggle']) }, The playSound() method plays the sound received from the child component. The muteSound() method mutes/unmutes the sound received from the child component, depending on the toggle’s value. The next thing to do is to include the component in the template: <SoundTracks :</SoundTracks> We bind the tracks prop to the corresponding prop in the data object. We also add listeners to the playsound and mutesound events, which will run the playSound() and the muteSound() methods. Lastly, we import the component and register it: import SoundTracks from "./components/SoundTracks.vue"; ... components: { SoundTracks, }, Now, the first component is completed. When we check if everything works properly, we can move on to the next one. Creating the SoundControls component In this component, we’ll create all controls for playing and stopping the sequencer, changing the tempo, and running a metronome. Here is how it will look: In the components folder, create new SoundControls.vue component and add the following properties and methods: props: ['tempo', 'counter', 'isPlaying'], data() { return { localTempo: this.tempo, metronome: 0, }; }, watch: { counter(val) { if (this.isPlaying) { if (val >= 0 && val <= 3) { this.metronome = 1; } else if (val > 3 && val <= 7) { this.metronome = 2; } else if (val > 7 && val <= 11) { this.metronome = 3; } else if (val > 11 && val <= 15) { this.metronome = 4; } } }, tempo(val) { this.localTempo = val; } }, methods: { play() { this.$emit("play"); }, stop() { this.$emit("stop"); this.metronome = 0; }, updateTempo() { this.$emit("update:tempo", this.localTempo); }, }, First, we pass three props from the parent, tempo, counter, and isPlaying. 
Next, we assign the tempo to a new local variable (localTempo), because it's not recommended to mutate a prop directly from the child. We also add a metronome property needed for the metronome functionality.

We need to add watchers for two variables. We watch the counter because we need its current value in order to change/update the metronome's value. We also watch the tempo in order to update the localTempo when the tempo in the parent changes.

The play() and stop() methods emit the corresponding events, and the latter also resets the metronome's value. The updateTempo() method emits an update event for the tempo with the localTempo's value.

Now, let's put the necessary markup in the component's template:

<div>
  <div class="d-flex justify-space-between">
    <div class="ml-2">
      <v-btn @click="play">
        <v-icon>mdi-play</v-icon>
      </v-btn>
      <v-btn @click="stop">
        <v-icon>mdi-stop</v-icon>
      </v-btn>
    </div>
    <div class="d-flex align-center">
      <v-icon>mdi-metronome-tick</v-icon>
      <v-rating v-model="metronome" length="4" readonly></v-rating>
    </div>
  </div>
  <div>
    <v-slider v-model="localTempo" @change="updateTempo"></v-slider>
  </div>
</div>

First, we create the play and stop buttons. Then, we use the v-rating component to simulate a metronome. We set the length property to 4 because we have four beats in the sequencer. We add the readonly property to disable user interaction, and we bind it to the metronome's value. Finally, we add a v-slider component for changing the tempo's value. The slider is bound to the localTempo, so we can change the tempo safely without Vue warnings.

Now, let's add the component in App.vue's template:

<SoundControls :tempo.sync="tempo" :counter="counter" :is-playing="isPlaying" @play="play" @stop="stop"></SoundControls>

Here, we bind the child props to the corresponding properties in the parent. We use the .sync modifier for the tempo to create two-way data binding between parent and child. We also set event listeners for running the play() and stop() methods, which we created earlier.
Lastly, we import the component and register it:

import SoundTracks from "./components/SoundTracks.vue";
import SoundControls from "./components/SoundControls.vue";
...
components: {
  SoundTracks,
  SoundControls,
},

Now, we can play with the controls to see if everything works as intended, and then move on to the final component.

Creating the SoundPresets component

The last functionality left to create is the ability to save beat presets for later use. The component should look something like this:

The image above shows the component before the presets are created and saved. And the image below shows the component with a list of saved presets:

In the components folder, create a new SoundPresets.vue component and add the following properties and methods:

created() {
  this.userPresets = JSON.parse(localStorage.getItem('userPresets') || '{}');
},
props: ['currentPreset'],
data() {
  return {
    dialog: false,
    presetName: '',
    rules: {
      required: value => !!value || 'Required.',
      counter: value => value.length >= 3 || 'At least 3 characters required',
    },
    userPresets: {},
    selectedPreset: '',
  };
},

Here, we pass a currentPreset prop, which we'll need to save new presets. We define the dialog and presetName data properties needed for the modal for saving a preset, which we'll create later. The rules property will be used for validating the preset name's text field. The first rule checks that the text's value is not empty, and the second rule checks that it contains at least three characters. In both cases, if the check returns false, the specified error message appears.

We need a userPresets object to temporarily store the presets. We'll use this object to render the presets. The last property we'll need is selectedPreset, which is needed for displaying the currently selected preset. Finally, we use the created() lifecycle hook to load previously saved presets, if any.
Now, let's create all the necessary methods:

methods: {
  clearTracks() {
    this.$emit('cleartracks');
    this.selectedPreset = '';
  },
  loadPreset(preset) {
    this.$emit('loadpreset', preset);
    this.selectedPreset = preset;
  },
  savePreset() {
    this.dialog = false;
    this.userPresets[this.presetName] = {};
    let tracks = Object.assign({}, this.currentPreset.tracks);
    this.userPresets[this.presetName].tempo = this.currentPreset.tempo;
    this.userPresets[this.presetName].tracks = tracks;
    localStorage.setItem('userPresets', JSON.stringify(this.userPresets));
    this.presetName = '';
  },
  cancelDialog() {
    this.dialog = false;
    this.presetName = '';
  },
  deletePreset(preset) {
    this.$delete(this.userPresets, preset);
    localStorage.setItem('userPresets', JSON.stringify(this.userPresets));
    if (preset == this.selectedPreset) {
      this.selectedPreset = '';
    }
  },
  deleteAllPresets() {
    localStorage.clear();
    this.userPresets = {};
  },
  isEmpty(obj) {
    return Object.entries(obj).length === 0;
  }
}

Let's explain the above methods one by one:

- clearTracks() emits a cleartracks event and resets the selectedPreset
- loadPreset() emits a loadpreset event with the current preset's name. It also assigns the latter to the selectedPreset
- savePreset() closes the modal by changing the dialog property to false. It creates a new empty object for the preset we want to save. Then, it defines a tracks variable and assigns a copy of the tracks object to it. (This is needed because otherwise the same object would be referenced, which would lead to saving one and the same object for each new preset.) Next, it assigns the tempo and tracks properties to the new preset object. Then it saves the updated userPresets object in the local storage. Lastly, it resets the presetName
- cancelDialog() fires on clicking the Cancel button. It closes the modal and resets the presetName
- deletePreset() deletes a preset and updates the storage.
If we delete the currently selected preset, we reset the selectedPreset property upon deletion

- deleteAllPresets() deletes all saved user presets in the storage and empties the userPresets object
- isEmpty() is a utility method checking whether an object is empty

Let's now start creating the component's template:

<div>
  <div class="d-flex ma-2">
    <v-btn @click="clearTracks">
      Clear Tracks
    </v-btn>
    <v-btn @click.stop="dialog = true">
      Save Preset
    </v-btn>
    <v-spacer></v-spacer>
    <v-btn @click="deleteAllPresets">
      Delete All Presets
    </v-btn>
  </div>
  ...
</div>

Here, we create the buttons for clearing the tracks, saving a preset, and deleting all presets. In the Save Preset's click event listener, we use the .stop modifier to stop the event's propagation.

Let's add the next portion of the template:

<v-card>
  <v-card-title>Presets: {{selectedPreset}}</v-card-title>
  <v-slide-y-transition>
    <v-card-text v-if="isEmpty(userPresets)">Currently there are no presets created. To create a new preset, fill in some cells in the tracks and hit the Save Preset button.</v-card-text>
  </v-slide-y-transition>
  <v-scroll-y-transition>
    <div v-if="!isEmpty(userPresets)">
      <v-chip v-for="(preset, name) in userPresets" :key="name" close
        @click="loadPreset(name)" @click:close="deletePreset(name)">
        {{name}}
      </v-chip>
    </div>
  </v-scroll-y-transition>
</v-card>
...

Here, we create a presets list. At the top, we add a title, Presets:, followed by the name of the currently selected preset, if one exists. Below, we set a message shown if there are no presets. We use isEmpty() to check if presets exist. If presets exist, the message hides and the presets are rendered. Each preset is rendered as a v-chip component with a delete icon. Clicking the preset loads it into the tracks. Clicking the trash icon deletes the preset.

And the last part of the template is the markup for the modal dialog:

<v-dialog v-model="dialog">
  <v-card>
    <v-card-title>Save preset as:</v-card-title>
    <v-text-field v-model="presetName"
      :rules="[rules.required, rules.counter]"></v-text-field>
    <v-card-actions>
      <v-spacer></v-spacer>
      <v-btn @click="cancelDialog">
        Cancel
      </v-btn>
      <v-btn :disabled="presetName.length < 3" @click="savePreset">
        Save
      </v-btn>
    </v-card-actions>
  </v-card>
</v-dialog>

Here, we use the v-text-field component and bind it with the validation rules created before. Then we add Cancel and Save buttons.
To prevent saving a preset without a specified name, the Save button is disabled when the length of the text field's value is less than three characters. Here, you can see the modal with the first validation rule failed:

And here is the modal with the second validation rule failed:

Now, let's switch to App.vue and add the currentPreset computed property, which represents the current state of the tracks and tempo:

currentPreset() {
  return {
    tempo: this.tempo,
    tracks: this.tracks
  };
}

Next, we need to add the methods for clearing the tracks and loading a preset:

clearTracks() {
  this.tracks = {
    kick: [],
    snare: [],
    hihat: [],
    shaker: [],
  }
},
loadPreset(preset) {
  let presets = JSON.parse(localStorage.getItem('userPresets'));
  this.tempo = presets[preset].tempo;
  this.tracks = presets[preset].tracks;
},

- clearTracks empties the arrays of the tracks
- loadPreset creates a new presets variable and assigns to it the presets fetched from the local storage. Here again, we use the data from the local storage instead of the userPresets object because the latter is passed by reference. Then, it updates the tempo and tracks properties with the values of the selected preset

Next, we include the component in the template:

<SoundPresets :current-preset="currentPreset" @cleartracks="clearTracks" @loadpreset="loadPreset"></SoundPresets>

Lastly, we import the component and register it:

import SoundTracks from "./components/SoundTracks.vue";
import SoundControls from "./components/SoundControls.vue";
import SoundPresets from "./components/SoundPresets.vue";
...
components: {
  SoundTracks,
  SoundControls,
  SoundPresets
},

Et voila! We've finished our project successfully.

Conclusion

Congrats! You have just built a fully functional music step sequencer. As you just saw, Vue can be easily combined with both Vue-based projects and vanilla JavaScript libraries to build any functionality we want. This gives you the freedom to create a wide range of projects with ease.
https://blog.logrocket.com/build-a-music-step-sequencer-with-vue-and-vuetify/
8.4. Multi-GPU Computation

In this section, we will show how to use multiple GPUs for computation. For example, we can train the same model using multiple GPUs. As you might expect, running the programs in this section requires at least two GPUs. In fact, installing multiple GPUs on a single machine is common because there are usually multiple PCIe slots on the motherboard. If the NVIDIA driver is properly installed, we can use the nvidia-smi command to view all GPUs on the current computer.

In [1]:

!nvidia-smi

On this machine, the (abridged) output lists two Tesla M60 GPUs:

Sun Jan 13 08:53:20
|        33C    P0    38W / 150W |   2639MiB /  7618MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla M60           Off  | 00000000:00:1E.0 Off |                    0 |
| N/A   36C    P8    14W / 150W |     11MiB /  7618MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
+-----------------------------------------------------------------------------+

As we discussed in the "Automatic Parallel Computation" section, most operations can use all the computational resources of all CPUs, or all the computational resources of a single GPU. However, if we use multiple GPUs for model training, we still need to implement the corresponding algorithms. Of these, the most commonly used algorithm is called data parallelism.

8.4.1. Data Parallelism

In the deep learning field, data parallelism is currently the most widely used method for dividing model training tasks among multiple GPUs. Recall the process for training models using optimization algorithms described in the "Mini-batch Stochastic Gradient Descent" section. Now, we will demonstrate how data parallelism works using mini-batch stochastic gradient descent as an example.

Assume there are \(k\) GPUs on a machine.
Given the model to be trained, each GPU will maintain a complete set of model parameters independently. In any iteration of model training, given a random mini-batch, we divide the examples in the batch into \(k\) portions and distribute one to each GPU. Then, each GPU will calculate the local gradient of the model parameters based on the mini-batch subset it was assigned and the model parameters it maintains. Next, we add together the local gradients on the \(k\) GPUs to get the current mini-batch stochastic gradient. After that, each GPU uses this mini-batch stochastic gradient to update the complete set of model parameters that it maintains. Figure 8.1 depicts the mini-batch stochastic gradient calculation using data parallelism and two GPUs.

Fig. 8.1 Calculation of mini-batch stochastic gradient using data parallelism and two GPUs.

In order to implement data parallelism in a multi-GPU training scenario from scratch, we first import the required packages or modules.

In [2]:

import gluonbook as gb
import mxnet as mx
from mxnet import autograd, nd
from mxnet.gluon import loss as gloss
import time

8.4.2. Define the Model

We use LeNet, introduced in the "Convolutional Neural Networks (LeNet)" section, as the sample model for this section. Here, the model implementation only uses NDArray.

In [3]:

# Initialize model parameters.
scale = 0.01
W1 = nd.random.normal(scale=scale, shape=(20, 1, 3, 3))
b1 = nd.zeros(shape=20)
W2 = nd.random.normal(scale=scale, shape=(50, 20, 5, 5))
b2 = nd.zeros(shape=50)
W3 = nd.random.normal(scale=scale, shape=(800, 128))
b3 = nd.zeros(shape=128)
W4 = nd.random.normal(scale=scale, shape=(128, 10))
b4 = nd.zeros(shape=10)
params = [W1, b1, W2, b2, W3, b3, W4, b4]

# Define the model.
def lenet(X, params):
    h1_conv = nd.Convolution(data=X, weight=params[0], bias=params[1],
                             kernel=(3, 3), num_filter=20)
    h1_activation = nd.relu(h1_conv)
    h1 = nd.Pooling(data=h1_activation, pool_type='avg',
                    kernel=(2, 2), stride=(2, 2))
    h2_conv = nd.Convolution(data=h1, weight=params[2], bias=params[3],
                             kernel=(5, 5), num_filter=50)
    h2_activation = nd.relu(h2_conv)
    h2 = nd.Pooling(data=h2_activation, pool_type='avg',
                    kernel=(2, 2), stride=(2, 2))
    h2 = nd.flatten(h2)
    h3_linear = nd.dot(h2, params[4]) + params[5]
    h3 = nd.relu(h3_linear)
    y_hat = nd.dot(h3, params[6]) + params[7]
    return y_hat

# Cross-entropy loss function.
loss = gloss.SoftmaxCrossEntropyLoss()

8.4.3. Synchronize Data Among Multiple GPUs

We need to implement some auxiliary functions to synchronize data among the multiple GPUs. The following get_params function copies the model parameters to a specific GPU and initializes the gradients.

In [4]:

def get_params(params, ctx):
    new_params = [p.copyto(ctx) for p in params]
    for p in new_params:
        p.attach_grad()
    return new_params

Try to copy the model parameters params to gpu(0).

In [5]:

new_params = get_params(params, mx.gpu(0))
print('b1 weight:', new_params[1])
print('b1 grad:', new_params[1].grad)

b1 weight:
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
<NDArray 20 @gpu(0)>
b1 grad:
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
<NDArray 20 @gpu(0)>

Here, the data is distributed among multiple GPUs. The following allreduce function adds up the data on each GPU and then broadcasts the result to all the GPUs.

In [6]:

def allreduce(data):
    for i in range(1, len(data)):
        data[0][:] += data[i].copyto(data[0].context)
    for i in range(1, len(data)):
        data[0].copyto(data[i])

Perform a simple test of the allreduce function.

In [7]:

data = [nd.ones((1, 2), ctx=mx.gpu(i)) * (i + 1) for i in range(2)]
print('before allreduce:', data)
allreduce(data)
print('after allreduce:', data)

before allreduce: [
[[1. 1.]]
<NDArray 1x2 @gpu(0)>,
[[2. 2.]]
<NDArray 1x2 @gpu(1)>]
after allreduce: [
[[3. 3.]]
<NDArray 1x2 @gpu(0)>,
[[3. 3.]]
<NDArray 1x2 @gpu(1)>]

Given a batch of data instances, the following split_and_load function can split the sample and copy it to each GPU.

In [8]:

def split_and_load(data, ctx):
    n, k = data.shape[0], len(ctx)
    m = n // k  # For simplicity, we assume the data is divisible.
    assert m * k == n, '# examples is not divided by # devices.'
    return [data[i * m: (i + 1) * m].as_in_context(ctx[i]) for i in range(k)]

Now, we try to divide the 6 data instances equally between 2 GPUs using the split_and_load function.

In [9]:

batch = nd.arange(24).reshape((6, 4))
ctx = [mx.gpu(0), mx.gpu(1)]
splitted = split_and_load(batch, ctx)
print('input: ', batch)
print('load into', ctx)
print('output:', splitted)

input:
[[ 0.  1.  2.  3.]
 [ 4.  5.  6.  7.]
 [ 8.  9. 10. 11.]
 [12. 13. 14. 15.]
 [16. 17. 18. 19.]
 [20. 21. 22. 23.]]
<NDArray 6x4 @cpu(0)>
load into [gpu(0), gpu(1)]
output: [
[[ 0.  1.  2.  3.]
 [ 4.  5.  6.  7.]
 [ 8.  9. 10. 11.]]
<NDArray 3x4 @gpu(0)>,
[[12. 13. 14. 15.]
 [16. 17. 18. 19.]
 [20. 21. 22. 23.]]
<NDArray 3x4 @gpu(1)>]

8.4.4. Multi-GPU Training on a Single Mini-batch

Now we can implement multi-GPU training on a single mini-batch. Its implementation is primarily based on the data parallelism approach described in this section. We will use the auxiliary functions we just discussed, allreduce and split_and_load, to synchronize the data among multiple GPUs.

In [10]:

def train_batch(X, y, gpu_params, ctx, lr):
    # When ctx contains multiple GPUs, mini-batches of data instances are
    # divided and copied to each GPU.
    gpu_Xs, gpu_ys = split_and_load(X, ctx), split_and_load(y, ctx)
    with autograd.record():
        # The loss is calculated separately on each GPU.
        ls = [loss(lenet(gpu_X, gpu_W), gpu_y)
              for gpu_X, gpu_y, gpu_W in zip(gpu_Xs, gpu_ys, gpu_params)]
    for l in ls:
        # Backpropagation is performed separately on each GPU.
        l.backward()
    # Add up all the gradients from each GPU and then broadcast them to
    # all the GPUs.
    for i in range(len(gpu_params[0])):
        allreduce([gpu_params[c][i].grad for c in range(len(ctx))])
    for param in gpu_params:
        # The model parameters are updated separately on each GPU.
        gb.sgd(param, lr, X.shape[0])  # Here, we use a full-size batch.

8.4.5. Training Functions

Now, we can define the training function. Here the training function is slightly different from the one used in the previous chapter. For example, here we need to copy all the model parameters to multiple GPUs based on data parallelism and perform multi-GPU training on a single mini-batch for each iteration.

In [11]:

def train(num_gpus, batch_size, lr):
    train_iter, test_iter = gb.load_data_fashion_mnist(batch_size)
    ctx = [mx.gpu(i) for i in range(num_gpus)]
    print('running on:', ctx)
    # Copy model parameters to num_gpus GPUs.
    gpu_params = [get_params(params, c) for c in ctx]
    for epoch in range(4):
        start = time.time()
        for X, y in train_iter:
            # Perform multi-GPU training for a single mini-batch.
            train_batch(X, y, gpu_params, ctx, lr)
            nd.waitall()
        train_time = time.time() - start

        def net(x):  # Verify the model on GPU 0.
            return lenet(x, gpu_params[0])

        test_acc = gb.evaluate_accuracy(test_iter, net, ctx[0])
        print('epoch %d, time: %.1f sec, test acc: %.2f'
              % (epoch + 1, train_time, test_acc))

8.4.6. Multi-GPU Training Experiment

We will start by training with a single GPU. Assume the batch size is 256 and the learning rate is 0.2.

In [12]:

train(num_gpus=1, batch_size=256, lr=0.2)

running on: [gpu(0)]
epoch 1, time: 2.4 sec, test acc: 0.10
epoch 2, time: 2.0 sec, test acc: 0.61
epoch 3, time: 2.0 sec, test acc: 0.77
epoch 4, time: 2.0 sec, test acc: 0.78

By keeping the batch size and learning rate unchanged and changing the number of GPUs to 2, we can see that the improvement in test accuracy is roughly the same as in the results from the previous experiment. Because of the extra communication overhead, we did not observe a significant reduction in the training time.
In [13]:

train(num_gpus=2, batch_size=256, lr=0.2)

running on: [gpu(0), gpu(1)]
epoch 1, time: 2.2 sec, test acc: 0.10
epoch 2, time: 1.9 sec, test acc: 0.68
epoch 3, time: 1.9 sec, test acc: 0.74
epoch 4, time: 1.9 sec, test acc: 0.78

8.4.7. Summary

- We can use data parallelism to more fully utilize the computational resources of multiple GPUs to implement multi-GPU model training.
- With the same hyper-parameters, the training accuracy of the model is roughly equivalent when we change the number of GPUs.

8.4.8. Problems

- In a multi-GPU training experiment, use 2 GPUs for training and double the batch_size to 512. How does the training time change? If we want a test accuracy comparable with the results of single-GPU training, how should the learning rate be adjusted?
- Change the model prediction part of the experiment to multi-GPU prediction.
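As a CPU-only sanity check of the data-parallel recipe used in this chapter (split the mini-batch into shards, compute local gradients, sum them allreduce-style, and apply the same normalized update on every replica), here is a sketch in plain NumPy. It uses a toy linear least-squares model instead of LeNet, and the helper names (local_grad, data_parallel_step) are illustrative inventions, not part of the chapter's code.

```python
import numpy as np

def local_grad(w, X, y):
    # Gradient of the summed squared error 0.5 * ||X w - y||^2 on one shard.
    return X.T @ (X @ w - y)

def data_parallel_step(replicas, X, y, lr):
    """One data-parallel mini-batch SGD step with len(replicas) devices.

    Each replica holds a full copy of the parameters; the batch is split
    into equal shards, local gradients are summed (the allreduce step),
    and every replica applies the same normalized update.
    """
    k, n = len(replicas), X.shape[0]
    m = n // k
    assert m * k == n, '# examples is not divided by # devices.'
    # Each "device" computes the gradient on its own shard.
    grads = [local_grad(replicas[i], X[i * m:(i + 1) * m], y[i * m:(i + 1) * m])
             for i in range(k)]
    total = sum(grads)  # allreduce: sum the local gradients ...
    # ... and apply the identical averaged update on every replica.
    return [w - lr * total / n for w in replicas]

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))
y = rng.normal(size=8)
w0 = np.zeros(3)

two_dev = data_parallel_step([w0.copy(), w0.copy()], X, y, lr=0.2)
one_dev = w0 - 0.2 * local_grad(w0, X, y) / 8

print(np.allclose(two_dev[0], two_dev[1]))  # True: replicas stay in sync
print(np.allclose(two_dev[0], one_dev))     # True: matches single-device step
```

Because the gradient of a summed loss decomposes over examples, the two-replica update coincides with the single-device full-batch update, which is why the training accuracy above is roughly the same for one and two GPUs.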
http://gluon.ai/chapter_computational-performance/multiple-gpus.html
Hi. Hey guys, does anyone know how to change the DEFAULT user agent? I know about developer tools and how I can change it EVERY time I open a new tab, but I want to set the default UA to a custom string, so that every time I open IE9 it defaults to my custom UA. Thanks!

I don't believe you can change the default to anything other than IE's. The best way would be to install the other browsers.

Change Default User - Vista? When I print a PDF, for instance, it gives a name for the author. The only problem is that it isn't my name: I traded laptops with a friend, and her name comes up as the PDF author, her folder is the default download location, etc. How do I change this, and is this called the "default user"? On the start-up welcome screen there is an icon for me and one for Guest. There is a third user (me, my friend, and another are listed under Users in Explorer) that I would also like to delete. What happens if I change the default user? Do I lose installed applications, desktop themes, icon arrangements, or other application settings, folders, or files? What should I save before deleting "default user", and what will I have to re-set up? Thanks in advance, mac

Toshiba Satellite A; OS Version: Microsoft Windows Vista Home Premium, Service Pack, bit; Processor: AMD Turion(tm) X Mobile Technology TL- (x Family Model Stepping); Processor Count; RAM: Mb; Graphics Card: ATI Radeon X, Mb; Hard Drives: C: Total - MB, Free - MB; E: Total - MB, Free - MB; Motherboard: ATI SB; Antivirus: avast! Antivirus, Updated and Enabled

How do I change the multiple user default screen? I purchased a new laptop and was using it with only one account, and I had a custom lock screen. However, on making another account, the lock screen became a default swirling image, and I do not know how to change that. I attempted the following solution: "Here's how to change the sign out screen: go to the folder C:\ProgramData\Microsoft\Windows, right click
Hello Rahul, and welcome to Eight Forums. You might see if the tutorial below may be able to help. OPTION ONE is the easiest way. Lock Screen Default Background Image - Change in Windows 8 Hi everyone, I was wondering if it is possible to change the icons for default folders in the user folder (My Documents, Downloads, Saved Games, etc). These folders have a different properties window that includes a "Customize" tab, but no option to change the icon. Any help would be appreciated! ~Rjes Hello Rjes, Most certainly, Option Two in the tutorial below can help show you how to. User Folders - Change Default Icon Hope this helps, Shawn I purchased a new laptop and was using it with only one account I had do default user How change screen? the multiple I a custom lock screen However on making another account the lock screen became a default swirling image I do not know how to change that I attempted the following solution Here's how to change the sign out screen go to folder C ProgramData Microsoft Windows Right click How do I change the multiple user default screen?) Hey, There are a couple of default values that I'm looking to change in Vista. First and most importantly is there a way to change the location of the default "Users" folder. My new laptop came with two separate hard drives, and the main C:/ is 30gb, while the secondary D:/ is 100gb. I'd like to make the D drive my main folder for Users files, such as Documents, Music, Pictures, Videos, etc. For example, make it an easy to remember folder like D:/Users. How do I go about doing this? Also, whenever I want to open a document or music file, the default location that it would first open is the new D:/Users, not the old location. This would make things much easier for me. Any help is greatly appreciated. Thank you. I've just completed a restore on my Asus K72F. It's hard drive is split in two: Drive C, where Windows 7 and all software and programs are and Drive D, which is empty. 
Before the restore I had all my user files, such as My Documents, My Music, My Pictures, Contacts, Downloads, Favorites, etc., saved to Drive D, utilizing it just for "storage". I'll be darned if I can remember how I set that up. I also had my Libraries "link" to those files on Drive D. Can anyone help? I need instructions on having my user files automatically save to Drive D, and good instructions on setting up Libraries. Thanks so much! Lisa

Scroll down to the "Windows 7" section on this page:

Default User Name in HKLM-Winlogon & change in MS account log-in. While inspecting HKLM\Software\Microsoft\Windows NT\CurrentVersion\Winlogon, I noticed the Default User Name still holds a record of a previous ISP email address, existing previous to Jan. I did go into the Microsoft account and change the default email for the Win Microsoft account when I changed servers. Obviously there hasn't been an issue with this inaccurate registry default-user entry, but I wondered if it should be changed in the registry. The reason I went to that registry entry was the PIN log-in routine: yesterday the PIN log-in was not acceptable and I had to use the original password. The Anniversary update (kb) has been installed for a few days, so it isn't a direct result. I thought it might be connected to the fact that I was trying to get a share working from the laptop (Home, MS account) to the desktop (Pro, local account, no log-in). It was an unsuccessful effort. This morning the laptop log-in was a choice structure: use-password or use-PIN icons. Previously I just started typing the PIN into the log-in bar. Is this the log-in routine now?
Keeping Internet Explorer as our default browser in Windows 10 and still allowing the user to set their own default. Could use some help on how to allow a user to set their own default browser permanently, even with IE being set via Group Policy as the usual default. For logistical reasons we are still on IE instead of using Edge for our default browser, mostly because our software hasn't been tested with Edge, and to avoid overall confusion in having made the jump as a company to Windows 10 recently. I successfully created a default configuration associations XML file and it is pushing IE as the default. However, knowing we have some users who would like to use a browser other than IE and let it be their default, we're trying to accommodate them while still enforcing the policy in place. Additionally, we don't want to dissuade users from utilizing Edge, so we haven't tried removing it from public visibility within the OS. The users in question are local admins on their machines and should, in theory, be allowed that capability. Any suggestions?
Hope this answers your question. Happy computing! Hi: I should be able to figure this out, but it's not my day I guess. Using the XP Registry and selecting a folder, the view of my files is always displayed in "icon". I would appreciate it if someone would tell me how to change this default so that it is always set for "details". Thanks for any help. ve6tp How do I Change the Default setting from Musicmatch Jukebox back to Windows Media Player. I am running XP Home. Thanks, Barry Easiest way to try : Open Media Player, go to Tools > Options ; go to the File Types tab ; and Select the file types for which you want WMP to be the default player how do i change my default internet from opera to ie? thank you See this Microsoft How-To article: The how-to is for Windows 7 but the procedure is substantially the same for all recent Windows versions. Originally Posted by sm0epm its easy and possible. you need to tell us what router you have. Hello All, I have a standalone Home PC, which is not connected to any network including the internet. It doesnot even have the network adapters installed. Now I want to change the default Ip(127.0.0.1) assigned to it. I have changed the hosts file and rebooted but without any success. I am in a desparate situation. please help! Okay, I have to first say- THIS forum is AWESOME!! Thanks for all the time you spend helping others, like me. So, here is my questions.........After a succesful Repair in place (TY BRINK!!) Profiles seem, ummmm, strange, sort of! Here are 3 screenshots of what I am seeing with: 1) Hidden & System File Settings SHOW 2) Hidden & System File Settings HIDE 3) System Advanced Settings Profile List Regarding screenshot 1, is that just a shortcut to the actual Default Profile that can safely be deleted? Regarding Screenshot 2, is that THE HIDDEN SYSTEM ADMIN profile? If so, why is it showing with HIDE ticked? Regarding Screenshot 3, I am the Owner with tons of Music/Data etc. so I understand my Profile being huge. 
So, but why is the other ADMIN (hidden system admin or not???) & Default there with both over 15 GB in size? Is this how you're Users folder looks too? Thanks for the INPUT everyone! xxx Patti Hello Patti, Screenshot 1Default Profile with the shortcut arrow on it is not really a folder but a junction point pointing to the Default user profile folder in the same screenshot for backwards compatibility for older programs. Screenshot 2The Administrator folder is the built-in Administrator's profile folder. The Owner folder appears to be your user account's profile folder. Both them are not hidden since they are not faded though. Screenshot 3I'm not sure why the other user profiles are so large either. You might double check in their user folders to see if something may be saved in them taking up so much space. Hope this helps some, Shawn I downloaded a new "media" player. It has taken over my files so that it is the default player. How can I make it my choice on which player I want to use (such as Real Player, etc,etc) with each file? I want to make Mozilla Firefox my default browser. I checked firefox as my default browser, and its diagnosis says it is the default, but whenever I click on any link IE6 still opens first. How can I stop this? What OS? I have Win 7 Ultimate 64 bit and had a HD crash so I rebuilt it with a new drive. It is a "virgin" install of Windows 7 Ultimate. I downloaded all updates including IE10. Then I installed Office 2007 Home and Student followed by Office 2007 Professional Upgrade. Everything seemed to work okay until I noticed when in IE10 the 'send to" option did not work. I discovered Outlook was not the default email package ("use existing" was). So I went to Start/Default Programs and selected Outlook to be the default. It seems to take it but if I go back to Start/Default Programs I see that Outlook is not the default. I also tried to set the default media player to Media center and had the same issue. 
how can I get Windows 7 Ultimate 64 bit to accept default programs settings? Are you the "True" Administrator? Built-in Administrator Account - Enable or Disable Hi, when I right click on an image I get the message "save the image as" and the default location is 'my pictures'. how do I change the default location? any help would be appreciated, thanks, Mike hi guys...juz wanna ask,after i install few updates yesterday,i realize my wmp skin turn to default.before that,i use this red skin.. can someone teach me to use my red skin back?coz i want to match with my desktop.. p/s sorry if i post in wrong section.. Anyone? Would this be a third party theme, I stopped using them as they are very temperamental when updates are done or other setting changed in windows. have you tried to reset your them again? Oh and this should be customizations. Hi, I'm running Vista Ultimate x64, For some reason upon installing it, It installed the russian language. It took me a few minutes to change it to english but some of the text is still in russian, Trying to change the default language to english seems difficult as it says "The language can't be selected because it is the system language (default language of the user interface)." I have changed the format, location, keyboard and input language and the administrative language. How do I change the default language to English so I can uninstall the Russian? This is about all that can be done with contacting your computer manufacturer and/or Microsoft Answers.com - How to change Windows vista language from french to english This is the site that the article makes mention of I am trying to make so that if you right click on an XML document and click EDIT that NOTEPAD is not the program that opens but a differently installed program. I have figured out how to change how EDIT responds on TXT files, but can't figure it out with XML files. 
I have also considered that the drop-down box in Internet Explorer for the default editor might be the trick, but the only option is NOTEPAD. So if I can figure out how to change the entries in that list, then maybe that would be the fix; otherwise I'm sure it will be a registry fix. Thanks, George Jackson You can change the association here: Control Panel\Programs\Default Programs\Set Associations See also this: Default Programs - Associate a File Type or Protocol I have a program that displays XML documents in Internet Explorer (Iexplore.exe). The documents are straight XML text, with no style sheet information (which I don't know how to create). My question is: Is there a way to change the default colors for fonts, strings, special symbols, etc.? The current default colors are ugly to me. I searched all the options of IExplore, Googled XML style sheets, etc., and can't find a straight, simple answer to this question. I have an Acer Aspire 2920 notebook and it uses the new 'Empowering Technology'. On this, it has audio settings for Movies, Music and Games. On Movies and Games, the surround sound is set to on, but on Music it is set to off. You change it back to on by double-clicking, but as soon as you do this it goes straight back to off again, as this is the default setting. All I want to do is listen to my music via the speakers and headphones, but it will not play. This is purely the Acer Empowering Technology, so does anyone have any idea how to change it? I feel like I've tried everything, so fresh suggestions are welcome! Thanks. Using Photoshop Express 7. Unable to email photos, with the message that the "default addy cannot be located". Using Vista. **Sorry for the miss-type.
I meant to say drive, not driver. Oops. Hi, I just got a brand new laptop and was going to install Windows 7 on it, but it has a blank SSD and HDD. I wanted to have the computer load essential/Windows start-up files from the SSD, but anything besides that from then on I want to be put on and loaded from the HDD. Are there any relevant guides you guys could point me to? I wouldn't even know what to search for. Thanks in advance The whole system should go on the SSD. It is counterproductive to split parts onto the HDD. An exception could be very large games. If you have the option during the game installation, you can direct it towards the HDD. You probably have to initialize the SSD or define an aligned active partition on it (preferable for the installation). There are some tips in the guide I made here: SSD - Install and Transfer the Operating System Else there is also this tutorial: SSD / HDD : Optimize for Windows Reinstallation I need help changing the program that opens files that I have downloaded. I want it to be the WinRAR program, but I don't know how to change it back to that. Right now files are being opened by Windows Live Essentials. Please help... Wrenie I downloaded a cursor, then I went to Mouse Pointers to change the white cursor. It worked, but one time when I restarted the laptop my cursor was gone; instead, the white cursor appeared on the desktop. I read about a problem like this but I still didn't know what to do. Did you click on Apply after selecting the new pointer? I have Windows Vista, 64 bit. Something automatically changed my default printer settings and now I cannot designate a default printer, and it will not allow me to print anything but a Word document and email. I cannot print a PDF or an Excel sheet; each time I get an error message saying that I have to have a printer installed. I have a printer, the same one I have BEEN using, but it doesn't read that it is there.
I have seen previous posts saying to go into regedit and change HKEY_CURRENT_USER/SOFTWARE/MICROSOFT/WINDOWSNT/CURRENT VERSION/WINDOWS and change the entry for the device, deleting the printer name and leaving 'winspool, Ne00', and then to restart. I've done that and it doesn't help. PLEASE HELP. I've spent several days now trying to figure this out. You may want to try a system restore from about a week before the problem began. Your stuff will not be affected. System Restore - How to If that does not work, uninstall the printer, boot the computer and install it again. Trouble getting new email address (hotmail.com) recognized as default email. Tried all methods posted on forums: tools option and control panel. Control panel default settings will not allow the option to disable Windows Mail (grayed out). RegisterHotmail software says only Vista, but I am using Vista. Also cannot open old email program (Windows Mail): getting error message MSOE.DLL 0x8007002. Checked and followed all advice on forums. Have also stopped virus software from scanning emails. Uninstall with this, then download and use Windows Live Mail: Revo Uninstaller Pro - Uninstall Software, Remove Programs, Solve uninstall problems Always make a system restore point first. I set Firefox as my default browser; now I want to change my default browser back to Internet Explorer and cannot seem to find anywhere in the options area to do this. Please can anyone give me assistance. Brobilly When I try to change my profile picture it will not work. I can change it to one of the pictures provided, but I can't browse for more of them. I tried restarting my computer but it did nothing. I can't seem to find anything on the internet that would help my problem either. Thanks for the help. Btw, I'm using Windows 7 64 bit. Just to make sure we're on the same wavelength, are you talking about the picture on this site? If so, delete the current picture first.
I want to change the default player from Windows Media Player to VLC for streaming TV. Example: mms://media.tv.consoll.no/tv2sport How do I do that? You would usually do that here: Control Panel\Programs\Default Programs\Set Associations. But you need to know the filetype. Hi, I work in an office that uses internet access through ADSL, and my PC is connected to the office domain. MSN, Yahoo Messenger... all are blocked. I also have a broadband 3G modem that I want to use MSN through, but Messenger always tries to connect over the ADSL connection, not the 3G. How can I make it use the broadband instead of the ADSL or domain? Thanks. You can only connect to one, ADSL or wireless. You cannot run both at the same time. Go into network services and disable the ADSL connection, enable the wireless. When doing office work, do the opposite. I'm having trouble getting some websites to load for me. This didn't start happening until I got satellite internet recently. I've been reading around and the most common answer I see is to increase the DNS timeout value. But I can't find the proper steps on how to do it for Win 7. Can somebody help me out? Welcome to the Forums Configuring DNS lookup time out period Hi, recently my computer's internet glitched and I can no longer connect to any webpage. I think it might have something to do with me deleting route 0.0.0.0 in cmd. This is what I wrote: route delete -p 0.0.0.0 Is there any way I can restore the deleted route? Hi and welcome to the Forum Can you do a system restore to a point behind when you did the deletion? It also begs the question: why did you delete it? I want to transfer video using Pinnacle, but the software isn't in the list of default programs. How can I get it to show up there? This has to be an easy question, right? Hello Bling You can set default programs one of two ways: through the defaults list or through right-click. 1. Find the item you want to change the default program for 2.
Right-click on the item and select "Open With" 3. On the bottom left check "Always Use The Selected Program To Open This Kind Of File" 4. Either select a program from the list or select browse 5. When you have selected the program you want, select "OK" This should now be the default program that opens that item. This works for files, images, videos, video converters and audio converters :) Regards Craig I've always used Windows Internet Explorer. I never believed in things like "Firefox, Netscape, etc. are better," but then my friend practically made me try Firefox, and I did. When I first installed it, it asked if I wanted to make it my default browser and I said no, because I might not like it. Well, now a few weeks later I love Firefox, but how do I change it to my default browser? In Tools->Options, in the General section, you'll see Default browser; click the "Check now" button. I want to change my email from Outlook Express to either IncrediMail or Outlook. When I launch BT it always comes up with OEx, and when I launch IE6 no email comes up. I have reset the default for OEx and Inc but no change. I have uninstalled OEx through control panel and even removed the registry entries and renamed OEx files to old. (When I did that there was an error saying OEx not found.) I have since repaired my OS and got everything back to normal except for the OEx problem. Can anyone help? Thanks If you are using Windows XP there should be a button in your start menu that says Set Program Access and Defaults. It would be in there where you want to change it. I am writing a program in C to take input from the user and, depending on that, output the results to a file and/or the screen. So, my question is: is there a way to change the default output stream for printf()? I know that I could use fprintf to write to files. I want to just change the stream, then use a universal function to write to the desired medium.
printf() AFAIK doesn't handle file streams, but fprintf() handles both, so you could do it like this: Code:

#include <stdio.h>

int main() {
    fprintf(stdout, "\nHello World!\n");
    FILE *pfile;
    pfile = fopen("anexample.txt", "w");
    fprintf(pfile, "a line of text");
    fclose(pfile);
    return 0;
}

Hi, as stated I can't change any of the default app settings in Windows 10. I tried to change my default browser to Chrome; nothing happens after I click Google Chrome, it's still Edge. This also applies to other defaults like maps, music player, photo viewer etc. Hello, have you researched this on the forum? Questions about this have been asked so often... the forum is a useful resource tool. Note that you cannot change defaults from within programs - true from Win 8 on. Please see: Default Apps - Choose in Windows 10 - Windows 10 Forums Have you tried by these means? Also note that there are special instructions about Chrome, which many have had difficulties with; search "Chrome default browser", e.g. Note too that a couple of old-version programs, e.g. WinZip, if installed, have caused defaults to be reset on restart. Hope that helps and points you in the right direction. Hello, I'm new on SevenForums and I need some help. I was testing my Small Basic app, but when I launched it, the TextWindow font size was only 4x6 pixels. So I launched the Command Prompt via Run, and the size was also 4x6. In Properties, there were only 4 tabs and 9 sizes of Terminal font. The default one, 8x12, wasn't in the options. I can show you a screenshot of the Properties window. (I'm using WindowBlinds 8 with a Mac skin, where the active tab texture is the same as the inactive tab.) How can I restore the font size to 8x12? It would be a big help for me if you post the value of this: Sorry for my English, I'm not a native speaker.
You could try sfc /scannow; that would undo all changes/modifications, and then you could just reapply the settings in WindowBlinds and see if that helps... maybe something went funky when applying the theme. I have a Samsung laptop with Windows 7 Home edition and an Epson WF3520 wireless printer, both in use for some time now. I can print OK from any package, so I assume that the driver is correctly installed, but it always defaults to an old printer which is no longer connected. I just want to change the default printer to the new one. The problem is that nothing shows up in Devices & Printers, no matter how long I wait for the bar at the top to trundle along. I have another laptop (Windows 8.1) and the printer appears on there OK. Could anyone suggest: a. What the problem might be, or b. An alternative way to change the default printer. Thanks in advance, Geoff. Hi ... Read the Link below ... [WIN7] 64 bit - Devices & Printers will not show - FIXED! Hooray! - Microsoft | DSLReports Forums.. Lately my computer won't open pictures to view with Windows Photo Viewer. I've already read several threads on this and have tried the customary links that are supposed to fix this. When I go to the "Choose default program" on the preview link, it stays frozen on Microsoft Picture Manager whenever I try to change it. I've been able to open with Internet Explorer, but I would like to go back to opening files with Windows Photo Viewer. I believe that this does not need to be installed, as it is already a Windows feature. (7) Any help and ideas beyond what I have already tried are greatly appreciated. BTW... LOVE this site! I followed this link, How to Rename a Drive in Windows 7, to change the default label of my DVD RW drive. That only worked... kind of!! What occurred was it appended my new name to the original name. It began with the name: CD Drive. After changing the name as in the aforementioned link, it now has the name: CD Drive Sony Blu-Ray Drive.
All I wanted to see was Sony Blu-Ray Drive. Anyone know what's going on here & how I might fix this?? Thanks in advance to anyone who can lend some help on this. Jim HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\SCSI / drive / id / FriendlyName. You'll need permissions on the key to edit it. Hello, I use W7 SP1 with IE8. For business purposes I need to change the default error information web pages in IE, like "the page cannot be displayed" and "browsing was canceled". I checked all over the internet and found some advice about changing the reg keys, but it didn't work for me:

Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\AboutURLs]
"blank"="res://mshtml.dll/blank.htm"
"NoAdd-onsInfo"="res://ieframe.dll/noaddoninfo.htm"
"InPrivate"="res://ieframe.dll/inprivate.htm"
"NavigationFailure"="res://ieframe.dll/navcancl.htm"
"NoAdd-ons"="res://ieframe.dll/noaddon.htm"
"Home"=dword:e
"PostNotCached"="res://ieframe.dll/repost.htm"
"DesktopItemNavigationFailure"="res://ieframe.dll/navcancl.htm"
"NavigationCanceled"="res://ieframe.dll/navcancl.htm"
"Tabs"="res://ieframe.dll/tabswelcome.htm"
"OfflineInformation"="res://ieframe.dll/offcancl.htm"
"SecurityRisk"="res://ieframe.dll/securityatrisk.htm"

I also entered a new key under HKEY_CURRENT_USER\SOFTWARE\Microsoft\Internet Explorer\AboutURLs, but it's still not working. Can somebody help me out, because I'm running out of time on my project? Upgrading or downgrading any app or OS on this computer is, due to some regulations, not an option. Thank you all. Hello, I just put a music CD in the drive and a popup window appeared. I only glanced at it and hit OK without thinking about it. It said something like "Rip audio from CD using Media Player", with a box ticked with something like "make this the default option when putting in a CD". Now every time I put the CD in the drive, Windows Media Player starts up, isn't able to rip the songs, and freezes. I'm not worried about that; I just want to find the option so that it won't try to automatically start Media Player and rip every time I put a CD in the drive. Actually, this is one of those little problems that has bugged me over the years but I never asked about it: where are the settings to change the default options for when you put in a CD or connect something? If I ever accidentally click that box that says "make this the default option for when you do such and such", I'm always stuck with it, because I don't know how to change them. Thanks Is it possible for me to change my default email? I only use Gmail, but when I try to attach anything, such as a snip, it says I must use "Outlook" even though I've never used it. All, I'm not sure if I'm in deep trouble or if this is something trivial. Please help me out. I had once tried to install a Windows XP service pack and it had failed. The result was that I got two boot options: the healthy XP and the not-working SP install. Every time during boot I continued working by choosing the healthy one. The default always was the corrupt SP, and if I was not alert enough to change it, it would try to install the SP, say something got corrupted, and restart again, during which time I would select the healthy XP. Sometime back I got Windows 7 and installed it. Rather than clean up and format my drive, I went for an install that didn't delete my other drives. It took a backup of XP and installed Win 7. My C drive had some GB of space left and I continued working. At boot up I now had several options. Annoyed at constantly being asked for choices at start up, I found my way to uncheck the 'time' box in the advanced settings in Windows. I thought this meant that Win 7 would load automatically without asking for other choices. Now when I start my system it doesn't ask for any choices but goes straight to the install of the corrupt SP version, fails, and prompts for a restart. At restart the same thing happens again. I am now stuck and can't seem to either boot up Windows or get that list of OSes to choose from at boot. Is it possible to change the default OS at boot? Are there any options in the BIOS through which I can increase the time so that it displays that list again, or where I can change the default OS to boot? I am not sure about installing Win 7 again, as I believe I don't have enough space on the C drive, and even if I take the pain of doing the whole installation, will it still go to the corrupt SP, since nowhere during installation do I change the time to list the OSes on the machine? Plus I don't want to lose my data, as there are years of work in it. I do have a backup on an external drive, but it is with a friend who has borrowed it for at least a month. In case you ask me to install Win 7 again, can I just do it in such a way that only the C drive is formatted? But I need to be sure that the corrupt SP is not there to cause problems at boot. Please help me, I'm in deep trouble. I don't have any other machine to work on, and I type this on a phone. If you need any further info, please let me know. Thanks If you type "msconfig" in the start menu and then choose the "Boot" tab, are there different operating systems listed?
This is going to seem very shallow compared to the crashed hard drives and any number of horrible problems with your computers. I'm sorry; I'm the sort of person who has problems with the OS itself. I don't have problems with security or technical support. I have searched Google and probably seven different forums about this problem. And it's baffled me how many people don't know how to change an icon, so much that they'd have hundreds of threads dedicated to RIGHT CLICK, CHANGE ICON, APPLY. Just retarded. At the same time, I don't know how many "Change Default Icons" threads I went through where people were like "I HAVE A VIRUS CALLED DESKTOP.INI PLEASE HELP". More retarded. But furthermore, no matter what I type into Google, I can't find a way to make Windows do ONE thing. And that is: to "change the default folder icon" so that when I "create a NEW FOLDER" it comes out BLANK. That's all I want. I went into the shell DLL and replaced the default folder icon with an empty icon set. When it creates a new folder, it's STILL the default folder icon. Worse: if I try to change the folder to its default icon, it's STILL the default folder icon instead of the empty one, EVEN IF IT SHOWS THAT IT'S AN EMPTY ICON. Also, for some reason, when I upload it to my webspace the image gets all garbled. I reloaded it, I changed it to a GIF, I changed the way it was compiled. Why the hell is it always glitched like that? Bump. Please, someone try to help or suggest something. I use Notepad for my txt files, yet Win 8 has set WordPad as the default. When I try removing the check box for txt in WordPad, it refuses to allow me to do that. At the same time, I cannot edit my raw HTML pages on the server, as the pages come up in either IE or FF, programs that open the real view of the page, which cannot be edited live, as it is impossible to do so in the first place. Even if I try removing the check box for html for FF, it is not allowed either. How do I get rid of this crap default setting by Win 8? I have never had this issue with any OS, starting with the Win betas. (Default Programs control panel, Set program list, WordPad, chose the default check box, failure to uncheck. Same for FF.) This has only happened in the last month or so, and I have made no changes to cause this. Tap your Windows Key and type Default; you should see Default Programs show up. You can also get to this in Control Panel. Press Enter, and choose Associate a file type or protocol with a program. Select the file type you want to change and choose Change Program in the upper right. Select the program you want to use from the list; if the program is not in the list click "More Options", and if it's not in that list click "Look for another app on this PC", navigate to Notepad, and you're done. The Set Program Associations dialog does not remove associations, it only adds them. Meaning, if a filetype is NOT selected, you can select it and click Save to make it start using that file type. Unchecking and clicking Save does not REMOVE a filetype. Use the Set Associations for the specific filetype instead. Please someone help, it's driving me mad. Any help would be greatly appreciated. I usually just move my user folders like Documents, Pictures, Music etc. Some move their whole profile. Installing programs to your spinner just negates the speed advantage of having one, IMHO. OK, I have a partition set aside only for the system. WINNT is the folder within it, am I correct? Well, I want to move Documents and Settings plus Program Files over to another partition. What do I do to accomplish this? I want to transfer the files and then make those places the default location for Documents and Settings and Program Files. The system stays in its partition. Hey everybody! I CANNOT add to, subtract from, or in any other way alter my DEFAULT PROGRAMS LIST, and I am going INSANE!
I have tried all of the routine simple stuff (i.e., right-clicking files and selecting 'open with', 'change programs', 'browse', etc., etc.) and NOTHING works. I have also tried to 'change associated file types', yada yada; again, zippola. I am seriously losing my hair over this; if anyone knows anything about this, PLEASE help me out! Yours truly, joreilly. Are you an Administrator? I have a Toshiba laptop running XP. The main screen backlight went out, so I added another monitor. My problem is I can't get the new monitor to go to its native resolution; it maxes out at the laptop's lower maximum resolution unless I extend the desktop. Then it works, but since my main screen is broken I can't use that, because there are no icons on the extended side, and every time I set the new monitor as default it just goes back to the original setting at the lower resolution. It's running Intel graphics. Device Manager shows two display adapters, both identical Intel(R) GM/GME graphics controllers, and as for monitors, Device Manager shows three different ones: Default Monitor, the Toshiba internal panel, and a ViewSonic VX-series wm; the last one is the new one, and I've installed all the drivers and software for it. I'm really at a loss. I know my graphics card can display the higher resolution, but it just won't change over. Alternatively, if there is a way to duplicate my desktop on the extended desktop, that may work. Thanks for any help. Why doesn't he simply use WordPad to open any .doc files? WordPad also uses the .doc file extension. Either that, or he simply clicks 'No' when asked to save changes in Works. Does Works have a "default save word file as" option in the Tools\Preferences menu (or any of its menus; I don't use Works so I don't know its options)? If your customer has documents that are only printed, not edited, then they should consider Adobe Acrobat .pdf format for the documents. At least then they can't be accidentally changed, and the layout will be consistent regardless of word processor settings. How can I change my default browser? Hello thefabe, Usually during installation of the 3rd party browser (ex: Firefox), you will have the option to make it the default program. If not, then you can also select it in Default Programs to make it the default browser. The program itself will usually have its own settings that you can select to make it default as well. Hope this helps, Shawn I have an HP dv6700 Pavilion laptop. The backlight is out and I am using a second monitor as the main screen. In attempting to connect a new flat screen, I have totally lost the 2 monitor icons on the Set Resolution page, and now it only shows Default Monitor when I open it in safe mode. I'm aware that pressing Fn + F4 repeatedly will toggle screens, and it worked before, but not now. Anybody??????????? I SO hate this icon for Desktop. I don't know how I did it, but I changed it long ago and it's bothering me now so bad... so I want to change it back to the default Desktop icon, or the stock one, I don't care, but not this one. I want to be able to change the icon to be the same one in both of the locations circled in green in the attached picture. Thanks for any help! Did you change your theme? Did you make any changes and then notice the icon?
Hey everyone, I just need some help with this real quick if anyone knows how. I typically use Mozilla Firefox, which is all well and dandy. When I type something in the address bar it brings me to the appropriate destination using Google; for example, if I type in BLEEPING COMPUTER it would normally have directed me to http://www.bleepingcomputer.com without asking any questions. Ever since I downloaded Windows Messenger this is no longer the case. When I type something in, it goes off of some "Live Search" search engine with all kinds of loopy results, and what's even worse is that it isn't even REGULAR Live Search: it actually provides me a link to the standard page of Live Search. Now when I open Mozilla Firefox, in the upper right corner, to the immediate right of the address bar, there is a tool which allows me to select a default search engine. I have Google selected, yet if I type something or an address into the address bar it still directs me to those stupid search results from Live Search. All I want to do is put it back to how it was, where it would search using Google, and if I typed something into the address bar it would bring me to the appropriate web page without ever opening any LIVE SEARCH. I even deleted Windows Messenger as soon as I learned of this issue in hopes that it would fix it. Nope, that didn't work. -Many Thanks, Chamus What operating system are you using? I need to change jZip from being the default for my downloads. I am unable to edit an HTML book and OTO, nor upload the files from the archives. I have tried to change the zip files to other zip programs, but jZip stays the default. How can I change this? Thanks. This might help: I find the easiest way is to right-click on the file you want to open, choose "open with" and browse for the program you want to use. If you want to permanently use that program, tick the box saying that.
Hi there, currently my Windows folders sort by filename as the factory default. I know I can change this by right-clicking on the white space and then choosing how I want my files to be displayed. However, I have huge folders and then have to wait for Explorer to re-index the files every time I close or open a folder. How do I change the default file sort from name to date, so that I don't have to do this every time? I have looked at this link and others connected to it, but they don't seem to address my specific problem. I don't want to change icons or layout, simply the way files are organized when they are compiled. Can anyone help me out with this? I'm not a noob to Windows by any means, but a step-by-step explanation would be much appreciated. Are the files located on a separate drive? I have been trying to customize my fresh install of Windows 7 and I think I might have changed something with regards to the default font, and I'm not sure how to go back to default. Here is a picture of how the font looks now. I don't recall the default font looking like this. Can someone confirm this is NOT the default font look? And if so, how do I turn it back to normal? I did not set an automatic restoration point, so I cannot go back. I have also been doing font modification through the registry to change the default font of Sticky Notes. I'm thinking maybe that has set something off? Any help is appreciated. Thanks. Read the link below from an old thread: How to restore application fonts to default in Windows 7? Can I change the default settings in PSP7 (and PS6) to JPG? Now, when I save something I have to stop and scroll down and pick JPG. I looked in PREFERENCES and in the SAVE box and don't see anything. I'd rather have it as JPG, and if I want to use something else (GIF, or even PSP) I can scroll to that. Since I mainly save as JPG. ~ Carrie I have figured out what the problem is.
I don't need any more input. Open Tools in Firefox, click on Options; toward the bottom of the page you will find System Defaults; click on Check Now. In Windows 7, click: 1. Start, 2. Devices & Printers, 3. right-click the printer, Set Default. EASY. But on Windows 8, go to the desktop, open the Charms: 1. Settings, 2. Control Panel, 3. View devices & printers, 4. right-click, Set Default. Is there an easier way for the user to change their default printer? IF we get Windows 8 for work, we DON'T want users going into Control Panel to change their default printers. That's what Group Policy is for.
On Thu, Oct 18, 2007 at 22:58:48 +1000, Matthew Brecknell wrote: >Magnus Therning: >> Still no cigar :( > >Yes, this is a little more subtle than I first thought. Look at liftM >and filterM: > >liftM f m1 = do { x1 <- m1; return (f x1) } > >filterM :: (Monad m) => (a -> m Bool) -> [a] -> m [a] >filterM _ [] = return [] >filterM p (x:xs) = do > flg <- p x > ys <- filterM p xs > return (if flg then x:ys else ys) > >In liftM, the result of (f x1) is not forced, and in filterM, flg is >not tested until after xs is traversed. The result is that when filterM >runs the (p x) action, a file is opened, but hasEmpty (and thus >readFile) is not forced until all other files have likewise been >opened. > >It should suffice to use a more strict version of liftM: > >liftM' f m1 = do { x1 <- m1; return $! f x1 } > >That should also fix the problem with Jules' solution, or alternatively: > >readFile' f = do s <- readFile f > return $! (length s `seq` s) Another question that came up when talking to a (much more clever) colleague was whether the introduction of either of the solutions in fact means that only a single file is open at any time? /M (I really miss IRC connectivity at work at times like this.)
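To make Matthew's point concrete, here is a minimal, self-contained sketch (not from the thread; the isEmpty helper and the file names are made up for illustration). Forcing the length of the string before returning means each file is read to EOF inside the predicate itself, so the runtime can close the handle before filterM moves on to the next file:

```haskell
import Control.Monad (filterM)

-- A strict readFile: length forces the whole contents, so the file is
-- read to EOF (and its handle can be closed) before we return.
readFile' :: FilePath -> IO String
readFile' f = do
  s <- readFile f
  length s `seq` return s

-- Hypothetical stand-in for the thread's hasEmpty-style predicate.
isEmpty :: FilePath -> IO Bool
isEmpty f = fmap null (readFile' f)

main :: IO ()
main = do
  writeFile "a.txt" ""
  writeFile "b.txt" "hello"
  empties <- filterM isEmpty ["a.txt", "b.txt"]
  print empties  -- ["a.txt"]
```

With this version, at most one file should be open at a time during the filterM traversal, since each predicate finishes its read before the next one runs; that answers the question above, at least for this particular shape of code.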
Machinist

"Generic types and overloaded operators would let a user code up all of these, and in such a way that they would look in all ways just like types that are built in. They would let users grow the Java programming language in a smooth and clean way." -- Guy Steele, "Growing a Language"

Overview

One of the places where type classes incur some unnecessary overhead is implicit enrichment. Generic types have very few methods that can be called directly, so Scala uses implicit conversions to enrich these types with useful operators and methods. However, these conversions have a cost: they instantiate an implicit object which is only needed to immediately call another method. This indirection is usually not a big deal, but is prohibitive in the case of simple methods that may be called millions of times.

Machinist's ops macros provide a solution. These macros allow the same enrichment to occur without any allocations or additional indirection. These macros can work with most common type class encodings, and are easily extensible. They can also remap symbolic operators (e.g. **) to text names (e.g. pow).

Machinist started out as part of the Spire project. For a more detailed description, you can read this article at typelevel.org.

Examples

Here's an example which defines a very minimal typeclass named Eq[A] with a single method called eqv. It is designed to support type-safe equals, similar to scalaz.Equal or spire.algebra.Eq, and it is specialized to avoid boxing primitive values like Int or Double.
import scala.{specialized => sp}
import machinist.DefaultOps

trait Eq[@sp A] {
  def eqv(lhs: A, rhs: A): Boolean
}

object Eq {
  implicit val intEq = new Eq[Int] {
    def eqv(lhs: Int, rhs: Int): Boolean = lhs == rhs
  }

  implicit class EqOps[A](x: A)(implicit ev: Eq[A]) {
    def ===(rhs: A): Boolean = macro DefaultOps.binop[A, Boolean]
  }
}

object Test {
  import Eq.EqOps

  def test(a: Int, b: Int)(implicit ev: Eq[Int]): Int =
    if (a === b) 999 else 0
}

Here are some intermediate representations for how the body of the test method will be compiled:

// our scala code
if (a === b) 999 else 0

// after implicit resolution
if (Eq.EqOps(a)(Eq.intEq).===(b)) 999 else 0

// after macro application
if (Eq.intEq.eqv(a, b)) 999 else 0

// after specialization
if (Eq.intEq.eqv$mcI$sp(a, b)) 999 else 0

There are a few things to notice:

- EqOps[A] does not need to be specialized. Since we will have removed any constructor calls by the time the typer phase is over, it will not introduce any boxing or interfere with specialization.
- We did not have to write very much boilerplate in EqOps beyond specifying which methods we want to provide implicit operators for. We did have to specify some type information, though: in this case, the type of rhs (the "right-hand side" parameter) and the result type.
- machinist.DefaultOps automatically knew to connect the === operator with the eqv method, since it has a built-in mapping of symbolic operators to names. You can use your own mapping by extending machinist.Ops and implementing operatorNames.

Including Machinist in your project

Machinist supports Scala 2.10, 2.11, 2.12, and 2.13.0-M3. If you have an SBT project, add the following snippet to your build.sbt file:

libraryDependencies += "org.typelevel" %% "machinist" % "0.6.4"

Machinist also supports Scala.js.
To use Machinist in your Scala.js projects, include the following build.sbt snippet:

libraryDependencies += "org.typelevel" %%% "machinist" % "0.6.4"

Shapes supported by Machinist

Machinist has macros for recognizing and rewriting the following shapes:

// unop
conversion(lhs)(ev).method() -> ev.method(lhs)

// unop0
conversion(lhs)(ev).method -> ev.method(lhs)

// unopWithEv
conversion(lhs).method(ev) -> ev.method(lhs)

// binop
conversion(lhs)(ev).method(rhs) -> ev.method(lhs, rhs)

// rbinop, for right-associative methods
conversion(rhs)(ev).method(lhs) -> ev.method(lhs, rhs)

// binopWithEv
conversion(lhs).method(rhs)(ev) -> ev.method(lhs, rhs)

// rbinopWithEv
conversion(rhs).method(lhs)(ev) -> ev.method(lhs, rhs)

Machinist also supports the following oddball cases (which may only be useful for Spire):

// binopWithLift
conversion(lhs)(ev0).method(rhs: Bar)(ev1) -> ev0.method(lhs, ev1.fromBar(rhs))

// binopWithSelfLift
conversion(lhs)(ev).method(rhs: Bar) -> ev.method(lhs, ev.fromBar(rhs))

In both cases, if "method" is a symbolic operator, it may be rewritten to a new name if a match is found in operatorNames.

Details & Fiddliness

To see the names Machinist provides for symbolic operators, see the DefaultOperatorNames trait.

One caveat is that if you want to extend machinist.Ops yourself to create your own name mapping, you must do so in a separate project or sub-project from the one where you will be using the macros. Scala macros must be defined in a separate compilation run from the one in which they are applied.

It's also possible that, despite the wide variety of shapes provided by machinist.Ops, your shape is not supported. Machinist only provides unary and binary operators, meaning that if your method takes 3+ parameters you will need to write your own macro. It should be relatively easy to extend Ops to support these cases, but that work hasn't been done yet. Pull requests will be gladly accepted.
All code is available to you under the MIT license, available at as well as in the COPYING file.
https://index.scala-lang.org/typelevel/machinist/machinist/0.6.5?target=_2.12
I'm stuck on Referring to member variables. I feel like I may have missed a step or something. I get this error message (note that the formatting on here has changed where the arrow is pointing; it is actually under the second underscore after init in the actual error message):

File "python", line 2
def __ init__(self, model, color, mpg):
            ^
SyntaxError: invalid syntax

And because of this Codecademy says:

Oops, try again. Make sure you define your own init() function.

What have I missed or gone wrong with?

class Car(object):
    def __ init__(self, model, color, mpg):
        self.model = model
        self.color = color
        self.mpg = mpg
    condition = "new"

my_car = Car("DeLorean", "silver", 88)
print my_car.model
print my_car.color
print my_car.mpg
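For reference, the error above comes from the stray space in def __ init__: Python reads __ and init__ as two separate names, which is a syntax error. Below is a minimal corrected sketch. It uses print() calls so it also runs on Python 3 (the course itself targets Python 2), and the placement of the condition attribute at class level is my assumption, since the original post's indentation was lost:

```python
class Car(object):
    condition = "new"

    # No space between the underscores and "init": the constructor
    # must be spelled __init__ exactly.
    def __init__(self, model, color, mpg):
        self.model = model
        self.color = color
        self.mpg = mpg

my_car = Car("DeLorean", "silver", 88)
print(my_car.model)   # DeLorean
print(my_car.color)   # silver
print(my_car.mpg)     # 88
```

With the space removed, Python recognizes __init__ as the constructor and the three attributes are set when Car(...) is called.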
https://discuss.codecademy.com/t/referring-to-member-variables/164155
Opened 3 years ago
Closed 3 years ago
Last modified 3 years ago

#21902 closed Cleanup/optimization (fixed)

Document search order for list_display

Description

Suppose I have a ModelAdmin with list_display = ["some", "model", "fields"], but then I also want to override how one of those fields is displayed, so I create a method on the ModelAdmin class:

def some(self, obj):
    return "blah"

It seems the model field takes precedence over the method on the ModelAdmin, and I don't see "blah" returned in the changelist. This is not clear from the documentation. (I figured I would be able to override it.) On a related note, I think it makes sense to be able to override it in the ModelAdmin.

Change History (7)

comment:1 Changed 3 years ago by

comment:2 Changed 3 years ago by

comment:3 Changed 3 years ago by

comment:4 Changed 3 years ago by

PR for this

comment:5 Changed 3 years ago by

Patch looks good to me :)

Hi,

As described in the documentation [1], you can pass four different kinds of values for list_display. However, what that section doesn't say is that the given list is actually the order in which Django tries each possibility. I agree that it'd be useful to amend the documentation to mention explicitly that the order of the list is the one Django uses.

As for the feature you're proposing, I don't see much value in it, for two reasons:

1) It's already possible to override a field's display by defining a method on the ModelAdmin; you just need to give it a different name
2) Backwards-compatibility would be tricky

So I'm marking this ticket as "accepted" for the documentation issue (which should be fairly trivial to fix), but I'm -0 on the proposed change.

Thanks.

[1]
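As a rough illustration of why the reporter's method never fires, here is a small plain-Python sketch of the lookup order the documentation lists. This is an illustration only, not Django's actual implementation, and all class and function names here are made up:

```python
def resolve_list_display_item(name, obj, model_admin, model_fields):
    """Illustrative sketch of the order in which a list_display value
    is resolved (NOT Django's real code)."""
    # 1. A field on the model wins first, which is why a ModelAdmin
    #    method sharing a model field's name is never reached.
    if not callable(name) and name in model_fields:
        return model_fields[name]
    # 2. A callable passed directly in list_display.
    if callable(name):
        return name(obj)
    # 3. An attribute (typically a method) on the ModelAdmin.
    if hasattr(model_admin, name):
        return getattr(model_admin, name)(obj)
    # 4. Finally, an attribute or method on the model instance itself.
    return getattr(obj, name)


class FakeAdmin(object):
    def some(self, obj):          # shadowed by the model field "some"
        return "blah"

    def display_some(self, obj):  # a renamed method is picked up fine
        return "blah"


fields = {"some": "field value"}
print(resolve_list_display_item("some", object(), FakeAdmin(), fields))
print(resolve_list_display_item("display_some", object(), FakeAdmin(), fields))
```

The first call returns the model field's value rather than "blah", matching the behavior described in the ticket; renaming the method (step 3) is the workaround comment:5 suggests.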
https://code.djangoproject.com/ticket/21902
Behringer Xenyx 1622FX

The XENYX FX mixers incorporate a new studio-grade 24-bit FX processor. Get 100 real-world and awesome effect presets at your fingertips.

Details

Brand: BEHRINGER
Part Numbers: 1622FX, XENYEX1622FX, XENYX 1622FX, XENYX-1622FX, XENYX1622FX, xenyx1622fx
UPC: 04033653020800, 4033653020800

1622FX Technical Specifications
Version 1.0, January 2006

XENYX 1622FX: Premium 16-Input 2/2-Bus Mixer with XENYX Mic Preamps, British EQs, 24-Bit Multi-FX Processor and USB/Audio Interface

- Premium ultra low-noise, high headroom analog mixer
- 4 state-of-the-art XENYX Mic Preamps comparable to stand-alone boutique preamps
- Neo-classic British 3-band EQs with semi-parametric mid band
- Channel inserts on each mono channel for flexible connection of outboard equipment
- 2 aux sends per channel: 1 pre/post fader switchable for monitoring/FX applications, 1 post fader (for internal FX or as external send)
- Peak LEDs, mute, main mix and subgroup routing switches, solo and PFL functions on all channels
- 2 subgroups with separate outputs for added routing flexibility; 2 multi-functional stereo aux returns with flexible routing
- Main mix outputs with 1/4" jack and gold-plated XLR connectors, separate control room, headphones and stereo tape outputs
- Control room/phones outputs with multi-input source matrix

BLOCK DIAGRAM / SPECIFICATIONS

+22 dBu @ 0dB Gain) 1/4" TS connector unbalanced approx.
240 W balanced / 120 W unbalanced +22) -101 dB -96 dB -83 dB 90 dB 89 dB 89 dB 100 to 240 V~, 50/60 Hz 37 W 100 - 240 V~: T 1.6 A H 250 V Standard IEC receptacle +0 dB / -1 dB +0 dB / -3 dB approx. 3 7/8" x 11 7/8" x 13 7/8" (97 mm x 301 mm x 351 mm) approx. 3.3 kg 1/4" TRS connector, electronically balanced approx. 20 kW +22 dBu Weight (net). 2:<< User Manual XENYX 1622FX/1832FX/ 2222FX/2442FX Premium 16/18/22/24-Input 2/2, 3/2, 4/2-Bus Mixer with XENYX Mic Preamps, British EQs, 24-Bit Multi-FX Processor and USB/Audio Interface PREMIUM 16-INPUT 2/2-BUS MIXER 24-BIT MULTI-FX PROCESSOR MAIN MIX Thank you Congratulations! In purchasing the BEHRINGER XENYX you have acquired a mixer whose small size belies its incredible versatility and audio performance.. Table of Contents Thank you... 1 Important Safety Instructions. 2 1. INTRODUCTION.. 3 2. CONTROL ELEMENTS AND CONNECTORS. 5 3. GRAPHIC 9-BAND EQUALIZER (1832FX only). 13 4. DIGITAL EFFECTS PROCESSOR.. 14 5. REAR PANEL CONNECTORS.. 14 6. INSTALLATION... 16 7. SPECIFICATIONS.. 18 Limited Warranty... 20 Legal Disclaimer.. 21-00000-02999 Important Safety Instructions [6]Clean only with dry cloth. [7]Do not block any ventilation openings. Install in accordance with the manufacturers instructions. 1. INTRODUCTION EN XENYX Mic Preamp quality This way, what once used to be a labor-intensive search for feedback frequencies is now an activity that even a child could master. Voice Canceller We have added another useful feature to the XENYX 1832FX: sufficient magnitude to constitute risk of electric shock. Use only high-quality commercially-available speaker cables with " TS plugs pre-installed. All other installation or modification should be performed only by qualified personnel. sufficient to constitute a risk of shock. {11}The apparatus shall be connected to a MAINS socket outlet. {13}Only use attachments/accessories specified by the manufacturer. CAUTION! 
{14}Use only with the cart, stand, tripod, bracket,-mode power supply (SMPS). Unlike conventional circuitry an SMPS provides an optimum supply current regardless of the input voltage. And thanks to its considerably higher efficiency a switched-mode power supply uses less energy than conventional power supplies. {15}Unplug this apparatus during lightning storms or when unused for long periods of time.: Signal processing: Preamplification Microphones convert sound waves into voltage that has to be amplified several-fold; then, this voltage is turned into sound that is reproduced in a loudspeaker. Because micro hone capsules are very delicate in their construcp tion,. To reduce the risk of fire or electric shock, do not expose this appliance to rain and moisture. The apparatus shall not be exposed to dripping or splashing liquids and no objects filled with liquids, such as vases, shall be placed on the apparatus. XENYX2222FX XENYX2442FX " jack. You can also connect unbalanced devices using mono jacks to these inputs. Please remember that you can use either the microphone input or the line input of a channel, but not both at the same time!. The block diagram supplied with the mixing console gives you an overview of the connections between the inputs and outputs, as well as the associated switches and controls. 2.1.2 Equalizer All mono input channels have a 3-band equalizer with semiparametric mid bands. All bands provide boost or cut of up to15 dB. In the central position, the equalizer is off (flat).. If you are using the built-in effects processor, make sure that STEREO AUX RETURN 3 has nothing plugged into it (2442FX and 2222FX), otherwise the internal effects return will be muted. This is not relevant if you use the FX OUT jack to drive an external effects device. 1622FX and 1832FX: On these consoles, the above note refers to the STEREO AUX RETURN 2 jacks as these models do not have a dedicated effect output. 
MUTE The MUTE switch breaks the signal path pre-channel fader, hence muting that channel in the main mix. The aux sends which are set to post-fader are likewise muted for that channel, while the pre-fader monitor paths remain active irrespective of whether the channel is muted or not. MUTE LED The MUTE LED indicates a muted channel. CLIP-LED The CLIP-LED lights up when the input signal is driven too high. If this happens, back off the GAIN 2442FX has 4 subgroups (1-2 and 3-4). MAIN The MAIN switch routes the signal to the main mix bus. The channel fader determines the channels volume in the main mix (or submix). Each stereo channel has two balanced line level inputs on jacks for left and right channels. Channels 9/10 and 11/12 on the 2442FX feature an additional XLR microphone jack with phantom power. If only the left jack (marked L) is used, the channel operates in mono. The stereo channels are designed to handle typical line level signals, and, depending on model, have a level switch (+4 dBu or -10 dBV) and/or a line GAIN control. Both jack inputs will also accept unbalanced connectors. LOW CUT and MIC GAIN These two control elements operate on the XLR connectors of the 2442FX, and are used to filter out frequencies below 75 Hz (LOW CUT) and to adjust microphone levels (MIC GAIN). LINE GAIN Use this control to adjust the line signal levels on channels 13-16 (2442FX only). LEVEL For level matching, the stereo inputs on the 1622FX, 1832FX and 2222FX have a LEVEL switch to select between +4dBu and -10dBV. At -10dBV (homerecording level), the input is more sensitive than at +4dBu (studio level). All) 2.1.4 Routing switch, PAN, SOLO Stereo channels 2.2.1 Channel inputs XENYX1622FX XENYX2442FX Fig. 2.3: Aux Send control MON and FX in the channel strips 2.2.3 Aux sends stereo channels In principle, the aux sends of the stereo channels function the same way as those of the mono channels. 
As the aux sends are mono, the send from a stereo channel is first summed to mono before it reaches the aux bus. Monitor and effects busses (AUX sends) source their signals via a control from one or more channels and sum these signals to a so-called bus. This bus signal is sent to an aux send connector (for monitoring applications: MON OUT) and.2 Aux send jacks 2.3.1 MON control, aux sends 1, 2 and 3 (FX) Turning up the AUX 1 control in a channel routes the signal to the aux send bus 1. As the 1832FX. XENYX2442FX Fig. 2.8: Aux send jacks XENYX1832FX (2222FX and 2442FX only). XENYX1832FX Fig. 2.11: Monitor fader of the 1832FX MUTE Press the MUTE switch to mute the monitor send. SOLO The SOLO switch routes the monitor send to the solo bus (post-fader and post-mute) or to the PFL bus (pre-fader and pre-mute). The position of the MODE switch in the main section determines which of the buses is selected. AUX SEND jacks The AUX SEND jack should be used when hooking up a monitor power amp or active monitor speaker system. The relevant aux path should be set pre-fader. On the 2222FX, aux send 1 is hard wired as pre-fader and hence called MON. Model 1832FX. Monitor mix with effect In this instance, your effects device should be set up as follows: the AUX SEND 2 jack should be connected to the L/ Mono input of your effects device, with its outputs coming back into the STEREO AUX RETURN 1 jacks. Connect the AUX SEND 1 jack output to the amplifier of your monitor system. The AUX SEND 1 master control determines the overall volume of the monitor. External effects device External effects device The effect signal reaches receives signal from routes signal back to the monitor mix via.. XENYX1832FX Fig. 2.14: Control elements of the surround function The XPQ surround function can be enabled/ disabled with the XPQ TO MAIN switch. This is a built-in effect that widens the stereo width, thus making the sound more lively and trans-parent.. 
Connect the signal sources you wish to process using the Voice Canceller to the CD/TAPE INPUT connectors. The Voice Canceller circuitry is not available for other inputs. Possible applications for the Voice Canceller are obvious: you can very simply stage background music for Karaoke events. Of course, you can also do this at home or at your rehearsal room before you hit the stage. Singers with their own band can practice singing difficult parts using a complete playback from a tape player or a CD, thus minimizing rehearsal time. 2.3.6 Supplement to 1832FX The 1832FX. Fig. 2.17: PHONES jack PHONES jack You can connect headphones to this " stereo jack (2442FX: 2 phones jacks). The signal routed to the PHONES connection is the same as that routed to the control room output.. XENYX1832FX Fig. 3.1: The graphic stereo equalizer of the 1832FX. Logically, at least one (ideally several) microphone channels have to be open for feedback to occur at all!. The peak meters of your XENYX display level almost independent of frequency. A recording level of 0 dB is recommended for all types of signal. XENYX2442FX Fig. 2.18: Subgroup and main mix faders Feedback is particularly common when stage monitors (wedges) are concerned, because monitors project sound in the direction of microphones. Therefore, you can also use the FBQ Feedback Detection for monitors by placing the equalizer in the monitor bus (see MAIN MIX/MONITOR). 5.1 Main mix outputs, insert points and control room outputs 5.5 Voltage supply, phantom power supply and fuse XENYX1832FX Fig. 4.1: Digital effects module All Models Fig. 5.5: Voltage supply and fuse The built-in stereo effects processor has the advantage that it does not need to be wired up. This excludes the danger of humming or level mismatch right from the start and thus 2222FX and 2442FX. The 2442FX has the effect output on the rear, 2222FX. XENYX2442FX Fig. 
5.1: Main Mix outputs, main mix insert points and control room outputs 5.3 Inserts FUSE HOLDER/IEC MAINS RECEPTACLE The console is connected to the mains via the cable supplied, which meets the required safety standards. Blown fuses must only be replaced by fuses of the same type and rating. The mains connection is made via a cable with IEC mains connector. An appropriate mains cable is supplied with the equipment. POWER switch Use the POWER switch to turn on the mixing console. The POWER switch should always be in the Off position when you are about to connect your unit to the mains. To disconnect the unit from the mains, pull out the main cord plug. When installing the product, ensure that the plug is easily accessible. If mounting in a rack, ensure that the mains can be easily disconnected by a plug pull or by an allpole disconnect switch on or near the rack. MAIN OUTPUTS The MAIN outputs carry the MAIN MIX signal and are on balanced XLR jacks with a nominal level of +4 dBu. In parallel with this, " phone jacks carry the main mix signal in a balanced format (1622FX: here, the phone jack outputs are unbalanced and located on the front panel). CONTROL ROOM OUTPUTS (CTRL OUT) The control room output is normally connected to the monitoring system in the control room and carries the stereo mix or, when selected, the solo signals. MAIN INS(ERTS) (2442FX. XENYX1622FX Fig. 5.3: Insert points On the 2442FX the channel insert points are located on the control panel between the line input and the GAIN control. Insert points are very useful to process channel signals with dynamic processors or equalizers. plug: tip = signal output; ring = return input). All mono input channels are equipped with inserts. They are pre-fader, pre-EQ and pre-aux send. Inserts can also be used as pre-EQ direct outputs, without interrupting the signal path. 
To this end, you will need a cable fitted with mono phone plugs on the tape machine or effect device end, and a bridged stereo phone plug on the console side (tip and ring connected). loud-speakers. 6.2 Cable connections You will need a large number of cables for the various connections of the console. The illustrations below show the wiring of these cables. Be sure to use only high-grade cables. strain relief clamp sleeve tip Caution! You must never use unbalanced XLR connectors (PIN 1 and 3 connected) at the MIC input jacks if you want to use the phantom power supply. Strain relief clamp Sleeve Tip strain relief clamp sleeve ring tip sleeve ground/shield sleeve pole 1/ground Sleeve (ground/shield) ring return (in) tip send (out) Connect the insert send with the input and the insert return with the output of the effects device.. tip pole 2 The footswitch connects both poles momentarily Tip (signal) Insert send return 1/ TRS connector 4" Fig. 6.5: Insert send/return stereo plug strain relief clamp sleeve Unbalanced " TS connector Fig. 6.3: 1/4 mono plug 1/ TS footswitch connector 4" Fig. 6.1: Foot switch connector ring tip 6.2.1 Audio connections Please use commercial RCA cables to wire the 2-track inputs and outputs. You can, of course, also connect unbalanced devices to the balanced input/outputs. Use either mono plugs, or use stereo plugs to link the ring and shaft (or pins 1 & 3 in the case of XLR connectors). Fuse Mains connection Limited Warranty 1 Warranty [1] This limited warranty is valid only if you purchased the product from a BEHRINGER [2] This limited warranty does not cover the product if it has been electronically or authorized dealer in the country of purchase. A list of authorized dealers can be found on BEHRINGERs website under Where to Buy, or you can contact the BEHRINGER oce specied modied in any way. 
If the product needs to be modied or adapted in order to comply with applicable technical or safety standards on a national or local level, in any country which is not the country for which the product was originally developed and manufactured, this modication/adaptation shall not be considered a defect in materials or workmanship. This limited warranty does not cover any such modication/adaptation, regardless of whether it was carried out properly or not. Under the terms of this limited warranty, BEHRINGER shall not be held responsible for any cost resulting from such a modication/adaptation. [3] This limited warranty covers only the product hardware. It does not cover VP-DC575WB CW-21M63N Optimizer DVP3020 CD2301S MF-14 White DTH220E Mypal A626 Xn EW MC-809NC DD200 1100 AE Performance STR-DB900 CDC635 MHC-GRX10AV KX-TG5240M 1 2 Pro TL FC6095 DCT3400 BL-C10 Wkpc54G Rcdc1 Jetdirect 615N Strd1011 PCG-F701 QC5055 Scattergories 2003 DHC-AZ33D 37x20E DES-802 P4P800-MX Vision Mixing Desk Arxd 149 CDX-737 Photosmart 7830 HVL-MT24AM SC-EH780 Camera PT-6 6261D KX-TG6423 Finepix F20 Blumat Price TM160SP RSH1utrs 2F2607 MV1502B 1800-804 DMR-E85 Review 37PF9986 DCD-825 WS-32M66V Powerflex 4M 2043BW 7 7E A 200 8391D Travelmate 5320 MT01-2006 FD630U H 4210 IC-2100-T CDA-7893R VT999 XS-drive II VGN-TT11ln B CH-DVD402 RF26vabbp V2 USB Liebherr SGN MM-DA25R 90250 5100 Bike 1064 D H3100 Slg120NW DEH-P5100UB HDW-750 Mf 9545 MC-141 KL-750E Dryer BV3550 Ducati 749 250 U AH215-JD NC-200 PB Bionaire CM1 Raymarine ST60 CMS 1000 Class 5 NAD T761 MX 1100 B2220 RX-6032V I5871 Streetpilot C510 RB-970
http://www.ps2netdrivers.net/manual/behringer.xenyx.1622fx/
android.util.Log is the class that provides logging functions. It provides the methods below to log data to the LogCat console.

1. Android Log Methods.

- Log.v() : Print verbose level log data. Verbose is the lowest log level; printing large amounts of this kind of log data is rarely meaningful.
- Log.d() : Print debug level log data. Debug level is one step higher than verbose. Debug log data is usually useful during android application development and testing.
- Log.i() : Print info level log data. Info level is one step higher than debug. Info log data is used to collect user actions and behaviours.
- Log.w() : Print warn level log data. Warn level is one step higher than info. When you see this kind of log data, it means your code contains potential risks and you need to check it carefully.
- Log.e() : Print error level log data. Error level is the highest level. It is always used in a java catch block to log exception or error information. This kind of log data can help you find the root cause of an app crash.

2. Android Log Methods Example.

This example is very simple. When you click the button, it prints the above 5 kinds of log data in the LogCat console. When you input the search keyword LogActivity in the LogCat panel, the app's log data is filtered out. Each line of log data contains the log time, class name and log message. You can also filter the log data by its type: verbose, info, debug, warn or error. This makes searching the log data easier and more accurate. Click the Settings icon in the LogCat panel to configure which columns are displayed for each line of log.

3. Android Log Example Source Code.

3.1 Main Layout Xml File.

activity_log.xml

<Button android:
3.2 Activity Java File.

LogActivity.java

package com.dev2qa.example;

import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;
import android.util.Log;
import android.view.View;
import android.widget.Button;

public class LogActivity extends AppCompatActivity {

    private static final String LOG_TAG = "LogActivity";

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_log);
        setTitle("dev2qa.com --- Android Log Methods Example.");

        // Get the button instance.
        Button createLogButton = (Button) findViewById(R.id.createLogButton);

        // When the button is clicked, print log data.
        createLogButton.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View view) {
                Log.v(LOG_TAG, "This is verbose log");
                Log.d(LOG_TAG, "This is debug log");
                Log.i(LOG_TAG, "This is info log");
                Log.w(LOG_TAG, "This is warn log");
                Log.e(LOG_TAG, "This is error log");
            }
        });
    }
}

3.3 Log Tag Name Tip.

The first parameter of each log method is a tag name; we can use this tag name to filter out the useful log data easily. Commonly the tag is the class name. You can create the tag name in the activity java file as follows.

- Type logt in the activity java file, outside of the onCreate() method.
- Press the Tab key, and Android Studio will generate the tag name automatically.
- Besides the class name, you can also create tag names by function. It is a good idea to create a class that includes all the static String constants used as tag names, as below.

package com.dev2qa.example.constant;

/**
 * Created by Jerry on 12/9/2017.
 */
public class LogTagName {

    public static final String LOG_TAG_UI = "LOG_TAG_UI";
    public static final String LOG_TAG_NETWORK = "LOG_TAG_NETWORK";
    public static final String LOG_TAG_DATABASE = "LOG_TAG_DATABASE";
    public static final String LOG_TAG_LOGIC = "LOG_TAG_LOGIC";
    public static final String LOG_TAG_APP = "LOG_TAG_APP";
}

Log.w(LogTagName.LOG_TAG_NETWORK, "This is warn log");

This makes the log more readable; even a non-programmer can understand the meaning of the log.

4. LogCat Filter.

There are four items in the LogCat filter drop down list.

- Show only selected application.
- Firebase. This is a log analytics tool provided by Google.
- No Filters.
- Edit Filter Configuration.

Select the Edit Filter Configuration drop down item and a dialog will pop up that lets you create or edit log filters. You can specify conditions to filter out the related logs, for example by log tag, log message, package name, pid and log level. Log tag, message and package name support regular expressions.

5. LogCat Operation.

You can clear the logcat content by right-clicking the logcat output console and choosing the Clear logcat item in the popup menu. If the log data cannot be cleared after the above action, that means the emulator has been stopped or disconnected. You need to select an active emulator in the drop down list to run the android app and watch the log data.

6. How To Retrieve Android Crash Logs Using The ADB Command.

You have learned how to record logs in android studio. But how do you get that log data when you need to analyze it during development? You can follow the steps below.

Save LogCat Log Data To A Local Text File.

- Open a dos command window and cd to your %ANDROID_HOME%\platform-tools folder.
- Input the command adb devices; this command lists all connected devices.
- Run the command adb logcat >> logcatData.txt. After a while, you can find the file in the platform-tools folder.
This file will include all the logcat logs in it. The log file will grow while the emulator runs.

1 Comment

Thanks for this post! Especially "3.3 Log Tag Name Tip" 👍
https://www.dev2qa.com/android-logcat-and-logging-best-practice/
How to: Read Text from a File

.NET Framework 3.0

The following code examples show how to read text from a text file. The second example notifies you when the end of the file is detected. This functionality can also be achieved by using the ReadAllLines or ReadAllText methods.

Example

using System;
using System.IO;

public class TextFromFile
{
    private const string FILE_NAME = "MyFile.txt";

    public static void Main(String[] args)
    {
        if (!File.Exists(FILE_NAME))
        {
            Console.WriteLine("{0} does not exist.", FILE_NAME);
            return;
        }
        using (StreamReader sr = File.OpenText(FILE_NAME))
        {
            String input;
            while ((input = sr.ReadLine()) != null)
            {
                Console.WriteLine(input);
            }
            Console.WriteLine("The end of the stream has been reached.");
            sr.Close();
        }
    }
}
http://msdn.microsoft.com/en-us/library/office/db5x7c0d(v=vs.85).aspx
Wed 26 Feb 2014

Python 101: Reading and Writing CSV Files

Posted by Mike under Cross-Platform, Education, Python

Python has a vast library of modules that are included with its distribution. The csv module gives the Python programmer the ability to parse CSV (Comma Separated Values) files. A CSV file is a human readable text file where each line has a number of fields, separated by commas or some other delimiter. You can think of each line as a row and each field as a column. The CSV format has no standard, but the files are similar enough that the csv module will be able to read the vast majority of CSV files. You can also write CSV files using the csv module.

Reading a CSV File

There are two ways to read a CSV file. You can use the csv module's reader function or you can use the DictReader class. We will look at both methods. But first, we need to get a CSV file so we have something to parse. There are many websites that provide interesting information in CSV format. We will be using the World Health Organization's (WHO) website to download some information on Tuberculosis. You can go here to get it. Once you have the file, we'll be ready to start. Ready? Then let's look at some code!

import csv

#----------------------------------------------------------------------
def csv_reader(file_obj):
    """
    Read a csv file
    """
    reader = csv.reader(file_obj)
    for row in reader:
        print(" ".join(row))

#----------------------------------------------------------------------
if __name__ == "__main__":
    csv_path = "TB_data_dictionary_2014-02-26.csv"
    with open(csv_path, "rb") as f_obj:
        csv_reader(f_obj)

Let's take a moment to break this down a bit. First off, we have to actually import the csv module. Then we create a very simple function called csv_reader that accepts a file object. Inside the function, we pass the file object into the csv.reader function, which returns a reader object. The reader object allows iteration, much like a regular file object does.
This let’s us iterate over each row in the reader object and print out the line of data, minus the commas. This works because each row is a list and we can join each element in the list together, forming one long string. Now let’s create our own CSV file and feed it into the DictReader class. Here’s a really simple one: first_name,last_name,address,city,state,zip_code Tyrese,Hirthe,1404 Turner Ville,Strackeport,NY,19106-8813 Jules,Dicki,2410 Estella Cape Suite 061,Lake Nickolasville,ME,00621-7435 Dedric,Medhurst,6912 Dayna Shoal,Stiedemannberg,SC,43259-2273 Let’s save this in a file named data.csv. Now we’re ready to parse the file using the DictReader class. Let’s try it out: import csv #---------------------------------------------------------------------- def csv_dict_reader(file_obj): """ Read a CSV file using csv.DictReader """ reader = csv.DictReader(file_obj, delimiter=',') for line in reader: print(line["first_name"]), print(line["last_name"]) #---------------------------------------------------------------------- if __name__ == "__main__": with open("data.csv") as f_obj: csv_dict_reader(f_obj) In the example above, we open a file and pass the file object to our function as we did before. The function passes the file object to our DictReader class. We tell the DictReader that the delimiter is a comma. This isn’t actually required as the code will still work without that keyword argument. However, it’s a good idea to be explicit so you know what’s going on here. Next we loop over the reader object and discover that each line in the reader object is a dictionary. This makes printing out specific pieces of the line very easy. Now we’re ready to learn how to write a csv file to disk. Writing a CSV File The csv module also has two methods that you can use to write a CSV file. You can use the writer function or the DictWriter class. We’ll look at both of these as well. We will be with the writer function. 
Let’s look at a simple example: import csv #---------------------------------------------------------------------- def csv_writer(data, path): """ Write data to a CSV file path """ with open(path, "wb") as csv_file: writer = csv.writer(csv_file, delimiter=',') for line in data: writer.writerow(line) #---------------------------------------------------------------------- if __name__ == "__main__": data = ["first_name,last_name,city".split(","), "Tyrese,Hirthe,Strackeport".split(","), "Jules,Dicki,Lake Nickolasville".split(","), "Dedric,Medhurst,Stiedemannberg".split(",") ] path = "output.csv" csv_writer(data, path) In the code above, we create a csv_writer function that accepts two arguments: data and path. The data is a list of lists that we create at the bottom of the script. We use a shortened version of the data from the previous example and split the strings on the comma. This returns a list. So we end up with a nested list that looks like this: [['first_name', 'last_name', 'city'], ['Tyrese', 'Hirthe', 'Strackeport'], ['Jules', 'Dicki', 'Lake Nickolasville'], ['Dedric', 'Medhurst', 'Stiedemannberg']] The csv_writer function opens the path that we pass in and creates a csv writer object. Then we loop over the nested list structure and write each line out to disk. Note that we specified what the delimiter should be when we created the writer object. If you want the delimiter to be something besides a comma, this is where you would set it. Now we’re ready to learn how to write a CSV file using the DictWriter class! We’re going to use the data from the previous version and transform it into a list of dictionaries that we can feed to our hungry DictWriter. 
Let’s take a look: import csv #---------------------------------------------------------------------- def csv_dict_writer(path, fieldnames, data): """ Writes a CSV file using DictWriter """ with open(path, "wb") as out_file: writer = csv.DictWriter(out_file, delimiter=',', fieldnames=fieldnames) writer.writeheader() for row in data: writer.writerow(row) #---------------------------------------------------------------------- if __name__ == "__main__": data = ["first_name,last_name,city".split(","), "Tyrese,Hirthe,Strackeport".split(","), "Jules,Dicki,Lake Nickolasville".split(","), "Dedric,Medhurst,Stiedemannberg".split(",") ] my_list = [] fieldnames = data[0] for values in data[1:]: inner_dict = dict(zip(fieldnames, values)) my_list.append(inner_dict) path = "dict_output.csv" csv_dict_writer(path, fieldnames, my_list) We will start in the second section first. As you can see, we start out with the nested list structure that we had before. Next we create and empty list and a list that contains the field names, which happens to be the first list inside the nested list. Remember, lists are zero-based, so the first element in a list starts at zero! Next we loop over the nested list construct, starting with the second element: for values in data[1:]: inner_dict = dict(zip(fieldnames, values)) my_list.append(inner_dict) Inside the for loop, we use Python builtins to create dictionary. The **zip** method will take two iterators (lists in this case) and turn them into a list of tuples. Here’s an example: zip(fieldnames, values) [('first_name', 'Dedric'), ('last_name', 'Medhurst'), ('city', 'Stiedemannberg')] Now when your wrap that call in **dict**, it turns that list of of tuples into a dictionary. Finally we append the dictionary to the list. 
When the **for** loop finishes, you'll end up with a data structure that looks like this:

[{'city': 'Strackeport', 'first_name': 'Tyrese', 'last_name': 'Hirthe'},
 {'city': 'Lake Nickolasville', 'first_name': 'Jules', 'last_name': 'Dicki'},
 {'city': 'Stiedemannberg', 'first_name': 'Dedric', 'last_name': 'Medhurst'}]

At the end of the second section, we call our csv_dict_writer function and pass in all the required arguments. Inside the function, we create a DictWriter instance and pass it a file object, a delimiter value and our list of field names. Next we write the field names out to disk and loop over the data one row at a time, writing the data to disk.

The DictWriter class also supports the writerows method, which we could have used instead of the loop. The csv.writer function also supports this functionality.

You may be interested to know that you can also create Dialects with the csv module. This allows you to tell the csv module how to read or write a file in a very explicit manner. If you need this sort of thing because of an oddly formatted file from a client, then you'll find this functionality invaluable.

Wrapping Up

Now you know how to use the csv module to read and write CSV files. There are many websites that put out their data in this format and it is used a lot in the business world. Have fun and happy coding!

Additional Reading
- Python Documentation – Section 13.1 csv
- Reading and Writing CSV Files with Python DictReader and DictWriter
- Python Module of the Week: csv
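To make the Dialect feature mentioned above concrete, here is a minimal sketch. Note that it uses Python 3 syntax (the examples above are Python 2), and the dialect name "pipes" is made up for illustration:

```python
import csv

# Register a hypothetical dialect for pipe-delimited files
csv.register_dialect("pipes", delimiter="|", quoting=csv.QUOTE_MINIMAL)

# Write a small file using the dialect
with open("pipe_data.csv", "w", newline="") as f:
    writer = csv.writer(f, dialect="pipes")
    writer.writerow(["first_name", "last_name", "city"])
    writer.writerow(["Jules", "Dicki", "Lake Nickolasville"])

# Read it back with the same dialect
with open("pipe_data.csv", newline="") as f:
    rows = list(csv.reader(f, dialect="pipes"))

print(rows[1])  # ['Jules', 'Dicki', 'Lake Nickolasville']
```

Once registered, the dialect name can be passed anywhere a reader or writer accepts the dialect keyword, which keeps the formatting rules in one place instead of repeating delimiter and quoting arguments at every call site.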
http://www.blog.pythonlibrary.org/2014/02/26/python-101-reading-and-writing-csv-files/
getprlpnam, putprlpnam - Manipulate printer control database entry (Enhanced Security)

Security Library - libsecurity.a

#include <sys/types.h>
#include <sys/security.h>
#include <prot.h>

struct pr_lp *getprlpnam ( char *name );

int putprlpnam ( char *name, struct pr_lp *pr );

Specifies a printer control database entry name. Specifies a printer control database entry structure.

The getprlpnam() function returns a pointer to an object with the following structure containing the broken-out fields of a line in the printer control database. Each line in the database contains a pr_lp structure, declared in the prot.h header file as follows:

/* Printer Control Database Entry */
struct l_field {
    char   fd_name[15];      /* holds printer name */
    char   fd_initseq[256];  /* initial sequence */
    char   fd_termseq[256];  /* termination sequence */
    char   fd_emph[256];     /* emphasize sequence */
    char   fd_deemph[256];   /* de-emphasize sequence */
    char   fd_chrs[130];     /* characters to filter */
    ushort fd_chrslen;       /* length of string of illegal chars */
    char   fd_escs[256];     /* escape sequences */
    ushort fd_escslen;       /* length of string: illegal escapes */
    int    fd_linelen;       /* length of a line in characters */
    int    fd_pagelen;       /* length of a page in lines */
    char   fd_truncline;     /* printer truncates long lines? */
};

struct l_flag {
    unsigned short
        fg_name:1,      /* Is fd_name set? */
        fg_initseq:1,   /* Is fd_initseq set? */
        fg_termseq:1,   /* Is fd_termseq set? */
        fg_emph:1,      /* Is fd_emph set? */
        fg_deemph:1,    /* Is fd_deemph set? */
        fg_chrs:1,      /* Is fd_chrs set? */
        fg_chrslen:1,   /* Is fd_chrslen set? */
        fg_escs:1,      /* Is fd_escs set? */
        fg_escslen:1,   /* Is fd_escslen set? */
        fg_linelen:1,   /* Is fd_linelen set? */
        fg_pagelen:1,   /* Is fd_pagelen set? */
        fg_truncline:1; /* Is fd_truncline set? */
};

struct pr_lp {
    struct l_field ufld;
    struct l_flag  uflg;
    struct l_field sfld;
    struct l_flag  sflg;
};

The getprlpnam() function searches from the beginning of the database until a printer name matching name is found, and returns a pointer to the particular structure in which it was found. If an end-of-file or an error is encountered on reading, the function returns a null pointer.

The putprlpnam() function puts a new or replaced printer control entry pr with key name into the database. If the uflg.fg_name field is 0, the requested entry is deleted from the printer control database. The putprlpnam() function locks the database for all update operations, and performs an endprlpent() after the update or failed attempt.

For ASCII printers, the fields in the printer control database contain the characteristics of the printer so the trusted line printer subsystem can apply labels to the top and bottom of printed pages. The ufld.fd_name field matches the printer model, supplied by the line printer scheduler to the lprcat program to access the appropriate entry in this database. The ufld.fd_initseq field is a null-terminated string containing the initialization sequence for the printer; it is sent by line printer software at the start of each job. Similarly, ufld.fd_termseq contains a null-terminated string, which is sent to the printer when each job is complete. The size of the printed page is specified in ufld.fd_linelen (width) and ufld.fd_pagelen (height). These values are expressed in characters for dot matrix printers, and points (1/72 inch) for laser printers. Other values are used for character filtering (supported only on dot matrix printers). The ufld.fd_emph field is a null-terminated character string that causes the printer to begin emphasizing characters printed. Similarly, ufld.fd_deemph is a null-terminated character string that resumes normal printing.
The ufld.fd_chrs array is a list of characters that is automatically filtered out by the line printer software because it causes carriage motion that would cause the line printer software to lose its place on the page. The length of this array is ufld.fd_chrslen. The ufld.fd_escs string contains characters that cause carriage motion after an escape character (ASCII \033). The ufld.fd_truncline Boolean indicator specifies whether the printer truncates lines when they are too long. This allows the printer software to keep track of the logical line printed.

All information is contained in a static area, so it must be copied if it is to be saved. Specifically, specifying a buffer returned to putprlpnam() does not perform the intended action.

Programs using these functions must be compiled with -lsecurity.

A null pointer is returned on EOF or error.

Line printer subsystem configuration file. General security databases file.

Functions: getprpwent(3), getprtcent(3), getprdfent(3)
http://backdrift.org/man/tru64/man3/putprlpnam.3.html
Part 16 - Generators

Generator Expressions

Generator Expressions have similar syntax to the for loops that we have covered, and serve a similar purpose. The best way to learn how to use Generator Expressions is by example, so here we load up a booish prompt.

$ booish
>>> List(x for x in range(5)) // simplest Generator Expression
[0, 1, 2, 3, 4]
>>> List(x * 2 for x in range(5)) // get double of values
[0, 2, 4, 6, 8]
>>> List(x**2 for x in range(5)) // get square of values
[0, 1, 4, 9, 16]
>>> List(x for x in range(5) if x % 2 == 0) // check if values are even
[0, 2, 4]
>>> List(x for x in range(10) if x % 2 == 0) // check if values are even
[0, 2, 4, 6, 8]
>>> List(y for y in (x**2 for x in range(10)) if y % 3 != 0) // Generator Expression inside another
[1, 4, 16, 25, 49, 64]
>>> List(cat.Weight for cat in myKitties if cat.Age >= 1.0).Sort()
[6.0, 6.5, 8.0, 8.5, 10.5]
>>> genex = x ** 2 for x in range(5)
generator(System.Int32)
>>> for i in genex:
...     print i
...
0
1
4
9
16

The cat-weight example is probably what Generator Expressions are most useful for. You don't have to create Lists from them either, that's mostly for show. Generators are derived from IEnumerable, so you get all the niceties of the for loop as well.

Generator Methods

A Generator Method is like a regular method that can return multiple times. Here's a Generator Method that will return exponents of 2.

[1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]

Generator Methods are very powerful because they keep all their local variables in memory after a yield. This can allow for certain programming techniques not found in some other languages. Generators are very powerful and useful.

Exercises

- Create a Generator that will destroy mankind.

Go on to Part 17 - Macros

1 Comment

Michel Casabianca

Exercice: write a generator for Fibonaci sequence.
Response:
-8<------------------------------------------------------------
def FiboGenerator(nb as int):
    a, b = 1, 1
    yield a
    yield b
    for i in range(nb-2):
        a, b = b, a+b
        yield b

print List(FiboGenerator(7))
-8<------------------------------------------------------------
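The code listing for the "exponents of 2" Generator Method earlier appears to have been lost in transcription. An equivalent generator, written here in Python for illustration (Boo's syntax is very close), might look like this; the function name is my own:

```python
def powers_of_two(limit):
    """Yield successive powers of 2, up to and including limit."""
    value = 1
    while value <= limit:
        yield value
        value *= 2

print(list(powers_of_two(1024)))
# [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]
```

As with the Boo version, the local variable `value` is kept alive between yields, which is what makes Generator Methods more flexible than a plain loop that builds a list.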
http://docs.codehaus.org/display/BOO/Part+16+-+Generators
I am making an app in which I want to get the current time from the internet. I know how to get the time from the device using System.currentTimeMillis.

You can get time from internet time servers using the below program:

import java.io.IOException;

import org.apache.commons.net.time.TimeTCPClient;

public final class GetTime {

    public static final void main(String[] args) {
        try {
            TimeTCPClient client = new TimeTCPClient();
            try {
                // Set timeout of 60 seconds
                client.setDefaultTimeout(60000);
                // Connecting to time server
                // Other time servers can be found at :
                // Make sure that your program NEVER queries a server more frequently than once every 4 seconds
                client.connect("nist.time.nosc.us");
                System.out.println(client.getDate());
            } finally {
                client.disconnect();
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

1. You would need the Apache Commons Net library for this to work. Download the library and add it to your project build path. (Or you can also use the trimmed Apache Commons Net Library here :. This is enough to get time from the internet.)
2. Run the program. You will get the time printed on your console.
https://codedump.io/share/45Q6vKWCK5oW/1/how-to-get-current-time-from-internet-in-android
While Android puts a powerful built-in database at your disposal, it doesn't come with the best set of debugging tools. In fact, unless you have a rooted device, you can't even get the SQLite tables off your device without jumping through some hoops. Fortunately, the Android emulator doesn't have this restriction. This walk-thru demonstrates how I generally debug SQLite tables. If you aren't familiar with SQLite tables on Android, read my TechRepublic article, "Use SQLite to create a contacts browser in Android."

1. This tutorial doesn't cover CRUD operations for SQLite; however, we do need data to debug in order to demonstrate the technique. I used the short code snippet below to create a sample database.

package com.authorwjf.sqlitetablemaker;

import android.os.Bundle;
import android.widget.Toast;
import android.app.Activity;
import android.database.sqlite.SQLiteDatabase;

public class MainActivity extends Activity {

    private static final String SAMPLE_DB_NAME = "TrekBook";
    private static final String SAMPLE_TABLE_NAME = "Info";

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        SQLiteDatabase sampleDB = this.openOrCreateDatabase(SAMPLE_DB_NAME, MODE_PRIVATE, null);
        sampleDB.execSQL("CREATE TABLE IF NOT EXISTS " + SAMPLE_TABLE_NAME
                + " (LastName VARCHAR, FirstName VARCHAR,"
                + " Rank VARCHAR);");
        sampleDB.execSQL("INSERT INTO " + SAMPLE_TABLE_NAME
                + " Values ('Kirk','James, T','Captain');");
        sampleDB.close();
        Toast.makeText(this, "DB Created!", Toast.LENGTH_LONG).show();
    }
}

2. The second step is to open an emulator that contains the table you want to browse. In this instance using Eclipse, I opened a new emulator and loaded the sqlitetablemaker.apk on it (Figure A).

Figure A

3. Within Eclipse, you need to switch to DDMS mode by going to Window | Open Perspective | DDMS.
If you've never used DDMS, you'll likely need to go to Window | Open Perspective | Other and then browse for DDMS within the list (Figures B and C). Figure B Figure C 4. Once the DDMS view is active, choose the File Explorer tab. You'll find your database in the /data/data/your.app.namespace/databases directory. There are two virtually indistinguishable icons in the upper right-hand corner of the tab that represent a pull and a push of a file, respectively. Use the pull icon (the one on the left) to save a copy of the SQLite database to your development machine (Figure D). Figure D 5. Now you have a copy of your database on your workstation, but you still need some kind of SQLite viewer to take a peek. I use SQLite Database Browser, because it is free and runs on my Wintel, Mac, and Ubuntu boxes. 6. Open the database and inspect the data (Figure E). Figure E While it's not terribly difficult to get a look at the SQLite tables you create in your Android apps, it requires quite a few steps. And remember this only works with the emulator — if you need to debug SQLite tables on an actual device, you'll need to create an export function that copies the database to the SD card, at which point you can follow this walk-thru to open the tables. I hope future versions of the Android development tools will make this process less painful. Until then, you may want to save this.
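The export function mentioned in the closing paragraph (copying the database file off a real device's internal storage) could be sketched as a plain-Java file copy. The details below are illustrative only: on a real device you would resolve the source with Context.getDatabasePath() and handle storage permissions, neither of which is shown here.

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.FileWriter;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class DbExporter {

    // Copy a file byte-for-byte; usable for exporting an SQLite db file
    public static void copyFile(File src, File dst) throws IOException {
        try (InputStream in = new FileInputStream(src);
             OutputStream out = new FileOutputStream(dst)) {
            byte[] buf = new byte[4096];
            int len;
            while ((len = in.read(buf)) > 0) {
                out.write(buf, 0, len);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        // Demo with temp files standing in for the db and SD card paths
        File src = File.createTempFile("trekbook", ".db");
        try (FileWriter w = new FileWriter(src)) {
            w.write("fake sqlite bytes");
        }
        File dst = File.createTempFile("exported", ".db");
        copyFile(src, dst);
        if (dst.length() != src.length()) {
            throw new IllegalStateException("copy failed");
        }
        System.out.println("copied " + dst.length() + " bytes");
    }
}
```

Once the copy lands on external storage, the rest of this walk-thru applies unchanged: pull the file with DDMS (or adb) and open it in a desktop SQLite browser.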
http://www.techrepublic.com/blog/software-engineer/browse-sqlite-data-on-the-android-emulator/
A library is a collection of precompiled object files which can be linked into programs. The most common use of libraries is to provide system functions, such as the square root function sqrt found in the C math library.

Libraries are typically stored in special archive files with the extension '.a', created from object files with the GNU archiver ar. The standard system libraries are usually found in the directories '/usr/lib' and '/lib'.(5) For example, the C math library is typically stored in the file '/usr/lib/libm.a' on Unix-like systems. The corresponding prototype declarations for the functions in this library are given in the header file '/usr/include/math.h'. The C standard library itself is stored in '/usr/lib/libc.a' and contains functions specified in the ANSI/ISO C standard, such as 'printf'---this library is linked by default for every C program.

Here is an example program which makes a call to the external function sqrt in the math library 'libm.a':

#include <math.h>
#include <stdio.h>

int
main (void)
{
  double x = sqrt (2.0);
  printf ("The square root of 2.0 is %f\n", x);
  return 0;
}

Trying to create an executable from this source file alone causes the compiler to give an error at the link stage:

$ gcc -Wall calc.c -o calc
/tmp/ccbR6Ojm.o: In function `main':
/tmp/ccbR6Ojm.o(.text+0x19): undefined reference to `sqrt'

The problem is that the reference to the sqrt function cannot be resolved without the external math library 'libm.a'. The function sqrt is not defined in the program or the default library 'libc.a', and the compiler does not link to the file 'libm.a' unless it is explicitly selected. Incidentally, the file mentioned in the error message '/tmp/ccbR6Ojm.o' is a temporary object file created by the compiler from 'calc.c', in order to carry out the linking process.

To enable the compiler to link the sqrt function to the main program 'calc.c' we need to supply the library 'libm.a'.
One obvious but cumbersome way to do this is to specify it explicitly on the command line:

$ gcc -Wall calc.c /usr/lib/libm.a -o calc

The library 'libm.a' contains object files for all the mathematical functions, such as sin, cos, exp, log and sqrt. The linker searches through these to find the object file containing the sqrt function.

To avoid the need to specify long paths on the command line, the compiler provides a short-cut option '-l' for linking against libraries. For example, the following command,

$ gcc -Wall calc.c -lm -o calc

is equivalent to the original command above using the full library name '/usr/lib/libm.a'.

In general, the compiler option -lNAME will attempt to link object files with a library file 'libNAME.a' in the standard library directories. Additional directories can be specified with command-line options and environment variables, to be discussed shortly. A large program will typically use many -l options to link libraries such as the math library, graphics libraries and networking libraries.
https://www.linuxtopia.org/online_books/an_introduction_to_gcc/gccintro_17.html
27 April 2011 07:34 [Source: ICIS news]

SINGAPORE (ICIS)--UK's oil giant BP reported on Wednesday a near-tripling of replacement cost profit at its refining and marketing operations to $2.08bn (€1.41bn) in the first quarter of 2011 on the back of an improved refining environment.

The segment's pre-tax profit surged more than threefold to $4.37bn from $1.41bn in the first quarter of 2010, BP said in a statement.

"In the international businesses, strong operational performance in our petrochemicals business has enabled us to benefit from the favourable margin environment and lubricants continued to deliver earnings growth," the company said.

The second quarter, however, may see weaker contribution from supply and trading, with some softening in petrochemical margins, BP said.

In March, BP announced that it had agreed to sell a package of 33 refined products terminals and 992 miles of pipelines across 13 states.

Meanwhile, BP's overall replacement cost profit in the first quarter slipped 2.1% year on year to $5.48bn, as it recognised a $400m pre-tax charge.

The company takes into account inventory holding gains or losses, as well as taxes, in the computation of replacement cost profit. In the March quarter, BP booked $1.64bn in inventory holding losses, which were deducted from the net profit of $7.12bn.
http://www.icis.com/Articles/2011/04/27/9455243/bp-refining.html
In my previous question, I asked: I would like to take a vcf file and a reference genome from the 1000Genomes project, and obtain a fasta file that lists the genomes for each individual in the vcf, according to the SNPs each individual has in the vcf file. Answers showed that GATK (FastaAlternativeReferenceMarker) and vcftools (vcf-consensus,) were able to do something similar to this. However, I'd like to skip indels when creating these sequences. Do existing tools have options to do this? You might try just removing INDELs from your VCF file before passing to these tools, but you could also consider looking at the FastaVariant class in pyfaidx. See A: Make fasta file from SNPs in two vcf files for an example. EDIT: The FastaVariant approach is currently too slow for whole chromosome access. I've tried parsing through the vcf files using python to remove indels but I've run it for days and it hasn't finished, and I'm not sure if it's a bug in my script or because there's 84.4 million SNPs to go through. I've looked at FastaVariant; does it create an alternate sequence like FastaAlternateReferenceMaker in GATK? Yes, it creates a variant-substituted FASTA sequence that you can work with using the Fasta interface. It's only designed to handle SNPs, so in fact it does ignore INDELs (I thought this was a design flaw and was thinking about correcting it but maybe not...). I ran the script and created a FastaVariant object. It works well but it outputs 1 alternative genome. I was wondering if I can use it to get an alternative sequence for each individual (so if I want the genomes of 100 individuals, including HG00097, from the vcf file, can pyfaidx do this?) I think this script would do the trick: I just realized I hadn't documented the use of the sample argument, so thanks for pointing this out! Thank you! 
I will try it

I got this error:

Traceback (most recent call last):
  File "pyfaidx_test.py", line 11, in <module>
    sample_fasta.write(record.seq)
AttributeError: 'FastaRecord' object has no attribute 'seq'

What does it mean?

Oops. I didn't test before submitting. I just edited the Gist, so please try again.

What's the estimated run time for one individual? I've run it for around 30 min and it hasn't given output yet for the first individual, HG00096. It's created the file but it's empty so far. I can continue running it for several more hours to see what happens, though I was wondering if I might be doing something wrong if it's supposed to be faster. Also, just to understand how the code works better, does this give the same DNA sequence for each individual as what you wrote above? It hasn't finished running yet so I don't know:

from pyfaidx import FastaVariant
import vcf

samples = vcf.Reader(open('calls.vcf.gz', 'r')).samples

for sample in samples:
    with FastaVariant('reference.fa', 'calls.vcf.gz',
                      sample=sample, het=True, hom=True) as consensus:
        with open(sample + '.fasta', 'w') as sample_fasta:
            # newline added so the FASTA header and sequence don't run together
            sample_fasta.write('>' + sample + '\n')
            sample_fasta.write(str(consensus['20']))
You might take a look at the recommendation from the 1000 Genomes team:

I've tried the option for 1000genomes, along with many other options (at least 6, spending hours on each option) but the vcf-subset doesn't work for me as it gives me 'Broken VCF header- no columns names?', and Data Slicer crashes (it says page taking too long to respond) when I try to use it to get an entire chromosome for 1 individual. So I think pyfaidx is the best option I have right now, until I figure out why vcf-subset doesn't work.

Actually 100000 takes 30 seconds, while 500000 takes 7 minutes, so 6 hrs is far off. I've tracked how much the time increases for each increment i of 10000 for consensus[0:i]. It might take a day to run, I will see.

It's going to depend on SNP density.

Maybe it would be faster if I was to take chunks of it, say str(consensus[0:10000]) + str(consensus[10000:20000]), and add them together. That way it sort of 'parallelizes' it.

EDIT:
len(str(consensus['20'][100000:110000])) takes around 12 seconds
len(str(consensus['20'][200000:210000])) takes around 12 seconds
So maybe it could work? I'll test it out.

Awesome!! I just ran it in parallel on the sun grid engine from 0 to a million and it all finished in less than 2 minutes!!

I ran the whole thing for 1 individual... finished in around 3-5 minutes. There were around 600 jobs. Next time I can use only 60 jobs and it'll probably finish in 30 min, and 6 jobs (10 mil each) for a couple of hours.

Yes, the access times will be linear with the size of the query, independent of the position of the query in the genome. Glad you figured out a way to parallelize your job!

In the phased vcf file it gives the information for each diploid chromosome for each individual. For example, for column HG00096, it has 0|0, or 0|1, etc., where 1 indicates the chromosome has the alternative, while 0 means it has the reference SNP. Does the pyfaidx FastaVariant object only create it for 1 of the 2 chromosomes in a set?
Is it possible to use it to get both chromosomes in a set (say, both chromosome 1s?) I guess it may have something to do with line 790 in the __init__.py script?

if sample.gt_type in self.gt_type and eval(self.filter):

gt_type depends on 'het' and 'hom', which seem to have something to do with heterozygous/homozygous? I'm not sure how to interpret them.

I think what you're asking for is full respect of phasing information, and currently pyfaidx ignores phasing completely. You only get one sequence out, and it's SNP-substituted using either hetero/homozygous combination of SNPs. Really you seem to want haplotypes output separately, but unless your entire call set is completely phased you won't get "just" 2 versions of every chromosome. In fact, for unphased calls you'll get an exponential growth in the number of possible chromosomes. That's why I ignore phasing currently...

In this thread: VCF to FASTA

You linked to a program called vcf2fasta.cpp. Would that do what I need?

EDIT: I wrote a simple python script that gets phased fasta from vcf for 1 individual, and it takes 5 min to run, so if I run it in parallel for all individuals I should get fastas
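The chunked, embarrassingly-parallel approach discussed above can be sketched generically. This helper just splits a chromosome's length into slice boundaries, one per job, which each job could pass to something like consensus['20'][start:stop]; the function name is my own, not part of pyfaidx:

```python
def chunk_ranges(total_length, chunk_size):
    """Yield (start, stop) slice boundaries covering 0..total_length."""
    for start in range(0, total_length, chunk_size):
        yield start, min(start + chunk_size, total_length)

# e.g. a 63 Mb chromosome in 10 Mb pieces -> 7 jobs
jobs = list(chunk_ranges(63_000_000, 10_000_000))
print(len(jobs))           # 7
print(jobs[0], jobs[-1])   # (0, 10000000) (60000000, 63000000)
```

Because access time is linear in query size and independent of position, splitting the chromosome this way and concatenating the per-chunk strings in order reproduces the full sequence while letting a grid engine run the chunks concurrently.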
https://www.biostars.org/p/203117/
If you don't understand what the heck a forecast scalar is, then you might want to read my book (chapter 7). If you haven't bought it, then you might as well know that the scalar is used to modify a trading rule's forecast so that it has the correct average absolute value, normally 10.

Here are some of my thoughts on the estimation of forecast scalars, with a quick demo of how it's done in the code. I'll be using this "pre-baked system" as our starting point:

from systems.provided.futures_chapter15.estimatedsystem import futures_system
system=futures_system()

The code I use for plots etc in this post is in this file. Even if you're not using my code it's probably worth reading this post as it gives you some idea of the real "craft" of creating trading systems, and some of the issues you have to consider.

Targeting average absolute value

The basic idea is that we're going to take the natural, raw, forecast from a trading rule variation and look at its absolute value. We're then going to take an average of that value. Finally we work out the scalar that would give us the desired average (usually 10).

Notice that for non symmetric forecasts this will give a different answer to measuring the standard deviation, since this will take into account the average forecast. Suppose you had a long biased forecast that varied between +0 and +20, averaging +10. The average absolute value will be about 10, but the standard deviation will probably be closer to 5. Neither approach is "right" but you should be aware of the difference (or better still avoid using biased forecasts).

Use a median, not a mean

From above: "We're then going to take an average of that value..." Now when I say average I could mean the mean. Or the median. (Or the mode but that's just silly). I prefer the median, because it's more robust to having occasional outlier values for forecasts (normally for a forecast normalised by standard deviation, when we get a really low figure for the normalisation).
The default function I've coded up uses a rolling median (syscore.algos.forecast_scalar_pooled). Feel free to write your own and use that instead:

system.config.forecast_scalar_estimate.func="syscore.algos.yesMumIwroteMyOwnFunction"

Pool data across instruments

In chapter 7 of my book I state that good forecasts should be consistent across multiple instruments, by using normalisation techniques so that a forecast of +20 means the same thing (strong buy) for both S&P 500 and Eurodollar. One implication of this is that the forecast scalar should also be identical for all instruments. And one implication of that is that we can pool data across multiple markets to measure the right scalar. This is what the code defaults to doing.

Of course, to do that we should be confident that the forecast scalar ought to be the same for all markets. This should be true for a fast trend-following rule like this:

## don't pool
system.config.forecast_scalar_estimate['pool_instruments']=False

results=[]
for instrument_code in system.get_instrument_list():
    results.append(round(float(system.forecastScaleCap.get_forecast_scalar(instrument_code, "ewmac2_8").tail(1).values),2))

print(results)

[13.13, 13.13, 13.29, 12.76, 12.23, 13.31]

Close enough to pool, I would say.

For something like carry you might get a slightly different result even when the rule is properly scaled; it's a slower signal, so instruments with a short history will give less reliable estimates (plus some instruments just persistently have more carry than others - that's why the rule works):

results=[]
for instrument_code in system.get_instrument_list():
    results.append(round(float(system.forecastScaleCap.get_forecast_scalar(instrument_code, "carry").tail(1).values),2))

print(results)

[10.3, 58.52, 11.26, 23.91, 21.79, 18.81]

The odd ones out are V2X (with a very low scalar) and Eurostoxx (very high) - both have only a year and a half of data - not really enough to be sure of the scalar value.
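The mechanics of pooling can be sketched in plain Python (a toy illustration of my own, with made-up numbers, not pysystemtrade code): per-instrument estimates use only that instrument's absolute forecasts, while the pooled estimate throws all instruments' absolute forecasts into one pot before taking the median.

```python
from statistics import median

def scalar(abs_forecasts, target=10.0):
    # target / median absolute forecast, as in the text above
    return target / median(abs_forecasts)

# Hypothetical absolute forecasts for two instruments with short histories:
sp500 = [8.0, 11.0, 7.5]
eurodollar = [6.0, 9.5, 13.0]

print(round(scalar(sp500), 2))               # 1.25  (per-instrument estimate)
print(round(scalar(eurodollar), 2))          # 1.05  (per-instrument estimate)
print(round(scalar(sp500 + eurodollar), 2))  # 1.14  (one pooled estimate)
```

The pooled figure sits between the noisy individual ones and is estimated from twice as much data, which is the whole point of pooling when the scalars ought to be the same anyway.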
One more important thing: the default function takes a cross-sectional median of absolute values first, and then takes a time-series average of that. The reason I do it that way round, rather than time series first, is that otherwise when new instruments move into the average they'll make the scalar estimate jump horribly.

Finally, if you're some kind of weirdo (who has stupidly designed an instrument-specific trading rule), then this is how you'd estimate everything individually:

## don't pool
system.config.forecast_scalar_estimate.pool_instruments=False

## need a different function
system.config.forecast_scalar_estimate.func="syscore.algos.forecast_scalar"

Use an expanding window

As well as being consistent across instruments, good forecasts should be consistent over time. Sure, it's likely that forecasts can remain low for several months, or even a couple of years if they're slower trading rules, but a forecast scalar shouldn't average +10 in the 1980s, +20 in the 1990s, and +5 in the 2000s. For this reason I don't advocate using a moving window to average out my estimate of average forecast values; better to use all the data we have with an expanding window.

For example, here's my estimate of the scalar for a slow trend-following rule, using a moving window of one year. Notice the natural peaks and troughs as we get periods with strong trends (like 2008, 2011 and 2015), and periods without them.

If you insist on using a moving window, here's how:

## By default we use an expanding window by making this parameter *large*, e.g. 1000 years of daily data
## Here's how I'd use a four year window (4 years * 250 business days)
system.config.forecast_scalar_estimate.window=1000

Goldilocks amount of minimum data - not too much, not too little

The average value of a forecast will vary over the "cycle" of the forecast. This also means that estimating the average absolute value over a short period may well give you the wrong answer.
For example, suppose you're using a very slow trend-following signal looking for 6-month trends, and you use just a month of data to find the initial estimate of your scalar. You might be in a period of a really strong trend, and get an unrealistically high value for the average absolute forecast, and thus a scalar that is biased downwards. Check out this, the raw forecast for a very slow trend system on Eurodollars:

On the other hand, using a very long minimum window means we'll either have to burn a lot of data, or effectively be fitting in sample for much of the time (depending on whether we backfill - see next section). The default is two years, which feels about right to me, but you can easily change it, e.g. to one year:

## use a year (250 trading days)
system.config.forecast_scalar_estimate.min_periods=250

Cheat, a bit

So if we're using 2 years of minimum data, then what do we do if we have less than 2 years? It isn't so bad if we're pooling, since we can use another instrument's data before we get our own, but what if this is the first instrument we're trading? Do we really want to burn our first two years of precious data?

I think it's okay to cheat here, and backfill the first valid value of a forecast scalar. We're not optimising for maximum performance here, so this doesn't feel like a forward-looking backtest. Just be aware that if you're using a really long minimum window then you're effectively fitting in sample during the period that is backfilled. Naturally, if you disagree, you can always change the parameter:

system.config.forecast_scalar_estimate.backfill=False

Conclusion

Perhaps the most interesting thing about this post is how careful and thoughtful we have to be about something as mundane as estimating a forecast scalar. And if you think this is bad, some of the issues I've discussed here were the subject of debates lasting for years when I worked for AHL!
But being a successful systematic trader is mostly about getting a series of mundane things right. It's not usually about discovering some secret trading rule that nobody else has thought of. If you can do that, great, but you'll also need to ensure all the boring stuff is right as well.

Hi Rob, for variations within a trading rule, i.e. differing lookbacks for the EWMAC, are you calculating a diversification multiplier between variations? If you are, then is it using a bootstrapping method, or are you just looking at the entire history? I'm piecing together how it's being done from your code, but so far my attempts have produced a very slow version of the code.

I don't do it this way, no. I throw all my trading rules into a single big pot, and on that work out the FDM and weights. There is an alternative, which is to group rules together as you suggest, work out weights and an FDM for the group, and then repeat the same exercise across groups. However, it isn't something I intend to code up.

Dear Rob, I've been playing with the system and estimated forecasts for the ES EWMAC(18,36) rule from 2010 to 2016, using both the MA and EWMA volatility approaches. The scaled EWMA forecast looks fine, but the scaled MA forecast is biased upwards, oscillating around 10 with a range of -20 to 50. I believe this is due to low volatility (an uptrend in equity prices and index futures), especially in the period from 2012 to 2014. Should I just floor the volatility at some level when calculating the forecast using the MA approach? Also, should I annualise the stddev when working out the MA forecast? From your experience, which approach is better - MA or EWMA - when calculating forecasts?

I'd never use MA. When you get a large return the MA will pop up. When it drops out of the window the MA pops down. So it's super jumpy, and this affects its general statistical properties. Fixing the vol at a floor is treating the symptom, not the problem, and will have side effects (the position won't respond to an initial spike in vol when below the floor).
Note: the system already includes a vol floor (syscore.algos.robust_vol_calc), which defaults to flooring the vol at the 5% quantile of vol over the last couple of years.

Dear Rob, could I ask for a bit of clarification (it could just be confusion of my own making, but I would appreciate your help)? In the book, under "Summary for Combining forecasts", you mention that the order is to first combine the raw forecasts from all trading rule variations and then apply the forecast scalar. Whereas here, the forecast scalar is applied at the trading rule variation level. My confusion is: 1 - which approach do you really recommend? 2 - if the scalar is to be applied to the combined forecast, how do you assess the absolute average forecast across variations? Thank you in advance. Puneet

I think you've misread the book (or it's badly written). The forecast scalar is for each individual trading rule variation.

Rob, thank you for the clarification. The book is fantastic. The confusions are of my own making. One more for you on forecast scalars: when you talk about looking at the absolute average value of forecasts across instruments, are these forecasts raw or volatility adjusted? If the forecasts are not vol adjusted, the absolute value of the instrument will play a part in determining the forecast scalar. That would not be right, in my opinion - more so if we want a single scalar across instruments. Thanks in advance for your help.

Yes, forecasts should always be adjusted so that they are 'scale-less' and consistent across instruments before scaling. This will normally involve volatility adjusting.

I probably missed it, but do you explicitly test that forecast magnitude correlates with forecast accuracy? For example, do higher values of EWMAs result in greater accuracy? I'm assuming there's an observed, albeit weak, correlation. I haven't yet immersed myself in your book, so I may be barking up the wrong tree.

I have tested that specific relationship, and yes, it happens.
If you think about it logically, most forecasting rules "should" work that way.

Rob, thank you for being so generous with your expertise. I really enjoyed your book and recommended it to several friends, who also purchased it. In your post, you show that the instrument forecast scalars are fairly consistent for the EWMAC trading rule but not for the carry rule. Do you still pool and use one forecast scalar for all instruments when using the carry rule?

Yes, I use a single scalar for all instruments with carry as well.

Dear Rob, so for pooling, I do the following:
1. I generate forecasts for each instrument separately.
2. I take the cross-sectional median of the forecast absolute values for all the instruments I'm pooling. What if the dates I have data for are inconsistent across instruments?
3. I take the time average of the cross-sectional median found above.
4. I divide 10 by this average to get the forecast scalar.
Am I correct on that?

Also, not related to this topic, but rather to trading rules: what do you think of the idea behind using a negative-skew rule such as mean reversion to some average - e.g. take a linear deviation from some moving average; the further the price from it, the stronger the forecast, but with the opposite sign. Wouldn't this harm the EWMAC trend rule? Thanks, Peter

1. Yes.
2. Yes. It just means that in earlier periods you will have fewer data points in your median.
3, 4. Yes.

"e.g. take a linear deviation from some moving average - the further the price from it, the stronger the forecast but with opposite sign. Wouldn't this harm the EWMAC trend rule?"

It's fine. Something like this would probably work at different time periods to trend following, which seems to break down for holding periods of just a few days or of over a year. It would also be uncorrelated, which is obviously good. But running it at the same frequency as EWMAC would be daft, as you'd be running the inverse of a trading rule at the same time as the rule itself.
Dear Rob, I'm still struggling with forecast scalars.

1. For example, for one of my EWMAC rule variations I get forecast scalars ranging from 7 to 20 for different instruments, and one weird outlier with a monotonically declining scalar from 37 to 20. Do I still pool data to get one forecast scalar for all instruments, or not?

2. Also, the forecast scalar changes with every new data point. Which value do I use to scale forecasts? For example, I get the following forecasts and scalars for 5 data points:

5 2
4 2.5
4 2.5
6 1.7
7 1.25

For a scaled forecast, do I just multiply the values line by line, e.g. 5*2, 4*2.5 ... 7*1.25, or use the latest value of the forecast scalar for all my forecast data points, e.g. 5*1.25, 4*1.25 ... 7*1.25?

I would love to show a picture; however, I have no idea how to attach one to the comments. Is it possible?

1. If you are sure you have created a forecast that ought to be consistent across instruments (and EWMAC should be), then yes, pool. Such wildly different values suggest you have a *very* short data history. Personally, in this situation I'd use the forecast scalars from my book rather than estimating them.

2. In theory you should use the value of the scalar that was available at that time, otherwise your backtest is forward looking.

I don't think you can attach pictures here. You could try messaging me on Twitter, Facebook or LinkedIn.

Hi Rob (and P_Ser). If you want to share pictures, it can be done via Screencast, for example. I've used it to ask my question below :-)

For my understanding, is the way of working in the following link consistent with what you mean in this article?

First step: take the median of all instruments for each date (= cross-sectional median, I suppose).
Second step: take a normal average (or mean) of all these medians.

In the screenshot you can see this worked out for some dummy data.
I also have the idea of calculating the forecast scalars only yearly and then smoothing them out, as described in this link. I think there is no added value in calculating the forecast scalar at each date like Peter mentioned - do you agree with this? Kris

Hopefully it's clear from the function forecast_scalar what I do:

# Take CS average first
# we do this before we get the final TS average otherwise get jumps in
# scalar
if xcross.shape[1] == 1:
    x = xcross.abs().iloc[:, 0]
else:
    x = xcross.ffill().abs().median(axis=1)

# now the TS
avg_abs_value = pd.rolling_mean(x, window=window, min_periods=min_periods)
scaling_factor = target_abs_forecast / avg_abs_value

Hi, I've already found the code on GitHub, but I must confess that reading other people's code in a language I don't know is very difficult for me... but that's my problem. So everything I write myself is based on the articles you wrote. As far as I can read the code, it is the same as what I mean in the Excel screenshot. Thanks, Kris

Let's assume that you choose a forecast scalar such that the forecast is distributed as a normal distribution centred at 0 with expected absolute value 10. Since the expected absolute value of a Gaussian is equal to stddev * sqrt(2/pi), this seems to imply that the standard deviation of your forecast will be 10/sqrt(2/pi) = 12.5. Looking at chapter 10 of your book, when you calculate what size position to take based on your forecast, it appears that you choose a size assuming that the stddev of the forecast is 10. You use this assumption to ensure that the stddev of your daily USD P&L equals your daily risk target. But since the stddev is actually 12.5 rather than 10, doesn't this mean that you are taking on 25% too much risk?

However, I think the effect is less than this in practice, because we're truncating the ends of the forecast Gaussian (at +/-20), which will tend to bring the realised stddev down again.
Applying the formula from Wikipedia for the stddev of a truncated Gaussian with underlying stddev 12.5, mean 0, and limits +/-20, I get that the stddev of the resulting distribution is 9.69, which is almost exactly 10 again. So I think this explains why we end up targeting the right risk, despite fudging the difference between the expected absolute value and the stddev. This does imply that you shouldn't change the limits of the forecast - increasing the limits from +/-20 to +/-30 will cause you to take 18% too much risk! Python code for the calculation:

Yes, I used E(abs(forecast)) in the book as it's easier to explain than the expected standard deviation. The other difference, of course, is that if the mean isn't zero (which for slow trend following and carry is very likely) then the standard deviation will be biased downwards compared to abs(), although that would be easy to fix by using MAD rather than STDEV. But yes, you're right: the effect of applying a cap is to bring things down to the right risk again. You can add this to the list of reasons why using a cap is a good thing. Incidentally, the caps on the FDM and IDM also tend to suppress risk; and I've also got a risk management overlay in my own system which does a bit more of that, and which I'll blog about in due course.

Good spot, this is a very subtle effect! And of course in practice forecast distributions are rarely Gaussian and risk isn't perfectly predictable, so the whole thing is an approximation anyway, and +/-25% on realised risk is about as close as you can get.

Thanks for the reply, Robert - interesting points. I look forward to your post on risk management. For the record, I realised that my reasoning above is slightly wrong, because we don't discard forecasts outside +/-20 but cap them, so we can't apply the formula for the truncated Gaussian; we need to use a slightly modified version. With this fix, the stddev of the forecast turns out to be 11.3, i.e. it causes us to take on 13% too much risk.
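The 11.3 figure for a capped (rather than truncated) Gaussian is easy to check numerically. Here is a stdlib-only sketch of my own (not the linked code from the comment above): simulate a mean-zero Gaussian with stddev 12.5, cap the draws at +/-20 instead of discarding them, and measure the resulting stddev.

```python
import math
import random

def capped_gaussian_sd(sigma=12.5, cap=20.0, n=200_000, seed=42):
    """Monte Carlo estimate of the stddev of a mean-zero Gaussian
    after *capping* (not discarding) values at +/- cap."""
    rng = random.Random(seed)
    total_sq = 0.0
    for _ in range(n):
        x = rng.gauss(0.0, sigma)
        x = max(-cap, min(cap, x))  # cap the forecast, don't throw it away
        total_sq += x * x
    return math.sqrt(total_sq / n)  # mean is zero by construction

print(round(capped_gaussian_sd(), 1))  # ~11.3, in line with the figure above
```

The same number drops out analytically from the second moment of a winsorised normal; the simulation just makes the truncated-versus-capped distinction concrete.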
Hi Rob, congratulations on the great first book, which I thoroughly enjoyed reading. I am trying to follow the logic presented by Stephen above. Is the vol of the forecast a reflection of system risk? My understanding is that the forecast vol is a measure of the dispersion of the risk over time, whereas the average value in either half of the Gaussian is the measure of the average risk. The fact that the average absolute forecast is linked to the implied sdev of a Gaussian forecast does not appear, to my mind, to change that. Finally, with caps at +/-20 on a Gaussian distribution, it seems you are compressing 10% (or more, if the tails are fatter) of the extremities, as the sdev is 12.5 (and +/-20 equates to +/-1.6 sd). Or have I misunderstood something?

Average absolute value (also called mean absolute deviation) and standard deviation are just two different measures of the second moment of a distribution (its dispersion), the difference being, of course, that the MAD is not corrected for the mean, whilst the standard deviation is indeed standardised (plus, of course, the fact that one is an arithmetic mean of absolute values, the other a root mean square). Their relationship will depend on the underlying distribution: whether it is mean zero, or Gaussian, or something else. [For instance, with any mean-zero Gaussian distribution, MAD ~ 0.8*SD. If the mean is positive, and equal to half the standard deviation, then MAD ~ 0.9*SD. You can have a lot of fun generating these relationships in Excel...] So doubling the standard deviation of a mean-zero Gaussian will also double its average absolute value. Because we scale our target vol, both measures work equally well to measure the dispersion of the risk, as well as the average absolute risk. Indeed, at AHL we used both MAD and SD for forecast scaling at different times. I appreciate that intuitively they seem to mean different things, but they are inextricably linked.
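The 0.8*SD and 0.9*SD figures quoted above follow from the closed-form expected absolute value of a normal (the folded-normal mean): E|X| = sigma*sqrt(2/pi)*exp(-mu^2/(2*sigma^2)) + mu*(2*Phi(mu/sigma) - 1). A quick stdlib check, as a sketch of my own rather than anything from the post:

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function (stdlib only)
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def mean_abs_normal(mu, sigma):
    """E|X| for X ~ N(mu, sigma^2): the folded-normal mean."""
    return (sigma * math.sqrt(2.0 / math.pi)
            * math.exp(-mu * mu / (2.0 * sigma * sigma))
            + mu * (2.0 * norm_cdf(mu / sigma) - 1.0))

print(round(mean_abs_normal(0.0, 1.0), 3))  # 0.798 -> MAD ~ 0.8 * SD for mean zero
print(round(mean_abs_normal(0.5, 1.0), 3))  # 0.896 -> MAD ~ 0.9 * SD for mean = SD/2
```

This is also where the 12.5 earlier in the thread comes from: if the target average absolute value is 10 and the forecast is mean-zero Gaussian, the implied stddev is 10 / 0.798, or about 12.5.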
The reason I chose MAD rather than SD in my book was deliberately to play on the more intuitive nature of MAD as a measure of average risk - I'm sorry if it has confused things. Anyway, I hope this clears that up a little. As to your final point: yes, with a +/-20 cap on a Gaussian forecast we are indeed capping 10% of the values.

Hi Rob, you haven't confused things; I think I have explained myself poorly. I sort of get the link between MAD and stdev (albeit only after reading the comments above, your post on thresholding, and a little bit of googling). It's a nice way to quickly target average risk directly. What I am struggling with is Stephen's observation: "doesn't this mean that you are taking on 25% too much risk?". When looked at from the perspective of average risk, I don't see that you are. I have spent some time thinking about this, so if I have got it wrong, I apologise in advance for not getting it (aka 'being thick').

The system is either long or short. If we split the normal distribution of the forecast into two halves, then, once properly scaled, the conditional mean forecast given a short is -10, and the conditional mean when long is +10 (the unconditional mean is obviously 0). Hence the average risk in a subsystem comes from a position equating to 10 forecast 'points', which is how you have designed your system. This implies the second moment ~ 12.5, but to my mind the second moment doesn't impact average 'risk', which is a linear function of the averages in the conditional halves of the Gaussian. And this average is still 10, and therefore translates to the desired average position size in your system over time. In a nutshell, I am struggling to equate the 'dispersion' of forecasts with 'average risk' in the system, and this is where my thinking deviates from Stephen's observation.

Hi Patrick. You've clearly thought about this very deeply, and actually I'm coming to the conclusion that you are right (and indeed Stephen is wrong!),
which also means that the risk targeting in my book is exactly right. We shouldn't really think about 'risk' when measuring forecast dispersion. You are correct to say that if the average absolute value is 10, and the system is scaled accordingly, then the risk of the system will be whatever you think it is. You'd only be at risk of mis-estimating the risk if you used MAD to measure the dispersion of *instrument returns*. Thanks again for this debate; it's great to still be discovering more about the subtleties of this game.

Thanks for clarifying, Rob, and you're welcome. It's a tiny contribution compared to the ocean of wisdom you have imparted.

Would it be feasible to use a scaling function for this, such as:

(40*cdf(0.5*(Value-Median)/InterQuartileRange))-20

(cdf = cumulative density function; Value is your volatility-normalised prediction; and the Median and InterQuartileRange are calculated from an accumulating Value series.)

The above is a standard function in one of the tools I use, and I've found it pretty useful for scaling problems relating to inputs/outputs for financial time series. And superb book(s), many thanks.

Yes, that is a nice alternative. The results will depend on the distributional properties of the underlying forecast; if they're normal around zero it won't really matter what you choose. So one word of warning about your method: it will de-mean forecasts by their long-run average, which you may not want to do. Of course, it's easy to adapt it so this doesn't happen.

Many thanks also for the warning re the long-run average. Will try to resist the temptation to optimise the average length...

Assuming one has scaled individual forecasts (say, of the various EWMAC speeds) to +/-20 and wishes to use bootstrapping + Markowitz (rather than handcrafting) to derive the forecast weights, how are you defining the returns generated by the individual forecasts that are needed as Markowitz inputs? The next day's percentage return?
(Assuming daily bars are being used.)

You just run a trading system for each forecast individually (with vol scaling as usual to decide what the positions should be) for some nominal risk target and see what the returns are.

Sorry, I'm being dim. "Each forecast individually" = each daily forecast, or each set of forecasts? So would the trading system return be the annualised Sharpe ratio for a desired risk target?

Hi Rob, I have found my historic estimates of pooled forecast scalars (pooled across 50 instruments) for carry to be more or less a straight line upwards with time (over 40+ years). In order to make sure this isn't due to a bug in my code, I was wondering if this is something you might have seen in your own research?

No, I haven't seen that. Sounds like a bug.

I was using MAD to get forecast scalars for both carry and the EWMACs, and I am getting results in line with expectations for the EWMACs, so the calculation seems to have been applied correctly. For carry I get a line showing a forecast scalar of around 12 in the 1970s rising to about 24 last year. This whole exercise got me looking deeper into the distribution of carry (not carry returns), and it seems, as you hinted above, that each instrument has a unique distribution with different degrees of skewness (and sign) depending on the instrument, and this skew may be persistent in certain instruments.

Since my understanding is that targeting risk using MAD requires a zero-skew Gaussian, I therefore tried to calculate the carry scalar directly from the s.d. along the time axis for each instrument, as an alternative, to see if this made more sense. I then applied pooling, on the basis that the sum of skewed distributions should approach a symmetric Gaussian. Doing it this way around still gave me an upward path from the 1970s, but stabilising at a value close to 30 over the last ten years - a number mentioned in your book, of course - but it was noisy, so I applied a smooth. The upward path I cannot completely get rid of.
I suspect it is a feature of the increasing number and type of instruments entering the pool (presumably with lower carry vol over time). Does my latter approach seem like a reasonable approach to you?

Sorry - correction - I didn't mean MAD above. I was referring to the mean of the absolute values. Very different, of course!

Carry clearly doesn't have a zero average value, which means that using MAD will give different answers to using mean(absolute value), which is what I normally use. It's hard to normalise something which has a systematic bias like this (and that's even without considering non-Gaussian returns). Smoothing makes sense, and pooling makes sense. If you plot the average scalar for each instrument independently, you can verify whether your hypothesis makes sense.

It seems I am calling bias "skew", and average(abs) "MAD". Lol, I need to brush up on my stats. Thank you for the pointers in the meantime.

OK, so pooling seems to be problematic for carry, because of the relatively wide dispersion in scalar values across instruments. I guess you did warn us. A minor technical question on how to apply these scalars in an expanding-window backtest: do you recommend scaling forecasts daily over the entire history by the daily updated values? Would you also therefore update the forecast scalar daily in a live system?

During a backtest, yes, I'd update these values (maybe not daily, but certainly annually). In live trading I'd probably use fixed parameters (the last value from the backtest).

Belated thank you. Your answer led me to refactor my code, and I only just got it working again. Results are a little more sensible now. Thanks again!

I'm using bootstrapping + Markowitz to select forecast weights and am intrigued as to how you apply the costs to obtain after-cost performance (as per footnote 87 of chapter nine of Systematic Trading).
Assuming continuous forecasts, where (for example) a particular EWMAC parameter set's scaled values would only incur transaction costs on a zero crossover (a switch from long to short), do you just amortise total costs per parameter set continuously on a per-day basis?

Apologies - forgot to add thanks to the previous message! I'm thinking that the alternative of calculating and applying costs on a per-bootstrap-sample basis might distort the results. For example, if using a one-year lookback for returns (as an input to Markowitz), the number of zero crossovers (and hence costs) in that time could vary significantly depending on sampling. Also, if trying to allow for serial correlation (as you mention in the book in your bootstrap comments), you might take one random sample and then the next N succeeding samples (before taking another random sample and repeating the process), which could cause even greater distortion.

At present, I'm calculating the total costs for the period over which I'm estimating forecast weights (based on the number of forecast zero crossovers) and then averaging that as a daily cost 'drag'. So the net return each day is the gross return minus that cost (with the gross return obviously depending on whether the forecast from the previous day was above or below zero). I'm not entirely convinced this is necessarily the ideal approach. Would much value your opinion/insight. Many thanks.

You should pay costs whenever your position changes, not just when the forecast passes through zero (or are you running a binary system?). Leaving that aside for the moment, I'm not convinced it will make much difference whether your costs are distributed evenly across bootstraps or unevenly; if you have enough bootstraps it will end up shifting the distribution of returns by exactly the same amount. As it happens, I use the daily cost drag approach myself when I'm doing costs in Sharpe ratio units, so it's not a bad approach.

Many thanks - much appreciated.
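The 'daily cost drag' idea discussed above can be sketched in a few lines (my own illustration with made-up numbers, and, as Rob points out, a simplification: a continuous system also trades, and pays costs, whenever the forecast merely changes size, not only on sign flips):

```python
def daily_cost_drag(forecasts, cost_per_flip):
    """Amortise the cost of long/short flips (forecast zero crossings)
    evenly across the whole sample, as a per-day return drag."""
    flips = sum(1 for prev, cur in zip(forecasts, forecasts[1:])
                if prev * cur < 0)  # sign change = close one side, open the other
    return flips * cost_per_flip / len(forecasts)

f = [5, 3, -2, -4, 1, 6, -3]              # three sign flips over seven days
print(round(daily_cost_drag(f, 0.0007), 6))  # 0.0003, i.e. 3bp of drag per day
```

Each day's net return is then the gross return from holding the sign of the previous day's forecast, minus this constant drag.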
Sorry - just re-reading your answer, and specifically "You should pay costs whenever your position changes". I'm trying to calculate forecast weights, so if I have scaled my forecasts to a range of +/-20 (i.e. not binary but continuous), then surely I'm only changing from a long to a short position as I cross zero? Say on day 1 the forecast is 7, so a long position is opened, and on day 2 it's 7.5, so it's still long. Or do you mean that, even if bootstrapping for forecast weights, I have to factor in the costs of increasing the long position size because the forecast has increased from 7 to 7.5? Surely that would cause a cost explosion if repeated every day?

If your forecast changes by enough then your position will change... and that will mean you will trade... and you will have to pay costs.

Absolutely - I understand. The reason for my confusion was that if you're using a platform like Oanda, with no defined contract or micro-lot size, your block value is minuscule, therefore your instrument value volatility and volatility scalar are similarly small, and so only a very small change in forecast would be needed to cause a change in position size. Hence... Many thanks (and apologies).

Except that you should use 'position inertia' (as I describe it in the book), so you won't do very small trades.

Good point. Thx.

No, there is no problem. You calculate the position for a given rule variation assuming it has all the capital allocated (and the amount of capital can be arbitrary). Then you calculate the turnover.

Sorry for the delay in responding. I calculate turnover for a trading rule purely by looking at the turnover of the individual forecast. This won't include any turnover created by other sources: vol scaling, position rounding, etc.

Many thanks
https://qoppac.blogspot.com/2016/01/pysystemtrader-estimated-forecast.html
First Steps Coding¶

This section gives a brief step-by-step introduction on how to set up Evennia for the first time so you can modify and overload the defaults easily. You should only need to do these steps once. It also walks you through making your first few tweaks.

Before continuing, make sure you have Evennia installed and running by following the Getting Started instructions. You should have initialized a new game folder with the evennia --init foldername command. We will in the following assume this folder is called "mygame".

It might be a good idea to eye through the brief Coding Introduction too (especially the recommendations in the section about the Evennia "flat" API will help you here and in the future). To follow this tutorial you also need to know the basics of operating your computer's terminal/command line. You also need to have a text editor to edit and create source text files. There are plenty of online tutorials on how to use the terminal, and plenty of good free text editors. We will assume these things are already familiar to you henceforth.

Your first changes¶

Below are some first things to try with your new custom modules. You can test these to get a feel for the system. See also Tutorials for more step-by-step help and special cases.

Tweak default Character¶

We will add some simple RPG attributes to our default Character. In the next section we will follow up with a new command to view those attributes.

Edit mygame/typeclasses/characters.py and modify the Character class. The at_object_creation method also exists on the DefaultCharacter parent and will overload it. The get_abilities method is unique to our version of Character.

class Character(DefaultCharacter):
    # [...]
    def at_object_creation(self):
        """
        Called only at initial creation. This is a rather silly
        example since ability scores should vary from Character to
        Character and is usually set during some character
        generation step instead.
""" #set persistent attributes self.db.strength = 5 self.db.agility = 4 self.db.magic = 2 def get_abilities(self): """ Simple access method to return ability scores as a tuple (str,agi,mag) """ return self.db.strength, self.db.agility, self.db.magic Reload the server (you will still be connected to the game after doing this). Updating yourself¶ Note that the new Attributes will only be stored on newly created characters ( at_object_creation is only called when the object is first created). So if you call the get_abilities hook on yourself at this point you will see the Attribute have not been set: # (you have to be superuser to use @py) @py self.get_abilities() <<< (None, None, None) This is because your Character was already created before you made your changes to the Character class and thus the at_object_creation() hook will not be called again. This is easily remedied though - you can force re-run the startup hooks on yourself with the @typeclass command: @typeclass/force self This will re-run at_object_creation on yourself (in code you can use the Character.swap_typeclass method with the same typeclass set). You should henceforth be able to get the abilities successfully: @py self.get_abilities() <<< (5, 4, 2) See the Object Typeclass tutorial for more help and the Typeclasses and Attributes page for detailed documentation about Typeclasses and Attributes. Trouble Shooting: Updating yourself¶ One may experience errors for a number of reasons. Common beginner errors are spelling mistakes, wrong indentations or code omissions leading to a SyntaxError. Let’s say you leave out a colon from the end of a class function like so: def at_object_creation(self). The client will reload without issue. However, if you look at the terminal/console (i.e. 
not in-game), you will see Evennia complaining (this is called a traceback):

```
Traceback (most recent call last):
  File "C:\mygame\typeclasses\characters.py", line 33
    def at_object_creation(self)
                               ^
SyntaxError: invalid syntax
```

Evennia will still restart, and if you follow the tutorial, doing @py self.get_abilities() will return the expected response (None, None, None). But when attempting @typeclass/force self you will get this response:

```
AttributeError: 'DefaultObject' object has no attribute 'get_abilities'
```

The full error will show in the terminal/console, but this is confusing since you did add get_abilities before. Note however what the error says - you (self) should be a Character, but the error talks about DefaultObject. What has happened is that, due to your unhandled SyntaxError earlier, Evennia could not load the characters.py module at all (it's not valid Python). Rather than crashing, Evennia handles this by temporarily falling back to a safe default - DefaultObject - in order to keep your MUD running. Fix the original SyntaxError and reload the server. Evennia will then be able to use your modified Character class again and things should work.

Note: Learning how to interpret an error traceback is a critical skill for anyone learning Python. Full tracebacks will appear in the terminal/console you started Evennia from. The traceback text can sometimes be quite long, but you are usually just looking for the last few lines: the description of the error and the filename plus line number where the error occurred. In the example above, we see it's a SyntaxError happening at line 33 of mygame/typeclasses/characters.py. In this case it even points out where on the line it encountered the error (the missing colon). Learn to read tracebacks and you'll be able to resolve the vast majority of common errors easily.

Add a new default command¶

The @py command used above is only available to privileged users. We want any player to be able to see their stats.
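As a plain-Python preview of the pattern such a command follows - a class carrying some identifying metadata plus a func() that sends text back to whoever called it - here is a toy sketch. Note that FakeCaller and CmdAbilitiesToy are invented stand-ins for illustration, not Evennia classes:

```python
class FakeCaller:
    """Stands in for the Character issuing the command."""
    def get_abilities(self):
        # mirrors the get_abilities hook added earlier
        return 5, 4, 2

    def msg(self, text):
        # in Evennia, msg() sends text to the player's client;
        # here we just record it
        self.last_msg = text


class CmdAbilitiesToy:
    """Toy version of a command: metadata plus a func() that does the work."""
    key = "abilities"
    aliases = ["abi"]

    def __init__(self, caller):
        self.caller = caller

    def func(self):
        strength, agility, magic = self.caller.get_abilities()
        self.caller.msg("STR: %s, AGI: %s, MAG: %s" % (strength, agility, magic))


caller = FakeCaller()
CmdAbilitiesToy(caller).func()
print(caller.last_msg)  # STR: 5, AGI: 4, MAG: 2
```

The real Evennia command works the same way, except the framework instantiates the class, assigns self.caller and calls func() for you whenever a player types the key or one of its aliases.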
Let's add a new command to list the abilities we added in the previous section. Open mygame/commands/command.py. You could in principle put your command anywhere, but this module has all the imports already set up along with some useful documentation. Make a new class at the bottom of this file:

```python
class CmdAbilities(Command):
    """
    List abilities

    Usage:
      abilities

    Displays a list of your current ability values.
    """
    key = "abilities"
    aliases = ["abi"]
    locks = "cmd:all()"
    help_category = "General"

    def func(self):
        "implements the actual functionality"
        str, agi, mag = self.caller.get_abilities()
        string = "STR: %s, AGI: %s, MAG: %s" % (str, agi, mag)
        self.caller.msg(string)
```

Next, edit mygame/commands/default_cmdsets.py and add a new import near the top:

```python
from commands.command import CmdAbilities
```

In the CharacterCmdSet class, add the following near the bottom (it says where):

```python
self.add(CmdAbilities())
```

Reload the server (no one will be disconnected by doing this). You (and anyone else) should now be able to use abilities (or its alias abi) as part of your normal commands in-game:

```
abilities
STR: 5, AGI: 4, MAG: 2
```

See the Adding a Command tutorial for more examples and the Commands section for detailed documentation about the Command system.

Make a new type of object¶

Let's try making a new type of object. This example is a "wise stone" object that returns some random comment when you look at it, like this:

```
> look stone
A very wise stone

This is a very wise old stone.
It grumbles and says: 'The world is like a rock of chocolate.'
```

- Create a new module in mygame/typeclasses/. Name it wiseobject.py for this example.
- In the module, import the base Object (typeclasses.objects.Object). This is empty by default, meaning it is just a proxy for the default evennia.DefaultObject.
- Make a new class in your module inheriting from Object.
- Overload hooks on it to add new functionality.
Here is an example of how the file could look:

```python
from random import choice
from typeclasses.objects import Object

class WiseObject(Object):
    """
    An object speaking when someone looks at it.
    We assume it looks like a stone in this example.
    """
    def at_object_creation(self):
        "Called when object is first created"
        self.db.wise_texts = \
            ["Stones have feelings too.",
             "To live like a stone is to not have lived at all.",
             "The world is like a rock of chocolate."]

    def return_appearance(self, looker):
        """
        Called by the look command. We want to
        return a wisdom when we get looked at.
        """
        # first get the base string from the
        # parent's return_appearance.
        string = super(WiseObject, self).return_appearance(looker)
        wisewords = "\n\nIt grumbles and says: '%s'"
        wisewords = wisewords % choice(self.db.wise_texts)
        return string + wisewords
```

- Check your code for bugs. Tracebacks will appear on your command line or in the log. If you have a grave SyntaxError in your code, the source file itself will fail to load, which can cause issues with the entire cmdset. If so, fix your bug and reload the server from the command line (no one will be disconnected by doing this).
- Use @create/drop stone:wiseobject.WiseObject to create a talkative stone. If the @create command spits out a warning or cannot find the typeclass (it will tell you which paths it searched), re-check your code for bugs and make sure you gave the correct path. The @create command starts looking for Typeclasses in mygame/typeclasses/.
- Use look stone to test. You will see the default description ("You see nothing special") followed by a random message of stony wisdom.
- Use @desc stone = This is a wise old stone. to make it look nicer. See the Builder Docs for more information.

Note that at_object_creation is only called once, when the stone is first created. If you make changes to this method later, already existing stones will not see those changes.
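This "runs only once" behavior can be illustrated with a plain-Python analogy. The Stone class and its __init__ below are just stand-ins for a typeclass and its creation hook - this is not Evennia code:

```python
class Stone:
    def __init__(self):
        # analogous to at_object_creation: runs once, when created
        self.wise_texts = ["Stones have feelings too."]


old_stone = Stone()

# Later we "edit the typeclass": freshly created stones get more wisdom...
def new_init(self):
    self.wise_texts = ["Stones have feelings too.",
                       "The world is like a rock of chocolate."]

Stone.__init__ = new_init
new_stone = Stone()

print(len(old_stone.wise_texts))  # 1 - the existing object is unchanged
print(len(new_stone.wise_texts))  # 2 - new objects get the new data

# ...until we force the creation hook to re-run on the old object,
# which is roughly what @typeclass/force does:
Stone.__init__(old_stone)
print(len(old_stone.wise_texts))  # 2
```

The existing object keeps its old data because the hook that stored it already ran; only re-running the hook (or creating a new object) picks up the change.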
As with the Character example above, you can use @typeclass/force to tell the stone to re-run its initialization. at_object_creation is a special case, though: changing most other aspects of the typeclass does not require manual updating like this - you just need to @reload to have all changes applied automatically to all existing objects.

Where to go from here?¶

There are more Tutorials, including one for building a whole little MUSH-like game - it is instructive even if you have no interest in MUSHes per se. A good idea is also to get onto the IRC chat and the mailing list to get in touch with the community and other developers.
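One last practical aside: the troubleshooting section above stressed learning to read tracebacks. You can practice pulling the key facts out of a SyntaxError with plain Python, entirely outside Evennia. This standalone snippet (the file name passed to compile() is made up for illustration) reproduces the missing-colon bug from a string instead of a real file:

```python
# Hypothetical file contents with the missing-colon bug from the
# troubleshooting section above.
src = "def at_object_creation(self)\n    pass\n"

error = None
try:
    compile(src, "typeclasses/characters.py", "exec")
except SyntaxError as exc:
    error = exc

# The same information the last lines of a traceback give you:
print(error.filename)  # typeclasses/characters.py
print(error.lineno)    # 1
print(error.msg)       # e.g. "invalid syntax" or "expected ':'"
```

The filename, line number and message here are exactly what you scan for at the bottom of a long traceback.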
Set placeholder character for glyph?

Is it possible to set the placeholder character for a glyph? For instance when making non-latin unicodes?

hello Erik, as far as I know, there's only the global setting in Preferences > Character Set > Template glyphs preview font. but it's possible to set this preference using a script, so you can at least switch between two or more template fonts quickly:

```python
from mojo.UI import getDefault, setDefault

key = 'templateGlyphFontName'
print(getDefault(key))
setDefault(key, 'RoboType-Mono')
print(getDefault(key))
```

Thanks Gustavo! Perhaps a fallback scheme might be useful? If a unicode is not available in the first, check the second, etc.

mmm, see Character Set Preferences. There is a default font and a fallback when there is no unicode. I guess it's up to the user to pick a font that contains the glyph.unicode; if a unicode is not in the selected font, the system fallback font is used to draw the template glyph.
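To sketch the fallback scheme Erik suggests - walk an ordered list of fonts and use the first one that covers the glyph's unicode - here is a minimal plain-Python model. Note this is not the RoboFont API: the font names are made up, and fonts are modeled simply as sets of supported codepoints; in RoboFont you would query each font's character map instead:

```python
def pick_template_font(unicode_value, candidates, system_default="SystemFallback"):
    """candidates: ordered list of (font_name, supported_codepoints) pairs.
    Returns the first font that supports the codepoint, else the system default."""
    for font_name, supported in candidates:
        if unicode_value in supported:
            return font_name
    return system_default


# Hypothetical fonts, modeled as sets of supported codepoints
fonts = [
    ("RoboType-Mono", {0x0041, 0x0042}),  # latin-only template font
    ("NotoSansCJK", {0x4E00, 0x4E8C}),    # CJK fallback
]

print(pick_template_font(0x0041, fonts))    # RoboType-Mono
print(pick_template_font(0x4E00, fonts))    # NotoSansCJK
print(pick_template_font(0x10FFFF, fonts))  # SystemFallback
```

A script along these lines could then feed its result to setDefault('templateGlyphFontName', ...) as shown above, though RoboFont itself only honors the single global preference plus the system fallback, as Gustavo describes.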
I have a very simple input form where I am trying to update a single field in a database. When I run in Preview mode, it creates a NEW entry instead of updating the one in the filter? Here is the CODE:

```javascript
import wixData from 'wix-data';
// For full API documentation, including code examples, visit

$w.onReady(function () {
    //Add your code for this event here:
});

export function submitButton_click(event, $w) {
    $w('#dataset2').setFilter(
        wixData.filter()
            .eq('dashno', $w('#input4').value)
    );
    console.log($w('#input4').value);
    console.log($w('#input6').value);
    $w("#dataset2").setFieldValue("thumbnail", $w('#input6').value);
    $w("#dataset2").save();
}
```

Here is the FORM:

I think you will have to wait for the setFilter promise to be done using a promise .then(), and then you can set the field value and save it.

Could you please elaborate, or better yet point to an example?

Hi CW: What Andreas is saying is as follows: Your code, outlined in red, is performing an asynchronous action. It is asking a data collection to go through its entries and find the matching records. It then sets up the collection to match the resulting items. This doesn't happen immediately, so setFilter returns something called a promise. The promise 'promises' to tell you what happened to your request at some point in the future, but not now. When the filter completes, it tells the promise you were given that it is ready and you can now trust the filtered result. When the promise completes (is resolved) .then() you can do something with the filtered data. So I would advise you to read the setFilter() documentation and also get familiar with Promises so that the suggestion below makes sense. Basically what you need to do is catch the promise result AND also any exceptions that might happen. Your code should do what is expected if you add a .then() and a .catch()... Hope this is more clear ;-)

Thank you very much! I understand now. I took the code as you modified it and added it to my page. While it APPEARED to work..
(got the successful update message), the fact is it added a new record for the wrong person. The registry entry has primary key "dash" and secondary "owner email". So I deleted the duplicate and added another input field to the input form along with additional filter data. This time it DID work with the correct car, BUT it added a duplicate record instead of just updating the current record? Here is the code as modified. I also used the Properties Panel to name the fields for clarity...

```javascript
import wixData from 'wix-data';
// For full API documentation, including code examples, visit

$w.onReady(function () {
    //Add your code for this event here:
});

export function submitButton_click(event, $w) {
    $w('#dataset2').setFilter(
        wixData.filter()
            .eq('dash', $w('#Dash').value)
            .eq('email', $w('#Email').value)
    )
    // Process our promise result!
    .then(() => {
        console.log($w('#Dash').value);
        console.log($w('#Email').value);
        console.log($w('#URL').value);

        // Check to make sure we have the item that we want to update ;-)
        let currentItem = $w("#dataset2").getCurrentItem();
        console.log(currentItem);

        // Defensive programming helps prevent unexpected side effects!
        if (!currentItem.hasOwnProperty('dash') ||
            currentItem.dash !== $w('#Dash').value ||
            !currentItem.hasOwnProperty('email') ||
            currentItem.email !== $w('#Email').value) {
            // The filtered result is not what we expected, so let's throw
            // an error for our catch below
            throw Error('filter failed');
        }

        // If we get this far we have the record we want to update
        $w("#dataset2").setFieldValue("thumbnail", $w('#URL').value);
        return $w("#dataset2").save(); // <====== NOTE this also returns a promise
        // By returning it we can add an additional .then() below if we want to,
        // or catch other exceptions in the one catch() below.
    })
    // Because we returned the save() above we can now get hold of the
    // save result in a then() call here.
    // wix-dataset.save() resolves with the saved item on fulfillment.
    .then((savedItem) => {
        console.log(savedItem);
    })
    // Handle any exceptions raised by the process.
    .catch((error) => {
        // Perform some sort of error handling.
        // The setFilter docs state that on rejection an error object is returned.
        console.log(error.message);
        console.log(error.stack); // Optional: gives the stack trace that led to the error
    });
}
```

CW, I am not sure why you are getting a new record. The dataset docs say:

So this should only update the currentItem and should throw an exception if the current item isn't the one you want. Do the console.log statements show that the currentItem is the one you should be updating? Did the console.log of the savedItem show the correct record? Remember, if you have multiple records already, you may be updating the wrong record. Try checking the number of records that your filter returns; if there is more than one, then the record might be being added somewhere else. Other than that I am not sure what can be wrong. Presumably you have other code creating the dataset items.

Ahhhh... I had the input fields of the form connected to the database. Then after reading through the code again I realized that the URL value gets saved. I guess since those fields were populated it created another record. Once I changed those input boxes to Not Connected, it is now working as expected. Thanks so much. Now I understand the "Promises, Promises"...

JD, I would like it if you could do a video of this. I have done what you have said but it is still not working as it should - it is updating but not replacing the old data, making a new record instead. I WOULD LOVE YOUR HELP <3

I also have the same problem. If you have found the solution, please reply with it.