Context managers for use with the with statement.

Note: When using Python 2.5, you will need to start your fabfile with from __future__ import with_statement in order to make use of the with statement (which is a regular, non-__future__ feature of Python 2.6+).

Note: If you are using multiple directly nested with statements, it can be convenient to use multiple context expressions in one single with statement. Instead of writing:

    with cd('/path/to/app'):
        with prefix('workon myvenv'):
            run('./manage.py syncdb')
            run('./manage.py loaddata myfixture')

you can write:

    with cd('/path/to/app'), prefix('workon myvenv'):
        run('./manage.py syncdb')
        run('./manage.py loaddata myfixture')

Note that you need Python 2.7+ for this to work. On Python 2.5 or 2.6, you can do the following:

    from contextlib import nested

    with nested(cd('/path/to/app'), prefix('workon myvenv')):
        ...

Finally, note that settings implements nested itself; see its API doc for details.

cd: Context manager that keeps directory state when calling remote operations. Any calls to run, sudo, get, or put within the wrapped block will implicitly have a string similar to "cd <path> && " prefixed, in order to give the sense that there is actually statefulness involved.

char_buffered: Forces the local terminal pipe to be character-, not line-, buffered. Only applies on Unix-based systems; on Windows this is a no-op.

Relative path arguments are relative to the local user's current working directory, which will vary depending on where Fabric (or Fabric-using code) was invoked. You can check what this is with os.getcwd. It may be useful to pin things relative to the location of the fabfile in use, which may be found in env.real_fabfile.

quiet: Alias to settings(hide('everything'), warn_only=True). Useful for wrapping remote interrogative commands which you expect to fail occasionally, and/or which you want to silence.
Example:

    with quiet():
        have_build_dir = run("test -e /tmp/build").succeeded

When used in a task, the above snippet will not produce any "run: test -e /tmp/build" line, nor will any stdout/stderr display, and command failure is ignored.

See also: env.warn_only, settings, hide. New in version 1.5.

remote_tunnel: Creates a tunnel forwarding a locally-visible port to the remote target. For example, you can let the remote host access a database that is installed on the client host:

    # Map localhost:6379 on the server to localhost:6379 on the client,
    # so that the remote 'redis-cli' program ends up speaking to the local
    # redis-server.
    with remote_tunnel(6379):
        run("redis-cli -i")

The database might instead be installed on a host only reachable from the client host (as opposed to on the client itself):

    # Map localhost:6379 on the server to redis.internal:6379 on the client
    with remote_tunnel(6379, local_host="redis.internal"):
        run("redis-cli -i")

remote_tunnel accepts up to four arguments.

Note: By default, most SSH servers only allow remote tunnels to listen on the localhost interface (127.0.0.1). In these cases, remote_bind_address is ignored by the server, and the tunnel will listen only on 127.0.0.1.

settings: Nests context managers and/or overrides env variables. settings serves two purposes:

- Most usefully, it allows temporary overriding/updating of env with any provided keyword arguments, e.g. with settings(user='foo'):. Original values, if any, will be restored once the with block closes. The keyword argument clean_revert has special meaning for settings itself (see below) and will be stripped out before execution.
- In addition, it will use contextlib.nested to nest any given non-keyword arguments, which should be other context managers, e.g. with settings(hide('stderr'), show('stdout')):.

If clean_revert is set to True, settings will not revert keys which are altered within the nested block, instead only reverting keys whose values remain the same as those given.
More examples will make this clear; below is how settings operates normally:

    # Before the block, env.parallel defaults to False, host_string to None
    with settings(parallel=True, host_string='myhost'):
        # env.parallel is True
        # env.host_string is 'myhost'
        env.host_string = 'otherhost'
        # env.host_string is now 'otherhost'
    # Outside the block:
    # * env.parallel is False again
    # * env.host_string is None again

The internal modification of env.host_string is nullified, which is not always desirable. That's where clean_revert comes in:

    # Before the block, env.parallel defaults to False, host_string to None
    with settings(parallel=True, host_string='myhost', clean_revert=True):
        # env.parallel is True
        # env.host_string is 'myhost'
        env.host_string = 'otherhost'
        # env.host_string is now 'otherhost'
    # Outside the block:
    # * env.parallel is False again
    # * env.host_string remains 'otherhost'

Brand new keys which did not exist in env prior to using settings are also preserved if clean_revert is active. When it is False, such keys are removed when the block exits.

New in version 1.4.1: the clean_revert kwarg.

shell_env: Sets shell environment variables for wrapped commands. For example, the below shows how you might set a ZeroMQ-related environment variable when installing a Python ZMQ library:

    with shell_env(ZMQ_DIR='/home/user/local'):
        run('pip install pyzmq')

As with prefix, this effectively turns the run command into:

    $ export ZMQ_DIR='/home/user/local' && pip install pyzmq

Multiple key-value pairs may be given simultaneously.

warn_only: Alias to settings(warn_only=True). See also: env.warn_only, settings, quiet.
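The override-and-restore semantics described above can be modeled with a plain dict and contextlib. This is a hypothetical sketch of the documented behaviour (including clean_revert), not Fabric's actual implementation; the env dict and its keys are illustrative only.

```python
from contextlib import contextmanager

# A stand-in environment dict; Fabric's real env is much richer.
env = {'parallel': False, 'host_string': None}

@contextmanager
def settings(clean_revert=False, **overrides):
    """Model the documented override/restore semantics of settings()."""
    previous = {k: env.get(k) for k in overrides}
    missing = {k for k in overrides if k not in env}
    env.update(overrides)
    try:
        yield
    finally:
        for key, value in overrides.items():
            if clean_revert:
                # Keys altered inside the block, and brand-new keys,
                # are preserved; only unaltered keys are reverted.
                if env.get(key) != value or key in missing:
                    continue
                env[key] = previous[key]
            elif key in missing:
                env.pop(key, None)      # new keys are removed on exit
            else:
                env[key] = previous[key]

# Normal operation: the inner reassignment is nullified on exit.
with settings(parallel=True, host_string='myhost'):
    env['host_string'] = 'otherhost'
assert env == {'parallel': False, 'host_string': None}

# With clean_revert, the altered key survives the block.
with settings(parallel=True, host_string='myhost', clean_revert=True):
    env['host_string'] = 'otherhost'
assert env == {'parallel': False, 'host_string': 'otherhost'}
```

The key design point is recording, per overridden key, both the old value and whether the key existed at all, so the exit path can distinguish "restore", "remove", and "preserve".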
Source: http://docs.fabfile.org/en/latest/api/core/context_managers.html
12-17-2009 11:56 PM I need to display some scaled images, and I noticed the EncodedImage class while looking at the Graphics class javadocs, so I explored it; here are my findings. EncodedImage is only partially effective at scaling images. It provides a deprecated method named setScale which, used in combination with getBitmap, allowed the user to scale a bitmap. The documentation for setScale references scaleImage32(int scaleX, int scaleY) as the method introduced to replace it. The new method works well for reducing bitmaps but doesn't do a very good job of upscaling them. It also chokes when you try to upscale a bitmap and don't use the Fixed32 class properly. I highly suggest you thoroughly read the javadoc for Fixed32 before trying to figure out how to use scaleImage32. Fixed32 stores a decimal as a whole number with an implied decimal point (i.e. 1.0 = 0x00010000, 1.5 = 0x00018000). This representation is used because scaleImage32 requires an integer version of a decimal to upscale an image. Why they didn't just make the parameters for scaleImage32 double is a mystery to me. On a related note, you can probably use hexadecimal math to make the Fixed32 scaling factors make more sense and be proportional to the scaling percentage. The sandbox class I've provided produces a lot of information. First it gives two visual demonstrations of scaling a simple image of the number 1, both up and down. Notice in the images generated by the sandbox class that reducing the image results in repetition at some of the smaller scales. When you get to some of the higher scale values, resulting in smaller scaled versions of the image, the scaling actually produces no further change. Next it performs the same operation but returns statistics on the scaled images instead.
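The Fixed32 representation mentioned above is a 16.16 fixed-point integer, so 1.0 = 0x00010000 and 1.5 = 0x00018000. That arithmetic is easy to model outside the BlackBerry API; the helpers below only loosely mirror Fixed32.toFP, Fixed32.tenThouToFP, and Fixed32.div, as a sketch of the math rather than RIM's implementation:

```python
# 16.16 fixed point: value * 2**16, stored in a plain int (no FPU needed).
SHIFT = 16
ONE = 1 << SHIFT                  # 1.0 -> 0x00010000

def to_fp(n):
    # Like Fixed32.toFP: whole number -> fixed point.
    return n << SHIFT

def ten_thou_to_fp(n):
    # Like Fixed32.tenThouToFP: ten-thousandths -> fixed point.
    return (n << SHIFT) // 10000

def fp_div(a, b):
    # Like Fixed32.div: fixed-point division, result stays in 16.16.
    return (a << SHIFT) // b

assert ONE == 0x00010000                    # 1.0
assert ONE + ONE // 2 == 0x00018000         # 1.5
assert ten_thou_to_fp(10000) == ONE         # a factor of 10000 means 1.0
assert fp_div(to_fp(3), to_fp(2)) == 0x00018000   # 3 / 2 = 1.5
```

Because everything is integer shifts and divisions, this kind of math runs fast on devices without floating-point hardware, which is exactly why the API uses it.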
Note in this info that the scaling values don't seem to be directly proportional to the percentage scaled, so you'll need to play around with them to find the right one. Finally, the class shows the value of most of the parameters, with the exception of those which return objects, such as getData. Although the sandbox doesn't show it, the EncodedImage class supports multiple frames in an image. This would be useful for creating animations and for simplifying storing multiple images that are the same size. But enough about that, let's look at some code...

    import net.rim.device.api.math.Fixed32;
    import net.rim.device.api.system.Bitmap;
    import net.rim.device.api.system.EncodedImage;
    import net.rim.device.api.ui.Field;
    import net.rim.device.api.ui.UiApplication;
    import net.rim.device.api.ui.component.BitmapField;
    import net.rim.device.api.ui.component.LabelField;
    import net.rim.device.api.ui.component.SeparatorField;
    import net.rim.device.api.ui.container.HorizontalFieldManager;
    import net.rim.device.api.ui.container.MainScreen;

    /**
     * This class is intended to be subclassed and used for exploring BlackBerry
     * development.
     *
     * @author Wyatt Williamson
     */
    public class GenericSandBox extends UiApplication {

        /**
         * The usual constructor creating a mainscreen object and pushing it
         * onto the screen stack. This method should be left unchanged.
         */
        public GenericSandBox() {
            SandboxScreen screen = new SandboxScreen();
            pushScreen(screen);
        } // end GenericSandBox()

        /**
         * The usual main method attaching the object to the event dispatcher.
         * This method should be left unchanged.
         *
         * @param args Typical main arguments which, in this case, are unused.
         */
        public static void main(final String args[]) {
            GenericSandBox test = new GenericSandBox();
            test.enterEventDispatcher();
        } // end main()

        /**
         * The MainScreen class that's pushed onto the screen stack. This inner
         * class is the class where all the changes need to be made.
         *
         * @author gzusphish
         */
        private class SandboxScreen extends MainScreen {

            /**
             * Constructor responsible for building the application's screen.
             * It is where all components and containers are instantiated and
             * added to the default field manager, and where the majority of the
             * sandbox's logic resides. This is the only part that should be
             * modified when using the sandbox.
             */
            public SandboxScreen() {
                /*
                 * This MUST be the first statement in this method. If you need
                 * to change the way the screen or its default
                 * VerticalFieldManager is constructed, add flags as arguments
                 * to the constructor: i.e. super(Manager.NO_VERTICAL_SCROLLBAR
                 * | Field.FIELD_VCENTER) to center content vertically but hide
                 * the vertical scrollbar.
                 */
                super();

                // Declare and instantiate required items.
                EncodedImage eImage;
                eImage = EncodedImage.getEncodedImageResource("num_1.png");
                HorizontalFieldManager line1 = new HorizontalFieldManager();
                HorizontalFieldManager line2 = new HorizontalFieldManager();
                add(new SeparatorField());
                add(line1);
                add(new SeparatorField());
                add(line2);

                int frameIndex = 0;
                int scale;
                int originalWidth = eImage.getWidth();
                add(new LabelField("Original width = " + originalWidth));

                // Scale from what I'm assuming is 130% through 100% in 5%
                // increments
                for (int i = 70; i <= 100; i += 5) {
                    scale = Fixed32.tenThouToFP(i * 100);
                    EncodedImage tmpEI = eImage.scaleImage32(scale, scale);
                    Bitmap bitmap = tmpEI.getBitmap();
                    BitmapField bf = new BitmapField(bitmap, Field.FOCUSABLE);
                    line1.add(bf);
                } // end for

                // Scale from 100% to what I'm assuming is 10% in 5% increments
                for (int i = 10; i <= 100; i += 5) {
                    scale = Fixed32.tenThouToFP(i * 1000);
                    EncodedImage tmpEI = eImage.scaleImage32(scale, scale);
                    Bitmap bitmap = tmpEI.getBitmap();
                    BitmapField bf = new BitmapField(bitmap, Field.FOCUSABLE);
                    line2.add(bf);
                } // end for

                /* Compare scaling factor to effective percentage. */
                add(new LabelField("Scaling Factor:New Width:Percent"));
                // Increment scale from 7000 to 10000 at intervals of 500
                for (int i = 70; i <= 100; i += 5) {
                    int factor = i * 100;
                    scale = Fixed32.tenThouToFP(factor);
                    EncodedImage tmpEI = eImage.scaleImage32(scale, scale);
                    add(new LabelField(factor + ":" + tmpEI.getScaledWidth()
                            + ":" + (tmpEI.getScaledWidth() * 100 / originalWidth),
                            Field.FOCUSABLE));
                } // end for

                add(new SeparatorField());
                add(new LabelField("Scaling Factor:New Width:Percent"));
                // Increment scale from 10000 to 100000 at intervals of 5000
                for (int i = 10; i <= 100; i += 5) {
                    int factor = i * 1000;
                    scale = Fixed32.tenThouToFP(factor);
                    EncodedImage tmpEI = eImage.scaleImage32(scale, scale);
                    add(new LabelField(factor + ":" + tmpEI.getScaledWidth()
                            + ":" + (tmpEI.getScaledWidth() * 100 / originalWidth),
                            Field.FOCUSABLE));
                } // end for

                // Tour de methode. Let's see what all this thing can do.
                add(new LabelField("getBitmapType(int Index) = "
                        + eImage.getBitmapType(frameIndex), Field.FOCUSABLE));
                add(new LabelField("getDecodeMode() = "
                        + eImage.getDecodeMode(), Field.FOCUSABLE));
                add(new LabelField("getFrameCount() = "
                        + eImage.getFrameCount(), Field.FOCUSABLE));
                add(new LabelField("getFrameHeight(Index) = "
                        + eImage.getFrameHeight(frameIndex), Field.FOCUSABLE));
                add(new LabelField("getFrameMonochrome(Index) = "
                        + eImage.getFrameMonochrome(frameIndex), Field.FOCUSABLE));
                add(new LabelField("getFrameTransparency(Index) = "
                        + eImage.getFrameTransparency(frameIndex), Field.FOCUSABLE));
                add(new LabelField("getFrameWidth(Index) = "
                        + eImage.getFrameWidth(frameIndex), Field.FOCUSABLE));
                add(new LabelField("getHeight() = "
                        + eImage.getHeight(), Field.FOCUSABLE));
                add(new LabelField("getImageType() = "
                        + eImage.getImageType(), Field.FOCUSABLE));
                add(new LabelField("getLength() = "
                        + eImage.getLength(), Field.FOCUSABLE));
                add(new LabelField("getMIMEType() = "
                        + eImage.getMIMEType(), Field.FOCUSABLE));
                add(new LabelField("getOffset() = "
                        + eImage.getOffset(), Field.FOCUSABLE));
                add(new LabelField("getScale() = "
                        + eImage.getScale(), Field.FOCUSABLE));
                add(new LabelField("getScaledFrameHeight(Index) = "
                        + eImage.getScaledFrameHeight(frameIndex), Field.FOCUSABLE));
                add(new LabelField("getScaledFrameWidth(Index) = "
                        + eImage.getScaledFrameWidth(frameIndex), Field.FOCUSABLE));
                add(new LabelField("getScaledHeight() = "
                        + eImage.getScaledHeight(), Field.FOCUSABLE));
                add(new LabelField("getScaledWidth() = "
                        + eImage.getScaledWidth(), Field.FOCUSABLE));
                add(new LabelField("getScaleX32() = "
                        + eImage.getScaleX32(), Field.FOCUSABLE));
                add(new LabelField("getScaleY32() = "
                        + eImage.getScaleY32(), Field.FOCUSABLE));
                add(new LabelField("getWidth() = "
                        + eImage.getWidth(), Field.FOCUSABLE));
                add(new LabelField("hasTransparency() = "
                        + eImage.hasTransparency(), Field.FOCUSABLE));
                add(new LabelField("isMonochrome() = "
                        + eImage.isMonochrome(), Field.FOCUSABLE));
            } // end SandboxScreen()
        } // end SandboxScreen
    } // end GenericSandBox

12-18-2009 09:28 AM

Sorry I have not had time to review the code or comments in detail, but just wanted to say thank you for taking the time to do this. Good job.

12-18-2009 10:55 AM - edited 12-18-2009 10:57 AM

Thanks for the kudos Mr. Anderson . . . erm . . . Mr. Strange (sorry, lapsed into a Matrix moment there). The BlackBerry has you . . . GAA! I just hope I did the tagging right. I don't know if I was supposed to or not, so I decided not to go overboard with them, but I think I still put one too many. Is "scale" a good tag? It's what the example is doing, but it may be a bit broad. I also left off Bitmap and BitmapField because the example contains them, but that's not what the post is about. The only thing left to do is figure out how to calculate the scaling value. Say, for instance, I wanted to scale five images to fit side-by-side across the screen. In my case, I need to fit twelve images across the screen, since I'm making a timer display as follows: HH:MM S:mmm, and I'd like them to be as large as possible on the screen of any given model. I think the answer will have something to do with hex math, but I'm not sure.

12-18-2009 11:40 AM

Here's a composite screen shot of the output of the sandbox. Here's the image - num_1.png - that was used for the example.
12-18-2009 11:06 PM - edited 12-18-2009 11:07 PM

Good job. I will respond to your post "backwards." One idea is to add up all the default widths of the images that you want to display, then use that in comparison with the screen width/height, whatever you are trying to fit them to. Then scale each appropriately based on that information. For example, you have 2 images that are 30 and 20 pixels wide, and the field/screen you are putting them in is 100 pixels wide. 30+20=50, so you scale it to 100 pixels (by multiplying by 2): 30*2=60, 20*2=40. There you have scaled them to fit the area.

Don't worry about tagging. You can do it as you see fit (I myself can never seem to find what I am looking for when I do a search, so I tag all my posts; this is evident if you look at the Top Taggers ranking).

When scaling, the simple equation I remember reading was "scale = original size / new size", which can be translated to code as "int scale = Fixed32.div(Fixed32.toFP(original_size), Fixed32.toFP(new_size));". I don't know why they did it in that manner, but that equation will get the job done. BlackBerrys often don't have FPUs, and if you search around there is a post where one of the more experienced developers on the forum compared the different data types and found that floats and doubles are (I think it said) almost 2 times slower than processing longs, which are (I think it said) about 1.25-1.5 times slower than using an integer. This is why Fixed32 is used: it uses integers to do all the processing and can operate extremely fast.

Finally, if you are using the 5.0 API you have a few more options with scaling. If you will be generating all images on the device (as opposed to loading them from files) then Bitmap is the way to go; otherwise, EncodedImage. Either way you can get it as a Bitmap, which not only has general scaling functions now but has scaling functions where you can choose the scaling filter (Box, Bilinear, or Lanczos).
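rc's fit-to-the-area arithmetic above (30 + 20 = 50, doubled to fill a 100-pixel area) generalizes to a small helper. This is a hypothetical sketch, not forum code; it uses integer division, so rounding down may leave a pixel or two of slack:

```python
def fit_widths(widths, available):
    """Scale a row of image widths uniformly so they fill `available` pixels."""
    total = sum(widths)
    # Same ratio for every image keeps their relative proportions.
    return [w * available // total for w in widths]

# rc's example: two images, 30 and 20 px wide, into a 100 px area.
assert fit_widths([30, 20], 100) == [60, 40]

# Three 50 px images into 120 px: ratio 120/150 = 0.8 each.
assert fit_widths([50, 50, 50], 120) == [40, 40, 40]
```

The same ratio (available / total) is what you would feed into the scale = original / new equation, per image, to produce the Fixed32 factor.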
It also has a, what you might consider, better way of scaling where you pass in a Bitmap at the destination size. Overall a very good post with a relatively large amount of information.

12-19-2009 11:19 AM - edited 12-19-2009 12:53 PM

Thanks for mentioning the versioning, as that's one of the things I wanted to include. The scaleImage32 method only goes back to version 4.2.0, whereas setScale and getBitmap go all the way back to 3.6.0. The Fixed32 class was available back in 4.0.0, so it should be okay to use with scaleImage32 so long as you're targeting users with 4.2.0 and onward. As rc implied, if you want to use the latest and greatest, the Bitmap class has lots of bells and whistles that make scaling images easier and will produce better results because of the inclusion of what I believe are called interpolation algorithms (ow, that almost hurt to say).

Also, I figured out the exact method of calculating the scaling factor which should be passed to Fixed32.tenThouToFP if you're using that method. It's kind of backward, because if you want to reduce an image you have to provide a scaling factor higher than 10000, whereas if you want to enlarge an image you have to provide a scaling factor lower than 10000. If you want to scale by a whole percentage then you have to modify the percentage first. When scaling an image larger you subtract one hundred from the percentage; for instance, to scale an image by 150% you actually use 50 and, likewise, 30 for 130. You must add one hundred to the percentage when scaling smaller; so use 150 to scale to 50% and 190 to scale to 90%. Once you have the modified percentage, divide one million by it [i.e. 1000000/175 will produce the factor to scale the image to 75% of its original size, and 1000000/37 will produce the factor to scale the image to 137% of its original size].
If you want to scale an image using a fractional percentage, as rc mentioned, divide the original size by the new size to get the fractional percentage, then modify it (similar to using a whole percentage) before using it to derive the scaling factor. When enlarging or reducing an image you subtract 1 or add 1, respectively, to the fractional percentage. To scale an image to 150% (1.5) of its original size, subtract 1 from the fractional percentage (i.e. 1.5 - 1 = .5). To scale an image to 75% (.75) of its original size, add 1 to the fractional percentage (i.e. .75 + 1 = 1.75). Using the modified fractional percentage, divide ten thousand by it [i.e. (10000 / ((30 / 60) + 1)) will produce the factor to scale by 50%, as will 10000 / 1.5].

12-19-2009 01:11 PM

More good information. I used toFP because I don't know of any images that are 100.5 pixels wide.

12-20-2009 11:23 AM

You could pass your field manager dynamic width and height, and then you can calculate:

    height: return Math.max(getFont().getHeight(), image.getHeight())
    width:  int width = getFont().getAdvance(label);
            width += image.getWidth();
            return width;

So you have dynamic values and can write a scaling algorithm that does not depend on the device type. I hope I'm right ;-)

12-20-2009 08:12 PM

Sorry about the images. I uploaded them when I made the post and they haven't been approved yet. I'll probably use Flickr from now on.

12-21-2009 03:23 PM - edited 12-21-2009 03:28 PM

rcmaniac25 wrote: More good information, I used toFP because I don't know of any images that are 100.5 pixels wide.

I used tenThouToFP because I couldn't - or was too lazy to - figure out how to use toFP. Also, I didn't think about ending up with half pixels. Maybe I'll do a post on Fixed32 next. I'll probably use the method you mentioned in an earlier reply, as it seems to fit the bill and is a lot simpler and more elegant than anything I've thought of.
int scale = Fixed32.div(Fixed32.toFP(original_size), Fixed32.toFP(new_size));
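Assuming, per rc's formula above, that scaleImage32 expects scale = original size / new size in 16.16 fixed point, the factor calculation can be sketched like this (the helper names only loosely mirror the Fixed32 API and are not RIM's code):

```python
SHIFT = 16

def to_fp(n):
    # Like Fixed32.toFP: whole number -> 16.16 fixed point.
    return n << SHIFT

def fp_div(a, b):
    # Like Fixed32.div: fixed-point division, result stays in 16.16.
    return (a << SHIFT) // b

def scale_factor(original_size, new_size):
    # rc's equation: scale = original size / new size.
    return fp_div(to_fp(original_size), to_fp(new_size))

# Halving an image (100 px -> 50 px) needs a factor of 2.0;
# doubling it (50 px -> 100 px) needs a factor of 0.5.
assert scale_factor(100, 50) == 2 << SHIFT
assert scale_factor(50, 100) == 1 << (SHIFT - 1)
```

Note how reductions yield factors above 1.0 and enlargements yield factors below 1.0, matching the "backward" feel described in the thread.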
Source: http://supportforums.blackberry.com/t5/Java-Development/Diary-of-a-noob-4-Exploring-EncodedImage/m-p/402582
Hello everyone. First off, I'm completely new to Blender and Python, trying to fumble my way through bit by bit. Now I've come to a point where I need someone to point me in the right direction. What I'm trying to do is have an on-screen text that displays the position of an animated object in real time. For my object I'm using the following script:

    from bge import logic

    cont = logic.getCurrentController()
    sphere = cont.owner
    spherepos = "%(name)s" % dict(name=sphere.position)
    print(spherepos)

(The print command is just to check in the console whether my variable spherepos contains any plausible data, and so far it works.) For my on-screen text I use this code:

    from bge import logic
    from sphere import spherepos

    cont = logic.getCurrentController()
    pos = cont.owner
    pos.text = "%(name)s" % dict(name=spherepos)

For some reason though, my on-screen text just statically shows one set of coordinates, which I don't recognize from anywhere... I'd greatly appreciate any help.
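The text stays static because from sphere import spherepos copies the value once, when the sphere module is first imported; module-level code does not re-run on later frames. A common fix is to publish the position into a shared mutable store every logic tick and read it back from the text object's script every tick; in the BGE that store is typically bge.logic.globalDict. Below is a minimal stand-alone sketch of the pattern, with a plain dict standing in for globalDict and plain functions standing in for the two controller scripts:

```python
shared = {}  # stands in for bge.logic.globalDict, which persists across frames

def sphere_script(position):
    # Runs every logic tick on the sphere: publish the current position.
    shared["spherepos"] = "%s" % (position,)

def text_script():
    # Runs every logic tick on the Text object: read the latest value.
    return shared.get("spherepos", "")

sphere_script((0.0, 0.0, 0.0))
assert text_script() == "(0.0, 0.0, 0.0)"

sphere_script((1.0, 2.0, 3.0))          # the object has moved...
assert text_script() == "(1.0, 2.0, 3.0)"   # ...and the reader sees it
```

The design point: share a mutable container and read it each tick, rather than importing a name, because an imported name is a one-time snapshot of the value.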
Source: https://blenderartists.org/t/accesseing-variables-from-script-a-in-script-b/616901
This article describes a set of templates I have written which are intended to help create simple WPF applications based on the Composite WPF framework using the Prism libraries. Why write these templates when there are already templates available, e.g., Calcium? This is a very valid question, and there are a number of reasons. How many times do you need to create a simple application, either at work or at home? This could be a small utility, a prototype application to prove some new functionality or concept, or some code just to experiment with Prism. You know that you would like to use the Prism libraries to build it, but, to be honest, you also know that setting up a Prism application from scratch, hooking the shell, infrastructure, and modules together, will take more time than you plan to spend creating the actual application. You could, of course, just take an existing Prism-based application, if you have one, and throw out what you don't need, but this is not usually a clean way to go. The same applies if you use one of the existing Prism frameworks, e.g., Calcium. What I need are a couple of VSTemplates that would allow me to create a Prism-based application quickly, with a common format (e.g., Shell, Infrastructure, and modules) that I could instantiate quickly, enabling me to focus on the task at hand. It is also important to me that they use the basic functionality of Prism without any adaptations. I don't want to have to learn a framework on top of a framework just to create my simple application. What I have read and learned from the web about Prism should just work out of the box. I also wanted to be able to add additional modules that automatically wire themselves up, and to have the flexibility to add new regions to the default shell. The two VSTemplates described in this article provide just that.
I don't for one minute compare these templates with the professional templates provided with Calcium, but my new templates have enabled me to create my simple applications quickly and with a standard architecture. They have also been useful for training and introducing new people to Prism concepts, allowing them to prototype and test applications easily. Although I have tried to incorporate many best practices when creating these templates, I have taken a pragmatic approach which not all people will support. For example, the MVVM modules created do have a reference to the view. Although none of the examples use it, in favour of a more traditional MVVM, it is there should your simple application require it. The templates in this article include code that has been collected from different sources over time and included for use by the developer if needed. I have tried to include only those utilities that I have found useful in the past when creating simple apps. I have added references to the original location of such code, and a reference to each author at the end of this article. This section provides a short summary of the features provided by the new templates. Below is a picture of the solution that is created by default when you generate and run a new solution directly out of the box. After selecting the Simple Prism Solution template and generating a solution, Visual Studio is loaded with a complete, compilable solution. The solution generated uses the standard layout as seen in the Prism examples on the web, i.e., with shell, infrastructure, and module projects. When compiled, the output is written to a single bin directory. The layout of this directory is such that it can be copied directly and the resulting application executed without any modifications to code or config files. The MVVMView template is intended to be used when adding new MVVM modules into the modules folder of the Prism solution.
It creates a new module project, the output of which is linked automatically into the Composite WPF application. The implementation is based on the MVVM pattern. It provides a number of useful features by default, e.g., it automatically hooks up the DataContext to the ViewModel, and it provides references to key objects like the logger, UnityContainer, etc., by extending a common base class. It is mandated in our organisation that Log4Net is the default logging component for .NET apps, so the provided solution has overridden the default logger implementation with Log4Net logging. Some of the included components use Log4Net by default, so using Log4Net as the default logger ensures that the logging provided in these components is integrated automatically. Log4Net is configured by default to log to the console, to a UDP output (for use by Chainsaw), and to a rolling file. The Consolidated Timer is a simple low-resolution timer facility that enables a single Windows timer instance to handle many timer operations within the application, reducing the number of system timer resources that need to be maintained. The base timer checks the registered timers every second, so this is a very coarse timer facility, useful for many GUI tasks, e.g., refreshing, polling, etc. I have included the Dialogs Workspace as provided by the CompositeWPF Contrib project so that it can be readily used. There is also a facility that enables modules to activate the wait cursor on the Shell. Finally, there is nothing special here otherwise: just a simple menu toolbar with a couple of basic commands implemented to perform basic functionality, e.g., exit application. There is an example module included that demonstrates how to use all of the above features as required. Normally, this project is deleted and replaced with a module generated using the MVVMView template. Enough of the high-level information; let's talk about the installation of the templates.
The templates are provided as an MSI which, when executed, installs the templates and all the required libraries. I have only tested these templates on Visual Studio 2008. Our development currently uses a number of third-party libraries which are common to all application development; the list of libraries we use is shown below. These components could have been installed in the GAC, but I prefer to have them in a fixed location on each developer's machine, so the setup places them in C:/Program Files/DotNetLibraries. Not all of these libraries are currently used in the framework, but they are included for the convenience of developers; I intend to update the framework to incorporate these tools at some point. The installation places the template files in the user's MyDocuments folder under "Visual Studio 2008/Templates/ProjectTemplates". The templates are configured to appear in the My Templates Visual C# section of the New Projects dialog. The two new templates are "Simple Composite WPF Solution" and "MVVMView". These two templates and their uses are described below. The Simple Composite WPF Solution template, when selected, will create a new Prism solution. I have tried to make the content of this solution as practical as possible and to comply with the traditional layout used by many Composite WPF applications. I made the assumption that the installation is on a 32-bit machine; if this is not the case, then you need to copy the above libraries into the correct Program Files/DotNetLibraries directory. Secondly, if you have changed the location Visual Studio reads templates from, then you will need to relocate the above templates accordingly. The layout of the solution generated by this template is shown below. As you can see, there are three sections to this solution: Shell, Infrastructure, and Modules. The Infrastructure project is the common area of the Prism application, and I use it to store any common artifacts that may be needed by the application.
Infrastructure is divided into the sections described below. Storing common entities in the Infrastructure project is a common pattern when using the Prism framework, and it is the one I find most useful for small to medium-sized applications, which is why I have used it in the templates. For larger applications, I would tend to store the common data in Core classes, one for each module. The entities section contains common entity classes that are used throughout the application. By default, there are two: TimerJob.cs and ApplicationConfig.cs. These classes are self-explanatory when you read about the corresponding services below. There are three default classes in the CompositeWPFBase section: BindableObject, DispatchedObservableCollection, and CompositeWPFBase. BindableObject is a class from Josh Smith. It is an implementation of the INotifyPropertyChanged interface, and I use it for setting up the data bindings in the ViewModel, so I added it into the base class for simplicity, as he recommends. Have a read of his article for more information. The CompositeWPFBase class is my base class that is used by all MVVM classes. It exposes references to core components of the base classes, e.g., the Unity container, the Event Aggregator, and the ILoggerFacade. This generic class is used to wire up my MVVM pattern. There is normally no reason for users to access these two base classes directly. The DispatchedObservableCollection is included to solve the problem that an ObservableCollection cannot be used for data binding when the collection is updated from any thread that is not the primary GUI thread. The DispatchedObservableCollection implementation is the one provided by the WPFExtensions library. As I did not require the other features of this library, I have included only the DispatchedObservableCollection classes directly in the Infrastructure project.
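The idea behind BindableObject (raise a change notification only when a property value actually changes, so bound views can refresh) is language-agnostic. Here it is sketched in Python rather than the article's C#; all names are illustrative and not taken from the templates:

```python
class BindableObject:
    """Minimal property-changed notification, in the spirit of
    INotifyPropertyChanged."""

    def __init__(self):
        self._handlers = []

    def property_changed(self, handler):
        # Subscribe a callback that receives the changed property's name.
        self._handlers.append(handler)

    def _set(self, name, value):
        # Only raise the event when the value actually changes.
        if getattr(self, name, object()) != value:
            setattr(self, name, value)
            for handler in self._handlers:
                handler(name)

class ViewModel(BindableObject):
    def __init__(self):
        super().__init__()
        self.status = ""

    def set_status(self, text):
        self._set("status", text)

changes = []
vm = ViewModel()
vm.property_changed(changes.append)
vm.set_status("Ready")
vm.set_status("Ready")        # same value: no duplicate notification
assert changes == ["status"]
assert vm.status == "Ready"
```

Suppressing the event on a no-op assignment is the detail that prevents binding feedback loops, which is why implementations typically compare old and new values before notifying.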
This section contains the configuration objects used by the application. There is a class called RegionConstants. This class contains the definitions for the regions used in the Shell and Module components. If a developer decides to modify the shell with new regions, they will need to add the definitions for those regions in this class.

There are also a number of Composite Presentation Events defined in this section that are used across the application, for example, StatusBarMessageEvent, WaitCursorEnabledEvent, etc.

This section contains the common interface definitions used by the application.

The Services section is where I define the common services for use across the application. By default, I provide two services: the first is CompositeWPFTimerService, and the second is the Configuration Service.

This service is a low resolution timer service (by default, 1 second). The idea being that instead of using multiple timer operations throughout the application for common timer operations, you can register a TimerJob with this service for processing. The timer jobs are held in a queue and processed according to the TimerJob specification. The following timer base operations are supported by this service:

public interface ICompositeWPFTimerService
{
    void Start();
    void Stop();
    void AddJob(TimerJob job);
    void RemoveJob(string name);
    void UpdateJob(TimerJob job);
    void RemoveAllJobs();
    void PauseJob(string name);
    void ResumeJob(string name);
}

The user specifies how each TimerJob shall be processed in the TimerJob instance itself. Each job must have a unique name. The user can define if the timer is a one shot timer or if it should be rescheduled once expired. The user can specify the time when the timer shall execute and the reset time if the job shall be rescheduled. The user specifies the JobTask that shall be executed once the time period expires.
This is executed on a separate thread per task. The TimerJob properties are:

public string Name { get; set; }
public DateTime ExecutionTime { get; set; }
public TimeSpan ResetDuration { get; set; }
public TimerJobState State { get; set; }
public JobTask Operation { get; set; }
public TimerType OneShot { get; set; }

One point: there is a considerable amount of Log4Net debugging in this feature, which is on by default. To switch this off, add the following to the shell app.config file:

<logger name="CompositeWPFTimerService">
    <level value="OFF" />
</logger>

The configuration service is a work in progress. The current implementation is a simple example of how to read and write user preferences using the "Settings" feature, which is adequate for my simple applications. However, with this simple abstraction, the service could very easily be modified to use an alternative storage medium, if necessary.

As dialogs are not natively supported in the Composite WPF, I took this opportunity to incorporate DialogWorkspace from the CompositeWPF Contrib. I had some problems with this code as I could not find how to position the dialog correctly, so I modified the code to always centre a dialog on the screen.

The shell is the main entry point to the application. In addition to the usual WPF artifacts like App.xaml etc., it also contains the Prism specific components, e.g., Bootstrapper.cs etc. The main shell of this application has three regions defined: MainMenu, Display, and StatusBar. This is the minimum you would expect in a small application using Composite WPF. Following the standard Composite WPF documentation, developers can easily add additional regions into the shell for other areas of the GUI, e.g., Toolbars, Selection areas, etc.

The modules section is where the modules are stored. These modules are hosted in Shell containers. By default, there are four modules supplied in the default solution.
All of the modules follow the same MVVM Design Pattern as discussed earlier. The Main Menu module provides a basic Menu, see below. It comes prewired with Exit and an About dialog to show the use of the Dialog Workspace. The status bar at the bottom of the screen uses the Event Aggregator to subscribe for messages to be displayed. Modules can publish messages that will be displayed in the Status Bar. This module hosts the dialogs used by the application. These dialogs use the Dialog Workspace from the Composite WPF Contrib as described earlier. I have created a second template that creates a new project for each new module to be added to the application. The idea here is that the user will add modules to the Modules solution for each module needed for the application. When the user selects the MVVMProject template and executes it, a new project is created with the following structure: As you can see from the above example, the template renames all of the artifacts in the template using the name supplied by the user, in this case, prefixing all of the files in the project with "TestProject". It also renames the namespaces and class names etc., with the same prefix; see below: The template creates all of the plumbing necessary to integrate the module into the solution. The project is configured automatically to copy the output DLL to the output modules directory so that it is automatically picked up at run time. The modules are automatically configured to populate the DisplayRegion. If the user needs to change this location, then they only need to modify the constant for the Regions in the TestModule section, as shown above. The goal here is that the developer need only add code to populate the user control and ViewModel classes to add the business specific code and not have to worry too much about Prism. In addition, the base classes of the view's ViewModel provide easy access to the logging, event aggregator, and Unity components. 
The usage of the templates is simplicity itself. The main screen of the application is shown below.

Use the following steps to create a new Prism solution:

Use the following steps to create a module:

This section provides a short overview of the Examples module. The example module displays a tabbed view with a number of basic tabs that provide facilities to show how to use some of the features provided.

Enter some text in the text box. Select the button, and the text will be published in the status bar for 3 seconds.

This example enables the user to play with the timers. You can add and manipulate the timers and watch the output in the window provided.

This simple application shows how the configuration service can be accessed and used.

This application shows how to access the logger and how to write different logger messages to the Log4Net output.

This application shows how to enable and disable the application wait cursor.

As with all such tasks, this is a work in progress. The intention is to continue to upgrade and add to these simple templates. I really do welcome feedback on this code to help me learn and improve as I go forward.

A lot of the libraries and code in these templates are a collection of useful utilities etc. that I have gathered over time. This section is a reference to these contributions, without which my life would have been made a whole lot more difficult.
http://www.codeproject.com/Articles/53291/Simple-Prism-Application-Templates
For more than a year, I've been working with Nuxt.js on a daily basis. In this review, I'll try to summarize all of the ups and downs of working with this framework. Hopefully, this article may persuade you to try Nuxt on your new project while being aware of some caveats along the way:

- is-https and Heroku (i18n multidomains, sitemap)
- this everywhere
- rx.js support

One remark before I start: I don't know if it was their intention, but I just love how the Nuxt logo is a mountain — like you're behind a reliable wall.

First of all, I want to give a small introduction to the project. A year ago, in October 2018, my friend Rami Alsalman (Ph.D. in machine learning) approached me and shared the pain of using job websites nowadays (i.e., you're trying to find the software for back-end engineer PHP, and you get a list of offers with Python, C#, ASP.net, and so on). All in all, relevancy is poor sometimes — the recognition of required programming skills and soft skills is a problem all of its own.

The idea was to build a search system using machine-learning algorithms that will recognize any job-offer description text, any search text, and the applicant's CV, so we could directly show the jobs that best match the CV. That is how we came up with the idea of clusterjobs.de. I was responsible for the web application and became CTO of the startup, and Rami Alsalman became the CEO and machine-learning search-engine engineer.

First of all, I wanted to start the project with a long-term framework that would help us start fast and be able to extend into the future. Another main point was to have SSR because we wanted to have sustainable SEO as a main traffic channel.
Having PHP on the back end for SSR would lead to duplicating all the templates and double the work, which we couldn't afford because the dev team is just me. I started to investigate JavaScript SSR solutions, and Nuxt seemed to be the clear winner. There was a 2.0.0 major release and good documentation, and we decided to take a risk with a new technology on a new project. And so we took Nuxt as a framework for clusterjobs.

To save some time on manual deployments, I invested a couple of days in setting up a proper GitLab pipeline to deploy the app to Heroku. The Nuxt docs are a great resource on how to deploy to Heroku. Here's a great article on how to deploy Vue on Heroku in the GitLab pipeline. Combine them together — and boom! This is what I have at the moment:

image: node:10.15.3

before_script:
  - npm install

cache:
  paths:
    - node_modules/
    - .yarn

stages:
  - build
  - test
  - deploy

Build clusterjobs:
  stage: build
  before_script:
    - yarn config set cache-folder .yarn
    - yarn install
  script:
    - yarn run build

Test:
  stage: test
  before_script:
    - yarn config set cache-folder .yarn
    - yarn install
  script:
    - yarn test

Deploy master:
  stage: deploy
  only:
    - master
  before_script:
    # add remote heroku
    - git remote add heroku:[email protected]/clusterjobs-web.git
    # prepare files
    - git checkout master
    - git config --global user.email "[email protected]"
    - git config --global user.name "Mikhail Starikov"
    # put prod robots file where it belongs
    - rm ./static/robots.txt
    - cp -f ./static/robots.txt.prod ./static/robots.txt
    - git add ./static/robots.txt
    - git commit -m 'prod robots'
  script:
    # push
    - git push -f heroku HEAD:master

.gitlab-ci.yml

After the environment was done, it took roughly 2–3 months to prepare the MVP and go live. After numerous iterations and improvements, I still don't regret choosing Nuxt. So why is it good? I thought of the best moments I've experienced, and here they are:

It's performant.
Even though it is a full JS framework that needs to deliver all library files to the client, it still tries its best to do it in the least harmful way. With the last 2.10 update, I found out the webpack config has been updated so that during development only the updated chunks are rebuilt, which really speeds up development. Also, webpack for production is extendable, and you can play around with it on your own or use the default config, which is pretty performant on its own.

build: {
  parallel: true,
  terser: true,
  extend(config, ctx) {
    if (process.env.NODE_ENV !== 'production') {
      config.devtool = '#source-map';
    }
    if (ctx.isDev && ctx.isClient) {
      config.module.rules.push({
        enforce: 'pre',
        test: /\.(js|vue)$/,
        loader: 'eslint-loader',
        exclude: /(node_modules)/,
      });
    }
    if (
      config.optimization.splitChunks &&
      typeof config.optimization.splitChunks === 'object'
    ) {
      config.optimization.splitChunks.maxSize = 200000;
    }
  },
},

nuxt.config.js

The advantage is that I, as a developer, didn't need to think about where to put this or that. Nuxt comes with a skeleton of an app, with everything you need to build a complex web app: pages, components, assets, static, middlewares, plugins, and so on.

The only thing that annoyed me is that Nuxt encourages you to use ~/component/blah-blah kind of paths in imports all over the application. JetBrains IDEs, which I love from the bottom of my heart, couldn't recognize those paths. The workaround for that is pretty simple:

const path = require('path');

// eslint-disable-next-line nuxt/no-cjs-in-config
module.exports = {
  resolve: {
    alias: {
      '~': path.resolve(__dirname, './'),
    },
  },
};

webpack.config.js

The community is thriving. A huge thanks is due to Sebastien Chopin, who created Nuxt itself and has continued driving it to this day. Another huge thanks is due to the core team and all of its contributors for such an amazing product.
If you've tried Nuxt, you probably know these resources, but I'll just put them here anyway: the nuxt-community organization and its modules.

That is the thing that makes you love Nuxt, really. Coming from such a great community, Nuxt has readjusted Vue modules, new modules, and modules for everything. Found some use case not covered in Nuxt? Write a module, and make it Nuxt-community open source! Here is a list of modules I used in production:

There were some problems, of course, during my year of working with Nuxt. Now I'll try to give context for each of them and help you avoid them in the future.

This is related to how Node.js works. If you have a global variable, it'll be overwritten in a simultaneous request. Some insights are given in this Stack Overflow question. The problem I experienced was related to the fetch function on my offer-list page (search results). I was making an action call like:

await store.dispatch('serverSearch', {
  query: paramQuery,
  page: paramPage,
});

nuxt.page.js

That was done to populate the store on the server side and to use the same offers on the client side. But when it's done inside an action, like action -> commit -> store, then for some reason that data gets mixed. I confess, I didn't investigate the true reason for that — maybe some global object of the Vuex store or something like that. The problem was that while my application was handling the first request, every subsequent request got its state. So you might end up landing on the “fullstack developer” job offers page and seeing machine-learning engineer results.

The fix for that was:

const offersData = await store.dispatch('serverSearch', {
  query: paramQuery,
  page: paramPage,
});
await setResults(offersData, paramPage, store.dispatch.bind(store));
await store.dispatch('serverSetQuery', paramQuery);

nuxt.page.js

So action -> back to fetch -> commit -> state. The action should return a promise that is resolved with proper data, which you could use back in the fetch function.
After that point, commit calls will probably be close to the end of the fetch and the page will have the correct data, but the general problem might still be there.

I am hosting the app using Cloudflare for DNS and Heroku as a server. Pointing the domain to Heroku is done through CNAME, which is giving me some problems. Several modules of Nuxt (sitemap, i18n) use the is-https library to identify the type of request on the server side. The request made to Cloudflare is HTTPS, but the proxying probably isn't. I got some advice on CMTY about that. Enabling x-forwarded-proto should help, but I haven't tried it yet.

this everywhere

Personally, I like to write functional code in JS. It's possible with Nuxt, but all the modules make you use this. Want to get the current locale in the Vuex store or in the component?

this.app.i18n.locale

The same goes for switching the locale and getting the list of all locales. Want to change pages in Vuex?

this.router.push

I can live with that, but having those objects as arguments inside functions could also benefit better code separation.

I love RX and especially love to apply it to state-managing use cases. RX can be integrated into Vue — and into Nuxt as well, if we're talking about DOM events. There's this package. To integrate it into the Nuxt app, just create a plugin like this:

import Vue from 'vue'
import Rx from 'rxjs/Rx'
import VueRx from 'vue-rx'

Vue.use(VueRx, Rx)

vue-rx.plugin.js

There were also a couple of attempts to integrate it into Vuex, but so far, the repo is deprecated. I haven't seen any articles regarding this lately.

All in all, I love Nuxt. I even prepared a workshop for my fellow colleagues and delivered it a couple of times to spread the knowledge and encourage them to try it. I think it's a very mature and developed tool for any need. You could use it for everything from simple static landing pages and personal websites up to complex web applications and e-commerce platforms.
I faced some caveats, which were fixable, but I also had a lot of great moments, when everything felt so simple and worked awesomely. I truly believe in this framework and am deeply grateful to the people that created it and are maintaining it still.
https://morioh.com/p/889e9be9ae61
Introduction to Replace() Function in Java

The replace() function in Java is used to remove a particular letter or character sequence and put another letter or character sequence in its place. This replace() function was introduced with JDK 1.5. Before it existed, the same logic could be written by hand; encapsulating that logic in a function named replace() reduces the work of coders, as they can directly call a function that takes two input parameters and returns a new, modified string. This can be used as per the business requirements. There are some other variants of the replace function as well, like replaceAll() and replaceFirst(), which use regular expressions to manipulate the string.

Syntax:

public String replace(char oldcharacter, char newcharacter)

Here this function has the access modifier public, allowing it to be used by other classes as well. The String return type designates that this function returns a string. The input parameters are passed in the form of two character variables named "oldcharacter" and "newcharacter". These variables will be used to scan for the character to be replaced, and the logic in the function will then replace this character with the new one sourced from the "newcharacter" variable.

Parameters:

- oldcharacter: This is the old character which needs a replacement.
- newcharacter: This is the new character which is placed instead of the previous character.

Return Value: This function returns a string with the old characters replaced with the new ones.

How Replace() Function Works in Java?

The internal code logic of the replace() function is given below with an explanation.

Note: This is not running code. It is the code logic on which the replace function works. Here the function named "replacefunction" stands in for the actual "replace" function in Java.
This function does real work only when the character to be replaced is different from the replacement character. If, in the string "abcdecd", "d" were to be replaced by "d" itself, the same string would be returned rather than entering the unnecessary logic of this function. Once control enters the function, all necessary checks are done to find the value which needs to change.

Variables "oldcharacter" and "newcharacter" receive the input parameters for this function. These variables are then used further in the function while replacing the values. Variable "characterlen" is used to store the length of the character string in which the value should be scanned and changed. The char array "valtobereplcaed" is used to store the value which needs a change. An array is declared because multiple characters of a character sequence may need to be changed, and an array can hold multiple characters at a time. A new character array "buffer" is used to store the modified string which is created after replacing the old characters with the new ones. This string is then returned as the output from this function.

Code:

public String replacefunction(char oldcharacter, char newcharater) {
    if (oldcharacter != newcharater) {
        int characterlen = value.length;
        int k = -1;
        char[] valtobereplcaed = value;
        while (++k < characterlen) {
            if (valtobereplcaed[k] == oldcharacter) {
                break;
            }
        }
        if (k < characterlen) {
            char buffer[] = new char[characterlen];
            for (int j = 0; j < k; j++) {
                buffer[j] = valtobereplcaed[j];
            }

Below is the core logic to replace a particular character with a new one. The while loop keeps control inside it until we reach the end of the string. Here the character being examined, carried forward from the start, is parked in the character variable "c".
The conditional statement then checks: if character "c" matches the "oldcharacter" variable, the value of "c" is replaced with "newcharacter"; otherwise "c" is retained as it is.

Code:

            while (k < characterlen) {
                char c = valtobereplcaed[k];
                buffer[k] = (c == oldcharacter) ? newcharater : c;
                k++;
            }
            return new String(buffer, true);
        }
    }

Example of Replace() Function in Java

The below example demonstrates the working of the replace function in the Java language. It takes two parameters as input and returns a changed string after replacing the targeted character or character sequence in the input string.

Code:

public class test {
    public static void main(String args[]) {
        // In the below line a new string Str_new is being created. For this a new string object is introduced.
        String Str_new = new String("dEmonsRating the functionality of REplacE function");
        // The below code explains the use of the replace function. This function returns a string as a return value.
        // This returned value is captured in the print statement.
        System.out.print("This is string after replacing all Rs with Ks : ");
        System.out.println(Str_new.replace('R', 'K'));
        // The below line works the same as the previous code line, only with small changes to the input parameters provided to this function.
        System.out.print("This is string after replacing all E with U : ");
        System.out.println(Str_new.replace('E', 'U'));
    }
}

Output:

Conclusion

Hence the replace() function is very useful when we need a clean way to replace one character or character sequence with another in a string. It is extensively used in the Java programming language for string manipulation during logic building.

This has been a guide to the Replace() Function in Java. Here we discuss how the replace() function works in Java along with an example and code implementation.
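The introduction mentions the regex-based variants replaceAll() and replaceFirst(), but the example above only exercises replace(). Here is a short sketch of my own (the class name and sample strings are illustrative, not from the article) contrasting the three:

```java
public class ReplaceVariantsDemo {
    public static void main(String[] args) {
        String s = "one fish, two fish";

        // replace() substitutes literal characters (or CharSequences), every occurrence
        System.out.println(s.replace('f', 'd'));           // one dish, two dish

        // replaceAll() treats its first argument as a regular expression
        System.out.println(s.replaceAll("f\\w+", "dish")); // one dish, two dish

        // replaceFirst() uses the same regex semantics but stops after the first match
        System.out.println(s.replaceFirst("fish", "bird")); // one bird, two fish
    }
}
```

The practical difference shows up with regex metacharacters: s.replace(".", "-") treats the dot literally, while s.replaceAll(".", "-") would replace every character, since "." is a regex wildcard there.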
https://www.educba.com/replace-function-in-java/?source=leftnav
Maybe my google kung-fu is not strong enough, but I couldn't find a decent disassembler for the Raspberry Pi 3. All that I've found (binutils, LLVM, Capstone) were huge, full of dependencies, and most of them incomplete and difficult to use. None was really suitable for a bare metal project. So I've written one. It's really lightweight (less than 128k), yet it supports all ARMv8.2 instructions. It is as easy to integrate and use as it gets: a single C header file with only one function, licensed under the MIT license. You pass the address of the instruction and a pre-allocated buffer. The function returns the address of the next instruction, and writes the zero terminated disassembled string into the buffer using only sprintf(). Really simple.

Code:

#include <aarch64.h> // include the architecture to use

addr = disasm(uint64_t addr, char *str);

Writing the disassembler was easy. On the other hand, I've spent 2 weeks copying the instruction table out of that dull DDI0487 documentation. It took me endless hours, often late at night. Although I've double checked it, there's a chance that I might have mixed up some (possibly vector) arguments. If you find any error with the disassembly, please check the instruction table text file, and let me know. I'll fix it right away.

Bests,
bzt
https://www.raspberrypi.org/forums/viewtopic.php?f=72&t=197946&p=1236244&sid=76abca57c029fc9b75c9e2c6ef12a32c
Memoization

The word memoization was coined by Donald Michie, a British artificial-intelligence researcher, to refer to function-level caching for repeating values. Today, memoization is common in functional programming languages, either as a built-in feature or as one that's relatively easy to implement.

Memoization helps in the following scenario. Suppose you have a performance-intensive function that you must call repeatedly. A common solution is to build an internal cache. Each time you calculate the value for a certain set of parameters, you put that value in the cache, keyed to the parameter value(s). In the future, if the function is invoked with previous parameters, return the value from the cache rather than recalculate it. Function caching is a classic computer science trade-off: It uses more memory (which we frequently have in abundance) to achieve better performance over time.

Functions must be pure for the caching technique to work. A pure function is one that has no side effects: It references no other mutable class fields, doesn't set any values other than the return value, and relies only on the parameters for input. All the methods in the java.lang.Math class are excellent examples of pure functions. Obviously, you can reuse cached results successfully only if the function reliably returns the same values for a given set of parameters.

Memoization in Groovy

Memoization is trivial in Groovy, which includes a family of memoize() functions on the Closure class. For example, suppose you have an expensive hashing algorithm, leading you to cache the results for efficiency. You can do so by using closure-block syntax to define the method and calling the memoize() function on the return, as shown in Listing 1. (I don't mean to suggest that the ROT13 algorithm—a version of the Caesar Cipher—used in Listing 1 is performance-challenged, so just pretend that caching is worth it in this example.)

Listing 1.
Memoization in Groovy

class NameHash {
    def static hash = {name ->
        name.collect{rot13(it)}.join()
    }.memoize()

    public static char rot13(s) {
        char c = s
        switch (c) {
            case 'A'..'M':
            case 'a'..'m':
                return c + 13
            case 'N'..'Z':
            case 'n'..'z':
                return c - 13
            default:
                return c
        }
    }
}

class NameHashTest extends GroovyTestCase {
    void testHash() {
        assertEquals("ubzre", NameHash.hash.call("homer"))
    }
}

Normally, Groovy function definitions look like rot13() in Listing 1, with the method body following the parameter list. The hash() function definition uses slightly different syntax, assigning the code block to the hash variable. The last part of the definition is the call to memoize(), which automatically creates an internal cache for repeating values, keyed on parameter.

The memoize() method is really a family of methods, giving you some control over caching characteristics, as shown in Table 1.

Table 1. Groovy's memoize() family

The methods in Table 1 give you coarse-grained control over caching characteristics — not fine-grained ways to tune cache characteristics directly. Memoization is meant to be a general-purpose mechanism for easily optimizing common caching cases.

Memoization in Clojure

Memoization is built into Clojure. You can memoize any function by using the built-in (memoize ) function. For example, if you have an existing (hash ...) function, you can memoize it via (memoize hash) for a caching version. Listing 2 implements the name-hashing example from Listing 1 in Clojure.

Listing 2. Clojure memoization

(defn name-hash [name]
  (apply str (map #(rot13 %) (split name #"\d"))))

(def name-hash-m (memoize name-hash))

(testing "name hash"
  (is (= "ubzre" (name-hash "homer"))))

(testing "memoized name hash"
  (is (= "ubzre" (name-hash-m "homer"))))

Note that in Listing 1, calling the memoized function requires an invocation of the call() method.
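Plain Java, for contrast, has no built-in memoize, but the same cache-on-first-call behavior takes only a few lines. This sketch is mine, not part of the original series, and assumes Java 8+ (which postdates these listings) for ConcurrentHashMap.computeIfAbsent:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class Memoizer {
    // Wrap a pure function in a cache keyed on its argument.
    public static <A, B> Function<A, B> memoize(Function<A, B> f) {
        Map<A, B> cache = new ConcurrentHashMap<>();
        return x -> cache.computeIfAbsent(x, f);
    }

    public static void main(String[] args) {
        Function<String, Integer> slowLength = s -> {
            System.out.println("computing " + s); // visible only on a cache miss
            return s.length();
        };
        Function<String, Integer> fast = Memoizer.memoize(slowLength);
        System.out.println(fast.apply("homer")); // computes, prints 5
        System.out.println(fast.apply("homer")); // cache hit, prints 5 again
    }
}
```

As with the Groovy version, the caller sees only an extra wrapper — and the purity requirement discussed above still applies, since caching an impure function would silently serve stale results.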
In the Clojure version, the memoized method call is exactly the same on the surface, with the added indirection and caching invisible to the method's user.

Memoization in Scala

Scala doesn't implement memoization directly but has a collection method named getOrElseUpdate() that handles most of the work of implementing it, as shown in Listing 3.

Listing 3. Scala memoization

def memoize[A, B](f: A => B) = new (A => B) {
  val cache = scala.collection.mutable.Map[A, B]()
  def apply(x: A): B = cache.getOrElseUpdate(x, f(x))
}

def nameHash = memoize(hash)

The getOrElseUpdate() function in Listing 3 is the perfect operator for building a cache. It either retrieves the matching value or creates a new entry when none exists.

Combining functional features

In the preceding section and in the last few Java.next installments, I've covered several details of functional programming, particularly as they pertain to the Java.next languages. However, the real power of functional programming lies in the combination of features and the way solutions are approached.

Object-oriented programmers tend to create new data structures and attendant operations constantly. After all, building new classes and messages between them is the predominant language paradigm. But building so much bespoke structure makes building reusable code at the lowest level difficult. Functional programming languages prefer a few core data structures and build optimized machinery for understanding them.

Here's an example. Listing 4 shows the indexOfAny() method from the Apache Commons framework (which provides a slew of helpers for Java programming).

Listing 4. indexOfAny() from Apache Commons
I'll gradually transform this code into Clojure. For the first step, I remove the corner cases, as shown in Listing 5. Listing 5. Removing corner cases public static int indexOfAny(String str, char[] searchChars) { when(searchChars) {; } } Clojure intelligently handles the null and empty cases and has intelligent functions such as (when ...), which returns true only when characters are present. Clojure is a dynamically (but strongly) typed, eliminating the need to declare variable types before use. Thus, I can remove the type declarations, resulting in the code in Listing 6. Listing 6. Removing type declarations indexOfAny(str, searchChars) { when(searchChars) { csLen = str.length(); csLast = csLen - 1; searchLen = searchChars.length; searchLast = searchLen - 1; for (i = 0; i < csLen; i++) { ch = str.charAt(i); for (j = 0; j < searchLen; j++) { if (searchChars[j] == ch) { if (i < csLast && j < searchLast && CharUtils.isHighSurrogate(ch)) { if (searchChars[j + 1] == str.charAt(i + 1)) { return i; } } else { return i; } } } } return INDEX_NOT_FOUND; } } The for loop — a staple of imperative languages — allows access to each element in turn. Functional languages tend to rely more on collection methods that already understand (or avoid) edge cases, so I can remove methods such as isHighSurrogate() (which checks for character encodings) and manipulation of index pointers. The result of this transformation appears in Listing 7. Listing 7. A when clause to replace the innermost for // when clause for innermost for indexOfAny(str, searchChars) { when(searchChars) { csLen = str.length(); for (i = 0; i < csLen; i++) { ch = str.charAt(i); when (searchChars(ch)) i; } } } In Listing 7, I collapse the code into a method that checks for the presence of the sought-after characters and returns the index when they're found. While I'm in neither Java nor Clojure but a strange pseudocode place, this when method doesn't quite exist. 
But the (when ...) macro in Clojure, which this code is slowly becoming, does. Next, I replace the topmost for loop with a more concise substitute, using the for comprehension: a macro that combines access and filtering (among other things) for collections. The evolved code appears in Listing 8.

Listing 8. Adding a comprehension

    // add comprehension
    indexOfAny(str, searchChars) {
        when(searchChars) {
            for ([i, ch] in indexed(str)) {
                when (searchChars(ch)) i;
            }
        }
    }

To understand the for comprehension in Listing 8, you must first understand a few parts. The (indexed ...) function in Clojure accepts a sequence and returns a sequence of numbered elements. For example, if I call (indexed '(a b c)), the return is ([0 a] [1 b] [2 c]). (The single apostrophe indicates to Clojure that I want a literal sequence of characters, not that I want to execute an (a ...) function with two parameters.) The for comprehension creates this sequence over my search characters, then applies the inner when to find the index of the matching characters. The last step in this transformation is to convert the code into proper Clojure syntax and restore the presence of real functions, as shown in Listing 9.

Listing 9. Clojure-ifying the code

    ;; Clojure-ify
    (defn index-filter [pred coll]
      (when pred
        (for [[index element] (indexed coll) :when (pred element)]
          index)))

In the final Clojure version in Listing 9, I convert the syntax to proper Clojure and add one upgrade: callers of this function can now pass any predicate function (one that returns a Boolean result), not just the check for an empty string. One of Clojure's goals is the ability to create readable code (after you assimilate the parentheses), and this function exemplifies that ability: for the indexed collection, when your predicate matches the element, return the index. Another Clojure goal is expressiveness in the fewest characters; Java suffers terribly in comparison with Clojure in this regard.
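For readers less at home in Lisp syntax, here is a rough Python analogue of Listing 9's index-filter (my own sketch, not the article's code). The fibo generator is a hypothetical helper standing in for the article's (fibo) function:

```python
def index_filter(pred, coll):
    """Lazily yield the indices where pred matches the element,
    analogous to the Clojure index-filter in Listing 9."""
    return (i for i, element in enumerate(coll) if pred(element))

def fibo():
    """Infinite, lazy Fibonacci sequence: 0, 1, 1, 2, 3, 5, ..."""
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

# All indices of heads in a sequence of coin flips:
flips = ["t", "t", "h", "t", "h", "t", "t", "t", "h", "h"]
heads = list(index_filter(lambda x: x == "h", flips))

# Index of the first Fibonacci number that exceeds 1,000:
first_big = next(index_filter(lambda n: n > 1000, fibo()))
```

Because index_filter returns a generator, applying it to the infinite fibo() stream terminates as soon as the first match is found, which is exactly the laziness the Clojure version relies on.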
Table 2 compares "moving parts" quantities in Listing 4 to those in Listing 9.

Table 2. Comparison of "moving parts"

The difference in complexity is telling. Yet although the Clojure code is simpler, it is also more general. Here I index a sequence of coin flips, modeled as the Clojure keywords :h (heads) and :t (tails):

    (index-filter #{:h} [:t :t :h :t :h :t :t :t :h :h])
    -> (2 4 8 9)

Notice that the return value is a sequence of all matching index positions, not just the first. List operations in Clojure are lazy when possible, including this one. If I only want the first value, I can (take 1 ...) from the result, or I can print them all, as I've done here.

My (index-filter ...) function is generic, so I can use it on numbers. For example, I can determine the index of the first Fibonacci number that exceeds 1,000:

    (first (index-filter #(> % 1000) (fibo)))
    -> 17

The (fibo) function returns an infinite but lazy sequence of Fibonacci numbers; (index-filter ...) finds the first one that exceeds 1,000. (It turns out that the Fibonacci value at index 17 is 1,597.) The combination of functional constructs, dynamic typing, laziness, and concise syntax yields great power.

Conclusion

Functional programming constructs yield benefits when used piecemeal, but they offer even more advantages when they're combined. All the Java.next languages are functional to one degree or another, enabling increasing use of this style of development. In this installment, I discussed how functional programming eliminates moving parts, making programming less error-prone, and the benefits of combining functional features. In the next installment, I begin an even more powerful illustration of this concept as I discuss how the Java.next languages make concurrency on the JVM easier.

Resources

Learn

- ROT13: ROT13 is an example of the Caesar cipher, an ancient encryption algorithm used by Julius Caesar.
- Apache Commons: Commons is a popular utility framework in the Java ecosystem.
- Groovy: Groovy is a dynamic variant of the Java language, with updated syntax and capabilities.
- Scala: Scala is a modern, functional language on the JVM.
- Clojure: Clojure is a modern, functional Lisp that runs on the JVM.
- Functional thinking: Explore functional programming in Neal Ford's column series on developerWorks.
- "Execution in the Kingdom of Nouns" (Steve Yegge, March 2006): An entertaining rant about some aspects of Java language design.
http://www.ibm.com/developerworks/library/j-jn12/
"Of course the USA needs to cut its deficit, but any government's deficits can, as a matter of fact, only be cut when private sector consumption, net investment and net exports grow."

Eh?

"Of course the USA needs to cut its deficit, but any government's deficits can, as a matter of fact, only be cut when private sector consumption, net investment and net exports grow."

Eh? Keynesianism.

Happy New Year!

Happy (?) Same Old Shit.

A lot of Keynesians don't realise they are Keynesians. Their starting point is that the economy has an agenda to it rather than being the result of voluntary human actions.

Dinero's ideas on Keynesianism are pure hogwash. Keynesians are perfectly well aware that economic activity results from "voluntary human actions". E.g. the decision as to how much work to do per week is a voluntary decision.

But their statements, such as the above, suggest intervention is necessary to some often unstated ends.

Not Keynesianism. Sectoral balance:

    Private (savings - investment) = Government (spending - income) + External (net exports - imports)

But he hasn't understood it. Private sector growth is not the only way of cutting the public deficit. Obviously we would like the government deficit to reduce because exports are growing, people are spending more and there is more net investment. But it is also possible to reduce the deficit without any change in the external balance. You do this by taxing savings in some way, whether by direct taxation, higher inflation (which is a tax on savings), or by reducing government spending, which has the effect of reducing private sector savings if investment and the external balance remain constant. I think that is obvious from the equation.

If the private sector wants to save but its savings are constantly eroded by the desire of government to cut its deficit, it fights back by reducing investment, which tends to push the economy towards recession. It also tends to reduce imports, though, so the external balance may improve.
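The sectoral balance quoted above is an accounting identity, so it can be checked numerically. The following toy check uses figures invented purely for illustration, not real data:

```python
# Sectoral balance identity:
#   Private (S - I) = Government (G - T) + External (X - M)
# All numbers below are made up to illustrate the bookkeeping.
S, I = 300.0, 220.0   # private saving, private investment
G, T = 500.0, 450.0   # government spending, government income (tax)
X, M = 180.0, 150.0   # exports, imports

private_balance = S - I          # net private saving
government_deficit = G - T       # government deficit
external_balance = X - M         # net exports

# The balances must sum by construction of the national accounts
assert private_balance == government_deficit + external_balance
```

With the external balance held fixed, shrinking the government deficit (G - T) must, by the identity alone, shrink net private saving (S - I): the private sector ends up saving less or borrowing more.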
It's all rather fluid, really. But what IS clear is that government deficit-cutting is not possible if the private sector wants to save (or cut its own debt, which is the same thing) AND the external balance is negative. That's the problem in the UK, and that is why the Government has been utterly ineffective at cutting the deficit. Their macro is rubbish. But then so is Ritchie's.

Money can circulate in the private sector without government.

That's a Keynesian paradigm analysis 🙂 Macro-economics is fundamentally and inescapably Keynesian.

Drawing in the other comments above, I think the point is that Keynesianism reduces individual actions to mere responses to aggregate stimuli ("investment", "confidence", "money illusion", etc.). It is thus fundamentally incapable of identifying problems in the economy due to anything other than aggregate variables; since recession is generally due to economic imbalance, it doesn't have the tools to analyse the situation, hence the simplistic sledgehammer-type theories of gross income, gross "output", gross "spending" and so on.

The other problem is that Keynes's aggregate variables are variously useless and irrationally defined, the most egregious example perhaps being when Keynes initially derives "investment" as the savings portion of income and then, arbitrarily, starts using it to mean "any government spending", even though (besides all else) a child from an uncontacted tribe in the Amazon could realise that the spending ought to go on the left-hand "income" side of the equation anyway, if a simple equation were any use (rather than a partial differential), which it isn't. So far as one can tell, the limit of Keynes's understanding of calculus was his ability to write "marginal" in front of things and then pretend he's got a derivative. Hence my earlier assertion, which caused a bit of a debate, that he seems to have been borderline innumerate. But that's another issue.
Ian B: I thought we had a truce: you don't talk rubbish about Keynes, and I won't point out how wrong you are.

Analyze this: how about if the government stopped spending so damn much?!

The unsaid starting point for Dickie's claim must be that government spending cannot be reduced?

Ian B, sectoral balance is not Keynesian. It is an accounting identity, not an economic theory. Though its definition is usually attributed to Wynne Godley.

Gamecock, I did explain what would happen if the government stopped spending so much. The private sector would have to make up the difference either by drawing on its savings or by reducing investment.

Oh, and Ian, PLEASE go and read some Keynes. Then maybe you won't talk such utter bollocks.

That's fundamental to Keynes. If output is below "full output", government cannot reduce spending, as this will reduce output further, because Keynesianism is based on a false paradigm of spending "driving" the economy. The basic idea is that a recession is due to people, for some supernatural reason, deciding to spend less en masse, so the State has to make up the difference by spending more. This is why the banksters like Keynes: the solution to every problem is to spend more money, which is borrowed from the banks either directly by consumers or indirectly by the State.

Frances, I've read The General Theory and numerous commentaries on it. Which is why I understand how disastrously wrong it is. Unfortunately, a lot of people have a vested interest in pretending it's true, particularly those who currently profit from mismanaging the economy. Hence, it is one of those stupendously wrong theories that refuses to die, like Marxism, Freudianism, etc.

Reduced government spending does not necessarily also require the private sector drawing on savings or investment. Money can circulate and be savings in the private sector without government spending featuring.

Is Keynesianism really economics?
Its focus is maximising work; that's not what most people would call economics.

We can just automagically create government debt out of thin air? Some person in this world will have to buy that debt, and that money can then not be used for consumption or investment in the private sector.

"Their starting point is that the economy has an agenda to it rather than being the result of voluntary human actions"

Classic leftist thinking is taking the collective and making a separate entity from whatever that collective was made up of. Thus, the "economy" can somehow magically exist without people. See also "state", "government", "companies", "society", etc.

"it is one of those stupendously wrong theories that refuses to die, like Marxism, Freudianism"

Freudianism is wrong? Seriously? Because humans are not at all driven by sex? That really is a bizarre thing to suggest. Still, one of your three examples actually being an example of what you suggest isn't a terrible hit-rate. Although I'm afraid you didn't quite meet Meatloaf's standard for 'ain't bad'.

Karl Popper is your friend.

Dinero, I'm afraid that simply isn't true. The money supply is not fixed; it is affected by government behaviour. Reducing "government" in the equation doesn't leave more money for the private sector, as you seem to think. Cutting government spending is fiscal tightening, which has the effect of reducing the money supply; it's the fiscal equivalent of raising interest rates. If the private sector maintains its savings level, then unless money comes into the economy from outside (via exports, FDI etc.) the amount of money actually circulating in the economy must fall, which tends to push the economy towards recession. Alternatively, the private sector may reduce its savings in order to maintain spending, leading to higher levels of personal and corporate debt.
The government is expecting the second of these: in 2010 the OBR forecast significantly increased household debt due to government fiscal tightening. The trouble is that households and businesses already carrying high levels of debt either can't or won't take on more, so they are cutting spending instead.

Monetary policy can counteract this effect to some extent, but there is considerable debate at the moment as to how effective monetary easing is when interest rates are on the floor and the entire economy is wearing hair shirts. There does seem to be some effect: M4 is growing, though at a snail's pace, and corporate bond yields have fallen, reducing large companies' borrowing costs (not that they were short of money anyway). But how much of this feeds through into productive activity in the real economy is much less clear.

Ian, if you've read Keynes, you haven't remotely understood him, or you wouldn't misrepresent Keynesian thinking so massively. Unless of course you are simply using "Keynesian" in the way that Ritchie uses "neo-liberal": a catch-all term for every economic theory that you don't like. In which case there is no point debating with you, because you aren't approaching this from a rational standpoint.

Frances, does the economy really work that way? Does the money supply falling cause recession, or is it merely a symptom? Remember, when you're using the term "money supply", you actually mean the quantity of borrowing against M0. M3 isn't "the money". It's credit.

The problem with Keynesian-style macro is that it tries to describe the whole world in terms of these few aggregate variables. But consider the tulipomania. What was the problem? Too many people growing too many fucking tulips because of a false assessment of the future value of tulip bulbs. What was the cure? Drastic reduction of the tulip sector of the economy (particularly the tulip bulb futures market).
Point is, no amount of mithering about the money supply, or aggregate investment, or state infrastructure spending, or government borrowing or government spending or encouraging Dutchmen to increase their consumer spending could do a damned thing about it. The problem was the tulips. A Keynesian Dutchman would have been entirely incapable of analysing the problem. It's the wrong conceptual model. It just doesn't work.

Oh, I've thoroughly understood him, Frances. But what do we mean by "understanding"? Consider Freud. You can "understand" him in the sense of reading all his books, internalising his every word and how to apply his theory, and becoming a Freud expert. But something is missing: you won't know that the theory is bollocks. This is the second form of understanding, which is to analyse whether the theory is correct, using some form of critical rationalism. By applying broader scientific reasoning, you will realise that the theory is bollocks, which the Freud expert will never know, because he has limited his knowledge of Freud to the internals of the theory.

Keynes is an internally consistent system. It can be thoroughly understood in that sense; so can many other internally consistent systems, such as Freud, or homeopathy, or astrology. But we should surely be more interested in the second meaning of "understanding" above, which is to consider whether this internally consistent system is consistent with the external world. And then, as with astrology, we find that it isn't.

Ian B> Popper was a pompous ass. His theory of science is itself unscientific by his own standard. I suggest you need to encounter more of Feynman's work in order to understand how science is actually done, rather than Popper's fantasies of how science is done. Freud's 'experimental' work is neither here nor there. The value of Freudianism is the recognition of Darwinian imperatives amongst humans, and that those exist is indisputably true.
So is it correct that you do concur that central banks issue currency in exchange for corporate bonds and commercial bank bonds, making money available for the private sector without government spending?

Oh my, Dave. When the theory is bollocks, pretend it has some more general "good message". That's a truly desperate rescue operation. Freudianism doesn't "recognise Darwinian imperatives" or anything else. It's a pile of mystical cobblers. However, I think it probable that your anger at my criticism of it is due to your repressed sexual desires for your mother. Obviously, if you disagree with that analysis, it proves me correct.

On science itself, I'm probably more of a Kuhnian. The point of bringing up Popper was that his impetus for developing his theory of science was a worthy attempt to provide a system for distinguishing between meritorious scientific theories and obvious pseudo-science like Marx and Freud (which were enormously popular among academics at the time). It was a worthy attempt.

– Ian B: Yep, Freud is not taken too seriously in psychology any more.

– Frances: Drawing the conclusion you do from the sectoral balance equation is no more than a re-statement of the Keynesian paradigm to insert governmental demand. As Ian B says, the money supply is a symptom of economic activity. If the private sector wants spending money it is available from banks (technically via long-term repurchase agreements of commercial banks with the central bank).

Frances, this part of your explanation eludes me: "Private (savings – investment)". All savings, unless in hoarding actual physical items or buying government bonds, surely end up being some other actor's investment, don't they? Doesn't matter where the money's put: bank accounts, building societies, the stock market. It doesn't just sit there. It ends up being passed to another actor to spend. And I do prefer the term spending to investment. Be it spending today's money for tomorrow, or tomorrow's money today.
Trying to give it the virtuous tag 'investment' can't hide that it's spending. Whether it turns out to have been an 'investment' only the future can show. You can't bank hopes.

Ian B: good, let's talk about tulipmania instead. Oh, I thought the Austrian School you're so fond of believes that bubbles are caused by an excessive supply of money. Are they wrong? In this case, probably not. The underlying problem was Amsterdam's adherence to a strict gold and silver standard, at a time when bullion was flooding into Europe.

Marx: wrong about communism, mostly right about everything else. Keynes: mostly right. Freud: a terrible doctor. His theories were sometimes right, sometimes wrong, but mostly too vague to be testable.

Bloke in Spain, yes, the term "investment" is much misused. In the sectoral balance equation it really means spending, but that includes investment spending for a future return (e.g. project finance) as well as consumption spending. Trouble is, if you put "spending" in the sectoral balance equation, people always misunderstand it as meaning consumption.

Another way of looking at this is that if the external balance is zero (net exports = imports), government deficit = private sector surplus. The point I was trying to make is that unless the government can somehow magically conjure up a trade surplus (which, as the entire world is trying to do this at the moment, seems unlikely), then cutting the deficit in the absence of growth must reduce the private sector surplus, which will show itself as reduced savings, higher debt, and/or reduced discretionary spending. Whether deficit reduction is achieved through spending cuts or tax increases is not the point from a sectoral balance point of view, though they do have different economic effects: spending cuts are thought to do less long-term damage than tax rises, though I suspect it depends on the nature of the spending and the nature of the taxes.

Ian, oh dear.
I give a monetarist explanation of the sectoral balance and you STILL witter on about Keynes. Please go and do some reading. I suggest Friedman, in this case.

No. The general argument is that any artificial dicking around with money (increasing it, decreasing it, the state directing its deployment, centralising interest rates, what have you) will tend to cause economic actors to make errors which lead to malinvestment. Which is why the general rule is that, wherever you are right now (boom, bust, whatever), the first rule for fixing things is to stop dicking around with the money. Which is of course the opposite of Keynes, who advises that the best cure for a disaster caused by dicking around is more dicking around. He was very fond of dicking around, was Keynes. And again, the basic problem in Amsterdam was too many fucking tulips.

Because they're both interpretations of the same dicking-around-with-the-money-supply paradigm. It really is not unreasonable to observe that Keynes set the "macro vs. micro" ball rolling, and all the various variations (including the weirdness of Knight and the Chicago school) are simply different interpretations of the same Keynes-derived paradigm. It's the same as the various different Marxisms, in that sense. Really, Frances, I do know what I'm talking about. This sort of "go and read a book" argument is rarely fruitful and generally smacks of desperation.

Hitler: wrong about the Jews, Lebensraum and the Aryan destiny through conquest, but other than that…

Dinero: "If the private sector want spending money it is available from banks"

I really, really don't want to get into another huge debate about how the money supply works. But I would like to make these points:

1) Banks only make money available via lending if they want to. Which at the moment they don't, much.

2) If banks want to make money available they can do so without any help from the central bank.
Most of the money in circulation is created by commercial banks, not the central bank. Loans create deposits, and those deposits don't have to be backed by reserves. Reserves are needed for deposit withdrawals (payments), not lending. Banks do NOT "lend out" central bank borrowings, or customer deposits either.

3) I'm well aware that believers in the money multiplier myth (such as Ian) like to distinguish between "money" and "credit". But I don't care which I use for my shopping, and the reality is that the money that people use for shopping is created by commercial banks in the form of credit. The only "real money" in circulation is notes and coins. The rest is commercial bank credit.

You seem to want to exclude government completely from the sectoral balance equation. If you do so, then it becomes this (I've amended it in the light of Bloke in Spain's comment):

    Private (savings - (consumption + investment)) = External (net exports - imports)

This is, of course, a country that does not have its own currency. Or a country where the private sector does not use the sovereign currency; Zimbabwe springs to mind. In fact it's effectively a country that has no government.

Ian B. Oh. So every macro-economic theorist is simply recycling Keynes, then. Wow. Presumably that includes Hayek, and Mises, and Rothbard.

"Zimbabwe springs to mind. In fact it's effectively a country that has no government."

Go to Harare, stand on a soapbox in the centre and expound in a loud voice about what a sack of shite Robert Mugabe is. You will soon find out if they have a govt or not.

Mr Ecks, so Mugabe's thugs can beat you up. Doesn't make them a government. As far as economic activity in Zimbabwe is concerned, there is no government.

Govt is about beating people up and killing them if a beating (or being sodomised in jail) is not enough. Sure, if enough people have been bullshitted (like you) you don't need to show the fist as much, but it is always there.
Frances: Hayek, von Mises and Rothbard all rejected the very existence of "macro". (Well, the last two certainly did; not sure about Hayek. He had a tendency to be a bit tepid.)

The multiplier is a "myth", is it? Look. In a gold (or other commodity standard) system, you have your gold reserves, which are the money, and certificates circulating for the gold. In a fiat system, M0 takes the place of the gold, and M3 is the gold certificates. The number of certificates in circulation expands and contracts in response to market conditions. The more lending there is, the more there are. It's not hard.

If you think I'm one of those fellows who wants to end fractional reserve banking, I'm not, if that is the justification for your "myth" barb. But you really ought, in thinking about economics, to "care" about what the money is, because it's pretty fundamental. If you don't, you might start believing some crazy shit about how expanding M3 might drive economic growth, and you're not that stupid, are you?

So anyway, I'll reiterate. When the economy is doing well, there is more lending (investment) and so the supply of gold/M0 certificates in circulation rises. When the economy crashes, the number of them falls due to the reduction in economic activity. This is not a reversible cause and effect.

Ian B, oh, and about those tulips. According to you, the cause of the tulip mania was TOO MANY tulips, and the price crash was achieved by reducing their supply. Blimey. For some supernatural reason the normal laws of supply and demand must have been reversed, then. Over-supply drives DOWN prices, not up. If the problem was too many tulips, the price would have been on the floor, not heading for the moon. And how on earth would cutting the supply of something reduce its price? Paul's explanation makes far more sense. Excessive inflows of bullion caused a vast increase in the money supply, leading to hyperinflation in a particular asset class.
Now, remind me, where recently have we seen high inflation in an asset class driven by excess inflows of hot money, followed by a crash when investors realised they had been sold a pup? Oh, and of course the tulip mania had nothing whatsoever to do with Keynesian gubmint nutters dicking around with the money supply…

Frances: well, it's a warning about money supply manipulations. It's a warning that increasing the money supply doesn't lead to economic growth. It leads to too many fucking tulips.

A crash occurs every time somebody expects a future return in excess of the return they actually get (or realise they are going to get). Every failed business is a local crash. In the case of big crashes, it's when everyone thinks the future return is far in excess of the actual return (tulips, railways, dot-coms, houses). The actual point of the crash, the sudden crash, is the moment that realisation dawns. In the case of tulips, the realisation that "we've got all these fucking tulips and nobody is going to buy them". So the price of tulips at that moment falls to the level implied by the actual excessive supply (caused by over-investment); i.e. bugger all. Meanwhile, everyone who wagered their life savings on tulip prices rising forever starts jumping off… windmills, presumably, in that case. So it's a case of price lagging supply. It does drive prices DOWN. That's the crash.

The cause of the boom is over-enthusiasm: a commonplace expectation that everyone can win. The availability of money (a flood of bullion, state fiscal expansion) thus enables the boom. It allows for far too many fucking tulips.

So this brings us back to macro. Once the price crash occurs, it's not a money supply problem that can be fixed by the State spending more money. It's caused by all these fucking tulips, or rather all these people in the tulip industry who need to go off and find another product that people actually want. And no amount of macro can fix that.
Ian, it is precisely because I DO care what money is, and how it works, that I made those comments. The role of bank lending in the creation and circulation of money is widely misunderstood, not least by you. The multiplier is a myth. Bank lending simply does not work in the way you describe. It didn't work that way even under a gold standard. Someone invented the money multiplier as a simplified explanation of the multiplication effect of fractional reserve lending, but they started from the wrong place. It starts with loans, not deposits, and that makes all the difference.

I was right that there's no point debating with you. Your considered opinion seems to be that macro is bollocks. That's not rational.

Frances, for a start, I'm not at all clear about what you think I think about the "money multiplier". So I don't know whether what you think I think is what I actually think. As I said, I'm not anti-frac. I'm anti-government-borrowing, but that's a different thing.

Anyway, you'll find that it's pretty much central to the Austrian School that there ain't no macro: that the attempt at arithmetical (and econometric) economics is fundamentally flawed for logical reasons. In that sense I may not be rational, but I'm being pretty orthodox (to the Austrian School).

Anyway, all I claimed (I think) is that the measure of broad money is simply proportional to the quantity of lending by banks. That doesn't seem particularly la-la-loopy to me. I'm rather mystified as to what else you think it is. Come to that, I didn't even mention a multiplier, did I?

Ian, the tulip story actually demonstrates that gubmints CAN'T control the money supply. Which is exactly the point I have been making throughout these comments. The sectoral balance describes what HAPPENS, not what governments aim to achieve. You've completely misunderstood it, despite the fact that I have TWICE pointed out that the sectoral balance is not an economic theory; it is an accounting identity.
It is not any sort of Keynesian macroeconomic paradigm. It's the law of unintended consequences. If a government has a deficit it can only cut it by extracting money from the private sector. If the private sector is growing, it can accommodate government deficit-cutting without much pain, though it tends to grumble (which is why politicians don't like to do it). But when the private sector is NOT growing, extracting money from it to reduce a government deficit causes actual damage. I'm amazed that someone as keen on private sector activity as you wants an already-damaged private sector to take even more pain in order to reduce the size of government. If I explain it that way, does it make more sense?

That doesn't mean I agree with Ritchie, though. It is in theory possible to cut a deficit when the private sector is not growing. All the sectoral balance shows us is that reducing the deficit and increasing private sector growth cannot happen at the same time. Government deficit-cutting causes economic activity to reduce even more. It's inevitable.

Ritchie makes me laugh. He is apparently blind to the fact that all of his proposals for increasing the tax take amount to fiscal tightening. So much for his ideas about restoring economic growth.

Ian, you claimed that M3 was proportional to M0. That is the multiplier. And it is wrong. If you look at Friedman's book on the Great Depression, he produced some fascinating charts that showed M3 actually FALLING while M0 was rising. In the UK in 2010-11 M4 did the same: it continued to fall due to private sector deleveraging while M0 was rising due to QE. M4 is now rising, but M0 is rising faster. They aren't really related at all at the moment because of QE. Banks simply don't "leverage" central bank money. The Fed has shown that M0 lags M3, which is consistent with the view that reserves creation responds to commercial bank lending, not the other way round. And it has always been like that, even under the gold standard.
Really, only a pure bullion standard or strict full-reserve banking would force reserve creation to precede lending.

Frances, I didn't claim that M3 is proportional to M0. I said the opposite. I said it's proportional to the lending in the economy (i.e. it *is* the lending in the economy). M0 simply acts as an ultimate limit on how much of it you can have; hence expanding M0 allows for more M3, but does not *cause* an increase in M3. In fact I specifically stated that M3 falls during a recession due to economic conditions (ergo, not due to a fall in M0). So Keynesian spending is a deliberate attempt to push more M3 (via government purchases) into the economy when it *should* be naturally falling. We may be at crossed purposes.

…also, though, if you look at QE, you've got the government trying to create more M3 by creating more M0, which suggests that *they* do believe in a simple multiplier.

Ian, yes, M3 IS the lending in the economy. But M0 does not act as any sort of brake on lending. Reserves are needed for payments, not lending. No central bank would refuse to create the reserves needed to settle aggregate payments across its RTGS system. So central banks create reserves on demand, without limit. Bank lending is not limited by reserve availability. Increasing bank reserves doesn't make banks lend, as QE has shown us all too clearly. And reducing bank reserves doesn't stop them lending, either. It is all a myth.

The constraints on bank lending are:

– regulatory capital requirements
– cost of funding (which is affected by central bank policy rates, of course; that's how monetary policy influences bank lending)
– the banks' view of risk versus return.

At present capital requirements are being tightened and banks are horribly risk-averse because of their awful balance sheets.
So the only thing keeping bank lending going is the fact that funding is very cheap due to historically low interest rates and because of government intervention (the Funding for Lending scheme, for example). Are you really sure you don’t want gubmint dicking around with the money supply – which is of course what this is? The government believes all manner of things that aren’t right. But the Bank of England’s view is more nuanced. The stated purpose of QE is to drive down real interest rates, reducing corporate borrowing costs and therefore encouraging large corporates to invest and expand. It was also hoped that some investors might actually spend their QE money on goods and services, too. However, the Bank of England noted in its literature on QE that although increasing bank reserves might encourage banks to lend more, there was no guarantee of this and they weren’t relying on it – which suggests the Bank, at any rate, is unconvinced by the multiplier. Ian We have been at crossed purposes ever since you decided that I was describing a Keynesian economic paradigm when I wasn’t. I’ve really got to stop torturing myself like this. As an economics graduate, I find the amount of ignorant shit that people like PaulB and Frances come out with is simply staggering. Of course I can go to the peer-reviewed literature to generate a rebuttal in my own mind (recent papers by the Minneapolis Fed are particularly illuminating), but I’m afraid the only way I can take ownership of this situation is to stop butting my head against the ferroconcrete wall of religious faith in gov’t intervention. “This time will be different!!” That’s nice. Richard Allan Where exactly did I demonstrate “religious faith in government intervention”? Or say “this time will be different”? All I did was explain the sectoral balance, which is not an economic theory – it is an accounting identity which simply recognises that whether you like it or not government is part of national accounting. 
Do you think I don’t read peer-reviewed literature? Of course I bloody do. Peer-reviewed literature does not agree on the value or otherwise of government intervention. All you’ve shown is that you haven’t read what I’ve said properly and have forgotten all the economics you’ve ever learned. Government spending is a drag on the economy, not a boost to it. Ian B: Here’s an article which explains tulipmania by reference to “a large increase in the supply of coin and bullion in 1630s Amsterdam.” But it was published in the notoriously Keynesian Quarterly Journal of Austrian Economics, so it must be wrong. – Frances Thanks for clarifying your comment. You have overlooked that private activity can grow without Government or External involvement, so there is nothing left of the sectoral balance equation. Gamecock. If the government spends less, the private sector must spend more. Which reduces its net savings. Which is what I said. Paul, I have that feeling I always have when debating with you, of trying to nail a jelly to a ceiling. It was me that brought up the tulipomania. It’s thus my prerogative to decide what it is supposed to be illustrating in the discussion. What it isn’t illustrating is what caused the problem. It was to illustrate that at the moment the economic collapse happens, a Keynesian analysis is useless. The Amsterdam problem was not describable in aggregates – income, investment, saving, etc. It was caused by a specific malinvestment – in tulips and all things associated with them. This defines the whole (or much of) the disagreement between Austrians and Keynesians (and other “macrophiles”). If you are the King of Dutchland looking at the economic chaos in Amsterdam, what do you need to know? The answer is, you need to know that there are too many fucking tulips. And that the only solution is liquidation of the excess tulip production in the Amsterdam economy. That’s what you need to know.
Which is why an Austrian would know the solution, and a Keynesian with his little slate and chalk trying to measure aggregate investment and aggregate spending and aggregate output wouldn’t. It’s not because there is too much stuff being produced, too little being produced, or too much or too little being spent. It’s because the wrong things are being produced. Do you get the point now Paul? Too many tulips. Too many fucking tulips. You see? There is no explicit necessity for the private sector to spend more if the government spends less @ Frances #6 Why do you assume that the government is *not* trying to reduce the external deficit? It is not making much headway since the Eurozone crisis cut spending and imports therein, but that was clearly part of their plan and the VAT rise was one of the few moves open to it under EU and WTO rules towards that end. Dinero No, I haven’t overlooked that. I have not suggested that private sector growth, or export growth, require government involvement. The sectoral balance does not indicate anything about causation – it is simply a balance. All I said was that cutting the government deficit requires extraction of money from the private sector in some manner, which is money that could have gone into private sector saving, investment and consumption. Therefore government deficit-cutting tends to impede private sector growth. That is obvious from the equation. I’d guess you find it easy to understand that tax rises extract money from the private sector, but more difficult to understand that spending cuts also do so. Spending cuts and tax rises don’t extract money from the same groups. But anything that forces the private sector to reduce its savings level in order to maintain spending – or vice versa – is a brake on private sector growth. Private sector spending cuts obviously reduce economic activity, but so does savings reduction, since longer-term investment in the economy comes from savings.
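For readers following along, the accounting identity this whole exchange keeps appealing to can be written out explicitly (standard sectoral-balances notation, which the thread itself never states: S is private saving, I private investment, T taxes, G government spending, X exports, M imports):

```latex
% The three sectoral balances sum to zero by construction:
% private balance + government balance + external balance = 0
(S - I) + (T - G) + (M - X) = 0
```

So a government deficit (G > T) is necessarily mirrored by some combination of a private sector surplus and an external deficit – which is the point both sides above are circling.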
If you like, we can have a separate debate about “crowding out” and whether reducing government activity encourages private sector growth. The sectoral balance doesn’t tell us anything about this. Dinero…..books have to balance…..if the government spends less and the external balance doesn’t change, the private sector either has to spend more or overall economic activity reduces. Which is what I have been saying all along. And to conclude from your equation that the sum is to be maintained is simply a formal definition of Keynesianism, which you said it was not – yes John77 I’m not assuming the government is not trying to improve the external balance. I know it is, but as I pointed out somewhere (probably to Ian), so is everyone else. When everyone wants to increase exports and cut imports, no-one can. Dinero This is not Keynesianism – it’s accounting, that’s all. The sectoral balance shows the effect of policy but does not define it. That’s why Ritchie doesn’t understand it – he thinks it supports the Keynesian view that deficit-cutting is only possible in a growing economy. But the sectoral balance is actually neutral. It is equally compatible with the liquidationist view that cutting government spending and allowing the economy to go into recession is a good thing, because it would clear out malinvestment and enable the private sector to expand into the gaps left by government. I don’t argue that deficit cutting makes private sector growth impossible in the long term, only that private sector growth is impeded WHILE THE DEFICIT IS BEING REDUCED. Why do you have a problem with this? I’d like to clarify something I said in my first comment. I suggested that government deficit-cutting was not possible when the private sector is saving (or deleveraging) and the external balance is negative. Here’s my clarification: – It is actually possible to cut the deficit under these circumstances, but only at the price of recession, which may be severe and prolonged.
This is what is happening in Greece, which has managed to cut its deficit by an historically unprecedented amount but is now in the sixth year of deep recession. The view of the EU leadership is that severe and prolonged recession is better than allowing the Greek government to continue spending at an unsustainable level. – However, Irving Fisher suggests that in an indebted economy, if the private sector contracts too far deficit-cutting becomes impossible. Once the amount of money needed to service the debt exceeds what can be extracted from the private sector, the government is forced to borrow more in order to pay its debts. Or print money, of course – and we all know what happens when a distressed government in a deeply recessed economy starts printing money to pay its debts, don’t we? Or, it can just default on the debt. As Stalin said, “how many divisions do the banks have?” The simple answer to all this is, of course, for governments not to borrow money. They do not need to, because they have the legal money creation power. The borrowing of money stems from the days when money was gold; not a gold standard but actual gold, and the only way to get more gold, even for a king, was to borrow it (unless he could expect a good ROI from invasion and plunder of some other kingdom). So, they borrowed money. Mainly, ironically, to finance the invasion and plunder of other kingdoms. Whatever. It’s an anachronism. Ideally, the money supply (M0) would then just be static. Private banks would be free, like everyone else in the economy, to do what they wished with it; including crediting promissory credit to their account holders, etc etc. If the government really was still silly enough to want to expand the money supply, they would then simply mint it (electronically, these days) and spend it. And have no interest to pay. It’s been a long time since we paid for anything with bags of gold. It’s time government finances caught up with that.
There’s no reason for an economy, as a whole, to be indebted. It’s lunacy. IanB: yes, I understand thank you. You are anxious to tell us why it would be problematic to attempt to maintain a tulip bulb price of several thousand guilders. No doubt we are all suitably grateful. I’m grateful that you’re grateful Paul. Now, can you guess what this tells us about recessions due, not to tulips but, say, railway shares, dot coms or subprime mortgage lending? IanB Yes, indeed, it could default on the debt. In fact triggering hyperinflation by printing money to pay debts amounts to default anyway. Debts paid with worthless money haven’t been paid. Not all governments have the power to print money, of course. Those that don’t – notably the Eurozone governments – have no choice but to borrow. We do tend to treat our fiat currency system as if it is a gold money system, don’t we! I think a lot of people are still intrinsically unhappy with the idea of governments just creating money. Well Frances, that was what I meant back up the thread about the M0 being the gold and the M3 being the gold certificates. We still think that way. History seems to tell us why goldbuggery won’t work; you start with gold, then people start circulating the certificates, then they start thinking of the certificates as “the money” and at that point you’re into something virtual, and inevitably will end up with a fiat anyway. So, we may as well live with that reality. In which case, we may as well just try to find some constitutional means to limit the money printing (“no more than 3% in any one year”) and save ourselves the interest. Which is pretty much what our mate Milton Friedman suggested. I can’t see any way forward for the likes of Greece that *doesn’t* amount to a default, anyway. BTW, have you read, “When Money Dies”? 
One thing that interested me was that the ultimate solution to the Weimar inflation was to cobble together a new currency that was actually no more sound than the old one; but simply because it was a new one, it got the confidence back that the old mark had lost, which brought the hypervelocity to an end. A lot of the Weimar problem was actually velocity. People were being paid first thing in the morning, then getting an hour off work to go spend it before it lost its value! Yes Ian, it tells us not to try to maintain prices of, say, railway shares after a railway share price bubble bursts. Does anyone say otherwise? I don’t think anyone but you has said anything about maintaining the price of anything Paul. So no, nobody has said anything otherwise. What it tells us is that the recession isn’t a monetary phenomenon. It might also tell us that there is no such thing as the marginal propensity to consume tulips, but I think you’ve already guessed that part. Ian: a few years ago I broke my ankle in a running accident. I went to the hospital where the doctor carefully examined me and told me that since the injury hadn’t been caused medically it would be quite wrong to treat it medically. So I went to a different hospital. Ian, I’m pretty much in agreement – especially as government debt is becoming more and more like cash anyway. It is nearly as liquid as a demand deposit and carries little more in the way of interest. And its nature and purpose is changing. We all know it isn’t really needed for government financing any more, but that doesn’t mean it isn’t needed. The financial world is a very, very strange place these days….. I read a paper from BIS the other day that suggested that, in order to ensure there was a sufficient supply of safe assets for investment and funding collateral, major governments should issue unlimited debt in response to financial system demand – rather as central banks issue unlimited reserves in response to bank demand.
But to ensure that these debt assets remained “safe”, governments should also maintain strict control of government spending and run balanced budgets or primary surpluses. I think that’s sort of full reserve banking for governments, isn’t it – borrow the money, but put it safely away in a vault so there is no possibility of not being able to return it…..My mind boggled, anyway. We really have to reform this idiotic system. Greece has already technically defaulted twice (PSI early last year, and the recent PSI/OSI deal). And as its underlying problems haven’t been solved and its debt pile is not even remotely sustainable, it is bound to do so again. Whether or not it remains in the Euro, its debt will eventually have to be written off, I reckon. That outcome is already priced in – there isn’t really a market for Greek debt. I’ve read “When Money Dies”. It’s absolutely riveting. Yes, you’re right about velocity – I hadn’t thought of it like that, but V of course rises exponentially in hyperinflation. We normally think of increasing V as a good thing, but not like that. And confidence – really the renewed confidence was not so much in the currency but in the government. Creating a new currency was symbolic of the government regaining control. Political chaos seems to be strongly associated with hyperinflation.
https://www.timworstall.com/2013/01/ritchie-on-macroeconomics/
In my last project I had a requirement where we would be getting multiple records in a single file, and we had to loop through the file to get a single record and process it. I got a very good article on this written by Jan. Thanks Jan. When I was trying to implement the tutorial provided by Jan, I faced a few silly problems, obviously because of my lack of knowledge. So, I thought of putting the same in a simpler way. So, we will take the same example to develop our tutorial, where we have the customer information containing the CustomerID and Name, and this information will come to us as a batch file under a Customers node. We will execute the following steps to implement this tutorial: create a schema named Customer, and add CustomerID and Name properties as the child field elements to the schema. Note: Make sure that the schema looks similar to the above image; for me it created problems. The point is to set xs:int and xs:string as the “Data Type” property and not the “Base Data Type” property. Then create a second schema, set its Envelope property to Yes in the Properties window, and name its root node Customers. Next, set the Imports property of the schema node. You’ll get a dialog window in which you can add an “XSD Import” of the CustomerDocument schema. Then add a new child record node under the Customers node and name it Customer. Set the Data Structure property of this new node to “ns0:Customer”. (If you haven’t changed the namespace: If you don’t like the ns0 notation, and want to put proper notation, use the “ab|” tag provided in the import dialog. See the image shown below for more details.) Note: If you don’t want to use an XSD Import, you can set the Data Structure property to “xs:anyType”. When you say “xs:anyType” it will remove the child nodes. That is why this way is not preferable, as it will increase the complexity. Finally, set the Body XPath property of the Customers node by clicking the ellipsis button and pointing to the Customers node.
The property will be set to: /*[local-name()='Customers' and namespace-uri()=''] Note: Make sure that your XPath is exactly the same as the one given above; by default it will append some other code as well. So now we have created the perfect document as well as the envelope schemas. If you want to make sure before going further, you can use the utility called xmldasm.exe, which can be found in the <sdk>\Utilities\PipelineTools folder. From here you can copy the xmldasm.exe and PipelineObjects.dll files to your working folder and execute the following command at the command prompt: xmldasm batch.xml -ds customerdocument.xsd -es customerenvelope.xsd If everything is fine, then you will get a separate file for each record given in the batch.xml file. You can also use the IDE to generate a valid XML file for you from the schema using the following steps: Next you need to configure a new ReceivePipeline in which the schemas created above will be used: add a ReceivePipeline to your project and name it CustomerReceivePipeline. Now the CustomerReceivePipeline can be used in an orchestration; so let’s do that: BatchOrc. Click on Next and provide the port name as ReqPort and click Next, which will take you to the following screen: Enter the details given above and click Next. This will take you to the following screen where we will configure the pattern of the port, like the type of port (receive or send), and specify the listening port; see the screen shown below for more details: Note: Note that we have to specify the CustomerReceivePipeline we created above as the receive pipeline. In a similar way, we can create the sending port with the direction “I’ll always be sending messages…” and the receive pipeline as “XMLTransmit”. Refer to the screen below: Note: If you want to send the data to a SQL database for insertion then you have to bind the port later. Maybe I will discuss this in my next tutorial. Hope this tutorial will help you to develop your cool orchestration.
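For reference, a batch.xml input matching the schemas described in this tutorial might look like the following (the CustomerID and Name values are made up for illustration):

```xml
<!-- Customers envelope wrapping two Customer document records -->
<Customers>
  <Customer>
    <CustomerID>1</CustomerID>
    <Name>John Doe</Name>
  </Customer>
  <Customer>
    <CustomerID>2</CustomerID>
    <Name>Jane Doe</Name>
  </Customer>
</Customers>
```

Disassembling a file like this with xmldasm should yield one output file per Customer record.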
If you need any further help do mail me at gaurang.desai@gmail.com.
http://www.codeproject.com/KB/biztalk/BizEnvelop.aspx
If you are a front end developer who is new to the world of GraphQL and you're thinking about getting started with it, this article is for you. In this article, we will explore GraphQL basics and kick start our journey with it by building a simple project. What is GraphQL? GraphQL is a query language that lets apps fetch data from APIs. But what it does differently is that it allows clients to specify how to structure the data when it is returned by the server. This means that the client only asks for the data it needs and even specifies the format in which it needs the data. But what problem does it actually solve? It solves the problem of under fetching and over fetching. Ok but what's that? Well, let me tell you. Let's say you only need to display a userName, userImage, and Name in your profile page on your website or app. But when you request the data you are getting lots of other information about the user which you don't need. This is called over fetching – you are fetching a lot of data, even the data you don't need. On the other hand, under fetching is when you get less data than you need. So neither one is great. You might think ok, that's not a problem at all. Well, it's not a big problem in small scale applications. But what about in large scale applications that have millions of users? In those cases, over fetching and under fetching waste a lot of resources, which is where GraphQL comes in. How to Get Started With GraphQL Here we'll cover some key concepts you need to know before getting started with GraphQL. GraphQL PlayGround GraphQL playground is an interactive, graphical IDE for GraphQL where you can visually explore the server. In it you can test various GraphQL queries and see their results in front of your eyes. Here is a link to GraphQL playground you can check out. If you click on the play button it will run the query. How do you request, write, or post data in GraphQL? You request data through a query in GraphQL.
And to write or post data, you use mutations. Whenever we perform a GraphQL operation, we specify whether it is a mutation or a query. Then we name that operation, and this is the basic way of performing a GraphQL query. GraphQLOperationType Name { ... } To make a simple query, the syntax would be: query getData { ... } Similarly, to add a mutation we would write mutation in place of query. Now since we know the basics, let's get our hands dirty. We will be using the Anilist API to get a list of anime shows. How to Use Apollo Studio You've gotten a small taste of GraphQL playground, but there's something even more awesome called Apollo Studio. It makes life easier as a front end developer. In it, you just need to select the fields you want and it writes a query for you. From the left hand side select the fields you want in your Query, and that's it. Apollo Studio will automatically create a query for you. Now you've made the query, but how do you use it in your application? Well, let's get started building a simple Anime App with it. We'll use React in this project, but you can choose any framework or library you'd like. Firstly, create a new project in React: npx create-react-app graphql-example Now once the project is created, go inside the project directory and install the Apollo client. npm install graphql @apollo/client Once it's done, go to src/index.js and import ApolloClient, InMemoryCache, and ApolloProvider: import {ApolloClient, InMemoryCache, ApolloProvider} from '@apollo/client'; ApolloClient is a class which represents the Apollo client itself, and we use it to create a new client instance. Here we need to provide a couple of things to it. One is the URI, where we specify the URL of our GraphQL server. Also, every instance of our Apollo client needs a cache so it can reduce the network requests and make our app much faster.
This is what our new client looks like:

const client = new ApolloClient({
  uri: 'https://graphql.anilist.co', // AniList's public GraphQL endpoint
  cache: new InMemoryCache(),
})

Now we need to make this client available throughout our component tree, so we wrap our app's top level component in ApolloProvider. Now we are done with the initial setup, so it's time to make a query and ask our API for the data – but how do we do that? We can do so using the useQuery hook. But before that we need to define a query, which we can do using the gql template literal tag (we need to wrap our query inside it). So now, import these two from the Apollo client:

import { useQuery, gql } from '@apollo/client';

After importing them, we'll wrap our query inside gql:

const AnimeList = gql`
  query Query {
    Page {
      media {
        siteUrl
        title {
          english
          native
        }
        description
        coverImage {
          medium
        }
        bannerImage
        volumes
        episodes
      }
    }
  }
`;

At this point you must be wondering: if the query part is done, how do we get data from it now? That's where the useQuery hook comes in handy. It returns loading, error, and data properties we can use.

const { loading, error, data } = useQuery(AnimeList);

For now we can just display the data to check whether our app works or not:

if (loading) return (<>Loading</>);
if (error) return (<>{JSON.stringify(error)}</>);

return (<>{JSON.stringify(data)}</>);

Well, it works for now – time to style it. Maybe we can use optional chaining to implement that nicely:

<div className="container">
  <h1> 🐈 Anime List </h1>
  <hr width="80%" />
  {data?.Page?.media.map(anime => (
    <>
      <div className="card">
        <img src={anime.coverImage.medium} />
        <div>
          <h1>{anime.title.english}</h1>
          <div className="episodes">Episodes <b>{anime.episodes}</b></div>
          <div dangerouslySetInnerHTML={{ __html: anime.description }}></div>
        </div>
      </div>
      <hr width="75%" />
    </>
  ))}
  <div className="buttonContainer">
    {page != 1 && <button> Previous Page</button>}
    <div className="pageText">{page}</div>
    <button onClick={NextPage}> Next Page </button>
  </div>
</div>);

You can check out this GitHub repo for the CSS file.
Now we are able to get a list of anime shows from the API. So what do we need to get them from the next page of the app? We need to pass a variable that has a page name into the query. That's where variables in GraphQL come into the picture. First, go to Apollo Studio and click on the arguments on the left hand side (first go to root > query > page and you'll see it): Click on page and it'll add an argument to your query. Also notice the variable page in the variables section; you can change its value and play around a little bit with it, and the data will change according to the page. Now we need to pass this variable into the query – and then we'll be able to display the next page's anime in our app. For that we'll be using the useState hook to keep track of our current page's value. We also need to make functions to increment and decrement it as well.

const [page, setPage] = useState(1);

// this is how we pass the page into the query
const { loading, error, data } = useQuery(AnimeList, {
  variables: { page: page }
});

const NextPage = () => {
  setPage(page + 1);
}

const PreviousPage = () => {
  setPage(page - 1);
}

<div className="buttonContainer">
  {page != 1 && <button onClick={PreviousPage}> Previous Page</button>}
  <div className="pageText">{page}</div>
  <button onClick={NextPage}> Next Page </button>
</div>

And now we're done building our simple app with GraphQL. If you want to check out the codebase, here is the link. Wrapping Up In this article, we have covered some of the basic concepts to help you get started using GraphQL. Thank you for reading, and happy coding.
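One closing footnote: it can demystify Apollo to see what actually goes over the wire. A GraphQL request is just an HTTP POST whose JSON body carries the query string and a variables object. The sketch below builds such a body by hand (buildRequestBody is a made-up helper, and the query is shortened for illustration):

```javascript
// Build the JSON body of a GraphQL HTTP request by hand.
// Apollo does this for us, but seeing the raw shape demystifies it.
function buildRequestBody(query, variables) {
  return JSON.stringify({ query, variables });
}

const animeQuery = `
  query Query($page: Int) {
    Page(page: $page) {
      media {
        siteUrl
      }
    }
  }
`;

const body = buildRequestBody(animeQuery, { page: 2 });
console.log(body);
```

A plain fetch call could POST this body to the GraphQL endpoint with a Content-Type: application/json header and get back the same data Apollo would.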
https://www.freecodecamp.org/news/graphql-for-front-end-developers/
Complex URL schema with i18n and l10n
This topic contains 13 replies, has 3 voices, and was last updated by detestable 6 years, 2 months ago.
We are starting a web project in JEE and I’m trying to evaluate if Rewrite can be used for our rewriting needs. Despite the lack of documentation, we are quite interested in your plugin because of its flexibility and configurability. In our project we need to have information in the URL that informs us of where the visitor resides and the language they speak. Here is the schema: [ISO 3166-1].domain.com/[ISO 639-1/]path/subpath[…] For example, for the United States we would have something like us.domain.com for the root path. In the case of Switzerland, where 4 languages are spoken (French, Italian, German and Romansh):
ch.domain.com/fr/
ch.domain.com/it/
ch.domain.com/de/
ch.domain.com/rm/
The rule is simple: if there is only one official spoken language in the country, we won’t append the language code at the beginning of the path. The best example I found to achieve this is the domain showcase, which would handle the case of [something].domain.com, but I’m quite afraid of the optional part with the language code. Is this possible to realize with OCPSoft Rewrite? Hi there, Yes, this sounds doable quite easily. It might take a little creativity but this should work fine. It’s made much easier still since you know which languages you need to support in each country, thus you can specify constraints to ensure that they are correct. If you post a little more of your URL schema and requirements, I can try to help you learn to build the rules. ~Lincoln Wow cool, good news! I think the most complex part of our needs is the optional part at the beginning of the path.
For the subdomain part, the showcase is quite explicit and a rule like this should do the job:

@Override
public Configuration getConfiguration(final ServletContext context) {
    return ConfigurationBuilder
        .begin()
        .defineRule()
        .when(Domain.matches("{subdomain}.domain.com")
            .where("subdomain")
            .bindsTo(El.property("#{clientStuff.subdomain}"))
            .and(DispatchType.isRequest()));
}

With this kind of bean:

// (we are using CDI instead of the JSF container)
@Named
@SessionScoped
public class ClientStuff implements Serializable {

    private String subdomain;

    public String getSubdomain() {
        return this.subdomain;
    }

    public void setSubdomain(String subdomain) {
        // BTW, when testing, this setter is called twice (a System.out.println
        // would produce 2 outputs for a call on en.domain.com for example)
        this.subdomain = subdomain;
    }
}

When testing I made another HttpConfigurationProvider for other mappings:

@Override
public Configuration getConfiguration(ServletContext t) {
    return ConfigurationBuilder.begin()
        .addRule(Join.path("/").to("/WEB-INF/jsf/index.xhtml"))
        .addRule(Join.path("/login").to("/WEB-INF/jsf/login.xhtml"))
        /* And so on ... */;
}

Which would work flawlessly for:
en.domain.com
en.domain.com/login
uk.domain.com
uk.domain.com/login
(And so on)
Now for the optional part of the path for the language I’m clueless. Let’s stick with the Switzerland example: we know that subdomain “ch” would involve 4 possibilities: de, it, fr and rm =>
ch.domain.com/de/
ch.domain.com/de/login
ch.domain.com/it/
ch.domain.com/it/login
(And so on)
I need to get the “de” or “it” without confusing it with the “login” information. Shall I add a ConfigurationProvider that would act “between” the domain configuration and the path configuration? Ooh, while writing this an idea came to me: I could inject my ClientStuff into my configuration provider and rebuild the rules, but I’m afraid to lose some flexibility. I’ll give it a try and come back with what I did. Thanks for your answer! Why not do this?
Add another rule in your routing config:

@Override
public Configuration getConfiguration(ServletContext t) {
    return ConfigurationBuilder.begin()
        .addRule(Join.path("/{lang}").to("/WEB-INF/jsf/index.xhtml")
            .where("lang").matches("(de|fr|it|rm)"))
        .addRule(Join.path("/{lang}/login").to("/WEB-INF/jsf/login.xhtml")
            .where("lang").matches("(de|fr|it|rm)"))
        /* And so on ... */;
}

Would that do what you want? ~Lincoln Finally I failed with my idea. Your configuration works perfectly when there is the need of a /{lang} in the path, but in order to handle for example en.domain.com (I don’t want a /{lang} in that case) it won’t match. I have the feeling that I’ll need to build custom rules depending on the subdomain. What do you think? Thank you again! [EDIT] I just noticed the “Add another rule” in your reply. Then I understand I have no choice, I’ll have to create different rules depending on the subdomain. Thank you a lot for your support! I may come back with some other questions later! The aim was to avoid writing .addRule(Join.path("/{lang}").to("/WEB-INF/jsf/index.xhtml").where("lang").matches("(de|fr|it|rm)")) + .addRule(Join.path("/").to("/WEB-INF/jsf/index.xhtml")) to avoid having 2 rules to maintain in the case of changes, but I’ll stick with this solution for the moment, which does exactly what I needed! Many thanks! You’re welcome! We can always make improvements. How would you like to see this work, in terms of syntax? What do you think? ~Lincoln Hmmm, maybe my case is quite specific, but it would be possible to handle it with regexes.
What do you think about a full regex rule like this:

private static final String pathExpr = "(?<lang>de|fr|it|rm)";
private static final String rootLangPath = "^(/" + pathExpr + ")?";

@Override
public Configuration getConfiguration(ServletContext t) {
    return ConfigurationBuilder.begin()
        // Binds lang once for all
        .addRule(Regex.path(rootLangPath + ".*") // permissive regex
            .where("lang")
            .bindsTo(El.property("#{langBean.lang}")))
        // Index rule
        .addRule(Regex.path("^/" + pathExpr + "?$")
            .to("/WEB-INF/jsf/index.xhtml"))
        // Login rule
        .addRule(Regex.path(rootLangPath + "/login$")
            .to("/WEB-INF/jsf/login.xhtml"))
        // Logout rule
        .addRule(Regex.path(rootLangPath + "/logout$")
            .to("/WEB-INF/jsf/logout.xhtml"))
        /* And so on ... */;
}

I have to admit that it looks like patchwork, but it allows us to handle one rule per path, with our optional lang. I don’t see the harm in multiple rules – I think you’re probably better off, and more maintainable, with the former. Not to mention that you lose out on the outbound URL-rewriting with your regex approach. However, we could possibly add something like: Join.paths("/login", "/{lang}/login").to("login.xhtml").where("lang").matches(...) That might be nicer. The outbound URL would be selected based on order and availability of named parameters (‘lang’ in this case.) Christian Kaltepoth (Moderator) I’m not sure I like this. And this only works if the number of parameters is different for each pattern, right? And it’s not easy to understand what is happening behind the scenes, so it may confuse people. I for myself prefer to have a single URL for a page. If there are alternative URLs they should be redirected to the correct one. That’s a pattern that makes most sense IMHO. Just my two cents. Yeah, I agree. I’m finally joining you, as most of the time URLs depend on i18n. Thank you for your support!
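For what it’s worth, the optional-language pattern at the heart of this thread can be sanity-checked with plain java.util.regex, independently of Rewrite (the class and method names here are made up for illustration, and the named group is dropped for brevity):

```java
import java.util.regex.Pattern;

// Verifies that a single pattern accepts both "/login" and "/{lang}/login"
// for the four Swiss language codes, and rejects unknown codes.
public class LangPathCheck {

    static final Pattern LOGIN =
        Pattern.compile("^(/(de|fr|it|rm))?/login$");

    static boolean matchesLogin(String path) {
        return LOGIN.matcher(path).matches();
    }

    public static void main(String[] args) {
        System.out.println(matchesLogin("/login"));    // no language prefix
        System.out.println(matchesLogin("/fr/login")); // French prefix
        System.out.println(matchesLogin("/en/login")); // not a Swiss code
    }
}
```

This is the same shape the rootLangPath + "/login$" concatenation proposed above produces.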
Scientific Python 3: The Power Of scipy

15 May 2019

- Numerical integration (scipy.integrate)
- Convolution (scipy.signal)
- Function optimization (scipy.optimize)
- SVD image compression (scipy.linalg)
- Quick Statistics (scipy.stats)
- Frequency domain transformations (scipy.fftpack)
- Let’s go wild (scipy.spatial, scipy.ndimage)
- Appendix

The scipy package provides a variety of functionalities, which can be applied to a broad range of scientific research. It is quite common for us to ignore the fact that a function already exists in scipy and waste time finding other packages or reinventing the wheel. In this post, I will show a series of applications for most scipy submodules, just to let readers get a grasp of how versatile this package is. Before deciding to write one’s own code, it is always better to refer to the scipy documentation and see if there is anything helpful.

Numerical integration ( scipy.integrate)

Assume that we want to compute the following integration:\[I = \int^{1/2}_{y = 0}\int^{1-2y}_{x=0} xy~dx~dy\]

It can be seen that\[\begin{align*} I &= \int^{1/2}_{y = 0}y\int^{1-2y}_{x=0}x~dx~dy\\ &= \frac{1}{2} \int^{1/2}_{y = 0}(2y-1)^2y~dy\\ &= \frac{1}{2}\left.\left( y^4 - \frac{4}{3}y^3 + \frac{1}{2}y^2 \right)\right\rvert^{1/2}_{0}\\ &= \frac{1}{96}. \end{align*}\]

To compute this with scipy, we can write the following code:

from scipy.integrate import dblquad

# dblquad integrates func(y, x); the integrand here is symmetric
# in x and y, so the argument order does not matter
intVal, error = dblquad(lambda x, y: x * y, 0.0, 0.5, 0.0, lambda x: 1-2*x)
print(intVal - 1.0/96)

The difference is 1.734723475976807e-18, which indicates that the error is very small.
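If dblquad's argument convention feels opaque, the same result can be cross-checked with two nested calls to quad (a quick sketch, not from the original post):

```python
from scipy.integrate import quad

# Inner integral over x for a fixed y, then outer integral over y.
inner = lambda y: quad(lambda x: x * y, 0.0, 1.0 - 2.0 * y)[0]
val, err = quad(inner, 0.0, 0.5)
print(val - 1.0 / 96)  # again essentially zero
```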
Convolution ( scipy.signal)

Take the grayscale Lenna image as an example, where we would like to apply convolution with a Sobel kernel, which is given by\[\begin{bmatrix} -1 & 0 & 1\\ -2 & 0 & 2\\ -1 & 0 & 1 \end{bmatrix}.\]

With the help of scipy, it can be done with several lines of code:

from PIL import Image
import numpy as np
from scipy.signal import convolve2d

# open the image
img = Image.open('2019-05-14-sci-py-1.jpg')
# only take the first channel,
# convert to floating point and normalize
npImg = np.asarray(img)[:, :, 0].astype(np.float32) / 255.0
sobel = np.asarray(
    [
        [-1, 0, 1],
        [-2, 0, 2],
        [-1, 0, 1]
    ], np.float32
)
result = convolve2d(sobel, npImg)
# normalize the result to [0, 255]
result -= result.min()
result /= result.max()
result *= 255.0
pilResult = Image.fromarray(result.astype(np.uint8))
pilResult.save('2019-05-15-sci-py-1.jpg')

The result is shown as below.

Function optimization ( scipy.optimize)

Consider the following optimization problem: assume the equation of the circle is \(x^2 + y^2 = 1\), and there are two fixed points \(A(0, -1)\) and \(B(1, 0)\) on the circle. Let \(P\) be an arbitrary point on the circle. Find the coordinate of \(P\) that maximizes the area of \(\triangle PAB\).

The answer is pretty obvious: the area of \(\triangle PAB\) is maximized when \(P\) is farthest away from line \(AB\). The coordinate of \(P\) at this point is \(P\left(-\frac{\sqrt{2}}{2}, \frac{\sqrt{2}}{2} \right)\).

Converting this into a constrained optimization problem, denoting point \(P\) by \(P(x, y)\): line \(AB\) has equation \(x - y - 1 = 0\) and \(\lvert AB \rvert = \sqrt{2}\), so the objective is\[\begin{align*} \max_{x, y} f(x, y) = \frac{1}{2} \cdot \sqrt{2} \cdot \frac{\lvert x - y - 1 \rvert}{\sqrt{2}} = \frac{1}{2} \cdot \lvert x - y - 1 \rvert, \end{align*}\] subject to\[\begin{gather*} -1 \leq x \leq 1\\ -1 \leq y \leq 1\\ x^2 + y^2 = 1.
\end{gather*}\]

We can write the following code to solve this problem numerically:

from scipy.optimize import Bounds, NonlinearConstraint, BFGS, minimize

# it is negated because we want to compute the maximum
def target(x):
    return -0.5 * abs(x[0] - x[1] - 1)

def constraintFunc(x):
    return x[0]**2 + x[1]**2

# the bounds of x and y
bound = Bounds([-1.0, -1.0], [1.0, 1.0])
constraint = NonlinearConstraint(constraintFunc, 1.0, 1.0)

# using the BFGS hessian approximation as an example here,
# though it is not ideal for linear functions
res = minimize(target, [0.0, 1.0], method='trust-constr',
               jac = '2-point', hess = BFGS(),
               constraints=[constraint], bounds=bound)
print(res)

The result shows:

barrier_parameter: 1.0240000000000006e-08
barrier_tolerance: 1.0240000000000006e-08
cg_niter: 29
cg_stop_cond: 1
constr: [array([1.]), array([-0.70710665, 0.70710691])]
constr_nfev: [90, 0]
constr_nhev: [0, 0]
constr_njev: [0, 0]
constr_penalty: 1.0
constr_violation: 1.687538997430238e-14
execution_time: 0.06944108009338379
fun: -1.207106781186541
grad: array([ 0.5, -0.5])
jac: [array([[-1.4142133 , 1.41421384]]), array([[1., 0.], [0., 1.]])]
lagrangian_grad: array([7.29464912e-09, 7.29464616e-09])
message: '`gtol` termination condition is satisfied.'
method: 'tr_interior_point'
nfev: 90
nhev: 0
niter: 40
njev: 0
optimality: 7.29464911613155e-09
status: 1
tr_radius: 50837982.540088184
v: [array([0.35355337]), array([-1.16498790e-07, -5.85727216e-08])]
x: array([-0.70710665, 0.70710691])

Because \(\frac{\sqrt{2}}{2} \approx 0.707106781\), we can see that the optimizer has given a pretty nice result. In practice, specifying the Jacobian and Hessian matrices of the constraints and the target function is likely to result in faster convergence and better accuracy.

SVD image compression ( scipy.linalg)

Suppose that we have a grayscale image \(I\), which is an \(m \times n\) matrix.
When we apply Singular Value Decomposition (SVD) to it, we have\[I = USV^T,\] where \(U\) is an \(m \times m\) unitary matrix, \(S\) is an \(m \times n\) rectangular diagonal matrix, and \(V\) is an \(n \times n\) unitary matrix.

We can select the first \(k(k \leq \min(m, n))\) columns of \(U\), the first \(k\) values on the diagonal, and the first \(k\) rows of \(V^T\); these three matrices multiply to a matrix of the same shape as \(I\), because\[(m \times k) \cdot (k \times k) \cdot (k \times n) \rightarrow (m \times n).\]

Originally, the image is represented by \(mn\) numbers. Now, we only need \(mk + k + kn = k(m + n + 1)\) numbers. Choosing an appropriate \(k\) can save a lot of storage space. We can write the following program to achieve SVD image compression. The image to be compressed is still the Lenna image used above.

from PIL import Image
import numpy as np
from scipy.linalg import svd

img = Image.open('2019-05-14-sci-py-1.jpg')
npImg = np.asarray(img)[:, :, 0].astype(np.float32)/255.0
# apply svd
u, s, vt = svd(npImg, compute_uv=True, full_matrices=True)
# how many singular vectors to select
k = 20
print('amount of numbers before compression: ', npImg.size)
# truncate the matrices
newU = u[:, :k]
newS = s[:k]
newVt = vt[:k, :]
print('amount of numbers after compression: ', newU.size + newS.size + newVt.size)
# reconstruct the image
reconImg = np.dot(np.dot(newU, np.diag(newS)), newVt)
reconImg *= 255.0
# clip the reconstructed image
reconImg = np.clip(reconImg, 0.0, 255.0)
pilImage = Image.fromarray(reconImg.astype(np.uint8))
# saving as a lossless format
pilImage.save('2019-05-15-sci-py-3.png')

The output to stdout is:

amount of numbers before compression: 50625
amount of numbers after compression: 9020

And the reconstructed image is shown as below.

Quick Statistics ( scipy.stats)

In my opinion, scipy.stats is one of the most powerful scipy submodules. Its functionalities include the p.d.f. and c.d.f.
of several dozen probability distributions, parameter estimation, statistical tests, transformations and so on. As an amateur-level statistician, I have never heard of many of the functions it provides. So as not to expose my ignorance, I am demonstrating a very simple application of this submodule.

Consider a two-dimensional Gaussian Mixture Model of two classes. Using subscripts to denote the classes, the weight \(w\) and distribution parameters of each class are given by\[\begin{align*} w_1 &= 0.4\\ \mu_1 &= \begin{bmatrix} -5\\-5 \end{bmatrix}\\ \Sigma_1 &= \begin{bmatrix} 10 & 2\\ 2 & 20 \end{bmatrix}\\ w_2 &= 0.6\\ \mu_2 &= \begin{bmatrix} 5\\5 \end{bmatrix}\\ \Sigma_2 &= \begin{bmatrix} 15 & 3\\ 3 & 20 \end{bmatrix} \end{align*}\]

Using \(C_1, C_2\) to denote class 1 and class 2 respectively, the class-conditional densities are\[\begin{align*} p(x \mid C_1) &\sim \mathcal{N}(\mu_1, \Sigma_1)\\ p(x \mid C_2) &\sim \mathcal{N}(\mu_2, \Sigma_2). \end{align*}\]

The Gaussian Mixture Model is given by\[f(x) = w_1p(x \mid C_1) + w_2p(x \mid C_2).\]

The surface plot of the GMM is shown as below.

By Bayes' theorem, the posterior class probabilities are\[\begin{align*} p(C_1 \mid x) &= \frac{w_1p(x \mid C_1)}{w_1p(x \mid C_1) + w_2p(x \mid C_2)}\\ p(C_2 \mid x) &= \frac{w_2p(x \mid C_2)}{w_1p(x \mid C_1) + w_2p(x \mid C_2)}. \end{align*}\]

We would like to explore the shape of the decision boundary of this Gaussian Mixture Model. We can do this by selecting a large number of random points, keeping those that are close to the decision boundary, and computing Pearson's correlation coefficient between their two coordinates.
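Before the full program, here is a tiny, self-contained illustration of the two scipy.stats helpers it relies on (a sketch, not part of the original post):

```python
import numpy as np
from scipy.stats import multivariate_normal, pearsonr

# density of a standard bivariate normal at its mean is 1 / (2*pi)
p = multivariate_normal([0.0, 0.0], np.eye(2)).pdf([0.0, 0.0])
print(abs(p - 1.0 / (2.0 * np.pi)) < 1e-9)  # True

# perfectly linear data has correlation coefficient 1
r, pval = pearsonr([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
print(round(r, 6))  # 1.0
```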
The program is shown as below (it also includes the surface plotting):

from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
from matplotlib import cm
import numpy as np
from scipy.stats import multivariate_normal, pearsonr

w1, w2 = 0.4, 0.6
mu1 = np.asarray([-5, -5], np.float64)
mu2 = np.asarray([5, 5], np.float64)
sigma1 = np.asarray([10, 2, 2, 20], np.float64).reshape(2, 2)
sigma2 = np.asarray([15, 3, 3, 20], np.float64).reshape(2, 2)
dist1 = multivariate_normal(mu1, sigma1)
dist2 = multivariate_normal(mu2, sigma2)

def ComputeGMM(x, y):
    return w1 * dist1.pdf([x, y]) + w2 * dist2.pdf([x, y])

def ProbClass1(x, y):
    return w1 * dist1.pdf([x, y]) / ComputeGMM(x, y)

xs = np.linspace(-15, 15, 40)
ys = np.linspace(-15, 15, 40)
xs, ys = np.meshgrid(xs, ys)
zs = np.vectorize(ComputeGMM)(xs, ys)

# graph the surface of the GMM
ax = plt.axes(projection='3d')
ax.plot_surface(xs, ys, zs, cmap = cm.coolwarm)
plt.show()

# a predicate to decide if a sample is close enough
# to the decision boundary
def CloseToBoundary(prob):
    return abs(prob - 0.5) < 0.05

# generate a large number of random points
np.random.seed(0x1a2b3c4d)
# in this case, generate 5000 samples
samples = np.random.uniform(-15.0, 15.0, (5000, 2))
probs = np.vectorize(ProbClass1)(samples[:, 0], samples[:, 1])
closeBool = CloseToBoundary(probs)
print('number of samples close to the decision boundary: ', np.count_nonzero(closeBool))
# extract those samples that are close to the boundary
closeIndices = np.where(closeBool)
closeOnes = samples[closeIndices]
# compute pearson's correlation
print(pearsonr(closeOnes[:, 0], closeOnes[:, 1]))

The result is

number of samples close to the decision boundary: 105
(-0.9925966291080566, 3.9205853694664065e-96)

With a coefficient whose absolute value is very close to one, it is reasonable to speculate that this GMM yields a linear decision boundary.

Frequency domain transformations ( scipy.fftpack)

scipy.fftpack provides functions for the FFT, DCT and DST.
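As a quick sanity check of the 1-D transforms (a sketch, not from the original post), the inverse FFT recovers the input:

```python
import numpy as np
from scipy.fftpack import fft, ifft

x = np.arange(8, dtype=float)
roundtrip = ifft(fft(x)).real  # imaginary parts are ~0 for real input
print(np.allclose(roundtrip, x))  # True
```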
Let’s apply the FFT to Lenna and see what the modulus looks like.

from PIL import Image
from scipy.fftpack import fft2
import numpy as np

img = Image.open('2019-05-14-sci-py-1.jpg')
npImg = np.asarray(img)[:, :, 0].astype(np.float32)/255.0
# apply fft
freqImg = fft2(npImg)
# compute modulus, apply log to smooth the gradient
modulusImg = np.log(np.abs(freqImg))
modulusImg -= modulusImg.min()
modulusImg /= modulusImg.max()
modulusImg *= 255.0
pilImg = Image.fromarray(modulusImg.astype(np.uint8))
pilImg.save('2019-05-15-sci-py-5.png')

The output is shown below.

Let’s go wild ( scipy.spatial, scipy.ndimage)

The scipy.spatial submodule is like a mini subset of CGAL, with KD-tree, triangulation, convex hull and Voronoi diagram algorithm implementations. scipy.ndimage includes a number of image processing routines, such as filtering, morphology, connected component labeling and so on. Let’s try to become our own happy little Picasso with all these utilities! I select the following image as our template.

Applying the Sobel operator and some erosion with the following code:

from PIL import Image
from scipy.ndimage import sobel, grey_erosion
import numpy as np

img = Image.open('2019-05-15-sci-py-7.jpg')
npImg = np.asarray(img)[:, :, 0].astype(np.float32)/255.0
# compute the sobel operator
sobelX = sobel(npImg, axis = 0)
sobelY = sobel(npImg, axis = 1)
sobelMag = np.sqrt(sobelX ** 2 + sobelY ** 2)
sobelMag /= sobelMag.max()
# apply thresholding
darkPixels = sobelMag < 0.2
brightPixels = np.logical_not(darkPixels)
sobelMag[darkPixels] = 0.0
sobelMag[brightPixels] = 1.0
# apply a bit of erosion
sobelMag = grey_erosion(sobelMag, size=(3,3))
sobelMag *= 255.0
pilImage = Image.fromarray(sobelMag.astype(np.uint8))
pilImage.save('2019-05-15-sci-py-8.png')

We can acquire this image:

To stylize the image, I used image editing software to paint randomly on the image with black and white brushes. The resulting image is shown as below.
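The labeling-and-centroid step used next can be previewed on a toy array (a sketch, not from the original post; scipy.ndimage.center_of_mass does the per-label centroid computation directly):

```python
import numpy as np
from scipy.ndimage import label, center_of_mass

# two separate blobs in a tiny binary image
img = np.array([[1, 1, 0, 0],
                [1, 1, 0, 0],
                [0, 0, 0, 1],
                [0, 0, 0, 1]])
labelMat, n = label(img)        # n is 2: two connected components
# one (row, col) centroid per label; labels start at 1
cms = center_of_mass(img, labelMat, range(1, n + 1))
print(n, cms)
```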
Then, we find the connected components in the image and compute their centroids. The next step is to apply triangulation to all centroids, and fill each triangle with the average color taken from the template image. The code is shown as below:

from PIL import Image, ImageDraw
from scipy.ndimage.measurements import label
from scipy.spatial import Delaunay
import numpy as np

img = Image.open('2019-05-15-sci-py-9.png')
npImg = np.asarray(img).astype(np.float32)/255.0
# make the image binary
binImg = npImg > 0.5
# label connected components
labelMat, numLabel = label(binImg)

def ComputeCentroid(labelId):
    pos = np.where(labelMat == labelId)
    centroid = np.mean(pos, axis=1)
    return centroid

# for each component, find its centroid
# (labels start at 1; label 0 is the background)
centroids = [ComputeCentroid(labelId) for labelId in range(1, numLabel + 1)]
# eliminate duplicate points
centroids = np.unique(centroids, axis = 0)
# apply triangulation to all points
triangles = Delaunay(centroids)
# create a new black image
pilImg = Image.new('RGB', (npImg.shape[1], npImg.shape[0]), (0, 0, 0))
drawer = ImageDraw.Draw(pilImg)

def Revert(p):
    return (p[1], p[0])

# we will extract colors from the template
templateImg = Image.open('2019-05-15-sci-py-7.jpg')
npTemplateImg = np.asarray(templateImg)
# iterate through the triangles
for p1, p2, p3 in triangles.simplices:
    # fetch the coordinates
    p1, p2, p3 = centroids[p1], centroids[p2], centroids[p3]
    # revert the coordinates: (row, col) -> (x, y)
    points = list(map(Revert, [p1, p2, p3]))
    # create a temporary pil image
    tempImg = Image.new('1', pilImg.size, 0)
    # draw this triangle in the temp image
    tempDraw = ImageDraw.Draw(tempImg)
    tempDraw.polygon(points, fill = 1)
    # convert it into a numpy array
    npTempImg = np.asarray(tempImg)
    # use this triangle as the mask to
    # extract pixels from the template image
    pixelData = npTemplateImg[npTempImg]
    # compute the average color
    avgColor = np.mean(pixelData, axis = 0).astype(np.uint8)
    # start drawing in the final image
    drawer.polygon(points, fill = tuple(avgColor))
pilImg.save('2019-05-15-sci-py-10.png')

The output is shown as below:

It is amazing how alike they look when inspected from afar!

Appendix

The code that generates the graph in the function optimization section:

\documentclass[convert=pdf2svg]{standalone}
\usepackage{fourier}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}
\draw[->, thick] (-1.5, 0) -- (1.5, 0) node[right] {\tiny $x$};
\draw[->, thick] (0, -1.5) -- (0, 1.5) node[above] {\tiny $y$};
\draw (0, -1) -- (1, 0);
\coordinate (itsc) at (-0.9, 0.435889894354067);
\node[yshift=5pt, xshift=-5pt] at (itsc) {$P$};
\node[xshift=5pt, yshift=-5pt] at (0, -1) {$A$};
\node[xshift=5pt, yshift= 5pt] at (1, 0) {$B$};
\draw[fill] (itsc) circle (1pt);
\draw[fill] (0, -1) circle(1pt);
\draw[fill] (1, 0) circle (1pt);
\draw[dashed] (itsc) -- (0, -1);
\draw[dashed] (itsc) -- (1, 0);
\draw (0, 0) circle (1);
\end{tikzpicture}
\end{document}
Synopsis

Write out the model spectrum in the form required by MARX

Syntax

save_marx_spectrum(outfile, clobber=True, verbose=True, id=None, elow=None, ehigh=None, ewidth=None, norm=None)

Description

The save_marx_spectrum() command writes out the current model values in the form expected by MARX (the Chandra simulator). Please see the MARX documentation of the spectrum format for further information on how to use this routine.

The routine can be loaded into Sherpa by saying:

from sherpa_contrib.marx import *

Arguments

Examples

Example 1

sherpa> save_marx_spectrum("marx.dat")

Here the save_marx_spectrum() routine is used to write out the best-fit model - in a format usable by MARX - to the text file "marx.dat".

Example 2

sherpa> save_marx_spectrum("marx.dat", elow=1, ehigh=8)

In this example the output is restricted to the range 1 to 8 keV, using the default binning given by the ARF grid. Note that this energy range need not overlap the range used to fit the data (or even the energy ranges of the ARF and RMF files). It should however remain within the range 0.2 to 10 keV.

Changes in the scripts 4.11.2 (April 2019) release

Fixes to save_marx_spectrum

The sherpa_contrib.marx.save_marx_spectrum() function now normalizes the output by the bin width, as expected by MARX.

Bugs

See the bugs pages on the Sherpa website for an up-to-date listing of known bugs.

See Also

- contrib - get_marx_spectrum, plot_marx_spectrum, save_chart_spectrum, sherpa_marx
- modeling - save_model, save_source
- saving - restore, save, save_all, save_arrays, save_data, save_delchi, save_error, save_filter, save_grouping, save_image, save_pha, save_quality, save_resid, save_staterror, save_syserror, save_table, script
User talk:Vladimir Panteleev

Semantic Forms

Hello! I've started toying around with Semantic now and it seems great! One thing I realised is that we really need Semantic Forms. For two reasons:

- Adding "things" gets sane. As some properties only have an enumerated set of values, it's almost impossible for users to know which values are valid, and they are bound to enter invalid values, which are just ignored. With forms it's clear.
- We get the #arraymap template which is needed to add several properties of the same type through templates. (Ie more than one Author etc.) It's still possible using "plain" property declarations, but it's confusing for users, and will likely be used the wrong way. (Ie we get an author with the name "Name 1, Name 2", instead of two authors "Name 1" and "Name 2".)

I also checked, and more or less all those websites that use Semantic also use Semantic Forms; it seems to be a key component, and I think I can start seeing why. I understand and respect if you're busy or reluctant to add stuff, of course. Allow me to reassure you though, this one thing seems quite essential, at least to me. Regards --Kraybit (talk) 18:02, 26 November 2012 (CET)

- Sounds good, installed. --Vladimir Panteleev (talk) 01:10, 27 November 2012 (CET)

Multi-dimensional hierarchies - Semantic Wiki

Hi! I've looked some more into this, and I've come across Semantic_MediaWiki. I'm still researching this, but my initial impression is that this would Solve-All-Our-Problems™. What I really like is that it solves the problem of keeping data in one place and presenting views into it "proper", by introducing a layer of data on the wiki that is more database-like and queryable. It also seems well-established and used. Some example sites using Semantic Wiki: wiki.mozilla.org also uses it, but they don't seem to make use of the features that much. Good for reference though.
Another good thing is Form Templates, which seem nice for users and would increase the chances of correct data input. Here's an example form template in action. (A bit too long, but one can get a feel for it.) Another thing that's nice is the query-based search feature, here's an example (Never mind the annoying yellow bg and overpopulated tag clouds.) One can certainly imagine having things like tutorials for "Beginners", "Intermediate" etc. as well as "Graphics", "Networking" etc. Nice. It also seems to provide good presentation features, should one want to, like maps, calendars etc. But that's overkill for now imho. What do you think? At the moment it feels like overkill, but imagine if this wiki grows, then things like this would certainly be very useful? And if we want it in the future, it's probably good to set it up early. Still, I'm not sure yet. Just sharing my thoughts. Regards --Kraybit (talk) 16:10, 25 November 2012 (CET)

- Looks like a cool project! Installed. --Vladimir Panteleev (talk) 02:40, 26 November 2012 (CET)

Subpages

Hi! I'm looking at moving over the DIPs, but I think we need subpages in order to do it properly. Subpages would be used to keep the DIP Archive and DIP Alternatives as on ProWiki. At least the MediaWiki help says that's a good use for subpages too. It seems that without subpages we have to change the DIP process itself (See "Usage" in DIP1), which could get political? Could we turn it on for the main namespace, or perhaps now is a good time to actually create a DIP namespace (Or "Dev"?) and have it turned on just there? Regards --Kraybit (talk) 12:13, 24 November 2012 (CET)

- Am I right that all this feature does is generate a link to the "parent" page (page with the title up to the last slash)?
- For another wiki, we've simply used a template for that. It also has the advantage that a page can have multiple "parents", and does not clutter URLs. Example:
- I'm not against enabling the feature, just making sure this is what you want.
Let me know if I should go ahead and enable it. --Vladimir Panteleev (talk) 16:26, 24 November 2012 (CET)

- Ha, yeah, I think you're right. Man, I'm learning a lot : ) Sorry for bugging you. I say we just keep it as it is for now; we can still use "/" in names to express the intention of a subpage, and then later, if the link to the parent page is needed, we can activate it then, perhaps through a template or whatever seems best. --Kraybit (talk) 16:34, 24 November 2012 (CET)
- How do we get the template to display directly under the main heading (similar to the contentSub div)? See this page for an example of what it should not look like ;-) Johannes Pfau (talk)
- It needs some CSS to go with it. I added it to MediaWiki:Common.css. --Vladimir Panteleev (talk) 12:42, 6 July 2013 (CEST)

Multi-dimensional hierarchies

Here's a problem: It would be nice to have a page with all projects that are D-related, open source, proprietary, experimental, stable—everything! Then, it would also be very nice to be able to list a subset of those projects, only those that are "stable", "open source" and so on. Do you know of any way to achieve this without duplicating entries? --Kraybit (talk) 16:52, 24 November 2012 (CET)

- So we're looking for a table or article list that allows dynamic filtering?
- I think this could be achieved using a template which takes one parameter for each criterion. It would emit a table with one entry per row; however, each row is wrapped in a conditional which checks for the appropriate conditions.
- Perhaps these extensions could be useful as well: 1 2 3 --Vladimir Panteleev (talk) 17:00, 24 November 2012 (CET)
- Aaah, templates, ok, now I get it : )
- Your idea sounds great! Hm, I must admit, I can't really tell if these extensions would accomplish that though. My impression:
- CategorySearch — With this every project must have a page (which is perhaps not bad?), so it seems excessive, yet!, for things like the Cookbook it seems to be great!
We could then show lists of howtos/guides etc. by a query of categories, like all howtos in "Meta programming" and "Beginner". (Or? Is this just a search widget, or can we embed queries in a page and get a table? At least then users could do a meaningful search.)

- DynamicPageList — Again for projects every project must have its own page — yet! — this could be great for showing a list of the "5 latest tutorials" or "5 latest projects" etc., so again this seems very useful!
- TemplateTable — I can't quite grasp this one. While it creates tables, I don't see any possibility to filter by criteria?
- It feels like perhaps we need to just give every project a page of its own so that we can put it in a category, and that way be able to show different lists, somehow. Again, I must admit I can't see clearly how all this would work. Perhaps some more research into this is a good idea? --Kraybit (talk) 17:51, 24 November 2012 (CET)
- Well, let me know what you decide. We shouldn't install extensions we don't need, as it pointlessly increases the amount of maintenance work, increases the attack surface, and creates the risk of getting us stuck at an old MediaWiki version if an extension we rely on stops receiving compatibility updates. --Vladimir Panteleev (talk) 17:56, 24 November 2012 (CET)

Logo

Ping Discuss_DWiki --Kraybit (talk) 01:16, 25 November 2012 (CET)

Favicon

Could you port it from the old wiki? I'd do it myself, but it is currently illegal to upload .ico's. Monarchdodra (talk) 11:57, 10 December 2012 (CET)

- Aren't they almost identical? I recreated ours from the dlang.org logo before I noticed I was just duplicating the old wiki's one. --Vladimir Panteleev (talk) 03:14, 11 December 2012 (CET)

Spammer alert

Hi Vladimir, FYI today I noticed in Special:RecentChanges that there are several new users with suspicious-looking user pages.
You may want to keep watch on new users and perhaps install some kind of CAPTCHA mechanism to prevent abuse of the wiki.—Quickfur (talk) 23:02, 16 December 2012 (CET)

- Sign of progress ;) Our wiki is no longer completely obscure. I've purposefully delayed enabling any anti-spam measures until its first appearance to avoid hampering initial contributors. I've enabled them now. --Vladimir Panteleev (talk) 01:51, 17 December 2012 (CET)

Enable setting DISPLAYTITLE

It would be nice if we could set the DISPLAYTITLE to any value as described here: This would be especially useful for "SubPages" if we don't want the page name to be something like "GDC/Development/Testsuite" but rather "Testsuite" or "GDC Testsuite". Johannes Pfau (talk)

- Would enabling subpages (the MediaWiki feature) be a better option? --Vladimir Panteleev (talk) 21:59, 21 July 2013 (CEST)
- As far as I know the subpage feature unfortunately doesn't change the page title at all, it only inserts the backlinks automatically. It would really be the best solution if we could somehow make this work for all subpages automatically, but that probably means editing the MediaWiki source code. Johannes Pfau (talk)

Portals

I wonder whether Portals ( ) would be useful. At least the main page, DMD, LDC and GDC could use it. We could also add Portals for special architectures (x86 / arm / ), OS (linux, windows, ...) or devices NintendoDS / ... However, Portals seem to be difficult to install on the wiki, so I wonder if it's worth doing.

- I'm not really sure what the goal is. What are portals (from a technical perspective - e.g. how are they different from normal pages or categories), and how would they help us? --Vladimir Panteleev (talk) 22:00, 21 July 2013 (CEST)
- They are overview pages. A portal is an entry point for a specific topic of the wiki, similar to the main page. Kinda like a main page for this specific topic.
We could of course also use the table based layout from the main page for these pages, so I'm really not sure if portal pages are worth the trouble. The technical difference is that portals do not use table based layouts, css styling is more flexible (that's not really important for us..), it's possible to have a box displaying a random subpage/article, the content of the 'boxes' is stored in subpages, and the source code for the portal pages is supposed to be easier to read/write. The goal is to have overview/entry pages for specific topics so that we can spread links to those. For example, if we had a D port on NintendoDS, we could advertise the wiki.dlang.org/NintendoDS link in the DS homebrew community. wiki.dlang.org/NintendoDS would then be an overview page similar to the main page, with links for setting up an IDE, supported compilers, tutorials, FAQs, ... all Nintendo DS specific. --Johannes Pfau (talk)

- Aside from creating a specialized namespace (which simply gives you a ":" in the URL), there is nothing magical about portals. All those features (table-based layouts, random articles, subpage templates) can be done as with any other MediaWiki page. So, I'm not really sure what you're asking. --Vladimir Panteleev (talk) 11:52, 24 July 2013 (CEST)

Cite extension / footnotes / references

Hi, could you install the Cite extension? I think it would be very useful for DIPs but probably also some other wiki pages. Johannes Pfau (talk) 08:58, 13 May 2014 (UTC)

- Done. --Vladimir Panteleev (talk) 09:00, 13 May 2014 (UTC)
- Thanks! (Wow, that was incredibly fast!) Johannes Pfau (talk) 09:06, 13 May 2014 (UTC)
Jomo Fisher -- Here's how you can use the MSBuild Object Model to programmatically access the contents of a project file. This sample console application accepts a single parameter--the full path to some MSBuild project file--and displays all of the assemblies that project references:

using System;
using System.IO;
using Microsoft.Build.BuildEngine;

class MSBuildCracker
{
    const string msbuildBin = @"C:\WINDOWS\Microsoft.NET\Framework\v2.0.40607";

    static void Main(string[] args)
    {
        // Load the project.
        Engine engine = new Engine(msbuildBin);
        Project project = new Project(engine);
        project.LoadFromFile(args[0]);

        // Get the references and print them.
        ItemGroup references = project.GetEvaluatedItemsByType("Reference");
        foreach (Item item in references)
        {
            Console.WriteLine(item.Include);
        }
    }
}

Here's how you would call the tool:

MSBuildCracker c:\myprojects\project1\project1.csproj

And this is the result:

System
System.Deployment
System.Drawing
System.Windows.Forms

You can use this technique to access any of the items (like Reference or Compile) or properties (like AssemblyName or Platform).

This posting is provided "AS IS" with no warranties, and confers no rights.

Some interesting stuff in this. Trying some stuff with this to write changes to an existing project file, but the object model seems to be missing OutputItems (or a way of attaching them to TaskElements). Is this just a 'work in progress' type issue that'll be addressed in later betas? Thanks for these articles by the way.

Rob, I'm pretty sure this won't be in Beta2. Hopefully, this is something we can get into the RC. The current plan is that these APIs will not cover everything that the MSBuild language can express (as you've discovered). This posting is provided "AS IS" with no warranties, and confers no rights.

Hi, is it possible to get from a solution file which projects it builds and the project interdependencies?

Sachit, technically this is possible, but it's not terribly easy. Broadly, here are the steps:

(1) There's an API in Microsoft.Build.Conversion.dll that will convert a solution to an in-memory MSBuild project--ProjectFileConverter.ConvertInMemory.

(2) Once you have this, I believe you can grab the names of the project files from the corresponding calls to the "MSBuild task".

(3) Next, you would need to crack each of these project files in turn and look at their <ProjectReference> items to see what they depend on.

This posting is provided "AS IS" with no warranties, and confers no rights.
Sachit, Technically, this is possible, but its not terribly easy. Broadly, here are the steps: (1) There’s an API in Microsoft.Build.Conversion.dll that will convert a solution to an in-memory MSBuild project–ProjectFileCOnverter.ConvertInMemory. (2) Once you have this, I believe you can grab the names of the project files from the corresponding calls to the "MSBuild task". (3) Next, you would need to crack each of these project files in turn and look at their <ProjectReference> items to see what they depend on. This posting is provided "AS IS" with no warranties, and confers no rights. I am not able to get the exact Full Path of the referenced assemblies everytime. I tried reading FullPath Metadata of the Item but it seems not to work everytime. What is the appropriate way to get this? Thank you, Nush
10 July 2012 06:28 [Source: ICIS news]

By Chow Bee Lin

SINGAPORE (ICIS)--The current uptrend in China’s polyethylene (PE) and polypropylene (PP) import prices is likely to continue to end-July or early August on strong crude prices and low buyer inventory, before fizzling out on weak downstream demand, industry sources said on Tuesday.

Most traders and end-users in

Bullish sentiment triggered by strong crude futures prices will also support the prices of PE and PP in

“Crude prices are unlikely to go below $80/bbl in the near term,” he said.

However, the PE and PP price uptrend may not be sustainable because downstream demand has not shown signs of improving and as the global economic outlook is uncertain, traders in

China’s PE and PP prices are likely to rise further because local producers will raise their offers amid relatively tight supply, but buyer confidence remains weak because of weak downstream demand, hence any price hikes will be limited, said a source at a Malaysian resin company.

Uncertainty in the global economic outlook also dampens market sentiment, said resin traders and suppliers in

“The recent price uptrend could be short-lived because the global economic outlook remains bleak,” a source at a Saudi resin company said.

The benchmark PP yarn and film grade high density PE (HDPE) weekly average prices rose to $1,315/tonne and $1,355/tonne CFR China respectively in the week ended 6 July, up by 2.7-3.0% from the previous week, according to ICIS.

Several PE and PP producers in northeast

($1 = €0.81)

Additional reporting by Amy Yu, Angie
http://www.icis.com/Articles/2012/07/10/9576700/chinas-pe-pp-price-uptrend-may-fizzle-out-on-weak-demand.html
CC-MAIN-2015-11
refinedweb
275
50.8
Hello.

Rodolfo Giometti wrote:

Sorry for following up on a post from 2 months ago; I just happened to stumble on some issues addressed by these patches as well.

I was going to type "3 months ago" since 3 months have passed indeed. :-<

I assume you haven't tried sending them to Russell King?

Not yet, I was just waiting for some comments. :)

Now you have some at last. :-)

+	case UPIO_MEM32:
+	case UPIO_AU:
+		return readl(port->membase + offset);

NAK. readl() can't be used to read from Alchemy SOC peripherals because it'll break in BE mode. Alchemy automagically handles byteswap for the SOC peripherals.

Ok. I'm going to fix it by using au_readl(), but in this case I have to add an #ifdef with the au1xxx include file. Can that be acceptable?

I think so. But it's Russell who will decide. :-)

+	(port->iotype == UPIO_MEM) ? "MMIO" : \
+	(port->iotype == UPIO_AU) ? "AU" : "I/O port",
+	(port->iotype == UPIO_MEM) || \
+	(port->iotype == UPIO_AU) ? port->mapbase : (unsigned long) port->iobase);

I'd simply map UPIO_AU to "MMIO" in the message because it's a memory-mapped UART after all...

Yes, but on the kernel command line we must supply "au"... That's why I used a different string, so the user can verify whatever he/she passed to the kernel. I can also suggest something like "Au1xx0 MMIO"... :-)

index 17839e7..9e27aee 100644
--- a/drivers/serial/serial_core.c
+++ b/drivers/serial/serial_core.c
@@ -2367,6 +2367,7 @@ int uart_match_port(struct uart_port *po
 	return (port1->iobase == port2->iobase) && (port1->hub6 == port2->hub6);
 	case UPIO_MEM:
+	case UPIO_AU:

Also needs cases for UPIO_MEM32 and UPIO_TSI.

I just added the code for au1xxx. Why should I consider those cases also?

-#ifdef CONFIG_SERIAL_8250_AU1X00
 	case UPIO_AU:
-		__raw_writel(value, up->port.membase + offset);
+		writel(value, up->port.membase + offset);
 		break;
-#endif

Ditto writel(). Is __raw_writel() correct? It should be.

Thanks for your suggestions.

Ciao, Rodolfo

WBR, Sergei
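Sergei's NAK is about byte order: readl() performs a little-endian load, while the Alchemy hardware already byteswaps its SOC peripherals, so the combination breaks on a big-endian kernel. Outside kernel code, the underlying issue can be illustrated with Python's struct module: the same four register bytes decode to different values depending on the assumed byte order.

```python
import struct

# The same four bytes as they might sit in a 32-bit device register.
raw = bytes([0x12, 0x34, 0x56, 0x78])

little = struct.unpack("<I", raw)[0]  # little-endian load, as readl() assumes
big = struct.unpack(">I", raw)[0]     # big-endian load, no swapping applied

print(hex(little))  # 0x78563412
print(hex(big))     # 0x12345678
```

If the hardware has already presented the bytes in CPU order, applying readl()'s extra swap on a BE machine produces the wrong value, which is why an accessor like au_readl() (which skips the swap) is needed there.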
http://www.linux-mips.org/archives/linux-mips/2006-08/msg00308.html
CC-MAIN-2013-48
refinedweb
308
70.6
I have to create a maze in which a robot has to follow the set of directions given, and I'm stuck on how to make the robot go through the maze. I have most of it done but I'm still getting errors. The maze includes: 1. safe place, 2. chasm, 3. trap, 4. destination. The robot can: 1. move forward once, 2. jump, 3. turn left, 4. turn right. Here's what I have so far. Please help me compile!

import java.util.Scanner;

public class GameMachine {
    public static void main(String[] args) {      // missing braces added
        int robot_x = 0;
        int robot_y = 0;
        int[] robot_instructions = new int[64];
        int i = 0;
        int robot_state = 0;
        int[][] maze = new int[8][8];             // valid indices are 0..7

        robot_instructions[0] = 1;
        maze[4][5] = 2;
        maze[6][7] = 3;
        maze[1][1] = 1;
        maze[7][7] = 4;                           // was maze[8][8], which is out of bounds
        maze[3][2] = 3;
        maze[4][1] = 2;
        robot_x = 1;
        robot_y = 1;

        for (i = 0; i < 64; i++) {
            // The original robot_m_x/robot_m_y variables were never updated,
            // so moves are applied to the current position instead.
            switch (robot_instructions[i]) {
                case 1: robot_y = robot_y + 1; break;   // move forward once
                case 2: robot_y = robot_y + 2; break;   // jump
                case 3: robot_x = robot_x - 1; break;   // turn left
                case 4: robot_x = robot_x + 1; break;   // turn right
            }

            // Look up the state of the square the robot landed on; going off
            // the board counts as failing to reach the destination.
            if (robot_x >= 0 && robot_x < 8 && robot_y >= 0 && robot_y < 8) {
                robot_state = maze[robot_x][robot_y];
                if (robot_state == 0) robot_state = 1;  // empty squares are safe
            } else {
                robot_state = 5;
            }

            System.out.println("Robot final location X: " + robot_x + " Y: " + robot_y);
            System.out.print("Robot State: ");
            switch (robot_state) {                      // the switch header was missing
                case 1: System.out.println("Robot is at a safe place"); break;
                case 2: System.out.println("Robot falls into Chasm"); break;
                case 3: System.out.println("Robot falls into the trap"); break;
                case 4: System.out.println("Robot reaches Destination"); break;
                case 5: System.out.println("Robot fails to reach Destination"); break;
            }
        }
    }
}
https://www.daniweb.com/programming/software-development/threads/199973/help-please
CC-MAIN-2017-34
refinedweb
274
75.1
module Main (main) where import DynFlags import Data.List import System.Environment main :: IO () main = do args <- getArgs case args of [] -> error "Need to give filename to generate as an argument" [f] -> case f of "docs/users_guide/users_guide.xml" -> writeFile f userGuideMain "docs/users_guide/what_glasgow_exts_does.gen.xml" -> writeFile f whatGlasgowExtsDoes _ -> error ("Don't know what to do for " ++ show f) _ -> error "Bad args" -- Hack: dblatex normalises the name of the main input file using -- os.path.realpath, which means that if we're in a linked build tree, -- it find the real source files rather than the symlinks in our link -- tree. This is fine for the static sources, but it means it can't -- find the generated sources. -- We therefore also generate the main input file, so that it really -- is in the link tree, and thus dblatex can find everything. userGuideMain :: String userGuideMain = unlines [ "", "", "%ug-ent;", "<!ENTITY ug-book SYSTEM \"ug-book.xml\">", "]>", "", "
https://git.haskell.org/ghc.git/blob_plain/86d41b13a7d07b563d4222a17eb5cfb61c3c0775:/utils/mkUserGuidePart/Main.hs
CC-MAIN-2019-39
refinedweb
157
65.83
On Sat, Jan 04, 2003 at 07:21:11PM -0600, Jamin W. Collins wrote: > A? The problem with this is that it contradicts what the vast majority of the upstream software we package does, and what that same software will do if people compile it for themselves. While we certainly do have various pieces of policy that require changes to software when it's packaged (the editor and pager policies come to mind), these tend to be a matter of default configuration, and don't cause interoperability problems with a plain unpatched version the user has installed from source in /usr/local. We already express this principle in policy 11.7.5 ('User configuration files ("dotfiles")'): "Furthermore, programs should be configured by the Debian default installation to behave as closely to the upstream default behaviour as possible". In other words, somebody will be told "that bug's fixed in the development version of this package upstream", so they go and try it out. But, hey presto, not only does it ignore the configuration set up while using the Debian package, but it also creates some new stuff in the home directory that we had hypothetically promised to keep pristine. I think this would be completely unacceptable. To avoid this, we'd have to convince every affected upstream to do this before we could implement it across the board. That's just not going to happen without some momentum behind it, and general agreement from the community, not just on some obscure Debian list, that it's a good idea. Not to mention that the '.' namespace hack is not all that bad. It's clearly separated and well-known, it sorts separately from all the other files in your home directory, and it doesn't clutter all that much because good file management tools tend not to display it by default. With a bit of discipline you can even keep it relatively uncluttered, particularly if you check your home directory into revision control. 
If I were designing Unix and all its software from scratch I'd probably do it differently, but as it stands it's certainly not a disaster and doesn't cause any non-cosmetic problems. Basically, I don't think it's the place of Debian policy to recommend something like this that flies against all of our packages and that doesn't have a basis in standards constructed by the community at large. On the other hand, it might well be useful to have something in policy 11.7.5 that says "packages should keep user configuration in dotfiles by default, rather than cluttering the user's home directory with plain files". I know that I'm much more annoyed when I run some program and it creates a non-dotfile in my home directory without me explicitly telling it to do so (~/lynx_bookmarks.html, for instance; I've long since configured it otherwise in .lynxrc, so I've no idea if that's still the default location) than I am with the whole dotfile system. Cheers, -- Colin Watson [cjwatson@flatline.org.uk]
https://lists.debian.org/debian-policy/2003/01/msg00061.html
CC-MAIN-2017-26
refinedweb
516
58.62
Microsoft's official enterprise support blog for AD DS and more Hi, Steve again. I thought I would speak through a series of posts about what knowledge is critical to fulfilling the Windows Server Domain Admin role. This topic carries a ton of breadth and depth knowledge. As a beginning, you have to find out where all this knowledge and training is located. My goal is to get you started down this path so you are exposed to different technologies that you will need to understand and master in order to become a Domain or Enterprise administrator. The process of building the depth of knowledge required may take years to acquire. With some help and guidance I hope to reduce this time to several months. For other folks that have already started on this path or are already fulfilling this role, there may be topics that I reference that may hold some value for them as well. A lot of great information for Windows Server 2003 exists and I will focus on these resources. As I go through this blog there will be links to more information. The links will consist of required reading to achieve our goal of being domain admin ready. It may take weeks to progress through this blog for some folks. I intend to develop follow up blogs that discuss in more detail, especially focused on ideas revolving around the design portions for Active Directory (AD). I would like to present some examples of a theoretical company’s environment and build an actual AD design. So let’s get started. You have been using a computer for your personal use and you have just been hired to the helpdesk at your company to manage user accounts. Microsoft has published an enormous library of technical information and other information on TechNet. What is TechNet? Well, that would be a blog unto itself, but as a quick reference you can find the technical libraries for most of our products there. You will also find educational resources, downloads, event information, webcasts and newsgroups. 
Microsoft also provides a learning portal designed for IT Professionals here. Let’s talk about design first; each company has to choose an AD design. The simplest is a single forest/single domain where all of your accounts are stored in one domain. By default the first domain you create is the forest root domain, you can add more domains to your forest as a child domain of the root or add new domains as a separate tree in the forest. So how do you decide how many domains should you have? The vast majority of companies can live comfortably within one forest, but may require multiple domains within the same forest for a variety of reasons. This link discusses planning an AD deployment and choosing a logical structure. The more complicated your design the more time and effort required to manage your environment. You might ask yourself if your company requires more than one group of administrators to manage computer and user accounts and requires isolation of data or resources for security purposes. If the answer is clearly yes then you can plan on having more than one domain. You want to try to match your company structure to your AD structure whenever possible as a general rule of thumb. Domains quite often are used to isolate and group resources and normally domain administrators don’t have access to resources within another domain. You also decide that locality might play an important role in domain structure. It may be due to network isolation or even language differences but several companies have chosen to isolate different geographic locations into separate domains. In order to choose the correct design for your company you will need to engage participants from all of your business divisions so they can share their requirements for resources. AD allows admins to create logical containers within a domain that allows you to group resources for control and/or manageability. These items are called Organizational Units or OU’s for short. 
You may decide to create an OU structure for User Accounts that is separate from Group Accounts or Computer Accounts. You can further refine your collections of accounts based on business function or geography. AD also supports using objects to describe the physical organization of your network. For example, a site is defined by one or more network subnets. AD sites control what AD resources within the domain or forest a client should use. Typically we want our clients to use resources within their local site rather than traversing to other sites. By now, you are probably getting the picture that AD design is flexible enough to support a wide variety of logical models. As an exercise you might consider an AD design for a large international company named Contoso. Just to make it easy, say we have 30,000 employees in Redmond, WA, USA and 8,000 in North America, South America, Europe and Asia. Right from the start you are going to have questions where you will need to engage other business divisions to get answers. For instance, the first question you might want to answer is: “Is one forest enough, or should I isolate a segment of our business entirely and set up a trust between two forests?” The next question might be: “Should I use one domain or should I use multiple domains to organize and manage my resources?” You should also engage the network experts in your organization to understand how best to map the physical structure of your network into AD. The steps in choosing a design are critical to the success of a company’s IT infrastructure. Even though it seems like there is no “right” answer, there are definitely going to be “wrong” answers. The best advice I can give is to design a few models and start discovering the pros and cons of each design. There are fewer factors involved in choosing a namespace design. This document covers your choices well.
Different business divisions might have some requirements with regards to namespace and they will need to be engaged in this discussion. You will find this discussion loops into the AD design as well and can be considered jointly. Ok let’s say we have selected a namespace and we know how many forests, domains and sites fit our company best. What’s our next step in AD? We need to choose structure for where our accounts in AD will be stored. We want to choose a logical structure that makes our objects in AD easy to find and manage. You do this by creating OU’s within your domain. Some choices you may make are creating OU’s for either user and computer accounts or combining them. Keeping them separate will make manageability easier. You may group similar machines in the same OU or you may break out accounts based on business function or geographic location. Having a methodical and tested plan is key here. AD is an important component in the organization and we must maintain its availability. There are several scenarios that need to be covered in a plan developed to cover that potential event. Here are points of failure that should be included in a recovery plan: Each one of these events or a combination is possible. You will need to work through these potential events and determine a clear and concise set of steps that need to be completed to resolve the problem. Regular backup of AD is critical. Testing these backups to make sure you can quickly and easily restore the data is best practice. All too often backup is scheduled but it’s not running or you cannot restore the data. What are FSMO roles and how should they be distributed in your environment? This varies based on forest size. In the smallest environments we might put all roles on one server. In a large environment we might choose to place each role on a different machine. The FSMO PDC Emulator (PDCe) role brings the highest volume of work. 
In a large domain you would probably want to make sure the DC hosting the PDCe role is isolated from performing other roles, such as being targeted by LDAP-based applications or acting as a deployment server, web server, or file and print server, etc. The FSMO roles are important, so make sure the forest and domain have these roles assigned to a specific DC. In addition to server selection we might also distribute the FSMO role holders by physical location. This may be the most secure site, the site with the best network availability, or the one with the highest number of client requests. Many companies have time requirements for their computer systems. By default the PDCe FSMO role holder is at the top of the time pyramid for the domain. Other domain members use the W32Time service to synchronize their system clocks. Keeping the PDCe synchronized with an accurate source will help keep your domain members’ time accurate. As domain admin you will need to know how new accounts are created. Question: “Will HR be creating user accounts within their own software, and does that software create new user accounts in AD?” Some environments have very complex user provisioning scenarios. In simpler scenarios the user account may be created by an administrator, who manually creates the mailbox and configures user group membership. In more distributed environments the Account Operators group may be used, and in the most complex scenarios it may be the Human Resources department or the hiring manager who creates the user accounts. There is a lot of information that falls under the group policy umbrella that a domain admin needs to be familiar with. In a nutshell, you can use group policy to configure computer and user configuration settings on machines throughout your domain. You’ll want to familiarize yourself with GPMC, the group policy management tool. There are approximately 2,400 settings in Windows Vista that can be set in group policy. This gives admins a good deal of flexibility in configuring computer and user settings.
You use group policy not only to restrict the abilities of specific users in your domain but also to enhance their experience. There are several general categories that can be controlled, including: Application Deployment, Certificate Enrollment and Trust, logon/logoff/startup/shutdown scripts, Restricted Groups settings, Internet Explorer configuration, disk quotas, user folder redirection, user rights and security configuration, etc. The list goes on and on depending on your needs. Group policy is also flexible; you can link group policies at a domain, site or OU level. Clients that have membership in these containers, either by direct membership or inheritance, will receive these policies. Therefore, it follows that your design of AD can help ease the distribution of user and computer configurations. Here again, collaborating with different business divisions within your organization is a must. Workstations in the accounting department will most likely need different software and access to their local machines than users in the Marketing department. Similarly, web servers will have different configurations than domain controllers. You can individually set the configuration as part of an image during the build process or you can manually change a machine’s configuration, but when you want to change thousands of machines at once, Group Policy is definitely the way to go. On top of normal GPO settings, we now have group policy preferences, which increase the flexibility and extend the capability of what administrators can do with group policy. As the domain admin you have the proverbial “keys to the kingdom” for the resources in your domain. Security is a big responsibility to protect your domain’s resources. Domain security access requires authentication. There are several levels of authentication and you will want to implement the highest level of security where possible.
Windows 2003 implements several authentication protocols: Negotiate, Kerberos, NTLM, Secure Channel, and Digest. It’s also extensible, so other authentication protocols can be added. NTLM is a challenge-response authentication mechanism. The client attempts to access a resource and is challenged for credentials by the server. The client sends the username and a hash of the user account’s password, and the server attempts to authenticate your credentials on a domain controller in the user’s domain. Therefore, the server must chain back to the user’s domain to be authenticated. NTLM has several variations and this is only one iteration. You would also lump anonymous access under this category: if the username and password are null, the target machine will attempt to log the user on as anonymous. If the server resource accepts anonymous authentication then the client will get access. Kerberos is a more secure and efficient form of authentication than NTLM. It is the default authentication package in most cases beginning with Windows 2000. To summarize Kerberos authentication, a client will ask for a service ticket for the server resource it wants to access. The client receives the ticket and forwards the ticket to the resource server to be authenticated. Wherever possible you would want to configure authentication to use Kerberos. Certificates can be used for authentication as well. Certificate technologies have grown in scope and complexity over the past several years. More and more technologies are using certificates to increase security. So even though it’s not an authentication protocol, it is used in conjunction with authentication protocols to increase security. For example, smartcard authentication uses a certificate that is installed on a physical card. The card is placed in a smartcard reader and the user provides a PIN to access certificates on the card.
In this way it’s two-factor authentication, because we are using something we have, the smartcard, and something we know, the PIN, to provide authentication. Certificates can also be used to increase security by encrypting the network traffic. Secure Sockets Layer (SSL) is a well-known method of encrypting traffic and can provide server identity. S/MIME is another common scenario, where users can encrypt and digitally sign their email. We are seeing more and more companies implementing their own internal Certificate Authority infrastructure. Having a certificate authority for your domain allows you to assign both user and computer certificates through both automated and manual methods. Using these certificates can significantly increase the security inside and outside your network. Authentication can be difficult to manage. Two very common scenarios are choosing authentication methods in SQL and IIS. It would be nice if all your applications in the enterprise supported Kerberos and you could just worry about one method, but that’s not realistic. It may be an overwhelming task to determine the configuration of all applications. Where you would have concerns are scenarios where plain-text or basic authentication is being used. You’ll want to restrict this behavior as much as possible and never use your domain admin credentials to access those applications. If, however, it is the only method that can be used, at the very least the authentication should be encrypted using certificates. Domain admins must determine if they will allow a trust to be established with another domain or forest. Moving to 2003 forest level allows you to establish a forest-level trust and therefore inherit trusts for domains within the other forest. We can use certificates to provide encrypted sessions to servers. The most common example will be using HTTP over SSL.
In this case, we would issue a server certificate to the web server that would confirm the server’s identity and allow users to establish an encrypted session. For internet-facing web servers, normally you would purchase a certificate from a trusted authority. Another example of using SSL internally within your organization is LDAP over SSL. Typically our domain controllers service client requests over port 389. We can leverage applications that are LDAPS-enabled by installing a server authentication certificate on our DCs. Two technologies provide file encryption: BitLocker and EFS. BitLocker was introduced in the Windows Vista operating system. It helps provide whole-drive encryption that is seamless to the user. It helps protect both data and operating system files and is especially useful on laptops, where a user may not be able to maintain physical security of the device. EFS is the technology we use to encrypt specific files on a computer. By default the domain admin is the recovery agent for all EFS files in the domain. These encryption technologies can remove your access to data, and it may be lost. Care needs to be given to design a proper system where the domain admin decides who can use encryption and for what purposes, as well as who will be the recovery agent in case a client cannot decrypt their files. Resource access is controlled through an access control list (ACL) in most situations. Fundamentally, we need to determine whether we will create ACLs based on users or groups. We recommend setting security on resources by using domain-based groups when more than one user will be accessing the resource. Adding a user to that group gives them access to all those resources and, conversely, removing them restricts their access. Change management is much easier if group membership matches resource needs.
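The LDAP-versus-LDAPS choice mentioned above comes down to a port and an SSL session. A minimal sketch (the host name and helper are made up for illustration; 389 and 636 are the standard LDAP and LDAPS ports):

```python
import ssl

LDAP_PORT = 389    # plain LDAP
LDAPS_PORT = 636   # LDAP over SSL

def ldap_endpoint(host, use_ssl):
    """Pick the port, and build an SSLContext when encrypting."""
    if use_ssl:
        # create_default_context enables certificate and hostname checks,
        # which is what the DC's server authentication certificate is for.
        return (host, LDAPS_PORT, ssl.create_default_context())
    return (host, LDAP_PORT, None)

host, port, ctx = ldap_endpoint("dc1.contoso.com", use_ssl=True)
print(port)  # 636
```

An actual directory client library would wrap its socket with the returned context before speaking LDAP; this sketch only shows the endpoint selection.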
Securing groups and controlling group membership is important for a domain admin, striking a balance between people who do not want to use any groups and those who would like a person to be a member of 1,000 groups. Some businesses are required by government regulations to maintain auditing at a certain level. Outside of these bounds, security auditing needs to be controlled closely on the domain controllers. This would include not only capturing the data but also periodically reviewing the audit logs to confirm their content. Password Policy – This is rather straightforward. You mainly need to determine the level of complexity, password expiration age and lockout threshold. Here, you want to have a secure password that changes on a regular basis, but not one so stringent that it costs your company money in lost productivity and helpdesk-related issues. Complex passwords are the best way to increase security. On the other hand, there is a certain theory that a low account lockout threshold will increase security. The lower the number for account lockout, the more frequently accounts will be locked out. Any number less than 7 will most likely increase lockouts dramatically. Choosing the right combination of lockout threshold, duration and complexity will help keep everyone working with an acceptable level of password security. Care should be given to examine specific accounts that handle sensitive data. Not only should the data be closely protected, but also the accounts that are used to control that data. This may include company executives, domain administrators, HR and Finance employees and application service accounts. Delegation of Control – Depending on your organization’s size, you may have a highly distributed group of users that modify Active Directory objects. The goal would be to give each person responsible for AD management the least privilege required to perform their responsibilities.
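The password-policy trade-off discussed above (complexity versus helpdesk cost) is usually enforced as a small set of rules. A sketch of such a check follows; the thresholds are illustrative, loosely mirroring the common "minimum length plus three of four character classes" idea, and are not the actual Windows policy engine.

```python
import string

def meets_policy(password, min_length=8, classes_required=3):
    """Rough complexity check: minimum length plus a count of the
    character classes (lower, upper, digit, punctuation) in use."""
    classes = [
        any(c.islower() for c in password),
        any(c.isupper() for c in password),
        any(c.isdigit() for c in password),
        any(c in string.punctuation for c in password),
    ]
    return len(password) >= min_length and sum(classes) >= classes_required

print(meets_policy("hunter2"))         # False: too short, only two classes
print(meets_policy("Correct#Horse9"))  # True: long enough, four classes
```

In a real deployment these rules live in the domain's password policy (Group Policy), not in application code; the sketch just makes the trade-off concrete.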
For ease of use we create specific groups with Active Directory to achieve common tasks: Server Operators, Account Operators, Backup Operators etc… These groups have predefined access to domain resources. Other actions may be non-standard and require specific permissions in Active Directory. Several applications write to Active Directory and their service accounts will need specific access rights. Typically this would be applications that may need Enterprise Admin permissions to install. In addition, they may create specific groups particular to their application that allows them to write to Active Directory. Microsoft Exchange is a good example. ACLs on AD objects – For the most part, the default permissions on an object within Active Directory will be acceptable. It can be very difficult to manage and troubleshoot access problems when you are not using a standard approach to control access. Setting specific restrictions to particular objects is where administration can turn into a nightmare. Make sure strict change control and documentation is enforced whenever making changes to AD. Keep in mind Active Directory will outlast many of your administrators. Nobody wants to be in a position where you are trying to back out changes that were completed a year ago without documentation of what changes were made. Although these technologies are managed and mostly controlled by our networking group, domain admins need to understand the concepts associated with network design and administration. At the core, TCP/IP is our primary communication protocol suite. DHCP is how our hosts get network addresses that are dynamically assigned and how we configure clients for DNS registration, Wins and DNS discovery. WINS is older technology for NetBIOS name resolution that is still in use in many networks. DNS is more critical to a fully functioning and distributed AD environment. Netlogon is used to register the domain controller records in DNS. 
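The domain controller records that Netlogon registers in DNS are SRV records, and the DC locator queries them in a site-aware order. A small sketch of the names involved (the `_ldap._tcp...` patterns are the standard record names; the helper itself and the Contoso values are illustrative):

```python
def dc_srv_names(domain, site=None):
    """SRV names a client queries to find a domain controller."""
    names = ["_ldap._tcp.%s" % domain]
    if site:
        # The site-specific record is tried first so clients prefer
        # a DC in their own AD site.
        names.insert(0, "_ldap._tcp.%s._sites.%s" % (site, domain))
    return names

print(dc_srv_names("contoso.com", site="Redmond"))
# ['_ldap._tcp.Redmond._sites.contoso.com', '_ldap._tcp.contoso.com']
```

A real client resolves these names with DNS SRV queries and picks a target by the records' priority and weight; the sketch only shows which names get asked for.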
These records allow clients to discover domain roles and services within their AD site. Users are accessing our network with greater frequency from outside the LAN. In order to work closely with your network counterparts you need to understand some of their technologies. These would include but are not limited to: Routing, Remote Access, VPN, IPSec, Wireless, RDP, etc. There are two types of firewalls that you will encounter: the one at the edge of your network’s boundaries and the one installed on workstation/server computers. While the network team will manage the perimeter firewall, the firewall installed on your clients and servers may be managed by Group Policy and is, hence, in the domain admins’ realm. For the firewalls on the perimeter, you will need to be familiar with the ports that are required to be open. The FSMO roles Schema Master and Domain Naming Master are forest-based roles. The PDCe, RID Master and Infrastructure Master are domain-specific roles. In a small domain scenario you may have all 5 roles installed on the same server. In larger environments you will most likely decide to distribute these roles to separate machines, and in the case of having more than one domain in your forest, you will need to host these roles on multiple machines. Usually the roles are hosted on machines in the data center or hub site. As far as the roles are concerned, the PDCe role hosts a lot more activity than all the other roles combined. Global Catalog (GC) server placement is also a concern for efficiency. GCs are required for logon and need to be distributed efficiently. Having a GC present in remote sites will help to significantly reduce the amount of logon traffic and the time required for logon. Conversely, a GC in a remote site will consume network bandwidth during replication cycles. The schema is the base configuration for Active Directory. It defines the types of objects that will be created inside the database.
Changes to the schema can be difficult or impossible to undo. As the domain is upgraded to new versions there is typically going to be an associated schema update. Other applications may also extend the schema, such as Exchange or third-party applications. Before any schema update, ensure a rollback or recovery process is in place. Removing an upgrade to the schema may not be possible, and corruption of the schema may cause permanent malfunction of AD. You will need to be a member of the Schema Admins group to be able to modify the schema, and the modification needs to take place on the Schema Master. The configuration container has a lot of information stored in it, but very little that will be actively managed. For instance, there is information about the configuration of the Active Directory forest and its associated partitions. This includes information about AD sites and enterprise services such as Certificates and Exchange email. Extended rights inside AD are defined here, such as changing the FSMO role holders. There is just one configuration per forest, so whatever is written here is available to all domains in the forest. This is where all of your users, computers and groups are stored. But if you enable advanced features in the AD Users and Computers snap-in (ADUC.MSC) you will see a lot more data is contained in the system container. There is information associated with the AdminSDHolder process, which protects your admin accounts from losing permissions to AD; domain DNS information and Group Policy data are also stored here. Rarely, if ever, will you modify any items in the system container using ADUC. It would be wise to restrict delegation to this container. On the other hand, the AD delegation wizard makes distributing permissions to other admins or account management people easy.
http://blogs.technet.com/b/askds/archive/2009/01.aspx?pi136234=2&PostSortBy=MostRecent&pi145015=1
add a function to get local interfaces or external interface only

Hello, are you interested by something like:

>>> netifaces.local_interfaces()
['lo0', 'en0']
>>> netifaces.public_interfaces()
['gif0', 'stf0', 'en1', 'fw0']

My use case is that I need to find local interfaces on several OSes to display them to the user. So I get all of them with netifaces.interfaces(), then make a loop selecting local addresses with this function:

def _is_local_address(addr):
    return addr.startswith("192.168") or \
        (addr.startswith("172.") and 16 <= int(addr.split(".")[1]) <= 31) or \
        addr.startswith("10.")

I think a generic interface in netifaces could help other people. I can make a pull request of the feature if you are interested.

Seems useful, but it would be more involved than the above to implement (netifaces is written in C), and I'm not sure it belongs here either, because it's hard to determine what's an external interface just from the IP address alone. By the way, you should really use CIDR blocks instead of checking what the address starts with to determine if an interface is in a private address range and is also not a special use case.

The other problem is that "local interfaces" and "external interfaces" are a bit misleading in my opinion. In order to determine if an interface is actually external or not, you have to use it to make a request to a non-local address. If it works, it's an "external interface" because it can communicate externally. The other kind of "external interface" is one that can receive traffic from the outside world. In most cases traffic will be routed to the interface by the network, so you could easily have something in the 192.168.0.0/16 range receiving traffic from the WAN. You could also have special cases where what appears to be a private interface is actually getting traffic from another private interface, which in turn is getting traffic from the public WAN.
So basically, "external" can mean a lot of things, so you're often better off trying to class an IP as localhost-only, private address, or special use if you're trying to implement a public API.

You're right, public_interfaces() is not clear and can be forgotten. local_interfaces() should be named private_interfaces(). The Pyfarm example uses the netaddr library, and adding local_interfaces() could easily be done by depending on netaddr. However, I'm not sure it is interesting to add it as a dependency for netifaces. It could be implemented more simply than pyfarm does with: ... so the CIDR handling is hidden in the netaddr library. Still not sure if private_interfaces() should be added to netifaces. Do you think it's useful to continue, or can we close this request?

I'd close this request. I think netaddr already solves this problem, and using netaddr directly prevents the need to write an equivalent function in C and/or having to add the package as a dependency of netifaces.

I agree. I'll close it. Conclusion is 'idea not interesting' in netifaces.
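As a footnote to the thread: the CIDR-correct check discussed above can nowadays be done with the Python standard library alone, with no netaddr dependency. This is only a sketch — the function name is made up, and this is not part of the netifaces API:

```python
import ipaddress

def is_private_address(addr):
    """CIDR-correct replacement for the startswith() checks above.

    True for the RFC 1918 ranges (10/8, 172.16/12, 192.168/16) as well
    as loopback and link-local addresses, which the prefix version
    silently misses.
    """
    return ipaddress.ip_address(addr).is_private
```

Unlike the string-prefix version, this also classifies loopback (127.0.0.1) and link-local (169.254.x.x) addresses as private, while still rejecting public addresses such as 172.32.0.1.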
https://bitbucket.org/al45tair/netifaces/issues/51/add-a-function-to-get-local-interfaces-or
Getting Started

MVC application to use our components.

What is Html.EJ()?

EJ refers to Essential JavaScript, or Syncfusion Essential JS 1. It is a comprehensive collection of over 80 enterprise-grade HTML5 JavaScript components for building modern web applications. Refer to this JavaScript Package for more information. This is inherited from the Syncfusion.EJ assembly and the Syncfusion.JavaScript namespace. The Syncfusion.EJ and Syncfusion.EJ.MVC assemblies have to be referenced in the project to use this, and these namespaces have to be added in the Web.config file as shown in this KB article.

Getting started with Syncfusion MVC

This section describes how to configure the Syncfusion ASP.NET MVC components in ASP.NET MVC applications. There are four ways to embed our controls into an ASP.NET application:

- Through the Syncfusion Project Template
- Through Syncfusion Project Conversion
- Through Syncfusion NuGet Packages
- Through manual integration into a new/existing application

The procedures that are followed in the manual integration process are entirely automated when you create an application using the Syncfusion project template. Similar steps are followed for integrating the Syncfusion controls into MVC 3, MVC 4, MVC 5, and MVC 6 applications; the only difference is the version of the reference assemblies chosen for each target MVC application.

Through Syncfusion Project Template

Syncfusion provides Visual Studio project templates for the Syncfusion ASP.NET MVC platform to create a Syncfusion MVC application. The Project Configuration Wizard automates the process of configuring the required Syncfusion assemblies, scripts, and styles within the newly created application. Let's look at these topics in detail in the following sections.
To create a Syncfusion ASP.NET MVC (Essential JS 1) project, follow either one of the options below:

Option 1: Click the Syncfusion menu and choose Essential Studio for ASP.NET MVC (EJ1) > Create New Syncfusion Project… in Visual Studio.

NOTE In Visual Studio 2019, the Syncfusion menu is available under Extensions in the Visual Studio menu.

Option 2: Choose File > New > Project and navigate to Syncfusion > Web > Syncfusion ASP.NET MVC (Essential JS 1) Application in Visual Studio.

This opens the Project Configuration Wizard as shown below. In this wizard, select Target MVC Version as MVC5 and keep the other options as default. Click Next.

The next window shows the list of Syncfusion MVC controls. Choose the required controls and then click Create.

Now you can see that the Syncfusion MVC 5 references, scripts, and styles are configured into the Scripts and Content folders. It also configures the web.config and _Layout.cshtml files.

Now you can add the DatePicker control in the Index.cshtml file present within the ~/Views/Home folder.

@Html.EJ().DatePicker("MyFirstDatepicker")

Compile and execute the application. You will be able to see the following output in the browser.

For more information about the project configuration templates and their options, please visit here.

NOTE Ensure that your project has only a single reference to jQuery, because multiple references to jQuery in the same project will throw a console error and the control will not be rendered. For more details refer to the KB here.

Through Syncfusion Project Conversion

Syncfusion Project Conversion is a Visual Studio add-in that converts an existing ASP.NET MVC project into a Syncfusion ASP.NET MVC project by adding the required assemblies and resource files. The following steps help you use Syncfusion Project Conversion in an existing ASP.NET MVC (web) project.

Open an existing Microsoft MVC project or create a new one.
To open the Project Conversion Wizard, follow either one of the options below:

Option 1: Click the Syncfusion menu and choose Essential Studio for ASP.NET MVC (EJ1) > Convert to Syncfusion ASP.NET MVC Application… in Visual Studio.

NOTE In Visual Studio 2019, the Syncfusion menu is available under Extensions in the Visual Studio menu.

Option 2: Right-click the project in Solution Explorer, select Syncfusion Essential JS 1, and choose Convert to Syncfusion MVC (Essential JS 1) Application… Refer to the following screenshot for more information.

The Project Conversion Wizard opens so that you can configure the project. The following configurations are used in the Project Conversion Wizard.

Assemblies From: Choose the assembly location:

- Added From GAC: Refer to the assemblies from the Global Assembly Cache.
- Added from Installed Location: Refer to the assemblies from the Syncfusion installed location.
- Add Referenced Assemblies to Solution: Copy and refer to the assemblies from the lib directory of the project's solution.

Choose the Theme: The master page of the project will be updated based on the selected theme. The Theme Preview section shows a preview of the controls before the project is converted to a Syncfusion project.

Choose CDN Support: The master page of the project will be updated with the required Syncfusion CDN links.

Choose Copy Global Resources: The localization culture files will be shipped into the Scripts\ej\i18n directory of the project.

Choose the required controls from the Components section and click the Convert button to convert it into a Syncfusion project. The Project Backup dialog will then open. If Yes is clicked, it backs up the current project before converting it to a Syncfusion project; if No is clicked, it converts the project without a backup.

The required Syncfusion reference assemblies, scripts, and CSS are included in the MVC project. Refer to the following screenshots for more information.
Through Syncfusion NuGet Packages

To add the Syncfusion MVC controls into a new ASP.NET MVC5 application using the Syncfusion NuGet packages, refer to the following steps.

The steps to download and configure the required Syncfusion NuGet packages in Visual Studio are described here.

Once the package source is configured, search for and install Syncfusion.AspNet.Mvc5 from the Package Manager Console by using the following command:

PM> Install-Package Syncfusion.AspNet.Mvc5

Syncfusion-specific stylesheets are loaded into the Content folder of your application. Include the theme reference (bootstrap-theme/ej.web.all.min.css) in the ~/Views/Shared/_Layout.cshtml file, within the head section, as this file contains the default theme styles applied to all the Syncfusion MVC controls.

<head>
    <title>@ViewBag.Title</title>
    @Styles.Render("~/Content/ej/web/bootstrap-theme/ej.web.all.min.css")
</head>

It is mandatory to include the reference to the required JavaScript files in your _Layout.cshtml so that the Syncfusion MVC controls render properly.

<head>
    <meta charset="utf-8" />
    <title>@ViewBag.Title - My ASP.NET MVC Application</title>
    @Styles.Render("~/Content/ej/web/bootstrap-theme/ej.web.all.min.css")
</head>
<body>
    @Scripts.Render("~/bundles/jquery")
    @Scripts.Render("~/bundles/bootstrap")
    @Scripts.Render("~/Scripts/jsrender.min.js")
    @Scripts.Render("~/Scripts/ej/web/ej.web.all.min.js")
    @RenderSection("scripts", required: false)
    @Html.EJ().ScriptManager();
</body>

The order of the script file references made above must be maintained exactly as shown. If your application contains duplicate/multiple references to the jQuery files, remove them, keeping only the explicit reference to the jquery-1.10.2.min.js script file added to the application as specified above.

Now you can add the DatePicker control in the Index.cshtml file present within the ~/Views/Home folder.
@Html.EJ().DatePicker("MyFirstDatepicker")

Compile and execute the application. You will be able to see the output below in the browser.

Manual Integration

This topic focuses on how to integrate the Syncfusion ASP.NET MVC controls manually into a newly created or existing ASP.NET MVC application. The procedure for making use of any of our ASP.NET MVC controls within an ASP.NET MVC application is explained in the following sections.

Creating your first ASP.NET MVC application

Follow the steps below to create a normal ASP.NET MVC application.

- Start Visual Studio. Create a new MVC application by selecting File -> New -> Project and save it with a meaningful name, as shown in the following image.
- Build and run your application by pressing Ctrl+F5.

It is time to add the other essential pieces to your application that allow you to make use of our Syncfusion ASP.NET MVC controls. For that, follow the steps explained in the Existing Applications section.

For Existing Applications

To add the Syncfusion ASP.NET MVC controls to your existing application, open your existing application and proceed with the following steps.

Adding the required stylesheets

To render the Syncfusion ASP.NET MVC controls with their unique style and theme, it is necessary to refer to the required CSS files in your application. Copy all the required CSS files into your application from the following location.

NOTE \Syncfusion\Essential Studio\18.3.0.35\JavaScript\assets\css\web

For example, if you have installed Essential Studio within C:\Program Files (x86), then navigate to the location below:

C:\Program Files (x86)\Syncfusion\Essential Studio\18.3.0.35\JavaScript\assets\css\web

Navigate to the above location and find the files shown in the image below. Copy them entirely and paste them into your root application.
Before pasting them into your application, create a folder structure named ej/web within the Content folder of your application and place all the copied files into it, as shown in the following image.

Solution Explorer - Project with CSS files copied into the Content folder

NOTE The common-images folder must be copied into your application, as it includes all the common font icons and other images required for the controls to render.

Once the CSS files are added to your application, include the reference to the ej.web.all.min.css file in the _Layout.cshtml page, within the head section.

<link href="~/Content/ej/web/default-theme/ej.web.all.min.css" rel="stylesheet" />

Adding the required JavaScript files

Adding the required JavaScript files to your application plays an important role; without them the Syncfusion controls cannot be created. The following common script files are mandatory:

• jquery.min.js (1.7.1 and later versions)
• jsrender.min.js

NOTE jQuery-2.1.4 and jQuery-3.0.0 support has been available from ejVersion 13.2.0.29 and 14.3.0.49 onwards, respectively.

• The ej.globalize.min.js library is built into the ej.web.all.min.js file, so it is not necessary to reference it externally in your application (applicable for version 13.4.0.53 and higher). For versions lower than 13.4.0.53, refer jQuery.globalize.min.js along with ej.web.all.min.js.

Apart from the common scripts above, it is also necessary to refer to the ej.web.all.min.js file in your application, which plays the major role in control creation. The dependencies are available in the following locations on your machine. Please copy these files from the locations given.

NOTE An example of the "Syncfusion installed location" is "C:\Program Files (x86)\Syncfusion".

Now, create a folder named ej under the Scripts folder of your application and place the copied ej.web.all.min.js file into it, as shown in the following image.
Solution Explorer - Script files copied into the Scripts folder of the project

Once the scripts are added to your application, it is necessary to include references to them. This should be done within the _Layout.cshtml page, as we did previously for the CSS files. Add the following script references in the _Layout.cshtml file within the head section.

<link href="Content/ej/web/default-theme/ej.web.all.min.css" rel="stylesheet" />
<script src='<%= Page.ResolveClientUrl("~/Scripts/jquery-1.10.2.min.js")%>' type="text/javascript"></script>
<script src='<%= Page.ResolveClientUrl("~/Scripts/jsrender.min.js")%>' type="text/javascript"></script>
<script src='<%= Page.ResolveClientUrl("~/Scripts/ej/ej.web.all.min.js")%>' type="text/javascript"></script>

CDN Link reference

If you want to refer to CDN links instead of direct script and CSS references in your application, use the following references in the _Layout.cshtml page.

<head>
    <meta charset="utf-8" />
    <title><%: Page.Title %> - My ASP.NET Application</title>
    <link href="" rel="stylesheet" />
    <script src=""></script>
    <script src=""></script>
    <script src=""></script>
    <script src=""></script>
    <script src=""></script>
</head>

Assembly Reference

Refer to the following assemblies in your newly created ASP.NET MVC application; they allow you to use any of the Syncfusion ASP.NET MVC controls within it.

- Syncfusion.EJ
- Syncfusion.EJ.MVC

The reference to the Syncfusion assemblies can be added to your application in either of the following ways:

- Referring from the GAC
- Referring from the installed location

Referring from the GAC

Once you have installed the Essential Studio package on your system, the Syncfusion assemblies are automatically registered in the GAC. You can easily add the reference assemblies to your project by choosing the Add Reference option. The Reference Manager pop-up will then appear on the screen.
In that pop-up, select the required assemblies from the Extensions tab, choosing the appropriate versions (13.1450.0.21). The version to choose for the reference assemblies is based on the framework used in the application.

Reference Manager pop-up

Referring from the installed location

Add the reference assemblies to your project by choosing the Add Reference option. When the Reference Manager pop-up appears on the screen, select the Browse tab and navigate to the installed location of the Syncfusion Essential Studio package on your system (as depicted in the image below).

NOTE \Syncfusion\Essential Studio\18.3.0.35\precompiledassemblies\14.4.0.15

For example, if you have installed the Essential Studio package within C:\Program Files (x86), then navigate to the following location:

C:\Program Files (x86)\Syncfusion\Essential Studio\18.3.0.35\precompiledassemblies\14.4.0.15

Reference Manager pop-up with the Browse button clicked

NOTE In the image above, the folders 3.5, 4.0, 4.5, and 4.5.1 denote the .NET Framework version. Based on the framework version used in your application, choose assemblies from the appropriate folder. The Syncfusion.EJ.MVC and other core assemblies like Syncfusion.Core and Syncfusion.EJ are available within these folders.

- Add the Syncfusion.EJ, Syncfusion.EJ.MVC, and Syncfusion.Core assemblies to your application from the following location.

NOTE \Syncfusion\Essential Studio\18.3.0.35\precompiledassemblies\18.3.0.35\4.5

For example, if you have installed the Essential Studio package within C:\Program Files (x86), then navigate to the location below:

C:\Program Files (x86)\Syncfusion\Essential Studio\18.3.0.35\precompiledassemblies\18.3.0.35\4.5

NOTE The Syncfusion.Core dependency has been removed from 13.2.0.29, and it is not required to refer to this assembly when you are using 13.2.0.29 and later versions.

- Once the assembly selection is done, click OK to add the selected references to your project.
You can view the assembly references added to your application in Solution Explorer, as shown in the following image.

Selected assemblies added to the project references

Registering Syncfusion assemblies within the Web.config

In your application's web.config file, add the assembly information below within the <system.web> section:

<compilation debug="true" targetFramework="4.5">
  <assemblies>
    <add assembly="Syncfusion.EJ, Version=15.4450.0.20, Culture=neutral, PublicKeyToken=3d67ed1f87d44c89" />
    <add assembly="Syncfusion.EJ.Mvc, Version=15.4500.0.20, Culture=neutral, PublicKeyToken=3d67ed1f87d44c89" />
  </assemblies>
</compilation>
<authentication mode="Forms">
…
</system.web>

- Add the DatePicker code below in your view page as shown in the following.

@Html.EJ().DatePicker("DatePick").DateFormat("MM/dd/yyyy").ShowOtherMonths(false).EnableRTL(false).Locale("en-US")

NOTE Add the DatePicker code within the Content section, removing the unwanted code within it.

- Finally, build and run the project by pressing F5; you will see output similar to the following screenshot in your web browser.

DatePicker control displayed in the web browser

Thus the DatePicker control is rendered successfully with its default appearance. You can then use its various properties to set its value and make use of its available events to trigger actions when necessary.

Version compatibility with respect to framework

The Syncfusion.EJ.MVC assembly supports only up to framework 4.5, so the dependent Syncfusion.EJ assembly will also be installed at that version even if the chosen framework is 4.6 or above. Find the supported MVC versions for the target framework in the following table.

NOTE For Framework 4.6 and above, support has been provided in ASP.NET Core, so the assembly version "XX.X460.X.XX" cannot be used on the MVC platform.
https://help.syncfusion.com/aspnetmvc/getting-started
Web App generator

Looks like Canonical shut down their web app generator page. Does anyone know of some other easy way of simply creating click packages from web pages, so that we can keep making webapps? It is absurd to have to use the browser instead; it just fills the store with garbage, in my opinion.

You have alternate-webapp-generator, a great option.

Thank you. Alternate-webapp-generator works great. I do agree with you that webapps on the app store are garbage. I use my webapps to connect to my Raspberry Pi and webcam.

@Vehi_MV said in Web App generator:

Alternate-webapp-generator works great.

What and where is this? I can't find it on the web. Or do you mean the alternate-webapp-container?

@Bastos It is a script and you can download it from here: It uses a config.cfg file, which is included in the zip. You just have to put your webapps' info in the config file and run the script.

@Vehi_MV Don't say this, I made a webapp for a local public transport realtime monitor and it's really useful xD BR

@Flohack I didn't mean they are not useful, they are very useful, but to a very limited number of people. And there are a lot of such webapps, so it's really hard to find webapps that are useful for me. That is why the webapp-generator is a nice thing to have.

MV @Flohack realtime monitoring of public transport! That is great. One of the few things the Fahrplan app is not able to do. Is this webapp also available for Germany? If yes, can you provide it?

I agree on your points! For web-apps in the store, I totally agree that there are too many and only few users use them (even if they can be really useful!!!) For the alternate-webapp-generator it's better than nothing, but what Canonical did was really a wonderful idea!!! Because I'm not a developer and I don't even know how to run a script. That's why I have two questions: Would it be possible to do something as Canonical did, i.e. a server where you insert a link and you can download a .click?
Or a better solution: would it be possible to have a web-app generator on the phone, i.e. an app where you insert a link and it generates an app directly on the phone? Because I think that web-apps are "useless" in the store (even if I could install your app, I'm in Vienna too ;)), so I think a solution has to be found for this one day, and a web-app generator could help! What do you think about it?

It would be great if you could "generate" a web app from the browser on your UT. It would be kind of like a bookmark on steroids. @Vehi? +1

I haven't yet managed to get a working environment on my Ubuntu 16.10 system for compiling click apps. I used to have it working with the Ubuntu SDK, but that was a while ago. I tried to install clickable from the instructions in the wiki, but got stuck on some errors that I didn't manage to resolve. The webpage that Canonical had was OK, so if there is someone that can replicate their page, it would be great. At this moment, I am very happy that at least this script exists, so we have a possibility to easily generate webapps.

So, first time I tried the web app generator. Unfortunately without success. Can maybe someone tell me what I have done wrong?

I downloaded and unzipped to my Ubuntu 16.04 desktop. I made the alternate-webapp-generator.sh file executable. I edited the config.cfg:

# config.cfg
export namespace="BastosNamespace"
export app_name="xxAppname"
export app_title="xxAppTitle"
export app_url=""
export app_description="A webapp client for xx.de."
export app_UA="Mozilla/5.0 (Linux; Android 5.0; Nexus 5) AppleWebkit/537.36 (KHTML, like Gecko) Chrome/38.0.2125.102 Mobile Safari/537.36"
export app_version="0.1"
export maintainer_name="xxx"
export maintainer_email="xxx@gallehr.de"
export icon_path="/home/xxx/xxx.png"

In the terminal I cd to the right folder and run:

. alternate-webapp-generator.sh

Everything went with success; I get a .click file created. I copied the .click file to my phone. With UT Tweak Tool I installed the .click with success. BUT unfortunately the webapp doesn't show up in my apps, not even after a restart.

Hi. Just a stupid question. From what I see from your dummy config file: are you sure icon_path is OK? Maybe you forgot the username after /home/?

@Vehi_MV said in Web App generator:

Maybe you forgot the username after /home/

Thanks. Yes, you are right, I used the dummy xxx also for the path. I'll correct it... Thanks to @Vehi_MV here is the solution

Here there is a new solution! Thank you for this app!
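For anyone debugging the same failure: a small pre-flight check before running the generator can catch a bad path up front. This is only a sketch — check_config is a made-up helper, not part of the generator script; the variable names follow the config.cfg format shown above:

```shell
# Sanity-check a config.cfg before running alternate-webapp-generator.sh.
# Sourcing the file exposes its exported variables to the checks below.
check_config() {
    . "$1"
    [ -n "$app_name" ]  || { echo "app_name is empty" >&2; return 1; }
    [ -n "$app_url" ]   || { echo "app_url is empty" >&2; return 1; }
    [ -f "$icon_path" ] || { echo "icon not found: $icon_path" >&2; return 1; }
    echo "config looks OK: $app_name $app_version"
}
```

Running check_config ./config.cfg would have flagged the nonexistent icon_path discussed above before the generator ever produced a broken .click.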
https://forums.ubports.com/topic/291/web-app-generator
"Light Makes Right" June 21, 1989 Volume 2, Number. eye!erich@wrath.cs.cornell.edu This is a lot more direct than hopping the HP network through Palo Alto and Colorado just to get to Ithaca. We have the connection courtesy of the Computer Science Dept at Cornell, and they have asked us to try to keep our traffic down. So, please don't be a funny guy and send me image files or somesuch. I just noticed that Andrew and I are out of sync: his hardcopy version is on Volume 4, and I'm on Volume 3. One excuse is that the first year of the email edition is labeled "Volume 0", since it wasn't even called "The Ray Tracing News" at that point. An alternate excuse is that I program in "C", and so start from 0. Anyway, until maybe the new year, I'll stick with the current scheme (hey, no one even noticed that last issue was misnumbered (and corrected on the USENET copy)). back to contents The latest issue of the hardcopy Ray Tracing News (Volume 3, Number 1, May 1989) goes into the mail today, 31 May. Everyone who is on the softcopy mailing list should receive a copy. If you don't get a copy in a week or two, please let me know (glassner.pa@xerox.com). It would help if you include your physical mailing address, so I can at least confirm that your issue was intended to go to the right place. Contributions are now being solicited for Vol. 4, No. 2. Start working on those articles! back to contents ____________ Stuart Green - multiprocessor systems for realistic image synthesis Department of Computer Science University of Bristol Queen's Building University Walk Bristol. BS8 1TR ENGLAND. green@uk.ac.bristol.compsci I am working on multiprocessor implementations of algorithms for realistic image synthesis. So far, this has been restricted to straightforward ray tracing, but I hope to look at enhanced ray tracing algorithms and radiosity. 
I've implemented a ray tracer on a network of Inmos Transputers which uses mechanisms for distributing both computation and the model data amongst the processors in a distributed memory MIMD system. ____________ Craig Kolb (and Ken Musgrave) My primary interests include modeling natural phenomena, realistic image synthesis, and animation. I can be reached at: Dept. of Mathematics Yale University P.O. Box 2155 Yale Station New Haven, CT 06520-2155 (203) 432-7053 alias craig_kolb craig@weedeater.math.yale.edu alias ken_musgrave musgrave-forest@yale.edu ...I've just started looking into ray/spline intersection. We do a lot of heightfield-tracing 'round here, and in the past have rendered them using a triangle tessellation. I'm giving splines a shot in order to render some pictures of eroded terrain for our SIGGRAPH talk. I notice that you list spline intersection among your primary interests. What sort of methods have you investigated? At the moment I've implemented (what I assume is) the standard Newton's method in tandem with a DDA-based cell traversal scheme (as per our SIGGRAPH paper). Although this works, it's not exactly blindingly fast... Do you know of any 'interesting' references? ____________ Kaveh Kardan Visual Edge Software Ltd. 3870 Cote Vertu Montreal, Quebec H4R 1V4 (514)332-6430 larry.mcrim.mcgill.edu!vedge!kaveh I graduated with a BS in Math from MIT in 1985, did some work in molecular graphics at the Xerox Research Centre of Canada (XRCC), wrote the renderer at Neo-Visuals (now known as SAS Canada) -- which included a raytracer --, and the animation stuff at Softimage. I'm currently working at Visual Edge on the UIMX package: an X Windows user interface design system. Regarding the Softimage raytracer: it was written by Mike Sweeney (who used to be at Abel, and who did "Crater Lake" at Waterloo). I will also be acting as a mail forwarder for Mike, as Softimage is not on any networks. 
So in effect, you should probably include Mike in the mailing list as well, with my address -- or somehow let people know that he can be reached c/o me.

If I may make some comments about the stuff I have read so far in the back issues:

====================

Jeff Goldsmith writes:

> Having worked at two CG Software companies, I know firsthand how the "to do" list grows faster than you can possibly implement features (no matter how many programmers you have -- cf. "The Mythical Man-Month").

Jeff is right that ray tracing sounds glitzy, and, yes, it is another factor to toss into the sales pitch -- but it is not at all clear that it is worth the effort. Most (if not all) ray tracers assume either infinite rendering time or infinite disk space. In the real world (a 68020 and a 144Meg disk) this is not the case. The raytracer I wrote at Neo Visuals was written in Fortran -- ergo no dynamic memory allocation -- so I had to work on optimizing it without significantly increasing the memory used. This mostly involved intelligently choosing when to fire rays. The renderer performs a Watkins-style rendering, and fires secondary rays from a pixel only if the surface at that pixel needs to be raytraced. Memory constraints prevented me from using any spatial subdivision methods.

Yes, ray traced images are great sales tools. They are also sometimes not entirely honest -- novice users ("I want a system to animate Star Wars quality images, about ten minutes of animation a day on my XT") are not aware of the expense of raytracing, and very few salesmen go out of their way to point this out. However, these same users, unsure of the technology, put together a list of buzzwords (amongst them "raytracing") and go out to find that piece of software which has the most features on their list. Hence I coined the phrase "buzzword compatible" while at Neo-Visuals (and also "polygons for polygons sake" -- but that's another story).
I have also seen demos, animations, and pictures at trade shows, presented by hardware and software vendors, which were extremely and deliberately misleading. A very common example is to show nice animation that was not really created with your software product, the animation typically having been created by special programs and add-ons. An obvious example was Abel, marketing their "Generation 2" software with "Sexy Robot", "Gold", "Hawaiian Punch", etc. I only mention Abel because they are no longer in business -- I don't want to mention any names of existing companies. I hadn't intended this to be a flame. But that sums up why not all software vendors bother with raytracing, and how it can be abused if not handled carefully.

====================

On Steve Upstill's remarks on the Renderman standard:

Disclaimer: I have not read the Renderman specs, and have spoken to people who liked it and people who didn't. I would like to say that while I was at Neo-Visuals, Tom Porter and Pat Hanrahan did indeed drop by to ask us about our needs, and to ensure that the interface would be compatible with our system. As I recall, we asked that the interface be capable of handling arbitrary polygons (n vertices, concave, etc). As I recall, I was playing devil's advocate at the meeting, questioning whether rendering had settled down enough to be standardized. So yes, at least Neo-Visuals did get to have a say and contribute to the interface.

I spoke to one rendering person at Siggraph who didn't appreciate the way Pixar had handed down the interface and said "thou shalt enjoy." Well, the alternative would be a PHIGS-like process: years spent in committees trying to hash out a compromise which will in all likelihood be obsolete before the ink is dry. In fact, two hardware vendors decided to take matters into their own hands and came up with PHIGS+. Yes, the interface is probably partly a marketing initiative by Pixar. Why would they do it otherwise? Why should they do it otherwise?
I would guess that Pixar hopes to have the standard adopted, then come out with a board which will do Renderman rendering faster than anyone else's software. This seems a natural progression. More and more rendering features have been appearing in hardware -- 2D, 3D, flat shaded, Gouraud, and now Phong and texture maps. It is very probable that in a few years, "renderers" will be hardware, except for experimental, research, and prototype ones. back to contents 1) Make 1 pass through pts. Find these 6 pts: pt with min x, max x, min/max y, min/max z. Pick the pair with the widest dimensional span. This describes the diameter of the initial bounding sphere. If the pts are anywhere near uniform, this sphere will contain most pts. 2) Make 2nd pass through pts: for each pt still outside current sphere, update current sphere to the larger sphere passing through the pt on 1 side, and the back side of the old sphere on the other side. Each new sphere will (barely) contain its previous pts, plus the new pt, and probably some new outsiders as well. Step 2 should need to be done only a small fraction of total num pts. 
The following is code (untested as far as I know) to increment sp:

    typedef double Ordinate;
    typedef double Distance;

    typedef struct {
        Ordinate x;
        Ordinate y;
        Ordinate z;
    } Point;

    typedef struct {
        Point center;
        Distance radius;
    } Sphere;

    Distance separation(pa, pb)
    Point *pa;
    Point *pb;
    {
        Distance delta_x, delta_y, delta_z;

        delta_x = pa->x - pb->x;
        delta_y = pa->y - pb->y;
        delta_z = pa->z - pb->z;
        return (sqrt(delta_x * delta_x + delta_y * delta_y + delta_z * delta_z));
    }

    Sphere *new_sphere(s, p)
    Sphere *s;
    Point *p;
    {
        Distance old_to_p;
        Distance old_to_new;

        old_to_p = separation(&s->center, p);
        if (old_to_p > s->radius) {    /* could test vs r**2 here */
            s->radius = (s->radius + old_to_p) / 2.0;
            old_to_new = old_to_p - s->radius;
            s->center.x = (s->radius * s->center.x + old_to_new * p->x) / old_to_p;
            s->center.y = (s->radius * s->center.y + old_to_new * p->y) / old_to_p;
            s->center.z = (s->radius * s->center.z + old_to_new * p->z) / old_to_p;
        }
        return (s);
    }

[This looks to be a good quick algorithm giving a near-optimal solution. Has anyone come up with an absolutely optimal solution? The "three point" solution (in last issue) gives us a tool to do a brute force search of all triplets, but this is insufficient to solve the problem. For example, a tetrahedron's bounding sphere cannot be found by just searching all the triplets, as all such spheres would leave out the fourth point. - EAH]

back to contents

At present, our algorithm has been modified to take into account load balancing and we have several results not yet published. These new results may give some important conclusions about the Cleary approach (processor-volume association). We are working now on a new algorithm based on a global memory on distributed memory architectures! For my mind it is the best solution to obtain load and memory balancing. The ray coherence property is a means to have a sort of locality when data is read in the global memory (best use of caches). We are very interested (D. Badouel, K.
Bouatouch and myself) in submitting to the "Ray-tracing News" a short paper which summarizes our work in parallel ray-tracing algorithm for distributed memory architecture. This contribution should present two ray tracing algorithms with associated results. This work has not been yet published outside France. back to contents ======== USENET cullings follow ================= Newsgroups: comp.arch,comp.graphics Organization: RPI CS Dept. Hi! I am wondering if anybody knows if there have been any attempts to port a ray tracing algorithm on a dataflow computer, or if there has been such a machine especially built for ray tracing. I am posting to both comp.arch and comp.graphics since I think that it concerns both newsgroups. It seems to me that a dynamic dataflow architecture is more appropriate to this problem because of the recursiveness and parallelism of the algorithm. Thanks in advance for any info... back to contents Summary: my noise, better described (I hope) Reply-To: bullerj@handel.colostate.edu.UUCP (Jon Buller) Organization: Colorado State University, Ft. Collins CO 80523 In article <...> jamesa@arabian.Sun.COM (James D. Allen) writes: >In article <...>, aaa@pixar.UUCP (Tony Apodaca) writes: >> In article <...> coy@ssc-vax.UUCP (Stephen B Coy) writes: >> > ...My question: Does anyone out there know what this >> >noise function really is? >> >> ... Conceptually, noise() >> is a "stochastic three-dimensional function which is statistically >> invariant under rotation and translation and has a narrow bandpass >> limit in frequency" (paraphrased from [Perlin1985]). This means that >> you put three-space points in, and you get values back which are basically >> random. But if you put other nearby points in, you get values that are >> very similar. The differences are still random, but the maximum rate of >> change is controlled so that you can avoid aliasing. 
If you put a set >> of points in from a different region of space, you will get values out >> which have "the same amount" of randomness. > > Anyone willing to post a detailed description of such an > algorithm? (Jon Buller posted one, but I couldn't figure it out: > what is `Pts'?) Sorry about not really describing my program to anyone, I know what it does, and I never expected anyone else to see it (isn't it obvious) :-) What it does is: pass a location in space, and an array of random numbers (this is 'Pts'). I fill the array with 0.0 .. 1.0 but any values or range will work. (I have other textures which color based on distance to the nearest point of a random set, hence the name, It has 4 values per entry at times.) Step 1: change the location to a group of points to interpolate. This is where xa,xb,xc,...zc come in, any location with the same coords (when trunc'ed) will produce the same xa...zc values, making the same values for the interpolation at the end. These xa..zc are then hashed in to the 'Pts' array to produce p000...P222, these 27 random numbers are then interpolated with a Quadratic 3d B-Spline (the long ugly formula at the end). The variables based on xf,yf, and zf (I believe they are x0..z2) are the B-Spline basis functions (notice to get DNoise, just take the (partial) derivatives of the basis functions and re-evaluate the spline). Step 2: now you have a value that is always smaller than the largest random number in 'Pts' (equal to in the odd case that major bunches of the numbers are also the maximum in the range). By the same argument, all numbers returned are larger than the smallest number in the array. (this can be handy if you don't want to have to clip your values to some limit.) I hope this explains the use of the routine better. Sorry I didn't realize that earlier. If you have any other questions about it, mail them to me, and I'll do my best to explain it. 
Jon back to contents Newsgroups: comp.graphics Organization: Old Dominion University, Norfolk, VA daniel@unmvax.unm.edu writes: >Has anyone converted the public domain ray trace program called DBW_render >to run on a SUN workstation? Ofer Licht (ofer@gandalf.berkeley.edu) has done just that. His modified DBW_Render is available via anonymous ftp from xanth.cs.odu.edu as /amiga/dbw.zoo. It is also designed to use ``DistPro'' to distribute the computations between many machines (this is available as well as /amiga/distpro.zoo). His address: Ofer Licht (ofer@gandalf.berkeley.edu) 1807 Addison St. #4 Berkeley, CA 94703 (415) 540-0266 back to contents Newsgroups: comp.graphics Organization: NASA Ames Research Center, Calif. In article <...> jwl@ernie.Berkeley.EDU.UUCP (James Wilbur Lewis) writes: >In article <...> jep@oink.UUCP (James E. Prior) writes: >>I've noticed that when I look closely at reasonably clean bare steel in good >>sunlight that it appears to have a very fine grain of colors. >> >>What is this due to? > >Probably a diffraction-grating type effect due to scratches, roughness, or >possibly crystalline structure at the surface. Funny you should mention this. I was sitting with my officemate, George Michael, he says Hi Kelly, and we were talking about stuff and he brought up the subject of polish. He said there were people at Livenomore who were researching the issue of polish for big mirrors, but that polish really isn't well understood, still open interesting physical science questions. Polish consists of minute "scratches" which have a set of interesting properties. You can probably [write] to them and get TRs on the topic. Polish is more than iridescence. Also, since somebody asked, the date on the Science article by Greenberg on Light reflection models for graphics is 14 April 1989, page 166. It will provide simple models for this type of stuff. back to contents Newsgroups: comp.graphics Organization: Versatec, Santa Clara, Ca. 
95051 I've come up with a fast approximation to 3D Euclidean distance ( sqrt(dx*dx+dy*dy+dz*dz) ). (It's probably not original, but .....) 1) find these 3 values: abs(dx), abs(dy), abs(dz). 2) Sort them (3 compares, 0-3 swaps) 3) Approx E.D. = max + (1/4)med + (1/4)min. (error: +/- 13%) max + (5/16)med + (1/4)min has 9% error. max + (11/32)med + (1/4)min has 8% error. As you can see, only shifts & adds are used, and it can be done with integer arithmetic. It could be used in ray tracing as a preliminary test before using the exact form. We all have our dirty little tricks. back to contents Newsgroups: comp.graphics Organization: RPI CS Dept. A while ago, while it was still snowing, I was feeling adventurous, and a nice weekend I decided to write an obfuscated ray tracer. A friend of mine told me that is not too obfuscated for the "Obfuscated C Contest", I had already wasted one whole day, so I gave up. Today, I was cleaning up my account, and I thought it would be a very appropriate posting for comp.graphics. It is a hacker's approach to ray tracing; produces a text image on the screen. No shading; different characters represent different objects. The source code is 762 bytes long. I KNOW that I'll get flamed, but who cares! :-) Have fun people! 
So, here it is: Compile with cc ray.c -o ray -lm /* (c) 1988 by George Kyriazis */ #include <math.h> #define Q " #define _ define #_ O return #define T struct #_ G if #_ A(a,b) (a=b) #define D double #_ F for #define P (void)printf(Q #define S(x) ((x)*(1/*p-"hello"[6])/*comment*/*x)) T oo{D q,r,s,t;};int m[1]={2};T oo o[2]={{10,10,10,18},{15,15,17,27}};int x,y;D I(i){D b,c,s1,s2;int*p=0,q[1];b=i/*p+1["_P]+(1-x*x)*erf(M_PI/i)/1*/**q+sin(p);{ {b=2*-(i+o)->s;c=S(x-i[o].q)+S(y-o[i].r)+S(i[o].s)-(o+i)->t;}A(s1,S(b));}{G((s2 =(S(b)-4*c)<0?-1:sqrt(-4*c+S(b)))<0){O(b-(int)b)*(i>=0-unix);}}s1=(-b+s2)/2;s2= s1-s2;s1=s1<=0?s2:s1;s2=s2<=0?s1:s2;O s1<s2?s1:s2;}main(){D z,zz;int i,ii;F(A(y ,0);y<24;y-=listen(3,0)){F(x-=x;x<40;x++){F(z=!close(y+3),A(i,0);i<*m*(y>-1);i= A(i,i+1))G(z<(A(zz,I(i))))z=zz,ii=i;G(!!z)P%d",ii);else P%c",32-'\0');}P\n");}} back to contents Newsgroups: comp.graphics Organization: University of Oregon CIS Dept. Recently, the ftp archive of raytracing stuff was moved from our dying VAX-750 (drizzle.cs.uoregon.edu) to our new fileserver, which is called skinner.cs.uoregon.edu, or just cs.uoregon.edu. There is more diskspace available, and I have expanded the archives to contain several new items. I thought I would post the README here to let people know of its availability. skinner.cs.uoregon.edu contains information largely dealing with the subject of raytracing, although a radiosity tracer or solid modeler would be a welcome addition to the contents there. I am always busy looking for new software aquisitions, so if you have anything you wish to put there, feel free to send me a note. Mark VandeWettering -cut-cut-cut-cut-cut-cut-cut-cut-cut-cut-cut-cut-cut-cut-cut-cut-cut-cut-cut-cut The old README was dated, so I thought I would update this new one... dr-xr-xr-x 2 ftp 512 Feb 11 18:53 bibs contains bibliographies for fields that I am interested in, such as graphics and functional programming. 
drwxrwxr-- 2 ftp 512 May 13 23:44 gif.fmt descriptions of the gif format. Too many people wanted this, so I thought I would make it available. drwxrwxr-x 2 root 512 May 24 22:25 grafix-utils Utilities for converting among graphics formats etc. Now includes fuzzy bitmap, should also include pbm and utah raster toolkit soon. drwxrwxr-- 2 ftp 1024 May 14 15:45 hershey The Hershey Fonts. Useful PD fonts. drwxrwxr-x 2 root 512 May 24 22:26 mtv-tracer My raytracer, albeit a dated version. dr-xr-xr-x 2 ftp 512 Feb 16 17:24 musgrave Copies of papers on refraction by Kenton Musgrave. drwxrwxr-x 2 root 512 May 24 22:26 nff Haines SPD raytracing package, with some other NFF images created by myself & others. Useful for the mtv raytracer. drwxr-xr-x 2 ftp 1536 May 24 11:44 off-objects Some interesting, PD or near PD images from the OFF distribution. dr-xr-xr-x 2 ftp 512 Feb 15 22:48 polyhedra Polyhedra from the netlib server. I haven't done anything with these... dr-xr-xr-x 2 ftp 512 Mar 6 17:45 qrt The popular raytracer for PCs. dr-xr-xr-x 2 ftp 512 May 24 22:26 rayfilters Filters to convert the MTV output to a number of devices... drwxrwxr-x 2 root 512 May 24 22:26 raytracers Other raytracers.... -rw-r--r-- 1 ftp 323797 May 24 01:47 sunrpc.tar.Z SUN RPC v.3.9 [All issues of the email version of "The RT News" have been put in the directory "RTNews" since this posting.] back to contents
Tag not in sync

Hi All, I'm trying to create a new UVWTag (if an object doesn't already have one). Here's my basic code:

    def main():
        obj = doc.GetFirstObject()
        if obj.GetTag(c4d.Tuvw) is None:
            tag = obj.MakeTag(c4d.Tuvw)
            tag[c4d.TEXTURETAG_PROJECTION] = 3
            tag.SetName('Cubic_map')
        c4d.EventAdd()

The code runs; however, Cinema reports the following error later when trying to access the UVs:

    A problem with this project has been detected: Object "Cube" - Tag 5671 not in sync. Please save and contact MAXON Support with a description of the last used commands, actions or plugins.

What am I missing?

A UVWTag does not have any projections. You might want to use a Texture Tag (Ttexture) instead. Best, Robert

It seems like it has a projection. Please note that I don't know Cinema 4D itself; I'm just coding with the API (I came from the Blender world). Anyway, if you go to the UV edit and select your object and UVW tag, you can choose the UV mapping projection. Let me attach a screenshot of what I'm trying to access. On the bottom right you can see there are some projection types; how do I set those through Python?

The projection modes listed there are actually commands that operate on the UVs directly. I'm not sure if you can access those functions in py, but here's a workaround. As said above, use a texture tag to set the desired projection. After that, you can call the command Generate UV coordinates, which then creates a new UV tag with the projection "baked in", and the texture mode is automatically set to "UVW".

Hi Rage, thanks for reaching us.
With regard to the issue mentioned, and considering your non-Cinema background, let's first settle a few concepts:

- UVWTags merely act as "storage" for the UVW data used on polygonal objects;
- TextureTags are instead used to create the actual texturing on a generic object (whether polygonal or parametric).

A parametric object can use a texture even without an explicit UVWTag, since the TextureTag delivers all the information on how to map a texture to the object. When a parametric object (e.g. a cube) is converted into a polygonal one, the UVW values that are part of the parametric object get dumped into the UVWTag that is generated as soon as the conversion ends.

That said, given a certain (polygonal) object, the way to go is:

    def main():
        # check that an object is selected
        if op is None:
            return
        # get the active material
        activeMat = doc.GetActiveMaterial()
        if activeMat is None:
            return
        # instantiate a TextureTag
        sph50NoTileTextureTag = c4d.TextureTag()
        if sph50NoTileTextureTag is None:
            return
        # set the mapping type
        sph50NoTileTextureTag[c4d.TEXTURETAG_PROJECTION] = c4d.TEXTURETAG_PROJECTION_SPHERICAL
        # turn off tiling
        sph50NoTileTextureTag[c4d.TEXTURETAG_TILE] = False
        # scale the mapping to 50% on u and v
        sph50NoTileTextureTag[c4d.TEXTURETAG_LENGTHX] = 0.5
        sph50NoTileTextureTag[c4d.TEXTURETAG_LENGTHY] = 0.5
        # link to the active material
        sph50NoTileTextureTag[c4d.TEXTURETAG_MATERIAL] = activeMat
        # generate the corresponding UVWTag using the mapping settings specified in the TextureTag
        sph50NoTileUVWTag = c4d.utils.GenerateUVW(op, op.GetMg(), sph50NoTileTextureTag, op.GetMg())
        # check for the UVWTag being properly created
        if sph50NoTileUVWTag is None:
            return
        # set the name of the tag
        sph50NoTileUVWTag.SetName('0.5 non-tiled spherical')
        # add both the UVWTag and the TextureTag
        if op.GetTag(c4d.Tuvw) is None:
            op.InsertTag(sph50NoTileUVWTag)
        if op.GetTag(c4d.Ttexture) is None:
            op.InsertTag(sph50NoTileTextureTag)
        # notify Cinema about the changes
        c4d.EventAdd()

Best,
Riccardo
08 December 2010 06:58 [Source: ICIS news] By Prema Viswanathan

DUBAI (ICIS)--Borouge is looking at penetrating the largely untapped northern

"With our distribution hubs and compounding plants in Shanghai and Guangzhou, we will be in a strong position to service the eastern and southern China markets…We now need to look at the northern part of the country, which has huge potential for growth," Borouge Pte Ltd CEO William Yau told ICIS in an interview.

Yau was in Dubai for the three-day 5th Gulf Petrochemicals and Chemicals Association (GPCA) forum that will run up to 9 December.

Meanwhile, Borouge is enhancing its presence in southern The company was planning to build a 100,000 tonne/year polypropylene (PP) compounding plant at the site, he said. "We expect the plant to start up by 2012 and complement the downstream support we are providing through the existing compounding plant in The

A similar strategy of establishing local presence in other key markets would be pursued, he said. "We have to look long term so that we have our support system on the ground when Borouge III [at Ruwais in Abu Dhabi] starts up, bringing our polyolefins capacity to 4.5m tonnes/year by end 2013," he added.

Yau said that Borouge is expanding its sales and marketing presence in "We now have 17 staff in our office in Mumbai and will soon be setting up a new office in Compounding and distribution facilities were expected to follow later.

Borouge II, also at Ruwais, was operating well and the company was slowly ramping up capacity at the complex, Yau added. "But we expect the full 2m tonne/year output only next year, although we have begun to produce and sell small volumes of the different polyethylene (PE) and PP grades already," he said.

Yau said he was "cautiously optimistic" about the market outlook for the coming year. "The fundamentals are still there. In key markets such as

Borouge is a joint venture between the Abu Dhabi National Oil Company (ADNOC)
Description: TermsComponent should be distributed

I got the previous patch working. It was very close. I attached the java file and a patch for just the TermsComponent:

- Based on Matt's patch
- Synced to trunk
- Uses BaseDistributedTestCase

All tests pass. I had to change TermData#frequency to an int to match the output of distributed and non-distributed cases. It is theoretically possible for the sum of frequencies from all shards to exceed the size of an int, but I don't think it is practical right now. The problem is that we represent frequency as int everywhere for non-distributed responses. If we want longs in distributed search responses then we must start using longs in non-distributed responses as well to maintain compatibility.

Matt – there is an issue open for adding SolrJ support for TermsComponent: SOLR-1139. Is it possible to replace the TermsHelper and TermData classes with the classes in SOLR-1139? I'd like to have the same classes parsing responses in SolrJ and distributed search.

The facet component internally uses long to add up distributed facet counts, and then uses this code:

    // use <int> tags for smaller facet counts (better back compatibility)
    private Number num(long val) {
      if (val < Integer.MAX_VALUE) return (int)val;
      else return val;
    }

Yes, it's not ideal to switch from <int> to <long> in a running application, but I think it's better than failing or overflowing the int. Client code in SolrJ should be written to handle either via ((Number)x).longValue().

Here is an updated patch that includes Shalin's suggestions:

- replace TermData with TermsResponse.Term
- update TermsHelper to use the parsing code from TermsResponse

I also changed TermsResponse.Term#frequency to a long so that we don't overflow when calculating the frequency.
Then to keep back-compatibility with existing code I do the following when writing it to the NamedList:

    if (tc.getFrequency() >= freqmin && tc.getFrequency() <= freqmax) {
      fieldterms.add(tc.getTerm(), ((Number)tc.getFrequency()).intValue());
      cnt++;
    }

Is this a good approach?

This new patch includes SOLR-1139.

    if (tc.getFrequency() >= freqmin && tc.getFrequency() <= freqmax) {
      fieldterms.add(tc.getTerm(), ((Number)tc.getFrequency()).intValue());
      cnt++;
    }

I changed freqmin and freqmax to long and used Yonik's method to write an int if possible or else switch to longs in the response. I'll commit this shortly.

Committed revision 890199. Thanks Matt!

Correcting Fix Version based on CHANGES.txt, see this thread for more details...

Bulk close for 3.1.0 release

Here is my first attempt at a patch that is not currently working. For some reason only the prepare and process methods are being called. It seems that the shards parameter is not being honored like it is in the other distributed components because rb.shards is always null. I have looked at the other distributed components and did not notice them doing anything special with the shards parameter. I have based this code on the information from and looking through the FacetComponent, DebugComponent, StatsComponent, and HighlightComponent code. Any help figuring out why the other methods are not being called is greatly appreciated. Please ignore the println statements; they are for debug only and will be removed in the finalized, working patch. Thanks!
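For readers skimming the thread: the essence of a distributed TermsComponent is summing each shard's term/frequency map and then re-applying the frequency limits, which is exactly where the int-versus-long question above arises. A rough Python sketch of that merge step (names are illustrative, not the actual Solr code):

```python
def merge_shard_terms(shard_responses, freq_min=1, freq_max=float("inf"), limit=10):
    """Sum term frequencies across shards, then filter and rank.

    shard_responses: one {term: frequency} dict per shard. The summed
    totals are where a 32-bit int could overflow, hence the patch's
    move to long totals written back as <int> when they fit.
    """
    totals = {}
    for response in shard_responses:
        for term, freq in response.items():
            totals[term] = totals.get(term, 0) + freq
    # Re-apply the freq_min/freq_max window on the merged totals.
    kept = [(t, f) for t, f in totals.items() if freq_min <= f <= freq_max]
    kept.sort(key=lambda tf: (-tf[1], tf[0]))  # frequency desc, then term
    return kept[:limit]
```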
XML Attributes

Introduction

One of XML's strengths is its ability to describe data with various options, using simple or complex elements. Although an element can have a value as large as a long paragraph of text, an element can have only one value. There are cases where you would like the same element to be equipped with, or be able to provide, various values that can also be easily distinguished or separated. To provide various types of information to an XML element, you can use one or more attributes.

Creating an Attribute

In C#, we are used to creating classes. Imagine that you want to create one for employees. Such a class would appear as follows:

    #region Using directives
    using System;
    using System.Collections.Generic;
    using System.Text;
    #endregion

    namespace CSharpLessons
    {
        class CEmployeeRecord
        {
            public string Username;
            public string Password;
            public Double Salary;
            public char MaritalStatus;
        }

        class Program
        {
            static void Main(string[] args)
            {
                CEmployeeRecord emplRecord = new CEmployeeRecord();

                emplRecord.Username = "kallar";
                emplRecord.Password = "7hd47D89";
                emplRecord.Salary = 20.12;
                emplRecord.MaritalStatus = 'D';

                Console.WriteLine("Username: {0}", emplRecord.Username);
                Console.WriteLine("Password: {0}", emplRecord.Password);
                Console.WriteLine("Marital Status: {0}", emplRecord.MaritalStatus);
                Console.WriteLine("Hourly Salary: {0}", emplRecord.Salary);
                Console.ReadLine();
            }
        }
    }

This would produce:

    Username: kallar
    Password: 7hd47D89
    Marital Status: D
    Hourly Salary: 20.12

The members of such a class are said to describe the class. When you instantiate the class (when you declare a variable of that class), you can provide a value for one or more member variables. Another instance of the class can have different values. In XML, a tag is like a variable of a C# class, except that you don't have to create the class but you must create the tag. Inside the start tag, you can provide one or more attributes that mimic the member variables of a class.
An attribute is created in the start tag using the formula:

    <tag Attribute_Name="Value">Element_Value</tag>

Like the tag, the name of an attribute is up to you. On the right side of the attribute, type its value in double quotes. The end tag doesn't need any information about any attribute; it is only used to close the tag. Here is an example of a tag that uses an attribute:

    <salary status="Full Time">22.05</salary>

In this example, status="Full Time" is called an attribute of the salary element. One of the good features of an attribute is that it can carry the same type of value as that of an XML tag. Therefore, using an attribute, you can omit giving a value to a tag. For example, instead of creating the following tag with its value:

    <movie>Coming to America</movie>

you can use an attribute to carry the value of the tag. Here is an example:

    <movie title="Coming to America"></movie>

In this case, you can still provide another or new value for the tag. You can create more than one attribute in a tag. To do this, separate them with an empty space. Here is an example:

    <movie title="Coming to America" director="John Landis" length="116 min">Nile Rodgers</movie>

If you create a tag that uniquely contains attributes without a formal value, you can omit the end tag. In this case, you can close the start tag as you would do for an empty tag. Here is an example:

    <movie title="Coming to America" />

Practical Learning: Creating XML Attributes

    <?xml version="1.0" encoding="utf-8"?>
    <logininfo>
      <credential username="belld" password="qwyIYw58" />
      <credential username="democracy" password="2k!2hk3W" />
      <credential username="autocrate" password="$*@#ywEy" />
      <credential username="progress" password="36%y68F$" />
    </logininfo>

    private void Form1_Load(object sender, System.EventArgs e)
    {
        this.dataSet1.ReadXml("credentials.xml");
        this.dataGrid1.DataSource = this.dataSet1;
        this.dataGrid1.DataMember = "credential";
        this.dataGrid1.CaptionText = "Login Credentials";
    }
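Attribute access is not specific to ADO.NET's DataSet. As a cross-check in another stack, here is a small Python sketch (not part of the original tutorial) that reads the same credential attributes with the standard library:

```python
import xml.etree.ElementTree as ET

# Abridged copy of the tutorial's credentials.xml (first two entries).
XML = """<logininfo>
  <credential username="belld" password="qwyIYw58" />
  <credential username="democracy" password="2k!2hk3W" />
</logininfo>"""

root = ET.fromstring(XML)
# Each attribute of a start tag becomes an entry readable via get().
credentials = [(c.get("username"), c.get("password"))
               for c in root.findall("credential")]
```

The point mirrors the C# version: attributes live on the element itself, so no child elements are needed to carry the username/password values.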
1. Silverlight assemblies cannot reference standard .NET assemblies. You'll notice in Visual Studio that Silverlight has its own project types. These project types also have restrictions as to which framework assemblies are available to them. If you are building components to be deployed with a Silverlight application and some other .NET application such as WCF or WPF, I suggest creating a single folder with both the standard .NET project file and the Silverlight project file stored in it. Then, you can simply include the .cs files (or other applicable file types) in each project. I like to append the .Silverlight naming convention to Silverlight projects; it makes it even easier to keep track of your project files. Also, remember to change the default namespace in your Silverlight project to match that of your standard .NET project. You'll want to use the same namespaces in both project types.

2. Not all classes, properties, and methods behave the exact same way between Silverlight and WPF. You will run into situations where the same class file has methods or properties that have different parameter options or behave differently in a Silverlight application versus a WPF application. It doesn't happen often but will occur. I've seen it occur with the BitmapImage class and some conversion methods regarding UTF encoding. By using the two-projects-in-one-folder option mentioned in #1, you can usually get around this by creating a folder for Silverlight-specific class file workarounds. This permits you to include your normal class file in a normal .NET project and the Silverlight workaround file in your Silverlight project. If you use the same namespace and class names, the rest of the application will never know the difference. Of course, this technique should be used as a last resort.

3. The System.Data assembly is not available for use in Silverlight.
The Entity Framework 1.0 requires some of your classes to be decorated with attributes that exist in the System.Data namespace. This virtually excludes using the Entity Framework with Silverlight applications if you intend to use the exact same class in WCF that you do in your Silverlight application.

4. XAML in WPF does not necessarily equate to XAML in Silverlight. If you are writing an online version of your application in Silverlight and an offline version in WPF, write your Silverlight application first. Most of the time, any XAML written in Silverlight will work in WPF.

5. Today's Silverlight application is likely to be tomorrow's offline version in WPF. It is this statement alone that should give you great pause when attempting to create a religiously pure service oriented architecture between your Silverlight and WCF application. If you were to construct an offline version of your application in WPF, many of the design decisions you would make will likely have different outcomes. There is no longer a network oriented service which forms a natural barrier between what the client application can use directly and is thus dependent upon the server side to deliver. Your only real choice is to create shared data contracts with the same namespaces across the WCF, WPF, and Silverlight applications. Failure to do so will create circular reference and ambiguous namespace issues all over the place. When this occurs, the only option is to manage different sets of code that perform the same task. Ugh...

I like to make environment-aware business logic assemblies which disconnect all logic from classes that could be transferred to other environments across the WCF, Silverlight, and WPF spectrums. Then, I prefer to create a shared environment interface to these business logic classes used in the Silverlight and WPF applications. The WCF application could reference the offline business logic class directly if you chose to bypass the use of the interface.
This technique enables shared user interface components to use the same business logic interface. The business logic underneath the interface either uses code for connecting to WCF or code connecting directly to the database in an offline local database environment. I tend to keep a separate user interface logic assembly apart from my WPF and Silverlight projects. This enables a single code set to manage user interface logic that can be reused in both environments. What we are left with is just the XAML and top-level events in the WPF and Silverlight user interface assemblies. Towards the end of the application development cycle, you will likely be able to move some of your XAML files into the shared user interface logic assembly to avoid code duplication. The process flow ends up looking like this:

    WPF:         WPF -> Shared UI -> Business Logic Interface -> WPF Business Logic -> WPF Database
    Silverlight: Silverlight -> Shared UI -> Business Logic Interface -> Silverlight Business Logic -> WCF -> WPF Business Logic -> WPF Database

6. A WCF reference in a Silverlight or WPF application is just a set of generated source code. The proxy classes created are not biblical written text creations or laws governing SOA. They are just code automatically spit out for you. The Visual Studio service reference creator now offers advanced options which include collection types and namespace reuse between your WCF, Silverlight, and WPF assemblies in the software family. When creating online and offline versions of the same application, use this feature. It is a godsend.

* NHibernate does not yet support ObservableCollection. Beware of this when deciding which collection type to standardize on.

7. Silverlight requires an asynchronous programming model. You may not like it but you need to accept it. I'll spare you the details you can read elsewhere about Silverlight's use of the browser network stack. What is important for you to consider is the impact this has on your offline WPF application.
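The shared business-logic interface described here is language-agnostic; a deliberately tiny Python illustration of the idea follows (all class and method names are invented for this example, not from any real codebase):

```python
from abc import ABC, abstractmethod

class CustomerLogic(ABC):
    """Shared business-logic interface used by both UI stacks."""
    @abstractmethod
    def get_customer(self, customer_id):
        ...

class OfflineCustomerLogic(CustomerLogic):
    """WPF-style implementation: talks to a local store directly."""
    def __init__(self, local_db):
        self.local_db = local_db  # stands in for a local database

    def get_customer(self, customer_id):
        return self.local_db[customer_id]

class OnlineCustomerLogic(CustomerLogic):
    """Silverlight-style implementation: would call WCF; stubbed here."""
    def __init__(self, service_call):
        self.service_call = service_call  # stands in for a WCF proxy

    def get_customer(self, customer_id):
        return self.service_call(customer_id)

def render_customer(logic: CustomerLogic, customer_id):
    # Shared UI code depends only on the interface, not the environment.
    return f"Customer: {logic.get_customer(customer_id)}"
```

The shared UI function never knows whether it is running "online" or "offline", which is the property the article's layering is after.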
If you are smart, you'll wire up your offline business logic to handle both asynchronous and synchronous methods. Then, just expose the asynchronous methods via your business logic interface. This avoids confusion for your user interface developers working in Silverlight and WPF but still permits your WCF layer to reuse the WPF business logic class and call the synchronous methods.

One additional suggestion (and admittedly an odd one): for insert/update/delete oriented web methods, I've found it quite beneficial to make the passed-in class also the return value (e.g. public Customer DeleteCustomer(Customer record)). The asynchronous nature of Silverlight calls to WCF means you've lost the reference to the record you were working with when the asynchronous complete event fires. By returning the class you sent in, that same class reference ends up in the .Result property. You could then use properties on your class to populate status messages or fire some other method with it as a parameter.

8. Silverlight does not support WCF FaultExceptions. You'll need to send in a custom exception object as an out parameter to all of your WCF web methods/entry points. Populate it accordingly in the WCF method and it will show itself in Silverlight in the MethodCompleted-oriented event from the asynchronous call to WCF. It appears as a property on the result class passed into the event. I tend to put success or failure properties on the custom exception class to provide a standard way to react to failures when the asynchronous complete event fires. You will also need to standardize on methods for passing custom exceptions back up through the user interface layer in WPF and Silverlight. Remember, Silverlight is reacting to return values from WCF and WPF is most often reacting to direct database calls. Any shared user interface components will expect the same behavior of events from both business logic types.

9. WCF does not permit the DataContract or DataMember attributes on .NET interfaces. Returning ICustomer instead of Customer from a WCF method results in the use of object instead of a strongly typed Customer class. You may want to consider using an abstract pattern with Customer if you want to create other variations of a Customer interface.

10. I was correct. Building online/offline versions of line of business applications in Silverlight and WPF is a complicated task. However, it is doable if you think very carefully about component reuse and reduction of code duplication up front. You will also need to think outside the box and perhaps vary from some of your normal design techniques. It doesn't make your old design techniques wrong. It just may mean that they are not a good fit for this type of application and could create obstacles you cannot overcome.
http://www.eggheadcafe.com/tutorials/xaml/100f610c-3417-4be5-a21c-2025234b4e01/silverlight-line-of-business-applications-with-offline-wpf-versions.aspx
In this tutorial I'm going to introduce you to HYPE, an ActionScript 3 framework released by Joshua Davis and Branden Hall on October 31, 2009. The purpose of this introduction is not to get into the intricacies of the framework, but to walk you through a rather simple exercise designed to demonstrate some of the possibilities this Open Source project offers you.

Overview:

As many of you may have guessed I am not a hard core coder. The reason, as I will tell anybody who listens, is that "coding is not hard wired into my genes". Give me a blank ActionScript panel in Flash and I'll stare at it for hours. What makes this odd is I can read the code when it is given to me. Think of me as being the kind of guy who will sit in a café in France reading a French book but can't speak the language. I need to tell you this now because it's important you know how I approached the exercise. Also, I want you to clearly understand that even though I have known Josh and Branden for quite a few years, I am not even close to being in their league or part of their "hype machine". I'm just a guy, like you, who stumbled across something that made my life easier. As a teacher, I've been handed a tool that lets me teach AS3 basics in a manner that gives "Visual Learners" immediate feedback. The thing is, I get that code, like the Flash IDE, is a "creative medium". The stuff that happens when artists and designers get hold of code is awesome to behold. Yet talk to people that are coming into Flash or have discovered they need to know AS3 to expand their creative possibilities and you will hear, "Man, this stuff is hard". At that point, frustration takes hold and, as they say, "Now you know the rest of the story ..." This brings me to Josh and Branden. They hear the same story from the people they meet in their travels.
The thing is, Josh was once in their shoes and what sets him apart from the rest of the pack is that he mastered the fundamentals of code while, at the same time, bringing his awesome Fine Arts talents to his work. He didn’t do it alone. Branden and Josh first became deeply involved with each other at FlashForward 2000 when they were both relatively unknown and, since then, a deep and profound professional relationship has developed between them. Over the years, Josh has come up with ideas, Branden has wired them up and then Josh rearranged the wiring to take the work to levels neither expected 10 years ago. What has always struck me, if you have ever seen them at a conference or presentation, is their infectious sense of "wonder" and "fun" when it comes to their collaborations or solo efforts. With the introduction of ActionScript 3, both Josh and Branden quickly realized "wonder" and "fun" were two words that were disappearing from the Flash community. Creatives avoided code as a creative medium because the language was perceived, among this group, as too complicated or complex to master. The ability to play what I call "What if..." games became too risky because the odds of breaking the project were almost 100% unless you had a deep understanding of OOP. In many respects, this explains the rise of the "Developer" in the Flash community over the past few years. I am not saying this is a bad thing or "dissing" the developers. It is just that because of the complexity of the language the critical balance of the Designer/Developer partnership became more weighted toward the Developer. Branden and Josh, rather than talk about it, decided to do something about it. What many people don’t know is the genesis for HYPE was another project, Flow, which essentially tried to make things easier for designers but it fell flat on its face simply because it was too ahead of itself. Rather than give up, Branden retooled Flow and with Josh's input it evolved into HYPE. 
What has me jacked about the HYPE project is that the words "wonder" and "fun" will come back if the creative community gets behind it. As you're about to discover, you really don't need a degree in Rocket Science to get hooked by HYPE. All you need is to be unafraid to play with numbers and values.

Step 1: Download HYPE. Be aware that Branden and Josh suggest you have Flash Professional CS4 installed before starting, even though this product will work with CS3.

Step 2: Extension Manager. Unzip the download and double-click the .mxp file to launch the Extension Manager. The Extension Manager will install everything into their ultimate destinations. If you're curious, explore the HYPE folder (hype_01) that you have just unzipped. Inside you will find:
- All the help files inside the doc folder.
- Examples of the various HYPE classes, including their corresponding source fla files in the examples folder.
- The HYPE classes, found in the src folder.

Step 3: Launch Flash. Double-click the Setup Classpath.jsfl to launch Flash. All this step does is let Flash know where everything was placed during the install. That's it folks. Now it's time to play.

Getting Caught in the HYPE

The idea for this exercise actually appeared in a tweet sent by Branden a week or so before the HYPE release. He said Josh was having too much fun playing with the SoundAnalyzer in HYPE and posted this link. The tweet caught my attention because one of the things I love to show is Audio Visualization in Flash. I use it as an example of being fearless around code rather than a full bore ActionScript lesson. I use myself as the poster child for this and show how, by playing with numbers and changing things I know, the complex can become interesting. I start with a basic visualization and then progress to a full bore light show.
Even though I make it interesting and fun, if I were to get into the nitty-gritty of working with the SoundMixer class and Byte Arrays, I may as well toss a wad of aluminum foil over to the shiny thing the audience is now staring at. They will have dialed out because I'm going way, way over their heads. When I saw Josh's example I immediately pawed through the code looking for what wasn't there: the complexity. Let's bring the fun back to playing with audio in Flash.

Step 4: New Document. Open a new Flash ActionScript 3.0 document. To get yourself started grab an mp3 audio file. This example uses "Busted Chump", an ActiveDen demo track, but any audio file from your collection will do.

Step 5: Triangle. Draw a small filled triangle on the stage and convert it to a movieclip named "Triangle". Once you've drawn the triangle and converted it to a movieclip, delete the movieclip from the stage.

Step 6: Symbol Properties. Right-click on the symbol in the Library and open the Symbol Properties. Select Export for ActionScript. Your symbol name will appear as the class. Click OK and disregard the error message that appears. As you may have guessed, HYPE is going to pull the symbol out of the Library and allow you to play with it using ActionScript. For those of you recoiling from this, keep in mind that at its heart HYPE is a playground that gives creatives the opportunities to play "What if ..." games and see the results with very little effort. In the case of this exercise I am going to play three "What if ..." games:
- What if I put the triangles on a grid?
- What if those triangles on the grid pulsated to the music?
- What if those pulsating triangles were put into motion?
Step 7: ActionScript. Enter the following ActionScript:

import hype.extended.layout.GridLayout;

var numItems:int = 80;
var gridLayout:GridLayout = new GridLayout(30, 30, 70, 50, 10);

for (var i:uint = 0; i < numItems; ++i) {
    var clip:Triangle = new Triangle();
    gridLayout.applyLayout(clip);
    addChild(clip);
}

The first "What if ..." game involves placing the movieclip in a grid and, to paraphrase Apple, "there is a class for that". In fact, in HYPE there is a class for practically everything you will want to do. If there isn't, write one because HYPE is Open Source. The next line tells Flash you want to put 80 triangles on the stage. Having done that, you now determine how they will appear on the grid by adding the parameters into the GridLayout object. In this case I want the grid to start 30 pixels in from the left of the stage and 30 pixels from the top of the stage. Also, there is to be 70 pixels of space between the triangles on the x axis and 50 pixels of space between the rows. The final parameter tells HYPE that I want to see what happens if there are 10 columns of triangles. The "for" loop tells HYPE how to place the 80 triangles on the stage. You grab the movieclip out of the library, give it an instance name, then by using the applyLayout() method of the GridLayout class, lay the objects into the grid using the parameters of the GridLayout object.

Step 8: Test. Save and test the movie. That was easy, and if I want to change up the look all I need to do is play with the values in the numItems variable and the parameters in the GridLayout object. Don't like the triangle? Then toss something else (an image, for example) into the movieclip or create a completely different movieclip and use that instead.

What if the triangles were tied to an audio track? The triangles are on a grid and it is now time for our next "What if ..." game. In this case: What if the alpha and scale values of the triangles were tied to an audio track?
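Before wiring up the audio, a quick aside: the placement arithmetic those five GridLayout parameters imply can be sketched outside Flash. The helper below is plain Python for illustration only, not HYPE's actual code.

```python
# Hypothetical helper, not HYPE code: the x/y placement implied by
# GridLayout(startX=30, startY=30, spacingX=70, spacingY=50, columns=10).
def grid_position(index, start_x=30, start_y=30,
                  spacing_x=70, spacing_y=50, columns=10):
    col = index % columns   # which column this item falls in
    row = index // columns  # which row, filling left to right
    return (start_x + col * spacing_x, start_y + row * spacing_y)

print(grid_position(0))   # (30, 30): first triangle, top-left
print(grid_position(9))   # (660, 30): last triangle of the first row
print(grid_position(10))  # (30, 80): wraps to the second row
```

Seeing the wrap from column 9 back to column 0 makes it clear why the fifth parameter controls how wide the grid grows before a new row starts.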
At this point, many creatives would be, as I said earlier, looking at the "shiny thing" over there. Just keep in mind the whole purpose of HYPE is to let you play, not become a hard-core coder. Let's have some fun.

Step 9: Import Classes. Click into line 2 of the Script and add the following code:

import hype.extended.behavior.FunctionTracker;
import hype.framework.sound.SoundAnalyzer;

These two classes work together in HYPE. FunctionTracker, in very simple terms, manages the functions that are running and makes sure they are mapped to the specific properties of the target object. In our case, we are going to play with the alpha and scale properties of the triangle as it reacts to the audio track. The SoundAnalyzer class is where the magic happens. What it does, again in very simple terms, is to turn an audio file into data which can then be played with. What I absolutely adore about this class is I don't have to write a ton of very complex code to get immediate results. I just need to know what the parameters do and then start playing.

Step 10: SoundAnalyzer Object. Add the following two lines of code after the import statements:

var soundAnalyzer:SoundAnalyzer = new SoundAnalyzer();
soundAnalyzer.start();

All these two lines do is create the SoundAnalyzer object and switch it on using the start() method (which is how you turn these classes on and off in HYPE). Think of the start() method as nothing more than a light switch.

Step 11: Octaves. Add the following code under the "applyLayout" method in the "for" loop:

var ranNum:Number = int(Math.random() * 7);
var alphaTracker:FunctionTracker = new FunctionTracker(clip, "alpha", soundAnalyzer.getOctave, [ranNum, 0.01, 1]);
var scaleTracker:FunctionTracker = new FunctionTracker(clip, "scale", soundAnalyzer.getOctave, [ranNum, 0.5, 4]);
alphaTracker.start();
scaleTracker.start();

The key to the visualization is the first three lines of the code block.
The SoundAnalyzer class uses the audio track's octaves; the values for octaves range from 0 to 7. The first line, therefore, creates a random number based on the maximum octave value allowed. Keep this in mind when playing with this value. Numbers greater than 7 will be rounded down to 7. The next two lines use the FunctionTracker class to play with the triangles in the grid. You target the object, tell FunctionTracker which property of the object you want to play with, which function is to be run (getOctave) and which values to use. In this case we're going to play with the random octave values (ranNum) and make sure the alpha values range from 1% to 100% alpha based on the "size" of the octave in the audio track. Small numbers mean low alpha, big numbers mean full alpha. Also note that these values must be passed as an Array and that the properties being changed are String values. The final two lines switch on the functions.

Step 12: Sound. Add the following ActionScript to the end of the code block:

var sound:Sound = new Sound();
sound.load(new URLRequest("YourAudioTrackGoesHere.mp3"));
sound.play();

Step 13: Test. Save and test the movie.

What if those pulsating triangles were put in motion? As you have discovered, this stuff is not hard and, in fact, by simply playing with numbers, you can have a huge amount of fun as you "tweak up" how those triangles pulsate and fade. Now that we have that working, let's play our final "What if ..." game and put them in motion. Here's how:

Step 14: One More Class. Click once at the end of the class list and add one more class:

import hype.extended.behavior.Oscillator;

This class is an absolute blast to play with because it puts an object on an oscillating wave. Here's the best part: You don't need a trigonometry background to do it. In fact, there is no math involved.
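To get a feel for what "an object on an oscillating wave" means numerically, here is a rough, standalone illustration in plain Python. It is not HYPE's implementation, just the basic idea: a sine wave pinned between a minimum and a maximum value.

```python
import math

# Illustration only (not HYPE code): sweep a value between min_value
# and max_value as time t advances, completing one cycle per `frequency`.
def oscillate(t, frequency, min_value, max_value):
    wave = (math.sin(2 * math.pi * t / frequency) + 1) / 2  # normalize to 0..1
    return min_value + wave * (max_value - min_value)

print(oscillate(0, 20, 5, 50))   # 27.5 (midpoint of the range)
print(oscillate(5, 20, 5, 50))   # 50.0 (crest of the wave)
```

Raising the frequency parameter stretches the cycle out, and the min/max pair bounds how far the animated property can swing, which is exactly the role those parameters play in the next step.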
Step 15: Define Boundaries. Add the following ActionScript below the import statements:

var myWidth = stage.stageWidth;
var myHeight = stage.stageHeight;
var freq:int = 20;

All this code does is confine the resulting animation to the boundaries of the stage and set a value for the wave frequency. It is time to play with the grid.

Step 16: Oscillator Object. Add the following code after the "scaleTracker" variable in the "for" loop:

var ypositionOsc:Oscillator = new Oscillator(clip, "y", Oscillator.sineWave, freq, clip.y, myHeight/3, i/(freq/2));
var scaleOsc:Oscillator = new Oscillator(clip, "scaleY", Oscillator.sineWave, freq, 5, 50, i/(freq/2));
var rotateOsc:Oscillator = new Oscillator(clip, "rotation", Oscillator.sineWave, freq, 0, 90, i/(freq/2));
ypositionOsc.start();
scaleOsc.start();
rotateOsc.start();

Again, the Oscillator object, like the FunctionTracker object, doesn't require a degree in particle physics. The parameters are really simple:
- Which object is going to oscillate?
- Which property (a string) of the object is going to be affected?
- Which wave is to be applied? Your choices are sineWave, sawWave, squareWave and triangleWave.
- What is the wave frequency?
- What is the minimum wave value?
- What is the maximum wave value?
- What wave value do we use to start?

In this case we are applying a sineWave to three properties (y position, scaleY and rotation) of the triangle and then using the remaining three parameters to set the look of the wave's motion. The final three lines switch the Oscillators on. The values I used simply popped out of "I wonder what the animation would look like if I used these numbers?" Nothing more.

Step 17: Test. Save and test the animation.

Conclusion: This exercise was designed to introduce you to the HYPE framework and give you a chance to kick the tires. I showed you how to install it and then used three "What if ...
" scenarios that took a simple triangle and heaved it onto a pulsating and waving grid that was driven by an audio track. In regular ActionScript coding those tasks, to many, would be a reason to "Flee. Screaming. Into the night". Instead, you discovered that HYPE is aimed at dialing down the developer side of the Flash equation while bringing the fun back to the designer side. Having completed this exercise it might not be a bad idea to revisit the code with a different point of view. What would that be? In many respects, using HYPE to work out ideas very much follows the creative process. It doesn’t get you bogged down in code but instead, by playing with numbers and values, you get to do what you do best: play ‘What If ...’ games. Envato Tuts+ tutorials are translated into other languages by our community members—you can be involved too!Translate this post
https://code.tutsplus.com/tutorials/introduction-to-the-hype-actionscript-30-framework--active-2459
MLGetByteString()

This feature is not supported on the Wolfram Cloud.

int MLGetByteString(MLINK link, const unsigned char **s, int *n, long spec)

gets a string of characters from the MathLink connection specified by link, storing the codes for the characters in s and the number of characters in n. The code spec is used for any character whose Wolfram Language character code is larger than 255.

Details

- MLGetByteString() allocates memory for the array of character codes. You must call MLReleaseByteString() to disown this memory. If MLGetByteString() fails and the function's return value indicates an error, do not call MLReleaseByteString() on the value contained in s.
- MLGetByteString() is convenient in situations where no special characters occur.
- The character codes used by MLGetByteString() are exactly the ones returned by ToCharacterCode in the Wolfram Language.
- The array of character codes in MLGetByteString() is not terminated by a null character.
- Characters such as newlines are specified by their raw character codes, not by ASCII forms such as \n.
- MLGetByteString() returns immutable data.
- MLGetByteString() returns 0 in the event of an error, and a nonzero value if the function succeeds.
- Use MLError() to retrieve the error code if MLGetByteString() fails.
- MLGetByteString() is declared in the MathLink header file mathlink.h.

Examples

Basic Examples (1)

#include "mathlink.h"

/* read a string encoded with codes from ToCharacterCode[] from a link */
void f(MLINK lp)
{
    const unsigned char *string;
    int length;

    if(! MLGetByteString(lp, &string, &length, 0))
    {
        /* unable to read the byte string from lp */
        return;
    }

    /* ... */

    MLReleaseByteString(lp, string, length);
}
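As a toy model of the documented rule (plain Python for illustration, not MathLink code): code points that fit in a byte pass through unchanged, and anything above 255 is replaced by the caller-supplied spec value, matching what ToCharacterCode would report for the original characters.

```python
# Toy model of the documented byte-string rule; `byte_string` is a made-up
# name, not part of the MathLink API.
def byte_string(text, spec):
    return [ord(c) if ord(c) <= 255 else spec for c in text]

print(byte_string("Aé", 0))  # [65, 233]: both code points fit in a byte
print(byte_string("Aα", 0))  # [65, 0]: U+03B1 (945) exceeds 255, becomes spec
```

This is why the docs describe the function as convenient "where no special characters occur": anything outside the single-byte range collapses to the one substitute code.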
http://reference.wolfram.com/language/ref/c/MLGetByteString.html
There are a few elements of information you can get out of the system settings for WPC, allowing you to see how various aspects of the parental controls system are set up. It is easy enough to read these settings using WMI and some JavaScript so we can see/use these settings in other places. This will be a superset of the DumpRestrictions.js I showed last time. As well as the HTTP exemptions and the URL exemptions, you can get the current games rating system, the number of days to wait until a balloon is shown to the user, and the ID and name of the current web filter (seeing if a custom web filter is being used). The games rating system is a GUID that is associated with the specific rating system. Finding the details of this rating system is not possible using the public APIs. The details on the rating systems can be found by looking at the regkey HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Parental Controls\Ratings Systems\Games, which turns the GUIDs into the name of the specific rating system. For example, the GUID {768BD93D-63BE-46A9-8994-0B53C4B5248F} points to esrb.rs. The esrb.rs file is in the %WINDIR%\system32 directory and is a resource-only file; you can see the details of it by loading it up into Visual Studio, or any other program that allows you to look at resource files.

The Filter ID is an extension ID; the extensions are all registered in the system and can be found by enumerating the WpcExtension class in WMI. The log reminder interval is the time (in days) between showings of the reminder balloon for looking at the logs. The last log view time is set when you look at the logs inside the activity viewer.

// and the parental controls namespace.
strComputer = ".";
strRootNamespace = "\\ROOT\\CIMV2";
strNamespace = strRootNamespace + "\\Applications\\WindowsParentalControls";

// Connect to WMI and get the system settings object.
strConnectStr = "winmgmts:\\\\" + strComputer + strNamespace;
sysSettings = GetObject(strConnectStr + ":WpcSystemSettings=@");

// The game rating system
rating = sysSettings.CurrentGamesRatingSystem;
WScript.Echo("Current Game Rating System: " + rating);

// Log reminder interval.
interval = sysSettings.LogViewReminderInterval;
WScript.Echo("Log View Reminder Interval: " + interval);

// Last log view
lastview = sysSettings.LastLogView;
WScript.Echo("Last log view : " + lastview);

// Web filter
id = sysSettings.FilterID;
name = sysSettings.FilterName;
WScript.Echo("Web Filter : " + name + " (" + id + ")");

I am having problems with the parental controls on my laptop. I am the administrator of this PC and I am over the age of 18, but I cannot disable the parental controls on the admin account?

Vista Service Pack 1 here. I have a problem with my esrb.rs file and parental controls in general, and I am thinking the two may be related. As the admin, I am unable to access parental controls, a common problem if you search for it on various Vista forums. After running SFC (System File Checker), SFC was unable to repair a corrupt file, esrb.rs, which as you indicate is part of the parental controls game ratings system. How can I find a non-corrupt copy of esrb.rs to replace my corrupt copy? I am hoping such a replacement fixes the parental control issue I have.
https://blogs.technet.microsoft.com/david_bennett/2006/09/08/showing-the-windows-parental-controls-system-settings/
void swap ( set<Key,Compare,Allocator>& st );

Swap content

Exchanges the content of the container with the content of st, which is another set object containing elements of the same type. Sizes may differ. After the call to this member function, the elements in this container are those which were in st before the call, and the elements of st are those which were in this. All iterators, references and pointers remain valid for the swapped objects. Notice that a global algorithm function exists with this same name, swap, and the same behavior.

// swap sets
#include <iostream>
#include <set>
using namespace std;

int main ()
{
  int myints[] = {12,75,10,32,20,25};
  set<int> first (myints, myints+3);     // 10,12,75
  set<int> second (myints+3, myints+6);  // 20,25,32
  set<int>::iterator it;

  first.swap(second);

  cout << "first contains:";
  for (it=first.begin(); it!=first.end(); it++) cout << " " << *it;

  cout << "\nsecond contains:";
  for (it=second.begin(); it!=second.end(); it++) cout << " " << *it;

  cout << endl;
  return 0;
}

Output:

first contains: 20 25 32
second contains: 10 12 75
http://www.cplusplus.com/reference/stl/set/swap/
I am pretty new to Java. I tried to look at a few examples and they made no sense to me. Here is what I am trying to do:

- Create a method that just handles the calculations for the application.
- Create another method that allows returning all the values from the method used to do the calculations.
- Have the main method only hold the initialized variables and print out the results.

Here is what I tried (pure conjecture, and I cannot figure out how to accomplish what I want):

Code:

public class WorkingWithMults {

    public static void main(String[] args) {
        int a = 1, b = 2;
        System.out.print(obj);
    }

    public static void Calculations(int a, int b) {
        a = a + 5;
        b = b + 5;
    }

    public Calculations returnValue() {
        Calculations obj = new Calculations();
        return obj;
    }
}

Thanks in advance!
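For what it's worth, the shape being described here (do the math in one place, hand back a single object that carries every result, and let the entry point only print) can be sketched as follows. Python is used purely for brevity; all names are illustrative, and the same structure carries over to Java with a small result class returned from a static method.

```python
# Illustrative sketch only; every name here is made up.
class Calculations:
    """Carries every value produced by the calculation step."""
    def __init__(self, a, b):
        self.a = a + 5
        self.b = b + 5

def calculate(a, b):
    # do all the math in one place, return one object holding the results
    return Calculations(a, b)

# the entry point only initializes inputs and prints the results
result = calculate(1, 2)
print(result.a, result.b)  # 6 7
```

The key idea is that a method can return only one value, so "returning more than one value" means bundling them into a single object (or record) and returning that.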
http://www.javaprogrammingforums.com/%20object-oriented-programming/11120-trying-return-more-than-one-value-object-printingthethread.html
Adding a Controller (4:29) with James Churchill

Now that we have our project, let's add the first controller for our Comic Book Gallery website.

Controller Scaffolding

In this video, we added a controller to our project by adding a C# class using the "Add > Class..." menu item. Using this method allowed us to understand what makes a C# class an ASP.NET MVC controller. While this method works, it's not the typical way that developers add controllers to their ASP.NET MVC projects. Visual Studio provides a feature called "scaffolding" that can be used to quickly and easily add items to a project. You can use scaffolding to add a controller to an ASP.NET MVC project by right-clicking on the "Controllers" folder and selecting the "Add > Controller…" menu item. That'll open up the "Add Scaffold" dialog, which will present you with a list of controller templates to choose from. For more information on how to add a controller to an ASP.NET MVC project using Visual Studio scaffolding, see.

Additional Learning

For more information on how to perform quick actions with light bulbs in Visual Studio, check out this MSDN page. For more information about classes, access modifiers, and inheritance, check out these pages on MSDN.

Keyboard Shortcuts

CTRL+SHIFT+B: Build solution

- 0:00 [MUSIC]
- 0:04 Welcome back.
- 0:06 We just created our new project, but
- 0:08 when we tried running our website, we encountered an error.
- 0:11 We were able to determine that the error occurred
- 0:14 because our website didn't have any controllers or views.
- 0:18 We're building a comic book gallery website.
- 0:21 One of the main features will be the ability to view the details for
- 0:25 a specific comic book.
- 0:27 Let's call this our comic book detail page.
- 0:30 That seems like a great place to start.
- 0:33 So let's create a controller to handle the request for that page.
- 0:38 Looking at our project here in the solution explorer panel,
- 0:41 we can see that it contains a folder named Controllers.
- 0:45 While it's not absolutely necessary,
- 0:47 it's a common convention to put all of your website's controllers in this folder.
- 0:51 And remember, following conventions often makes it easier for
- 0:55 other developers to work on your code, which is super helpful.
- 0:58 Right-click on the Controllers folder and select the Add > Class menu item.
- 1:05 Let's name our class ComicBooksController.
- 1:09 Adding the suffix Controller is more than a convention, it's a requirement.
- 1:13 Without it MVC wouldn't be able to distinguish our website's controllers
- 1:18 from other classes in our project.
- 1:20 Click the Add button to finish adding the class.
- 1:24 Okay, here's our class.
- 1:27 Let's start with talking about the public access modifier.
- 1:30 Visual Studio will include the public access modifier by default
- 1:34 when adding new classes to a project.
- 1:37 This makes our class accessible to code outside of our project.
- 1:41 You might be wondering if our class needs to be public.
- 1:44 For now, let's leave it as it is.
- 1:46 We will try an experiment in the next video to answer that question.
- 1:51 We need to make a couple of changes to our controller before it is usable.
- 1:55 To start with,
- 1:56 we need to update our class to inherit from the MVC Controller base class.
- 2:00 To do that,
- 2:01 type a colon at the end of the class name followed by the name of the base class.
- 2:07 Before we go any further, let's build our project.
- 2:10 There are multiple ways to do this, of course.
- 2:13 You can click on the Build > Build Solution menu item, or
- 2:18 you can right-click on the solution in the solution explorer panel and
- 2:22 select the Build Solution menu item.
- 2:24 Notice that to the right of this menu item,
- 2:26 you can see the keyboard shortcut for this command, Ctrl+Shift+B.
- 2:31 That's my favorite option as it keeps your hands on the keyboard.
- 2:34 Pressing Ctrl+Shift+B kicks off the build process.
- 2:38 Visual Studio opens the output window, so that we can monitor
- 2:41 the progress of the build and see the results upon its completion.
- 2:46 Looks like we got an error.
- 2:47 The type or namespace name Controller could not be found.
- 2:51 Are you missing a using directive or an assembly reference?
- 2:55 Luckily this error is telling us exactly what the problem is.
- 2:59 We need to add a using directive.
- 3:01 We can also see that Visual Studio is underlining the Controller base class name
- 3:06 in an angry red squiggle.
- 3:08 If we hover our mouse pointer over the offending code,
- 3:11 Visual Studio will display a pop-up containing information about the error.
- 3:15 Notice to the left of the pop-up, there's a light bulb icon.
- 3:18 If we click on it,
- 3:20 a list of quick actions are displayed that we can take to resolve this error.
- 3:24 The first item in the list is using System.Web.Mvc.
- 3:28 And to the right, we can see a small visual preview of the change that
- 3:33 will be made if we select that action.
- 3:36 How cool is that?
- 3:37 Go ahead and click on the first item.
- 3:40 The using directive has been added,
- 3:42 the red squiggle has gone away, and our project is building again.
- 3:47 Adding using directives is a common activity, and
- 3:50 this Visual Studio quick action makes the process quick and painless.
- 3:55 If you're using GitHub, let's finish up by committing our changes.
- 3:58 Switch back to the Team Explorer panel and
- 4:01 click the Home icon, if you're not already on the home panel.
- 4:05 Then under the project's section click on the changes button.
- 4:09 Enter a commit message of
- 4:10 "added comic books controller", and click the Commit All button.
- 4:16 If Visual Studio prompts you to save the ComicBookGallery.sln file,
- 4:21 go ahead and click the Yes button to confirm the action.
- 4:25 In the next video, we'll continue with building out our controller.
- 4:28 See you then.
https://teamtreehouse.com/library/adding-a-controller
What I need to do is "crush" the values in l1 by some percentage so they are closer together, such that perhaps if an array l1 were ...

l1 = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
l2 = [12.5, 25.0, 37.5, 50.0, 62.5, 45.0, 52.5, 60.0, 67.5, 75.0]

for i in l1:
    if i <= 50:
        i = (i * 1.25)
        l2.append(i)
        print(i)
    elif i >= 50:
        i = (i * 0.75)
        l2.append(i)
print(l2)

l1 = [4, 2, 3, 4, 3, 6, 4, 8.6, 10, 7, 12, 4, 14, 15, 26, 14, 15, 16, 10]

import statistics

a = [4, 3, 3, 4, 5, 1, 31, 321]
input_scope = 1.1

def scouter(input_list, scope):
    mean = statistics.mean(input_list)
    searchpositions = []
    for x, i in enumerate(input_list):
        print(x, i)
        if i == max(input_list) or i == min(input_list):
            searchpositions.append(x)
    for i in searchpositions:
        input_list[i] = [(input_list[i] - mean) / scope + mean]
    return (input_list)

print(scouter((a), input_scope))

[4, 3, 3, 4, 5, [5.13636363636364], 31, [296.0454545454545]]

Just scale towards the median?

>>> l1 = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
>>> import statistics
>>> median = statistics.median(l1)
>>> [(x - median) / 10 + median for x in l1]
[50.5, 51.5, 52.5, 53.5, 54.5, 55.5, 56.5, 57.5, 58.5, 59.5]
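The answer's median-scaling one-liner can be wrapped in a small reusable helper. A sketch (the name crush and the factor parameter are mine, not from the thread):

```python
import statistics

def crush(values, factor):
    # Pull every value toward the median of the list.
    # factor > 1 compresses the spread; factor == 1 leaves it unchanged.
    m = statistics.median(values)
    return [(x - m) / factor + m for x in values]

l1 = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
print(crush(l1, 10))  # [50.5, 51.5, 52.5, 53.5, 54.5, 55.5, 56.5, 57.5, 58.5, 59.5]
```

Unlike the if/elif approach in the question, this preserves the ordering of the values and never makes two points cross each other, because every point is moved by the same linear map.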
https://codedump.io/share/1T3fR9Bjf6V0/1/bring-all-values-in-an-array-closer-together
Hi, I've got a problem with converting a date string into a date object. I get the date string from the backend in dd:MM:yyyy hh:MM:ss format. Is there any easy way to create a date object out of it? When I use new Date() with this string from the backend, it creates the wrong object, with days and months switched. Any ideas?

Date object from string in dd:MM:yyyy format

Use Moment.js. When it comes to dates, the package is just so much better.

First, run the command:

npm install moment --save

Then, on the page where you want to use moment, import the library with:

import moment from 'moment'

From there, any date string you have (let's say something like dd:MM:yyyy), you can just convert to a JavaScript date object with:

let dateString = "22-04-2017"; //whatever date string u have
let dateObject = moment(dateString, "DD-MM-YYYY").toDate();

Could be a long step, but Moment.js is such a flexible tool for calculating date differences, for adding/subtracting days/hours/weeks, and converting between string, moment and date objects is so convenient that I stick with it.

reedrichards #3
or use which is lighter than momentjs

reedrichards #4
or if you could modify your backend, why not transmit the date not as a string but as a time (numeric) value:

new Date().getTime()

reedrichards #6
No worries, both libs have their own advantages
https://forum.ionicframework.com/t/date-object-from-string-in-dd-mm-yyyy-format/127326
Sudo Placement [1.7] | Greatest Digital Root

Given a number N, you need to find a divisor of N such that the Digital Root of that divisor is the greatest among all divisors of N. If more than one divisor gives the same greatest Digital Root, then output the maximum such divisor.

The Digital Root of a non-negative number can be obtained by repeatedly summing the digits of the number until we reach a single digit. Example: DigitalRoot(98) = 9+8 => 17 => 1+7 => 8 (single digit!).

The task is to print the greatest divisor having the greatest Digital Root, followed by a space and the Digital Root of that divisor.

Examples:

Input: N = 10
Output: 5 5
The divisors of 10 are: 1, 2, 5, 10. The Digital Roots of these divisors are as follows: 1=>1, 2=>2, 5=>5, 10=>1. The greatest Digital Root is 5, which is produced by divisor 5, so the answer is 5 5

Input: N = 18
Output: 18 9
The divisors of 18 are: 1, 2, 3, 6, 9, 18. The Digital Roots of these divisors are as follows: 1=>1, 2=>2, 3=>3, 6=>6, 9=>9, 18=>9. As we can see, 9 and 18 both have the greatest Digital Root of 9. So we select the maximum of those divisors, that is Max(9, 18) = 18. So the answer is 18 9

A naive approach is to iterate up to N, find all the factors and their digit sums, store the largest among them and print it. Time Complexity: O(N)

An efficient approach is to loop only up to sqrt(N); then the factors come in pairs i and n/i. Check for the largest digit sum among them; in case of equal digit sums, store the larger factor. Once the iteration is completed, print the result.

Below is the implementation of the above approach.

C#

// C# program to print the digital
// roots of a number
using System;

class GFG {

    // Function to return dig-sum
    static int summ(int n)
    {
        if (n == 0)
            return 0;
        return (n % 9 == 0) ? 9 : (n % 9);
    }

    // Function to print the Digital Roots
    static void printDigitalRoot(int n)
    {
        // store the largest digital roots
        int maxi = 1;
        int dig = 1;

        // Iterate till sqrt(n)
        for (int i = 1; i <= Math.Sqrt(n); i++) {

            // if i is a factor
            if (n % i == 0) {

                // get the digit sum of both
                // factors i and n/i
                int d1 = summ(n / i);
                int d2 = summ(i);

                // if digit sum is greater
                // than previous maximum
                if (d1 > maxi) {
                    dig = n / i;
                    maxi = d1;
                }

                // if digit sum is greater
                // than previous maximum
                if (d2 > maxi) {
                    dig = i;
                    maxi = d2;
                }

                // if digit sum is same as
                // previous maximum, then
                // check for larger divisor
                if (d1 == maxi) {
                    if (dig < (n / i)) {
                        dig = n / i;
                        maxi = d1;
                    }
                }

                // if digit sum is same as
                // previous maximum, then
                // check for larger divisor
                if (d2 == maxi) {
                    if (dig < i) {
                        dig = i;
                        maxi = d2;
                    }
                }
            }
        }

        // Print the digital roots
        Console.WriteLine(dig + " " + maxi);
    }

    // Driver Code
    public static void Main()
    {
        int n = 10;

        // Function call to print digital roots
        printDigitalRoot(n);
    }
}

// This code is contributed
// by Akanksha Rai

Output:

5 5

Time Complexity: O(sqrt(N))
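For comparison, the same O(sqrt(N)) divisor-pairing idea can be sketched in Python (my translation, not part of the original article). It relies on the standard congruence that the digital root of a positive n is 9 when n is divisible by 9, and n mod 9 otherwise:

```python
import math

def digital_root(n):
    # Digital root of a positive integer via the mod-9 congruence.
    return 9 if n % 9 == 0 else n % 9

def greatest_digital_root(n):
    best_div, best_root = 1, 1
    # Walk divisor pairs (i, n // i) up to sqrt(n).
    for i in range(1, math.isqrt(n) + 1):
        if n % i == 0:
            for d in (i, n // i):
                r = digital_root(d)
                # Prefer a larger digital root; break ties with the larger divisor.
                if r > best_root or (r == best_root and d > best_div):
                    best_div, best_root = d, r
    return best_div, best_root

print(greatest_digital_root(10))  # (5, 5)
print(greatest_digital_root(18))  # (18, 9)
```

Collapsing the four separate comparisons of the C# version into a single "larger root, then larger divisor" test gives the same answers with less branching.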
https://www.geeksforgeeks.org/sudo-placement1-7-greatest-digital-root/
Okay, this may be answered by me not understanding something, but I truly feel this is a PyCharm issue unless explained otherwise.

Say you have this setup:

Layout

Project
|--- A
|    |--- SecondProgram.py
|    |--- testfile.txt
|--- B
|    |--- MainProgram.py

Contents of testfile.txt

This is a test.

Contents of SecondProgram.py

import os

cwd = os.getcwd()
print(cwd)
filelocation = cwd + '/testfile.txt'
with open(filelocation, 'r') as file:
    print(file.read())

Contents of MainProgram.py

import os

cwd = os.getcwd()
print(cwd)
from Project.A import SecondProgram #(ignore the PEP-8 rule breaking for not having this on top for now)

First we run SecondProgram.py, and unsurprisingly, we get the text from the testfile printed. Now if I run MainProgram.py, a FileNotFoundError is raised, as the filelocation variable in the second program is "/.../Project/B" rather than "/.../Project/A", since that's where the file was imported. This makes sense; all is normal.

But, if I *move* MainProgram.py to the A folder/directory from the B folder/directory, and run MainProgram.py again, then even though all three files are now in the same directory (so MainProgram should be able to access the testfile, as the directory name from os.getcwd() is the same now), it still raises a FileNotFoundError:

FileNotFoundError: [Errno 2] No such file or directory: '/home/.../Program/B/testfile.txt'

Even os.getcwd() prints out "/home/.../Program/B" rather than "/home/.../Program/A". Why is this the case? The file now exists in the A folder, but os.getcwd() still locates it in the B folder?

This becomes even more interesting. If you create a file in B called example1.py with just the import line:

from Project.A import SecondProgram

And you run it, it of course complains, saying "FileNotFoundError: [Errno 2] No such file or directory: '/home/..../Project/B/testfile.txt'". If you move it to folder A, it still gives the same error, meaning the file location is not updated to the new folder.
But, if we create another file, called example2.py, located again in the B folder, with the exact same import line (so everything is identical to example1.py), and we don't run it at all, but rather immediately move it to the A folder, then run it for the first time in A (remember, it was created in B), it works fine, outputting the text file. If we then move it back to the B folder, and run it a second time, it again compiles, with os.getcwd() reading that it is in the A folder, and reading the textfile, even though it's in a very different directory.

Another level: if you open an active session, and try to run example1.py there (remember, this is the file that was run in B, moved to A (where it currently is), and would not read the testfile anymore, as os.getcwd() still thought it was stored in B, even though it was moved), it will say "ModuleNotFoundError: No module named 'Project'", and/or "ModuleNotFoundError: No module named 'A'". If both of those are removed, so that all the example1.py file contains is

import SecondProgram

and you then compile it in an active session again, then os.getcwd() finds the correct directory, the testfile is read, and the print cwd line gives "/home/.../Project/A" in the active session. Even though in PyCharm, the exact same FileNotFoundError is given, as it thinks it's in the B folder (print(cwd) returns "/home/.../Project/B").

Interestingly enough, when you edit the example1.py file to have just "import SecondProgram", or if you leave it as "from Project.A import SecondProgram", no issue with importing the module name is raised by PyCharm, but in an active session, it must be "import SecondProgram" if it is in the A folder.

I may not have explained this well, but with a bit of setup, you can play with this yourself and see how weird it is. From what I gather, when a Python file is compiled, its directory is recorded in some metadata.
If that file is moved to a new location, that metadata is not updated, and this causes very bug-prone issues when trying to work with any external files/modules. I found this by having 2 folders with two programs, just like this example. I decided to merge them by moving the MainProgram over to the "A" folder, but now the directory/metadata for MainProgram is not updated. Interestingly, if I ctrl+a and ctrl+v the whole file into a new file (say MainProgramNew.py) in the same new location (folder A), then run it, there are no issues, os.getcwd() works correctly, etc., as expected. But the existing file that was moved to folder "A" still can't read the file, as it's trying to parse the original location rather than the new one.

Clearly os.getcwd() is working as intended if, in an active session, it correctly finds the directory of the moved file. Why does it not work in PyCharm? How you move these files (dragging in PyCharm, Right-click -> Refactor -> Move, moving the file manually in a terminal) has no effect. Restarting PyCharm does not fix the issue.

There are several layers to this issue, and I hope I'm not going crazy with it. How do I move a file that imports modules that read/write textfiles, and have that moved file update its location?

Hello,

Thank you for such a detailed description of the behavior, it was easy to reproduce and define a cause. The thing is that for every file in a project a Run/Debug configuration is being created, and its Working Directory is specified as the current location of the file. In the situation you have faced, after changing the file location, the working directory remains the default.

I have created a bug on YouTrack, please feel free to vote for it in order to increase its priority and monitor it for a resolution:
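A common way to sidestep this whole class of problem is to stop depending on the process working directory and anchor file paths to the script's own location instead, so the run configuration's Working Directory no longer matters. A minimal sketch (the helper name resolve_relative is mine, not from the thread):

```python
import os

def resolve_relative(filename, anchor_file):
    """Return filename's absolute path anchored to anchor_file's directory,
    independent of the current working directory."""
    return os.path.join(os.path.dirname(os.path.abspath(anchor_file)), filename)

# In SecondProgram.py this would replace the os.getcwd() lookup:
#   filelocation = resolve_relative('testfile.txt', __file__)
#   with open(filelocation, 'r') as file:
#       print(file.read())
print(resolve_relative('testfile.txt', '/home/user/Project/A/SecondProgram.py'))
```

With this, SecondProgram.py always finds testfile.txt next to itself, no matter which directory (or which stale PyCharm configuration) launched the process.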
https://intellij-support.jetbrains.com/hc/en-us/community/posts/360006384839-Moving-a-file-does-not-change-the-location-PyCharm-thinks-it-exists-in
On Monday 18 October 2010 18:19:24 Christoph Hellwig wrote:
> Before we get into all these fringe drivers:
>
> - I've not seen any progress on ->get_sb BKL removal for a while

Not sure what you mean. Jan Blunck did the pushdown into get_sb last year, which is included into linux-next through my bkl/vfs tree. Subsequent patches remove it from most file systems along with the other BKL uses in them. If you like, I can post the series once more, but it has been posted a few times now.

> - locks.c is probably a higher priority, too.

As mentioned in the list, I expect the trivial final patch to be applied in 2.6.37-rc1 after Linus has pulled the trees that this depends on (bkl/vfs, nfs, nfsd, ceph), see below. This is currently not in -next because of the prerequisites.

	Arnd

---
diff --git a/fs/Kconfig b/fs/Kconfig
index c386a9f..25ce2dc 100644
--- a/fs/Kconfig
+++ b/fs/Kconfig
@@ -50,7 +50,6 @@ endif # BLOCK
 config FILE_LOCKING
 	bool "Enable POSIX file locking API" if EMBEDDED
 	default y
-	select BKL
 	help
 	  This option enables standard file locking support, required
 	  for filesystems like NFS and for the flock() system
diff --git a/fs/locks.c b/fs/locks.c
index 8b2b6ad..02b6e0e 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -142,6 +142,7 @@ int lease_break_time = 45;
 
 static LIST_HEAD(file_lock_list);
 static LIST_HEAD(blocked_list);
+static DEFINE_SPINLOCK(file_lock_lock);
 
 /*
  * Protects the two list heads above, plus the inode->i_flock list
@@ -149,13 +150,13 @@ static LIST_HEAD(blocked_list);
  */
 void lock_flocks(void)
 {
-	lock_kernel();
+	spin_lock(&file_lock_lock);
 }
 EXPORT_SYMBOL_GPL(lock_flocks);
 
 void unlock_flocks(void)
 {
-	unlock_kernel();
+	spin_unlock(&file_lock_lock);
 }
 EXPORT_SYMBOL_GPL(unlock_flocks);
http://lkml.org/lkml/2010/10/18/353
Red Hat Bugzilla – Bug 119737 SCHED_FIFO freezes system even if priority lower than shell
Last modified: 2005-10-31 17:00:50 EST

Description of problem:
A SCHED_FIFO program that is in a constant loop seems to freeze the system until it is done, or if it is an infinite loop the system has to be rebooted, even when running the shell at the highest priority.

Version-Release number of selected component (if applicable):
I am using glibc-2.3.2-27.9 on Red Hat 9, kernel version 2.4.20-18.9

How reproducible:
Always

Steps to Reproduce:
1. Get the schedutils package if you don't have it already, to get the chrt program for changing real-time priority.
2. gcc sched_test.c -o sched_test
3. Run the shell at the highest priority: chrt -f -p 99 pidofshell
4. chrt -f 10 sched_test time_to_sleep &

/* sched_test.c */
#include <sys/time.h>
#include <time.h>
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    int end_time;
    struct timeval now;

    if (argc < 2) {
        end_time = 1;
    } else {
        end_time = atoi(argv[1]);
    }
    gettimeofday(&now, NULL);
    end_time += now.tv_sec + 1;
    do {
        gettimeofday(&now, NULL);
    } while (now.tv_sec < end_time);
    printf("bye now.\n");
    return 0;
}

Actual results:
The system freezes until the sleep time is done in the sched_test program.

Expected results:
Be able to stop the program, as the shell is running at high priority.

Additional info:
No keyboard input is displayed, though it will be displayed once the program is done, showing that the keyboard interrupt handlers are working, but some kind of priority inversion seems to be in effect, as X is somehow involved. And based on the discussion I found at this site, I tried to raise the priority of keventd to 99, SCHED_FIFO, but still I get no change. Books and man pages on real-time scheduling just mention that running a shell at max priority will take care of lower-priority real-time threads that have run wild. I have also tried it with the Fedora test2 distribution based on kernel 2.6, with the same effect.
Also the kernel fix that was mentioned in the doesn't seem to:
https://bugzilla.redhat.com/show_bug.cgi?id=119737
During the following steps we'll create an HTML Box using ActionScript 3.0. Along the way we'll see:

- Creating external scripts (classes).
- Creating new events with "dispatchEvent".
- Using "TextEvent".
- Using htmlText tags.

You can create the files that we'll see below in a text editor like Notepad; I'll be using FlashDevelop as a development environment. Our files will be:

- "styles.css"
- "source.xml"
- "CSS.as"
- "XMLLoader.as"
- "Main.as"

and for those who want to compile in the Flash IDE:

- "htmlBox.fla"

Step 1 - Starting the HTML BOX

In FlashDevelop, start a new project. Choose "AS3 project" and name it "htmlBox". After creating the project, "Main.as" will be created automatically. You need to create the other files manually. Add a folder named "keremk" to the src folder. In this folder, we'll create "XMLLoader.as" and "CSS.as" by right-clicking "keremk" and going to Add > New Class... We'll also add our "source.xml" and "styles.css" files to the bin folder by right-clicking "bin" and going to Add > New XML File... and Add > New CSS File.

For the Flash IDE, create a folder named "htmlBox" in your explorer. Create "Main.as" by right-clicking and going to New > Flash ActionScript File, then create "htmlBox.fla" by right-clicking and going to New > Flash Document. Then create a folder named "keremk"; in this folder create "XMLLoader.as" and "CSS.as". You need to create "styles.css" and "source.xml" in the "htmlBox" folder (the same folder as the "Main.as" and "htmlBox.fla" files). You can create them by right-clicking and going to New > Text Document, then renaming the extensions.

You can use any editor to write "as", "xml" and "css" files. You can also write "as" files in the Flash IDE by double-clicking them.

Step 2 - Creating the CSS File

I'll use the font-family, font-size, text-align, font-weight, color and text-decoration properties in my CSS file. I'll also create an "hW" tag for headings, and "activeL", "passiveL", "page" and "para" classes for other texts.
hW : Heading styles
activeL : Active link styles
passiveL : Passive link styles
page : Page number styles
para : Paragraph styles

Here is the code. I won't explain it line by line because I think it's pretty understandable.

A:link {
    text-decoration: underline;
}
A:hover {
    text-decoration: none;
}
hW {
    font-family: "Courier New", Courier, monospace;
    font-size: 20px;
    text-align: center;
    font-weight: bold;
    color: #CCCCCC;
}
.activeL {
    font-family: "Comic Sans MS", cursive;
    font-size: 12px;
    text-align: center;
    font-weight: normal;
    color: #EEEEEE;
}
.passiveL {
    font-family: "Comic Sans MS", cursive;
    font-size: 12px;
    text-align: center;
    font-weight: normal;
    color: #666666;
}
.para {
    font-family: Verdana, Arial, Helvetica, sans-serif;
    font-size: 12px;
    text-align: justify;
    font-weight: normal;
    color: #CCCCCC;
}
.page {
    font-family: Verdana, Arial, Helvetica, sans-serif;
    color: #CCCCCC;
    font-size: 12px;
    text-align: right;
    font-weight: normal;
}

Step 3 - Creating the XML file

When creating the XML file, we'll use the Flash htmlText tags shown below. For more information, you can visit Adobe for TextField.htmlText.

Anchor tag (link tag): <a>
Bold tag: <b>
Break tag: <br>
Image tag: <img>
Italic tag: <i>
List item tag: <li>
Paragraph tag: <p>
Span tag: <span>
Underline tag: <u>

We'll start creating our "source.xml" by defining the first child as <data></data>. Between the <page></page> tags, we'll write our html code.

<?xml version="1.0" encoding="utf-8" ?>
<data>
    <page>
        <? Entry point for page 1 ?>
    </page>
    <page>
        <? Entry point for page 2 ?>
    </page>
</data>

Step 4 - Writing HtmlBox Pages in the XML File

We'll start with a break "<br/>" to improve presentation. Note that we have to close every tag that we use in XML, otherwise the XML file cannot be parsed. "<br/>" is a closed tag. After the "break" tag, we'll write a heading within the "hW" tag and start the paragraph in "<span class='para'> </span>". For lists we'll use a "<li></li>" tag.
<page><br/>
    <hW>HEADING</hW>
    <br/>
    <span class='para'>
        <? Paragraph text ?>
    </span>
    <li>
        <? List item ?>
        <? List item ?>
    </li>
</page>

Step 5 - Adding "Next" and "Previous" Links to Pages

To add next and previous links, we'll use "event:next" and "event:prev" as the "href". These will be captured by Flash Player as events. When the link is clicked, "event:next" dispatches a "link" event with a "next" text in Flash.

<page>
    .....
    <span class='passiveL'>
        <a href="">&lt; PREVIOUS |</a>
    </span>
    <span class='activeL'>
        <a href="event:next">| NEXT &gt;</a>
    </span>
</page>

In this page (the first page) there won't be a previous page, so the previous link should be passive and its "href" has to be empty. By the way, to see "<", "&" etc. symbols in the HtmlBox we should use their codes, shown below.

&lt; : < (less than)
&gt; : > (greater than)
&amp; : & (ampersand)
&quot; : " (double quotes)
&apos; : ' (apostrophe, single quote)

Step 6 - Adding Page Numbers to Pages

When adding page numbers, we just need to use the "page" class for a "span". The pattern of the page numbers is up to you. I wrote them like so: "(page 1/3)".

<page>
    .....
    <span class='page'>(page 1/3)</span>
</page>

And here is my XML file with one page.

<?xml version="1.0" encoding="utf-8" ?>
<data>
    <page>
        <br/><hW>AS3 HTML BOX with XML and CSS support</hW><br/>
        <span class='para'>Hi everybody.<br/><br/>This HTML Box has been created with only AS3. And all codes have been written in external "as" files.<br/><br/>With the tutorial below, you'll learn:<br/>
        <li>How to create external classes.
            <br/>How to load, parse and use XML and CSS files in a htmlText.
            <br/>How to create new events with "dispatchEvent" and use those events.
            <br/>How to use "TextEvent" in htmlText.
            <br/>How to use htmlText tags.
        </li>
        </span>
        <br/><br/><br/><br/><br/><br/>
        <span class='passiveL'><a href="">&lt; PREVIOUS |</a></span>
        <span class='activeL'><a href="event:next">| NEXT &gt;</a></span>
        <br/><br/><span class='page'>(page 1/3)</span>
    </page>

By the way, you can add images to your pages as shown below:

<? with link ?>
<a href="your_link"><img src="your_image_source"/></a>

<? without link ?>
<img src="your_image_source"/>

Step 7 - ActionScript Files (External Classes)

We've created the "keremk" folder, and we'll use this folder for our "XMLLoader" and "CSS" classes. We therefore have to start our classes with:

package keremk {
}

Step 8 - Creating the CSS Class

We'll start our CSS class with "package keremk {}". Its class name will be the same as the file name, "CSS". Note: ActionScript is case sensitive. Since we'll dispatch events with this class, it will extend "EventDispatcher".

package keremk {//CSS is in keremk folder
    public class CSS extends EventDispatcher {//CSS will dispatch events
        public function CSS():void {
            loader = new URLLoader;//when a CSS is created, a new loader will be defined
        }
    }
}

Step 9 - CSS: Importing Flash Classes

import flash.net.URLLoader;//We'll load the css file with a URLLoader
import flash.net.URLRequest;//and there should be a request to load.
import flash.text.StyleSheet;//We'll parse the css file as a StyleSheet.
import flash.events.SecurityErrorEvent;//We'll dispatch events, so we need to import the related classes too.
import flash.events.IOErrorEvent;
import flash.events.Event;
import flash.events.EventDispatcher;

You can also import those classes within 3 lines by using "*" to import all "events" and "net" classes, but I prefer to import them one by one. We don't need all the "events" and "net" classes. If you prefer to write less code, here is the abbreviated equivalent.

import flash.events.*;
import flash.net.*;
import flash.text.StyleSheet;

Step 10 - CSS: Variables

We'll need only two variables in this class, a URLLoader and a StyleSheet.
private var loader:URLLoader;
public var sheet:StyleSheet;

By the way, private variables are not reachable from outside their classes. I'll use "loader" only in the CSS class, so I can create it as private. I'll use "sheet" from the main class, so I need to create it as "public" (reachable).

Step 11 - CSS: Load Function

We'll use this load function from our main class, so we need to create it as public. It will require a string to load; that will be "_req:String". The body below attaches the three listeners described in the next step and then starts the request (reconstructed here from that description, as the listing was cut off in the original):

public function load(_req:String):void {//function will load the file whose path is "_req"
    loader.addEventListener(IOErrorEvent.IO_ERROR, ioError);
    loader.addEventListener(SecurityErrorEvent.SECURITY_ERROR, secError);
    loader.addEventListener(Event.COMPLETE, loaded);
    loader.load(new URLRequest(_req));
}

Step 12 - CSS: Event Handlers and Dispatchers

In Step 11, we added 3 event listeners to the loader: Security Error, IO Error and Complete. One of them will be dispatched eventually. When that happens, we need to transfer it to the main class by listening and dispatching. We should also check if there is any problem when parsing the CSS file after the "Complete" event. We'll check it by using "try catch".

private function ioError(e:IOErrorEvent):void {//When an IO error occurs,
    dispatchEvent(new Event("CSS_IOError"));// this line dispatches the "CSS_IOError".
}
private function secError(e:SecurityErrorEvent):void {//When there is a security problem,
    dispatchEvent(new Event("CSS_SecurityError"));//this line dispatches the "CSS_SecurityError".
}
private function loaded(e:Event):void {//If loading the file is done,
    try { //try to parse it.
        sheet = new StyleSheet();
        sheet.parseCSS(loader.data);
        dispatchEvent(new Event("CSS_Loaded"));//If parsing is OK, this line dispatches "CSS_Loaded".
    } catch (e:Error) {
        dispatchEvent(new Event("CSS_ParseError"));//If parsing is NOT OK, this line dispatches "CSS_ParseError".
    }
}

With the event handlers and dispatchers, our CSS class is done.
Here is the full CSS.as file:

package keremk {
    import flash.net.URLLoader;
    import flash.net.URLRequest;
    import flash.text.StyleSheet;
    import flash.events.SecurityErrorEvent;
    import flash.events.IOErrorEvent;
    import flash.events.Event;
    import flash.events.EventDispatcher;

    public class CSS extends EventDispatcher {
        private var loader:URLLoader;
        public var sheet:StyleSheet;

        public function CSS():void {
            loader = new URLLoader;
        }

        public function load(_req:String):void {
            loader.addEventListener(IOErrorEvent.IO_ERROR, ioError);
            loader.addEventListener(SecurityErrorEvent.SECURITY_ERROR, secError);
            loader.addEventListener(Event.COMPLETE, loaded);
            loader.load(new URLRequest(_req));
        }

        private function ioError(e:IOErrorEvent):void {
            dispatchEvent(new Event("CSS_IOError"));
        }

        private function secError(e:SecurityErrorEvent):void {
            dispatchEvent(new Event("CSS_SecurityError"));
        }

        private function loaded(e:Event):void {
            try {
                sheet = new StyleSheet();
                sheet.parseCSS(loader.data);
                dispatchEvent(new Event("CSS_Loaded"));
            } catch (e:Error) {
                dispatchEvent(new Event("CSS_ParseError"));
            }
        }
    }
}

Step 13 - Creating the XMLLoader

We'll start our XMLLoader class with "package keremk {}" and it will extend "EventDispatcher", too.

package keremk { // XMLLoader is in keremk folder.
    public class XMLLoader extends EventDispatcher {
        public function XMLLoader() {
            loader = new URLLoader;//when a XMLLoader is created, a new loader will be defined.
        }
    }
}

Step 14 - XMLLoader: Importing Flash Classes

We'll need the same classes as we did for our CSS, without the "StyleSheet" class. They're as follows:

import flash.events.SecurityErrorEvent;//Event classes to listen and dispatch.
import flash.events.IOErrorEvent;
import flash.events.Event;
import flash.events.EventDispatcher;
import flash.net.URLLoader;//net classes to load xml files.
import flash.net.URLRequest;

Step 15 - XMLLoader: Variables

We'll now need 5 variables:

private var loader:URLLoader;// to load XML file
private var data:XML;// to hold XML file data to parse it.
private var i:uint;//counter to use in parsing.
private var lenXML:uint;//to check how many pages there are in XML.
public var pages:Array = [];// to hold pages after parsing the XML.

Step 16 - XMLLoader: Load Function

The "load" function will be the same as the "CSS.load".
We'll use it from the main class, and it should be public too. As with CSS.load, the body attaches the three listeners described in the next step and then starts the request (reconstructed here, as the listing was cut off in the original):

public function load(_req:String):void {//function will load the file whose path is "_req"
    loader.addEventListener(IOErrorEvent.IO_ERROR, ioError);
    loader.addEventListener(SecurityErrorEvent.SECURITY_ERROR, secError);
    loader.addEventListener(Event.COMPLETE, loaded);
    loader.load(new URLRequest(_req));
}

Step 17 - XMLLoader: Event Handlers and Dispatchers

We've added 3 event listeners to the loader: Security Error, IO Error and Complete. One of them will be dispatched eventually. When that happens, we need to transfer it to the main class by listening and dispatching. We should also check if there is any problem when parsing the XML file after the "Complete" event. There can be two different events to dispatch: "XML_Loaded" or "XML_ParseError". We'll check it by using "try catch".

private function ioError(e:IOErrorEvent):void {//When an IO error occurs,
    dispatchEvent(new Event("XML_IOError"));// this line dispatches the "XML_IOError".
}
private function secError(e:SecurityErrorEvent):void {//When there is a security problem,
    dispatchEvent(new Event("XML_SecurityError"));//this line dispatches the "XML_SecurityError".
}
private function loaded(e:Event):void {//If loading the file is done,
    try { //try to parse it.
        data = new XML(loader.data);//takes XML data to "data"
        lenXML = data.children().length();//checks the number of the pages
        for (i = 0; i < lenXML; i++) {//parses XML data to array
            pages.push(data.children()[i]);
        }
        dispatchEvent(new Event("XML_Loaded"));//if parsing the XML is OK, dispatch "XML_Loaded".
    } catch (e:Error) {
        dispatchEvent(new Event("XML_ParseError"));//if something is wrong with XML data, this line dispatches "XML_ParseError".
    }
}

With the handlers and dispatchers, our XMLLoader class is done.
Here is the finished XMLLoader:

package keremk {
    import flash.events.SecurityErrorEvent;
    import flash.events.IOErrorEvent;
    import flash.events.Event;
    import flash.events.EventDispatcher;
    import flash.net.URLLoader;
    import flash.net.URLRequest;

    public class XMLLoader extends EventDispatcher {
        private var loader:URLLoader;
        private var data:XML;
        private var i:uint;
        private var lenXML:uint;
        public var pages:Array = [];

        public function XMLLoader() {
            loader = new URLLoader;
        }

        public function load(_req:String):void {
            loader.addEventListener(IOErrorEvent.IO_ERROR, ioError);
            loader.addEventListener(SecurityErrorEvent.SECURITY_ERROR, secError);
            loader.addEventListener(Event.COMPLETE, loaded);
            loader.load(new URLRequest(_req));
        }

        private function ioError(e:Event):void {
            dispatchEvent(new Event("XML_IOError"));
        }

        private function secError(e:Event):void {
            dispatchEvent(new Event("XML_SecurityError"));
        }

        private function loaded(e:Event):void {
            try {
                data = new XML(loader.data);
                lenXML = data.children().length();
                for (i = 0; i < lenXML; i++) {
                    pages.push(data.children()[i]);
                }
                dispatchEvent(new Event("XML_Loaded"));
            } catch (e:Error) {
                dispatchEvent(new Event("XML_ParseError"));
            }
        }
    }
}

Step 18 - Creating the Main Class

Since the Main class will be in our project's root folder, we'll begin writing it with "package {}". It will extend "Sprite", and we'll start our code in the "Main" function:

package {
    //entry point for imports.
    public class Main extends Sprite {
        //entry point for vars.
        public function Main():void {
            //entry point for codes.
        }
        //entry point for additional functions.
    }
}

Step 19 - Main: Importing Flash Classes

import flash.display.Sprite;// Main class will extend "Sprite". So, we'll need the "Sprite" class.
import flash.display.StageAlign;// We'll need "StageAlign" to align the stage.
import flash.display.StageScaleMode;// We'll need "StageScaleMode" to manage the scale mode of the stage.
import flash.events.Event;// We'll need the "Event" class to use the events that we have created in the "XMLLoader" and "CSS" classes.
import flash.events.TextEvent;// We'll need "TextEvent" to use page links in "htmlText".
import flash.text.TextField;// We'll create a "TextField" to show the html pages and add our css to it with "TextFormat".
import flash.text.TextFormat;
import keremk.CSS;// And in the "Main" class, we'll use the "CSS" and "XMLLoader" classes that we created earlier.
import keremk.XMLLoader;

Step 20 - Main: Variables

private var xml:XMLLoader;//this will hold our XML data
private var css:CSS;//this will hold our StyleSheet data
private var field:TextField;//we'll use this to show our html pages
private var cssBool:Boolean = false;//these two booleans will tell us if our CSS and XML files are loaded
private var xmlBool:Boolean = false;
private var stgW:Number = stage.stageWidth;//these two will check the width and height of the stage.
private var stgH:Number = stage.stageHeight;//this way we can change our HtmlBox's width and height from the html file.
private var pageNum:int = 0;//this will define the page that we show in the HtmlBox. (Since array indexes start from 0, pageNum is 0.)
private var boxBorder:Sprite;//this will be the border of our HtmlBox. We could enable the border of the TextField, but this way we can manage the margins.

Step 21 - Main: Main Function

The Main function will be executed automatically when we start the HtmlBox. We therefore need to write our starter code in this function.

public function Main():void {
    stage.align = StageAlign.TOP_LEFT;//These two lines are optional. I'd rather keep the stage aligned to top-left and non-scaled.
    stage.scaleMode = StageScaleMode.NO_SCALE;

    boxBorder = new Sprite();//This is the border of our HtmlBox. Basically, it's an unfilled rectangle, and we'll create a new Sprite to draw it.
    boxBorder.graphics.lineStyle(2, 0xC0C0C0, 1);//thickness = 2px, color = 0xC0C0C0 (gray), alpha = 1 (100%). You can change these values as you wish.
    boxBorder.graphics.drawRect(5, 5, stgW - 10, stgH - 10);//margin = 5. It's the distance of the border to the stage boundary.
    addChild(boxBorder);//after we create and draw our border, we need to add it to the stage.
    field = new TextField(); // We'll create a new TextField to show the HTML pages.
    addChild(field); // Since there are many properties to define, we'll add the field to the stage first.
    with (field) { // After we add the field to the stage, we can use "with" to define its properties.
        x = 10; // I've set x and y to 10 to leave a 5px space between the field and the border.
        y = 10;
        width = stgW - 20; // The width should be stgW - 20: to leave a 10px gap between the field and the stage, the field must be 20px (10px left + 10px right) narrower than the stage.
        height = stgH - 20; // The height is calculated the same way as the width.
        multiline = true; // The field must be multiline, because our HTML text spans multiple lines.
        selectable = false; // If you want to make your text selectable, change this to "true".
        wordWrap = true; // Without wordWrap, our paragraphs would be single lines.
        condenseWhite = true; // This is an important property that makes our text look better. Without it, there would be extra whitespace throughout the htmlText.
    }

    // After we create our border and text field, we can load our files.
    xml = new XMLLoader(); // We'll create a new XMLLoader
    xml.load("source.xml"); // and load our XML file.

    // We need to listen for events to know what to do next.
    xml.addEventListener("XML_Loaded", xmlDone); // If we capture "XML_Loaded", we'll continue creating HtmlBox.
    xml.addEventListener("XML_IOError", error); // I'll create a single function for all errors,
    xml.addEventListener("XML_SecurityError", error); // so all error events go to this "error" function.
    xml.addEventListener("XML_ParseError", error);

    css = new CSS(); // We'll create a new CSS
    css.load("styles.css"); // and load our CSS file.

    // The CSS events are much the same as the XML events.
    css.addEventListener("CSS_Loaded", cssDone); // If we capture "CSS_Loaded", we'll continue creating HtmlBox.
    css.addEventListener("CSS_IOError", error); // And all error events go to the "error" function, too.
    css.addEventListener("CSS_SecurityError", error);
    css.addEventListener("CSS_ParseError", error);
}

Step 22 - Main: "error" Function

Since all errors go to the "error" function, we need to sort them out with a "switch case". We'll check which error occurred and write the appropriate text to "field". In this step, I'll show only two errors; you'll see all the errors in the finished Main class in Step 25.

private function error(e:Event):void {
    switch (e.type) { // We'll check the type of the error that occurred.
        case "XML_IOError": // If the error is "XML_IOError", we'll write the appropriate text about "XML_IOError" to "field".
            field.htmlText = '<p align="center"><b><font color="#FF0000" size="12" face="Verdana, Arial, Helvetica, sans-serif"><br> XML IO ERROR<br>Please control your XML path!</font></b></p>';
            break; // After handling "XML_IOError", we break out of the switch.
        case "XML_SecurityError":
            field.htmlText = '<p align="center"><b><font color="#FF0000" size="12" face="Verdana, Arial, Helvetica, sans-serif"><br> XML SECURITY ERROR<br>Please control your policy files!</font></b></p>';
            break;
    }
}

Step 23 - Main: "Done" Functions

We'll create three "Done" functions: "xmlDone", "cssDone" and "allDone". "xmlDone" and "cssDone" are executed after our files are loaded successfully, and they inform "allDone". When both the CSS and XML files are loaded successfully, "allDone" adds the StyleSheet to "field" and writes the first page.

private function cssDone(e:Event):void {
    cssBool = true; // We'll set cssBool to true, because the CSS file loaded successfully.
    allDone(); // And execute allDone.
}

private function xmlDone(e:Event):void {
    xmlBool = true; // We'll set xmlBool to true, because the XML file loaded successfully.
    allDone(); // And execute allDone.
}

private function allDone():void {
    if (cssBool && xmlBool) { // If both the CSS and XML files loaded successfully,
        field.styleSheet = css.sheet; // we'll set our styles on "field".
        field.htmlText = xml.pages[pageNum]; // We'll write the first page to the field.
        addEventListener(TextEvent.LINK, textEvent); // And we'll add an event listener for the link events dispatched by the htmlText.
    }
}

Step 24 - Main: "textEvent" Function

In this function, we'll check for the "next" and "prev" event texts.

private function textEvent(e:TextEvent):void {
    if (e.text == "next") { // If the "next" link is clicked,
        ++pageNum; // we'll increase pageNum
        field.htmlText = xml.pages[pageNum]; // and write the new page to "field".
    }
    if (e.text == "prev") { // If the "prev" link is clicked,
        --pageNum; // we'll decrease pageNum
        field.htmlText = xml.pages[pageNum]; // and write the new page to "field".
    }
}

Step 25 - Main: Finished

Here is the finished Main class:

package {
    import flash.display.Sprite;
    import flash.display.StageAlign;
    import flash.display.StageScaleMode;
    import flash.events.Event;
    import flash.events.TextEvent;
    import flash.text.TextField;
    import flash.text.TextFormat;
    import keremk.CSS;
    import keremk.XMLLoader;

    public class Main extends Sprite {
        private var xml:XMLLoader;
        private var css:CSS;
        private var field:TextField;
        private var cssBool:Boolean = false;
        private var xmlBool:Boolean = false;
        private var stgW:Number = stage.stageWidth;
        private var stgH:Number = stage.stageHeight;
        private var pageNum:int = 0;
        private var boxBorder:Sprite;

        public function Main():void {
            stage.align = StageAlign.TOP_LEFT;
            stage.scaleMode = StageScaleMode.NO_SCALE;
            boxBorder = new Sprite();
            boxBorder.graphics.lineStyle(2, 0xC0C0C0, 1);
            boxBorder.graphics.drawRect(5, 5, stgW - 10, stgH - 10);
            addChild(boxBorder);
            field = new TextField();
            addChild(field);
            with (field) {
                x = 10;
                y = 10;
                width = stgW - 20;
                height = stgH - 20;
                multiline = true;
                selectable = false;
                wordWrap = true;
                condenseWhite = true;
            }
            xml = new XMLLoader();
            xml.load("source.xml");
            xml.addEventListener("XML_Loaded", xmlDone);
            xml.addEventListener("XML_IOError", error);
            xml.addEventListener("XML_SecurityError", error);
            xml.addEventListener("XML_ParseError", error);
            css = new CSS();
            css.load("styles.css");
            css.addEventListener("CSS_Loaded", cssDone);
            css.addEventListener("CSS_IOError", error);
            css.addEventListener("CSS_SecurityError", error);
            css.addEventListener("CSS_ParseError", error);
        }

        private function error(e:Event):void {
            switch (e.type) {
                case "XML_IOError":
                    field.htmlText = '<p align="center"><b><font color="#FF0000" size="12" face="Verdana, Arial, Helvetica, sans-serif"><br> XML IO ERROR<br>Please control your XML path!</font></b></p>';
                    break;
                case "XML_SecurityError":
                    field.htmlText = '<p align="center"><b><font color="#FF0000" size="12" face="Verdana, Arial, Helvetica, sans-serif"><br> XML SECURITY ERROR<br>Please control your policy files!</font></b></p>';
                    break;
                case "XML_ParseError":
                    field.htmlText = '<p align="center"><b><font color="#FF0000" size="12" face="Verdana, Arial, Helvetica, sans-serif"><br> XML PARSE ERROR<br>Please debug your XML file!</font></b></p>';
                    break;
                case "CSS_IOError":
                    field.htmlText = '<p align="center"><b><font color="#FF0000" size="12" face="Verdana, Arial, Helvetica, sans-serif"><br> CSS IO ERROR<br>Please control your CSS path!</font></b></p>';
                    break;
                case "CSS_SecurityError":
                    field.htmlText = '<p align="center"><b><font color="#FF0000" size="12" face="Verdana, Arial, Helvetica, sans-serif"><br> CSS SECURITY ERROR<br>Please control your policy files!</font></b></p>';
                    break;
                case "CSS_ParseError":
                    field.htmlText = '<p align="center"><b><font color="#FF0000" size="12" face="Verdana, Arial, Helvetica, sans-serif"><br> CSS PARSE ERROR<br>Please debug your CSS file!</font></b></p>';
                    break;
            }
        }

        private function cssDone(e:Event):void {
            cssBool = true;
            allDone();
        }

        private function xmlDone(e:Event):void {
            xmlBool = true;
            allDone();
        }

        private function allDone():void {
            if (cssBool && xmlBool) {
                field.styleSheet = css.sheet;
                field.htmlText = xml.pages[pageNum];
                addEventListener(TextEvent.LINK, textEvent);
            }
        }

        private function textEvent(e:TextEvent):void {
            if (e.text == "next") {
                ++pageNum;
                field.htmlText = xml.pages[pageNum];
            }
            if (e.text == "prev") {
                --pageNum;
                field.htmlText = xml.pages[pageNum];
            }
        }
    }
}

Step 26 - Compiling in FlashDevelop

We've finished writing our code; now it's time to compile it. If you created your project in FlashDevelop, you just need to hit "F5" to test it and "F8" to build the project. Before that, you might want to change your output settings. To do that, go to Project > Properties... In the properties panel, you can change:

- "Target" -> Flash Player version
- "Output file" -> Output file name and path (our output file path is "bin/")
- "Dimensions" -> Width and height of the output file
- "Background color" -> Background color of the output file (I've used black, "#000000")
- "Framerate" -> Framerate of the output file (since there are no frame animations in our project, I've kept the default 30fps.)
- "Test Movie" -> How to play the test movie when pressing "F5"

After the "Build Project" operation, you can use HtmlBox from the bin folder. If you're planning to move it to a different folder, you need to move the "htmlBox.swf", "source.xml" and "styles.css" files to the same folder. If you're planning to use "index.html", you're going to need the whole "bin" directory. By default, the HtmlBox dimensions will be 100% in "index.html". You can change this in the "swfobject.embedSWF();" call in "index.html".

Step 27 - Compiling in Flash IDE

If you are using Flash CS3 or CS4, open your "htmlBox.fla" file. In the properties window, write "Main" in the "Class" box. You can also change the "Frame rate", "Size" and "Background color" of HtmlBox in the properties window. After defining the "Document class", you can test it by pressing "Ctrl+Enter" and publish it by pressing "Ctrl+F12". If you want to change the publish settings (such as the Flash Player version), you can open them by pressing "Ctrl+Shift+F12" or by going to File > Publish Settings....
Again, if you're planning to move it to a different folder, you need to move the "htmlBox.swf", "source.xml" and "styles.css" files to the same folder. If you're planning to use "index.html", you are going to need the "AC_RunActiveContent.js" file in the same directory. By default, the HtmlBox dimensions will be the same as the swf file's in "index.html". You can change this in the html file, or you can use the "HTML" tab in "Publish Settings".

Conclusion

We're done! You can use this HTML box in your web templates, for text that you don't wish to be copied, or in any project that you can imagine. Thanks for reading this tutorial, I hope you liked it.
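As a footnote: the tutorial never shows source.xml itself. Based on the loader (each top-level child of the root becomes one page) and the textEvent handler (htmlText links using the "event:" URL scheme dispatch TextEvent.LINK), a hypothetical layout might look like this. The element names and page text are illustrative; only the structure and the event:next/event:prev hrefs are implied by the code:

```xml
<pages>
    <page><![CDATA[
        <p>Welcome to HtmlBox!</p>
        <p><a href="event:next">Next page</a></p>
    ]]></page>
    <page><![CDATA[
        <p>Second page.</p>
        <p><a href="event:prev">Previous page</a> | <a href="event:next">Next page</a></p>
    ]]></page>
</pages>
```

Wrapping each page's markup in CDATA keeps the HTML tags from being parsed as part of the XML document itself.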
On Tue, 13 Nov 2007, Fuzzyman wrote: > I don't have the API docs to hand - but what you want to do is to > effectively execute an import statement in your namespace. ah - there indeed is an import call in the API - i just always assumed that it works like import in the scripts, meaning that it can only import modules that have already been exposed to the scripts somehow. but i'll test that next. btw, from ironpython exposing classes seems to work just fine, so porting would indeed solve this: IronPython console: IronPython 2.0A6 (2.0.11102.00) on .NET 2.0.50727.312 Copyright (c) Microsoft Corporation. All rights reserved. >>> import clr >>> clr.AddReference("IronPython.dll") >>> clr.AddReference("Microsoft.Scripting.dll") >>> import IronPython >>> import Microsoft.Scripting >>> py = IronPython.Hosting.PythonEngine.CurrentEngine; >>> import System.Threading >>> from Microsoft.Scripting import Script >>> Script.SetVariable("Timer", System.Threading.Timer) >>> Script.Execute("python", "print Timer") <type 'Timer'> >>> def p(s): ... print s ... >>> p('hep') hep >>> Script.SetVariable('p', p) >>> Script.Execute("python", "p('hep')") hep >>> Script.Execute("python", "t = Timer(lambda x: p('tick')); t.Change(2000, 0)" ) >>> tick .. that last tick came from the worker thread 2s later, as i intended :) now off to test how the imports work. ~Toni
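The pattern Toni demonstrates above — injecting host objects into the scripting engine's variable table before executing script text — has a close analogue in plain CPython, sketched below. The names (`host_print`, `script_globals`) and the snippet itself are illustrative, not part of the IronPython hosting API:

```python
# Minimal sketch: expose host-side objects to embedded script code by
# seeding the globals dict passed to exec(), mirroring Script.SetVariable().
import threading

captured = []

def host_print(s):
    # Host-side callback handed to the embedded script.
    captured.append(s)

# Inject the host function and a class into the script's namespace.
script_globals = {"p": host_print, "Timer": threading.Timer}

# The embedded "script" can call the injected function and schedule work on
# the injected Timer class, much like the IronPython session above.
exec("p('hep')", script_globals)
exec("t = Timer(0.1, lambda: p('tick')); t.start(); t.join()", script_globals)

print(captured)  # -> ['hep', 'tick']
```

Note that `threading.Timer` takes its interval in seconds rather than the milliseconds used by `System.Threading.Timer`, and the `join()` here only exists to make the example deterministic.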
The above chapters introduced you to various ns-3 programming concepts such as smart pointers for reference-counted memory management, attributes, namespaces, callbacks, etc. Users who work at this low-level API can interconnect ns-3 objects with fine granularity. However, a simulation program written entirely using the low-level API would be quite long and tedious to code. For this reason, a separate so-called "helper API" has been overlaid on the core ns-3 API.

If you have read the ns-3 tutorial, you will already be familiar with the helper API, since it is the API that new users are typically introduced to first. In this chapter, we introduce the design philosophy of the helper API and contrast it to the low-level API. If you become a heavy user of ns-3, you will likely move back and forth between these APIs even in the same program.

The helper API has a few goals, but it is really all about making ns-3 programs easier to write and read, without taking away the power of the low-level interface. The rest of this chapter provides some examples of the programming conventions of the helper API.
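To make the contrast concrete, here is a sketch in the style of the canonical ns-3 tutorial examples: a handful of helper calls stand up a point-to-point link that would otherwise take many more lines of low-level object creation and aggregation. This fragment assumes an ns-3 build environment and is not compilable on its own; the attribute names follow standard PointToPointHelper usage.

```cpp
#include "ns3/core-module.h"
#include "ns3/network-module.h"
#include "ns3/point-to-point-module.h"

using namespace ns3;

int main (int argc, char *argv[])
{
  // Helper API: containers and helpers hide the low-level object plumbing.
  NodeContainer nodes;
  nodes.Create (2);

  PointToPointHelper pointToPoint;
  pointToPoint.SetDeviceAttribute ("DataRate", StringValue ("5Mbps"));
  pointToPoint.SetChannelAttribute ("Delay", StringValue ("2ms"));

  // One Install() call creates both devices and the channel, and wires
  // them together; with the low-level API each step is done by hand.
  NetDeviceContainer devices = pointToPoint.Install (nodes);

  Simulator::Run ();
  Simulator::Destroy ();
  return 0;
}
```

Nothing here prevents dropping down to the low-level API afterwards: the containers hand back `Ptr<Node>` and `Ptr<NetDevice>` objects that can be manipulated with the full fine-grained interface.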
Vinge and the Singularity

mindpixel writes: "Dr. Vinge is the Hugo award winning author of the 1992 novel "Fire Upon the Deep" and the 1981 novella "True Names." This New York Times piece (registration required) does a good job of profiling him and his ideas about the coming "technological singularity," where machines suddenly exceed human intelligence and the future becomes completely unpredictable." Nice story. And if you haven't read True Names, get a hold of a copy, plenty of used ones out there.

Deep Thought (Score:1)
DEEP THOUGHT: What is this great task for which I, Deep Thought, the second greatest computer in the Universe of Time and Space have been called into existence?
FOOK: Well, your task, O Computer is...
LUNKWILL: No, wait a minute, this isn't right. We distinctly designed this computer to be the greatest one ever and we're not making do with second best.
LUNKWILL: Deep Thought, are you not as we designed you to be, the greatest most powerful computer in all time?
Anyway, most of you know the rest. If not, time to listen to the radio series again: H2G2 [bbc.co.uk]

Re:The Singularity and Computational Efficiency (Score:1)
The key hurdle, in my mind, is a direct computer interface to the brain. Once we have that, our current clumsy programming tools become obsolete - and we will be able to see by direct comparison of AI code with our own minds what needs to be done. There is nothing like having the right tools.
--

brief review of A Fire Upon the Deep (Score:2)
Danny.

Golem XIV by Stanislaw Lem (Score:1)
Golem XIV [] However, when you think about it a little, the idea of a disembodied intelligence existing in a computer is silly. Think what happens to human consciousness when deprived of all sensory input.

Can an AI robot cross the street? (Score:2)
This is an excellent point. I'd like to see the AI guys build a robot that can cross Broadway at Times Square, against the light, without getting squashed.
Re:"General" Human Intelligence not Necessary (Score:1)

Slightly OT (Score:1)

Re:The Singularity and Computational Efficiency (Score:2)
Well, I don't see the computational efficiency of humans (or future AIs) as being a problem. It takes human-level intelligence to correlate interesting information together (design of proposed chemical plant, mapping of local water table). But it doesn't take human-level intelligence to actually run the numbers and discover that there's a problem (arsenic levels in drinking water over EPA guidelines). Future AIs will be able to do the same things we do now. Except that the AI will be directly wired to unbelievably fast parallel supercomputers. (Dare I say Beowulf Cluster?) These AIs will be able to simulate complex weather systems as easily as you can calculate a mortgage table in Gnumeric.

Corporations not a recent near-singularity (Score:1)
The church. The monarchy and aristocracy. The state. At least in my country's history (Denmark), the immortality of these entities has had a profound effect on the political and personal lives of the citizens. This is particularly the case for the church. One of the main reasons that the Danish king abolished Catholicism in favour of Protestantism was that the church had amassed immense power and wealth through (mostly deathbed) donations of money and (more important) land. The land belonging to the crown and the aristocracy was slowly eroded away, as it was split up and inherited by the younger sons - who in some cases donated it to the church in order to improve their standing in the hereafter. At some point this led to the royalty and aristocrats joining forces, and neutering the church. This may happen to corporations too, if they get too powerful. The current anti-trust laws are an indication that the political leadership of ANY country will never concede power to another entity.

Re:Unpredictable future (Score:1)
Excuse me?
I can imagine the workings of my own brain quite well, even though I can't (yet) understand them. There is no reason that we are incapable of understanding the workings of the human brain, and therefore I think it rather likely that we will understand the workings of the human brain eventually (assuming that humankind lasts long enough).

Re:Predictability and Unpredictability (Score:1)
Yes. That set of rules would be exactly the program that is running on the smart computer. Probably no simpler set of rules would completely define its behavior. I believe that you are confusing 'deterministic' with 'predictable' and thinking that determinism makes prediction easy.

Why does a machine need to be conscious? (Score:2)
Something as simple as a self-replicating nano-bot (whatever that is) that consumes oxygen for energy could end up being the only non-plant form of life on the planet if it replicated out of control and drove oxygen levels below that needed to sustain animal life. Currently machines do replicate and improve themselves, with the help of humans. Over time the amount of help they need is continually decreasing. I do not think that machines will need to be as intelligent as humans to decrease the amount of human assistance required for replication to near 0.
-josh

Re:Smartness is Overrated (Score:2)
(I mean, that's if they had any reason to really care about your (or my) opinion. Which they probably don't, except perhaps as just another tiny part of the masses.) And the point isn't that supersmart machines would necessarily want to run the world, it's that it's hard to guess what they would want. Or why they should care if what they want happens to be at odds with what we might want. Why would what we want be at all relevant to them?

Huh? (Score:2)
Where machines suddenly exceed human intelligence and the future becomes completely unpredictable.
It's funny to see someone predicting the future and at the end of their prediction ruling out the possibility of future predictions. My prediction: That this prediction will end up like the majority of predictions -- wrong.

Re:Why emotion? (Score:2)
Emotions are much more than just chemical reactions. Chemical reactions are just how the human brain happens to implement emotions. Emotions have function and behavioral consequences (e.g. you lust for a female, so you sneak up behind her, restrain her, and hump her -- oops, I mean -- you talk to her and find out her astrological sign and phone #) and that behavior has emerged through (and been shaped by) the evolutionary process. Emotions do things useful for continued survival of the genes that program the chemical processes that implement the emotions, it's not just some weird byproduct. An AI that is created through an evolution-like process (and there is a very reasonable chance that this is how the first AI will be made) will benefit from the behavior-altering characteristics of emotions, so they will probably emerge. Sure, they won't be implemented as chemical processes (well, I guess that depends on how future computers work ;-) but they'll be there.
---

Re:The Singularity and Computational Efficiency (Score:2)
Mathematics as we know it has only been around for a couple thousand years (and was pretty darned simple until just a few hundred years ago), but humans have been around for hundreds of thousands of years. This means that the ability to do arithmetic quickly simply isn't something that humans need to do to survive, thus evolutionary forces have not optimized our hardware for doing that. If you want AIs that are fast at arithmetic, evolve them in a virtual environment where arithmetic ability is an important selection criterion.
---

Re:"A Fire..."
and Anachronistic Commentary (Score:3)
I don't think many people back then had any idea that it would suddenly become "normal" for people to execute untrusted data with full privileges. The concept is still mind-boggling even today, let alone 1992. OTOH, it's more of a social issue than a technological one. I guess it doesn't take much vision to realize: People are stupid.
---

Re:Flawed assumptions? (Score:2)
Consider, e.g., a large company that implemented an internal copy of the net. Now it has its network servers attached, but there's this problem of locating the information that is being sought. So it implements xml based data descriptions, and an indexing search engine. And, as computers get more powerful, it uses a distributed-net approach to do data-mining, with a neural net seeking the data, and people telling it whether it found what they wanted, or to look again. As time goes by, the computer staff tunes this to optimize storage, up-time, etc. The staff trains it to present them the information they need. It learns to recognize which kinds of jobs need the same information at the same time, which need it after a time delay, etc. And then it starts predicting what information will be asked for so that it can improve its retrieval time. Of the entire network, only the people are separately intelligent, but the network is a lot more intelligent than any of its components, including the people. The computers may never become separately intelligent. But the network sure would. Still, I expect that eventually the prediction process would become sufficiently complete that it would also predict what the response to the data should be. So it could predict what data the next person should need. So it could predict what answer the next person should give. So if anybody called in sick, or went on vacation, the network would just heal around them. And eventually...

Caution: Now approaching the (technological) singularity.
Re:The Singularity and Computational Efficiency (Score:2)
Have you ever heard of sound cards? Video cards? Specialized graphics chips? There's nothing that keeps computers from adding specialized signal processing hardware onto their general purpose capability. This is proven, because we already do it. And so does the brain. Perhaps we will need to invent a specialized chip to handle synaesthesia for our intelligent computers. Is that really to be considered an impossible hurdle? To me that seems silly. Just because we don't know how to do it, and how it should be connected yet, doesn't mean that we won't next year. Or the year after that.

Caution: Now approaching the (technological) singularity.

Re:Smartness is Overrated (Score:2)
A certain amount of intelligence is probably necessary, but the main ingredient seems to be a monomaniacal fixation. This, of course, leads to a certain number of acts that actually hinder the cause that one is ostensibly attempting to forward, but if the result is increased control, then to the lunatic in charge, this will actually be evaluated as a success. Don't trust what they tell you, watch what they do. Actions speak louder than words. (Don't I wish. In fact, many pay attention to the words, and ignore the actions.)

Caution: Now approaching the (technological) singularity.

Re:Official Flame Thread (Score:2)
Caution: Now approaching the (technological) singularity.

Re:Vinge's Singularity is AI Doc Numero Uno! (Score:3)
If you will recall, last year was full of people denouncing Mozilla as a failure. It took a bit longer than they expected. But I no longer use anything else when I'm on Windows. (True, on Linux I more frequently use Konqueror, but I use Mozilla whenever I'm on the Gnome side of things.) Possibly people's ideas of how a project should work have been overly influenced by movies and popular stories.
(Though in Asimov's Foundation series, the bare framework of the Seldon plan required the entire lifetime devotion of the principal architect, as well as extensive commitment from dozens of others, so not all popular fiction is of the "quick fix" school.) Relativity took many years to be developed to the point of presentation, then it took decades of testing, and it's still being worked on. Special Relativity is now reasonably soundly grounded, but General Relativity still needs work. But people don't call it a failure. Why not? The A-Bomb was as much of a brute-force effort as Deep Blue was. Both were successful demonstrations, and in their success they highlighted the weakness of the underlying theories. But when it comes to AI, people keep moving the markers, so that whatever you do isn't really what they mean. I wait for the day when the hard version of the Turing test is passed. I firmly expect that at that point AI will be redefined so that this isn't sufficient to demonstrate intelligence. Already in matters of sheer logic computer programs can surpass any except the most talented mathematicians. (And perhaps them, I don't track this kind of thing.) It's true, most of these programs require a bit more resources than is today available on most home computers. But that's fair. Neural net programs can solve certain kinds of problems much more adeptly than people can. And they learn on their own what is an acceptable solution (via "training" and "reinforcement", etc.). And expert systems can capture areas of knowledge that are otherwise only accessible to experts in the field. (For some reason, experts are often a bit reluctant to cooperate.) Now it's true that these disparate functions need to be combined. It's true that the world is quite complex, and the only way to understand it may be to live in it. The real problem with AI is that nobody has a satisfactory definition of the 'I' part.
Artificial is clear, but nobody can agree on a testable definition of Intelligence. The one real benefit is that it may get rid of those silly multiple choice IQ tests and Standardized Achievement Tests. It would be easy for an AI to learn how to get the highest score possible (though it would require a bit of training, but then that's what they've turned grade-schools into -- training grounds for multiple choice tests).

Caution: Now approaching the (technological) singularity.

Re:Two things (Score:3)
In certain decades it is "fashionable" to be optimistic. In others, to be pessimistic. (The reasons have much to do with the age spread of the population, of the writer, with whether the author feels that things are getting better or worse NOW, etc.) During the late 50's up through the mid 70's optimism dominated. Then there was a reaction (Vietnam war, etc.) and the trend turned to pessimism (this started in Britain for some reason... I don't know why, I wasn't there). But there are always contrary voices. When Asimov, and the well engineered machines that favored humanity, were dominant, then Saberhagen introduced the Berserkers (intelligent robot war machines designed to reproduce, evolve, and kill all life). I can't remember which are current, but novels with robot servants (sometimes almost invisible) aren't that uncommon even now. They just aren't featured characters anymore. They've become common, expected. OTOH, another of Vinge's postulates is coming to pass: whether through fashion or necessity, the proportion of fantasy to science fiction is increasing. Fairly rapidly. Fantasy used to be uncommon (although it was common before WWII). In the 50's and 60's it was usually disguised as science fiction. It started emerging again in the 70's. And now it is the predominant form. But a large part of this may be fashion. OTOH, Vinge predicted that as the future became more incomprehensible, the proportion of fantasy to science fiction would increase. So.
Not proof, but evidence.

Caution: Now approaching the (technological) singularity.

Talk - an early form of instant messaging? (Score:2)
Quote from the article: Is it just me, or did anyone else pause for a second after reading that sentence? As far as I remember, most of the operating systems that had access to the Internet had some form of a "talk" program. This includes all UNIX-like operating systems that I tried, such as Ultrix, SunOS, Solaris, HP-UX, A/UX, AIX and now Linux, but also some IBM 3090 mainframes (although these were batch-processing machines, there was also a way to talk to other users). The term "instant messaging" was coined much later: only a few years ago, when Windows started to invade all desktops and AOL started promoting its AIM. Seeing "talk" defined as "an early form of instant messaging" just looks... strange to me.

Re:We've already been through a singularity (Score:2)
Corporations are an artifact of our legal systems and have steadily grown in power and efficacy since they were first conceived several hundred years ago. At this point they are self-sustaining and self-reproducing, even pursuing their own agendas that have only a tangential relationship to individual human agendas. I think it is interesting to note, however, that corporations are not, by almost any measure, smarter than individual humans, quite the opposite (consider well known sayings about the I.Q. of a mob or design by committee). The issue isn't whether our creations become more intelligent than us, but whether they become more potent than us. Corporations have become more potent than individual humans because 1) they can amass far larger fortunes (in terms of manpower, money, land, or almost any other measure) than an individual, and 2) they are, essentially, immortal (and, to a large extent, unkillable. While the laws may, technically, be empowered to disband a corporation, in practice this is nearly impossible).
Corporations are essentially god-like: omnipotent (if not omniscient) and immortal, invulnerable to almost any harm, complete with their own mysterious motives and goals. So, if we accept that the singularity has already occurred, we might ask why we aren't more aware of its after effects. The answer, of course, is that the corporations don't want us to be aware, and are doing everything in their considerable power to obscure the effects of the singularity. Life goes on as normal, as far as lowly humans are concerned, because it would be terribly inconvenient for the corporations if it didn't (modulo pollution, environmental destruction and a moderate amount of human suffering and exploitation).

Re:Vinge embodies the worst of science fiction (Score:2)
The reason that no one is commenting on Vinge's characters or stories is because they are not relevant to the topic at hand! The issue at hand is whether or not Vinge is a blithering nut-job for going on about this singularity crap that seems to be so popular with a number of science fiction writers cum technology commentators. I am heartened to see that there is a fair amount of skepticism in the comments concerning the idea of the singularity and Vinge's general nuttiness (and, even, self-contradiction) on the subject. It's good to know that the CS and IT trenches are filled, for the most part, with sane, level-headed folk, unlike the ranks of supposed luminaries like Joy, Kurzweil, and Vinge. There may well be folks in this forum who think that Vinge is a great writer: they're wrong, but more power to 'em anyway. I've read both A Fire Upon the Deep and A Deepness in the Sky and found them moderately enjoyable, but nothing to rave about. I wouldn't say that Vinge is in the ranks of the worst science fiction I've ever read, but he's not far removed from the median (I won't say if he's above or below).
<OFFTOPIC> If you are looking for good literature in SF, you should have a look at Gene Wolfe (the New Sun and Long Sun series), Kim Stanley Robinson (Red/Green/Blue Mars and Ice Henge), Octavia Butler, Richard Grant (Rumors of Spring, Views from the Oldest House and Through the Heart. More recently, Tex and Molly in the Afterlife, In the Land of Winter and Kaspian Lost), or, maybe, Stephen R. Donaldson. I used to be quite fond of C. J. Cherryh, but have found her recent stuff too formulaic. There is good SF out there, but, as with almost anything else, the ratio of good-to-crap follows Sturgeon's law. </OFFTOPIC> The Singularity and Computational Efficiency (Score:5) However, in doing this extrapolation, one is making a few assumptions. Most notable is the assumption that one can teach a computer how to think. What do I mean by computational efficiency? Roughly speaking, the relative performance of one algorithm to another. For instance, in talking about the singularity (as Vinge puts it), one often neglects the fact that human beings, with their neurons clicking away at petacycles per second, can only do arithmetic extremely poorly, at less than a flop! Logical puzzles often similarly vex humans (witness the analytic portions of the GRE!), where they also perform incredibly poorly. Significantly, human beings are very computationally inefficient at most tasks involving higher brain functions. We might process sound and visual input very well and very quickly, but most higher brain functions are very poor performers indeed. One application of a similar train of logic is that human beings are the only animals known to be capable of performing arithmetic. Therefore, if one had a computer comparable to the human brain, one could do arithmetic. Heck, by this logic, we're only 50 years away from using computers to do integer addition!
The main point here is that, with regard to developing a "thinking" machine, WE MIGHT VERY WELL have the brute-force computational resources available to us today. The hardware is not the limitation so much as our ability to design software with the complex adaptive ability of the human brain. Just WHEN we will be able to develop that software, no one can really say, since the obstacle is really a fundamental flaw in our approaches, rather than in our devices. (It is similar to asking when physicists will be able to write down a self-consistent theory of everything. No one can say.) It could happen in a decade or two, or it could take significantly longer than 50 years. It all depends on how clever we are in attacking the problem. Diaspora by Greg Egan (Score:1) Let me plug the novel Diaspora by Greg Egan as an interesting look at what the singularity will mean to the future of humanity - the history of the rest of time reduced to handy pocket-novel size. Re:Flawed assumptions? (Score:2) Yes, technology will advance in the next X years, but to assume that a necessary part of that advancement is the creation of a machine that is more intelligent than a human is just plain ridiculous. Some would argue that a machine intelligence of that nature is absolutely impossible in the first place (not that I agree with them, but there are rational arguments that suggest this). I'm basing my view of the state of AI, and of what we can expect in the future, on the results of research I've seen and carried out at some of the top AI departments in the world, so I think I've got a fairly good grasp of the subject matter, and I am 100% happy to say that faster computers will not give us any form of machine intelligence. Re:Flawed assumptions? (Score:3) But very rarely in the ways you expect. Look at the predictions people were making for life in the year 2000 back in 1800, or 1900, or 1950, or even 1990. You'll see that a lot of it didn't happen.
Some did, and some things that people hadn't even considered happened as well. But a lot of it just didn't take place. Regardless of whether advancement takes place, the link that Vinge assumes between computer hardware performance and computer intelligence does not exist. If true machine intelligence comes about within the next thirty years, it will not be as a direct result of improved hardware performance. There aren't any systems out there that aren't intelligent but could be if we could overclock their processors to 150GHz. Flawed assumptions? (Score:5) Progress in computer hardware has followed this curve and continues to do so. Progress in computer intelligence, however, hasn't. Computers are still stupid. They can now be stupid more quickly. This isn't going to produce super-human intelligence any time soon. Dr Vinge reminds me somewhat of that most mocked of AI doomsayers, Kevin [kevinwarwick.com] Warwick [kevinwarwick.org.uk]. Replies (Score:1) You said: "Why do people think SIs [super intelligences] will be unpredictable?" Because they are so much smarter than us that we have no way of knowing how they will think and thus act. Just as a moron can't write a book about a rocket scientist, an AI researcher can't write a book about a SIAI (well, at least not a rational one). In some cases, AI will act in ways that seem reasonable to us, and at other times it will require a lot of thought on our part to understand their better reasoning. Still, as of right now, we can't say for sure how an AI will act, or even know how they will act once they are around. You said: "I don't think SIAI will be dangerous. They're way smarter than us and won't be evil." Many of us in the Singularitarian community used to make this assumption, but eventually you realize that most of the reasons we expect an AI to act like this are anthropomorphic. AIs don't have those same built-in features that we do.
They wouldn't even have a sense of self when first created (or maybe ever). Check out for papers on Friendly AI for potential solutions and further explanations of this problem. You said: "We'll never develop a theory of general intelligence. We're not even going to have strong AI, let alone ones that can improve their own code." You have a lot to learn. All I can say to these kinds of comments is that you need to educate yourself better on topics of cognitive science. You know, learn about physicalism (roughly, brain == mind) as opposed to dualism, learn about how information is represented in the mind, etc. This view is mostly the result of simply not having been presented the proper information. You said: "I don't think the Singularity will happen. I mean, look at how much stuff has been predicted and never came true." The Singularity is a matter of life and death. In case you're wondering, the Meaning of Life right now is to reach the Singularity. Until then, anything you do is pretty much worthless unless it gets us to the Singularity faster. If we don't reach the Singularity, you are going to die sooner or later. The Singularity means you can live as long as you like, do what you want (this is a complex issue; see my site for a paper on how to protect non-violation of volition post-Singularity), and live happily ever after. I can't stress this enough: reaching the Singularity is a matter of life and death. If we don't make it, we all die. Maybe not today, but soon. And this is not just the fate of individuals, but of humanity and all life. If the Singularity is not reached, all life will cease to exist eventually. When looking at the Singularity in this light, you almost have to wonder why you're not already helping to make it arrive sooner. Re:Talk - an early form of instant messaging? (Score:1) Now that we have defined that equivalence, are there any IM patents that need busting? uh... (Score:2) Man, that sure sounds strange to my ears.
I wonder what stuff the press will be explaining in a few more years... Registration-free link (Score:1) Re:The Singularity and Computational Efficiency (Score:2) #include "disclaim.h" "All the best people in life seem to like LINUX." - Steve Wozniak Re:Knowledge Crash (Score:2) Like you say, an interesting theory. However, it seems to hinge on the idea that educating someone carries a fixed cost per unit of knowledge (whatever that may be). Or at least that the cost of education per k.u. is not falling as fast as the rise in the number of k.u.'s required to operate in society. This ignores the fact that it is not always necessary to have an instructor or prepared curriculum in order to learn something. For example, when I first got a Windows box, I could have spent $150 on a course at the community college to learn how to double-click on an icon, but chose to save my money and teach myself. In fact, when it comes to education in general, once you teach someone how to engage in critical thinking and give them access to a worldwide knowledge database (which the Internet is turning into), the motivated student can gain unlimited knowledge at virtually no cost other than connectivity. Myself as an example: I have learned far more in my past 6 years of Internet access at a cost of <$1.4K in dial-up fees than I did in my previous 6 years of university education at a cost of >$30K in tuition fees. Trickster Coyote I think, therefore I am. I think... ray? (Score:1) Re:ray? (Score:1) you would have seen this. [kurzweilai.net] Re:ray? (Score:1) Re:Flawed assumptions? (Score:2) Technology in genetics, networking, materials science and electrical engineering is progressing at a frightening rate. Soon, we'll be able to construct useful, microscopic machines; implanted computers; and who knows what else. The world becomes stranger faster, every year.
-- Aaron Sherman (ajs@ajs.com) Across Realtime and the singularity (Score:3) The idea is that technology progression is asymptotic, and will eventually reach the point where one day of technological progress is equal to all that of human history, and then, well... there's the next day. He doesn't cover exactly what it is, because by definition, we don't know yet. But it's catastrophic in the novel. A good read (actually the first part, which basically just introduces the "Bauble", is a good read alone). He sort of refined the idea into something maintainable in A Fire Upon the Deep by introducing the concept of the Slow Zone, which acts as a kind of buffer for technology. If things in the Beyond get too hairy, the Slow Zone always remains unaffected, and civilization can crawl back up out of the "backwaters" (e.g. our area of the galaxy). He's a good author, and I love his take on things like cryptography, culture (A Deepness in the Sky), religion, USENET (A Fire Upon the Deep), Virtual Reality and cr/hacker culture (True Names). -- Aaron Sherman (ajs@ajs.com) Re:The Singularity and Computational Efficiency (Score:1) It may well be that we'll never be able to design such software. However, we could evolve it. Using genetic algorithms and other "evolutionary" programming approaches [faqs.org] seems to me the most promising approach. Tom Swiss | the infamous tms | Re:Flawed assumptions? (Score:1) Re:The Singularity and Computational Efficiency (Score:1) Human intelligence? (Score:4) scold-mode: off We've already been through a singularity (Score:2) The human race has already been through a singularity. Its aftermath is known as "civilization", and the enabling technology was agriculture, which first made it possible for humans to gather in large permanent settlements. There are a few living humans who have personally lived through this singularity... stone-age peoples in the Amazon and Papua New Guinea abruptly confronted by it.
For the rest of the human race it was creeping and gradual, but it still fits the definition of a singularity: the "after" is unknowable and incomprehensible to those who live in the "before". Re:Singularity, SETI and the Fermi Paradox (Score:2) There are other possibilities as well for SETI's lack of success. Our solar system and our planet may be fairly unique in some ways: probably bacteria-like life is extremely common, but advanced intelligent life might in fact be somewhat rarer than was once thought. virtual reality progress = ghost planet (Score:3) Another strong possibility (for SETI's lack of success) is that intelligent races prefer virtual reality to real reality, in much the same way that the human race prefers to sit inside watching TV instead of going outside for a walk in the woods and grasslands where we evolved. When we have better-than-Final-Fantasy rendering in real time, most of the human race will probably choose to spend most of the day living and interacting there, in virtual-reality cyberspace... in much the same way that many of us today spend most of our day in an office environment, living and creating economic value in ways incomprehensible to our hunter and farmer ancestors. When this happens, the planet may seem empty in many ways... in much the same way that suburban streets in America seem empty to a third-world visitor used to bustling and noisy street life. This phase (the human race moves into and settles cyberspace, becoming less visible in the physical world) is not the same as the Singularity. For one thing, it is not at all dependent on future advances in artificial intelligence... we just need ordinary number-crunching computers a few orders of magnitude faster than today. If the AI naysayers are right, and machines never get smart enough, then the Singularity will never happen... but the "ghost planet" scenario will inevitably happen in our lifetime... either as a result of progress, or as the unhappy result of plague or nuclear war.
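One comment above suggests that, rather than design thinking software, we might evolve it with genetic algorithms. Here is a minimal toy sketch of that idea; all the names, parameters, and the deliberately trivial fitness function are mine, not from the thread, and a real evolutionary system would score far richer behavior than counting bits:

```python
import random

random.seed(0)

def fitness(bits):
    # Stand-in objective: count the 1s. A real system would score behavior.
    return sum(bits)

def evolve(n_bits=32, pop_size=50, generations=60, mut_rate=0.02):
    # Random initial population of bitstrings.
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_bits)      # one-point crossover
            child = a[:cut] + b[cut:]
            # Point mutation: flip each bit with small probability.
            child = [bit ^ (random.random() < mut_rate) for bit in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

Because the survivors are carried over unchanged each generation, the best fitness never decreases; the open question the comment raises is whether this kind of blind search can scale from bitstrings to minds.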
Re:get a copy...if you can (Score:1) True Names - the novel by Vernor Vinge [gatech.edu] Rapidly accelerating tech != singularity (Score:2) The function y = 2^x has no asymptote: it becomes ever higher, ever steeper, but for each value of x there is a finite value of y. Let x = time, let y = technological level (if such a concept is reducible to a single number) and this may be a model of our progress under ideal conditions, free from setbacks like plagues and nuclear wars. I have yet to hear a good reason why this model is not a better one than the singularity idea, other than wishful thinking and the fact that the singularity makes a better story. But let's not confuse SF with the real world. We don't even have working human-like AI yet (Score:2) The converse is: can the calculator understand algebra or calculus? Nope. Do we currently have machines that aren't as smart as humans but can understand/simulate human mentation? Nope. (I certainly don't think Cyc the database qualifies, or that Darwinian algorithms have intelligence.) Can 'very briefly' equal never? Yep. I don't know when or why a certain part of the geek contingent became unable to tell the difference between fiction and reality, but this transhumanist/Kurzweil/extropian stuff won't work until we have a working model of consciousness that can be verified experimentally. Vinge and his ilk are this generation's Tim Leary: lots of optimism and futurism, but feet planted strictly in the sky. Re:Golem XIV by Stanislaw Lem (Score:1) Re:Deepness in the Sky - Focus (Score:2) We had better hope that AI (and hence the Singularity) is indeed possible, because if it isn't, Focus is almost certainly possible, and with it tyranny on a scale we can barely imagine.
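The mathematical point in the "Rapidly accelerating tech != singularity" comment above is precise: y = 2^x grows without bound but is finite at every x, whereas a true singularity requires a vertical asymptote at some finite time. A quick sketch of the contrast (the hyperbolic growth law 1/(T - x) and the numbers are illustrative assumptions, not from the post):

```python
def exponential(x):
    # y = 2^x: ever steeper, but finite for every finite x.
    return 2.0 ** x

def hyperbolic(x, T=100.0):
    # y = 1/(T - x): diverges as x approaches the finite time T.
    return 1.0 / (T - x)

# Exponential growth never "arrives" anywhere special:
print(exponential(10), exponential(50))

# Hyperbolic growth blows up on the way to x = T:
for x in (50.0, 90.0, 99.0, 99.9):
    print(x, hyperbolic(x))
```

In other words, the disagreement in the thread can be restated as: which curve does technological progress actually follow? Only the second family of curves has a "singularity" in the mathematical sense.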
Singularity is "Rapture for Nerds" (Score:1) Ken MacLeod noted this in a Salon article [salon.com]. Re:Respect Copyright (Score:1) True Names Re-issue keeps getting delayed (Score:2) Re:The Singularity and Computational Efficiency (Score:1) Thanks for the post, and while I (naturally) agree with your conclusion that AI is a software rather than a hardware problem, your comment only describes the calculations we carry out consciously. This doesn't really apply to the autistic lightning calculators - or even to us when we're doing calculus to, say, catch a ball or drive a car. Trying to think about what you're doing under those circumstances tends to make the task quite a bit harder. (Is consciousness over-rated? :) Is there anyone out there who knows more maths than me who's willing to tell me what my brain can do that a neural net of sufficient size can't? All I have to say to Vinge is... (Score:1) I like his books, but his predictions about the future are about as likely as those in the 50's stating that we would all have our own flying vehicles by now. Re:Huh? (Score:1) Great teacher (Score:1) Re:Deepness in the Sky - Focus (Score:2) Scares the shit out of me. The non-Singularity (Score:2) Vinge has made it fairly clear that he doesn't think that Deepness is where society is going--he seems fairly confident that we'll reach the Singularity. ~=Keelor Re:Flawed assumptions? (Score:2) Well, that doesn't say much. Because either A) you're not very bright or B) you live a very safe and healthy life, so you expect it to be looong. But seriously, don't you think there's a huge step from building an artificial neurological brain to making it actually work? We may imitate some internal processes in the neurons, but the brain has a huge and complex architecture suited for human activity and the human body. I believe it can be done, roughly, but if it's going to be in MY lifetime there'll have to be HUUGE advances soon.
I don't believe these AIs will be comparable to humans that soon, though. Much of human thinking is not logical at all. If we were to only live perfectly logical lives, I think I'd vote myself out of "humanity", because much of our joy and fun is not logical at all. Then again, it all really depends what you mean by intelligence too. That's just another can of worms, making such statements completely arbitrary. - Steeltoe Re:Vinge's Singularity is AI Doc Numero Uno! (Score:1) And if you're really serious about this, remember that a lot of clever people have tried this before you, and utterly failed. Good luck anyway! Re:Knowledge Crash (Score:2) Allen also wrote a book called The Modular Man, about a man who downloads his personality into a robotic vacuum cleaner, that is excellent and deals with many of the same concepts Dr. Vinge is talking about. Knowledge Crash (Score:4) The idea is, basically, that every year it costs more to educate someone. In order to be able to expand our collective knowledge, or even to utilize the machines and operate the systems of the present, it will cost a certain amount of money in the education process. In addition, we can quantify the amount of output a single human creates in his or her lifetime. For instance, if she works for thirty years at a power plant or something, we can determine the value that she has contributed to society. As systems become more complex, more education is required. The education costs more money. At some point, if this continues unchecked, we will be faced with a situation where the cost of education exceeds the value brought as a result of that education. That's called the Knowledge Crash. (Or it was in the books.) While I'm not convinced that this is true, it's certainly an interesting theory.
It seems to me that, on average, this can't happen, as one of the points of creating more and more complicated (generic) systems is to facilitate simpler and simpler controls, and thus dumber and dumber operators. While the creators of those systems may have 'crashed knowledge,' it seems that the whole point of that would be to hurl some value at the workers. But then you have to consider that, inherent in the value of a designer, the ease of use is part of the entire value analysis versus education, and then that'll crash... Re:Flawed assumptions? (Score:2) While skepticism is a fine sentiment, I can't help noticing that you are making more assumptions than Vinge is. Sure, we will be able to simulate or imitate the brain roughly -- but I think it is a stretch to demand that consciousness come only from detailed imitation. It may be that roughly is enough. The brain is also a finite machine - we will soon be able to build electronics that exceed its capacity. Re:Flawed assumptions? (Score:2) For example, storage. It seems that we can build hard disks in excess of human brain capacity. But static storage is incomparable to the dynamic kind of store the brain has. So - wrong measure. Another example: FLOPS. The human brain is a massively parallel computer; microchips are not. Now, it is claimed that you can simulate a parallel computer with a single-chip one. Admittedly, the difference between possibility and practicability is huge. But if the brain is a massively parallel computer, then a sufficiently fast chip will get to the level where it has comparable compute power. Just run a brain simulation on this computer. If the brain is not, then again - wrong measure. We can just go on, finding the right measure. I think, all things considered, that measure will be exceeded, and by that time we will have a conscious computer. At some level, you either have faith, or you don't.
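The Knowledge Crash described a few comments back is, at bottom, a crossover claim: per-worker education cost compounds faster than per-worker lifetime output, so at some year the curves cross. A toy model shows the mechanic; every figure and growth rate here is invented for illustration, none come from the thread or the books it summarizes:

```python
def crash_year(cost=50_000.0, output=1_000_000.0,
               cost_growth=0.05, output_growth=0.02, horizon=500):
    """Return the first year the compounding cost of educating a worker
    exceeds their lifetime output, or None if it never does within
    `horizon` years. All figures are made up for illustration."""
    for year in range(horizon):
        if cost > output:
            return year
        cost *= 1 + cost_growth
        output *= 1 + output_growth
    return None

# Education cost compounding 3 points faster overtakes a 20x head start:
print(crash_year())
# Equal growth rates: the ratio is constant and the crash never comes.
print(crash_year(cost_growth=0.02))
```

The reply above it amounts to disputing the premise: better tooling pushes cost_growth down (or even negative), in which case the crossover recedes indefinitely.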
Singularity, SETI and the Fermi Paradox (Score:1) Fermi showed that, given reasonable assumptions, we ought to expect "ET" to be ubiquitous. Since extraterrestrials are not all about us, this suggests either that technologic civilizations are exquisitely rare or that they rapidly lose behaviors like migration and radio communication. By rapidly I mean within two to three hundred years. The Singularity is the kind of event that would do that. If technologic civilizations always progress to a Singularity, they may well lose interest in minor details like reproduction and out-migration. Among other things, they would operate on very different time scales from pre-Singular civilizations. See also [faughnan.com]. john -- John Faughnan Re:Singularity, SETI and the Fermi Paradox (Score:1) A few comments: 1. Fermi's calculations assume a civilization with light-speed technology that expands from planet to planet every few hundred to thousand years. No highly advanced technology is required for such a civilization to colonize the galaxy within tens of thousands of years -- just exponential growth. See for a Rumanian example. 2. The point of my argument is that it's not likely that post-Singular civilizations are driven by the same things that drive biological organisms (growth, expansion, etc). For one thing, their time scales are different from biological organisms'; it's not hard to imagine that they exist in a time-space that's thousands of times faster than ours. Here's my argument in summary: -- John Faughnan Singularity, SETI and the Fermi Paradox - Kurzweil (Score:1) See Kurzweil's article: and search on SETI. I may have thought of it earlier (a year ago or so), but I didn't think I was the only one who thought of it. see -- John Faughnan Misconception about Vinge's Singularity (Score:1) Vinge does not require the advancement of computers to a point at which they are regarded as intelligent. This is only one of several possibilities mentioned in his paper [caltech.edu].
Other possibilities include: Vinge is one of the Spearheads of traditional SF (Score:1) Your recollection, lacking exactly the substantiation you mention, is worthless. You can find plenty of detailed comments using Google. Deepness in the Sky (Score:2) 'Deepness' [amazon.com] is the prequel to 'A Fire Upon the Deep' and even better. Read it first. While there is more discussion about non-human intelligence in 'Fire', the actual impact of Vinge's idea is greater in 'Deepness', where his excellent world-building skill is used to create the best traditional SF I know. Both 'Deepness' and 'Fire' also feature some really neat alien races. Where AI went to die (Score:2) Below the sign, two rows of empty cubicles hold obsolete computers and out-of-date manuals. In a nearby lounge, old issues of Wired from the early '90s lie on tables. Dusty boxes are stacked against one wall. Few people are about. Nearby office doors hold the names of researchers who had their fifteen minutes of fame around 1985. This is where the dream died. The Knowledge Systems Lab was the headquarters of the expert systems faction of artificial intelligence. These were the people who claimed that with enough rewrite rules, strong AI could be achieved. It didn't work. And that empty room, frozen in the past, is what remains. Re:I don't buy it. (Score:2) Snicker. I think Dr. Vinge is right... and I think it is scary. If you are familiar with electronics, think about how a diode avalanches. If he is correct, AI could well "avalanche" past what evolution gave us in a very, very short period of time. Humans learn at a given pace. We are nearly helpless at birth, yet can be a "MacGyver" in our twenties and thirties, able to make a helicopter gunship from nothing but baling wire and duct tape (on TV, anyway). That's a 20-30 year span, or nearly a quarter of our lives, to reach our maximum potential. Who is to say an AI system could not, at some point, triple its cognitive abilities in a 100 ns time slice?
And to think I didn't take his class cuz some lamer told me he was a "hard ass" -- rats. That's what I get for listening to lamers. SDSU has so many wonderful professors... Vinge, Baase, Carroll. Great university, great professors, great memories. Treatment, not tyranny. End the drug war and free our American POWs. Smartness is Overrated (Score:2) Re:Smartness is Overrated (Score:2) I respect their copyright... (Score:2) Hmm... I wonder, should I have an ethical dilemma reading commentary by people who have read the article in violation of copyright? I think not, since I have entered into no agreement with the NYT. "General" Human Intelligence not Necessary (Score:3) Because of stupid, but fast, computers, we are headed toward being able to hack our DNA (and/or proteins). This will certainly produce incremental gains in lifespan and health... perhaps it will produce dramatic ones. Because of stupid, but fast, computers, we can simulate physical processes to enable us to engineer better widgets. Perhaps this will make routine space travel economical. Because of stupid, but fast, computers, we are heading toward having the bulk of human knowledge instantly available to anyone with a net connection. How will this leverage technical progress? Two things (Score:3) Another thing has to do with this "let's fear AI" genre of SciFi in general. Why does no one challenge the assumption that when artificial creatures develop intelligence and a personality, that personality will inevitably be indifferent, power-hungry and cold? Isn't it just as easy to imagine that artificially intelligent creatures/machines will strike us as being neurotically cautious, or maybe friendly to the point of being creepy? Maybe they'll become obsessed with comedy or math or music. Or video games. Realistically, I think the first machines which we take to be intelligent will be very good at means-to-ends reasoning, but will not be able to deliberate about ends (i.e.
why one sort of outcome should be preferable to another). I would argue that even we humans can't really deliberate about ends. At some point we hit some hard-wired instincts. Why, for example, is it better that people are happy rather than suffering? The answer is just a knee-jerk reaction by us, not some sort of reasoned conclusion. When we create AI, we will have the luxury of hard-wiring these instincts into intelligent machines (without some parameters specifying basic goals, nothing could be intelligent, not even we). Humans and animals are basically built with a set of instincts designed to make them survive and fuck and make sure the offspring survive. There is no reason to think AI creatures would necessarily have these instructions as basic. I'm sure we could think of much more interesting ones. The consequence is that AI creatures might be more intelligent than we are, but in no way sinister. Re:Knowledge Crash (Score:2) As things get more complex, they get refined into modular pieces. It takes a very small amount more training to drive a modern Ford Taurus as compared to a 1930's Packard. This holds true even when fixing the car. Mechanics don't rebuild alternators anymore, they replace them. Computer technicians don't use a soldering iron anymore. They replace the defective video card! This pattern holds with software, as well. Remember when C, today's "low level" language, was considered very inefficient and bloat-ridden? How about Perl? (Now fast enough to decode a DVD movie on the fly with moderate hardware!) The real danger here is not that we'll have a knowledge crash, but that we'll keep dumbing everybody down to the point where, to run anything, you push a red button. If the red button doesn't work, we have a REAL crash... -Ben Re:Flawed assumptions? (Score:2) The problem from the perspective of a working neuroscientist is that we don't yet understand how the brain is intelligent. On the other hand, things are starting to fall into place.
For example, we have a hint of why neural synchronization occurs in the brain, because we're beginning to realize that time synchrony is something many neurons are very good at detecting. We're also beginning to understand memory formation in the cortex. It seems to involve the creation of clusters of synapses, and those clusters get activated by time-synched signals. There's some evidence for analog computation, and there's some evidence for almost quantum computation. So we're beginning to understand how to build a brain. That seems to be the hump, so I'm fairly confident I'll live to see computers at least as intelligent as I am. And I'm 54. Re:Across Realtime and the singularity (Score:2) Vinge's Singularity is AI Doc Numero Uno! (Score:4) -- Delusional Technocratic Arrogance (Score:2) The subject line, stated by Henry Warwick on the Jaron Lanier .5 Manifesto [edge.org] site, says it all. Following is reality... suck it up: Steve Magruder Re:The Singularity and Computational Efficiency (Score:2) I disagree. Humans and other animals may be poor (relatively) at doing paper-and-pencil mathematics, but they are quite good and fast with innate math. Huh? Well, tossing a basketball through a hoop requires unconscious calculation to make the muscles add the correct energy to the throw; it must be pushed in the correct direction to make up for player movement relative to the hoop, etc. A lion, alternatively, must do the calculation of an efficient pursuit trajectory of prey when they bolt. A lion doesn't run to the prey; it predicts and compensates for the movement/running of the prey to form an intercept course. This happens all the time, and unconsciously, with ALL creatures with a brain. It does involve math, and it is automatic. Not too bad. Then there is the difference between a machine calculating a formula and a human deriving that formula in the first place. No machine creates new formulas or mathematics.
They ONLY calculate that which humans, in their creativity, slow as it may be, are able to devise. Quantum math, relativity, calculus... humans are slow to calculate the answers but very good at coming up with the formulations and rules. Re:Registration required? Why not! (Score:2) Re:Flawed assumptions? (Score:2) Which is completely irrelevant. The human nervous system evolved over the course of billions of years and works in a very specific and detailed fashion, most of which is still a mystery to us at the computational level. Without reproducing all that evolved design, we would not have anything like human intelligence. We already have machines that exceed the human capacity in every way physically, but we have not yet been able to create a robotic construction worker. Why should just throwing teraflops at the intelligence problem go any further towards solving it? Tim Vinge embodies the worst of science fiction (Score:2) It's sad that there's not a better venue for scientific speculation per se. If there were, people with no ear for fiction, such as Vernor Vinge, Robert Forward and Isaac Asimov, would not feel themselves forced into quasi-fictional exercises that demean both themselves and the storyteller's art. Tim Re:Succinctly (Score:2) Hmm. The trouble is that very, very few humans can actually make intuitive leaps. I can think of the guy (or gal) who figured out fire, Da Vinci, Edison, Einstein, a handful of others. Most of us just make tiny tweaks to other people's ideas. Bizarrely, given sufficient processing power, it might be more efficient to produce a speculating machine (one that can design a random device from the atomic level up, with no preconceptions of usage, then try to find a use for it), rather than try to identify humans who can actually come up with ideas that are genuinely new.
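The "innate math" comment above (tossing a basketball, a lion's intercept course) corresponds to a real closed-form ballistics problem. As a sketch of what the brain solves implicitly on every free throw: ignoring air drag, the distances and launch angle below are my assumed example values, not figures from the thread:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def required_speed(dx, dy, angle_deg):
    """Release speed so a drag-free projectile launched at angle_deg
    passes through a point dx meters downrange and dy meters up."""
    theta = math.radians(angle_deg)
    reach = dx * math.tan(theta) - dy
    if reach <= 0:
        raise ValueError("launch angle too flat to reach the target")
    return math.sqrt(G * dx**2 / (2 * math.cos(theta) ** 2 * reach))

def height_at(dx, v, angle_deg):
    # Height of the trajectory after covering dx meters downrange.
    theta = math.radians(angle_deg)
    return dx * math.tan(theta) - G * dx**2 / (2 * v**2 * math.cos(theta) ** 2)

# Free throw, roughly: rim 4.6 m out and 1.05 m above the release point,
# released at 50 degrees.
v = required_speed(4.6, 1.05, 50)
print(round(v, 2), "m/s")
```

A player's motor system settles on something close to this answer with no symbolic math at all, which is exactly the commenter's point about where brains are and aren't computationally efficient.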
Succinctly (Score:4)

The most succinct Vinge quote [mbay.net] that I can think of is:

cool singularity links (Score:5)

And then there's the non-profit corporation, the Singularity Institute for Artificial Intelligence, which is determined to bring the Singularity about as soon as possible: There are a lot of good Vinge links on that page too, btw. Singinst seems to be the brainchild of this guy: who has a lot of interesting docs here: Don't miss the FAQ on the meaning of life, it's great reading.

Unpredictable future (Score:2)

I thought the future was already unpredictable. About the intelligent machines, I think the error is falling into the "biology" trap. Our whole perception system is conditioned by the ideas of "survival", "advancement", "power", "conscience", among others. Those come from our setup as living entities, trapped in a limited-resources environment, having to compete for those resources. The fact that a machine is intelligent won't make it conscious, or interested in survival or power. There is no obvious relation. If you were to menace a machine more intelligent than you with cutting the power supply, it would be perhaps politely interested but not more. That is, if the development of the machine is made through "traditional" procedures. I would be wary of genetic-algorithm type development. That could create a thinking and competitive machine :o) There are things that we cannot even imagine. One of them is the workings of our own brains. Another is how a thinking machine would act. Of course, some are more interesting to write a book about than others. But it isn't SF for me, more like fantasy.

--

Hmm yes (Score:2)

Now that I have seen my error, can I correct it by withdrawing my post? Can anyone tell me how? (This is not intended as a troll)

Registration required? (Score:5)

(news/quote)."
Re:The Singularity and Computational Efficiency (Score:2)

I'm not sure I agree that AI is a software problem, because I don't see where regular human intelligence is a software problem. There is no software that comes with a new-born. A new-born is a complex system that comes out of the womb ready to learn. It's already thinking. You could argue that it has an OS - instincts, genetic instructions - but really, what if there were a hardware copy of a baby only made with silicon (or whatever)? If it was constructed properly it should pop out of the vat ready to learn. I guess I'm arguing that intelligence is a function of pathway complexity and self-referentiality (real word?). Maybe if we build it right - complex enough circuitry/pathways and enough self-referential ability, can modify itself and its external environment, e.g. alter its own version of programmable logic controllers and move a coke bottle with a robotic arm [Yes, I did say "programmable" but I didn't say "fully pre-programmed".] - maybe, like a new-born, if we build it right, and simulate a little evolution along the way, the intelligence will come. I think the challenge is not coding intelligence, which sounds impossible to me, but building something that facilitates intelligence developing on its own, again, like a new-born. Not software, but hardware that has the "complex adaptive ability of the human brain". Granted, the first one would be no more intelligent than a rabid gerbil, but that's a good start.

Well, (Score:2)

Re:Flawed assumptions? (Score:2)

Consider that for perhaps millions of years we had fire and spears as our main tools. Then agriculture, then metallurgy, then language, communication etc. Each epoch is marked by revolutions in technological sophistication, and also, each epoch shift occurs more and more rapidly, in a logarithmic fashion. Consider the advances of the last 100 years to see my point.
In fact, the last great technological revolution has been the global information network that we are currently using to discuss the topic. Born less than 30 years ago, it has already saturated the planet, becoming nearly ubiquitous to the segment of the population at the front of the wave.
http://news.slashdot.org/story/01/08/02/0137256/vinge-and-the-singularity?sdsrc=nextbtmnext
Depending on the system under test (SUT), it's often a requirement to not use the same data more than once in a test, or at least ensure that Virtual Users (VUs) are not concurrently using the same data, such as login credentials. To do this we must calculate a unique number for the VUs to use during the test. I'll share one example as a reply, but please add any methods you may use for your use case.

As mentioned above, in order to prevent collisions between VUs accessing data from an external source, we need to calculate a unique number per VU iteration. Luckily, we have a few pieces of data we can use to calculate this. k6 provides the ID of VUs per load generator (k6 instance) as well as an iteration count for each VU.

__VU is the ID of a VU. It is 1-based and assigned sequentially as VUs ramp up. Every VU on a load generator will have a unique VU ID.

__ITER is 0-based and increases sequentially as the default function is completed by VUs. Every VU will have their own iteration count.

Consider this example, with my script uniqueNum.js:

```javascript
import http from "k6/http";
import { sleep } from "k6";

export default function() {
  http.get("");
  console.log(`VU: ${__VU} - ITER: ${__ITER}`);
  let uniqueNumber = __VU * 100 + __ITER - 100;
  console.log(uniqueNumber);
  sleep(1);
};
```

If I run the above script with

```shell
k6 run uniqueNum.js -i 10 -u 10
```

I will see in the console that each VU will start logging unique numbers in my console window, separated by 100, resulting in no collisions. I can use this number to select a position from an external file.
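The arithmetic above can be sanity-checked outside of k6. Here is a small Python sketch (Python used purely for illustration; the VU and iteration counts are made up) showing that `__VU * 100 + __ITER - 100` never collides as long as no VU exceeds 100 iterations:

```python
MAX_ITER = 100  # the hard-coded 100 from the formula above

def unique_number(vu, iteration):
    # Mirrors k6's: __VU * 100 + __ITER - 100 (VU IDs are 1-based)
    return vu * MAX_ITER + iteration - MAX_ITER

seen = set()
for vu in range(1, 11):          # simulate 10 VUs
    for it in range(MAX_ITER):   # iterations 0..99 per VU
        n = unique_number(vu, it)
        assert n not in seen, "collision!"
        seen.add(n)

print(min(seen), max(seen), len(seen))  # 0 999 1000
```

VU 1 covers rows 0-99, VU 2 covers 100-199, and so on, which is exactly the "separated by 100" behaviour described above.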
Separate file, where the contents of data.json follow this pattern:

```json
{
  "users": [
    { "username": "test", "password": "qwerty" },
    { "username": "test", "password": "qwerty" }
  ]
}
```

Then, if our script looked something like:

```javascript
const data = JSON.parse(open("./data.json"));

export default function() {
  let uniqueNumber = __VU * 100 + __ITER;
  let user = data.users[uniqueNumber];
  console.log(data.users[uniqueNumber].username);
}
```

We are now accessing a unique position per iteration and per virtual user. Some things to keep in mind regarding the above method:

- In our calculation of uniqueNumber, 100 would be the maximum iterations VUs can make before collisions. My selection of it was arbitrary here; you can decrease it or increase it based on need. You may know your test will never exceed 20 iterations per VU.
- The higher the above number, the larger your source file must be. A test with 200 VUs would need 20k lines/rows of unique data if all iterations were completed.
- In the Load Impact Cloud, a maximum of 200 VUs are assigned per load generator. So you would need to take LI_INSTANCE_ID into account when calculating unique values. More info on Load Impact env variables here.

I've been assisting some users with using unique data across multiple k6 instances in the LoadImpact cloud, since each instance will have overlapping __VU IDs. This adds some complexity to determining our unique number, so I will share some of the required thinking to solve this issue. As mentioned in my last point above, we can use the LI_INSTANCE_ID to help here. However, you'll need to do some testing to determine how many rows each VU will consume during your test (or at least a maximum). I think in most cases this should be equal to the number of iterations you expect each virtual user to make, which will vary based on test duration.

Let's consider the following: we expect each virtual user to need up to 400 rows from our JSON or CSV file. To make our script reusable, define this in the Init context.
If you want to get fancy, maybe you'll set it as an ENV variable so you can adjust it per run:

```javascript
let maxIter = 400 // you'll want to define this in the init context!!!
```

Previously, with a single instance of k6, we could do something like this in our default function to generate a unique value on each run:

```javascript
let uniqueNum = ((__VU * maxIter) - (maxIter) + (__ITER));
```

However, as stated earlier, when dealing with multiple k6 instances in the LoadImpact cloud we will encounter some collisions, as each instance will have overlapping __VU IDs. If we make the following adjustment, we can ensure each VU gets a unique value by using their LI_INSTANCE_ID in the equation.

First, you need to know that the LoadImpact cloud currently will put a maximum of ~~200~~ 300 VUs per load generator. With that in mind, this means that __VU 300 in the above case at __ITER 400 will be at line 120,000 in our source file. With that in mind, we can rather simply do the following, so that each instance will start in its appropriate "block" of the source data:

```javascript
let uniqueNum = ((__VU * maxIter) - (maxIter) + (__ITER) + (120000 * __ENV["LI_INSTANCE_ID"]));
```

Edits: updated due to changes in load generator limits in LI Cloud. 200 -> 300 max VUs

@mark thank you for this topic. I have already arranged VUs in my test to be unique as was required for our platform performance testing. The last point that I can't get is how to distribute VU uniqueness when multiple instances (e.g. load generators) are raised. Initially, we need to mimic 3K unique users, but we currently started from 500. So following the context of 500 users, and as I understand it, as soon as my cloud test reaches 300 VUs, a second instance will be raised which will start generating duplicated VU IDs. How to correctly make the calculation?

```javascript
let uniqueNum = ((__VU * maxIter) - (maxIter) + (__ITER) + (80000 * __ENV["LI_INSTANCE_ID"]));
```

- can't relate this formula to my test.
For instance, we have to run 500 VUs (unique accounts, where the email has an index from 1 to 500) for 1 hour. I don't know how many iterations that will be, and I also don't know the ID of "LI_INSTANCE_ID"?

@Alexander I missed an edit when updating recently. Let's start with the 80/120k number. That's the amount of data needed per load gen. 300 VUs on one load gen, making 400 iterations, would reach line 120k. As you have more load gens, this number could grow. You need to do some guesstimating at first. How many total sessions are you looking to generate? Or how large is your data source? We need to solve for something to set this in our test. You could probably get fancy later on and read the length of the file and calculate that inline dynamically, but initially you'll probably need/want to work through the math.

Maybe it would be helpful for you and others to step through the formula a bit (there may very well be a more efficient way to do this):

```javascript
let uniqueNum = ((__VU * maxIter) - (maxIter) + (__ITER) + (120000 * __ENV["LI_INSTANCE_ID"]));

(__VU * maxIter) - (maxIter)
// Sets the current unique number to 0 for VU 1, the value of maxIter for VU 2, etc.
// This way they all start at a unique point in the data source (no collisions)

+ (__ITER)
// Adds 1 per iteration

+ (120000 * __ENV["LI_INSTANCE_ID"])
// For instances > 1, lets those VUs start at a higher row as their "0"
```

Hope that helps clear things up a bit!

@Alexander I spoke to one of my colleagues with a stronger math background than myself.
He came up with another solution that might be less confusing when dealing with even distributions:

```javascript
let VUsTotal = 1000 // Set the script's total VUs amount here
let VUsPerInstance = 250 // minimum VUs per instance in the cloud execution
let InstancesTotalUpperEstimate = Math.ceil(VUsTotal / VUsPerInstance)
let uniqNum = (__ITER * VUsTotal + (__VU - 1)) * InstancesTotalUpperEstimate + __ENV["LI_INSTANCE_ID"]
```

Note that VUsPerInstance requires some thinking on your part, and the number above is representative of this example. 1000 VUs / 300 max VUs = 3.33 instances required. As we can't have .33 of an instance, we round up to 4. 1000 VUs across 4 instances would be 250 per instance. This also assumes even distribution! If you start to have uneven distribution it gets a bit more complex. As you can see, there are multiple ways to go about this; I hope this clears things up a bit though!

@mark thank you for the update! I will consider it a bit later. Right now we are working intensively on performance issues with the existing load, but we will eventually need more users, so this formula will come in handy. One thing that I can say right now is that there is no way to test that it works correctly, since not even console.log is available while running in the cloud…

@mark - I have a scenario in which I'm trying to assign each VU a unique user account from a list of 5000 unique user accounts. The issue I'm having is that I cannot determine how the VUs are split across instances. I've done some testing, and when I run my test with 500 VUs I can see that the users are split across 2 instances with 250 VUs each; this makes sense. However, if I run my test with 1000 VUs I can see that the VUs are only utilizing 1 instance. I'm making this determination by running console.log(__ENV["LI_INSTANCE_ID"]) and it is never greater than 0. It seems as if how the VUs are split is a factor of how many VUs the test is running. Is there any way to reliably determine the splits?
Any help would be appreciated!

@scott We actually had a recent breaking change that we should have documented here in regards to the cloud. I'm going to remove the solution tag, as it will depend on test size entirely now. We do plan to introduce completely unique VU IDs that would remove all this messy math. I am not sure what that timeline is, however.

That said, we've introduced some tiering of hardware for cloud tests to improve spin-up times, data processing and general stability. This tiering doesn't impact the resources per VU, as we linearly increase instance size (I know you didn't ask about this, but I'm sure someone will read this in the future and will have that question). Anyway, here is how it goes:

We have 3 tiers of hardware for load generation. The tier we choose depends on the number of VUs allocated to a load zone.

- Tier 1 is used when there are 1-999 VUs in a load zone
- Tier 2 is used when there are 1000-4001 VUs in a load zone
- Tier 3 is used when there are more than 4001 VUs in a load zone

The Tier 1 server handles up to 300 VUs, the Tier 2 server handles up to 1200 VUs, and the Tier 3 server handles up to 5000 VUs.

For example, if you start a test with 900 VUs, we will use 3x Tier 1 servers. If you start a test with 1000 VUs in a single load zone, we will use 1x Tier 2 server. If the same test is started in 2 load zones, there will be 500 VUs per load zone and 4x Tier 1 servers will be used. So you will need to determine, based on test size and config, what size machines you will use; then you can use the correct max VUs per load gen.
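Whichever tier applies, the block-offset idea from earlier in the thread can be checked numerically. This illustrative Python sketch (not k6 code; it assumes the 300-VUs-per-generator and 400-iterations figures used above) walks every (instance, VU, iteration) combination and confirms the 120,000-row offset keeps all instances collision-free:

```python
MAX_ITER = 400            # rows each VU may consume ("maxIter" above)
VUS_PER_INSTANCE = 300    # assumed per-load-generator VU limit
BLOCK = VUS_PER_INSTANCE * MAX_ITER   # 120000, the per-instance offset

def unique_num(vu, iteration, instance_id):
    # Mirrors: (__VU * maxIter) - maxIter + __ITER + 120000 * LI_INSTANCE_ID
    return (vu * MAX_ITER) - MAX_ITER + iteration + BLOCK * instance_id

seen = set()
for instance in range(3):                      # LI_INSTANCE_ID is 0-based
    for vu in range(1, VUS_PER_INSTANCE + 1):  # __VU restarts at 1 per instance
        for it in range(MAX_ITER):
            n = unique_num(vu, it, instance)
            assert n not in seen, "collision across instances!"
            seen.add(n)

assert seen == set(range(3 * BLOCK))  # rows 0..359999 each used exactly once
```

Each instance ends up owning a contiguous block of the source file (instance 0: rows 0-119999, instance 1: rows 120000-239999, and so on), which is exactly what the "block" wording above describes.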
https://community.k6.io/t/when-parameterizing-data-how-do-i-not-use-the-same-data-more-than-once-in-a-test/42
First and foremost, this efficiently packaged text is a reference to all of the COM objects and APIs that are needed to program with the Windows shell successfully. Each section is organized by topic, with an explanation of what kind of functionality you can add, and then all of the COM objects, methods, and constants that you'll need to use in VB, along with sample code. For many of the examples, a custom file extension (.rad) illustrates how to integrate this file into the desktop, and extend what it can do within the Windows desktop.

Reading this book is also an education in the features that the Windows shell actually offers. For example, you'll learn how to add dynamic, context-sensitive menus to desktop icons, and drag-and-drop processing and custom property sheets that pop up on the desktop. Later sections turn to the Internet, with browser extensions, which can customize the look and feel of Internet Explorer (and File Explorer). One sample presents the code for a Web site crawler, which automatically downloads a group of files. Throughout, the book is careful to point out those features that are easy to do in VB and those that require advanced programming techniques. (Generally speaking, there's a lot of VB expertise on display here.) The author provides a custom COM type library for exposing all of the shell functionality to VB programmers. Of course, you can use this file to develop your own VB shell applications.

Overall, this book helps explain a rather difficult topic in Windows programming, and makes accessible for the very first time this exciting area of functionality to experienced VB programmers. Read Visual Basic Shell Programming to create applications that both are more professional looking and take full advantage of every available feature in today's Windows desktop.
--Richard Dragan

Topics covered:

However, what disappoints me is that I found the sample code from Chap 11 (both sample projects, DemoSpace and RegSpace) crashes on machines running Win2K. This means that if you want to use the techniques taught in the book to implement a Shell Extension in VB, you can only support platforms below Win2K. That will not be of much use at all. As far as I know, the author has not yet figured out a solution (through private communications with O'Reilly's book support).

Many people think VB and Windows shell programming don't mix very well. Honestly, I was one of them. But after reading this great introduction, I figured I was wrong. Well, mostly wrong. There are two issues that make shell programming hard in VB:

(1) As in most "advanced VB programming tasks", the first realization must be that you _can't_ do it in pure VB. You need to import Win32 APIs and then pretend you are writing your program in C. But that's a very old and well-solved problem, and in fact this book assumes you know how to do it: it shows the import statements without explaining how to get them. But that's fine, for I think most advanced VB programmers have already picked up this old trick.

(2) The Windows shell is built heavily on COM, and so must be the shell extensions. But this book is not about writing COM servers in VB... Apparently the author did not expect the readers to know COM beforehand, so he offered a short chapter on COM basics that I find too short to be sufficient for the purpose of this book. For example, later on he starts using jargon like "in-process COM servers" and "apartment threaded" (these are COM jargon) without explaining what they are. I tried to look up these terms in the index to quote the page number. They are absent---yet another proof of insufficient coverage of COM.
I admit that shell extensions are in-process COM servers, and so in most cases the readers are not expected to do anything else anyway, but this kind of treatment much weakens a reader's understanding of what he/she is doing. And there are other problems that plague this almost excellent book:

(1) There is no separate treatment of what the programmer should do when a new shell extension comes out. As an example, icon overlays are not covered in this book. I think this is really the major reason I have to take half a star off: this book is more like "how I wrote those shell extensions" rather than "how you can write your own ones". For example, it does show many examples of how to turn a given IDL into something more VB-friendly, but not how the programmer can obtain the IDL of an interface that's not covered in the book. (OLE View won't answer all such prayers. Go check the platform SDK or, _cough_, wait for the second edition of the book to have a new chapter on that extension. :P)

(2) There is no coverage of debugging shell extensions. It's not as easy as one may expect, especially as VB will automatically re-register your COM servers when you execute your code, while Explorer loads some registry entries only once...

Overall, this is a more than decent introduction to shell programming using VB. If you want to do some typical shell programming, like having your own property sheet or namespace extensions, then this book is really good for the job and is worth every single penny. I would rather say it's 4.5 stars, but I have to round down for the minor problems I mentioned. Excellent reference material and a worthy investment.
http://my.safaribooksonline.com/1565926706%3Fportal=oreilly
Unverified Commit 42598786 authored by

Fixed ordering in Project.find_with_namespace

This ensures that Project.find_with_namespace returns a row matching literally as the first value, instead of returning a random value.

The ordering here is _only_ applied to Project.find_with_namespace and _not_ Project.where_paths_in as currently there's no code that requires Project.where_paths_in to return rows in a certain order. Since this method also returns all rows that match there's no real harm in not setting a specific order either. Another reason is that generating all the "WHEN" arms for multiple values in Project.where_paths_in becomes really messy.

On MySQL we have to use the "BINARY" operator to turn a "WHERE" into a case-sensitive WHERE as otherwise MySQL may still end up returning rows in an unpredictable order.

Fixes gitlab-org/gitlab-ce#18603

parent 78ab6a92dd9b
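The fix itself is SQL (a CASE ... WHEN arm in the ORDER BY, with MySQL's BINARY operator for case sensitivity), but the intent is easy to sketch. This hypothetical Python helper (illustrative only, not the committed code) shows the behavioural change: among case-insensitive matches, the literal match always wins, instead of whichever row the database happens to emit first:

```python
def find_with_namespace(paths, wanted):
    """Return the best match for `wanted` among `paths`: a literal
    (case-sensitive) match wins over a case-insensitive one."""
    matches = [p for p in paths if p.lower() == wanted.lower()]
    # False sorts before True, so the exact-case match ends up first,
    # mirroring a CASE WHEN path = ? THEN 0 ELSE 1 END ordering in SQL.
    matches.sort(key=lambda p: p != wanted)
    return matches[0] if matches else None

print(find_with_namespace(["Gitlab/Gitlab", "gitlab/gitlab"], "gitlab/gitlab"))
# gitlab/gitlab, regardless of the input order
```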
https://foss.heptapod.net/heptapod/heptapod/-/commit/425987861530c9c0fb7fe618d7f4bab017a80253
check obj in out other object

On 26/08/2016 at 02:52, xxxxxxxx wrote:

Hello to all! How can I do a check with python to see if an object is inside another object's area? I have a pyramid and I want to know if an object is in or out! Cheers :slightly_smiling_face:

On 29/08/2016 at 02:48, xxxxxxxx wrote:

Hello Davide, the complexity of this task depends highly on the precision you need. The easiest would probably be a comparison based on bounding boxes; check GetMp() and GetRad() for this. But if you are looking for a precise solution based on volumes or actual shapes, it will get arbitrarily complex. You'll almost certainly want to look into the GeRayCollider module. The SDK Team can't provide algorithms, so this will be a matter to be discussed within the community.

On 29/08/2016 at 10:05, xxxxxxxx wrote:

Hello Andrea, thanks for the reply. At the moment I solve this problem with the GeRayCollider, but I use XPresso because I ran into problems with it in Python.

On 29/08/2016 at 10:08, xxxxxxxx wrote:

Can you provide us some details on the problems you have with GeRayCollider? A code snippet to reproduce the problem would be even better.
On 29/08/2016 at 10:15, xxxxxxxx wrote:

This is my code (another code but same problem):

```python
import c4d
from c4d import *
from c4d.utils import GeRayCollider
from c4d.modules import mograph as mo
import math

#Welcome to the world of Python

def main():
    Obj = op[c4d.ID_USERDATA, 1]
    ObjPos = Obj.GetMg()
    mg = gen.GetMg()
    md = mo.GeGetMoData(op)
    if md is None:
        return False
    cnt = md.GetCount()
    marr = md.GetArray(c4d.MODATA_MATRIX)
    fall = md.GetFalloffs()
    for i in reversed(xrange(0, cnt)):
        Ray = GeRayCollider()
        Ray.Init(Obj)
        Ray.Intersect(marr[i].off, ObjPos.off, 300000)
        print Ray.GetIntersection(0)
    md.SetArray(c4d.MODATA_MATRIX, marr, True)
    return True
```

I understand the problem for me is to find the "object coordinates"; this is not the abs and not the rel coordinate, I need to understand the concept.

On 31/08/2016 at 11:24, xxxxxxxx wrote:

Hi Davide, sorry for the delay. There's an article on matrix fundamentals in the Python documentation that may be helpful. I'm sorry if it may seem obvious to you, but can you please describe your actual problem? I guess the code you posted is supposed to be used in a MoGraph Python Generator? How am I supposed to use it? And what's the supposed outcome? So, what do you want to achieve (expected or correct result) and what happens instead?
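As an aside, the bounding-box comparison suggested earlier in the thread reduces to an axis-aligned box test. Here is a plain-Python sketch of the idea (in Cinema 4D you would take `center` from GetMp() and the half-extents from GetRad(), after transforming the test point into the container's local space — that transform is omitted here):

```python
def point_in_box(point, center, radius):
    # True if `point` lies inside the axis-aligned box described by its
    # center and half-extents (the kind of data GetMp()/GetRad() return).
    return all(abs(p - c) <= r for p, c, r in zip(point, center, radius))

box_center = (0.0, 0.0, 0.0)
box_radius = (100.0, 100.0, 100.0)   # half-extents along x, y, z

print(point_in_box((50, 20, -99), box_center, box_radius))   # True
print(point_in_box((150, 0, 0), box_center, box_radius))     # False
```

Note this is only the coarse test the moderator mentions: a bounding box will report points as "inside" near the base corners of a pyramid that are actually outside the mesh, which is where GeRayCollider comes in.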
https://plugincafe.maxon.net/topic/9666/13002_check-obj-in-out-other-object
I downloaded the latest pbf file (dated 27-Aug-2020 23:37) from a mirror and read its header using a python wrapper to osmium (v3.0.1):

```python
import osmium

f = osmium.io.Reader("path/to/pbf_file.osm.pbf")
header = f.header()
seqnum = header.get("osmosis_replication_sequence_number", "")
timestamp = header.get("osmosis_replication_timestamp", "")
print(f'{seqnum!r},{timestamp!r}')
# output: '', '2020-08-23T23:59:50Z'
```

The sequence number was missing. Is this expected behaviour? In the documentation, all the replication fields in OSMHeader seem to be set as optional. What pbf files are expected to have these values set?

asked 04 Sep '20, 09:37 by M_T_, edited 04 Sep '20, 10:26

The sequence field in the headers is currently set only on extracts from Geofabrik. pyosmium-up-to-date also sets them when you update a PBF file.

There is a good reason that the sequence number is not set on the official planet file: there is no exact sequence number that corresponds to the state of a planet file. Planet files are created independently of the change files. So the content of the planet might correspond to something like sequence number "4164164 3/4".

When you want to use updates with a downloaded planet file, the usual way is to look at the creation time and find a change sequence number that is far enough in the past that you get all new changes that are not yet contained in the planet. You don't have to do the math yourself; pyosmium-get-changes, which is included in the python osmium package, can do that for you. Just run

```shell
pyosmium-get-changes -O planet-latest.osm.pbf
```

and it prints a single number. This is the recommended sequence number where you should start with the updates. See the section on updating OSM data in the pyosmium manual for more information.

answered 05 Sep '20, 20:25 by lonvia

AFAIU the sequence number you get from a call to pyosmium-get-changes is the minutely sequence number.
How would you translate this to a daily or hourly sequence number? It seems that some minute updates have been skipped, so one can't do a simple multiplication to translate between the two.

You can't directly translate to an hourly/daily sequence ID. You need to compute the appropriate ID in the same way as for the minutely replication. Use the parameter --server with pyosmium-get-changes to choose a different replication source. For example, to get the sequence ID for daily updates run:

```shell
pyosmium-get-changes --server -O planet-latest.osm.pbf
```
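The "find a sequence far enough in the past" logic amounts to a search over the server's (sequence, timestamp) state history, which is why skipped minutes don't matter. Here is a toy Python illustration of that idea (the sequence numbers and timestamps are made up; real replication servers expose this data via state files, and pyosmium does the lookup for you):

```python
from bisect import bisect_right

# Synthetic replication state: (sequence_number, timestamp), sorted by time.
# ISO-8601 strings in a fixed format compare correctly as plain strings.
states = [
    (4164160, "2020-08-23T23:30:00Z"),
    (4164161, "2020-08-23T23:40:00Z"),
    (4164163, "2020-08-23T23:50:00Z"),  # note: sequence numbers may skip
    (4164164, "2020-08-24T00:00:00Z"),
]

def start_sequence(planet_timestamp):
    """Newest sequence whose timestamp is not after the planet file's
    timestamp - a safe starting point for applying updates."""
    times = [t for _, t in states]
    idx = bisect_right(times, planet_timestamp) - 1
    return states[idx][0] if idx >= 0 else None

print(start_sequence("2020-08-23T23:59:50Z"))  # 4164163
```

Because the search is over timestamps rather than arithmetic on sequence numbers, the same approach works unchanged for minutely, hourly, or daily replication streams.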
https://help.openstreetmap.org/questions/76430/pbf-file-missing-osmosis_replication_sequence_number-in-header
Qt creator errors highlights

Not even sure how to name it. It happened after updates about 1 month ago. Now the Qt Creator IDE marks things like #include <memory> as "Error, file not found". Completion works badly too. So the question is: what and where to fix? I have worked on the same project for more than a year, everything is git versioned, but a system update did that (arch-linux). The compiler works ok, so it's only an IDE problem.

Is your project makefile based? Or do you use a pro file?

pro/pri files. Also, I started some new project since then (pro/pri) and it has the same issue - all system includes not found.

P.S. Forum forces me to delay in posts as new user for 300s :(

- mrjj Qt Champions 2017

@alexzk said in Qt creator errors highlights:

P.S. Forum forces me to delay in posts as new user for 300s :(

Yes, it's for spam protection. :)

Unexpected. Today my patience ran out with KDE-5 (because it starts to swap something when I play MMOs - guess some leaks), I completely wiped it and installed lxqt. Now that subj bug with highlights went away. It is just working as expected. Also the "ClangCodeModel" plugin works as expected. Conclusion - KDE5 is trash.

- jsulm Moderators

@alexzk To be honest: I have no idea how KDE is related to QtCreator issues. It was most probably something else.

Possibly. I deleted all configs of KDE. Actually, I deleted all configs from prior to 2015 as well (but I kept something named QtProject). Cleared 3-4 gigs. But the bug happened when I updated from a kde4/5 running mix to full kde5, because some tray applets were broken. After this Creator went wrong. Now, with KDE removed, it's ok again.

@alexzk said in Qt creator errors highlights:

KDE5 is trash.

Probably you chose the wrong place to flame at KDE. Btw, it looks like the update installed a new version of gcc and Qt Creator did not find the includes in the old place. Probably just a change in the environment of the kit would have been enough to fix the problem.

@VRonin No, gcc was working ok, everything was ok (even CLion) except the qt-creator gui. Anyway, the problem is fixed by removing KDE. Can close thread.
https://forum.qt.io/topic/74994/qt-creator-errors-highlights
Opened 6 years ago Closed 2 years ago Last modified 2 years ago

#5968 closed New feature (fixed)

Registering/Unregistering multiple models fails

Description

Registering and unregistering multiple models fails. Calling for example databrowse.site.register([model1, model2]) triggers this error: issubclass() arg 1 must be a class. The patch fixes this by checking if the argument is a type before calling issubclass. The call to issubclass is now not strictly necessary (unless someone would want to pass in an iterable of models that is also a type) but I kept it for clarity.

Attachments (6)

Change History (25)

Changed 6 years ago by anderso

comment:1 Changed 6 years ago by mtredinnick

- Needs documentation unset
- Needs tests unset
- Patch needs improvement set

comment:2 Changed 6 years ago by anderso

The intent of the original author was to allow either a Model or an iterable of Models, but the check for identifying the actual case was faulty. An *args parameter would probably be nicest, but the function has this signature, which doesn't make that a pretty option:

```python
def register(self, model_or_iterable, databrowse_class=None, **options):
```

I have changed the patch to check if it's not an iterable instead of checking if it's a Model. By the way, is hasattr(o, "__iter__") the best way to check for an iterable?
I'm still a little concerned about how you're going about the test here. Currently we check that the input must be a model. This patch changes that so that site.register(int) won't raise the error it does now and will only cause problems later. I think it's important to keep the test for models in there somewhere. Presumably the end result is to allow multiple models to be registered at once. The iterable is just a way to achieve that, not a requirement, right? After all, given an iterable, you can still make it work with the *args format by calling register(*list(my_iter)). Thus my slight preference for the *args version when we can just iterate over args inside the register() method. Anyway, let's leave it as is for now and we can tweak it when it comes time to commit. I think the idea is reasonable, however we end up imlpementing it. comment:4 Changed 6 years ago by anderso Yes I believe it's just a convenience for adding several models at once. The use case where you already have a list of Models is probably rare (and if you did you could just use the * syntax in the call). Verifying that the arguments really are Models might be nice since this is the main public api of databrowse. If we were to rethink the api, maybe it would make sense to provide registering at the application level in addition to individual models. 
I imagine this is how it looks for many users:

from myapp.models import ModelA, ModelB, ModelC, ModelD, ModelE
databrowse.site.register(ModelA)
databrowse.site.register(ModelB)
databrowse.site.register(ModelC)
databrowse.site.register(ModelD)
databrowse.site.register(ModelE)

or with the undocumented iterable approach:

from myapp.models import ModelA, ModelB, ModelC, ModelD, ModelE
databrowse.site.register([ModelA, ModelB, ModelC, ModelD, ModelE])

maybe something like this would be a nice alternative:

import myapp
databrowse.site.register_application(myapp)

Makes sense considering a main use case of databrowse is to provide a quick overview of the data for developers.

comment:5 Changed 6 years ago by mattmcc

Is there any reason not to fix this bug by using the same test that AdminSite uses? isinstance(model_or_iterable, ModelBase)

Changed 6 years ago by mattmcc

Patch using the same test as AdminSite.register

Changed 6 years ago by mattmcc

Add doc update

comment:6 Changed 6 years ago by mattmcc

- milestone set to 1.0
- Needs documentation unset
- Patch needs improvement unset

comment:7 Changed 6 years ago by mtredinnick

- milestone changed from 1.0 to post-1.0

This is a feature addition, not a bug fix.

comment:8 Changed 5 years ago by anonymous

- milestone post-1.0 deleted

Milestone post-1.0 deleted

comment:9 Changed 3 years ago by gabrielhurley

- Severity set to Normal
- Type set to New feature

comment:10 Changed 3 years ago by julien

- Easy pickings unset
- Needs tests set

Changed 2 years ago by jamesp

Adding tests

comment:11 Changed 2 years ago by jamesp

- UI/UX unset

comment:12 Changed 2 years ago by ptone

comment:13 Changed 2 years ago by PaulM

- Needs tests unset
- Patch needs improvement set

I don't see anything wrong with this patch, and will be happy to add it as a minor improvement even though databrowse is deprecated. The tests are ok (since databrowse is otherwise untested).
Unfortunately, the patch needs to be updated again to catch the pending deprecation warning. Feel free to mark it as RFC again once that's taken care of.

comment:14 Changed 2 years ago by aaugustin

- Patch needs improvement unset

While the current code apparently intends to make it possible to register or unregister multiple models at once by passing a list, this ticket shows that it doesn't work, and it was never documented. Like Malcolm, I'd prefer to move to the *args format, as demonstrated in the patch I'm attaching. Also, contrib apps bundle their own tests.

Changed 2 years ago by aaugustin

comment:15 Changed 2 years ago by aaugustin

- Owner changed from nobody to aaugustin

comment:16 Changed 2 years ago by claudep

- Triage Stage changed from Accepted to Ready for checkin

It seems in good shape! Thanks.

comment:17 Changed 2 years ago by aaugustin

- Resolution set to fixed
- Status changed from new to closed

comment:18 Changed 2 years ago by aaugustin

comment:19 Changed 2 years ago by aaugustin

The changes to tests/runtests.py should have gone in the second commit, sorry.

Checking for something being an instance of type is usually a sign of things going wrong. What are you actually trying to achieve here? Allowing an iterable as an argument? If so, then check to see if it's an iterable, not if it's a type, which is way too broad. However, even that is probably not the best approach. If you're going to try and allow multiple models to be registered, just let register() take a *args parameter so the user doesn't have to needlessly wrap things in a tuple or list and can just write
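The *args style the thread converged on (plus the register_application idea floated earlier) can be sketched in plain Python. This is a toy illustration, not Django's actual code: ModelBase and Model here stand in for Django's real classes, and register_application never became part of databrowse.

```python
# Toy sketch of the *args registration style, plus the hypothetical
# register_application() idea from this thread.  ModelBase/Model are
# stand-ins for django.db.models.base.ModelBase / django.db.models.Model.

class ModelBase(type):
    """Stand-in for django.db.models.base.ModelBase."""

class Model(metaclass=ModelBase):
    pass

class DatabrowseSite:
    def __init__(self):
        self.registry = {}

    def register(self, *models, databrowse_class=None, **options):
        # Keep the "must be a model" check: isinstance against the
        # metaclass, the same test AdminSite uses.
        for model in models:
            if not isinstance(model, ModelBase):
                raise TypeError("%r is not a model class" % (model,))
            self.registry[model] = databrowse_class

    def register_application(self, module):
        """Hypothetical helper: register every model found in a module."""
        models = [obj for obj in vars(module).values()
                  if isinstance(obj, ModelBase) and obj is not Model]
        self.register(*models)

class ModelA(Model): pass
class ModelB(Model): pass

site = DatabrowseSite()
site.register(ModelA, ModelB)      # several models in one call
site.register(*[ModelA, ModelB])   # an existing iterable still works
```

With *args, databrowse_class becomes keyword-only, matching the backwards-compatibility trade-off discussed above.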
https://code.djangoproject.com/ticket/5968
By now you may have seen the amazing screencast where someone implements a beautiful web interface to Flickr! (an online photo gallery) in under 5 minutes. Well, if you haven't seen it yet, you really should do so now. But if you have, you will probably be glad to know that this is possible under Pylons too!

First install Pylons 0.9.6:

$ easy_install -U pylons==0.9.6

Then create your project:

$ paster create -t pylons FlickrSearch

add a controller called flickr:

$ cd FlickrSearch
$ paster controller flickr

Our project is going to use a third-party library for Flickr web services. We've picked flickr.py from the Flickr! API list. All third-party libraries you add to a Pylons project can go in the lib directory, so download and put it in lib:

$ cd flickrsearch/lib
$ wget

Now let's start the server and see what we have:

$ cd ../../
$ paster serve --reload development.ini

Note that we have started the server with the --reload switch. This means any changes we make to code will cause the server to restart if necessary, so that you can always test your latest code.

To access Flickr! we need a Flickr! API key. You can get your API key here after filling in a very short form.

If you look at config/middleware.py you will see these lines:

javascripts_app = StaticJavascripts()
...
app = Cascade([static_app, javascripts_app, app])

The javascripts_app WSGI application maps any requests to /javascripts/ straight to the relevant JavaScript in the WebHelpers package. This means you don't have to manually copy the Pylons JavaScript files to your project, and that if you upgrade Pylons, you will automatically be using the latest scripts.
Knowing that we don't need to worry about JavaScript files, edit the file templates/base.mako with the following content:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
  <title>Flickr!</title>
  ${h.javascript_include_tag('/javascripts/effects.js', builtins=True)}
  ${h.stylesheet_link_tag('/flickr.css')}
</head>
<body>
  ${self.body()}
</body>
</html>

If you are interested in learning some of the features of Mako templates, have a look at the comprehensive Mako documentation. For now we just need to understand that ${self.body()} is replaced with the child template, and that anything in ${ ... } is executed and replaced with the result. In the head of the HTML document, JavaScript and stylesheet tags are inserted.

Add the following to your controllers/flickr.py:

import logging

from flickrsearch.lib.base import *
import flickrsearch.lib.flickr as flickr

log = logging.getLogger(__name__)

flickr.API_KEY = "Your key here!"

class FlickrController(BaseController):

    def index(self):
        return render('/flickr.mako')

    def search(self):
        photos = flickr.photos_search(tags=request.params['tags'], per_page=24)
        c.photos = [photo.getURL(size="Small", urlType='source') for photo in photos]
        return render('/photos.mako')

It should be pretty straightforward: we import the flickr API module and set the API_KEY, then define two actions in our controller. The first, index(), just renders flickr.mako; the other, search(), uses the flickr API module to select all photos matching the tag from request.params['tags']. request.params is given to this action by the form from templates/flickr.mako with a POST method. It then renders the templates/photos.mako template by calling the Pylons render() function. Time to create the two templates.
Create templates/flickr.mako with this content:

<%inherit file="base.mako"/>
${h.form_remote_tag(url=h.url(</div>
<fieldset>
  <label for="tags">Tags:</label>
  ${h.text_field("tags")}
  ${h.submit("Find")}
</fieldset>
<div id="photos" style="display:none"></div>
${h.end_form()}

Create templates/photos.mako with this content:

% for photo in c.photos:
<img class="photo" src="${photo}">
% endfor

Finally, we need to add some style to our project, so create the stylesheet public/flickr.css. We are going to use the same stylesheet as the Rails example:

body {
  background-color: #888;
  font-family: Lucida Grande;
  font-size: 11px;
  margin: 25px;
}

form {
  margin: 0;
  margin-bottom: 10px;
  background-color: #eee;
  border: 5px solid #333;
  padding: 25px;
}

fieldset {
  border: none;
}

#spinner {
  float: right;
  margin: 10px;
}

#photos img {
  border: 1px solid #000;
  width: 75px;
  height: 75px;
  margin: 5px;
}

To recap, we have:

- Installed a Flickr library
- Written a controller with index() and search() methods
- Written a main template linking to the JavaScripts we need
- Created a template fragment to generate HTML to return to the browser via AJAX
- Added the necessary CSS

We are done! OK, visit and check your stopwatch. How long did it take you?

Note: If you have any problems, ensure you have set flickr.API_KEY in controllers/flickr.py and have a look at the console output from paster serve. If there are any debug URLs logged, you can visit those URLs to get an interactive debug prompt and work out where you went wrong!

Based on the original tutorial for Pylons 0.8 by Nicholas Piel.
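The heart of the search() action can be exercised without a running server. The sketch below stubs out the flickr module and the request parameters; photos_search() and getURL() are the calls used in the tutorial, while the stub classes and the example URLs are scaffolding invented for illustration.

```python
# Stand-alone sketch of the search() action's logic, with the flickr
# module and Pylons request stubbed out.  photos_search()/getURL() are
# the calls used in the tutorial; the stubs are made-up scaffolding.

class StubPhoto:
    def __init__(self, url):
        self._url = url

    def getURL(self, size="Small", urlType="source"):
        # real flickr.py builds a Flickr URL; we fake one deterministically
        return "%s_%s.jpg" % (self._url, size.lower())

class StubFlickr:
    API_KEY = None

    @staticmethod
    def photos_search(tags, per_page=24):
        # pretend Flickr returned three matches for the tag
        return [StubPhoto("http://example.invalid/photo%d" % i)
                for i in range(3)]

def search(params, flickr):
    """What FlickrController.search() does, minus the template render."""
    photos = flickr.photos_search(tags=params["tags"], per_page=24)
    return [p.getURL(size="Small", urlType="source") for p in photos]

urls = search({"tags": "kittens"}, StubFlickr)
```

Swapping StubFlickr for the real flickr.py module (with a valid API_KEY) gives exactly the controller logic from the tutorial.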
http://bel-epa.com/pylonsdocs/tutorials/flickr_search_tutorial.html
Package Details: gitosis-git 0.2.r49.gdedb3dc-2

Dependencies (3)

- git (git-git)
- python2 (placeholder, pypy19, python26, stackless-python2)
- python2-distribute (python2-setuptools)

Required by (0)

Sources (1)

Latest Comments

rumtata commented on 2013-12-25 19:51

PKGBUILD improvements could be made by warning the user if a "git" user already exists (from another setup). This could be checked further by testing whether the /srv/gitosis directory exists.

alperkanat commented on 2012-04-18 09:39

thanks to Igor Vinokurov, updated the package with package removal fixes.

alperkanat commented on 2012-04-03 16:17

Updated the package.

Anonymous comment on 2012-04-03 04:29

I get permission denied. He moved his repo. Update the URL and git_root to be this:

alperkanat commented on 2012-04-02 18:58

I believe you mean the movement to github of upstream? I just created an updated AUR package but cannot upload atm since I'm on my iPad. I'll be updating this package asap when I get my hands on my computer. Thanks @nickray!

Anonymous comment on 2012-04-02 16:46

It seems the maintainer has moved his repo to gitosis but not adjusted the PKGBUILD?

alperkanat commented on 2012-02-13 12:36

added git as a dependency. thanks

Anonymous comment on 2012-02-11 17:38

Yeah. You might want to add git as a dependency (or at least a suggested one, as nothing works without it)

alperkanat commented on 2012-01-23 09:06

sorry i missed tuxce's message, updated the package again. thanks @tuxce

alperkanat commented on 2012-01-23 09:03

updated the package. sorry for the delay and thanks to @ksira

ksira commented on 2011-12-06 11:37

I had an error using makepkg from pacman 4.0.1 with this PKGBUILD. I had to remove the parentheses from around the install file name: install="gitosis.install". Hope that helps.

Anonymous comment on 2011-08-29 21:34

Nevermind, I just realized that gitosis-git was added to archlinuxfr and it changed the permissions on /srv/gitosis to 700; it requires 711 to work.
Anonymous comment on 2011-08-29 06:23

Hi, just updated the package and now I'm receiving a "404 - No projects found" error. I've checked all the permissions etc. and dug through config files but can't find anything wrong. Know why this recent update would have borked things?

tuxce commented on 2011-08-27 12:09

Hi, gitosis-init needs the pkg_resources module, which is provided by python2-distribute. python2 and python2-distribute should be in "depends", not "makedepends".

alperkanat commented on 2011-08-19 07:15

the author reactivated his git repositories, so the package should start being built without any need for an update. you should create the folder and set it as the gitosis user's home. please check if there are any existing gitosis or git users in your /etc/passwd and /etc/group files. that might be the reason if you installed gitosis manually or in any other way.

josemota commented on 2011-08-18 16:55

@alperkanat no problem. I had trouble installing it for a couple of hours and went digging. -- Hey btw, how can I turn around the /srv/gitosis issue? Creating the folder isn't enough, right? What should I do? Thanks for helping, Alper.

alperkanat commented on 2011-08-18 16:14

@josemota: thanks for the notice! i sent an e-mail to its author to notify him about the issue. meanwhile, i'll update the package asap. that github url seems to be the author's repositories so it's a possible reason why he might have killed his own repo.

josemota commented on 2011-08-18 16:04

@alperkanat, the package needs an update. The eagain.net/gitosis.git URL is ill; i've found github.com/tv42/gitosis.git to be trustful.

alperkanat commented on 2011-04-12 13:31

it's created by user addition of git(osis) so it has nothing to do with this package.. useradd command is used to add the user.

Anonymous comment on 2011-04-12 13:29

/srv/gitosis does not create successfully

ChojinDSL commented on 2011-03-16 12:26

Update... Ok, I managed to resolve the issue. python2-setuptools needs to be installed.
I had python2-distribute installed, which seemed to conflict with python2-setuptools. I simply uninstalled python2-distribute and installed python2-setuptools.

ChojinDSL commented on 2011-03-16 10:55

When I try to run gitosis-init as specified in the Arch Wiki, I get the following:

sh-4.2$ gitosis-init < /srv/id_dsa.pub
Traceback (most recent call last):
  File "/usr/bin/gitosis-init", line 5, in <module>
    from pkg_resources import load_entry_point
  File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 2691, in <module>
    add_activation_listener(lambda dist: dist.activate())
  File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 668, in subscribe
    callback(dist)
  File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 2691, in <lambda>
    add_activation_listener(lambda dist: dist.activate())
  File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 2192, in activate
    self.insert_on(path)
  File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 2299, in insert_on
    self.check_version_conflict()
  File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 2338, in check_version_conflict
    for modname in self._get_metadata('top_level.txt'):
  File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 2186, in _get_metadata
    for line in self.get_metadata_lines(name):
  File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 1174, in get_metadata_lines
    return yield_lines(self.get_metadata(name))
  File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 1166, in get_metadata
    return self._get(self._fn(self.egg_info,name))
  File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 1281, in _get
    stream = open(path, 'rb')
IOError: [Errno 13] Permission denied: '/usr/lib/python2.7/site-packages/soaplib-1.0.0_beta8-py2.7.egg-info/top_level.txt'

If I run the gitosis-init command as root, it's fine, but not if I try to run it as the git user, either via sudo or directly as the git user.
canton7 commented on 2011-02-08 09:14

See the repository owner's blog here: and github README here:

If, for example, you have a number of repositories in dir/ which are all writable by the same group, it allows you to write

[group mygroup]
members = jdoe
writable = dir/*

instead of specifying each repository individually.

alperkanat commented on 2011-02-07 23:57

Hello canton7, what are wildcards used for?

canton7 commented on 2011-02-07 23:54

I've created a package of a fork which supports wildcards for the 'writable' attribute here:

Anonymous comment on 2011-01-17 06:14

Ah, it appears reinstalling python2-distribute fixes this issue.

Anonymous comment on 2011-01-16 16:30

I'm still having a problem...

alperkanat commented on 2011-01-13 17:25

corrected the pkgbuild, thanks for warning!

notizblock commented on 2011-01-11 13:28

i used python2-distribute instead of python2-setuptools

alperkanat commented on 2010-12-21 20:46

thanks for warning.. updated the package.

Anonymous comment on 2010-12-21 18:48

You need to change setuptools to python2-setuptools in the dependencies.

Anonymous comment on 2010-10-27 00:56

@alperkanat Thanks! I think it was a problem with my system... I kept creating new groups and they all got GID 99. I ended up doing some looking and fixing it.

alperkanat commented on 2010-10-26 08:58

@wsduvall: updated the package with a permissions fix. the GID problem was an old bug that was fixed a few months ago, but I can't find the bug report right now. so maybe you could try updating your system and reinstalling the gitosis-git package. if the problem continues, you can file a bug report, because the gitosis-git package does nothing but use the groupadd -r command.

Anonymous comment on 2010-10-26 01:22

Pretty sure you're supposed to do chmod 700 on /srv/gitosis (see the wiki), also this pkg is creating the git group with the same GID as nobody (99).

alperkanat commented on 2010-10-20 22:13

updated the package with python2, thanks!
alperkanat commented on 2010-10-20 06:51

thanks for your support, will do it asap

Anonymous comment on 2010-10-20 00:45

With the python2.x -> python3.x upgrade, this will no longer build, because it needs setuptools, which is available only for python2.x, but uses python, which is now python3.x. The fix is easy: replace the dependence on python with one on python2. A tar-file that fixes this can be found at. If you could upload it, that would be great.

alperkanat commented on 2010-08-08 21:18

in fact, it's not. because it downloads the latest version from git, it never goes out of date when you install it, unless upstream development is stopped. but i'll update the package revision so that everyone can update to the latest one.

exterm commented on 2010-08-06 18:05

The link given lists the last commit as 2009-09-17, so this package is out of date, right?
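The pre-install check suggested at the top of this thread (warn when a "git"/"gitosis" user or /srv/gitosis already exists) can be sketched as follows. The real gitosis.install scriptlet is shell; this Python sketch only illustrates the logic, and the function names are made up.

```python
# Sketch of the suggested pre-install check: warn when a "git"/"gitosis"
# user or the /srv/gitosis directory already exists from another setup.
# The real install scriptlet is shell; this only illustrates the logic.
import os

def user_exists(name, passwd_path="/etc/passwd"):
    """True if `name` appears as a login name in the passwd file."""
    with open(passwd_path) as f:
        return any(line.split(":", 1)[0] == name for line in f)

def preinstall_warnings(passwd_path="/etc/passwd", home="/srv/gitosis"):
    warnings = []
    for user in ("git", "gitosis"):
        if user_exists(user, passwd_path):
            warnings.append("user '%s' already exists (from another setup?)" % user)
    if os.path.isdir(home):
        warnings.append("directory %s already exists" % home)
    return warnings
```

In the PKGBUILD itself the equivalent would be a `getent passwd git` / `test -d /srv/gitosis` pair in the pre_install hook.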
https://aur.archlinux.org/packages/gitosis-git/?ID=23419&comments=all&detail=1
I was rude!... I know I was, when I said that... but I had to say it! That young and pretty beautiful lady was looking straight at my sensitive data!... and we were in public... at an internet cafe, and my sensitive data was fully shown on the display screen!... completely unprotected! Fortunately, my data is large enough to be understood at first sight.

Whenever that beautiful lady's image comes to my mind, I think of only one thing... My obvious conclusion from this was that I should design a C++ class (MFC dependent) to protect any data from prying (although very attractive) eyes... and this is exactly what I did.

I've implemented a simple class, named Symmetric, inside a namespace named Encryption, to symmetrically encrypt/decrypt an array of bytes held by an object of type CArray<BYTE>, given a key in the form of another array of bytes, also held by an object of this same type.

This Symmetric class holds only 2 public methods:

void Encrypt(CArray<BYTE>& data
           , const CArray<BYTE>& key
           , const BOOL throwException = _dontThrowException)
{
    //
    // code to throw or not throw an exception...
    //
    Do(_encryption, data, key);
}

void Decrypt(CArray<BYTE>& data
           , const CArray<BYTE>& key
           , const BOOL throwException = _dontThrowException)
{
    //
    // code to throw or not throw an exception...
    //
    Do(_decryption, data, key);
}

The encryption/decryption is done in place, i.e., directly on the data object instance (no copy of it is made). Any memory used for holding the key (or part of it) is securely zeroed after usage. The key used to encrypt the data is not part of the encrypted data. Also, the encrypted data is always 8 bytes larger than the original data, to produce an avalanche effect that keeps quite similar data from having almost the same encrypted result. The Symmetric class deals with a zero-length key and/or zero-length data without needing to throw exceptions.
An object instantiated from the Symmetric class can encrypt/decrypt any data held in a CArray<BYTE> object, no matter its size. Is your sensitive data a small one? No problem, the Symmetric class remedies this. It can encrypt even zero-length data, and decrypt it back correctly, that is, back to zero-length data. Also, a zero-length key can encrypt (the word is inappropriate in this case, because there's no scrambling) and decrypt back correctly, any data. The Symmetric class can also encrypt zero-length data with a zero-length key (now both arrays are zero-length), and decrypt it back correctly to zero-length data. It's apparent that this class doesn't need to throw exceptions of any type.

An example using this class for encryption is:

CArray<BYTE> key;
DefineYourKey(key);

CArray<BYTE> data;
ReadYourDataFromAFileForInstance(data);

Encryption::Symmetric().Encrypt(data, key);
//now, data is encrypted.

PutYourEncryptedDataBack(data);

And, for decryption:

CArray<BYTE> key;
UseTheSameKeyAsBefore(key);

CArray<BYTE> data;
ReadBackYourEncryptedData(data);

Encryption::Symmetric().Decrypt(data, key);
//now, data is as the original one.

Sometimes, a client application wants to be warned about inappropriate arguments; exceptions are useful in this case, while maintaining programming linearity:

try
{
    /*©*/ using namespace Encryption;
    Symmetric().Encrypt(abDATA, abKEY, _andThrowExceptionIfAnyArgumentIsZeroLength);
    //
    // etc...
    //
}
catch (INT_PTR check)
{
    CString checkString = /*©*/ Encryption::GetCheckString(check);
    //
    // etc...
    //
}

For better appreciation of the visual effects when running the executable zipped above, run it from the command line in the Start menu. Also, if you have another window open, it is better to let it cover no more than 50% of the screen real estate, and don't forget that the file THE Lady.encrypted.dat must be in the same folder.

Just basic MFC programming knowledge.
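The contract described above (symmetric in-place processing, zero-length keys and data handled without exceptions, decryption restoring the original exactly) can be mimicked with a toy cipher. This Python sketch is a plain repeating-key XOR, not the article's actual algorithm; it only demonstrates the same interface guarantees.

```python
# Toy cipher mimicking Symmetric's contract: symmetric, tolerant of
# zero-length keys and data, and Decrypt(Encrypt(x)) == x.  It is a
# plain repeating-key XOR, NOT the article's actual algorithm.

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    if not key:                      # zero-length key: no scrambling at all
        return data
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

toy_decrypt = toy_encrypt            # XOR is its own inverse

ciphertext = toy_encrypt(b"sensitive data", b"ABC")
restored = toy_decrypt(ciphertext, b"ABC")
```

The real class additionally appends 8 avalanche bytes and zeroes key memory after use; neither is modeled here.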
The MFC-dependent Encryption namespace (containing the Symmetric class) is declared and implemented in the Encryption header file. This article's project is an MFC one that makes no use of pre-compiled headers, and has three files besides the one already mentioned: a .cpp for the application, mainframe (and its CRichEditCtrl-derived child window), and dialog box classes, and a .h and a .rc for resources. The character set is not set.

In the client application, a CRichEditCtrl-derived child window is used as a repository for the examples' results. A modal dialog box is used solely to collect user input and, as we know that it doesn't block the main window, sends messages to it, which will react accordingly, performing tests and showing them on screen.

The main method in the Symmetric class is Do(const BOOL encrypt, ...), which performs the encryption or decryption, depending on the BOOL parameter, while keeping the correct execution order (when decrypting, the order must be reversed relative to encrypting). The Symmetric::TheAvalanche(...) method increases the data length by 8 bytes (exclusively dependent on the data and on the key) to produce an avalanche effect that makes similar data have reasonably different encrypted results.

The user key is not directly used to encrypt (or decrypt) the data. Instead, many keys are built from it, with lengths varying from 1 up to the original key length, and the same process is then repeated using the original key reversed. For instance, a key like ABC (hex 41 42 43) will cause the data to be encrypted (or decrypted) with the following keys in sequence (but when decrypting, the sequence order is reversed):

41 b1 a5 02 41 42 64 4e 7e 04 41 42 43 6b 29 6b 85 81 07 14 e4 a5 02 14 24 02 1b 18 04 14 24 34 0d 09 65 a7 e7 07

Note that the bytes 14, 24 and 34 above are the originals 41, 42 and 43 with their nibbles reversed. The other bytes are calculated based only on the key itself.
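The nibble reversal mentioned above (41→14, 42→24, 43→34) is easy to reproduce. The other derived key bytes (b1, a5, 02, ...) depend on unpublished internals of the class, so only this documented step is sketched:

```python
# The nibble swap applied to the key bytes above: 0x41 -> 0x14, etc.

def swap_nibbles(b):
    """Exchange the high and low 4-bit halves of a byte."""
    return ((b << 4) | (b >> 4)) & 0xFF

swapped = [swap_nibbles(b) for b in b"ABC"]   # hex 41 42 43
# -> [0x14, 0x24, 0x34], matching the 14, 24, 34 in the sequence above
```

The swap is an involution: applying it twice returns the original byte, which is what lets decryption rebuild the same key schedule.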
Maybe the weakest part of an encrypted system is not the encryption algorithm but the key a user chooses to encrypt his/her data. For instance, a key like:

Qwerty or qazxsw or bored web developer

will never, ever protect any data from any hacker, no matter how smart and strong the encryption algorithm is. Those words are in every hacker's dictionary for attacking a system's security. The first two keys are found directly (as specific sequences) on the keyboard, and the last one is composed of three words found directly in a language dictionary and, worse, they are English words, which is an international language. Therefore, we should litter them (please, just the keys, not the person, because it's against human civil rights).

To choose a key is easy; to choose a strong key is a bit harder; to remember such a key is the hardest thing. Maybe one would choose strong keys and, later, choose to pack them all in a file encrypted with a key that is easy to remember later...

I'm very much interested in meeting that young and pretty beautiful lady again. If you happen to see her somewhere, just let me know! This is the point. Right now, after implementing the Symmetric class, I feel confident to show her my sensitive data...

...and I'd like very much for her to let me see her sensitive data too!

11/november/2008: just decided to start looking for that beautiful lady...

By the way, let me tell you that I've just met, AGAIN, that pretty beautiful lady. Yeaaah!
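A first-pass filter for the kinds of weak keys called out above can be sketched as follows. The word list is a tiny stand-in for a real cracking dictionary, and keyboard-column walks like "qazxsw" would need extra patterns beyond what is shown:

```python
# Rough weak-key filter in the spirit of the discussion above.  The
# dictionary is a tiny stand-in for a real attack wordlist, and column
# walks such as "qazxsw" would need their own patterns.

KEYBOARD_ROWS = ("qwertyuiop", "asdfghjkl", "zxcvbnm", "1234567890")
TINY_DICTIONARY = {"bored", "web", "developer", "password", "secret"}

def is_weak_key(key):
    k = key.lower().strip()
    words = k.split()
    # every word found verbatim in the dictionary -> weak
    if words and all(w in TINY_DICTIONARY for w in words):
        return True
    # a straight run along a single keyboard row, in either direction
    compact = k.replace(" ", "")
    return any(compact in row or compact in row[::-1]
               for row in KEYBOARD_ROWS)
```

A real strength check would also measure length and character-class variety; this sketch only rejects the two obvious failure modes named in the text.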
http://www.codeproject.com/KB/security/WouldYouPleaseEyesOFF.aspx
. Disclaimer: The code in this post is intended to be educational only. It is simplified code that is intended to help you understand how ACS works. If you already have a good understanding of ACS and are looking to use it with PHP in a real application, then I suggest you check out the AppFabric SDK for PHP Developers (which I will examine more closely in a post soon). If you are interested in playing with my example code, I’ve attached it to this post in a .zip file. Credits: A series of articles by Jason Follas were the best articles I could find for helping me understand how ACS works. I’ll borrow heavily from his work (with his permission) in this post, including his bouncer-bartender analogy (which Jason actually credits Brian H. Prince with). If you are interested in Jason’s articles, they start here: Windows Azure Platform AppFabric Access Control: Introduction. AppFabric Access Control Service (ACS) as a Nightclub The diagram below (adapted from Jason Follas’ blog) shows how a nightclub (with a bouncer and bartender) might go about making sure that only people of legal drinking age are served drinks (I’ll draw the analogy to ACS shortly): - Before the nightclub is open for business, the bouncer tells the bartender that he will be handing out blue wristbands tonight to customers who are of legal drinking age. - Once the club is open, a customer presents a state-issued ID with a birth date that shows he is of legal drinking age. - The bouncer examines the ID and, when he’s satisfied that it is genuine, gives the customer a blue wristband. - Now the customer can go to the bartender and ask for a drink. - The bartender examines the wristband to make sure it is blue, then serves the customer a drink. I would add one step that occurs before the steps above and is not shown in the diagram: 0. The state issues IDs to people and tells bouncers how to determine if an ID is genuine. 
To draw the analogy to ACS, consider these ideas:

- You, as a developer, are the State. You issue IDs to customers and you tell the bouncer how to determine if an ID is genuine. You essentially do this (I'm oversimplifying for now) by giving both the customer and the bouncer a common key. If the customer asks for a wristband without the correct key, the bouncer won't give him one.
- The bouncer is the AppFabric access control service. The bouncer has two jobs:
  - He issues tokens (i.e. wristbands) to customers who present valid IDs. Among other information, each token includes a Hash-based Message Authentication Code (HMAC). (The HMAC is generated using the SHA-256 algorithm.)
  - Before the bar opens, he gives the bartender the same signing key he will use to create the HMAC. (i.e. He tells the bartender what color wristband he'll be handing out.)
- The bartender is a service that delivers a protected resource (drinks). The bartender examines the token (i.e. wristband) that is presented by a customer. He uses information in the token and the signing key that the bouncer gave him to try to reproduce the HMAC that is part of the token. If the HMACs are not identical, he won't serve the customer a drink.
- The customer is any client trying to access the protected resource (drinks). For a customer to get a drink, he has to have a State-issued ID, and the bouncer has to honor that ID and issue him a token (i.e. give him a wristband). Then the customer has to take that token to the bartender, who will verify its validity (i.e. make sure it's the agreed-upon color) by trying to reproduce an HMAC (using the signing key obtained from the bouncer before the bar opened) that is part of the token itself. If any of these checks fail along the way, the customer will not be served a drink.

To drive this analogy home, I'll build a very simple system composed of a client (barpatron.php) that will try to access a service (bartender.php) that requires a valid ACS token.
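Stripped of the analogy, the wristband check is HMAC-SHA256 with a key shared out of band. Python's standard library shows the shape of it; the key and token body here are made up for illustration:

```python
# The wristband scheme in miniature: the bouncer (token issuer) signs
# the token body with a key shared out of band with the bartender (the
# service), who verifies by recomputing the same HMAC-SHA256.
import hmac
import hashlib

signing_key = b"token-policy-key"    # made-up; shared like the wristband color

def sign(body: bytes) -> bytes:
    return hmac.new(signing_key, body, hashlib.sha256).digest()

def verify(body: bytes, mac: bytes) -> bool:
    # constant-time comparison, so timing doesn't leak the MAC
    return hmac.compare_digest(sign(body), mac)

body = b"Birthdate=1-1-70&ExpiresOn=1283809703"
mac = sign(body)                     # bouncer hands out the wristband
ok = verify(body, mac)               # bartender checks it
```

Because only holders of the shared key can produce a matching MAC, a tampered body (a forged birth date, say) fails verification.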
Hiring a Bouncer (i.e. Setting up ACS) In the simplest way of thinking about things, when you hire a bouncer (or set up ACS) you are simply equipping him with the information he needs to determine if a customer should be issed a wristband. Essentially, this means providing information to a customer that the bouncer will recognize as a valid ID. And, we also need to equip the bartender with the tools for validating the color of a wristband issued by the bouncer. I think this is worth keeping in mind as you work through the following steps. To use Windows Azure AppFabric, you need a Windows Azure subscription, which you can create here:. (You’ll need a Windows Live ID to sign up.) I purchased the Windows Azure Platform e-mail after signing up). After creating and activating your subscription, go to the AppFabric Developer Portal: (where you can create an access control service). To create a service… 1. Click a project name. 2. Click Add Service Namespace. 3. Choose a service namespace (make sure it is valid) and the region in which you want the service deployed, then click Create (you can leave the ServiceBus connections at 0 because they don’t apply to access control): 4. On the resulting page, make note of your Current Management Key and the STS endpoint (which will always be of the form). (Note that ServiceBus-related information is not shown in the picture below.) 5. Once you have created a service, you need to create token policies within the service namespace (we’ll only create one token policy). I’ll use the command line tool (ACM) that is available in the Windows Azure AppFabric SDK. After you download the tools you’ll need to modify the configuration file (which is in the Windows Azure platform AppFabric SDK\V1.0\Tools directory) by adding your service namespace and your management key: 6. 
A token policy defines the time-to-live for a token after it is issued (86400 seconds = 24 hours in our case) and the key that will be used to sign tokens (this is the key that will be shared with the bartender). You can create a token policy with the following command:

acm create tokenpolicy -name:BouncerPolicy -timeout:86400 -autogeneratekey

To see the token policy id, name, timeout, and key, use this command: acm getall tokenpolicy.

7. Next, we need to create a scope that is associated with the token policy. The scope defines where a token will eventually be used (the value of the "appliesto" parameter). You can create a scope with the following command (without the line breaks):

acm create scope -name:Bartender -appliesto:http://localhost/bartender.php -tokenpolicyid:<from previous step>

To see the generated information, use this command: acm getall scope.

8. Next, we have to create an issuer that will be trusted by ACS (i.e. we need to play the role of the State here, and let the bouncer know how to recognize a State-issued ID). To create an issuer, use the following command:

acm create issuer -name:Washington -issuername:Washington -autogeneratekey

Again, to see the generated information, use this command: acm getall issuer.

9. Finally, we need to create a rule that defines which claims should be present in a token issued by the service. In the rule we'll create, we'll assume that ACS is expecting a DOB claim (with some value) and that the bartender is expecting a Birthdate claim. ACS will then simply pass the value of the DOB claim on under the Birthdate name. Here is the command for creating the rule (omit the line breaks):

acm create rule -name:Birthdate -scopeid:<scope id from step 7> -inclaimissuerid:<issuer id from step 8> -inclaimtype:DOB -outclaimtype:Birthdate -passthrough

To see the generated information, use this command: acm getall rule -scopeid:<scope id from step 7>.

Recall that we set out to give the bouncer the information he needs to issue (or not issue) wristbands.
That information is the service namespace, the issuer name, the issuer key, the scope (i.e. the place where a token will be used), and a DOB claim. Note that, in a way, you play the State since you decide which customers to give this information to (i.e. you are the State issuing driver's licenses). We have also set up a way for the bartender to determine if a token that has been presented to him is valid: he will use the token policy key (shared between him and the bouncer) to try to replicate an HMAC generated by the bouncer.

Setting Up the Bartender (i.e. Verifying Tokens)

Now we need a way for the bartender to validate the agreed upon color of the wristbands (i.e. validate tokens that are presented to him). This may vary from service to service, but at the heart of it is an HMAC that is included in the token itself. The bartender will use a key (the token policy key) shared between the bouncer and himself to try to replicate the HMAC. The details are in the code below. A typical token issued by ACS will look something like this:

wrap_access_token=Birthdate%3d1-1-70%26Issuer%3dhttps%253a%252f%252fbouncernamespace.accesscontrol.windows.net%252f%26Audience%3dhttp%253a%252f%252flocalhost%252fbartender.php%26ExpiresOn%3d1283809703%26HMACSHA256%3d19pmWGr9pgH9RGdqYhrPcO6qse8YLGPYZGMoQOvz%252biY%253d&wrap_access_token_expires_in=86400

If you can look past the URL encoding, you will see much of the information we provided when setting up ACS: the service namespace is part of the Issuer URL, the "applies to" value is the Audience, and the value of ExpiresOn is determined by the length of the timeout we set for tokens. The value of HMACSHA256 is the HMAC created by signing this part of the token, Birthdate=1-1-70&Issuer=https%3a%2f%2fbouncernamespace.accesscontrol.windows.net%2f&Audience=http%3a%2f%2flocalhost%2fbartender.php&ExpiresOn=1283788760, with a signing key (the token policy key) that is shared with the bartender.
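The signing scheme just described can be replicated in a few lines of any language. Here is a hedged Python sketch (the key and claim values are entirely made up, not real ACS output) that recomputes the HMACSHA256 over the signed portion of a WRAP token and compares it with the signature carried inside the token:

```python
import base64
import hashlib
import hmac
from urllib.parse import parse_qsl, quote

def is_hmac_valid(token, signing_key_b64):
    """Recompute HMACSHA256 over everything before '&HMACSHA256=' and
    compare it with the signature carried inside the token itself."""
    signed_part, sep, _ = token.partition("&HMACSHA256=")
    if not sep:  # no signature present at all
        return False
    claimed = dict(parse_qsl(token))["HMACSHA256"]
    digest = hmac.new(base64.b64decode(signing_key_b64),
                      signed_part.encode("utf-8"), hashlib.sha256).digest()
    return hmac.compare_digest(base64.b64encode(digest).decode("ascii"), claimed)

# Fabricated example: sign a token body with a made-up token policy key.
key = base64.b64encode(b"not-a-real-token-policy-key").decode("ascii")
body = "Birthdate=1-1-70&Issuer=https%3a%2f%2fexample%2f&ExpiresOn=1283788760"
sig = base64.b64encode(hmac.new(base64.b64decode(key), body.encode("utf-8"),
                                hashlib.sha256).digest()).decode("ascii")
token = body + "&HMACSHA256=" + quote(sig, safe="")

print(is_hmac_valid(token, key))                              # True
print(is_hmac_valid(token.replace("1-1-70", "1-1-99"), key))  # False: body was tampered with
```

Note that the HMAC is computed over the still-URL-encoded claim string, mirroring what the PHP validation later in the post does with hash_hmac.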
For the bartender to make sure this is a valid token (which, in my example, is expected in an Authorization header), he first has to perform a couple of simple checks: 1) make sure an Authorization header is present, and 2) make sure the Authorization header is properly formed (i.e. begins with wrap_access_token).

// Check for presence of Authorization header
if(!isset($_SERVER['HTTP_AUTHORIZATION']))
    Unauthorized("No authorization header.");
$header = $_SERVER['HTTP_AUTHORIZATION'];

// Header must start with wrap_access_token
$i = stripos($header, "wrap_access_token");
if ($i === false || $i !== 0)
    Unauthorized("Authorization header doesn't start with wrap_access_token.");

Next, the bartender needs to "split" the header for further processing:

// Header must have exactly two parts, token is in second part
$headerSplit = explode('=', $header, 2);
if (count($headerSplit) != 2)
    Unauthorized("Header doesn't have two parts.");
$token = $headerSplit[1];

Now the real work of making sure the token is valid can be done. The heavy lifting is done by the Validate method in a class called TokenValidator (which I will look at more closely in a moment).

// Validate token
$validator = new TokenValidator($token, SIGNING_KEY, SERVICE_NAMESPACE, APPLIES_TO);
if(!$validator->Validate())
    Unauthorized("Token is not valid.");
else
    echo "What would you like to drink?";

Here is the Unauthorized function used in the code above:

function Unauthorized($reason) {
    echo $reason." No drink for you!<br/>";
    die();
}

Let's look more closely at the TokenValidator class, in particular the Validate method.
The Validate method simply calls 4 other methods that validate the HMAC, determine whether the token is expired, determine whether the issuer is trusted, and determine whether the audience is trusted:

public function Validate() {
    if($this->isHMACValid() && !$this->isExpired() && $this->isTrustedIssuer() && $this->isTrustedAudience())
        return true;
    else
        return false;
}

In turn, each of these methods is fairly simple. And, to make them easier to read, the constructor breaks the token into name-value pairs (stored in the tokenParts property):

public function TokenValidator($token, $signingKey, $serviceNamespace, $audience) {
    $this->_token = $token;
    $this->_signingKey = $signingKey;
    $this->_serviceNamespace = $serviceNamespace;
    $this->_audience = $audience;
    $params = explode('&', $token);
    foreach ($params as $param) {
        $namevalue = explode('=', $param, 2);
        $this->_tokenParts[$namevalue[0]] = urldecode($namevalue[1]);
    }
}

The isHMACValid method splits the token on &HMACSHA256= and then uses the signing key that is shared with the bouncer to create an HMAC based on the first part of the token:

private function isHMACValid() {
    $tokenSplit = explode("&HMACSHA256=", $this->_token, 2);
    if(count($tokenSplit) != 2) {
        echo "Failed to split on &HMACSHA256=.<br/>";
        return false;
    }
    $hmac = hash_hmac('sha256', $tokenSplit[0], base64_decode($this->_signingKey), true);
    $locallyGeneratedSignature = base64_encode($hmac);
    if($this->_tokenParts['HMACSHA256'] != $locallyGeneratedSignature) {
        echo "Signatures don't match.<br/>";
        return false;
    }
    return true;
}

The isExpired, isTrustedIssuer, and isTrustedAudience methods speak for themselves:

private function isExpired() {
    $currentTime = time();
    if($currentTime > $this->_tokenParts['ExpiresOn']) {
        echo "Token is expired.<br/>";
        return true;
    } else {
        return false;
    }
}

private function isTrustedIssuer() {
    if('https://'.$this->_serviceNamespace.'.accesscontrol.windows.net/' == $this->_tokenParts['Issuer'])
        return true;
    else {
        echo "Not a trusted issuer.<br/>";
        return false;
    }
}

private function isTrustedAudience() {
    if($this->_audience == $this->_tokenParts['Audience'])
        return true;
    else {
        echo "Not a trusted audience.<br/>";
        return false;
    }
}

See, making sure a wristband is blue isn't all that complicated. (All the code above can be found in the bartender.php file in the attachment to this post.)

Getting a Wristband (i.e. Requesting a Token)

For a customer to get a token, he must have 4 key pieces of information that we made notes about when setting up ACS:

$service_namespace = "Your_service_namespace"; // From step 3 above.
$wrap_name = "Issuer_name"; // From step 8 above.
$wrap_password = "Issuer_key"; // The generated key from step 8 above.
$wrap_scope = ""; // Value of "applies to" in step 7 above.
$claims = array('DOB'=>'1-1-70'); // Any DOB claim.

A request to ACS for a token must be an HTTP POST request. With the information above, we can send a POST request using cURL. The post body is a concatenation of much of the information above, and the URL is based on the $service_namespace. Note that I'm assuming the token is the last line in the response (the response body):

// Define post body and URL for token request.
$postBody = 'wrap_name' . '=' . urlencode($wrap_name) . '&'
          . 'wrap_password' . '=' . urlencode($wrap_password) . '&'
          . 'wrap_scope' . '=' . $wrap_scope;
foreach ($claims as $key => $value) {
    $postBody = $postBody . '&' . $key . '=' . $value;
}
$url = 'https://' . $service_namespace . '.' . 'accesscontrol.windows.net' . '/' . 'WRAPv0.9';

// Initialize cURL session for requesting token.
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, $postBody);
curl_setopt($ch, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_1_1);
curl_setopt($ch, CURLOPT_HEADER, true);
curl_setopt($ch, CURLOPT_USERAGENT, "Bar Patron");
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);

// Execute cURL session and extract token.
$ACSResponse = curl_exec($ch);
curl_close($ch);
$responseParts = explode("\n", $ACSResponse);
$token = $responseParts[count($responseParts)-1];

And now, with a token in hand, we can take it to the bartender. I'll again use cURL to send a request (this time a GET request) with the token as the value of the Authorization header.

// Initialize cURL session for presenting token.
$ch2 = curl_init();
curl_setopt($ch2, CURLOPT_URL, "");
curl_setopt($ch2, CURLOPT_HTTPHEADER, array("Authorization: ".urldecode($token)));
curl_setopt($ch2, CURLOPT_HEADER, true);
curl_setopt($ch2, CURLOPT_RETURNTRANSFER, true);
$bartenderResponse = curl_exec($ch2);
curl_close($ch2);
$responseParts = explode("\n", $bartenderResponse);
echo $responseParts[count($responseParts)-1];

The code above is in the barpatron.php file (attached to this post).

Wrapping Up

Once you have set up ACS (i.e. once you have "Hired a Bouncer"), you should be able to take the two files (bartender.php and barpatron.php) in the attached .zip file, modify them so that they use your ACS information (service namespace, signing keys, etc.), and then load barpatron.php in your browser to see how it all works. If you have been reading carefully, you may have noticed a flaw in this bouncer-bartender analogy: in the real world, the bouncer would actually check the DOB claim to make sure the customer is of legal drinking age. In this example, however, it would be up to the bartender to actually verify that the customer is of legal drinking age.
(At this time, ACS doesn’t have a rules engine that would allow for this type of check.) Or, the Issuer would have to verify legal drinking age. i.e. You might write a service that only provides the customer with the information necessary to request a token if he has somehow proven that he is of legal drinking age. I hope this helps in understanding how ACS works. Look for a post soon that shows how to use the AppFabric SDK for PHP Developers to make things easier! Thanks. -Brian
https://blogs.msdn.microsoft.com/brian_swan/2010/08/17/understanding-windows-azure-appfabric-access-control-via-php/
In this tutorial we will check how to serialize a Python dictionary to a JSON string.

Introduction

In this tutorial we will check how to serialize a Python dictionary to a JSON string. This tutorial was tested with Python version 3.7.2.

The code

We will start the code by importing the json module. This module will expose to us the function that allows us to serialize a dictionary into a JSON string.

import json

After this we will define a variable that will contain a Python dictionary. We will add some arbitrary key-value pairs that could represent a person data structure, just for illustration purposes.

person = {
    "name": "John",
    "age": 10,
    "skills": ["Cooking", "Singing"]
}

To serialize the dictionary to a string, we simply need to call the dumps function and pass our dictionary as input. Note that this function has a lot of optional parameters, but the only mandatory one is the object that we want to serialize. We are going to print the result directly to the prompt.

print(json.dumps(person))

Note that if we don't specify any additional input for the dumps function, the returned JSON string will be in a compact format, without newlines. So, we can make use of the indent parameter by passing a positive integer. By doing this, the JSON string returned will be pretty printed with a number of indents per level equal to the number we have passed. We will pass an indent value of 2.

print(json.dumps(person, indent = 2))
print("------------\n\n")

To finalize, we will check another additional parameter called sort_keys. This parameter defaults to False, but if we set it to True the keys will be ordered in the JSON string returned.

print(json.dumps(person, sort_keys = True))

The final code can be seen below.
import json

person = {
    "name": "John",
    "age": 10,
    "skills": ["Cooking", "Singing"]
}

print(json.dumps(person))
print("------------\n\n")

print(json.dumps(person, indent = 2))
print("------------\n\n")

print(json.dumps(person, sort_keys = True))

Testing the code

To test the code, simply run it in a tool of your choice. I'll be using IDLE, a Python IDE. You should get an output similar to figure 1. In the first print we can see that the dictionary was correctly converted to a compact JSON string, as expected. In the second print we can see a prettified version of the JSON with the 2 indents that we specified in the code. In the third print we can confirm that the keys were ordered.
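As a small addition (not part of the original tutorial), json.dumps has an inverse, json.loads, so a quick sanity check is that the serialized string parses back into an equal dictionary:

```python
import json

person = {
    "name": "John",
    "age": 10,
    "skills": ["Cooking", "Singing"]
}

# Serialize with sorted keys, then parse the string back into a dictionary.
serialized = json.dumps(person, sort_keys=True)
restored = json.loads(serialized)

print(restored == person)      # True: the round trip preserves the data
print(list(restored.keys()))   # ['age', 'name', 'skills'] - order follows the JSON text
```

Note that sorting the keys only affects the text of the JSON string, not the equality of the round-tripped dictionary.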
https://techtutorialsx.com/2020/02/18/python-converting-dictionary-to-json-string/?shared=email&msg=fail
There are two images provided: actionloop and action-golang-v1.11. Each image accepts a different input when you deploy an action.

The runtime actionloop accepts:

- a single file, which can be either a binary in ELF format for the AMD64 architecture implementing the ActionLoop protocol, or a script, identified by the #! hash-bang path at the beginning. The default actionloop image can execute bash shell scripts and can use the jq command to parse JSON files and the curl command to invoke other actions.
- a zip file, which must contain at the top level (not in a subdirectory) a file named exec. This file must be in the same format as a single file: either a binary or a script.

The runtime action-golang-v1.11 accepts:

- everything the actionloop runtime accepts: a single binary or script, or a zip file with an exec at the top level, which must again be a Linux ELF executable compiled for the AMD64 architecture
- a single Go source file, which will be compiled
- a zip file without a top-level exec, which will be interpreted as a collection of Go source files and compiled into a binary as described in the document about actions

Please note the rules, described separately, about the name of the main function (which defaults to main.Main) and about how to overwrite main.main.

When you deploy a zip file, you can either keep all your functions in the main package or place some of them in a separate package, such as hello.

If all your functions are in the main package, just place all your sources in the top level of your zip file.

If some functions belong to a package, like hello/, you need to be careful with the layout of your source. The layout supported is the following:

golang-main-package/
├── Makefile
└── src
    ├── hello
    │   ├── hello.go
    │   └── hello_test.go
    └── main.go

You need to use a src folder, place the sources that belong to the main package in src, and place the sources of your package in the src/hello folder. Then you should import your subpackage with import "hello". Note that this means if you want to compile locally, you have to set your GOPATH to the parent directory of your src folder. Check below for using VS Code as an editor with this setup.
When you send the action, you will have to zip the content. Check the example golang-main-package and the associated Makefile for an example, including also how to deploy and precompile your sources.

When you need to use third-party libraries, the runtime does not download them from the Internet. You have to provide them, downloading and placing them using the vendor folder mechanism. We are going to show here how to use the vendor folder with the dep tool.

NOTE: the vendor folder does not work at the top level; you have to use a src folder and a package folder in order to also have the vendor folder.

If, for example, you want to use the library github.com/sirupsen/logrus to manage your logs (a widely used drop-in replacement for the standard log package), you have to include it in your source code in a sub package. For example, consider you have in the file src/hello/hello.go the import:

import "github.com/sirupsen/logrus"

To create a vendor folder, you need to:

- change into the src/hello folder (not the src folder)
- run GOPATH=$PWD/../.. dep init the first time (it will create 2 manifest files, Gopkg.lock and Gopkg.toml), or dep ensure if you already have the manifest files

The layout will be something like this:

golang-hello-vendor
├── Makefile
└── src
    ├── hello
    │   ├── Gopkg.lock
    │   ├── Gopkg.toml
    │   ├── hello.go
    │   ├── hello_test.go
    │   └── vendor
    │       ├── github.com/...
    │       └── golang.org/...
    └── hello.go

Check the example golang-hello-vendor. Note that you do not need to store the vendor folder in the version control system, as it can be regenerated (only the manifest files need to be stored), but you need to include the entire vendor folder when you deploy the action.

If you need to use a vendor folder in the main package, you need to create a directory main and place all the source code that would normally go in the top level in the main folder instead. A vendor folder in the top level does not work.
If you are using VS Code as your Go development environment with the VS Code Go support, and you want to get rid of errors and have it working properly, you need to configure it to support the suggested layout:

- use a src folder in your sources
- open the src folder as the top level source, or add it as a folder in the workspace (not just have it as a subfolder)
- enable the option go.inferGopath

Using this option, the GOPATH will be set to the parent directory of your src folder and you will not have errors in your imports.

Compiling sources on the image can take some time when the image is initialized. You can speed this up by precompiling the sources, using the image action-golang-v1.11 as an offline compiler. You need docker for doing that.

The image accepts a -compile <main> flag, and expects you to provide sources in standard input. It will then compile them, emit the binary in standard output, and print errors in stderr. The output is always a zip file containing an executable.

If you have docker, you can do it this way. If you have a single source file, maybe main.go, with a function named Main, just do this:

docker run openwhisk/action-golang-v1.11 -compile main <main.go >main.zip

If you have multiple sources in the current directory, even with a subfolder with sources, you can compile it all with:

zip -r - * | docker run openwhisk/action-golang-v1.11 -compile main >main.zip

The generated executable is suitable to be deployed in OpenWhisk using just the generic actionloop runtime:

wsk action create my/action main.zip --docker openwhisk/actionloop

You can also use the full action-golang-v1.11 as the runtime; it is only bigger. Note that the output is always a zip file in Linux AMD64 format, so the executable can be run only inside a Docker Linux container.
https://apache.googlesource.com/openwhisk-runtime-go/+/2ea8fecad08615aa17b03d55577f83e1de072b6d/docs/DEPLOY.md
You are tweaking an app which is already in production. You are implementing code that allows a user to delete his data. All of a sudden, you realize that you made a huge mistake! By providing a wrong ID, you accidentally deleted data of an actual user! Horror stories like this one can truly become a reality if you don't have separate production and development environments. Thankfully, it's very easy to set all of this up with Codemagic, which is a CI/CD service dedicated specifically to Flutter apps.

Our multi-environment project

Environments can be used for just about anything - from supplying a different Firebase config file, so that you won't accidentally delete production data from Firestore, to changing the UI and even logic based on the current app environment. To keep this tutorial in a reasonable time-span, we won't deal with Firebase but we will instead create environments for a counter app! Yay 🎉 But seriously, have you never wanted to change the increment amount from 1 to 5 by reading a configuration JSON file? No? Well, now you'll see what you missed!

There are 2 ways in which to configure multiple environments - either by providing a config file (this is the case with Firebase) or by passing around an Environment enum or a constant String (this is usually used with dependency injection). We're going to tackle both of these approaches. Also, how can we use the proper environment when we build the app? Additionally, config files can contain sensitive information which we don't want to check into source control... How can we handle that? The answer is Codemagic and multiple branches in a git repo. However, before we can set up that kind of stuff, we have to first create a Flutter project.

The Flutter project

As you already know, we will build yet another variant of the counter app. The UI code will remain fairly unchanged. To follow along, create a new Flutter project and paste the following code into lib/app_widget.dart.
Changes to the default counter app are highlighted.

app_widget.dart

import 'package:flutter/material.dart';
import 'package:provider/provider.dart';

import './config_reader.dart';

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Flutter CI with Codemagic',
      theme: ThemeData(
        primarySwatch: Provider.of<Color>(context),
      ),
      home: MyHomePage(title: 'Flutter CI with Codemagic'),
    );
  }
}

class MyHomePage extends StatefulWidget {
  MyHomePage({Key key, this.title}) : super(key: key);

  final String title;

  @override
  _MyHomePageState createState() => _MyHomePageState();
}

class _MyHomePageState extends State<MyHomePage> {
  int _counter = 0;

  void _incrementCounter() {
    setState(() {
      _counter += ConfigReader.getIncrementAmount();
    });
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(title: Text(widget.title)),
      body: Center(
        child: Column(
          mainAxisAlignment: MainAxisAlignment.center,
          children: <Widget>[
            Text('You have pushed the button this many times:'),
            Text('$_counter'),
            Text(
              'Revealed secret:\n${ConfigReader.getSecretKey()}',
              textAlign: TextAlign.center,
            ),
          ],
        ),
      ),
      floatingActionButton: FloatingActionButton(
        onPressed: _incrementCounter,
        tooltip: 'Increment',
        child: Icon(Icons.add),
      ),
    );
  }
}

Pretty standard stuff, if you ask me. The difference from the default counter app lies in getting the increment value and also a "secret key" from a config JSON file using a ConfigReader. The primary theme color is obtained using Provider - changing the color is accomplished using an environment const String. You'll see how it's all done in just a bit 😉

Configuration JSON

Files present in the project are perfect for providing non-dynamic configuration. This can range from an increment amount for a counter app to various secret keys or even a Firebase config. The problem is that we don't want to check secret keys into source control. After all, all kinds of people can have access to the repository even when it's private (contractors, etc.) and you don't want them to see the keys! That's why it's good to add the config file to the project only at build time with Codemagic.
Still, we want to be able to develop the app locally, so we have to keep a copy of at least the development environment config on our machine. Let's put it into a folder located in the root of the project called config.

config/app_config.json

{
  "incrementAmount": 5,
  "secretKey": "DEV extremely secret stuff which you cannot put into the source control"
}

To prevent this file from being committed to git, we'll utilize .gitignore.

.gitignore

# Secrets
/config

Lastly, we need to make sure that the app_config.json file is accessible from within our Flutter app. That means we have to add the config folder to assets. While we're inside pubspec.yaml, let's also add a dependency on the provider package.

pubspec.yaml

dependencies:
  flutter:
    sdk: flutter
  provider: ^4.0.4
...
flutter:
  ...
  assets:
    - config/

Note: the flutter build command automatically runs flutter pub get.

Now that we know the structure of the config file and also that it's available as an asset, we can implement the ConfigReader class. Note that the initialize() method has to be called from main(). We'll do that later.

config_reader.dart

import 'dart:convert';

import 'package:flutter/services.dart';

abstract class ConfigReader {
  static Map<String, dynamic> _config;

  static Future<void> initialize() async {
    final configString = await rootBundle.loadString('config/app_config.json');
    _config = json.decode(configString) as Map<String, dynamic>;
  }

  static int getIncrementAmount() {
    return _config['incrementAmount'] as int;
  }

  static String getSecretKey() {
    return _config['secretKey'] as String;
  }
}

Environment constants

Having a config file is only one way to differentiate between environments. The other way is having multiple entry points (a.k.a. targets) for an app and then passing down either a "prod" or "dev" environment string, depending on whether the initially called main() method was located inside the main_dev.dart or main_prod.dart file. Let's first create the constant strings which will be passed around the app.
It's also possible to create an enum instead. We want to have two environments - dev and prod.

environment.dart

abstract class Environment {
  static const dev = 'dev';
  static const prod = 'prod';
}

Now onto the entry points! If you're not aware of it, the flutter run and flutter build commands take in an option called --target, or -t for short. This allows our app to have main_dev.dart and main_prod.dart files. These usually don't contain much code in themselves. Instead, they delegate all of the work to a common main method.

main_dev.dart

import 'environment.dart';
import 'main_common.dart';

Future<void> main() async {
  await mainCommon(Environment.dev);
}

main_prod.dart

import 'environment.dart';
import 'main_common.dart';

Future<void> main() async {
  await mainCommon(Environment.prod);
}

The common method, which is usually located inside main_common.dart, is where we initialize the ConfigReader and do any setup based on the passed-in environment. In this case, we only provide a different primary color.

main_common.dart

import 'package:flutter/material.dart';
import 'package:provider/provider.dart';

import 'app_widget.dart';
import 'config_reader.dart';
import 'environment.dart';

Future<void> mainCommon(String env) async {
  // Always call this if the main method is asynchronous
  WidgetsFlutterBinding.ensureInitialized();
  // Load the JSON config into memory
  await ConfigReader.initialize();

  Color primaryColor;
  switch (env) {
    case Environment.dev:
      primaryColor = Colors.blue;
      break;
    case Environment.prod:
      primaryColor = Colors.red;
      break;
  }

  runApp(
    Provider.value(
      value: primaryColor,
      child: MyApp(),
    ),
  );
}

VS Code launch.json

You can absolutely run the app from the command line and pass in the --target manually. I hope though that you aren't one of those who likes to program in VIM and any modern IDE gives that person a seizure. To be able to run our custom targets from VS Code, add this launch.json into a .vscode folder located in the root of the project.
launch.json

{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Flutter Dev",
      "type": "dart",
      "request": "launch",
      "program": "lib/main_dev.dart"
    },
    {
      "name": "Flutter Prod",
      "type": "dart",
      "request": "launch",
      "program": "lib/main_prod.dart"
    }
  ]
}

Going to the Debug tab will present you with two launch options 😎 Dev and Prod environments are just a click of a button away.

A short test to prove a point

A CI tool is most useful for continuously running tests and making you think twice before you merge a PR with a failing test suite. To make ourselves happy at the sight of green checkmarks next to our tests, let's add a completely fabricated example_test.dart.

test/example_test.dart

import 'package:flutter_test/flutter_test.dart';

void main() {
  test(
    "1 + 1 == 2",
    () async {
      expect(1 + 1 == 2, true);
    },
  );
}

The app is now fully runnable locally. The config JSON file can be read and you can launch either the prod or dev main() method. All of the tests are passing, so the next step is to automate under which environment the app runs.

Git repository with branches

The real magic of app environments lies in automatically choosing the correct one based on whether the commit is located in the dev branch or the master branch. As you can imagine, the production environment should be used to create apps ready for publishing from the master branch. Let's first set up the git repository. Codemagic fully supports GitHub, Bitbucket, GitLab and even custom remote repositories. We're going to go with GitHub in this tutorial and I assume you already have some basic experience with setting up a repository there. Once you've set up either a public or a private repo, let's do all of the usual git setup together with adding the remote.

👨💻 terminal

git init
git add .
git commit -m "Initial commit"
git remote add origin
git push -u origin master

The master branch will hold production-ready code which can be published to the app stores running under the production environment.
Let's also create a dev branch that will be an exact copy of master for now. Builds from the dev branch will be distributed to testers and they will run under the development environment.

👨💻 terminal

git checkout -b dev
git push -u origin dev

Setting up Codemagic

A good CI/CD tool is the glue that connects git branches to environments (and possibly flavors). Getting access to all that's happening in your repository through Codemagic is simple - just sign in with your GitHub account. Codemagic works with workflows. First, we're going to create a Dev Workflow which will, of course, build the app under the development environment. Find your repository in the list of apps and hit the gear icon.

Dev Workflow

Build trigger

We want to trigger development environment builds whenever there's a new commit in the dev branch. That's what build triggers are for.

Environment variable

Next, we need to get the app_config.json file into Codemagic so that we can dynamically add the correct one to the project upon build. The issue is, you cannot upload a file to Codemagic 😢 That's not a problem though! Any string of text (JSON included) can be converted to a base64 string and then added to the CI/CD solution as an environment variable. Let's open up a terminal at the root of our Flutter project. If you're running GitBash, you can encode files to base64 in the following way:

👨💻 terminal

# GitBash
base64 config/app_config.json

GitBash will display the encoded string directly in the terminal. MacOS and Linux users have to follow a slightly different procedure, as you have to write out the base64 string to a file.
👨💻 terminal

# MacOS
openssl base64 -in config/app_config.json -out outfile.txt

# Linux
base64 config/app_config.json > outfile.txt

Either way, if you've been following along and your config file has the same incrementAmount and secretKey, the base64 string will be as follows:

ew0KICAiaW5jcmVtZW50QW1vdW50IjogNSwNCiAgInNlY3JldEtleSI6ICJERVYgZXh0cmVtZWx5
IHNlY3JldCBzdHVmZiB3aGljaCB5b3UgY2Fubm90IHB1dCBpbnRvIHRoZSBzb3VyY2UgY29udHJv
bCINCn0NCg==

Copy this string and add it as an APP_CONFIG environment variable. Make sure to check "Secure" to make it encrypted. With this variable accessible when Codemagic's machine starts testing and building, we can now finally do what we wanted to accomplish from the beginning - create the app_config.json file under the config directory all within the CI/CD workflow. Let's create a pre-build script by hitting the "+" button in between "Test" and "Build". We'll want to create the config directory if it doesn't yet exist and then take the APP_CONFIG encrypted base64 string and output it into the app_config.json file.

✨ Codemagic pre-build script

# Create directory if it doesn't exist
mkdir -p $FCI_BUILD_DIR/config
# Write out the environment variable as a json file
echo $APP_CONFIG | base64 --decode > $FCI_BUILD_DIR/config/app_config.json

Test and build

There are two last steps needed to finalize this Dev Workflow. First, let's enable analyzer and flutter_test under the Test tab in Codemagic and hit Save. In the Build tab, set up the platforms for which you want to build, set the mode to debug and, most importantly, provide the proper target file as a build argument.

Building only for one platform for the purposes of this tutorial (to speed up builds)

Debug apps will be signed with a debug key on Android which is fine for APKs meant for testers

Don't forget about this!

This setup is enough for the Dev Workflow (which is currently called just Default Workflow) and we can finally start our first build manually.
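If neither openssl nor a base64 utility is handy, the same encoded string can be produced portably with Python's standard library. This is a self-contained sketch: the config is inlined here, while in a real project you would read config/app_config.json from disk instead.

```python
import base64
import json

# In a real project: data = open("config/app_config.json", "rb").read()
config = {
    "incrementAmount": 5,
    "secretKey": "DEV extremely secret stuff which you cannot put into the source control",
}
data = json.dumps(config, indent=2).encode("utf-8")

encoded = base64.b64encode(data).decode("ascii")
print(encoded)

# The Codemagic pre-build script reverses this with `base64 --decode`,
# so the round trip must reproduce the original bytes exactly.
assert base64.b64decode(encoded) == data
```

The exact base64 text differs from the one above only because of line endings and key order; what matters is that decoding it yields a valid app_config.json.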
If everything goes properly (and it should!), this build will be successful and it will give us an APK file which will run under the development environment. You can download the APK files from the finished build tab.

Prod Workflow

First up, don't despair! We won't have to create the Prod Workflow from scratch because we can duplicate the Dev Workflow and then just slightly modify it. Rename the "Default" Workflow to "Dev" and hit Duplicate.

Update the Build trigger to listen to pushes on the master branch. Steps are the same as in this paragraph above.

Next, we're going to update the APP_CONFIG environment variable to hold a production incrementAmount and secretKey. Follow the steps from this paragraph. The only thing which differs is the content of the app_config.json.

config/app_config.json

{
  "incrementAmount": 1,
  "secretKey": "PROD extremely secret stuff which you cannot put into the source control"
}

Lastly, we're going to modify the Build tab to target the main_prod.dart file. Follow these steps if you need a refresher.
https://resocoder.com/2020/02/19/environments-flavors-in-flutter-with-codemagic-ci-cd/
Thai Election ID Checking

The online service <a href=>here</a> aims to help people check where they are to vote. However, it can be used as ID certification as well.

Here I make it into a function call.

# nnnn is the id to be checked
import urllib, re

def getname(id):
    url = '' + str(id)
    src = urllib.urlopen(url).read()
    pat = '<H3> *(.*?) *<'
    name = re.findall(pat, src)[0]  # don't use other info
    return name

print getname(5100900050063)  # show a name (one of my relatives)
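The parsing step can be pulled out into its own function, which makes the snippet portable to Python 3 and testable without the network call (the service URL is left out above, so it is omitted here too; the sample string is made up for illustration):

```python
import re

def extract_name(html):
    # Same pattern as the snippet above: grab the text inside the first <H3>
    pat = '<H3> *(.*?) *<'
    matches = re.findall(pat, html)
    return matches[0] if matches else None

# Made-up sample response, just to exercise the parser
sample = "<html><body><H3> Somchai Jaidee </H3></body></html>"
print(extract_name(sample))
```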
http://www.dzone.com/snippets/thai-election-id-checking
282 Reader Comments

For example, poor introspection could be one... No block scoping.

Mozilla has already implemented this with "let". After it's released into the wild for folks to see, then there will be reasons for discussions. Right now let's see what Google has learned from their own intense Javascript work; they do, after all, write a little bit of Javascript from time to time. This seems like a huge open Web tantrum to me, perhaps worthy of Project Runway. Take a deep breath, think, enjoy the ensuing complexity.

Javascript tooling is horrible and the Javascript namespace mess is awful for big projects. If we want to take the web to the next level, we might as well do it with a new (and improved) alternative.

I can do this all day ... I can do this all day ... And that's a fundamental flaw that cannot be fixed by changing interpreters/tools but needs an entirely new language how?

That's not true at all. Half of Javascript's problems come from being too flexible. You can do pretty much anything as long as you can write contorted, confusing code. That's why Dart will be able to compile into JS. It will be easier to write Dart code presumably, and the compiler will deal with all the nasty javascript. The other half of the problems are performance related because of how dynamic it is. If your server sends Dart code to the browsers that support it, and JS code to the rest, it should work the same except the Dart code will run a lot faster. That's no different than what happens when you run a modern web app in an old browser right now.

This. You absolutely do not want the groundwork and first iteration of anything designed by committee. Given that Google is writing Javascript at a level nobody else even really approaches, I trust them when they say that they have hit a wall and can do better.

This. Javascript is a fundamentally broken language.
It doesn't support classes (only prototypes), handles true/false incorrectly, and lacks block-level scoping (added with 'let', but that's hardly satisfactory). Plus the standard library is so poor that the majority of developers rely on jQuery or a similar third-party library. I support any effort to replace javascript as the language of the web.

I really wish NaCl would gain more traction. It's an incredible technology that could serve as a foundation for a next generation of internet applications, but its association with ActiveX in most people's minds prevents widespread adoption.

Yup, that's my opinion too. However, I'd be happy if javascript turned out to be a stand-in for bytecode. Make it so that it minifies well and be done with it. Coffeescript is a great example of making this work and being backwards compatible. Dart, python, whatever can all just compile/transcode to JS.

If you are going to break all the code that has ever been written (and you would have to in order to fix JS), you might as well start with a clean slate. The bottom line is that the web is a completely different place now than it was 15 years ago. Back then building websites was like building a wooden bookcase; now it's like building a car. It's absurd to try and use your woodworking tools to build a car, even if you can manage it if you try hard enough. I hate web development. Basic CRUD applications that used to be one-hour projects with Borland C++Builder now take weeks and still don't work in all browsers. I'm just excited for a change.

"...While Javascript is not intrinsically a terrible scripting language (although it doesn't hold a candle to Ruby or Python), it's flexibility and oddness compared to everything else make"

I read it more like inability, not flexibility. It has essentially no rules or structures. It has just one thing with which it tries to do everything: a hash table. It is literally saying this: here, take this hash table and do whatever.
I really don't have much else; nor did I feel the need to enact any more structures or principles. Feel free to innovate as much as you like and publish your best practices; the field is open (not for writing programs but for figuring out how to write programs in me), so everything goes. In time I am sure enough people will have banged their heads against me that 'acceptable' ways to do things will emerge in some way, shape and form. And oh, by the way, I am confident that it will happen because, you know what, you all - all of you web developers - and soon Windows developers too (talk about excitement) - are stuck with me! Lucky me.

Karmic Synergy

Some people consider the first one an advantage. And the second is simply not a fundamental language flaw that should relegate JS to the dustbin of history... Actually, most of JavaScript's most commonly raised "problems" come from developers not being able to deal with some of its flexibility, esp. those mired in OOthink. IOW, PEBKAC. So, any language that doesn't support classes is "fundamentally broken"? The standard library argument is just like the browser argument; it has nothing to do with the language itself. Lots of people "rely" on C++ libraries. That reliance means C++ sucks?

Last edited by alphadog7 on Wed Sep 14, 2011 2:39 pm
So why should Google cooperate with companies that have no incentive to help create something better, and every incentive to sabotage it? Why should they follow an impossible code of conduct that not even Javascript's own creators ever followed? The whole anti-Google rant in this article is just illogical at best, hypocritical at worst. Google is absolutely going to try to make Dart an open standard that everyone else can freely implement. It will be a standard worth having.

Some people consider the first one an advantage.

Who? Is there any other language that uses full-on dynamic scoping without blocks? BASIC maybe? JavaScript was originally intended as an "easy" scripting language that those who knew basic HTML could use to add nifty effects to their webpages. It has greatly outgrown that use, but technically it has not kept up.

You are missing the point. As a language, you still have to have carefully designed, orthogonal rules in order to be an 'effective' language. There is a reason C++ is better than Basic. It lets you do things far more elaborate and complex and weird because it has a highly sophisticated design. JavaScript is no better than Basic (let us accept that for the sake of argument, to put it in a place along the scale of 'merits' of languages) and that is why restricting all client-side development to JavaScript is a critical error. If nothing else, the field should be open to any language using a plug-in architecture; then let the best language win on its own merit, and not through a political decision. You don't need a tour, but you still need a means, a system, to get to Europe: planes, boats, cars - something concrete and dependable, with a system.

After some experience, you too would return to the same restaurants when exploration is not the goal but rather just to eat. For many, web programming is meant for routine enterprise work and not an exercise in programming philosophies.
Too many "options" in JS just get in the way. I, for example, really get bogged down with JS falsiness and truthiness when a simple boolean would cut down all the thinking one has to do to make sure it's right.

IOW, PEBKAC.

Every assumption you can make when you look at someone else's code to modify it saves time (therefore money). When you go into the code for a working C# application you can make a whole lot of assumptions about things like types and almost universally used patterns. When you go into Javascript code, you have no idea what's going to be in there. Trying to do something even semi-complex requires a lot of third-party libraries and code to add some structure, and everyone does it differently. Even within the jQuery community, for example, you will find a half dozen different patterns for doing the same exact thing. Javascript is a productivity killer. It's not that it's impossible to write in, it's just much harder than the other modern languages.

... So, any language that doesn't support classes is "fundamentally broken"? The standard library argument is just like the browser argument, it has nothing to do with the language itself. Lots of people "rely" on C++ libraries. That reliance means C++ sucks?

Yes, actually. The fact that I have to personally manage boost versions is a PITA, just to get a weak approximation of modern memory management or any of the tools necessary to write multi-threaded code for a modern system. This is one of my major complaints with C++, along with the failures of tool-ability, the ridiculous extra syntax and the terrible habits C++ coders import from C. C++ is an acceptable language for every purpose and best for none. Especially with premature optimization being the root of all evil and all. Anyway, the problem with Javascript is that it's awfully [censored] close to neutered assembly. What is the purpose of a language if it doesn't add helpful abstraction, encapsulation, modularity, verification or polymorphism?
(and yes, you can get all those things without object-orientation, and they are still valuable even if you aren't an OO programmer.) Worse than that, with its awkward approach to data structures it's less functional than a low-level language while being barely easier to write well. Never mind OO; Javascript isn't a good functional language either. It lacks the easy flexibility and dynamism of something like, yes, even LISP. When I prefer coding in a 50-year-old language with more parentheses than zits on a teenager, the language is fundamentally broken. I don't need my language to provide infinite black magic, but I would like it to at least support extensions to allow me to use the innovations the last 50 years of language design have given us. Even TCL, for goodness sake, has OO and namespaces!

LOL, best forum quote I have read in a while. Props to you, sir.

If I knew nothing about JavaScript, the most meaningful stance would still be wait-and-see. Google so far has never shoved any technology down the users' throat, unlike MS (which has become the hipsters' darling since it's been struggling to catch up with the leaders) or Apple. And with Google still living off advertising, it would be deadly to them to cut off potential customers by imposing adoption of a specific technology. Chances are, if Dart gains any traction, it will be because it's actually superior to JavaScript.

It's not that you can't do a specific thing. It's when you have to put together a big app that includes many things that doing it in javascript becomes increasingly frustrating. Besides, that argument is stupid in the way you are using it. There's nothing I can't do in assembly, but there are much better, much more efficient ways to do things.

Last edited by A.Felix on Wed Sep 14, 2011 5:39 pm

Intersystems ObjectScript... but you were probably looking for something a little more mainstream.
Prototypal inheritance is more modern than classical inheritance (classical meaning "with classes"). By modern, I mean an improvement that came later. Sure, prototypal inheritance is weird and confusing if you haven't taken the time to learn it and gain experience with it... just like classical inheritance was (this reminds me of the vicious rejection of classical OOP back in the day -- programmers always seem to fight new improvements to their art). Once you grok prototypal inheritance, going back to classes feels primitive. And people are whining for static typing? Again, they probably haven't learned how to program with dynamic typing (which therefore seems to "lack structure" to them). The lack of block scoping isn't a bad thing (though it can confuse programmers from other languages --- JavaScript borrowed the block syntax from C without also borrowing the scoping from it... perhaps not the best choice, but again, not a problem if you take the time to learn about the difference). Semicolon insertion is bad. I'll give you that. The language does have a few problems, but show me one that doesn't. And really, I think JavaScript has fewer problems than most other languages I've come across. Before anyone dismisses JavaScript, either take the time to learn it, or acknowledge that you don't know enough about it to have a truth-based opinion!

Javascript definitely has OO. It's not class-based OO, but rather prototype-based OO; that doesn't mean it's not OO.

Last edited by arcadium on Wed Sep 14, 2011 6:19 pm

Unfortunately, because of the birth and growth of JS as a web-based scripting language, most online tutorials, and much existing code, do not contain any decent software design principles. JS's flexibility, and the fact that it was used by cut-and-paste type web designers from the beginning, has hurt its reputation (and codebase) quite a bit.
If you spend some time to understand how JS actually works, and good programming best practices (Crockford is a great start), it can be an awesome language to program in. However, I do agree that tooling around it is very weak (for the same reasons...it was always used as a quick and dirty "make this part of your webpage blink" tool, rather than a programming language). But the tooling cannot be any weaker than a language that does not exist yet. Language arguments always come down to edge case scenarios anyway. "Javascript/C/C++/perl/etc etc is shit because you can't write a program to perform this incredible esoteric task" No, that isn't an accurate summary at all. The rejection here is not because of the details of Dart. The rejection is because of the procedure. The leaked email indicates that this is Google's plan: This is no way to get anyone else on board with Dart. You can already see Eich's reaction. The reactions from Microsoft and Apple will be similar (but written by PR people, so less honest or even never written at all). The basic problems are (1) that tying Dart from day one to Google websites - after development in secret - is threatening to other browser makers, and (2) that no other browser maker can afford the "Google treadmill": They cannot just take a huge pile of code and put it in their browsers, when all the engineering talent that understands and works on that code is in another company; it will take a long time to get up to speed. And by that time Google will have moved on to another new technology (NaCl?). You don't build a new standard that way. Dart can be the ultimate scripting language for all we know. That doesn't matter because Google has taken a route that will never lead to standardization. Meanwhile, the next version of JS is looking very nice, thanks to a lot of hard work from all browser vendors - including Google! Clearly Google has gotten to the size where it has internal factions with very different opinions. 
Here's hoping the right one will win.

If Google, having been the absolute leader in javascript technology with every incentive to push it, thinks it needs serious overhauling for technical reasons, maybe they're right. Comparing them to MS 'lock-in' is nonsense - MS wanted everyone to use their Operating System - Google wants people to use the internet for rich apps (oh noooooo!). I kind of don't see the downside. The comparison is ludicrous and doesn't make any sense if you think about it for more than 10 seconds.

I agree with every quoted word. Microsoft was always doing vendor locking to their OS. Google, on the contrary, will allow and promote a push of its language to all major OSes and competing browsers.

Ok. The lack of static typing turns what would be compile-time (or in javascript's case lint-time) errors into runtime errors. It also makes it a lot more difficult for the runtime or JIT to optimize. All to save a couple of keystrokes. It is a bad thing, which is why no other language other than basic and some obscure domain-specific languages use it. In practice good programmers end up using heavyweight anonymous functions for scoping rather than lightweight blocks. Bad programmers don't realize there's a problem and just have silent bugs. It's a lose-lose.

How generous of you.

Last edited by Faramir on Wed Sep 14, 2011 7:35 pm

That is normally the case - but it doesn't need to be so. You can statically analyze dynamic languages like JS and issue type warnings during 'compile time'. For example, Firefox just added in Nightlies a static-analysis type inference engine. Its goal is to speed up JS, but you can imagine where that API is exposed to your IDE. I believe that the Closure compiler also has some type inference support. Another note: "saving a couple of keystrokes" is a worthwhile goal - less code means fewer bugs. But anyhow, both static and dynamic languages have their uses. Neither is simply 'better'.

Wow, the majority of programmers who visit Ars are a little behind.
I thought most programmers had caught on by now :).

I'll take a guess that Google has some of the best and deepest knowledge of javascript in the world considering all they have been doing around it. They don't think it's a toy given the complexity and importance of the apps they have built with it. They are not just dabbling with it. They are definitely not a little behind, and I assure you they have taken the time to actually learn it. Taking Chrome and V8 into account, I don't think they disrespect the language. And you know what they have to say about it? Edit: not to mention the guy who invented it actually apologized for that.

Last edited by A.Felix on Wed Sep 14, 2011 7:12 pm

Have a look at the scripts on this very page; do the same on Gmail. Are those human readable? Or simply machine writable? Is a programming language really what the web needs? Or does it instead need a platform? Something, even if text-based and pretty-printable, that is designed to be targeted from higher-level languages? Something that accepts that sometimes there is a lot of performance to be had by assuming types of basic things like ints and floats? What else? As it is actually being used, "javascript" sucks, regardless of its many merits as a programming language.

Last edited by kruzes on Wed Sep 14, 2011 7:12 pm

That's not quite the deal he has with Google. The thing is, Google trumpets the "open" to look the part and get good PR and marketing, but there's actually very little open about the way they do things. Closed-source companies get a free pass because they never claim to be open. It's like a place at the mall offering "free back massages" but once you are in the store and get your massage, you are informed there's a cover to be paid before you leave the place. The cover, they tell you, was to enter the store, but the massage is free nonetheless.
Guys like the author call them out on that; then you say, "but why do the other establishments that charge for a back massage get a free pass?" You may or may not agree with his ideological angle, but that's the reason.

Last edited by A.Felix on Wed Sep 14, 2011 7:33 pm
http://arstechnica.com/information-technology/2011/09/critics-call-foul-as-google-takes-aim-at-javascript-with-dart/?comments=1&start=80
On Mon, 01 Oct 2012, Martin Kosek wrote:
On 10/01/2012 04:35 PM, Alexander Bokovoy wrote:
On Mon, 01 Oct 2012, Martin Kosek wrote:
On 10/01/2012 11:24 AM, Alexander Bokovoy wrote:

Hi,

The patch attached fixes a Fedora build system issue with the unified samba package (samba/samba4 packages got merged in Fedora 18 and Rawhide recently) since we depend on a wbclient.h header file path which included a versioned directory name previously (samba-4.0/ vs samba/).

I am not convinced this is a correct approach, this was failing on my Fedora 18 instance anyway:

# make rpms
...
checking for NDR... yes
checking for SAMBAUTIL... yes
checking for samba-4.0/wbclient.h... no
checking for samba/wbclient.h... no
configure: error: samba/wbclient.h not found
make: *** [bootstrap-autogen] Error 1

The problem was that the samba-devel package is no longer providing the wbclient.h header file:

# rpm -qR samba-devel-4.0.0-150.fc18.rc1.x86_64 | grep wbclient.h
#

I had a discussion with Andreas (CC-ed); the root cause was a missing libwbclient-devel package, which is the new provider of the samba-4.0/wbclient.h file. He was also not aware of the /usr/include/samba-4.0/ -> /usr/include/samba/ change. I created a new patch with the recommended approach (attached). Could you please check if it is OK? It worked for me on both Fedora 17 and 18.

ACK for your patch except one change:

@@ -214,10 +220,16 @@ Summary: Virtual package to install packages required for Active Directory trust
 Group: System Environment/Base
 Requires: %{name}-server = %version-%release
 Requires: python-crypto
+%if 0%{?fedora} >= 18
+Requires: samba-python
+Requires: samba
+Requires: samba-winbind
+%else
 Requires: samba4-python
 Requires: samba4
-Requires: libsss_idmap

Why libsss_idmap is removed? I'd assume this is a mistake.

I just moved it to the end of the Requires list so that I can group the samba Fedora-version-dependent Requires together:
...
+%else
 Requires: samba4-python
 Requires: samba4
-Requires: libsss_idmap
 Requires: samba4-winbind
+%endif
+Requires: libsss_idmap <<<<< :)

Thanks. I was not looking properly. ACK

--
/ Alexander Bokovoy
_______________________________________________
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.mail-archive.com/freeipa-devel@redhat.com/msg13182.html
GitHub.

Introduction:

If you have a surface made of multiple Bezier patches, they can be evaluated in parallel. The new shaders in DirectX 11 (Hull and Domain shaders) make this easier for graphics applications and rendering. There is a straightforward method of evaluating these patches using a simple equation. I am going to demonstrate the simple, straightforward way first (method2-bb.py) and then another way which I thought would work faster (simple-bb.py).

What you'll need:

- Python (preferably 2.7)
- NumPy
- PyOpenCL
- A graphics card which supports OpenCL. Any recent card should work fine (I used an old Nvidia 8800 GT).

OpenCL:

This is not intended to be an OpenCL tutorial, but I will be going over what I did in PyOpenCL (getting the context, setting up the input buffers, writing the kernels, getting the output, etc.). For a more detailed look into OpenCL I would suggest the OpenCL tutorials by Nvidia. I will not be going over in detail what a work-item is, what a work-group is, etc.

Quick note for CUDA users: For some reason CUDA and OpenCL use different terminology. I will be using OpenCL terminology throughout this post. So here's a quick translation for CUDA users:

- CUDA thread = OpenCL work-item
- CUDA block = OpenCL work-group
- CUDA shared memory = OpenCL local memory
- CUDA local memory = OpenCL private memory

Input:

The input to both programs is a BezierView file with the control point coordinates of all the patches. BezierView is a program developed by my graduate research lab (SurfLab) for the purposes of rendering different kinds of surfaces and viewing their properties (like Gaussian curvature, highlight lines, etc.). Here is a sample file: cube2.bv. The input method is called readBezierFile and returns a 1-D list with all the vertices in it (16 vertices per patch).

NumPy:

PyOpenCL requires the use of NumPy arrays as input buffers to the OpenCL kernels.
NumPy is a Python module used for scientific computing, and it allows better creation of multi-dimensional arrays, which are widely used for OpenCL. I won't go over in much detail what NumPy is, but I will show you how I use it.

First, we need to convert the regular Python list with all the vertices in it to a NumPy array:

import numpy
from numpy import array, empty, float32

# try to read from a file here - returns the array
vertices = readBezierFile("cube2.bv")

# create numpy array
npVertices = array(vertices, dtype=float32)

The function "array" converts a Python list to a NumPy array. The "dtype" represents the type of values we will be storing in the array (32-bit floating-point numbers in this case).

Initial Setup:

I read the file with all the OpenCL kernels in it first (bezier.cl). After that I set up the UV-buffer. Evaluating a bicubic patch requires U and V values; the more UV-value pairs there are, the more detailed the output surface is (U and V each run from 0.0 to 1.0). The UV-value generation code is:

# Array of UV values - 36 in total (detail by 0.2)
uvValues = empty((36, 2)).astype(numpy.float32)
index = 0
for u in range(0, 12, 2):  # step = 2
    for v in range(0, 12, 2):
        # convert the ints to floats
        fU = float(u)
        fV = float(v)
        uvValues[index] = [fU/10.0, fV/10.0]
        index = index + 1

The "empty" function is used for creating a NumPy array with uninitialized values. We are going to be generating an array of size 36 * 2 (UV values spaced by 0.2).

OpenCL Setup:

Before we do computation we have to do some OpenCL setup. This can be a little tedious when writing in C/C++, but PyOpenCL makes it much easier. First, we have to create an OpenCL Context. This can be done in two ways (I used the second one, for clarity):

- This simple way will automatically choose a device for you (usually the GPU if it's available). Very convenient.

ctx = cl.create_some_context()

- This way goes through all the platforms/devices and creates a context specifically with that device.
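As a side note on the UV grid built above: the nested loop can also be expressed as a vectorized NumPy construction. This is just an equivalent sketch (the variable names here are my own), producing the same 36x2 layout the kernels expect:

```python
import numpy as np

# 6 evenly spaced parameter values: 0.0, 0.2, ..., 1.0
steps = np.linspace(0.0, 1.0, 6, dtype=np.float32)

# All (u, v) combinations, with u varying slowest to match the loop's ordering
u, v = np.meshgrid(steps, steps, indexing="ij")
uv_values = np.stack([u.ravel(), v.ravel()], axis=1).astype(np.float32)

print(uv_values.shape)  # (36, 2)
```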
# Platform test
for found_platform in cl.get_platforms():
    if found_platform.name == 'NVIDIA CUDA':
        my_platform = found_platform
        print "Selected platform:", my_platform.name

for device in my_platform.get_devices():
    dev_type = cl.device_type.to_string(device.type)
    if dev_type == 'GPU':
        dev = device
        print "Selected device: ", dev_type

# context
ctx = cl.Context([dev])

Next we need to create the Command Queue:

cq = cl.CommandQueue(ctx, properties=cl.command_queue_properties.PROFILING_ENABLE)

You can just pass in the Context to the function, but I want to point out that I passed in the PROFILING_ENABLE flag, which allows us to determine how much time the operation took. This will allow us to compare the different methods.

Now finally we set up the input/output buffers to be sent to the GPU:

# memory flags
mf = cl.mem_flags

# input buffers
# the control point vertices
vertex_buffer = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=npVertices)
# the uv buffer
uv_buffer = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=uvValues)

# final output
output_buffer = cl.Buffer(ctx, mf.WRITE_ONLY, uvValues.nbytes * 2 * numPatches)

The output buffer is to be written to (WRITE_ONLY), and we specify its size in bytes; the "nbytes" property of a NumPy array gives the size of that array in bytes. For each patch we will have 36 * 4 * 4 bytes:

- 36 = number of UV pairs
- 4 = number of floating-point values output for each UV-value
- 4 = number of bytes for a float32 value

Hopefully that is clear. Now we can start evaluating!
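Since the queue is created with PROFILING_ENABLE, the event returned by a kernel launch carries nanosecond timestamps in PyOpenCL (event.profile.start and event.profile.end). A tiny helper converts those to milliseconds; a stand-in object is used below so the sketch runs without a GPU:

```python
from types import SimpleNamespace

def elapsed_ms(event):
    # PyOpenCL profiling timestamps are in nanoseconds
    return (event.profile.end - event.profile.start) / 1e6

# Stand-in for a profiled pyopencl.Event, just to exercise the helper
fake_event = SimpleNamespace(profile=SimpleNamespace(start=1000000, end=4000000))
print(elapsed_ms(fake_event))  # 3.0
```

With a real launch this would be called as elapsed_ms(exec_evt) after exec_evt.wait(), which is how the two methods can be timed against each other.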
The Python evaluation code is as follows:

# the global size (number of uv-values * number of patches): 36 * numPatches
numUVs = uvValues.size/2  # 72/2 = 36
globalSize = numUVs * numPatches
localSize = numUVs  # 36 work-items per work-group

# evaluate
exec_evt = prg.bezierEval2Multiple(cq, (globalSize,), (localSize,),
                                   vertex_buffer, uv_buffer, output_buffer)
exec_evt.wait()

First we calculate the global size (the total number of work-items needed), and then the local size (the number of work-items per work-group). The global size is the number of UV-value pairs per patch times the number of patches. We will need this information to send to the evaluation kernel. The kernel name is "bezierEval2Multiple" (apologies for the incoherent names). The kernel needs to take in the Command Queue, the global size and the local size first. After that we pass in the input and output buffers as parameters. Once the kernel is called we wait for it to finish (exec_evt.wait()).

The kernel is as follows (in bezier.cl):

__kernel void bezierEval2Multiple(__global const float4 *controlPoints,
                                  __global float2 *uvValues,
                                  __global float4 *output)
{
    // get the global id and the corresponding patch number
    int gid = get_global_id(0);

    // get the patch number - the patch info we will have to read
    int numPatch = gid/get_local_size(0);

    // get the uv values - local id goes from 0-35
    int lid = get_local_id(0);
    float2 uv = uvValues[lid];

    // evaluate each row of control points at u
    float4 b00 = evalCubicCurve(controlPoints[numPatch * 16 + 0], controlPoints[numPatch * 16 + 1],
                                controlPoints[numPatch * 16 + 2], controlPoints[numPatch * 16 + 3], uv.x);
    float4 b01 = evalCubicCurve(controlPoints[numPatch * 16 + 4], controlPoints[numPatch * 16 + 5],
                                controlPoints[numPatch * 16 + 6], controlPoints[numPatch * 16 + 7], uv.x);
    float4 b02 = evalCubicCurve(controlPoints[numPatch * 16 + 8], controlPoints[numPatch * 16 + 9],
                                controlPoints[numPatch * 16 + 10], controlPoints[numPatch * 16 + 11], uv.x);
    float4 b03 = evalCubicCurve(controlPoints[numPatch * 16 + 12], controlPoints[numPatch * 16 + 13],
                                controlPoints[numPatch * 16 + 14], controlPoints[numPatch * 16 + 15], uv.x);

    // evaluated point
    output[gid] = evalCubicCurve(b00, b01, b02, b03, uv.y);
}

For each kernel invocation we need to figure out which Bezier patch we are going to evaluate, which UV-pair we are going to use, and where we need to place our output in the output buffer. The global id tells us where to place our output. To get the patch we just divide our global id by the local size. The UV-pair we want to evaluate is given by our local id: the local size is 36, which is the number of UV-value pairs, so the local id indexes directly into the UV buffer. Finally we evaluate the Bezier patch and place the result in the output buffer. A simple kernel.

Method #2: The complex, trying-to-be-more-parallel way:

The "problem" I noticed with the first kernel is how all the work-items in each work-group have to read the same control points from global memory again and again. At this point I was still iffy on the concept of "local memory", so I read up on it and found out that reading from local memory is faster than reading from global memory (obviously). So would using local memory allow me to make the whole process faster? I decided to come up with a different way which uses a lot more work-items but where each work-item does "less" work (only simple bilinear interpolation across selected control points). This method did not turn out to be faster. My advisor told me, before I fully implemented it, that the first method would be much faster, and he was right. It turned out to be 3 times slower. In any case, here is the concept:

Bezier patches can be evaluated using what is known as De Casteljau's Algorithm (the link is for curves, but it can be easily adapted to surfaces). It basically involves linearly interpolating (bilinearly interpolating for surfaces) the control points and their results until you get the evaluated point.
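A CPU reference evaluator is useful for validating the GPU output of either method. The sketch below assumes evalCubicCurve (whose body is not shown in the post) is a standard cubic Bernstein evaluation; the function names and the test control net are my own:

```python
import numpy as np

def eval_cubic(p0, p1, p2, p3, t):
    # Cubic Bernstein form, the usual meaning of an evalCubicCurve helper
    s = 1.0 - t
    return (s**3)*p0 + 3*(s**2)*t*p1 + 3*s*(t**2)*p2 + (t**3)*p3

def eval_patch(cps, u, v):
    # cps: (16, 4) control points in row-major order, as laid out in the kernel
    rows = [eval_cubic(cps[4*i], cps[4*i+1], cps[4*i+2], cps[4*i+3], u)
            for i in range(4)]
    return eval_cubic(rows[0], rows[1], rows[2], rows[3], v)

# Sanity check: a Bezier patch interpolates its corner control points
cps = np.arange(64, dtype=np.float32).reshape(16, 4)
assert np.allclose(eval_patch(cps, 0.0, 0.0), cps[0])
assert np.allclose(eval_patch(cps, 1.0, 1.0), cps[15])
```

Comparing this against the array read back from the output buffer (e.g. with numpy.allclose) catches indexing mistakes in either kernel.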
I try to parallelize the interpolation part so it can be done faster. Here are a few images showing the steps for a UV-pair of (0.5, 0.5). First, here are the 16 control points of a patch. The next step is to do a bilinear interpolation across each "square" of 4 control points per kernel; the results are the blue points shown below (since U and V are both 0.5, each point lies in the middle of its square). We repeat the process for the resulting biquadratic patch (the results are the points in red). Finally we do one last bilinear interpolation to get the evaluated point (in green). You have probably figured it out by now, but the way this method works in parallel is that it generates 9 work-items per UV-pair. Each work-item does a bilinear interpolation and stores its result in local memory. This completes the first pass. After the first pass we need to repeat the process, but this time we only need 4 work-items. This is where I encounter a problem: the other 5 work-items don't do anything. Then for the final evaluation I only use 1 work-item, with the others not doing work.
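Before looking at the kernel, the 9 → 4 → 1 reduction just described can be sketched serially in plain Python. Each call to bilerp below is what one work-item computes in one pass; the helper names are mine:

```python
def lerp(a, b, t):
    # componentwise linear interpolation between two points
    return tuple((1.0 - t) * x + t * y for x, y in zip(a, b))

def bilerp(p00, p01, p10, p11, u, v):
    # one work-item's job: bilinear interpolation across a 2x2 block
    return lerp(lerp(p00, p01, u), lerp(p10, p11, u), v)

def de_casteljau_patch(grid, u, v):
    """grid is an n x n list of rows of control points.

    Each pass shrinks the grid by one in each direction: 4x4 -> 3x3 (9 items)
    -> 2x2 (4 items) -> 1x1 (the evaluated point).
    """
    while len(grid) > 1:
        n = len(grid)
        grid = [[bilerp(grid[r][c], grid[r][c + 1],
                        grid[r + 1][c], grid[r + 1][c + 1], u, v)
                 for c in range(n - 1)]
                for r in range(n - 1)]
    return grid[0][0]
```

De Casteljau also has linear precision, so a regular grid of control points evaluates to (u, v, 0), the same sanity check as for the direct evaluation.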
The kernel (bezierEvalMultiple) is shown below:

    __kernel void bezierEvalMultiple(int numPatches,
                                     __global const float4 *controlPoints,
                                     __global float2 *uvValues,
                                     __global float4 *output,
                                     __local float4 *local_points,
                                     __local float4 *local_points2)
    {
        // get the global id
        int gid = get_global_id(0) / get_local_size(0); // divide by 9

        // get the patch number
        int numPatch = gid / 36;

        // get the uv values
        float2 uv = uvValues[gid];

        // get the row
        int lid = get_local_id(0);
        int patchNum = numPatch * 16; // the patch to deal with
        int row = lid / 3;
        int index = row + numPatch + lid;

        // get the 4 control points you'll need
        /****************
           b10--b11
            |    |
           b00--b01
        ****************/
        float4 b00 = controlPoints[index];
        float4 b01 = controlPoints[index + 1];
        float4 b10 = controlPoints[index + 4];
        float4 b11 = controlPoints[index + 5];

        // do linear interpolation across u first
        float4 val1 = lerp(b00, b01, uv.x);
        float4 val2 = lerp(b10, b11, uv.x);
        // then across v
        float4 newPoint = lerp(val1, val2, uv.y);

        // store it in local memory
        local_points[lid] = newPoint;

        // synchronize in work-group
        barrier(CLK_LOCAL_MEM_FENCE);

        // only do it for certain work-items (now evaluating quadratic)
        if (lid < 4) {
            row = lid / 2; // 2 = degree
            float4 b002 = local_points[row + lid];
            float4 b012 = local_points[row + lid + 1];
            float4 b102 = local_points[row + lid + 3];
            float4 b112 = local_points[row + lid + 4];

            // do linear interpolation across u first
            float4 val12 = lerp(b002, b012, uv.x);
            float4 val22 = lerp(b102, b112, uv.x);
            // then across v
            float4 newPoint2 = lerp(val12, val22, uv.y);

            local_points2[lid] = newPoint2;
            barrier(CLK_LOCAL_MEM_FENCE);

            // output - only do it for one work-item
            if (lid == 1) {
                float4 val13 = lerp(local_points2[0], local_points2[1], uv.x);
                float4 val23 = lerp(local_points2[2], local_points2[3], uv.x);
                float4 newPoint3 = lerp(val13, val23, uv.y);
                output[gid] = newPoint3;
            }
        }
    }

I won't go over
much of this (you can probably figure it out), but I just wanted to mention how I synchronize between work-items (in a work-group) at each pass. You have to call barrier(CLK_LOCAL_MEM_FENCE); to ensure synchronization at each point. This method runs 3 times slower - I would guess mainly because it requires 9 times more work-items, and the straightforward multiplication in the first method is fast enough. At the very least I learned how to use local memory and synchronize between work-items.

Output: In the end we want to find out the time elapsed and read back the output. This is how it's done in Python:

    elapsed = exec_evt.profile.end - exec_evt.profile.start
    print("Execution time of test: %g " % elapsed)

    # read back the result
    eval = empty(numUVs * 4 * numPatches).astype(numpy.float32)
    cl.enqueue_read_buffer(cq, output_buffer, eval).wait()

We create an empty NumPy buffer and then call enqueue_read_buffer to read from the output buffer into the array.

Conclusion: I showed a simple example of using (Py)OpenCL for evaluating Bezier patches. The next step would be to render them using PyOpenGL, and perhaps allow user manipulation of the control points. This project helped me get a start on OpenCL and learn some of the basics. I would recommend using PyOpenCL since it allows you to write applications quickly. If you have any questions, comments or suggestions please feel free to ask.

———————————————————– GitHub Repository here ———————————————————–

I don't understand the implementation entirely, but at least for the above example you may be able to optimize it further. You can use the read_imagef() functions if the CL_R, CL_FLOAT format is supported on your system. The idea is to do the first pass just by reading pixels. After that I guess you have to use your approach (images can be either read-only or write-only, so not all the passes could be written using image operations). Hmm, now I realize the image read functions don't guarantee correct values 😦 the interpolation might be done in lower precision
https://pixelp3.wordpress.com/2011/11/13/evaluating-bezier-pyopencl/?replytocom=193
Definition of JavaFX Timer

JavaFX is known for its flexibility and ease of use. There are several classes available in it, and Timer is a class that helps in scheduling tasks that have to be executed later. Creating a new Timer object spawns a new thread that can execute a task after a specified amount of time. The developer can specify whether the timer has to run at a repeated interval or only once. The timer in JavaFX is denoted by the java.util.Timer class. Let us see more about this topic in the following sections.

Syntax:

Following is the syntax of the JavaFX timer.

    Timer timerobj = new Timer();

How to Create a Timer in JavaFX?

Similar to the Timer class, the TimerTask class has a role in the execution of the timer. It is an abstract class that implements the Runnable interface; however, it does not implement the run method itself. A subclass of TimerTask can be created that overrides the run method, which is executed when the timer fires. To put this all together, a subclass instance can be passed to Timer.schedule; otherwise, an anonymous class can also be passed to Timer.schedule.

Examples

Now, let us see some sample examples of the JavaFX timer.
Example #1: JavaFX Program to demonstrate the working of a timer with the help of a button

Code:

    //import all the relevant classes
    import java.util.Timer;
    import java.util.TimerTask;
    import javafx.application.Application;
    import javafx.application.Platform;
    import javafx.geometry.Insets;
    import javafx.scene.Scene;
    import javafx.scene.control.Alert;
    import javafx.scene.control.Button;
    import javafx.scene.control.Spinner;
    import javafx.scene.layout.HBox;
    import javafx.stage.Stage;

    //main class
    public class TimerProgramSample extends Application {
        //set the delay as 0
        int del = 0;

        public void start(Stage st) {
            UIinitialisation(st);
        }

        private void UIinitialisation(Stage st) {
            //create object for horizontal box
            HBox hb = new HBox(12);
            //set the padding
            hb.setPadding(new Insets(12));
            //create object for timer class
            Timer tm = new java.util.Timer();
            //create object for spinner class
            Spinner sp = new Spinner(1, 62, 5);
            //set the preference width
            sp.setPrefWidth(85);
            //create button
            Button b = new Button("Yayyy. . . Timer works. . .");
            //set the action event on clicking the button
            b.setOnAction(event -> {
                del = (int) sp.getValue();
                //schedule the timer
                tm.schedule(new subtimer(), del * 1000);
            });
            //get the children of horizontal box
            hb.getChildren().addAll(b, sp);
            //on close event
            st.setOnCloseRequest(event -> {
                tm.cancel();
            });
            //create a scene
            Scene sc = new Scene(hb);
            //set the title
            st.setTitle("Timer Working");
            //set the scene
            st.setScene(sc);
            //display the result
            st.show();
        }

        //subclass that extends the TimerTask
        private class subtimer extends TimerTask {
            //run method
            @Override
            public void run() {
                Platform.runLater(() -> {
                    //create object for Alert class
                    Alert al = new Alert(Alert.AlertType.INFORMATION);
                    //set the title
                    al.setTitle("Dialog box");
                    //set the header text
                    al.setHeaderText("Oh oh.. Time elapsed");
                    //create a string
                    String c;
                    //check the condition of delay
                    if (del == 1) {
                        // display that one second has elapsed
                        c = "1 sec elapsed";
                    } else {
                        c = String.format("%d sec elapsed", del);
                    }
                    al.setContentText(c);
                    al.showAndWait();
                });
            }
        }

        //main method
        public static void main(String[] args) {
            //launch the app
            launch(args);
        }
    }

Output:

In this program, all the necessary classes have to be imported. Then, the delay is set to 0. Once this is completed, the method UIinitialisation is called, as it contains the whole functionality we have to implement. In that method, an object for the horizontal box is created, and its padding is set. Once this is done, the Timer class object and Spinner class object can be created. After that, set the preferred width and create the button. As we all know, if a button is created, we have to implement its action event; that is, the functionality that has to trigger on clicking the button has to be specified. Here, the delay value is retrieved and the timer is scheduled using the subclass subtimer that extends TimerTask; its overridden run method then gets called when the timer fires. After the subclass, a scene object can be created, followed by setting the title and the scene, and displaying the result. On executing the code, a dialog box will appear as shown above. As I have selected 5 seconds and clicked the button, a dialog box as shown below appears after five seconds. Once we change the value to another value, it will be displayed as shown below.
That is, the timer functions for the new value given.

Example #2: Simple JavaFX Program to demonstrate the working of Timer

Code:

    //import all the relevant classes
    import java.util.Timer;
    import java.util.TimerTask;

    //main class
    public class TimerProgramSample {
        //main method
        public static void main(String[] args) {
            //notify that timer starts
            System.out.println("Here, it starts....");
            //create object for timer
            Timer tm = new Timer();
            //schedule the timer
            tm.schedule(new TimerTask() {
                //override run method
                @Override
                public void run() {
                    //print a message notifying about timer
                    System.out.println("Timer begins. . . .");
                }
            }, 5000);
            //timer that repeats every 2 seconds
            Timer tr = new Timer();
            //schedule the repeating timer
            tr.scheduleAtFixedRate(new TimerTask() {
                //override run method
                @Override
                public void run() {
                    System.out.println("Timer working. . . .");
                }
            }, 0, 2000);
        }
    }

Output:

In this program also, all the necessary classes have to be imported. Then, create an object for the timer and schedule it, overriding the run method to print a message notifying about the timer. Once this is done, a repeating timer is also set. Similar to the first one, it also has to be scheduled and its run method overridden. On executing the code, the result will be displayed as shown above. As this is a repeating timer, it can be stopped by clicking the red square box that terminates the program.

Conclusion

In JavaFX, Timer is a class that helps in scheduling tasks that have to be executed later. The timer in JavaFX is denoted by the java.util.Timer class. In this article, different details of the JavaFX timer, such as its working and examples, are discussed in detail.

Recommended Articles

This is a guide to JavaFX Timer. Here we discuss the definition and how to create a timer in JavaFX, along with examples. You may also have a look at the following articles to learn more –
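As a cross-language footnote (my analogy, not part of JavaFX): Python's threading.Timer from the standard library gives the same one-shot scheduling idea as java.util.Timer.schedule, and re-arming the timer after each run approximates scheduleAtFixedRate. A sketch, with class and function names of my own choosing:

```python
import threading

def schedule_once(delay_s, task):
    """One-shot timer, like Timer.schedule(task, delay)."""
    t = threading.Timer(delay_s, task)
    t.start()
    return t

class RepeatingTimer:
    """Rough analogue of scheduleAtFixedRate: re-arms itself after each run."""
    def __init__(self, interval_s, task):
        self.interval_s = interval_s
        self.task = task
        self._timer = None
        self._stopped = False

    def _run(self):
        if self._stopped:
            return
        self.task()
        # schedule the next firing, like the fixed-rate repetition above
        self._timer = threading.Timer(self.interval_s, self._run)
        self._timer.start()

    def start(self):
        self._run()  # first run immediately, like an initial delay of 0

    def cancel(self):  # like Timer.cancel()
        self._stopped = True
        if self._timer is not None:
            self._timer.cancel()
```

Note that unlike java.util.Timer, this sketch drifts slightly because each interval is measured from the end of the previous task rather than from a fixed schedule.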
https://www.educba.com/javafx-timer/
= 2003/11/25 Changed the name to File::OldSlurp Alex BATKO <abatko@cs.mcgill.ca> sent in some typo fixes. = 2003/09/04 This is the last release of File::Slurp by David Sharnoff. After this release, the File::Slurp namespace will be given to Uri Guttman <uri@stemsystems.com>. The only change in this release: a fix to the test suite 'cause Uri Guttman noticed it was buggy. = 2002/10/31 Change __DATA__ to __END__. Don't know why I had it wrong in the first place. = 2002/03/05 Changed the license to make the Debian folks happy. Changed the temporary file directory location code in the test suite to make the Win32 people happy. All changes for this release integrated by Alexander Zangerl <az@snafu.priv.at>. Thank you!
https://metacpan.org/changes/distribution/File-OldSlurp
Most of the major paradigm shifts in programming have come about from seeking ways to improve the clarity, flexibility, and reusability of written code. These paradigm shifts have basically been:

- Structuring concept: code snippets.
- Structuring concepts: single-entry/single-exit structures and modular decomposition.
- Structuring concepts: the abstract data type, the class, and the interface contract.
- Structuring concept: the "aspect", an area of concern which applies to multiple data types in an application.

Each paradigm shift has expanded our concepts of what "modularity" is, and what the "fundamental units" of coding and reuse are. AOP simply takes this to the next logical level, by addressing "aspects" rather than just "objects" as the unit of reuse.

An "aspect" is an "area of concern" that cuts across the structure of a program. For example, data storage is an aspect. User interface is an aspect. Platform-specific code is an aspect. Security is an aspect. Distribution, logging, class structure, threading... they're all aspects.

One way to look at aspects is that they reflect decisions you must make about implementing a program. In any of the pre-AOP paradigms, these would be decisions you'd have to make before you started coding, because the code that implements those decisions has to be "tangled" into all the other code. You can't start writing a web-based, ZODB-stored application using the Zope UI framework and then halfway through decide you'd really rather have a wxPython client GUI accessing a CORBA server to talk to an RDBMS. Of course, if you thought about it beforehand, you probably could have broken up your classes and code in such a way as to make parts reusable for each environment. But this adds a lot of overhead to your design process, and you'll probably end up with repeated inheritance trees and lots of little hooks and methods that call other methods just so parts can be overridden.
AOP lets you deal with each of these decisions separately, by creating an aspect (or set of aspects) for each area of concern. If you mean, what does it let your programs do that OOP doesn't, absolutely nothing. But then, neither does OOP let your programs do anything that couldn't be done with a structured program, nor does structured programming let your programs do things that you couldn't achieve through spaghetti coding... if you worked at it hard enough. But if you mean, what do you get from AOP as compared to OOP, then some of the answers are: Here's a more vivid example. Have you ever wanted to use a class library that somebody created, that had a part you didn't like (like an assumption of in-memory data structures), but which you couldn't take out because it was spread through all the classes? Or, which didn't have a part which you desperately needed (like thread-safety), but couldn't put in without changing all the classes? AOP makes it possible for people to create class families that do just one thing (focus on one "area of concern"), and then are cleanly combinable with other families that address other areas of concern. And TransWarp's implementation of AOP actually does pretty well at letting you retrofit existing Python class hierarchies into aspects, too. (It's better at adding in features than taking them out, though.) AOP is a very young technology, comparable to where OOP was about 15-20 years ago. While OOP terminology today is fairly well established and tool support is mature, AOP technology is still very much being invented. Many of the tools and languages for doing AOP are experimental or academic in nature, and there are several schools of thought on how to define aspects and do aspect weaving. Current approaches include: Most of the existing tools deal only with Java, C++, or Smalltalk. At the present time, TransWarp is the only attempt we know of to create an industrial-strength AOP solution for Python.
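Before diving into TransWarp's own notation, here is a toy illustration of the interception-style weaving idea in plain modern Python (nothing to do with TransWarp's API; all names are mine). A "logging aspect" is woven into an existing class by wrapping its public methods:

```python
def weave_logging(cls, log):
    """Wrap every public method of cls so each call is recorded in `log`."""
    for name, attr in list(vars(cls).items()):
        if callable(attr) and not name.startswith("_"):
            def make_wrapper(fname, fn):
                def wrapper(self, *args, **kwargs):
                    log.append(fname)  # the cross-cutting concern
                    return fn(self, *args, **kwargs)
                return wrapper
            setattr(cls, name, make_wrapper(name, attr))
    return cls

class Account:
    """An ordinary class, written with no knowledge of the logging aspect."""
    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount
        return self.balance

calls = []
weave_logging(Account, calls)  # weave the aspect in after the fact
```

The point of the sketch is that Account itself never mentions logging; the area of concern lives entirely in weave_logging and can be applied to many classes at once.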
In TransWarp, aspects are represented as "class families" - a set of classes which inherit from or instantiate each other - stored in an Aspect object. Aspect objects are themselves defined using Python "class" statements, or by adding two existing aspect objects together (e.g. AspectThree = AspectOne + AspectTwo). Aspects can also be created programmatically using an AspectMaker object.

A TransWarp aspect object is not a class or class family, even though it "looks like one" in source code. To create an actual class family, you must call the aspect to instantiate it. An example that demonstrates some core TransWarp features is shown below:

    from TW.Aspects import Aspect

    class PaintingObjects(Aspect):
        """An aspect describing paintable things"""

        class paintableThing:
            """Things we want to paint"""
            # no implementation here, this isn't an implementation aspect
            pass

        class Sky(paintableThing):
            name = "sky"
            color = "blue"

        class Grass(paintableThing):
            name = "tuft of grass"
            color = "green"

        class Dog(paintableThing):
            name = "dog named Sparky!"
            color = "black and white spotted"

        def __init__(self):
            self.things = self.Sky(), self.Grass(), self.Dog()

    class BasicPainter(Aspect):
        greeting = "Hello!"

        def paint(self):
            print self.greeting
            for thing in self.things:
                thing.paint()
            print

    class LazyPainter(BasicPainter):
        greeting = "Yawn...."

        class paintableThing:
            def paint(self):
                print "A %s %s would go here, if only I had the energy..." \
                    % (self.color, self.name)

    class AngryPainter(BasicPainter):
        greeting = "Oh yeah?"

        class paintableThing:
            def paint(self):
                print "What makes you think I want to paint a stupid %s %s?!" \
                    % (self.color, self.name)

    class Surrealism(Aspect):
        class Sky:
            color = "polka-dot pink on purple"

        class Grass:
            color = "tiger-striped"

        class Dog:
            color = "melting"

    # Adding aspects together makes aspects, but to make a class you must
    # call the resulting aspect to "weave" it into an actual class.
    LazyPaintingClass = (PaintingObjects + LazyPainter)()
    AngryPaintingClass = (PaintingObjects + AngryPainter)()
    SurrealAndLazyClass = (PaintingObjects + Surrealism + LazyPainter)()
    SurrealAndAngryClass = (PaintingObjects + Surrealism + AngryPainter)()

    LazyPaintingClass().paint()
    AngryPaintingClass().paint()
    SurrealAndLazyClass().paint()
    SurrealAndAngryClass().paint()

When run, you should see the following:

    Yawn....
    A blue sky would go here, if only I had the energy...
    A green tuft of grass would go here, if only I had the energy...
    A black and white spotted dog named Sparky! would go here, if only I had the energy...

    Oh yeah?
    What makes you think I want to paint a stupid blue sky?!
    What makes you think I want to paint a stupid green tuft of grass?!
    What makes you think I want to paint a stupid black and white spotted dog named Sparky!?!

    Yawn....
    A polka-dot pink on purple sky would go here, if only I had the energy...
    A tiger-striped tuft of grass would go here, if only I had the energy...
    A melting dog named Sparky! would go here, if only I had the energy...

    Oh yeah?
    What makes you think I want to paint a stupid polka-dot pink on purple sky?!
    What makes you think I want to paint a stupid tiger-striped tuft of grass?!
    What makes you think I want to paint a stupid melting dog named Sparky!?!

For more details on doing AOP with TransWarp, check out the AOPTutorial.
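The aspect-addition step above (AspectOne + AspectTwo, then a call to weave) resembles building a class from mixins. In plain modern Python, without TransWarp, the composition can be mimicked with type(); this is only an analogy to the weaving idea, and all names below are my own:

```python
def weave(*aspects):
    """Mimic 'call the sum of aspects' by composing plain classes.

    Later aspects take precedence, like Surrealism overriding colors above,
    so the bases are reversed to give the last aspect the highest priority.
    """
    return type("Woven", tuple(reversed(aspects)), {})

class PaintingObjects:
    sky_color = "blue"

class Surrealism:
    sky_color = "polka-dot pink on purple"

class LazyPainter:
    def paint(self):
        return "A %s sky would go here, if only I had the energy..." % self.sky_color

SurrealAndLazy = weave(PaintingObjects, Surrealism, LazyPainter)
```

The important difference is that TransWarp aspects can merge same-named inner classes recursively, which plain multiple inheritance does not do; this sketch only captures the attribute-override half of the story.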
http://old.zope.org/Members/pje/Wikis/TransWarp/IntroToAOP/wikipage_view
03 June 2010 05:38 [Source: ICIS news]

By Junie Lin

SINGAPORE (ICIS news)--Buyers of semi-dull nylon chips in Asia chose to stay on the sidelines for most of May, as producers are bent on pushing prices to a fresh high, market sources said on Thursday. Poor sales ensued after suppliers offered the material at $3,000-3,050/tonne (€2,460-2,501/tonne) CFR (cost and freight) China in early May, citing high costs of caprolactam - the feedstock for nylon chips. "Just three words: Business is bad," said a Taiwan-based nylon chip producer. Spot nylon chips prices remained at their 22-month high of around $2,900/tonne CFR (cost and freight) "I think $3,000/tonne CFR China is too expensive. Maybe with benzene tumbling down, $2,900/tonne CFR China seems more acceptable. But we still need to monitor the situation," said a Taiwan-based trader in Mandarin. May contract prices for caprolactam, meanwhile, were settled at about 7% higher than April's numbers at $2,700-$2,720/tonne (€2,187-2,203/tonne) CFR (cost and freight) NE (northeast) With June caprolactam contract offers announced on Wednesday still at a five-year high of $2,700 CFR NE Asia, nylon chips producers hope to push for higher prices, industry sources said. But weak spot sales of nylon chips over the past three weeks had forced some producers to reduce operating rates at plants to as low as 60%, industry sources said. Nylon 6 is widely used in the manufacture of hosiery, knitted garments, threads, ropes, filaments, nets and tire cords. Peak production at the key downstream textile sector had just ended, taking out a strong demand support for the nylon chips market, but demand coming from the tirecord segment has remained firm, industry sources said. "Let's hope June nylon chips sales would be better," said a market source.
The price direction for the material would be clearer in the next two weeks, as end-users in the textile yarn sector would "have to start buying soon as [their nylon chips] inventories are running low", said the Taiwan-based trader. Among the nylon makers based in

Source: ICIS pricing ($1 = €0.82)

For more on nylon and caprolact
http://www.icis.com/Articles/2010/06/03/9364408/asia-nylon-chips-buyers-disappear-as-suppliers-quote-3000t.html
In case you have never done unit testing, probably this article will be Chinese for you, and if you are already Chinese then it will be French :). You would probably like to read my Unit testing article first and then move ahead. Everybody bumps into partial testing now and then, and everybody has their own way of achieving it. Let me quickly recap what I mean by "partial testing", first for people who are new to it; or rather, I will say they have come across it and we just need to relate it.

"First thing, this word "Partial testing" is not an official vocabulary; at least I am not aware of it. I personally feel comfortable using it. So excuse me if I have overstepped in any way."

Now let's say you have a nice data access layer component which does CRUD (Create, Read, Update, and Delete) on your database. When each of these operations is performed by the data access layer, it needs to send an email about the activity. In simple words, the database component is dependent on the send email component. Now let's visualize a scenario. Let's say you want to unit test the database component, but the send email component is not ready or the SMTP configuration is not available. So if you hit your test cases on the database component you will get a FATAL error which does not help. In case you are new to unit testing start here: VSTS UNIT Testing. In simple words you want to only test the "database component" and "simulate" the "send email" component for now. Summing up, you want to partially test some part of your system and "simulate" the other part of your system which is half done. The need for partial testing comes in different forms. Below are six scenarios where partial testing is really helpful.

Non-available / complex environment: We all know all code needs an environment. Sometimes these environments are not available / not completed / not configured when you do testing, but we still want to test the rest of the code which is completed.
For instance in the previous introduction we wanted to test the database code but the SMTP configurations were not available. Another classic example is if you want to test ASP.NET behind code, it needs an HTTP context object, request object, and response object. These objects are created when you have a full-fledged web server like IIS. When we do unit testing there is no way we can create these objects in a simple way. Regression testing: The second place where partial testing is very much needed is “maintenance”. Maintenance is an important part of the software development life cycle. One of the important activities of maintenance is “change request”. When these change requests are applied they only change certain parts of the code. Logically you would like to only test those changed code and the impacted areas where this code is consumed. In other words you would like to do “Regression testing”. For instance you can see we have an invoicing and accounting application which is under maintenance and now let’s say we apply some change requests in the product class (invoicing section). In this case we would like to only run test cases of the product class and the impacted areas around it. But we would avoid running the accounting test cases. Because the accounting section was not impacted at all. So again here we would like to implement partial testing, i.e., we just “simulate” all the classes from the accounting component and only run the product test cases and impacted invoicing test cases. Test driven development: We are not big fans of TDD (no offense meant) but this is again one area where partial testing is much needed. In case you are new to TDD watch this video on TDD (Test Driven Development). In TDD we first start with a test case rather than the actual code. 
So we write the test case, we write enough code by which test cases pass, we add some more functionality, we test, we again add some more functionality, again test, and this continues until the logic is completed. Now in this scenario, again at a given moment of time you will have half-done code but you still want to test the complete code. Again a situation which very much calls for partial testing. Minimize testing time: Nowadays people want testing to be faster and efficient. Many times the test cases are optimized, but the business system takes their own sweet time for execution. For example you have a component which calls some other component “synchronously”. For now assume that the other component is a long running task. Now if you want to test this component, you have to also run your long running task. As the component calls the same synchronously, the test case will take a long time to execute. So if somehow you can only test the component and “simulate” the long running task, your test case can complete faster. Development dependency: You are working / testing on a code and that code needs some other component or a third-party code. This other component is half ready or not ready at all. So you would like to do partial testing on your completed code and “Simulate” the dependent code. Workflow testing: One of the indirect benefits of “Simulation” testing is behavioral testing. In simulation testing, technically, we get a full control over the methods. With this full control we get to know if the method has executed or not. So if you want to test correctness of sequence of workflows or how many number of times the method has been called etc.,“Simulation” testing helps. Frankly English is our second language, so sometimes we use our own vocabulary which makes us comfortable. But due to cultural differences (Sukesh and I stay in Asia) the other parts of the earth can mistake us saying something else. 
In the previous section we emphasized the importance of partial testing; now that's one side of the coin. To achieve partial testing we also said we need to "simulate" the other part of the code which is not available. For various reasons "mock" is the more official word used rather than "simulate". So henceforth I would like to swim with the tide, and we will call "simulation" "mock". VS 2012 has implemented partial testing by using "Fakes", so that's one more vocabulary to add to the chaos. But whatever term you use, the goal is the same. So we will term partial testing as "mock" testing henceforth in this article, to maintain consistency with the widely accepted vocabulary.

Quick note: The MOQ testing team has chosen the word Mock because it's a more widely accepted terminology; read here for more details. Below is a simple word from the MOQ team.

"But I also think the generic concept of a 'mock' in the general sense is used much more extensively than 'stub' or even 'fake'" - Mr. Daniel Cazzulino, MOQ testing team.

Now the good news is that we have lots and lots of tools for mock testing, including the one introduced in VS 2012, Fakes. But to keep this article short and sweet we will be choosing two of them: MOQ (open source) and JustMock (paid). In a future article I will talk about some restrictions of the open-source MOQ which compelled us to talk about the paid mock tool JustMock. You can download MOQ from and JustMock from

Before we start mock coding with any of the tools above, let's get into a standard structure for writing unit test cases, i.e., the AAA style. Almost all mock testing tools follow this structure. This will make our tests look neat, tidy, and readable. All unit test cases can be visualized in three larger sections: Arrange, Act, and Assert.

Arrange: Make the setup ready for the test cases.

    Maths obj = new Maths(); // Arrange

Act: Execute the test case.

    int total = obj.Add(10, 10); // Act

Assert: Check if the test case passed.
    Assert.AreEqual(20, total); // Assert

Let's create a simple sample code for MOQ testing first, and then we will do one for JustMock. Here's what we will do. We have a simple customer business class as shown in the code below. This class has a simple function called SaveRecord, which saves the customer data to the database.

Before the customer data is saved to the database, it sends an email using the SendEmail function. Let's assume for now that SendEmail is not configured because we do not have the SMTP details in place. So we would like to mock the SendEmail function and test the SaveRecord function.

    public class ClsCustomerBAL
    {
        public virtual bool SendEmail()
        {
            ....
            ....
        }

        public bool SaveRecord(string strCustomerName)
        {
            this.SendEmail(); // This line throws an error
            using (SqlConnection objConnection = new SqlConnection(ClsCustomerBAL.ConnectionString))
            {
                SqlCommand objSelectCommand = objConnection.CreateCommand();
                objSelectCommand.CommandText = "Insert Into TblCustomer(CustomerName) values(@CustomerName)";
                objSelectCommand.Parameters.AddWithValue("@CustomerName", strCustomerName.Trim());
                objConnection.Open();
                objSelectCommand.ExecuteNonQuery();
            }
            return true;
        }
    }

Below is the code for MOQ and JustMock in the AAA style for the above described ClsCustomerBAL class. The code for Act and Assert is pretty plain and self-explanatory; the only part that needs explanation is the Arrange section.
MOQ:

    // Arrange
    Mock<ClsCustomerBAL> target = new Mock<ClsCustomerBAL>();
    target.Setup(x => x.SendEmail()).Returns(true);

    // Act
    string strCustomerName = "MOQ Test Data";
    bool returnvalue = target.Object.SaveRecord(strCustomerName);

    // Assert
    Assert.AreEqual(true, returnvalue);

JustMock:

    // Arrange
    ClsCustomerBAL target = new ClsCustomerBAL();
    bool called = true;
    Mock.Arrange(() => target.SendEmail()).DoInstead(() => called = true);

    // Act
    string strCustomerName = "Just Mock Test Data";
    bool returnvalue = target.SaveRecord(strCustomerName);

    // Assert
    Assert.AreEqual(true, returnvalue);

Let's first cover MOQ. The first step is to create the mock instance using Mock<T>. Below is the code for MOQ:

    Mock<ClsCustomerBAL> target = new Mock<ClsCustomerBAL>();

For JustMock you can create a simple object, or you can use Mock.Create as well:

    ClsCustomerBAL target = new ClsCustomerBAL();
    // or
    ClsCustomerBAL target = Mock.Create<ClsCustomerBAL>();

Once the mock object is created, we need to specify how the SendEmail function should be mocked. Below is the MOQ code which specifies that the SendEmail function should just return true and avoid running the actual logic:

    target.Setup(x => x.SendEmail()).Returns(true);

Below is the JustMock code for how the SendEmail function is mocked. You can see the word Arrange, which is very much in line with the thought of "AAA". In JustMock we have the DoInstead function, which takes a delegate as input and defines what should happen instead:

    bool called = true;
    Mock.Arrange(() => target.SendEmail()).DoInstead(() => called = true);

Visuals are visuals. Below is a simple video which demonstrates a simple sample of MOQ. If you watch the previous example closely you can see that the SendEmail function is public and virtual. Now, as a developer, you would like to follow OOP principles and make the SendEmail method private rather than exposing it publicly.
If you make it private and compile, you hit an error with MOQ, because the method is no longer public virtual:

public class ClsCustomerBAL
{
    private bool SendEmail()
    {
        ....
    }

    public bool SaveRecord(string strCustomerName)
    {
        this.SendEmail(); // MOQ can no longer mock this call
        ....
    }
}

In simple words, MOQ does not allow mocking of "private" methods. The reason given is as follows: "Testing private methods is a bad taste for Mock purist testers. From a mock testing perspective we just test the public API, which is one logical closed unit. What happens internally is a black box, and isolating or mocking those sections is not logical. But if you think those individual methods should be tested separately, then probably they deserve a separate class."

This argument sounds completely logical. We are concerned only about the public API which gives us the output, and that's what we should be testing as one logical unit. But I still personally feel mocking private methods is worthwhile when you are looking at minimizing testing time or simulating complex environments. So if some of your private methods are taking too much time, mocking is probably a good idea.

As MOQ has this limitation, or rather this is by design, we can use JustMock instead. Below is a simple sample code in AAA style:

Customer obj = new Customer();
bool called = false;
Mock.NonPublic.Arrange(obj, "Validate").DoInstead(() => called = true);
obj.Add();
Assert.IsTrue(called);

The Validate method is a private method in the Customer class. By using the NonPublic API we can intercept Validate and replace its behavior:

Mock.NonPublic.Arrange(obj, "Validate").DoInstead(() => called = true);

Now that we can intercept calls to a method, we can also achieve a nice byproduct called "behavior verification". In the code under test above, let's say we want to ensure that the email is sent first and the customer is inserted into the database later. In simple words, we want to test the sequence in which these methods are called.
Below is a simple code to test behavior using MOQ. First create the mock object:

int order = 0;
Mock<ClsCustomerBAL> target = new Mock<ClsCustomerBAL>();

Attach a callback with an increment on a counter which will check the sequence in which the methods are fired. For example, in the code below SendEmail increments the order counter and checks its value, and we have done the same for SaveRecord. For SendEmail the order counter value should be zero, and for SaveRecord it will be 1:

target.Setup(d => d.SendEmail()).Callback(() => Assert.AreEqual(order++, 0));
target.Setup(d => d.SaveRecord()).Callback(() => Assert.AreEqual(order++, 1));
target.Object.SaveAll();

Below goes the code for JustMock (note the counter must start at 1 for these asserts to pass):

int counter = 1;
ClsCustomerBAL target = new ClsCustomerBAL();
Mock.Arrange(() => target.SendEmail()).DoInstead(() => { Assert.AreEqual(1, counter++); });
Mock.Arrange(() => target.InsertCustomer(Arg.IsAny<string>())).DoInstead(() => { Assert.AreEqual(2, counter++); });

You can download the complete source for the article from here. All the above code and inputs were given by Sukesh. Without him I would be slogging for nights figuring out things. You can visit my website for .NET interview questions and SQL Server interview questions.
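For readers coming from Python, the same two ideas, replacing a method during Arrange and verifying the order of calls, can be sketched with the standard library's unittest.mock. The CustomerBAL class below is a hypothetical analogue of ClsCustomerBAL, not part of either framework.

```python
from unittest.mock import MagicMock, call

class CustomerBAL:
    """Hypothetical Python analogue of ClsCustomerBAL for illustration."""
    def send_email(self):
        raise RuntimeError("SMTP not configured")   # the call we want to avoid

    def insert_customer(self, name):
        raise RuntimeError("no database available")  # likewise

    def save_record(self, name):
        self.send_email()
        self.insert_customer(name)
        return True

# Arrange: attach both mocks to one parent so a single call
# sequence is recorded, which lets us assert the order later.
target = CustomerBAL()
parent = MagicMock()
target.send_email = parent.send_email
target.insert_customer = parent.insert_customer

# Act
result = target.save_record("Test Data")

# Assert: the record "saved", and the email went out before the insert.
assert result is True
assert parent.mock_calls == [call.send_email(), call.insert_customer("Test Data")]
```

Unlike MOQ, unittest.mock patches instance attributes directly, so there is no virtual/private restriction here; the comparison is only about the AAA shape and the order check.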
https://www.codeproject.com/articles/479318/6-important-use-of-partial-mock-testing
Readers/writer locks are generally used to protect data that is frequently searched. Readers/writer locks can synchronize threads in this process and other processes if they are allocated in writable memory and shared among cooperating processes (see mmap(2)), and are initialized for this purpose. Additionally, readers/writer locks must be initialized prior to use.

rwlock_init() initializes the readers/writer lock pointed to by rwlp. type may be one of the following:

USYNC_PROCESS
The readers/writer lock can synchronize threads in this process and other processes. The readers/writer lock should be initialized by only one process. arg is ignored. A readers/writer lock initialized with this type must be allocated in memory shared between processes, i.e. either in Sys V shared memory (see shmop(2)) or in memory mapped to a file (see mmap(2)). It is illegal to initialize the object this way and not to allocate it in such shared memory.

USYNC_THREAD
The readers/writer lock can synchronize threads in this process only. arg is ignored.

rw_rdlock() gets a read lock on the readers/writer lock pointed to by rwlp. If the readers/writer lock is currently locked for writing, the calling thread blocks until the write lock is freed. Multiple threads may simultaneously hold a read lock on a readers/writer lock.

rw_tryrdlock() tries to get a read lock on the readers/writer lock pointed to by rwlp. If the readers/writer lock is locked for writing, it returns an error; otherwise, the read lock is acquired.

rw_wrlock() gets a write lock on the readers/writer lock pointed to by rwlp. If the readers/writer lock is currently locked for reading or writing, the calling thread blocks until all the read and write locks are freed. At any given time, only one thread may have a write lock on a readers/writer lock.

rw_trywrlock() tries to get a write lock on the readers/writer lock pointed to by rwlp. If the readers/writer lock is currently locked for reading or writing, it returns an error.
rw_unlock() unlocks a readers/writer lock pointed to by rwlp, if the readers/writer lock is locked and the calling thread holds the lock for either reading or writing. One of the other threads that is waiting for the readers/writer lock to be freed will be unblocked, provided there are other waiting threads. If the calling thread does not hold the lock for either reading or writing, no error status is returned, and the program's behavior is unknown.

If successful, these functions return 0. Otherwise, a non-zero value is returned to indicate the error.

The rwlock_init() function will fail if:

EINVAL
type is invalid.

The rw_tryrdlock() or rw_trywrlock() functions will fail if:

EBUSY
The reader or writer lock pointed to by rwlp was already locked.

These functions may fail if:

EFAULT
rwlp or arg points to an illegal address.

See attributes(5) for descriptions of the following attributes:

mmap(2), attributes(5)

These interfaces are also available by way of:

#include <thread.h>

If multiple threads are waiting for a readers/writer lock, the acquisition order is random by default. However, some implementations may bias acquisition order to avoid depriving writers. The current implementation favors writers over readers.
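The semantics described above, any number of concurrent readers or a single exclusive writer, can be sketched in Python. This is a toy illustration of the rules only, not the Solaris implementation, and unlike the implementation described in the notes it does not favor writers.

```python
import threading

class RWLock:
    """Toy readers/writer lock: many readers OR one writer (illustration only)."""
    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0       # count of threads holding a read lock
        self._writer = False    # True while a write lock is held

    def rd_lock(self):
        with self._cond:
            while self._writer:             # block while a writer holds the lock
                self._cond.wait()
            self._readers += 1

    def wr_lock(self):
        with self._cond:
            while self._writer or self._readers:  # writers need exclusivity
                self._cond.wait()
            self._writer = True

    def unlock(self):
        with self._cond:
            if self._writer:
                self._writer = False
            else:
                self._readers -= 1
            self._cond.notify_all()          # wake any waiting readers/writers
```

A single-threaded walkthrough: two rd_lock() calls succeed together, both are released with unlock(), and only then can wr_lock() be acquired.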
http://backdrift.org/man/SunOS-5.10/man3c/rw_unlock.3c.html
Kirome Thompson (5,350 points)

Help writing a time_machine function that takes an int and a string of 'minutes', 'hours', 'days' or 'years'

Write a function named time_machine that takes an integer and a string of "minutes", "hours", "days", or "years". This describes a timedelta. Return a datetime that is the timedelta's duration from the starter datetime.

import datetime

starter = datetime.datetime(2015, 10, 21, 16, 29)

# Remember, you can't set "years" on a timedelta!
# Consider a year to be 365 days.

## Example
# time_machine(5, "minutes") => datetime(2015, 10, 21, 16, 34)

def time_machine(num, time):
    if time == 'years':
        time = 'days'
        num = 365
    return starter + datetime.timedelta(**{time: num})

1 Answer

Louise St. Germain (19,398 points)

Hi Kirome, I think the issue is that if the user submits "years" to your function, you've hard-coded the number of days to 365, no matter how many years they specify. It should work if you use num *= 365 (multiply the original num by 365 to turn years into days) instead of just num = 365. Hope it works after this!

Kirome Thompson (5,350 points)

Hi Louise, thanks very much for the help, it worked! :) I appreciate it
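Putting Louise's answer together, the corrected function (a sketch of one passing solution) looks like:

```python
import datetime

starter = datetime.datetime(2015, 10, 21, 16, 29)

def time_machine(num, time):
    # timedelta has no "years" keyword, so convert years to days first
    if time == "years":
        time = "days"
        num *= 365  # scale the original num instead of overwriting it
    return starter + datetime.timedelta(**{time: num})
```

Now time_machine(2, "years") adds 730 days rather than a fixed 365, while the other units pass straight through to timedelta.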
https://teamtreehouse.com/community/help-writing-a-timemachine-function-that-takes-an-int-and-a-string-of-minutes-hours-days-or-years
'We're living in an age of unprecedented scientific progress. 'Every aspect of our lives 'is shaped by the latest discoveries and innovations. 'For me, science is one of the greatest achievements of humankind - 'a gift given to us by God. 'But there are many who see me as misguided - 'they say my religious faith has become invalid. 'It's an outdated way of thinking that doesn't fit 'in a scientific world of hard evidence and binary logic.' 'There is something insidious about training children to believe things 'for which there's no evidence.' 'Rosh Hashanah, the Jewish New Year, is when we commemorate 'the creation of the universe and its God-given wonders. 'It's a good time to challenge the assumption 'that science and religion cannot co-exist. 'I'm about to meet three non-believing scientists, 'each working at the frontier of scientific discovery. 'A neurologist. 'A theoretical physicist. 'And the evolutionary biologist who leads the scientific war on religion. 'My mission is not to convert - that's not the nature of my faith.' What I hope to show is that belief in God doesn't require a suspension of our critical faculties. And that together, religion and science CAN make a great partnership. 'For centuries, religion and science stood happily side-by-side, 'but in the last few decades, that relationship has broken down. 'You'd be forgiven for thinking they were never on speaking terms. 'As we face the challenging problems of the 21st century, 'I think we need to reopen the dialogue between science and religion. 'In my latest book, I've written a letter to scientists 'like Richard Dawkins, who use science to argue 'that there is no God.'
Well, I've written it to somebody who believes that because we live in an age of science, there's no need for religion any more, someone who believes that you have to be sad, mad or bad to believe in God, or practise a religious faith, that religion is immature, it's primitive, something that we have no need of, something that belongs to a bygone age. I believe that religion is being misrepresented. In my letter, I hope to show that religion is about answering questions that science cannot. It's about...how to live. What kind of world we want to create. How we relate to the ultimately unknowable. Those things are not scientific things. I want to show them that science and religion CAN work together, SHOULD work together, because they're actually two quite different ways of thinking and we need them both. Science takes things apart to see how they work, religion puts things together to see what they mean. But what I believe is about to be put to the test. I'm about to meet three non-believing scientists. I don't know what they're going to say and there are bound to be points on which we differ. Will I get them to agree that science and religion need not be opposing forces? I'm hoping to express my view that God made us in his own image. He marked us out from other animals by giving us free will, morality and conscience. It's precisely these aspects of the human mind that are under scrutiny by modern neuroscientists. My first encounter is with a neuroscientist from Oxford University. Now is the time when we really need to understand more than ever before how the brain is working. Baroness Susan Greenfield has pioneered research into how the human brain generates consciousness. How does the objectivity get converted? How do ordinary old brain cells, ordinary old chemicals, how do they suddenly get into a scenario where you have this subjective sensation no-one else can share? 
It's an impossible but very exciting issue, but I think at the moment it's something we can... think about almost as philosophers... rather than expect scientists to come along with a tidy little experiment. 'So far, science has been unable to explain how human consciousness is generated, 'or even what it is.' Science HAS to be impersonal -and consciousness has to be personal. -Yes. Aren't we at that point when we reach consciousness and the self - or what used to be called the soul - aren't we reaching the very limits of science? The big problem is not so much that we're saying, "We're scientists "and we're going to butt out of this." It's more, if I said to you, "I've discovered, John... I've just discovered how the brain generates consciousness." What do you expect me to show you? We don't even know. -No idea. -No, exactly. See, that's the problem - until we actually know what kind of answer, what kind of... thing or solution are we supposed to come up with, only then can you bring the machinery of scientific method to deal with it. 'Not only is science not able to explain human consciousness, 'it doesn't even know what type of question to ask. 'For me, it's religion, not science, that speaks of choice, freedom 'and responsibility - things that make us human. 'With neuroscience and religion competing over territory, 'Susan's work is at the front line of the battle 'between science and religion.' So are science and religion destined always to conflict? Absolutely not, I really don't think that is doing any service to science. Science is all about... having curiosity, having an open mind and challenging EVERYTHING. Challenging everything. So my own view is that you can have two seemingly incompatible things, that explain the same phenomena and you can do both - you can use both and it doesn't matter. So as a neuroscientist, I'm quite happy dealing with the subjective of my friend who is now convinced that God is with him. 
At the same time, one can talk about changes in brain connectivity and how experience leaves its mark on the brain. I don't think that one has to have both things completely reconciled. I think you can have the two sides to the same coin - it doesn't invalidate the coin. Do you think that science might not be the only way of seeing the world? Might...induce a little bit of humility into science?! See, science is now the alpha male of the intellectual world. Religion used to be, and, heaven help us when religion loses its humility. Yeah, I remember Michael Faraday, the great scientist, he had a lovely quote - he said, "There's nothing quite as frightening as somebody who knows they're right." I think that sometimes one sees among some scientists complete intolerance, complete intransigence, complete conviction that you're right and everyone else is wrong, and what real science is about, is about having an open mind, a really open mind to things. My own view is that if you have a very rigid way of approaching... And this might apply to religion as well, then perhaps you're not going to progress or have the same insight, as if you just question everything and as I say, the whole trick is to ask the question rather than know all the answers. So would you buy the proposition that religious people ought to have respect for science and that scientists ought to have respect for religion? I would say, that all people ought to have respect for all other people and I think respect is something, er, that we can't have enough of and that irrespective of whether you're religious or scientist, or just a human being, that clearly having respect for others is a very good starting point in life. 'I find Susan's approach very encouraging. 'For her, science isn't competing with religion. In its quest 'to understand how our minds work, 'neuroscience isn't attempting to replace faith. 
'But there is another area of science which some claim 'IS encroaching on religion's territory - 'that it challenges the idea of God the Creator. 'Within the last few years, 'physicists have been making remarkable advances 'in finding scientific explanations for the origins of the universe. 'Just this year, they believe they've discovered the Higgs boson, 'the so-called "God particle".' This is the particle that explains why all the other particles are the way they are. And by particles, I mean the very fundamental building blocks of everything in the universe. 'Professor Jim Al Khalili is at the forefront of transforming 'our understanding of the universe. 'He's also an atheist. 'What will he make of my mission 'to get science and religion to work together?' Would I be right in thinking that there's a division of labour here? I mean, religious people are interested in whodunnit and WHY done it and scientists are interested in HOW done it. I guess from a scientist's perspective, a non-religious scientist's perspective, the why may not be as important as the how, because for me, the laws of nature, the laws of physics and the reason the universe is the way it is, are just there. In religion, you're looking for a reason behind it. For me, the universe just happens by accident, it doesn't have meaning or, or purpose, or a need... for a grand designer. Do you think that the success of cosmology thus far in explaining how the universe began has put religion on the defensive? To some extent, yes. I mean, what we've learnt in the last century... You know, 100 years ago, we didn't know that our galaxy was just one of billions of other galaxies, we didn't know the extent of the universe of reality, and you know, when you say science can no longer explain... Well, that's where religion comes in, in a naive sense, it's.. This is the extent that science can answer, and what science has been able to do is push that boundary back. You know, we are now... 
We believe we understand a lot about the Big Bang itself, and theoretical physicists are even now beginning to ask the question of whether there was something BEFORE the Big Bang, that caused our universe to come into existence. So, in that area of science, I do wonder whether religion feels it's on the back foot as it retreats, as science encroaches on what was religion's territory, and I guess... How do YOU feel about that? Do you think that's true? Yeah, I think that there was this view that has been called "God of the gaps". -Yes. -So God explains whatever science can't explain. -Right. -And that means that every great advance in science is... -Squeezes that. -..seen as a retreat for religion. I think the whole "God of the gaps" theory is crazy and incompatible with the religion that I believe in - the religion of the Bible, which is, that God, creating us in his image, wanted us to use our critical intelligence to understand the universe, to understand Creation, and therefore the more we understand, the more we wonder at the greatness of God and the universe and the smallness of us. So I see every advance for science as an advance for religion as well. That, I think, is where scientists and religious believers come closest together. We're very small, the universe is very big, and the miracle is that it's here, we're here and we're beginning to understand it. I do that all the time. I don't... I don't praise a higher intelligence, in the way that you do, but I acknowledge the wonder of the universe and the way it is the way it is. I try to understand it, I know I'm a very, very long way from being able to do that, but I, I guess like you, daily struggle to understand it. Despite our conflicting views on how the universe was created, ultimately, Jim and I are united in our shared awe at its wonder. 'So far I've spoken to non-believing scientists 'who've been prepared to engage in a productive dialogue with me. 
'But I'm less certain about the outcome of my next encounter. 'I'm about to meet Britain's most vocal atheist 'and I know I am going to be challenged about the very nature of my faith. 'Richard Dawkins is an evolutionary biologist 'who first made his name 36 years ago with his seminal book, The Selfish Gene. 'Since then, he has achieved worldwide fame for his militant atheism. 'His best-selling book, The God Delusion, 'was a virulent attack on religion. 'For him, the supernatural aspects of religious belief 'are an affront to science.' We can never say that there definitely is no fairy, er... and that's the way I feel about God. God has the same status as fairies. 'It's not my intention to convert Richard Dawkins.' I just want to see if he's willing to admit that there's more to life than science and more to religion than ignorance and superstition. 'We're meeting in the hallowed halls of the Royal Society - 'the institute dedicated to the pursuit of scientific excellence. 'Its motto, "Nullius in verba" - ' "take no-one's word for it" - is at the very heart 'of the discipline of science. 'This could be seen as the opposite of faith, 'but, for me, religion at its best involves asking questions 'and challenging conventional assumptions. 'Will Richard see that we have something in common? 'I've asked him to read a letter he once wrote to his daughter.' "Dear Juliet, now that you're ten I want to write..." 'It offers her a life lesson about the importance of thinking for yourself. "Next time somebody tells you something that sounds important, "think to yourself, 'Is this the kind of thing that people probably know because of evidence " 'or is it the kind of thing that people -" 'only believe because of tradition, authority or revelation?' " -Mmm. "And next time somebody tells you that something is true, "why not say to them, 'What kind of evidence is there for that?' 
"And if they can't give you a good answer, "I hope you'll think very carefully before you believe a word they say." She was ten years old at the time and I wanted to do the opposite of indoctrinate her, I wanted to ask her -to think for herself. So, er, what would you say, for instance, about the Jewish tradition? The first duty of a Jewish parent to a Jewish child is to teach them to ask questions. Admirable. That's exactly what the first duty seems to me should be. Er, I would hope then that the parent would answer the questions on the basis of evidence rather than on the basis of tradition or scripture - that might be where we differ. 'It is indeed the nature of my religion 'for tradition and scripture to play a central role. 'I believe the Bible records events that actually happened, 'like God talking to Abraham, arguing with him, challenging him. 'God really did intervene in human history.' You don't really believe that Abraham talked to God and God bargained with him. This is some kind of symbolic parable that you're talking about. It's clearly a parable and the argument between God and Abraham is God giving Abraham a seminar in how to be a Jewish parent. Teach your child to argue, teach your child to challenge. I get the feeling that theologians, whether Jewish or Christian, almost don't bother to distinguish between that which is symbolic and that which is literal. Tell me, when your daughter was ten, did you teach her theories or tell her stories? Well, you make a good point which is that there are times when stories get across a point better than telling it literally. -And when civilisation was in its childhood... -..you tell it as stories. -Yes, yes, there's a lot to be said for parables, certainly, but what I want to know, and I always want to know this from theologians, Christian or Jewish, is do you actually think it happened? Do you actually think that Abraham did truss Isaac on an altar and then let him off an altar? 
I definitely think that something happened that made Jews value their children more than in any other civilisation I know. I really think that God wanted Abraham, and Jews from that day to this, to know one thing above all others - don't sacrifice your children. Virtually every other culture in the ancient world sacrificed its children. It's entirely admirable that these moral lessons should become enshrined in the culture of any people, and it's entirely admirable... Especially, it seems to have been enshrined in Jewish culture in a very big way. -Something interesting happened in Jewish history... ..which led to these admirable things, but...I actually care about what's historically true. So do I. Yes, but do you think that Abraham really did truss Isaac on an altar? -I don't... -I want to know whether you think it is literally true? Well, first of all, I think that story is a protest against the belief throughout the ancient world, -that parents own their children. -Yes, indeed. And I think God is saying, -"Don't think you own this one." -"No Jew owns his or her child. "They have a life of their own, they have a mind of their own," and that is what I am reading from all these stories. These things happened, but they didn't happen as mere facts. They happened as morally instructive lessons, whose full import we still haven't learnt, because we are still allowing children to die every single day of malnutrition in the 21st century. We're still sacrificing our children. OK, I thoroughly applaud your statement that parents don't own their children and I would extend that to we should not as a society make the assumption that a child belongs to the same religion as its parents, which we virtually all do. We assume that children will automatically be labelled with the religion of their parents, and I think that is wicked and it goes with all the things you've just been saying about the wickedness of, er, what we do to children. 
'It's a point on which Richard and I will never agree. 'For me, we have to give our children an identity, a heritage, 'a story of which they are a part. 'Will Richard have more time for a recent study from Harvard University 'that offers evidence that religion can be a force for good?' Religious people are more likely than secular people to give money to charity, er, to do voluntary work, -to give money to a homeless person... -I've seen... -Would that be evidence? -Yes, it would. -I mean, I've seen counter evidence to that. -Yeah. -It is disputed. Even if that were true, it doesn't bear in any way on the truth of religious claims about the universe, which is what I care about. You can't say that because I have evidence that religious people are more likely to give blood or give money to charity, therefore what they believe about God or the Trinity, or whatever it might be, is more likely to be true. It has nothing to do with it. 'Richard Dawkins is renowned for proselytising about the damage religion can do, 'but he's also acknowledged that, in the wrong hands, 'science can be just as terrifying.' You actually said I think, very wisely and courageously, that when you take Darwinism and turn it into a social philosophy, it becomes very dangerous. It can become very dangerous, and if you take it...especially if you take it in a naive way, it can become... it can become Nazism. If we based our politics on a naive interpretation of Darwinism, we'd be living in a kind of, erm... -..Darwinian universe... -..in which the strong eliminate the weak. And I've frequently argued against that. I've frequently said I'm a passionate Darwinian, when it comes to understanding how we got here, but I'm a passionate anti-Darwinian -when it comes to deciding what kind of society we want to live in. So, erm, I just wonder since that you say that Darwin is one of the great...
I mean, the greatest scientist in recent centuries, and at the same time you point out the way that Darwin has been misused, and you don't let the fact that it's been misused compromise your admiration for Darwin. Could you not also understand that in certain ways, -religion has been misused... -..and that that should not compromise at least some of us admiring and respecting the greatness of the great religions? Yes, I agree that it has been misused, I think what I would say, however, is that an unquestioning faith, and I accept that Judaism is a bit unusual in... because questioning is favoured, but an unquestioning faith justifies somebody who says, "I don't have to argue with you, I don't have to give you my reasons. "My faith tells me that X is the right thing to do." Now, if a child is brought up to think that faith trumps evidence, or trumps reason, then that child could be equipped to do something truly terrible. This is precisely what I think is the common ground between us. I don't minimise the differences. The common ground between us is that you and I are committed -to question... -..to the use of critical intelligence, to valuing human rights and the dignity of the human person and you acknowledge that there have been times when science has been misused, -but the answer to bad science is not no science... -..it's good science. And I acknowledge that religion has sometimes been misused, but I argue that the answer to bad religion is good religion not no religion. -And so even though there is this gap between us, you are not religious and I am and I'm not seeking to change you on this, could we not work together to value human rights, human dignity, where we engage in the collaborative pursuit of truth? Yes, it's clear that we could. I mean, it's clear that people of goodwill, wherever they're coming from, could and should work together.
Science can be hideously misused - indeed if you want to do terrible things, you'd better use science to do it, because that's the most efficient way to do anything. 'Religion and science have been set up as polar opposites, 'but it appears that Richard Dawkins and I 'might have found a way to work together.' So, Richard, if I can sum up our conversation, despite clearly major differences between us, I think we've found major areas of agreement and commonality - a respect for truth, openness, a willingness to question, and the collaborative pursuit of knowledge for its own sake. And you've agreed that as we think our way through the very challenging problems of the 21st century, a conversation between us might give both of us humility, but might give both of us a fresh perspective. Now if we can actually to walk hand in hand towards the future on that basis, I think that's a tremendous source for both optimism and hope. I'll go along with that. Amen to that. -Thank you. -Thank you very much. 'I feel that we've made a real breakthrough. 'It's the first time I've ever heard Richard be so open 'to my position on science and religion. 'Well, I think that was a bit of an epiphany.' You know, he met me more than halfway and I actually felt something of the magic of the power of a conversation - when two people really open to one another and that allows each of us to move beyond our normal positions. I really think that's what happened. And if it is really so, and I believe it is, that we do have so much in common, then that is a very strong argument for saying that there can be a great partnership between religion and science. 'All too often, science and religion are set up as mutually exclusive, 'but through meeting three non-believing scientists, 'it feels to me that despite our differences, we have much in common. 'And through conversation, 'we may discover we're united in a desire to pursue a common good.' I see no conflict between religion and science. 
Science tells us about the origin of life, religion tells us about the purpose of life. Science explains the world that is. Religion summons us to the world that ought to be. On Rosh Hashanah, the Jewish New Year, we rededicate ourselves to the idea that God created us in love and forgiveness, asking us to love and forgive others. Add that to science and it equals hope. Subtitles by Red Bee Media Ltd
https://subsaga.com/bbc/religion-and-ethics/2012/rosh-hashanah-science-vs-religion.html
Update: SAP's recommended tool for developing SAPUI5 apps is SAP Web IDE, so I would suggest you start with Web IDE instead of Appbuilder. More info: SAP Web IDE – Enablement

Today I registered for the SMP 3 trial available from SAP via SCN, and spent a few hours developing a simple hybrid app. To make my life easy I used SAP Appbuilder to develop my first hybrid app; Appbuilder helped me design screens quickly with its drag and drop feature. It also helped to deploy the UI5 app to devices via SMP 3 with a Cordova container and SAP Kapsel plugins. For those who want to try the solution, I am explaining the steps here.

Requirements:

- Register for SMP 3 Trial, Registration – CloudShare Pro
- Register for NWGW trial,
- Download and install SMP 3 SDK (tested with SMP 3 SP01) from
- Download and install Mobile SDK SP05 or SAP Service Market Place
- Install Cordova in your system (version 3.1.0-0.2.0), Apache Cordova
- Install SAP Appbuilder, SAP Development Tools for Eclipse
- Install Android SDK, Android SDK | Android Developers

Configuring a Kapsel Application in SMP

... and leave the remaining fields at their defaults.

4. Click on the "Authentication" tab and create a new profile with the name "HTTP" (in the image it shows HTTP under existing profiles since I created a profile earlier). Click the "New" button, select HTTP/HTTPS Authentication, and click "Create"; it opens a new dialog. Provide only the control flag and URL as given below and leave the other fields at their defaults. The URL is

5. Click on the "Save" button.

6. Finally, to test the backend connection, select the app connection and then click on "Ping". The response will be a successful message as given below.
Was your backend URL ready for use, or did you in some way expose it via SAP WebGUI? I'm really having a hard time figuring out how to make my OData service visible in a browser. I don't have SAP NetWeaver Gateway installed, so I can't import the service into SAP Service Builder because I don't have authorization to create a project there. Do you know any way to fix this another way?

In the example I am using the NWGW trial, which has some sample OData services. I am using the OData service . Once you register for the NWGW trial you will be able to use this service for testing. You can test this URL from a browser; it gives you the list of collections associated with the service. If you don't have NWGW, you have to use the toolkit for integration gateway (GWPA) to model the service and deploy it to integration gateway. This example might help you with it: How to use Integration Gateway with SMP 3.0 (Part 1)

Midhun VP

When I do a ping for the new application this message pops up: Backend system cannot be reached:::Root cause:::Exception during connection execute:

Could you give details on the steps you performed?

The same steps you did. Anyway, I made the ping successfully with another OData service, but in the second part I still have a problem with Appbuilder connecting to the SMP onboarding service. I read this has to do with Google Chrome, but I still couldn't find any solution for that. How did you fix this?
Open the file datajs-1.1.0.min.js using any text editor (C:\appbuilder-1.0.1251\lib\onyx\util) and add the code below:

if (!Document.prototype.createAttributeNS) {
  Document.prototype.createAttributeNS = function(namespaceURI, qualifiedName) {
    var dummy = this.createElement('dummy');
    dummy.setAttributeNS(namespaceURI, qualifiedName, '');
    var attr = dummy.attributes[0];
    dummy.removeAttributeNode(attr);
    return attr;
  };
}
if (!Element.prototype.setAttributeNodeNS) {
  Element.prototype.setAttributeNodeNS = Element.prototype.setAttributeNode;
}

Midhun VP

Do you know the password of that Windows server machine in the cloud? By mistake, it has locked. 🙁

Use the option "can't access your account" 🙂 from the login page.

I was talking about the Windows system being locked out. It was asking for some password. Anyway, I got it; I had to close the link and open it again.

Midhun VP

2 questions:
1. I followed the same steps and got an error while pinging the application connection ID in the SMP cockpit (installed on my machine): Backend system cannot be reached:::Root cause:::Exception during connection execute: Then I tried the same in SMP cloud and it did work. What could be the reason for this error? I am able to open this URL in my machine's browser with a valid ID and password.
2. I tried to run AppBuilder in the Chrome browser. After doing the SMP settings (help menu), when I click on "Retrieve" on the "SMP connection profile" page, I don't see any messages in the Service and Request section. I have added the syntax as you mentioned in your previous reply.

Double-check the input once. Make sure you are using the public IP of your SMP 3 cloud; note that the IP is dynamic, so if you close the SMP cloud and reopen it, the IP changes. Check this from a public network if the above didn't work.

Midhun VP

The network from which you are trying to connect is not allowing a connection to the mentioned service. So try it from a public network.

Midhun VP

Yes, I have cross-checked everything once again.
The public IP address is the same as what I mentioned earlier. Do I have to comment out this line of code? But does it make sense to check/ping the public IP in CMD?

Hi Jitendra, were you able to solve the above error? I am facing the same issue. Let me know if you have solved it.

Hi, I have a problem when I am testing my app in Appbuilder's emulator. As you can see below, when I develop my superlist I can see my data, but when I choose Run, no data is displayed. Before / After. How is this possible? My service is created with Integration Gateway and everything is exactly as in your tutorial. I don't have any other error. Thanks

Could you add the key field on the screen and check? For example, in an OData service a field would be defined as the key; in my example BusinessPartnerID is the key.

Midhun VP

I checked this but the problem still remains. I see the key in the preview of the superlist but I can't see it in Appbuilder's emulator. Angeliki

Could you create a new application and check? I faced a similar issue, but after adding the key field to the superlist it worked. I am not sure about the root cause, though.

OK, I will create a new one! Just to be sure, when you say key field, do you mean ID in my case?

No, I mean the field you defined as primary key when you created the OData models using GWPA, or the primary key in an existing OData service. It can be the ID too, if you defined it so.

Ping failed with the following message:

Hi Umang, if you are accessing an external HTTPS link through a proxy server, make sure you enable the proxy details as per the document below: How to configure SMP 3.0 to use proxy to access external resource. Rgrds, JK

Hi Jitendra, thanks for the reply. The above document shows how to use the proxy; that I understood, but I don't know how to create that proxy. Regards, Umang Patel

umang patel

Are you using any proxy in the browser under proxy server?
For Chrome: Settings > Advanced settings > Change proxy settings > Connection > LAN settings

Yes Jitendra, it's there, with address proxy.xxx.com and port 8080.

Then you can set these proxies in the props.ini file available in the server installation path (E:\SAP\MobilePlatform3\Server):

-Dhttp.proxyHost=proxy.xxx.com
-Dhttp.proxyPort=8080
-Dhttps.proxyHost=proxy.xxx.com
-Dhttps.proxyPort=8080

Stop the services and regenerate the SMP mobile device as discussed in the document.

Hi Midhun, the link you have given for CloudShare is not working: Registration – CloudShare Pro. I am also not able to log in when I go directly to the CloudShare site. Please suggest the next step. Thanks
https://blogs.sap.com/2014/05/27/developing-smp-3-mobile-app-using-appbuilderkapselsmp-3-trial/
Dear all,

I am building a wireless musical interface using three XBees and two Teensy 3.2s. When I try to send analog potentiometer values using the Arduino XBee library, I run into huge (+10 second) latency and errors. Why is this happening? Has anyone encountered this problem before?

My wireless hardware setup involves:
- One XBee 3 module set up as a coordinator running in API 1 mode (without escapes). This module is connected to a SparkFun USB explorer, which is connected to my MacBook.
- Two XBee 3 modules set up as routers running in transparent mode.
- Two Teensy 3.2s connected to the XBee routers via two Teensy XBee adapters.

All XBee 3 modules are configured to run the 802.15.4 firmware.

Everything works fine when I am simply running two modules in transparent mode and just printing serial values. Everything also works fine when I am running the two routers in transparent mode and one coordinator in API mode and sending unchanging integer values using the Arduino XBee library. The latency/error issue arises when I am reading and sending a variable potentiometer value using the Arduino XBee library. Numbers received on my MacBook are totally mangled and often take 5-10 seconds to update and correspond to the physical potentiometer position.

I've pinpointed the issue to the code running on the Teensys. It seems to me as if the XBee library somehow can't keep up with the rate of transmission. Has anyone ever encountered this problem before? I'm going to wait a couple of days and then attempt to bypass the library and write my own packet-sending functions on the Teensy to see if this solves the issue. Any advice is appreciated. My Teensy code is attached below for reference.
Thanks in advance,
Bernard Le Chevre

Code:

#include <XBee.h>

XBee xbee = XBee();
uint8_t payload[] = {0, 0};
int pot;

// 16-bit addressing: Enter address of remote XBee, typically the coordinator
Tx16Request tx = Tx16Request(0000, payload, sizeof(payload));

void setup() {
  // Startup delay
  delay(5000);
  // Begin HW serial
  Serial1.begin(115200);
  xbee.setSerial(Serial1);
}

void loop() {
  // Break down 10-bit analog reading into two bytes and place in payload
  pot = analogRead(A0);
  payload[0] = pot >> 8 & 0xff;
  payload[1] = pot & 0xff;

  // Send to coordinator
  xbee.send(tx);
  delay(10);
}
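For reference, the byte packing done in the Teensy sketch above can be checked off-device. The following is a small illustrative sketch (TypeScript, not part of the original post; the function names are hypothetical) showing how a receiver reading the coordinator's serial stream would recombine payload[0] and payload[1] back into the 10-bit reading:

```typescript
// splitPot mirrors the sender's byte packing (high byte, low byte);
// joinPot is what the receiving side would do with the two payload bytes.
function splitPot(pot: number): [number, number] {
  return [(pot >> 8) & 0xff, pot & 0xff];
}

function joinPot(hi: number, lo: number): number {
  return (hi << 8) | lo;
}

// A 10-bit ADC reading (0..1023) survives the round trip unchanged.
const reading = 837;
const [hi, lo] = splitPot(reading);
console.log(joinPot(hi, lo)); // 837
```

If the values arriving on the host side do not round-trip like this, the corruption is happening in transmission rather than in the packing itself.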
https://forum.pjrc.com/printthread.php?s=5e1ac855e1e4bf5681d6ba180c673fa1&t=60134&pp=25&page=1
Ruby XML Roundup: Hpricot 0.7, Stable Libxml-ruby and Nokogiri

Ruby's XML story has improved lately with a small arms race between the XML libraries Nokogiri, Hpricot and libxml-ruby.

Nokogiri was released last fall, and is based on the native libxml2 and libxslt:

Since Nokogiri leverages libxml2, consumers get (among other things) fast parsing, i13n support, fast searching, standards based XPath support, namespace support, and mature HTML correction algorithms.

Nokogiri also provides features such as searching with XPath and CSS selectors, and is supported on 1.9.1. After some benchmarks showed Nokogiri to be in the lead when it comes to performance, Hpricot's maintainer _why put effort into improving the library and recently released version Hpricot 0.7:

Please enjoy a succulent, new Hpricot. A bit faster, some Ruby 1.9 support, and assorted fixes. [..] I'm sure you're wondering what's the reason for Hpricot updates, in the face of heated competition from the Nokogiri and LibXML libraries. Remember that Hpricot has no dependencies and is smaller than either of those libs. Hpricot uses its own Ragel-based parser, so you have the freedom to hack the parser itself, the code is dwarven by comparison. Best of all, Hpricot has run on JRuby in the past. And I am in the process of merging some IronRuby code[1] and porting 0.7 to JRuby. This means your code will run on a variety of Ruby platforms without alteration. That alone makes it worthwhile, wouldn't you agree?

Finally, libxml-ruby was released as version 1.0 with:

* Ruby 1.9.1 support
* Out of the box support for OS X 10.5 and MacPorts [..]
* A nice, clean API that makes it easy to do simple things, but provides all the power of libxml2 if you need it

The latest version is 1.1.3, which was released with a crucial improvement:

Working through the options one-by-one, I finally found the culprit, an obscure field in the structure:

int dictNames : Use dictionary names for the tree

What this setting controls is whether libxml2 uses a dictionary to cache strings it has previously parsed. Caching strings makes a big difference, so by default it should be enabled. That is now the case with libxml-ruby 1.2.3 and higher.

With this change, libxml-ruby now runs at about equal performance as Nokogiri.
https://www.infoq.com/news/2009/03/xml-roundup
Holy cow, I wrote a book!

Consider the following program:

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

int array[10000];

int countthem(int boundary)
{
 int count = 0;
 for (int i = 0; i < 10000; i++) {
  if (array[i] < boundary) count++;
 }
 return count;
}

int __cdecl wmain(int, wchar_t **)
{
 for (int i = 0; i < 10000; i++) array[i] = rand() % 10;

 for (int boundary = 0; boundary <= 10; boundary++) {
  LARGE_INTEGER liStart, liEnd;
  QueryPerformanceCounter(&liStart);
  int count = 0;
  for (int iterations = 0; iterations < 100; iterations++) {
   count += countthem(boundary);
  }
  QueryPerformanceCounter(&liEnd);
  printf("count=%7d, time = %I64d\n", count, liEnd.QuadPart - liStart.QuadPart);
 }
 return 0;
}

The program generates a lot of random integers in the range 0..9 and then counts how many are less than 0, less than 1, less than 2, and so on. It also prints how long the operation took in QPC units. We don't really care how big a QPC unit is; we're just interested in the relative values. (We print the number of items found merely to verify that the result is close to the expected value of boundary * 100000.)

Here are the results:

To the untrained eye, this chart is strange. Here's the naïve analysis: When the boundary is zero, there is no incrementing at all, so the entire running time is just loop overhead. You can think of this as our control group. We can subtract 1869 from the running time of every column to remove the loop overhead costs. What remains is the cost of running the count increment instructions.

The cost of a single increment operation is highly variable. At low boundary values, it is around 0.03 time units per increment. But at high boundary values, the cost drops to one tenth that.
What's even weirder is that once the count crosses 600,000, each addition of another 100,000 increment operations makes the code run faster, with the extreme case when the boundary value reaches 10, where we run faster than if we hadn't done any incrementing at all!

How can the running time of an increment instruction be negative?

The explanation for all this is that CPUs are more complicated than the naïve analysis realizes. We saw earlier that modern CPUs contain all sorts of hidden variables. Today's hidden variable is the branch predictor.

Executing a single CPU instruction takes multiple steps, and modern CPUs kick off multiple instructions in parallel, with each instruction at a different stage of execution, a technique known as pipelining.

Now, when a conditional branch is encountered, the processor could just sit there and let the pipeline sit idle until the branch/no-branch decision is made, at which point it knows which instruction to feed into the pipeline next. But that wastes a lot of pipeline capacity, because it will take time for those new instructions to make it all the way through the pipeline and start doing productive work.

To avoid wasting time, the processor has an internal branch predictor which remembers the recent history of which conditional branches were taken and which were not taken. The fanciness of the branch predictor varies. Some processors merely assume that a branch will go the same way that it did the last time it was encountered. Others keep complicated branch history and try to infer patterns (such as "the branch is taken every other time").

When a conditional branch is encountered, the branch predictor tells the processor which instructions to feed into the pipeline. If the branch prediction turns out to be correct, then we win! Execution continues without a pipeline stall. But if the branch prediction turns out to be incorrect, then we lose!
All of the instructions that were fed into the pipeline need to be recalled and their effects undone, and the processor has to go find the correct instructions and start feeding them into the pipeline.

Let's look at our little program again. When the boundary is 0, the result of the comparison is always false. Similarly, when the boundary is 10, the result is always true. In those cases, the branch predictor can reach 100% accuracy.

The worst case is when the boundary is 5. In that case, half of the time the comparison is true and half of the time the comparison is false. And since we have random data, fancy historical analysis doesn't help any. The predictor is going to be wrong half the time.

Here's a tweak to the program: Change the line

if (array[i] < boundary) count++;

to

count += (array[i] < boundary) ? 1 : 0;

This time, the results look like this:

The execution time is now independent of the boundary value. That's because the optimizer was able to remove the branch from the ternary expression:

; on entry to the loop, ebx = boundary
  mov edx, offset array          ; start at the beginning of the array
$LL3:
  xor ecx, ecx                   ; start with zero
  cmp [edx], ebx                 ; compare array[i] with boundary
  setl cl                        ; if less than boundary, then set cl = 1
  add eax, ecx                   ; accumulate result in eax
  add edx, 4
  cmp edx, offset array + 40000  ; loop until end of array
  jl $LL3

Since there are no branching decisions in the inner loop aside from the loop counter, there is no need for a branch predictor to decide which way the comparison goes. The same code executes either way.

Exercise: Why are the counts exactly the same for both runs, even though the dataset is random?

A customer wanted to know whether it was okay to call FindNextFile with the same handle that returned an error last time. In other words, consider the following sequence of events:

h = FindFirstFile(...);
FindNextFile(h, ...);

The customer elaborated: Suppose that the directory contains four files, A, B, C, and D.
We expect the following:

FindFirstFile returns A
FindNextFile returns B
FindNextFile fails (C is selected but an error occurred)
FindNextFile returns D ← is this expected?

We asked the customer what problem they're encountering that is causing them to ask this strange question. The customer replied, "Sometimes we get the error ERROR_FILE_CORRUPT or ERROR_INVALID_FUNCTION, but we don't know what end-user configurations are causing those error codes. We would like to know whether we can continue to use FindNextFile in these two cases."

(ERROR_FILE_CORRUPT is the case of drive corruption, and ERROR_INVALID_FUNCTION is some sort of device driver error state, perhaps because the device was unplugged.) You should just accept that you cannot enumerate the contents and proceed accordingly.

Remember, Microspeak is not merely for jargon exclusive to Microsoft, but it's jargon that you need to know.

The term brownbag (always one word, accent on the first syllable) refers to a presentation given during lunch. The attendees are expected to bring their lunch to the meeting room and eat while they listen to the presentation. A brownbag could be a one-off presentation, or it could be a regular event. The speaker could be an invited guest, or the presenters may come from within the team.

In general, the purpose of a brownbag is to familiarize the audience with a new concept or to share information with the rest of the team. Sometimes attendance is optional, sometimes attendance is mandatory, and sometimes attendance is optional but strongly recommended, which puts it in the murky category of mandatory optional.

You can learn more about each team's plans in brownbags that we will kick off the week of 2/17 and continue regularly through the month.

Are you going to the brownbag? I'm heading to the cafeteria, want to come along?
It is common for the slides accompanying a brownbag to be placed on a Web site for future reference. Sometimes the presentation is recorded as well.

The term brownbag is sometimes extended to mean any presentation which introduces a group of people to a new concept, whether it occurs at lunch or not.

Virtual brownbag on widget coloring.

That's the (redacted) subject of a message I sent out to our team. The message described the process you have to go through in order to get a widget coloring certificate. It could have been a brownbag but I was too lazy to book a room for it, so I created a virtual brownbag.

Due to scheduling conflicts, we will have to move the presentation to Friday at noon. We apologize for the last-minute change. This is now really a brownbag, so grab your lunch in the cafeteria and join us for a great talk and discussion!

The above is another example of how the term brownbag was applied to something that, at least originally, was not a lunch meeting.

The CF_HDROP clipboard format is still quite popular, despite being limited to files. You can't use it to represent virtual content, for example. You can, however, add the CFSTR_FILE_ATTRIBUTES_ARRAY format to your data object. This contains the file attribute information for the items in your CF_HDROP, thereby saving the drop target the cost of having to go find them. So many programs consume the CF_HDROP format that changing from CF_HDROP to some other format would be impractical.

We then declare a static DROPFILES structure which we use for all of our drag-drop operations. (Of course, in real life, you would generate it dynamically, but this is just a Little Program.)

We added a new data format, CFSTR_FILE_ATTRIBUTES_ARRAY, and we created a static copy of the FILE_ATTRIBUTES_ARRAY variable-length structure that contains the attributes of our one file. Of course, in a real program, you would generate the structure dynamically.
Note that I use a sneaky trick here: Since the FILE_ATTRIBUTES_ARRAY ends with an array of length 1, and I happen to need exactly one item, I can just declare the structure as-is and fill in the one slot. (If I had more than one item, then I would have needed more typing.)

To make things easier for the consumers of the FILE_ATTRIBUTES_ARRAY, the structure also asks you to report the logical OR and logical AND of all the file attributes. This is to allow quick answers to questions like "Is everything in this CF_HDROP a file?" or "Is anything in this CF_HDROP write-protected?" Since we have only one file, the calculation of these OR and AND values is nearly trivial.

Non-classical linking is even more advanced than non-classical physics. Whereas special relativity lets you stretch and slow down time, non-classical linking even gives you a limited form of time travel.

Prerequisite reading: Closing over the loop variable considered harmful. JavaScript has the same problem. Consider:

function hookupevents() {
  for (var i = 0; i < 4; i++) {
    document.getElementById("myButton" + i)
        .addEventListener("click", function() { alert(i); });
  }
}

The most common case where you encounter this is when you are hooking up event handlers in a loop, so that's the case I used as an example. No matter which button you click, they all alert 4, rather than the respective button number.

The cumbersome part is fixing the problem. In C#, you can just copy the value to a scoped local and capture the local, but that doesn't work in JavaScript:

function hookupevents() {
  for (var i = 0; i < 4; i++) {
    var j = i;
    document.getElementById("myButton" + i)
        .addEventListener("click", function() { alert(j); });
  }
}

Now the buttons all alert 3 instead of 4. The reason is that JavaScript variables have function scope, not block scope. Even though you declared var j inside a block, the variable's scope is still the entire function.
In other words, it's as if you had written

function hookupevents() {
  var j;
  for (var i = 0; i < 4; i++) {
    j = i;
    document.getElementById("myButton" + i)
        .addEventListener("click", function() { alert(j); });
  }
}

Here's a function which emphasizes this "variable declaration hoisting" behavior:

function strange() {
  k = 42;
  for (i = 0; i < 4; i++) {
    var k;
    alert(k);
  }
}

The function alerts 42 four times because the variable k refers to the same variable k throughout the entire function, even before it has been declared.

That's right. JavaScript lets you use a variable before declaring it.

The scope of JavaScript variables is the function, so if you want to create a variable in a new scope, you have to put it in a new function, since functions define scope.

function hookupevents() {
  for (var i = 0; i < 4; i++) {
    var handlerCreator = function(index) {
      var localIndex = index;
      return function() { alert(localIndex); };
    };
    var handler = handlerCreator(i);
    document.getElementById("myButton" + i)
        .addEventListener("click", handler);
  }
}

Okay, now things get weird. We need to put the variable into its own function, so we do that by declaring a helper function handlerCreator which creates event handlers. Since we now have a function, we can create a new local variable which is distinct from the variables in the parent function. We'll call that local variable localIndex. The handler creator function saves its parameter in localIndex and then creates and returns the actual handler function, which uses localIndex rather than i so that it uses the captured value rather than the original variable.

Now that each handler gets a separate copy of localIndex, you can see that each one alerts the expected value.

Now, I wrote out the above code the long way for expository purposes. In real life, it's shrunk down quite a bit.
For example, the index parameter itself can be used instead of the localIndex variable, since parameters can be viewed as merely conveniently-initialized local variables.

function hookupevents() {
  for (var i = 0; i < 4; i++) {
    var handlerCreator = function(index) {
      return function() { alert(index); };
    };
    var handler = handlerCreator(i);
    document.getElementById("myButton" + i)
        .addEventListener("click", handler);
  }
}

And then the handlerCreator variable can be inlined:

function hookupevents() {
  for (var i = 0; i < 4; i++) {
    var handler = (function(index) {
      return function() { alert(index); };
    })(i);
    document.getElementById("myButton" + i)
        .addEventListener("click", handler);
  }
}

And then the handler itself can be inlined:

function hookupevents() {
  for (var i = 0; i < 4; i++) {
    document.getElementById("myButton" + i)
        .addEventListener("click", (function(index) {
          return function() { alert(index); };
        })(i));
  }
}

The pattern (function (x) { ... })(y) is misleadingly called the self-invoking function. It's misleading because the function doesn't invoke itself; the outer code is invoking the function. A better name for it would be the immediately-invoked function, since the function is invoked immediately upon definition.

The next step is to change the name of the helper index variable to simply i so that the connection between the outer variable and the inner variable can be made more obvious (and more confusing to the uninitiated):

function hookupevents() {
  for (var i = 0; i < 4; i++) {
    document.getElementById("myButton" + i)
        .addEventListener("click", (function(i) {
          return function() { alert(i); };
        })(i));
  }
}

The pattern (function (x) { ... })(x) is an idiom that means "For the enclosed block of code, capture x by value." And since functions can have more than one parameter, you can extend the pattern to (function (x, y, z) { ... })(x, y, z) to capture multiple variables by value.

It is common to move the entire loop body into the pattern, since you usually refer to the loop variable multiple times, so you may as well capture it just once and reuse the captured value.

function hookupevents() {
  for (var i = 0; i < 4; i++) {
    (function(i) {
      document.getElementById("myButton" + i)
          .addEventListener("click", function() { alert(i); });
    })(i);
  }
}

Maybe it's a good thing that the fix is more cumbersome in JavaScript. The fix for C# is easier to type, but it is also rather subtle. The JavaScript version is quite explicit.

Exercise: The pattern doesn't work!

var o = { a: 1, b: 2 };
document.getElementById("myButton")
    .addEventListener("click", (function(o) {
      return function() { alert(o.a); };
    })(o));
o.a = 42;

This code alerts 42 instead of 1, even though I captured o by value. Explain.

Bonus reading: C# and ECMAScript approach solving this problem in two different ways. In C# 5, the loop variable of a foreach loop is now considered scoped to the loop. ECMAScript code name Harmony proposes a new let keyword.

Mark wants to know why Windows has never supported having a taskbar on more than one monitor. (The question was asked before Windows 8 multi-monitor taskbar support became publicly known.)

The feature has always been on the list, but it's a long list, and specifically the cost of designing, implementing, testing, performing usability tests, then redesigning the feature (because you will definitely need to redesign something as significant as this at least once) historically prevented it from escaping the minus-100-point deficit.

Features do not exist in a vacuum, and decisions about features necessarily take into account the other features under consideration. For a feature to be adopted, it not only must be valuable enough in itself, but it also must provide a better cost/benefit ratio than any other features under consideration.
While the benefit of a multi-monitor taskbar is high, you have to scale it down by the percentage of users who would be able to take advantage of such a feature. I don't know the exact numbers, but I would hazard that fewer than ten percent of users use a multiple-monitor system on a regular basis, so any benefit would have to be ten times as great as the benefit of features that have broader use.

On top of that, the development cost of a multiple-monitor taskbar is significantly higher than most other taskbar features. Just the compatibility constraints alone make you shudder. (Think about all the programs that do a FindWindow looking for the taskbar and assuming that there is only one.)

What changed in Windows 8 that made a multiple-monitor taskbar a feature worth implementing? I don't know, but I can guess. First of all, the overall engineering budget for the taskbar may have been raised, so that more features from the list can make the cut. Or maybe the people in charge of the taskbar decided to go with their gut and ignore the numbers, implementing a feature specifically targeting the enthusiast community even though the work would not be justified if you went strictly by the cost/benefit analysis. By doing this, they ended up short-changing other features which were perhaps more worthy if you looked at the numbers. And then you'd be asking, "Why didn't you do feature Y? I mean, it would have been far more useful to far more people than the multiple-monitor taskbar."

(Of course, now that I mentioned Windows 8, everybody will treat this as open season to post their complaints here.)

A customer had a question about the MSDN documentation on rules for legal file names:

My employees keep naming documents with hyphens in the name. For example, they might name a file Budget-2012-Final.xlsx. It is my position that hyphens should not be used in this way, and the document should be named Budget 2012 Final.xlsx.
Please advise on the use of hyphens within file names.

Hyphens inside file names are legal, and you can use as many as you like, subject to the other rules for file names. If you are having an argument with your employees about file naming conventions, that's something you just need to work out among yourselves. Whatever you decide, the file system will be there for you.

Today's Little Program reports the desktop wallpaper via the IDesktopWallpaper interface. You can call IDesktopWallpaper::GetWallpaper and specify nullptr as the monitor ID. The call succeeds with S_OK if the same wallpaper is shown on all monitors (in which case the shared wallpaper is returned). It succeeds with S_FALSE if each monitor has a different wallpaper.

And that's it. You can juice up this program by asking for wallpaper positioning information, and if you are feeling really adventuresome, you can use the SetWallpaper method to change the wallpaper.
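The let keyword mentioned in the bonus reading of the closure discussion is now standard ECMAScript. The following is an illustrative sketch (not from the original posts; plain callbacks stand in for DOM event listeners) showing how block scoping makes the whole capture-by-value dance unnecessary:

```typescript
// With var, all four callbacks close over the same function-scoped i
// and see its final value. With let, each loop iteration gets a fresh
// binding, so each callback captures its own value.
function withVar(): Array<() => number> {
  const handlers: Array<() => number> = [];
  for (var i = 0; i < 4; i++) {
    handlers.push(() => i);
  }
  return handlers;
}

function withLet(): Array<() => number> {
  const handlers: Array<() => number> = [];
  for (let i = 0; i < 4; i++) {
    handlers.push(() => i);
  }
  return handlers;
}

console.log(withVar().map(h => h())); // [4, 4, 4, 4]
console.log(withLet().map(h => h())); // [0, 1, 2, 3]
```

In other words, `for (let i = ...)` gives you for free what the immediately-invoked function pattern had to construct by hand.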
http://blogs.msdn.com/b/oldnewthing/default.aspx?PageIndex=3
TypeScript is a free, open-source programming language developed and maintained by Microsoft. It is a strict superset of JavaScript and adds optional static typing and class-based object-oriented programming to the language.

TypeScript can be helpful to React developers in a number of ways. For example:

- Interfaces: TypeScript allows you to define complex type definitions in the form of interfaces. This is helpful when you have a complex type that you want to use in your application, such as an object which contains other properties. This results in strict checks, which in turn reduces the number of possible bugs you might have produced without it.
- IDEs: TypeScript is very helpful when using IDEs like Visual Studio, Visual Studio Code, Atom, WebStorm, Eclipse, and many more, as they provide better autocomplete and snippet generation, which makes development faster.
- Readable, easily understandable code: The key to TypeScript is that it's statically typed. Programming languages can be either statically or dynamically typed; the difference is when type checking occurs. Statically typed languages' variables are type-checked, which helps make the code more readable.

Getting started

To install TypeScript globally so you can call it when needed, run:

npm install -g typescript

To verify the installed TypeScript version, use the tsc -v command:

tsc -v
//Version 2.6.1

Now, create a folder that will hold the application. To do this, run npm init and follow the instructions:

//create a new directory
mkdir typescript-react
//change directory to the new folder
cd typescript-react
//run npm init
npm init

Configuring TypeScript

Given no arguments, tsc will first check tsconfig.json for instructions. When it finds the config, it uses those settings to build the project.
Create a new file called tsconfig.json in the root folder and add:

    {
      "compilerOptions": {
        "target": "es6",
        "jsx": "react",
        "module": "commonjs"
      },
      "exclude": [
        "node_modules"
      ]
    }

This defines the two major sections, compilerOptions and exclude:

- In the compiler options, a target of es6 has been set. This means that the JavaScript engine target will be set to es6, and the module will be set to CommonJS. Notice that there is also a key called jsx, which is set to react. This tells TypeScript to compile JSX files as React files. This is similar to running tsc --jsx react.
- In the exclude block, node_modules is defined. TypeScript will not scan the node_modules folder for any TypeScript file while compiling.
- If you are familiar with TypeScript and its configuration, you may wonder why the include section is missing. This is because webpack will be configured to take in the entry file, pass it to TypeScript for compilation, and return a bundled executable for browsers.

Configuring webpack

First, you need to install webpack and a webpack plugin called ts-loader. To do this, run:

    npm install webpack ts-loader

Now you may wonder what ts-loader is. As its name implies, ts-loader is the TypeScript loader for webpack. You wouldn't be wrong to say it's a plugin that helps webpack work well with TypeScript. Just like TypeScript, webpack also checks for a file called webpack.config.js for configuration. So create a new file called webpack.config.js and add:

    var path = require("path");

    var config = {
      entry: ["./app.tsx"],
      output: {
        path: path.resolve(__dirname, "build"),
        filename: "bundle.js"
      },
      resolve: {
        extensions: [".ts", ".tsx", ".js"]
      },
      module: {
        loaders: [
          { test: /\.tsx?$/, loader: "ts-loader", exclude: /node_modules/ }
        ]
      }
    };

    module.exports = config;

The code base above is simple. Let me explain. The first key, entry, refers to the entry point of the application. The app.tsx file referenced here is yet to be created.
You will create it soon, hold on. The output key is an object that accepts two parameters: the first is the path to publish bundled files, while the second is the name of your final bundle. The resolve key is also an object that takes in a key called extensions with an array of extensions it should watch out for and compile. The module key, which is the last key to discuss, is an object that has a key called loaders. The loaders key is an array of objects that defines which webpack plugin/loader should handle which file. Here, we test for tsx extensions and ask webpack to use the ts-loader installed earlier for the compilation.

Adding npm scripts

After everything configured so far, wouldn't it make sense if you could just run a command like npm run magic anytime you want to create a bundle? Yes, it would. So, open your package.json file and update your scripts section with:

    "scripts": {
      "magic": "webpack"
    }

Creating the app.tsx file

Typically, that's most of the configuration needed. There's just one more rule to follow for all the configuration you have done above: install type definitions for every library you install. For example, you install react this way:

    npm install react @types/react

Install both react and react-dom to start with:

    npm install react react-dom @types/react @types/react-dom

Next, create a new file called app.tsx in the root and add:

    import * as React from "react";
    import * as ReactDOM from "react-dom";

    ReactDOM.render(
      <div>
        <h1>Hello, Welcome to the first page</h1>
      </div>,
      document.getElementById("root")
    );

Above is a simple React setup, except that it is using TypeScript. Move ahead and compile the file by running npm run magic in your terminal. A build folder with a file named bundle.js has been created. Does this newly created bundle work as expected?
Create a new index.html file that references the new build to find out:

    <!DOCTYPE html>
    <html>
    <head>
      <meta charset="utf-8">
      <title>Getting Started with Typescript and ReactJS</title>
    </head>
    <body>
      <!-- this is where react renders into -->
      <div id="root"></div>
      <script src="build/bundle.js"></script>
    </body>
    </html>

If you double-click on the index.html file to open it in a browser, you will see the page render.

Creating a component

You know you've got the basics of writing React in TypeScript when you have your first page ready. Now it's time to dive into writing React components. First, create a new folder called src, which is where components will live. Next, create a file called FirstComponent.tsx and paste:

    import * as React from "react";

    let Logo = "";

    export default class FirstComponent extends React.Component<{}> {
      render() {
        return (
          <div>
            {/* React components must have a wrapper node/element */}
            <h1>A Simple React Component Example with Typescript</h1>
            <div>
              <img src={Logo} />
            </div>
            <p>I am a component which shows the LogRocket logo. For more info on LogRocket, please visit</p>
          </div>
        );
      }
    }

The above code block is a simple component that returns a logo and some text. To make this new component accessible to React, you need to import and use the component in the base app.tsx file. Alter your app.tsx file to:

    import * as React from "react";
    import * as ReactDOM from "react-dom";
    import FirstComponent from './src/FirstComponent'
    ReactDOM.render(
      <div>
        <h1>Hello, Welcome to the first page</h1>
        <FirstComponent/>
      </div>,
      document.getElementById("root")
    );

Looking at the code block above, you see that the differences are on line 3 and 7, where you imported the new component and rendered it, respectively. If you run the npm run magic command and navigate to your browser, you will see the component rendered.

Using TypeScript interfaces with components

Interfaces define contracts within your code and contracts with code outside of your project.
First, create a new file in the src folder called UserInterface.ts and add:

    export default interface User {
      name: string;
      age: number;
      address: string;
      dob: Date;
    }

The code block above defines a simple User interface, which I will pass in as props into a new component. Because this interface is strongly typed, notice you cannot pass an integer for the name key, as the value must be a string. Declare a new component called UserComponent.tsx in the src folder and add:

    import * as React from "react";
    import UserInterface from './UserInterface'

    export default class UserComponent extends React.Component<UserInterface, {}> {
      constructor(props: UserInterface) {
        super(props);
      }
      render() {
        return (
          <div>
            <h1>User Component</h1>
            Hello, {this.props.name} <br/>
            You are {this.props.age} years old, <br/>
            You live at: {this.props.address} <br/>
            You were born: {this.props.dob.toDateString()}
          </div>
        );
      }
    }

The code block above is pretty self-explanatory. I have imported the UserInterface created earlier and passed it as the props type of the UserComponent. In the constructor, I re-check that the props passed in are of the UserInterface type. In the render function, I print the details out. Next, alter your app.tsx file to:

    import * as React from "react";
    import * as ReactDOM from "react-dom";
    import FirstComponent from './src/FirstComponent'
    import UserComponent from './src/UserComponent'

    ReactDOM.render(
      <div>
        <h1>Hello, Welcome to the first page</h1>
        <FirstComponent/>
        <UserComponent name="Logrocket" age={105} address="get me if you can" dob={new Date()} />
      </div>,
      document.getElementById("root")
    );

If you run the npm run magic command and navigate to your browser, you will see the user details rendered.

Conclusion

In this tutorial, you have (hopefully) learned how to use TypeScript with React. You have also learned how to configure webpack with ts-loader to compile the TypeScript files and emit a final build.
The code base for this tutorial is available here for you to play around with.
https://blog.logrocket.com/how-why-a-guide-to-using-typescript-with-react-fffb76c61614/
C++ Requests. To demonstrate our client code, we need a web server that we can make our requests to, so in our case, we'll use ASP.NET Web API version 2 to implement our CRUD API. The web server is not this article's focus but I shall still devote some time to explain the Web API code. Those readers not interested in the server code (because they are not using ASP.NET) can skip to the client section.

ASP.NET Web API

I am not going to go through the details on how to set up the ASP.NET Web API project. Interested readers can read this tutorial and that tutorial provided by Microsoft. The Web API is based loosely on the MVC design. MVC stands for Model, View and Controller. The Model represents the data layer, usually classes modeled after the data design in storage; the View represents the presentation layer; and the Controller is the business logic layer. Strictly speaking, a pure ASP.NET Web API server does not serve out HTML pages, so it does not have the presentation layer. In our example, we have the Product class as our data Model.

    public class Product
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public int Qty { get; set; }
        public decimal Price { get; set; }
    }

In our ProductsController, Product objects are stored in a static Dictionary which is not persistent, meaning to say the data disappears after the Web API server shuts down. But this should suffice for our demonstration without involving the use of a database.

    public class ProductsController : ApiController
    {
        static Dictionary<long, Product> products = new Dictionary<long, Product>();

This is the Create method to create a Product object to store in our dictionary. The [FromBody] attribute means the Product item shall be populated with the contents found in the request body.

    [HttpPost]
    public IHttpActionResult Create([FromBody] Product item)
    {
        if (item == null)
        {
            return BadRequest();
        }
        products[item.Id] = item;
        return Ok();
    }

To test our code, I use the curl command. If you already have Postman installed, you can use that as well.
I am old school, so I prefer to use the curl command directly.

    curl -XPOST -H 'Content-Type: application/json' -d'{"Id":1, "Name":"ElectricFan","Qty":14,"Price":20.90}'

- -X specifies the HTTP verb, POST, which corresponds to the create or update method.
- The 2nd argument is the URL to which this request should go.
- -H specifies the HTTP headers. We send a JSON string so we set 'Content-Type' to 'application/json'.
- -d specifies the content body of the request.

It can be seen clearly that the keys in the JSON dictionary correspond exactly to the Product members. The output returned by curl is empty when the POST request is successful. To see our created Product, we need the retrieval methods, which are discussed shortly below. The methods to retrieve all products and a single product are listed below. Note: the HTTP GET verb is used for data retrieval.

    [HttpGet]
    public List<Product> GetAll()
    {
        List<Product> temp = new List<Product>();
        foreach (var item in products)
        {
            temp.Add(item.Value);
        }
        return temp;
    }

    [HttpGet]
    public IHttpActionResult GetProduct(long id)
    {
        try
        {
            return Ok(products[id]);
        }
        catch (System.Collections.Generic.KeyNotFoundException)
        {
            return NotFound();
        }
    }

The respective curl commands retrieve all products and a single Product based on id (which is 1). The command-line arguments are similar to what I have explained above, so I skip them.

    curl -XGET
    curl -XGET

The output is

    [{"Id":1,"Name":"ElectricFan","Qty":14,"Price":20.90}]
    {"Id":1,"Name":"ElectricFan","Qty":14,"Price":20.90}

We see the 1st output is enclosed by [] because the 1st command returns a collection of Product objects, but in our case, we only have 1 Product right now. Lastly, we have the Update and Delete methods. The difference between the HTTP POST and PUT verbs is that PUT is purely an update method, whereas POST creates the object if it does not exist, but POST can be used for updating as well.
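For readers following along from other stacks, the controller's dictionary-backed CRUD behavior can be mirrored in a short TypeScript sketch. This is illustrative only (ProductStore and its method names are hypothetical, not part of the article's server), but it captures the same create/retrieve/update/delete semantics, including the PUT-versus-POST distinction just described:

```typescript
interface Product {
  id: number;
  name: string;
  qty: number;
  price: number;
}

// Hypothetical in-memory store mirroring the ProductsController logic above.
class ProductStore {
  private products: { [id: number]: Product } = {};

  // POST semantics: create, or overwrite when the id already exists
  create(item: Product): void {
    this.products[item.id] = item;
  }

  // GET all: walk the dictionary, as GetAll() does
  getAll(): Product[] {
    return Object.keys(this.products).map(k => this.products[+k]);
  }

  // GET one: undefined plays the role of the 404 NotFound result
  get(id: number): Product | undefined {
    return this.products[id];
  }

  // PUT semantics: a pure update — refuses to create missing items
  update(id: number, item: Product): boolean {
    if (!(id in this.products) || item.id !== id) {
      return false;
    }
    this.products[id] = item;
    return true;
  }

  // DELETE semantics: report not-found instead of throwing
  delete(id: number): boolean {
    if (!(id in this.products)) {
      return false;
    }
    delete this.products[id];
    return true;
  }
}

const store = new ProductStore();
store.create({ id: 1, name: "ElectricFan", qty: 14, price: 20.9 });
console.log(store.getAll().length); // → 1
```

With this model in mind, the server methods below should read naturally: each HTTP verb maps onto one of these operations.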
    [HttpPut]
    public IHttpActionResult Update(long id, [FromBody] Product item)
    {
        if (item == null || item.Id != id)
        {
            return BadRequest();
        }
        if (products.ContainsKey(id) == false)
        {
            return NotFound();
        }
        var product = products[id];
        product.Name = item.Name;
        product.Qty = item.Qty;
        product.Price = item.Price;
        return Ok();
    }

    [HttpDelete]
    public IHttpActionResult Delete(long id)
    {
        // Indexing a missing key would throw KeyNotFoundException,
        // so test for existence instead.
        if (products.ContainsKey(id) == false)
        {
            return NotFound();
        }
        products.Remove(id);
        return Ok();
    }

The respective curl commands below update and delete the Product corresponding to id=1.

    curl -XPUT -H 'Content-Type: application/json' -d'{"Id":1, "Name":"ElectricFan","Qty":15,"Price":29.80}'
    curl -XDELETE

To see that the Product is really updated or deleted, we have to use the retrieval curl commands shown above.

C++ Client Code

At last, we come to the main focus of this article! To be able to use C++ Requests, please clone or download it here and include its cpr header. Alternatively, Visual C++ users can install C++ Requests via vcpkg, where C++ Requests is abbreviated as cpr.

    .\vcpkg install cpr

    #include <cpr/cpr.h>

To send a POST request to create a Product, we put our Product JSON in a raw string literal inside the cpr::Body. Otherwise, without a raw string literal, we would have to escape all the double quotes found in our JSON string.

    auto r = cpr::Post(cpr::Url{ "" },
                       cpr::Body{ R"({"Id":1, "Name":"ElectricFan","Qty":14,"Price":20.90})" },
                       cpr::Header{ { "Content-Type", "application/json" } });

Comparing this C++ code to the raw curl command, we can see which information goes where.

    curl -XPOST -H 'Content-Type: application/json' -d'{"Id":1, "Name":"ElectricFan","Qty":14,"Price":20.90}'

After product creation, we try to retrieve it using the C++ code below.

    auto r = cpr::Get(cpr::Url{ "" });

The output is the same as the one from the curl command, which isn't strange since C++ Requests utilizes libcurl underneath to do its work.
    {"Id":1,"Name":"ElectricFan","Qty":14,"Price":20.90}

The full C++ code to do CRUD with the ASP.NET Web API is listed below with its output. By the way, CRUD is short for Create, Retrieve, Update and Delete. Be sure your ASP.NET Web API is up and running before running the C++ code below.

    int main()
    {
        {
            std::cout << "Action: Create Product with Id = 1" << std::endl;
            auto r = cpr::Post(cpr::Url{ "" },
                               cpr::Body{ R"({"Id":1, "Name":"ElectricFan","Qty":14,"Price":20.90})" },
                               cpr::Header{ { "Content-Type", "application/json" } });
            std::cout << "Returned Status:" << r.status_code << std::endl;
        }
        {
            std::cout << "Action: Retrieve the product with id = 1" << std::endl;
            auto r = cpr::Get(cpr::Url{ "" });
            std::cout << "Returned Text:" << r.text << std::endl;
        }
        {
            std::cout << "Action: Update Product with Id = 1" << std::endl;
            auto r = cpr::Post(cpr::Url{ "" },
                               cpr::Body{ R"({"Id":1, "Name":"ElectricFan","Qty":15,"Price":29.80})" },
                               cpr::Header{ { "Content-Type", "application/json" } });
            std::cout << "Returned Status:" << r.status_code << std::endl;
        }
        {
            std::cout << "Action: Retrieve all products" << std::endl;
            auto r = cpr::Get(cpr::Url{ "" });
            std::cout << "Returned Text:" << r.text << std::endl;
        }
        {
            std::cout << "Action: Delete the product with id = 1" << std::endl;
            auto r = cpr::Delete(cpr::Url{ "" });
            std::cout << "Returned Status:" << r.status_code << std::endl;
        }
        {
            std::cout << "Action: Retrieve all products" << std::endl;
            auto r = cpr::Get(cpr::Url{ "" });
            std::cout << "Returned Text:" << r.text << std::endl;
        }
        return 0;
    }

The output, as mentioned, is shown below. I only display the returned text when the CRUD operation supports it; otherwise I just display the status. HTTP status 200 means a successful HTTP request. For example, the Create/Update/Delete operations do not return any text, so I just display their status.
    Action: Create Product with Id = 1
    Returned Status:200
    Action: Retrieve the product with id = 1
    Returned Text:{"Id":1,"Name":"ElectricFan","Qty":14,"Price":20.90}
    Action: Update Product with Id = 1
    Returned Status:200
    Action: Retrieve all products
    Returned Text:[{"Id":1,"Name":"ElectricFan","Qty":15,"Price":29.80}]
    Action: Delete the product with id = 1
    Returned Status:200
    Action: Retrieve all products
    Returned Text:[]

For users looking to send a request with URL parameters, you can make use of cpr::Parameters. The C++ code for such a URL:

    auto r = cpr::Get(cpr::Url{ "" },
                      cpr::Parameters{ {"quota", "500"}, {"sold", "true"} });

The source code written for this article is hosted at Github.
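As an aside for readers coming from the web side: what cpr::Parameters does — turning key/value pairs into an encoded query string — can be sketched in a few lines of TypeScript. The helper name below is made up for illustration (browsers and Node also ship URLSearchParams, which does the same job):

```typescript
// A hand-rolled analogue of cpr::Parameters: encode key/value pairs
// into a URL query string, percent-escaping both keys and values.
function toQueryString(params: [string, string][]): string {
  return params
    .map(([key, value]) => encodeURIComponent(key) + "=" + encodeURIComponent(value))
    .join("&");
}

const query = toQueryString([["quota", "500"], ["sold", "true"]]);
console.log(query); // → "quota=500&sold=true"
```

Appending the result after a "?" in the request URL reproduces the parameterized GET shown above.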
https://codingtidbit.com/page/7/
Our Kanban application is almost usable now. It looks alright and there's basic functionality in place. In this chapter, we will integrate drag and drop functionality into it as we set up React DnD. After this chapter, you should be able to sort notes within a lane and drag them from one lane to another. Although this sounds simple, there is quite a bit of work to do as we need to annotate our components the right way and develop the logic needed.

As the first step, we need to connect React DnD with our project. We are going to use the HTML5 Drag and Drop based back-end. There are specific back-ends for testing and touch. In order to set it up, we need to use the DragDropContext decorator and provide the HTML5 back-end to it. To avoid unnecessary wrapping, I'll use Redux compose to keep the code neater and more readable:

app/components/App.jsx

    import React from 'react';
    import uuid from 'uuid';
    import {compose} from 'redux';
    import {DragDropContext} from 'react-dnd';
    import HTML5Backend from 'react-dnd-html5-backend';
    import connect from '../libs/connect';
    import Lanes from './Lanes';
    import LaneActions from '../actions/LaneActions';

    const App = ({LaneActions, lanes}) => {
      const addLane = () => {
        LaneActions.create({
          id: uuid.v4(),
          name: 'New lane'
        });
      };

      return (
        <div>
          <button className="add-lane" onClick={addLane}>+</button>
          <Lanes lanes={lanes} />
        </div>
      );
    };

    export default compose(
      DragDropContext(HTML5Backend),
      connect(
        ({lanes}) => ({lanes}),
        {LaneActions}
      )
    )(App)

After this change, the application should look exactly the same as before. We are ready to add some sweet functionality to it now. Allowing notes to be dragged is a good first step. Before that, we need to set up a constant so that React DnD can tell different kinds of draggables apart.
Set up a file for tracking Note as follows:

app/constants/itemTypes.js

    export default {
      NOTE: 'note'
    };

This definition can be expanded later as we add new types, such as LANE, to the system. Next, we need to tell our Note that it's possible to drag it. This can be achieved using the DragSource annotation. Replace Note with the following implementation:

app/components/Note.jsx

    import React from 'react';
    import {DragSource} from 'react-dnd';
    import ItemTypes from '../constants/itemTypes';

    const Note = ({
      connectDragSource, children, ...props
    }) => {
      return connectDragSource(
        <div {...props}>
          {children}
        </div>
      );
    };

    const noteSource = {
      beginDrag(props) {
        console.log('begin dragging note', props);

        return {};
      }
    };

    export default DragSource(ItemTypes.NOTE, noteSource, connect => ({
      connectDragSource: connect.dragSource()
    }))(Note)

If you try to drag a Note now, you should see something like this at the browser console:

    begin dragging note Object {className: "note", children: Array[2]}

Just being able to drag notes isn't enough. We need to annotate them so that they can accept dropping. Eventually this will allow us to swap them, as we can trigger logic when we are trying to drop a note on top of another. In case we wanted to implement dragging based on a handle, we could apply connectDragSource only to a specific part of a Note. Note that React DnD doesn't support hot loading perfectly. You may need to refresh the browser to see the log messages you expect! Annotating notes so that they can notice that another note is being hovered on top of them is a similar process.
In this case we'll have to use a DropTarget annotation:

app/components/Note.jsx

    import React from 'react';
    import {compose} from 'redux';
    import {DragSource, DropTarget} from 'react-dnd';
    import ItemTypes from '../constants/itemTypes';

    const Note = ({
      connectDragSource, connectDropTarget, children, ...props
    }) => {
      return compose(connectDragSource, connectDropTarget)(
        <div {...props}>
          {children}
        </div>
      );
    };

    const noteSource = {
      beginDrag(props) {
        console.log('begin dragging note', props);

        return {};
      }
    };

    const noteTarget = {
      hover(targetProps, monitor) {
        const sourceProps = monitor.getItem();

        console.log('dragging note', sourceProps, targetProps);
      }
    };

    export default compose(
      DragSource(ItemTypes.NOTE, noteSource, connect => ({
        connectDragSource: connect.dragSource()
      })),
      DropTarget(ItemTypes.NOTE, noteTarget, connect => ({
        connectDropTarget: connect.dropTarget()
      }))
    )(Note)

If you try hovering a dragged note on top of another now, you should see messages like this at the console:

    dragging note Object {} Object {className: "note", children: Array[2]}

Both decorators give us access to the Note props. In this case, we are using monitor.getItem() to access them at noteTarget. This is the key to making this work properly.

onMove API for Notes

Now that we can move notes around, we can start to define logic. The following steps are needed:

- Capture the Note id on beginDrag.
- Capture the target Note id on hover.
- Trigger the onMove callback on hover so that we can deal with the logic elsewhere. LaneStore would be the ideal place for that.

Based on the idea above, we can see we should pass id to a Note through a prop. We also need to set up an onMove callback, and define LaneActions.move and a LaneStore.move stub.

id and onMove at Note

We can accept id and onMove props at Note like below.
There is an extra check at noteTarget as we don't need to trigger hover in case we are hovering on top of the Note itself:

app/components/Note.jsx

    ...

    const Note = ({
      connectDragSource, connectDropTarget,
      onMove, id, children, ...props
    }) => {
      return compose(connectDragSource, connectDropTarget)(
        <div {...props}>
          {children}
        </div>
      );
    };

    const noteSource = {
      beginDrag(props) {
        return {
          id: props.id
        };
      }
    };

    const noteTarget = {
      hover(targetProps, monitor) {
        const targetId = targetProps.id;
        const sourceProps = monitor.getItem();
        const sourceId = sourceProps.id;

        if(sourceId !== targetId) {
          targetProps.onMove({sourceId, targetId});
        }
      }
    };

    ...

Having these props isn't useful if we don't pass anything to them at Notes. That's our next step.

Passing id and onMove from Notes

Passing a note id and onMove is simple enough:

app/components/Notes.jsx

    import React from 'react';
    import Note from './Note';
    import Editable from './Editable';

    export default ({
      notes,
      onNoteClick=() => {}, onEdit=() => {}, onDelete=() => {}
    }) => (
      <ul className="notes">{notes.map(({id, editing, task}) =>
        <li key={id}>
          <Note className="note" id={id}
            onClick={onNoteClick.bind(null, id)}
            onMove={({sourceId, targetId}) =>
              console.log('moving from', sourceId, 'to', targetId)}>
            <Editable
              className="editable"
              editing={editing}
              value={task}
              onEdit={onEdit.bind(null, id)} />
            <button
              className="delete"
              onClick={onDelete.bind(null, id)}>x</button>
          </Note>
        </li>
      )}</ul>
    )

If you hover a note on top of another, you should see console messages like this:

    moving from 3310916b-5b59-40e6-8a98-370f9c194e16 to 939fb627-1d56-4b57-89ea-04207dbfb405

The logic of drag and drop goes as follows.
Suppose we have a lane containing notes A, B, C. In case we move A below C we should end up with B, C, A. In case we have another list, say D, E, F, and move A to the beginning of it, we should end up with B, C and A, D, E, F. In our case, we'll get some extra complexity due to lane-to-lane dragging. When we move a Note, we know its original position and the intended target position. Lane knows what Notes belong to it by id. We are going to need some way to tell LaneStore that it should perform the logic over the given notes. A good starting point is to define LaneActions.move:

app/actions/LaneActions.js

    import alt from '../libs/alt';

    export default alt.generateActions(
      'create', 'update', 'delete',
      'attachToLane', 'detachFromLane', 'move'
    );

We should connect this action with the onMove hook we just defined, replacing the console.log at Notes with the action itself:

app/components/Notes.jsx

    ...
    <Note className="note" id={id}
      onClick={onNoteClick.bind(null, id)}
      onMove={LaneActions.move}>
      <Editable
        className="editable"
        editing={editing}
        value={task}
        onEdit={onEdit.bind(null, id)} />
      <button
        className="delete"
        onClick={onDelete.bind(null, id)}>x</button>
    </Note>
    ...

It could be a good idea to refactor onMove as a prop to make the system more flexible. In our implementation the Notes component is coupled with LaneActions. This isn't particularly nice if you want to use it in some other context. We should also define a stub at LaneStore to see that we wired it up correctly:

app/stores/LaneStore.js

    import LaneActions from '../actions/LaneActions';

    export default class LaneStore {
      ...
      detachFromLane({laneId, noteId}) {
        ...
      }
      move({sourceId, targetId}) {
        console.log(`source: ${sourceId}, target: ${targetId}`);
      }
    }

You should see the same log messages as earlier. Next, we'll need to add some logic to make this work. We can use the logic outlined above here. We have two cases to worry about: moving within a lane itself and moving from one lane to another. Moving within a lane itself is complicated.
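Before wiring this into the store, the reordering rules just described can be captured in a small, framework-free TypeScript sketch. The names (Lane, moveNote) are illustrative, not part of the book's code, and the same-lane branch deliberately mirrors the sequential remove-then-insert splice used below:

```typescript
interface Lane {
  notes: string[];
}

// Find the lanes holding the source and target notes, splice the source
// note out, and splice it back in just before the target note. For the
// same-lane case this reproduces the B, C, A ordering described above.
function moveNote(lanes: Lane[], sourceId: string, targetId: string): void {
  const sourceLane = lanes.filter(lane => lane.notes.indexOf(sourceId) !== -1)[0];
  const targetLane = lanes.filter(lane => lane.notes.indexOf(targetId) !== -1)[0];

  if (!sourceLane || !targetLane) {
    return; // unknown note id — nothing to do
  }

  // Both indices are captured before any mutation, matching the
  // sequential $splice the store implementation uses.
  const sourceIndex = sourceLane.notes.indexOf(sourceId);
  const targetIndex = targetLane.notes.indexOf(targetId);

  sourceLane.notes.splice(sourceIndex, 1);           // get rid of the source
  targetLane.notes.splice(targetIndex, 0, sourceId); // and move it to target
}

const lanes: Lane[] = [
  { notes: ["A", "B", "C"] },
  { notes: ["D", "E", "F"] },
];

moveNote(lanes, "A", "C");   // within a lane: A below C
console.log(lanes[0].notes); // → ["B", "C", "A"]

moveNote(lanes, "A", "D");   // across lanes: A to the start of the second lane
console.log(lanes[1].notes); // → ["A", "D", "E", "F"]
```

The store version below adds the alt/React specifics on top of exactly this array manipulation.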
When you are operating based on ids and perform operations one at a time, you'll need to take possible index alterations into account. As a result, I'm using the update immutability helper from React, as that solves the problem in one pass. It is possible to solve the lane-to-lane case using splice. First, we splice out the source note, and then we splice it into the target lane. Again, update could work here, but I didn't see much point in that given splice is nice and simple. The code below illustrates a mutation-based solution:

app/stores/LaneStore.js

    import update from 'react-addons-update';
    import LaneActions from '../actions/LaneActions';

    export default class LaneStore {
      ...
      move({sourceId, targetId}) {
        const lanes = this.lanes;
        const sourceLane = lanes.filter(lane => lane.notes.includes(sourceId))[0];
        const targetLane = lanes.filter(lane => lane.notes.includes(targetId))[0];
        const sourceNoteIndex = sourceLane.notes.indexOf(sourceId);
        const targetNoteIndex = targetLane.notes.indexOf(targetId);

        if(sourceLane === targetLane) {
          // move at once to avoid complications
          sourceLane.notes = update(sourceLane.notes, {
            $splice: [
              [sourceNoteIndex, 1],
              [targetNoteIndex, 0, sourceId]
            ]
          });
        } else {
          // get rid of the source
          sourceLane.notes.splice(sourceNoteIndex, 1);

          // and move it to target
          targetLane.notes.splice(targetNoteIndex, 0, sourceId);
        }

        this.setState({lanes});
      }
    }

If you try out the application now, you can actually drag notes around and it should behave as you expect. Dragging to empty lanes doesn't work, though, and the presentation could be better. It would be nicer if we indicated the dragged note's location more clearly. We can do this by hiding the dragged note from the list. React DnD provides the hooks we need for this purpose through a feature known as state monitors.
Through it we can use monitor.isDragging() and monitor.isOver() to detect which Note we are currently dragging. It can be set up as follows:

app/components/Note.jsx

    import React from 'react';
    import {compose} from 'redux';
    import {DragSource, DropTarget} from 'react-dnd';
    import ItemTypes from '../constants/itemTypes';

    const Note = ({
      connectDragSource, connectDropTarget, isDragging, isOver,
      onMove, id, children, ...props
    }) => {
      return compose(connectDragSource, connectDropTarget)(
        <div style={{
          opacity: isDragging || isOver ? 0 : 1
        }} {...props}>{children}</div>
      );
    };

    ...

    export default compose(
      DragSource(ItemTypes.NOTE, noteSource, (connect, monitor) => ({
        connectDragSource: connect.dragSource(),
        isDragging: monitor.isDragging()
      })),
      DropTarget(ItemTypes.NOTE, noteTarget, (connect, monitor) => ({
        connectDropTarget: connect.dropTarget(),
        isOver: monitor.isOver()
      }))
    )(Note)

If you drag a note within a lane, the dragged note should be shown as blank. There is one little problem in our system: we cannot drag notes to an empty lane yet. To drag notes to empty lanes, we should allow lanes to receive notes. Just as above, we can set up DropTarget based logic for this.
First, we need to capture the drag on Lane:

app/components/Lane.jsx

    import React from 'react';
    import {compose} from 'redux';
    import {DropTarget} from 'react-dnd';
    import ItemTypes from '../constants/itemTypes';
    import connect from '../libs/connect';
    import NoteActions from '../actions/NoteActions';
    import LaneActions from '../actions/LaneActions';
    import Notes from './Notes';
    import LaneHeader from './LaneHeader';

    const Lane = ({
      connectDropTarget, lane, notes, LaneActions, NoteActions, ...props
    }) => {
      ...

      return connectDropTarget(
        ...
      );
    };

    function selectNotesByIds(allNotes, noteIds = []) {
      ...
    }

    const noteTarget = {
      hover(targetProps, monitor) {
        const sourceProps = monitor.getItem();
        const sourceId = sourceProps.id;

        // If the target lane doesn't have notes,
        // attach the note to it.
        //
        // `attachToLane` performs the necessary
        // cleanup by default and it guarantees
        // a note can belong only to a single lane
        // at a time.
        if(!targetProps.lane.notes.length) {
          LaneActions.attachToLane({
            laneId: targetProps.lane.id,
            noteId: sourceId
          });
        }
      }
    };

    export default compose(
      DropTarget(ItemTypes.NOTE, noteTarget, connect => ({
        connectDropTarget: connect.dropTarget()
      })),
      connect(({notes}) => ({
        notes
      }), {
        NoteActions,
        LaneActions
      })
    )(Lane)

After attaching this logic, you should be able to drag notes to empty lanes. Our current implementation of attachToLane does a lot of the hard work for us. If it didn't guarantee that a note can belong only to a single lane at a time, we would need to adjust our logic. It's good to have these sorts of invariants within the state management system. The current implementation has a small glitch. If you edit a note, you can still drag it around while it's being edited. This isn't ideal as it overrides the default behavior most people are used to.
You cannot, for instance, double-click on an input to select all the text. Fortunately, this is simple to fix. We'll need to use the editing state of each Note to adjust its behavior. First we need to pass the editing state to an individual Note:

app/components/Notes.jsx

    ...
    <Note className="note" id={id}
      editing={editing}
      onClick={onNoteClick.bind(null, id)}
      onMove={LaneActions.move}>
      <Editable
        className="editable"
        editing={editing}
        value={task}
        onEdit={onEdit.bind(null, id)} />
      <button
        className="delete"
        onClick={onDelete.bind(null, id)}>x</button>
    </Note>
    ...

Next we need to take this into account while rendering:

app/components/Note.jsx

    import React from 'react';
    import {compose} from 'redux';
    import {DragSource, DropTarget} from 'react-dnd';
    import ItemTypes from '../constants/itemTypes';

    const Note = ({
      connectDragSource, connectDropTarget, isDragging,
      isOver, onMove, id, editing, children, ...props
    }) => {
      // Pass through if we are editing
      const dragSource = editing ? a => a : connectDragSource;

      return compose(dragSource, connectDropTarget)(
        <div style={{
          opacity: isDragging || isOver ? 0 : 1
        }} {...props}>{children}</div>
      );
    };

    ...

This small change gives us the behavior we want. If you try to edit a note now, the input should behave normally. Design-wise it was a good idea to keep the editing state outside of Editable. If we hadn't done that, implementing this change would have been a lot harder, as we would have had to extract the state outside of the component. Now we have a Kanban table that is actually useful! We can create new lanes and notes, and edit and remove them. In addition we can move notes around. Mission accomplished! In this chapter, you saw how to implement drag and drop for our little application. You can model sorting for lanes using the same technique.
First, you mark the lanes to be draggable and droppable, then you sort out their ids, and finally, you'll add some logic to make it all work together. It should be considerably simpler than what we did with notes. I encourage you to expand the application. The current implementation should work just as a starting point for something greater. Besides extending the DnD implementation, you can try adding more data to the system. You could also do something to the visual outlook. One option would be to try out various styling approaches discussed at the Styling React chapter. To make it harder to break the application during development, you can also implement tests as discussed at Testing React. Typing with React discussed yet more ways to harden your code. Learning these approaches can be worthwhile. Sometimes it may be worth your while to design your applications test first. It is a valuable approach as it allows you to document your assumptions as you go. This book is available through Leanpub. By purchasing the book you support the development of further content.
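Stepping back from React specifics for a moment: the invariant that attachToLane enforces in this chapter (a note belongs to exactly one lane at a time) is easy to model in any language. The following is a hedged Python sketch of the cleanup-then-attach logic; the dict shape is an assumption made for illustration, not the book's actual store implementation:

```python
def attach_to_lane(lanes, lane_id, note_id):
    """Attach note_id to lane_id, removing it from any other lane first.

    `lanes` maps lane id -> list of note ids (a stand-in shape, not the
    real store). The cleanup pass runs before the attach, so the
    one-lane-per-note invariant holds no matter where the note came from.
    """
    # Cleanup: a note may belong to only a single lane at a time.
    for notes in lanes.values():
        if note_id in notes:
            notes.remove(note_id)

    # Attach to the target lane.
    lanes[lane_id].append(note_id)
    return lanes

lanes = {"todo": ["n1", "n2"], "done": []}
attach_to_lane(lanes, "done", "n1")
```

Because cleanup always runs first, dropping a note on any lane, including the one it already lives in, cannot duplicate it.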
https://survivejs.com/react/implementing-kanban/drag-and-drop/index.html
CC-MAIN-2019-22
refinedweb
2,632
52.26
If you are new to C# or have been learning C# strings for some time, you may stumble upon the interpolated string syntax in C# ($"Hello, {YourName}"), which was introduced in C# 6. In this article, I explain C# string interpolation with console application examples.

What is string interpolation?

In programming languages, string interpolation is the process of evaluating a string literal containing one or more placeholders, yielding a result in which the placeholders are replaced with their corresponding values. So we can say an interpolated string is a string literal that might contain interpolation expressions. When an interpolated string is resolved to a result string, items with interpolation expressions are replaced by the string representations of the expression results.

Available in C# 6.0 and later, interpolated strings are identified by the $ special character. Take a look at an example:

class Program
{
    static void Main(string[] args)
    {
        var name = "Vikas Lalwani";
        Console.WriteLine($"My name is {name}");
        Console.ReadKey();
    }
}

Output of the above code will be:

My name is Vikas Lalwani

The image below shows this output when the code is run as a console application in Visual Studio.

In C#, a string literal is an interpolated string when you prepend it with the $ symbol. You cannot have any white space between the $ and the " that starts the string literal.

Interpolated String Example in C#

Let's take a look at a more complex example of an interpolated string. Suppose you want to ask the user's name and age, then show them as output on the console.

namespace StringInterpolation
{
    class Program
    {
        static void Main(string[] args)
        {
            string name = "", age = "";

            Console.WriteLine("What is your name?");
            // assign value to name variable
            name = Console.ReadLine();

            Console.WriteLine("What is your age?");
            // assign value to age variable
            age = Console.ReadLine();

            // print it to console using an interpolated string
            Console.WriteLine($"Your name is {name} and age is {age}");
            Console.ReadKey();
        }
    }
}

Output:

What is your name?
John Kanhwald
What is your age?
20
Your name is John Kanhwald and age is 20

Adding special characters inside an interpolated string in C#

Suppose you want to add special characters like " or { inside an interpolated string; you can do it like this:

string name = "John";
Console.WriteLine($"Hello, \"are you {name}?\", but not the terminator movie one :-{{");

Output:

Hello, "are you John?", but not the terminator movie one :-{

Expression Evaluation

With string interpolation, expressions within curly braces {} can also be evaluated. The result will be inserted at the corresponding location within the string. Let's take a look at an example:

Console.WriteLine($"The greater one is: { Math.Max(10, 20) }");
Console.WriteLine($"Today's day and date is: {DateTime.Today:dddd, dd-MM-yyyy}");

Output:

The greater one is: 20
Today's day and date is: Tuesday, 01-09-2020

Method call

Yes, you can also call methods and let them evaluate and return results. The returned result will be placed at the corresponding location.

static void Main(string[] args)
{
    Console.WriteLine($"The 5*5 is {MultipleByItSelf(5)}");
}

static int MultipleByItSelf(int num)
{
    //multiply number by itself and return result
    return num * num;
}

Output:

The 5*5 is 25

You may also like to read:
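As an aside, C#'s $-strings have a close cousin in Python's f-strings (Python 3.6+), which makes the same semantics easy to experiment with outside .NET: expressions in braces are evaluated, a colon introduces a format specifier, and doubled braces escape literal ones. A small comparison sketch; the strftime-style date codes here stand in for C#'s dddd, dd-MM-yyyy pattern:

```python
import datetime

name = "John"
# {{ emits a literal { , just like C#'s doubled braces
greeting = f"Hello, \"are you {name}?\" :-{{"

# expressions inside the braces are evaluated
greater = f"The greater one is: {max(10, 20)}"

# a colon introduces a format specifier (here, strftime codes)
today = datetime.date(2020, 9, 1)
formatted = f"Today's date is: {today:%A, %d-%m-%Y}"

print(greeting, greater, formatted, sep="\n")
```

The behaviour mirrors the C# examples above: method calls, operators, and format specifiers all work inside the braces.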
https://qawithexperts.com/article/c-sharp/string-interpolation-in-c-with-examples/308
Red Hat Bugzilla – Bug 60193
Source RPM from FTP site does produce errors
Last modified: 2007-04-18 12:40:31 EDT

Description of Problem:
Source RPM from FTP site produces errors

Version-Release number of selected component (if applicable):
qt-2.3.1-5

How Reproducible:
rpm -ba qt.spec

Steps to Reproduce:
1. extract source rpm to respective rpm subdir
2. run "rpm -ba qt.spec"
3. watch outputs

Actual Results:
rpm -ba qt.spec
error: Unclosed %if
error: failed build dependencies:
    libmng-static is needed by qt-2.3.1-5

Expected Results:
the first error message indicates a central problem in the spec file

Additional Information:
just ignore the other dependency warning. This was caused by a bug in our package translation engine (The %description section contained a %endif construct, which was patched out in the translation process).
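The root cause here, a %endif construct landing inside %description, is an unbalanced conditional, which a simple nesting check can catch. A hedged Python sketch of such a check (a toy for illustration, not rpmbuild's real spec parser):

```python
def check_spec_conditionals(spec_text):
    """Toy checker for the 'Unclosed %if' class of spec-file errors.

    Only tracks %if/%ifarch/%ifos nesting against %endif; real rpmbuild
    handles many more directives.
    """
    depth = 0
    for lineno, line in enumerate(spec_text.splitlines(), 1):
        word = line.strip().split()[0] if line.strip() else ""
        if word in ("%if", "%ifarch", "%ifos"):
            depth += 1
        elif word == "%endif":
            if depth == 0:
                # a %endif with no matching %if, as in the bug above
                return "stray %endif at line " + str(lineno)
            depth -= 1
    return "ok" if depth == 0 else str(depth) + " unclosed %if"
```

Running a spec through a check like this before building would have flagged the translation-engine bug immediately.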
https://bugzilla.redhat.com/show_bug.cgi?id=60193
- 13 Oct, 2006 1 commit - 09 Oct, 2006 1 commit - David Kastrup authored what the joke was about. - 07 Oct, 2006 2 commits - 06 Oct, 2006 2 commits - 30 Sep, 2006 1 commit - 28 Sep, 2006 1 commit - 27 Sep, 2006 1 commit - Vinicius Jose Latorre authored - 26 Sep, 2006 1 commit - Reiner Steib authored - 20 Sep, 2006 2 commits - 18 Sep, 2006 1 commit - 16 Sep, 2006 1 commit - 15 Sep, 2006 5 commits - David Kastrup authored `command-remapping'. * keymaps.texi (Active Keymaps): Adapt description to use `get-char-property' instead `get-text-property'. Explain how mouse events change this. Explain the new optional argument of `key-binding' and its mouse-dependent lookup. (Searching Keymaps): Adapt description similarly. Explain the new optional argument of `command-remapping'. * Makefile.in (keymap.o): Add "keymap.h" and "window.h" dependencies. * keymap.c: include "window.h". (Fcommand_remapping): New optional POSITION argument. (Fkey_binding): New optional POSITION argument. Completely rework handling of mouse clicks to get the same order of keymaps as `read-key-sequence' and heed POSITION. Also temporarily switch buffers to location of mouse click and back. * keyboard.c (command_loop_1): Adjust call of `Fcommand_remapping' for additional argument. (parse_menu_item): Adjust call of `Fkey_binding' for additional argument. (read_key_sequence): If there are both `local-map' and `keymap' text properties at some buffer position, heed both. * keymap.h: Declare additional optional arguments of `Fcommand_remapping' and `Fkey_binding'. - 14 Sep, 2006 1 commit - 12 Sep, 2006 1 commit - Paul Eggert authored variable now defaults to Emacs's absolute file name, instead of to "t". * etc/PROBLEMS: Adjust tcsh advice for this. * make-dist (EMACS): Exit and fail if the EMACS environment variable is set to something other than an absolute file name. * lisp/comint.el (comint-exec-1): Set EMACS to the full name of Emacs, not to "t". 
* lisp/progmodes/compile.el (compilation-start): Likewise. * lisp/progmodes/idlwave.el (idlwave-rescan-asynchronously): Don't use expand-file-name on invocation-directory, since this might mishandle special characters in invocation-directory. * man/faq.texi (Escape sequences in shell output): EMACS is now set to Emacs's absolute file name, not to "t". (^M in the shell buffer): Likewise. * man/misc.texi (Interactive Shell): Likewise. - 10 Sep, 2006 2 commits - 08 Sep, 2006 1 commit - 06 Sep, 2006 2 commits - 02 Sep, 2006 2 commits - 01 Sep, 2006 1 commit - 25 Aug, 2006 2 commits - 24 Aug, 2006 1 commit - 23 Aug, 2006 1 commit - 21 Aug, 2006 2 commits - Kenichi Handa authored - 20 Aug, 2006 3 commits (__all__): Fix args -> eargs. Add new `modpath' fun. (eargs): Add `imports' arg. (all_names): New fun. (complete): Rewrite without using rlcompleter. Remove `namespace' arg, add `imports' arg. (ehelp): Replace g and l args with `imports'. (eimport): Use __main__ rather than `emacs' namespace. (modpath): New fun. - 18 Aug, 2006 2 commits
https://emba.gnu.org/emacs/emacs/-/commits/12b6af5c7ed2cfdb9783312bf890cf1e6c80c67a/etc
If you've never encountered WPF (Windows Presentation Foundation) you are missing a versatile tool. This article is part of a series devoted to it. XAML can be confusing, especially if you think it is a markup language like HTML. It isn't. XAML is a general purpose object instantiation language. To find out what this means, read on.

XAML is a, mostly declarative, object instantiation language. That is, it's a way of describing, using XML, what objects should be created and how they should be initialized. To try it out, start a new WPF project (MyProject, say) using Visual Studio or your favourite IDE. You don't need to modify any of the generated code, but you do need to add a simple custom class with which to try out XAML. Right click on the project in the project window and select Add, New Item and finally C# class. Call it MyClass:

using System;
using System.Collections.Generic;
using System.Text;

namespace MyProject
{
    class MyClass
    {
    }
}

To make the class visible to the XAML, the window tag needs a namespace mapping of the form xmlns:local="clr-namespace:MyProject". Make sure that this is the only code between the window tags. The project should now run without errors. If you do see any errors then it will be due to loss of synchronization between namespaces – simply run the project again. The need to keep namespaces and other generated files in sync is one of the problems of splitting instantiation from the runtime.
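To make "object instantiation language" concrete, here is a hedged Python sketch, entirely unrelated to the real WPF loader, that reads an XML description and builds live objects from it, mapping tags to classes and attributes to properties the way the XAML idea prescribes (the Window/Button classes are invented for the demonstration):

```python
import xml.etree.ElementTree as ET

class Window:
    pass

class Button:
    pass

# The "namespace mapping": which class each tag name instantiates.
REGISTRY = {"Window": Window, "Button": Button}

def instantiate(xml_text):
    """Build an object tree from XML: tag -> class, attribute -> property."""
    def build(elem):
        obj = REGISTRY[elem.tag]()            # instantiate by tag name
        for name, value in elem.attrib.items():
            setattr(obj, name, value)         # attributes become properties
        obj.children = [build(c) for c in elem]  # nested tags become children
        return obj
    return build(ET.fromstring(xml_text))

win = instantiate('<Window Title="Hi"><Button Content="OK"/></Window>')
```

The XML here is data, not markup to be rendered: parsing it produces real objects with their properties set, which is exactly the sense in which XAML instantiates objects rather than marking up text.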
https://i-programmer.info/programming/wpf-workings/446-how-xaml-works.html
Opened 13 months ago
Closed 6 months ago

#9066 closed bug (fixed)

Template Haskell cannot splice an infix declaration for a data constructor

Description

When I say

$([d| data Blargh = (:<=>) Int Int
      infix 4 :<=> |])

I get

Illegal variable name: ‘:<=>’
When splicing a TH declaration: infix 4 :<=>_0

The code inside the TH quote works when not used with TH. I will fix in due course.

Change History (9)

comment:1 Changed 7 months ago by goldfire

This also fails for type constructors:

comment:2 Changed 7 months ago by goldfire

Harrumph. In that second case, [d| infix 5 `Foo` |] produces an Exact RdrName for Foo that names a data constructor, not a type constructor, even when only the type constructor is in scope. Then, according to Note [dataTcOccs and Exact Names] in RnEnv, the Exact RdrNames are trusted to have the right namespace, so a naive fix for this bug fails the Foo case. There are two possible ways forward that I see:

- Don't trust Exact RdrNames in dataTcOccs. That is, when we have an Exact constructor name, also look for the type with the same spelling.
- Duplicate the dataTcOccs logic in DsMeta.

I favor (2), because code that consumes the TH AST will want the TH.Names to have the right namespaces. It's really a bug that the fixity declaration above refers to a data constructor Foo. Going to implement (2).

comment:3 Changed 7 months ago by goldfire

Well, option (2) is infeasible. This is because desugaring a quoted fixity declaration produces TH.Names that do not have namespace information attached. This is a consequence of the fact that namespace information is available only with TH.Name's NameG constructor, which also has package and module information. Of course, when processing a quote, we have no idea what package/module the declaration will eventually end up in, so NameG is a non-starter. Thus, we have no namespace information here, and instead must be liberal when processing Exact RdrNames.

I suppose the Right Way to fix this is to add namespace information to TH's NameU and NameL constructors, but that probably has farther-reaching implications than need to be dealt with at this moment.

Going to implement (1).

comment:4 Changed 7 months ago by simonpj

I'm a bit confused.
- What does the TH syntax look like? Presumably InfixD fixity name where name :: TH.Name.
- What is the flavour of that name? Presumably not a NameG? So NameS or NameL?
- If NameS, we never generate an Exact RdrName, so I guess NameL?

comment:5 Changed 7 months ago by goldfire

This sample program was educational for me:

import Language.Haskell.TH.Syntax
import GHC.Exts ( Int(I#) )
import Data.Generics ( listify )

$( do let getNames = listify (const True :: Name -> Bool)
          showNS VarName = "VarName"
          showNS DataName = "DataName"
          showNS TcClsName = "TcClsName"
          showFlav NameS = "NameS"
          showFlav (NameQ mod) = "NameQ " ++ show mod
          showFlav (NameU i) = "NameU " ++ show (I# i)
          showFlav (NameL i) = "NameL " ++ show (I# i)
          showFlav (NameG ns pkg mod) = "NameG " ++ showNS ns ++ " "
                                        ++ show pkg ++ " " ++ show mod
          toString (Name occ flav) = show occ ++ " (" ++ showFlav flav ++ ")"
      decs <- [d| type Foo a b = Either a b
                  infix 5 `Foo`
                  data Blargh = Foo |]
      runIO $ do
        putStr $ unlines $ map show decs
        putStrLn ""
        putStr $ unlines $ map toString $ getNames decs
      return [] )

The goal here is to learn more about the Names used in the desugaring. Here is my output:

TySynD Foo_1627434972 [PlainTV a_1627434975,PlainTV b_1627434976] (AppT (AppT (ConT Data.Either.Either) (VarT a_1627434975)) (VarT b_1627434976))
InfixD (Fixity 5 InfixN) Foo_1627434974
InfixD (Fixity 5 InfixN) Foo_1627434972
DataD [] Blargh_1627434973 [] [NormalC Foo_1627434974 []] []

OccName "Foo" (NameU 1627434972)
OccName "a" (NameU 1627434975)
OccName "b" (NameU 1627434976)
OccName "Either" (NameG TcClsName PkgName "base" ModName "Data.Either")
OccName "a" (NameU 1627434975)
OccName "b" (NameU 1627434976)
OccName "Foo" (NameU 1627434974)
OccName "Foo" (NameU 1627434972)
OccName "Blargh" (NameU 1627434973)
OccName "Foo" (NameU 1627434974)

We see here a few things:
- My solution (2) above is already somewhat implemented. Note that the quote has only 1 fixity declaration, but the desugared TH AST has 2! This was the essence of my idea (2) above.
- GHC correctly notices the difference between the type Foo and the data constructor Foo in a quote.
- All of the local names are NameUs. These NameUs indeed become Exacts during splicing. But the round trip from quote to TH AST to splice loses the namespace information, because NameUs do not carry namespace info.

So, we either add namespace information to NameU or implement (1), above. Adding namespace info to NameU is slightly annoying, because fixity declarations are the only place that the namespace isn't apparent from a usage site. Another possible solution is to add namespace info to the InfixD TH constructor. This is dissatisfactory because TH should model concrete syntax, and concrete syntax doesn't have a namespace marker there.

I'm happy to take suggestions, but my tendency is toward (1).

comment:6 Changed 7 months ago by goldfire
- Differential Revisions set to Phab:D424
- Status changed from new to patch

comment:7 Changed 6 months ago by Richard Eisenberg <eir@…>

comment:8 Changed 6 months ago by Richard Eisenberg <eir@…>

comment:9 Changed 6 months ago by goldfire
- Resolution set to fixed
- Status changed from patch to closed
- Test Case set to th/T9066
https://ghc.haskell.org/trac/ghc/ticket/9066
What is Achieved by Project Jigsaw:

As I explained earlier, project Jigsaw solves the problem of the whole java API being used as a single monolithic codebase. The following points highlight the main advantages.

1. Dependency Graph: Jigsaw gives a way to uniquely identify a particular codebase, and also to declare a codebase's dependencies on other codebases. This creates a complete dependency graph for a particular set of classes. Say, for example, you want to write a program that depends on the Apache BCEL library. Until now, there was no way for you to express this requirement in the code itself. Using Jigsaw, you can express this requirement in the code itself, allowing tools to resolve this dependency.

2. Multiple Versions of the Same Code: Suppose you write a program that depends on both library A and library B. Now suppose library A depends on version 1.0 of library C and library B depends on version 2.0 of library C. In the current java runtime, you cannot use library A and B at the same time without creating a complex hierarchy of custom classloaders, and even that would not work in all cases. After Jigsaw becomes part of java, this is not a problem, as a class will be able to see only the versions of its dependent classes that are part of the module versions required by the class's container module. That is to say, since module A depends on version 1.0 of module C, and module B depends on version 2.0 of module C, the java runtime can figure out which version of the classes in module C should be seen by either module A or module B. This is similar to the OSGi project.

3. Modularization of Java Platform Itself: The current java platform API is huge and not all parts of it may be relevant in every case. For example, a java platform intended to run a Java EE server does not have to implement the Swing API, as that would not make any sense. Similarly, embedded environments can strip down some less important APIs (for embedded use), like the compiler API, to make the platform smaller and faster. Under the current java platform this is not possible, as any certified java platform must implement all the APIs. Jigsaw will provide a way to implement only the part of the API set relevant to the particular platform. Since a module can explicitly declare its dependency on any particular java API module, it will be run only when the platform has an implementation of the modules required by the module.

4. Integration with OS native installation: Since the module system is very similar to what is currently available for the installation of programs and libraries in modern operating systems, java modules can be integrated with those systems. This is in fact out of the scope of the Jigsaw project itself, but the OS vendors are encouraged to enable it and they would most likely do so. For example, the rpm based repository system available in Redhat based linux systems and the apt based repository systems available in Debian based linux systems can easily be enhanced to support java module systems.

5. Module Entry Point: Java modules can specify an entry point class just like jars can. When a module is run, the entry point's main method is invoked. Since the OS can now install a java module and the java module can be executed, this is very similar to installing an OS's native program.

6. Efficiency: Currently, every time a JVM is run, it verifies the integrity of every single class that is loaded during the run of the program. This takes a considerable amount of time. Also, the classes are accessed individually from the OS file system. Since modules can be installed before running, the installation itself can now include the verification step, which will eliminate the need to verify the classes at runtime. This will lead to considerable performance improvement. Also, the module system can store the classes in its own optimized manner, leading to further improvement in performance.

7. Module Abstraction: It is possible to provide an abstraction for a particular module. Say module A depends on module X. Now module D can provide for module X, thus providing its implementation. For example, the Apache Xerces modules would want to provide for the jdk.jaxp module and would be able to satisfy a dependency requirement for jdk.jaxp.

Basics of Modular Codebase:

All the above discussion is pretty vague without a real example of a modular codebase and its usage. A modular codebase can either be single module or multi-module. In the single module case, all we need to enable modules is to create a file named module-info.java at the base of the source path, outside any package. The module-info.java file is a special java file written in a special syntax designed to declare module information. The following is an example of such a module-info.java.

module com.a @ 1.0 {
    requires com.b @ 1.0;
    class com.a.Hello;
}

In this case the module is named com.a and it has a dependency on com.b. It also declares an entry point com.a.Hello. Note that it is not required that the package structure resembles the module name, although that would probably be a best practice. Now you might be thinking that if this is single module mode, then why is there a dependency on a different module; does that not make it two modules? Notice that even if there is only one explicit declaration of a dependency module, there is an implicit dependency on all java API modules. If none of the java API modules are declared explicitly as dependencies, all of them are included. The only reason it is still single module is that com.b must already be available in binary form in the module library. It is multi-module when more than one module is being compiled at the same time. Compiling a source in single module mode is as simple as how we compile a non-modular source. The only difference is that module-info.java will be present in the source root.

Multi-module Source:

In case the source contains multiple modules, they must be given a directory structure. It is pretty simple though. The source under a particular module must be kept in a directory of the name of the module. For example, the source for the class com.a.Hello in the module com.a must be kept in [source-root]/com.a/com/a/Hello.java, and the module-info.java must be kept in the directory [source-root]/com.a

Compiling Multi-module Source:

For this let us consider an example of compiling two modules, com.a and com.b. Let us first take a look at the directory structure, as below:

classes
src
|-- com.a
|   |-- module-info.java
|   |-- com
|       |-- a
|           |-- Hello.java
|-- com.b
    |-- module-info.java
    |-- com
        |-- b
            |-- Printer.java

The code for module-info.java in com.a would be like this.

module com.a @ 1.0 {
    requires com.b @ 1.0;
    class com.a.Hello;
}

The module-info.java in com.b:

module com.b @ 1.0 {
    exports com.b;
}

Printer.java in com.b/com/b:

package com.b;

public class Printer {
    public static void print(String toPrint) {
        System.out.println(toPrint);
    }
}

Hello.java in com.a/com/a:

package com.a;

import com.b.Printer;

public class Hello {
    public static void main(String[] args) {
        Printer.print("Hello World!");
    }
}

The code is pretty self-explanatory; we are trying to use the com.b.Printer class from module com.b in the com.a.Hello class of module com.a. For this, it is mandatory for com.a's module-info.java to declare com.b as a dependency with the requires keyword. We want to create the output class files in the classes directory. The following javac command will do that.

javac -d classes -modulepath classes -sourcepath src `find src -name '*.java'`

Note that we have used the find command in backquotes (`) so that the command's output will be included as the file list. This will work in linux and unix environments. In case of others we might simply type in the list of files.

After compilation, the classes directory will have a similar structure of class files. Now we can install the modules using the jmod command.

jmod create -L mlib
jmod install -L mlib classes com.b
jmod install -L mlib classes com.a

We first created a module library mlib and installed our modules in the library. We could also have used the default library by not specifying the -L option to the install command of jmod. Now we can simply run module com.a using

java -L mlib -m com.a

Here too we could have used the default library. It is also possible to create a distributable module package [equivalent to a jar in today's distribution mechanism] that can directly be installed. For example, the following will create com.a@1.0.jmod for com.a

jpkg -m classes/com.a jmod com.a

I have tried to outline the module infrastructure in the upcoming java release. However, project Jigsaw is being modified every day and could turn out to be something completely different in the end. But it is expected that the basic concepts will remain the same. The module concepts as a whole are more complex, and I will cover the details in an upcoming article.

Reference: What's Cooking in Java 8 – Project Jigsaw from our JCG partner Debasish Ray Chawdhuri at the Geeky Articles blog.
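As a footnote to the walkthrough above: the version-visibility rule from point 2 (module A sees com.c @ 1.0 while module B sees com.c @ 2.0) can be modelled with a tiny resolver. This is a hedged Python sketch, unrelated to the real jmod tooling; the library layout is invented for illustration:

```python
# Installed module library: (name, version) -> list of (dep_name, dep_version).
LIBRARY = {
    ("com.a", "1.0"): [("com.c", "1.0")],
    ("com.b", "1.0"): [("com.c", "2.0")],
    ("com.c", "1.0"): [],
    ("com.c", "2.0"): [],
}

def resolve(name, version, seen=None):
    """Collect the exact (name, version) set visible to one module.

    Each module only ever pulls in the versions its own declarations
    (transitively) require, so two modules can depend on different
    versions of the same library without conflict.
    """
    seen = set() if seen is None else seen
    key = (name, version)
    if key in seen:
        return seen
    if key not in LIBRARY:
        raise LookupError("module %s @ %s not installed" % (name, version))
    seen.add(key)
    for dep in LIBRARY[key]:
        resolve(dep[0], dep[1], seen)
    return seen
```

Resolving com.a never touches version 2.0 of com.c, and resolving com.b never touches 1.0, which is exactly the per-module view the article describes.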
http://www.javacodegeeks.com/2012/05/whats-cooking-in-java-8-project-jigsaw.html
MB_CUR_MAX - maximum length of a multibyte character in the current locale

#include <stdlib.h>

The MB_CUR_MAX macro defines an integer expression giving the maximum number of bytes needed to represent a single wide character in the current locale. It is locale-dependent and therefore not a compile-time constant.

An integer in the range [1, MB_LEN_MAX]. The value 1 denotes traditional 8-bit encoded characters.

C99, POSIX.1-2001.

MB_LEN_MAX(3), mblen(3), mbstowcs(3), mbtowc(3), wcstombs(3), wctomb(3)

This page is part of release 3.24 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at.
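The idea behind MB_CUR_MAX, that a single character may need several bytes in the current locale's multibyte encoding, is easy to demonstrate. A hedged Python sketch using UTF-8 as the multibyte encoding (Python exposes no direct MB_CUR_MAX binding; this merely illustrates the byte counts involved):

```python
# Bytes needed per character under a UTF-8 multibyte encoding:
# ASCII takes 1, Latin accents 2, many symbols 3, astral-plane chars 4.
samples = {"A": 1, "é": 2, "€": 3, "𝄞": 4}

widths = {ch: len(ch.encode("utf-8")) for ch in samples}

# The locale-dependent "maximum bytes per character" over this sample set;
# a real UTF-8 locale reports MB_CUR_MAX via <stdlib.h> in C.
mb_cur_max_like = max(widths.values())
```

In a traditional 8-bit locale every character is one byte (MB_CUR_MAX == 1); in a UTF-8 locale the maximum is larger, which is why the macro cannot be a compile-time constant.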
http://huge-man-linux.net/man3/MB_CUR_MAX.html
Newbie questions - How to gracefully shutdown the board? - How to execute a test script from REPL? - How to validate that the firmware for the Pysense and the Lopy4 module is on the latest recommended release Thanks for your help Hi, The hardware version of the board can be found written on the silk screen (either 1.0or 1.1), The firmware version of the pysense can be found by using the read_fw_version()function in pycoproc.pywhich can be found here. Finally in regards to what library version you are using, you can see that at the top of the .py file e.g: __version__ = '0.0.2' @shishir No idea. The pysense.py module itself is a pythion script, located in the library. But you migth have asked for the firmware version of the PIC controller on the pysense board. Maybe someone else can anwer that, like @seb @robert-hh Thanks. One more question. 4. How to find the Pysense version from board? @shishir The latest version is always visible in the forum's section "Announcement & News". Since always new features are added and bugs are fixed, you have to determine yourself whether a certain version is good for your purpose. @robert-hh Thanks a lot 1 and 2: OK 3. I found the commands in the document but my query was how to figure out if it is the recommended stable version @shishir 1. Remove power. Unless there ate open files, that's ok 2. If the script has beenuploaded to the device,enter import <name_of_file> Without the .py extension. 3. Enter the commands in REPL: import uos uos.uname()
https://forum.pycom.io/topic/2868/newbie-questions
On Thu, Jun 14, 2007 at 03:54:47PM -0600, Marc St-Jean wrote: General comment all the first ~ 2900 of your patch is that it's rather heavy on #ifdef. #ifdef is a nasty construct that has a tendency to hard to read code which in trun results in bugs. Or a test compile fails to find real issues in the code because it's hidden in a dead #if construct. > diff --git a/include/asm-mips/pmc-sierra/msp71xx/msp_regops.h > b/include/asm-mips/pmc-sierra/msp71xx/msp_regops.h > new file mode 100644 > index 0000000..60a5a38 > --- /dev/null > +++ b/include/asm-mips/pmc-sierra/msp71xx/msp_regops.h > @@ -0,0 +1,236 @@ > +/* > + * SMP/VPE-safe functions to access "registers" (see note). > + * > + * NOTES: > +* - These macros use ll/sc instructions, so it is your responsibility to > + * ensure these are available on your platform before including this file. > + * - The MIPS32 spec states that ll/sc results are undefined for uncached > + * accesses. This means they can't be used on HW registers accessed > + * through kseg1. Code which requires these macros for this purpose must > + * front-end the registers with cached memory "registers" and have a single > + * thread update the actual HW registers. You basically betting on undefined behaviour of the architecture. I did indeed verify this specific case with a processor design guy and the answer was a "should be ok". That is I heared people speak in a more convincing tone already ;-) A SC with an uncached address on some other MIPS processors will always fail. So this isn't just a theoretical footnote in a manual. The way things are implemented LL/SC I am certain that uncached LL/SC will fail on any MIPS multiprocessor system. Fortunately while SMTC pretends to be multiprocessor it's really just a single core, which saves your day. > + * - A maximum of 2k of code can be inserted between ll and sc. Every > + * memory accesses between the instructions will increase the chance of > + * sc failing and having to loop. 
Any memory access between LL/SC makes the LL/SC sequence invalid that is it will have undefined effects. > + * - When using custom_read_reg32/custom_write_reg32 only perform the > + * necessary logical operations on the register value in between these > + * two calls. All other logic should be performed before the first call. > + * - There is a bug on the R10000 chips which has a workaround. If you > + * are affected by this bug, make sure to define the symbol 'R10000_LLSC_WAR' > + * to be non-zero. If you are using this header from within linux, you may > + * include <asm/war.h> before including this file to have this defined > + * appropriately for you. > +#ifndef __ASM_REGOPS_H__ > +#define __ASM_REGOPS_H__ > + > +#include <linux/types.h> > + > +#include <asm/war.h> > + > +#ifndef R10000_LLSC_WAR > +#define R10000_LLSC_WAR 0 > +#endif This symbol is supposed to be defined by <asm/war.h> only. Anyway, this #ifndef will never be true because you already include <asm/war.h>, so this is dead code. > +#if R10000_LLSC_WAR == 1 > +#define __beqz "beqzl " > +#else > +#define __beqz "beqz " > +#endif > + > +#ifndef _LINUX_TYPES_H > +typedef unsigned int u32; > +#endif Redefining a stanard Linux type is a no-no as is relying on include wrapper symbols like _LINUX_TYPES_H. Anyway, this #ifndef will never be true because you already include <linux/types.h>, so this is dead code. > +static inline u32 read_reg32(volatile u32 *const addr, > + u32 const mask) > +{ > + u32 temp; > + > + __asm__ __volatile__( > + " .set push \n" > + " .set noreorder \n" > + " lw %0, %1 # read \n" > + " and %0, %2 # mask \n" > + " .set pop \n" > + : "=&r" (temp) > + : "m" (*addr), "ir" (mask)); > + > + return temp; > +} No need for inline assembler here; plain C can achieve the same. Or just use a standard Linux function such as readl() or ioread32() or similar. 
> +/* > + * For special strange cases only: > + * > + * If you need custom processing within a ll/sc loop, use the following > macros > + * VERY CAREFULLY: > + * > + * u32 tmp; <-- Define a variable to hold > the data > + * > + * custom_read_reg32(address, tmp); <-- Reads the address and put > the value > + * in the 'tmp' variable given > + * > + * From here on out, you are (basicly) atomic, so don't do anything too > + * fancy! > + * Also, this code may loop if the end of this block fails to write > + * everything back safely due do the other CPU, so do NOT do anything > + * with side-effects! > + * > + * custom_write_reg32(address, tmp); <-- Writes back 'tmp' safely. > + */ > +#define custom_read_reg32(address, tmp) \ > + __asm__ __volatile__( \ > + " .set push \n" \ > + " .set mips3 \n" \ > + "1: ll %0, %1 #custom_read_reg32 \n" \ > + " .set pop \n" \ > + : "=r" (tmp), "=m" (*address) \ > + : "m" (*address)) > + > +#define custom_write_reg32(address, tmp) \ > + __asm__ __volatile__( \ > + " .set push \n" \ > + " .set mips3 \n" \ > + " sc %0, %1 #custom_write_reg32 \n" \ > + " "__beqz"%0, 1b \n" \ > + " nop \n" \ > + " .set pop \n" \ > + : "=&r" (tmp), "=m" (*address) \ > + : "0" (tmp), "m" (*address)) These two are *really* fragile stuff. Modern gcc rearranges code in amazing ways, so you might end up with other loads or stores being moved into the ll/sc sequence or the 1: label of another inline assembler construct being taken as the destination of the branch. 
So I would suggest to safely store the two functions in a nice yellow barrel ;-)

General suggestion, you can make about every access atomic if you do something like

#include <linux/module.h>
#include <linux/spinlock.h>

DEFINE_SPINLOCK(register_lock);
EXPORT_SYMBOL(register_lock);

static inline void set_value_reg32(u32 *const addr, u32 const mask, u32 const value)
{
	unsigned long flags;
	u32 bits;

	spin_lock_irqsave(&register_lock, flags);
	bits = readl(addr);
	bits &= mask;
	bits |= value;
	writel(bits, addr);
	spin_unlock_irqrestore(&register_lock, flags);
}

Maybe slower but definitely more portable and not waiting before some CPU designer screws your code by accident :-)

Ralf
http://www.linux-mips.org/archives/linux-mips/2007-06/msg00288.html
In this video I start my new Android tutorial. The last Android tutorial I made is still very popular, but I’m going to try and improve on it here. If you are a beginner to Android and don’t know Java you may prefer my Android tutorial for beginners. I’ll be using Android Studio in this tutorial and I show how to install Android Studio here. All of the code follows the tutorial below.

Code from the Video

MainActivity.java

package com.newthinktank.helloagain.app;

import android.os.Bundle;
import android.support.v7.app.ActionBarActivity;
import android.view.Menu;
import android.view.MenuItem;
import android.view.View;
import android.widget.Button;
import android.widget.TextView;

public class MainActivity extends ActionBarActivity {

    // onCreate is executed when the activity is created
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // Sets the file activity_main.xml as the user interface
        setContentView(R.layout.activity_main);

        // To be able to edit the TextView with our code we have to create it and
        // bind it to a TextView object. I need to use final because it will be
        // used in the inner class below
        final TextView firstTextView = (TextView) findViewById(R.id.textView);

        // I set up the Button just like I did the TextView
        Button firstButton = (Button) findViewById(R.id.firstButton);

        // This is how you make the Button change the text in the TextView when it is clicked
        firstButton.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View view) {
                firstTextView.setText("You Clicked");
            }
        });
    }
}

activity_main.xml

.newthinktank.helloagain.app.MainActivity"> <TextView android: <Button android: </RelativeLayout>

dimens.xml

<resources>
<!-- Default screen margins, per the Android Design guidelines.
--> <dimen name="activity_horizontal_margin">16dp</dimen> <dimen name="activity_vertical_margin">16dp</dimen> </resources> strings.xml <?xml version="1.0" encoding="utf-8"?> <!-- We store all the text in the strings.xml file so it is easy to translate into other languages --> <resources> <string name="app_name">HelloAgain</string> <string name="hello_world">Hello Again</string> <string name="action_settings">Settings</string> <string name="button_1_text">You Clicked</string> </resources> AndroidManifest.xml <?xml version="1.0" encoding="utf-8"?> <manifest xmlns: <application android: <activity android: <intent-filter> <action android: <category android: </intent-filter> </activity> </application> </manifest> when will your Android challenge start?? I can’t wait to win Samsung Galaxy Note 3 😀 It just started yesterday. All the information is on my site can you put a next and previous button for pagination on your site blog post so that we can easily go to next topic using this site? please. thank you I’ll see what I can do. I only avoided that because some people don’t like having to click to different pages. great job on doing this tutorials keep it up 😀 Thank you 🙂 can you help me with this error please and thanks Waiting for device. “/Applications/Android Studio.app/sdk/tools/emulator” -avd MonoForAndroid_API_8 -netspeed full -netdelay none emulator: ERROR: This AVD’s configuration is missing a kernel file!! In the SDK manager download the ARM EABI v7a System Image Best tutorials given by the best teacher Dereck Banas Thank you 🙂 It is very kind of you to say that. i’m going to learn about android apps development, which one should i choose, the new android tutorial or the old one? Probably the new one. I’m better at teaching Android now This is my first time programming anything for android, but I have developed a lot for Java and different languages earlier. I followed your tutorial but I can not get it too work. 
First it does not like ActionBarActivity so I googled and added dependencies { compile ‘com.android.support:appcompat-v7:+’ } too build.gradle but now it is complaing about Manifest merger failed : uses-sdk:minSdkVersion 15 cannot be smaller than version L declared in library com.android.support:appcompat-v7:21.0.0-rc1 What am I doing wrong, is it supposed to be this hard? 😀 I have never actually experienced anything thid hard to do a simple hello world program 😀 Switch your target API to 19 and make sure you have all the proper files downloaded for 19 in the SDK manager and the errors will go away Derek, are you going to make a video on how to create a menu bar, for example, and add an icon to it? Is this too graphics intensice i.e. GL stuff or is it easier than it looks? Also would you be able to add an action behind that button? I’ll cover custom layouts, menu bars, etc. later. I already cover the action bar and options menu in part 4. I think i need to learn the path to programming effectively for android. the app inventor has worked awesome and i have overcome all of the obstacles i set out to. what steps would you recommend? i have experience with computers mostly by necessity. i did basic in high school. a but of c in college but20 years ago. where should i start now and what should i not bypass? is java what i need to learn? a little insight could keep me from wasting vast amounts of time learning things that later will be irrelavant. thanks for your thought 🙂 Yes parts 1 – 18, minus parts 8 and 10 is all you need from my Java tutorial. Then move on to my new Android tutorial and you’ll be ready to go. Thanks for your Tutorial. It`s great. cheers You’re very welcome 🙂 Thanks for the tutorials! Love the android ones! Thank you 🙂 More are coming in the next few days way kool video. only downside is that it takes for ever to update the sdk manager. Thank you 🙂 That is odd that the SDK manager is so slow. I haven’t had that issue before. 
Hi there, I ran the code u given and i got an error. error:cannot find symbol variable main execution failed for task ‘:app.compileDebugJava compilation failed; see the computer compiler error output for details /////////////////////////////////////// @Override public boolean onCreateOptionsMenu(Menu menu) { // Inflate the menu; this adds items to the action bar if it is present. getMenuInflater().inflate(R.menu.main, menu); return true; } ///////////////////////////////////////// the ‘main’ in error is from the code above. may i know how do i resolve this? thank you. Remove all of these from your file import android.R and clean the project Really a huge fan of yours…! 🙂 Thank you 🙂 I do my best.
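For readers who want to see the listener wiring from MainActivity in isolation: the sketch below mimics it in plain JVM Java, with FakeButton and FakeTextView as stand-ins invented for this illustration (no Android involved). It shows why the TextView local must be final — an anonymous inner class can only capture (effectively) final locals:

```java
// Plain-JVM sketch of the MainActivity wiring; FakeButton and FakeTextView
// are hypothetical stand-ins for the Android widgets.
interface OnClickListener {
    void onClick();
}

class FakeTextView {
    private String text = "Hello Again";
    void setText(String t) { text = t; }
    String getText() { return text; }
}

class FakeButton {
    private OnClickListener listener;
    void setOnClickListener(OnClickListener l) { listener = l; }
    void click() { if (listener != null) listener.onClick(); } // simulate a tap
}

class ListenerDemo {
    static FakeTextView wire() {
        // 'final' so the anonymous inner class below may capture it,
        // exactly as with firstTextView in the tutorial
        final FakeTextView firstTextView = new FakeTextView();
        FakeButton firstButton = new FakeButton();
        firstButton.setOnClickListener(new OnClickListener() {
            @Override
            public void onClick() {
                firstTextView.setText("You Clicked");
            }
        });
        firstButton.click();
        return firstTextView;
    }
}
```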
http://www.newthinktank.com/2014/06/make-android-apps/
Become an Series 40 application developer - Part 2

This article explains how to develop a Java ME ("native") application for Series 40 devices. The article is for absolute beginners, and also for developers who are familiar with Java ME, but want to work on Series 40. Continuing from Become an Series 40 application developer - Part 1

Introduction

You have learnt about the Series 40 platform and are now familiar with using the NetBeans IDE and the Nokia IDE. Here, we are going to develop our first application. In this part, we will be developing an application for Series 40 classic phones with the NetBeans IDE and the Oracle Java(TM) Micro Edition SDK. The application will run on all Series 40 devices. We will use the Nokia Java SDK 1.1 for Touch and Type Devices on the NetBeans IDE, and the Nokia IDE for full touch devices.

Beginning With Development

The application we are going to develop is called "Day Care". This is an application for a baby minder. The application allows her to store the baby's name, age, and date and time along with the address.

Creating Project

- Create a new JavaME Mobile Application project by pressing the "Ctrl+Shift+N" keyboard shortcut.
- Set the Project Name as "DayCare". Check Set as Main Project, uncheck Create Hello MIDlet and click the "Next" button.

Adding MIDlet to project

Now we need to add a MIDlet to our project to start developing the app.

- Create a "New Visual MIDlet" by pressing "Ctrl+N" or "File>New File".
- Set the MIDlet Name as "DayCare" and the Package as "Daycare" and click the "Finish" button to create it.

Working with Components in Flow View

Adding Form and List

- Now drag two Forms and one List to Flow (see 1 in fig.). Select each Form and List added and change "Title" to "Add" and "View" for the Forms and "Menu" for the List (see 2 in fig.). Then double click on the default name and change the "Object Name" to match the "Title" (see 3 in fig.).
- Click and drag to connect from "Started" to "Menu" (see 4 in fig.).
Adding List Element to List

- Add 4 "List Elements" to "Menu" by dragging "List Elements" from "Elements" (see 1 in fig.) into the "Menu" List.
- Rename the newly added elements by changing the "String" field in "Properties" (see 2 in fig.) to "Add", "View", "Help" and "About".
- Connect "Add" and "View" to the corresponding Form by dragging a line (see 3 and 4 in fig.).

Adding Commands

- Drag "Exit Command" from "Commands" into the "Menu" list (see 1 in fig.) and "Back Command" into "add" (see 2 in fig.). Connect to the "Menu" list by dragging a connection between "backCommand" and the "Menu" list (see 3 in fig.).
- Add the existing "backCommand" to the "view" form by right clicking on the form, selecting "Add/New" from the pop-up menu and selecting "backCommand" from the "Add" category.

Completing the Application Design

- Complete the application design by adding the "About" form and "Help" form. Repeat the previously done steps. Now your Flow View should look like the one below.

Working with Components in Screen view

Open "Screen View" by clicking on the screen button.

add Form

- Open the "add" Form by selecting "add" from the drop down list located next to the "Analyzer" button.
- Add a "Text Field" by dragging "Text Field" from "Items" into the form template (see 1 in fig.). Set the Label as "Name" in the "Properties" window (see 2 in fig.).
- Again add a "Text Field" and set the Label as "Age", Input Constraints to "Numeric" and Maximum Size to "3" (see 2 in fig.).
- Add a "Date Field" and set the Label as "Date and Time".
- Add a "Text Field" and set the Label to "Address" and Maximum Size to "100".

Renaming the Objects

Open the "Navigation" Window.

- Find and expand add[Form]. Expand Items[Form].
- Right click on the textField. Select "Rename" from the pop-up menu and set the name as "nameTxt".
- Repeat the previous step for the other objects and set the names as "ageTxt", "dateTxt" and "addressTxt" respectively.

view Form

- Open the "view" Form from the drop down list (see 1 in fig.).
- Add 4 "String Items" into the form template.
- Set the Label as "Name", "Age", "Date and Time" and "Address" respectively.
- Rename the Objects to "nameStr", "ageStr", "dateStr" and "addressStr" respectively.

About Form

An About Form is required to publish to the Store. It should contain the following details: the application name with the exact version, and a support email.

- Open the "About" Form from the drop down list next to the Analyzer button.
- Drag a "String Item" into the form template. Set the Label to "DayCare v1.0" and set the Text as "Developed By: <your_name>".
- Drag another "String Item" into the form. Set the Label to "Support" and set the Text as "<your_email_id>".

Help Form

A Help Form is required to publish to the Store. The Help form should contain a detailed description of the application and its usage.

- Open the "Help" Form from the drop down list next to the Analyzer button.
- Drag a "String Component" into the form template. Set the Label to "DayCare" and add some help and usage text into the Text.

Coding the things

We have finished the design part, but we still need to add Commands to save the information, to navigate through it and to delete.

- Drag an "OK Command" in Flow View to the "add" Form, set the Label to "Save" and rename it as "saveCommand".
- Drag an "OK Command" to the "view" Form, set the Label to "Next" and rename it as "nextCommand".
- Drag an "OK Command" to the "view" Form, set the Label to "Delete" and rename it as "deleteCommand".

Saving and reading data

For storing the data in the phone, we use a Record Store. The schema for our database will be:

- Create an integer variable called "no". This is to keep track of the record currently being read.
- Create an object of RecordStore, "rs". The Record Store is used to save data.

public class DayCare extends MIDlet implements CommandListener {
    private boolean midletPaused = false;
    int no = 1;
    RecordStore rs = null;

- Expand Generated Method: commandAction for Displayables and perform the following steps.
- Find the if statement for command == saveCommand and add the following program statements below "// write pre-action user code here" to add a record to the record store.

try {
    // argument 1 is the name of the record store to be opened; argument 2 specifies
    // that it should create a new record store if it does not exist
    rs = RecordStore.openRecordStore("DayCareDB", true);
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    DataOutputStream dout = new DataOutputStream(out);
    dout.writeUTF(getNameTxt().getString());
    dout.writeInt(Integer.parseInt(getAgeTxt().getString()));
    dout.writeUTF(getDateTxt().getDate().toString());
    dout.writeUTF(getAddressTxt().getString());
    dout.flush();
    byte[] b = out.toByteArray();
    rs.addRecord(b, 0, b.length); // to add the record to the record store
    rs.closeRecordStore();
    dout.close();
    out.close();
} catch (Exception ex) {
}
getNameTxt().setString(null);
getAgeTxt().setString(null);
getAddressTxt().setString(null);
switchDisplayable(null, getMenu());

- Find the if statement for command == nextCommand and add the following program statements below "// write pre-action user code here" to read the next record from the record store.

try {
    rs = RecordStore.openRecordStore("DayCareDB", false);
    if (no <= rs.getNumRecords()) {
        byte[] b = new byte[1000];
        ByteArrayInputStream inp = new ByteArrayInputStream(b);
        DataInputStream dinp = new DataInputStream(inp);
        rs.getRecord(no, b, 0); // to read from the record store
        getNameStr().setText(dinp.readUTF());
        getAgeStr().setText("" + dinp.readInt());
        getDatetimeStr().setText(dinp.readUTF());
        getAddressStr().setText(dinp.readUTF());
        dinp.close();
        inp.close();
        no++;
    } else {
        no = 1;
    }
    rs.closeRecordStore();
} catch (Exception ex) {
}

- Expand Generated Method: MenuAction and perform the following steps.
- Find the if statement for __selectedString.equals("View") and add the following program statements below "// write pre-action user code here" to set up the "view" Form. These statements read the first record from the record store.
try { rs = RecordStore.openRecordStore("DayCareDB", false); if (rs.getNumRecords() != 0) { byte[] b = new byte[1000]; ByteArrayInputStream inp = new ByteArrayInputStream(b); DataInputStream dinp = new DataInputStream(inp); rs.getRecord(1, b, 0); //to read record form record store getNameStr().setText(dinp.readUTF()); getAgeStr().setText("" + dinp.readInt()); getDatetimeStr().setText(dinp.readUTF()); getAddressStr().setText(dinp.readUTF()); dinp.close(); inp.close(); } rs.closeRecordStore(); } catch (Exception ex) { } Running the project Save the file by pressing Ctrl+S. Click on Clean and Build button (keyboard shortcut Shift+F11). After build completes, click on "Run" button (keyboard shortcut F6) to run. Testing in RDA devices. Project File Where to go Next Java Developer Library Become an Series 40 application developer - Part 3 (coming soon).
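The serialization scheme the handlers above rely on — DataOutputStream into a byte array on save, mirrored by DataInputStream on load, with the field order identical on both sides — can be exercised on any desktop JVM without RecordStore. The class below is a hypothetical helper written for this illustration, not part of the tutorial's generated code:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Round-trips one Day Care record the same way the MIDlet does:
// writeUTF/writeInt on save must be matched by readUTF/readInt
// in exactly the same order on load.
class RecordCodec {
    static byte[] encode(String name, int age, String date, String address) {
        try {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            DataOutputStream dout = new DataOutputStream(out);
            dout.writeUTF(name);
            dout.writeInt(age);
            dout.writeUTF(date);
            dout.writeUTF(address);
            dout.flush();
            return out.toByteArray();
        } catch (IOException ex) {
            throw new RuntimeException(ex);
        }
    }

    static String[] decode(byte[] b) {
        try {
            DataInputStream dinp = new DataInputStream(new ByteArrayInputStream(b));
            return new String[] {
                dinp.readUTF(), String.valueOf(dinp.readInt()),
                dinp.readUTF(), dinp.readUTF()
            };
        } catch (IOException ex) {
            throw new RuntimeException(ex);
        }
    }
}
```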
http://developer.nokia.com/community/wiki/Become_an_Series_40_application_developer_-_Part_2
Introduction: Servo Driven Automatic Vice You will need: - Parallel Gripper Kit - - Standard Size Servo - (See Note) Base - I show a couple of ideas To build it on a breadboard you will need: - Arduino - (I used an Uno) - Breadboard - Jumper Wires - 10k Linear Taper Potentiometer - - Long Male Headers - To build the permanent version you will need: - Adafruit Perma-Proto Breadboard - - 2.1mm barrel jack - 2 - 10mf electrolytic capacitors - 7805 Voltage regulator - Heat Sink for Voltage Regulator (see note) - 330-560 Ohm 1/4 Watt resistor (purchased locally) - Red 5mm LED (purchased locally) - 22 gauge hookup wire (Red, Black, Yellow, and Green, purchased locally) - 10k Linear Taper Potentiometer - - Potentiometer Knob - - Long Male Headers - - 28 Pin IC Socket - - ATmega328 - (see note) - 16 MHz Ceramic Resonator - - 9 Volt Power Adapter - (see note) Servo note: On the page with the gripper they show a few different servos that will work. I chose a more expensive servo with metal gears. Sparkfun lists standard size servos for $12.95 and $13.95. These might be good enough. That said, I like metal gears better than plastic. ATmega328 note: The chip is to replace the chip in your Arduino after you take it out to use in your project. Voltage Regulator note: Purchased locally, if you cant find one use a small machine screw to bolt a piece of aluminium to the voltage regulator. Power Adapter note: It works better with a plug in power source than with a battery. I wish it was possible to purchase all these parts from one supplier to save on shipping but that is not the case. Many of these parts are only available from one of the parts suppliers. Step 1: Assemble the Gripper and Vice Base Assemble the gripper and the servo. This link shows how to assemble the gripper: In this example I am using a Panavice base with the jaw assembly removed. Put the gripper assembly in the Panavice base. I used an eraser that is about 3/8" thick as a spacer. 
Step 2: Build It on a Breadboard

Follow the diagram to build the circuit. Use a piece of the male headers with three pins to connect the servo. The positive and negative wires to the potentiometer can be reversed to adjust the direction.

Step 3: The Program Code

Copy/Paste this sketch into the Arduino IDE and upload it to your Arduino:

#include <Servo.h>

Servo myservo;    // create servo object
int pot = 0;      // analog pin used for pot (A0)
int val;          // value from analog pin
int lastVal;      // previous value
long LastChange;

void setup() {
  myservo.attach(9); // attaches the servo on pin 9
}

void loop() {
  val = analogRead(pot);            // reads pot 0 - 1023
  val = map(val, 0, 1023, 1, 179);  // scale it to servo 0 - 180
  if (val < (lastVal - 5) || val > (lastVal + 5)) {
    myservo.write(val);  // sets servo position
    delay(15);           // waits for servo to get there
    LastChange = millis();
    lastVal = val;
  } else if (millis() - LastChange > 500) {
    myservo.write(val);
  }
}

Step 4: Permanent Version: the Power Supply

Step 5: Permanent Version: the Potentiometer

Drill a 9/32 to 5/16 hole in the printed circuit board as shown. Break off the little tab on the potentiometer so it will mount flat. Solder the wires onto the pot. Mount the pot and solder the wires to the board. The middle wire is soldered into hole A18.

Step 6: Permanent Version: Finish the Circuit

Solder the chip socket; it goes in the middle of the board in holes 26 through 39. The alignment notch points away from the power supply. Solder the male headers for the servo. The pins go toward the back, in holes B21 - B23. Attach a black wire from A21 to the ground rail and a red wire from A22 to the positive rail. Attach a yellow wire from D23 to D26. This connects the servo to digital pin nine. Attach a yellow wire from C18 to C34. This connects the pot to analog pin zero. Solder the ceramic resonator into holes H22 through H24. Solder the wires for the resonator: G22 to G31, I24 to I30, and J23 to the ground rail.
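The two bits of logic in the sketch worth understanding are the scaling and the dead band: Arduino's map() is plain integer math (the formula below is the standard one), and the ±5 comparison keeps ADC noise from constantly nudging the servo. Here is a desktop-C sketch of both, with function names chosen for this illustration:

```c
#include <stdint.h>

/* Same integer rescaling Arduino's map() performs:
 * 0..1023 ADC counts become 1..179 servo degrees. */
static long map_range(long x, long in_min, long in_max, long out_min, long out_max)
{
    return (x - in_min) * (out_max - out_min) / (in_max - in_min) + out_min;
}

/* The sketch only commands the servo when the new reading leaves
 * a +/-5 window around the last value, suppressing pot jitter. */
static int should_move(int val, int lastVal)
{
    return val < lastVal - 5 || val > lastVal + 5;
}
```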
Step 7: Final Assembly

Upload the program in step three onto the Arduino. Pull the chip out of your Arduino and insert it on the circuit board; the alignment notch points away from the power supply. Replace the chip in the Arduino with the new chip. Place the gripper and the circuit board in the base. In this picture and the picture on the introductory page I am using a Mitutoyo Micrometer Stand for a base. I already had the part. I searched for it on Amazon.com; they sell it but it is expensive, $60.00. And it is heavy, so the shipping will also be expensive. You can use either of the bases I used or use an idea of your own.

When you close the vice on a part, close it just enough to hold the part; closing it farther does not increase the torque holding the part.

Participated in the Guerilla Design Contest

2 Discussions

5 years ago on Introduction: if you like my idea please vote for me in the Guerilla Design Contest.

5 years ago on Introduction: I just made improvements to this instructable: a video link showing how to assemble the gripper, a heat sink on the voltage regulator, a power adapter replacing the battery, and new improved Arduino code.
https://www.instructables.com/id/Servo-Driven-Automatic-Vice/
Opened 7 years ago Closed 7 years ago #17967 closed New feature (fixed) "change password" link in the admin header should be easier to disable Description The "change password" link in the admin header doesn't make sense for all installations. For example, sites that use django-auth-ldap as the primary authentication scheme don't necessarily allow users to change their password, because it might not make sense to do so. In that case, having the "change password" link in the header of the admin is confusing for users, because it doesn't work / do what they expect it to. I've overriden the base.html template and made this change: 35c35,37 < <a href="{% url 'admin:password_change' %}">{% trans 'Change password' %}</a> / --- > {% if user.has_usable_password %} > <a href="{% url 'admin:password_change' %}">{% trans 'Change password' %}</a> / > {% endif %} which is appropriate for my sites, though I'm not sure if it is in general. But this is annoying, because then when you update Django (as I did today) any changes to base.html just won't happen and there will probably be problems. If has_usable_password isn't a good way to distinguish in general, then there should either be a setting for whether this should be displayed, or the <div id="user-tools"> should be in its own template for easy overriding. Attachments (1) Change History (4) comment:1 Changed 7 years ago by Changed 7 years ago by comment:2 Changed 7 years ago by comment:3 Changed 7 years ago by Merged. patch against [17802]
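For anyone applying the same workaround, the override amounts to copying admin/base.html into your project's template directory and wrapping the link as in the diff above — roughly:

```django
{% if user.has_usable_password %}
<a href="{% url 'admin:password_change' %}">{% trans 'Change password' %}</a> /
{% endif %}
```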
https://code.djangoproject.com/ticket/17967
Is It Time For .tel?

Vitaly Friedman writes "ICANN, the body responsible for creating top-level domains, is." This idea has been kicked around for quite a while; one of the questions is the whole name-space collision issue. For instance, there's me and then there's other me. Lemme tell how strange it is getting fan mail for country music stars.

Unforseen problems (Score:5, Funny)

This may pose a problem with the 526,000+ people sharing the name Michael Smith.

Re:Unforseen problems (Score:5, Insightful)

This may pose a problem with the 526,000+ people sharing the name Michael Smith. Or the people who share names with companies [ncchelp.org]. Or the people who share names with each other. There will be collisions. This plan will not work for its stated purpose. However, its stated purpose and its real purpose most likely are not the same. Odds are, this is just another plan to make more money for the registrars by opening up a new land rush of domain name registrations.

Re:Unforseen problems (Score:2, Insightful)

I could see a way this would work. Whenever you ask for your namelastname.tel or mycompanyname.tel you won't get that domain for you; instead you would have to fill in a form in which you write a brief description of who you or your company are and write down your contact information, including your real website. This way, if I need to contact some person or company, I'll type itsn

Cynical churning of market (Score:5, Insightful)

I don't see how it could be otherwise. First, the phone company already knows that the best way to index phone numbers is by soundex, to avoid massive problems caused by the fact that many people don't know the correct spellings of their friends' and associates' names. And they certainly aren't sounding like this will be the first domain indexed by soundex. Second, it's unlikely that domain ownership will be a prerequisite to having a phone number. I don't think they could sell that.
(In fact, they might realistically make more by saying they were going to give away the domain with your name and invent a service called ... hmmm, let's see... how about the "unlisted domain" where the customer pays money to keep from being locatable.) Third, phone numbers have the virtue of being uncorrelated with a name. That's what makes them resolvable in ambiguity--they act as a cross-check to make sure you got it right. When you can't quite remember a number and think it's either 555-1234 or 555-1235 and then check information to find the first is for "Sam Smith" and the second for "Alex Jones", there's little doubt how to resolve things. But if you thought the number was 1387.Sam.Smith.com or 1386.Sam.Smith.com or maybe 1387.Samuel.Smith.com or maybe 1386.Samuel.Smith or 1387.Sam.Smythe.com or... Obviously finding out that the mis-remembered number matches a lot of same-named people won't help at all. (If you believe in correlating names with telephones this way, it's a short conceptual hop to believing that a .pw domain would help you remember your password.) If you can't autogenerate good phone numbers (i.e., tell people what name they're supposed to use), as I and many others here have argued you can't, what's the alternative? Allow people to choose? Gads, with all the domain squatting it's clear that this would allow much choice to a rich few and little choice to most people. And so it would not be fair at all. The fairest thing I can imagine is to not involve ICANN at all. And besides, back to the original point about this being a ploy to sell domain registries, if I wanted to have the domain system already remember my phone number, why wouldn't I just have people do nslookup on the names I already own? They already require domain owners to list their phone numbers. Re:Unforseen problems (Score:5, Funny) 2001:0db8:85a3:08d3:1319:8a2e:0370:7334 Or 8 groups of 4 hexadecimal digits. 
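For readers unfamiliar with it, soundex is the phonetic hashing the comment above refers to: names that sound alike map to the same four-character code, so Smith and Smythe collide on purpose. A sketch of the classic American Soundex algorithm (an illustration only, not anything from the thread):

```python
def soundex(name: str) -> str:
    """American Soundex: first letter plus three digits from consonant classes."""
    codes = {}
    for letters, digit in [("bfpv", "1"), ("cgjkqsxz", "2"), ("dt", "3"),
                           ("l", "4"), ("mn", "5"), ("r", "6")]:
        for ch in letters:
            codes[ch] = digit
    letters = [c for c in name.lower() if c.isalpha()]
    first = letters[0]
    prev = codes.get(first, "")   # a repeat of the first letter's class is skipped
    digits = []
    for c in letters[1:]:
        d = codes.get(c, "")
        if d and d != prev:
            digits.append(d)
        if c not in "hw":         # h and w do not separate same-class consonants
            prev = d
    return (first.upper() + "".join(digits) + "000")[:4]
```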
[/sarcasm] Re:Unforseen problems (Score:2) Re:Unforseen problems (Score:2) Re:Unforseen problems (Score:3, Informative) Being British, I dont have a "social security number" you small minded wanker Re:Unforseen problems (Score:3, Insightful) Isn't this what Re:Unforseen problems (Score:2, Insightful) Smith is the most common last name. I don't see anything that guarantees they're the most common combination. Re:Unforseen problems (Score:2) The most common male first name worldwide is Mohammed. The most common last name is Chang. Mind you, I don't see much opportunity for overlap, there, either! Re:Unforseen problems (Score:2) Re:Surname is Last Name, not Family Name (Score:3, Informative) Used a dictionary much? From: "the name borne in common by members of a family". That sure seems to indicate that a "surname" is a "family name." From dictionary.cambridge.org: "the name that you share with other members of your family." According to: "Meaning 'family name' is first found 1375." Re:Unforseen problems (Score:2, Insightful) Excuse me? I can't believe what I just read... what exactly are you doing? Sounds like you're doing just that, to me... You see, actually, it's a public forum, and people will use it for whatever the fuck they please. Get it? That is, I (not the original poster) will use "actually" wherever I see it fit; and there's nothing to stop me, just like there's nothing stopping you from being an incessant asswipe and posting hypocritical bullsh Re:Unforseen problems (Score:2, Funny) .tel is ok (Score:4, Insightful) This is way better than .biz, which I can only guess that they just banged out without thinking twice about. Re:.tel is ok (Score:2, Insightful) Re:.tel is ok (Score:2) Re:.tel is ok (Score:2) Having said that, I don't see the necessity for any new TLDs. 
Simple Top-Down Parsing in Python

Fredrik Lundh | July 2008

In Simple Iterator-based Parsing, I described a way to write simple recursive-descent parsers in Python, by passing around the current token and a token generator function.

A recursive-descent parser consists of a series of functions, usually one for each grammar rule. Such parsers are easy to write, and are reasonably efficient, as long as the grammar is “prefix-heavy”; that is, it’s usually sufficient to look at a token at the beginning of a construct to figure out what parser function to call. For example, if you’re parsing Python code, you can identify most statements simply by looking at the first token.

However, recursive descent is less efficient for expression syntaxes, especially for languages with lots of operators at different precedence levels. With one function per rule, you can easily end up with lots of calls even for short, trivial expressions, just to get to the right level in the grammar. For example, here’s an excerpt from Python’s expression grammar. The “test” rule is a basic expression element:

    test: or_test ['if' or_test 'else' test] | lambdef
    or_test: and_test ('or' and_test)*
    and_test: not_test ('and' not_test)*
    not_test: 'not' not_test | comparison
    comparison: expr (comp_op expr)*
    expr: xor_expr ('|' xor_expr)*
    xor_expr: and_expr ('^' and_expr)*
    and_expr: shift_expr ('&' shift_expr)*
    shift_expr: arith_expr (('<<'|'>>') arith_expr)*
    arith_expr: term (('+'|'-') term)*
    term: factor (('*'|'/'|'%'|'//') factor)*
    factor: ('+'|'-'|'~') factor | power
    power: atom trailer* ['**' factor]
    trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME

With a naive recursive-descent implementation of this grammar, the parser would have to recurse all the way from “test” down to “trailer” in order to parse a simple function call (of the form “expression(arglist)”).
In the early seventies, Vaughan Pratt published an elegant improvement to recursive descent in his paper Top-down Operator Precedence. Pratt’s algorithm associates semantics with tokens instead of grammar rules, and uses a simple “binding power” mechanism to handle precedence levels. Traditional recursive-descent parsing is then used to handle odd or irregular portions of the syntax. In an article (and book chapter) with the same name, Douglas Crockford shows how to implement the algorithm in a subset of JavaScript, and uses it to develop a parser that can parse itself in the process.

In this article, I’ll be a bit more modest: I’ll briefly explain how the algorithm works, discuss different ways to implement interpreters and translators with it in Python, and finally use it to implement a parser for Python’s expression syntax. And yes, there will be benchmarks too.

Introducing The Algorithm

Like most other parsers, a topdown parser operates on a stream of distinct syntax elements, or tokens. For example, the expression “1 + 2” could correspond to the following tokens:

    literal with value 1
    add operator
    literal with value 2
    end of program

In the topdown algorithm, each token has two associated functions, called “nud” and “led”, and an integer value called “lbp”. The “nud” function (for null denotation) is used when a token appears at the beginning of a language construct, and the “led” function (left denotation) when it appears inside the construct (to the left of the rest of the construct, that is). The “lbp” value is a binding power, and it controls operator precedence; the higher the value, the tighter a token binds to the tokens that follow.
Given this brief introduction, we’re ready to look at the core of Pratt’s algorithm, the expression parser:

    def expression(rbp=0):
        global token
        t = token
        token = next()
        left = t.nud()
        while rbp < token.lbp:
            t = token
            token = next()
            left = t.led(left)
        return left

(Pratt calls this function “parse”, but we’ll use the name from Crockford’s article instead.)

Here, “token” is a global variable that contains the current token, and “next” is a global helper that fetches the next token. The “nud” and “led” functions are represented as methods, and the “lbp” is an attribute. The “left” variable, finally, is used to pass some value that represents the “left” part of the expression through to the “led” method; this can be any object, such as an intermediate result (for an interpreter) or a portion of a parse tree (for a compiler).

If applied to the simple expression shown earlier, the parser will start by calling the “nud” method on the first token. In our example, that’s a literal token, which can be represented by something like the following class:

    class literal_token:
        def __init__(self, value):
            self.value = int(value)
        def nud(self):
            return self.value

Next, the parser checks if the binding power of the next token is at least as large as the given binding power (the “rbp” argument, for “right binding power”). If it is, it calls the “led” method for that token. Here, the right binding power is zero, and the next token is an operator, the implementation of which could look like:

    class operator_add_token:
        lbp = 10
        def led(self, left):
            right = expression(10)
            return left + right

The operator has a binding power of 10, and a “led” method that calls the expression parser again, passing in a right binding power that’s the same as the operator’s own power. This causes the expression parser to treat everything with a higher power as a subexpression, and return its result.
The method then adds the left value (from the literal, in this case) to the return value from the expression parser, and returns the result.

The end of the program is indicated by a special marker token, with binding power zero (lower than any other token). This makes sure that the expression parser stops when it reaches the end of the program.

    class end_token:
        lbp = 0

And that’s the entire parser. To use it, we need a tokenizer that can generate the right kind of token objects for a given source program. Here’s a simple regular expression-based version that handles the minimal language we’ve used this far:

    import re

    token_pat = re.compile("\s*(?:(\d+)|(.))")

    def tokenize(program):
        for number, operator in token_pat.findall(program):
            if number:
                yield literal_token(number)
            elif operator == "+":
                yield operator_add_token()
            else:
                raise SyntaxError("unknown operator")
        yield end_token()

Now, let’s wire this up and try it out:

    def parse(program):
        global token, next
        next = tokenize(program).next
        token = next()
        return expression()

    >>> parse("1 + 2")
    3

Not counting the calls to the tokenizer, the parser algorithm will make a total of four calls to parse this expression; one for each token, and one extra for the recursive call to the expression parser in the “led” method.

Extending the Parser

To see how this scales, let’s add support for a few more math operations. We need a few more token classes:

    class operator_sub_token:
        lbp = 10
        def led(self, left):
            return left - expression(10)

    class operator_mul_token:
        lbp = 20
        def led(self, left):
            return left * expression(20)

    class operator_div_token:
        lbp = 20
        def led(self, left):
            return left / expression(20)

Note that “mul” and “div” use a higher binding power than the other operators; this guarantees that when the “mul” operator is invoked in the expression “1 * 2 + 3”, it only gets the literal “2”, instead of treating “2 + 3” as a subexpression.
We also need to add the classes to the tokenizer:

    def tokenize(program):
        for number, operator in token_pat.findall(program):
            if number:
                yield literal_token(number)
            elif operator == "+":
                yield operator_add_token()
            elif operator == "-":
                yield operator_sub_token()
            elif operator == "*":
                yield operator_mul_token()
            elif operator == "/":
                yield operator_div_token()
            else:
                raise SyntaxError("unknown operator")
        yield end_token()

but that’s it. The parser now understands the four basic math operators, and handles their precedence correctly.

    >>> parse("1+2")
    3
    >>> parse("1+2*3")
    7
    >>> parse("1+2-3*4/5")
    1

Despite the fact that we’ve added more grammar rules, the parser still makes the same number of calls as before; the expression “1+2” is still handled by four calls inside the parser.

However, codewise, this isn’t that different from a recursive-descent parser. We still need to write code for each token class, and while we’ve moved most of the dispatching from individual rules to the expression parser, most of that ended up in a big if/else statement in the tokenizer. Before we look at ways to get rid of some of that code, let’s add two more features to the parser: unary plus and minus operators, and a Python-style exponentiation operator (**).

To support the unary operators, all we need to do is to add “nud” implementations to the relevant tokens:

    class operator_add_token:
        lbp = 10
        def nud(self):
            return expression(100)
        def led(self, left):
            return left + expression(10)

    class operator_sub_token:
        lbp = 10
        def nud(self):
            return -expression(100)
        def led(self, left):
            return left - expression(10)

Note that the recursive call to expression uses a high binding power, to make sure that the unary operator binds to the token immediately to the right, instead of to the rest of the expression (“(-1)-2” and “-(1-2)” are two different things).
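Before moving on, it can help to see everything so far as one runnable piece. The following is a self-contained Python 3 sketch of the calculator (an adaptation, not the article's exact code: the article targets Python 2, where the tokenizer generator's next method was stored in a global; here the stream itself is kept in a global and advanced with the built-in next(), and the operator dispatch uses a dict instead of an if/elif chain):

```python
import re

# Python 3 sketch of the calculator built so far: literals, the four
# binary operators, and unary +/- via "nud" methods on the operator tokens.

token_pat = re.compile(r"\s*(?:(\d+)|(.))")

class literal_token:
    lbp = 0  # defensive default; never consulted for well-formed input
    def __init__(self, value):
        self.value = int(value)
    def nud(self):
        return self.value

class operator_add_token:
    lbp = 10
    def nud(self):
        return expression(100)       # unary plus
    def led(self, left):
        return left + expression(10)

class operator_sub_token:
    lbp = 10
    def nud(self):
        return -expression(100)      # unary minus
    def led(self, left):
        return left - expression(10)

class operator_mul_token:
    lbp = 20
    def led(self, left):
        return left * expression(20)

class operator_div_token:
    lbp = 20
    def led(self, left):
        return left / expression(20)

class end_token:
    lbp = 0

operators = {
    "+": operator_add_token, "-": operator_sub_token,
    "*": operator_mul_token, "/": operator_div_token,
}

def tokenize(program):
    for number, operator in token_pat.findall(program):
        if number:
            yield literal_token(number)
        elif operator in operators:
            yield operators[operator]()
        else:
            raise SyntaxError("unknown operator")
    yield end_token()

def expression(rbp=0):
    global token
    t = token
    token = next(stream)
    left = t.nud()
    while rbp < token.lbp:
        t = token
        token = next(stream)
        left = t.led(left)
    return left

def parse(program):
    global token, stream
    stream = tokenize(program)
    token = next(stream)
    return expression()
```

With this in place, parse("1+2*3") evaluates to 7 and parse("-1-2") to -3, exercising both the precedence mechanism and the unary "nud" methods.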
Adding exponentiation is a bit trickier; first, we need to tweak the tokenizer to identify the two-character operator:

    token_pat = re.compile("\s*(?:(\d+)|(\*\*|.))")

    ...
    elif operator == "**":
        yield operator_pow_token()
    ...

A bigger problem is that the operator is right-associative (that is, it binds to the right). If you type “2**3**4” into a Python prompt, Python will evaluate the “3**4” part first:

    >>> 2**3**4
    2417851639229258349412352L
    >>> (2**3)**4
    4096
    >>> 2**(3**4)
    2417851639229258349412352L
    >>>

Luckily, the binding power mechanism makes it easy to implement this; to get right associativity, just subtract one from the operator’s binding power when doing the recursive call:

    class operator_pow_token:
        lbp = 30
        def led(self, left):
            return left ** expression(30-1)

In this way, the parser will treat subsequent exponentiation operators (with binding power 30) as subexpressions to the current one, which is exactly what we want.

Building Parse Trees

A nice side-effect of the top-down approach is that it’s easy to build parse trees, without much extra overhead; since the tokenizer is creating a new object for each token anyway, we can reuse these objects as nodes in the parse tree. To do this, the “nud” and “led” methods have to add syntax tree information to the objects, and then return the objects themselves. In the following example, the literal leaf nodes have a “value” attribute, and the operator nodes have “first” and “second” attributes.
The classes also have __repr__ methods to make it easier to look at the resulting tree:

    class literal_token:
        def __init__(self, value):
            self.value = value
        def nud(self):
            return self
        def __repr__(self):
            return "(literal %s)" % self.value

    class operator_add_token:
        lbp = 10
        def nud(self):
            self.first = expression(100)
            self.second = None
            return self
        def led(self, left):
            self.first = left
            self.second = expression(10)
            return self
        def __repr__(self):
            return "(add %s %s)" % (self.first, self.second)

    class operator_mul_token:
        lbp = 20
        def led(self, left):
            self.first = left
            self.second = expression(20)
            return self
        def __repr__(self):
            return "(mul %s %s)" % (self.first, self.second)

(implementing “sub”, “div”, and “pow” is left as an exercise.)

With the new token implementations, the parser will return parse trees:

    >>> parse("1")
    (literal 1)
    >>> parse("+1")
    (add (literal 1) None)
    >>> parse("1+2+3")
    (add (add (literal 1) (literal 2)) (literal 3))
    >>> parse("1+2*3")
    (add (literal 1) (mul (literal 2) (literal 3)))
    >>> parse("1*2+3")
    (add (mul (literal 1) (literal 2)) (literal 3))

The unary plus inserts a “unary add” node in the tree (with the “second” attribute set to None). If you prefer, you can skip the extra node, simply by returning the inner expression from “nud”:

    class operator_add_token:
        lbp = 10
        def nud(self):
            return expression(100)
        ...

    >>> parse("1")
    (literal 1)
    >>> parse("+1")
    (literal 1)

Whether this is a good idea or not depends on your language definition (Python, for one, won’t optimize them away in the general case, in case you’re using unary plus on something that’s not a number.)

Streamlining Token Class Generation

The simple parsers we’ve used this far all consist of a number of classes, one for each token type, and a tokenizer that knows about them all. Pratt uses associative arrays instead, and associates the operations with their tokens.
In Python, it could look something like:

    nud = {}; led = {}; lbp = {}

    nud["+"] = lambda: +expression(100)
    led["+"] = lambda left: left + expression(10)
    lbp["+"] = 10

This is a bit unwieldy, and feels somewhat backwards from a Python perspective. Crockford’s JavaScript implementation uses a different approach; he uses a single “token class registry” (which he calls “symbol table”), with a factory function that creates new classes on the fly. JavaScript’s prototype model makes that ludicrously simple, but it’s not that hard to generate classes on the fly in Python either.

First, let’s introduce a base class for token types, to get a place to stuff all common behaviour. I’ve added default attributes for storing the token type name (the “id” attribute) and the token value (for literal and name tokens), as well as a few attributes for the syntax tree. This class is also a convenient place to provide default implementations of the “nud” and “led” methods.

    class symbol_base(object):

        id = None # node/token type name
        value = None # used by literals
        first = second = third = None # used by tree nodes

        def nud(self):
            raise SyntaxError("Syntax error (%r)." % self.id)

        def led(self, left):
            raise SyntaxError("Unknown operator (%r)." % self.id)

        def __repr__(self):
            if self.id == "(name)" or self.id == "(literal)":
                return "(%s %s)" % (self.id[1:-1], self.value)
            out = [self.id, self.first, self.second, self.third]
            out = map(str, filter(None, out))
            return "(" + " ".join(out) + ")"

Next, we need a token type factory:

    symbol_table = {}

    def symbol(id, bp=0):
        try:
            s = symbol_table[id]
        except KeyError:
            class s(symbol_base):
                pass
            s.__name__ = "symbol-" + id # for debugging
            s.id = id
            s.lbp = bp
            symbol_table[id] = s
        else:
            s.lbp = max(bp, s.lbp)
        return s

This function takes a token identifier and an optional binding power, and creates a new class if necessary. The identifier and the binding power are inserted as class attributes, and will thus be available in all instances of that class.
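The factory is easy to exercise on its own. Here is a standalone sketch (the lbp default on the base class is an assumption, added only so the snippet runs by itself) showing that re-registering a symbol reuses the existing class and only ever raises its binding power:

```python
# Standalone copy of the token-class factory: one class per token id,
# created on first registration; later registrations only raise lbp.

class symbol_base(object):
    id = None                       # node/token type name
    value = None                    # used by literals
    first = second = third = None   # used by tree nodes
    lbp = 0                         # default binding power (assumption)

symbol_table = {}

def symbol(id, bp=0):
    try:
        s = symbol_table[id]
    except KeyError:
        class s(symbol_base):
            pass
        s.__name__ = "symbol-" + id   # nicer name when debugging
        s.id = id
        s.lbp = bp
        symbol_table[id] = s
    else:
        s.lbp = max(bp, s.lbp)        # keep the strongest binding power seen
    return s

plus = symbol("+", 10)
print(symbol("+") is plus)   # True: the same class object is reused
print(symbol("+", 5).lbp)    # 10: a weaker bp does not lower it
print(symbol("+", 30).lbp)   # 30: a stronger bp raises it
```

This "only raise, never lower" rule is what lets different parts of a grammar definition register the same symbol independently without clobbering each other.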
If the function is called for a symbol that’s already registered, it just updates the binding power; this allows us to define different parts of the symbol’s behaviour in different places, as we’ll see later. We can now populate the registry with the symbols we’re going to use:

    symbol("(literal)")
    symbol("+", 10); symbol("-", 10)
    symbol("*", 20); symbol("/", 20)
    symbol("**", 30)
    symbol("(end)")

To simplify dispatching, we’re using the token strings as identifiers; the identifiers for the “(literal)” and “(end)” symbols (which replace the literal_token and end_token classes used earlier) are strings that won’t appear as ordinary tokens.

We also need to update the tokenizer, to make it use classes from the registry:

    def tokenize(program):
        for number, operator in token_pat.findall(program):
            if number:
                symbol = symbol_table["(literal)"]
                s = symbol()
                s.value = number
                yield s
            else:
                symbol = symbol_table.get(operator)
                if not symbol:
                    raise SyntaxError("Unknown operator")
                yield symbol()
        symbol = symbol_table["(end)"]
        yield symbol()

Like before, the literal class is used as a common class for all literal values. All other tokens have their own classes.

Now, all that’s left is to define “nud” and “led” methods for the symbols that need additional behavior. To do that, we can define them as ordinary functions, and then simply plug them into the symbol classes, one by one. For example, here’s the “led” method for addition:

    def led(self, left):
        self.first = left
        self.second = expression(10)
        return self

    symbol("+").led = led

That last line fetches the class from the symbol registry, and adds the function to it.
Here are a few more “led” methods:

    def led(self, left):
        self.first = left
        self.second = expression(10)
        return self

    symbol("-").led = led

    def led(self, left):
        self.first = left
        self.second = expression(20)
        return self

    symbol("*").led = led

    def led(self, left):
        self.first = left
        self.second = expression(20)
        return self

    symbol("/").led = led

They do look pretty similar, don’t they? The only thing that differs is the binding power, so we can simplify things quite a bit by moving the repeated code into a helper function:

    def infix(id, bp):
        def led(self, left):
            self.first = left
            self.second = expression(bp)
            return self
        symbol(id, bp).led = led

Given this helper, we can now replace the “led” functions above with four simple calls:

    infix("+", 10); infix("-", 10)
    infix("*", 20); infix("/", 20)

Likewise, we can provide helper functions for the “nud” methods, and for operators with right associativity:

    def prefix(id, bp):
        def nud(self):
            self.first = expression(bp)
            self.second = None
            return self
        symbol(id).nud = nud

    prefix("+", 100); prefix("-", 100)

    def infix_r(id, bp):
        def led(self, left):
            self.first = left
            self.second = expression(bp-1)
            return self
        symbol(id, bp).led = led

    infix_r("**", 30)

Finally, the literal symbol must be fitted with a “nud” method that returns the symbol itself.
We can use a plain lambda for this:

symbol("(literal)").nud = lambda self: self

Note that most of the above is general-purpose plumbing; given the helper functions, the actual parser definition boils down to the following six lines:

infix("+", 10); infix("-", 10)
infix("*", 20); infix("/", 20)
infix_r("**", 30)
prefix("+", 100); prefix("-", 100)
symbol("(literal)").nud = lambda self: self
symbol("(end)")

Running this produces the same result as before:

>>> parse("1")
(literal 1)
>>> parse("+1")
(+ (literal 1))
>>> parse("1+2")
(+ (literal 1) (literal 2))
>>> parse("1+2+3")
(+ (+ (literal 1) (literal 2)) (literal 3))
>>> parse("1+2*3")
(+ (literal 1) (* (literal 2) (literal 3)))
>>> parse("1*2+3")
(+ (* (literal 1) (literal 2)) (literal 3))

Parsing Python Expressions

To get a somewhat larger example, let's tweak the parser so it can parse a subset of the Python expression syntax, similar to the syntax shown in the grammar snippet at the start of this article. To do this, we first need a fancier tokenizer.
The obvious choice is to build on Python's tokenize module:

def tokenize_python(program):
    import tokenize
    from cStringIO import StringIO
    type_map = {
        tokenize.NUMBER: "(literal)",
        tokenize.STRING: "(literal)",
        tokenize.OP: "(operator)",
        tokenize.NAME: "(name)",
    }
    for t in tokenize.generate_tokens(StringIO(program).next):
        try:
            yield type_map[t[0]], t[1]
        except KeyError:
            if t[0] == tokenize.ENDMARKER:
                break
            else:
                raise SyntaxError("Syntax error")
    yield "(end)", "(end)"

def tokenize(program):
    for id, value in tokenize_python(program):
        if id == "(literal)":
            symbol = symbol_table[id]
            s = symbol()
            s.value = value
        else:
            # name or operator
            symbol = symbol_table.get(value)
            if symbol:
                s = symbol()
            elif id == "(name)":
                symbol = symbol_table[id]
                s = symbol()
                s.value = value
            else:
                raise SyntaxError("Unknown operator (%r)" % id)
        yield s

This tokenizer is split into two parts: one language-specific parser that turns the source program into a stream of literals, names, and operators, and a second part that turns those into token instances. The latter checks both operators and names against the symbol table (to handle keyword operators), and uses a pseudo-symbol ("(name)") for all other names. You could combine the two tasks into a single function, but the separation makes it a bit easier to test the parser, and also makes it possible to reuse the second part for other syntaxes. We can test the new tokenizer with the old parser definition:

>>> parse("1+2")
(+ (literal 1) (literal 2))
>>> parse("1+2+3")
(+ (+ (literal 1) (literal 2)) (literal 3))
>>> parse("1+2*3")
(+ (literal 1) (* (literal 2) (literal 3)))
>>> parse("1.0*2+3")
(+ (* (literal 1.0) (literal 2)) (literal 3))
>>> parse("'hello'+'world'")
(+ (literal 'hello') (literal 'world'))

The new tokenizer supports more literals, so our parser does that too, without any extra work. And we're still using the 10-line expression implementation we introduced at the beginning of this article.
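Before moving on, it may help to see the machinery built in the first half of the article collected into one self-contained, runnable sketch (Python 3). The regex tokenizer below is a stand-in, assumed to match the article's earlier token_pat-based tokenizer, which is not shown in this excerpt; the class and helper names follow the article:

```python
# Self-contained condensation of the parser developed so far (Python 3).
import re

symbol_table = {}

class symbol_base(object):
    id = None
    value = None
    first = second = None

    def __repr__(self):
        if self.id == "(literal)":
            return "(literal %s)" % self.value
        out = [self.id, self.first, self.second]
        return "(" + " ".join(map(str, filter(None, out))) + ")"

def symbol(id, bp=0):
    # fetch or create the token class for this symbol
    try:
        s = symbol_table[id]
    except KeyError:
        class s(symbol_base):
            pass
        s.__name__ = "symbol-" + id
        s.id = id
        s.lbp = bp
        symbol_table[id] = s
    else:
        s.lbp = max(bp, s.lbp)
    return s

def infix(id, bp):
    def led(self, left):
        self.first = left
        self.second = expression(bp)
        return self
    symbol(id, bp).led = led

def infix_r(id, bp):
    def led(self, left):
        self.first = left
        self.second = expression(bp - 1)   # right associative
        return self
    symbol(id, bp).led = led

def prefix(id, bp):
    def nud(self):
        self.first = expression(bp)
        return self
    symbol(id).nud = nud

infix("+", 10); infix("-", 10)
infix("*", 20); infix("/", 20)
infix_r("**", 30)
prefix("+", 100); prefix("-", 100)
symbol("(literal)").nud = lambda self: self
symbol("(end)")

token_pat = re.compile(r"\s*(?:(\d+)|(\*\*|.))")

def tokenize(program):
    for number, operator in token_pat.findall(program):
        if number:
            s = symbol_table["(literal)"]()
            s.value = number
            yield s
        else:
            yield symbol_table[operator]()
    yield symbol_table["(end)"]()

def expression(rbp=0):
    global token
    t = token
    token = next(token_stream)
    left = t.nud()
    while rbp < token.lbp:
        t = token
        token = next(token_stream)
        left = t.led(left)
    return left

def parse(program):
    global token, token_stream
    token_stream = tokenize(program)
    token = next(token_stream)
    return expression()
```

For example, `parse("1+2*3")` prints as `(+ (literal 1) (* (literal 2) (literal 3)))`, and `parse("2**3**2")` groups to the right.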
The Python Expression Grammar

So, let's do something about the grammar. We could figure out the correct expression grammar from the grammar snippet shown earlier, but there's a more practical description in the section "Evaluation order" in Python's language reference. The table in that section lists all expression operators in precedence order, from lowest to highest. Here are the corresponding definitions (starting at binding power 20):

symbol("lambda", 20)
symbol("if", 20)  # ternary form

infix_r("or", 30); infix_r("and", 40); prefix("not", 50)

infix("in", 60); infix("not", 60)  # in, not in
infix("is", 60)  # is, is not

infix("<", 60); infix("<=", 60)
infix(">", 60); infix(">=", 60)
infix("<>", 60); infix("!=", 60); infix("==", 60)

infix("|", 70); infix("^", 80); infix("&", 90)
infix("<<", 100); infix(">>", 100)
infix("+", 110); infix("-", 110)
infix("*", 120); infix("/", 120); infix("//", 120)
infix("%", 120)
prefix("-", 130); prefix("+", 130); prefix("~", 130)
infix_r("**", 140)
symbol(".", 150); symbol("[", 150); symbol("(", 150)

These 16 lines define the syntax for 35 operators, and also provide behaviour for most of them. However, tokens defined by the symbol helper have no intrinsic behaviour; to make them work, additional code is needed. There are also some intricacies caused by limitations in Python's tokenizer; more about those later. But before we start working on those symbols, we need to add behaviour to the pseudo-tokens too:

symbol("(literal)").nud = lambda self: self
symbol("(name)").nud = lambda self: self
symbol("(end)")

We can now do a quick sanity check:

>>> parse("1+2")
(+ (literal 1) (literal 2))
>>> parse("2<<3")
(<< (literal 2) (literal 3))

Parenthesized Expressions

Let's turn our focus to the remaining symbols, and start with something simple: parenthesized expressions.
They can be implemented by a "nud" method on the "(" token:

def nud(self):
    expr = expression()
    advance(")")
    return expr
symbol("(").nud = nud

The "advance" function used here is a helper function that checks that the current token has a given value, before fetching the next token:

def advance(id=None):
    global token
    if id and token.id != id:
        raise SyntaxError("Expected %r" % id)
    token = next()

The ")" token must be registered; if not, the tokenizer will report it as an invalid token. To register it, just call the symbol function:

symbol(")")

Let's try it out:

>>> parse("1+2*3")
(+ (literal 1) (* (literal 2) (literal 3)))
>>> parse("(1+2)*3")
(* (+ (literal 1) (literal 2)) (literal 3))

Note that the "nud" method returns the inner expression, so the "(" node won't appear in the resulting syntax tree. Also note that we're cheating here, for a moment: the "(" prefix has two meanings in Python; it can either be used for grouping, as above, or to create tuples. We'll fix this below.

Ternary Operators

Most custom methods look more or less exactly like their recursive-descent counterparts, and the code for inline if-else is no different:

def led(self, left):
    self.first = left
    self.second = expression()
    advance("else")
    self.third = expression()
    return self
symbol("if").led = led

Again, we need to register the extra token before we can try it out:

symbol("else")

>>> parse("1 if 2 else 3")
(if (literal 1) (literal 2) (literal 3))

Attribute and Item Lookups

To handle attribute lookups, the "." operator needs a "led" method. For convenience, this version verifies that the period is followed by a proper name token (this check could be made at a later stage as well):

def led(self, left):
    if token.id != "(name)":
        raise SyntaxError("Expected an attribute name.")
    self.first = left
    self.second = token
    advance()
    return self
symbol(".").led = led

>>> parse("foo.bar")
(. (name foo) (name bar))

Item access is similar; just add a "led" method to the "[" operator.
And since "]" is part of the syntax, we need to register that symbol as well:

symbol("]")

def led(self, left):
    self.first = left
    self.second = expression()
    advance("]")
    return self
symbol("[").led = led

>>> parse("'hello'[0]")
([ (literal 'hello') (literal 0))

Note that we're ending up with lots of code of the form:

def led(self, left):
    ...
symbol(id).led = led

which is a bit inconvenient, if nothing else because it violates the "don't repeat yourself" rule (the name of the method appears three times). A simple decorator solves this:

def method(s):
    assert issubclass(s, symbol_base)
    def bind(fn):
        setattr(s, fn.__name__, fn)
    return bind

This decorator picks up the function name, and attaches that to the given symbol. This puts the symbol name before the method definition, and only requires you to write the method name once:

@method(symbol(id))
def led(self, left):
    ...

We'll use this in the following examples. The other approach isn't much longer, so you can still use it if you need to target Python 2.3 or older. Just watch out for typos.

Function Calls

A function call consists of an expression followed by a comma-separated expression list, in parentheses. By treating the left parenthesis as a binary operator, parsing this is straightforward:

symbol(")"); symbol(",")

@method(symbol("("))
def led(self, left):
    self.first = left
    self.second = []
    if token.id != ")":
        while 1:
            self.second.append(expression())
            if token.id != ",":
                break
            advance(",")
    advance(")")
    return self

>>> parse("hello(1,2,3)")
(( (name hello) [(literal 1), (literal 2), (literal 3)])

This is a bit simplified; keyword arguments and the "*" and "**" forms are not supported by this version. To handle keyword arguments, look for an "=" after the first expression, and if that's found, check that the subtree is a plain name, and then call expression again to get the default value.
The other forms could be handled by "nud" methods on the corresponding operators, but it's probably easier to handle these too in this method.

Lambdas

Lambdas are also quite simple. Since the "lambda" keyword is a prefix operator, we'll implement it using a "nud" method:

symbol(":")

@method(symbol("lambda"))
def nud(self):
    self.first = []
    if token.id != ":":
        argument_list(self.first)
    advance(":")
    self.second = expression()
    return self

def argument_list(list):
    while 1:
        if token.id != "(name)":
            raise SyntaxError("Expected an argument name.")
        list.append(token)
        advance()
        if token.id != ",":
            break
        advance(",")

>>> parse("lambda a, b, c: a+b+c")
(lambda [(name a), (name b), (name c)] (+ (+ (name a) (name b)) (name c)))

Again, the argument list parsing is a bit simplified; it doesn't handle default values and the "*" and "**" forms. See above for implementation hints. Also note that there's no scope handling at the parser level in this implementation. See Crockford's article for more on that topic.
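The @method decorator pattern used in the last few sections can also be exercised on its own. Here is a minimal re-creation with a dummy symbol class; the `return fn` line is an addition in this sketch so the module-level name stays bound (the article's version omits it, since the decorated name is never reused):

```python
class symbol_base(object):
    pass

def method(s):
    # attach the decorated function to class s under the function's own name
    assert issubclass(s, symbol_base)
    def bind(fn):
        setattr(s, fn.__name__, fn)
        return fn
    return bind

class plus(symbol_base):   # stand-in for a class created by symbol("+")
    pass

@method(plus)
def led(self, left):
    return ("+", left)
```

After this runs, `plus` instances have a `led` method, so `plus().led(42)` returns `("+", 42)`.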
Constants

Constants can be handled as literals; the following "nud" method changes the token instance to a literal node, and inserts the token itself as the literal value:

def constant(id):
    @method(symbol(id))
    def nud(self):
        self.id = "(literal)"
        self.value = id
        return self

constant("None")
constant("True")
constant("False")

>>> parse("1 is None")
(is (literal 1) (literal None))
>>> parse("True or False")
(or (literal True) (literal False))

Multi-Token Operators

Python has two multi-token operators, "is not" and "not in", but our parser doesn't quite treat them correctly:

>>> parse("1 is not 2")
(is (literal 1) (not (literal 2)))

The problem is that the standard tokenize module doesn't understand this syntax, so it happily returns these operators as two separate tokens:

>>> list(tokenize("1 is not 2"))
[(literal 1), (is), (not), (literal 2), ((end))]

In other words, "1 is not 2" is handled as "1 is (not 2)", which isn't the same thing:

>>> 1 is not 2
True
>>> 1 is (not 2)
False

One way to fix this is to tweak the tokenizer (e.g. by inserting a combining filter between the raw Python parser and the token instance factory), but it's probably easier to fix this with custom "led" methods on the "is" and "not" operators:

@method(symbol("not"))
def led(self, left):
    if token.id != "in":
        raise SyntaxError("Invalid syntax")
    advance()
    self.id = "not in"
    self.first = left
    self.second = expression(60)
    return self

@method(symbol("is"))
def led(self, left):
    if token.id == "not":
        advance()
        self.id = "is not"
    self.first = left
    self.second = expression(60)
    return self

>>> parse("1 in 2")
(in (literal 1) (literal 2))
>>> parse("1 not in 2")
(not in (literal 1) (literal 2))
>>> parse("1 is 2")
(is (literal 1) (literal 2))
>>> parse("1 is not 2")
(is not (literal 1) (literal 2))

This means that the "not" operator handles both unary "not" and binary "not in".
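The combining-filter alternative mentioned above can be sketched independently of the parser. This hypothetical filter sits between tokenize_python and the token factory, merging adjacent raw (id, value) pairs; the pair format is assumed to match the one produced by tokenize_python earlier:

```python
def combine_multi_token_operators(stream):
    # Merge ("is", "not") -> "is not" and ("not", "in") -> "not in"
    # in a stream of (id, value) pairs. A one-token lookbehind buffer
    # is enough, since both operators are exactly two words long.
    pending = None
    for id, value in stream:
        if pending is not None:
            if pending == "is" and (id, value) == ("(operator)", "not"):
                yield "(operator)", "is not"
                pending = None
                continue
            if pending == "not" and (id, value) == ("(operator)", "in"):
                yield "(operator)", "not in"
                pending = None
                continue
            yield "(operator)", pending   # buffered token stands alone
            pending = None
        if id == "(operator)" and value in ("is", "not"):
            pending = value               # may start a two-word operator
        else:
            yield id, value
    if pending is not None:
        yield "(operator)", pending
```

With this in place, "1 is not 2" reaches the token factory as a single "is not" operator, and the custom "led" methods above become unnecessary for that case.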
Tuples, Lists, and Dictionary Displays

As noted above, the "(" prefix serves two purposes in Python; it's used for grouping, and to create tuples (it's also used as a binary operator, for function calls). To handle tuples, we need to replace the "nud" method with a version that can distinguish between tuples and a plain parenthesized expression. Python's tuple-forming rules are simple: if a pair of parentheses is empty, or contains at least one comma, it's a tuple; otherwise, it's an expression. Or in other words:

- () is a tuple
- (1) is a parenthesized expression
- (1,) is a tuple
- (1, 2) is a tuple

Here's a "nud" replacement that implements these rules:

@method(symbol("("))
def nud(self):
    self.first = []
    comma = False
    if token.id != ")":
        while 1:
            if token.id == ")":
                break
            self.first.append(expression())
            if token.id != ",":
                break
            comma = True
            advance(",")
    advance(")")
    if not self.first or comma:
        return self  # tuple
    else:
        return self.first[0]

>>> parse("()")
(()
>>> parse("(1)")
(literal 1)
>>> parse("(1,)")
(( [(literal 1)])
>>> parse("(1, 2)")
(( [(literal 1), (literal 2)])

Lists and dictionaries are a bit simpler; they're just plain lists of expressions or expression pairs. Don't forget to register the extra tokens.
symbol("]")

@method(symbol("["))
def nud(self):
    self.first = []
    if token.id != "]":
        while 1:
            if token.id == "]":
                break
            self.first.append(expression())
            if token.id != ",":
                break
            advance(",")
    advance("]")
    return self

>>> parse("[1, 2, 3]")
([ [(literal 1), (literal 2), (literal 3)])

symbol("}"); symbol(":")

@method(symbol("{"))
def nud(self):
    self.first = []
    if token.id != "}":
        while 1:
            if token.id == "}":
                break
            self.first.append(expression())
            advance(":")
            self.first.append(expression())
            if token.id != ",":
                break
            advance(",")
    advance("}")
    return self

>>> parse("{1: 'one', 2: 'two'}")
({ [(literal 1), (literal 'one'), (literal 2), (literal 'two')])

Note that Python allows you to use optional trailing commas when creating lists, tuples, and dictionaries; an extra if-statement at the beginning of the collection loop takes care of that case.

Summary

At roughly 250 lines of code (including the entire parser machinery), there are still a few things left to add before we can claim to fully support the Python 2.5 expression syntax, but we've covered a remarkably large part of the syntax with very little work. And as we've seen throughout this article, parsers using this algorithm and implementation approach are readable, easy to extend, and, as we'll see in a moment, surprisingly fast. While this article has focused on expressions, the algorithm can be easily extended for statement-oriented syntaxes. See Crockford's article for one way to do that. All in all, Pratt's parsing algorithm is a great addition to the Python parsing toolbox, and the implementation strategy outlined in this article is a simple way to quickly implement such parsers.

Performance

As we've seen, the parser makes only a few Python calls per token, which means that it should be pretty efficient (or as Pratt put it, "efficient in practice if not in theory").
To test practical performance, I picked a 456-character-long Python expression (about 300 tokens) from the Python FAQ, and parsed it with a number of different tools. Here are some typical results under Python 2.5:

- topdown parse (to abstract syntax tree): 4.0 ms
- built-in parse (to tuple tree): 0.60 ms
- built-in compile (to code object): 0.68 ms
- compiler parse (to abstract syntax tree): 4.8 ms
- compiler compile (to code object): 18 ms

If we tweak the parser to work on a precomputed list of tokens (obtained by running "list(tokenize_python(program))"), the parsing time drops to just under 0.9 ms. In other words, only about one fourth of the time for the full parse is spent on token instance creation, parsing, and tree building; the rest is almost entirely spent in Python's tokenize module. With a faster tokenizer, this algorithm would get within 2x or so of Python's built-in tokenizer/parser. The built-in parse test is in itself quite interesting; it uses Python's internal tokenizer and parser (both of which are written in C), and then uses the parser module (also written in C) to convert the internal syntax tree object to a tuple tree. This is fast, but results in a remarkably undecipherable low-level tree:

>>> parser.st2tuple(parser.expr("1+2"))
(258, (326, (303, (304, (305, (306, (307, (309, (310, (311, (312, (313, (314, (315, (316, (317, (2, '1'))))), (14, '+'), (314, (315, (316, (317, (2, '2')))))))))))))))), (4, ''), (0, ''))

(In this example, 2 means number, 14 means plus, 4 is newline, and 0 is end of program. The 3-digit numbers represent intermediate rules in the Python grammar.)
The compiler parse test uses the parse function from the compiler package instead; this function uses Python's internal tokenizer and parser, and then turns the resulting low-level structure into a much nicer abstract tree:

>>> import compiler
>>> compiler.parse("1+2", "eval")
Expression(Add((Const(1), Const(2))))

This conversion (done in Python) turns out to be more work than parsing the expression with the topdown parser; with the code in this article, we get an abstract tree in about 85% of the time, despite using a really slow tokenizer.

Code Notes

The code in this article uses global variables to hold parser state (the "token" variable and the "next" helper). If you need a thread-safe parser, these should be moved to a context object. This will result in a slight performance hit, but there are some surprising ways to compensate for that, by trading a little memory for performance. More on that in a later article. All code for the interpreters and translators shown in this article is included in the article itself. Assorted code samples are also available from:
http://www.effbot.org/zone/simple-top-down-parsing.htm
INSERTION SORT:

Definition: An insertion sort is one that sorts a set of records by inserting records into an existing sorted file.

• Based on the technique used by card players to arrange a hand of cards
– Player keeps the cards that have been picked up so far in sorted order
– When the player picks up a new card, he makes room for the new card and then inserts it in its proper place

Algorithm:

#include <stdio.h>
#include <conio.h>

void main()
{
    int n, array[1000], i, j, k, t;

    printf("Enter number of elements\n");
    scanf("%d", &n);
    printf("Enter %d integers\n", n);
    for (i = 0; i < n; i++) {
        scanf("%d", &array[i]);
    }
    for (j = 1; j < n; j++) {
        k = j;
        while (k > 0 && array[k] < array[k-1]) {
            t = array[k];
            array[k] = array[k-1];
            array[k-1] = t;
            k--;
        }
    }
    printf("Sorted list in ascending order:\n");
    for (i = 0; i < n; i++) {
        printf("%d\n", array[i]);
    }
    getch();
}

Execution with example: Initially x[0] may be thought of as a sorted file; after each repetition, elements from x[0] to x[k] are ordered. By moving all elements greater than y to the right, we can insert y in the correct position.

– The algorithm sees that 8 is smaller than 34, so it swaps.
• 8 34 64 51 32 21
– 51 is smaller than 64, so they swap.
• 8 34 51 64 32 21
– The algorithm sees 32 as another smaller number and moves it to its appropriate location between 8 and 34.
• 8 32 34 51 64 21
– The algorithm sees 21 as another smaller number and moves it between 8 and 32.
• Final sorted numbers:
• 8 21 32 34 51 64

Efficiency Analysis: The algorithm needs n passes over n elements, with up to n comparisons in each pass, so the efficiency of the algorithm is O(n²).

Copyright © 2018-2020 BrainKart.com; All Rights Reserved. Developed by Therithal info, Chennai.
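For comparison, the same swap-based algorithm can be transcribed into Python as a direct translation of the C routine above, sorting a list in place:

```python
def insertion_sort(a):
    """Sort list a in place using pairwise swaps, as in the C version."""
    for j in range(1, len(a)):
        k = j
        # Shift a[j] left until the element before it is not larger
        while k > 0 and a[k] < a[k - 1]:
            a[k], a[k - 1] = a[k - 1], a[k]
            k -= 1
    return a
```

Calling `insertion_sort([34, 8, 64, 51, 32, 21])` reproduces the worked example, returning `[8, 21, 32, 34, 51, 64]`.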
https://www.brainkart.com/article/Insertion-sort_6983/
A class to perform some of the functions of a Radial Basis Function Network. More...

#include "mbl_rbf_network.h"
#include <vcl_cstdlib.h>
#include <vcl_cassert.h>
#include <vsl/vsl_indent.h>
#include <mbl/mbl_stats_1d.h>
#include <vnl/algo/vnl_svd.h>
#include <mbl/mbl_matxvec.h>
#include <vnl/io/vnl_io_vector.h>
#include <vsl/vsl_vector_io.h>

Go to the source code of this file.

A class to perform some of the functions of a Radial Basis Function Network. Given a set of n training vectors, x_i (i=0..n-1), a set of internal weights is computed. Given a new vector, x, a vector of weights, w, is computed such that if x = x_i then w(i+1) = 1 and w(j != i+1) = 0. The sum of the weights should always be unity. If x is not equal to any training vector, the vector of weights varies smoothly. This is useful for interpolation purposes. It can also be used to define non-linear transformations between vector spaces. If Y is a matrix of n columns, each corresponding to a vector in a new space which corresponds to one of the original training vectors x_i, then a vector x can be mapped to Yw in the new space. (Note: y-space does not have to have the same dimension as x-space.) This class is equivalent to the basis of thin-plate spline warping. I'm not sure if this is exactly an RBF network in the original definition. I'll check one day.

Definition in file mbl_rbf_network.cxx.

Stream output operator for class reference. Definition at line 279 of file mbl_rbf_network.cxx.
Binary file stream input operator for class reference. Definition at line 270 of file mbl_rbf_network.cxx.
Binary file stream output operator for class reference. Definition at line 261 of file mbl_rbf_network.cxx.
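The weight computation described above can be illustrated numerically. The sketch below is a 1-D, pure-Python illustration of the idea (Gaussian basis, explicit linear solve, unit-sum normalisation), not the vxl implementation; solving Phi w = phi(x) yields w = e_i whenever x coincides with training point i:

```python
import math

def gauss_solve(A, b):
    # Solve A w = b by Gauss-Jordan elimination with partial pivoting
    # (fine for the small, well-conditioned systems used here).
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        p = M[col][col]
        M[col] = [v / p for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0.0:
                f = M[r][col]
                M[r] = [v - f * pv for v, pv in zip(M[r], M[col])]
    return [M[r][n] for r in range(n)]

def rbf(r):
    return math.exp(-r * r)          # Gaussian radial basis function

def weights(x, train):
    # phi_x[j] = rbf(|x - x_j|); Phi[i][j] = rbf(|x_i - x_j|).
    # At a training point, phi_x equals a column of Phi, so w is one-hot.
    phi_x = [rbf(abs(x - xj)) for xj in train]
    Phi = [[rbf(abs(xi - xj)) for xj in train] for xi in train]
    w = gauss_solve(Phi, phi_x)
    s = sum(w)
    return [wi / s for wi in w]      # normalise so the weights sum to one
```

At a training point the returned weights are (numerically) a one-hot vector; elsewhere they vary smoothly and still sum to one, which is the property the class above exploits for interpolation and for mapping x to Yw.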
http://public.kitware.com/vxl/doc/release/contrib/mul/mbl/html/mbl__rbf__network_8cxx.html
NeXT Computers forum: Apple Hardware

Can a Quadra Run NeXTstep or Openstep?

Rob Blessin Black Hole (Site Admin; joined 05 Sep 2006; Ft. Collins, Colorado)
Posted: Mon Apr 24, 2017 12:22 am

I have a question for the community. With NeXTstep 3.3 being quad fat, how difficult was it to add support for different processors? The reason I ask is that Quadras of the same era had an almost identical architecture to the NeXTstations, including 25 MHz 68040 processors, as proven by Daydream and Darkmatter causing a NeXT to boot up as a Mac. I've often wondered if NeXTSTEP could actually run on a 68040 Quadra with some tweaks; I'm guessing it would probably need NeXT boot ROMs in software, sort of a reverse Darkmatter, and maybe it would have worked. I've also heard rumors there were actually a few NeXTstations floating around that had a 601e or 603 processor upgrade running on them, though I'm not sure if it was socketed directly or on an XLR8 daughter card. If true, I'm guessing NeXTSTEP 3.3 had code to identify a PowerPC chip, as NeXT shut down its plant in February '93. I'm also wondering if any of the hardware guys moved over to Apple.

Rob Blessin, President, computerpowwow ebay, sales@blackholeinc.com, 303-741-9998. Serving the NeXT Community since 2/9/93.

neozeed (joined 15 Apr 2006; Hong Kong)
Posted: Mon Apr 24, 2017 3:33 am

The source has a lot more going on with the m88k; I'm pretty sure the NRW was the direction, and it was a LOT further along than that blank machine suggests, just like those unofficial 'brick' machines at Apple with dual-processor 88k's that apparently boot. Libc, objc, cctools, and cc all have m88k support. That said, could it run? Sure.
Just as a Cisco 7000 RSP could run NeXTSTEP: you don't need a frame buffer; a serial port and RAM would be enough, just as an Amiga 3000 could run it as well. The best 'fit' outside of NeXT was the Atari Falcon, a 68030 with the same DSP to boot! So why wasn't it done? NeXT had a dream of selling hardware, and Quadras simply were too numerous and too cheap compared to NeXT hardware. The NBIC ended up being snake oil, as NS 3 had to go to non-NeXT hardware and ran fine without it. It was political.

Let me add: darkmatter, and many other Mac emulators of the day, injected drivers into MacOS and trapped calls to hardware to emulate stuff. That isn't going to work for NeXTSTEP; a port would be needed, in much the same way they ported it to the i386, SPARC and HPPA... If it were 1994 it'd get a lot of excitement, just like the Darwin stuff, although considering how badly Apple mismanaged Darwin I guess we'd end up here anyway.

# include <wittycomment.h>

cuby (joined 12 Jan 2006; Coburg, Germany)
Posted: Mon Apr 24, 2017 5:43 am

One class of machines that might be able to run early NeXTstep releases is Sun workstations. I remember a picture in one of the books on Apple or NeXT showing a Sun 3 workstation on a developer's desk at NeXT. Hm, time to dig into the NeXTstep 0.8 kernel to see if any support for Sun machines is still in there? One major difference is that Sun 3 machines used Sun's own MMU design (no 68851 was available at the time the first Sun 3 came out; the Mac II had the same problem), with the exception of the 68030-based Sun 3x machines (e.g., Sun 3/80), but these came to market in 1989...

-- Michael

Update: there is a 1991 Usenet thread in comp.sys.next.programmer mentioning that NeXTstep was developed on Sun 3 machines:

t-rexky (joined 09 Jan 2011; Snowy Canada)
Posted: Mon Apr 24, 2017 7:42 am

Hey Rob.
I actually own a mint-condition Quadra 660av that I purchased a few years ago for the sole purpose of installing NetBSD mac68k. This was part of my NEXTSTEP m68k GCC porting effort. After some tinkering I abandoned this idea because even with full source code available for NetBSD, the community was unable to address some of the fundamental issues and limitations on the 68k Macs. For example, NetBSD mac68k can only be started from a running System 7 or System 8 by running a MacOS application. This, I understand, is because no-one has figured out how to properly set up the hardware independently and outside of MacOS. Another example is SCSI: it is so slow in NetBSD that it is crippling. Furthermore, the interrupt load it puts on the CPU causes the clock to drift, resulting in system time slowing by minutes per hour under heavy IO load such as compiling. There is just way too little low-level hardware information on those machines to make them useful for anything but MacOS.

neozeed
Posted: Mon Apr 24, 2017 8:50 am

t-rexky wrote: "For example, NetBSD mac68k can only be started from a running System 7 or System 8 by running a MacOS application."

To be fair, even A/UX needs MacOS to launch it. Unfortunately you have the AV variation, so you can't compare A/UX to NetBSD.

t-rexky
Posted: Mon Apr 24, 2017 9:30 am

neozeed wrote: "To be fair, even A/UX needs MacOS to launch it. Unfortunately you have the AV variation, so you can't compare A/UX to NetBSD."

That I did not know... Yes, unfortunately the m68k Mac hardware is getting more and more difficult to find. When my machine turned up on local Kijiji I could not resist purchasing it despite some of its limitations. It was from the original owner, with all the parts and boxes of software.
Everything in absolutely mint condition and not even any signs of yellowing. I of course re-built the power supply and the main board with new electrolytics, so it should be good for another 10 to 20 years.

neozeed
Posted: Thu Jun 01, 2017 10:20 am

Actually, looking at MachTen, it sure could have. I'm more surprised MachTen didn't get more people on the platform, as it really is Mach 2.5 + 4.3 BSD on MacOS. And the kicker is that it will run without an MMU. But yeah, as proved with all the ports, there honestly wasn't anything special about the black hardware, in that NS could run on anything else. I emailed them asking if they were willing to license or sell the product... no reply.

mouser (joined 17 Apr 2018)
Posted: Thu Apr 19, 2018 6:56 pm

Thread revival! I worked on a Quadra 700 back in '91 when I worked for Ultimate Technographics. They were awesome machines. One Phillips screw to remove the cover, another one to remove the disk tower and release all internal parts. It was a great design, good form factor. Awesome Mac performance for the time. There aren't many Macs of that era I'm fond of, despite having worked on them all my life. But the Quadra 700 surely is in my favorites from the 90s. Strangely, I don't have one in my collection.
http://www.nextcomputers.org/forums/viewtopic.php?p=23934&sid=7c7f8668abf06bfe020ffee301fa81da
RL-ARM User's Guide (MDK v4)

#include <rl_usb.h>

BOOL usbh_msc_read (
  U8  ctrl,       // USB Host Controller index
  U8  dev_idx,    // Device instance index
  U32 blk_adr,    // Starting block address
  U8 *ptr_data,   // Pointer to data location
  U16 blk_num );  // Number of blocks to be read

The function usbh_msc_read reads data from a mass storage device. The argument ctrl is the index of the USB Host Controller. The argument dev_idx is the index of the device instance. The argument blk_adr is the address of the starting block to be read. The argument ptr_data is a pointer indicating the location where data will be read. The argument blk_num is a value indicating the number of blocks to be read. The function is part of the RL-USB Host Class Driver software layer.

See also: usbh_msc_get_last_error.
https://www.keil.com/support/man/docs/rlarm/rlarm_usbh_msc_read.htm
Introduction

In this tutorial we will see some of the extenders provided by the AJAX Toolkit that enhance the functionality of the TextBox control. Both ways of adding extenders are covered: in the markup source and at design time. We will discuss the following textbox extenders.

1. Watermark Extender

A watermark is non-editable text that usually serves as guidance. Suppose you are typing your data (e.g. first name, last name, etc.) into a text box; watermark text can guide you through the process. The following image displays the watermark in the text box for searching. As soon as you click inside the textbox to type, the watermark becomes invisible. The following steps will help you to use the watermark extender for the textbox. The markup source for the extender is as follows:

<asp:TextBox></asp:TextBox>
<cc1:TextBoxWatermarkExtender></cc1:TextBoxWatermarkExtender>

2. Filtered TextBox

While developing an application we are concerned about user input, since the user can type anything. When accepting a search term, a query, or other data, we need to filter it. We can extend our textbox to filter the user input. The available filter type options are as follows:

The basic idea behind the filtered text box is that the user cannot type invalid characters into the textbox. The following procedure is followed for achieving filtering. The markup source for the extender is as follows:

<asp:TextBox></asp:TextBox>
<cc1:FilteredTextBoxExtender></cc1:FilteredTextBoxExtender>

3. Masked Edit TextBox and Masked Edit Validator

In some cases you will want a textbox that takes only numbers as input and displays them as a valid date, a valid number, or even a time. Here is the solution: the masked edit extender provides a textbox that can format user input into a valid number, date, or time.
We can mask a particular textbox for the following mask types:

Consider the following figures, which display date and number as masked types. We can achieve a masked edit textbox by following the steps below. The markup source for the extender is as follows:

<asp:TextBox></asp:TextBox>
<cc1:MaskedEditExtender
<cc1:MaskedEditValidator

4. Calendar Extender

This is an extender which shows a calendar control dynamically and provides the date when it is chosen. We can use a particular textbox to show the calendar dynamically. The following figure describes our requirement. The following steps are to be followed to achieve the above. The markup source for the extender is shown below:

<asp:TextBox
<cc1:CalendarExtender
<asp:Image

5. Validator Callout Extender

This extender is not exactly related to the textbox; it is related to the Required Field Validator control. You may remember that when the Required Field Validator shows the error message, it shows the message as it is labeled. By adding this extender we can display the message as a callout. The figure below describes the whole picture of using this. Looks great, doesn't it? To use this extender you have to follow the steps below. The markup source for the extender is shown below:

<asp:RequiredFieldValidator
<cc1:ValidatorCalloutExtender

6. Auto Complete Extender

This is an extender which helps the user by offering suggestions. You might have visited Google's suggestion page, which looks like the following figure. This is used along with a web service, which returns the words related to the search text. We will use a simple version of this. The following figure shows the basic idea of using the extender. To make the AutoComplete extender work for our textbox we need a Web Service in which the logic for the word suggestion is written. We will use a simple logic. The following code is for the Web Service.
[WebService(Namespace = "")]
[WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
[System.Web.Script.Services.ScriptService]
public class AutoComplete : WebService
{
    public AutoComplete()
    {
    }

    [WebMethod]
    public string[] GetCompletionList(string prefixText, int count)
    {
        if (count == 0)
        {
            count = 10;
        }
        // ... build and return up to 'count' suggestions for prefixText ...
    }
}

Now to add this extender to the textbox you need to follow the steps below. The markup source for the above extender is as follows:

<asp:TextBox
<cc1:AutoCompleteExtender

7. Password Strength Extender

This is an excellent extender available for the textbox. Usually we are not sure whether a password is strong or not. AJAX provides an extender for this, through which we can indicate the strength of the password and configure how many characters are required. The following figure displays a sample of the password strength extender at work. To use this extender in your textbox you need to follow the steps below. The markup source for the extender is as follows:

<asp:TextBox
<cc1:PasswordStrength

©2015 C# Corner. All contents are copyright of their authors.
http://www.c-sharpcorner.com/uploadfile/dpatra/using-ajax-textbox-extenders/