text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
As I wrote about a while ago, quantifiers such as 'any' and 'all' are not supported in WCF Data Services. One way to work around this if you have some specific need is to create a service operation, which is what we'll walk through today. The ScenarioI'm going to write a small WCF Data Service from scratch to keep a catalog of songs and their genre. Now, I'm a firm believer that you can't always classify a song in a single genre; sometimes it'll be a mix of things, and presumably your catalog should reflect this rather than force you to pick one genre over another. The Basic ServerSo we'll start with some class declarations in an ASP.NET Web Application project to which I've added a data service item. public class Song{ public Song() { } public Song(int id, string name, params Genre[] genres) { this.ID = id; this.Name = name; this.Genres = genres.ToList(); } public int ID { get; set; } public string Name { get; set; } public List<Genre> Genres { get; set; }}public class Genre{ public Genre() {} public Genre(int id, string name, string description) { this.ID = id; this.Name = name; this.Description = description; } public int ID { get; set; } public string Name { get; set; } public string Description { get; set; }} These are some simple placeholders for data with some specialized constructors for convenience. Let's use these to build our data context. public class SongGenreContext{ private static List<Song> songs; private static List<Genre> genres; static SongGenreContext() { Genre rock = new Genre(1, "Rock", "Makes you move in alternating directions."); Genre jazz = new Genre(2, "Smooth Jazz", "Like regular jazz, but without the bumps."); Genre blues = new Genre(3, "Blues", "The eternal enemies of Reds."); genres = new List<Genre>() { rock, jazz, blues }; songs = new List<Song>() { new Song(1, "Rocking Blues of Love", rock, blues), new Song(2, "Sax in the Rain", jazz, blues), new Song(3, "Bluer than blue", blues), new Song(4, "Rocker than rock", rock), }; } public IQueryable<Genre> Genres { get { return genres.AsQueryable(); } } public IQueryable<Song> Songs { get { return songs.AsQueryable(); } }} Now I can fill in that data service class that was created from the template item. public class SongGenreService : DataService<SongGenreContext>{; config.UseVerboseErrors = true; }} The Basic ClientTo try this out, I'm writing a console application that has a reference to the server, connects and prints out the results of a query. var serviceRoot = new Uri("");var service = new SongGenreContext(serviceRoot);var q = from s in service.Songs select s;foreach (var s in q) Console.WriteLine(s.Name); Not bad for a few straightforward lines of code, but now what happens if we want all the songs that belong to a few specific genres?Creating the Service OperationTo pull this trick off, we're going to write a service operation in the SongGenreService class. [WebGet]public IQueryable<Song> SongsWithGenre(string tags){ if (String.IsNullOrEmpty(tags)) { return new Song[0].AsQueryable(); } string[] tagsInArray = tags.Split(','); return this.CurrentDataSource.Songs.Where( s => s.Genres.Any(g => tagsInArray.Contains(g.Name)));} This data service takes a parameter 'tags' and breaks it up into separate genre names, assuming these are comma-delimited. It then filters the songs to only those that have a genre with a name that is contained within the ones the client provided. Note that the result is IQueryable<Song>, so the client is free to do additional querying over these results. Using the Service OperationWhat I like to do on the client is make use of the fact that the code generated by adding a reference declares the class as partial, so even though it doesn't add support for service operations, we can do it ourselves. So in the main file, I can add the following code: public partial class SongGenreContext{ public IQueryable<Song> SongsWithGenre(params string[] tags) { if (tags == null || tags.Length == 0) { throw new InvalidOperationException("no tags specified"); } string optionValue = string.Join(",", tags); optionValue = "'" + optionValue.Replace("'", "''") + "'"; return this.CreateQuery<Song>("SongsWithGenre") .AddQueryOption("tags", optionValue); }} Now we can change the code to query only for songs that have a Rock or Smooth Jazz genre associated (or both, for that matter), and we can do further composition like filtering out those that include 'Love', because that's just how we feel like today. var q = from s in service.SongsWithGenre("Rock", "Smooth Jazz") where !s.Name.Contains("Love") select s;foreach (var s in q) Console.WriteLine(s.Name); This will display the following songs: Sax in the RainRocker than Rock 'Bluer than blue' is filtered out by the service operation, because it's a Blues-only song, and 'Rocking Blues of Love' is filter by the additional filter request the client is specifying. Enjoy! PS: For more information on doing 'WHERE IN'-style queries in EF, check out Alex's great post at.
http://blogs.msdn.com/b/marcelolr/archive/2010/05/11/service-operations-for-any-and-all.aspx
CC-MAIN-2015-18
refinedweb
803
63.9
Nothing brings back warm childhood memories like grandma’s chocolate cake recipe. My slight alteration of her original recipe is to add a touch of Python and share it with the world through a static site. By reading this article, you’ll learn how to create a Contentful-powered Flask app. You’ll also learn how to turn that Flask app into a static site using Frozen-Flask—and how to deploy it all to Surge.sh. Why static sites? Static sites are fast, lightweight and can be deployed almost everywhere for little or no money. Using Frozen Flask, a static site generator, to turn your Flask app, with all its inner logic and dependencies, into a static site reduces the need for complex hosting environments. Why Contentful? Contentful is content infrastructure for any digital project. For our chocolate cake app this means that we’ll retrieve the recipe together with the list of ingredients from Contentful’s Content Delivery API (CDA). Creating the chocolate cake content type With Contentful, a content type is similar to a database table in that it defines what kind of data it can hold. We’ll create the content type so that it can hold any recipe. Because truth be told, grandma also made some fantastic pancakes, and I would like to share that recipe too one day. So let’s start by naming our new content type Grandma’s kitchen like so: We can then add different field types to this content type: This recipe we’ll need the following field types: Medium - that will contain the image of our beautiful creation Short text - that will contain the recipe name Long text - that will contain our list of ingredients Long text - that will contain instructions Boolean - to make sure that delicious == true With the added field types our content type will look like so: Adding the ingredients Now that we have our content type set up, we’ll go ahead and add our chocolate cake recipe. Setting up the Flask app With the recipe in place, it’s time put together our Flask app. When a user visits /chocolatecake/, this minimalist app will render the recipe through a the recipe.html template as seen below: from flask import Flask, render_template app = Flask(__name__) @app.route("/chocolatecake/") def cake(): return render_template("recipe.html") But we of course need a way to get our chocolate cake recipe data from the Contentful CDN and pass that data to the render_template function as a variable….. Getting the data from Contentful into Flask To pull data from Contentful into Flask, you will need an access token to authorize your API calls. Note that the access token is personal so you need to generate your own to get things to work. While we can interact with Contentful’s endpoints using bare HTTP calls, the Python SDK makes dealing with response objects easier. Run pip install contentful to install it. We also need to install support for rendering the Markdown-formatted parts of the response: pip install Flask-Markdown Now we’ll create a method in our Flask app that does the following: Connects to Contentful using our unique access token Grabs the content of our chocolate cake recipe Handles the JSON response from Contentful Sends the data to be rendered to our template def getRecipe(): SPACE_ID = '1476xanqlrah' ENTRY_ID = '4kgJZqf18AYgYiyYkgaMy0' ACEESS_TOKEN = '457ba5e5af020499b5b8e7c22ae5da0ffeaf314028e28a6b0bdba4f28e35222c' client = Client(SPACE_ID, ACEESS_TOKEN) entry = client.entry(ENTRY_ID) imageURL = entry.image.url() recipeName = entry.recipe_name listOfIngredients = entry.list_of_ingredients instructions = entry.instructions isDelicious = entry.is_delicious return { 'imageURL': 'https:{0}'.format(imageURL), 'recipeName': recipeName, 'listOfIngredients': listOfIngredients, 'instructions': instructions, 'isDelicious': isDelicious } And to send our dictionary data structure of recipe data for rendering by the template we modify the /chocolatecake/ route to look like so: @app.route("/chocolatecake/") def cake(): recipe = getRecipe() return render_template("recipe.html", recipe=recipe) The template that we’ll render, recipe.html, has the following content: <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>Grandma's Chocolate Cake</title> <link rel="stylesheet" href="{{ url_for('static', filename='style.css') }}"> </head> <body> <div> <h2>{{ recipe.get("recipeName")}}</h2> </div> <div> <img src="{{ recipe.get(" imageURL ")}}"> </div> <div> Is delicious: {{ recipe.get("isDelicious")}} </div> <div> {{ recipe.get("listOfIngredients") | markdown }} </div> <div> {{ recipe.get("instructions") | markdown }} </div> </body> </html> We now have a working Flask app running locally. This app grabs its content from Contentful’s CDN using an API call, and then renders a page like this: We could stop here and just deploy our creation to Heroku or any other platform that can run Flask apps. But we want more—and we want it to be static. Adding Frozen-Flask To turn our Contentful-powered Flask app into a static site, we’ll be using Frozen-Flask. Install it using pip: pip install Frozen-Flask Creating the static site We’ll create file called freeze.py with the following content: from flask_frozen import Freezer from app import app freezer = Freezer(app) if __name__ == '__main__': freezer.freeze() After running freeze.py, we have a build directory containing: ├── chocolatecake └── static └── style.css This is exactly what we want—HTML and styling. So let’s ship it! Deploying the static site to surge.sh Surge.sh is a single-command web publishing platform. It allows you to publish HTML, CSS, and JavaScript for free—without leaving the command line. In other words: it’s a great platform for our static site. Run npm install --global surge to install the Surge and to create your free account. All we need to deploy our static site is to run the surge command from the build directory: Surge - surge.sh email: robert.svensson@contentful.com token: ***************** project path: /Users/robertsvensson/Code/cake/build/ size: 2 files, 2.9 KB domain: loud-jeans.surge.sh upload: [====================] 100%, eta: 0.0s propagate on CDN: [====================] 100% plan: Free users: robert.svensson@contentful.com IP Address: 48.68.110.122 Success! Project is published and running at loud-jeans.surge.sh When you deploy a site using Surge.sh it generates the URL based on two random words — this time we got the words loud and jeans. So now all that’s left to do is to browse to and make sure that static site version of grandma’s chocolate cake recipe deployed successfully. It sure did 🚢 Summary All it takes to create a static site from your Contentful-powered Flask app is Frozen-Flask. You get the best of both worlds by having both writers and editors working on the app’s content in Contentful, and then generating a static site that can be deployed almost anywhere. This workflow will save you both time and money: Your creative writers get access to a feature-rich editing platform within Contentful, and you get the chance to deploy a fast and lightweight static site for little or no money.
https://www.contentful.com/blog/2018/02/02/chocolate-cake-and-static-sites/
CC-MAIN-2021-25
refinedweb
1,140
64.61
Issues ZF-4872: Add core support for Google Data protocol version 2 Description Add core support for Google Data protocol version 2, including support for the final AtomPub specification as defined in RFC 5023. Changes include: - Support for the new 'GData-Version' HTTP header. - New namespaces for AtomPub (app) and OpenSearch (openSearch) in version 2. - Support for using ETags to implement optimistic concurrency (ETag headers and gd:etag XML elements are recognized, and later used to set 'If-Match' HTTP headers). - Support for setting arbitrary headers on most HTTP-bound operations. Posted by Trevor Johns (tjohns) on 2008-11-10T13:04:52.000+0000 Marking as fixed for the 1.7 release. Posted by Wil Sinclair (wil) on 2008-11-13T14:09:55.000+0000 Changing issues in preparation for the 1.7.0 release.
http://framework.zend.com/issues/browse/ZF-4872?page=com.atlassian.streams.streams-jira-plugin:activity-stream-issue-tab
CC-MAIN-2016-26
refinedweb
135
58.48
killpg - send signal to a process group Synopsis Description Errors Notes Colophon #include <signal.h> int killpg(int pgrp, int sig); Feature Test Macro Requirements for glibc (see feature_test_macros(7)): killpg() sends the signal sig to the process group pgrp. See signal(7) for a list of signals. If pgrp is 0, killpg() sends the signal to the calling processs. On success, zero is returned. On error, -1 is returned, and errno is set appropriately. SVr4, 4.4BSD (the killpg() function call first appeared in 4BSD), POSIX.1-2001.). getpgrp(2), kill(2), signal(2), capabilities(7), credentials(7) This page is part of release 3.44 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at.
http://manpages.sgvulcan.com/killpg.2.php
CC-MAIN-2017-26
refinedweb
127
68.47
Learn C union declaration, initilization, array of unions, typedef, size of union, passing union or union pointer to function, function returning union or union pointer with examples, demonstrations. $ Let's look at how an union can be used actually to save memory space in a real world application. #include "stdio.h" typedef struct Flight { enum { PASSENGER, CARGO } type; union { int npassengers; double tonnages; } cargo; } Flight; void main() { Flight flights[1000]; flights[ 42 ].type = PASSENGER; flights[ 42 ].cargo.npassengers = 150; flights[ 20 ].type = CARGO; flights[ 20 ].cargo.tonnages = 356.78; } In the above program you will use union to identify flight type as PASENGER or CARGO and set the variables accordingly. A flight can either be PASSENGER or CARGO but not both. Also if it is a PASSENGER type it will have no of passengers on board, tonnages variable will not be applicable for it. Similarly if it is a PASSENGER type it will have no of tonnes loaded on the flight, npassengers variable will not be applicable for it. In this situation you will use an union not a struct.
https://www.mbed.in/c/union-in-c/
CC-MAIN-2020-45
refinedweb
180
63.9
02 February 2012 03:55 [Source: ICIS news] SINGAPORE (ICIS)--?xml:namespace> The cause of the power outage is still being investigated and it is not clear when the cracker will resume operations, the source said. A derivative 400,000 tonne/year linear low density polyethylene (LLDPE) unit and a 300,000 tonne/year low density polyethylene (LDPE) facility at the site were shut briefly on 31 January because of the power outage, but are now running smoothly, the source said. PTTGC’s three other crackers at Map Ta Phut that have a combined ethylene nameplate capacity of 1.38m tonnes/year were not affected
http://www.icis.com/Articles/2012/02/02/9528711/thailands-pttgc-shuts-1m-tonneyear-cracker-after-power.html
CC-MAIN-2014-41
refinedweb
105
52.19
Regex Class Assembly: System (in system.dll) The Regex class contains several static (or Shared in Visual Basic) methods that allow you to use a regular expression without explicitly creating a Regex object. In the .NET Framework version 2.0, regular expressions compiled from static method calls are cached, whereas regular expressions compiled from instance method calls are not cached. By default, the regular expression engine caches the 15 most recently used static regular expressions. As a result, in applications that rely extensively on a fixed set of regular expressions to extract, modify, or validate text, you may prefer to call these static methods rather than their corresponding instance methods. Static overloads of the IsMatch, Match, Matches, Replace, and Split methods are available. The following code example illustrates the use of a regular expression to check whether a string has the correct format to represent a currency value. Note the use of enclosing ^ and $ tokens to indicate that the entire string, not just a substring, must match the regular expression. import System.*; import System.Text.RegularExpressions.*; public class Test { public static void main(String[] args) { // Define a regular expression for currency values. Regex rx = new Regex("^-?\\d+(\\.\\d{2})?$"); // Define some test strings. String tests[] = { "-42", "19.99", "0.001", "100 USD" }; // Check each test string against the regular expression. for (int iCtr = 0; iCtr < tests.get_Length(); iCtr++) { String test = (String)tests.get_Item(iCtr); if (rx.IsMatch(test)) { Console.WriteLine("{0} is a currency value.", test); } else { Console.WriteLine("{0} is not a currency value.", test); } } } //main } //Test The following code example illustrates the use of a regular expression to check for repeated occurrences of words within a string. Note the use of the (?<word>) construct to name a group and the use of the (\k<word>) construct to refer to that group later in the expression. import System.*; import System.Text.RegularExpressions.*; public class Test { public static void main(String[] args) { //.", (Int32)matches.get_Count()); // Report on each match. for (int iCtr = 0; iCtr < matches.get_Count(); iCtr++) { Match match = matches.get_Item(iCtr); String word = match.get_Groups().get_Item("word").get_Value(); int index = match.get_Index(); Console.WriteLine("{0} repeated at position {1}", word, (Int32)index); } } //main } //Test System.Text.RegularExpressions.Regex Derived Classes The Regex class is immutable (read-only) and is inherently thread safe. Regex objects can be created on any thread and shared between threads. For more information, see.
https://msdn.microsoft.com/en-us/library/system.text.regularexpressions.regex(v=vs.85).aspx?cs-save-lang=1&cs-lang=vb
CC-MAIN-2017-17
refinedweb
399
52.05
Analyzing selling price of used cars using Python Now-a-days, with the technological advancement, Techniques like Machine Learning, etc are being used on a large scale in many organisations. These models usually work with a set of predefined data-points available in the form of datasets. These datasets contain the past/previous information on a specific domain. Organising these datapoints before it is fed to the model is very important. This is where we use Data Analysis. If the data fed to the machine learning model is not well organised, it gives out false or undesired output. This can cause major losses to the organisation. Hence making use of proper data analysis is very important. About Dataset: The data that we are going to use in this example is about cars. Specifically containing various information datapoints about the used cars, like their price, color, etc. Here we need to understand that simply collecting data isn’t enough. Raw data isn’t useful. Here data analysis plays a vital role in unlocking the information that we require and to gain new insights into this raw data. Consider this scenario, our friend, Otis, wants to sell his car. But he doesn’t know how much should he sell his car for! He wants to maximize the profit but he also wants it to be sold for a reasonable price for someone who would want to own it. So here, us, being a data scientist, we can help our friend Otis. Let’s think like data scientists and clearly define some of his problems: For example, is there data on the prices of other cars and their characteristics? What features of cars affect their prices? Colour? Brand? Does horsepower also affect the selling price, or perhaps, something else? As a data analyst or data scientist, these are some of the questions we can start thinking about. To answer these questions, we’re going to need some data. But this data is in raw form. Hence we need to analyze it first. The data is available in the form of .csv/.data format with us To download the file used in this example click here. The file provided is in the .data format. Follow the below process for converting a .data file to .csv file. Process to convert .data file to .csv: - open MS Excel - Go to DATA - Select From text - Check box tick on comas(only) - Save as .csv to your desired location on your pc! Modules needed: - pandas: Pandas is an opensource library that allows you to perform data manipulation in Python. Pandas provide an easy way to create, manipulate and wrangle the data. - numpy: Numpy is the fundamental package for scientific computing with Python. numpycan be used as an efficient multi-dimensional container of generic data. - matplotlib: Matplotlib is a Python 2D plotting library which produces publication quality figures in a variety of formats. - seaborn: Seaborn is a Python data-visualization library that is based on matplotlib. Seaborn provides a high-level interface for drawing attractive and informative statistical graphics. - scipy: Scipy is a Python-based ecosystem of open-source software for mathematics, science, and engineering. Steps for installing these packages: - If you are using anaconda- jupyter/ syder or any other third party softwares to write your python code, make sure to set the path to the “scripts folder” of that software in command prompt of your pc. - Then type – pip install package-name Example: pip install numpy - Then after the installation is done. (Make sure you are connected to the internet!!) Open your IDE, then import those packages. To import, type – import package name Example: import numpy Steps that are used in the following code (Short description): - Import the packages - Set the path to the data file(.csv file) - Find if there are any null data or NaN data in our file. If any, remove them - Perform various data cleaning and data visualisation operations on your data. These steps are illustrated beside each line of code in the form of comments for better understanding, as it would be better to see the code side by side than explaining it entirely here, would be meaningless. - Obtain the result! Lets start analyzing the data. Step 1: Import the modules needed. Step 2: Let’s check the first five entries of dataset. Output: Step 3: Defining headers for our dataset. Output: Step 4: Finding the missing value if any. Output: Step 4: Converting mpg to L/100km and checking the data type of each column. Output: Step 5: Here, price is of object type(string), it should be int or float, so we need to change it Output: Step 6: Normalizing values by using simple feature scaling method examples(do for the rest) and binning- grouping values Output: Step 7: Doing descriptive analysis of data categorical to numerical values. Output: Step 8: Plotting the data according to the price based on engine size. Output: Step 9: Grouping the data according to wheel, body-style and price. Output: Step 10: Using the pivot method and plotting the heatmap according to the data obtained by pivot method Output: Step 11: Obtaining the final result and showing it in the form of a graph. As the slope is increasing in a positive direction, it is a positive linear relationship. Output: Recommended Posts: - Amazon product price tracker using Python - Analyzing Mobile Data Speeds from TRAI with Pandas - Python | Find Hotel Prices using Hotel price comparison API - Python | Convert list to Python array - Python | Merge Python key values to list - Important differences between Python 2.x and Python 3.x with examples - Reading Python File-Like Objects from C | Python -? - SQL using.
https://www.geeksforgeeks.org/analyzing-selling-price-of-used-cars-using-python/
CC-MAIN-2020-05
refinedweb
945
65.32
On Tue, 2009-01-06 at 15:44 -0500, Trond Myklebust wrote:> On Tue, 2009-01-06 at 14:02 -0600, Serge E. Hallyn wrote:> > Quoting Matt Helsley (matthltc@us.ibm.com):> > > We can often specify the UTS namespace to use when starting an RPC client.> > > However sometimes no UTS namespace is available (specifically during system> > > shutdown as the last NFS mount in a container is unmounted) so fall> > > back to the initial UTS namespace.> > > > So what happens if we take this patch and do nothing else?> > > > The only potential problem situation will be rpc requests> > made on behalf of a container in which the last task has> > exited, right? So let's say a container did an nfs mount> > and then exits, causing an nfs umount request.> > > > That umount request will now be sent with the wrong nodename.> > Does that actually cause problems, will the server use the> > nodename to try and determine the client sending the request?> > The NFSv2/v3 umount rpc call will be sent by the 'umount' program from> userspace, not the kernel. The problem here is that because lazy mountsAhh, that's news to me. I thought userspace originated the umount butthe kernel constructed and sent the corresponding RPC call.> exist, the lifetime of the RPC client may be longer than that of the> container. In addition, it may be shared among more than 1 container,> because superblocks can be shared.Right.> One thing you need to be aware of here is that inode dirty data> writebacks may be initiated by completely different processes than the> one that dirtied the inode.> IOW: Aside from being extremely ugly, approaches like [PATCH 4/4] which> rely on being able to determine the container-specific node name at RPC> generation time are therefore going to return incorrect values.Yes, I was aware that the inode might be dirtied by another container. Iwas thinking that, at least in the case of NFS, it makes sense to reportthe node name of the container that did the original mount. Of coursethis doesn't address the general RPC client case and, like patch 3, itmakes the superblock solution rather NFS-specific. That brings me to abasic question: Are there any RPC clients in the kernel that do notoperate on behalf of NFS?Thanks!Cheers, -Matt Helsley
https://lkml.org/lkml/2009/1/6/517
CC-MAIN-2015-18
refinedweb
384
70.84
Threadobject to Executor.execute? Would such an invocation make sense? :: BadThreads.java public class BadThreads { static String message; private static class CorrectorThread extends Thread { public void run() { try { sleep(1000); } catch (InterruptedException e) {} // Key statement 1: message = "Mares do eat oats."; } } public static void main(String args[]) throws InterruptedException { (new CorrectorThread()).start(); message = "Mares do not eat oats."; Thread.sleep(2000); // Key statement 2: System.out.println(message); } } The application should print out "Mares do eat oats." Is it guaranteed to always do this? If not, why not? Would it help to change the parameters of the two invocations of Sleep? How would you guarantee that all changes to message will be visible in the main thread? Dropclass.
http://docs.oracle.com/javase/tutorial/essential/concurrency/QandE/questions.html
CC-MAIN-2016-18
refinedweb
118
61.53
forrest yang a écrit : > i try to load a big file into a dict, which is about 9,000,000 lines, > something like > 1 2 3 4 > 2 2 3 4 > 3 4 5 6 How "like" is it ?-) > code > for line in open(file) > arr=line.strip().split('\t') > dict[arr[0]]=arr > > but, the dict is really slow as i load more data into the memory, Looks like your system is starting to swap. Use 'top' or any other system monitor to check it out. > by > the way the mac i use have 16G memory. > is this cased by the low performace for dict to extend memory dicts are Python's central data type (objects are based on dicts, all non-local namespaces are based on dicts, etc), so you can safely assume they are highly optimized. > or > something other reason. FWIW, a very loose (and partially wrong, cf below) estimation based on wild guesses: assuming an average size of 512 bytes per object (remember that Python doesn't have 'primitive' types), the above would use =~ 22G. Hopefully, CPython does some caching for some values of some immutable types (specifically, small ints and strings that respect the grammar for Python identifiers), so depending on your real data, you might need a bit less RAM. Also, the 512 bytes per object is really more of a wild guess than anything else (but given the internal structure of a CPython object, I think it's about that order - please someone correct me if I'm plain wrong). Anyway: I'm afraid the problem has more to do with your design than with your code or Python's dict implementation itself. > is there any one can provide a better solution Use a DBMS. They are designed - and highly optimised - for fast lookup over huge data sets. My 2 cents.
https://mail.python.org/pipermail/python-list/2009-April/535446.html
CC-MAIN-2018-26
refinedweb
308
67.28
soundwave.py: 22 points Our main goal for this lab is to write a sound generator capable of playing any combination of notes, or tones, we desire. We can think of these tones as objects with their own attributes and methods. For example, a tone can have a duration attribute and a method for combining itself with other tones to create a song! In this part of the lab, we’ll get started by creating a SoundWave class that will represent a sampled sound wave. Your SoundWave class should be written in a separate soundwave.py file, and it should follow the specifications below. Remember to refer back to the Warmup for details on how to generate and sample the sine waves needed to create your tones. The constructor for your SoundWave class will begin with an __init__() function for instantiating new SoundWave objects. We’ll start by designing our constructor to accept four parameters specifying a single note: halftones- the number of halftones above or below middle C of the note duration- the length of the note in seconds amp- the amplitude of the note (i.e., it’s volume or loudness) sample_rate- the interval at which the sine wave is sampled These four parameters define everything we need to generate the sine wave for a given half tone. Recall, however, that the sine wave equation requires a frequency. We can compute the frequency as outlined in the Warmup using: freq = 220*(2**((halftones+3)/12)) Now that the sine wave parameters have been determined, we need to sample the waveform and store those sampled values. Your constructor should create an instance variable called self.samples, and define self.samples to be the list of sampled values. You should populate this list such that it has a length of duration * sample_rate (truncated to an integer), where entry i in this list is created as follows: s = amp * math.sin(2 * math.pi * freq * i/sample_rate) Say you’ve computed a new sample s that you plan on appending to your list samples. A natural way to do this is with the instruction: self.samples.append(s) For the purposes of this lab, we’ll always be using a sample rate of 44100, or 44.1 kHz (this value is dictated by the WAV audio format we’ll be using to save our music). To make our programs cleaner, we’ll want our SoundWave constructor to assign default values when parameters are left unspecified. The default values for halftones, duration, amp, and sample_rate should be 0 (middle C), 0.0 (zero seconds), 1.0 (max volume), and 44100, respectively. Setting Default Values We can set the default value of a function parameter by doing an assignment within the function definition. For example, def my_function(param1, param2=10): # do something sets a default value of 10 to param2 and no default value for param1. By setting the default value of param2, we can call the function with my_function(param1) and Python will automatically set param2 to 10. Note that default values work the same for methods. By having these default values (and the parameters in this order), we allow users of this object to invoke the constructor as:. Before moving on, you should test out your SoundWave class, even though you can’t play the full tone just yet! Below your class definition, write a function called test_samples() which, when called, creates a SoundWave instance s and prints out the first 10 samples of that instance (e.g., s.samples[0:10]). You should look at the output and check that the values seem to be changing in a sensical way as you adjust the parameters passed to the constructor (i.e., getting bigger but less than amp). Your function should also print out the total number of samples in s to confirm you are generating what you intended (i.e., duration * sample_rate, truncated to an int).
https://www.cs.oberlin.edu/~cs150/lab-8/part-1/
CC-MAIN-2022-21
refinedweb
656
61.97
> # ----- Module Initialization ---------- > def fnInitSubroutines(sC,sE,sP): > exec "from " + sC + " import *" # Configuration of test equipment > exec "from " + sE + " import *" # Lists of user Equipment settings > exec "from " + sP + " import *" # Lists of user test Parameters > globals().update(locals()) If you want to stick with the exec-based solution, read the docs about the optional "in <dict>" syntax. It allows you to avoid calling locals() and update(). locals() in particular is potentially quite expensive in some implementations. exec "from foo import *" in globals() Earlier you wrote: >> No they are NOT the names. They are the modules themselves. > That's too bad. It's slightly more pythonic (and avoids the need for the rather advanced "exec") if you pass in the modules themselves instead of names: def fnInitSubroutines(C, E, P): """Pass in modules whose contents are copied into the global namespace.""" for m in (C, E, P): for name,val in m.__dict__.iteritems(): if not n.startswith('__'): globals()[name] = val import myC, myE, myP import subroutines subroutines.init(myC, myE, myP)
http://wingware.com/pipermail/wingide-users/2011-June/009061.html
CC-MAIN-2016-50
refinedweb
170
58.48
Most implementations (windows, posix, ...) have slightly different api for low level input-output functions. These are gathered here and re-represented with a cross platform set of functions. This package provides cross platform api to some of the lower level socket functions available on different platforms. Currently there is only minor support for a few functions on: Include the following at the top of any translation unit which requires this library: #include <ecl/io.hpp> // Cross platform functions using ecl::init_sockets; using ecl::shutdown_sockets; using ecl::poll_sockets; using ecl::close_socket; using ecl::socket_pair; You will also need to link to -lecl_io. A drop in for poll() on windoze is soon to come. Really rough, no tests yet, just some examples.
http://docs.ros.org/en/fuerte/api/ecl_io/html/
CC-MAIN-2021-49
refinedweb
120
57.37
On Fri, Dec 24, 2010 at 05:29:35PM +0200, Damyan Ivanov wrote: > -=| Joel Roth, Wed, Dec 22, 2010 at 01:00:09PM -1000 |=- > > On Mon, Dec 20, 2010 at 02:13:07PM +0200, Gabor Szabo wrote: > > > It would be nice to see this on CPAN to encourage distributions to > > > implement the same. (...) > > Then, what about a name? > > > > CPAN::NativePackageInstaller::Debian > > > > CPAN::Installer::Debian > > > > Debian::CPAN::HybridInstaller > > > > Would there be any benefit to prefixing App:: ? > > Hm, would that be extended to support other distributions than > Debian? If yes, having the generic code (including the detection of > the distribution-specific module to load) in a generic module, and > distribution specific code in corresponding module may be wanted. > > CPAN::HybridInstaller - generic code > CPAN::HybridInstaller::Debian, CPAN::HybridInstaller::Fedora etc. Seems like a suitably inclusive approach. Makes me wonder: Do other distributions/OSs go to as much trouble as Debian in providing native packaging for perl distributions? btw, I found two tools for detecting the OS: Linux::Distribution Pfacter::operatingsystem > > The script might be named 'dcpan', easy to type and remember > > > > > I also wonder if it could be integrated into CPAN.pm (and friends) > > > itself so if I type > > > > > > cpan Module::Name it will install dependencies using apt-get - if > > > possible and using cpan (with local::lib) > > > otherwise. > > > > I think this would work at the top level, where > > the user is not constraining the version number. > > "Plugging" this recursively in cpan or another tool would be the > cherry of the icecream. Top-level is OK, but if you install something > with lots of dependencies the gain will not be that big. Seems worth aiming for, at least to code so that the same library used for a simple top-level script, could also be incorporated into a cpan installer. > > The harder problem is the dependency case you mentioned. > > > > > BTW what happens if a secondary dependency is available as a > > > .deb package ? I mean if I am installing package X which > > > depends on Y which in turn depends on W and only W is in > > > .deb X and Y are not. Will your script notice this situation > > > and install W using apt-get ? > > > > Say Y requires W (>=0.5), but the corresponding Debian > > libw-perl is only 0.4. > > > > It seems we can't reliably extract the upstream version from the > > Debian package metadata. An enhanced CPAN client might > > work around this limitation by: > > > > - installing the Debian package for W > > - seeing if Y's dependency is met > > - falling back to installing W from CPAN. > > It seems to me that this would work reliably, at the expense of the > possibility of unnecessarily installing packaged modules. > I was thinking about the version problem, and I think it is possible > to reach a sufficiently good approach: > > 1) strip epoch (s/^\d+://) > 2) strip revision (s/-[^-]$//) > 3) strip repackaging suffix (s/[+.](?:dfsg|ds)\.?\d*$) That's reasonable. I came up with using the regex from the Perl::Version docs to grab the perlish part of the version. qr/ ( (?i: Revision: \s+ ) | v | ) ( \d+ (?: [.] \d+)* ) ( (?: _ \d+ )? ) /x; > 4) what is left should be suitable for feeding version.pm and then > comparing with the wanted version. If the version of the > package was mangled, this is most likely 1.23 -> 1.2300 > extension, which is irrelevant to version.pm (AIUI). > or > 5) compare what is left with the wanted version using *dpkg* > version comparison functions (available somewhere in the Dpkg:: > namespace). If the version was mangled, the debian-extracted > version should compare greater than the wanted version, which > would work for the purpose of deciding "is the package > sufficiently new". > > Finally, there is a caveat with @INC, that I am not sure how to > address. Thing is, 'cpan' installs packages in /usr/local, and > Debian's perl has that path before /usr in @INC. I guess local::lib > does something similar. The problem is that once you install a module > there, it will take precedencde and will always be used regardles of > the fact that there may be a Debian package installed which has > a greater version. So once you install a module not using the package > manager, you have to either upgrade it when needed or uninstall it and > use the packaged one. Yes I thought of this, too. I just discovered that cpanplus has an uninstall command. Maybe that would help to remove a module in /usr/local or a local::lib when installing a more recent Debian package. Thanks for your suggestions and reviewing the script. Joel -- Joel Roth
https://lists.debian.org/debian-perl/2010/12/msg00082.html
CC-MAIN-2014-15
refinedweb
753
63.7
My AutoFileName plugin should do the first part. It lets you navigate the file structure to insert a filename. If you give me some time, I try adding the width and height features into the plugin (though I don't know how easy this would be, especially cross-platform). Although the main problem with AutoFileName for me is that it is useless when using absolute paths. If you're having a problem with the plugin, could you please file a bug report? There's no way for me to know if people are having issues if no one reports any problems. AutoFileName should work for absolute paths just fine. If it's not working for you could you please give me some more info? Platform? Errors in Console (Control + `)? Sample file you're trying to use it with? Thanks. I'm not sure how to file a bug report, but in any case, it probably isn't a bug. There are no errors in the console. For my setup, I run a local webserver in which the absolute root of my webserver is not necessarily the root of my computer. AutoFileName seems to pickup the root of my computer and there appears to be no way to override that. I'm using:- ST2 2181- Windows 7 Oh okay. Now I understand. Someone else already requested this and its on my todo list. I'll get started on it as soon as I get a change. Thanks for the explanation though. hi, "someone else" here. while C0D312 is working on the proper solution to this "absolute path" issue, i hacked his plugin to give me behavior that works for me and i think will work for you. i found that all i needed to do was to add a couple lines to the code in the plugin. NOTE: the first two lines below are C0D312's code and the second two lines are what i added directly afterwards. if cur_path.startswith(("'","\"","(")): cur_path = cur_path[1:-1] if cur_path.startswith("/"): cur_path = "." + cur_path you can see my conversation with C0D312 about this issue here. BBEdit does this too, it's very helpful. Would be awesome to have this in ST2. i'm playing with writing a plugin for this. if i have success will report back. i could do it for os x only using a command line call to get the image dimensions, but i would like to see first if i can make a cross-platform plugin that everyone can use. in order to do this, as far as i can tell, the package will need to embed the python Image module and i'm not sure how (yet) to go about packaging up a module like that such that it will work for everyone. i'm exploring. i just made a nice discovery. the zen coding plugin already has the ability to update the height and width attributes of img tags for .png, .gif, and .jpg images (and he does it without embedding the python image module). all you have to do is have the cursor inside the img tag and trigger an action. the key sequence is platform dependent and i don't know the others, but on the mac it's ctrl-alt-shift-m. err... looking at the zen coding code, it is deep. and really nicely written. i think it would be too difficult to try to extract the image size functionality and put it into C0D312's plugin because bits and pieces of the functionality are scattered through many functions and modules in zen coding. it might be nice for C0D312 to see if zen coding is available and if so, to auto-trigger the update size action from it if such inter-plugin communication is possible. Nice find, didn't know that.Actually the code is a very simple parser located in: \Sublime Text 2\Packages\ZenCoding\zencoding\utils.py: def get_image_size(stream) You could probably cut and paste it wherever you want, or directly use it from zencoding: from zencoding.utils import get_image_size [quote="bizoo"]Actually the code is a very simple parser located in: [/quote] yeah, but what's a "stream"? then you go further down the rabbit hole when you go looking for zen_file.read() and then you start looking at all the excellent code in zen that has to do with identifying when you are in the img tag and how to create / update the height and width attributes, ...and then... at least if you're me (yes, lazy), you conclude that it might be easier to try to just call the zen coding update_image_size action than try to extract all the relevant code or call the bits and pieces of it from within AutoFileName. but when i started going down that path, i found that there seems to be no call-back or event listener that lets you know when a completion has been inserted into the document, so that path is probably moot, and we're back to trying to extract all the bits and pieces from zen coding. which, you're right i guess, might not be so bad... zen_file.read is just a simple python file.read() I just updated AutoFileName. I didn't add support for image dimensions (hopefully @castles_made_of_sand will update zencoding soon). However, I didn't make a lot of big changes. I put the changes into a beta branch because there are a few things that could cause issues and I don't want to be help responsible. Please, test the beta branch and tell me what you think. working great for me as far as the absolute paths issue goes. thanks for this fix. i noticed a problem with support for .less files as i mentioned in an issue in your repo, but i think it is likely that problem is actually in the sublime-LESS package. in a fork of AutoFileName, i tottered a few steps down the path toward support for image size. i have working code for determining whether the ZenCoding package is installed or not and a conceptual outline for how to leverage ZenCoding's functions to get the height and width of .png, .gif, and .jpg files. but i stalled when i realized the complexity of how it would have to work. since there is [currently] no way to get a callback or event notification for when a completion occurs, you would have to compile the whole tag up front when you are creating the list of completions. that means parsing the tag and all its present attributes and constructing a new version of it for each potential completion (for completions that are image files). i found some code in the ST2 html package in html_completions.py that gives some hints on how this might be done, but i don't have time right now to press further on this. If you look at the source of my plugin, I built my own custom callback for competions. SAY WHAT!?! The plugin uses a combination of macros, keybindings, and on_query_context to intercept completions and call my own custom textCommand on completion (MAGIC!). From there I can just hook a call to the get_image_size form zen. The problem isn't the integration with the plugin. I can do that just fine. The problem: I can't get get_image_size to work. Feeding it a path just causes a bunch of errors. So if you can get that to do anything, let me know. If not, I'll just yell at @castles until he fixes it i didn't look but i think i might know what the problem is -- get_image_size doesn't want a path -- it wants the byte stream of the file. so instead of sending it the path, do a python file.read() and send get_image_size the output. tell me if you don't get what i'm saying or if it doesn't work and i'll play with it. Silly me. I got it working Now I just need to add everything together. BTW. This got me thinking. Perhaps Will Bond should consider adding a way to list dependencies in Package Control. While I'll probably just taking the get_image_size out of zen and putting it in a separate folder, if more developers start wanting to build off one another, it would be awesome to have something a way to do so. It could work kind of like Cydia for jailbroken iPhones; when downloading a plugin, it would say: this plugin requires x and y, install those too? Just added image dimensions to the beta branch of AutoFileName. I'd like some feedback before I add it to the main branch though. Thanks. It's nice to be able to see the image dimensions in the popup. I still don't see the absolute path working as I require. Ideally, one should be able to define any directory as root. Are you using the beta branch? If so, those are options in the sublime-settings file. It should use the root of the current project now. Is that not the case? Or are you looking for something else? Also, changing "afn_proj_root" allows for a custom root.
https://forum.sublimetext.com/t/plugin-to-hlep-with-img-tags-in-html/5457/7
CC-MAIN-2017-43
refinedweb
1,540
72.56
Recall, if you will, the dictum in "The Zen of Python" that "There should be one?and preferably only one?obvious way to do it." As with most dictums, the real world sometimes fails our ideals. Also as with most dictums, this is not necessarily such a bad thing. A discussion on the newsgroup <comp.lang.python> in 2001 posed an apparently rather simple problem. The immediate problem was that one might encounter telephone numbers with a variety of dividers and delimiters inside them. For example, (123) 456-7890, 123-456-7890, or 123/456-7890 might all represent the same telephone number, and all forms might be encountered in textual data sources (such as ones entered by users of a free-form entry field. For purposes of this problem, the canonical form of this number should be 1234567890. The problem mentioned here can be generalized in some natural ways: Maybe we are interested in only some of the characters within a longer text field (in this case, the digits), and the rest is simply filler. So the general problem is how to extract the content out from the filler. The first and "obvious" approach might be a procedural loop through the initial string. One version of this approach might look like: >>>>>>> for c in s: ... if c in '0123456789': ... result = result + c ... >>> result '1234567890' This first approach works fine, but it might seem a bit bulky for what is, after all, basically a single action. And it might also seem odd that you need to loop though character-by-character rather than just transform the whole string. One possibly simpler approach is to use a regular expression. For readers who have skipped to the next chapter, or who know regular expressions already, this approach seems obvious: >>> import re >>>>> re.sub(r'\D', '', s) '1234567890' The actual work done (excluding defining the initial string and importing the re module) is just one short expression. Good enough, but one catch with regular expressions is that they are frequently far slower than basic string operations. This makes no difference for the tiny example presented, but for processing megabytes, it could start to matter. Using a functional style of programming is one way to express the "filter" in question rather tersely, and perhaps more efficiently. For example: >>>>> filter(lambda c:c.isdigit(), s) '1234567890' We also get something short, without needing to use regular expressions. Here is another technique that utilizes string object methods and list comprehensions, and also pins some hopes on the great efficiency of Python dictionaries: >>> isdigit = {'0':1,'1':1,'2':1,'3':1,'4':1, ... '5':1,'6':1,'7':1,'8':1,'9':1}.has_key >>> " .join([x for x in s if isdigit(x)]) '1234567890' The concept of a "digital signature" was introduced in Section 2.2.4. As was mentioned, the Python standard library does not include (directly) any support for digital signatures. One way to characterize a digital signature is as some information that proves or verifies that some other information really is what it purports to be. But this characterization actually applies to a broader set of things than just digital signatures. In cryptology literature one is accustomed to talk about the "threat model" a crypto-system defends against. Let us look at a few. Data may be altered by malicious tampering, but it may also be altered by packet loss, storage-media errors, or by program errors. The threat of accidental damage to data is the easiest threat to defend against. The standard technique is to use a hash of the correct data and send that also. The receiver of the data can simply calculate the hash of the data herself?using the same algorithm?and compare it with the hash sent. A very simple utility like the one below does this: # Calculate CRC32 hash of input files or STDIN # Incremental read for large input sources # Usage: python crc32.py [file1 [file2 [...]]] # or: python crc32.py < STDIN import binascii import fileinput filelist = [] crc = binascii.crc32('') for line in fileinput.input(): if fileinput.isfirstline(): if fileinput.isstdin(): filelist.append('STDIN') else: filelist.append(fileinput.filename()) crc = binascii.crc32(line,crc) print 'Files:', ' '.join(filelist) print 'CRC32:', crc A slightly faster version could use zlib.adler32() instead of binascii.crc32. The chance that a randomly corrupted file would have the right CRC32 hash is approximately (2**-32)?unlikely enough not to worry about most times. A CRC32 hash, however, is far too weak to be used cryptographically. While random data error will almost surely not create a chance hash collision, a malicious tamperer?Mallory, in crypto-parlance?can find one relatively easily. Specifically, suppose the true message is M, Mallory can find an M' such that CRC32(M) equals CRC32(M'). Moreover, even imposing the condition that M' appears plausible as a message to the receiver does not make Mallory's tasks particularly difficult. To thwart fraudulent messages, it is necessary to use a cryptographically strong hash, such as SHA or MD5. Doing so is almost the same utility as above: # Calculate SHA hash of input files or STDIN # Usage: python sha.py [file1 [file2 [...]]] # or: python sha.py < STDIN import sha, fileinput, os, sys filelist = [] sha = sha.sha() for line in fileinput.input(): if fileinput.isfirstline(): if fileinput.isstdin(): filelist.append('STDIN') else: filelist.append(fileinput.filename()) sha.update(line[:-1]+os.linesep) # same as binary read sys.stderr.write('Files: '+' '.join(filelist)+'\nSHA: ') print sha.hexdigest() An SHA or MD5 hash cannot be forged practically, but if our threat model includes a malicious tamperer, we need to worry about whether the hash itself is authentic. Mallory, our tamperer, can produce a false SHA hash that matches her false message. With CRC32 hashes, a very common procedure is to attach the hash to the data message itself?for example, as the first or last line of the data file, or within some wrapper lines. This is called an "in band" or "in channel" transmission. One alternative is "out of band" or "off channel" transmission of cryptographic hashes. For example, a set of cryptographic hashes matching data files could be placed on a Web page. Merely transmitting the hash off channel does not guarantee security, but it does require Mallory to attack both channels effectively. By using encryption, it is possible to transmit a secured hash in channel. The key here is to encrypt the hash and attach that encrypted version. If the hash is appended with some identifying information before the encryption, that can be recovered to prove identity. Otherwise, one could simply include both the hash and its encrypted version. For the encryption of the hash, an asymmetrical encryption algorithm is ideal; however, with the Python standard library, the best we can do is to use the (weak) symmetrical encryption in rotor. For example, we could use the utility below: #!/usr/bin/env python # Encrypt hash on STDIN using sys.argv[1] as password import rotor, sys, binascii cipher = rotor.newrotor(sys.argv[1]) hexhash = sys.stdin.read()[:-1] # no newline print hexhash hash = binascii.unhexlify(hexhash) sys.stderr.write('Encryption: ') print binascii.hexlify(cipher.encrypt(hash)) The utilities could then be used like: % cat mary.txt Mary had a little lamb % python sha.py mary.txt I hash_rotor.py mypassword >> mary.txt Files: mary.txt SHA: Encryption: % cat mary.txt Mary had a little lamb c49bf9a7840f6c07ab00b164413d7958e0945941 63a9d3a2f4493d957397178354f21915cb36f8f8 The penultimate line of the file now has its SHA hash, and the last line has an encryption of the hash. The password used will somehow need to be transmitted securely for the receiver to validate the appended document (obviously, the whole system make more sense with longer and more proprietary documents than in the example). Try implementing an RSA public-key algorithm in Python, and use this to enrich the digital signature system you developed above. Many texts you deal with are loosely structured and prose-like, rather than composed of well-ordered records. For documents of that sort, a very frequent question you want answered is, "What is (or isn't) in the documents?"?at a more general level than the semantic richness you might obtain by actually reading the documents. In particular, you often want to check a large collection of documents to determine the (comparatively) small subset of them that are relevant to a given area of interest. A certain category of questions about document collections has nothing much to do with text processing. For example, to locate all the files modified within a certain time period, and having a certain file size, some basic use of the os.path module suffices. Below is a sample utility to do such a search, which includes some typical argument parsing and help screens. The search itself is only a few lines of code: # Find files matching date and size _usage = """ Usage: python findfilel.py [-start=days_ago] [-end=days_ago] [-small=min_size] [-large=max_size] [pattern] Example: python findfile1.py -start=10 -end=5 -small=1000 -large=5000 *.txt """ import os.path import time import glob import sys def parseargs(args): """Somewhat flexible argument parser for multiple platforms. Switches can start with - or /, keywords can end with = or :. No error checking for bad arguments is performed, however. """ now = time.time() secs_in_day = 60*60*24 start = 0 # start of epoch end = time.time() # right now small = 0 # empty files large = sys.maxint # max file size pat = '*' # match all for arg in args: if arg[0] in '-/': if arg[1:6]=='start': start = now-(secs_in_day*int(arg[7:])) elif arg[1:4]=='end': end = now-(secs_in_day*int(arg[5:])) elif arg[1:6]=='small': small = int(arg[7:]) elif arg[1:6]=='large': large = int(arg[7:]) elif arg[1] in 'h?': print _usage else: pat = arg return (start,end,small,large,pat) if __name__ == '__main__': if len(sys.argv) > 1: (start,end,small,large,pat) = parseargs(sys.argv[1:]) for fname in glob.glob(pat): if not os.path.isfile(fname): continue # don't check directories modtime = os.path.getmtime(fname) size = os.path.getsize(fname) if small <= size <= large and start <= modtime <= end: print time.ctime(modtime),'%8d '%size,fname else: print _usage What about searching for text inside files? The string.find() function is good for locating contents quickly and could be used to search files for contents. But for large document collections, hits may be common. To make sense of search results, ranking the results by number of hits can help. The utility below performs a match-accuracy ranking (for brevity, without the argument parsing of findfile1.py): # Find files that contain a word _usage = "Usage: python findfile.py word" import os.path import glob import sys if len(sys.argv) == 2: search_word = sys.argv[1] results = [] for fname in glob.glob('*'): if os.path.isfile(fname): # don't check directories text = open(fname).read() fsize = len(text) hits = text.count(search_word) density = (fsize > 0) and float(hits)/(fsize) if density > 0: # consider when density==0 results.append((density,fname)) results.sort() results.reverse() print 'RANKING FILENAME' print '------- -------------------- for match in results: print '%6d '%int(match[0] *1000000), match[1] else: print _usage Variations on these are, of course, possible. But generally you could build pretty sophisticated searches and rankings by adding new search options incrementally to findfile2.py. For example, adding some regular expression options could give the utility capabilities similar to the grep utility. The place where a word search program like the one above falls terribly short is in speed of locating documents in very large document collections. Even something as fast, and well optimized, as grep simply takes a while to search a lot of source text. Fortunately, it is possible to shortcut this search time, as well as add some additional capabilities. A technique for rapid searching is to perform a generic search just once (or periodically) and create an index?i.e., database?of those generic search results. Performing a later search need not really search contents, but only check the abstracted and structured index of possible searches. The utility indexer.py is a functional example of such a computed search index. The most current version may be downloaded from the book's Web site < The utility indexer.py allows very rapid searching for the simultaneous occurrence of multiple words within a file. For example, one might want to locate all the document files (or other text sources, such as VARCHAR database fields) that contain the words Python, index, and search. Supposing there are many thousands of candidate documents, searching them on an ad hoc basis could be slow. But indexer.py creates a comparatively compact collection of persistent dictionaries that provide answers to such inquiries. The full source code to indexer.py is worth reading, but most of it deals with a variety of persistence mechanisms and with an object-oriented programming (OOP) framework for reuse. The underlying idea is simple, however. Create three dictionaries based on scanning a collection of documents: *Indexer.fileids: fileid --> filename *Indexer.files: filename --> (fileid, wordcount) *Indexer.words: word --> {fileid1:occurs, fileid2:occurs, ...} The essential mapping is *Indexer.words. For each word, what files does it occur in and how often? The mappings *Indexer.fileids and *Indexer.files are ancillary. The first just allows shorter numeric aliases to be used instead of long filenames in the *Indexer.words mapping (a performance boost and storage saver). The second, *Indexer.files, also holds a total wordcount for each file. This allows a ranking of the importance of different matches. The thought is that a megabyte file with ten occurrences of Python is less focused on the topic of Python than is a kilobyte file with the same ten occurrences. Both generating and utilizing the mappings above is straightforward. To search multiple words, one basically simply needs the intersection of the results of several values of the *Indexer.words dictionary, one value for each word key. Generating the mappings involves incrementing counts in the nested dictionary of *Indexer.words, but is not complicated.
https://etutorials.org/Programming/Python.+Text+processing/Chapter+2.+Basic+String+Operations/2.3+Solving+Problems/
CC-MAIN-2022-21
refinedweb
2,338
59.4
# wrapper rabbit hole and see how it is all possible with some patterns: Proxy, Decorator, and Adapter. Did II. The Proxy When we use our wrapper powers to have wrappers make objects appear where they are not or shield a wrapped object, we are really talking about the GOF Proxy pattern. Let us first look at how our newly discovered wrapper power can stop bullets and give us the ability to travel outside our bodies. The force shield wrapper (protection by proxy) Let's say we have a contract for a person IDude. All objects that implement the IDude interface have a name and can get shot. public interface IDude { string Name { get; set; } void GotShot(string typeOfGun); } Here is a NormalDude. If any NormalDude gets shot, they get hurt. public class NormalDude: IDude private string m_Name = string.Empty; private bool m_IsHurt = false; public string Name { get{ return m_Name;} set{ m_Name = value; } } public void GotShot(string typeOfGun) m_IsHurt = true; Console.WriteLine(m_Name + " got shot by a " + typeOfGun + " gun."); public override string ToString() StringBuilder result = new StringBuilder(); result.Append(m_Name); if(m_IsHurt) { result.Append(" is hurt"); } else { result.Append(" is as healthy as a clam"); } return result.ToString(); And here is a wrapper for our NormalDude: a SuperDude. If a SuperDude gets shot, he acts as a shield for the NormalDude he wraps. Any NormalDude wrapped in a SuperDude can't get hurt by getting shot. This is an example the Proxy pattern because we are limiting access to the wrapped object. public class SuperDude : IDude private IDude m_dude; public SuperDude(IDude dude) m_dude = dude; #region IDude Members get { return "Super" + m_dude.Name; } set { m_dude.Name = value; } result.Append(this.Name).Append(" got shot by a ").Append(typeOfGun); result.Append(" gun but it bounced off!! \nYou can't hurt ").Append(this.Name); result.Append("\n\n"); Console.WriteLine(result.ToString()); // NOTICE: THE GotShot() METHOD WAS NOT CALLED ON THE WRAPPED OBJECT #endregion result.Append(Name).Append(" can't get hurt!"); result.Append(" (").Append(Name).Append(" is a super-hero proxy, you know).\n"); return result.ToString(); Here's what happens when the bad-guys come along: IDude Joe = Here are the results: Projection (interaction by proxy) If use a wrapper to remotely view and interact with things, then it is a remote proxy. Let's look at another example. We can also use a proxy as a stand-in for the real object it is wrapping. Let's say we have an interface that defines perceiving and changing a string. public interface IInteractor void Percieve(string percievedThing); void Change(ref string perceivedThing); And a Me class that implements the interface: class Me: IInteractor private string m_LookingAt; private string m_ChangeTo; public string ChangeTo get { return m_ChangeTo; } set { m_ChangeTo = value; } #region IInteractor Members public void Percieve(string percievedThing) m_LookingAt= percievedThing; public void Change(ref string perceivedThing) Console.WriteLine("I'm changing " + perceivedThing + " to a " + m_ChangeTo); perceivedThing = m_LookingAt = m_ChangeTo; return "I'm looking at a " + m_LookingAt; A house class in which something can be changed: class House private string m_thing = "vase"; public void LookAtThing(IInteractor person) person.Percieve(m_thing); public void ChangeThing(IInteractor person) person.Change(ref m_thing); return "This house has a " + m_thing; And a proxy for the Me class: class MeProxy : IInteractor private Me m_wrappedObject; public MeProxy(Me wrappedObject) m_wrappedObject = wrappedObject; Console.WriteLine("Perception by Proxy"); m_wrappedObject.Percieve(percievedThing); Console.WriteLine("Change by Proxy"); m_wrappedObject.Change(ref perceivedThing); return "I'm an apparition. The real me says:" + m_wrappedObject.ToString(); Here's the story. Joe has a favorite vase at his house. We thought it would be funny to put a duck in place of the vase. When Joe found out, he was pretty angry so we use our secret proxy power to send an apparition into his house and change the duck into a bag of money (so he can buy a new vase). Next article we'll look at the powers a decorator can give us. Until then-Only use your powers for good.
http://www.c-sharpcorner.com/UploadFile/rmcochran/csharp_wrapper202112006124804PM/csharp_wrapper2.aspx
crawl-002
refinedweb
663
58.48
Repository What is the project about? In short - it's a script, that makes NES emulator remote-controlled, and server, that can send commands to the emulator. This project is also related to my CadEditor project - Universal Level Editor project, and powerful tools, needed to explore games, so I also used my fundition.io project tag. Why someone needs it? Several emulators, and Fceux also, can run Lua-scripts for controlling them. But Lua is a bad language for general programming. Basically, it's the simple language for calling C-functions. Authors of emulators use it as a scripting language only for one reason - it's lightweight. Accuracy emulation needs many CPU resources, and scripting is not the primary goal for authors. But now, personal computers have enough resources for NES emulation, so why not used powerful scripting languages like Python or JavaScript for write script for emulators? Unfortunately, there are no mainstream NES emulators, that can be controlled with these languages. I know only about Nintaco, it's also has fceux core, rewritten to Java. So I want to create my own project. It's a proof-of-concept only - it not robust, or fast, but it's working. I created it for myself, but there is the frequent question, how to control emulator externally, so I decided to publish it's sources as is. How it works Lua side Fceux emulator already has several Lua libraries embedded in it. One of them - LuaSocket library. It's not a lot of documentation about it, but I found code snippet in XKeeper Lua-scripts collection. It used sockets to send commands from Mirc to fceux. Actually, code that create socket: function connect(address, port, laddress, lport) local sock, err = socket.tcp() if not sock then return nil, err end if laddress then local res, err = sock:bind(laddress, lport, -1) if not res then return nil, err end end local res, err = sock:connect(address, port) if not res then return nil, err end return sock end sock2, err2 = connect("127.0.0.1", 81) sock2:settimeout(0) --it's our socket object print("Connected", sock2, err2) It's a low-level tcp-socket, that send and receive data per 1 byte. Fceux lua main cycle looks like: function main() while true do --loops forever passiveUpdate() --do our updates emu.frameadvance() --return control to the emulator, and it's render next frame end end And update cycle looks like: function passiveUpdate() local message, err, part = sock2:receive("*all") if not message then message = part end if message and string.len(message)>0 then --print(message) local recCommand = json.decode(message) table.insert(commandsQueue, recCommand) coroutine.resume(parseCommandCoroutine) end end It's not very hard - we read data from the socket, and if next command in it, so we parse and execute it. Parsing made with coroutines - it's powerful lua concept for pause and resume code execution. One more thing about fceux Lua system. Execution of emulation process can be paused from Lua-script, how it can be resumed from the socket, if the main cycle is paused? Answer - there is the one undocumented Lua function, that will be called even if emulation paused: gui.register(passiveUpdate) --undocumented. this function will call even if emulator paused So, we can pause and continue execution with it - it will be used for setting up breakpoints from a remote server. Python side I created a very simple RPC-protocol, based on JSON commands. Python serialiazed command and arguments to JSON string and sent it via socket. Next, it wait to answer from lua. Answers has name "FUNCITON_finished" and field for results. This idea encapsulated in syncCall class: class syncCall: @classmethod def waitUntil(cls, messageName): """cycle for reading data from socket until needed message was read from it. All other messages will added in message queue""" while True: cmd = messages.parseMessages(asyncCall.waitAnswer(), [messageName]) #print(cmd) if cmd != None: if len(cmd)>1: return cmd[1] return @classmethod def call(cls, *params): """wrapper for sending [functionName, [param1, param2, ...]] to socket and wait until client return [functionName_finished, [result1,...]] answer""" sender.send(*params) funcName = params[0] return syncCall.waitUntil(funcName + "_finished") So, with this class lua methods can be encapsulated in Python classes-wrappers: class emu: @classmethod def poweron(cls): return syncCall.call("emu.poweron") @classmethod def pause(cls): return syncCall.call("emu.pause") @classmethod def unpause(cls): return syncCall.call("emu.unpause") @classmethod def message(cls, str): return syncCall.call("emu.message", str) @classmethod def softreset(cls): return syncCall.call("emu.softreset") @classmethod def speedmode(cls, str): return syncCall.call("emu.speedmode", str) And called exactly how it called from lua: #Restart game: emu.poweron() Callbacks Lua can register callbacks - functions, that will be called after certain conditions. We can encapsulate this behavior in Python. It needs some additional trick toimplement it. At first, we save callback function handler in python, and save this callback to lua. class callbacks: functions = {} callbackList = [ "emu.registerbefore_callback", "emu.registerafter_callback", "memory.registerexecute_callback", "memory.registerwrite_callback", ] @classmethod def registerfunction(cls, func): if func == None: return 0 hfunc = hash(func) callbacks.functions[hfunc] = func return hfunc @classmethod def error(cls, e): emu.message("Python error: " + str(e)) @classmethod def checkAllCallbacks(cls, cmd): #print("check:", cmd) for callbackName in callbacks.callbackList: if cmd[0] == callbackName: hfunc = cmd[1] #print("hfunc:", hfunc) func = callbacks.functions.get(hfunc) #print("func:", func) if func: try: func(*cmd[2:]) #skip function name and function hash and save others arguments except Exception as e: callbacks.error(e) pass #TODO: thread locking sender.send(callbackName + "_finished") Lua server save this handler, and call generic python callback function with this handler. After that, in Python, we create additional thread, that checks, if registered callback function need to be called: def callbacksThread(): cycle = 0 while True: cycle += 1 try: cmd = messages.parseMessages(asyncCall.waitAnswer(), callbacks.callbackList) if cmd: #print("Callback received:", cmd) callbacks.checkAllCallbacks(cmd) pass except socket.timeout: pass time.sleep(0.001) Last step - callback function do some work and return flow control to lua caller. How to run example - You must have working Python and Jupyter Notebook in your system. Run jupyter with command: jupyter notebook Open FceuxPythonServer.py.ipynb notebook and run first cell: Now you must run fceux emulator with ROM (I used Castlevania (U) (PRG0) [!].nes for my examples). Next, start lua script fceux_listener.lua. It must connect to running jupyter python server. I do all these things with one command-line command: fceux.exe -lua fceux_listener.lua "Castlevania (U) (PRG0) [!].nes" - Now go back to Jupyter Notebook and you must see the message about the successful connection: You are able to send commands from Jupyter to Fceux (you can execute Notebook cells one by one and see results). Full example can be viewed on github: It contains simple functions: Callbacks: And even complex script for moving Super Mario Bros enemies with mouse: Video example Limitations and applications The script is not very robust, it needs additional checks for all data, received from the network. Also, it not very fast. It's better to use some binary RPC protocol instead of text. But my implementation not need compilation, so it can be used "as is". The script can switch execution context from emulator to server and back 500-1000 times per second. It's enough for almost all applications, excepts some special debugging case (for example per pixels or per scanlines PPU debugging, but Fceux does not support it whatever). Other possible applications: - example for other remote-controlled emulators - dynamic reverse-engineering for games - add cheating or tool-assisted superplay abilities - injecting or exctracting data or code to/from games - extending emulator features - create 3rd party debuggers, recording replay tools, scripting libraries, game editors - netplay, control emulator via mobile devices/remote services/remote joypads, cloud saves/patches - cross emulator features - using python's or other languages' libraries to analyze games' data or to control game (AI-bots) Technology Stack I used: Fceux - It the classic NES emulator, that many peoples using. It not updated by a long time, and not the best by features, but it still defaults emulator for most romhackers. Also, I chose it, because it already includes several Lua libraries, LuaSocket is one of them, so I don't need to implement sockets myself. Json.lua - It's pure lua json implementation. I used it because I want to provide the sample, that no need compilation). But I need to create the fork of the library because some other lua library inside fceux code (there are no sources for these libraries) override lua system tostring function, and it brake json serialization (my rejected pull request ) to the original library. Python 3 - Fceux Lua script tries to open tcp-socket and to listen for commands sent via it. Server, that will send commands to the emulator, can be implemented with any language. I used python because of it's the philosophy of "Battery included" - a lot of modules included by default (socket and json also), and others can be added without problems. Also, Python has libraries for working with Deep Learning, and I want to experiment with it for creating AI for NES games. Jupyter Notebook - Jupyter Notebook is cool python environment, with it, you can write python commands (and not only) in table-like editor inside browser interactively. Also, it's good to creating interactive examples. Also, I used dexpot, because I need to pin fceux window topmost. Windows can't do that by default, and that window manager can. Also, it is free for personal use. Roadmap This is proof-of-concept of the remote controlling emulator. It can be used as a base for other projects. I implemented all Lua functions, that emulator has. Other more complex examples can be implemented and committed to the repository. Several improvements for speed can be done also. Links and similar projects Nintaco - Java NES emulator with api for remote controlling Xkeeper0 emu-lua collection - collection of many Lua scripts Mesen - C# NES emulator with powerful Lua script abilities (with no embedded socket support for now) CadEditor - Universal Level Editor and powerful tools for exploring games. I used the project from post to explore games and add it to the CadEditor. How to contribute? Use it, test it, explore NES games with it. Send me your scripts by pull requests. My github account Thanks for the review. Reviews and feedbacks make my code and quality of posts better =) Downvoting a post can decrease pending rewards and make it less visible. Common reasons: Submit Thank you for your review, @helo! Keep up the good work! 25% achieved =) Downvoting a post can decrease pending rewards and make it less visible. Common reasons: Submit Hi @pinkwonder!, @pinkwonder! Congratulations! This post has been upvoted from the communal account, @minnowsupport, by spii @pinkwonder!
https://steemit.com/utopian-io/@pinkwonder/fceux-remote-controlling
CC-MAIN-2019-39
refinedweb
1,803
56.96
Status People (Reporter: oeschger, Assigned: sheppy) Tracking ({meta}) Attachments (3 attachments) Need a place to keep track of additions, errors, versions, etc. Hope you guys don't mind me sticking you on here. Status: NEW → ASSIGNED Just posted an updated doc at. Waiting to see about mozilla.org changes. Will post there when it's ready. Also talking with Fabian Guisset about integrating some of the introductory material he has into the intro section of this doc. Updating URL (for now). I'm here :-) My email is hidday@geocities.com. I can't currently answer your mails because Mozilla is in a crashing mood today when composing mails (smoketest blocker). I'll do it asap though. And thanks for filing this bug :-) a few nitpicks: When moving the doc from brownhen to mozilla.org, remove the mozilla.org link in the footer. I have no idea how you generate those pages, but I am always picking on linking to #names, when you intend to just link to then page. You get a cut off header, which always puzzles me, and I have to find out where the heck I am. (docbook + dssl stylesheets? you gotta love rewriting that rule) Should have a list of -moz properties? Axel Just moved these docs over to mozilla.org/docs/dom/domref (without yet removing the link to mozilla.org, Axel--but I will do so, probably take my name out of there too :) and posted to n.p.m.dom and n.p.m.docs. I agree completely: the anchors near the tops of pages just suck. I am using Framemaker->WebWorks->HTML, a pretty common one-two punch in the tech writing biz, and while it has its advantages, it doesn't exactly encourage collaboration or simple HTML source, and it does annoying things like this. I will add the -moz properties right away. Thanks for the suggestions, Axel I think it's a good idea to at least have your name somewhere in there, unless you want me to be held responsible for all the blah blah's ;-) (just kidding) Having the examples as live pages would be cool. And do some line wrappings on the examples, they're hard to read with such long lines. Axel Hi, I was wondering what the chances of a zipped version were. For those of us without broadband, a local copy would be really useful.. David A zipped version would be good. I want to take a couple more passes at it and integrate as much of the feedback, suggestions and examples as I can. It's pretty drafty right now. Gimme a few days more on it and I will put a domref.zip down in that project directory. Thanks! Just updated the introduction and preface to the domref. Still working on these files. I changed the URL from brownhen to mozilla.org, so it's easier to find. And: how about rephrasing the second paragraph in the preface section "Who Should Read This Guide", I had to read it twice to find out that it's legal english ;-) The last paragraph in "What's Gecko" does not only stop rather suddenly (;-)), but it looks like the UI would be part of the content DOM. Those hierarchies *should* be separate. No idea how to word that more precisely though. API syntax has the return type of a return type :-(. I miss a few XXX there, too. example 4 misses a }, and some indention. so much for this comment Ian, There's a Web page that explains how to make WWP do the right thing with anchors: The examples are from the Dynamic HTML template, I believe, but the XHTML template should be extremely similar. Just checked in some Element API updates: the namespace-related methods (getAttributeNS, et cetera), a couple of other ones I was missing. Thanks a lot for those, Fabian. Haven't started on any of the DOM3 stuff yet. Will soon. Looking at Jeff's suggestion about the anchor->top stuff now. I guess I also better create a document history, at least a text file for the cvs log. What does getElementsByTagName() mean on the element? Does it get all the elements in the document to which the element belongs, or does it return the child elements or what? tanks. Just checked in a simple history file (history.txt, linked from TOC), added the getElementsByTagName() method to the element reference, fooled with the template a little. Working on DOM3 reference stuff now. Not sure what to do about the HTML-only element stuff, given how things are set up. Will stop spamming so much here. :^/ Anyone want to help out on element.innerHTML? I have a bunch of blah blahs where I hoped to put examples and some explanation of this property's usefulness, but I'm not really familiar with it. If you send me stuff in whatever form I'll push it into the docs. thanks! Just updated the domref again: pulled the not implemented stuff out of window, added a few there, cleaned it up, and indexed that chapter as well. Also made a new domref.zip and called it domref09192001.zip. Working on these today: window.scrollX window.scrollY window.scrollTo() window.scrollBy() window.scrollByLines() window.scrollByPages() window.directories window.controllers window.prompter window.pkcs11 window.updateCommands() Just added style reference section to the DOM book. This new chapter includes the interfaces for the style, stylesheet, and cssRule objects, as well as the DOM CSS properties list that I moved over from the elements chapter. I think you should add in the style reference that the only rules accessible via element.style are the inline styles. I think I've got all the window stuff in there now, albeit in a pretty spare way. Also figured out how to get the titles of the pages set correctly--all of the regular API pages were titled "Syntax", which was bad. Have a little post-process script now that sets the right title in the HTML. No..wait..still don't know what window.updateCommands() does. Can anyone give me a description of this method? thanks -i. Ok first it is a XUL-only method. It calls the updateCommands() method of the XUL command dispatcher. I think it updates all the widgets that rely on a <command> element whose id is passed in as argument to updateCommands(). You might want to verify this with someone like hyatt though. A couple of little nits about the window object: * window.Components: we should link to from * Returns the pkcs11 object , which can be used to install drivers other software associated with the pkcs11 protocol. (Grammar error in "drivers other software"?) Apart from that, it's excellent!! Thank you so much for doing this work. Just checked in another update: have a little script that actually linkifies the pointers to the specs at the bottom of each page (which were just sitting there inert because I am using Framemaker->WebWorks in a not very savvy way); also added an events chapter, which is still pretty stubby, but which I am adding to quickly. Also added a couple more examples in the examples chapter and in some of the document APIs (e.g., addEventListener, which is so key for the event object stuff I've just added). Time for a new zipfile and a PDF: docs/dom/domref10102001.zip docs/dom/domref.PDF Bug in Event interface docs: This is supposed to be the target property, which designates where the Event object is heading. Instead, it tells me target means a boolean on whether the ALT key is depressed... :) Likewise, all such dom_event_refXX.html links reflect that information. I'll be seeking info in irc.mozilla.org at #mozilla and #mozillazine for a little while. Would appreciate assistance (trying to complete my book; I'm on the Events chapter). Million more small updates. Got a great example of DOM event constants into the Examples chapter, made lots of additions and changes to the dom_events chapter per his last comment, including the documentation for those event.init methods, and created a new zipped version at: mozilla-org/html/docs/dom/domref10192001.zip Woops. Meant to say "Got a great example of DOM event constants *from Alex Vincent* into the Examples chapter," whence the reference to "his last comment" later in that same sentence. ajvincent is the possessive pronoun antecedent there. Ahem... More updates! Added the form and table element interfaces in a new chapter called the DOM HTML Elements Reference. Fabian and I did these and are working on the other specialized HTML element interfaces. I expect this chapter will be long and manifold, but not as well-thumbed as the set that's there already: doc, window, element, et al. Heading into the minutiae of DOM reference material. Also did some more in the introductory chapter. Have a better list there of "core" APIs for DOM programmers, an "Interfaces versus Objects" section, and some general cleanup. Fixed a typo in the replaceChild() params: it's not replaceChild(oldChild, newChild), as I had it, but the reverse. Ought to see the update on the site within the hour. window.updateCommands() has no documentation, only dummy text. Checked in an update of the domref: new, somewhat clearer information about the CSS properties list and the use of the style property. Also some minor formatting changes: bug link now called "tracking bug", link to mozilla.org becomes link to the DOM project itself. Updated the DOM project page: reference to domref now includes links to zipped HTML and PDF versions per requests. Hope you don't mind, Fabian. Most recent zipped HTML will always be linked simply as domref.zip. IMHO the "HTML" bullet is not necessary because it points to the same place as the larger title. I try to keep the main page as short as possible. The other two bullets are fine with me though. It helps newcomers see that it is the most important part of the DOM content. Hey Ian, I just got a mail saying the "Zipped HTML" link at docs/dom is broken. And indeed it is. It will be annoying to update that link everytime you update the zip name. Perhaps set a redirect? Have you got any solution? And btw, what's the current zip name? I can't find it. Shoot-y-boots! I forgot to cvs add the domref.zip I just made. Fabian, I am going to always call the latest zip as domref.zip, so I won't need to do any versioning on the front end. Just committed the domref and removed the extraneous domref->HTML link from the project page. Also updating the domref.pdf right now, which was a few months behind There is a problem with the "document" object documentation. I'll post more details in npm.dom/documentation and CC you, Ian. Also the "appendChild()" method is bogus in the "Element" object reference. It should say "element" instead of "document", everywhere it's used. typo at iframe.contentDocument: working_title = src_doc.tile; should probably be working_title = src_doc.title; *** Bug 119668 has been marked as a duplicate of this bug. *** Just made an update. The tile->title typo is fixed in the frame doc, the appendChild doc->el is also fixed, so is the formatting of the title in the TOC. needs new title (bookmarking matters!) is: Chapter 0 Table of Contents ought: Gecko DOM Reference Bugs relating to the DOM Event Reference: - Summary is missing for charCode, pageX/Y, layerX/Y - The detail page for pageX/Y and layerX/Y needs to be fixed - Please clarify how charCode is different from keyCode in the detail page for both properties. - Please describe how the secondaryTarget property is actually implemented instead of quoting the W3C spec (which may or may not actually correspond to the Mozilla reality) - I can't find a description of how to get the event coordinates relative to the target element anywhere (is this layerX/Y or something?). *cringe* I can't believe I'm even suggesting this since I use IE, but you might want to check to make sure your CSS styles look good on NS/Mozilla. For example, check out on IE and then on Mozilla. The IE styles are great, but Mozilla is barely readable. (The fonts are way too small, and aren't sans-serif, like they are on IE). More issues ... - It would be really nice to have a comprehensive list of event handlers, something akin to - There's no documentation for the 'oncontextmenu' event handler, even though it's supported. I have been working on getting some of Rober Kieffer's excellent suggestions and good spots into the domref. Just posted a new version that has the fix for the minor problem addressed in comment #37, has most of the issues in #38 addressed (including an example of relatedTarget I cribbed from LXR), and takes a stab at the list he suggests in #40 with a new section in dom_event_ref, DOM Event Handler List. For this last one, I grabbed stuff from DOMEvent.h and began to write little summaries of the related properties. Not done yet. Some of them are missing and some are almost certainly inaccurate. But I am not averse to publishing blatant inaccuracies and blah blahs to stimulate assistance, as you will know if you've read through the domref. :) Take a look at the new section and give me a hand if you have a minute. Just need to describe what each of the events are for which these guys are handlers, and may want to add more information if we have it. Robert also suggested creating new bugs for some of the more important issues in the reference. If you do this, please create them in Documentation->Web Developer and also create a dependency here to the new bug. Thanks a lot! Jasper du Pont noticed this: hasChildNodes is not a boolean property but a method that returns a boolean. Had it wrong in the domref. Fixed now (locally, though not on mozilla.org. check-in coming soon). cols documentation is wrong: fixed locally. checking in an update with this and a few other things very soon. Suggestion from bill@fastpitchcentral.com: "How about adding a link, or a search box, to a script reference? For example, if someone is looking for details on using 'this' to provide a self-reference, it would be helpful to start on your Index page." I think we should do something like this to make JS info as readily available as possible. Which reference should we be referring to in this case? fixed nodeValue description (and closed bug 122741), hasChildNodes syntax, and FRAMESET.cols documentation. Thanks all. A few things:* You should add body.vLink, .aLink etc.* In document.vlinkColor (and friends) you should note that it is deprecated from the DOM2 spec and now "redirects" to its body correspondent. is wrong, insertBefore needs 2 arguments. Ah, yes. Very sharp. Just testing you, of course, doron: _I know_ insertBefore takes two arguments...Ahem...here is another sighting from jst: "There is no focus() method on document, so the page is incorrect." I will get both of these updates in tout de suite. thanks both of you. another thing. Some pages have: Parameters param is a something. in them when the entity in question has no parameter. Example is I've tested (Mozilla RC3) the example of replaceChild() but it doesn't work. I got only a Error: uncaught exception: [Exception... "Node was not found" code: "8" nsresult: "0x80530008 (NS_ERROR_DOM_NOT_FOUND_ERR)" location: " DFmann/Sonstiges/CSS%20Werbung%20Test/replace-doc.html Line: 10"] --example-- <html> <body> <div id="top" align="left"> <div id="in">in</div> </div> <script> d1 = document.getElementById("top"); d_new = document.createElement("p"); d1.replaceChild(d_new, d1); alert(d1.childNodes[0].tagName) </script> </body> </html> --/example-- See bug #56758 for the replaceChild issue The problem is the example. d1.replaceChild(d_new, d1); is incorrect. The second argument should the node to be replaced. Also, the alert won't work because of #text node which is the first child. updated the replaceChild example in the HTML. Will fix the source (my Framemaker documents) as soon as I can get to them. Thanks, Bob, Goerg, Kinger. The reference for is incorrect: Syntax documentFragment = element.createDocumentFragment Element does not support createDocumentFragment, Document does however. It is also missing the ending (). It should read documentFragment = document.createDocumentFragment(); The Example frag = document.createDocumentFragment(); frag.write("<div>Moveable Div</div>"); is incorrect. a DocumentFragment is a Node and does not support a method called write. It should read: frag = document.createDocumentFragment(); elm = frag.appendChild( document.createElement('foo') ); I'd like to help out by working on the docs for the Range. I have working knowledge of the DOM 2 Range interfaces, as well as examples for most that I have used in exposing bugs in it over the past two years. I have a real need for the specs so I'm sure others do too. So what do I need to do to get involved in this? You're on, Dylan. Dylan has sent me great docs for the Range APIs and I am integrating them into the book this week. Also working on noting the APIs in the domref that are frozen. Just checked in Dylan's Schiemann's Range interface docs. Also regenerating domref.zip and domref.pdf for your viewing pleasure. Thanks, Dylan! Properties, methods, events should be listed in alphabetical order (see HTMLFormElement Interface: ). In the frame page ( ), replace the 2nd name by noresize and edit the link accordingly if the page is done. General comments: ================= Definitions given in reference pages should always correspond to the definition in detail pages. E.g.: "clientX :Returns the horizontal position of the event" does not correspond to "clientX: Returns the horizontal coordinate of the event within the DOM client area". "Parameters" in detail pages should be only used for methods. Mentioning parameters makes no sense when defining object and properties. It's an misuse of expression. Inconsistant choice of identifiers in definition pages: e.g.: the syntax refers to event.screenX but the example uses e.screenX. I personally have a problem with the use of the object's identifier "event" since this is known to be the event object as a property of the window object under MSIE. This is also bad for people learning javascript, DOM as it confuses matters. Most programmers use evt to identify to Netscape's event object. Some examples simply cannot work (too many mistakes). E.g.: event.screenX ( ). Some definitions are simply wrong: window.screenX/Y to name one. ( ) window.screenY returns the vertical distance of the top border of the user's browser from the top side of the screen. Some methods are missing (e.g. method addEventListener not listed in window and document). The format of the Specification section should be standardized and always be a link to the precise url defining the standard. E.g.: Specification DOM level 2 Style module -- deleteRule The Notes section should always include a default value if there is one (mostly properties). I agree with R. Kieffer's comment #38 entirely. I'm sorry if I appear nitpicking and harsh here but I personnally feel that MSDN has a much better reference site with interactive demo examples: it is more useful, more complete, more structured. I understand that the Gecko DOM reference is in development and Ian Oeschger might be overloaded. Specific points: =============== window.screen.availTop: Returns the first available pixel from the top of the screen available to the browser. window.screen.availHeight : Returns the amount of vertical space available to the window on the screen. The operative word here is available. I believe people will not understand what that means. It's true that this property is very rarely used. Nevertheless I think there should be a better definition for these 2 properties. How about: availTop Specifies the y-coordinate of the first pixel that is not allocated to permanent or semipermanent user interface features. availHeight Specifies the height of the screen, in pixels, minus permanent or semipermanent user interface features displayed by the operating system, such as the Taskbar on Windows. as given by Client-Side JavaScript Reference v1.3 at One can see the effect of these properties when you move the windows taskbar around the screen (at top and left sides of the monitor video display). --------- I think you should redo the detail page of event.clientX/Y and event.layerX/Y entirely. I can help you on this. The definition of pageX/Y "looks" awkward or wrong. PageX/Y return the horizontal/vertical coordinate of the event relative to the whole width/height of the document. "visible page" as stated in the definition is very misleading: I am sure some people would think this is the client area, the rendering area. window.onresize and window.onscroll are DOM level 2 Events module spec: Added a first batch of methods from Dylan Schiemann to the Range object APIs at:. Not sure it's up on the server quite yet, and it's not complete, but the remainder are coming shortly. Selection interfaces also coming soon! Dr. Unclear--thanks for the comments. As you suggest, I can't always spend as much time on the domref as it wants--certainly not as much as the _army of MS employees_ do on their corresponding reference materials--but I welcome your suggestions. I will try to address some of the issues you bring up, and I have already in fact begun some updates based upon the more specific comments (e.g., ambiguity of availHeight, availWidth, etc.). Robert Kieffer--haven't forgotten about your very useful suggestions either. DOM Elements Interface ====================== offsetLeft, offsetTop, offsetWidth and offsetHeight --------------------------------------------------- offsetLeft, offsetTop, offsetWidth and offsetHeight are all referenced/related to a system of coordinates created by the closest (nearest in the containment hierarchy) positioned containing element. If the element is non-positioned, the root element (html in standards compliant mode; body in quirks rendering mode) is the offsetParent. So, in the definition of these 4 properties, you can safely replace the last 2 words "parent element" by "offsetParent node" offsetParent ------------ "offsetParent returns a reference to the object in which the current element is offset (i.e., the parent element)." can be changed to "offsetParent returns a reference to the object which is the closest (nearest in the containment hierarchy) positioned containing element. If the element is non-positioned, the root element (html in standards compliant mode; body in quirks rendering mode) is the offsetParent. N.B.: Only 1 item needs to be checked/verified for sure about this definition: if the root element is actually the html node in standards compliant rendering mode and if the body node is the root element in quirks/compatible rendering mode. I'm pretty sure it is like that. Re: comment #59 "I can't always spend as much time on the domref as it wants--certainly not as much as the _army of MS employees_ do on their corresponding reference materials (...)" I understand. I'm willing to help. I consider myself as very good at testing and creating good test cases or examples. DOM window Interface ==================== The window.title property does not exist. The document and element object has the title property, but not the window. Removed window.title interface. Thinking now we ought to make new bugs (like 120589 133932 and others, which I am creating dependencies for now in this bug) for these specific issues. Then I can have the pleasure of closing them as I fix them. :) An update later today to the domref will resolve a few of these. docs on element offset properties updated per comment #60. yeah, dr. unclear! Here's a working and more complete example for evt.screenX/Y. The notes about the routing of event is also covered in this example. Finally, removing the <p id="idP"> node and leaving the red, yellow and green image serving as a client image map (that would better fit the meaning of the function name) is up to you. In the "Notes" section, please change "When you trap events on the window, document, or other roomy elements, " for "When you trap events on the window, document or other elements," I'm not sure that "trap" is the best verb. Maybe capture or "When events are fired on the ...". Just my opinion here. How do I upload an image in bugzilla? I.e.: RedYellowGreenClickMap.png ------- <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" ""> <html lang="en-us"> <head> <meta http- <meta http- <meta http- <meta http- <title>evt.screenX (dom_event_ref21.html)</title> <script type="text/javascript"> <!-- function checkClickMap(evt) { if (evt.screenX < 60) doRedButton(evt); if (evt.screenX >= 60 && evt.screenX < 110) doYellowButton(evt); if (evt.screenX >= 110) doGreenButton(evt); } function doRedButton(RoutingEvtForRed) {alert("red was clicked at: " + RoutingEvtForRed.screenX);} function doYellowButton(RoutingEvtForYellow) {alert("yellow was clicked at: " + RoutingEvtForYellow.screenX);} function doGreenButton(RoutingEvtForGreen) {alert("green was clicked at: " + RoutingEvtForGreen.screenX);} function init() { document.getElementsByTagName("body").item(0).addEventListener("click" , checkClickMap , true); /* document.getElementsByTagName("body")[0].addEventListener("click" , checkClickMap , true); also works without a problem */ } --> </script> <style type="text/css"> <!-- body {margin:7px; color:black; background-color:white; border:3px solid silver;} p#idP {color:white; background-color:black;} --> </style> </head> <body onload="init();"> <p id="idP">The body node is within the silver bordered box.</p> <p><img src="RedYellowGreenClickMap.png" style="width:200px; height:50px;" alt="A red, yellow and green image"></p> </body></html> In the index file I would use a bigger font-size for the A - B - C - ... - X - Y - Z links. Regarding the long list of links to files, did you consider spacing a bit each line and wrapping the text into an anchor? I would find this more convenient than having to click on a single number, which is rather narrow... You have/use big letters for delimiting subsection but rather "little" font for clickable links. Just my opinion here. pageXOffset is missing in that file; title 2 should be removed and also window.title . This is a request for more detail in the Gecko DOM Reference. I see other similar comments in this bug, so I think this is the right place. navigator.platform -- some known values are listed, but apparently incorrect -- "win32" seems to be the value used for all Win9x/WinNT systems. I'd like to see the Windows 3.1 value, in particular; additional Linux/Unix strings, if known, would also be helpful. navigator.oscpu -- no field values are listed; listing the known ones would be useful, along with a note about the less-specific (i.e. Unix) values. I would like to know, for instance, that the string "OS X" appears in the Mac OSX version, since that string apparently does not appear in the 'platform' field. navigator.userAgent -- could a link be added from here to: ... and, could that linked document be updated as well, with full documentation for the Mac cases, and additional Linux/Unix info? Is it guaranteed that, barring spoofing, the 'oscpu' field will appear verbatim in the userAgent? (It is not the case that the 'platform' field appears verbatim in the userAgent.) Attached is a list of userAgent strings I grep'd out of an enormous list from someone's website. If the 'oscpu' field does normally appear verbatim in the userAgent string, then the data there can be used to flesh out the oscpu list of values. Just updated the domref html and pdf: now has Dylan's Mozilla Range interface extensions in the dom_range chapter. *** Bug 193270 has been marked as a duplicate of this bug. *** >I can't always spend as much time on the domref as it wants--certainly not as >much as the _army of MS employees_ do on their corresponding reference materials Yes, that is true, Ian. Maintaining the messy source code requires extra effort --I know from experience. Ian has decided that using inline styling is the way to go. lots of: <span style="color:#000000; font-style: normal; font-weight: normal; text-decoration: none; text-transform: none; vertical-align: baseline... When I speak of messy source code, this is a big part of the problem. This bug should be marked INVALID. What little progress is made here is futile and in vain. Bug 193270, marked a duplicate of this, more clearly addresses the problem: The documentation needs to be rewritten. The dom refs docs are 90% wrong. Since the docs have messy source code, it would be more efficient and even easier to bag the current project and start over with a carefully thought-out plan. I strongly encourage anyone who is contributing to the domref docs to stop and think about the usefulness of your efforts. Could your efforts be more efficiently used? If so, in what scenario? I have already made an offer to work on rewriting the docs and I provided a sample document, attachment 114391 [details]. It documents interfaces as interfaces and objects as objects, never confusing the two as the current docs do. The sample document has a few minor technical difficulties (misspelling "hierarchy" and says "implements modules" instead of "supports modules"). The sample doc is also short on links. Each property needs to be linked, and this is a TODO. I stand by the structure of my sample and the completeness of its approach. I posted a message in netscape.public.mozilla.documentation a few min. ago (Subject line is "Gecko DOM reference: problems and suggestions. (bug 93108, bug 193210, bug 157610)"). I hope people from mozilla.org and netscape (DevEdge) will read it with an open-mind and try to establish a roadmap, agree on a plan, a new design, some new protocols to best use efficiently the time and efforts of volunteers willing to update the Gecko DOM reference and upgrade its functionalities, content, etc.. I know that others tried a few years ago. The cost involved by incorrect, weak, flawed, unusable, counter-productive, inconsequent documentation is immense and it affects all who use it .. or avoid it deliberately. Just this window.innerWidth issue is a good example of how much time, efforts is lost, redundantly lost, repeatedly from duplicate bugfiles to INVALID ones. Bug 57700 is another good example. I wish one day I can post a reply in a web programming newsgroup and confidently recommend the Gecko DOM reference without shame, without hesitation, without guilt. I tried like many others to do a difference and I hope others will succeed where I have failed. I have nothing personal against Ian Oeschger, J. Rudman or B. Clary. The document is written in FrameMaker and then exported to HTML. Ian Oeschger said he would convert the doc to DocBook so other people can contribute. Before that happens, DOM ref is still a one-man effort :-( In #c72, I was referring to bug 193270. > Ian Oeschger said he would convert the doc to DocBook so other people can contribute. Even if this is done, Gecko DOM reference will still need to address many issues, organize a roadmap, a plan of some sort, protocols, etc.. As it is, the Gecko DOM reference is not recommendable and the organization to contribute to it is impractical, too much time-consuming and too much effort consuming and involves duplicated efforts. Am I fair and square here? >Before that happens, DOM ref is still a one-man effort :-( The Gecko DOM ref never should have been a one-man effort. Just by reading this bugfile, I think it never was either. Maybe Ian O. has been the main person behind this bugfile and Gecko DOM reference (GDR) edition but that's the nr 1 issue to address. Mozilla never was and never should be a 1 person organization. By saying this, I'm not condemning Ian's personal efforts into this bugfile or GDR. Nobody can't reproach Ian for not being omniscient or for not being an impeccable robot. But I'll definitively point my finger toward so many Mozilla.org webpages inviting people to contribute to documentation when trying to do so is so frustrating, counter-productive (blatant incoherences) and a big waste of volunteer time. That's how I feel about this issue. This documentation issue makes lots of other people (in bugzilla bugfile and elsewhere) lose time also: that's just one major consequence of the GDR documentation state. moving stuff over to an outside-the-firewall email for the time being, looking for people to pick these Help and doc bugs up for me. Assignee: oeschger → oeschger Status: ASSIGNED → NEW Ian, I would be interested in maintaining it, just need to get cvs access to it :) reassigning to doron. Doron, the original source files are in Framemaker. I checked them into a src subdir of that domref dir on mozilla. Can talk to you about maintenance. Thanks! Assignee: oeschger → doronr The reference is in process of being migrated to dev.m.o wiki, updated URL. Assignee: doronr → nobody Component: Web Developer → Documentation Requests Product: Documentation → Mozilla Developer Center QA Contact: rudman → doc-request Component: Documentation Requests → Documentation Product: Mozilla Developer Network → Mozilla Developer Network Automatically closing all bugs that have not been updated in a while. Please reopen if this is still important to you and has not yet been corrected. Status: NEW → RESOLVED Last Resolved: 6 years ago Resolution: --- → INVALID Reopening for review by Sheppy. Assignee: nobody → eshepherd Status: RESOLVED → REOPENED Resolution: INVALID → --- Component: Documentation → General Product: Mozilla Developer Network → Developer Documentation Whiteboard: u=webdev p=0 Whiteboard: u=webdev p=0 → u=webdev p=0 c=DOM Now that we have "Developer Documentation::DOM" as component/product pair, I don't think this tracking bug is necessary any longer. Closing. Feel free to reopen if it serves a different purpose than "Developer Documentation::DOM" Status: REOPENED → RESOLVED Last Resolved: 6 years ago → 6 years ago Resolution: --- → INVALID
https://bugzilla.mozilla.org/show_bug.cgi?id=93108
CC-MAIN-2019-13
refinedweb
5,685
66.23
Proxy Pattern Tutorial with Java Examples Proxy Pattern Tutorial with Java Examples Learn the Proxy Design Pattern with easy Java source code examples as James Sugrue continues his design patterns tutorial series, Design Patterns Uncovered Join the DZone community and get the full member experience.Join For Free Today's pattern is the Proxy pattern, another simple but effective pattern that helps with controlling use and access of resources. Proxy in the Real World A Proxy can also be defined as a surrogate. In the real work". Design Patterns Refcard For a great overview of the most popular design patterns, DZone's Design Patterns Refcard is the best place to start. The Proxy Pattern The Proxy is known as a structural pattern,as it's used to form large object structures across many disparate objects. Thedefinition of Proxy provided in the original Gang of Four book on DesignPatterns states: Allows for object level access control by acting as a pass through entity or a placeholder object. So it's quite a simple concept - to save on the amount of memory used, you might use a Proxy. Similarly, if you want to control access to an object, the pattern becomes useful. Let's take a look at the diagram definition before we go into more detail. As usual, when dealing with design patterns we code to interfaces. In this case, the interface that the client knows about is the Subject. Both the Proxy and RealSubject objects implement the Subject interface, but the client may not be able to access the RealSubject without going through the Proxy. It's quite common that the Proxy would handle the creation of the RealSubject object, but it will at least have a reference to it so that it can pass messages along. Let's take a look at this in action with a sequence diagram. As you can see it's quite simple - the Proxy is providing a barrier between the client and the real implementation. There are many different flavours of Proxy, depending on it's purpose. You may have a protection proxy, to control access rights to an object. A virtual proxy handles the case where an object might be expensive to create, and a remote proxy controls access to a remote object. You'll have noticed that this is very similar to the pattern. However, the main difference between bot is that the adapter will expose a different interface to allow interoperability. The Proxy exposes the same interface, but gets in the way to save processing time or memory. Would I Use This Pattern? This pattern is recommended when either of the following scenarios occur in your application: - The object being represented is external to the system. - Objects need to be created on demand. - Access control for the original object is required - Added functionality is required when an object is accessed. Typically, you'll want to use a proxy when communication with a third party is an expensive operation, perhaps over a network. The proxy would allow you to hold your data until you are ready to commit, and can limit the amount of times that the communication is called. The proxy is also useful if you want to decouple actual implementation code from the access to a particular library. Proxy is also useful for access to large files, or graphics. By using a proxy, you can delay loading the resource until you really need the data inside. Without the concept of proxies, an application could be slow, and appear non-responsive. So How Does It Work In Java? Let's continue with the idea of using a proxy for loading images. First, we should create a common interface for the real and proxy implementations to use: public interface Image{ public void displayImage(); } The RealImage implementation of this interface works as you'd expect: public class RealImage implements Image{ public RealImage(URL url) { //load up the image loadImage(url); } public void displayImage() { //display the image } //a method that only the real image has private void loadImage(URL url) { //do resource intensive operation to load image }} Now the Proxy implementation can be written, which provides access to the RealImage class. Note that it's only when we call the displayImage() method that it actually uses the RealImage. Until then, we don't need the data. public class ProxyImage implements Image{ private URL url; public ProxyImage(URL url) { this.url = url; } //this method delegates to the real image public void displayImage() { RealImage real = new RealImage(url); real.displayImage(); }} And it's really as simple as that. As far as the client is concerned, they will just deal with the interface. Watch Out for the Downsides Usually this is the stage that I point out the disadvantages to the pattern. Proxy is quite simple, and pragmatic, and it's one pattern that I can't think of any downsides for. Perhaps you know of some? If so, please share them in the comments section. Next Up The Decorator pattern is a close relation to the Proxy pattern, so we'll take a look at that }}
https://dzone.com/articles/design-patterns-proxy
CC-MAIN-2019-18
refinedweb
850
53.41
How to Use Lambda Expressions in Java 8 Java 8 introduces a new feature that in some ways is similar to anonymous classes, but with more concise syntax. More specifically, a lambda expression lets you create an anonymous class that implements a specific type of interface called a functional interface— which has one and only one abstract method. The Ball interface meets that definition: interface Ball { void hit(); } Here the only abstract method is the hit method. A functional interface can contain additional methods, provided they are not abstract. Until Java 8, this was not possible because an interface could contain only abstract methods. However, in Java 8 you can create default methods which provide a default implementation. Thus a functional interface can contain one or more default methods, but can contain only one abstract method. A lambda expression is a concise way to create an anonymous class that implements a functional interface. Instead of providing a formal method declaration that includes the return type, method name, parameter types, and method body, you simply define the parameter types and the method body. The Java compiler infers the rest based on the context in which you use the lambda expression. The parameter types are separated from the method body by a new operator, called the arrow operator, which consists of a hyphen followed by a greater-than symbol. Here’s an example that implements the Ball interface: () -> { System.out.println("You hit it!");} Here the lambda expression implements a functional interface whose single method does not accept parameters. When the method is called, the text “You hit it!” is printed. You can use a lambda expression anywhere you can use a normal Java expression. You’ll use them most in assignment statements or as passed parameters. The only restriction is that you can use a lambda expression only in a context that requires an instance of a functional interface. For example, here’s a complete program that uses a lambda expression to implement the Ball interface: public class LambdaBall { public static void main(String[] args) { Ball b = () -> { System.out.println("You hit it!"); }; b.hit(); } interface Ball { void hit(); } } The general syntax for a lambda expression is this: (parameters) -> expression or this: (parameters) -> { statement; ... } If you use an expression, a semicolon is not required. If you use one or more statements, the statements must be enclosed in curly braces and a semicolon is required at the end of each statement. Don’t forget that the statement in which you use the lambda expression must itself end with a semicolon. Thus, the lambda expression in the previous example has two semicolons in close proximity: Ball b = () -> { System.out.println("You hit it!"); }; The first semicolon marks the end of the statement that calls System.out.println; the second semicolon marks the end of the assignment statement that assigns the lambda expression to the variable b.
https://www.dummies.com/programming/java/how-to-use-lambda-expressions-in-java-8/
CC-MAIN-2019-13
refinedweb
483
54.63
: 125 thoughts on “Twisted Introduction” hi dave, I’m Jayson Pryde, and I’m new to twisted. I’ve been learning Twisted via your awesome tutorial, and I am already in part 13. I tried to implement my own client-server system, but I am encountering some hangs in the client. It’s probably because I messed up with callbacks in the deferred. I’ve asked this via this link in stackoverflow: It would be greatly appreciated if you can find time and take a look, and help me point out where I messed up. So what happened in the client is I am already able to send a request, but it seems that the callback to process the request is not fired up. Hope you can find time. thanks a lot in advance! Hi Jayson, do you have the code in a source code repository like github? It would be much easier to take a look and help you in that case. Hi Dave, Thanks for the reply. Here’s the github link of my code: Thanks a lot again! Cool, I just sent you a pull request. Thanks a lot dave. I already merged it. Before I received your reply, this was somehow the same thing I did to make it working. I remove the other factory/class to make it work. But still, my question is, how come your examples work (even if they are already old) when I run it in my machine? And also, I’m still quite confused with the deferred returning a deferred. Thanks a lot again Dave! 🙂 I’m not sure what you are asking — my examples work because the Twisted project has been very good about maintaining backwards compatibility. The code you posted didn’t work because of bugs I tried to explain in the pull request. I recommend reading the source code for Deferred itself and reworking the examples in the relevant chapters of the tutorial. Learning a new kind of programming takes time, you have to go slowly in the beginning. Hi dave, Thank you for your nice introduction to asynchronous programming, and I’m reading chapter 4 now. I think the philosophy of asynch programming is what I need for my project so I wish to ask you few questions. Let me explain myself first, currently I’m working on an automated web-site testing project. I decide use selenium + bs4 + unittest approach. The website I’m testing heavily relay on ajax and iframe, so I wish to develop someway better than traditional unittest case, which is separate interaction(I/O) and content assertion(RAM) My original idea is have a main thread do all navigation and interaction, and use bs4 to parse the DOM, and then create children to assert those parsed DOM. However, after read your blog, I realize I don’t really need threads to do it, I think asynchronous way suits my purpose too. So my question is, is my understanding of asynchronous programming suits my purpose correct? is this a possible solution for automated web testing? or is there anyone else already did/doing something similar? (since I really can’t find anything similar) Thank You Jack Hey Jack, thanks for the kind words! If I’m understanding you, I think Twisted could be used in lieu of threads. I wonder, though, if you need threads or async at all? Is it not possible to do the interaction and then parse the result in the same thread? Both threaded and async code are more complex than vanilla single-threading and for functional testing the performance of a single, non-async thread may be acceptable. Hi Dave, thank you for your answer! Since I’m using selenium to simulate user visiting website using browser, it actually involves heavy I/O operation: fetching pages. In fact, since the data in that website is huge and queries are not that optimized, some page may take more than 30s to load. This is why I’m looking for a method to separate the I/O operation from DOM assertion. You are right about threads, I think I don’t really need thread programming. But if my code can run DOM assertion while I/O fetching next page would be nice Furthermore, it’s also about code organization. Since the website currently is under developing, although main frame remain same, the content on single page might change frequently, that’s why I wish to keep them separate, for better organization and change Thank You Jack Hey sir i am new in learning of twisted and i learn many thing but i am confuse at one point of my script i am making script where user just need to give the number of URL of any website then script will be extract data from the website like Title, Description and Links which are on the website and after getting links it will also extract these three details from the extracted link and so on untill the link not completed (Similar to Sitemap Crawler). this is an idea. I write this code using Twisted Python but it complete only one cycle after that reactor stop can you please help and that how i can run the Twsited addcallback function into the loop according to certain condition did not complete thank you Please look at the code. Thank you 2:)Extracting Code From Website def parseHtml(html): print(“\t\t\t———————-“) print(“\t\t\t Requesting Website “) print(“\t\t\t———————-“) # Create this Defered Variable for returning the Function Values to another function defered = defer.Deferred() # Now Calling another function which will be extract 3 thing from the Response(Website Code) # reactor.callLater (Received 3 parameters 1: Delay time, Return Type, 3:parameter mean Data reactor.callLater(3, defered.callback, html) # defered will be return the Value to another function return defered pass 3:)Extracting Title,Description,Links(URL) from Source Code def ExtractingData(response, url, crawling, Webname, tocrawl): crawled = set([]) WebKeyword = [] WebTitle = [] WebDescription = [] keywordregex = re.compile(‘<meta\sname=[“\’]keywords[“\’]\scontent=“\’[“\’]\s/>’) linkregex = re.compile(‘<a\shref=\’|”[\'”].?>’) print(“\t\t\t——————————–“) print(“\t\t\t Extracting Data from Website “) print(“\t\t\t——————————–“) msg = response startPos = msg.find(”) print(“Start position:{}”.format(startPos)) if startPos != -1: endPos = msg.find(”, startPos + 7) if endPos != -1: title = msg[startPos + 7:endPos] print(“Title:{}”.format(title)) WebTitle.append(title) else: WebTitle.append(“N/A”) pass pass # Getting Description from the Website Soup = BeautifulSoup(msg, ‘html.parser’) Desc = Soup.findAll(attrs={“name”: “description”}) if len(Desc) <= 0: print(“N/A”) WebDescription.append(“N/A”) else: print(“Description:{}”.format(Desc[0][‘content’].encode(‘utf-8’))) WebDescription.append(Desc[0][‘content’].encode(‘utf-8’)) pass keywordlist = keywordregex.findall(msg) if len(keywordlist) > 0: keywordlist = keywordlist[0] keywordlist = keywordlist.split(“, “) print (“Keyword:{}”.format(keywordlist)) WebKeyword.append(keywordlist) else: WebKeyword.append(“N/A”) pass links = linkregex.findall(msg) # print(“Links:{}”.format(links)) crawled.add(crawling) for link in (links.pop(0) for _ in xrange(len(links))): if link.startswith(‘/’): link = ‘http://’ + url[1] + link elif link.startswith(‘#’): link = ‘http://’ + url[1] + url[2] + link elif not link.startswith(‘http’): link = ‘http://’ + url[1] + ‘/’ + link pass if link not in crawled: if Webname[0] in link: print(“Link:{}”.format(link)) tocrawl.add(link) pass pass pass print(“Crawled URL:{}”.format(len(tocrawl))) defers=defer.Deferred() defers.callback(None) return pass 1:)Sitemap Crawler def SitemapCrawler(Link): tocrawl = {Link} Webname = str(Link).replace(“http://”, “”).split(“.”) Iterator = False while Iterator is not True: try: crawling = tocrawl.pop() except: Iterator = True pass url = urlparse.urlparse(crawling) #downloadPage(crawling,”NewFile.txt”) print(“Calling:{}”.format(crawling)) d = getPage(crawling) # Extracting Code or Website Response from Pages d.addCallback(parseHtml) # Calling Finish/Stop Function at the END of Processes d.addCallback(ExtractingData, url=url, crawling=crawling, Webname=Webname, tocrawl=tocrawl) reactor.run() pass # StoringIntoDatabase(WebTitle, WebKeyword, crawledList, WebDescription) pass 4:)Finish Process def finishingProcess(): print(“\t\t\t Stopping the Process……..”) # 3 is the delay time which will be stop all work without this we cannot stop the work reactor.callLater(5, reactor.stop) pass Hi there, it’s a bit difficult to read this code as it is not formatted. Do you have it in source control, say GitHub, where it would be a lot easier to read and comment upon? Hi Dave, My English is not very well, so I hope you can understand what I want to tell you. I come from Taiwan, and I’m learning Python. I read your “Twisted Introduction”, it’s really helpful for me. I know someone already had translated into Simplified Chinese, but some of the content is not very …correct. So I have re-translated it to Traditional Chinese, also modified your sample code make they run in Python 3.I want to put re-translated articles and modified code on my blog and GitHub. But I want to get your permission first. Your articles are very helpful to me, so I want to share them with others people who want to learn Twisted. I hope I can get your permission. Thank you. Shan-Ho Chan Hi Shan-Ho, I’m very glad you’ve found my articles helpful and you definitely have my permissions to translate and re-post them, thank you. Thank you for doing such a service for mankind man. 4 birds with one stone python/sockets/twisted/async_coding. Reaching out from Turkey. So glad you liked it!
http://krondo.com/an-introduction-to-asynchronous-programming-and-twisted/comment-page-2/
CC-MAIN-2019-51
refinedweb
1,576
65.73
03 March 2004 17:20 [Source: ICIS news] LONDON (CNI)--Resolution Performance Products (RPP) said Wednesday it is seeking to raise the European prices of epoxy resins by Euro150/tonne from 1 April. In addition, the US firm is seeking rises in the prices of epichlorohydrin (ECH) and bisphenol A (BPA) of Euro150/tonne and Euro125/tonne, respectively. Similar announcements have been made for other markets. A company source told CNI the European hikes would be about 7-10% over current selling prices. For BPA, an independent market source said the market is unlikely to accept the hike fully. For ECH, another source said the proposed hike is not likely to be achieved. ?xml:namespace> Ian Harris, business manager of major resins for Europe, Middle East & Africa (EMEA) at RPP, commented: “Epoxy demand through Europe has been declining in recent years through the global downturn, leading to depressed pricing at all time historic low points. Combined with a sustained high position of raw materials, margins for epoxy producers reached unsustainably low levels recently.” He added: “However, the tide is now turning. Epoxy demand has picked up, and there are even some global shortages in both precursors and epoxies. Prices are already rising in Asia, and we are taking up our prices in EM
http://www.icis.com/Articles/2004/03/03/562458/resolution-seeks-price-hikes-in-euro-epoxy-ech-bpa-1.html
CC-MAIN-2014-42
refinedweb
213
60.95
November 2013 Volume 28 Number 11 ASP.NET - Single-Page Applications: Build Modern, Responsive Web Apps with ASP.NET By Mike Wasson. For the traditional ASP.NET developer, it can be difficult to make the leap. Luckily, there are many open source JavaScript frameworks that make it easier to create SPAs. In this article, I’ll walk through creating a simple SPA app. Along the way, I’ll introduce some fundamental concepts for building SPAs, including the Model-View-Controller (MVC) and Model-View-ViewModel (MVVM) patterns, data binding and routing. About the Sample App The sample app I created is a simple movie database, shown in Figure 1. The far-left column of the page displays a list of genres. Clicking on a genre brings up a list of movies within that genre. Clicking the Edit button next to an entry lets you change that entry. After making edits, you can click Save to submit the update to the server, or Cancel to revert the changes. Figure 1 The Single-Page Application Movie Database App I created two different versions of the app, one using the Knockout.js library and the other using the Ember.js library. These two libraries have different approaches, so it’s instructive to compare them. In both cases, the client app was fewer than 150 lines of JavaScript. On the server side, I used ASP.NET Web API to serve JSON to the client. You can find source code for both versions of the app at github.com/MikeWasson/MoviesSPA. (Note: I created the app using the release candidate [RC] version of Visual Studio 2013. Some things might change for the released to manufacturing [RTM] version, but they shouldn’t affect the code.) Background In a traditional Web app, every time the app calls the server, the server renders a new HTML page. This triggers a page refresh in the browser. If you’ve ever written a Web Forms application or PHP application, this page lifecycle should look familiar. In an SPA, after the first page loads, all interaction with the server happens through AJAX calls. These AJAX calls return data—not markup—usually in JSON format. The app uses the JSON data to update the page dynamically, without reloading the page. Figure 2 illustrates the difference between the two approaches. Figure 2 The Traditional Page Lifecycle vs. the SPA Lifecycle One benefit of SPAs is obvious: Applications are more fluid and responsive, without the jarring effect of reloading and re-rendering the page. Another benefit might be less obvious and it concerns how you architect a Web app. Sending the app data as JSON creates a separation between the presentation (HTML markup) and application logic (AJAX requests plus JSON responses). This separation makes it easier to design and evolve each layer. In a well-architected SPA, you can change the HTML markup without touching the code that implements the application logic (at least, that’s the ideal). You’ll see this in action when I discuss data binding later. In a pure SPA, all UI interaction occurs on the client side, through JavaScript and CSS. After the initial page load, the server acts purely as a service layer. The client just needs to know what HTTP requests to send. It doesn’t care how the server implements things on the back end. With this architecture, the client and the service are independent. You could replace the entire back end that runs the service, and as long as you don’t change the API, you won’t break the client. The reverse is also true—you can replace the entire client app without changing the service layer. For example, you might write a native mobile client that consumes the service. Creating the Visual Studio Project Visual Studio 2013 has a single ASP.NET Web Application project type. The project wizard lets you select the ASP.NET components to include in your project. I started with the Empty template and then added ASP.NET Web API to the project by checking Web API under “Add folders and core references for:” as shown in Figure 3. Figure 3 Creating a New ASP.NET Project in Visual Studio 2013 The new project has all the libraries needed for Web API, plus some Web API configuration code. I didn’t take any dependency on Web Forms or ASP.NET MVC. Notice in Figure 3 that Visual Studio 2013 includes a Single Page Application template. This template installs a skeleton SPA built on Knockout.js. It supports log in using a membership database or external authentication provider. I didn’t use the template in my app because I wanted to show a simpler example starting from scratch. The SPA template is a great resource, though, especially if you want to add authentication to your app. Creating the Service Layer I used ASP.NET Web API to create a simple REST API for the app. I won’t go into detail about Web API here—you can read more at asp.net/web-api. First, I created a Movie class that represents a movie. This class does two things: - Tells Entity Framework (EF) how to create the database tables to store the movie data. - Tells Web API how to format the JSON payload. You don’t have to use the same model for both. For example, you might want your database schema to look different from your JSON payloads. For this app, I kept things simple: namespace MoviesSPA.Models { public class Movie { public int ID { get; set; } public string Title { get; set; } public int Year { get; set; } public string Genre { get; set; } public string Rating { get; set; } } } Next, I used Visual Studio scaffolding to create a Web API controller that uses EF as the data layer. To use the scaffolding, right-click the Controllers folder in Solution Explorer and select Add | New Scaffolded Item. In the Add Scaffold wizard, select “Web API 2 Controller with actions, using Entity Framework,” as shown in Figure 4. Figure 4 Adding a Web API Controller Figure 5 shows the Add Controller wizard. I named the controller MoviesController. The name matters, because the URIs for the REST API are based on the controller name. I also checked “Use async controller actions” to take advantage of the new async feature in EF 6. I selected the Movie class for the model and selected “New data context” to create a new EF data context. Figure 5 The Add Controller Wizard The wizard adds two files: - MoviesController.cs defines the Web API controller that implements the REST API for the app. - MovieSPAContext.cs is basically EF glue that provides methods to query the underlying database. Figure 6 shows the default REST API the scaffolding creates. Figure 6 The Default REST API Created by the Web API Scaffolding Values in curly brackets are placeholders. For example, to get a movie with ID equal to 5, the URI is /api/movies/5. I extended this API by adding a method that finds all the movies in a specified genre: public class MoviesController : ApiController { public IQueryable<Movie> GetMoviesByGenre(string genre) { return db.Movies.Where(m => m.Genre.Equals(genre, StringComparison.OrdinalIgnoreCase)); } // Other code not shown The client puts the genre in the query string of the URI. For example, to get all movies in the Drama genre, the client sends a GET request to /api/movies?genre=drama. Web API automatically binds the query parameter to the genre parameter in the GetMoviesByGenre method. Creating the Web Client So far, I’ve just created a REST API. If you send a GET request to /api/movies?genre=drama, the raw HTTP response looks like this: HTTP/1.1 200 OK Cache-Control: no-cache Pragma: no-cache Content-Type: application/json; charset=utf-8 Date: Tue, 10 Sep 2013 15:20:59 GMT Content-Length: 240 [{"ID":5,"Title":"Forgotten Doors","Year":2009,"Genre":"Drama","Rating":"R"}, {"ID":6,"Title":"Blue Moon June","Year":1998,"Genre":"Drama","Rating":"PG-13"},{"ID":7,"Title":"The Edge of the Sun","Year":1977,"Genre":"Drama","Rating":"PG-13"}] Now I need to write a client app that does something meaningful with this. The basic workflow is: - UI triggers an AJAX request - Update the HTML to display the response payload - Handle AJAX errors You could code all of this by hand. For example, here’s some jQuery code that creates a list of movie titles: $.getJSON(url) .done(function (data) { // On success, "data" contains a list of movies var ul = $("<ul></ul>") $.each(data, function (key, item) { // Add a list item $('<li>', { text: item.Title }).appendTo(ul); }); $('#movies').html(ul); }); This code has some problems. It mixes application logic with presentation logic, and it’s tightly bound to your HTML. Also, it’s tedious to write. Instead of focusing on your app, you spend your time writing event handlers and code to manipulate the DOM. The solution is to build on top of a JavaScript framework. Luckily, you can choose from many open source JavaScript frameworks. Some of the more popular ones include Backbone, Angular, Ember, Knockout, Dojo and JavaScriptMVC. Most use some variation of the MVC or MVVM patterns, so it might be helpful to review those patterns. The MVC and MVVM Patterns The MVC pattern dates back to the 1980s and early graphical UIs. The goal of MVC is to factor the code into three separate responsibilities, shown in Figure 7. Here’s what they do: - The model represents the domain data and business logic. - The view displays the model. - The controller receives user input and updates the model. Figure 7 The MVC Pattern A more recent variant of MVC is the MVVM pattern (see Figure 8). In MVVM: - The model still represents the domain data. - The view model is an abstract representation of the view. - The view displays the view model and sends user input to the view model. Figure 8 The MVVM Pattern In a JavaScript MVVM framework, the view is markup and the view model is code. MVC has many variants, and the literature on MVC is often confusing and contradictory. Perhaps that’s not surprising for a design pattern that started with Smalltalk-76 and is still being used in modern Web apps. So even though it’s good to know the theory, the main thing is to understand the particular MVC framework you’re using. Building the Web Client with Knockout.js For the first version of my app, I used the Knockout.js library. Knockout follows the MVVM pattern, using data binding to connect the view with the view model. To create data bindings, you add a special data-binding attribute to the HTML elements. For example, the following markup binds the span element to a property named genre on the view model. Whenever the value of genre changes, Knockout automatically updates the HTML: <h1><span data-</span></h1> Bindings can also work in the other direction—for example, if the user enters text into a text box, Knockout updates the corresponding property in the view model. The nice part is that data binding is declarative. You don’t have to wire up the view model to the HTML page elements. Just add the data-binding attribute and Knockout does the rest. I started by creating an HTML page with the basic layout, with no data binding, as shown in Figure 9. (Note: I used the Bootstrap library to style the app, so the real app has a lot of extra <div> elements and CSS classes to control the formatting. I left these out of the code examples for clarity.) Figure 9 Initial HTML Layout <!DOCTYPE html> <html> <head> <title>Movies SPA</title> </head> <body> <ul> <li><a href="#"><!-- Genre --></a></li> </ul> <table> <thead> <tr><th>Title</th><th>Year</th><th>Rating</th> </tr> </thead> <tbody> <tr> <td><!-- Title --></td> <td><!-- Year --></td> <td><!-- Rating --></td></tr> </tbody> </table> <p><!-- Error message --></p> <p>No records found.</p> </body> </html> Creating the View Model Observables are the core of the Knockout data-binding system. An observable is an object that stores a value and can notify subscribers when the value changes. The following code converts the JSON representation of a movie into the equivalent object with observables: function movie(data) { var self = this; data = data || {}; // Data from model self.ID = data.ID; self.Title = ko.observable(data.Title); self.Year = ko.observable(data.Year); self.Rating = ko.observable(data.Rating); self.Genre = ko.observable(data.Genre); }; Figure 10 shows my initial implementation of the view model. This version only supports getting the list of movies. I’ll add the editing features later. The view model contains observables for the list of movies, an error string and the current genre. Figure 10 The View Model var ViewModel = function () { var self = this; // View model observables self.movies = ko.observableArray(); self.error = ko.observable(); self.genre = ko.observable(); // Genre the user is currently browsing // Available genres self.genres = ['Action', 'Drama', 'Fantasy', 'Horror', 'Romantic Comedy']; // Adds a JSON array of movies to the view model function addMovies(data) { var mapped = ko.utils.arrayMap(data, function (item) { return new movie(item); }); self.movies(mapped); } // Callback for error responses from the server function onError(error) { self.error('Error: ' + error.status + ' ' + error.statusText); } // Fetches a list of movies by genre and updates the view model self.getByGenre = function (genre) { self.error(''); // Clear the error self.genre(genre); app.service.byGenre(genre).then(addMovies, onError); }; // Initialize the app by getting the first genre self.getByGenre(self.genres[0]); } // Create the view model instance and pass it to Knockout ko.applyBindings(new ViewModel()); Notice that movies is an observableArray. As the name implies, an observableArray acts as an array that notifies subscribers when the array contents change. The getByGenre function makes an AJAX request to the server for the list of movies and then populates the self.movies array with the results. When you consume a REST API, one of the trickiest parts is handling the asynchronous nature of HTTP. The jQuery ajax function returns an object that implements the Promises API. You can use a Promise object’s then method to set a callback that’s invoked when the AJAX call completes successfully and another callback that’s invoked if the AJAX call fails: app.service.byGenre(genre).then(addMovies, onError); Data Bindings Now that I have a view model, I can data bind the HTML to it. For the list of genres that appears in the left side of the screen, I used the following data bindings: <ul data- <li><a href="#"><span data-</span></a></li> </ul> The data-bind attribute contains one or more binding declarations, where each binding has the form “binding: expression.” In this example, the foreach binding tells Knockout to loop through the contents of the genres array in the view model. For each item in the array, Knockout creates a new <li> element. The text binding in the <span> sets the span text equal to the value of the array item, which in this case is the name of the genre. Right now, clicking on the genre names doesn’t do anything, so I added a click binding to handle click events: <li><a href="#" data- <span data-</span></a></li> This binds the click event to the getByGenre function on the view model. I needed to use $parent here, because this binding occurs within the context of the foreach. By default, bindings within a foreach refer to the current item in the loop. To display the list of movies, I added bindings to the table, as shown in Figure 11. Figure 11 Adding Bindings to the Table to Display a List of Movies <table data- <thead> <tr><th>Title</th><th>Year</th><th>Rating</th><th></th></tr> </thead> <tbody data- <tr> <td><span data-</span></td> <td><span data-</span></td> <td><span data-</span></td> <td><!-- Edit button will go here --></td> </tr> </tbody> </table> In Figure 11, the foreach binding loops over an array of movie objects. Within the foreach, the text bindings refer to properties on the current object. The visible binding on the <table> element controls whether the table is rendered. This will hide the table if the movies array is empty. Finally, here are the bindings for the error message and the “No records found” message (notice that you can put complex expressions into a binding): <p data-</p> <p data-No records found.</p> Making the Records Editable The last part of this app is giving the user the ability to edit the records in the table. This involves several bits of functionality: - Toggling between viewing mode (plain text) and editing mode (input controls). - Submitting updates to the server. - Letting the user cancel an edit and revert to the original data. To track the viewing/editing mode, I added a Boolean flag to the movie object, as an observable: function movie(data) { // Other properties not shown self.editing = ko.observable(false); }; I wanted the table of movies to display text when the editing property is false, but switch to input controls when editing is true. To accomplish this, I used the Knockout if and ifnot bindings, as shown in Figure 12. The “<!-- ko -->” syntax lets you include if and ifnot bindings without putting them inside an HTML container element. Figure 12 Enabling Editing of Movie Records <tr> <!-- ko if: editing --> <td><input data-</td> <td><input type="number" class="input-small" data-</td> <td><select class="input-small" data-</select></td> <td> <button class="btn" data-Save</button> <button class="btn" data-Cancel</button> </td> <!-- /ko --> <!-- ko ifnot: editing --> <td><span data-</span></td> <td><span data-</span></td> <td><span data-</span></td> <td><button class="btn" data-Edit</button></td> <!-- /ko --> </tr> The value binding sets the value of an input control. This is a two-way binding, so when the user types something in the text field or changes the dropdown selection, the change automatically propagates to the view model. I bound the button click handlers to functions named save, cancel and edit on the view model. The edit function is easy. Just set the editing flag to true: self.edit = function (item) { item.editing(true); }; Save and cancel were a bit trickier. In order to support cancel, I needed a way to cache the original value during editing. Fortunately, Knockout makes it easy to extend the behavior of observables. The code in Figure 13 adds a store function to the observable class. Calling the store function on an observable gives the observable two new functions: revert and commit. Figure 13 Extending ko.observable with Revert and Commit Now I can call the store function to add this functionality to the model: function movie(data) { // ... // New code: self.Title = ko.observable(data.Title).store(); self.Year = ko.observable(data.Year).store(); self.Rating = ko.observable(data.Rating).store(); self.Genre = ko.observable(data.Genre).store(); }; Figure 14 shows the save and cancel functions on the view model. Figure 14 Adding Save and Cancel Functions self.cancel = function (item) { revertChanges(item); item.editing(false); }; self.save = function (item) { app.service.update(item).then( function () { commitChanges(item); }, function (error) { onError(error); revertChanges(item); }).always(function () { item.editing(false); }); } function commitChanges(item) { for (var prop in item) { if (item.hasOwnProperty(prop) && item[prop].commit) { item[prop].commit(); } } } function revertChanges(item) { for (var prop in item) { if (item.hasOwnProperty(prop) && item[prop].revert) { item[prop].revert(); } } } Building the Web Client with Ember For comparison, I wrote another version of my app using the Ember.js library. An Ember app starts with a routing table, which defines how the user will navigate through the app: window.App = Ember.Application.create(); App.Router.map(function () { this.route('about'); this.resource('genres', function () { this.route('movies', { path: '/:genre_name' }); }); }); The first line of code creates an Ember application. The call to Router.map creates three routes. Each route corresponds to a URI or URI pattern: /#/about /#/genres /#/genres/genre_name For every route, you create an HTML template using the Handlebars template library. Ember has a top-level template for the entire app. This template gets rendered for every route. Figure 15 shows the application template for my app. As you can see, the template is basically HTML, placed within a script tag with type=“text/x-handlebars.” The template contains special Handlebars markup inside double curly braces: {{ }}. This markup serves a similar purpose as the data-bind attribute in Knockout. For example, {{#linkTo}} creates a link to a route. Figure 15 The Application-Level Handlebars Template ko.observable.fn.store = function () { var self = this; var oldValue = self(); var observable = ko.computed({ read: function () { return self(); }, write: function (value) { oldValue = self(); self(value); } }); this.revert = function () { self(oldValue); } this.commit = function () { oldValue = self(); } return this; } <script type="text/x-handlebars" data- <div class="container"> <div class="page-header"> <h1>Movies</h1> </div> <div class="well"> <div class="navbar navbar-static-top"> <div class="navbar-inner"> <ul class="nav nav-tabs"> <li>{{#linkTo 'genres'}}Genres{{/linkTo}} </li> <li>{{#linkTo 'about'}}About{{/linkTo}} </li> </ul> </div> </div> </div> <div class="container"> <div class="row">{{outlet}}</div> </div> </div> <div class="container"><p>©2013 Mike Wasson</p></div> </script> Now suppose the user navigates to /#/about. This invokes the “about” route. Ember first renders the top-level application template. Then it renders the about template inside the {{outlet}} of the application template. Here’s the about template: <script type="text/x-handlebars" data- <h2>Movies App</h2> <h3>About this app...</h3> </script> Figure 16 shows how the about template is rendered within the application template. Figure 16 Rendering the About Template Because each route has its own URI, the browser history is preserved. The user can navigate with the Back button. The user can also refresh the page without losing the context, or bookmark and reload the same page. Ember Controllers and Models In Ember, each route has a model and a controller. The model contains the domain data. The controller acts as a proxy for the model and stores any application state data for the view. (This doesn’t exactly match the classic definition of MVC. In some ways, the controller is more like a view model.) Here’s how I defined the movie model: App.Movie = DS.Model.extend({ Title: DS.attr(), Genre: DS.attr(), Year: DS.attr(), Rating: DS.attr(), }); The controller derives from Ember.ObjectController, as shown in Figure 17. Figure 17 The Movie Controller Derives from Ember.ObjectController App.MovieController = Ember.ObjectController.extend({ isEditing: false, actions: { edit: function () { this.set('isEditing', true); }, save: function () { this.content.save(); this.set('isEditing', false); }, cancel: function () { this.set('isEditing', false); this.content.rollback(); } } }); There are some interesting things going on here. First, I didn’t specify the model in the controller class. By default, the route automatically sets the model on the controller. Second, the save and cancel functions use the transaction features built into the DS.Model class. To revert edits, just call the rollback function on the model. Ember uses a lot of naming conventions to connect different components. The genres route talks to the GenresController, which renders the genres template. In fact, Ember will automatically create a GenresController object if you don’t define one. However, you can override the defaults. In my app, I configured the genres/movies route to use a different controller by implementing the renderTemplate hook. This way, several routes can share the same controller (see Figure 18). Figure 18 Several Routes Can Share the Same Controller App.GenresMoviesRoute = Ember.Route.extend({ serialize: function (model) { return { genre_name: model.get('name') }; }, renderTemplate: function () { this.render({ controller: 'movies' }); }, afterModel: function (genre) { var controller = this.controllerFor('movies'); var store = controller.store; return store.findQuery('movie', { genre: genre.get('name') }) .then(function (data) { controller.set('model', data); }); } }); One nice thing about Ember is you can do things with very little code. My sample app is about 110 lines of JavaScript. That’s shorter than the Knockout version, and I get browser history for free. On the other hand, Ember is also a highly “opinionated” framework. If you don’t write your code the “Ember way,” you’re likely to hit some roadblocks. When choosing a framework, you should consider whether the feature set and the overall design of the framework match your needs and coding style. Learn More In this article, I showed how JavaScript frameworks make it easier to create SPAs. Along the way, I introduced some common features of these libraries, including data binding, routing, and the MVC and MVVM patterns. You can learn more about building SPAs with ASP.NET at asp.net/single-page-application. Mike Wasson is a programmer-writer at Microsoft. For many years he documented the Win32 multimedia APIs. He currently writes about ASP.NET, focusing on Web API. You can reach him at mwasson@microsoft.com. Thanks to the following technical expert for reviewing this article: Xinyang Qiu (Microsoft) Xinyang Qiu is a senior Software Design Engineer in Test on the Microsoft ASP.NET team and an active blogger for blogs.msdn.com/b/webdev. He’s happy to answer ASP.NET questions or direct experts to answer your questions. Reach him at xinqiu@microsoft.com.
https://docs.microsoft.com/en-us/archive/msdn-magazine/2013/november/asp-net-single-page-applications-build-modern-responsive-web-apps-with-asp-net
CC-MAIN-2019-47
refinedweb
4,223
58.18
Your browser does not seem to support JavaScript. As a result, your viewing experience will be diminished, and you have been placed in read-only mode. Please download a browser that supports JavaScript, or enable it if it's disabled (i.e. NoScript). Hello again; when I wish to use the global status bar in Python (c4d.StatusSetBar() etc) I need to relinquish control to C4D so the GUI can be updated and the status bar is actually redrawn. If I'm in a Python script and just keep setting the status bar, it's only updated after the script has ended: c4d.StatusSetBar() for p in xrange(101): c4d.StatusSetBar(p) time.sleep(0.05) So, this code obviously doesn't work. Adding EventAdd() doesn't work either, as this will only add a redraw to the message queue, which is not evaluated until the script ends. Using c4d.GeSyncMessage(c4d.EVMSG_CHANGE) seems promising but it always returns False. DrawViews only redraws the viewports. EventAdd() c4d.GeSyncMessage(c4d.EVMSG_CHANGE) False DrawViews Is it really impossible to relinquish control from a Python script to enable C4D to evaluate the message queue once, and then return to the script? Or enforce the update of the status bar in some other way? It's definitely not an insurmountable technical issue, because if I call MessageDialog in between... MessageDialog for p in xrange(101): c4d.StatusSetBar(p) if p in [5,20,30,40,50,75,100]: gui.MessageDialog("xxx") time.sleep(0.05) ...not only the dialog appears, but also the status bar is updated. And the really strange effect: The status bar seems to update distinctly several times between message dialogs at some points. Try [0,100] as timing for the dialogs to appear... what's causing these intermediate updates? Why doesn't that work without the dialog calls? [0,100] (We do not need to discuss calling the status bar updates in asynchronous worker threads or through the message system; I just want to know whether there is a way to call it from a script, and to solve the dialog riddle...) I am also having trouble to get the Status bar to update while a script is running. My old scripts using Callcomand nevertheless update the Status bar fine ...?!? nevermind got it working again ... mistake on my end. (passed a small flot to the statusbar instead of 0-100. kind regards mogh Hi, I am not quite sure what you would consider "obviously not working", but the following Script Manager snippet will update the statusbar 100 times in R21. import c4d import time class MyDialog(c4d.gui.GeDialog): """ """ pass def main(): """ """ # If you have something bocking in the way, like a modal dialog for # example, this will not work, due to the code not being reached until # the dialog closes in this case. dialog = MyDialog() # dialog.Open(c4d.DLG_TYPE_MODAL) # An async dialog is however fine, but the loop running on the main # thread will block that dialog until it has finished. dialog.Open(c4d.DLG_TYPE_ASYNC) for i in range(101): c4d.StatusSetText(str(i)) c4d.StatusSetBar(i) time.sleep(.05) print "end" if __name__ == "__main__": main() Cheers, zipit @zipit: Actually I wouldn't consider your code a solution to the problem. This is something, though I'm aware it's done quite often (and at least for quick tests I used such code myself for sure), that's a bit dangerous and most likely not supported by Maxon. The context owning the dialog, which is used to defer the status bar updates, does no longer exist after the execution of the script. It's basically a a zombie dialog performing the status bar updates. In my view, such code shouldn't be recommended, nor be used in anything that's released to the public. Edit: While it's no good practice to leave a dialog open after script execution ended, my comment was wrong and stupid. you mean that I didn't close the dialog before script ended? Yeah, that is probably not the most cleanest code, but I think @Cairyn can handle it. The point was only the demonstrate the blocking behaviour of different dialog types. If you mean something else, something status bar related, I am all ears, because I do not really understand how the dialog should influence the status bar as MyDialog literally does nothing. MyDialog Currently I do not see anything inherently bad with that snippet, except from the fact that putting such giant blocking loop into main thread is a bad idea in the first place. @zipit Thanks, but this doesn't work in my setup. What happens (on my system at least) is that an empty dialog appears. The status bar still doesn't update before the Python script ends. I found that it does work if I dock the status bar in the same window as the script manager (I do not need the separate dialog in that case), so obviously it depends on what internal message loop is still evaluated and which one is blocked. Normally my status bar resides in a separate window on the second screen (together with material manager, Takes manager and other stuff) which is classed as "Independent Window" (the latter doesn't matter for the message loop apparently, if the window is not independent, it doesn't work either). There is no documentation on how the windows' various message loops interplay with the Python interpreter (beyond the obvious modal dialogs), but what I experimentally find, baffles me a little. @zipit You are right. Forget my comment. I shouldn't have posted before my first coffee. Indeed I was triggered by the dialog not being closed, but then brain activity obviously went back to power save... so it seems to be more a multi monitor setup problem. Although I would not rule it out categorically and also have not tested it, I would doubt that Python is here the culprit and would more suspect this to be a general design flaw of Cinema. You could try to invoke between the status bar calls c4d.gui.GeUpdateUI(), but I would not hold my breath. c4d.gui.GeUpdateUI() @zipit said in How to enforce StatusBar redraws: c4d.gui.GeUpdateUI() Nope, that doesn't work reliably either. It's definitely a question of how the message loops are forwarding messages to each other, and how the status bar is updated internally, and how the windows are parented to each other. Without the C4D source code, it's probably impossible to come to a definitive answer. Here's what I checked: As long as Status Bar and Script Manager share a MS Windows window, the update message is apparently propagated and the Bar progresses. This works even if I use a keyboard shortcut to start the script while the main window/viewport is the active window. I gather that while the Python interpreter is running, the Script Manager still has some update loop running (not a normal message loop, as you can't move the window edit any code there) that is taking care of the status bar updates when in the same window (hierarchy?). I also suppose that the status bar gets some exceptional updates, as the Console never updates (I tried to insert some print commands into the timer loop) before the script ends, regardless of the window hierarchy. Okay, that was enlightening, thanks for the comments. I suppose Maxon needs to expand the status bar updates across the whole window hierarchy, so the bar would work everywhere, but I also have a feeling that they won't bother with such a minor issue hi, just to let you know i've asked the dev about that one Cheers, Manuel Thanks. In the meantime I made some more experiments, and it seems the problem is limited to when you click Execute in the Script Manager or the Customize Command window. I created a keyboard shortcut and embedded the script as icon in the GUI. In both cases, the status bar gets correctly updated regardless of what window was active / received the key press, and where the icon resides (same window, other window, undocked toolbar). It seems the Python interpreter is not blocking the message loops in these cases. That makes the question somewhat academic, because shortcut or icon (or menu but I did not check that) would be the common ways to start the script; starting it from the Script Manager is common during development but irrelevant in practice. I'm curious to hear what the developers will say about the message loops though.
https://plugincafe.maxon.net/topic/12726/how-to-enforce-statusbar-redraws
CC-MAIN-2022-05
refinedweb
1,436
61.46
Manycore and the Microsoft .NET Framework 4: A Match Made in Microsoft Visual Studio 2010 Download this episode Description The Microsoft .NET Framework 4 and Visual Studio 2010 include new technologies for expressing, debugging, and tuning parallelism in managed applications. Dive into key areas of support, including the new System.Threading.Tasks and System.Collections.Concurrent namespaces, cutting-edge concurrency views in the Visual Studio profiler, and debugger tool windows for analyzing the state of concurrent code. Download:Slides View Slides Online Code:FT03 Format Available formats for this video: Actual format may change based on video formats available and browser capability. Event Homepage More episodes in this series Petabytes for Peanuts! Making Sense out of “Ambient” Data. Related episodes Async'ing Your Way to a Successful App with .NET Day 2 Keynote Recap with Stephen Toub Building parallelized apps with .NET and Visual Studio The zen of async: Best practices for best performance On Beyond PLINQ: Scaling out your Applications to Cloud and Clusters with LINQ The Future of Parallel Programming in Visual Studio (Repeats on Friday) The Manycore Shift: Making Parallel Computing Mainstream Patterns of Parallel Programming: A Tutorial on Fundamental Patterns and Practices for… The Discussion Comments have been closed since this content was published more than 30 days ago, but if you'd like to continue the conversation, please create a new thread in our Forums, or Contact Us and let us know.
https://channel9.msdn.com/Events/PDC/PDC09/P09-09
CC-MAIN-2017-09
refinedweb
237
52.19
The extension module discussion on python-dev got me thinking about the different ways in which the "singleton" assumption for modules can be broken, and how to ensure that extension modules play nicely in that environment. As I see it, there are 4 ways the "singleton that survives for the lifetime of the process following initial import" assumption regarding modules can turn out to be wrong: 1. In-place reload, overwriting the existing contents of a namespace. imp.reload() does this. We sort of do it for __main__, except we usually keep re-using that namespace to run *different* things, rather than rerunning the same code. 2. Parallel loading. We remove the existing module from sys.modules (keeping a reference to it alive), and load a second copy. Alternatively, we call the loader APIs directly. Either way, we end up with two independent copies of the "same" module, potentially reflecting difference system states at the time of execution. 3. Subinterpreter support. Quite similar to parallel loading, but we're loading the second copy because we're in a subinterpreter and can't see the original. 4. Unloading. We remove the existing module from sys.modules and drop all other references to it. The module gets destroyed, and we later import a completely fresh copy. Even pure Python modules may not support these, since they may have side effects, or assume they're in the main interpreter, or other things. Currently, there is no way to signal this to the import system, so we're left with implicit misbehaviour when we attempt to reload the modules with global side effects. For a while, I was thinking we could design the import system to "just figure it out", but now I'm thinking a selection of read/write properties on spec objects may make more sense: allow_reload allow_unload allow_reimport allow_subinterpreter_import These would all default to True, but loaders and modules could selectively turn them off. They would also be advisory rather than enforced via all possible import state manipulation mechanisms. New functions in importlib.util could provide easier alternatives to directly manipulating sys.modules: - importlib.util.reload (replacement for imp.reload that checks the spec allows reloading) - importlib.util.unload (replacement for "del sys.modules[module.__name__]" that checks the spec allows unloading, and also unloads all child modules) - importlib.util.reimport (replacement for test.support.import_fresh_module that checks the spec of any existing sys.module entry allows reimporting a parallel copy) One of these is not like the others... aside from the existing extension module specific mechanism defined in PEP 3121, I'm not sure we can devise a general *loader* level API to force imports for a particular name to fail in a subinterpreter. So this concern probably needs to be ignored in favour of a possible future C API level solution. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
https://mail.python.org/pipermail/import-sig/2013-September/000735.html
CC-MAIN-2019-09
refinedweb
482
56.15
App::ZofCMS::Plugin::Base - base class for App::ZofCMS plugins package App::ZofCMS::Plugin::Example; use strict; use warnings; use base 'App::ZofCMS::Plugin::Base'; sub _key { 'plug_example' } sub _defaults { return ( qw/foo bar baz beer/ ); } sub _do { my ( $self, $conf, $t, $q, $config ) = @_; $self->_dbh->do('DELETE FROM `foo`') if _has_value( $q->{foo} ); } first-level key in either Main Config File or ZofCMS Template. That key's value must be a hashref or a subref that returns a hashref, empty list or undef. _key sub _key { 'plug_example' } The _key needs to return a scalar containing the name of first level key in ZofCMS template or Main Config file. Study the source code of this module to find out what it's used for if it's still unclear. The value of that key can be either a hashref or a subref that returns a hashref or undef. If the value is a subref, its return value will be assigned to the key and its @_ will contain (in that order): $t, $q, $conf where $t is ZofCMS Template hashref, $q is hashref of query parameters and $conf is App::ZofCMS::Config object. _defaults sub _defaults { qw/foo bar baz beer/ } The _defaults sub needs to return a list of default arguments in a form of key/value pairs. By default it returns an empty list. _do sub _do { my ( $self, $conf, $template, $query, $config ) = @_; } The _do sub is where you'd do all of your processing. The @_ will contain $self, $conf, $template, $query and $config (in that order) where $self is your plugin's object, $conf is the plugin's configuration hashref (what the user would specify in ZofCMS Template or Main Config File, the key of which is returned by _key() sub), the $template is the hashref of ZofCMS template that is being processed, the $query is a query parameters hashref where keys are names of the params and values are their values. Finally, the $config is App::ZofCMS::Config object. The module provides these utility subs that are meant to give you a hand during coding: _has_value sub _has_value { my $v = shift; return 1 if defined $v and length $v; return 0; } This sub is shown above and is meant to provide a shorter way to test whether a given variable has any meaningful content. _dbh sub _dbh { my $self = shift; return $self->{DBH} if $self->{DBH}; $self->{DBH} = DBI->connect_cached( @{ $self->{CONF} }{ qw/dsn user pass opt/ }, ); return $self->{DBH}; } This sub (shown above) has marginally narrower spectrum of usability as opposed to the rest of this module; nevertheless, I found needing it way too often. The sub is an accessor to a connected DBI's database handle that autoconnects if it hasn't already. Note that the sub expects dns, user, pass and opt arguments located in $self->{CONF} hashref. For descriptionof these arguments, see DBI's connect_cached() method. Feel free to email me the requests for extra functionality for this base class. Below is a "template" documentation. If you're going to use it, make sure to read through the entire thing as some things may not apply to your plugin; I've added those bits as they are very common in the plugins that I write, some of them (but not all) I marked with word [EDIT]. =head1 DESCRIPTION The module is a plugin for L<App::ZofCMS> that provides means to [EDIT]. This documentation assumes you've read L<App::ZofCMS>, L<App::ZofCMS::Config> and L<App::ZofCMS::Template> =head1 FIRST-LEVEL ZofCMS TEMPLATE AND MAIN CONFIG FILE KEYS =head2 C<plugins> plugins => [ qw/[EDIT]/ ], B<Mandatory>. You need to include the plugin in the list of plugins to execute. =head2 C<[EDIT]> [EDIT] => { }, # or [EDIT] => sub { my ( $t, $q, $config ) = @_; return $hashref_to_assign_to_this_key_instead_of_subref; }, B<Mandatory>. Takes either a hashref or a subref as a value. If subref is specified, its return value will be assigned to C<[EDIT]> as if it were already there. If sub returns an C<undef> or an empty list, then plugin will stop further processing. The C<@_> of the subref will contain C<$t>, C<$q>, and C<$config> (in that order), where C<$t> is ZofCMS Template hashref, C<$q> is query parameter hashref, and C<$config> is L<App::ZofCMS::Config> object. Possible keys/values for the hashref are as follows: =head3 C<cell> [EDIT] => { cell => 't', }, B<Optional>. Specifies ZofCMS Template first-level key where to [EDIT]. Must be pointing to either a hashref or an C<undef> (see C<key> below). B<Defaults to:> C<t> =head3 C<key> [EDIT] => { key => '[EDIT]', }, B<Optional>. Specifies ZofCMS Template second-level key where to [EDIT]. This key will be inside C<cell> (see above)>. B<Defaults to:> C<[EDIT]> The following is the documentation I use for the DBI configuration part of arguments that are used by DBI-using modules: =head3 C<dsn> [EDIT] => { dsn => "DBI:mysql:database=test;host=localhost", ... B<Mandatory>. The C<dsn> key will be passed to L<DBI>'s C<connect_cached()> method, see documentation for L<DBI> and C<DBD::your_database> for the correct syntax for this one. The example above uses MySQL database called C<test> that is located on C<localhost>. =head3 C<user> [EDIT] => { user => '', ... B<Optional>. Specifies the user name (login) for the database. This can be an empty string if, for example, you are connecting using SQLite driver. B<Defaults to:> C<''> (empty string) =head3 C<pass> [EDIT] => { pass => undef, ... B<Optional>. Same as C<user> except specifies the password for the database. B<Defaults to:> C<undef> (no password) =head3 C<opt> [EDIT] => { opt => { RaiseError => 1, AutoCommit => 1 }, ... B<Optional>. Will be passed directly to L<DBI>'s C<connect_cached()> method as "options". B<Defaults to:> C<< { RaiseError => 1, AutoCommit => 1 } >> 'Zoffix, <'zoffix at cpan.org'> (,,) Please report any bugs or feature requests to bug-app-zofcms-plugin-base::Base You can also look for information at: This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
http://search.cpan.org/dist/App-ZofCMS-Plugin-Base/lib/App/ZofCMS/Plugin/Base.pm
crawl-003
refinedweb
1,019
59.74
{- | Module : XMonad.Actions.WindowGo License : Public domain Maintainer : <gwern0@gmail.com> Stability : unstable Portability : unportable Defines a few convenient operations for raising (traveling to) windows based on XMonad's Query monad, such as 'runOrRaise'. runOrRaise will run a shell command unless it can find a specified window; you would use this to automatically travel to your Firefox or Emacs session, or start a new one (for example), instead of trying to remember where you left it or whether you still have one running. -} module XMonad.Actions.WindowGo ( -- * Usage -- $usage raise, raiseNext, runOrRaise, runOrRaiseNext, raiseMaybe, raiseNextMaybe, raiseBrowser, raiseEditor, runOrRaiseAndDo, runOrRaiseMaster, raiseAndDo, raiseMaster, module XMonad.ManageHook ) where import Control.Monad (filterM) import Data.Char (toLower) import XMonad (Query(), X(), withWindowSet, spawn, runQuery, liftIO) import Graphics.X11 (Window) import XMonad.ManageHook import XMonad.Operations (windows) import XMonad.Prompt.Shell (getBrowser, getEditor) import qualified XMonad.StackSet as W (allWindows, peek, swapMaster, focusWindow) {- $usage Import". -} -- | 'action' is an executable to be run via 'spawn' (of "XMonad.Core") if the Window cannot be found. -- Presumably this executable is the same one that you were looking for. runOrRaise :: String -> Query Bool -> X () runOrRaise = raiseMaybe . spawn -- | See 'raiseMaybe'. If the Window can't be found, quietly give up and do nothing. raise :: Query Bool -> X () raise = raiseMaybe $ return () {- | ")) -} raiseMaybe :: X () -> Query Bool -> X () raiseMaybe f thatUserQuery = withWindowSet $ \s -> do maybeResult <- filterM (runQuery thatUserQuery) (W.allWindows s) case maybeResult of [] -> f (x:_) -> windows $ W.focusWindow x -- | See 'runOrRaise' and 'raiseNextMaybe'. Version that allows cycling through matches. runOrRaiseNext :: String -> Query Bool -> X () runOrRaiseNext = raiseNextMaybe . spawn -- | thatUserQuery = withWindowSet $ \s -> do ws <- filterM (runQuery thatUserQuery) (W.allWindows s) case ws of [] -> f (x:_) -> let go (Just w) | (w `elem` ws) = next w $ cycle ws go _ = windows $ W.focusWindow x in go $ W.peek s where next w (x:y:_) | x==w = windows $ W.focusWindow y next w (_:xs) = next w xs next _ _ = error "raiseNextMaybe: empty list" -- | raisef thatUserQuery afterRaise = withWindowSet $ \s -> do maybeResult <- filterM (runQuery thatUserQuery) (W.allWindows s) case maybeResult of [] -> raisef (x:_) -> do windows $ W.focusWindow x afterRaise x {- | if the window is found the window is focused and the third argument is called otherwise, raisef is called -} runOrRaiseAndDo :: String -> Query Bool -> (Window -> X ()) -> X () runOrRaiseAndDo = raiseAndDo . spawn {- |)
http://hackage.haskell.org/package/xmonad-contrib-bluetilebranch-0.8.1.3/docs/src/XMonad-Actions-WindowGo.html
CC-MAIN-2015-32
refinedweb
376
51.44
Automated Airsoft Target (Ghetto Style) Introduction: Automated Airsoft Target (Ghetto Style) With winter break off college comes boredom, and with boredom comes fun with PVC and arduino! In about two hours, we hacked together a basic servo-controlled automatic airsoft target. The target moves, allowing us to improve our skills before they're tested in the field. Step 1: Acquire Parts For this project, we used: 1) Arduino Uno 2) 2X Hi-Tec HS-311 hobby servos (180deg rotation - link) 3) roughly 10' of 1/2" CPVC pipe (regular PVC also works... we just had extra) 4) Scrap plastic for the levers (we used bits from a NERF tripod) 5) Dental floss (fishing line would've been better) 6) 12V, 6A power supply for arduino & servo control (connects directly to the arduino jack - link) 7) Two 1/2" CPVC right angle connectors 8) Two 1/2" CPVC T connectors 9) Spare cardboard for the target 10) A nutri-grain bar (LOL) and duct tape to weight down the cardboard. Can use pretty much anything here. The pipe and pipe connectors we found at Lowes. It's also helpful to have a pair of pipe cutters handy to ensure clean cuts on the CPVC. A drill is also required for the lift holes Step 2: Assemble! As you can see in the pictures, just cut three lengths and join them together in an upside-down U formation. The bottoms are stabilized using the T connectors and roughly 6" of extra pipe inserted into each side. To mount the servos, we cut away a bit of the PVC that the servos fit snugly into, then applied liberal amounts of hot glue and finally wire-wrapped the servo on. For the plastic arms it's up to you - we cut a little bit in so the servo handle would grab the plastic, then repeated the hot glue and wiring that we did on the servos. It helps to pop the handle off the servo for this. After this, we drilled holes in the top horizontal bar for the line/floss to go through. Nothing special here, just drill a horizontal hole on each side and ensure the line move smoothly through. The cardboard piece (and ever-essential Nutri-Grain bar) are then tied on with a simple double knot on the top corners. Servo wiring is standard - see the Arduino site for information on wiring and considerations. We put the signal wires into pins 10 and 11. Step 3: Code!! // Sweep // Some code borrowed from the Arduino Servo example. // Upload this to your arduino to run the automated turret! #include <Servo.h> // servos Servo s1; Servo s2; // servo positions & movement goals (#2 is reversed) long p1 = 0; long p2 = 180; long p1_goal = 0; long p2_goal = 180; void setup() { s1.attach(10); s2.attach(11); // attaches the servo on pin 9 to the servo object Serial.begin(9600); randomSeed(analogRead(0)); } void loop() { if (abs(p1 - p1_goal) > 1 || abs(p2 - p2_goal) > 1) { // move if (p1 < p1_goal) { p1 += 1; } if (p1 > p1_goal) { p1 -= 1; } if (p2 < p2_goal) { p2 += 1; } if (p2 > p2_goal) { p2 -= 1; } s1.write(p1); s2.write(p2); } else { // assign random goal do { p1_goal = random(180); p2_goal = random(180); } while (abs(p1_goal - p2_goal) > 80); delay (1000); } delay (5); } Step 4: Done. If all goes well, you, too now have a ghetto automated airsoft target. Happy shooting! As always, if you have any questions just throw me a comment or PM and I'd be glad to help. Yo, neat ible but you lost me with “Ghetto Style”; there are so many it is difficult to adjust my thinking to any one in particular.
http://www.instructables.com/id/Automated-Airsoft-Target-Ghetto-Style/
CC-MAIN-2017-39
refinedweb
610
68.3
>> display stacked bar chart using matplotlib in Python? Mat oriented Stacked bar plots show the data points of two values in a single rectangular box. Let us understand how Matplotlib can be used to create a stacked plot − Example import matplotlib.pyplot as plt labels = ['A1', 'A2', 'A3', 'A4'] val_1 = [34, 56, 78, 91] val_2 = [20, 56, 32, 89] val_3 = [1, 3, 5, 3] val_4 = [3, 5, 3, 4] width = 0.40 fig, ax = plt.subplots() ax.bar(labels, val_1, width, yerr=val_3, label='Label_1') ax.bar(labels, val_2, width, yerr=val_4, bottom=val_1, label='Label_2') ax.set_ylabel('Y−axis') ax.set_title('X−axis') ax.legend() plt.show() Output Explanation The required packages are imported and its alias is defined for ease of use. The labels for the stacked chart and values for the labels/bars are defined. An empty figure is created using the ‘figure’ function. The ‘subplot’ function is used to create an area to plot the graph. The data is plotted using the ‘plot’ function. The set_xlabel, set_ylabel and set_title functions are used to provide labels for ‘X’ axis, ‘Y’ axis and title. The plot is defined as a bar chart by specifying ‘bar’. It is shown on the console using the ‘show’ function. - Related Questions & Answers - Horizontal stacked bar chart in Matplotlib - How to Create a Diverging Stacked Bar Chart in Matplotlib? - How to create a stacked bar chart using JavaFX? - How to create stacked bar chart using ggvis in R? - How to create a stacked bar chart for my DataFrame using Seaborn in Matplotlib? - How to create horizontal stacked bar chart using ggvis in R? - Python Pandas - Plot a Stacked Horizontal Bar Chart - How to display percentage above a bar chart in Matplotlib? - How to create a 100% stacked Area Chart with Matplotlib? - Create stacked bar chart with percentages on Y-axis using ggplot2 in R. - How can I display text over columns in a bar chart in Matplotlib? - How to create a stacked area chart using JavaFX? - How to plot a very simple bar chart (Python, Matplotlib) using input *.txt file? - How to plot a bar chart for a list in Python matplotlib? - How to change Bar Chart values to percentages in Matplotlib?
https://www.tutorialspoint.com/how-to-display-stacked-bar-chart-using-matplotlib-in-python
CC-MAIN-2022-33
refinedweb
369
75.81
What are pointers? A pointer is a variable that holds a memory address. This address is the location of another object (variable or anything) in memory. For example, if one variable contains the address of another variable, the first variable is said to point to second. Advantages and disadvantages of pointers Advantage - Pointers provide the means by which functions can modify their calling arguments - Pointers support dynamic allocation - Pointers can improve the efficiency of certain routines Disadvantages - Pointers should be handled with care. Uninitialized pointers can cause system to crash and these bugs are difficult to find Pointer variables Syntax type *name; type is the base type of pointer and it can be any valid data type. It defines what type of variables the pointer can point to. name is the name of pointer variable. Pointer Operators The address of operator (&) The & is a unary operator that returns the memory address of its operand. For Example, m = &count; places into m the memory address of variable count. This address is the computer’s internal location of the variable. It has nothing to do with value of count. For better understanding, assume that the variable count uses memory location 2000 to store its value. Also say that the value of count is 100. Than after the above assignment m will have 2000. (Not the value of count, but the memory address) Dereference operator (*) It is the complement of above operator. It is a unary operator that returns the value located at the address that follows. Continuing the above example, if m contains the memory address of variable count than, q = *m; places the value of count into q. Thus, q will have value 100 because 100 is stores at location 2000, which is the memory address that is stores in m. Important Note When we declare a pointer to be of type int, the compiler assumes that the address it holds points an integer variable. In C++, it is illegal to convert one type of pointer into another without the use of an explicit type cast. When we don’t assign the pointer to a matching data type variable, the program compiles error free but does not produce desired results. Example, #include<iostream> using namespace std; int main() { double x = 100.1,y; int *p; p = (int *)&x; y = *p; cout<<y; return 0; } This program does not print 100.1 because some bits are lost due to incorrect pointer data type
https://boostlog.io/@sophia91/pointers-5a9e5267a6e96c008a6fc858
CC-MAIN-2019-30
refinedweb
411
63.49
The Samba-Bugzilla – Bug 3262 In Samba from SVN does not work storing dos attributes to EA. Last modified: 2005-11-21 23:04:18 UTC Samba from SVN can not store dos attributes into extattr under FreeBSD. This appeared after appling patch from bugzilla report #3218 3.0.20b without patch works OK, with patch works as described below. I make sure that "store dos attributes = yes" with testparm. ... comment = Test Share path = /shared read only = No store dos attributes = Yes ... There is a file in the share: -rwxr--r-- 1 root wheel 0 15 ноя 14:07 1.txt root@testbsd# getextattr user DOSATTRIB 1.txt 1.txt 0x22 (File is Archive & Hidden) From WinXP computer: C:\Program Files\Far>attrib V:\4\1.txt A V:\4\1.txt i.e. file attributes was gotten from permission bits (not eas). In level 10 log: [2005/11/15 14:12:12, 1] smbd/dosmode.c:get_ea_dos_attribute(200) get_ea_dos_attributes: Cannot get attribute from EA on file .: Error = Result too large Created attachment 1571 [details] Level 10 Log of executing attrib V:\4\1.txt It's strange but smbclient shows correct attributes: root@testbsd# smbclient //localhost/share -U rc20 Domain=[TESTD] OS=[Unix] Server=[Samba 3.0.21pre3-SVN-build-11729] smb: \> ls 4/1.txt 1.txt AH 0 Tue Nov 15 14:07:28 2005 33874 blocks of size 262144. 10513 blocks available Ok - this is the code that got added in that bug report (#3218). /* + * The BSD implementation has a nasty habit of silently truncating + * the returned value to the size of the buffer, so we have to check + * that the buffer is large enough to fit the returned value. + */ + retval = extattr_get_file(path, attrnamespace, attrname, NULL, 0); + if(retval > size) { + errno = ERANGE; + return -1; + } + We are calling : sizeret = SMB_VFS_GETXATTR(conn, path, SAMBA_XATTR_DOS_ATTRIB, attrstr, sizeof(attrstr)); Where attrstr is defined as an fstring (256 bytes). What I need from you is additional debug statements in your lib/system.c code in the FreeBSD specific part that prints out what retval and size in the above call. Then we need to figure out why 256 bytes isn't enough for FreeBSD to store the string "0x22" in an EA. Jeremy. (In reply to comment #3) > Ok - this is the code that got added in that bug report (#3218). > + retval = extattr_get_file(path, attrnamespace, attrname, NULL, 0); > > + if(retval > size) { > + errno = ERANGE; > + return -1; > + } > + Looking again on this code I see that it misses check against retval < 0 - which in this case can give that (retval > size). It's my fault, I was concentrated on setxattr() code and missed retval check for this part of the functions.... With regards, Timur. Created attachment 1574 [details] Level 10 log after appliing patch (In reply to comment #4) >... I've added this code. Now attrib shows correct attributes: C:\>attrib V:\4\1.txt A S V:\4\1.txt alex@testbsd$ getextattr user DOSATTRIB /shared/4/1.txt /shared/4/1.txt 0x24 -rwxr--r-- 1 root wheel 0 15 ноя 14:07 /shared/4/1.txt Level 10 log attached. This problem does not appear in SVN build 11739. Thanks! (In reply to comment #6) > This problem does not appear in SVN build 11739. > Thanks! > Hi Alex! Can you try attached patch? It's supposed to do the same staff, just a bit more sane. Jerremy, can yo uapply it later on if alex confirm it works ok for him? Created attachment 1586 [details] Additional sanity checks for FreeBSD EA emulation Ok, I'll apply it once it gets the ok. Jeremy. Created attachment 1587 [details] Level 10 log after last patch Hello Timur, Jeremy! After last patch from Timur storing dos attributes to EAs works OK. I've attached level 10 Log to illustrate this. Thanks again! Applied thanks. Jeremy.
https://bugzilla.samba.org/show_bug.cgi?id=3262
CC-MAIN-2016-40
refinedweb
640
75.2
Cursor¶ - class Cursor¶ A cursor for connection. Allows Python code to execute MySQL command in a database session. Cursors are created by the Connection.cursor()coroutine: they are bound to the connection for the entire lifetime and all the commands are executed in the context of the database session wrapped by the connection. Cursors that are created from the same connection are not isolated, i.e., any changes done to the database by a cursor are immediately visible by the other cursors. Cursors created from different connections can or can not be isolated, depending on the connections’ isolation level. import asyncio import aiomysql loop = asyncio.get_event_loop() @asyncio.coroutine def test_example(): conn = yield from aiomysql.connect(host='127.0.0.1', port=3306, user='root', password='', db='mysql', loop=loop) # create default cursor cursor = yield from conn.cursor() # execute sql query yield from cursor.execute("SELECT Host, User FROM user") # fetch all results r = yield from cursor.fetchall() # detach cursor from connection yield from cursor.close() # close connection conn.close() loop.run_until_complete(test_example()) Use Connection.cursor()for getting cursor for connection. connection¶ This read-only attribute return a reference to the Connectionobject on which the cursor was created description¶ This read-only attribute is a sequence of 7-item sequences. Each of these sequences is a collections.namedtuple containing information describing one result column: - name: the name of the column returned. - type_code: the type of the column. - display_size: the actual length of the column in bytes. - internal_size: the size in bytes of the column associated to this column on the server. - precision: total number of significant digits in columns of type NUMERIC. None for other types. - scale: count of decimal digits in the fractional part in columns of type NUMERIC. None for other types. - null_ok: always None. This attribute will be None for operations that do not return rows or if the cursor has not had an operation invoked via the Cursor.execute()method yet. rowcount¶ Returns the number of rows that has been produced of affected. This read-only attribute specifies the number of rows that the last Cursor.execute()produced (for Data Query Language statements like SELECT) or affected (for Data Manipulation Language statements like UPDATEor INSERT). The attribute is -1 in case no Cursor.execute()has been performed on the cursor or the row count of the last operation if it can’t be determined by the interface. rownumber¶ Row index. This read-only attribute provides the current 0-based index of the cursor in the result set or Noneif the index cannot be determined. arraysize¶ How many rows will be returned by Cursor.fetchmany()call. This read/write attribute specifies the number of rows to fetch at a time with Cursor.fetchmany(). It defaults to 1 meaning to fetch a single row at a time. lastrowid¶ This read-only property returns the value generated for an AUTO_INCREMENT column by the previous INSERT or UPDATE statement or None when there is no such value available. For example, if you perform an INSERT into a table that contains an AUTO_INCREMENT column, Cursor.lastrowidreturns the AUTO_INCREMENT value for the new row. Coroutine to close the cursor now (rather than whenever delis executed). The cursor will be unusable from this point forward; closing a cursor just exhausts all remaining data. execute(query, args=None)¶ Coroutine, executes the given operation substituting any markers with the given parameters. For example, getting all rows where id is 5: yield from cursor.execute("SELECT * FROM t1 WHERE id=%s", (5,)) executemany(query, args)¶ The executemany() coroutine will execute the operation iterating over the list of parameters in seq_params. Example: Inserting 3 new employees and their phone number: data = [ ('Jane','555-001'), ('Joe', '555-001'), ('John', '555-003') ] stmt = "INSERT INTO employees (name, phone) VALUES ('%s','%s')" yield from cursor.executemany(stmt, data) INSERT statements are optimized by batching the data, that is using the MySQL multiple rows syntax. callproc(procname, args)¶ Execute stored procedure procname with args, this method is coroutine. Cursor.execute()to get any OUT or INOUT values. Basic usage example: conn = yield from aiomysql.connect(host='127.0.0.1', port=3306, user='root', password='', db='mysql', loop=self.loop) cur = yield from conn.cursor() yield from cur.execute("""CREATE PROCEDURE myinc(p1 INT) BEGIN SELECT p1 + 1; END """) yield from cur.callproc('myinc', [1]) (ret, ) = yield from cur.fetchone() assert 2, ret yield from cur.close() conn.close() Compatibility warning: The act of calling a stored procedure itself creates an empty result set. This appears after any result sets generated by the procedure. This is non-standard behavior with respect to the DB-API. Be sure to use Cursor.nextset()to advance through all result sets; otherwise you may get disconnected. fetchmany(size=None)¶ Coroutine the next set of rows of a query result, returning a list of tuples. When no more rows are available, it returns an empty list. The number of rows to fetch per call is specified by the parameter. If it is not given, the cursor’s Cursor.arraysizedetermines the number of rows to be fetched. The method should try to fetch as many rows as indicated by the size parameter. If this is not possible due to the specified number of rows not being available, fewer rows may be returned cursor = yield from connection.cursor() yield from cursor.execute("SELECT * FROM test;") r = cursor.fetchmany(2) print(r) # [(1, 100, "abc'def"), (2, None, 'dada')] r = yield from cursor.fetchmany(2) print(r) # [(3, 42, 'bar')] r = yield from cursor.fetchmany(2) print(r) # [] fetchall()¶ Coroutine returns all rows of a query result set: yield from cursor.execute("SELECT * FROM test;") r = yield from cursor.fetchall() print(r) # [(1, 100, "abc'def"), (2, None, 'dada'), (3, 42, 'bar')] scroll(value, mode='relative')¶ Scroll the cursor in the result set to a new position according to mode. This method is coroutine. According to the DBAPI, the exception raised for a cursor out of bound should have been IndexError. The best option is probably to catch both exceptions in your code: try: yield from cur.scroll(1000 * 1000) except (ProgrammingError, IndexError), exc: deal_with_it(exc) - class DictCursor¶ A cursor which returns results as a dictionary. All methods and arguments same as Cursor, see example: import asyncio import aiomysql loop = asyncio.get_event_loop() @asyncio.coroutine def test_example(): conn = yield from aiomysql.connect(host='127.0.0.1', port=3306, user='root', password='', db='mysql', loop=loop) # create dict cursor cursor = yield from conn.cursor(aiomysql.DictCursor) # execute sql query yield from cursor.execute( "SELECT * from people where name='bob'") # fetch all results r = yield from cursor.fetchone() print(r) # {'age': 20, 'DOB': datetime.datetime(1990, 2, 6, 23, 4, 56), # 'name': 'bob'} loop.run_until_complete(test_example()) You can customize your dictionary, see example: import asyncio import aiomysql class AttrDict(dict): """Dict that can get attribute by dot, and doesn't raise KeyError""" def __getattr__(self, name): try: return self[name] except KeyError: return None class AttrDictCursor(aiomysql.DictCursor): dict_type = AttrDict loop = asyncio.get_event_loop() @asyncio.coroutine def test_example(): conn = yield from aiomysql.connect(host='127.0.0.1', port=3306, user='root', password='', db='mysql', loop=loop) # create your dict cursor cursor = yield from conn.cursor(AttrDictCursor) # execute sql query yield from cursor.execute( "SELECT * from people where name='bob'") # fetch all results r = yield from cursor.fetchone() print(r) # {'age': 20, 'DOB': datetime.datetime(1990, 2, 6, 23, 4, 56), # 'name': 'bob'} print(r.age) # 20 print(r.foo) # None loop.run_until_complete(test_example()) - class SSCursor¶. All methods are the same as in Cursorbut with different behaviour. fetchall()¶ Same as :meth:`Cursor.fetchall` :ref:`coroutine <coroutine>`, useless for large queries, as all rows fetched one by one. fetchmany(size=None, mode='relative')¶ Same as :meth:`Cursor.fetchall`, but each row fetched one by one.
https://aiomysql.readthedocs.io/en/v0.0.20/cursors.html
CC-MAIN-2019-35
refinedweb
1,301
51.95
/* Lisp functions pertaining to editing. <sys/types.h> #include <stdio.h> #ifdef HAVE_PWD_H #include <pwd.h> #include <grp <float, EMACS_TIME,; void init_editfns (void) { const char *user_name; register char *p; struct passwd *pw; /* password entry for the current user */ Lisp_Object tem; /* Set up system_name even when dumping. */ init_system_name (); #ifndef CANNOT_DUMP /* Don't bother with this on initial start when just dumping out */ if (!initialized) return; #endif /* not CANNOT_DUMP */ initial_tz = getenv ("TZ"); tzvalbuf_in_environ = 0; pw = getpwuid (getuid ()); #ifdef MSDOS /* We let the real user name default to "root" because that's quite accurate on MSDOG and because it lets Emacs find the init file. (The DVX libraries override the Djgpp libraries here.) */ Vuser_real_login_name = build_string (pw ? pw->pw_name : "root"); = pw ? pw->pw_name : "unknown"; } Vuser_login_name = build_string (user_name); /* If the user name claimed in the environment vars differs from the real uid, use the claimed name to find the full name. */ tem = Fstring_equal (Vuser_login_name, Vuser_real_login_name); if (! NILP (tem)) tem = Vuser_login_name; else { uid_t euid = geteuid (); tem = make_fixnum_or_float (euid); } Vuser_full_name = Fuser_full_name (tem); p =; } DEFUN ("point", Fpoint, Spoint, 0, 0, 0, doc: /* Return value of point, as an integer. Beginning of buffer is position (point-min). */) (void) { Lisp_Object temp; XSETFASTINT (temp, PT); return temp; } DEFUN ("point-marker", Fpoint_marker, Spoint_marker, 0, 0, 0, doc: /* Return value of point, as a marker object. */) return build_marker (current_buffer, PT, PT_BYTE); DEFUN ("goto-char", Fgoto_char, Sgoto_char, 1, 1, "NGoto char: ", doc: /* Set point to POSITION, a number or marker. Beginning of buffer is position (point-min), end is (point-max). The return value is POSITION. */) (register Lisp_Object position) ptrdiff means return the start. If there is no region active, signal an error. */ static Lisp_Object region_limit (bool"); /* Clip to the current narrowing (bug#11770). */ return make_number ((PT < XFASTINT (m)) == beginningp ? PT : clip_to_bounds (BEGV, XFASTINT (m), ZV)); }. */) return region_limit (0); } DEFUN ("mark-marker", Fmark_marker, Smark_marker, 0, 0, 0, doc: /* Return this buffer's mark, as a marker object. Watch out! Moving this marker changes the mark position. If you set the marker not to point anywhere, the buffer will have no mark. */) return; ptrdiff) { SAFE_ALLOCA_LISP (overlay_vec, noverlays); { SAFE_FREE (); return tem; } } } SAFE_FREE (); { /* null, means don't store the beginning or end of the field. BEG_LIMIT and END_LIMIT serve to limit the ranged of the returned results; they do not effect boundary behavior. If MERGE_AT_BOUNDARY is non-nil, then if POS is at the very first position of a field, then the beginning of the previous field is returned instead of the beginning of POS's field (since the end of a field is actually also the beginning of the next input field, this behavior is sometimes useful). Additionally in the MERGE_AT_BOUNDARY non-nil case, if two fields are separated by a field with the special value `boundary', and POS lies within it, then the two separated fields are considered to be adjacent, and POS between them, when finding the beginning and ending of the "merged" field. Either BEG or END may be 0, in which case the corresponding value is not stored. */ static void find_field (Lisp_Object pos, Lisp_Object merge_at_boundary, Lisp_Object beg_limit, ptrdiff_t *beg, Lisp_Object end_limit, ptrdiff_t *end) /* Fields right before and after the point. */ Lisp_Object before_field, after_field; /* True if POS counts as the start of a field. */ bool at_field_start = 0; /* True if POS counts as the end of a field. */ bool at_field_end = 0; if (NILP (pos)) XSETFASTINT (pos, PT); else CHECK_NUMBER_COERCE_MARKER (pos); after_field = get_char_property_and_overlay (pos, Qfield, Qnil, NULL); before_field = (XFASTINT (pos) > BEGV ? get_char_property_and_overlay (make_number (XINT (pos) - 1), Qfield, Qnil, NULL) /* Using nil here would be a more obvious choice, but it would fail when the buffer starts with a non-sticky field. */ : after_field); /* See if we need to handle the case where MERGE_AT_BOUNDARY is nil and POS is at beginning of a field, which can also be interpreted as the end of the previous field. Note that the case where if MERGE_AT_BOUNDARY is non-nil (see function comment) is actually the more natural one; then we avoid treating the beginning of a field specially. */ if (NILP (merge_at_boundary)); } /* Note about special `boundary' fields: Consider the case where the point (`.') is between the fields `x' and `y': xxxx.yyyy In this situation, if merge_at_boundary is non-nil, consider the `x' and `y' fields as forming one big merged field, and so the end of the field is the end of `y'. However, if `x' and `y' are separated by a special `boundary' field (a field with a `field' char-property of 'boundary), then ignore this special field when merging adjacent fields. Here's the same situation, but with a `boundary' field between the `x' and `y' fields: xxx.BBBByyyy Here, if point is at the end of `x', the beginning of `y', or anywhere in-between (within the `boundary' field), merge all three fields and consider the beginning as being the beginning of the `x' field, and the end as being the end of the `y' field. */ if (beg) { if (at_field_start) /* POS is at the edge of a field, and we should consider it as the beginning of the following field. */ *beg = XFASTINT (pos); else /* Find the previous field boundary. */ {); } } if (end) { if (at_field_end) /* POS is at the edge of a field, and we should consider it as the end of the previous field. */ *end = XFASTINT (pos); else /* Find the next field boundary. */ { if (!NILP (merge_at_boundary) && EQ (after_field, Qboundary)) /* Skip a `boundary' field. */ pos = Fnext_single_char_property_change (pos, Qfield, Qnil, end_limit); pos = Fnext_single_char_property_change (pos, Qfield, Qnil, end_limit); *end = NILP (pos) ? ZV : XFASTINT (pos); } } DEFUN ("delete-field", Fdelete_field, Sdelete_field, 0, 1, 0, doc: /* Delete the field surrounding POS. A field is a region of text with the same `field' property. If POS is nil, the value of point is used for POS. */) (Lisp_Object pos) ptrdiff_t beg, end; find_field (pos, Qnil, Qnil, &beg, Qnil, &end); if (beg != end) del_range (beg, end); return Qnil; } DEFUN ("field-string", Ffield_string, Sfield_string, 0, 1, 0, doc: /* Return the contents of the field surrounding POS as a string. return make_buffer_string (beg, end, 1); } DEFUN ("field-string-no-properties", Ffield_string_no_properties, Sfield_string_no_properties, 0, 1, 0, doc: /* Return the contents of the field around POS, without text properties. return make_buffer_string (beg, end, 0); } DEFUN ("field-beginning", Ffield_beginning, Sfield_beginning, 0, 3, 0, doc: /*. If LIMIT is non-nil, it is a buffer position; if the beginning of the field is before LIMIT, then LIMIT will be returned instead. */) (Lisp_Object pos, Lisp_Object escape_from_edge, Lisp_Object limit) ptrdiff_t beg; find_field (pos, escape_from_edge, limit, &beg, Qnil, 0); return make_number (beg); } DEFUN ("field-end", Ffield_end, Sfield_end, 0, 3, 0, doc: /*. If LIMIT is non-nil, it is a buffer position; if the end of the field is after LIMIT, then LIMIT will be returned instead. */) ptrdiff_t end; find_field (pos, escape_from_edge, Qnil, 0, limit, &end); return make_number (end); } DEFUN ("constrain-to-field", Fconstrain_to_field, Sconstrain_to_field, 2, 5, 0, doc: /* Return the position closest to NEW-POS that is in the same field as OLD-POS. If NEW-POS is nil, then use the current point instead, and move point to the resulting constrained position, in addition to returning that position. \\[next-line] or \\[beginning-of-line],. */) (Lisp_Object new_pos, Lisp_Object old_pos, Lisp_Object escape_from_edge, Lisp_Object only_in_line, Lisp_Object inhibit_capture_property) { /* If non-zero, then the original point, before re-positioning. */ ptrdiff_t orig_point = 0; bool fwd; Lisp_Object prev_old, prev_new; if (NILP (new_pos)) /* Use the current point, and afterwards, set it. */ { orig_point = PT; XSETFASTINT (new_pos, PT); } CHECK_NUMBER_COERCE_MARKER (new_pos); CHECK_NUMBER_COERCE_MARKER (old_pos); fwd = (XINT (new_pos) > XINT (old_pos)); prev_old = make_number (XINT (old_pos) - 1); prev_new = make_number (X. */ ptrdiff_t shortage; Lisp_Object field_bound; if (fwd) field_bound = Ffield_end (old_pos, escape_from_edge, new_pos); field_bound = Ffield_beginning (old_pos, escape_from_edge, new_pos); if (/* See if ESCAPE_FROM_EDGE caused FIELD_BOUND to jump to the other side of NEW_POS, which would mean that NEW_POS is already acceptable, and it's not necessary to constrain it to FIELD_BOUND. */ ((XFASTINT (field_bound) < XFASTINT (new_pos)) ? fwd : !fwd) /* NEW_POS should be constrained, but only if either ONLY_IN_LINE is nil (in which case any constraint is OK), or NEW_POS and FIELD_BOUND are on the same line (in which case the constraint is OK even if ONLY_IN_LINE is non-nil). */ && (NILP (only_in_line) /* This is the ONLY_IN_LINE case, check that NEW_POS and FIELD_BOUND are on the same line by seeing whether there's an intervening newline or not. */ || (find_newline (XFASTINT (new_pos), XFASTINT (field_bound), fwd ? -1 : 1, &shortage, 1), shortage != 0))) /* Constrain NEW_POS to FIELD_BOUND. */ new_pos = field_bound; if (orig_point && XFASTINT (new_pos) != orig_point) /* The NEW_POS argument was originally nil, so automatically set PT. */ SET_PT (XFASTINT (new_pos)); } return new_pos; } DEFUN ("line-beginning-position", Fline_beginning_position, Sline_beginning_position, 0, 1, 0, doc: /* Return the character position of the first character on the current line. With optional argument N, scan forward N - 1 lines first. If the scan reaches the end of the buffer, return that position. This function ignores text display directionality; it returns the position of the first character in logical order, i.e. the smallest character position on the line. This function constrains the returned position to the current field unless that position) ptrdiff_t orig, orig_byte, end; ptrdiff); /* Return END constrained to the current input field. */ return Fconstrain_to_field (make_number (end), make_number (orig), XINT (n) != 1 ? Qt : Qnil, Qt, Qnil); DEFUN ("line-end-position", Fline_end_position, Sline_end_position, 0, 1, 0, doc: /* Return the character position of the last character on the current line. With argument N not nil or 1, move forward N - 1 lines first. If scan reaches end of buffer, return that position. This function ignores text display directionality; it returns the position of the last character in logical order, i.e. the largest character position on the line.. ptrdiff_t clipped_n; ptrdiff_t end_pos; ptrdiff_t orig = PT; if (NILP (n)) XSETFASTINT (n, 1); else clipped_n = clip_to_bounds (PTRDIFF_MIN + 1, XINT (n), PTRDIFF_MAX); end_pos = find_before_next_newline (orig, 0, clipped_n - (clipped_n <= 0)); /*_value ("oooo", Fpoint_marker (), /* Do not copy the mark if it points to nowhere. */ (XMARKER (BVAR (current_buffer, mark))->buffer ? Fcopy_marker (BVAR (current_buffer, mark), Qnil) : Qnil), /* Selected window if current buffer is shown in it, nil otherwise. */ ((XBUFFER (XWINDOW (selected_window)->buffer) == current_buffer) ? selected_window : Qnil), BVAR (current_buffer, mark_active));
https://emba.gnu.org/emacs/emacs/-/blame/1dfcc79e83d3db031b45e9f6b9314dc1f0697b1d/src/editfns.c
CC-MAIN-2021-10
refinedweb
1,640
62.27
Simple image box arithmetic Project Description This provides image crop/resize algorithm for chaining multiple resize, crop actions and producing a resulting crop/resize action pair. Usage The usage is fairly simple: from boxmath import box, resize, crop, size, make_transformer from wand import image # Load the image to get its width and height i = image.Image(filename="chrysanthemum.jpg") b = box(i.width, i.height) # manipulate the virtual image b = resize(b, 629, 483) b = crop(b, 0, 0, 480, 480) b = resize(b, 1000, 1000) # render def resizer(img, w, h): img.resize(int(w), int(h), filter=FILTER) return img def cropper(img, l,t,r,b): img.crop(int(l),int(t),int(r),int(b)) return img t = make_transformer(b, resizer, cropper) i = t(i) i.save(filename="chrysanthemum-1000x1000.jpg") Normally, if we would of used wand or PIL directly, each resize would degrade the image. The action of down scaling and then up scaling would wreck the quality of the image; with the power of math, we only apply the resize and crop when we need render the image. Not that the width, height, left, top, right, and bottom values passed to the resizer and cropper functions are cast as ints. This is because they are either fractions.Fraction() instances or int(). boxmath uses the Fraction class to ensure precision while resizing and cropping. Download Files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/boxmath/
CC-MAIN-2018-13
refinedweb
251
56.15
Introduction: Arduino Based Bi-color LED Matrix Snake Game We demonstrated how an Arduino based Bi-color LED Matrix Tetris Game can be built in our last instructable. We were quite surprise it was featured to the Instructables homepage and have garnered quite a handful of favourites over a short period. You may check it out if you are interested at. When we were young, there were not many electronic games around and one of the games we enjoy playing on those monochrome monitor with green phosphor screen is the Snake game. For this instructable, we will be building the classic SNAKE game using Bi-color LED Matrices powered by Arduino. Before we decide to come up with this instructable, we browse through the existing instructables to check if there are any similar projects. Indeed we found a couple of instructables implementing the game, some using the Rainbowduino driving an RGB LED Matrix from Seeedstudio. We decided to go ahead with our instructable to build the Snake game using our jolliFactory Bi-color LED Matrix Driver module DIY kit. This LED matrix Driver module is designed to be modular and chain-able so that you may daisy-chain the modules together to the number of modules you need to suit your project. We actually re-used these modules here by dis-assembling them from one of our old Scrolling Text Display project. You may visit if you are keen on building one of these displays. If you have built the Bi-color LED Matrix Tetris game based on our last instructable, you may proceed directly to the Programming the Arduino Board step to download the Arduino sketch and enjoy the Snake game. For this game, we will only be using 2 push buttons (Left and Right) for game navigation as we think it will be more challenging than using 4 push buttons. We are able to produce dots with red, green or orange on the display by using the Bi-color LED Matrix which should be sufficient for this simple game. We will have orange color for the snake head, red for the snake body and green for the food/token. To build this project, basic electronics knowledge with electronics component soldering skill and some knowledge on using the Arduino are required. You may view the following YouTube video to see what we are building. We will be repeating some of the steps from our Tetris game instructable here to make this instructable as complete by itself without reference to another instructable. Step 1: Building the Arduino Bi-color LED Matrix Snake Game We will be building a two LED Matrix tall Snake game here driven by an Arduino Nano. We will need two of the Bi-color (Red: Step 2: Wiring After the two. Two panel mount momentary push button switches are required for the project for controlling the movement of the snake.. To reduce the part count for this project, you may try without the 10Kohms pull-down resistors for the DATA IN and CLK input lines. Except for the two Bi-color LED Matrix Driver modules and the two push button switches, we hook up the entire circuit on a small piece of perf-board around 60mm x 60mm in size. Note that there are two PCB mount push buttons on the perf-board. We initially used them for the game control but after building a simple enclosure for the game, we decided to use two panel mount push buttons instead for better game control. We parallel wired our panel mount push button with the PCB mount push buttons so game control can now be performed using either the PCB or panel mount push buttons. Step 3: Programming the Arduino Board The Arduino board needs to be loaded with the Arduino sketch to run the game. We used Arduino IDE V1.03 for our project. Download the Arduino sketch below for this project and upload it into your Arduino board. Download jollifactory_Snake_V1_0.ino We adapted the snake game sketch found at... to work with our jolliFactory Bi-color LED Matrix Driver Module for this project. The sketch uses the SPI and Bounce2 libraries. The SPI library comes with the Arduino IDE V1.03 installation and Bounce2 library can be found at We have coded the Arduino sketch so that if you would like to build a single bi-color LED Matrix Snake game, you simply need to change the variable bi_maxInUse from 2 to 1, upload the sketch and enjoy the game. The Snake game sketch we have here is very basic without any game levels and scores. You may amend and enhance the sketch to your liking. Step 4: Enclosure and Assembly We will re-use the hand-held enclosure we built for our last Tetris game instructable here as basically there is no change to the project module form factor. As this project is also build just for the FUN factor with no intention of using it for long, we do not want to put in too much effort to build a proper enclosure. However, the enclosure built should enable the player to hand-held it to play quite comfortably. What we have for the enclosure is a cardboard box backing with a blue tinted acrylic protective front with the game control push button switches mounted. We did not even secure the modules to the enclosure as they fit quite snuggly in the enclosure. We will not delve into the detail on how we build our game enclosure here. The pictures show the various stages of assembling the sub-modules together. Step 5: Enjoy the Snake Game Playing the Snake game is easy. - Control the snake movement by activating either the Left or Right button. - Hunt for food/token and grow longer. - Do not let the snake bite itself. Be the First to Share Recommendations 15 Comments 4 years ago There are some problems on Arduino nano device. If I use an Arduino UNO all works properly, but if I use an Arduino Nano and the same code of Arduino UNO I see problems on visualization. In the arduino.cc site there's this note for Arduino Nano hardware: "SPI: 10 (SS), 11 (MOSI), 12 (MISO), 13 (SCK). These pins support SPI communication, which, although provided by the underlying hardware, is not currently included in the Arduino language." 6 years ago It looks great! 6 years ago how could i do this with a 10x10 WS2812B rgb matrix? Reply 6 years ago very good project by the way :O 6 years ago on Introduction Hello please ask how to connect 2 led matrixes each other? Reply 6 years ago on Introduction Are you using the Bi-color LED Matrix modules from...? If so, the LED Matrix modules can be connected easily by just plugging one module to another via the connectors at the sides of the modules. You may plug more LED Matrix modules together to make a longer display for projects such as a scrolling text display found at... Reply 6 years ago on Introduction We have used one color led matrix and we have connecting matrixes each other by plugging one module to another via the connectors, but they are doing the same thing, What shall we do? Reply 6 years ago on Introduction The Arduino sketch we have here works with Bi-color LED Matrix modules and will not work with a single color LED Matrix. You may need to modify the sketch for it to work with single color LED Matrix but it may not be easy if you are not familiar with programming. It should not be difficult to find a snake game running using single color LED Matrix if you search online. Reply 6 years ago on Introduction Thank you for help. 7 years ago on Introduction For those who are unable to locate the bounce2 library, I have amended the link in Step 3 to go to the download page for this library. 7 years ago on Introduction Hi, me again, would you mind explaining what this part of the code does?: #include <SPI.h> #define GREEN 0 #define RED 1 #define offREDoffGREEN 0 #define offREDonGREEN 1 #define onREDoffGREEN 2 #define ISR_FREQ 190 thanks, I am trying to recreate your project because its really cool! Reply 7 years ago on Introduction Thanks for your interest in this project. This project's coding is not for beginners. You may need to check out more basic Arduino project tutorials first to be able to follow through the codes here. Basically, the 'include' statement here is to use the SPI library so that we can use the available SPI library functions and routines such as SPI.transfer in our code without writing our own functions and routines. All the 'define' statements are used for ease of reading the codes. For example, #define offREDonGREEN 1 is used so that whenever you see the term offREDon GREEN in the code, the program actually replaces it with 1. 7 years ago on Introduction would this work with a regular arduino? (arduino uno) Reply 7 years ago on Introduction Sure. Arduino UNO works perfectly with this instructable. 8 years ago Good stuff! Love it.
https://www.instructables.com/Arduino-based-Bi-color-LED-Matrix-Snake-Game/
CC-MAIN-2022-33
refinedweb
1,539
68.81
Follow the straight-forward steps of this Premium Tutorial to create an entertaining Slot Machine game in Flash. Spin the wheels and see what you could win! Step 1: Brief Overview Using the Flash drawing tools we'll create a good looking graphic interface that will be powered by several ActionScript 3 classes. The user will be able to bet different amounts to win the prizes. Step 2: Flash Document Settings Open Flash and create a 480 pixels wide, 320 pixels tall document. Set the Frame rate to 30fps. Step 3: Interface A dark interface will be displayed; this involves multiple shapes, buttons, bitmaps and more. Continue to the next steps to learn how to create this GUI. Step 4: Background Create a 480x320 px rectangle and fill it with this radial gradient: #404040, #080808. Use the Align Panel (Cmd + K) to center it in the stage. Step 5: Title Let's add a title to our game; depending on your Slot Machine theme you can change the graphics to fit your needs. Here I've used the Tuts+ logo. Step 6: Slots Background Use the Rectangle Primitive Tool (R) to create a 320x160px rectangle. Change its corner radius to 10 and fill it with this linear gradient: #F5DA95, #967226, #91723B. Duplicate the shape, change its size to 316x156px and change its color to the black linear gradient we used before. Step 7: Items Graphics. Duplicate the shapes and align them in the slots area. Step 9: Reel MovieClip Arrange the items graphics in your desired order and convert them to movie clips. We'll use the reel background rectangle of the last step to create the shadow effect: change its color to black and change its alpha values to 65, 15, 0. This can be a tricky part so be sure to download the source files to help you out. As you can see, I've used two Nettuts+ logos and two Psdtuts+ logos, but only one each of the Activetuts+ and Vectortuts+ logos. This means there's a greater possibility of matching three Nettuts+ logos than there is of matching three Activetuts+ logos. Use the shadow as a Mask Layer, and the Timeline to animate the items downwards. I used frame by frame animation moving the items 20 px down in every frame. You could use a tween, if you wanted to. Duplicate this MoveClip and place them in the correct slot background. Use the following instance names: items1, items2, items3. Step 10: Labels Timeline labels will be used to check for a winning combination. Create a new Layer and label each frame where our item is in the center. Step 11: Static TextFields Use the Text Tool (T) to create three static textfields: Credits, Bet and Winner Paid. Step 12: Dynamic TextFields With the Text Tool still selected, create three dynamic textfields, place them above the static ones and name them, from left to right: creditsT, betT and paidT. Step 13: Buttons Use the Rectangle Primitive Tool to create three 45x45px squares, change the corner radius to 4 and fill it with: #CD0202, #910202. Add the corresponding text label to each, convert each to a button, and name them: payTabB, betMaxB and betOneB. Step 14: Spin Button The Spin button is a little larger that the others and also has another color. Use the same process of the other buttons, but change the size to 50x50px and the color to: #5DA012, 3C670C. Name the button spinB. Step 15: Sounds is easier to use. You can download TweenNano from its official website. Step 17: New ActionScript Class Create a new (Cmd + N) ActionScript 3.0 Class and save it as Main.as in your class folder. Step 18: Class Structure Create your basic class structure to begin writing your code. > package { import flash.display.Sprite; public class Main extends Sprite { public function Main():void { // constructor code } } } Step 19: Required Classes These are the classes we'll need to import for our class to work; the import directive makes externally defined classes and packages available to your code. import flash.display.Sprite; import flash.events.MouseEvent; import com.greensock.TweenNano; import flash.utils.Timer; import flash.events.TimerEvent; Step 20: Variables These are the variables we'll use, read the comments in the code to know more about them. var payTable:PayTable; //A Pay Table instance var timer:Timer; // Timer object that controls the duration of the spins /* Sounds */ var buttonS:ButtonS = new ButtonS(); var spinS:SpinS = new SpinS(); var stopS:StopS = new StopS(); var winS:WinS = new WinS(); Step 21: Constructor Code The constructor is a function that runs when an object is created from a class, this code is the first to execute when you make an instance of an object, or runs when then SWF first loads if it belongs to the document class. It calls the necessary functions to start the game. Check those functions in the next steps. public final function Main():void { //Code } Step 22: Stop Items Prevent the reels MovieClips from playing immediately. items1.stop(); items2.stop(); items3.stop(); Step 23: Add Button Listeners Here we use a custom function to add the Mouse Events to our buttons; this function will be created later in the class. buttonListeners('add'); Step 24: Disable Spin Button Next we use another custom function that will prevent the Mouse Events of the Spin button. buttons('disable', spinB); Step 25: Button Listeners This function adds or removes a MouseUp Event to the buttons depending on the specified parameter. private final function buttonListeners(e:String):void { if(e == 'add') { spinB.addEventListener(MouseEvent.MOUSE_UP, spinBtn); betMaxB.addEventListener(MouseEvent.MOUSE_UP, betMax); betOneB.addEventListener(MouseEvent.MOUSE_UP, betOne); payTabB.addEventListener(MouseEvent.MOUSE_UP, payTableHandler); } else { spinB.removeEventListener(MouseEvent.MOUSE_UP, spinBtn); betMaxB.removeEventListener(MouseEvent.MOUSE_UP, betMax); betOneB.removeEventListener(MouseEvent.MOUSE_UP, betOne); payTabB.removeEventListener(MouseEvent.MOUSE_UP, payTableHandler); } } Step 26: Enable/Disable Buttons The following function uses its parameters to tween the alpha value of the specified button and disable/enable mouse interactions. private final function buttons(action:String, ...btns):void { var btnsLen:int = btns.length; if(action == 'enable') { for(var i:int = 0; i < btnsLen; i++) { btns[i].enabled = true; btns[i].mouseEnabled = true; TweenNano.to(btns[i], 0.5, {alpha:1}); } } else { for(var j:int = 0; j < btnsLen; j++) { btns[j].enabled = false; btns[j].mouseEnabled = false; TweenNano.to(btns[j], 0.5, {alpha:0.2}); } } } Step 27: Bet Max Button The Bet Max button is handled by this function. It plays the spin sound, changes the credits textfields,, and calls the spin function. private final function betMax(e:MouseEvent):void { /* Sound */ spinS.play(); /* Spin if enough credits */ if(int(creditsT.text) >= 3) { betT.text = '3'; buttons('disable', spinB, betOneB, betMaxB, payTabB); spin(); } } Step 28: Bet One Button The Bet One button is handled by this function. It increases the bet by one (if possible) and plays the corresponding button sound. It also enables the Spin button. private final function betOne(e:MouseEvent):void { /* Sound */ buttonS.play(); /* Bet One */ if(betT.text == '3') { betT.text = '1'; } else { betT.text = String(int(betT.text) + 1); } /* Enable Spin Button */ if(spinB.enabled == false) { buttons('enable', spinB); } } Step 29: Show/Hide Pay Table The Pay Table button is handled by this function. It checks whether the pay table is already on stage, and, if not, it uses a Tween to display it and center it. The other buttons are disabled while the table is showing. private final function payTableHandler(e:MouseEvent):void { /* Sound */ buttonS.play(); /* Show if not in stage */ if(payTable == null) { payTable = new PayTable(); payTable.x = stage.stageWidth * 0.5; payTable.y = stage.stageHeight * 0.5; addChild(payTable); TweenNano.from(payTable, 0.2, {scaleX:0.4, scaleY:0.4}); /* Disable buttons */ buttons('disable', spinB, betMaxB, betOneB); } else { TweenNano.to(payTable, 0.2, {scaleX:0.1, scaleY:0.1, alpha:0, onComplete:function destroyPT():void{removeChild(payTable); payTable = null}}); /* Enable buttons */ if(betT.text != '0') { buttons('enable', spinB); } buttons('enable', betMaxB, betOneB); } } Step 30: Spin Button The Spin button is handled by this function. It plays the spin sound and the Spin function if the credits are correct. private final function spinBtn(e:MouseEvent):void { /* Sound */ spinS.play(); /* Spin if enough credits */ if(int(creditsT.text) >= int(betT.text)) { spin(); buttons('disable', spinB, betOneB, betMaxB, payTabB); } } Step 31: Spin Function One of the core functions of the game, the spin function handles the winning and spending of credits, spins the items in the slots and uses a timer to stop them. Read the next steps for a more detailed view of these actions. private final function spin():void { //Code } Step 32: Add Won Credits This checks whether credits are available to add from the paidT textfield, and resets its value to 0. creditsT.text = String(int(creditsT.text) + int(paidT.text)); paidT.text = '0'; Step 33: Subtract Credits This subtracts the credits used in the last bet. creditsT.text = String(int(creditsT.text) - int(betT.text)); Step 34: Spin Items This function animates the reels, to make the items appear to spin. items1.play(); items2.play(); items3.play(); Step 35: Spin Timer This timer determines (randomly) the time to let the reel items spin, it is different in every spin. timer = new Timer(Math.floor(Math.random() * 1000) + 500); timer.addEventListener(TimerEvent.TIMER, handleTimer); timer.start(); Step 36: Timer Function This function is executed every time the timer ends its count. It stops the current slot from spinning and plays the stop sound. When all items are stopped it clears the timer and calls the checkWin() function. private function handleTimer(e:TimerEvent):void { if(timer.currentCount == 1) { stopItem(items1.currentFrame, items1); /* Sound */ stopS.play(); } if(timer.currentCount == 2) { stopItem(items2.currentFrame, items2); /* Sound */ stopS.play(); } if(timer.currentCount == 3) { stopItem(items3.currentFrame, items3); /* Sound */ stopS.play(); /* Stop Timer */ timer.stop(); timer.removeEventListener(TimerEvent.TIMER, handleTimer); timer = null; /* Enable buttons */ buttons('enable', spinB, betOneB, betMaxB, payTabB); /* Check Items for a winning combination */ checkWin(); } } Step 37: Snap To Nearest Logo As the timer can end in a frame where the current item is not in the center, we check the current frame of the MovieClip and use gotoAndStop() to display the closest item. private final function stopItem(cFrame:int, targetItem:MovieClip):void { if(cFrame >= 2 && cFrame <= 5) { targetItem.gotoAndStop(5); } else if(cFrame >= 6 && cFrame <= 9) { targetItem.gotoAndStop(9); } else if(cFrame >= 10 && cFrame <= 13) { targetItem.gotoAndStop(13); } else if(cFrame >= 14 && cFrame <= 17) { targetItem.gotoAndStop(17); } else if(cFrame >= 18 && cFrame <= 21) { targetItem.gotoAndStop(21); } else if(cFrame >= 22 && cFrame <= 24) { targetItem.gotoAndStop(1); } else if(cFrame == 1) { targetItem.stop(); } } You may need to alter this code to match the symbols and spin animation that you chose. Step 38: Check Win This function checks if the three items are equal, if true, it plays the winning sound and adds the corresponding amount to the paid textfield. private final function checkWin():void { if(items1.currentLabel == items2.currentLabel && items2.currentLabel == items3.currentLabel) { /* Sound */ winS.play(); /* Get current label to determine item's value */ var lbl:String = items1.currentLabel; if(lbl == 'a') { paidT.text = String(100 * int(betT.text)); } else if(lbl == 'v') { paidT.text = String(50 * int(betT.text)); } else if(lbl == 'p') { paidT.text = String(25 * int(betT.text)); } else if(lbl == 'n') { paidT.text = String(10 * int(betT.text)); } } } Step 39: Set Main Class We're making use of the Document Class in this tutorial, if you don't know how to use it or are a bit confused please read this QuickTip. Set your FLA's document class to Main. Step 40: Test We are now ready to test the movie and see if everything works as expected, don't forget to try all the buttons! Conclusion The final result is a customizable and entertaining game; try adding your custom graphics and prizes! You could also have a go at altering the probability to make it easier or harder to win. I hope you liked this tutorial, thank you for reading! Envato Tuts+ tutorials are translated into other languages by our community members—you can be involved too!Translate this post
https://gamedevelopment.tutsplus.com/tutorials/create-a-slot-machine-game-in-flash-using-as3--active-8127
CC-MAIN-2019-51
refinedweb
2,016
67.86
Question: Alright so I am making a commandline based implementation of a website search feature. The website has a list of all the links I need in alphabetical order. Usage would be something like ./find.py LinkThatStartsWithB So it would navigate to the webpage associated with the letter B. My questions is what is the most efficient/smartest way to use the input by the user and navigate to the webpage? What I was thinking at first was something along the lines of using a list and then getting the first letter of the word and using the numeric identifier to tell where to go in list index. (A = 1, B = 2...) Example code: #Use base url as starting point then add extension on end. Base_URL = "" #Use list index as representation of letter Alphabetic_Urls = [ "/extensionA.html", "/extensionB.html", "/extensionC.html", ] Or would Dictionary be a better bet? Thanks Solution:1 How are you getting this list of URLS? If your commandline app is crawling the website for links, and you are only looking for a single item, building a dictionary is pointless. It will take at least as long to build the dict as it would to just check as you go! eg, just search as: for link in mysite.getallLinks(): if link[0] == firstletter: print link If you are going to be doing multiple searches (rather than just a single commandline parameter), then it might be worth building a dictionary using something like: import collections d=collections.defaultdict(list) for link in mysite.getallLinks(): d[link[0]].append(link) # Dict of first letter -> list of links # Print all links starting with firstletter for link in d[firstletter]: print link Though given that there are just 26 buckets, it's not going to make that much of a difference. Solution:2 The smartest way here will be whatever makes the code simplest to read. When you've only got 26 items in a list, who cares what algorithm it uses to look through it? You'd have to use something really, really stupid to make it have an impact on performance. If you're really interested in the performance though, you'd need to benchmark different options. Looking at just the complexity doesn't tell the whole story, because it hides the factors involved. For instance, a dictionary lookup will involve computing the hash of the key, looking that up in tables, then checking equality. For short lists, a simple linear search can sometimes be more efficient, depending on how costly the hashing algorithm is. If your example is really accurate though, can't you just take the first letter of the input string and predict the URL from that? ( "/extension" + letter + ".html") Solution:3 Dictionary! O(1) Solution:4 Dictionary would be a good choice if you have (and will always have) a small number of items. If the list of URL's is going to expand in the future you will probably actually want to sort the URL's by their letter and then match the input against that instead of hard-coding the dictionary for each one. Solution:5 Since it sounds like you're only talking about 26 total items, you probably don't have to worry too much about efficiency. Anything you come up with should be fast enough. In general, I recommend trying to use the data structure that is the best approximation of your problem domain. For example, it sounds like you are trying to map letters to URLs. E.g., this is the "A" url and this is the "B" url. In that case, a mapping data structure like a dict sounds appropriate: html_files = { 'a': '/extensionA.html', 'b': '/extensionB.html', 'c': '/extensionC.html', } Although in this exact example you could actually cheat it and skip the data structure altogether -- '/extension%s.html' % letter.upper() :) Note:If u also have question or solution just comment us below or mail us on toontricks1994@gmail.com EmoticonEmoticon
http://www.toontricks.com/2018/06/tutorial-question-on-python-sorting.html
CC-MAIN-2018-34
refinedweb
662
63.39
Problem: Write a class that defines a car. The car class stores the following data about a car: Member Name Data Type make string model string vin string owner string doors int mileage float gas tank float trip float gas remaining float Create methods to set data into the data members and to read the data from the data members. Write a method to determine the gas mileage. What I got so far is: #include <iostream> #include <string> using namespace std; class car { private: string make, model, vin, owner; int doors; float mileage, gasTank, trip, gasRemaining; public: string getMake(); string getModel(); string getOwner(); int getDoors(); float getMileage(); float getGasTank(); float getTrip(); float getGasRemaining(); double getGasMilease(); }; int main() { car cars; cout << " Please enter Make of car: "; cin >> cars.make; return 0; } string car::getMake() { return make; } I am getting the following errors: 1>.\CH 7 Addendum 2 Car Class.cpp(41) : error C2248: 'car::make' : cannot access private member declared in class 'car' 1> .\CH 7 Addendum 2 Car Class.cpp(15) : see declaration of 'car::make' 1> .\CH 7 Addendum 2 Car Class.cpp(13) : see declaration of 'car' This week we just started to learn about classes so I am totally lost even after reading the textbook several times and looking over their examples. Can anyone help me understand what I am doing wrong with the class? Edited by C++ Beginner: n/a
https://www.daniweb.com/programming/software-development/threads/258443/car-class-homework-help
CC-MAIN-2017-09
refinedweb
234
62.58
import gs.TweenLite; import gs.OverwriteManager; import gs.easing.*; stop(); OverwriteManager.init(); var clipArray:Array = new Array(); var numButtons:int = 31; for (var k:int = 1; k < numButtons; k++) { this["btn" + k].addEventListener(MouseEvent.CLICK, showClickedImage); //push each button into an array clipArray.push(this["btn"+k]); } function openRandomClip():void { var randomClip:int = Math.random()*clipArray.length; showClickedImage(null, clipArray[randomClip]); } openRandomClip(); function showClickedImage(event:MouseEvent = null, targetClip:SimpleButton = null):void { var target:Object = event!=null ? event.target : targetClip; //cary on as normal this.setChildIndex(this.getChildByName(target.name), this.numChildren-1); var orgWidth:int = target.width; var orgHeight:int = target.height; var orgX:int = target.x var orgY:int = target.y TweenLite.to(target, 1, {width:350, height:350, x:25, y:0, ease:Bounce.easeOut, delay: 1}) TweenLite.to(target, 1, {width:orgWidth, height:orgHeight, x:orgX, y:orgY, ease:Bounce.easeOut, delay:4}) } This is my code that make myimage thumbnails tween to maximized stage size than tween back again a few seconds after. I’m wondering if someone could help me make the openRandomClip function get called again or loop somehow so that the images will continue to randomly maximize then shrink on their own. I used the onComplete arguement to do this at first, but then the dilemna exists where I want the user to be able to click on any of the thumbnails and have only that specific image maximize. I had that working, but it wouldn’t stop the openRandomClip when the user clicks, so two images would then start popping up, and everytime you click a thumbnail another randomimage routine would be added to the mix. I basically just want to be able to make it so that when/if the user mouses over ANY thumbnail it will just pause the openRandomClip from happening and let the user look around and click certain ones if desired. Then once the user mouses out off the entire movie of just off all the thumbnails I want the randomclip to resume again. Can anyone help me with that? I’m totally stuck on how to do it!! Thanks in advance! -Matt
https://forum.kirupa.com/t/as3-tweenlite-mouseover-math-random-help/271319
CC-MAIN-2022-27
refinedweb
356
50.12
Upgrade tips¶ Description Advanced tips for upgrading Plone. Some of the information on this page is for Plone 4, which used the Archetypes content type framework. Plone 5.x uses the Dexterity content type framework. General Tips¶ This guide contains some tips for Plone upgrades. For more information, see also the official Plone upgrade guide Recommended setup¶ Test the upgrade on your local development computer first. Create two buildouts: one for the old Plone version (your existing buildout) and one for the new version. Prepare the migration in the old buildout. After all preparations are done, copy the Data.fs and blobstorage to the new buildout and run plone_migration tool there. Fix persistent utilities¶ You might need to clean up some leftovers from uninstalled add-ons which have not cleanly uninstalled. Use this utility: Note Perform this against old buildout Content Upgrades¶ For content migrations, Products.contentmigration can help you. Documentation on how to use it can be found on plone.org. Migration from non-folderish to folderish Archetypes based content types¶. This applies to Archetypes based content types. Upgrading theme¶ Make sure that your site them works on Plone 4. Official upgrade guide has tips how the theme codebase should be upgraded. Theme fixing and portal_skins¶ Your theme might be messed up after upgrade. Try playing around setting in portal_skins Properties tab. You can enable, disable and reorder skins layer applied in the theme. Upgrade may change the default theme and you might want to restore custom theme in portal_skins. Upgrade tips for plone.app.discussion¶ Enabling plone.app.discussion after Plone 4.1 upgrade¶ After migration from an earlier version of Plone, you will may notice that you do not have a Discussion control panel for plone.app.discussion, the new commenting infrastructure which now ships as part of new Plone installs beyond version 4.1. If a check of your Site Setup page reveals that you do not have the Discussion control panel, implement the following. Install plone.app.discussion manually¶ Log into your Plone site as a user with Manager access Browse to the following URL to manually install plone.app.discussion: http://<your-plone-url>:<port>/<plone-instance>/portal_setup/manage_importSteps In the Select Profile or Snapshot drop-down menu, select Plone Discussions. Click the Import all stepsbutton at the bottom of the page. Confirm that Discussion is now present as a control panel in your Site Setup Migrate existing comments¶ Follow the instructions regarding How to migrate comments to plone.app.discussion to migrate existing Plone comments. Fixing Creator details on existing comments¶ You may notice that some of your site’s comments have the user’s ID as their Creator property. At time of writing (for plone.app.discussion==2.0.10), the Creator field should refer to the user’s full name and not their user ID. You’ll likely notice that a number of other fields, including author_username, author_name and author_email are not present on some of your migrated comments. Reasons why comments get migrated but unsuccessfully are being investigated. This may change for future versions of plone.app.discussion. For now, though, having the user ID left as the Creator is less than helpful and means aspects like the username, name, and email not present affect usability of If a site has many comments with this issue, it is possible to step through all of them and correct them. Using a script like the following will process each of the affected comments accordingly: from Products.CMFPlone.utils import getToolByName from zope.app.component import hooks from plone import api context = hooks.getSite() catalog = api.portal.get_tool(name='portal_catalog') mtool = api.portal.get_tool(name='portal_membership') brains = catalog.searchResults(object_provides='plone.app.discussion.interfaces.IComment') for brain in brains: member = api.user.get(username=brain.Creator') comment = brain.getObject() if member and not comment.author_username and not comment.author_name and not comment.author_email: fullname = member.getProperty('fullname') email = member.getProperty('email') if fullname and email: comment.author_username = brain.Creator #our borked user ID comment.creator = fullname comment.author_name = fullname comment.author_email = email comment.reindexObject() print 'Fixed and reindexed %s' % comment else: print 'Could not find properties for author of %s' % comment This can be run anywhere an Acquisition context object is available, such as running your Zope instance in debug mode, an ipython prompt, or some other function on the filesystem. The getSite() function call can (and may need to) be replaced with some other pre-existing context object if that is more suitable. Keep in mind that this script was successfully used in a situation where no possible collisions existed between correctly-migrated comments Creators’ full names and user IDs (the code looks up the Creator in the hope of finding a valid Plone member). If you had a situation where you had some correctly migrated comments written by a user with ID david and full name of Administrator, and also had a user with the ID of Administrator, then this script may not be suitable. In the test situation, the three attributes of author_username, author_name, and author_email were observed as all being None, so in checking for this too, this may avoid problems. Test the code first with something like a
https://docs.plone.org/develop/plone/misc/upgrade.html
CC-MAIN-2020-16
refinedweb
870
50.33
Solving problem is about exposing yourself to as many situations as possible like How to fix: “UnicodeDecodeError: ‘ascii’ codec can’t decode byte” fix: “UnicodeDecodeError: ‘ascii’ codec can’t decode byte”, which can be followed any time. Take easy to follow this discuss. as3:~/ngokevin-site# nano content/blog/20140114_test-chinese.mkd as3:~/ngokevin-site# wok Traceback (most recent call last): File "/usr/local/bin/wok", line 4, in Engine() File "/usr/local/lib/python2.7/site-packages/wok/engine.py", line 104, in init self.load_pages() File "/usr/local/lib/python2.7/site-packages/wok/engine.py", line 238, in load_pages p = Page.from_file(os.path.join(root, f), self.options, self, renderer) File "/usr/local/lib/python2.7/site-packages/wok/page.py", line 111, in from_file page.meta['content'] = page.renderer.render(page.original) File "/usr/local/lib/python2.7/site-packages/wok/renderers.py", line 46, in render return markdown(plain, Markdown.plugins) File "/usr/local/lib/python2.7/site-packages/markdown/init.py", line 419, in markdown return md.convert(text) File "/usr/local/lib/python2.7/site-packages/markdown/init.py", line 281, in convert source = unicode(source) UnicodeDecodeError: 'ascii' codec can't decode byte 0xe8 in position 1: ordinal not in range(128). -- Note: Markdown only accepts unicode input! How to fix it? In some other python-based static blog apps, Chinese post can be published successfully. Such as this app:. In my site, Chinese post can be published successfully. Answer #1: tl;dr / quick fix - Don’t decode/encode willy nilly - Don’t assume your strings are UTF-8 encoded - Try to convert strings to Unicode strings as soon as possible in your code - Fix your locale: How to solve UnicodeDecodeError in Python 3.6? - Don’t be tempted to use quick reloadhacks Unicode Zen in Python 2.x – The Long Version Without seeing the source it’s difficult to know the root cause, so I’ll have to speak generally. UnicodeDecodeError: 'ascii' codec can't decode byte generally happens when you try to convert a Python 2.x str that contains non-ASCII to a Unicode string without specifying the encoding of the original string. In brief, Unicode strings are an entirely separate type of Python string that does not contain any encoding. They only hold Unicode point codes and therefore can hold any Unicode point from across the entire spectrum. Strings contain encoded text, beit UTF-8, UTF-16, ISO-8895-1, GBK, Big5 etc. Strings are decoded to Unicode and Unicodes are encoded to strings. Files and text data are always transferred in encoded strings. The Markdown module authors probably use unicode() (where the exception is thrown) as a quality gate to the rest of the code – it will convert ASCII or re-wrap existing Unicodes strings to a new Unicode string. The Markdown authors can’t know the encoding of the incoming string so will rely on you to decode strings to Unicode strings before passing to Markdown. Unicode strings can be declared in your code using the u prefix to strings. E.g. u'my ünicôdé str?ng' type(my_u) <type 'unicode'>my_u = Unicode strings may also come from file, databases and network modules. When this happens, you don’t need to worry about the encoding. Gotchas Conversion from str to Unicode can happen even when you don’t explicitly call unicode(). The following scenarios cause UnicodeDecodeError exceptions: # Explicit conversion without encoding unicode('€') # New style format string into Unicode string # Python will try to convert value string to Unicode first u"The currency is: {}".format('€') # Old style format string into Unicode string # Python will try to convert value string to Unicode first u'The currency is: %s' % '€' # Append string to Unicode # Python will try to convert string to Unicode first u'The currency is: ' + '€' Examples In the following diagram, you can see how the word café has been encoded in either “UTF-8” or “Cp1252” encoding depending on the terminal type. In both examples, caf is just regular ascii. In UTF-8, é is encoded using two bytes. In “Cp1252”, é is 0xE9 (which is also happens to be the Unicode point value (it’s no coincidence)). The correct decode() is invoked and conversion to a Python Unicode is successfull: In this diagram, decode() is called with ascii (which is the same as calling unicode() without an encoding given). As ASCII can’t contain bytes greater than 0x7F, this will throw a UnicodeDecodeError exception: The Unicode Sandwich It’s good practice to form a Unicode sandwich in your code, where you decode all incoming data to Unicode strings, work with Unicodes, then encode to strs on the way out. This saves you from worrying about the encoding of strings in the middle of your code. Input / Decode Source code If you need to bake non-ASCII into your source code, just create Unicode strings by prefixing the string with a u. E.g. u'Zürich' To allow Python to decode your source code, you will need to add an encoding header to match the actual encoding of your file. For example, if your file was encoded as ‘UTF-8’, you would use: # encoding: utf-8 This is only necessary when you have non-ASCII in your source code. Files Usually non-ASCII data is received from a file. The io module provides a TextWrapper that decodes your file on the fly, using a given encoding. You must use the correct encoding for the file – it can’t be easily guessed. For example, for a UTF-8 file: import io with io.open("my_utf8_file.txt", "r", encoding="utf-8") as my_file: my_unicode_string = my_file.read() my_unicode_string would then be suitable for passing to Markdown. If a UnicodeDecodeError from the read() line, then you’ve probably used the wrong encoding value. CSV Files The Python 2.7 CSV module does not support non-ASCII characters ?. Help is at hand, however, with. Use it like above but pass the opened file to it: from backports import csv import io with io.open("my_utf8_file.txt", "r", encoding="utf-8") as my_file: for row in csv.reader(my_file): yield row Databases Most Python database drivers can return data in Unicode, but usually require a little configuration. Always use Unicode strings for SQL queries. MySQL In the connection string add: charset='utf8', use_unicode=True E.g. "localhost", user='root', passwd='passwd', db='sandbox', use_unicode=True, charset="utf8")db = MySQLdb.connect(host= PostgreSQL Add: psycopg2.extensions.register_type(psycopg2.extensions.UNICODE) psycopg2.extensions.register_type(psycopg2.extensions.UNICODEARRAY) HTTP Web pages can be encoded in just about any encoding. The Content-type header should contain a charset field to hint at the encoding. The content can then be decoded manually against this value. Alternatively, Python-Requests returns Unicodes in response.text. Manually. The meat of the sandwich Work with Unicodes as you would normal strs. Output stdout / printing locale is en_GB.UTF-8, the output will be encoded to UTF-8. On Windows, you will be limited to an 8bit code page. An incorrectly configured console, such as corrupt locale, can lead to unexpected print errors. PYTHONIOENCODING environment variable can force the encoding for stdout. Files Just like input, io.open can be used to transparently convert Unicodes to encoded byte strings. Database The same configuration for reading will allow Unicodes to be written directly. Python 3 Python 3 is no more Unicode capable than Python 2.x is, however it is slightly less confused on the topic. E.g the regular str is now a Unicode string and the old str is now bytes. The default encoding is UTF-8, so if you .decode() a byte string without giving an encoding, Python 3 uses UTF-8 encoding. This probably fixes 50% of people’s Unicode problems. Further, open() operates in text mode by default, so returns decoded str (Unicode ones). The encoding is derived from your locale, which tends to be UTF-8 on Un*x systems or an 8-bit code page, such as windows-1251, on Windows boxes. Why you shouldn’t use sys.setdefaultencoding('utf8') It’s a nasty hack (there’s a reason you have to use reload) that will only mask problems and hinder your migration to Python 3.x. Understand the problem, fix the root cause and enjoy Unicode zen. See Why should we NOT use sys.setdefaultencoding(“utf-8”) in a py script? for further details Answer #2: Finally I got it: as3:/usr/local/lib/python2.7/site-packages# cat sitecustomize.py # encoding=utf8 import sys reload(sys) sys.setdefaultencoding('utf8') Let me check: as3:~/ngokevin-site# python Python 2.7.6 (default, Dec 6 2013, 14:49:02) [GCC 4.4.5] on linux2 Type "help", "copyright", "credits" or "license" for more information. import sys reload(sys) <module 'sys' (built-in)> sys.getdefaultencoding() 'utf8' >>> The above shows the default encoding of python is utf8. Then the error is no more. Answer #3: This is the classic “unicode issue”. I believe that explaining this is beyond the scope of a StackOverflow answer to completely explain what is happening. It is well explained here. In very brief summary, you have passed something that is being interpreted as a string of bytes to something that needs to decode it into Unicode characters, but the default codec (ascii) is failing. The presentation I pointed you to provides advice for avoiding this. Make your code a “unicode sandwich”. In Python 2, the use of from __future__ import unicode_literals helps. Update: how can the code be fixed: OK – in your variable “source” you have some bytes. It is not clear from your question how they got in there – maybe you read them from a web form? In any case, they are not encoded with ascii, but python is trying to convert them to unicode assuming that they are. You need to explicitly tell it what the encoding is. This means that you need to know what the encoding is! That is not always easy, and it depends entirely on where this string came from. You could experiment with some common encodings – for example UTF-8. You tell unicode() the encoding as a second parameter: source = unicode(source, 'utf-8') Answer #4: In some cases, when you check your default encoding ( print sys.getdefaultencoding()), it returns that you are using ASCII. If you change to UTF-8, it doesn’t work, depending on the content of your variable. I found another way: import sys reload(sys) sys.setdefaultencoding('Cp1252') Answer #5: I was searching to solve the following error message: unicodedecodeerror: ‘ascii’ codec can’t decode byte 0xe2 in position 5454: ordinal not in range(128) I finally got it fixed by specifying ‘encoding’: f = open('../glove/glove.6B.100d.txt', encoding="utf-8") Wish it could help you too. Answer #6: "UnicodeDecodeError: 'ascii' codec can't decode byte" Cause of this error: input_string must be unicode but str was given "TypeError: Decoding Unicode is not supported" Cause of this error: trying to convert unicode input_string into unicode So first check that your input_string is str and convert to unicode if necessary: if isinstance(input_string, str): input_string = unicode(input_string, 'utf-8') Secondly, the above just changes the type but does not remove non ascii characters. If you want to remove non-ascii characters: if isinstance(input_string, str): input_string = input_string.decode('ascii', 'ignore').encode('ascii') #note: this removes the character and encodes back to string. elif isinstance(input_string, unicode): input_string = input_string.encode('ascii', 'ignore') Answer #7: In order to resolve this on an operating system level in an Ubuntu installation check the following: $ locale charmap If you get locale: Cannot set LC_CTYPE to default locale: No such file or directory instead of UTF-8 then set LC_CTYPE and LC_ALL like this: $ export LC_ALL="en_US.UTF-8" $ export LC_CTYPE="en_US.UTF-8" Answer #8: I find the best is to always convert to unicode – but this is difficult to achieve because in practice you’d have to check and convert every argument to every function and method you ever write that includes some form of string processing. So I came up with the following approach to either guarantee unicodes or byte strings, from either input. In short, include and use the following lambdas: # guarantee unicode string _u = lambda t: t.decode('UTF-8', 'replace') if isinstance(t, str) else t _uu = lambda *tt: tuple(_u(t) for t in tt) # guarantee byte string in UTF8 encoding _u8 = lambda t: t.encode('UTF-8', 'replace') if isinstance(t, unicode) else t _uu8 = lambda *tt: tuple(_u8(t) for t in tt) Examples: text='Some string with codes > 127, like Zürich' utext=u'Some string with codes > 127, like Zürich' print "==> with _u, _uu" print _u(text), type(_u(text)) print _u(utext), type(_u(utext)) print _uu(text, utext), type(_uu(text, utext)) print "==> with u8, uu8" print _u8(text), type(_u8(text)) print _u8(utext), type(_u8(utext)) print _uu8(text, utext), type(_uu8(text, utext)) # with % formatting, always use _u() and _uu() print "Some unknown input %s" % _u(text) print "Multiple inputs %s, %s" % _uu(text, text) # but with string.format be sure to always work with unicode strings print u"Also works with formats: {}".format(_u(text)) print u"Also works with formats: {},{}".format(*_uu(text, text)) # ... or use _u8 and _uu8, because string.format expects byte strings print "Also works with formats: {}".format(_u8(text)) print "Also works with formats: {},{}".format(*_uu8(text, text)) Here’s some more reasoning about this.
https://discuss.dizzycoding.com/how-to-fix-unicodedecodeerror-ascii-codec-cant-decode-byte/
CC-MAIN-2022-33
refinedweb
2,258
57.06
I am using a list comprehensive 'search' to match objects of my employee class. I then want to assign a value to them based on who matched the search. Basically the code equivalent of asking who likes sandwiches and then giving that person a sandwich. This bit works class Employee(): def __init__(self, name, age, favoriteFood): self.name = name self.age = age self.favoriteFood = favoriteFood def __repr__(self): return "Employee {0}".format(self.name) employee1 = Employee('John', 28, 'Pizza') employee2 = Employee('Kate', 27, 'Sandwiches') myList = [employee1, employee2] a = 'Sandwiches' b = 'Tuna Mayo Sandwich' matchingEmployee = [x for x in myList if x.favoriteFood == a] print matchingEmployee matchingEmployee.food = b If you want to append food to each employee that matched your filter you'd need to loop through the matchingEmployee list. For example: for employee in matchingEmployee: employee.food = b
https://codedump.io/share/PHI8RCkxJsRY/1/accessing-object-from-list-comprehensive-search-in-python
CC-MAIN-2016-50
refinedweb
139
59.9
MS Dynamics CRM 3.0 #include <iostream> class Base{ private: virtual void f(int) { std::cout << "f in Base" << std::endl; } Base* p = &derived; p->f(2); return 1; - Hide quoted text - > #include <iostream> > class Base{ > private: > virtual void f(int) { std::cout << "f in Base" << std::endl; } > }; > class Derived : public Base{ > public: > virtual void f(int) { std::cout << "f in Derived" << std::endl; } > }; > int main(void) > { > Base base; > Derived derived; > Base* p = &derived; > p->f(2); > return 1; > } > In base class f is decleared as private member function, and the > compiler complains that pointer p couldn't access private member in > base. -- A: Because it messes up the order in which people normally read text. Q: Why is it such a bad thing? A: Top-posting. Q: What is the most annoying thing on usenet and in e-mail? 1. The name of the function is looked up in the calling scope. 2. Overloads for that name are checked to find the best match. 3. The protection for that overload is checked. 4. Then IF the function is virtual, the final overriding function is called The difference between static and dynamic type only comes into play at step 4.
http://www.megasolutions.net/cplus/Why-polymorph-fails-when-virtual-function-is-decleared-private-in-base-class-and-public-in-derived-class_-77641.aspx
CC-MAIN-2015-40
refinedweb
200
67.89
The article is about the File Handling in Java. Before we begin, know that there are many different classes in Java related to File Handling. They have a lot in common aside from a few differences so the concept remains roughly the same. Some of these Classes are File, FileResource, FileReader, FileWriter, FileOutputStream, FileInputStream, BufferedWriter and BufferedReader. Typically anything with Input or Reader is used to read data from files, whereas anything with Output or Writer is used to write data to files. In this article, we’ll cover File, FileWriter and FileReader. FileWriter The FileWriter class is used to write data in the form of text to text files. It writes a stream of characters to the file, as compared to the stream of bytes that the FileOutputStream class writes. Before we use any of the methods in the FileWriter class, we must declare an object for it. FileWriter obj = new FileWriter(file_path) - obj is the name of the object which we creating from the FileReader Class. - The new keyword is used to create the object from the FileReader Class. - The file_path must be a string and a valid path to a text file on your computer. In the code below we are going to write to a file called “Data.txt”. If this file does not already exist, the FileWriter Class is create it for you. Remember to import the two following classes into your code as they are not included normally. Secondly, remember to include the throws IOException statement. It’s part of Java’s behavior to throw an IO exception when reading or writing. Finally, it’s the write() function that we use to insert text into the file. import java.io.FileWriter; import java.io.IOException; public class example { public static void main(String[] args) throws IOException { FileWriter file = new FileWriter("C://CodersLegacy/Data.txt"); String data; data = "File Handling in Java - CodersLegacy"; file.write(data); file.close(); } } Below is a screenshot of the text file created by the above code. If a file with this name had already existed, any data in it would have been wiped and overwritten with the below text. Remember to call the close() function on the object you created to free up memory resources for the program to use elsewhere. FileReader The FileReader class is used to read text data from files. It reads a stream of characters from the file, as compared to the stream of characters that the FileInputStream class reads. Before we use any of the methods in the FileReader class, we must declare an object for it. The syntax for this is the same as the FileWriter code if you swap out FileWriter for FileReader. FileReader obj = new FileReader(file_path) In this example we’ll use the same file we created in the FileWriter section. We’ll be using the read() method belonging to the FileReader Class. It is used to return a character’s ASCII integer value and returns -1 at the end of file. import java.io.FileReader; import java.io.IOException; public class example { public static void main(String[] args) throws IOException { FileReader file = new FileReader("C://CodersLegacy/Data.txt"); int data; while ((data = file.read()) != -1) { System.out.print((char)data); } file.close(); } } The read() function only returns one character at a time, so we have to employ the use of a while loop. Each iteration of the while loop moves the file to the next character. We convert the output to char type else an integer ASCII value would be displayed. If you want to display the text in the below format, use the print() function, not println(). Else one character would be displayed per line. File Handling in Java - CodersLegacy Once again, remember to call the close() function on the object you created. File The File class has many useful methods that prove useful during File Handling in Java. We’ll discuss the most useful ones here briefly. All the below functions take a file path as their input argument. First up, is the exists() function which is used to check if a file at the specified file path exists. It’s a good idea to use the exists() function before attempting to read or write from a specified file path. This helps prevent any nasty surprises down the road. The delete() function has the unique ability to delete a file at a specific file path. It’s advised to be careful while using this. You might end up permanently deleting a valuable file from your PC. The length() Function returns the length of a file in bytes. The mkdir() function takes as input a file path, and creates a directory at that location. You can think of it as creating a folder. The list() function takes a directory path as input, as returns the names of all the files in it, in Array format. Lastly, the createNewFile() is another way to create Files in Java. All you have to do is pass an appropriate path into it’s parameters. All these functions are methods of the File class, so you will have to call them in the following format, File.methodname. This marks the end of the Java File Handling section. Any suggestions or contributions for CodersLegacy are more than welcome. You can ask any relevant questions in the comments section below.
https://coderslegacy.com/java/java-file-handling/
CC-MAIN-2021-21
refinedweb
894
65.32
0 Hi, so i wrote a few of the programs on my own after all the help previosuly but i've got stuck in this one here. The point is the make the string alteast 'n' characters long by padding spaces to its right. #include <iostream> using namespace std; void padRight(char *&a, int n) { int diff = strlen(a) - n; if(diff >= 0) { return; } else { char *temp = a; //cout << temp << endl; a = (char *)malloc(n); memset(a,' ',n); while (*a++ = *temp++); //cout << a << endl; } } int main() { char p[] = "smiths"; cout << "before padding " << strlen(p) << " " << p << endl; char *a = p; padRight(a,10); cout << "after padding " << strlen(a) << " " << a << endl; }; output: > before padding 6 smiths after padding 10 _MONETm Please point me to the errors with explanations.
https://www.daniweb.com/programming/software-development/threads/168445/problem-in-padding-a-string
CC-MAIN-2017-26
refinedweb
126
57.57
So I was working on Python files and no matter how often I reloaded this Python file it kept setting tab width to 8 characters, despite my Python configuration file having:. This behaviour is controlled by the detectIndentation setting: it tries to automatically determine the indentation settings of the file on load. You can disable this by setting: detectIndentation false I have to say that detectIndentation is a bit flawed though. Why would it set tabs to 8 spaces when the whole Python file I am editing uses 4 space indents? It generally works quite well. In this case, it'll be choosing an 8 space indent because 80% of the lines are indented with a multiple of 8 spaces. Perhaps there's an argument to be made that the threshold should be greater than 80%. Aha, so it doesn't check the flow control indentation but the overall percentage of indentation present in the code. Mmm. The problem than arises when you have small files that have to do something like this: [code]class DoHickey(object): def init(self): code code code def method(self, arguments): more code more code more code[/code] You will quickly then trip the 8 space tab width. so with python we are useing 4 spaces for each indentation, but most python is two+ indentations levels. i guess it could get confused with multiples of your desired indentation. It seems I'm still getting this problem with the latest beta. (Coding in Matlab, I don't know if that makes a difference.) I also have several files where most lines are indented two levels, which trips the 80% rule, and the double indentations are interpreted as one. Maybe some special case considerations would be good, such as if >80% of the lines are indented at 2X the default indentation (or 4X, 6X, etc), and the remaining lines are >50% at 1X, 3X, 5X, etc of the default, then stick with the default!
https://forum.sublimetext.com/t/weird-tab-behaviour-in-beta-20100613/924/2
CC-MAIN-2016-18
refinedweb
328
66.88
-- ----------------------------------------------------------------------------- -- -- (c) The University of Glasgow 2012 -- -- Monadic streams -- -- ----------------------------------------------------------------------------- {-# LANGUAGE CPP #-} module Stream ( Stream(..), yield, liftIO, collect, fromList, Stream.map, Stream.mapM, Stream.mapAccumL ) where import Control.Monad #if __GLASGOW_HASKELL__ < 709 import Control.Applicative #endif -- | -- @Stream m a b@ is a computation in some Monad @m@ that delivers a sequence -- of elements of type @a@ followed by a result of type @b@. -- -- More concretely, a value of type @Stream m a b@ can be run using @runStream@ -- in the Monad @m@, and it delivers either -- -- * the final result: @Left b@, or -- * @Right (a,str)@, where @a@ is the next element in the stream, and @str@ -- is a computation to get. -- newtype Stream m a b = Stream { runStream :: m (Either b (a, Stream m a b)) } instance Monad f => Functor (Stream f a) where fmap = liftM instance Monad m => Applicative (Stream m a) where pure = return (<*>) = ap instance Monad m => Monad (Stream m a) where return a = Stream (return (Left a)) Stream m >>= k = Stream $ do r <- m case r of Left b -> runStream (k b) Right (a,str) -> return (Right (a, str >>= k)) yield :: Monad m => a -> Stream m a () yield a = Stream (return (Right (a, return ()))) liftIO :: IO a -> Stream IO b a liftIO io = Stream $ io >>= return . Left -- | Turn a Stream into an ordinary list, by demanding all the elements. collect :: Monad m => Stream m a () -> m [a] collect str = go str [] where go str acc = do r <- runStream str case r of Left () -> return (reverse acc) Right (a, str') -> go str' (a:acc) -- | Turn a list into a 'Stream', by yielding each element in turn. fromList :: Monad m => [a] -> Stream m a () fromList = mapM_ yield -- | Apply a function to each element of a 'Stream', lazily map :: Monad m => (a -> b) -> Stream m a x -> Stream m b x map f str = Stream $ do r <- runStream str case r of Left x -> return (Left x) Right (a, str') -> return (Right (f a, Stream.map f str')) -- | Apply a monadic operation to each element of a 'Stream', lazily mapM :: Monad m => (a -> m b) -> Stream m a x -> Stream m b x mapM f str = Stream $ do r <- runStream str case r of Left x -> return (Left x) Right (a, str') -> do b <- f a return (Right (b, Stream.mapM f str')) -- | analog of the list-based 'mapAccumL' on Streams. This is a simple -- way to map over a Stream while carrying some state around. mapAccumL :: Monad m => (c -> a -> m (c,b)) -> c -> Stream m a () -> Stream m b c mapAccumL f c str = Stream $ do r <- runStream str case r of Left () -> return (Left c) Right (a, str') -> do (c',b) <- f c a return (Right (b, mapAccumL f c' str'))
https://downloads.haskell.org/~ghc/latest/docs/html/libraries/ghc/src/Stream.html
CC-MAIN-2015-27
refinedweb
453
64.98
Static import in Java explanation with example : Using static import in Java, we can access any public static member of a class directly without using its class name. For example, to find the square root of a number , we have sqrt() method defined in Math class. Since it is a public static class (public static double sqrt(double a)), we can call it directly using the class name like Math.sqrt(4). But , we can also use static import of the class Math and call sqrt method directly like sqrt(4). Imagine a large class with thousand lines of code and we are using static methods like sqrt in each line. In that case, using static import will save us a lot of time as we don’t need to type the same class name again and again. Below example will help you to understand more about static import : Java example program without using static import : class Main{ public static void main(String args[]){ System.out.println("The value of PI is : "+Math.PI); System.out.println("Square root of 16 is : "+Math.sqrt(16)); } } It will produce the following output : The value of PI is : 3.141592653589793 Square root of 16 is : 4.0 Now, Let’s see how to use static import on this program Java example program using static import : import static java.lang.Math.*; import static java.lang.System.out; class Main{ public static void main(String args[]){ out.println("The value of PI is : "+PI); out.println("Square root of 16 is : "+sqrt(16)); } } This program will also print the same output as above. Only difference is that we have used two imports (static imports) in the beginning, so System.out.println() is written as out.println() and Math.PI,Math.sqrt() are written as PI,sqrt() respectively. Ambiguity : If two static imports have members with same name, it will throw an error. Because, it will not be able to determine which member to select in the absence of class name. import static java.lang.Integer.*; class Main{ public static void main(String args[]){ System.out.println(MAX_VALUE); } } This program will work . But : import static java.lang.Integer.*; import static java.lang.Long.*; class Main{ public static void main(String args[]){ System.out.println(MAX_VALUE); } } It will throw one compiler error stating as reference to MAXVALUE is ambiguous._ because MAXVALUE_ is present in both of the imported packages. Drawbacks of Static import : Use static import if your program needs frequent access to static members of a different class. But importing all static members from a class may harm the readability of the program. Because it is hard to find which class contains a value only by reading its name. Use it in your code , but make sure that it is understandable to other peoples as well. Similar tutorials : - SortedSet in Java explanation with Example - Java DayOfWeek explanation with example - What is Jagged Arrays in Java : explanation with examples - Java peek(), peekFirst() and peekLast() explanation with examples - Java Math incrementExact explanation with example - Java Math decrementExact explanation with example
https://www.codevscolor.com/static-import-in-java
CC-MAIN-2020-40
refinedweb
514
58.08
Closed Bug 605378 Opened 12 years ago Closed 11 years ago Fail to parse supported-calendar-component-set results from Chandler PROPFIND Categories (Calendar :: Provider: CalDAV, defect) Tracking (Not tracked) 1.2 People (Reporter: glen.a.ritchie, Assigned: sfleiter) References Details (Whiteboard: [CalDAV server: Chandler]) Attachments (1 file, 1 obsolete file) User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv:1.9.2.10) Gecko/20100914 Firefox/3.6.10 Build Identifier: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv:1.9.2.9) Gecko/20100915 Lightning/1.0b3pre Thunderbird/3.1.4 Recently I had issues after the upgrade to lightning 1.02b and Thunderbird 3.1.4 which broke CalDAV calendar support while using Chandler(Cosmos) server. Specifically I was unable to create events on the calendar, as the calender was simply not listed in the available calendars when trying to create an event. Reproducible: Always Steps to Reproduce: 1. Add network calendar from Chandler CalDAV 2. Attempt to create event on new calendar Actual Results: Unable to create event on the new calendar Expected Results: Should be able to create events on the calendar - Chandler Server 1.1.0. While investigating the cause, I determined that the CalDAV module was not parsing the PROPFIND results correctly. Chandler was returning(among others) XML like the following string: <C:comp C: The code in calDavCalendar.js attempts to parse ( ) the XML but fails to retrieve the "name" attribute due to the attribute having a namespace associated with it and as a result, believes the server is unable to accept events. Changing the line 1393: let comp = sc.@name.toString(); to let comp = sc.@*::name.toString(); Fixes the issue by allowing it to retrieve the "name" attribute no matter what the namespace. I freely admit my lack of understanding of XML and namespaces, so please feel free to make a better fix instead of my workaround. Version: unspecified → Lightning 1.0b2 That response returned by Chandler seems OK, even if I am not sure that the attributes really "inherits" the CALDAV: namespace from the xml element "C:comp" Chandler server would probably have much better interoperability if they tried to match their responses with the examples in rfc 4791 (i.e.: never specify the namespace on attributes). That being said I agree with the filer's recommended fix. Status: UNCONFIRMED → NEW Ever confirmed: true Whiteboard: [CalDAV server: Chandler] OS: Windows XP → All Hardware: x86 → All I repeat the most important points from duplicate bug 707231 I created: The XML Chandler/Cosmo does create is right. At least according to "xmllint --debug" which dumps a debug tree of the in-memory document the C namespace prefix for the attribute is valid but redundant, so both kinds of writing the "name" attribute should work. If sc is in the namespace C, shouldn't sc.@name match to name attributes in namespace C? C:name and name are equivalent so the parsing code should handle that. My guess is that sc.@*::name.toString(); matches to name attributes in any namespace which would be wrong. Where do I find the documentation to the XML parsing done here: let comp = sc.@name.toString(); ? I would very much like to understand that code. Thanks in advance. The sc.@name.toString() thing is the javascript xml extensions called "e4x". You could try setting: default xml namespace = C just under the definition of the namespace in that function. I assume that its not using the right default namespace. If that doesn't work, instead of accepting any namespace, I'd suggest using sc.@C::name because the attribute should be from the CalDAV namespace and not from some other namespace. Stefan (Fleiter) I'd appreciate if you could put together a patch, as you seem to have a good grip of the code! A patch for that file would be a start, ideally you could clone and create a hg diff --git there. Thank you Philipp for your support, the word e4x was a good starting point. This is very much appreciated. I tried what you suggested, here the results: - Declaring the default namespace does not change anything: - using sc.@C::name does not match on attributes which do not have the namespace declared themselves which is quite a surprise. I will continue with this as time permits. I had a look at the XML Namespace spec: According to "The namespace name for an unprefixed attribute name always has no value." That means that an attribute does *not* inherit the namespace of its element. A good description of this fact and the reasons for it can be found here: So the output of "xmllint --debug" was misleading. So that is a bug in cosmo/chandler and *not* in Lightning. As tested now the cosmo/chandler cloud instance hub.chandlerproject.org does not have this bug anymore. The latest source release of cosmo still has the problem, though. See and especially the code DomUtil.setAttribute(e, ATTR_CALDAV_NAME, NAMESPACE_CALDAV, type); Since cosmo, the ClaDAV server of chandler, is a dead project I do not think that this will be fixed in the source in the future. I do not know how many companies/organizations have cosmo servers running or will setup new ones and whether a workaround should be implemented. If no workaround is wanted this should be closed as INVALID. If a workaround would be accepted I could create one. Sorry for bugspam, last comment for today: A fix could look like the fallback logic here: Ok, since @C::name doesn't work I think its ok to just workaround using @*::name, adding a comment that this is needed for cosmo. Are there any other places you know of where cosmo adds a namespace to the attribute? If so we should add workarounds there too. Thank you for your detailed analysis That would be Especially interesting is the FOO_BAR calendar component type. ;-) No I do not know (yet) any other place where cosmo uses a namespaced attribute. If I find one I will report it here. Yep, lets go with that version. Could you attach a patch to this bug and ask me for review? Will do that during the next days. This patch does not help until bug 588799 is fixed, too. So that should be next. :-) Comment on attachment 581066 [details] [diff] [review] workaround for wrong namespace of name attribute in comp element of supported-calendar-component-set reponse Having slept a night over this the comment is quite misleading, sorry. I will generate a better patch when I am at my private computer again. This time with better (not misleading) comment. Attachment #581066 - Attachment is obsolete: true Attachment #581352 - Flags: review?(philipp) Assignee: nobody → stefan.fleiter Status: NEW → ASSIGNED Comment on attachment 581352 [details] [diff] [review] workaround for wrong namespace of name attribute in comp element of supported-calendar-component-set reponse (better comment) Review of attachment 581352 [details] [diff] [review]: ----------------------------------------------------------------- r=philipp and approval for comm-aurora since its low risk ::: calendar/providers/caldav/calDavCalendar.js @@ +1609,5 @@ > if (supportedComponentsXml.C::comp.length() > 0) { > thisCalendar.mSupportedItemTypes.length = 0; > for each (let sc in supportedComponentsXml.C::comp) { > + // accept name attribute from all namespaces to workaround Cosmo bug > + // see I think it sufficient to write bug 605378 comment 6, as we've done this the same in other places. I'll fix this before checkin. Attachment #581352 - Flags: review?(philipp) → review+ Pushed to: comm-central changeset 55b341cdcf82 releases/comm-aurora changeset 0274ebc20d7f Status: ASSIGNED → RESOLVED Closed: 11 years ago Resolution: --- → FIXED Target Milestone: --- → 1.2
https://bugzilla.mozilla.org/show_bug.cgi?id=605378
CC-MAIN-2022-33
refinedweb
1,267
66.23
Python Musings #1: Reading raw input from Hackkerank Challenges Want to share your content on python-bloggers? click here. As some of you may or may not know, Hackkerank is a website that offers a variety practice questions to work on your coding skills in an interactive online environment. You can work on a variety of languages like, Java, C, C++, Python and more! There are a lot of high quality questions that can really challenge your present coding and problem solving skills and help you build on them. When I started out, I found that reading raw data was more challenging than writing the rest of the solution to the problem; This blog post is to show how to read raw data as lists, arrays and matrices and hopefully shed some light on how to do this in other problems. I’m sure there are other more effective ways to be reading raw input from Hackerrank, but this has worked for me and I hope it will be helpful for others as well; Sorry in advance if my code appears to be juvenile. To solve these problems, I will be working with Python 3. Step 0: Reading Raw input To read a line of raw input. Simply use the input() function. While this is great for reading data. It gives it to us in the raw form, which results in the data being received as a string- which isn’t any good if we want to do calculations. Now that we know how to read raw input. Lets now get the data to be readable and in the form that we want for solving Hackerrank challenges. Step 1: Reading Lists Lets look at a problem that we can to read raw input as a list to solve. This problem is called “Shape and Reshape“. This involves reading raw, space-separated data and turning it into a matrix. Turning the data into a matrix can be done by using the numpy package. But to do this, the data first needs to be made into a list. I do this by first reading the raw data with input().strip().split(' '). Lets explain what each part of this code does. input() takes in the raw input. The .strip() method clears all leading and trailing spaces. While not necessary for our example, it is good practice to do this so as not to have to encounter this problem in other cases. The .split(' ') method splits the raw data into individual elements. Because the data is space separated we define the the split to be a space. We could technically write split() without anything in the middle (the default is white-space) but for this example we are defining the separator as an individual space character. The problem now is that the data needs to be converted to a proper form. Remember, input() reads all raw data as a string. For our problem, our data needs to be in integer form. This can be done by iterating the int() function across the elements in our list. The code we use is thus: n = input("Write input: ").strip().split(' ') data=[int(i) for i in n] # Print the data to see our result (Not required for the solution) print(data) Looking at the for-loop may not seem intuitive when first looking at it if you are used to writing for-loops traditionally. But once you learn how to write for loops this way, it definitely will be more preferred. As someone who has come to learn Python after initially spending a lot of time with R. I would describe this method of writing a for loop analogous to R’s sapply() or lapply() functions. And there you have it! With 2 lines of code our data is ready to solve the problem! (For actually solving the problem, you will have to figure it out yourself or you can check out my code on my Python Musings Github Repository (Its still a work in progress, messy code and all) Step 2: Reading Arrays After looking in the discussions for this problem the raw data can be read directly as an array using the numpy package. The code is: import numpy as np data = np.array(input("Write input: ").strip().split(' '),dtype= int) # Print the data to see our result (Not required for the solution) print(data) Essentially we put our raw input that we put in as a list before directly into numpy’s .array() function. To coerce the data into integers we define the data type as an integer. Therefore we set dtype=int in the .array() function…. and there you have it! An array of integers! Step 3: Reading Matrices For reading matrices, lets look at the problem titled “Transpose and Flatten“. This requires reading a matrix of data and transposing it and flattening it. While doing those operations are pretty straight forward. Reading the data might be a challenge. Lets first look at how the input format is. We want to have to read data in a way that will let us know the number of rows and columns of the data followed by the elements to read. To do this, the code is: import numpy as np n,m =map(int,input().strip().split()) array= np.array([input().strip().split() for i in range(0,n)],dtype=int) print(array) Lets now break down the code: Python has a very cool feature that you can assign multiple variables in a single line by separating them by a comma. So we can immediately assign our rows and columns from our input in a single line. To assign differing values, we use the map() function on the split input. To assure that the values are in integer form we apply the int function to both variables. To get the rest of the data, we can read it directly into numpy’s .array() function, iterating across the number of inputs we know we will be having (i.e. the number of rows). With this, we get the matrix we want for our input and can now work on the problem! Conclusion Doing challenges on Hackerrank is a good way to build skills in knowing how to write code and problem solve in Python. I personally found it initially challenging to read data in a form I wanted it. I hope this article shed some light on solving these problems! Be sure to check out my Python Musing’s Github repository to see where I am in my adventures! Want to share your content on python-bloggers? click here.
https://python-bloggers.com/2020/06/python-musings-1-reading-raw-input-from-hackkerank-challenges/
CC-MAIN-2021-10
refinedweb
1,104
71.75
27 December 2010 13:30 [Source: ICIS news] By Truong Mellor and Julia Meehan ?xml:namespace> LONDON “January is looking firm on both benzene and styrene,” said one trader. “It appears to be driven by demand and not crude.” Delegates at the 9th European Aromatics & Derivatives Conference in Strong demand from the styrenics chain was the focus of some for 2011, with players noting that the substitution of polypropylene for polystyrene had come to a head. Others were optimistic about some of the smaller downstream sectors, like expandable polystyrene, for avenues of future growth. With reformers expected to run at lower rates next year due to weakening demand for gasoline, many expected benzene availability to remain precarious in 2011. Additionally, the “The Some were predicting that imports into “Demand will stay strong, and we could see some arbitrage opportunities for European players open up in 2011,” said one trader. While it was still too early to gauge the impact of new operations, such as Styron and Styrolution, on the market, several players were expecting spot activity to pick up in 2011. With fewer integrated players, “The status quo will continue to an extent, because it is difficult for these new bodies to completely remove themselves from their parent companies,” said one source. “However, we might see some more entrepreneurial behaviour.” This will prove crucial for styrene players in a market that will be increasingly tough, and further consolidation plans may emerge in 2011. Despite the current bullishness, there were still some observers that continued to exercise caution. “We have seen a hike in crude prices in 2010 despite supply/demand fundamentals,” said one consultant. “There are record stocks in the There was also some concern that continued high pricing on crude and energy would begin to eat into discretionary consumer spending, which would in turn have a depressing effect on key end-use markets. One source felt that the market now risked moving into “demand destruction” territory, adding that strong and sustainable downstream demand was unlikely. January has traditionally been a strong month as players seek to replenish inventories following the holiday period when stocks are run down. Despite talk of a bullish opening to the year, recent indications regarding the automotive, packaging and housing industries in However, there was some optimism that underperforming domestic sectors could be counterbalanced by strong export demand in 2011. Sources noted that markets such as Asia and e With volumes from the Toluene was predicted to remain balanced, with no major structural changes on the horizon. Demand from emerging chemical markets such as In the paraxylene (PX) market, downstream demand for purified terephthalic acid (PTA) and polyethylene terephthalate (PET) was expected to remain strong in 2011. Buyers of PX were unconcerned about feedstock availability. “There will be enough PX in the market. Producers will simply run in order to meet our demand. There are only a few PX buyers left so availability should not be a problem,” a major buyer said. However, in the orthoxylene (OX) market, a major buyer and producer of phthalic anhydride did sound concerned about availability. This was partly attributed to strategic decisions upstream, with questions over whether refineries would continue to see value in producing OX. “The availability of raw materials [in 2011] is a big question mark. Are refineries willing to produce OX?” a producer said. This was also partly attributed to OX producers wanting to increase captive use of the material. To discuss issues facing the chemical industry go to ICIS connect For more on aromatics
http://www.icis.com/Articles/2010/12/27/9420685/outlook-11-europe-aromatics-look-strong-on-healthy-demand.html
CC-MAIN-2014-23
refinedweb
589
51.68
dot. Docker uses the concept of a Standard Container which contains a software component along with all its dependencies - binaries, libraries, configuration files, scripts, virtualenvs, jars, gems, tarballs, etc. – and can be run on any x64-bit Linux kernel that supports cgroups. Such containers can be deployed on a laptop, on a distributed infrastructure, in the cloud, etc., preserving its environment, making it appropriate for a broad range of uses: continuous deployment, web deployments, database clusters, SOA, etc., as Mike Kavis explained on his blog: The use case that was relevant to me, the application guy, is to use Docker to streamline a continuous delivery process. In every place that I have worked in my career, from the mainframe days, to the client server days, to the cloud days, getting the different environments in sync and successfully testing applications has been a nightmare. When code moves from Dev to QA to Stage to Prod, no matter how good or bad our processes were these environments were NEVER the same. The end result was always a hit in the quality of a production release. “It worked in test” became the most shrugged off phrase since “the check is in the mail”. With Continuous Delivery (CD), the entire environment moves with the code from Dev to QA to Stage to Prod. No more configuration issues, no more different systems, no more excuses. With CD, if it didn’t work in Prod it didn’t work in Test. With Docker, I can see writing scripts to automate the CD process. I can see gains in speed to market because of how quickly new environments can be created without dealing with all of the setup and configuration issues. Solomon Hykes, CEO of dotCloud, demoed Docker at PyCon, explaining that it’s a repeatable lightweight virtualization solution because “it’s isolated at the process level and it has its own file system”. The API enables system administrators to execute a number of operations on containers: start, stop, copy, wait, commit, attach standard streams, list file system changes, etc. Some of Docker’s main features are: - File system isolation: each process container runs in a completely separate root file system. - Resource isolation: system resources like CPU and memory can be allocated differently to each process container, using cgroups. - Network isolation: each process container runs in its own network namespace, with a virtual interface and IP address of its own. - Copy-on-write: root file systems are created using copy-on-write, which makes deployment extremely fast, memory-cheap and disk-cheap. - Logging: the standard streams (stdout/stderr/stdin) of each process container are collected and logged for real-time or batch retrieval. - Change management: changes to a container's file system can be committed into a new image and re-used to create more containers. No templating or manual configuration required. - Interactive shell: docker can allocate a pseudo-tty and attach to the standard input of any container, for example to run a throwaway interactive shell. So far, Docker has been tested with Ubuntu 12.04 and 12. 10, but it should be working with any Linux 2.6.24 or later, according to dotCloud. It can also be installed on Windows or Mac OS X via VirtualBox using Vagrant. Docker was written in Go, and uses Linux cgroup and namespacing, AUFS – file system with copy-on-write capabilities-, and LXC scripts. Community comments
https://www.infoq.com/news/2013/03/Docker/
CC-MAIN-2021-31
refinedweb
568
53
In Visual Studio 11 Developer Preview we’ve streamlined and modernized our vast array of Find experiences. Find and Replace is now a lightweight control within documents, delivering incremental search results as you type. The Find and Replace in Files dialog has been simplified while at the same time gaining additional functionality such as Find Next and Find Previous. Both Find experiences now let you use .NET Regular Expressions to perform complex search and replace operations, even across multiple lines. The new Find and Replace Control The new Find control sits at the top of the document as a search box. Ctrl + F now brings up this control, instead of the dialog. The new control affords all the capabilities of the dialog. You can Find Next, Find Previous, set Search Options, Replace etc. Performing a simple Find With the new Find control, you can perform a search exactly as you would with the VS 2010. Just hit Ctrl+F and type the search term. What you would notice however, is that now the search is instant and incremental. As you type, you get the matches highlighted and the first match selected by default. No need to click an additional Find Next button. Hit Enter to navigate among the matches in the document. Performing an Advanced Find As I mentioned earlier, the new Find control affords all the capabilities of the Find Dialog. This means that you can search using advanced options (Match Case, Match whole word, Use Regular Expression) just the same as in the Find dialog. For this, simply click on the drop down next to the magnifying glass, in the search box. You can also hit Alt+Down when inside the search box. Notice that setting Search Options acts like a filter instead of starting a new search. So if you searched for “Button”, without Match case, this would highlight all matches – button as well as Button. Checking Match case would filter this result set to only matches for Button (with the uppercase “B”) MRU (Search History) Find control maintains a history of the previously performed searches of up to five terms. This is displayed as a MRU (Most Recently Used) list. The list can be accessed by bringing up the dropdown from within the Find search box (or hitting the Alt+Down arrow key) Performing a Replace You can perform a Replace operation right within the new control. Just hit Ctrl+H / click on the drop down arrow to the left of the search box. This brings up the expanded version of the Find & Replace control. The replace box is below the Find box. Type your replace term and Hit Enter, or Atl+R or the Replace Next or Replace All buttons. Replace Next and Replace All Changing the Scope The Find control allows you to change the scope for Find & Replace operations. To change the scope, click on the Drop down arrow, as in the case of Replace. This brings up the expanded Find control with the scope selection dropdown. “No Results” Alert The Find control allows you to quickly find out when there are no results for the current search term with the current search criteria. When there are no results to show, the search box is highlighted with a red border indicating that there are currently no results to display. Using Regular Expressions (.NET Regular Expression syntax only) One of the major changes we have made, in response to customer feedback, with the new Find experience is moving away from the POSIX style regex syntax to .Net Regex syntax. You can do all your searches using the familiar .Net style regular expressions. The VS2010 style regular expression syntax has been discontinued. So, to search for a “Start Game”, I would type In Visual Studio 11 Developer Preview: Start\s+Game In VS 2010 (This will not work in VS 11 Developer Preview): Start:b+Game A complete reference for .NET Regular expressions can be found here .NET Regular Expression Cheat Sheet. Find In Files Dialog You can still do everything that you were used to doing with VS2010 Find Dialog with the new experience. To bring up the Find In Files dialog, simply hit Ctrl+Shift+F (or Ctrl+Shift+H for replace). This brings up the familiar dialog. We have simplified this dialog to a large extent. There are just 2 tabs – Find In Files and Replace In Files. Now there are only two tabs – Find In Files and Replace in Files. We have added Find Next and Find Previous. Note that you can also bring up the Find In files dialog from within the Find control, in the Search Options dropdown. Find Symbol has been discontinued With the new Find experience we have relooked at the Find dialog and redesigned it for simplicity. Based on the usage data we have received from our Customer Experience Improvement program, the Find Symbol features was rarely ever used by our customers. With the new Find, the support for Find Symbol has been removed from the UI. However, you can still search for symbols using the Find All References from within the Editor. Find & Replace Cheat Sheet You can of course use the same short-cuts and accelerator keys that you are used to with the Find dialog, with the Find control. Here is a list of shortcuts and accelerator keys you can use to work with Find & Replace in Visual Studio 11 Developer Preview. Shortcuts: Accelerator Keys: We want to hear from you! Because Find is such a critical and integral part of the Developer experience, we need your feedback on how well the new experiences are (or are not!) working for you. We are listening to your feedback. If you have any additional questions, feel free to leave comments below. If you’re experience problems with Find & Replace, please file a Connect Bug, and we’ll investigate You can write to us here: VSFindFeedback2@microsoft.com You can also send your feedback from within the Find control: Thanks, Murali Krishna Hosabettu Kamalesha Program Manager – Visual Studio Code Experience Known Issues As you might imagine, with the new release, we have completely revamped the Find experience for our customers. With this huge change, there are some known issues and unfortunately we didn’t have time to fix by the time of this release of Visual Studio 11 Developer Preview. However, these issues are already fixed in internal builds and you would be able to get these when you install a subsequent release J. Unable to Find All with the Current Document scope In the current version, you will not be able to perform a Find All when the scope is current document. However, when you search for a term using the new Find Control, all of the matches in the current document are highlighted. This should hopefully allow you to work around this issue. And like I mentioned, this issue has been fixed for a subsequent release. Find in Files does not clear the Find results In the current version, when you perform a Find in Files, the results are appended to the Find Results window. The results of the previous search are not cleared. To work around this issue, you can still clear out the Find Results window manually. We understand this is inconvenient and this has been fixed, but you will have to wait for a subsequent release. J Search Criteria is missing from the Find Results window When you perform a Find All operation, the search results are populated in the Find results window. In the current version, the results are still populated, but a line mentioning the search criteria is missing. For subsequent releases, this issue is also fixed. This is very much the same as the extension I already use for 2010 I tried sending an email to the address listed. I got this response: Delivery has failed to these recipients or groups: VSFindFeedback2@microsoft.com Your message can't be delivered because delivery to this address is restricted. Although I program in .NET, I've become very accustomed to the regular expression find/replace syntax in Visual Studio through version 2010. I'm willing to change my ways. However, one thing I'm not seeing a good workaround for is a .NET alternative to the :i pattern, which matches an identifier. I'll be very unhappy to change to using [a-fA-F_][a-fA-F0-9]*. That's a LOT more typing. I don't see a .NET equivalent to :i. Please correct me if I'm wrong. Yep, +1 for @MysticTaz. What about Replace with and 1 syntax? Don't want to name my matches and refer to names afterwards, because that's also too much typing. I don't see how the new interface for Find & Replace is better than what is in VS2010. Please forgive me for being frank, but I think you have wasted your time redoing what was working all right already. Now, the waste would have bothered me much less if you weren't constantly reminding us that you can't get us the features we want (like support for C++11, which you said isn't going to improve much in vNext) because you don't have infinite resources. Well, here is a good portion of your precious resources, wasted on something I am not even sure adds value… Way to go, Microsoft (sad). "Find Symbol has been discontinued" Alt+F12? Nooooo… I use that all the time! Find All References requires you to actually go to a reference so you can right-click it. Alt+F12 allows you to just type a substring of the thing you want. I'd use the new Ctrl+Comma thing in VC2010 but you didn't include it in the Express edition. Class View has a filter, but it's clunky to use. Ctrl+Shift+C takes focus to the pane, but not to the search filter text box. You expect it to operate in real time, Windows Explorer filter style, but you wait for a while and then realise you have to press enter. And it doesn't find substrings, just prefixes. Perhaps these things will be improved with the changes to Solution Explorer. But, if you're going to remove Find Symbol, please at least add Navigate To to the Express Edition! @PleaseFixYourBugs The people involved in the Find work are not on the C++ team, the alternative to them doing the find work would not to have been doing C++11 work. >I think you have wasted your time redoing what was working all right already There has been a lot of feedback from a lot of people that it *wasn't* working all right. VS has a huge user base, so while something may be appear to be 'working all right' to some percentage of them it can be roundly disliked by a much larger percentage. Ryan @Ryan Molden: "The people involved in the Find work are not on the C++ team, the alternative to them doing the find work would not to have been doing C++11 work." OK, what about improving the performance of the IDE? This is a #1 issue on the user voice site. Could people involved in the Find work instead have worked on that? (Of course, they could.) "There has been a lot of feedback from a lot of people that it *wasn't* working all right." Could you elaborate on what this feedback was?? If there is nothing else new, did the above feature warrant a rewrite?? Given that it comes at the expense of another feature (Find Symbol)? Given that it can be easily offered via an add-in? Given that there are plenty of other, much more important problems, eg, a dog slow IDE? I don't think so. >OK, what about improving the performance of the IDE? This is a #1 issue on the user voice site. Could people involved in the Find work instead have worked on that? (Of course, they could.) The people involved did work on performance bugs as well, as did all other people on the team. Performance is another interesting issue in that with a user base the size of Visual Studios and a HUGE variety of usage patterns (i.e. solutions with small numbers of projects, solutions with hundreds of projects, single language solutions, polyglot solutions, hundreds of third party plugins, etc…) some people see terrible performance problems (for which we are collecting data to figure out why via the PerfWatson stuff), others see no performance issues. With PerfWatson we are gathering actual hard data on what is causing the issues for people so we can fix them. >Could you elaborate on what this feedback was?? This was available in the old version as well, it was called Incremental Search and didn't have a dialog, so that isn't new. >Given that it comes at the expense of another feature (Find Symbol)? Everything comes at the expense of another feature, if we had done 'Find Symbol' it would have come at the expense of some other feature that someone else perhaps valued more highly than your choice of 'Find Symbol'. Also the editor team wouldn't be the one doing Find Symbol as that would be language specific thus the language teams would need to be the ones doing that work, and each and every one of them would need to do it, at least until Rosyln integrates with the product. >Given that there are plenty of other, much more important problems, eg, a dog slow IDE? I don't think so. Again, you seem to imagine that your ranking of what is important is universal. Teams decide on feature work based on broad customer feedback/requests. Deciding that all customers are say up in arms because of the top posting on User Voice is suffering from a bit of selection bias. Performance is important, and there is work being done on it, a lot of work. This is a complex area due to the massive size of our user base and the huge number of different configurations/usage patterns. That said we are always hiring people passionate about development tools and making VS better. Ryan @Ryan: …" So what? You just traded these complaints for complaints that (1) it is annoying to have a search box appear as part of the window instead of in a separate dialog, that (2) it is annoying to have a search box obscure the top row of characters every time without any smarts, that (3) there are still too many options (you haven't thrown anything away, have you? if you have, replace this complaint with "you have thrown away the feature I used"). In addition to that you will have a new complaint that things work differently from how they used to work. Thoroughly unconvincing. "This was available in the old version as well, it was called Incremental Search and didn't have a dialog, so that isn't new." So, there isn't even anything new. Why did you rework the feature?? "Everything comes at the expense of another feature, if we had done 'Find Symbol' it would have come at the expense of some other feature that someone else perhaps valued more highly than your choice of 'Find Symbol'." Oh, right, "our resources are not infinite" argument again. First, 'Find Symbol' was already there in the code, you have chosen to rewrite that code for the reason I am trying to understand, and in the process you have thrown the feature away. Second, the reason I am writing what I am writing is exactly that you seem to have chosen to waste your non-infinite resources reworking something that already worked. Seriously, why did you rework the old Find & Replace? I don't get it. You said that "there has been a lot of feedback from a lot of people that it *wasn't* working all right". Have you seen this feedback?? What was it? You seem to back off now saying that you are not on the editor team. Can you get someone from the editor team to clarify? Why did you rewrite the feature? In what way the new code is better than the old? Geez… And to think of it, this is what you have chosen to blog about. Something is rotten in the kingdom of Denmark. @Ryan: "Deciding that all customers are say up in arms because of the top posting on User Voice is suffering from a bit of selection bias." Brilliant. With this, if you don't like the results of your poll, you can freely ignore it. Way to go. @PleaseFixYourBugs >Brilliant. With this, if you don't like the results of your poll, you can freely ignore it. Way to go. No, I was simply pointing out that saying that something is at the top of the site == it is automatically slated for the next release is not accurate, I don't believe the site ever said that, in fact it says 'This site is for suggestions and ideas.'. >Second, the reason I am writing what I am writing is exactly that you seem to have chosen to waste your non-infinite resources reworking something that already worked. Again, worked for you, others had different views of the matter. Reasonable people can disagree. >Thoroughly unconvincing. I wasn't necessarily trying to convince you, I was trying to simply respond to your complaints with an alternative perspective. If you aren't interested in that then I apologize. Ryan To be honest, I also don't get what exactly is the improvement of the new Find & Replace dialog over the old one. @PleaseFixYourBugs: If you ever noticed people complained about Search Dialog in IE7 (yes, it's Internet Explorer 7) and earlier versions, I think that's what people are complaining with *modal-dialog-based* Search Dialog in VS. Well, at least for me. You know that modal dialog is obstructing/blocking visually any text/content behind it, and it's visually annoying for me to have it when searching. By making it inlined as this post has shown, VS can save a significant portion of UI space so that user can FOCUS more on the text editor than the Search Dialog itself when searching obviously. Modal dialog, like in searching, is personally distracting my focus and visually obstructing what I should actually look at when searching: it's the text editor and NOT the Search Dialog. And I think it improves user productivity and user experience so that user is doing no more work on 1) switching between Search Dialog and the text editor, and 2) moving the Search Dialog somewhere so that it won't visually obstruct any text/content in the text editor. So, does this finally make any sense to you? @Maximilian Haru Raditya: The Find & Replace dialog in VS2010 is *modeless*, not modal. It has been modeless since forever.. So: "So, does this finally make any sense to you?" …no, it does not many any sense to me. I remain confused as to the reasons for the rewrite. The rewrite that doesn't add any value is a waste, and this particular rewrite doesn't seem to add a lot of value. Ryan: "I wasn't necessarily trying to convince you, I was trying to simply respond to your complaints with an alternative perspective. If you aren't interested in that then I apologize." What is the alternative perspective? I am honestly trying to find out. I ask for the third time, why did you rewrite the feature? This is a simple question. You said there was a lot of negative feedback on the old dialog: "There has been a lot of feedback from a lot of people that it *wasn't* working all right." Yet the new, reworked Find & Replace system seems to behave almost completely like the old system (you added the highlighting but lost Find Symbol). What was the feedback? What did you improve? Please be exact. I know that different people can have different perspectives. I am asking a simple thing: what was your perspective when you decided to rewrite the feature? I simply don't understand why you thought it was a good idea to rebuild what is essentially the same functionality. Why?? @PleaseFixYourBugs: Sorry, it seems I've caused confusion by using incorrect technical terms. What I meant actually there two types of search box AFAIK: 1) with a modal/modeless window, and 2) inlined. In modal/modeless window, it still uses a separate container: window. While in inlined, there's no such window and instead it blends into the parent container: the text editor. So, it's all about *inlined* search experience, without window or similar container. As I said, I think you could use the analogy of inlined search in a browser like IE/Firefox/Chrome. And notice how it has given a better user experience to the users. >. That's what I've said previously, I don't actually like moving the modeless search window when it obstructs the text behind. Moving thing unnecessarily took my effort, and I don't want to do that *repeatedly* as I often use the search box. By making it inlined, it becomes effortless for me. Also docking is very limited in VS. You *can't* dock it at a specific location inside the text editor. I don't know exactly what the correct term is, but perhaps it sorts of subdocking/multi-level docking. OK, aside from this whole endless conversation about this feature.? ok, I really don't want to get into this little argument, but @Ryan, your argument is intellectually dishonest. > >Brilliant. With this, if you don't like the results of your poll, you can freely ignore it. Way to go. >No, I was simply pointing out that saying that something is at the top of the site == it is automatically slated for the next release is not accurate No, that's not what you were pointing out. What you were pointing out is this: > >Given that there are plenty of other, much more important problems, eg, a dog slow IDE? I don't think so. > Again, you seem to imagine that your ranking of what is important is universal. You were implying that it is wrong to assume that there are "plenty of other, much more important problems, eg, a dog slow IDE", and that people who think performance is a bigger issue than search/replace are out of touch with your userbase as a whole. PleaseFixYourBugs believes that to the users of VS, performance is seen as a bigger issue than search/replace. You strongly implied that this is incorrect, that it would be wrong to assume this prioritization to be universal. Is that so? Do you have data indicating that more people care about search/replace? If so, you had no reason to suddenly pretend that you were talking about something entirely different. And if not, I think you owe @PleaseFixYourBugs an apology for trying to brush off his concern with an invalid argument, and then denying that you did so. If your first statement was incorrect, admit it. Otherwise stick by it. Please don't just pretend that what you said was something entirely different. That's just insulting. @Maximilian Haru Raditya: I get what you are saying about the separate container, but I don't get why having the feature inline vs having it in a separate container is so important. Yes, some users might prefer the search controls be part of the window, but I can assure you that some prefer the search controls be in a separate window too. One reason would be that a separate window can more readily host controls customizing the search options. Another reason would be that a separate window is what the UI guidelines suggest you do, and is a standard interface, unlike the inline controls. I believe one could argue from both sides here and it is puzzling to me that the dev team would choose to spend their time switching from one side to the other "just because", without making any significant improvements to the feature. The example with IE doesn't count since the Find dialog in IE was truly modal. ?" True. Yes, I, my team, as well as many other people did told the dev team about the specific problems we are having. We did this via the Connect site (lots of bug reports), via email, via blogs, via polls, etc, etc, for years. Most of the time the response from Microsoft (when they grace us with such, you don't always get lucky to have a non-automated reply) is that they acknowledge a problem, but they aren't going to fix it because they don't have time, because there is a workaround (even if it is ugly, time consuming and doesn't always work), because the problem doesn't meet a certain internal criteria they currently use to triage, etc. This is going on for years. To witness this first hand, please take a look at the Connect site and count how many bugs for VS are reported only to be closed as "won't fix". Even simpler, take a look at this very thread. Ryan Molden from Microsoft is already talking cautiously that the results of the polling on the Microsoft's user voice site, which show unequivocally that the number 1 problem with Visual Studio is performance, might not matter much in reality due to the selection bias. Sigh… So, yes, I have actually told Microsoft about the bugs and problems I am having, numerous times, and the reaction from Microsoft does not satisfy me at all. @Grumpy, @PleaseFixYourBugs I have been asked by the team that owns this feature to stop responding to this thread as I am not a spokesman for the team or for Microsoft, so I will honor that request. This is my last response and it is intended only to make explicitly clear that my original involvement was in no way meant to 'dismiss peoples concerns' or be 'intellectually dishonest' or to claim that a revamped search experience was 'more important than perf'. If it came across that way I apologize, it was not intended as such and was likely due to poor communication on my part. I was only trying to have a conversation about the topic at hand, but it seems I have succeeded only in muddying the waters and upsetting/misleading/insulting people which was never my intention. In the future I will leave communication up to the PMs and teams that own the features to avoid this kind of situation. I think I like the inline search pane slightly more than the dialog, but I have to agree that this is a very small and, ultimately, unimportant thing. Also, +1 to grumpy. Thing is, this is totally useless. With the old dialog, you had several keyboard shortcuts to _quickly_ change the current serach's behavior. Now you hide everything in dropdowns and floating menus whihc require you to find your mouse, navigate to the upper right corner and start clicking. A simple "type in your text, alt+c, alt+w" (make the current search case sensitive, whole word) turns into a clickfest. (and don't get me started on quickly changing the scope from single file tosearch in project/solution/dir etc) With the vs2010 addin at least i have the option to disable this stupid window, but i cannot seem to find the old behavior in vs2011. @a. Apparently you didn't read the blog post, where it specifically calls out of the shortcuts/accelerators (including Alt+C and Alt+W) that work with this new tool. Hello Everyone, Firstly, I would like to thank you for your feedback! I am the PM on the Visual Studio Editor team. I see that there are a couple of points here and I would like to clarify. 1. Why update Find ? Find represents one of the top 50 most frequently used commands used in Visual Studio. Over the past one year, we have monitored multiple feedback channels closely such as Microsoft Connect to identify the top customer pain points with Find. The top three areas where we have improved the experience for our customers with the new Find: • Incremental Search This feature allows you to quickly perform a search and navigate instantly to the first match in the current document – just hit Ctrl+I and type your search term; it immediately navigates to the first match in the current document, incrementally as you type. Users of incremental search love the feature and use it heavily. But most users didn’t know about this feature. ISearch was not discoverable. With the new Find Control you get this incremental search experience by default. Hit Ctrl+F and type the search term – it immediately navigates to the first match, but also highlights all the matches in the document for that search term. All of this happens incrementally giving instant results. • Find gets in my way The Find dialog covers too much code. The dialog tries to get out of the way, but users would have to chase it around in the IDE. The new Find control takes up nearly 8% of the space of the dialog. It remains stationery in a consistent location at the top of the document. • Which Find do I use ? In Visual Studio 2010, the Find dialog had too many options – there were too many tabs and too much duplication. The Find Dialog had four different tabs – Quick Find, FindInFiles, QuickReplace, ReplaceInFiles providing the same functionality. Each tab allowed you to perform a search/ replace in all the scopes – Document, Project, Solution, Open Documents etc. Which Find do I use when ? We have unified the multiple tabs in the Find dialog into two tabs with all the functionality. You get one tab for find and the other for replace. You will be able to perform FindNext, FindPrevious, FindAll and BookmarkAll from within a single tab. We have also optimized the scenarios where Quick Find is used vs. where the Find dialog is used. Ctrl+F gets you to the Find Control which provides quick find optimized for the current document. If you want to search beyond the scope of the current document, you use FindInFiles (Ctrl+Shift+F). 2. .NET alternative to the :i pattern There is an MSDN article on converting the Visual Studio regular expression syntax to the .NET syntax. Please refer to Using Regular Expressions in Visual Studio With the .NET regex syntax, you could use this expression for matching identifiers. b(_w+|[w-[0-9_]]w*)b 3. Alt+F12 – does it work ? This time around, we took a hard look at simplifying the Find experience. When we looked at the usage data for FindSymbol, we figured that either our customers are not using this or the feature was not discoverable. Furthermore, FindSymbol functionality is provided by the Solution Explorer as well. You can search for symbols through the Solution Explorer. The new Solution Explorer provides a Search feature which allows you to search through the symbol table. We would be glad if you could try out this feature and let us know how it works out. Ctrl+; is the shortcut to search in the Solution Explorer. While we can't comment on what features are in the final versions of Express at this point in time, we can confirm that the improved Solution Explorer support is present in the Visual Studio 11 Developer Preview Alt + F12 is not supported. P.S: Some emails to the feedback link VSFindFeedback2@microsoft.com were not getting delivered earlier. This issue has been resolved now. Your feedback is valuable. Please feel free to write to us. Thanks again for your feedback! If you think of anything else while using Find please do let us know. Hey great! I have *always* hated the find/replace mechanism in VS since as long as I can remember. It was probably VC6 when I last didn't hate it. I haven't used the process described above so I can't be certain of this, but from what I read, I am super happy that you guys have worked on it. On that basis I like the look of the solution so far. It's a small feature, but I use it all the time so it could be a big impact to fix it – for me at least – for relatively small effort. I'm glad your data appears to back that. This kind of quick tactical win I am all for. I agree with @PleaseFixYour Bugs big picture wise but I accept your arguments on this and also for the reasons I just gave. @PleaseYourBugs As far as I know, the poster was likely right that the find/replace guys wouldn't have worked on performance etc. had they not been working on this; because as far as I understand, MS has feature crews for each of those area. But regardless, it seems natural that peformance issues might be looked at by either dedicated performance experts and/or the original authors/crews of a particular component that was found to be slow. So in that context, and there is some common sense to it, it seems quite reasonable that the people working at the find/replace dialog UI level would not be core to either of those crews. The skills aren't an obvious overlap anyway, though of course it's possible. PleaseFixYourBugs, we want many of the same things but I think ryan et al weren't saying anything outrageously wrong here, even if you didn't agree with them. When answers are within the bounds of reasonable, IMHO a softer style would have been fairer here. I'm all for "say it like you see it" or if you like, harsh but fair where appropriate – see my C++11 priority posts expounding what appear to be similar views to yours for example – but guns blazing 24×7 is a lot of bullets and this isn't picking your battles and doesn't help win support for your cases, which mostly I do support. 🙂 @Glen: You are right. Point taken. @Murali Krishna Hosabettu Kamalesha: Thanks for the reply. I apologize for perhaps being a bit too bitter and I appreciate your willingness to talk despite of this. I am still disappointed that you have chosen to work on Find & Replace rather than on something more significant, since the benefits are, in my view, very minor (what you said above proves that I haven't missed anything about the feature earlier and, well, I said enough about the lack of real benefits already, everything I said stands). I wish you were spending the majority of your time working on features like those upvoted on the user voice site. Judging from the preview version of vNext, you didn't have much chance to improve those areas yet. I hope this will change in the real release. Good luck. MSFT guys, please please don't limit remebered searches to 5, up to now 20 were remebered, it's important for big projects (I search through 2 million lines!) Simlpe Escape key in Output or Find results windows closed them in VS6, any chance to allow that in the next version? Now I don't know how to close them from keyboard to free the vertical space for the code! Sorry if this was already called out elsewhere and I missed it, but is find+replace async WRT the UI thread? When I do large find/replace operations (especially with regex) currently, VS is effectively 'dead' for a long time (at the moment, it's been 8 minutes for the one that's currently running). It's not pumping messages much, if any (Resource Monitor lists it as 'Not Responding', the window chrome occasionally shows up and then disappears again, etc). I'm fine if the cpu and disk make the rest of my VS experience slow, but it'd be nice to have such operations (running way over 50ms! 🙂 not to lock the whole IDE. 🙂 Thanks! @James Manning: I have a similar idea for VS: visualstudio.uservoice.com/…/2255881-adopt-metro-ui-design-philosophy-fast-and-fluid Find in a specific directory and below appears to be broken in the find in files dialog. Whenever I selected a directory by typing in the combo it appears to work, but it just jumps to another scope when the combo box loses focus. So, as I imagined, the only .NET alternative to the :i search is to type (and remember) b(_w+|[w-[0-9_]]w*)b That's A LOT of extra typing just to find an identifier. Hello again! Thank you for your posts. Your feedback helps us tailor the experience for our customers. Keep them flowing 🙂 I will answer to these questions separately. 1. Number of items in the MRU (Most Recently Used) List The Find dialog displays up to 20 MRU items. In the Find control, the options reside below the MRU. We have capped the MRU to allow users to quickly get to these search options with the down arrow key. We will continue to refine the Find control in this and future versions. 2. Find and Replace on UI thread We have made some perf changes in Find. In particular, we’ve moved the highlighting of results to the background thread while you interact with the IDE. We’ve heard the feedback on performance and so we’ll continue to work at improving find performance in this and future versions. 3. Shortcut to close the output window. Shift+Esc is the shortcut to close the output window. 4. For all your queries on Visual C++, the best place to start is with the Visual C++ Team blog – Thanks again for writing to us! If you think of anything else while you are using the new Find, please do let us know. Murali Krishna Hosabettu Kamalesha • Program Manager • Visual Studio Editor It would be great if there was an option to permanently surface the frequently used Match Regex, Match Whole Words, and Match Case options so that you could know at a glance which options are selected. If there was an option to expand the quick find box so that these options always show up off to the side or in a second row below that would make searching easier. If you need to condense it for space, the UI could read something like: "Match: [x]Regex [x]Whole Words [x]Case". But it would be great if there was a way to make these always default to visible. Also it seems like the new search box doesn't work that well when you want to just Search/Replace within a selected block of code. We should be able to cycle through the MRU search history with the Down key (without requiring us to hit Alt+Down to drop the list). Find, control sitting in document instead of dialog! looks good… For some reason I can't get n to work for a replace. In other words, I have "Testing" in my document. The search string is "ing". My replace string is "n". I end up with "Testn" What happened to the ability to replace to a CR, Tab or other? How can I support this in other ways? Even WinWord allows a S@R to add CR/LF. Hello, Hope you all had a great Holiday season. Happy New Year! Thanks again for your posts. We are glad to hear you like the placement of the control inside the document better than the dialog. We have heard feedback from our users asking for the ability to cycle through the MRU using the down arrow key. This had not been built at the time of the Developer Preview. The good news is that we have already added this feature in our internal builds and you should be able to hit the down arrow key and cycle through the MRU. You are right, the new line replace does not work in the Developer Preview. This is a known issue which we have already fixed in our internal builds. Find control should allow you to perform replaces with “n”. We have considered feedback from our users and have made significant improvements to the Find experience in Visual Studio, post the Developer Preview. Stay tuned for upcoming versions! Thanks, Murali Murali Krishna Hosabettu Kamalesha • Program Manager • Visual Studio Editor what customer feedback told you to get rid of POSIX regular expressions? The only feedback I can see on the issue says the opposite… @Iain AFAIK Visual studio has never implemented the posix regular expression syntax. It's always been a heavily customized syntax, quite different from POSIX. The blog post is in error in this detail. Not to mention that it's very easy to get VS stuck in an (almost) infinite loop with regexes.". Simon.B1 hit it on the head – 'Find' (Ctrl-F) is almost always for 'find in document' but 'Find in Files' should always search *files*, not just 'Current Document' or whatever the scope is set to. Overall, I really like the changes to the Find dialog (especially getting out of the way), but constantly getting no results and then remembering I have to manually change the scope is not productive for me. That is really my only criticism of the new changes. Cool ! how to create a thread in vs 2012 in c# ? I tried by including using System.Threading; namespace but it is showing error. pls… help me in solving this Great tool i always use this! How do I make find & replace to popup as a dialogue box? Is there a way to get the old Find functionality back? The new "search while you type" functionality completely freezes Visual Studio if you're working with a file over a couple hundred lines, like a build file. @Xaiter yes this is a problem I have with the search as well @Xaiter: You can still get the Find dialog by using the keyboard shortcut Ctrl+Shift+F. This dialog is optimized for Find in files, but you can change scope and options to search in individual documents. Please, if anyone can provide me with Visual Studio 2005 professional source to can setup it into my computer to can train and work with you. Visual Studio 2005 Professional, urgently please. My email: A.Ahmed@motinalhiba.com thanks
https://blogs.msdn.microsoft.com/visualstudio/2011/09/22/visual-studio-11-developer-preview-find-replace/
CC-MAIN-2016-44
refinedweb
7,133
71.24
From: Peter Dimov (pdimov_at_[hidden]) Date: 2001-04-10 16:08:31 From: "Gary Powell" <Gary.Powell_at_[hidden]> > >> > When I comment the #if/#endif lines it compiles but crashes when run. :-) > << > Hmm.. > "Runs with my compilers!" :> > No seriously, can you run the debugger and see what the problem is? In a debug build the memory manager yells at me at auto_ptr<X[]>::~auto_ptr, in delete[] this->get(), when I continue I get an access violation. Weird. The line that triggers it is auto_ptr<B[]> b3(array_source_b()); so it must be the destructor of the temporary. array_source_b() by itself works, array_sink_b(array_source_b()) doesn't. Ah, yes, I see it now. You have the same #if/#endif pair there, too, so you don't define a copy constructor and the compiler happily generates a default. > > I wonder what copy constructor does MSVC use. :-) > > I think it uses > template<typename Y> > auto_ptr(auto_ptr<Y> &rhs); > > because if I undef the BOOST_NO_ARGUMENT_DEPENDENT_LOOKUP for MSVC I get > errors about multiple definitions of > auto_ptr(auto_ptr &rhs); You do know that for MSVC 6 the non-template must come _after_ the template as it's considered a specialization by the compiler, don't you? :-) [MSVC 7 seems to handle this as-is, without the #if.] And what's this to do with the argument-dependent lookup? -- Peter Dimov Multi Media Ltd. Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2001/04/10931.php
CC-MAIN-2020-45
refinedweb
246
59.5
Do you want to analyze and plot the data of Bitcoin (BTC), Ethereum (ETH), Cardano (ADA) and other cryptocurrencies but you don’t know where to find a reliable data source? I had the same problem a couple of hours ago when writing a Python script to plot the relative price of altcoins versus BTC. This article is the one I’ve been looking for—and I hope it’ll be helpful to you as well. Let’s get started with my top data source right away! CryptoDataDownload.com This is my preferred data source because it’s updated and very fine-granular data: - Daily, Hourly, Minute data sets - Spot and physical market - CSV format - Downloadable by Python script Here’s how they describe their data set: “We track and produce files for Daily, Hourly, and Minute(!)” (source) Here are some of their specific data sets from Binance stock exchange. Each link directly leads to the CSV file: - BTC/USDT [Daily] [Hourly] [Minute] … [Value at Risk] - ETH/USDT [Daily] [Hourly] [Minute] … [Value at Risk] - LTC/USDT [Daily] [Hourly] [Minute] … [Value at Risk] - NEO/USDT [Daily] [Hourly] [Minute] - BNB/USDT [Daily] [Hourly] [Minute] - XRP/USDT [Daily] [Hourly] [Minute] - LINK/USDT [Daily] [Hourly] [Minute] - EOS/USDT [Daily] [Hourly] [Minute] - TRX/USDT [Daily] [Hourly] [Minute] - ETC/USDT [Daily] [Hourly] [Minute] - XLM/USDT [Daily] [Hourly] [Minute] - ZEC/USDT [Daily] [Hourly] [Minute] - ADA/USDT [Daily] [Hourly] [Minute] - QTUM/USDT [Daily] [Hourly] [Minute] - DASH/USDT [Daily] [Hourly] [Minute] - XMR/USDT [Daily] [Hourly] [Minute] - BTT/USDT [Daily] [Hourly] [Minute] You can download these CSV data sets in your own Python script using the pandas library: import pandas as pd # Needed to use unverified SSL import ssl ssl._create_default_ = ssl._create_unverified_context # For example: BTC/USD data url = " df = pd.read_csv(url, delimiter=",", skiprows=[0]) print(df) You can replace the URL field with the custom URL with your data from the list above. The code downloads the BTC/USD historic data that looks like this: unix date ... Volume USDT tradecount 0 1.622333e+12 2021-05-30 00:00:00 ... 1.690781e+09 965806.0 1 1.622246e+12 2021-05-29 00:00:00 ... 3.949843e+09 2169643.0 2 1.622160e+12 2021-05-28 00:00:00 ... 4.926261e+09 2659178.0 3 1.622074e+12 2021-05-27 00:00:00 ... 3.361414e+09 2102182.0 4 1.621987e+12 2021-05-26 00:00:00 ... 4.113718e+09 2432319.0 ... ... ... ... ... ... 1379 1.503274e+09 2017-08-21 ... 2.770592e+06 NaN 1380 1.503187e+09 2017-08-20 ... 1.915636e+06 NaN 1381 1.503101e+09 2017-08-19 ... 1.508239e+06 NaN 1382 1.503014e+09 2017-08-18 ... 4.994494e+06 NaN 1383 1.502928e+09 2017-08-17 ... 2.812379e+06 NaN [1384 rows x 10 columns] Feel free to play with this in our interactive Jupyter Notebook here: The interactive notebook opens in a new tab. To summarize, the best way to download cryptocurrency data is via this link: CoinMetrics.io You can also download specific data sets at CoinMetrics.io: If you want to download, for example, Bitcoin data, you can use the dropdown menu, select “Bitcoin”, and click download like so: When opening the data set with Excel, it has the following CSV format: You can download a ZIP file with all data via this link: This will download the ZIP file, extract it to obtain the following rich data set: At the point of this writing, the ZIP file has 113 different data sets for different cryptocurrencies. However, I haven’t found direct download links that can be used in a Python script—probably, they want to sell the API for a premium price. However, all those data sets can be manually downloaded for free in a safe and secure way. To summarize, the second best way to download cryptocurrency data is via this link: Other Cryptocurrency Download Links In various forums, some links are thrown around. I think they’re not as good as the options provided above, but I’ll include them here for comprehensibility as well: All Cryptocurrencies Bitcoin - Coindesk Closing price and OHLC - Closing price blockchain.info - Bitcoin data on Quandl - Bitcoin data on Quandl II Ether If you have any additional data sets of interest, and/or you want to improve your Python skills, consider subscribing and send me an email by replying to any of our Python content emails.
https://blog.finxter.com/download-cryptocurrency-data-for-free-and-without-registering/
CC-MAIN-2022-21
refinedweb
739
69.31
That application was kind of elegant, using the Qt slot mechanism to connect the working threads with the main window GUI. Alas it was wrong, too. You could not see the error in such a simple application that apparently works correctly, but the point is that simply using "emit" in a thread to connect to another function is not enough to make this other function to run in the same thread. What happened there was that the called function was running in the same thread (the main application thread) whatever was the calling thread. Not exactely what was expected. So far about the bad news. Good news is that is not a big issue rewriting the code so that it works as I wanted. Here we see how to do it using the good old C++ callback mechanism for classes. By the way, if you need to call just a free function from different threads, you could have a look at another post, that shows exactely that. The basic idea is that our increasers should know how to call from their run() function - the actual function that runs in an own specific thread - the function in MainWindow that modifies the label. We can easily get this passing to the class a pointer to the MainWindow object itself: #ifndef INCREASER_H #define INCREASER_H #include <QThread> class MainWindow; // 1. class Increaser : public QThread { public: Increaser(int step); void run(); void stop(); void restart(); static void setCallback(MainWindow* that); // 2. private: bool isRunnable_; int step_; static MainWindow* that_; // 3. }; #endif // INCREASER_H 1. We don't need to know all the MainWindow class details at this point, we just need to know that we are talking about a class. 2. I assume that we want all the increasers working on the same instance of MainWindow (quite reasonable in this case), so the setCallback() method is static, it has to be called just once for all the increasers. 3. As said above, all the increasers share a reference to the same MainWindow object. The implementation code for the Increaser class does not change much, but I report it here all of it, just fof clarity sake: #include "increaser.h" #include "mainwindow.h" // 1. #include <iostream> MainWindow* Increaser::that_ = NULL; // 2. void Increaser::setCallback(MainWindow* that) { that_ = that; } // 3. Increaser::Increaser(int step) : isRunnable_(true), step_(step) {} void Increaser::run() { while(true) { std::cout << "Step: " << step_ << " in " << this->currentThreadId() << std::endl; msleep(500); if(!isRunnable_) break; if(that_) // 4. that_->changeValue(step_); } } void Increaser::stop() { isRunnable_ = false; wait(); } void Increaser::restart() { isRunnable_ = true; QThread::start(); } 1. Here we need the class details, so that the compiler could actually check that really MainWindow has a public method as the one we claim that we want to use. 2. that_ is a static member of the Increaser class, so we have to define it somewhere in the code. Here is a good place. Notice that we explicitely initialize it to NULL: no MainWindows object is available if not set by someone else. 3. And here is the static method that initialize the Increaser static member. 4. The MainWindow changeValue() method is called only if we actually have a valid reference to an object of that type. The changes in the MainWindow class are even lighter ones. In the class declaration the method changeValue() is not anymore defined as a slot, but as a "normal" public member: #ifndef MAINWINDOW_H #define MAINWINDOW_H #include <QMainWindow> // ... class MainWindow : public QMainWindow { Q_OBJECT public: // ... void changeValue(int); private slots: // ... }; #endif // MAINWINDOW_H The class constructor now sets the callback for the increasers: MainWindow::MainWindow(QWidget *parent) : QMainWindow(parent), ui(new Ui::MainWindow), increaser_(2), decreaser_(-3), curValue_(100) { ui->setupUi(this); ui->lbValue->setText(QString::number(curValue_)); Increaser::setCallback(this); } And here is again the function called by the incresers: void MainWindow::changeValue(int step) { QMutexLocker ml(&mCV_); std::cout << "change value in " << QThread::currentThreadId() << std::endl; ui->lbValue->setText(QString::number(curValue_ += step)); } Running the application we can now see how the thread id is the one of the calling thread. So we can finally say that we have got what we wanted.
http://thisthread.blogspot.com/2011_05_01_archive.html
CC-MAIN-2017-51
refinedweb
688
62.07
From: Gary Powell (Gary.Powell_at_[hidden]) Date: 2000-09-05 13:40:54 Now that there are several boost libraries which have "details" namespaces should we be using a namespace with less possible collisions? i.e. ggcl_details? any_details?, lambda_details? vtl_details? This comes up because I was thinking about making the libraries closer to boost conformity and the number of lambda, VTL specific classes is large. Leading to possible name collisions, which I'd like to avoid. I was thinking about using a define which I could do a global search/substitution but then why if I could do it right the first time, I'd rather do that. Thoughts? -Gary- gary.powell_at_[hidden] Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2000/09/5014.php
CC-MAIN-2020-50
refinedweb
135
66.74
This is a new. Yes it is time to get around to our 'Hello World'. We will create a simple scene and have a familiar looking sprite move from one side of the screen to the other. import 'dart:html'; import 'package:simplegamelib/simplegamelib.dart'; void main() { Game game = new Game("My Game", '#surface'); Sprite player = game.createSprite("images/ninjadude.png"); player ..position = new Point(0, 10) ..movement = Movements.east; print('starting game...'); game.start(); } As you can see it is easy to get started with just a few lines of code. Click here to see it in action. Hit reload to start again. Next time, we will look at adding more than one sprite and collision detection!
http://divingintodart.blogspot.com/2015/10/simple-sprite-based-web-games-in-dart.html
CC-MAIN-2017-17
refinedweb
118
78.75
In this post, we are going to learn how to read SAS (.sas7bdat) files in Python. As previously described (in the read .sav files in Python post) Python is a general-purpose language that also can be used for doing data analysis and data visualization. One potential downside, however, is that Python is not really user-friendly for data storage. This has, of course, lead to that our data many times are stored using Excel, SPSS, SAS, or similar software. See, for instance, the posts about reading .sav, .dta, and .xlxs files in Python: - How to read and write SPSS files in Python - How to read and write Stata files in Python - How to read and write Excel files in Pandas Can I Open a SAS File in Python? Now we may want to answer the question of how to open a SAS file in Python? In Python, there are two useful packages Pyreadstat, and Pandas that enable us to open SAS files. If we are working with Pandas, the read_sas method will load a .sav file into a Pandas dataframe. Note, Pyreadstat which is dependent on Pandas, will also create a Pandas dataframe from a .sas file. How to Open a SAS file in Python In this section, we are going to learn how to load a SAS file in Python using the Python package Pyreadstat. Of course, before we use Pyreadstat we need to make sure we have it installed. How to install Pyreadstat: Pyreadstat can be installed either using pip or conda: - Install Pyreadstat using pip: Open up a terminal, or Windows PowerShell, and type pip install pyreadstat - Install using Conda: Open up a terminal, or Windows PowerShell, and type conda install -c conda-forge pyreadstat Now, sometimes when we install Python packages with pip we may notice that we don’t have the most recent version of pip. If this is the case, we can update pip easily, using pip, or conda. How to Load a .sas7bdat File in Python Using Pyreadstat In this section, we are going to use pyreadstat to import data into a Pandas dataframe. First, we import pyreadstat: import pyreadstat Now, we are ready to import SAS files using the method read_sas7bdat (download airline.sas7bdat). Note that, when we load a file using the Pyreadstat package, recognize that it will look for the file in Python’s working directory. df, meta = pyreadstat.read_sas7bdat('airline.sas7bdat') In the code chunk above we create two variables; df, and meta. As can be seen when using type the variable “df” is a Pandas dataframe: type(df) Thus, we can use all methods available for Pandas dataframe objects. In the next line of code, we are going to print the 5 first rows of the dataframe using pandas head method. a SAS file with Python Using Pandas In this section, we are going to load the same .sav7bdat file into a Pandas dataframe but by using Pandas read_sas method, instead. This has the advantage that we can load the SAS file from a URL. Before we continue, we need to import Pandas: import pandas as pd Now, when we have done that, we can read the .sas7bdat file into a Pandas dataframe using the read_sas method. In the read SAS example here, we are importing the same data file as in the previous example. Here, we print the 5 last rows of the dataframe using Pandas tail method. url = '' df = pd.read_sas(url) df.tail() How to Read a SAS File and Specific Columns Note, that read_sas7bdat (Pyreadstat) have the argument “usecols”. By using this argument, we can also select which columns we want to load from the SAS file to the dataframe: cols = ['YEAR', 'Y', 'W'] df, meta = pyreadstat.read_sas7bdat('airline.sas7bdat', usecols=cols) df.head() How to Save a SAS file to CSV In this section of the Pandas SAS tutorial, we are going to export the .sas7bdat file to a .csv file. This is easily done, we just have to use the to_csv method from the dataframe object we created earlier: df.to_csv('data_from_sas.csv', index=False) Remember to put the right path, as the second argument, when using to_csv to save a .sas7bdat file as CSV. Summary: Read SAS Files using Python Now we have learned how to read and write SAS files in Python. It was quite simple and both methods are, in fact, using the same Python packages.
https://www.marsja.se/how-to-read-sas-files-in-python-with-pandas/
CC-MAIN-2020-24
refinedweb
738
71.44
Content-type: text/html fseek, fseek_unlocked, rewind, ftell, fgetpos, fsetpos - Reposition the file pointer of a stream Standard C Library (libc.so, libc.a) #include <stdio.h> int fseek( FILE *stream, long int offset, int whence); int fseek_unlocked( FILE *stream, long int offset, int whence); void rewind( FILE *stream); long int ftell( FILE *stream); int fsetpos( FILE *stream, const fpos_t *position); int fgetpos( FILE *stream, fpos_t *position); Interfaces documented on this reference page conform to industry standards as follows: fseek(), fgetpos(), fsetpos(), ftell(), rewind(): XPG4, XPG4-UNIX Refer to the standards(5) reference page for more information about industry standards and associated tags. Specifies the I/O stream. Determines the position of the next operation. Determines the value for the file pointer associated with the stream parameter. Specifies the value of the file position indicator. The fseek() function sets the position of the next input or output operation on the I/O stream specified by the stream parameter. The position of the next operation is determined by the offset parameter, which can be either positive or negative. The fseek() function sets the file pointer associated with the specified stream as follows: If the whence parameter is SEEK_SET(0), the pointer is set to the value of the offset parameter. If the whence parameter is SEEK_CUR(1), the pointer is set to its current location plus the value of the offset parameter. If the whence parameter is SEEK_END(2), the pointer is set to the size of the file plus the value of the offset parameter. The fseek() function fails if attempted on a file that was not opened with the fopen() function. In particular, the fseek() function cannot be used on a terminal or on a file opened with the popen() function. A successful call to the fseek() function clears the End-of-File indicator for the stream and undoes any effects of the ungetc() function on the same stream. After a call to the fseek() function, the next operation on an update stream may be either input or output. If the stream is writable, and buffered data was not written to the underlying file, the fseek() function causes the unwritten data to be written to the file and marks the st_ctime and st_mtime fields of the file for update. If the most recent operation (ignoring any ftell() operations) on a given stream was fflush(), then the fseek() function causes the file offset in the underlying open file descriptor to be adjusted to reflect the location specified by the fseek() function. The fseek() function allows the file-position indicator to be set beyond the end of existing data in the file. If data is later written at this point, subsequent reads of data in the gap will return bytes with the value 0 (zero) until data is actually written into the gap. The rewind() function is equivalent to (void) fseek (stream, 0L, SEEK_SET), except that rewind() also clears the error indicator. The ftell() function obtains the current value of the file position indicator for the specified stream. The fgetpos() and fsetpos() functions are similar to the ftell() and fseek() functions, respectively. The fgetpos() function stores the current value of the file position indicator for the stream pointed to by the stream parameter in the object pointed to by the position parameter. The fsetpos function sets the file position indicator according to the value of the position parameter, returned by a prior call to the fgetpos() function. A successful call to the fsetpos() function clears the EOF indicator and undoes any effects of the ungetc() function. [Digital] The fseek_unlocked() function is functionally identical to the fseek() function, except that fseek_unlocked() may be safely used only within a scope that is protected by the flockfile() and funlockfile() functions used as a pair. The caller must ensure that the stream is locked before using these functions. Upon successful completion, the fseek() and fseek_unlocked() functions return a value of 0 (zero). If the fseek() or fseek_unlocked() function fails, a value of -1 is returned, and errno is set to indicate the error. The rewind() function does not return a value. Upon successful completion, the ftell() function returns the offset of the current byte relative to the beginning of the file associated with the named stream. Otherwise, a value of -1 is returned, and errno is set to indicate the error. Upon successful completion, the fgetpos() and fsetpos() functions return a value of 0 (zero). If the fgetpos() or the fsetpos() function fails, a value of -1 is returned, and errno is set to indicate the error. The fseek() or fseek_unlocked() function fails if either the stream is unbuffered, or the stream's buffer needed to be flushed and the call to fseek() or fseek_unlocked() caused an underlying lseek() or write() function to be invoked. In addition, if any of the following conditions occurs, the fseek() or fseek_unlocked() function sets errno to the value that corresponds to the condition. The O_NONBLOCK flag is set for the file descriptor underlying the stream parameter and the process would be delayed in the write operation. The file descriptor underlying the stream parameter is not a valid file descriptor open for writing. An attempt was made to write to a file that exceeds the process's file size limit or the maximum file size. (See the ulimit(3) reference page.) The write operation was terminated by a signal, and either none, some, or all the data was transferred. If there is buffered I/O, it is recommended that you call the fflush() function before the fseek() function to guarantee that the buffer characters were written. The whence parameter is an invalid value, or the resulting file offset would be invalid. A physical I/O error has occurred, or the process is a member of a background process group attempting to write to its controlling terminal, the TOSTOP signal is set, the process is neither ignoring nor blocking SIGTTOU, and the process group of the process is orphaned. There was no free space remaining on the device containing the file. The file descriptor underlying stream is associated with a pipe or FIFO. The rewind() function fails under the same conditions as the fseek() function, with the exception of [EINVAL], which does not apply. If the following conditions occur, the fgetpos(), fsetpos() or ftell() function sets errno to the value that corresponds to the condition. The file descriptor underlying the stream parameter is not a valid file descriptor. [Digital] The stream parameter does not point to a valid FILE structure or the position parameter is negative. An illegal attempt was made to get or set the file position of a pipe or FIFO. Functions: lseek(2), fopen(3) Standards: standards(5) delim off
https://backdrift.org/man/tru64/man3/fgetpos.3.html
CC-MAIN-2017-39
refinedweb
1,122
58.92
In C# using break statement, we can exit from any loop like for loop, while loop or do-while loop. break statement can be used, when while executing code inside the loop, you want to exit if certain condition is true. When break statement is encountered inside the loop, then loop is immediately termindated or loop is no more executed and next statement after loop is executed. Syntax: break; Example: using System; public class DoWhileLoopWithBreakExample { public static void Main(string[] args) { int i = 1; do{ Console.WriteLine(i); //check if value of i is 5 if(i==5) { //if value of i is 5, exit from loop and it will not print remaining values break; } i++; } while (i <= 10) ; Console.Write("After loop ends"); } } In the above statement, as soon as value i reaches to 5, do-while loop is terminated and next statement of code Console.Write("After loop ends"); is executed. Output: 1 2 3 4 5 After loop ends Suppose, we have nested for loops, then break statement will exit from the loop, inside which it is placed and other outer for loop may continue. Example: using System; public class BreakInsideNestedForProgram { public static void Main() { for(int i=1; i <= 3; i++) { for(int j=1; j <= 3; j++) { // if value of j =2 , exit from inner loop if(j==2) { break; } Console.WriteLine(i +" "+j ); } Console.WriteLine("End of loop number "+i+" for i"); } } } Output: 1 1 End of loop number 1 for i 2 1 End of loop number 2 for i 3 1 End of loop number 3 for i As you can see from the output, value of j was never printed as 2, because we were breaking the inner loop, whenever it was equal to 2.
https://qawithexperts.com/tutorial/c-sharp/17/c-sharp-break-statement
CC-MAIN-2021-39
refinedweb
294
63.22
Maven's goal is to reduce the time data scientists spend on data cleaning and preparation by providing easy access to open datasets in both raw and processed formats. Project description Maven /meɪvən/ – a trusted expert who seeks to pass timely and relevant knowledge on to others. Maven's goal is to reduce the time data scientists spend on data cleaning and preparation by providing easy access to open datasets in both raw and processed formats. Maven was built to: - Improve availability and integrity of open data by eliminating data issues, adding common identifiers, and reshaping data to become model-ready. - Source data in its rawest form from the most authoritative data provider available with all transformations available as open source code to enhance integrity and trust. - Honour data licences wherever possible whilst avoiding potential issues relating to re-distribution of data (especially open datasets where no clear licence is provided) by performing all data retrieval and processing on-device. Install pip install maven Usage import maven maven.get('general-election/UK/2017/results', data_directory='./data/') Datasets Data dictionaries for all datasets are available by clicking on the dataset's name. Running tests To run tests against an installed version (either pip install . or pip install maven): $ cd /path/to/repo $ pytest To run tests whilst in development: $ cd /path/to/repo $ python -m pytest Licences Contributing Maven was designed for your contributions! - Check for open issues or open a fresh issue to start a discussion around your idea or a bug. - Fork the repository on GitHub to start making your changes to the master branch (or branch off of it). - For new datasets ensure the processed dataset is fully documented with a data dictionary. For new features and bugs, please write a test which shows that the bug was fixed or that the feature works as expected. - Send a pull request and bug the maintainer until it gets merged and published. 😄 Project details Release history Release notifications | RSS feed Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/maven/
CC-MAIN-2021-04
refinedweb
351
60.04
Write a Java program to find square root and cubic root of a number: Java has inbuilt methods to find square root and cube root of a number . Both of these methods are already included in the ‘Math’ module. Following are these methods : static double sqrt(double a): This is a static method and we can find out the square root of a number using it. You can call it directly like ‘Math.sqrt(number)’ as it is a static method of ‘Math’ class. The return value is also ‘double’. If the argument is less than zero than the result will be ‘NaN’. If the argument is ‘-0’ or ‘0’, output will be ‘0.0’. public static double cbrt(double a): ‘cbrt’ is a static method and we can find out the cubic root of a number using this method. Similar to ‘sqrt’, this method is also a static method . Argument should be double and the return value is also double. If the argument is negative , then the output is also negative . e.g. cube root of -27 is -3.0 . Let’s try to understand both of these methods using an example : /* *.Scanner; /** * Example class */ public class ExampleClass { //utility method to print a string static void print(String value) { System.out.println(value); } public static void main(String[] args) { Scanner scanner = new Scanner(System.in); int userInput; print("Enter a number to find the cube-root : "); userInput = scanner.nextInt(); print("Cube root is : "+Math.cbrt(userInput)); print("Enter a number to find the square-root : "); userInput = scanner.nextInt(); print("Square root is : "+Math.sqrt(userInput)); } } Sample Example output : Enter a number to find the cube-root : 27 Cube root is : 3.0 Enter a number to find the square-root : 4 Square root is : 2.0 Enter a number to find the cube-root : -8 Cube root is : -2.0 Enter a number to find the square-root : -144 Square root is : NaN Enter a number to find the cube-root : -0 Cube root is : 0.0 Enter a number to find the square-root : -0 Square root is : 0.0 Similar tutorials : - Java program to extract all numbers from a string - Java program to check if a number is Pronic or Heteromecic - Java program to find three numbers in an array with total sum zero - Java example to find missing number in an array of sequence - How to add zeros to the start of a number in Java - Java program to check if a number is a buzz number or not
https://www.codevscolor.com/java-program-to-find-square-root-and-cubic-root
CC-MAIN-2020-50
refinedweb
423
64.91
docs/user/pir/objects.pod - Using Objects in Parrot. This document covers object oriented programming in PIR. Yes, you've read correctly. Parrot has the ability to create and manipulate objects (aka, object oriented programming). While it may seem strange for a low-level language like PIR to have the facility for object oriented programming, it makes perfect sense in this particular case. Remember, the original goal of Parrot was to be the underlying implementation for Perl6, which has object oriented features. Parrot's secondary goal is to provide a good platform for other dynamic languages such as Python, Ruby, PHP, Javascript, etc. and those languages too have the ability (if not the requirement) to be object oriented. Thus Parrot contains facilities for a manipulating objects so that language implementors can easily express the appropriate object semantics for their language of interest. Before I begin talking about how to create classes and instantiate objects, I first need to talk about an intimately related subject: namespaces. Namespaces serve a twofold purpose, they allow you to group related routines together and they allow you to give several subroutines the same name but different, domain specific, implementations. These characteristics are, oddly enough, similar to the basic requirements for a class. For instance, you may put all of your subroutines dealing with people in a Person namespace and all of your subroutines dealing with computer programs in the Process namespace. Both namespaces may have a subroutine called run() but with radically different implementations. Below is some code to illustrate this example: As you might guess, the .namespace directive tells Parrot what namespace to group subroutines under. A namespace ends when another .namespace directive changes the namespace or when the end of the file is reached. A .namespace directive with no names in the brackets changes back to the root namespace. Perl programmers will recognize that Parrot .namespace declarations are just like Perl package declarations, albeit with different syntax. But there are a few other differences. I'll talk more about how Parrot uses namespaces and classes together in just a minute. Creating classes in Parrot is relatively easy. There are opcodes for it. The easiest to start with is newclass; just say $P0 = newclass 'Foo' where $P0 is a PMC register, and 'Foo' is the name of the class you want to create. When you wish to instantiate objects that belong to the class you've created, it's equally simple. Just say myobj = new "Foo" where myobj is a PMC and "Foo" is the classname you've created with newclass. Here's a simple example: You may notice that I didn't use the return value of newclass. That's only because this is a simple example. :-) I'll talk about what to do with the return value of newclass a little later. Right now, let's talk about methods. So now that I've created a Dog class, how do I add methods to it? Remember before when I talked about namespaces? Well, that's the answer. To add methods to a class, you create a namespace with the same name as the class and then put your subroutines in that namespace. PIR also provides a syntactic marker to let everyone know these subroutines are methods. When declaring the subroutine, add the :method modifier after the subroutine name. Here's a familiar example to anyone who has read perlboot. It's important to note that even though I've declared the namespaces and put subroutines in them, this does not automatically create classes. The newclass declarations tell Parrot to create a class and as a side effect, namespaces with the same name as the class may be used to store methods for that class. One thing you may notice about method calls is that the method names are quoted. Why is that? If you would have left out the quotes, then the identifier is assumed to be a declared .local symbol. So, instead of writing: you could also have written: Another example of this is shown below. So far I've talked about namespaces and creating classes and associating methods with those classes, but what about storing data in the class? Remember how the newclass opcode returned a PMC that I didn't do anything to/with? Well, here's where it's used. The PMC returned from newclass is the handle by which you manipulate the class. One such manipulation involves class "attributes". Attributes are where you store your class-specific data. Parrot has several opcodes for manipulating attributes; they are: addattribute, setattribute, and getattribute. The addattribute opcode lets you add a spot in the class for storing a particular value which may be get and set with getattribute and setattribute respectively. The only restriction on these values is that currently all attributes must be PMCs. So, say I wanted to give my barnyard animals names (I'll illustrate with just one animal and you can infer how to do the same for the rest): Whew! There's a lot of new stuff in this code. I'll take them starting from the top of the program and working towards the bottom. One of the benefits of tagging your subroutines as methods is that they get a PMC named self that represents the object they are acting on behalf of. The name method takes advantage of this to retrieve the attribute called "name" from the self PMC and print it. Immediately after I create the class called "Dog", I use the PMC handle returned from newclass to add an attribute called "name" to the class. This just allocates a slot in the class for the value, it does nothing more. I create a new Dog and give it a name. Because attributes may only be PMCs, in order to give the Dog a name, I first have to create a new String PMC (this is one of the PMCs builtin to Parrot) and assign the name I wish to give the dog to this PMC. Then I can pass this PMC as a parameter to setattribute to give my Dog a name. Seems kind of complicated, doesn't it? Especially when you think about doing this for each animal. Each animal namespace would have an identical version of the name method. For each call to newclass I'd need to also call addattribute so that all of the animals may have a name. Each time I wish to assign a name to an animal, I'd first need to create a String and call setattribute on it. Et cetera. Surely there's a better way?!? There is ... You saw it coming didn't you? What's object oriented programming without inheritance? Parrot has an opcode subclass that lets you inherit data and methods from an existing class. We can use this ability to create a base class called "Animal" that contains the "name" attribute and two methods that are common to all animals: setname and getname Then, to create new animals, I just inherit from the Animal base class like so: Each subclass will contain an attribute called "name" that can be used to store the name of the animal. The setname method abstracts out the process of creating a String PMC and calling setattribute on it. And finally the getname method becomes a wrapper around getattribute. I hope this gives you an idea of how to do object oriented programming using Parrot. The opcodes illustrated here are what any language implementor that targets Parrot would use to implement object oriented features in their language. Of course there are more opcodes for richer object oriented behavior available in Parrot. This article only covers the basics. For more information see parrot/docs/pdds/pdd15_objects.pod. At the end of this article is a more complete listing of the program that gives my barnyard animals voices. There are many improvements that can be made to this code so take this opportunity to read and experiment and learn more about OOP in Parrot. * Thanks to Randal Schwartz for providing a neat set of examples in perlboot from which this article shamelessly borrows. * Thanks to the Parrot people for feedback Jonathan Scott Duff
http://search.cpan.org/~mstrout/Rakudo-Star-2012.08_001/rakudo-star/parrot/docs/user/pir/objects.pod
CC-MAIN-2013-48
refinedweb
1,368
64.1
On Tue, 19 Oct 1999, Webmaster Jim wrote: > On Tue, Oct 19, 1999 at 09:12:16AM -0500, Klaus Weide wrote: > > I wish you could have "just ignored the messages" from that Borland > > compiler, or found a switch to turn them off... > > Sorry, I sometimes assume compiler authors know more than I do :-) Did you tell it that you are really compiling C and not C++? :) > > There doesn't seem a point to most of those warnings you are trying > > to suppress. Or if there is, then maybe it should be fixed in the > > header files (e.g., what is BOOL?). > > I'd certainly agree to fixing the source of the warnings in a more > logical fashion. "BOOL" is short for BOOLEAN; why have both? > Why not just use TRUE/FALSE? I have no idea... It's historical, as far as I have been able to figure out. The WWW Library uses BOOL / YES / NO for its variables and functions. The Lynx application code uses BOOLEAN / TRUE / FALSE for its variables and functions. I find it is actually a useful distinction. When you see a BOOL in the src part, it's a hint that the variables may have something to do with the lower layers. When you see a BOLEAN in the Library part, it's a hint that it was added for Lynx and has something to do with the closer-to-the-user layers. Of course it has all got a it mixed up in the course of time. > This could be wiped out of HTUtils.h, leaving just BOOLEAN: > > #ifndef BOOL > #define BOOL BOOLEAN > #endif > > Or just use char, which is what BOOLEAN is typdef'd to. If anything, I would prefer to get rid of BOOLEAN and keep using BOOL, to make integrating code from the newer libwww not more difficult that in already is. But I see no point in such a cleanup anyway. Does the defien above actually apply? It wouldn't if your compiler predefines a BOOL symbol. Anyway, normally BOOL and BOOLEAN are the same; I don't think the difference has something to do with the warnings you got. Try #defining BOOL (and BOOLEAN?) to be 'int', or, if your compiler has such types, 'bool' or 'boolean'. That would probably get rid of the warnings without cast, but lynx would use slightly more memory (at least for 'int'). Klaus
http://lists.gnu.org/archive/html/lynx-dev/1999-10/msg00446.html
CC-MAIN-2014-15
refinedweb
399
81.93
ComboBox with checkboxuser627817 Oct 29, 2013 4:13 PM Hi All, I want to display a combobox with ablity to select multiple items and also be able to show selected items with different visuals - I think a checkbox to show selected/un-selected item is more appropriate but I can live with the alternative, as for example, highlight the selected ones with "bright green" etc. Is it possible to do this under JavaFX? Thanks, DP 1. Re: ComboBox with checkboxjsmith Oct 29, 2013 4:43 PM (in response to user627817) Your UI description does seem kind of strange to me. Usually a combo box is used to select a value from a drop down or manually enter a value. What will you render for the button cell? ComboBox (JavaFX 2.2) buttonCell I've never seen a multiple selection combobox before. I don't think you can easily create one as such a design is not fit within the original use-case intentions of the combo box design. ComboBox (JavaFX 2.2) selectionModelProperty states "The selection model for the ComboBox. A ComboBox only supports single selection." 2. Re: ComboBox with checkboxJames_D Oct 29, 2013 5:40 PM (in response to jsmith) I agree with jsmith that this sounds like an "off-label" use of a combo box, and you'd need to work around the API quite a lot to make it work. Consider instead using a MenuButton populated with CheckMenuItems. 3. Re: ComboBox with checkboxuser627817 Oct 29, 2013 5:50 PM (in response to jsmith) You right - I am not equipped with even a modest java UI knowledge, and last touched Java in 1.4 version with some Swing UI development. Now I need to come up with this small app. I want to display list of items, allow at a time a single item is to be selected, but once selected, item remains selected - could be unselected if clicked on it again. So if a list have 10 items, I could select 1 item at a time, and click 3 different items, those 3 are now selected. Also once selected, I need to make it appear differently so user will know which are "selected", which are "not selected" and which are "disabled". If I click on one of those 3 items again, I will unselect them. It could be a ListView component. If I could make an item with a "checkbox - TextDescription", it would be ideal. Thanks! DP 4. Re: ComboBox with checkboxJames_D Oct 29, 2013 7:15 PM (in response to user627817) import java.util.Arrays; import java.util.List; import javafx.application.Application; import javafx.beans.value.ChangeListener; import javafx.beans.value.ObservableValue; import javafx.scene.Scene; import javafx.scene.control.CheckMenuItem; import javafx.scene.control.ListView; import javafx.scene.control.MenuButton; import javafx.scene.layout.BorderPane; import javafx.stage.Stage; public class MultipleSelectionDropdownTest extends Application { @Override public void start(Stage primaryStage) { final MenuButton choices = new MenuButton("Fruit"); final List<CheckMenuItem> checkItems = Arrays.asList( new CheckMenuItem("Apples"), new CheckMenuItem("Oranges"), new CheckMenuItem("Pears"), new CheckMenuItem("Grapes"), new CheckMenuItem("Mangoes") ); choices.getItems().addAll(checkItems); // Keep track of selected items final ListView<String> selectedItems = new ListView<>(); for (final CheckMenuItem item : checkItems) { item.selectedProperty().addListener(new ChangeListener<Boolean>() { @Override public void changed(ObservableValue<? extends Boolean> obs, Boolean wasPreviouslySelected, Boolean isNowSelected) { if (isNowSelected) { selectedItems.getItems().add(item.getText()); } else { selectedItems.getItems().remove(item.getText()); } } }); } BorderPane root = new BorderPane(); root.setTop(choices); root.setCenter(selectedItems); primaryStage.setScene(new Scene(root, 600, 400)); primaryStage.show(); } public static void main(String[] args) { launch(args); } } 5. Re: ComboBox with checkboxuser627817 Oct 29, 2013 7:53 PM (in response to James_D) Works! Thanks so much!! -DP
https://community.oracle.com/thread/2598157
CC-MAIN-2017-34
refinedweb
611
51.04
One useful feature that you probably noticed in Windows Server 2003 is Shadow Copies for Shared Folders. I will not give too many details about it – suffice to say that it will create periodic versions of your shares. So if you lost or overwritten a file on some share then you simply right-click that file and you will be able to access its previous versions. This feature can be easily enabled on any volume – just go to the Properties page for a volume, select “Shadow Copy” tab and then click “Enable”. This will schedule a task that will create two versions per day. One more thing to add is that you can have a maximum of 64 versions for a given volume, which gives us about a month of history (for more details about this feature, see this link) OK – this might be good enough in many cases, but what about the case when you want a more granular control on the shadow copy history? For example, you might want to implement a more complex policy: shadow copies created daily should be persisted for two weeks. With one exception - shadow copies created Monday should stay there for at least one month, and so on and so forth. Here is a simple VBScript example which will delete all shadow copies that are more than two weeks old, with the exception of those created Monday (which remain there as long as possible). Just make sure you run this script once per day, using a scheduled task for example: Dim namespace Set namespace = GetObject("winmgmts://localhost/root/cimv2") Dim objSet Set objSet = namespace.ExecQuery("select * from Win32_ShadowCopy") Dim dateTime set dateTime = CreateObject("WbemScripting.SWbemDateTime") Dim vDate for each obj in objSet dateTime.Value = obj.InstallDate vDate = dateTime.GetVarDate(True) WScript.echo "- Snapshot on " & _ obj.VolumeName & " @ " & vDate if (DateDiff("d", vDate, Date) > 14) then dayOfWeek = DatePart("w", vDate) WScript.Echo "dayOfWeek = " & dayOfWeek if (dayOfWeek <> 2) then WScript.echo " [Deleting snapshot...]" obj.Delete_() end if end if next
http://blogs.msdn.com/b/adioltean/archive/2004/05/05/126295.aspx
CC-MAIN-2015-27
refinedweb
335
62.17
Easily calculate sunrise, sunset and twilight (civil, nautical and astronomical) times using Java with the jSunTimes package. Latest version: 1.0 (12th June 2011) Download | Documentation | Web Start | Examples | Version History | Licensing The jSunTimes package provides an API to calculate sunrise, sunset and twilight (civil, nautical and astronomical) times using Java. You are able to specify the exact latitude and longitude of the location to calculate the times for and can also specify the time zone and whether to take daylight savings time (summer time) into account. The package also contains a Julian date conversion class so that you can convert the date/time held in a Calendar instance to a Julian date. The package also contains classes so that you can run the calculations as an applet or as a standalone application. The algorithms used in the package are based on functions available at. Bear in mind that jSunTimes is still in development, so check back often for updates. To contact me about jSunTimes, send an e-mail to jsuntimes@jstott.me.uk. Download Current Version - v1.0 - 12th June 2011 - jsuntimes-1.0.jar (138kB) - v1.0 Javadoc Previous Versions - v0.3 - 6th March 2005 - jsuntimes-0.3.jar (113kB) - v0.3 Javadoc - v0.2 - jsuntimes-0.2.jar (66kB) - v0.1 - jsuntimes-0.1.jar (65kB) Documentation Java Web Start Application An online interface to jSunTimes is provided with a Java Web Start application. When run, this will display a window into which you can enter your latitude and longitude, the date, your time zone and whether daylight savings time applies. Then click the Go button in order to display the sunrise, sunset and twilight times. Examples The Sun class is easy to use. Remember that the jsuntimes-1.0.jar file should appear on your classpath before you can compile your code that uses it. Also remember that you have to import the package: import uk.me.jstott.sun.*; The SunTest class contains examples of calculating sunrise/sunset/twilight times for a couple of places. To include the jSunTimes applet in a webpage, use the following applet element in your web page: <applet code="uk.me.jstott.sun.SunTimesApplet.class" codebase="" archive="jsuntimes-1.0.jar" width="400" height="300"> </applet> You can download the jar file to your website, but make sure that you change the codebase and archive attributes to match your site and the current version of jSunTimes. Version History - 1.0 - 12th June 2011 - Fixed problem with converting Time objects to Strings when time has seconds ≥ 59.5 (was showing seconds as 60). - Fixed problem where Julian dates weren't being converted to midday (12:00:00) - necessary for the algorithm to produce accurate results. - Added eight new methods to the Sun class to allow the phenomena times to be calculated without the need to first convert a Calendar to a Julian date. This should simplify things a little bit! - Updated javadoc. - 0.3 - 6th March 2005 - Fixed longitude sign (should have been negative for longitudes west of the Greenwich Meridian - Updated Javadoc - 0.2 - 13th April 2004 - Added uk.me.jstott.util.JulianDateConverter utility class to convert the date and time held in a Calendar object to a Julian date. - Changed time zone handling to use the TimeZone class. - Added SunTimesApp and SunTimesApplet to provide a GUI to calculate sunrise, sunset and twilight times. - Provided a link to the SunTimesApp application as a Java Web Start link. - 0.1 - 31st March 2004 - Initial Version.:14:27 GMT
http://www.jstott.com/jsuntimes/
CC-MAIN-2019-39
refinedweb
586
58.38
14 December 2007 14:41 [Source: ICIS news] LONDON (ICIS news)--Basell, National Gas Co of Trinidad and Tobago (NGC) and National Energy Corp of Trinidad and Tobago (NEC) will build a 450,000 tonne/year polypropylene (PP) plant in the Caribbean state, they said on Friday. ?xml:namespace> The three companies have signed a memorandum of understanding (MoU) confirming their intention to build and operate the fully integrated plant, Basell said in a statement. “The MoU includes the construction of a methanol plant, which will exclusively supply a methanol-to-propylene [MTP] unit at the complex,” it said. “The propylene produced at the complex will be the feedstock for a worldscale 450,000 tonne/year PP plant,” it added. The plant, which is to be undertaken in conjunction with MTP technology supplier Lurgi AG, is tentatively scheduled to come on stream
http://www.icis.com/Articles/2007/12/14/9087085/basell-trinidad.html
CC-MAIN-2014-52
refinedweb
142
52.94
You are browsing the Symfony 4 documentation, which changes significantly from Symfony 3.x. If your app doesn't use Symfony 4 yet, browse the Symfony 3.4 documentation.: Best Practice Best Practice Put the form type classes in the App\Form namespace, unless you use other custom form classes like data transformers. To use the class, use createForm() and pass the fully qualified class name: Validation¶ The constraints option allows you to attach validation constraints to any form field. However, doing that prevents the validation from being reused in other forms or other places where the mapped object is used. Best Practice Best Practice Do not define your validation constraints in the form but on the object the form is mapped to. For example, to validate that the title of the post edited with a form is not blank, add the following in the Post object:.
https://symfony.com/doc/current/best_practices/forms.html
CC-MAIN-2019-22
refinedweb
147
59.23
DSON Importer for Poser 1.0.0.56 is now available! DAZ 3D is pleased to announce the next General Release version of DSON Importer for Poser, version 1.0.0.56! What is new in this version? Do I need to update my copy? The 1.0.0.56 version is a bugfix release. It resolves several issues with the loading and/or handling of DSON format content. All new downloads of the product will be of this version. A Change Log with additional information can be viewed on the Documentation Center. Highlights are: - Fixed crashed when a morph specifies an incorrect number of deltas - Fixed support for geometry culling when a conforming figure does not graft - Fixed support for Point At - Fixed a crash when items are loaded from directories that are not mapped - Fixed resolution of nodes where node name does not equal asset id - Fixed read of rigidity rotation - Fixed support for OSX 10.5 For more information on the previous release (1.0.0.27), see this thread. I could wish old content was being updated faster, however I did manage to get the supersuit and the boots saved DSON compatable and made the PCF for them. This is done in Poser Pro 2012 64 bit, using materials I've purchased elsewhere, but I'm happy to say that the default suit works just like it does in DS, though finding all the various material zones to get things textured is something of a challenge. (Then again it is in DS too if you aren't using one of the presets.) This was also done without the newer DSON Importer, which I shall go reset after posting this. edit: Gotta add the picture. Apparently some of the older Genesis content cannot be easily be converted to DSON compatible format. Possible never will be. I feel your pain, some items not available in Poser format are the reason I got interested in Genesis to begin with. So I've taking to learning how to use Daz. Thanks, but: 1. Is the Poser single axis scaling now fixed ? (The blendzones) 2. Is the internal scaling now fixed ? (The morphform dials) Hi, today I have downloaded the new files and installed them. But now I can't load any Genesis figure in Poser 9 pose room. I get allway a little white window with this message: Traceback (most recent call last): File "F:\! 3D\POSER\DSON-Genesis\Runtime\libraries\Character\DAZ People\Genesis.py", line 1, in import dson.dzdsonimporter ImportError: No module named dson.dzdsonimporter DsonImporter is installed in Poser 9 Program folder, Essentials are installed in an external runtime I made for Dson Content. Before I have used the previous release (1.0.0.27) and was able to load figures and content. Do you have any ideas? You installed the new files in the exact same locations as the old ones? Yes This is my 3rd version of Dsonimporter I use. Do you know if the installer uninstalled the old files first? Thank you answer. Yes it did. I even have looked yesterday if Dsonimporter is in Addon folder - yes it is. Also the scripts are in there typical folder. I'm stumped because I've done everything as usual, and it's not the first DSon version and Essentials, which I installed. Also, I can not confuse files because I had downloaded the 3 files needed for my system to a new folder down. ----------------- Edit: I wanted to add the logfile but it is not allowed. And her are the Dson files in my Poser 9 python folder: C:\Program Files (x86)\Smith Micro\Poser 9\Runtime\Python\addons\dson\boost_python-vc100-mt-1_50.dll C:\Program Files (x86)\Smith Micro\Poser 9\Runtime\Python\addons\dson\DzDSONIO.dll C:\Program Files (x86)\Smith Micro\Poser 9\Runtime\Python\addons\dson\QtCore4.dll C:\Program Files (x86)\Smith Micro\Poser 9\Runtime\Python\addons\dson\__init__.pyc C:\Program Files (x86)\Smith Micro\Poser 9\Runtime\Python\addons\dson\__init__.py C:\Program Files (x86)\Smith Micro\Poser 9\Runtime\Python\addons\dson\dzdsonimporter.pyd C:\Program Files (x86)\Smith Micro\Poser 9\Runtime\Python\poserScripts\ScriptsMenu\DSON Support\Importer Preferences....py C:\Program Files (x86)\Smith Micro\Poser 9\Runtime\Python\poserScripts\ScriptsMenu\DSON Support\Transfer Active Morphs.py C:\Program Files (x86)\Smith Micro\Poser 9\Runtime\Python\poserScripts\ScriptsMenu\DSON Support\SubDivision\Set SubDivision Level 0.py C:\Program Files (x86)\Smith Micro\Poser 9\Runtime\Python\poserScripts\ScriptsMenu\DSON Support\SubDivision\Set SubDivision Level 1.py C:\Program Files (x86)\Smith Micro\Poser 9\Runtime\Python\poserScripts\ScriptsMenu\DSON Support\SubDivision\Set SubDivision OFF.py For some reason, Poser can't find or can't see that directory.where dsondzdsoninporter is located. Make sure that it hasn't lost the DSON-Genesis library. Thank you for your ideas. I remove my Dson-Runtime in Poser GUI at Library and readd it. But no results. Than I have copied the Dsonimporter files from Python folder in my external runtime - the same failure message agai at try to load Genesis figure. Is DAZ aware the Poser library is still locking up in Poser Pro 2012 after loading Genesis/Genesis content to a scene? It doesn't happen immediately, but invariably while working on a scene with Genesis, the library no longer responds. They probably aren't, because not everyone useing PP2012 is having that problem. I use PP2012 (64bit) and I didn't have the problem when working on that image I posted. I've never had that problem that I've noticed, actually. I've never had a problem either Many people did and reported it and DAZ confirmed it. There were two workarounds. One was to set the Library to External rather than embedded, the second was to use Shaderworks Library. Which is cool, and I use it when I want to do something the Poser library can't do. But, I don't like the External Library. And ... since Genesis is the only thing that causes the Library to freeze, I don't need the Library to be External when I'm using Poser content. But, changing the Library from Embedded to External or from External to Embedded requires a relaunch of Poser. The second or third release supposedly fixed the Library freezing. I think the second introduced the freezing for some people who had not initially had that problem. It IS improved, and I can work on a scene with Genesis in it for longer before the Library freezes, but eventually, it always does. It's another one of those little gotchas that affect some computers and don't affect others. Who knows. Now that my XPS warranty is expired, it's time to replace it. The new computer may be ... immune ... to the frozen Library. Although ... something else could be broken instead ... Computers. Gotta love them. They make our lives so much more interesting. . Another option is to go into the imprter's preferences (in the Scripts>DSON Importer menu) and turn off the Show progress option. You won't get a progress bar during import, but your library should remain unfrozen. I just replaced my XPS 410 with XPS 8500. Can't wait to try rendering with the new PC. Did that when it was first suggested, but my Library still freezes. I'm looking at the Dell Precisions. There are entirely TOO many choices though. Which is good, but it means it will probably be several months before I decide on the build. I just replaced my XPS 410 with XPS 8500. Can't wait to try rendering with the new PC. Have you bug reported that? The devs need to know about it. Do I open a help request even though it's a bug and not a request for help? Have you bug reported that? The devs need to know about it. Submit it to Bug Tracker Oops. Where is Bug Tracker hidden? You will need to sign up for an account Yey, I would like to signup to tell about my problem but I have got no activation mail. :( Just the second time. I think my ISP don't like Daz mails. I even have problems to get all the newletters. Yes I did Daz to my adress book. But without a failure message I can't make anything. Today I have deleted my external runtime with Dson Content and have installed all new - Dsonimporter file and the both General Essentials. With the same result - Genesis.py can't find Dsonimporter. DSON importer needs to be installed to the Runtime in the main Poser program directory... Yes, I did. It is not my first Dsonimporter-Installation. That's why I am confused. I noticed from your sig that you are using Poser 9 on a 64 bit system. Are you by chance installing the 64 bit dson importer? If so that's the wrong one for you to use. You need the 32 bit importer. Poser 9 is not a 64 bit program, it's 32 bit. Thank you for your help. No, I have downloaded the 32 bit version of Dsonimporter and installed in Poser 9 Program folder like I allways have done the last times. Therefore I am confused because this time it doesn't work. That is... weird, and I have no idea what else it could be not that I'm an expert, but all the obvious we've covered. Have you tried the obvious, which is to just reinstall the DSON Importer?
http://www.daz3d.com/forums/discussion/comment/221003/
CC-MAIN-2017-43
refinedweb
1,605
68.26
UDL highlight hex numbers - Unc3nZureD last edited by In my language every single number is by default treated as hexadecimal, no prefix or suffix needed. num,bers can be strictly LOWERCASE numbers, everything else is treated as variable. I’d like to detect and highlight these numbers. When it comes to operators, no space is necessary, so making: D=ea totally valid state. Hexadecimal numbers can have question mark (as wildcard) as part of them, so actually I can treat them as a valid hex character (it would be even better and amazing to handle differently, so I could use different color for wildcards). I’ve tried every combination I could using the UDL GUI, but I couldn’t find a well-working version which works in every case. Do you have any good idea how could I achieve this highlighting? Or is it impossible to do with UDL? Thanks a lot! It sounds like one of those areas which aren’t possible to solve by UDL but I have to admit that I didn’t fully understand how your language works. A.) Do you mind sharing some examples of how it is and should look like? B.) I have something in mind which would involve pythonscript plugin.Would you be interested in? If yes, I would need, again, some example data for testing. - Unc3nZureD last edited by Sure. It’s mostly some custom and private thing, which I can’t fully reveal (company related), but I’ll try to give some example which might make you understand some basics. ; Comment. These all are checks if variable D equals to the hex number given on the right D==de07ed76 ; 0xde07ed76 D==e ; 0xe D == 123 ; 0x123 - note that spaces doesn't matter ; This is some pattern finding code. Everything is treated as hex and ?? marks are allowed. Note that spaces are UNNECESSARY and OPTIONAL here. [3ff] ( 68 ?? ?? 40 00 ) I hope you can more or less understand me now :) I’ve never tried pythonscripted plugin, but sure, if it’s interesting, then why not? :) Being back to work, partially, lunch break over :-) Will come back later this day So what I was thinking about is to use the UDL as the main lexer while using this script to do additional colorings. To use this script you need to install PythonScript plugin, create a new script, paste content from below, safe it. Then modify it to your needs - see configure section for more details. Basically add/modify the regexes and more important modify the lexer name self.lexer_name = 'User Defined language file' Once done, save it and run it. It should, hopefully, go hand ind hand with your UDL I have added two regular expressions just to show you how it can be extended. From the problem given, I assume you only need the second one. from Npp import editor,): editor.setIndicatorCurrent(0) editor.setIndicatorValue(color) editor.indicatorFillRange(pos, length) def style(self): start_line = editor.getFirstVisibleLine() end_line =, lambda m: self.paint_it(color[1], m.span()[0], m.span()[1] - m.span()[0]), 0, start_position, end_position) def configure(self): SC_INDICVALUEBIT = 0x1000000 SC_INDICFLAG_VALUEFORE = 1 editor.indicSetFore(0, (181, 188, 201)) editor.indicSetStyle(0, INDICATORSTYLE.TEXTFORE) editor(79, 175, 239) | SC_INDICVALUEBIT)] = r'\b([A-F0-9]+[\s\?]*)+\b' self.regexes[(1, self.rgb(189, 102, 216) | SC_INDICVALUEBIT)] = r'\b([a-f0-9]+[\s\?]*)+\b' # defining the lexer_name ensures that # only this type of document get stlyed # should be the same name as displayed in first field of statusbar self.lexer_name = 'User Defined language file' = None()
https://community.notepad-plus-plus.org/topic/17022/udl-highlight-hex-numbers/4?lang=en-US
CC-MAIN-2020-34
refinedweb
591
65.32
12110/can-use-inherited-contract-in-artifacts-require-statement Suppose I have 2 contracts A and B and I inherit B from A and add some code in contract B, when I use the artifacts.require() statement, should I include both the contracts or including only B will be enough? Will the following code be fine to use A and B? var B = artifacts.require("B"); module.exports = function(deployer) { deployer.deploy(B); }; Yes, this code will work just fine. When you inherit a contract, only one contract will be created during deployment. So when you inherit a contract then the child contract will have the contents of the parent contact. Look at this example and understanding how you can implement it: pragma solidity ^0.4.18; contract A { uint256 public balance; function() public payable { balance = msg.value; } } contract B is A { uint256 i; A a; function B(address _a) public { a = A(_a); function receiveForParent() public payable { a.transfer(msg.value); function getParentBalance() public constant returns (uint256) { return a.balance(); You cant access/embed real world data using ...READ MORE In am not sure about that, but ...READ MORE Right now, the Hyperledger Composer supports only ...READ MORE Yes, it is possible. You can look ...READ MORE Summary: Both should provide similar reliability of ...READ MORE This will solve your problem import org.apache.commons.codec.binary.Hex; Transaction txn ...READ MORE This was a bug. They've fixed it. ...READ MORE you can use const Socket = require('blockchain.info/Socket'); const mySocket ...READ MORE There are three ways you can do ...READ MORE OR Already have an account? Sign in.
https://www.edureka.co/community/12110/can-use-inherited-contract-in-artifacts-require-statement
CC-MAIN-2021-10
refinedweb
272
62.14
Service Release 1 for Visual Studio Team Edition for Database Professionals introduces the concept of “database references” which allows you to represent and resolve 3 and/or 4-part name usage inside a database project. Database references are conceptually the same as assembly references inside a C# or VB.NET projects; they allow you to reference objects from your database project that live inside an other namespace (database). Database references can be established between two or more database projects or between a database project and a dbmeta file or a combination of the two using project and dbmeta references. A dbmeta file is a new output introduced in Service Release 1, it is a single file that describes the schema represented in a database project, this representation can be used to resolve references to objects living inside that project without the need to have access to the project (dbmeta files are analogs to asmmeta files in the .NET world). Adding a database references automatically updates the project dependencies, so that the referenced project builds before the referencing project. Example: Creating a database reference Lets walk through the steps to creating and using database references. We are going to create a project and load the AdventureWorks schema in to this. Add a second project to the solution, to which we add objects that reference AdventureWorks. At first hand this will cause an error and some warnings because the references cannot be resolved. Then we will add a database reference between the projects and fix-up the 3-part names references to leverage the database reference. The result will be that the error and warnings will disappear as expected. Step 1: Create a database project First we will create a new database project using File -> New -> Project…, in the New Project dialog choose Database Projects -> Microsoft SQL Server -> SQL Server 2005, name the project AW. Step 2: Import schema In order to get something we can reference, we will import the AdventureWorks schema using: right-click on the AW database node, choose “Import Database Schema…” If needed create a connection that points to an existing instance of AdventureWorks on a SQL Server 2005 server, select the connection in the “Import Database Wizard” dialog and hit Finish. Now we have a project that reflects all the schema objects as exist inside the AdventureWorks database. We can browse through the schema using Schema View or visualize the files that are created as result of the “Import Database Schema” using the Solution Explorer. Step 3: Add second database project to the solution Lets add a second database projects, which we will name AWRef, using File -> Add -> Project… in the New Project dialog choose Database Projects -> Microsoft SQL Server -> SQL Server 2005, name the project AWRef. After we added the second project we should see something like this inside the Solution Explorer. Step 4: Import objects in to second project Next we will add two new schema objects to the AWRef project which are both referencing the AdventureWorks database. We will add a view and a procedure, since procedures are deferred name resolved and view mandate that the object(s) referenced are existing at the time of creation, hence these two objects will test the two different use cases demonstrating resolving both hard and soft references across a database boundary. Download the AWRef.sql script file and save it to C:\ or some other location (as long as you remember where:). This script file contains the definition of the VIEW and the PROCEDURE we are going to add. We will add the objects by using “Import Script”, using: right click on the AWRef project node choose “Import Script…” point to the location where you saved the AWRef.sql file and hit Finish. The import causes the two new objects to be added to our project; a VIEW named [dbo].[rvSalesPerson] and a PRCOEDURE named [dbo].[ruspUpdateEmployeeLogin]. If you check your “Error List” in Visual Studio you will find that we now have 1 error and 26 warnings. NOTE: You might not have an error, because you have a local copy of AdventureWorks on your local SQL Server instance which is used for design time validation. Step 5: Add database reference So far we have been setting up our environment, now it is time to start using Database References. In order to resolve the cross database references in the AWRef project to the AdventureWorks database we need to add a database reference in the AWRef project to the AW project, since it contains the definition of the AdventureWorks database. NOTE: The key thing to understand is that the reference is made to the content of the schema container, not to the name of the scheme container. In other words it does not matter that the project is named AW, as long as the content of the project matches the objects referenced in this case AdventureWorks. We can add references in two ways: through the project property pages or via the Solution Explorer References node. We will add the reference by right clicking on the References node in Solution Explorer of the AWRef project This brings up the Add Database Reference dialog, where you can select another database project which is part of the same solution or point to a dbmeta file. After you defined the which entity you want to add a reference to you need to define server and/or database variable names and values that will be used to abstract the 3 or 4-part name references. If the database reference is used to represent a database to database reference on the same instance, you only need to specify a database variable, specifying a server variable will actually causes a failure, causing the 3-part name reference not to be resolved. When you need to model a 4-part linked server reference, you need to specify both a server and a database variable. In this case we are only dealing with a database, 3-part name reference, so we create a database variable name $(AW) and a value AdventureWorks We also check the check-box at the bottom of the dialog named “Update the existing schema object definitions and scripts to use the database reference variables”. This will kickoff a special refactoring of the code, which will find all 3-part name references in this case that are using [AdventureWorks].[<schema_name>].[<object_name>]. Like rename refactoring this will show a dialog which highlights all changes, like this: Choose Apply to make all the changes suggested by the refactoring step. If you go back to the project property page named References you will see the definition of the database reference, the variable definition and variable value and a pointer to the dbmeta file. Creating a project to project reference is an indirect method of assigning a dbmeta file. Step 6: Check the project dependencies Now that we have setup the project to project reference, check the build dependency by right clicking on the solution file and choosing “Project Dependencies”. This will show you the Project Dependencies dialog which hosts two tabs, one display which project depends on which and a second tab displaying the Build Order. As you can see, adding the project reference, automatically changed the project dependencies and the build order. Step 7: Update the 3/4-part name references The refactoring step, launched from the Add Reference dialog has changed the T-SQL code, so that all references to [AdventureWorks] are replaced with [$(AW)] instead. The stored PROCEDURE therefore looks now like this: 1: CREATE PROCEDURE [dbo].[ruspUpdateEmployeeLogin] 2: @EmployeeID [int], 3: @ManagerID [int], 4: @LoginID [nvarchar](256), 5: @Title [nvarchar](50), 6: @HireDate [datetime] 7: WITH EXECUTE AS CALLER 8: AS 9: BEGIN 10: SET NOCOUNT ON; 11: 12: BEGIN TRY 13: UPDATE [$(AW)].[HumanResources].[Employee] 14: SET [ManagerID] = @ManagerID 15: ,[LoginID] = @LoginID 16: ,[Title] = @Title 17: ,[HireDate] = @HireDate 18: WHERE [EmployeeID] = @EmployeeID; 19: END TRY 20: BEGIN CATCH 21: EXECUTE [$(AW)].[dbo].[uspLogError]; 22: END CATCH; 23: END; If you would have forgotten to check the check-box your can always launch the same refactoring step via the Refactoring menu by choosing the “Rename Server/Database Reference…” refactoring. You can use this same refactoring type to undo the change and refactor variable reference back in to literals. NOTE: The variables used to identify 3 or 4-part names must to be placed between square brackets or double quotes. Failing to do so will result in a parser error (TSD2010) like this: Even though we added the reference and change the T-SQL code to reference the variables instead of the literal database name, we still have the error and warning. This is because we have not build the solution, so the dbmeta file for the AW.dbproj has not been created yet. So lets execute the remain step: BUILD the solution. After we have build the solution all errors and warnings have been resolved and we are ready to rock and roll! Step 8: Replace project reference with dbmeta reference (optional) No what if we do not want to use project to project references? We can add a reference directly to a dbmeta file. In order to create a dbmeta file you first have to create and build a project. Build will generate a <project>.sql and a <project>.dbmeta file. If this project does not change, or you do not want or simply cannot provide access to the project, you simply make the dbmeta file available and add the reference to that instead. In this step we will remove the project to project reference and replace it with a dbmeta reference. First we remove the reference from the AWRef project, by right clicking on the AW References node in Solution Explorer and choosing Remove. After that will remove the AW project from the solution, by right clicking on the AW database node in Solution Explorer and choosing “Remove”. Now we can no longer reference the project and we will add back a dbmeta reference instead, by right clicking on the References node in Solution Explorer again and choosing “Add Database Reference…” This brings up the same Add Database Reference dialog again, but we no longer have the ability to select an other project. Instead we select the AW.dbmeta file which is still on disk, since we did build the AW project once before. We define the variable $(AW) with the value AdventureWorks and we click OK. We do not have to refactor the code, since all 3-part references have already been changed in to variable references. Restrictions Since we added this functionality in a service release, we have some restrictions, below you will find an overview of the most important ones with an explanation on why these restrictions exist in the current system. - Cyclic references: we do not support cyclic references. For example project A references project B and project B references A. We currently cannot support this because we cannot determine the correct build order in a guaranteed fashion (which project to build first) and since we always build the full database and cannot perform partial builds, cyclic references would result in to infinite builds. - Only 3 or 4-part names are resolved, dynamic queries using for example OpenQuery, OpenRowset or OpenDataSource can not be resolved because the shape of the output can only be determined by executing the actual query against the actual targeted data source. Database references do not solve all causes for warnings and errors cause by cross database references, only those caused by explicit 3 and 4-part name usage, not for any of the dynamic query executions strategies. - Self references are not allowed, in other words a database project cannot not establish a reference to its self. The underlying reason is that database reference contribute to a different namespace then the current project. - You can not use the same database reference to resolve 3 and 4-part names at the same time; a database reference either contributes objects to a 4-part or a 3-part namespace. In real life you can have a linked server pointing to a local database on the same instance, this means you can reference objects in that database in two ways: through a 3-part name: SELECT * FROM D.S.T and through a 4-part name: SELECT * FROM SVR.D.S.T. If you have this situation you either have to change all your code to use a single access path, or you need two references. - Variables used to identify 3 or 4-part name references need to be placed between square brackets or double quotes, to prevent parser errors. Wrapping up: Now we are at the end of our exploration of database references, we have seen how we can define and use them. How can use project to project references and project to dbmeta references. We have also seen that in order to leverage the database references you need to change the T-SQL code making the 3 or 4-part name reference to use variables instead of literals. There is a huge advantage of using variables, because it will now allow you to deploy your database to any possible name combination, where before this information was hard coded in the T-SQL code. We hope this new functionality improves your abilities to use VSDBPro. -GertD Software Architect “DataDude” PingBack from I just catched up with a big bunch of news around the Database Professional and I am more then excited Thanks for posting this, it was a real lifesaver for me. As a matter of fact, I strongly recommend that the informaiton on this page be turned into one of the "Introductory Walkthroughs" in the MSDN documentation for VSTE for DBPro. Many database administrators are going to struggle with modifying their cross-database references to work in a disconnected environment. What would be the best way to handle a project made up of 4-5 databases, all of which reference some part of another? That is my current dilemma. I was hoping that I’d be able to work with this, but cannot see any way currently to set this up based on what I’m reading – that would result in "cyclic references". Is there something in the works for this in the future? Is there a way to work around this limitation? I completely understand why the limit is there – just wondering what we can do to work within our current configuration. Thanks. -Peter Schott Looks like I’m out of date on some of my references. In the initial release of VSTE for Database Professionals Lately we have been getting questions about self-referencing database calls. What I mean with that is Lately we have been getting questions about self-referencing database calls. What I mean with that is Is there a way to set the database reference variables from MsBuild? e.g. when I deploy a database from the build I’d like to set the database reference databasename to a different value.
https://blogs.msdn.microsoft.com/gertd/2007/07/27/database-references/
CC-MAIN-2016-36
refinedweb
2,512
56.08
This Photon Particle tutorial covers how to use Particle to develop an IoT project. This photon tutorial describes several aspects that can be useful when building an IoT project using Particle Photon. As you may already know, Particle Photon is an interesting IoT device that can be used in several scenarios. This Photon Particle tutorial describes several use cases where we can use this IoT device. What will you learn? - How to get started using Photon - Building your first Photon project integrating it with BMP280 - Connecting Photon to the cloud and get the temperature and pressure - How to use Particle Photon with Neopixel LEDs and control it What is Particle Photon? Before starting this Photon tutorial, it is useful to describe briefly what is Photon Particle and its main features. Particle Photon is a small IoT device with Wi-Fi built-in that can be used to build IoT projects. Particle Photon is part of the Particle ecosystem that offers an integrated environment to build and deploy IoT projects. Moreover, it supports cloud connectivity so that we can control Particle Photon from the cloud. From the hardware point of view, Photon has a Cypress Wi-Fi chip and STM32 ARM Cortex M3 microcontroller. As we can see later, Particle offers a Web IDE to develop IoT projects and a desktop IDE. Getting started using Particle Photon When we get for the first time, before using it, it is necessary to configure it and connect it to the Wi-fi to unleash the power of the Photon. Follow these steps: - Go to setup.particle.io - Download the file HTML - Open the file Once you opened the file, connect to the Photon Wifi: Configure the Wifi credential to connect to your Wifi: Finally, you can configure the name of your Photon: That’s all. Your device is ready: Connecting Photon to BMP280 sensor Once the Particle Photon is configured, we can develop the first project of this Particle Photon tutorial. There are two different options to start developing an IoT project: - Using Web IDE - Using Desktop IDE It is up to you to choose the one you prefer. In this Particle Photon tutorial, we use the Desktop IDE, anyway you can do the same things using the Web. This first project uses a temperature and humidity sensor (BMP280). Let us connect the BMP280 to the Photon, the picture below describes how to do it: This sensor is an I2C sensor so we need four different connections; - Vcc (+3.3V) - GND - CLK - SDA Open your IDE and start coding. Before using the sensor it is necessary to import the library that handles this sensor. You can do it using the Library Manager and looking for the sensor as shown in the picture below: Then add the library to your project. That’s all we are ready to use the sensor. Code language: PHP (php)Code language: PHP (php) #include <Adafruit_BMP280.h> Adafruit_BMP280 bmp280; double temp; double press; void setup() { Serial.begin(9600); if (!bmp280.begin()) { Serial.println("Can't find the sensor BMP280"); } Serial.println("BMP280 connected!"); } void loop() { temp = bmp280.readTemperature(); press = bmp280.readPressure(); Serial.println("Temperature ["+String(temp)+"] - Pressure ["+String(press)+"]"); delay(1000); } This simple Photon code reads the temperature and the pressure detected by the BMP280. Click on the flash icon and wait until the firmware is flashed Open the serial console and check the current temperature and pressure. How to connect Particle Photon to the cloud and get the temperature and pressure It is time to connect the Photon to the cloud and get the current temperature and pressure. Particle Photon has an interesting feature that simplifies the cloud connection. As you remember, we have covered how to connect Arduino to the cloud using API library, well we can do the same in a really simple way. In this Photon tutorial, we want to access the temperature and the pressure from the cloud. To do it, let us modify the code shown above in this way: Code language: JavaScript (javascript)Code language: JavaScript (javascript) void setup() { Serial.begin(9600); Particle.variable("temp", &temp, DOUBLE); Particle.variable("press", &press, DOUBLE); if (!bmp280.begin()) { Serial.println("Can't find the sensor BMP280"); } Serial.println("BMP280 connected!"); } To publish a variable to the cloud, it is necessary to use Particle(variable name, reference to the variable, variable type) In the example above, the temperature is published as temp and the pressure is published as Now it is possible to read, from the cloud, the variable values using a browser. Before doing it, it is necessary to have the device ID and the authorization token. To this purpose, go to the console and you should see your connected device: To retrieve the authorization token go to the web console and the to the settings: Now we can call retrieve the temperature using: How to use Particle Photon with Neopixel LEDs The next IoT project covered in this Particle Photon tutorial is how to control Neopixel LEDs to Particle and control it. You can use several Neopixel LEDs, in this Particle tutorial we are using the Neopixel Ring. Anyway, if you use a different type of Neopixel the connections remain the same. The picture below shows how to connect Neopixel to Photon: The code to handle these LEDs is simple:); void setup() { strip.begin(); } void loop() { // The core of your code will likely live here. for (int i=0; i < PIXEL_COUNT; i++) strip.setColor(i, 255,0,0); strip.show(); } In this example, the Photon turns on all the Neopixel LEDs using red color. Before using this code, it is necessary to import the Neopixel library into your project. How to publish data to the cloud Once we have connected the LEDs, we can control them remotely from the cloud. The way is almost the same we have covered in the previous paragraph and we have to use Particle.function . Let us modify the previous code:); int green; int red; int blue; void setup() { Particle.function("red", setRed); Particle.function("green", setGreen); Particle.function("blue", setBlue); Serial.begin(9600); strip.begin(); } void loop() { } void setStripColor() { Serial.println("Set color ["+String(red)+"," +String(green)+ "," +String(blue)+ "]"); for (int i=0; i < PIXEL_COUNT; i++) strip.setColor(i, red, green, blue); strip.show(); } int setRed(String r) { red = r.toInt(); setStripColor(); return red; } int setGreen(String g) { green = g.toInt(); setStripColor(); return green; } int setBlue(String b) { blue = b.toInt(); setStripColor(); return blue; } Notice that we added three different methods to handle the three different colors and exposed these methods using Particle.funcion. Now you can control remotely the LEDs from the cloud. More useful resource: MQTT Protocol Tutorial: Technical description Build an IoT soil moisture monitor using Arduino with an IFTTT alert system How to use Cayenne IoT with ESP8266 and MQTT: Complete Step-by-step practical guide How to use MQTT to publish data from Particle Photon In this last project, we want to publish the data acquired from the sensor using MQTT using Particle.publish. To do it, we will reuse the source code that reads data from BMP280 and we want to publish these values to the cloud. The code is shown below: Code language: PHP (php)Code language: PHP (php) #include <Adafruit_BMP280.h> Adafruit_BMP280 bmp280; double temp; double press; void setup() { Serial.begin(9600); Particle.variable("temp", &temp, DOUBLE); Particle.variable("press", &press, DOUBLE); if (!bmp280.begin()) { Serial.println("Can't find the sensor BMP280"); } Serial.println("BMP280 connected!"); } void loop() { temp = bmp280.readTemperature(); press = bmp280.readPressure(); Particle.publish("temperature", String(temp), 60, PRIVATE); Particle.publish("pressure", String(press), 60, PRIVATE); delay(1000); } In the loop() method, the Photon code publishes two events called: - temperature - pressure The first one is the value related to the temp variable, while the pressure is related to the press value. You can check the value published using the event console: Final considerations At the end of this Particle Photon tutorial, you hopefully gained the knowledge about how to use Particle Photon in different scenarios. You had the chance to verify its power and how simple it is building an IoT system.
https://www.survivingwithandroid.com/particle-photon-tutorial/
CC-MAIN-2022-27
refinedweb
1,355
54.93
I have a quick question on this code I'm working on. I'm trying to get a set of ints that act as possible word lengths from a dictionary text file I've been given. However, while the code seems to run, nothing ever gets printed. Including if I put a simple cout statement at the beginning of the code, nothing gets outputted. I have never seen something like this before and don't know what to google since I'm not getting an error code or anything. Some guidance would be greatly appreciated! Here's my main.cpp: #include <iostream> #include <set> #include <fstream> int main() { int wordLength = 0; std::string currentWord; std::ifstream myStream("dictionary.txt"); std::set<int> possLengths; while(!myStream.eof()) { myStream >> currentWord; possLengths.insert(currentWord.length()); } std::cout << "Welcome to hangman!" << std::endl; while(possLengths.count(wordLength) == 0){ std::cout << "Please enter a length for your word." << std::endl; std::cin >> wordLength; if(possLengths.count(wordLength) == 0) std::cout << "Sorry, I don't know any words of that length" << std::endl; } return 0; } And then the dictionary.txt is just a little under 130,000 lines with each line containing a unique word. If anyone could offer help, I'm really out of options.
https://www.queryxchange.com/q/27_55171404/nothing-prints-even-a-simple-cout-statement-outside-of-this-loop/
CC-MAIN-2019-13
refinedweb
210
68.87
#include <Teuchos_Handle.hpp> Inheritance diagram for Teuchos::ConstHandle< PointerType >: In writing derived types, it is usually simplest to use the TEUCHOS_CONST_HANDLE_CTORS macro to generate boilerplate constructor code. There are two modes of construction: construction from an existing RCP, ConstHandle<Base> h = new Derived(blahblah); Note that the first form with rcp() must be used whenever the object being handled has been allocated on the stack (using rcp(ptr,false) of course). Definition at line 66 of file Teuchos_Handle.hpp. Construct with an existing RCP. Definition at line 70 of file Teuchos_Handle.hpp. Construct with a raw pointer to a ConstHandleable. This will make a call to rcp(), thus removing that call from the user interface. Definition at line 73 of file Teuchos_Handle.hpp. The empty ctor will only be called by Handle ctors. Definition at line 80 of file Teuchos_Handle.hpp. Read-only access to the underlying smart pointer. Definition at line 75 of file Teuchos_Handle.hpp. Access to raw pointer. Definition at line 77 of file Teuchos_Handle.hpp. This function is needed in Handle ctors. The Handle ctors call the empty ConstHandle ctor and then set the pointer in the ConstHandle with a call to setRcp(). Definition at line 85 of file Teuchos_Handle.hpp. Protected non-const access to the underlying smart pointer. This will be called by the nonConstPtr() method of the non-const Handle subclass Definition at line 91 of file Teuchos_Handle.hpp.
http://trilinos.sandia.gov/packages/docs/r8.0/packages/teuchos/doc/html/classTeuchos_1_1ConstHandle.html
CC-MAIN-2014-35
refinedweb
236
51.34
This document attempts to clearly delineate the issues raised by embedding SVG in other documents and referencing SVG from other documents. Many of these issues are not unique to SVG, and suggest a need for a better mixed namespace document embedding and referencing framework. This document is a Note submitted to the W3C with the intention that it be used as a basis to further the work of embedding and referencing SVG from other documents. This Note has been produced by SVG Working Group representatives from SchemaSoft and represents the opinions of Philip Mansfield, Darryl Fuller, and Yuri Khramov, as well as information from e-mail discussion related to this topic with SVG Working Group members. SVG is a document format that can can be embedded or referenced by documents in other XML formats, and vice-versa. This can be done recursively, leading to complex multi-format content. Those other document formats are typically dialects such as SMIL, XHTML, or SVG itself. How SVG behaves when it is embedded within other SVG is fully defined, but how SVG behaves when it is referenced from SVG using the <image> element is not. How to embed XHTML with the <foreignObject> element is partially defined, but very little is specified about the resulting behavior. How XHTML is supposed to reference SVG with the <img> and <object> elements is defined, but how this behaves is incompletely specified. Furthermore, the existing specifications make no mention of using SVG as a background image format for XHTML. There are a number of issues that have not been methodically addressed. How are events handled? What is time zero for animations for a referenced or embedded SVG? How do the different DOM trees interact? How does CSS cascading work when the same property may have different interpretations in different namespaces? When is referenced or embedded content intended for rendering, and when is it not? While some of this is specified for the case of SVG embedding or referencing SVG, interacting with other document types such as XHTML and MathML needs to be looked at. We are already seeing mixed-namespace documents coming into being with work on modularizing XHTML and SVG, and the ability of browsers such as Mozilla and Amaya to handle XHTML and MathML mixed-namespace documents. Before we can come up with solutions, we need to clearly delineate the problems. Some of these issues have been addressed in different contexts, sometimes in conflicting ways. This also needs to be clearly documented. There are basically two main techniques of combining several documents: direct inclusion or referencing. Direct inclusion of one <svg> element into another does not generate different documents, and is covered by the SVG 1.0 W3C Recommendation. Direct support for SVG and MathML in some browsers allows for mixed-namespace situations. We will use the terms "hosted" and "hosting" documents to cover both situations, "referenced" and "referencing" documents for the referencing case, "embedded" and "enclosing" documents for the inclusion situation It is possible to differentiate between these two situations and define different behavior for them with regard to the issues described below. Moreover, the existing applications (cf. Adobe's SVG Viewer) do have different behavior depending on the way one document is embedded into another. Some mixed-in namespaces are not intended to be directly rendered; for example, they might describe metadata, schemas or processes. Even formats that are normally rendered might be mixed with the intent that they are not rendered. For example, the intent of a mixed X3D/SVG file might be that the SVG is the current 2D view of the 3D model encoded in X3D, and that 2D view may change via the DOM based on user interaction. In that case, it would be a mistake to render both the X3D and the SVG. Rather, one or the other should be rendered. Which one? If the intent is to do the rendering through DOM-driven SVG, then the SVG should always be rendered. However, if the intent is that the SVG is just a fallback for those who do not have an X3D-savvy user agent, then which gets rendered is dependent on the particular software setup. In general, when encountering two mixed namespaces, how does a User Agent decide whether to render both, one, or the other? Some languages have some support for this choice. For example, a hierarchy of fallback renderings can be specified with XHTML's <object> tag or with SVG's <switch> tag. However, there is no universal mechanism, and the models for how this works are not necessarily consistent from language to language. When rendering a document hosted by SVG, the hosted document's viewport may be constrained or transformed in a number of ways. Consider the case of the <image> element. Height and width may be specified by the hosted document format, or not. The aspect ratio may be preserved by the hosting document, or not. In general, the software that renders the SVG needs access to height, width and any other constraints on the hosted content, which it might not have if a separate piece of software takes over the parsing and rendering of that format, and does not have this information in its API. Next, consider XHTML hosted by SVG. Must it occur within a <foreignObject> element to be rendered? The stated aim of this element is to pass processing on to the hosted language processor, as with in-place activation. However, the XHTML may be in the context of a transform that makes the viewport not rectangular and upright. It may be rotated and/or skewed, for example. What happens when in-place activation requires the hosted content to be strictly upright and rectangular, as with XHTML, because its processor cannot handle the rotation or skew? Does it figure out the largest upright rectangle that will fit into the rotated and skewed viewport and render into that? Or must there be an API to pass pixels to the hosting SVG processor, so that the skew or rotate can be performed on pixels? If so, then all interaction with the HTML is presumably lost (hyperlinks and imagemaps, select/copy/paste text, scripted dynamic behavour, etc.) Consider the case of SVG being referenced by the <img> element of XHTML as specified. If the height and width are not specified by the <img> element, the SVG must communicate its size to the XHTML rendering engine. If the width of the SVG is "100%", then there has to be communication back and forth between the two engines. If the height of the SVG is "100%", what does that mean? In general, section 7.2 of the SVG 1.0 specification describes a negotiation process between hosted SVG and the hosting language, and is only specific about how that negotiatiation proceeds in isolated cases like CSS2 applied to HTML. There remain issues of generalizing this and issues of how to implement the two-way information exchange in independently written software modules. SVG has its own rules for Z order, composition, and opacity. It applies them when it references things with the <image> tag. A referenced PNG may have filters applied to it, be made translucent, and have other SVG from the hosting document both "in front of" and "behind" it. XHTML has completely different Z-order and compositing rules. Images (perhaps SVG) referenced by XHTML may be behind things (background images) or on top of things (on top of background image), but there isn't any more layering than that. There is no concept of filters or opacity. So what happens with SVG hosting XHTML or MathML? Can the HTML rendering engine be expected to apply an SVG filter effect? Whether or not this is possible may depend upon the underlying engine. Is it a plug-in like the Adobe viewer or is it an all in one engine like Mozilla? Certainly putting an SVG image on top of an XFORMS buttons seems to be a natural application, as does using an SVG image as background for XHTML or XHTML as a bit of wrapped text on top of an SVG rectangle or circle. Different XML dialects have different rules for Z-order, capabilities for compositing, etc., so how to you mix them or resolve conflicts? The XML Event specification () does not define a mechanism for event capture and bubbling between two different documents. This is clearly an issue for embedded documents. It is relevant for referenced documents as well; for example, in the case of an image map on an XHTML <object> that refers to an SVG file. There are several possible approaches for setting up this mechanism. Some options are described below. Apparently, the SVG Working Group inclines toward option 2. But shall we take into account Z-order if the svg document is used for a background image, say as the background for XHTML? The Z-order for hosting and hosted documents does not always have the hosted element on top. Consider the case of SVG referencing SVG using the <image> element. The hosting document could have things both on top of and behind the hosted SVG image. Z order seems to be more natural in this case. There are several possibilitites in setting up DOM interoperability options. One is not to allow any interoperability at all; a second is to glue the hosted document to the node of the hosting document as described in option 1 of the previous section. Another option is to have DOM interoperability for included documents but not referenced documents. Note that there are already people doing inspired hacks with the Adobe plugin to call into the DOM of an HTML document from script within an SVG document that it references via the nonstandard <embed> tag (the reverse is more straightforward due to the nature of browser plug-ins). See Kurt Cagle's Interactive SVG presentation for examples of this. Client-side script would be much more reliable if this sort of mechanism were pre-planned, uniform and vendor-neutral. If your hosting and/or hosted document have metadata, then what is the scope of that metadata? There are a couple of different options here. If you treat metadata scope like CSS Inheritance, and the trees are just "glued together", then the metadata of the hosting document applies to the hosted document. This may be less than desirable. Consider the case of SVG hosting SVG. Your hosting SVG could be a map, with metadata describing the co-ordinate system as being geographic. Your hosted SVG could be a company logo for a gas station that is to be put on the map. Surely the logo's co-ordinate system is not geographic! Furthermore, what if your map also hosts some MathML? The metadata about geographic co-ordinates does not seem to apply. Worse yet, you could have metadata that has the same name but conflicting meanings in the hosted and hosting document. The alternative is to have metadata scope not cross the host/hosting barrier, or to only cross this barrier in the case of embedded content. But surely there are cases where it would be appropriate for the hosting document's metadata to apply to the hosted document. At first glance, it would seem sensible that CSS inheritance passes seamlessly through included documents, but not through referenced documents; but things are not so simple. CSS presentation attributes may have different interpretations or even different syntax in different namespaces, so what does it mean when you pass through the namespace barrier? A trivial case of this could be font. In an SVG document, you could define your own SVG font, and even name it the same as some system font. If you then inherit into embedded XHTML, what does that font name mean? If there is further SVG hosted by that XHTML, does it pick up the outermost meaning of the font name, or the meaning that it had in the XHTML name space? How do you resolve clashes? Even the notion of what stylesheets apply to what content is a problem for software. A generic CSS processor would recognize use of the xml-stylesheet processing instruction, and would have to apply such stylesheets to the whole document, no matter how many embedded namespaces it has. On the other hand, there are grammar-specific ways to reference CSS, such as the <style> element, style attribute and presentation attributes of SVG, or the <link rel="stylesheet"> construct in XHTML. This sets up a situation in which a CSS processor would have to know about all grammars that might use it. It also raises the question of whether some stylesheets apply to the whole document and others only to the parts that directly reference the stylesheets. Different XML dialects have different mechanisms for timeline synchronization. SMIL has time containers for specifying "time zero" for encapsulated items. SVG defines a single "time zero" for the entire document. XHTML, to the best of our knowledge, has no concept of "time zero" or synchronization, and has a tradition of progressive layout, so that each element's "time zero" is effectively its load time. So what happens when you have different hosting and hosted dialects? Furthermore, is the behavior different when hosted documents are referenced rather than embedded? There are two main options here. The first is that the hosted document honors the hosting document's concept of time zero. So if a SMIL document is hosting an SVG document within a time container, the SVG document's time zero is whatever the SMIL time container says it is. If XHTML is hosting SVG, time zero is as soon as the SVG document loads. If an SVG document is hosting an SVG document by referencing it with the <image> element, then the hosted SVG document's time zero is when the hosting document finishes loading. But what if the hosted document "isn't ready yet"? The other main option is that the hosted document determines its own time zero, and informs the hosting document of it. This seems very natural for SVG hosting SVG by reference, but seems backwards in the case of SMIL hosting. Many XML dialects have ways of hosting other documents by reference right now. XHTML has the <img> and <object> tags, SVG has the <image> and <foreignObject> tags. But how do you embed and how do you validate the results? Currently, you wind up having to write custom DTD's every time you want to host a new kind of XML by embedding if you want to validate. You could use "ANY" for the content model of embedded types, but then you couldn't validate embedded documents. Even so, for pragmatic reasons, we often have to define how an XML dialect should be hosted by some other. Consider the case of SVG and XHTML. It was SVG that had to specify what was the "right" way for it to be hosted by XHTML (using the <img> or <object> tag). Had this not been specified, then every browser would do it differently. Even so, this is not a consistant definition. If <img> works, why not the background image attribute? Worse yet, popular implementations currently only support one of these methods (the <object> tag). This seems rather ad-hoc and painful. It would be much nicer if there were one cross-language way of referencing and embedding, just like xlink. Also, do you allow hosting document fragments or just complete documents? If you allow hosting document fragments, you could wind up with "tag soup". But there are very natural cases where you would just want to host a fragment. SVG already allows hosting just fragments of other SVG documents. Many SVG elements can use a URL to reference just a part of another document. It seems very natural to have libraries of markers, patterns, symbols, etc. packaged up in single files. This document is intended to raise issues and requirements for further SVG language design and for the more general design of cross-language features. We do not attempt to propose detailed solutions to every problem raised. However, we will suggest a general direction for the SVG Working Group to take. To handle the issues we have raised obviously requires a closer degree of cooperation between the software modules processing each of the documents involved (the hosting document and the referenced or embedded document). Modules have to advise each other of their capabilities and restrictions, preferably through a standard API. Software can communicate capabilities while running, but XML can only be used to encode requested behavior - it is up to the software to resolve how that information is used. For instance, enclosing SVG may advise embedded XHTML of a skewed viewport, but it is up to the XHTML processor to decide whether or not it can skew. The reality is that each grammar will have its own unique requirements, so one must allow a range of behaviors to be requested and have defined fallbacks, rather than mandating a single behavior in each case. The problem of software co-operation for document embedding is not a new one. It has been solved before in various component oriented document models such as OpenDoc and Bonobo. It is our opinion that we should follow these examples in coming up with our model for document embedding and co-operation. We believe that there is a need for a separate cross-language XML specification for interaction of documents and document fragments in different namespaces, much as there are other cross-language specifications such as XLink, XPointer, DOM and XML Namespaces themselves. Right now, the SVG Working Group is facing these problems, and SVG may have some mechanisms to address these issues. Therefore the SVG Working Group may be appropriate initiators of this effort. [DOMLevel2Events] "Document Object Model Level 2 Events Specification", Tom Pixley. W3C Recommendation 13 November, 2000 Available at
http://www.w3.org/2004/04/webapps-cdf-ws/papers/SchemaSoftEmbeddingPaper.html
crawl-002
refinedweb
2,982
53.81
Agenda See also: IRC log [fyi] i formally proposed to PF that HTML5 add @role for IMG <ShaneM> Scribe: ShaneM Registration is open. Who is planning to come? *crickets* <oedipus> GJR: coming good lord willin' and the crick don't rise Tina is unsure if she will have time. Steven thinks it will be very sparsely attended. Will see about organizing a teleconference. XSD - not due for another month yet. Mark is doing it hopefully. We went to PR and it went to a vote. *Steven stretches it out to increase the suspense* There were a reasonable number of member companies who voted. We did get somebody who sent comments that are not suitable at this point in the review. They are more last call comments. The person asked for 5 changes to the spec. 1) Please define a clear path between CDF and XHTML M12N 1.1 2) What's the goal of referencing Unicode 4.1? 3) Please update the reference to the XML specification to 4th edition. 4) Update the reference to namespaces in XML to the second edition. 5) Informative references points to XLink PR instead of Rec. <inserted> ScribeNick: oedipus <inserted> ScribeNick+ oedipus SM: not our responsibility to do anything with regard to spec in regards CDF ... reference unicode 3.1 and mention that it is updated; not sure that's a change SP: normative reference - where referenced from? SM: (checks) - reference in Special Character Entity Set defs as symbol set def; in the DTD implementation itself, defining unicode points that map to ASCII SP: "new names do not clash with..." "names are unicode names" - that's the only place TH: supposed to reference unicode per se? what does requestor want us to reference? SP: comment based on references section - different from XHTML2 which references 3.2 SM: previous version referenced 3.1 - don't recall changing SP: answer - no difference - just reference most up to date one <ShaneM> We can reference 4th edition of XML. TH: version we refer to, versus version requestor wants to reference ... if changed drastically in XML, then could understand change, but don't reference most up-to-date all the time SM: provide links to latest version often ... these are a year-and-a-half old ... no diff marked version of namespaces spec TH: see point, but as dev, prefer when document a refers to document b.x - can compare better knowing dates and changes <inserted> ScribeNick+ ShaneM <ShaneM> Shane: The second edition of namespaces is harmless <ShaneM> Steven proposes we say we will not do the CDF thing, that it does not matter we reference 4.1, and that we will make the changes to the other references. <ShaneM> Steven will get his OK on this. <ShaneM> ACTION: Shane to update the references. [recorded in] <ShaneM> ACTION: Steven to reply to the commentor. [recorded in] <ShaneM> ACTION: Steven get M12N published as a Recommendation with the changes applied. [recorded in] <ShaneM> ACTION2=Steven to reply to M12N commentor about how we will address their M12N objections. <ShaneM> All good - everything is fine. <inserted> ScribeNick: oedipus SM: ian jacobs asked me to make a change so that our implementations are in markup space, but a fixed version in TR space; can do this and will do it before Rec ... fits in with "cool URIs don't change" ... have to do for M12n, anyway SP: changes to XML 4th edition, and so on? SM: just thought it was a friendly observation, so made change to references <ShaneM> ACTION: Shane to update XHTML M12N DTD links so that there are versions in TR space and MarkUp space, as per XHTML Basic 1.1 [recorded in] SM: 2 diff collections of SVG things - one bag not formally responded to; made most changes except for one request ... topic this week about adding @order to access - dougS was at last week's meeting; thought we had agreement, then reconsidered; took doug's wording and tweaked it and he didn' like it ... philosophic difference - ... i need to withdraw myself from decision path SM: Doug thinks should describe positive behaviors, when docs written in conflict with standard, and i think we shouldn't - describe either no behavior or negative behavior SP: agree - if conflict, just a bad doc and don't have to address SM: traditional approach - could go further SP: wants us to define what a duplicate id does SM: positively, and that should work in violation of the principles of the ML TH: if write broken documents, expect bad results SM: always said that underlying protocols rule; if dependent on HTTP, can't define behavior that violates HTTP, and so on; GJR: plus 1 to SM SM: doug's other point is that access module is useful outside of XHTML and XML and should define it so can be used in other places; don't have scope, is an XHTML module, SP: not chartered to do it; ocassionally extended boundaries (XML Events, for example) - not just to work with our stuff, but everyone's but that was within XML framework TH: that is the problem - within XML framework; scope Doug talks about is use in theoretical MLs that don't exist or are not XML based SM: what i heard was "how to use in HTML5" - want to use access in SVG, so should NOT be a problem with Access, because it is all XML SP: should work with SVG, think we are good to go GJR: plus 1 SM: doug proposed what happened when 2 diff IDs - added scenario to draft encased in @ signs <ShaneM>. SP: sounds like we are all of one mind that should define rules for broken stuff <ShaneM> in particular rules that have positive behavior. TH: no - really should NOT define rules for both content within scope of XML to satisfy needs of HTML5 <ShaneM> The current wording reads "Also note: When processing an invalid document, if there are duplicate ids, element groups based on targetid values may contain multiple values, just like those of targetrole values." SM: say wither "invalid document, behavior is undefined" or nothing SP: strongly feel NOTHING TH: agree GJR: violently agree SM: valid XML document shouldn't get to application layer anyway ... going to remove sentence <Tina> ... that was fun SP: agreed GJR: agreed <Tina> Agreed - tho I fell out. SP: since speaking about Access, whole bunch of comments from i18n <Steven> SM: one which most concerns me is keycodes and characters - isn't clear that this section (3.1.2.) didn't take into account keycode and key produced by pressing key (reads from post refered to above) TH: that is an application / implementation problem SM: character in document character set - how one gets to that is up to app TH: implementation side - use access key with this symbol and do this action - how implementation triggers accesskey, may be through eye-blinking entry - may press key-combo - whether press key combo to create specific char in specific char set, is implementation decision ... cites opera model GJR: agree that opera model best yet available TH: shouldn't say "have to press these specific keys" to get action; don't specify how, but structural construct is path to useability and accessibility ... shouldn't be Access Module that tells one how to get keystroke generated, just provide means ... should NOT define specific mechanism <ShaneM> The key attribute is an abstraction layer - how the user agent maps to that is up to the user agent. That's the whole point. TH: asian characters and glyphs from non-western scripts - user can use shortcut command to optimize functionality SP: agree entirely - we speak of characters, not keys -- way to get particular character is application-dependent ... different entry methods to enter extended / accented characters ... nothing we can say about it, save that it is UA dependent TH: use word "symbol" instead of "key" (GNOME on screen keyboard) <Steven> The invocation of access keys depends on the implementation. For instance, on some systems one may have to press an "alt" or "cmd" key in addition to the access key. TH: most often keypress, can be activated by other means, not in our scope, though SP: change to "or anything else that might happen" SM: or other user agent defined mechanisms for exposing shortcuts defined by access TH: can configure opera to trigger access key with mouse gestures <ShaneM> "or other user agent defined mechanisms that expose the abstraction made available by the access element and its key attribute." GJR: Opera+Voice accesskey implementation works TH: list of shortcuts in document SP: target change GJR: plus 1 TH: hard read at first, but covers it well, so ok SP: good ... 2 other comments from i18n ... comment 2 and comment 4 <Steven> SP: emphasizing visually role of key in label - problematic in non-western text SM: more basic problem with this text GJR: from UAAG SM: concern is we explicitly permit (and perhaps require) end users to over-ride mapping; how does that affect rendering if remap GJR: programmatically not an issue SM: agree, but have to state that TH: key="x" include x in label text, user changes to p, UA should remap action to p ... should we reverse this text - if UA understands access keys and can identify the key in the label text, should emphasize it for user ... right now says, author should include key - if user overrides key, then UA needs to pick up on it; but what if new key not in label? loose benefit SP: principle is to have keys all in one set and diff sets in diff modalities GJR: not opposed to anything said about access -- "visibility" in non-modal sense is stress of UAAG 2.0 drafting SP: more to discuss - continue on list <ShaneM> ACTION: Shane to go through I18N comments and propose specific changes. [recorded in] TH: have to discuss on list - other potential problems <ShaneM> ACTION: Tina to propose new text about how the access element @key value is exposed in labels. [recorded in] s/Access and SVG Request/Access Module: SVG and i18n Requests/Comments s/Access and SVG Request/Access Module - SVG and i18n Requests/Comments s/TOPIC: Access Module - SVG and i18n Requests & Comments/TOPIC: Access Module - SVG and i18n Requests/Comments This is scribe.perl Revision: 1.133 of Date: 2008/01/18 18:48:51 Check for newer version at Guessing input format: RRSAgent_Text_Format (score 1.00) Succeeded: s/references 1.2/references 3.2/ Succeeded: s/reference unicode 1.1/reference unicode 3.1/ Succeeded: i/SM: ian/ScribeNick: oedipus Succeeded: i/SM: not our/ScribeNick: oedipus Succeeded: i/SM: not our/ScribeNick+ oedipus Succeeded: i/Shane: The second/ScribeNick+ ShaneM WARNING: Bad s/// command: s/Access and SVG Request/Access Module: SVG and i18n Requests/Comments WARNING: Bad s/// command: s/Access and SVG Request/Access Module - SVG and i18n Requests/Comments WARNING: Bad s/// command: s/TOPIC: Access and SVG Request/TOPIC: Access Module - SVG and i18n Requests/Comments Succeeded: s/Access and SVG Request/Access Module - SVG and i18n Requests & Comments/ Found Scribe: ShaneM Inferring ScribeNick: ShaneM Found ScribeNick: oedipus Found ScribeNick: oedipus ScribeNicks: oedipus, ShaneM Default Present: Steven, ShaneM, Gregory_Rosmaita, Tina_Holmboe Present: Steven ShaneM Gregory_Rosmaita Tina_Holmboe Regrets: Roland Agenda: Got date from IRC log name: 13 Aug 2008 Guessing minutes URL: People with action items: shane steven tina WARNING: Input appears to use implicit continuation lines. You may need the "-implicitContinuations" option.[End of scribe.perl diagnostic output]
http://www.w3.org/2008/08/13-xhtml-minutes
crawl-002
refinedweb
1,918
56.49
How to use [ui Var].frame properly - NewbieCoder Im a noob and im not really sure how to use the ui.frame command correctly to assign where a button will be located. Here is a case to show a bit further into my question... import ui v = ui.View() v.frame = (0,0,400,400) v.name = 'test' b = ui.Button() b.title = 'My Title' b.background_color = 'white' b.border_color = 'blue' b.border_width = 1 b.corner_radius = 5 b.frame = (10,10,100,32) a = '' def tap(sender): a = sender.title print(a) b.action = tap v.add_subview(b) v.present('sheet') def tap(sender): new_frame = (sender.frame[0] + 10, sender.frame[1] + 10, sender.frame[2] - 1, sender.frame[3] - 1) sender.frame = new_frame @NewbieCoder, your code seems to work, so what is the challenge you have? Some key points about frame are that: - it refers to coordinates within the parent view boundscan be used to access parent’s internal dimensions - root views only become full screen after being presented flexcan be used to make the frame change as the parent view’s size changes
https://forum.omz-software.com/topic/5648/how-to-use-ui-var-frame-properly
CC-MAIN-2021-39
refinedweb
185
81.59
Getting started with Notification Hubs for Windows Store Apps Overview This tutorial by using the Windows Push Notification Service (WNS). When you're finished, you'll be able to use your notification hub to broadcast push notifications to all the devices running your app.: Microsoft Visual Studio Express 2013 for Windows with Update 2 or later. Register your app for the Windows Store To send push notifications to Windows Store apps, you must associate your app to the Windows Store. You must then configure your notification hub to integrate with WNS. If you have not already registered your app, navigate to the Windows Dev Center, sign in with your Microsoft account, and then click Create a new app. Type a name for your app and click Reserve app name. This creates a new Windows Store registration for your app. In Visual Studio, create a new Visual C# Store Apps project by using the Blank App template. In Solution Explorer, right-click the Windows Store app project, click Store, and then click Associate App with the Store.... The Associate Your App with the Windows Store wizard appears. In the wizard, click Sign in and then sign in with your Microsoft account. Click the app that you registered in step 2, click Next, and then click Associate. This adds the required Windows Store registration information to the application manifest. (Optional) Repeat steps 4–6 for the Windows Phone Store app project. Back on the Windows Dev Center page for your new app, click Services, click Push notifications, and then click Live Services site under Windows Push Notification Services (WNS) and Microsoft Azure Mobile Services. On the App Settings tab, make a note of the values of Client secret and Package security identifier (SID). Configure your notification hub Log on to the Azure Classic Portal, and then click +NEW at the bottom of the screen. Click on App Services, then Service Bus, then Notification Hub, then Quick Create. Enter a Notification Hub Name. Select your desired region and subscription. If you already have a service bus namespace that you want create the hub in, select your Namespace Name. Otherwise, you can use the default Namespace Name which will be created based on the hub name as long as the namespace name is available. Click Create a new Notification Hub. Once the namespace and notification hub are created, your namespaces in service bus will be displayed. Click the namespace that you just created your hub in (usually notification hub name-ns). On your namespace page, click the Notification Hubs tab at the top, and then click on the notification hub you just created. This will open the dashboard for your new notification hub. On the dashboard for your new hub click View Connection String. Take note of the two connection strings. You will use these later. Select the Configure tab at the top, enter the Client secret and Package SID values that you obtained from WNS in the previous section, and then click Save. Your notification hub is now configured to work with WNS, and you have the connection strings to register your app and send notifications. Connect your app to the notification hub In Visual Studio, right-click the solution, and then click Manage NuGet Packages. This displays the Manage NuGet Packages dialog box. WindowsAzure.Messaging.Managedand click Install, select all projects in the solution, and accept the terms of use. This downloads, installs, and adds a reference in all projects to the Azure Messaging library for Windows by using the WindowsAzure.Messaging.Managed NuGet package. Open the App.xaml.cs project file and add the following usingstatements. In a universal project, this file is located in the <project_name>.Sharedfolder. using Windows.Networking.PushNotifications; using Microsoft.WindowsAzure.Messaging; using Windows.UI.Popups; URI for the app from WNS, and then registers that channel URI with your notification hub. NOTE: Make sure to replace the "hub name" placeholder with the name of the notification hub that appears in the Azure Classic URI is registered in your notification hub each time the application is launched. In Solution Explorer, double-click Package.appxmanifest of the Windows Store app, and then in Notifications, set Toast capable to Yes: From the File menu, click Save All. (Optional) Repeat the previous step in the Windows Phone Store app project. Press the F5 key to run the app. A pop-up dialog that contains the registration key is displayed. (Optional) Repeat the previous step to run the Windows Phone project to register the app on a Windows Phone device. Your app is now ready to receive toast notifications. Send notifications You can test receiving notifications in your app by sending notifications in the Azure Classic Portal via the debug tab on the notification hub, as shown in the screen below. Push notifications are normally sent in a back-end service like Mobile Services or ASP.NET using a compatible library. You can also use the REST API directly to send notification messages if a library is not available for your back-end. notifications from an Azure Mobile Services backend that's integrated with Notification Hubs, see "Get started with push notifications in Mobile Services" (.NET backend | JavaScript backend). Java / PHP: For an example of how to send notifications by using the REST APIs, see "How to use Notification Hubs from Java/PHP" (Java | PHP). (Optional) Send notifications from a console app To send notifications by using a .NET console application follow these steps. Right-click the solution, select Add and New Project..., and then under Visual C#, click Windows and Console Application, and click OK. This adds a new Visual C# console application to the solution. You can also do this in a separate solution. In Visual Studio, click Tools, click NuGet Package Manager, and then click Package Manager Console. This displays the Package Manager Console in Visual Studio. file and add the following usingstatement: using Microsoft.Azure.NotificationHubs; Azure Classic Portal on the Notification Hubs tab. Also, replace the connection string placeholder with the connection string called DefaultFullSharedAccessSignature that you obtained in the section "Configure your notification hub." tapping the toast banner loads the app. You can find all the supported payloads in the toast catalog, tile catalog, and badge overview topics on MSDN. Next steps In this simple example, you sent broadcast notifications to all your Windows devices using the portal or a console app. We recommend the Use Notification Hubs to push notifications to users tutorial as the next step. It will show you how to send notifications from an ASP.NET backend using tags to target specific users. If you want to segment your users by interest groups, see Use Notification Hubs to send breaking news. To learn more general information about Notification Hubs, see Notification Hubs Guidance.
https://azure.microsoft.com/en-us/documentation/articles/notification-hubs-windows-store-dotnet-get-started/
CC-MAIN-2016-07
refinedweb
1,132
57.16
This guide is a form of writing down few techniques that I have been using with ups and downs for the past two years. Optimizations highly depend on your goals, how users are experiencing your app, whether you care more about time to interactive or overall size. It should not come as a surprise, that like always, there is no silver bullet. Consider yourself warned. Although you have to optimize for your use cases, there is a set of common methods and rules to follow. Those rules are a great starting point to make your build lighter and faster. TL;DR - Minify with UglifyJS - Remove dead code with Tree shaking - Compress with gzip - Routing, code splitting, and lazy loading - Dynamic imports for heavy dependencies - Split across multiple entry points - CommonsChunkPlugin - ModuleConcatenationPlugin - Optimize CSS class names - NODE_ENV="production" - Babel plugins optimizations Minify with UglifyJS UglifyJS is a truly versatile toolkit for transforming JavaScript. Despite the humongous amount of the configuration options available you only need to know about few to effectively reduce bundle size. A small set of common options brings major improvement. module.exports = { devtool: "source-map", // cheap-source-map will not work with UglifyJsPlugin plugins: [ new webpack.optimize.UglifyJsPlugin({ sourceMap: true, // enable source maps to map errors (stack traces) to modules output: { comments: false, // remove all comments }, }), ] }; You can take it from there and finely tune it. Change how UglifyJS mangles functions and properties names, decide whether you want to apply certain optimizations. While you are experimenting with UglifyJS keep it mind that certain options put certain restrictions on how you use certain language feature. Make yourself familiar with them or you come across, or rather your users come across, some few hard to debug problems present only in production bundle. Obey the rules of each option and test extensively after each change. Remove dead code with Tree shaking Tree shaking is a dead code, or more accurate, not imported code elimination technique which relies on ES2015 module import/export. In the old days, like 2015, if you have imported one function from the entire library you would still have to ship a lot of unused code to your user. Well, unless library supports methods cherry-picking like lodash does, but this is a story for a different post. Webpack has introduced support for native imports and Tree shaking in version 2. This optimization was popularized much earlier in JavaScript community by rollup.js. Although webpack offers support for Tree shaking, it does not remove any unused exports on its own. Webpack just adds a comment with an annotation for UglifyJS. To see the effects of marking exports for removal disable minimization. module.exports = { devtool: "source-map", plugins: [ // disable UglifyJS to see Tree shaking annotations for UglifyJS // new webpack.optimize.UglifyJsPlugin({ // sourceMap: true, // output: { // comments: false, // }, // }), ] }; Tree shaking only works with ES2015 modules. Make sure you have disabled transforming modules to commonjs in your Babel config. Node does not support ES2015 modules and you probably use it to run your unit tests. Make sure transformation is enabled in the test environment. { "presets": [["es2015", { "modules": false }], "react"], "env": { "test": { "presets": ["es2015", "react"], } } } Let's try it out on a simple example and see whether the unused export is marked for removal. From math.js we only need fib function and doFib function which is called by fib. // math.js function doFact(n, akk = 1) { if (n === 1) return akk; return doFact(n - 1, n * akk); } export function fact(n) { return doFact(n); } function doFib(n, pprev = 0, prev = 1) { if (n === 1) return prev; return doFib(n - 1, prev, pprev + prev); } export function fib(n) { return doFib(n); } // index.js import { fib } from "./common/math"; console.log(fib(10)); Now you should be able to find such comment in the bundle code with the entire module below. /* unused harmony export fact */ Enable UglifyJS back, fact along with doFact will be removed from the bundle. That is the theory behind the Tree shaking. How effective is it in a more real-life example? To keep proportions I am going to include React and lodash into the project. I am importing 10 lodash functions I use the most often: omit, pick, toPairs, uniqBy, sortBy, memoize, curry, flow, throttle, debounce. Such bundle weights 72KB after being gzipped. That basically means that in spite of using Tree shaking, webpack bundled entire lodash. So why we bundle lodash and only given exports from math.js? Lodash is meant for both browser and node. That is why by default it is available as a commonjs module. We can use lodash-es which is ES2015 module version of lodash. Such bundle with lodash-es weights... 84KB. That is what I call a failure. It is not a big deal as this can be fixed with babel-lodash-plugin. Now it is "only" 60KB. The thing is, I would not count on Tree shaking to significantly reduce bundled libraries size. At least not out of the box. The most popular libraries did not fully embrace it yet, publishing packages as ES modules is still a rare practice. Compress with gzip Gzip is a small program and a file format used for file compression. Gzip takes advantage of the redundancy. It is so effective in compressing text files that it can reduce response size by about 70%. Our gzipped 60KB bundle was 197KB before gzip compression! Although enabling gzip for serving static files seems an obvious thing to do only about 69% of pages is actually using it. If you are using express to serve your files use compression package. There are a few available options but level is the most influential. There is also a filter allowing you to pass a predicate which indicates whether a file should be compressed. Default filter function takes into account a size and a file type. const express = require("express"); const compression = require("compression"); const app = express(); function shouldCompress(req, res) { if (req.headers["x-no-compression"]) return false; return compression.filter(req, res); } app.use(express.static("build")); app.use(compression({ level: 2, // set compression level from 1 to 9 (6 by default) filter: shouldCompress, // set predicate to determine whether to compress })); When I do not need additional logic which is possible with Express (like Server Side Rendering) I prefer to use nginx for serving static files. You can read more about configuration options in nginx gzip docs. gzip on; gzip_static on; gzip_disable "msie6"; gzip_vary on; gzip_proxied any; gzip_comp_level 6; gzip_buffers 16 8k; gzip_http_version 1.1; gzip_min_length 256; gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript application/vnd.ms-fontobject application/x-font-ttf font/opentype image/svg+xml image/x-icon; Ok, but what level of compression should you choose? It depends on the size of your files and your server CPU. Just as a reminder, bundle I was working on before is 197KB JavaScript file. I did some stress tests with wrk using 120 connections, 8 threads, during 20 seconds on MacBook Pro 15" mid-2015: wrk -c 120 -d 20s -t 8 I am focusing on border values and a default one I use most of the time. I do not have any brilliant conclusion here. It is what I could expect and using default level 6 makes the most sense to me in this setup. Static files have this advantage that they can be compressed during the build time. const CompressionPlugin = require("compression-webpack-plugin"); module.exports = { plugins: [ new CompressionPlugin(), ] }; File compressed this way is 57.7KB and does not take processor time thanks to gzip_static on; set. Nginx will just serve /app.8f3854b71e9c3de39f8d.js.gz when /app.8f3854b71e9c3de39f8d.js is requested. The other takeaway about gzip is to never enable gzip for images, videos, PDFs, and other binary images as those are already compressed by their nature. Routing, code splitting, and lazy loading Code splitting allows for dividing your code into smaller chunks in such a way that each chunk can be loaded on demand, in parallel, or conditionally. Easy code splitting is one of the biggest advantages of using webpack which gained webpack great popularity. Code splitting made webpack the module bundler. Over two years ago, when I was moving from gulp to webpack, I was using angularjs for developing web apps. Implementing lazy loading in angular required few hacks here and there. With React and its declarative nature, it is much easier and much more elegant. Thre are few ways you can approach code splitting and we will go through most of them. Right now, let's focus on splitting our application by routes. The first thing we need is a react-router v4 and few routes definition. // routes.jsx import React from "react"; import { Route, Switch } from "react-router-dom"; import App from "./common/App/App"; import Home from "./routes/Home/Home"; import About from "./routes/About/About"; import Login from "./routes/Login/Login"; const Routes = () => ( <App> <Switch> <Route exact path="/" component={Home} /> <Route exact path="/about" component={About} /> <Route exact path="/login" component={Login} /> </Switch> </App> ); export default Routes; // Home.jsx import React from "react"; import { pick, toPairs, uniqBy } from "lodash-es"; const Home = () => { ... } // About.jsx import React from "react"; import { sortBy, memoize, curry } from "lodash-es"; const About = () => { ... } // Login.jsx import React from "react"; import { flow, throttle, debounce } from "lodash-es"; const Login = () => { ... } I have also split my 9 favorite lodash utility functions "equally" across the routes. Now with routes and react-router, application size is 69KB gzipped. The goal is to make loading each route faster by excluding other pages code. You can check code execution coverage in Chrome DevTools. Approximately, over 47% of the code bundled is not used when entering given route. React Router v4 is a full rewrite of the most popular router library for React. There is no simple migration path. The upside is that the new version is more modular and declarative. The downside is that you need few additional packages to match the functionality of the previous version like query-string or qs for parsing query params and react-loadable for components lazy loading. To defer loading a page component until it is really needed we can use react-loadable. Loadable HOC expects function which will lazy load and return a component. I am not keen on the idea of adding this code to each route. Imagine next version is a breaking change and you have to go though every route to change the code. I am going to create a LoadableRoute component and use in my routes definition. // routes.jsx import React from "react"; import { Route, Switch } from "react-router-dom"; import Loadable from "react-loadable"; import App from "./common/App/App"; const LazyRoute = (props) => { const component = Loadable({ loader: props.component, loading: () => <div>Loading…</div>, }); return <Route {...props} component={component} />; }; const Routes = () => ( <App> <Switch> <LazyRoute exact path="/" component={() => import("./routes/Home/Home")} /> <LazyRoute exact path="/about" component={() => import("./routes/About/About")} /> <LazyRoute exact path="/login" component={() => import("./routes/Login/Login")} /> </Switch> </App> ); export default Routes; After implementing dynamic import for routes components, loading any given route takes 65KB instead or 69KB. You can say it is not much but keep in mind that we have just installed react-loadable. Down the road, this improvement pays off. Before webpack 2, when imports were not natively supported, to code split and load code dynamically you used require.ensure. The disadvantage of using webpack 2 imports is it doesn not allow you to name your chunks. It is not really webpack 2 fault, it is just not the part of import proposal. Instead of the app.[hash].js, home.[hash].js, about.[hash].js and login.[hash].js bundle contains the app.[hash].js, 0.[hash].js, 1.[hash].js, 2.[hash].js. This is not very helpful, especially when you trying to tackle regression issues. For example, after some change you heve noticed that bundle has grown in size. Adding new dynamic import can change names of unrelated modules. Fortunately, webpack 3 already addressed that issue with so called "magic comments": <LazyRoute exact path="/" component={() => import(/* webpackChunkName: "home" */ "./routes/Home/Home")} /> <LazyRoute exact path="/about" component={() => import(/* webpackChunkName: "about" */ "./routes/About/About")} /> <LazyRoute exact path="/login" component={() => import(/* webpackChunkName: "login" */ "./routes/Login/Login")} /> // -rw-r--r-- 1 michal staff 62K Aug 6 11:07 build/app.ca30de797934ff9484e2.js.gz // -rw-r--r-- 1 michal staff 3.2K Aug 6 11:07 build/home.a5a7f7e91944ead98904.js.gz // -rw-r--r-- 1 michal staff 6.0K Aug 6 11:07 build/about.d8137ade9345cc48795e.js.gz // -rw-r--r-- 1 michal staff 1.4K Aug 6 11:07 build/login.a68642ebb547708cf0bc.js.gz Dynamic imports for heavy dependencies There are multiple ways to benefit from dynamic imports. I have covered dynamic imports for React components already. The other way to optimize is to find a library or other module of significant size which is used only under certain conditions. An example of such dependency can be libphonenumber-js for phone number formatting and validation (70-130KB, vary depending on selected metadata) or zxcvbn for a password strength check (whopping 820KB). Imagine you have to implement login page which contains 2 forms: login and signup. You want to load neither libphonenumber-js nor zxcvbn when a user wants to only log in. This, yet naive, example shows how you can introduce better, more refined rules for on demand, dynamic code loading. We want to show password strength only when user focus the input, not sooner. class SignUpForm extends Component { state = { passwordStrength: -1, }; static LABELS = ["terrible", "bad", "weak", "good", "strong"]; componentWillReceiveProps = (newProps) => { if (this.props.values.password !== newProps.values.password) { this.setPasswordStrength(newProps.values.password); } }; showPasswordStrength = () => { if (this.state.passwordStrength === -1) { // import on demand import("zxcvbn").then((zxcvbn) => { this.zxcvbn = zxcvbn; this.setPasswordStrength(this.props.values.password); }); } }; setPasswordStrength = (password) => { if (this.zxcvbn) { this.setState({ passwordStrength: this.zxcvbn(password).score }); } }; render() { const { onSubmit, onChange, values } = this.props; const { passwordStrength } = this.state; return ( <form onSubmit={onSubmit}> <div> <label> Email:{" "} <input type="email" name="email" value={values.email} onChange={onChange} /> </label> </div> <div> <label> Password:{" "} <input type="password" name="password" value={values.password} onChange={onChange} onFocus={this.showPasswordStrength} /> {passwordStrength > -1 && <div>Password is {SignUpForm.LABELS[passwordStrength]}</div>} </label> </div> <input type="submit" /> </form> ); } } Split across multiple entry points As you probably know, a single webpack build can have multiple entry points. This feature can be used to very effectively reduce loaded code for particular parts of the application. Imagine that your app works similar to Heroku frontend. So, you have a homepage which introduces the service but most of the features, so as code, is meant for logged in users (apps management, monitoring, billing etc.). Maybe you do not even need to use React for your homepage and entire JavaScript code required comes down to displaying a lame popup. Let's write some VanillaJS! // home.js const email = prompt("Sign up to our lame newsletter!"); fetch("/api/emails", { method: "POST", body: JSON.stringify({ email }) }); We are going to use different HTML file and code most of the code there. HTMLWebpackPlugin is used for generating HTML file with injected entry points paths. We need two separate files, home.html for our new homepage, and index.html for the rest of the pages. To generate two separate files use two instances of HTMLWebpackPlugin with different config. You want to explicitly specify chunks for each file. modules.exports = { entry: { app: [path.resolve("src/index.jsx")], home: [path.resolve("src/home.js")], }, plugins: [ new HTMLWebpackPlugin({ filename: "home.html", excludeChunks: ["app"], template: path.resolve("src/home.html"), }), new HTMLWebpackPlugin({ excludeChunks: ["home"], template: path.resolve("src/index.html"), }), ] }; The last thing is to customize your server so GET / serves home.html. Add handler for GET / before express.static middleware so home.html takes precedence over index.html. app.get("/", (req, res) => { res.sendFile(path.resolve("build", "home.html")); }); app.use(express.static("build")); app.get("*", (req, res) => { res.sendFile(path.resolve("build", "index.html")); }); This way we went down from loading over 70KB of JavaScript to only 3.5KB! Using multiple entry points requires good planning and understanding business requirements. On the other hand, the implementation itself is really simple. CommonsChunkPlugin The CommonsChunkPlugin can be used to create a separate file (a chunk) consisting of modules which are used across multiple entry points and their children. The advantage of having one file for common modules it the lack of repetition, which is always a good thing. Moreover, a chunk with a hash calculated from its content in the name can be aggressively cached. Once a file is downloaded, it can be later served from the disk until it changes. There are few ways to use CommonsChunkPlugin and you can combine them for the best result. Provided configuration consists of creating two instances. The simplest is just creating a separate file, of a given name, with modules reused across entry points (app and home). Next configuration makes sure that modules reused across children (about and login) are also going to be exported to separate file and will not add size to each children chunk. With mixChunks you can set the least number of chunks reusing a module before module can be exported. module.exports = { entry: { home: [path.resolve("src/home.js")], app: [path.resolve("src/index.jsx")], }, plugins: [ new webpack.optimize.CommonsChunkPlugin({ name: "commons", }), new webpack.optimize.CommonsChunkPlugin({ children: true, async: true, minChunks: 2, // the least number of chunks reusing a module before module can be exported }), ], }; ModuleConcatenationPlugin ModuleConcatenationPlugin or actually Scope Hoisting is the main feature of webpack 3. The basic idea behind Scope Hoisting is to reduce the number of wrapper functions around the modules. Rollbar does that too. In theory, it should speed up the execution of the bundle by reducing the number of closures that have to be called. I might sound skeptic but I am really excited that webpack team is pursuing performance! The talk I linked to is just very good and sheds some light on micro-optimizations. Although some users report significant improvement in a bundle size that improvement would mean that webpack glue code is a majority of their bundled code. From my experience, it may save you few kilobytes here and there (for an average bundle ~350KB) but that is it. module.exports = { plugins: [ new webpack.optimize.ModuleConcatenationPlugin(), ], }; Similar to Tree shaking, this optimization works only with ES modules. Optimize CSS class names Whether styles splitting applies to you or not depends on how do you handle your styles. If you use CSS-in-JS kind of solution, you just ship JS code and CSS strings styling your components together with components themselves. On the other hand, if you do prefer to use css-loader and ExtractTextPlugin there is a way in which you can affect the size of the shipped CSS code. I heve recently been testing how class names influence the size of the bundle. As you may know, css-loader allows for specifying a pattern in which selector is mapped to unique ident. By default, it is 23 characters hash. I did few tests, tried few patterns in one of the projects and I was more than pleased with the results. At first glance, the less code the better so the shortest class names should give the best result. Due to nature of how compression works making idents more similar to each other results in smaller gzipped bundle. Selector names are used in both CSS file and components, size savings are doubled. If you have a lot of small components and many generic, similar class names ( wrapper, title, container, item) your results will be similar to mine. If you have a smaller number of components but a lot of CSS selectors you might be better off with [name]-[hash:5]. To use ExtractTextPlugin, source map, and UglifyJsPlugin, remember to enable sourceMap option. Otherwise, you can come across some not so obvious to debug issues. By not so obvious I mean some crazy error messages which does not say you anything and does not seem to be related anyway to what you have just done. Love it or hate it but sometimes that is the price of tools which are doing the heavy lifting for you. NODE_ENV="production" This one seems pretty obvious! I bet when you read the title "Optimize React build for production with webpack" you could think of at least two: NODE_ENV and UglifyJS. Although it is a pretty common knowledge, confusions happen. How does it work? It is not a react-specific optimization and you can quickly try it out. Create entry point with following content: if (process.env.NODE_ENV !== "production") { alert("Hello!"); } Create a development build. That is should be the content: webpackJsonp([1],{10:function(n,o,e){n.exports=e(11)},11:function(n,o,e){alert("Hello!")}},[10]); //# sourceMappingURL=app.fc8e58739d91fe5afee6.js.map As you can see there is even no if statement but let's move along. Make sure that NODE_ENV is set to "production". You can do it with: module.exports = { plugins: [ // make sure that NODE_ENV="production" during the build new webpack.EnvironmentPlugin(["NODE_ENV"]), ], }; // or module.exports = { plugins: [ new webpack.DefinePlugin({ "process.env": { NODE_ENV: JSON.stringify("production"), }, }), ], }; Now that is the build: webpackJsonp([1],{10:function(n,o,c){n.exports=c(11)},11:function(n,o,c){}},[10]); //# sourceMappingURL=app.3065b2840be2e08955ce.js.map Here is how we got back to UglifyJS and dead code elimination. During the build process.env.NODE_ENV is replaced with a string value. UglifyJS seeing if ("development" !== "production") skips if statement as it is always true. Opposite happens when minimizer comes across if ("production" !== "production") This condition is always false and UglifyJS drops the dead code. It works the same with prop types! Babel plugins optimizations Babel opens new opportunities not only for writing cutting edge JavaScript for production but also for a wide variety of optimizations. I heve mentioned babel-lodash-plugin already, there is a lot more to explore. You know that React does not check props in production. You can also notice that despite this optimization, prop types are still present in components code. It is a dead code, I know that, you know that, but uglify does not know that. You can use babel-plugin-transform-react-remove-prop-types to get rid of those calls. { "presets": [["es2015", { "modules": false }], "stage-2", "react"], "env": { "production": { "plugins": ["transform-react-remove-prop-types"] } } } Wrap up I have went though few possible optimizations but did not mention about one of the most important thing in the entire process. Whatever optimization you are applying make sure to extensively test the outcome. There are few good tools which can help you to do that e.g. BundleAnalyzerPlugin. If you prefer to do it the old way: tar -czf build.tar.gz build && du -k build.tar.gz Minimizing data users download is only the first step to improve performance. Stay tuned and if you like the article follow me on twitter to learn more performance optimization techniques! Photo by Goh Rhy Yan on Unsplash.
https://michalzalecki.com/optimize-react-build-for-production-with-webpack/
CC-MAIN-2019-26
refinedweb
3,852
58.28
Nothing, about this problem I had to swap the lines 32 and 33 in direct.php: if($output) $this->extdapi->output(); $this->session->set_userdata(array('ext-direct-state' => $this->extdapi->getState())); to: $this->session->set_userdata(array('ext-direct-state' => $this->extdapi->getState())); if($output) $this->extdapi->output(); Because if output is TRUE (default) then there is already a header send: r. 233 extdapi.php header('Content-Type: text/javascript'); (I use the DB to store my sessions and it seems when setting userdata the header gets changed) @cherbert: did you solve your problem? I have the same now... ;-( OK, it took me a while but the problem described by cherbert is easy to fix. When the api is called (on the page) the state is made and set (session). But if some ajax magic happens the api is not called (which is good) and the set state by the api is not used cousin errors... So get the set state... The part that there is no set state then run api first was already there... controllers/Direct.php PHP Code: public function router() { $state = $this->session->userdata('ext-direct-state'); if ( ! $state) $this->api(FALSE); else $this->ext_direct_api->setState($state); $this->load->library('ext_direct_router', array('api' => $this->ext_direct_api)); $this->ext_direct_router->dispatch(); $this->ext_direct_router->getResponse(TRUE); } I found a problem and have a fix (which took me again a while). If you want to use @formHandler you will keep getting errors... libraries/extdapi.php r.55 YES YES YES now all three ext direct examples work!!!YES YES YES now all three ext direct examples work!!!PHP Code: $this->setFormAttribute($state['nameAttribute']); // must be $this->setNameAttribute($state['nameAttribute']); change: <script type="text/javascript" src="php/api.php"></script> to: <script type="text/javascript" src="/direct/api"></script> (this is the controller file at: controllers/Direct.php) And transform the two files Profile and TestAction in classes the way Echo, File, Time etc. is done. I am going to look at both your suggestions and look for any good deals on these. Thanks again for the help. If anyone wants to use this under CodeIgniter 2.0, all you have to do is the following: ChangeIn controller/direct.php: Into:Into:Code:class Direct extends Controller { function Direct() { parent::Controller(); $this->load->library('extdapi'); $this->load->library('extdcacheprovider', array('filePath' => 'cache/api_cache.txt')); } Code:class Direct extends CI_Controller { function __construct() { parent::__construct(); $this->load->library('ext.direct/extdapi'); $this->load->library('ext.direct/extdcacheprovider', array('filePath' => 'cache/api_cache.txt')); } Last edited by ReLexEd; 29 Apr 2011 at 4:03 AM. Reason: Code formatting Is this or any other library been updated? I wanted to use CI + Ext.Direct and having trouble with CI 2.0.2... Is anyone using this? Mind to share latest version? I even tried latest download from Mike's link...problems arise... (namely I got stuck trying to get ext.designer to validate the api url - i assume it is because it was returning JS instead of pure JSON) I'm trying Mike's library, but I've got stuck and wonder whether anyone can help me. I see actions in provider, but when trying to use them I can't , because actions namespaces are not included in global namespace. Any suggestion would be greatly appreciated.
https://www.sencha.com/forum/showthread.php?79211-Ext.Direct-for-CodeIgniter/page4
CC-MAIN-2017-51
refinedweb
548
58.89
Hey, The file covered in this article, /proc/<pid>/stack,. This is the fifth article in a series of 30 articles around procfs: A Month of /proc. If you'd like to keep up to date with it, make sure you join the mailing list! -B\n", (void *)entries[i]); } unlock_trace(task); } kfree(entries); return err; } Looking at the file where the function is defined, we can git blame that seq_printf and see who's there to blame for putting that hardcoded [<0>]. Guess what - Torvalds did the change! Author: Linus Torvalds <torvalds@linux-foundation.org> Date: Mon Nov 27 16:45:56 2017 -0800 proc: don't report kernel addresses in /proc/<pid>/stack This just changes the file to report them as zero, although maybe even that could be removed. I checked, and at least procps doesn't actually seem to parse the 'stack' file at all. And since the file doesn't necessarily even exist (it requires CONFIG_STACKTRACE), possibly other tools don't really use it either. That said, in case somebody parses it with tools, just having that zero there should keep such tools happy. Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> diff --git a/fs/proc/base.c b/fs/proc/base.c index 31934cb9dfc8..28fa85276eec 100644 --- a/fs/proc/base.c +++ b/fs/proc/base.c @@ -443,8 +443,7 @@ static int proc_pid_stack(struct seq_file *m, struct pid_namespace *ns, save_stack_trace_tsk(task, &trace); for (i = 0; i < trace.nr_entries; i++) { - seq_printf(m, "[<%pK>] %pB\n", - (void *)entries[i], (void *)entries[i]); + seq_printf(m, "[<0>] %pB\n", (void *)entries[i]); } unlock_trace(task); } Not being a Kernel expert (at all!), I tried to understand what that %pB and %pK are all about - I've never used such kind of formatting with printf after all. Looking at the docs for printk format specifiers, we can see what that very specialized formatting is all about: The Bspecifier results in the symbol name with offsets and should be used when printing stack backtraces. [The Kspecifier is used …] For printing kernel pointers which should be hidden from unprivileged users. Meaning that yeah, previously you could retrieve the kernel addresses, but not anymore, for the reasons presented by Linus. When the stack does not help much While it's very clear why knowing the in-kernel stack trace in the example above was useful, it's not all that much when it comes to servers that make use of async io (like most of the modern web servers do). Here's how a code that is very similar to the TCP acceptor we wrote above in C looks like in Go: package main import ( "net" ) func main () { // Create the necessary underlying data // structures for listening on port 1337 // on all interfaces. listener, err := net.Listen("tcp", ":1337") if err != nil { panic(err) } // Release all the resources when leaving defer listener.Close() for { // Accept a connection from the // backlog of connections that // finalized the 3-way handshake conn, err := listener.Accept() if err != nil { panic(err) } // Close them right away conn.Close() } } Although in the code above we spawn no goroutines other than the main one, under the hood, the Go runtime ends up setting a single event pool file that allows us to monitor multiple file descriptors and not block on them individually. By the way, Julia Evans has a great blog post on async IO - make sure you check it out! It's called Async IO on Linux: select, poll, and epoll. We can notice that by looking at which syscall the Kernel is blocked at when our process runs: # Display the stack of every thread that # pertains to the `go_accept` command that # is running. find /proc/$(pidof go_accept)/task -name "stack" | \ xargs -I{} /bin/sh -c 'echo {} ; cat {}' /proc/17019/task/17019/stack [<0>] ep_poll+0x29c/0x3a0 [<0>] SyS_epoll_pwait+0x19e/0x220 [<0>] do_syscall_64+0x73/0x130 [<0>] entry_SYSCALL_64_after_hwframe+0x3d/0xa2 [<0>] 0xffffffffffffffff /proc/17019/task/17020/stack [<0>] futex_wait_queue_me+0xc4/0x120 [<0>] futex_wait+0x10a/0x250 [<0>] do_futex+0x325/0x500 [<0>] SyS_futex+0x13b/0x180 [<0>] do_syscall_64+0x73/0x130 [<0>] entry_SYSCALL_64_after_hwframe+0x3d/0xa2 [<0>] 0xffffffffffffffff /proc/17019/task/17021/stack [<0>] futex_wait_queue_me+0xc4/0x120 ... /proc/17019/task/17022/stack [<0>] futex_wait_queue_me+0xc4/0x120 ... /proc/17019/task/17023/stack [<0>] futex_wait_queue_me+0xc4/0x120 ... /proc/17019/task/17024/stack [<0>] futex_wait_queue_me+0xc4/0x120 ... Notice that differently from the C code, here I'm looking at the stack of each task under the task group identified by the PID of the go_accept command. Given that Go will run more than one thread when it starts (so that we can schedule goroutines to run across the poll of actual threads), we can take a look at the stack across all of the threads and see their stack (in the end, each thread is a task, so we can take the stack of each of them). If we use dlv, we can see why it's the case that we have those 5 threads just waiting with a futex_wait, while there's another one blocked on ep_poll (the actual block on async IO): # Attach to the currently running Go # process with delve so that we can # check from the userspace perspective # what is going on with the Go runtime dlv attach $(pidof go_accept) # Check the thread pool (dlv) threads * Thread 17019 at .../sys_linux_amd64.s:671 runtime.epollwait Thread 17020 at .../sys_linux_amd64.s:532 runtime.futex Thread 17021 at .../sys_linux_amd64.s:532 runtime.futex Thread 17022 at .../sys_linux_amd64.s:532 runtime.futex Thread 17023 at .../sys_linux_amd64.s:532 runtime.futex Thread 17024 at .../sys_linux_amd64.s:532 runtime.futex # Check the pool of goroutines (dlv) goroutines [4 goroutines] Goroutine 1 - ...netpoll.go:173 internal/poll.runtime_pollWait (0x427146) Goroutine 2 - ...proc.go:303 runtime.gopark (0x42c74b) Goroutine 3 - ...proc.go:303 runtime.gopark (0x42c74b) Goroutine 4 - ...proc.go:303 runtime.gopark (0x42c74b) # Switch to goroutine 1 (dlv) goroutine # See the userspace stack that got us # to being parked at `epoll wait` (dlv) stack 0 0x000000000042c74b in runtime.gopark at /usr/local/go/src/runtime/proc.go:303 1 0x0000000000427a99 in runtime.netpollblock at /usr/local/go/src/runtime/netpoll.go:366 2 0x0000000000427146 in internal/poll.runtime_pollWait at /usr/local/go/src/runtime/netpoll.go:173 3 0x000000000048e81a in internal/poll.(*pollDesc).wait at /usr/local/go/src/internal/poll/fd_poll_runtime.go:85 4 0x000000000048e92d in internal/poll.(*pollDesc).waitRead at /usr/local/go/src/internal/poll/fd_poll_runtime.go:90 5 0x000000000048fc20 in internal/poll.(*FD).Accept at /usr/local/go/src/internal/poll/fd_unix.go:384 6 0x00000000004b6572 in net.(*netFD).accept at /usr/local/go/src/net/fd_unix.go:238 7 0x00000000004c972e in net.(*TCPListener).accept at /usr/local/go/src/net/tcpsock_posix.go:139 8 0x00000000004c86c7 in net.(*TCPListener).Accept at /usr/local/go/src/net/tcpsock.go:260 9 0x00000000004d55f4 in main.main at /tmp/tcp/main.go:16 10 0x000000000042c367 in runtime.main at /usr/local/go/src/runtime/proc.go:201 11 0x0000000000456391 in runtime.goexit at /usr/local/go/src/runtime/asm_amd64.s:1333 Having now both the userspace and kernelspace stacks, we can properly identify what's going on with the Go application. Closing thoughts The conclusion? IMO, /proc/<pid>/stack (or the equivalent /proc/<pid>/task/<task_id>/stack) is great, but it only takes us so far. In the end, we need a mix of userspace and kernel space tools that can help us debug the state in which a system is present. Luckily, from time to time new tools like dlv show up, pprof improves, and even more powerful tools to inspect the Kernel emerge. I hope this article was useful for you! Please let me know if you have any questions. I'm @cirowrc on Twitter, and I'd love to chat! Have a good one!
https://ops.tips/blog/using-procfs-to-get-process-stack-trace/
CC-MAIN-2020-05
refinedweb
1,298
67.04
Contents - Introduction - Background - Advancements in AngleSharp - Connecting Jint - Using the code - Points of Interest - History Introduction More than a year ago I announced the AngleSharp project, which is quite ambitious. The project tries to implement the specification provided by the W3C and the WHATWG regarding HTML/CSS. In its core we find a state-of-the-art HTML(5) parser and a capable CSS(3) parser, which contains L4 modules, such as CSS4 Selectors. Since its announcement the project has been quite a success. AngleSharp has been released in form of a portable class library (PCL). This enables the usage on .NET 4.5+, WindowsPhone 8+ and Windows Store applications. In general the tendency to use lightweight portable solutions is still rising, which explains the ongoing demand and questions regarding the project. Even though AngleSharp started as a tool for parsing HTML, it is much more. AngleSharp provides several points for extensions and can not only be used to parse HTML in order to create a DOM as specified by the W3C, it knows about scripting and styling. AngleSharp itself does not contain a scripting engine out-of-the-box, but it provides a default styling engine that knows CSS. Part of the reason is the strong connection between HTML and CSS. As an example the querySelector() and querySelectorAll() API methods use a string as argument, which is interpreted by the CSS engine. This article will discuss some of the current extension points. We will also have a look at upcoming extension points - just to illustrate the roadmap and vision for AngleSharp. Finally the whole extension point topic will be motivated by showing how an existing JavaScript engine can be connected to AngleSharp. We will see that the whole process is more or less straight forward, does not require much code and results in interesting applications. Background JavaScript is not the only language that can be used for writing web applications. In fact, Microsoft provided VBScript as an alternative from the beginning. Other browser vendors also provide or provided alternatives. Nevertheless, JavaScript is the preferred choice for making the web dynamic on the client's computer. There are several reasons for this dominance. The most important reason is consistency. Every browser provides a JavaScript implementation. Even though there are some differences on the surface (API) and implementations, all implementations are at least ECMAScript 3 compatible. Here we have our common code basis that can be used to power browsers, which may run on variety of devices starting with computers over smartphones, tablets and TV sets. It is this dominance that makes JavaScript interesting once a developer enters the web space. One might prefer static typing, but sometimes casting adds some noise that is just unwanted. One might prefer traditional classes and scopes, but the scripting nature is sometimes more straight forward and to the point. Any application that could be written in JavaScript, will be written in JavaScript. Therefore just having an HTML parser in C# might be the basis of a great tool for some tasks, but it always excludes a huge fraction of code that is only available in JavaScript form. It is that fraction that we do not want to exclude. If there is already a great script that manipulates a webpage the way we want, why should we have to rewrite it? AngleSharp has an extension point to include scripting engines. This may be used to register custom scripting languages (experimentally, or to have a great alternative), or to include official ones (such as VBScript or an ECMAScript implementation). Lucky for us, there are already some great scripting languages written in .NET: - ScriptCS (and others), to make C# itself scriptable - IPython (and other Python implementations) - NeoLua (and other Lua implementations) - Jurassic (JavaScript compiled to MSIL) - JavaScript.NET (based on V8, a wrapper) - Jint (JavaScript interpreter, presented as a PCL) Of course there are many other languages. In this article we will use Jint to interpret ECMAScript (5) compatible code and apply DOM manipulations. Why Jint? The performance is certainly not as great as Jurassic (compiled) or JavaScript.NET (V8 is unbeatable, but the wrapper gives us a little penalty), however, it is strictly following the spec, available via NuGet (unlike JavaScript.NET) and PCL compatible. This does not exclude using the solution we discuss in this article on platforms such as WindowsPhone or Windows Store platforms. Advancements in AngleSharp The initial AngleSharp article discussed the principle behind DOM generation. Mainly the tokenization and parsing process have been viewed. There have not been any major changes besides some refinement and the addition of the HTML5 HTMLTemplateElement interface. The latter has been included in the parser as specified in the HTML 5.1 specification. The main changes have been committed in the API and the CSS engine. While the API has been changed in such a way, that is flexible for users and following .NET coding standards, the CSS engine has been rewritten from scratch. It is now more flexible, easier to extend and faster. We will first discuss the changes in the parsers, before we provide information on the configuration model implement in AngleSharp. Finally we will go into details of the currently provided DOM API and the available extension points. HTML and CSS status As already explained there are barely any changes to the HTML parser part. Nevertheless, some commits have been pushed and therefore we can expect some changes. Mostly further HTML DOM elements have been created, the API has been implemented and some performance improvements are available. A quite important change, however, has been made to the API that is exposed. Instead of dealing with the implementation of the DOM elements, a standard user deals with the interface definition alone. This is like the W3C proposed and makes changes to the underlying implementation a lot easier. Here we have some kind of contract that is not very likely to break. Also the concrete implementation details are hidden. One thing to note is, that such an encapsulation is very important, since it allows refactoring of the implementation, without breaking user codes. It also allows to expose an API that is implement in a completely different way. The following image showes a snippet of how the current HTML DOM tree looks like. The current version of AngleSharp implements the latest DOM (4). Therefore the EventTarget interface is on top. Additionally the Attr interface is not a Node. Finally the picture uses the prefixed notation as available in AngleSharp due to .NET convention. One example of what is possible with AngleSharp is submitting HTML forms. Everyone who tried to wrap an existing website using C# knows the problem. There is no API and one needs to submit a form for receiving data. Leaving alone the data extraction problem, we need to inspect the form with all fields (text, hidden, ...), probably filling out some fields on the way. Most of the time one could not make a process as described work sufficiently good to use it in production. This is now over! The HTMLFormElement contains the Submit() method, which will send the form's data to the specified action url. The method uses the current method (POST, GET, ...) and creates a request as specified officially. This makes the following scenario work: Get the contents of a login page, perform the login by submitting a form using user name and password, extract the data of a specific subpage and log out. A really important piece for the described process is cookies. The session of the user must be saved in form of a (session) cookie. Additionally we require a unit to receive and send HTTP requests. Fortunately all three parts, - Form submission - Sending / receiving cookies - Sending requests are included in AngleSharp. Even though the last two points are only provided by default implementations and may be upgraded with better ones by any user. A form submission can be as simple as follows (taken from the unit tests): var url = ""; var config = new Configuration { AllowRequests = true }; var html = DocumentBuilder.Html(new Uri(url), config); var form = html.Forms[0] as IHtmlFormElement; var name = form.Elements["Name"] as IHtmlInputElement; var number = form.Elements["Number"] as IHtmlInputElement; var active = form.Elements["IsActive"] as IHtmlInputElement; name.Value = "Test"; number.Value = "1"; active.IsChecked = true; form.Submit(); In the recent versions the DOM model of AngleSharp has been updated to DOM4. Only a few methods and properties marked as obsolete are still being included. The only remaining ones are API artefacts that are heavily used by current websites. Also tree traversal has been updated by including helpers such as the NodeIterator, the TreeWalker and the Range class. These data structures make it possible to iterate through the DOM tree by filtering important nodes. As usual the Document instance is used to create instances of such data structures. The idea is that the constructor with all the required data can be called from the Document without requiring the user of the API to know the exact signature of the specific class constructor. That also indicates the strong relation between the Document and e.g. the TreeWalker. The conclude the example the specific method is called CreateTreeWalker(). AngleSharp also contains methods that follow the official tree language. While the definitions for a parent or a child of an element are trivial, the specializations of ancestors and decendants are not. An inclusive ancestor is any ancestor of an element including the element. An inclusive descendant therefore is also the union of the element with the set of descendents. The root of an element is the top parent, i.e. the parent that does not have a parent. Not only vertically there are such definitions, but also horizontally. Here we are interested in the width of the tree. From the perspective of a single element we will therefore have a look at its siblings. A sibling is every child of the parent of the current element, excluding the current element. We may differentiate between preceding and following siblings. While preceding siblings are found before the current element in tree order, following siblings come after the current element. In short: The index of preceding siblings is smaller than the index of the current element. Following siblings have an index that is greater than the index of the current element. The index is the position in the array of children of the parent. There are two special kinds of siblings: The previous (element) is the sibling with an index that is the nearest smaller index compared to the index of the current element. Similarly we have the next (element), which is the sibling with the closest higher index. For an index i of the current element we have i-1 for the previous element and i+1 for the next element. There may not be a previous or next element. Finally a remark to the Range structure. The following diagram (taken from the official specification available at the W3C homepage) illustrates a few uses: In the diagram we see four different ranges. the start of the range is called s#. The end is denoted by e#, where # is the number of the Range instance. For range 2, the start is in the body element. It is placed immediately after the h1 element and immediately before the paragraph tag. Therefore its position is between the h1 and p children of the document's body. The offset of a boundary-point whose container is not a text node is - 0 if it is before the first child, - 1 if between the first and second child, and - n if between the n-th and the (n+1)-th child. So, for the start of range 2, the container is body and the offset is 1. The offset of a boundary-point whose container is a text node is obtained similarly. Here we use the number of (16-bit) characters for position information. For example, the boundary-point labelled s1 of range 1 has a Text node (the one containing "Title") as its container and an offset of 2 since it is between the second and third character. We should note that the boundary-points of ranges 3 and 4 correspond to the same location in the text representation. An important feature of the Range class is that a boundary-point of a range can unambiguously represent every position within the document tree. All iterators are live collections, i.e. they will change their representation according to changes in the DOM tree. The reason for this is quite simple: They do not have a fixed content, but retrieve their content upon request from the underlying DOM specified by the Document. The relation to the Document is therefore the minimum requirement for any structure that is live. So what about CSS? The CSS API is now also nearly done. The last CSS version is CSS3, which builds directly upon CSS 2.1. However, CSS3 is also prepared for so-called modules. Everything is now in its own module. AngleSharp implements selectors module 4. There are other modules which are also implemented in their current state, an older state, or not at all. In general there are too many modules for implementing them in the core, or at all. Even current browsers do not implement all modules. Some of them are too special, some of them not in a production ready state and some of them are probably already outdated and have never been used widely. What is most important is the declaration tree. When dealing with CSS we will eventually deal with a stylesheet that consists of CSS rules. These rules may nest. For instance a CSS media rule can nest other rules such as other media rules, normal styling rules or other document specific rules. A styling rule on the other side is probably the most important currency in CSS. It cannot nest other rules and consists of an element selector and a set of declarations. Accessing this set of declarations is possible via the ICssStyleDeclaration interface. Each declaration is represented by a ICssProperty interface. A property is always split in multiple parts. We have the name of a property and its value. Historically there has been a way of accessing such values via the ICssValue interface. However, AngleSharp will most probably drop the support for this interface. It is very limited, not actively implemented and it will be removed by the W3C soon. Therefore the tree, as shown in form of a snippet above, will change in further versions. In JavaScript (and probably also in C#) a regular user will set the value via a string. In C# there will also be the possibility of directly setting value, which happens to be the real deal. There won't be any clumsy abstraction that is very limited and just to unhandy to use in practice. Configuration Usually the process of parsing an HTML document involves construction of an instance of the HtmlParser class. However, the process may be confusing and too indirect for some users. Therefore a shorter way exists. We have the DocumentBuilder to shorten most parsing processes. The DocumentBuilder provides some static methods to directly create IDocument or ICssStyleSheet documents from different sources ( string, Stream or Uri instances). The problem is the variety of configuration that may be required for any process. For instance what kind (if any) requester (and for what kind of protocols, such as http or https) should be used? Are we providing a scripting engine or is scripting enabled at all? There are a lot of configuration options that could be set. Currently the precedence is as follows: - A configuration has been supplied - take it. - A custom configuration has been has been set as default - take it. - Consider the default configuration. A configuration is an object that implements IConfiguration. Depending on the specific needs one may create a class to implement IConfiguration from scratch, or based on Configuration, which contains a default implementation of the IConfiguration interface. Most of the times it will be sufficient just to instantiate the Configuration class and modify the content, without changing the implementation. The default configuration can be set by calling the SetDefault() method of the Configuration class. By default an instance of the same class is the default configuration. This instance is practically immutable, since we cannot get the instance externally. Let's have a look at what IConfiguration provides. public interface IConfiguration { Boolean IsScripting { get; set; } Boolean IsStyling { get; set; } Boolean IsEmbedded { get; set; } CultureInfo Culture { get; set; } IEnumerable<IScriptEngine> ScriptEngines { get; } IEnumerable<IStyleEngine> StyleEngines { get; } IEnumerable<IService> Services { get; } IEnumerable<IRequester> Requesters { get; } void ReportError(ParseErrorEventArgs e); } We see that the configuration interface setups the context as we need it. We can specify if styling and scripting should be treated as active. This influences, e.g., how <noscript> tags will be treated. Additionally settings like the language, drawn from the Culture property, will be considered when picking the right encoding for the document. Most notably we have two properties Services and Requesters that provide most of the magic behind extensions. While Services contain arbitrary services that are used for very special purposes, such as retrieving and storing cookies, Requesters contain all registeres stream requesters. There may be zero (just interested in parsing a single, provided document) or more requesters. The first requester that supports the protocol (scheme) of the current request is then considered. Of course a major use case may be to retrieve a document (with sub-documents) from a network source that supports the HTTP protocol. Therefore a default requester (for the http and https protocol) has been supplied. Activing the default requester is quite easy. var config = new Configuration().WithDefaultRequester(); The method WithDefaultRequester() is implemented as an extension method that requires an instance of Configuration or specialized. It returns the provided instance to support chaining. We are allowed to write expressions such as: var config = new Configuration() .WithDefaultRequester() .WithCss() .WithScripting(); The former expression creates a new AngleSharp configuration instance with the default http / https requester, the CSS styling engine registered and being active, and scripting being active. Note: No scripting engine has been added in the line. Even though we active scripting, we did not supply a (JavaScript) scripting engine and will therefore not be able to run any scripts. Nevertheless, the <noscript> tag will be treated as containing a block of raw text, i.e. as being ignored in the DOM generation. The DOM API One of the critics about the AngleSharp project is the usage of the official W3C DOM API. In my opinion, however, it is much better to start with the original API and add sugar on top of it. The reasons are many fold: - It is much easier to connect script engines to it, which allow running scripts that already access the original API. - A lot of documentation for the official API exists. - Standards drive the web, we should first be fully standard conform before adding our own thing. - Extensions (see jQuery) can be created on top of it, which provide a much better API. Right now the basic (official) API is nearly finished. The implementation of some methods is still behind, but will be complete in the next couple of months. Most improvements and shortcuts will be implemented by using extension methods. This will not collide with the existing API, make the right indication and also ensure that everything is build on the official API (which should be the common base - just to ensure validity). The API shifted to DOM L4. Only a few artefacts from former DOM specifications are still included. Most of those artefacts come from heavy usage on various webpages. If an API method or property still seems to be in heavy use, but was excluded in DOM L4, we still included it in AngleSharp (most likely). Extension points AngleSharp lives from its ability to be extended. Extensions are only useful, because AngleSharp knows how to integrate them. They are called in certain scenarios. Let's consider the case we want to investigate in this article. Here we are interested in integrating a scripting engine. What has to be done? On finding a certain kind of <script> tag we want AngleSharp to look for a script engine that matches the given scripting language. The scripting language is usually set in form of an attribute. If there is no attribute for the scripting language, JavaScript is taken automatically. In this case we have "text/javascript". If an engine with the given type is registered, we need to run a method for evaluating the script. This is where it becomes tricky. A script might either be delivered inline (just a string), or in form of an external resource. If the latter is true, then AngleSharp will automatically perform the request (if possible). In this scenario the source is presented in form of a response, which contains the content as a stream, the header and additional information about the response, such as the URL. The following defines a script engine. public interface IScriptEngine { String Type { get; } void Evaluate(String source, ScriptOptions options); void Evaluate(IResponse response, ScriptOptions options); } There are other examples that follow this pattern. The following snippet shows the interface for registering a style engine. Even though CSS3 is already included in AngleSharp, we might consider replacing it with our own implementation, or another engine (such as the ported one from Mozilla Firefox). We could also consider registering a styling engine for other formats, such as XML, LESS or SASS. The latter two could be preprocessed and transformed using the original CSS3 engine. public interface IStyleEngine { String Type { get; } IStyleSheet Parse(String source, StyleOptions options); IStyleSheet Parse(IResponse response, StyleOptions options); } Having these interfaces on board makes AngleSharp quite flexible. It also gives AngleSharp the ability to expose functionality that is not included out-of-the-box. Sometimes this is due to the focus of the project, sometimes this is due to platform constraints. Whatever the reasons are, being easily extensible is certainly a bonus. Before we go on we should note, that AngleSharp is still in beta, which is why the API (including the interfaces for extensions) is still in flux. For information on the latest version, please consult the official repository hosted at GitHub. Another example is the the ICookieService interface. This interface also uses the IService interface, which marks general services. Services are just AngleSharps way to declare a huge variety of extensions, which are not directly vital for any functions, but may bring in new connections, possibilities or gather information. The ICookieService has its purpose to provide a common store for cookies. Additionally information about a cookie, for instance if the cookie is http-only, has to be read and evaluated. public interface ICookieService : IService { String this[String origin] { get; set; } } We see that the service defines a getter and setter property, for a given origin string. Cookies are always considered for a particular "domain", called origin. This consists of the complete host and protocol part of the url. The cookie service is used when the cookie has to be read from a response, has been changed in the DOM or is required for sending a request. Connecting Jint Connecting a JavaScript engine to AngleSharp using the IScriptEngine interface can be either very easy or very challenging. The main issue is the connection between .NET objects and the engine's expected form. Usually we will be required to write wrappers, which follow the basic structure dictated by the specified engine. Additionally we might want to rename some of the .NET class and method (including property) names. Events, constructors and indexers also may require additional attention from our side. Nevertheless, with reflection on our side we can write such a wrapper generator without spending much time. In the following we will first glance at Jint. After having uerstndood the main purpose and inner workings of Jint we will move on to automatically wrap the incoming objects as scriptable objects. The outgoing objects have to be unwraped as well, however, this part is then trivial as we will see. Understanding Jint Jint is a .NET implementation of the EcmaScript 5 specification. It therefore contains (almost) everything that has been specified officially. However, that does not mean that everything works as one expects in a browsers. Most parts of the JS API in the browser come from the DOM / W3C. For instance the XmlHttpRequester for making requests from JavaScript. JavaScript focuses on key-value pairs, where keys are strings. The values in Jint are represented as JsValue instances. However, we can be more general and register functions for retrieving the specific value. A function is not confuse with a JavaScript function. That is a kind of value. Why would we want to use functions instead of values? This is how properties work in C#. A property is nothing but a function, that is called like a variable. We want to expose that same behavior in JavaScript, otherwise we would be required to read out every value when we touch the parent object. Reading out every value once we touch the parent object is not only a performance problem. It also implies that we would be required to update the parent object. In general this is not what we want. We want the parent to appear "live" without being forced to manually update all the (JavaScript) properties. Therefore obtaining the same behavior as in C# is indeed desired and may reduce some unintentional side-effects, that may come when exposing an API with a wrapper. The first thing we need to do when connecting a JavaScript engine is to create a class that implements the IScriptEngine interface. In the end we need to register an instance of our engine in the configuration we supply to AngleSharp. It is important to set the correct type for the new scripting engine. The type corresponds to the mime-type of the supplied script. At the moment it is unclear if the Type property will remain in the interface, or if it will be replaced by a method, that checks if a provided type matches the types covered by the scripting engine. The latter is in general harder to implement, however, more flexible. It could match several (or any, or none) types. Let's see how the implementation looks in this case: public class JavaScriptEngine : IScriptEngine { readonly Engine _engine; readonly LexicalEnvironment _variable; public JavaScriptEngine() { _engine = new Engine(); _engine.SetValue("console", new ConsoleInstance(_engine)); _variable = LexicalEnvironment.NewObjectEnvironment(_engine, _engine.Global, null, false); } public String Type { get { return "text/javascript"; } } public void Evaluate(String source, ScriptOptions options) { var context = new DomNodeInstance(_engine, options.Context ?? new AnalysisWindow(options.Document)); var env = LexicalEnvironment.NewObjectEnvironment(_engine, context, _engine.ExecutionContext.LexicalEnvironment, true); _engine.EnterExecutionContext(env, _variable, context); _engine.Execute(source); _engine.LeaveExecutionContext(); } public void Evaluate(IResponse response, ScriptOptions options) { var reader = new StreamReader(response.Content, options.Encoding ?? Encoding.UTF8, true); var content = reader.ReadToEnd(); reader.Close(); Evaluate(content, options); } } Creating the engine requires us to provide the usual stuff that can be found in all popular implementations (any browser, node, ...). An example would be the console object. In this sample we only supply the most important methods, such as log() for piping an arbitrary number of objects to the console output. What else is there to say? First of all the two Evaluate() methods do not really differ much. The one that takes a Stream as input just reads the whole Stream into a String. Normally we would prefer a Stream, however, Jint just takes a String as source. Nevertheless, what is important is that Jint supports executions contexts. This allows us to place a custom execution context just for the provided evaluation. After the script is finished, we will leave the execution context. The object to host the execution context is the Window object supplied by the scripting options. In JavaScript the context is everything. But the context is a very relative concept. In fact, while the context with its parents determines the variable resolution, it also determines the associated this pointer. The global context is the one layer that cannot be left. Whatever we do, we will always end up in the global context. Our global context in DOM manipulation is actually two-fold. On the one hand we have the JavaScript API with objects such as Math, JSON and the original type objects ( String, Number, Object and others). On the other hand we also have a DOM layer that maps the IWindow object. Therefore the naming resolution will directly resolve IWindow based properties. Nevertheless, this "JavaScript" object, which represents the IWindow ".NET" object, can also be extended. Therefore it will also act as the this, which is extended with properties if no available property is found. Other contexts can be placed on top of this. Jint allows the manual placement of an execution context, which makes wrapping objects such as the IWindow instance possible. Without it, we cannot distinguish between our global object and the global JavaScript API. Another thing that is essential (and perfectly done right by Jint) is the representation of the JavaScript prototype pattern. Every object has a prototype. The instantiation of any object results in an object with a prototype. That may sound weird at first, but this detailled picture will help: In ES5 we have two properties. One is called prototype and the other is called __proto__. If we have a classic constructor function, such as function Foo() { /* ... */ } we have obviously something from type Function. Therefore the __proto__ property maps to the Function object. Nevertheless, once we create an instance of Foo, we have an Object. Therefore, at least in the beginning, the prototype of Foo maps to Object. So the prototype of a constructor function becomes the __proto__ of an instance created by the constructor function. We can go on in this chain, but the only thing to note at this point is that Jint does everything right. Our part consists only of deriving from the respective object, such as ObjectInstance. By doing so, we set the __proto__ property of the resulting instance. That's all! Automating wrappers As we have seen, one thing we actually need to do is creating wrappers or unwrapping objects from our wrappers. In order to generate wrapper objects on the fly a simple mechanism has been developed. Nevertheless, a much harder thing than just wrapping (DOM) objects to (JS) Jint objects is wrapping functions (delegates). Here we need to apply some reflection magic. Additionally we need to consider some properties of the FunctionInstance class that is provided by Jint. The following code shows two things: - Wrapping a function instance to an arbitrary delegate - Wrapping a function instance to a DomEventHandler There is a shortcut if the specific target is not known. By calling the ToDelegate method we will always get the right choice. static class DomDelegates { public static Delegate ToDelegate(this Type type, FunctionInstance function) { if (type == typeof(EventListener)) return ToListener(function); var method = typeof(DomDelegates).GetMethod("ToCallback").MakeGenericMethod(type); return method.Invoke(null, new Object[] { function }) as Delegate; } public static DomEventHandler ToListener(this FunctionInstance function) { return (obj, ev) => { var engine = function.Engine; function.Call(obj.ToJsValue(engine), new[] { ev.ToJsValue(engine) }); }; } public static T ToCallback<T>(this FunctionInstance function) { var type = typeof(T); var methodInfo = type.GetMethod("Invoke"); var convert = typeof(Extensions).GetMethod("ToJsValue"); var mps = methodInfo.GetParameters(); var parameters = new ParameterExpression[mps.Length]; for (var i = 0; i < mps.Length; i++) parameters[i] = Expression.Parameter(mps[i].ParameterType, mps[i].Name); var obj = Expression.Constant(function); var engine = Expression.Property(obj, "Engine"); var call = Expression.Call(obj, "Call", new Type[0], new Expression[] { Expression.Call(convert, parameters[0], engine), Expression.NewArrayInit(typeof(JsValue), parameters.Skip(1).Select(m => Expression.Call(convert, m, engine)).ToArray()) }); return Expression.Lambda<T>(call, parameters).Compile(); } } The first two parts are quite obvious, but the third method is where real work has to be applied. Here we are basically constructing a delegate of type T, which wraps the passed FunctionInstance instance. The problem that is solved by the method is wrapping the parameter by passing on the required existing parameters. Nevertheless, the problem with this approach comes from JavaScripts dynamic nature. We have to live with the possibility of leaving out parameters. The question is: Are the parameters that have been left out optional? What if we find too many arguments? Should be drop the additional ones, or is a params parameter the reason for this? If the latter is the case, we actually need to construct an array. There are more questions, but the most important ones are addressed by the implementation shown above. Finally, we compile an Expression instance, which is quite a heavy process, but unfortunately required for the shown process. One could (and should) buffer the result to speed up further usages, but this is not required for our little example. So let's have a look at our standard (DOM) object wrapper. The following class is basically all that's required for connecting AngleSharp to Jint. Of course there is more than meets the eye, but most of the other classes and methods deal with special cases like the one above, where we wrapped delegates. Similar cases involve indexers, events and more. sealed class DomNodeInstance : ObjectInstance { readonly Object _value; public DomNodeInstance(Engine engine, Object value) : base(engine) { _value = value; SetMembers(value.GetType()); } void SetMembers(Type type) { if (type.GetCustomAttribute<DomNameAttribute>() == null) { foreach (var contract in type.GetInterfaces()) SetMembers(contract); } else { SetProperties(type.GetProperties()); SetMethods(type.GetMethods()); } } void SetProperties(PropertyInfo[] properties) { foreach (var property in properties) { var names = property.GetCustomAttributes<DomNameAttribute>(); foreach (var name in names.Select(m => m.OfficialName)) { FastSetProperty(name, new PropertyDescriptor( new DomFunctionInstance(this, property.GetMethod), new DomFunctionInstance(this, property.SetMethod), false, false)); } } } void SetMethods(MethodInfo[] methods) { foreach (var method in methods) { var names = method.GetCustomAttributes<DomNameAttribute>(); foreach (var name in names.Select(m => m.OfficialName)) FastAddProperty(name, new DomFunctionInstance(this, method), false, false, false); } } public Object Value { get { return _value; } } } Alright, quite some code dump, but the important step is to call the SetMembers() method with the type information of the current value, i.e. value.GetType(). The SetMembers() method itself is responsible for obtaining the right (DOM) interface or populating triggering the population of properties and methods if a DOM interface has been found. Finally we start adding the methods that are contained in the interface. As name we choose the value stored in the DomNameAttribute attribute. Similarly we also include all properties listed in the interface, also with the right name. How does the DomFunctionInstance look like? The key is to provide a new implementation for the Call() method. Here we actually perform some magic. The main issue is unwrapping the arguments into the usual structure. We already discussed the problem briefly. Now we want to look on how these arguments are being processed. The BuildArgs() method is responsible for this magic. sealed class DomFunctionInstance : FunctionInstance { readonly MethodInfo _method; readonly DomNodeInstance _host; public DomFunctionInstance(DomNodeInstance host, MethodInfo method) : base(host.Engine, GetParameters(method), null, false) { _host = host; _method = method; } public override JsValue Call(JsValue thisObject, JsValue[] arguments) { if (_method != null && thisObject.Type == Types.Object) { var node = thisObject.AsObject() as DomNodeInstance; if (node != null) return _method.Invoke(node.Value, BuildArgs(arguments)).ToJsValue(Engine); } return JsValue.Undefined; } Object[] BuildArgs(JsValue[] arguments) { var parameters = _method.GetParameters(); var max = parameters.Length; var args = new Object[max]; if (max > 0 && parameters[max - 1].GetCustomAttribute<ParamArrayAttribute>() != null) max--; var n = Math.Min(arguments.Length, max); for (int i = 0; i < n; i++) args[i] = arguments[i].FromJsValue().As(parameters[i].ParameterType); for (int i = n; i < max; i++) args[i] = parameters[i].IsOptional ? parameters[i].DefaultValue : parameters[i].ParameterType.GetDefaultValue(); if (max != parameters.Length) { var array = Array.CreateInstance(parameters[max].ParameterType.GetElementType(), Math.Max(0, arguments.Length - max)); for (int i = max; i < arguments.Length; i++) array.SetValue(arguments[i].FromJsValue(), i - max); args[max] = array; } return args; } static String[] GetParameters(MethodInfo method) { if (method == null) return new String[0]; return method.GetParameters().Select(m => m.Name).ToArray(); } } What are we doing here? We provide some checks that the incoming arguments are sane. Then we construct an array called args, which is basically the set of parameters that needs to be processed. Finally, after including a variety of possibilities such as optional types, variadic input, we return the converted and packaged arguments in the proper format, as an array of objects. Using the code I extracted the scripting project in form of a sample WPF JS DOM REPL (a WPF JavaScript Document Object Model Read-Evaluate-Print-Loop) application. Hence this is basically a JavaScript console as known from almost every browser. The provided source code is frozen and won't be updated. Instead the current source (from this small project, as well as AngleSharp and related projects) can be found on the project's page, hosted at GitHub. The project can be reached at github.com/FlorianRappl/AngleSharp. If you find extreme bugs to the sample, then consider posting here. If small bugs in the application are found, or any bug in AngleSharp itself, then please prefer the issues page at GitHub. Also if you are in doubt where to post anything related to the project, consider the issues page at GitHub. So in short: Article related (typos, bugs in the sample, praise [hopefully!], ...) comments should be posted here, all other topics are discussed on GitHub. Personally, I will never ignore them, however, it makes things simpler and easier to follow for everyone if the discussion is centralized and focused. Points of Interest AngleSharp is not the only HTML parser written entirely in C#. In fact AngleSharp has quite a tough competition to fight against. There are reasons to choose some other parser for doing the job, however, in general AngleSharp might be more than okay for the job. If in doubt, then choose AngleSharp. If we want to create a headless browser, then AngleSharp is certainly the tool for the job. One might argue that other options for controlling a headless browser exist. That is certainly true, but most of these tools build on libraries that have been written in another language which do not compile to .NET code. Therefore these libraries are just wrappers, which may have drawbacks in the areas of performance or agility. Also robustness may be an issue. History - v1.0.0 | Initial Release | 07.11.2014
http://florian-rappl.de/Articles/Page/265/anglesharp-javascript
CC-MAIN-2017-26
refinedweb
6,411
57.57
I get no video from the camera that is connected with firewire. I have a question. I get no video from the camera that is connected with firewire. ordinary web camera works properly I work with opencv3.0.0 and VS12. there is no error, as if the program does not see the camera with VS10 and Opencv231 everything goes without problem what could be the problem? #include "opencv2/opencv.hpp" using namespace cv; int main() { int c; Mat img; VideoCapture cap(0); while (true) { cap >> img; if(img.empty()){ continue; } //or break; Mat edges; cvtColor(img, edges, CV_BGR2GRAY); Canny(edges, edges, 10, 100); imshow("Canny", edges); imshow("Norm", img); c=waitKey(1); if(c==27) break; } return 0; } use VideoCapture cap(CV_CAP_FIREWARE+deviceID);where deviceID=0means default device. VideoCapture cap (CV_CAP_FIREWARE + 0); VideoCapture cap (CV_CAP_FIREWIRE + 0); Does not help
https://answers.opencv.org/question/77871/i-get-no-video-from-the-camera-that-is-connected-with-firewire/?answer=77900
CC-MAIN-2021-31
refinedweb
140
66.33
7 May 2012 By clicking Submit, you accept the Adobe Terms of Use. Prior experience working with Flash Professional CS6 is not required. This introductory article provides all the steps you need to get started. Although this sample project includes some ActionScript code, previous knowledge of programming is not necessary. Note: If desired, you can download the sample files to review a working version of the completed project. Otherwise, just follow along with the instructions provided below to create the sample project from scratch. Beginning Note: For information on how to create your first Flash Professional CS5 document, refer to this article. Adobe Flash Professional CS6 is an authoring tool that you can use to create games, applications, and other content that responds to user interaction. Flash projects can include simple animations, video content, complex user interfaces, applications, and everything in between. In general, individual projects created with Flash Professional are called applications (or SWF applications), even though they might only contain basic animation. You can make media-rich applications by including pictures, sound, video, and special effects. The SWF format is extremely well suited for delivery over the web because SWF files are very small and take little time to download. Flash projects often include extensive use of vector graphics. Vector graphics require significantly less memory and storage space than bitmap graphics because they are represented by mathematical formulas instead of large data sets. Using bitmap graphics in Flash projects results in larger file sizes because each individual pixel in the image requires a separate piece of data to represent it. Additionally, Flash allows you to select graphic elements and convert them to symbols—making them easier to reuse and further improving performance when SWF files are viewed online. To build an application in Flash Professional CS5, you create vector graphics and design elements with the drawing tools and import additional media elements such as audio, video, and images into your document. Next, you use the Timeline and the Stage to position the elements and define how and when they appear. Using Adobe ActionScript (a scripting language) you create functions to specify how the objects in the application behave. When you author content in Flash Professional (by choosing File > New), you work with the master document, which is called a FLA file. FLA files use the file extension .fla (FLA). While editing a FLA file in the Flash authoring environment, you'll notice that the user interface is divided into five main parts: The five areas of the workspace are identified in Figure 1. ActionScript code allows you to add interactivity to the elements in your document. For example, you can add code that causes a button to display a new image when it is clicked. You can also use ActionScript to add logic to your applications. Logic enables your application to behave in different ways depending on the user's actions or other conditions. There are different versions of ActionScript. Flash Professional uses ActionScript 3 when an ActionScript 3 or Adobe AIR file is created, or ActionScript 1 and 2 when an ActionScript 2 file is created in the New Document dialog box. Flash includes many features that make it powerful but easy to use, such as prebuilt drag-and-drop user interface components, built-in motion effects that you can use to animate elements on the Timeline, pre-written code snippets, and special effects that you can add to media objects. When you have finished authoring your FLA file, you publish it by selecting File > Publish (or pressing Shift+F12). The publish operation generates a compressed version of your file with the extension .swf (known as a SWF file). Adobe Flash Player is used to play the SWF file in a web browser or as a stand-alone application. Although you don't upload or distribute the FLA file itself, you'll always want to keep a copy of the master document. If you need to make any changes, you can open the FLA file in Flash, edit it, and then publish an updated SWF file. This tutorial guides you through the process of creating a basic FLA document. You'll use this workflow when authoring projects in Flash Professional. The first step involves creating a new document: Note: Later, you can create a preset of your own custom workspace by positioning the panels in any way that you prefer. Choose the New Workspace option and enter a name to save your personal configuration. Once it's saved, you can reset the workspace by choosing its name from the workspace menu. Tip: You can set the background color of the Stage in the Flash movie by choosing Modify > Document or by selecting the Stage and then modifying the Stage color swatch in the Property inspector. There's no need to draw a rectangle to define the background color. When you publish your movie, Flash sets the background color of the published HTML page to the same color as the Stage background color (if you choose to generate an HTML file). After you've created your Flash document, you are ready to add some artwork to the project. Drawing shapes is a common task in Flash. When you use the drawing tools in the Tools panel, the vector graphics you create can be edited at any time. The following steps describe how to create a circle; later, you'll use this circle to create some basic animation. Follow these steps: To learn more about the two drawing mode options, see the Drawing modes section of the Flash Professional online documentation. Note: The Shift key works similarly with other auto shapes; when you press and hold Shift while drawing a shape with the Rectangle tool, you'll create a perfect square. Tip: If you're drawing your circle and you see only an outline of the shape instead of a fill color, first check to ensure that the stroke and fill options are set correctly in the Property inspector while the circle is selected. If the fill color swatch is set to a color and the stroke is set to No Color, the settings are correct. Next, make sure that the option to Show Outlines is not selected in the layers area of the Timeline. (There are three icons to the right of the layer names: eyeball icon, lock icon, and outlines icon. Double-check that the outlines icon displays a solid fill and not just a square outline. If you are not sure if the Show Outlines option is enabled, click the icon repeatedly to toggle the visual state between normal view and outline view.) After drawing some artwork, you can turn it into a reusable asset by converting it to a symbol. A symbol is a media asset that can be reused anywhere in your document without the need to re-create it. Symbols can contain bitmap and vector images and animations, along with other types of content. It is common to use symbols to create tweened animations. You can also use symbols to store graphic content (as described in the next set of steps). As you become more familiar with Flash Professional, you'll use symbols to structure applications and interactivity using multiple timelines. Symbols are useful for compartmentalizing parts of a project to make it easier for you to edit specific sections later. Follow these steps to create a symbol: Tip: You can also convert a graphic into a symbol by selecting it and dragging it into the Library panel. If the Library panel is not open, choose Window > Library to access it. The new symbol is now listed in the Library panel. (When you drag a copy of the symbol from the Library panel to the Stage, the copy on the Stage is called an instance of the symbol.) In this section, you'll use the symbol in your document to create a basic animation that moves across the Stage: You noticed that the animation loops by default as the movie plays in Flash Player. This occurs automatically because in Flash Professional, the Timeline is set up to loop back to Frame 1 after exiting the last frame—unless you instruct the movie to do otherwise. When you want to add a command that controls the Timeline, you'll add ActionScript code to a keyframe (indicated by a dot symbol) on the Timeline. This is known as adding a frame script. Tip: Keyframes are used to place ActionScript and assets on specific frames in the Timeline. When you review the Timeline of a FLA file, you can locate scripts and content by looking for the keyframe dots. Keyframes that have frame scripts display a lower case "a" symbol. Follow the steps below to add ActionScript code to your FLA file. You'll add one of the most common Timeline commands, which is called the stop action: stop(); Note: This step assumes that you're using the default mode of the Actions panel. If the Actions panel is in Script Assist mode, it won't allow you to type directly into to the text area. To return to the default mode, uncheck the magic wand icon in the upper right corner of the Actions panel. When you are finished creating your FLA file, and you've tested it repeatedly, you are ready to publish it. The files that you output when publishing can be uploaded to a host server so that the project can be viewed in a browser. When you publish the file, Flash Professional compresses the data in the FLA file and generates a much smaller, more compact (and non-editable) SWF file. It's important to note that the FLA is your master, authoring file. Always keep the FLA file handy in case you need to make changes to the project. The SWF file that is generated by Flash when you publish a project is the file that you'll embed in a web page. If you are familiar with Adobe Photoshop, you can think of the FLA file as the equivalent of a master PSD file, and the SWF file as the equivalent of the exported JPEG file that will be inserted on a web page. While you can choose to edit the publish settings and only publish the SWF file (and then use Adobe Dreamweaver or another HTML editor to insert the SWF file in a page) the Publish command makes things even easier. You can set the publish settings to automatically generate an HTML file that contains the code to embed the SWF file for you. Follow these steps to publish the FLA file and output the SWF file with the HTML file, so that you can view the published project in a browser: Tip: You can change the Target field in the top right of the Publish Settings dialog box to target different versions of the Flash Player or the AIR runtimes. Publishing to AIR provides an easy way to create desktop or mobile applications from your web-based application without doing extra work. You've successfully completed your first FLA file. To learn more about publishing your document, read the Publishing and Exporting section in the Flash Professional online documentation. In an earlier section, you stopped the animation from looping by adding a stop() action to the last frame of the Timeline. When the playhead reaches the last frame, it is instructed to stop, which prevents it from looping back to Frame 1. In this set of instructions, you'll learn how to add a Replay button. When a user clicks the button, it causes the playhead to begin playing from Frame 1 again. This is the behavior of the example shown in Figure 19. Flash is extremely flexible. There are many strategies that you could use to create a Replay button, including restarting the playhead when a user presses a key on their keyboard, or when they click on the Stage, or when they click a button. To achieve any of these options, you'll add some ActionScript code that responds to the user interactivity at runtime (while the SWF file is playing). This section is a little more advanced than the previous sections—it covers some new concepts that you'll use to control the behavior of your Flash movies with programming. Follow these steps to add a Replay button and the corresponding ActionScript to your file: Note: One of the most common mistakes is to enter the name of a frame label, rather than entering the instance name of an object on the Stage. Be very careful to always select the object on the Stage first. Then you can access the Property inspector to check that the panel refers to the selected symbol and indicates that an instance is selected. Verify that the field says "<Instance Name>" before you type the name of the instance. If the Property inspector refers to a frame, it means you've accidentally clicked the Timeline. Naming instances can be confusing until you are familiar with the options presented for selected symbol instances and selected keyframes. One thing to check: If you accidentally enter a name in the Property inspector while a keyframe is selected (instead of an object on the Stage), you'll see a red flag icon appear in the Timeline on the keyframe. Frame labels can be very helpful when creating navigation that jumps to different frames in the Timeline; for this example, however, it is critical that you select the button on the Stage and enter the instance name of the button in the button layer as replay_btn. Also make sure there are no typos; otherwise the script will not work. import flash.events.MouseEvent; function onClick(event:MouseEvent):void { gotoAndPlay(1); } replay_btn.addEventListener(MouseEvent.CLICK, onClick); onClick. The last line of code uses the addEventListenermethod, which registers the function as an "event listener" for the button's "click" event. The translation of this code essentially says: "When a user clicks the button named replay_btn, run the function named onClick. When the onClickfunction runs, it instructs the playhead to jump to Frame 1 of the Timeline and begin playing the frames." This is the standard format used when writing ActionScript 3; one section checks for user interactivity (such as a mouse click) and that triggers another function to respond to timing cues from objects in Flash Player. In this case, the event handler function instructs Flash Player to return to Frame 1 and start playing the Timeline again. (You'll use a similar syntax of creating an event handler function and assigning it to a button instance any time you create an interactive button in Flash.) Tip: You can use the Code Snippets panel as a shortcut when adding events to buttons. There are many online resources you can use to learn more about working with Flash Professional:
https://www.adobe.com/devnet/flash/articles/create-first-flash-document.html
CC-MAIN-2015-32
refinedweb
2,483
59.03
Hi, I have data in Excel . these are latest data. I want to update mysql table data with Excel data. How do I do it ? Please guide N.B: I use SQLYog , MySQLAdministrator tool to manage db work. Can these tool be of help for this work. how ? import the excel data into a table, then update your main table from this imported table using a joined update what is a joined update ? Do I need any tool to execute joined update ? UPDATE items,month SET items.price=month.price WHERE items.id=month.id; source: I want to add trim into query.... here is a bad query ..like this UPDATE items,month SET items.price=month.priceWHERE TRIM(items.id)=TRIM(month.id); Can you please tell what would be the correct version of this ? i have no idea it all depends on how your tables are related so far you have not revealed what your tables look like It may not have much things to do with table structure. I'm worried because of this ... suppose Excel data contains some SPACES ...and after import this SPACE goes to the CELL. I don't want this extra SPACE damage my joined update query. ...now I have two choices to get rid of this space.. (a) just need a TRIM to work ....I asked the same for this .....This could be a solution also. (b) remove SPACE from the EXCELL cell itself so that after import into table there will no SPACE in the CELL.... but unfortunately I don't know how to remove SPACE from Excel cell ....Do you know this ? This could be a solution also well, in that case, i would use TRIM you haven't showed us any data where it didn't work
https://www.sitepoint.com/community/t/how-to-update-table-data-with-excel-data/26521
CC-MAIN-2016-07
refinedweb
298
86.81
Last Friday I was thinking about writing a small application that went out and told me how many notebooks, sections and pages that I had open in OneNote. I was thinking about this because I was looking at the new search indexer and seeing how many pages it had indexed from my notebooks. Search has been getting really good in OneNote and I am very excited about the new release which will much improved. Additionally I have been hearing from more people that they wanted more sample code to see how to program with the OneNote API. Since our app is still in beta I am still seeking feedback on what documentation you want. Please let me know what you are looking for and we can go from there. Without further ado: OneNoteStats an easy application that tells you how many items open you have in OneNote. Steps: - Create a new console project - On the Solution Explorer under References right-click, choose Add and select the Microsoft.Office.Interop.OneNote item - Copy and paste the code below - Compile and run Code: using System.Xml; using OneNote = Microsoft.Office.Interop.OneNote; namespace OneNoteStats { class Program { static void Main(string[] args) { //string to store all of the OneNote hierarchy XML string onHierarchy; //bind to OneNote via the COM Interop OneNote.Application onApp = new Microsoft.Office.Interop.OneNote.Application(); //get the OneNote hierarchy //GetHierarchy(start (null for root), scope of what you want, were to put the output onApp.GetHierarchy(null, Microsoft.Office.Interop.OneNote.HierarchyScope.hsPages, out onHierarchy); //Create an XML Document, load the XML and add the OneNote namespace XmlDocument xdoc = new XmlDocument(); xdoc.LoadXml(onHierarchy); string OneNoteNamespace = ""; XmlNamespaceManager nsmgr = new XmlNamespaceManager(xdoc.NameTable); nsmgr.AddNamespace("one", OneNoteNamespace); //Use the SelectNodes method to pass an XPath query and select the matching nodes. //One for each top-level item XmlNodeList notebooks = xdoc.SelectNodes("//one:Notebook", nsmgr); System.Console.WriteLine("You have " + notebooks.Count + " notebooks"); XmlNodeList sections = xdoc.SelectNodes("//one:Section", nsmgr); System.Console.WriteLine("You have " + sections.Count + " sections"); XmlNodeList pages = xdoc.SelectNodes("//one:Page", nsmgr); System.Console.WriteLine("You have " + pages.Count + " pages"); } } } If you have any questions...this is a really simple application and I hope to have more in the near future to show how you can work with the API. I use OneNote both at home and work. I am planning to build an app using Amazon’s Online Storage service (paid subscription). I am always looking for some samples on OneNote. Would it be possible to use password protect & encrypt Notes so they are not visible if someone decides to go against the raw XML file. These are great questions Kris. I will address them in an upcoming blog post. Contact me if you need anything else. thanks for the comment PingBack from I would love to see more example code. Love to see some more and different examples of the UpdateHierarchy to update and create sections, pages and outlines. Right now there seems to be just one example of update. Great I love this feedback…I will work on some blog posts to cover this! Last Friday I was thinking about writing a small application that went out and told me how many notebooks, sections and pages that I had open in OneNote. I was thinking about this because I was looking at the new search indexer and seeing how many page Hi Dan VERY nice 🙂 This gives me the heads-up on how to jump in to doing some tweaks of my own. I'll be sure to let you know if I come up with anything useful 🙂 Regards, Richard
https://blogs.msdn.microsoft.com/descapa/2006/08/13/sample-codeapp-onenote-stats/
CC-MAIN-2017-43
refinedweb
605
66.33
[ ] Abin Shahab resolved YARN-2481. ------------------------------- Resolution: Cannot Reproduce Assignee: Abin Shahab Verified on trunk that JAVA_HOME can be set for containers(mappers and reducers for example), and that is not overridden by the NM's JAVA_HOME. > YARN should allow defining the location of java > ----------------------------------------------- > > Key: YARN-2481 > URL: > Project: Hadoop YARN > Issue Type: New Feature > Reporter: Abin Shahab > Assignee: Abin Shahab > > Yarn right now uses the location of the JAVA_HOME on the host to launch containers. This does not work with Docker containers which have their own filesystem namespace and OS. If the location of the Java binary of the container to be launched is configurable, yarn can launch containers that have java in a different location than the host. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
http://mail-archives.us.apache.org/mod_mbox/hadoop-yarn-dev/201409.mbox/%3CJIRA.12737981.1409380077000.143315.1411964074846@Atlassian.JIRA%3E
CC-MAIN-2019-43
refinedweb
131
58.92
Pass the MCAD/MCSD: Learning to Access and Manipulate XML Data - May 9, 2003 Objectives This chapter covers the following Microsoft-specified objective for the "Consuming and Manipulating Data" section of the "Developing XML Web Services and Server Components with Microsoft Visual C# .NET and the Microsoft .NET Framework" exam: Access and manipulate XML Data. Access an XML file by using the Document Object Model (DOM) and an XmlReader. Transform DataSet data into XML data. Use XPath to query XML data. Generate and use an XSD schema. Write a SQL statement that retrieves XML data from a SQL Server database. Update a SQL Server database by using XML. Validate an XML document. Extensible Markup Language (far better known as XML) is pervasive in .NET. It's used as the format for configuration files, as the transmission format for SOAP messages, and in many other places. It's also rapidly becoming the most widespread common language for many development platforms. This objective tests your ability to perform many XML development tasks. To pass this section of the exam, you need to know how to read an XML file from disk, and how to create your own XML from a DataSet object in your application. You also need to be familiar with the XPath query language, and with the creation and use of XSD schema files. You'll also need to understand the connections that Microsoft SQL Server has with the XML universe. You need to be able to extract SQL Server data in XML format, and to be able to update a SQL Server database by sending it properly formatted XML. Finally, the exam tests your ability to validate XML to confirm that it conforms to a proper format. The .NET Framework includes several means of validating XML that you should be familiar - Starting with an XML Schema Understanding XPath - The XPath Language - Using the XPathNavigator Class - Selecting Nodes with XPath - Navigating Nodes with XPath Generating and Using XSD Schemas - Generating an XSD Schema - Using an XSD Schema - Validating Against XSD - Validating Against a DTD Using XML with SQL Server - Generating XML with SQL Statements - Understanding the FOR XML Clause - Using ExecuteXmlReader() to create XSD files. Inspect the generated XSD and understand how it relates to the original objects. Use XML to read and write SQL Server data. You can install the MSDE version of SQL Server from your Visual Studio .NET CD-ROMs if you don't have a full SQL Server to work with. Use the XmlValidatingReader with desktop applications, but if you want to write XML Web Services and other distributed applications, XML knowledge is even more important. The .NET Framework uses XML for many purposes itself, but it also makes it very easy for you to use XML in your own applications. The FCL's support for XML is mainly contained in the System.Xml namespace. This namespace contains objects to parse, validate, and manipulate XML. You can read and write XML, use XPath to navigate through an XML document, or check to see whether a particular document is valid XML by using the objects in this namespace. NOTE XML Basics In this chapter, I've assumed that you're already familiar with the basics of XML, such as elements and attributes. If you need a refresher course on XML Basics, refer to Appendix B, "XML Standards and Syntax." As you're learning about XML, you'll become familiar with some other standards as well. These include the XPath query language and the XSD schema language. You'll see in this chapter how these other standards are integrated into the .NET Framework's XML support.
http://www.informit.com/articles/article.aspx?p=31696&f1=rss
crawl-003
refinedweb
608
62.78
This part of the manual describes how to import and export images and main DGtal objects from/to various formats. In DGtal, file readers and writers are located in the "io/readers/" and "io/writers/" folders respectively. Most of them are dedicated to image format import/export but some other DGtal data structures can have such tools (e.g. point set/mesh readers). Before going into details, let us first present an interesting tool for image visualisation or image export: predefined colormaps to convert scalars or to (red,green,blue) triplets. Colormap models satisfy the CColormap concept. For short, a colormap is parametrized by a scalar value template type (Value). When constructed from two min and max values (of type Value), the colormap offers an operator returning a DGtal::Color for each value v in the interval [min,max]. For example, RandomColorMap returns a random color for each value v. More complex colormaps (GradientColorMap, HueShadeColorMap, ...) offer better colormap for scientific visualisation purposes. Beside colormaps, TickedColorMap is a colormap adapter that adds ticks. For example, you can adapt a Black-Red gradient colormap to add regular white ticks (see usage in testTickedColorMap.cpp): In some situations, we may have to convert colors into scalar values (see below). In this case, basic conversion functors are available in the DGtal::functors namespace. For example, you would find in this namespace a DGtal::functors::RedChannel converter or a DGtal::functors::MeanChannels converter. Hence, to implement a functor taking values and returning the red channel of a colormap, you just have to compose the two functors with the help of the Composer: We first detail import/export format for DGtal images. Please refer to Images for details on images in DGtal and their associated concepts. First of all: Hence, for image writers, some functors may return a DGtal::Color or a scalar value depending on the writer. For scalar value format (PGM, Vol, Longvol, Raw, ...), the associated template class have a default functor type. Hence, if you just want to cast your image values to the file format value type (e.g. "unsigned char" for Vol), do not specify any functor. The class GenericWriter allows to automatically export any image (2d, 3d, nd) from its filename. The class is templated with an image container type, a dimension value (given by default by the image container dimension), a value type, (also given by default by the image container) and a functor type (by default set to the DefaultFunctor type). To use it you need first to include the following header: After constructing and filling an image (anImage2D or anImage3D), by default you can save it with: As the other export functions, a functor can be used as optional argument (as given in the previous example): If you don't need to specify special functor and if don't need to change default image type, you can use a less generic writer with the stream operator and the string filename: To write color images, you need to use a functor which transform a scalar value into a Color. You can use the previous Colormaps : The class GenericReader allows to automatically import any image (2d, 3d, nd) from its filename. The class is templated with an image container type, a dimension value (given by default by the image container dimension), a value type, (also given by default by the image container). Note that the reader choice between 8 bits or 32 bits is automatically done according to the templated image container type. So you can have an DGtal::IOException if you try to read a 8 bits raw image from an unsigned int image type (choose an 8 bits type like unsigned char or explicitly call the specific reader RawReader::importRaw8). Use the same import function for both 2D or 3D images: The static class PointListReader allows to read discrete points represented in simple file where each line represent a single point. The static class MeshReader allows to import Mesh from OFF or OFS file format. Actually this class can import surface mesh (Mesh) where faces are potentially represented by triangles, quadrilaters and polygons. Notes that Mesh can be directly displayed with Viewer3D. The mesh importation can be done automatically from the extension file name by using the "<<" operator. For instance (see. Import 3D mesh from OFF file ): You can also export a Mesh object by using the operator (">>"). Notes that the class Display3D permits also to generate a Mesh which can be exported (see. Export 3D mesh in OFF and OBJ format). Importing and visualizing a digital set from a vol file can be done in few code lines. (see. digitalSetFromVol.cpp). First we select the Image type with int: Then the initial image is imported: Afterwards the set is thresholded in ]0,255[: Then you will obtain the following visualisation: The example digitalSetFromPointList.cpp shows a simple example of 3d set importation: We can change the way to select the coordinate field: You may obtain the following visualisation: The following example meshFromOFF.cpp shows in few lines how to import and display an OFF 3D mesh. Add the following headers to access to OFF reader and Viewer3D: then import an example ".off" file from the example/sample directory: Display the result: You may obtain the following visualisation: You can also import large scale mesh, like the one of classic Angel scan ( available here: ) The following example display3DToOFF.cpp shows in few lines how to export in OFF format a DigitalSet object. This object will be exported with a Display3D object (see. display3DToOFF.cpp). Notes that the export can also be done in two steps: The resulting mesh can be visualized for instance by using meshlab; This code can be useful if you want to generate illustrations in the U3D format. For instance by using the U3D/PDF export from MeshLab or JReality ( www3.math.tu-berlin.de/jreality/). You can for instance generate some exports in pdf like this example: (see this pdf file: )
https://dgtal.org/doc/0.9.2/moduleIO.html
CC-MAIN-2020-50
refinedweb
998
51.99
#?(:clj (Clojure expression) :cljs (ClojureScript expression) :cljr (Clojure CLR expression) :default (fallthrough expression)) Reader conditionals are a feature added in Clojure 1.7. They are designed to allow different dialects of Clojure to share common code that is mostly platform independent, but contains some platform dependent code. If you are writing code across multiple platforms that is mostly independent you should separate .clj and .cljs files instead. Reader conditionals are integrated into the Clojure reader, and don’t require any extra tooling beyond Clojure 1.7 or greater. To use reader conditionals, all you need is for your file to have a .cljc extension and to use Clojure 1.7 or ClojureScript 0.0-3196 or higher. Reader conditionals are expressions, and can be manipulated like ordinary Clojure expressions. For more technical details, see the reference page on the reader. There are two types of reader conditionals, standard and splicing. The standard reader conditional behaves similarly to a traditional cond. The syntax for usage is #? and looks like: #?(:clj (Clojure expression) :cljs (ClojureScript expression) :cljr (Clojure CLR expression) :default (fallthrough expression)) The platform tags :clj, etc are a fixed set of tags hard-coded into each platform. The :default tag is a well-known tag to catch and provide an expression if no platform tag matches. If no tags match and :default is not provided, the reader conditional will read nothing (not nil, but as if nothing was read from the stream at all). The syntax for a splicing reader conditional is #?@. It is used to splice lists into the containing form. So the Clojure reader would read this: (defn build-list [] (list #?@(:clj [5 6 7 8] :cljs [1 2 3 4]))) as this: (defn build-list [] (list 5 6 7 8)) One important thing to note is that in Clojure 1.7 a splicing conditional reader cannot be used to splice in multiple top level forms. In concrete terms, this means you can’t do this: ;; Don't do this!, will throw an error #?@(:clj [(defn clj-fn1 [] :abc) (defn clj-fn2 [] :cde)]) ;; CompilerException java.lang.RuntimeException: Reader conditional splicing not allowed at the top level. Instead you’d need to do wrap each function individually: #?(:clj (defn clj-fn1 [] :abc)) #?(:clj (defn clj-fn2 [] :cde)) or use a do to wrap all of the top level functions: #?(:clj (do (defn clj-fn1 [] :abc) (defn clj-fn2 [] :cde))) Let’s go through some examples of places you might want to use these new reader conditionals. Host interop is one of the biggest pain points solved by reader conditionals. You may have a Clojure file that is almost pure Clojure, but needs to call out to the host environment for one function. This is a classic example: (defn str->int [s] #?(:clj (java.lang.Integer/parseInt s) :cljs (js/parseInt s))) Namespaces are the other big pain point for sharing code between Clojure and ClojureScript. ClojureScript has different syntax for requiring macros than Clojure. To use macros that work in both Clojure and ClojureScript in a .cljc file, you’ll need reader conditionals in the namespace declaration. Here is an example from a test in route-ccrs (ns route-ccrs.schema.ids.part-no-test (:require #?(:clj [clojure.test :refer :all] :cljs [cljs.test :refer-macros [is]]) #?(:cljs [cljs.test.check :refer [quick-check]]) #?(:clj [clojure.test.check.properties :as prop] :cljs [cljs.test.check.properties :as prop :include-macros true]) [schema.core :as schema :refer [check]])) Here is another example, we want to be able to use the rethinkdb.query namespace in Clojure and ClojureScript. However we can’t load the required rethinkdb.net in ClojureScript as it uses Java sockets to communicate with the database. Instead we use a reader conditional so the namespace is only required when read by Clojure programs. (ns rethinkdb.query (:require [clojure.walk :refer [postwalk postwalk-replace]] #?(:clj [rethinkdb.net :as net]))) ;; snip... #?(:clj (defn run [query conn] (let [token (get-token conn)] (net/send-start-query conn token (replace-vars query))))) Exception handling is another area that benefits from reader conditionals. ClojureScript supports (catch :default) to catch everything, however you will often still want to handle host specific exceptions. Here’s an example from lemon-disc. (defn message-container-test [f] (fn [mc] (passed? (let [failed* (failed mc)] (try (let [x (:data mc)] (if (f x) mc failed*)) (catch #?(:clj Exception :cljs js/Object) _ failed*)))))) The splicing reader conditional is not as widely used as the standard one. For an example on its usage, let’s look at the tests for reader conditionals in the ClojureCLR reader. What might not be obvious at first glance is that the vectors inside the splicing reader conditional are being wrapped by a surrounding vector. (deftest reader-conditionals ;; snip (testing "splicing" (is (= [] [#?@(:clj [])])) (is (= [:a] [#?@(:clj [:a])])) (is (= [:a :b] [#?@(:clj [:a :b])])) (is (= [:a :b :c] [#?@(:clj [:a :b :c])])) (is (= [:a :b :c] [#?@(:clj [:a :b :c])])))) There isn’t a clear community consensus yet around where to put .cljc files. Two options are to have a single src directory with .clj, .cljs, and .cljc files, or to have separate src/clj, src/cljc, and src/cljs directories. Before reader conditionals were introduced, the same goal of sharing code between platforms was solved by a Leiningen plugin called cljx. cljx processes files with the .cljx extension and outputs multiple platform specific files to a generated sources directory. These were then read as normal Clojure or ClojureScript files by the Clojure reader. This worked well, but required another piece of tooling to run. cljx was deprecated on June 13 2015 in favour of reader conditionals. Sente previously used cljx for sharing code between Clojure and ClojureScript. I’ve rewritten the main namespace to use reader conditionals. Notice that we’ve used the splicing reader conditional to splice the vector into the parent :require. Notice also that some of the requires are duplicated between :clj and :cljs. (ns taoensso.sente (:require #?@(:clj [[clojure.string :as str] [clojure.core.async :as async] [taoensso.encore :as enc] [taoensso.timbre :as timbre] [taoensso.sente.interfaces :as interfaces]] :cljs [)]))) (ns taoensso.sente #+clj (:require [clojure.string :as str] [clojure.core.async :as async)] [taoensso.encore :as enc] [taoensso.timbre :as timbre] [taoensso.sente.interfaces :as interfaces]) #+cljs (:require )])) At the time of writing, there is no way to use .cljc files in versions of Clojure less than 1.7, nor is there any porting mechanism to preprocess .cljc files to output .clj and .cljs files like cljx does. For that reason library maintainers may need to wait for a while until they can safely drop support for older versions of Clojure and adopt reader conditionals. Original author: Daniel Compton
https://clojure.org/guides/reader_conditionals
CC-MAIN-2017-39
refinedweb
1,121
69.07
[WARNING: this post has no business value whatsoever, and that is by design. Please do not take any of this post as something you’d do in your applications. I got a new toy and I am just goofing around with it :-)] If you’ve been reading this blog through the years, you know I am a big fan of gadgets and in general anything I can tinker with. Last August I read about Blink(1), a very promising project on Kickstarter for an über USB status light, which would allow you to translate any event you want to keep an eye on in blinks of arbitrary color/intensity/frequency. I instantly wanted in! I proudly joined the ranks of the project backers, and hoped that it would actually become reality. Today, almost 4 months later, I found in the mailbox a nice padded mailer containing a cute box with the blinker: beautifully made, and a perfect match with the original vision. Isn’t the epoch we live in absolutely amazing? 🙂 Anyway, the enthusiasm for the little toy was soon tempered by the somewhat shaky support for using Blink(1) from .NET. That wasn’t enough to deter me from messing with it anyway and from finding a way of concocting something claims-related. And now that you know the overall story, without further ado… Unboxing Blink(1) Here, savor with me the pleasure of unboxing a new gadget. After the various updates from the project creators, I knew I was going to get Blink(1) this week. When I dug out this white bubble-wrap envelope from the daily pile of magazines (the magic of magformiles and having held a travelling job for few years) I *knew* that it was it 🙂 In a very Japanese turn of events the envelope contained another mailer, also bubble-lined. Those guys really made sure that the goods arrived in good shape. And finally, the product box! It’s really tiny: the drawing on the top is a 1:1 representation. Notable detail, the lid snaps to the box body via a magnet. Very classy 🙂 Before opening it, let’s take a look on the bottom side: there you’ll find enough info to grok what the product is about and to get you started using it. …and finally, ta-dah! Here’s Blink(1), hugged by its foam bed. It’s exactly as advertised: a small translucent enclosure of white plastic with a gently chamfered aluminum top, all fitting together in perfect alignment. Here there’s another view, to get you a feeling of the dongle size before plugging it in. …and in it goes. The Blink(1) gets recognized immediately by Windows: it flashes once, then it goes quiet waiting for your commands. More about that in the next section. Above you can see a shot of Blink(1) plugged in my Lenovo X230 Tablet, shining a nice shade of green. How did I get it to do that? Read on… Goofing Around with Blink(1) and Claims The main appeal Blink(1) has for me is its potential to manifest in meatspace events that would normally require you to watch a screen and interpret it. It might not have the directness of acquiring a brand-new new sense, but it can certainly contribute to confer an intuitive dimension to events that one would normally only understand by cognition. That sounds all nice and dandy, but how to make all of that concrete? I wanted to get my hands dirty with some code, and obviously doing something identity-related came to mind. I thought about which aspects of claims-based identity could be made intuitive by a multi-colored blinking light. I won’t bore you with a long digression about the process that led me to choose what I picked, and will go straight to my conclusions: I decided to give a color to the identity providers. Say that you have a Web application, and that you outsourced authentication to the Windows Azure Access Control Service. Let’s also say that you accept identities from the following sources: - One or more Windows Azure Active Directory tenants - Yahoo! - Microsoft Accounts (formerly Windows Live ID) If you keep a log you can determine how often your users come from one provider or another, however wouldn’t it be nice to also get an intuitive idea of the mix of users signing in at any given moment? What if, for example, every time you’d walk by your PC you could see a light indicator shining of a different color every time a user from a given identity provider signs in? That would not be very useful to draw conclusions and projections, you need the quantitative data in your logs for that, but for an anxious control freak like me it would be nice to keep an eye on the pulse of the application. With that idea in mind, I headed to the Git repository for Blink(1) to find out how to drive my unit as described. During the various updates about the project it looked like .NET was going to be one of the platform better supported, but something must have shifted during development. The Windows application, which I believe is required for processing the Blink(1) JSON API, does not even build in my system, apparently due to a dependency to a library that does not work on x64 systems (I think I am x64-only since 2007, and so are most of the devs I know). All of the pre-built applications failed to run, complaining that they could not find javaw. I installed JRE; they still complained about the same. I added a JAVA_HOME environment variable with the path: sometime it helps, but still nothing. Finally, I added it to the PATH system variable: the apps stopped complaining about javaw, however they still didn’t work at all. The even log showed nothing. Luckily, there was an exception to all this: the Blink(1) command line tool, which worked great from the very beginning. The command line tool finally allowed me to get some satisfaction from my Blink(1). I experimented with different colors and update times, and I was not disappointed. Also, despite the exposure to rapid flashing I didn’t experience any enhancement in my alpha abilities, which I suspect means I have none… 😉 If anything, it left me with an even greater thirst for playing with the thing. Undeterred, I decided that if the command line was the only thing that worked, then I would have wrapped it and used it as API. After all, it’s not like if I had to do anything that would even go in production. First thing, I fired VS2012 and created an MVC4 application. Then I added a brute-force wrap of the command line tool, exposing only a function which changes the color of Blink(1). I didn’t even bother to copy the exe from its original download location 🙂 public class BlinkCmdWrapper { public static void Fade(Color color) { Process serverSideProcess = new Process(); serverSideProcess.StartInfo.FileName = @"C:\Users\vittorio\Downloads\blink1-tool-win\blink1-tool.exe"; serverSideProcess.StartInfo.Arguments = "--rgb " + color.R.ToString() + "," + color.G.ToString() + "," + color.B.ToString(); serverSideProcess.StartInfo.WindowStyle = ProcessWindowStyle.Hidden; serverSideProcess.Start(); } } Not much to explain there, really. The “rgb” command fades the current color into the specified RGB triplet; the method calls the tool with the necessary parameters in a hidden window. Horribly inefficient, yes; but for seeing some lights blink on my laptop within the next 5 mins, it will do. Then I configured the MVC app to use WIF to outsource authentication to ACS. Thanks to the fact that I already had a dev namespace configured with all the identity providers I needed, and the VS2012 Identity and Access tool, that took about 5 seconds :-). All that was left was to add the logic that switches color according to the original identity provider of incoming tokens. WIF offers a nice collection of events which fire at various stages of the authentication pipeline: a good candidate for this functionality would be the SignedIn event. Discovering the identity of the original IP is pretty trivial for tokens issued by ACS, given that the STS graciously adds a claim () meant to convey exactly that information. In the end, the thing that resulted most difficult was to choose a color for every provider 🙂 In short, I added the following to the global.asax: void WSFederationAuthenticationModule_SignedIn(object sender, EventArgs e) { // retrieve the identity provider claim Claim idpClaim = ClaimsPrincipal.Current.FindFirst(""); // no IP claim, but signin succeeded; it might be a custom provider if (idpClaim == null) { BlinkCmdWrapper.Fade(Color.White); } else switch (idpClaim.Value) { case "uri:WindowsLiveID": BlinkCmdWrapper.Fade(Color.Pink); break; case "Yahoo!": BlinkCmdWrapper.Fade(Color.Green); break; case "Google": BlinkCmdWrapper.Fade(Color.Yellow); break; default: if(idpClaim.Value.StartsWith("Facebook")) BlinkCmdWrapper.Fade(Color.Blue); else if(idpClaim.Value.StartsWith("")) BlinkCmdWrapper.Fade(Color.Azure); else BlinkCmdWrapper.Fade(Color.Red); break; } } Once again, very straightforward. The method looks up the IP claim. If it isn’t there, the provider might not have a rule to add it, or the application might be trusting another STS instead of ACS; either ways the SignedIn succeeded, hence we need to signal something: I picked white as the most neutral behavior. If there is an IP claim: in the case of tokens from Microsoft Accounts, Yahoo! or Google the expected value is fixed (hence the switch statement). In the case of Facebook or Windows Azure Active Directory what is known is how the string begins, hence the cascading ifs in the default clause. Well, that’s pretty much all that is required to achieve the effect we want. Let’s hit F5 and see what happens. On the left side you can see an app screenshot, on the right the status of the Blink(1). At first launch you get the classic HRD page from ACS. The Blink(1) is still off, no sing in took place yet. Let’s hit Try Research 2, a Windows Azure Active Directory tenant, and authenticate. …aand – it’s a kind of magic – the Blink(1) comes alive and shows an appropriate azure glow! Want to see that changing? Sure. Copy the address of the app (localhost+port) and open an in-private instance of IE (or an instance of Firefox). Paste the address in, and this time sign in with Facebook: Yes, the picture does not show a striking difference, but I can assure you that the indicator is now of a much deeper blue. Want to see more difference? Close the extra browser, re-open it and repeat the above but choosing Windows Account: Nice and pink! Repeat the above for Yahoo: and it’s green! You get the idea. The above is a good proof of concept of the original idea. Is it a good solution, though? Well, no. Apart from the ugly workaround of spawning a process for wrapping the command line tool, there are multiple shortcomings to deal with: - The default fade values are too long, rapid events would not be displayed without some adjustments (pretty easy) - Subsequent, adjacent sign-ins of users from the same provider would not be displayed. Blink(1) should probably blink rather than fade - The token validation would take place in the cloud or in some remote HW. The events would have to be transported down to the machine where the Blink(1) is plugged, possibly via some kind of queue. ServiceBus seems a good solution here But you know what? My goal here was to play with my new Blink(1), and that I did 🙂 Yes, it’s 3:17am and I am a happy tinkerer: I can call it a night. Thanks to the ThingM guys for having created a really nice item. The Kickstarter process is obviously over at this point, but ThingM now offers Blink(1) in their regular store, at exactly the same price. Looking forward to more hacking when the .NET support will catch up! .
https://blogs.msdn.microsoft.com/vbertocci/2012/12/09/fun-with-blink1-and-claims-unboxing-and-identity-providers-synesthesia/
CC-MAIN-2017-22
refinedweb
2,017
70.23
In this part of my Java Video Tutorial, I continue to provide you with a complete understanding of Java. Today I’m covering the Object and Class class, along with clone. We will explore all of the methods that every object gets by default. Use the code that follows the video to help you learn. If you missed the previous parts make sure you check them out here Java Video Tutorial. If you like videos like this share it Code From the Video LESSONSIXTEEN.JAVA public class LessonSixteen{ public static void main(String[] args){ // Every object inherits all the methods in the Object class Object superCar = new Vehicle(); // superCar inherits all of the Object methods, but an object // of class Object can't access the Vehicle methods // System.out.println(superCar.getSpeed()); * Throws an error // You can cast from type Object to Vehicle to access those methods System.out.println(((Vehicle)superCar).getSpeed()); // The methods of the Object class Vehicle superTruck = new Vehicle(); // equals tells you if two objects are equal System.out.println(superCar.equals(superTruck)); // hashCode returns a unique identifier for an object System.out.println(superCar.hashCode()); // finalize is called by the java garbage collector when an object // is no longer of use. If you call it there is no guarantee it will // do anything though // getClass returns the class of the object System.out.println(superCar.getClass()); // THE CLASS OBJECT // You can use the Class object method getName to get just the class name System.out.println(superCar.getClass().getName()); // You can check if 2 objects are of the same class with getClass() if(superCar.getClass() == superTruck.getClass()){ System.out.println("They are in the same class"); } // getSuperclass returns the super class of the class System.out.println(superCar.getClass().getSuperclass()); // the toString method is often overwritten for an object System.out.println(superCar.toString()); // toString is often used to convert primitives to strings int randNum = 100; System.out.println(Integer.toString(randNum)); // THE CLONE METHOD // clone copies the current values of the object and assigns // them to another. If changes are made after the clone both // objects aren't effected though superTruck.setWheels(6); Vehicle superTruck2 = (Vehicle)superTruck.clone(); System.out.println(superTruck.getWheels()); System.out.println(superTruck2.getWheels()); // They are separate objects and don't have equal hashcodes System.out.println(superTruck.hashCode()); System.out.println(superTruck2.hashCode()); // There are subobjects defined in an object clone won't // also clone them. You'd have to do that manually, but this // topic will be covered in the future because of complexity } } VEHICLE.JAVA public class Vehicle extends Crashable implements Drivable, Cloneable{ int numOfWheels = 2; double theSpeed = 0; int carStrength = 0; public int getWheels(){ return this.numOfWheels; } public void setWheels(int numWheels){ this.numOfWheels = numWheels; } public double getSpeed(){ return this.theSpeed; } public void setSpeed(double speed){ this.theSpeed = speed; } public Vehicle(){ } public Vehicle(int wheels, double speed){ this.numOfWheels = wheels; this.theSpeed = speed; } public void setCarStrength(int carStrength){ this.carStrength = carStrength; } public int getCarStrength(){ return this.carStrength; } public String toString(){ return "Num of Wheels: " + this.numOfWheels; } public Object clone(){ Vehicle car; try{ car = (Vehicle) super.clone(); } catch(CloneNotSupportedException e){ return null; } return car; } } I am taking your tutorial at this moment, and have noticed that the ” Object superCar = new Vehicle(); ” and ” Vehicle superTruck = new Vehicle(); ” line within the L16 code both come up as errors. It would be very helpful to have some feed back on how to fix the error. Thank You Sincerely Ben T.W There has to be something else wrong because that code is fine. What error are you getting? This happened to me too, but it was because in the code for lesson 15 the Vehicle class was missing a constructor. This is added in the code above, so it all works now. Phew! Amazing tutorials, thank you Derek! Oops, disregard this comment! I don’t think I quite understand Java yet… ha. Was it because Line 07 in JavaLesson16 is missing the int and double arg? thanx eddie. .me too got the same error,and created a costructor in Vehicle class. .it was gone. . Phew!!!!! 😉 The constructor in the VEHICLE class defines two parameters: public Vehicle(int wheels, double speed) { this.numOfWheels = wheels; this.theSpeed = speed; } Adding a blank constructor can fix this: public Vehicle(){ } OR: In the JAVALESSONSIXTEEN class these parameters are undefined. Object superCar = new Vehicle(); It should be: Object superCar = new Vehicle (4, 10); [or whatever int and double values you would choose in place of the 4 and the 10] hi there, I copied ur code to netbeans and I tried to run it, but no luck. do you think it is becouse I use netbeans insted of Eclipise? thanks for your advice. this is the error I get: Exception in thread “main” java.lang.VerifyError: (class: lessonsixteen/Vehicle, method: signature: (ID)V) Constructor must call super() or this() at lessonsixteen.LessonSixteen.main(LessonSixteen.java:19) This seems to be a NetBeans error. This is the fix I found online Delete .class files from under project/build/classes. Click Run -> Clean & Build Project I hope that helps Hello thnaks , do we need to copy the crashable and drivable classes to ur code for this lesson? as they are not included in ur code for this lesson. another question I have: does it matter to name the clases with uppercase or lowercase? are Crashable different from CRASHABLE? THNAKS Yes you need those classes and they are available here Java Video Tutorial 15. You can name your classes using different cases if you’d like, but it is common to name them per word like this ThisIsAClassName Thanks for ur reply. 🙂 would you plz. make video tutorial series on J2EE(servelet,jsp)?? That series of tutorials is in the works now Hi Derek! As many of your students have already stated, and I reiterate, thank you from the bottom of my heart. What you are doing here is giving of your time and from your skill set to help others, and there is no greater gift or deed than to lend a hand to your fellow man. I am grateful for what you do and I feel a bit obligated to learn as much from you as I can, since you’re spending your time teaching us. Now that I’ve spit-shined your sitting muscle, I have a few questions that I’d like to ask. I love programming and I would enjoy working with a firm developing Java based software. How far off base am I to believe once I finish all 60 of your Java tutorials and I have a firm handle on the language, that I would be able to land a job using this new skill? It would be awesome, but I feel there is a lot more that I don’t know about this and I am hoping you could help me understand what employer would be looking for from me and my skill set so that I could be better prepared and enter the workforce as a competent Java programmer. Sincerely, William Mosley Hi William, Thank you for the kind message. It is very much appreciated 🙂 I tried to do something with this Java tutorial that has never been done. I tried to cover everything imaginable. I wanted to make a person willing to go through everything to be an expert on programming java, but also programming in general. The java tutorial actually continues after the original 60 to Design Patterns, Java Algorithms, Object Oriented Design, Refactoring, UML, and now Android. After Android apps, I’ll cover Android game development. Then I’ll cover the final part being all J2EE related topics. I’ll also probably develop straight Java Games. If you go through all of them I’d say you will be a Java expert and an expert in programming in general. I hope that helps 🙂 Derek Hello Derek, Thank you very much for uploading this tutorial. I have one question about the code. In LESSONSIXTEEN.JAVA, line 53, System.out.println(superCar.toString()); why don’t you use superCar.getClasss().toString()? You’re very welcome 🙂 I just used the code that I did to explain how toString works. From where to download eclipse for programming of java? and which version. I made a tutorial for that called Install Eclipse for Java. I hope it helps 🙂 Hi Derek, I’m trying to understand the syntax for the following statement: superCar.getClass().getName() Is such a statement does it say that there is a function (method) as part of the object (superCar) and inside the getClass() method there is another method defined as getName()? Thanks for the explanation. jp Hi superCar.getClass().getName() is getting the name of the class for the object superCar excelent tuto, gracias Derek Thank you 🙂
http://www.newthinktank.com/2012/02/java-video-tutorial-16/?replytocom=20632
CC-MAIN-2019-18
refinedweb
1,469
57.16
Hello, I am a beginning C programmer and I am stuck on a question that our teacher gave us. We are suppose to have a file that inputs a .txt file with 2 names and phone numbers. The end result should look like this: Clayton xxx-xxx-xxxx Jonathan xxx-xxx-xxxx (there are actual numbers where there are x's) Here is the code I have so far: ********************** #include <stdio.h> main() { struct telephone { char name[80]; long long int number; }; int n; int i; int c; FILE *tele; tele = fopen("phone.txt","r"); fscanf(tele, " %d",&n); printf("The number of name/phone number pairs is %d\n",n); struct telephone a[n]; for (i =0; i < n; i++){ fscanf(tele, " %s %lld", a[i].name,&a[i].number); printf("%s %lld\n",a[i].name,a[i].number); } struct telephone swap; for(c = 0; c < (n-1); c++) { for(i = 0; i < (c-n-1); i++) { if (a[i].number < a[i+1].number) { swap = a[i]; a[i] = a[i+1]; a[i+1] = swap; } } } printf("%s %lld\n",printNumber(g[h])); fclose(tele); return(0); } void printNumber(long long int f){ int g[10], h; for(h = 9; h >= 0; h--){ g[h] = f % 10; f /= 10; } printf("%d%d%d-%d%d%d-%d%d%d%d ", g[0], g[1], g[2], g[3], g[4], g[5], g[6], g[7], g[8], g[9]); } ******************* I know I need a for loop before the second to last printf statement. I also know that for that printf statement I need to call the printNumber function. I just don't know how to implement this. Any help would be great!!
http://forums.devshed.com/programming-42/phone-program-954262.html
CC-MAIN-2015-32
refinedweb
286
79.3
Your finished project still looks a bit better than your current project, so let’s spice it up with a bit of Bootstrap. You’re going to center the contents and add some navigation. More Styling 00:00 Our finished project still looks a bit better than our current project, so let’s go ahead and spice it up with a bit of Bootstrap. 00:16 I’m going to simply replace what we have in here, for now, with a bunch of Bootstrap code that I copy-pasted. Let’s take a quick look. We’re using a heading for our project.title, and remember, we’re passing this project from our view, from the detail_view(). 00:32 Then, I’m just applying some Bootstrap classes. Here’s an image link to "{% static project.image %}", our image URL sitting in here, and as an alternative text, we’re giving a project description. 00:45 Additionally, I’m going to fill 100% of this column part that Bootstrap creates with this setup. And what we have here is another heading, a smaller heading, {{ project.description }}, and what does is it built with—the technology. 01:00 Let’s take a look how this looks. When I reload it, this looks already much better. The one thing that’s still different over here is we have the whole thing centered and up there, we also have some nice navigation. 01:15 I will show you how to implement this by just putting the pieces into the right place. This is going to be another bunch of Bootstrap code. Read up on it in the associated text, or just learn some more about Bootstrap if you’re interested in that. For now, we’re just going to make our app look as good as this one does. 01:35 Notice that the page that I’m editing—the template that I’m editing—is going to be base.html, because I want this header to be appearing on all of the pages that I have inside of my app. 01:47 So, I’m going here above {% block content %} and I’m pasting a navbar that I just took from the Bootstrap website. We’re also opening up this "container" class before our block, which is going to add some Bootstrap styling, and I’m going to have to close it afterwards. 02:09 So, what’s happening here is this part 02:15 creates the Bootstrap header, and then this container around whatever content we’re putting in the templates that we’re extending base.html from, is going to help to apply a certain styling that Bootstrap uses. 02:32 Let’s check it out. 02:37 Okay, so look at that! We’re getting a NoReverseMatch. 'projects:projects'. 02:45 And that is simply because I copy-pasted that from the finished project, but we actually built it a little differently. So here, I’m linking to 'projects:projects', but that’s not how we called it, so we got a NoReverseMatch. Remember where we go to check? 03:02 We go to check first in urls, make sure that we have the app_name—correct—and we have the view name— 'all_projects', 'project_detail'. That’s our home, so that’s where we want to link to. 03:16 And then next, we check inside of the template—in this case, base.html. We look for the url tags. Here is one and there it is, see? Okay, the app_name is correct, but the path() name is incorrect. So here, we need the proper path() name. 03:34 And we see this happens down here again. So, we’re linking back to the page. And now, we should have dealt with our NoReverseMatch. Let’s check it out. 03:46 I reload this. It’s working and it finally looks great. We have the navbar up top. We have our details page. Let’s check that when we click Home, we come back to the list view. We can click in here, go to my test project, come back Home, check up the next project. 04:04 Everything is looking nice and very similar to our finished page. Awesome! We’re totally getting there. We did some nice styling here. Congrats. Before we stop the styling for this one, here’s one last thing. 04:21 Take a look at how does this page look like if we check it out on the phone. Now we could not really check it out on the phone yet, but what each modern browser offers are developer tools, if you right-click and then say Inspect Element, 04:38 you can choose to display it either as a normal web page, or if you click up here, as a device. So here, for example, on iPhone X, or you can choose a different phone, as well. 04:52 Our page looks pretty crappy on the phone, right? Do you see that? Like, it’s very small. Imagine seeing a page like that on your phone. You would probably not impress anyone with your portfolio if it looks like this. 05:03 So, there’s actually a very simple thing we can do to solve this problem because we’re using Bootstrap, which is designed for being mobile-responsive. The only thing we need to do is add this one line of HTML code into our base.html so that it applies to all of the pages. You can simply Google this one if you don’t know it. It’s the meta "viewport", and then it allows us to make the page mobile-responsive by declaring that the content is going to look for what’s the device screen-width, and then adapt to that one. I’ll put it into the code now, and then we can see how much better our page looks on the phone. 05:46 Inside of base.html, 05:51 I put this viewport line in here. If we reload the page now, you can see that this looks much better. Now, the cards adapt to the device width, and we can click here, Read More. 06:05 Also, our single page looks quite decent. So, that’s nice. With this said and done, I think we’re fine with leaving the design as it is right now. You can always dive deeper into this front end development if you want to make it look better, check out Bootstrap, but I think we’ve got a pretty decent page that looks good both on the phone as well as just normally on the browser. 06:31 Okay, great! So, what I want to show you in the next video is moving a bit away from styling, but we want to be able to add more of those projects and we want to make this in an easier way than always having to go into the Django shell. 06:46 So, check that out in the next video. See you there! The Home and other nav buttons are still present, Bootstrap’s responsive nav-bar simply nests them in a drop-down menu to optimize for space. When you click on the hamburger menu on the top right of the page (when viewed on a phone) you should see the other nav links there. Once again, I believe I have done everything right. I have spent much time debugging and found many errors that I have corrected. The additional styling does not appear even though the page does. I am just showing you the source code from the page, (show source code in Safari) to show you that the class “col-md-8” doesn’t make it. It’s like the new code in detail.html just doesn’t make on the path to the page. If you think there is something specific I should look for, I would appreciate knowing it: <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <title>Portfolio<"> <h1>Projects</h1> <div class="row"> <div class="col-md-4"> <div class="card mb-2"> <img class="card-img-top" src="/static/projects/img/720720.jpg"> <div class="card-body"> <h5 class="card-title">Good, good</h5> <p class="card-text">this is with good image</p> <p class="card-text">Django</p> <a href="/projects/4" class="btn btn-primary"> Give me More </a> </div> </div> </div> <div class="col-md-4"> <div class="card mb-2"> <img class="card-img-top" src="/static/projects/img/Nancy2.jpg"> <div class="card-body"> <h5 class="card-title">More good</h5> <p class="card-text">Nancy at her best</p> <p class="card-text">Django</p> <a href="/projects/5" class="btn btn-primary"> Give me More </a> </div> </div> </div> </div> </div> </body> </html> The video at 1:23 shows three tabs at the top: Portfolio Home Blog At 3:51, after adding the Bootstrap code, the tabs are: Portfolio Home Somehow, blog didn’t make it. However, when you check the page and blog is missing, when you click of the “more” button, blog appears again. When you go back, it disappears. Clearly some inconsistency here somewhere. How do I copy the Bootstrap code to put in the base.html file? The problem that I have struggled with is caused by not having access to the bootstrap code. How can anyone trying to follow along with your app develp that app without the code? Did you supply it somewhere that I missed? The frame in your video is to narrow to show the whole code, otherwise I would just freeze the frame and hand copy the code. Am I legitimately frustrated or did I make a big miss? I would very much like to complete this. @reblark: The associated written tutorial (under Supporting Materials) has a downloadable sample project. I’ve also added direct links throughout the course now (for example in the first lesson and in the last one). Hope that helps you out! :) Hei @reblark–yes, the finished project I am showing is the one from the associated written tutorial, which has an additional Blog app built out that I’m pointing to in the final section. Dan and Martin Thanks to both of you for responding. With all due respect, I feel that the ending of Martin’s tutorial is a bit messy. As you know, all along, I tried diligently to follow and replicate the app. The ending just isn’t quite right with two different pages—one with “”, one with not—and providing the code that Martin used in the last styling video should have been easy. Just referring to the written tutorial, which does not discuss that code in detail and having an unfamiliar reference to the Bootstrap code in the beginning is, well, to me, it could have been better, more responsible. I am really trying my best and when you present the styling video and show me the cool work that can be done then tell me to go study Bootstrap separately because you are not tidying up at the end, hurts. The blog app, of course, is an entirely different and exciting issue, but finishing up this app cleanly and completely is important to me. I would hope that somehow I could get a copy of the code that Martin put in the base.html file in that video. That should be easy, I think. I am grateful for this course and starting others, like Migrations, but I really would like to finish this one neatly and correctly. Cheers. Ralph By the way, I wondered during the course what language spells “hi” as “hei?” Must be your native language Martin.” Oh, by the way, I think I found what I was looking for in terms of finishing the site. I found the stuff about the Boostrap code for the blog. Exciting. reblark is right. There are not that many lines of code in “More Styling” video. Copy-paste was a ‘shock’ to me and now back @Bootstrap there is completely different code for that bright-gray navbar :) At least Martin is quickly showing full code in a video, so I can pause,write it down and inspect. Anyone else having similar problems to reblark can find the html code in the sample project zip file, that is where I found it. Is there a tutorial on implementing styling without having to use CORS or bootstrapping? For learning to implement all the styling by yourself without using a framework (whether it’s served through a CDN or downloaded and directly included in your project), you’d need to look for a tutorial on writing CSS. Since Real Python is a site focused on Python, we don’t have tutorials specifically on CSS. I’ve personally found CSS Tricks to be a great resource, however, it’s more a compendium of CSS topics than a full-fledged intro tutorial. @marcinmukosiej what do you mean you are missing? You can download the project code at the top under Supporting Materials. Let me know if there’s something else that you’re missing. I’m looking for a Django and Python specific tutorial for styling. I have experience with Babel, Webpack, and React apps- and how CSS can be used for a Reaact Portfolio. I would like to explore how to create a similar structure for this django portfolio without having to rely on outside websites to be operational in order for my site to look good. The path I am heading is to have django framework housed on AWS and to put my django portfolio into production in a S3 bucket- again without using AWS’s CDN, but manually doing so as to be able to learn what it actually means to create all the parts of things like beanstalk would do or and other currently used highly abstracted automation from AWS. In the solutions architect tract from acloud.guru in 2018 the CDN tooling was very abstract, and now I would like a much deeper understanding of how to have ownership of such production grade endeavors than things like a simple SQLite3 DB and a loopback address on my own machine. If there is a tutorial on production grade site building which includes PostgreSQL or a no SQL solution, including a means to remove the dependencies associated with using a bootstrap, that is what I will be looking for after creating a few blogs for some friends of mine. I do realize technology moves fast, and I just wanted to see if anyone at Real Python has already been put top this task in 2020, instead of me having to spend precious time digging through other tutorial sites, which often yield in paid services losing me as their customer. Hi @rolandgarceau. You could look into Django REST framework instead? That way you could separate your back-end as an API and use your React skills to use a react front-end for your apps. Using Javascript based PWAs is becoming quite common, and Django REST framework is a great way to supply a back-end for such an app. To write only “Paste via cmd+V” is not a good way for the lines from Bootstrap. The code is not accessible within the provided .zip directory. Why could you not provide this here? Hi @Monika. You can get the code for Bootstrap nav from their official docs page, here: getbootstrap.com/docs/4.4/components/navbar/ Hi Rick. Thanks! Good support. I might have another question… I’m planning something like a Dashboard. Main topic is to provide several display types/tabledata in one view. How far I did understand django structure, I have to make several def functions within in one class based view in views.py responding different table data requests and if possible simple text without relationship to a database. Can I find an appropriate lesson in RealPython Tutorial or elsewhere? Perhaps you find the time to tell me another helpful link? Hi @Monika. In Django, class-based views and function-based views (the ones used in this tutorial) are different things. You wouldn’t nest a function-based view, the ones you make with def, inside of a class-based view. Generally, you can make multiple database queries in one function in views.py and pass all the results to a template via the context dictionary. This allows you to render data from multiple different tables in one view, that you could design as a dashboard. Check out this tutorial link which might provide you with the info you need. For anyone facing what @reblark and others experienced, here’s the bootstrap code. ------------------------------BootstrapCodeBegin------------------------------ <nav class=”navbar navbar-expand-lg navbar-light bg-light”> <div class=”container”> <a class=”navbar-brand” href=”{% url ‘projects-------------------------------- I messed it up, here it is. ------------------------------BootstrapCodeBegin------------------------------ <nav class="navbar navbar-expand-lg navbar-light bg-light"> <div class="container"> <a class="navbar-brand" href="{% url 'projects-------------------------------- Hei Martin, Thank you for this well paced and informative course. What I particularly valued was the ‘error centric why’ that gave context to the ‘what’. Now I find I groan less when I get an error :) One thing I did note is that you only <link> to the Bootstrap css stylesheet in the Tutorial, and the Sample code. Whereas, I couldn’t get my Hamburger toggle to work until I added into base.html a jquery script and a Bootstrap javascript, as specified in the Bootstrap4 documentation. Is that what you’d expect me to need to do? Hi @karakus, glad you’re liking the error-centric approach :) Nice work on figuring out the Bootstrap hamburger menu! 🙌 Collapsible Hamburger Menu You are right, Bootstrap’s default function of expanding the hamburger menu when it is collapsed relies on JavaScript code that you can include in your page by adding the following two lines to the <head> of your base.html: > Responsive Expanded Menu If you want to keep your code as lean as possible and don’t add any JavaScript to it, and still keep the menu functional also on mobile, another option is to change the navbar-toggle and/or navbar-collapse classes to navbar-expand. This will simply avoid collapsing the menu items into a hamburger menu and always display them at the top of your mobile page, just like it does on the web browser. Thanks Martin - I’ll try that. I have one other question that I’ve not found a definitive answer for … yet, which I’d value your input on if you have the bandwidth. I’m working on adding the Blog app to the Portfolio project and I’m unsure of whether I should place base.html in a portfolio/templates folder and change the individual app templates to reference that, thereby adhering to the DRY principle across the portfolio. Or whether, for app portability reasons, I should keep a base.html in each app templates folder. I’m also wondering whether the django template language would enable me to put a conditional statement in that looks in the top level (portfolio/templates) folder for a base.html first and only use the app specific one if the portfolio one doesn’t exist. PS: I currently have base.html at the portfolio level and actually used the friendly errors to figure out what I needed change in various places to ensure my app templates found it! Hi @karakus! That’s an interesting question and might come down to personal preference, although let’s see what Django suggests: One common way of using inheritance is the following three-level approach: - Create a base.htmltemplate that holds the main look-and-feel of your site. - Create a base_SECTIONNAME.htmltemplate for each “section” of your site. For example, base_news.html, base_sports.html. These templates all extend base.htmland include section-specific styles/design. - Create individual templates for each type of page, such as a news article or blog entry. These templates extend the appropriate section template. While this doesn’t explicitly address your question about multiple apps, I could see it used like that as well. E.g. you’d use a project-wide base.html template, and then app-specific base_APPNAME.html templates etc. Personally, I would make this choice on a per-project basis, since both approaches make sense to me. If it’s a project where the apps are tied together rather firmly and it’s unlikely that I would segment one of them out to use it elsewhere, then I would opt for the general base.html template, especially if unified styling and structure is something I’m going for. Otherwise, having everything neatly packaged inside of its app feels the most comfortable to me. :) Tl;dr: I’m not sure there is a definite answer to your question. There are some good practices you can follow, such as the approach of double-inheritance that the Django docs suggest, and it’s a good idea to consider the trade-offs of either approach on a per-project basis. Other Approaches There are also ways to make the inheritance conditional that I have never used myself, but you could play around with it. Here are two links on StackOverflow: - stackoverflow.com/questions/5380984/any-way-to-make-extends-conditional-django - stackoverflow.com/questions/2575282/django-conditional-template-inheritance I like sticking with the other approach better, though, since it feels to me this could potentially get a little messy ¯\_(ツ)_/¯ Thanks for the info Martin - it all helps! i’m still getting this error message NoReverseMatch at /projects/1 ‘projects’ is not a registered namespace Hi @charneuk. For the NoReverseMatch to go away, there are a couple of things you can check, see if you can find the bug by following the suggestions in NoReverseMatch Debugging. Let me know what you tried and what worked - or if it didn’t work so we can dig deeper. Become a Member to join the conversation. Gascowin on Oct. 21, 2019 I observed that when you previewed the app in the Iphone view that the ‘Home’ nav button failed to show. I added some other buttons like a dropdown link in my own version of the app that also would not show unless I previewed the app on an iPad Pro. I could not seem to fix this. Would you happen to know of a way around this?
https://realpython.com/lessons/more-styling/
CC-MAIN-2021-17
refinedweb
3,766
71.85